Trial 1 failed, because the value None could not be cast to float.


I am trying to use Optuna to tune an ExtraTreesClassifier.

I get the following message on all of my trials:

[W 2022-02-10 12:13:12,501] Trial 2 failed, because the value None could not be cast to float.

Below is my code. It happens on every one of my trials. What am I doing wrong?

    def objective(trial, X, y):
    
        param = {
            'verbose': trial.suggest_categorical('verbosity', [1]),
            'random_state': trial.suggest_categorical('random_state', [RS]),
            'n_estimators': trial.suggest_int('n_estimators', 100, 150),
            'n_jobs': trial.suggest_categorical('n_jobs', [-1]),
        }
            
    
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=True, random_state=RS)
    
        clf = ExtraTreesClassifier(**param)
        
        clf.fit(X_train, y_train)
        
        y_pred = clf.predict(X_test)
        
        acc = accuracy_score(y_pred, y_test)
        print(f"Model Accuracy: {round(acc, 6)}")
        print(f"Model Parameters: {param}")
        print('='*50)
        return
    
    
        study = optuna.create_study(
            direction='maximize',
            sampler=optuna.samplers.TPESampler(),
            pruner=optuna.pruners.HyperbandPruner(),
            study_name='ExtraTrees-Hyperparameter-Tuning')

    func = lambda trial: objective(trial, X, y)

    %%time
    study.optimize(
        func,
        n_trials=100,
        timeout=60,
        gc_after_trial=True
    )

Did you solve it? I am running into the same problem. - Anakin Skywalker
2 Answers

That is because your objective function does not return anything. It should return a metric (such as a cross-validation score, mean squared error, root mean squared error, etc.).
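For illustration, a minimal sketch (not from the original answer) of an objective that returns such a metric; the choice of cross-validated accuracy and of the suggested parameter range here is an assumption:

    # Illustrative sketch only: the key point is that the objective ends by
    # returning a single float that Optuna can optimize.
    from sklearn.ensemble import ExtraTreesClassifier
    from sklearn.model_selection import cross_val_score

    def objective(trial, X, y):
        param = {
            'n_estimators': trial.suggest_int('n_estimators', 100, 150),
            'n_jobs': -1,
        }
        clf = ExtraTreesClassifier(**param)
        # cross_val_score returns one accuracy per fold; the mean is the
        # single float reported back to the study.
        return cross_val_score(clf, X, y, cv=5, scoring='accuracy').mean()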


Your code is incomplete. Here is a working example that shows how to complete it. I am using optuna==2.10.0.

import optuna

from sklearn.ensemble import ExtraTreesClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score


X, y = make_classification(n_features=4)  # Generate sample datasets

def objective(trial):
    param = {
        'random_state': trial.suggest_categorical('random_state', [0, 25, 100, None]),
        'n_estimators': trial.suggest_int('n_estimators', 100, 150)
    }

    suggested_random_state = param['random_state']  # also use the suggested random state value in train_test_split()

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=True, random_state=suggested_random_state)
    clf = ExtraTreesClassifier(**param)

    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    acc = accuracy_score(y_pred, y_test)
    print(f"Model Accuracy: {round(acc, 6)}")
    print(f"Model Parameters: {param}")
    return acc  # return our objective value


if __name__ == "__main__":
    study = optuna.create_study(
        direction="maximize",
        sampler=optuna.samplers.TPESampler()
    )
    study.optimize(objective, n_trials=100)

    print("Number of finished trials: {}".format(len(study.trials)))

    print("Best trial:")
    trial = study.best_trial

    print("  Value: {}".format(trial.value))

    print("  Params: ")
    for key, value in trial.params.items():
        print("    {}: {}".format(key, value))


Sample output

...
[I 2022-02-22 00:40:32,688] Trial 97 finished with value: 0.75 and parameters: {'random_state': None, 'n_estimators': 149}. Best is trial 15 with value: 1.0.
Model Accuracy: 0.75
Model Parameters: {'random_state': None, 'n_estimators': 134}
[I 2022-02-22 00:40:32,844] Trial 98 finished with value: 0.75 and parameters: {'random_state': None, 'n_estimators': 134}. Best is trial 15 with value: 1.0.
Model Accuracy: 0.8
Model Parameters: {'random_state': None, 'n_estimators': 129}
[I 2022-02-22 00:40:33,002] Trial 99 finished with value: 0.8 and parameters: {'random_state': None, 'n_estimators': 129}. Best is trial 15 with value: 1.0.

Number of finished trials: 100
Best trial:
  Value: 1.0
  Params:
    random_state: None
    n_estimators: 137
