I'm using HyperOpt to tune the hyperparameters of an XGBoost model, but the loss never changes from one iteration to the next. Here is my code:
import xgboost as xgb
from hyperopt import STATUS_OK, Trials, fmin, hp, tpe
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=random_state)

space = {
    'max_depth': hp.quniform('max_depth', 3, 18, 1),
    'gamma': hp.uniform('gamma', 1, 9),
    'reg_alpha': hp.quniform('reg_alpha', 40, 180, 1),
    'reg_lambda': hp.uniform('reg_lambda', 0, 1),
    'colsample_bytree': hp.uniform('colsample_bytree', 0.5, 1),
    'min_child_weight': hp.quniform('min_child_weight', 0, 10, 1),
    'learning_rate': hp.uniform('learning_rate', 0, 1),
    'n_estimators': 100000,
    'seed': random_state,
}
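As a quick sanity check of what this space actually yields, it can be sampled directly; this is just a diagnostic sketch using hyperopt.pyll.stochastic.sample (note that quniform returns floats, which is why the integer parameters get cast in the objective below):

from hyperopt.pyll.stochastic import sample

# Draw a few random points from the space to see the raw values
# HyperOpt will hand to the objective on each trial.
for _ in range(3):
    print(sample(space))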
def objective(space):
    clf = xgb.XGBClassifier(
        n_estimators=space['n_estimators'],
        max_depth=int(space['max_depth']),
        gamma=space['gamma'],
        reg_alpha=int(space['reg_alpha']),
        min_child_weight=int(space['min_child_weight']),
        colsample_bytree=int(space['colsample_bytree']))
    evaluation = [(X_train, y_train), (X_test, y_test)]
    clf.fit(X_train, y_train,
            eval_set=evaluation, eval_metric='auc',
            early_stopping_rounds=10, verbose=False)
    pred = clf.predict(X_test)
    score = f1_score(y_test, pred > 0.5)
    print('SCORE:', score)
    return {'loss': 1 - score, 'status': STATUS_OK}
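For reference, here is a sketch of the same objective with every key sampled in the space forwarded to the classifier; in my version above, reg_lambda, learning_rate, and seed are never passed in, and colsample_bytree is truncated by int(). The name objective_full is just illustrative:

def objective_full(space):
    # Sketch: forwards every sampled key and keeps colsample_bytree
    # as a float instead of truncating it to 0.
    clf = xgb.XGBClassifier(
        n_estimators=space['n_estimators'],
        max_depth=int(space['max_depth']),
        gamma=space['gamma'],
        reg_alpha=int(space['reg_alpha']),
        reg_lambda=space['reg_lambda'],
        min_child_weight=int(space['min_child_weight']),
        colsample_bytree=space['colsample_bytree'],
        learning_rate=space['learning_rate'],
        random_state=space['seed'])  # 'seed' is the older xgboost alias
    clf.fit(X_train, y_train,
            eval_set=[(X_test, y_test)], eval_metric='auc',
            early_stopping_rounds=10, verbose=False)
    pred = clf.predict(X_test)
    return {'loss': 1 - f1_score(y_test, pred), 'status': STATUS_OK}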
trials = Trials()
best_hyperparams = fmin(fn=objective,
                        space=space,
                        algo=tpe.suggest,
                        max_evals=100,
                        trials=trials)
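To check whether fmin is really varying the hyperparameters from trial to trial, the Trials object records every point it evaluated, and space_eval maps the raw fmin result back onto the original space:

from hyperopt import space_eval

# Losses recorded per trial, and the raw parameter values tried in
# the first trial, to verify the search space is actually being varied.
print(trials.losses()[:10])
print(trials.trials[0]['misc']['vals'])

# The best point, mapped back through the space definition.
print(space_eval(space, best_hyperparams))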
After running this, the score never changes. The output looks like this:
SCORE: 0.8741788782213239
SCORE: 0.8741788782213239
SCORE: 0.8741788782213239
SCORE: 0.8741788782213239
SCORE: 0.8741788782213239
SCORE: 0.8741788782213239
SCORE: 0.8741788782213239
SCORE: 0.8741788782213239
100%|██████████| 100/100 [00:21<00:00, 4.57trial/s, best loss: 0.1258211217786761]