When I tune a model with LightGBMTunerCV, I always get a flood of cv_agg binary_logloss lines. On larger datasets this (unnecessary) I/O degrades the performance of the optimization.
Here is the code:
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
import optuna.integration.lightgbm as lgb
import optuna
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
warnings.simplefilter(action='ignore', category=UserWarning)
breast_cancer = load_breast_cancer()
X_train, X_test, Y_train, Y_test = train_test_split(breast_cancer.data, breast_cancer.target)
train_dataset = lgb.Dataset(X_train, Y_train, feature_name=breast_cancer.feature_names.tolist())
test_dataset = lgb.Dataset(X_test, Y_test, feature_name=breast_cancer.feature_names.tolist())
callbacks = [lgb.log_evaluation(period=0)]
tuner = lgb.LightGBMTunerCV({"objective": "binary", "verbose": -1},
                            train_set=train_dataset, num_boost_round=10,
                            nfold=5, stratified=True, shuffle=True,
                            callbacks=callbacks)
tuner.run()
The output:
feature_fraction, val_score: 0.327411: 43%|###################2 | 3/7 [00:00<00:00, 13.84it/s]
[1] cv_agg's binary_logloss: 0.609496 + 0.009315
[2] cv_agg's binary_logloss: 0.554522 + 0.00607596
[3] cv_agg's binary_logloss: 0.512217 + 0.0132959
[4] cv_agg's binary_logloss: 0.479142 + 0.0168108
[5] cv_agg's binary_logloss: 0.440044 + 0.0166129
[6] cv_agg's binary_logloss: 0.40653 + 0.0200005
[7] cv_agg's binary_logloss: 0.382273 + 0.0242429
[8] cv_agg's binary_logloss: 0.363559 + 0.03312
Is there any way to suppress this output?
Thanks for your help!
optuna.logging.set_verbosity(optuna.logging.CRITICAL)
no longer silences any of these lines. – NonStopAggroPop
What about using contextlib to redirect stdout to null for that call? That might save some performance. – SamBob
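A minimal sketch of the contextlib suggestion above. The `noisy_run()` function here is a hypothetical stand-in for `tuner.run()`, just to keep the example self-contained; note that this only suppresses Python-level stdout, and logging that goes through other channels (e.g. stderr or LightGBM's native C++ output) may bypass it.

```python
import contextlib
import os

# Hypothetical stand-in for tuner.run(): any function that prints to stdout.
def noisy_run():
    print("[1] cv_agg's binary_logloss: 0.609496 + 0.009315")
    return 0.609496

# Redirect stdout to the null device while the noisy call executes.
with open(os.devnull, "w") as devnull, contextlib.redirect_stdout(devnull):
    score = noisy_run()

# stdout is restored once the with-block exits.
print(f"best score: {score}")
```

Whether this actually saves time depends on how much of the cost is terminal I/O versus the logging calls themselves; the prints still execute, they just write to the null device.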