Can grid search cross-validation be used to extract the best parameters of a decision tree classifier? http://scikit-learn.org/stable/tutorial/statistical_inference/model_selection.html
Why not?
I invite you to take a look at the GridSearchCV documentation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import roc_auc_score

param_grid = {'max_depth': np.arange(3, 10)}

tree = GridSearchCV(DecisionTreeClassifier(), param_grid)
tree.fit(xtrain, ytrain)
tree_preds = tree.predict_proba(xtest)[:, 1]
tree_performance = roc_auc_score(ytest, tree_preds)
print('DecisionTree: Area under the ROC curve = {}'.format(tree_performance))
Extracting the best parameters:
tree.best_params_
Out[1]: {'max_depth': 5}
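For completeness, here is a self-contained sketch of what the fitted search object exposes besides best_params_. The breast-cancer dataset and the random_state values are my own illustrative choices, since the asker's xtrain/xtest are not shown:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

# Illustrative data standing in for the asker's xtrain/xtest
X, y = load_breast_cancer(return_X_y=True)
xtrain, xtest, ytrain, ytest = train_test_split(X, y, random_state=0)

tree = GridSearchCV(DecisionTreeClassifier(random_state=0),
                    {'max_depth': np.arange(3, 10)})
tree.fit(xtrain, ytrain)

print(tree.best_params_)   # winning parameter combination
print(tree.best_score_)    # mean cross-validated score of that combination
# With the default refit=True, the search object predicts using the best
# estimator refitted on all of xtrain:
auc = roc_auc_score(ytest, tree.predict_proba(xtest)[:, 1])
print('AUC on held-out data = {}'.format(auc))
```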
Here is the code for a grid search over a decision tree:
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV

def dtree_grid_search(X, y, nfolds):
    # create a dictionary of all hyperparameter values we want to test
    param_grid = {'criterion': ['gini', 'entropy'], 'max_depth': np.arange(3, 15)}
    # decision tree model
    dtree_model = DecisionTreeClassifier()
    # use grid search to test all combinations of values
    dtree_gscv = GridSearchCV(dtree_model, param_grid, cv=nfolds)
    # fit model to data
    dtree_gscv.fit(X, y)
    return dtree_gscv.best_params_
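A quick usage sketch of the function above; the iris data is my choice for illustration, not part of the answer:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV

def dtree_grid_search(X, y, nfolds):
    # grid of hyperparameter values to try, as in the answer above
    param_grid = {'criterion': ['gini', 'entropy'], 'max_depth': np.arange(3, 15)}
    dtree_gscv = GridSearchCV(DecisionTreeClassifier(), param_grid, cv=nfolds)
    dtree_gscv.fit(X, y)
    return dtree_gscv.best_params_

X, y = load_iris(return_X_y=True)
best = dtree_grid_search(X, y, 5)
print(best)  # a dict with the winning 'criterion' and 'max_depth'
```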
tree.best_params_ returns a dictionary of the form {'param1': best_value}. In this case you would get something like {'a': 1, 'b': 0.4, ...}. - gowithefloww

Doesn't GridSearchCV want to be fit on the whole dataset rather than just the training subset? Since GridSearchCV performs some variant of k-fold CV internally, using only the training data seems like a waste of data. - dsal1951
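On the point in that last comment: GridSearchCV does its k-fold splitting internally on whatever data it is given, so the usual pattern keeps a test split outside the search to get an unbiased final estimate. A minimal sketch, with the breast-cancer data as my own illustrative choice:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

search = GridSearchCV(DecisionTreeClassifier(random_state=0),
                      {'max_depth': np.arange(3, 10)}, cv=5)
search.fit(X_tr, y_tr)  # the k-fold CV happens inside X_tr only

print(search.best_score_)        # mean CV score over the training folds
print(search.score(X_te, y_te))  # estimate on data the search never saw
```

Selecting hyperparameters on CV folds of the training set and reporting performance on the untouched test set is what keeps the final number honest; if the search saw the test data, its score would be optimistically biased.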