I'm working on a project comparing several decision-tree regression algorithms in scikit-learn (Random Forest, Extra Trees, AdaBoost and Bagging).
For comparison and interpretation I evaluate them using feature importances, but this does not seem to be available for bagged decision trees.
My question: does anyone know how to obtain the list of feature importances for Bagging?
Greetings, Kornee
Are you talking about BaggingClassifier? It can be used with many different base estimators, so feature importances are not implemented for it. There are model-agnostic methods for computing feature importances (see e.g. https://github.com/scikit-learn/scikit-learn/issues/8898 ), but scikit-learn does not use them.
If the base estimators are decision trees, you can compute the feature importances yourself: just average tree.feature_importances_ over all trees in bagging.estimators_:
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import load_iris
X, y = load_iris(return_X_y=True)
clf = BaggingClassifier(DecisionTreeClassifier())
clf.fit(X, y)
feature_importances = np.mean([
tree.feature_importances_ for tree in clf.estimators_
], axis=0)
RandomForestClassifier does the same computation internally.
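A quick sanity check of that claim (not part of the original answer — the dataset and parameters are just for illustration): averaging the per-tree importances of a fitted RandomForestClassifier reproduces its own feature_importances_ attribute.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# manual average over the trees, same as for BaggingClassifier above
manual = np.mean([tree.feature_importances_ for tree in rf.estimators_], axis=0)

# matches the built-in attribute (up to floating-point normalization)
print(np.allclose(manual, rf.feature_importances_))
```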
Building on what CharlesG posted, here is a solution that overloads BaggingRegressor (the same works for BaggingClassifier).
import numpy as np
from sklearn.ensemble import BaggingRegressor

class myBaggingRegressor(BaggingRegressor):
    def fit(self, X, y):
        fitd = super().fit(X, y)
        # if max_features == 1.0, every estimator saw all features: no padding needed
        if self.max_features == 1.0:
            # compute feature importances or coefficients
            if hasattr(fitd.estimators_[0], 'feature_importances_'):
                self.feature_importances_ = np.mean(
                    [est.feature_importances_ for est in fitd.estimators_], axis=0)
            else:
                self.coef_ = np.mean([est.coef_ for est in fitd.estimators_], axis=0)
                self.intercept_ = np.mean([est.intercept_ for est in fitd.estimators_], axis=0)
        else:
            # each estimator saw only a subset of features:
            # pad the results into the right shape, NaN for unseen features
            coefsImports = np.full((self.n_features_, self.n_estimators), np.nan)
            if hasattr(fitd.estimators_[0], 'feature_importances_'):
                # store the feature importances
                for idx, thisEstim in enumerate(fitd.estimators_):
                    coefsImports[fitd.estimators_features_[idx], idx] = thisEstim.feature_importances_
                # average, ignoring features an estimator never saw
                self.feature_importances_ = np.nanmean(coefsImports, axis=1)
            else:
                # store the coefficients & intercepts
                self.intercept_ = 0
                for idx, thisEstim in enumerate(fitd.estimators_):
                    coefsImports[fitd.estimators_features_[idx], idx] = thisEstim.coef_
                    self.intercept_ += thisEstim.intercept_
                # average
                self.intercept_ /= self.n_estimators
                self.coef_ = np.nanmean(coefsImports, axis=1)
        return fitd
This also works when max_features <> 1.0, but I guess it may not work fully if bootstrap_features=True.

import numpy as np
from sklearn.ensemble import BaggingClassifier

class BaggingClassifierCoefs(BaggingClassifier):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # add attribute of interest
        self.feature_importances_ = None

    def fit(self, X, y, sample_weight=None):
        # overload fit to compute feature_importances_
        fitted = self._fit(X, y, self.max_samples, sample_weight=sample_weight)  # hidden fit function
        if hasattr(fitted.estimators_[0], 'feature_importances_'):
            self.feature_importances_ = np.mean(
                [est.feature_importances_ for est in fitted.estimators_], axis=0)
        else:
            self.feature_importances_ = np.mean(
                [est.coef_ for est in fitted.estimators_], axis=0)
        return fitted
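For the max_features < 1.0 case, the NaN-padding idea can also be used directly without subclassing. A minimal sketch (the diabetes dataset and parameter values here are just for illustration): each estimator's importances are scattered into the rows given by estimators_features_, then averaged per feature with nanmean.

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

X, y = load_diabetes(return_X_y=True)
bag = BaggingRegressor(DecisionTreeRegressor(), max_features=0.5, random_state=0)
bag.fit(X, y)

n_features = X.shape[1]
# NaN-padded matrix: one column per estimator, one row per original feature
imp = np.full((n_features, bag.n_estimators), np.nan)
for idx, est in enumerate(bag.estimators_):
    # estimators_features_[idx] maps the estimator's features back to the originals
    imp[bag.estimators_features_[idx], idx] = est.feature_importances_

# per-feature average over the estimators that actually saw the feature
mean_imp = np.nanmean(imp, axis=1)
```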
feature_importances = np.mean([tree.feature_importances_ for tree in clf.estimators_], axis=0)
- 8forty