How to get the column names for PySpark random forest feature importance scores from the column numbers


I am using a standard (StringIndexer + OneHotEncoder + RandomForest) pipeline in Spark, as shown below:

labelIndexer = StringIndexer(inputCol = class_label_name, outputCol="indexedLabel").fit(data)

string_feature_indexers = [
   StringIndexer(inputCol=x, outputCol="int_{0}".format(x)).fit(data)
   for x in char_col_toUse_names
]

onehot_encoder = [
   OneHotEncoder(inputCol="int_"+x, outputCol="onehot_{0}".format(x))
   for x in char_col_toUse_names
]
all_columns = num_col_toUse_names + bool_col_toUse_names + ["onehot_"+x for x in char_col_toUse_names]
assembler = VectorAssembler(inputCols=[col for col in all_columns], outputCol="features")
rf = RandomForestClassifier(labelCol="indexedLabel", featuresCol="features", numTrees=100)
labelConverter = IndexToString(inputCol="prediction", outputCol="predictedLabel", labels=labelIndexer.labels)
pipeline = Pipeline(stages=[labelIndexer] + string_feature_indexers + onehot_encoder + [assembler, rf, labelConverter])

crossval = CrossValidator(estimator=pipeline,
                          estimatorParamMaps=paramGrid,
                          evaluator=evaluator,
                          numFolds=3)
cvModel = crossval.fit(trainingData)

Now, after fitting, I can get the random forest's feature importances using cvModel.bestModel.stages[-2].featureImportances, but this doesn't give me the feature/column names, only the feature numbers.

The result I get looks like this:

print(cvModel.bestModel.stages[-2].featureImportances)

(1446,[3,4,9,18,20,103,766,981,983,1098,1121,1134,1148,1227,1288,1345,1436,1444],[0.109898803421,0.0967396441648,4.24568235244e-05,0.0369705839109,0.0163489685127,3.2286694534e-06,0.0208192703688,0.0815822887175,0.0466903663708,0.0227619959989,0.0850922269211,0.000113388896956,0.0924779490403,0.163835022713,0.118987129392,0.107373548367,3.35577640585e-05,0.000229569946193])

How should I map this back to the column names, or to a column-name-plus-value format?
Basically, I want the random forest's feature importances together with the corresponding column names.


Abishek, how did you end up doing this? - Chuck
3 Answers

The metadata of the transformed dataset has the attributes you need. Here is an easy way to do it -
  1. Create a pandas DataFrame (the feature list is generally not huge, so there is no memory issue in storing it as a pandas DF):

    import pandas as pd

    # every assembled attribute (with its index) is listed in the "features" column's ml_attr metadata
    attrs = dataset.schema["features"].metadata["ml_attr"]["attrs"]
    pandasDF = pd.DataFrame(attrs["binary"] + attrs["numeric"]).sort_values("idx")
    
  2. Then create a dictionary for the mapping and broadcast it (broadcasting makes the mapping available on the executors in a distributed environment):

    feature_dict = dict(zip(pandasDF["idx"], pandasDF["name"]))

    feature_dict_broad = sc.broadcast(feature_dict)
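
For completeness, a minimal sketch of how this mapping could be combined with the importances from the question (assuming `dataset` is the DataFrame produced by the fitted pipeline and `cvModel` comes from the question; `named_importances` is just an illustrative name):

    importances = cvModel.bestModel.stages[-2].featureImportances  # a SparseVector, as in the question's output

    # this runs on the driver, so the plain dict is enough; use feature_dict_broad.value inside distributed code
    named_importances = sorted(
        ((feature_dict[int(i)], float(v)) for i, v in zip(importances.indices, importances.values)),
        key=lambda kv: kv[1],
        reverse=True,
    )
    print(named_importances[:10])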
    

You can also have a look here and here.


This should be the accepted answer - it's concise and effective. Thank you! - CClarke


Hey, why don't you just map it back to the original columns via list expansion? Here is an example:

# in your case: trainingData.columns 
data_frame_columns = ["A", "B", "C", "D", "E", "F"]
# in your case: print(cvModel.bestModel.stages[-2].featureImportances)
feature_importance = (1, [1, 3, 5], [0.5, 0.5, 0.5])

rf_output = [(data_frame_columns[i], imp) for i, imp in zip(feature_importance[1], feature_importance[2])]
dict(rf_output)

{'B': 0.5, 'D': 0.5, 'F': 0.5}
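
As an aside, the real importances in the question come back as a SparseVector rather than a plain tuple; a minimal sketch (assuming `cvModel` from the question) of putting them into the same (size, indices, values) shape used above:

fi = cvModel.bestModel.stages[-2].featureImportances  # SparseVector
feature_importance = (fi.size, list(fi.indices), list(fi.values))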

Yes, but you're missing the point: the column names change after the StringIndexer/OneHotEncoder. I want to map onto the column names combined by the assembler. I could of course do it the long way, but I'm more interested in whether Spark (ML) has the same kind of shortcut that scikit-learn does :) - Abhishek
Ah, ok, my bad. But the long way still works. I don't think there is a short solution at the moment; the Spark ML API is not as powerful and extensive as scikit-learn's. - Dat Tran
Yes, I know :), just wanted to keep the question open for suggestions. Thanks Dat. - Abhishek


After running the ML algorithm I could not find any way to get back the true initial list of columns, so I am currently using this workaround:

print(len(cols_now))

FEATURE_COLS = []
for x in cols_now:
    if x[-6:] != "catVar":
        # plain numeric/boolean column: keep its name as-is
        FEATURE_COLS += [x]
    else:
        # one-hot encoded column: recover the original category values, ordered by their index
        temp = trainingData.select([x[:-7], x[:-6] + "tmp"]).distinct().sort(x[:-6] + "tmp")
        temp_list = temp.select(x[:-7]).collect()
        FEATURE_COLS += [row[0] for row in temp_list]

print(len(FEATURE_COLS))
print(FEATURE_COLS)

I kept consistent suffix naming across all the indexers (_tmp) and encoders (_catVar), for example:

column_vec_in = str_col
column_vec_out = [col + "_catVar" for col in str_col]

indexers = [StringIndexer(inputCol=x, outputCol=x + '_tmp')
            for x in column_vec_in]

encoders = [OneHotEncoder(dropLast=False, inputCol=x + "_tmp", outputCol=y)
            for x, y in zip(column_vec_in, column_vec_out)]

# interleave the stages: [indexer_1, encoder_1, indexer_2, encoder_2, ...]
tmp = [[i, j] for i, j in zip(indexers, encoders)]
tmp = [i for sublist in tmp for i in sublist]

This could be further improved and generalized, but for now this tedious workaround works best.
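
For context, a minimal sketch of how the interleaved indexer/encoder stages above might be wired into the rest of the pipeline; the assembler, classifier, `num_cols`, and `trainingData` names here are assumptions mirroring the question's setup, not part of this answer:

from pyspark.ml import Pipeline
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.feature import VectorAssembler

# num_cols: hypothetical list of numeric columns kept alongside the encoded ones
assembler = VectorAssembler(inputCols=num_cols + column_vec_out, outputCol="features")
rf = RandomForestClassifier(labelCol="label", featuresCol="features", numTrees=100)

pipeline = Pipeline(stages=tmp + [assembler, rf])
model = pipeline.fit(trainingData)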

