I have been using org.apache.spark.ml.Pipeline for machine learning tasks. For my use case it is important to get the actual probability rather than just the predicted label, but I am having trouble extracting it. I am running a binary classification task with a random forest; the class labels are "Yes" and "No", and I want to output the probability of "Yes". The probabilities are stored in the pipeline output as a DenseVector, e.g. [0.69, 0.31], but I don't know which entry corresponds to "Yes" (is it 0.69 or 0.31?). I suspect there is some way to recover this from the labelIndexer.
This reference covers probabilities and labels for RF: http://spark.apache.org/docs/latest/ml-classification-regression.html#random-forests

Below is my code for training the model.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.RandomForestClassifier
import org.apache.spark.ml.feature.{IndexToString, StringIndexer, VectorIndexer}
import org.apache.spark.mllib.linalg.DenseVector

val sc = new SparkContext(new SparkConf().setAppName("ML").setMaster("local"))
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
val data = .... // load data from file
val df = sqlContext.createDataFrame(data).toDF("label", "features")
val labelIndexer = new StringIndexer()
.setInputCol("label")
.setOutputCol("indexedLabel")
.fit(df)
val featureIndexer = new VectorIndexer()
.setInputCol("features")
.setOutputCol("indexedFeatures")
.setMaxCategories(2)
.fit(df)
// Convert indexed labels back to original labels.
val labelConverter = new IndexToString()
.setInputCol("prediction")
.setOutputCol("predictedLabel")
.setLabels(labelIndexer.labels)
val Array(trainingData, testData) = df.randomSplit(Array(0.7, 0.3))
// Train a RandomForest model.
val rf = new RandomForestClassifier()
.setLabelCol("indexedLabel")
.setFeaturesCol("indexedFeatures")
.setNumTrees(10)
.setFeatureSubsetStrategy("auto")
.setImpurity("gini")
.setMaxDepth(4)
.setMaxBins(32)
// Create pipeline
val pipeline = new Pipeline()
.setStages(Array(labelIndexer, featureIndexer, rf, labelConverter))
// Train model
val model = pipeline.fit(trainingData)
// Save model
sc.parallelize(Seq(model), 1).saveAsObjectFile("/my/path/pipeline")
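From what I can tell from the StringIndexer docs, index 0 goes to the most frequent label (ties broken alphabetically), so the order of labelIndexer.labels depends on my data rather than being alphabetical. A plain-Scala sketch of that ordering, using a made-up label column:

```scala
// Hypothetical label column, for illustration only.
val labelColumn = Seq("No", "No", "Yes", "No", "Yes", "No")

// Reproduce StringIndexer's default frequency-descending ordering in plain Scala:
val orderedLabels = labelColumn
  .groupBy(identity)
  .toSeq
  .map { case (label, occurrences) => (label, occurrences.size) }
  .sortBy { case (label, count) => (-count, label) } // most frequent first, ties alphabetical
  .map(_._1)

println(orderedLabels) // "No" is most frequent here, so it would get index 0
```

If this matches what StringIndexer actually does, then with my data the position of "Yes" in labelIndexer.labels tells me which slot of the probability vector to read.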
Next I load the pipeline and make predictions on new data; here is the code snippet:
// Ignoring loading data part
// Create DF
val testdf = sqlContext.createDataFrame(testData).toDF("features", "line")
// Load pipeline
val model = sc.objectFile[org.apache.spark.ml.PipelineModel]("/my/path/pipeline").first
// My question comes here: how do I extract the probability corresponding to the class label "Yes"?
// This is my attempt: I would like to output the probability for label "Yes" along with the predicted label.
// The probabilities are stored in a DenseVector, but I don't know which entry corresponds to "Yes". Something like this:
val predictions = model.transform(testdf)
  .select("probability")
  .map(row => row.getAs[DenseVector](0)) // unwrap the vector from the Row instead of casting the Row itself
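To make the question concrete, this is the lookup I think I need, sketched in plain Scala with hypothetical values (in the real pipeline the labels would come from labelIndexer.labels and the numbers from the "probability" vector):

```scala
// Hypothetical inputs for illustration only: the ordering of `labels`
// is exactly what I am unsure about in the real pipeline.
def probabilityOf(target: String, labels: Seq[String], probs: Seq[Double]): Double = {
  // Position i of the probability vector corresponds to indexed label i.
  val i = labels.indexOf(target)
  require(i >= 0, s"label $target not found in $labels")
  probs(i)
}

val pYes = probabilityOf("Yes", Seq("No", "Yes"), Seq(0.69, 0.31))
println(pYes) // 0.31 under this assumed label order
```

If the assumption that probability(i) belongs to indexed label i is correct, the remaining question is just how to read the fitted labels array back out of the loaded PipelineModel.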