Spark Scala: How to convert a DataFrame[vector] to DataFrame[f1: Double, ..., fn: Double]


I just used Standard Scaler to normalize my features for a machine learning application. After selecting the scaled features, I want to convert this back to a DataFrame of Doubles, though the length of my vectors is arbitrary. I know how to do it for three specific features by using

myDF.map{case Row(v: Vector) => (v(0), v(1), v(2))}.toDF("f1", "f2", "f3")

but not for an arbitrary number of features. Is there an easy way to do this?

Example:

import org.apache.spark.ml.linalg.Vectors // on Spark 1.x, use org.apache.spark.mllib.linalg.Vectors

val testDF = sc.parallelize(List(Vectors.dense(5D, 6D, 7D), Vectors.dense(8D, 9D, 10D), Vectors.dense(11D, 12D, 13D))).map(Tuple1(_)).toDF("scaledFeatures")
val myColumnNames = List("f1", "f2", "f3")
// val finalDF = DataFrame[f1: Double, f2: Double, f3: Double] 

Edit

I found out how to unpack the column names when creating the DataFrame, but I'm still having trouble converting the vector into the sequence needed to create the DataFrame:

finalDF = testDF.map{case Row(v: Vector) => v.toArray.toSeq /* <= this errors */}.toDF(List("f1", "f2", "f3"): _*)
5 Answers


Spark >= 3.0.0

Since Spark 3.0 you can use the vector_to_array function:

import org.apache.spark.ml.functions.vector_to_array

// exprs is built the same way as in the Spark < 3.0.0 section below
testDF.select(vector_to_array($"scaledFeatures").alias("_tmp")).select(exprs:_*)
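
Put together, a minimal self-contained sketch, assuming the testDF from the question and that spark.implicits._ is in scope for the $ syntax:

import org.apache.spark.ml.functions.vector_to_array

val names = Seq("f1", "f2", "f3")
val exprs = names.zipWithIndex.map { case (c, i) => $"_tmp".getItem(i).alias(c) }

val finalDF = testDF
  .select(vector_to_array($"scaledFeatures").alias("_tmp"))
  .select(exprs: _*)
// finalDF: DataFrame[f1: double, f2: double, f3: double]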

Spark < 3.0.0

One possible approach is something similar to this:

import org.apache.spark.sql.functions.udf

// In Spark 1.x you will have to replace the ML Vector with the MLLib one
// import org.apache.spark.mllib.linalg.Vector
// In 2.x the below is usually the right choice
import org.apache.spark.ml.linalg.Vector

// Get size of the vector
val n = testDF.first.getAs[Vector](0).size

// Simple helper to convert vector to array<double> 
// asNondeterministic is available in Spark 2.3 or later
// It can be removed, but at the cost of decreased performance
val vecToSeq = udf((v: Vector) => v.toArray).asNondeterministic

// Prepare a list of columns to create
val exprs = (0 until n).map(i => $"_tmp".getItem(i).alias(s"f$i"))

testDF.select(vecToSeq($"scaledFeatures").alias("_tmp")).select(exprs:_*)

If you know the list of columns up front, you can simplify this a little:

val cols: Seq[String] = ???
val exprs = cols.zipWithIndex.map{ case (c, i) => $"_tmp".getItem(i).alias(c) }
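
For instance, with the column names from the question (a sketch reusing vecToSeq from above; f1..f3 are just illustrative names):

val cols = Seq("f1", "f2", "f3")
val exprs = cols.zipWithIndex.map { case (c, i) => $"_tmp".getItem(i).alias(c) }

testDF.select(vecToSeq($"scaledFeatures").alias("_tmp")).select(exprs: _*)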

For the Python equivalent, see How to split Vector into columns - using PySpark.


Try the VectorSlicer approach: VectorSlicer

import org.apache.spark.ml.feature.{VectorAssembler, VectorSlicer}
import org.apache.spark.ml.linalg.Vectors

val dataset = spark.createDataFrame(
  Seq((1, 0.2, 0.8), (2, 0.1, 0.9), (3, 0.3, 0.7))
).toDF("id", "negative_logit", "positive_logit")


val assembler = new VectorAssembler()
  .setInputCols(Array("negative_logit", "positive_logit"))
  .setOutputCol("prediction")

val output = assembler.transform(dataset)
output.show()
/*
+---+--------------+--------------+----------+
| id|negative_logit|positive_logit|prediction|
+---+--------------+--------------+----------+
|  1|           0.2|           0.8| [0.2,0.8]|
|  2|           0.1|           0.9| [0.1,0.9]|
|  3|           0.3|           0.7| [0.3,0.7]|
+---+--------------+--------------+----------+
*/

val slicer = new VectorSlicer()
  .setInputCol("prediction")
  .setIndices(Array(1))
  .setOutputCol("positive_prediction")

val posi_output = slicer.transform(output)
posi_output.show()

/*
+---+--------------+--------------+----------+-------------------+
| id|negative_logit|positive_logit|prediction|positive_prediction|
+---+--------------+--------------+----------+-------------------+
|  1|           0.2|           0.8| [0.2,0.8]|              [0.8]|
|  2|           0.1|           0.9| [0.1,0.9]|              [0.9]|
|  3|           0.3|           0.7| [0.3,0.7]|              [0.7]|
+---+--------------+--------------+----------+-------------------+
*/
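
Note that positive_prediction is still a one-element vector, not a plain Double. A minimal follow-up sketch to pull out the scalar, assuming Spark 3+ for vector_to_array (on older versions a UDF, as in the other answers, does the same job):

import org.apache.spark.ml.functions.vector_to_array

// replace the one-element vector with its single Double value
posi_output
  .withColumn("positive_prediction", vector_to_array($"positive_prediction").getItem(0))
  .show()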


Another solution emerged a few days ago: import VectorDisassembler into your project (as long as it hasn't been merged into Spark), and then:

import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.linalg.Vectors

val dataset = spark.createDataFrame(
  Seq((0, 1.2, 1.3), (1, 2.2, 2.3), (2, 3.2, 3.3))
).toDF("id", "val1", "val2")


val assembler = new VectorAssembler()
  .setInputCols(Array("val1", "val2"))
  .setOutputCol("vectorCol")

val output = assembler.transform(dataset)
output.show()
/*
+---+----+----+---------+
| id|val1|val2|vectorCol|
+---+----+----+---------+
|  0| 1.2| 1.3|[1.2,1.3]|
|  1| 2.2| 2.3|[2.2,2.3]|
|  2| 3.2| 3.3|[3.2,3.3]|
+---+----+----+---------+*/

val disassembler = new org.apache.spark.ml.feature.VectorDisassembler()
  .setInputCol("vectorCol")
disassembler.transform(output).show()
/*
+---+----+----+---------+----+----+
| id|val1|val2|vectorCol|val1|val2|
+---+----+----+---------+----+----+
|  0| 1.2| 1.3|[1.2,1.3]| 1.2| 1.3|
|  1| 2.2| 2.3|[2.2,2.3]| 2.2| 2.3|
|  2| 3.2| 3.3|[3.2,3.3]| 3.2| 3.3|
+---+----+----+---------+----+----+*/

VectorDisassembler did not make it into Spark (SPARK-13610). - Alper t. Turker


I'm using Spark 2.3.2 and built an xgboost4j binary classification model; the results look like this:

results_train.select("classIndex","probability","prediction").show(3,0)
+----------+----------------------------------------+----------+
|classIndex|probability                             |prediction|
+----------+----------------------------------------+----------+
|1         |[0.5998525619506836,0.400147408246994]  |0.0       |
|1         |[0.5487841367721558,0.45121586322784424]|0.0       |
|0         |[0.5555324554443359,0.44446757435798645]|0.0       |
+----------+----------------------------------------+----------+

I defined the following UDF to get elements out of the vector column probability:

import org.apache.spark.sql.functions._

def getProb = udf((probV: org.apache.spark.ml.linalg.Vector, clsInx: Int) => probV.apply(clsInx) )

results_train.select("classIndex", "probability", "prediction")
  .withColumn("p_0", getProb($"probability", lit(0)))
  .withColumn("p_1", getProb($"probability", lit(1)))
  .show(3, 0)

+----------+----------------------------------------+----------+------------------+-------------------+
|classIndex|probability                             |prediction|p_0               |p_1                |
+----------+----------------------------------------+----------+------------------+-------------------+
|1         |[0.5998525619506836,0.400147408246994]  |0.0       |0.5998525619506836|0.400147408246994  |
|1         |[0.5487841367721558,0.45121586322784424]|0.0       |0.5487841367721558|0.45121586322784424|
|0         |[0.5555324554443359,0.44446757435798645]|0.0       |0.5555324554443359|0.44446757435798645|
+----------+----------------------------------------+----------+------------------+-------------------+
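
To generalize this to any number of classes, the p_i columns can be built programmatically; a sketch reusing getProb, where numClasses is an assumed, known value:

val numClasses = 2 // assumed known up front
val probCols = (0 until numClasses).map(i => getProb($"probability", lit(i)).alias(s"p_$i"))

results_train.select($"classIndex" +: $"probability" +: $"prediction" +: probCols: _*).show(3, 0)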

Hope this helps anyone dealing with vector-type inputs.


Since the answers above require additional libraries or are still unsupported, I used a pandas DataFrame to extract the vector values easily and then converted it back to a Spark DataFrame.

# convert to pandas dataframe 
pandasDf = dataframe.toPandas()
# add a new column
pandasDf['newColumnName'] = 0 # filled the new column with 0s
# now iterate through the rows and update the column
for index, row in pandasDf.iterrows():
    value = row['vectorCol'][0]  # get the 0th value of the vector
    pandasDf.loc[index, 'newColumnName'] = value  # put the value in the new column

# drop the vector column and convert back to a Spark dataframe
# (assumes an active SparkSession named `spark`; 'vectorCol' is the vector column's name)
sparkDf = spark.createDataFrame(pandasDf.drop(columns=['vectorCol']))
