Converting an RDD of NumPy arrays to a PySpark DataFrame

While trying to convert an RDD made of NumPy arrays into a PySpark DataFrame, I ran into the error below.

Here is the snippet that triggers it; I can't even pinpoint where it actually fails, even after reading the traceback...

Does anyone know how to work around this?

Thanks a lot!

In [111]: rddUser.take(5)

Out[111]:

[array([u'1008798262000292538', u'1.0', u'0.0', ..., u'0.0', u'0.0', u'1.0'], 
       dtype='<U32'),
 array([u'102254941859441333', u'1.0', u'0.0', ..., u'0.0', u'0.0', u'1.0'], 
       dtype='<U32'),
 array([u'1035609083097069747', u'1.0', u'0.0', ..., u'0.0', u'0.0', u'1.0'], 
       dtype='<U32'),
 array([u'10363297284472000', u'1.0', u'0.0', ..., u'0.0', u'0.0', u'1.0'], 
       dtype='<U32'),
 array([u'1059178934871294116', u'1.0', u'0.0', ..., u'0.0', u'0.0', u'1.0'], 
       dtype='<U32')]

And this is where things go wrong:

In [110]: rddUser.toDF(schema=None).show()  

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-110-073037afd70e> in <module>()
----> 1 rddUser.toDF(schema=None).show()

     62         [Row(name=u'Alice', age=1)]
     63         """
---> 64         return sqlContext.createDataFrame(self, schema, sampleRatio)
     65 
     66     RDD.toDF = toDF

    421 
    422         if isinstance(data, RDD):
--> 423             rdd, schema = self._createFromRDD(data, schema, samplingRatio)
    424         else:
    425             rdd, schema = self._createFromLocal(data, schema)

    308         """
    309         if schema is None or isinstance(schema, (list, tuple)):
--> 310             struct = self._inferSchema(rdd, samplingRatio)
    311             converter = _create_converter(struct)
    312             rdd = rdd.map(converter)

    253         """
    254         first = rdd.first()
--> 255         if not first:
    256             raise ValueError("The first row in RDD is empty, "
    257                              "can not infer schema")

ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

Hi, I want a PySpark DataFrame with one field per element of my initial arrays. The first field would be the user ID, followed by the first feature, the second feature... up to the last one. For context, this is a (possibly ugly) way I found to manually compute a one-hot encoding of my dataset. I want to fit a logistic regression on the resulting PySpark DataFrame... - antounes
1 Answer

If the RDD is defined as below, just map each array with tolist before calling toDF:

import numpy as np

rdd = spark.sparkContext.parallelize([
    np.array([u'1059178934871294116', u'1.0', u'0.0', u'0.0', u'0.0', u'1.0']),
    np.array([u'102254941859441333', u'1.0', u'0.0', u'0.0', u'0.0', u'1.0'])
])

df = rdd.map(lambda x: x.tolist()).toDF(["user_id"])
df.show()

# +-------------------+---+---+---+---+---+
# |            user_id| _2| _3| _4| _5| _6|
# +-------------------+---+---+---+---+---+
# |1059178934871294116|1.0|0.0|0.0|0.0|1.0|
# | 102254941859441333|1.0|0.0|0.0|0.0|1.0|
# +-------------------+---+---+---+---+---+
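
The tolist step is what avoids the original ValueError: toDF's schema inference calls rdd.first() and truth-tests the result (the "if not first:" check visible in the traceback), and truth-testing a multi-element NumPy array is ambiguous. A minimal illustration of that behaviour:

import numpy as np

row = np.array([u'1059178934871294116', u'1.0', u'0.0'])

# Truth-testing a multi-element NumPy array raises the same ValueError
# that surfaces inside schema inference's "if not first:" check.
try:
    bool(row)
except ValueError as e:
    print(e)

# A plain Python list truth-tests on emptiness, so inference proceeds.
print(bool(row.tolist()))  # True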

But given your comment, I assume you want to use it with ml. In that case this is probably better:

from pyspark.ml.linalg import DenseVector

(rdd
   .map(lambda x: (x[0].tolist(), DenseVector(x[1:])))
   .toDF(["user_id", "features"])
   .show(2, False))
# +-------------------+---------------------+
# |user_id            |features             |
# +-------------------+---------------------+
# |1059178934871294116|[1.0,0.0,0.0,0.0,1.0]|
# |102254941859441333 |[1.0,0.0,0.0,0.0,1.0]|
# +-------------------+---------------------+
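
Since the stated goal is to fit a logistic regression on the result, here is a rough sketch of how that DataFrame could feed into pyspark.ml, assuming a hypothetical label column that is not present in the sample data:

from pyspark.ml.classification import LogisticRegression
from pyspark.ml.linalg import DenseVector
from pyspark.sql import functions as F

# "label" is a placeholder added only for illustration; the sample arrays
# above contain no target variable.
train = (rdd
    .map(lambda x: (x[0].tolist(), DenseVector(x[1:])))
    .toDF(["user_id", "features"])
    .withColumn("label", F.lit(1.0)))

lr = LogisticRegression(featuresCol="features", labelCol="label")
# model = lr.fit(train)  # needs a real label column with at least two classes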

You should also take a look at pyspark.ml.feature.OneHotEncoder.
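
For reference, a minimal sketch of that encoder on a hypothetical categorical column (Spark 3.x API, where OneHotEncoder is an Estimator; on 2.x it takes inputCol/outputCol and needs no fit):

from pyspark.ml.feature import StringIndexer, OneHotEncoder

# Hypothetical DataFrame with one categorical column, for illustration only.
cat_df = spark.createDataFrame(
    [("1059178934871294116", "a"), ("102254941859441333", "b")],
    ["user_id", "category"])

indexer = StringIndexer(inputCol="category", outputCol="category_idx")
encoder = OneHotEncoder(inputCols=["category_idx"], outputCols=["category_vec"])

indexed = indexer.fit(cat_df).transform(cat_df)
encoded = encoder.fit(indexed).transform(indexed)
encoded.show(truncate=False)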
