PySpark "too many values" error after using repartition


I have a DataFrame (converted to an RDD) that I want to repartition so that each key (the first column) gets its own partition. This is what I did:

# Repartition to # key partitions and map each row to a partition given their key rank
my_rdd = df.rdd.partitionBy(len(keys), lambda row: int(row[0]))

However, when I try to map it back to a DataFrame or save it, I get the following error:
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "spark-1.5.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main
        process()
      File "spark-1.5.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py",     line 106, in process
serializer.dump_stream(func(split_index, iterator), outfile)
  File "spark-1.5.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py", line 133, in dump_stream
    for obj in iterator:
  File "spark-1.5.1-bin-hadoop2.6/python/pyspark/rdd.py", line 1703, in add_shuffle_key
    for k, v in iterator:
ValueError: too many values to unpack

        at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
        at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
        at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
        at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
        at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:342)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:88)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        ... 1 more

Further testing showed that even the following produces the same error: my_rdd = df.rdd.partitionBy(x) # x can be 5, 100, etc.

Has anyone run into this before? If so, how did you resolve it?

1 Answer


partitionBy requires a PairwiseRDD, which in Python is equivalent to an RDD of tuples (or lists) of length 2, where the first element is the key and the second is the value.
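For example, here is a minimal sketch of the shape partitionBy expects (it assumes a running SparkContext named sc):

# A pair RDD: every element is a (key, value) tuple of length 2.
pairs = sc.parallelize([(0, "a"), (1, "b"), (0, "c"), (2, "d")])

# partitionBy applies its partitionFunc (a hash of the key by default) to the key only.
partitioned = pairs.partitionBy(3)

# glom() groups elements by partition, so you can see where each key landed.
print(partitioned.glom().collect())
# e.g. [[(0, 'a'), (0, 'c')], [(1, 'b')], [(2, 'd')]]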

partitionFunc takes the key and maps it to a partition number. When you use partitionBy on an RDD[Row], it tries to unpack each row into a key and a value, and fails:

from pyspark.sql import Row

row = Row(1, 2, 3)
k, v = row

## Traceback (most recent call last):
##   ...
## ValueError: too many values to unpack (expected 2)

And even if you provide correct data and do something like this:
my_rdd = (df.rdd
          .map(lambda row: (int(row[0]), row))
          .partitionBy(len(keys)))

it doesn't make much sense. Partitioning is not particularly meaningful in the case of DataFrames. See my answer to How to define partitioning of DataFrame? for more details.
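If you do still need that layout, here is a quick way to check it. This is only a sketch: it assumes my_rdd was built as above, with integer keys running from 0 to len(keys) - 1, so that the default hash partitioner sends key i to partition i.

# List the distinct keys found in each partition; with keys 0..len(keys)-1
# and the default hash partitioner, partition i should contain only key i.
per_partition = my_rdd.mapPartitionsWithIndex(
    lambda idx, rows: [(idx, sorted({k for k, _ in rows}))]
).collect()
print(per_partition)
# e.g. [(0, [0]), (1, [1]), (2, [2]), ...]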

