Error when calling the DataFrame show() method in PySpark

I am trying to show a PySpark DataFrame, but I get the following error:
Py4JJavaError: An error occurred while calling o607.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 114.0 failed 4 times, most recent failure: Lost task 0.3 in stage 114.0 (TID 15904, zw02-data-hdp-dn25211.mt, executor 416): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/data5/hadoop/yarn/nm-local-dir/usercache/hadoop-hmart-peisongpa/appcache/application_1634562540530_1814236/container_e37_1634562540530_1814236_01_001496/pyspark.zip/pyspark/worker.py", line 177, in main
    process()
  File "/data5/hadoop/yarn/nm-local-dir/usercache/hadoop-hmart-peisongpa/appcache/application_1634562540530_1814236/container_e37_1634562540530_1814236_01_001496/pyspark.zip/pyspark/worker.py", line 172, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/data5/hadoop/yarn/nm-local-dir/usercache/hadoop-hmart-peisongpa/appcache/application_1634562540530_1814236/container_e37_1634562540530_1814236_01_001496/pyspark.zip/pyspark/serializers.py", line 220, in dump_stream
    self.serializer.dump_stream(self._batched(iterator), stream)
  File "/data5/hadoop/yarn/nm-local-dir/usercache/hadoop-hmart-peisongpa/appcache/application_1634562540530_1814236/container_e37_1634562540530_1814236_01_001496/pyspark.zip/pyspark/serializers.py", line 138, in dump_stream
    for obj in iterator:
  File "/data5/hadoop/yarn/nm-local-dir/usercache/hadoop-hmart-peisongpa/appcache/application_1634562540530_1814236/container_e37_1634562540530_1814236_01_001496/pyspark.zip/pyspark/serializers.py", line 209, in _batched
    for item in iterator:
  File "<string>", line 1, in <lambda>
  File "/data5/hadoop/yarn/nm-local-dir/usercache/hadoop-hmart-peisongpa/appcache/application_1634562540530_1814236/container_e37_1634562540530_1814236_01_001496/pyspark.zip/pyspark/worker.py", line 71, in <lambda>
    return lambda *a: f(*a)
  File "<ipython-input-12-2ecb67285c3b>", line 5, in <lambda>
  File "<ipython-input-12-2ecb67285c3b>", line 4, in convert_target
TypeError: int() argument must be a string, a bytes-like object or a number, not 'DenseVector'

Here is my code, which runs in Jupyter:

from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler, MinMaxScaler
from pyspark.sql import functions as f

df2 = spark.sql(sql_text)
assembler = VectorAssembler(inputCols=["targetstep"], outputCol="x_vec")
scaler = MinMaxScaler(inputCol="x_vec", outputCol="targetstep_scaled")
pipeline = Pipeline(stages=[assembler, scaler])
scalerModel = pipeline.fit(df2)
df2 = scalerModel.transform(df2)
# target_udf wraps convert_target, which is defined earlier in the notebook
df2 = df2.withColumn('targetstep', target_udf(f.col('targetstep_scaled'))).drop('x_vec')
df2.show()

I believe the pipeline and the withColumn() call are fine, but I don't understand why show() fails.

1 Answer


PySpark DataFrames are lazily evaluated.

When you call .show(), you are asking all of the preceding steps to execute. Any one of them may be broken, but you won't see that until you call .show(), because until then they simply haven't run.
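Here is a minimal sketch of that point, using a hypothetical stand-in UDF (the real convert_target is not shown in the question). The withColumn() call only records the step in the plan; the worker-side TypeError from the traceback surfaces only once an action such as .show() runs:

from pyspark.sql import functions as f
from pyspark.sql.types import IntegerType

# df_scaled stands for the DataFrame right after scalerModel.transform(df2), so it
# carries the vector column targetstep_scaled produced by MinMaxScaler.
# Hypothetical stand-in for convert_target: int() cannot accept a DenseVector.
bad_udf = f.udf(lambda v: int(v), IntegerType())

df_lazy = df_scaled.withColumn('targetstep', bad_udf(f.col('targetstep_scaled')))  # no error raised here
df_lazy.show()  # only now does the Python worker run the lambda and raise the TypeError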

I would go back over the preceding steps and call .collect() on the DataFrame after each operation. That will at least let you pin down where the bad data is being created.
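A sketch of that debugging approach, reusing the variable names from the question (sql_text, scalerModel, target_udf); count() or take() are cheaper actions than collect() on a large table and surface the same worker exception:

from pyspark.sql import functions as f

df_raw = spark.sql(sql_text)
df_raw.count()                                  # does the source query run on its own?

df_scaled = scalerModel.transform(df_raw)
df_scaled.select('targetstep_scaled').take(5)   # does the assembler/scaler output materialize?

df_final = df_scaled.withColumn('targetstep', target_udf(f.col('targetstep_scaled')))
df_final.take(5)                                # the UDF step: the TypeError from the traceback shows up here

Given the error message, the failing step is almost certainly the UDF: targetstep_scaled is a one-element DenseVector, so convert_target would need to pull the scalar out first (for example int(v[0])) rather than passing the whole vector to int(). That is an inference from the traceback, since convert_target itself is not shown in the question.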

