PySpark: How do I create a calculated column in PySpark SQL?

Using PySpark SQL, given three columns, I want to create an additional column that divides two of the columns; the third column is an ID column.
df = sqlCtx.createDataFrame(
    [
        (1, 4, 2),
        (2, 5, 2),
        (3, 10, 4),
        (4, 50, 10)
    ],
    ('ID', 'X', 'Y')
)

Here is the desired output:

+----+----+----+---------------------+
| ID | x  | y  | z (expected result) |
+----+----+----+---------------------+
|  1 |  4 |  2 | 2                   |
|  2 |  5 |  2 | 2.5                 |
|  3 | 10 |  4 | 2.5                 |
|  4 | 50 | 10 | 5                   |
+----+----+----+---------------------+

To achieve this, I created a UDF:

from pyspark.sql.functions import udf
from pyspark.sql.types import FloatType

def createDivision(args):
    X = float(args[0])
    Y = float(args[1])
    RESULT = X / Y
    return RESULT

udf_createDivision = udf(createDivision, FloatType())

udf_createDivision_calc = udf_createDivision(df['X'], df['Y'])

df = df.withColumn("Z", udf_createDivision_calc)

df.show()

I then get a long stack of errors in the output:

Py4JJavaError: An error occurred while calling o7401.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 756.0 failed 1 times, most recent failure: Lost task 0.0 in stage 756.0 (TID 7249, localhost, executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/opt/spark/spark-2.4.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 372, in main
    process()
  File "/opt/spark/spark-2.4.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 367, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/opt/spark/spark-2.4.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 243, in <lambda>
    func = lambda _, it: map(mapper, it)
  File "<string>", line 1, in <lambda>.......

I would really appreciate some help, as I don't know how to interpret this error. Thank you.
1 Answer


Just use an expression:

from pyspark.sql.functions import col

df.withColumn("Z", col("x") / col("y"))
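
For completeness, the same computation can also be written as a SQL expression string, either via expr or by querying a temp view. This is a minimal sketch assuming a SparkSession named spark is available (and note that column names are resolved case-insensitively by default, which is why col("x") above matches the X column):

from pyspark.sql.functions import expr

# Same computation via a SQL expression string
df.withColumn("Z", expr("X / Y")).show()

# Or register a temp view and use plain SQL
df.createOrReplaceTempView("t")
spark.sql("SELECT ID, X, Y, X / Y AS Z FROM t").show()

In Spark SQL, dividing two integer columns with / yields a double, so Z comes out as 2.0, 2.5, 2.5 and 5.0 for the sample data.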

Following your code (you really shouldn't use a udf here), the problem is the function signature: udf_createDivision(df['X'], df['Y']) passes the two columns as two separate arguments, but createDivision(args) accepts only one, so the Python worker raises a TypeError. It should be one of:
def createDivision(x, y):
    return x / y

or
def createDivision(*args):
    return args[0] / args[1]
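
Putting it together, here is a minimal runnable sketch of the corrected UDF version (shown only for illustration; the plain column expression above is the better choice, since Python UDFs add serialization overhead):

from pyspark.sql.functions import udf
from pyspark.sql.types import FloatType

def createDivision(x, y):
    # Each column value arrives as its own argument
    return float(x) / float(y)

udf_createDivision = udf(createDivision, FloatType())

df = df.withColumn("Z", udf_createDivision(df['X'], df['Y']))
df.show()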
