I'm new to Scala and am trying to run the following code:
val SetID = udf{(c:String, d: String) =>
if( c.UpperCase.contains("EXKLUS") == true)
{d}
else {""}
}
val ParquetWithID = STG1
.withColumn("ID", SetID( col("line_item"), col("line_item_ID")))
In the STG1 schema, both the line_item and line_item_id columns are defined as Strings.
When I try to run the code, I get the following error:

org.apache.spark.SparkException: Failed to execute user defined function($anonfun$1$$anonfun$2: (string, string) => string)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:246)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:240)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
and:
Caused by: java.lang.NullPointerException
at MyTests$$anonfun$1$$anonfun$2.apply(MyTests.scala:356)
at MyTests$$anonfun$1$$anonfun$2.apply(MyTests.scala:355)
... 16 more
I also tried c.UpperCase().contains("EXKLUS"), but I got the same error.
However, if I just run a plain "if equals" statement, everything works fine. So I guess the problem lies in using the UpperCase().contains(" ") function in my udf, but I can't figure out what's wrong. Any help would be appreciated!
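For context on what I've figured out so far: the Caused by: java.lang.NullPointerException suggests the failure may not be the contains call itself but a null value in one of the columns, since calling a method on a null String throws an NPE inside the UDF (also, Scala's String method is toUpperCase, not UpperCase). A minimal null-safe sketch of the logic I'm after, written as a plain function (setId is a hypothetical name; in Spark it would be wrapped with udf):

```scala
// Null-safe version of the UDF body. Column values can be null at
// runtime, and null.toUpperCase throws a NullPointerException, which
// would match the stack trace above.
def setId(c: String, d: String): String =
  if (c != null && c.toUpperCase.contains("EXKLUS")) d
  else ""

// In Spark this would be registered as:
//   val SetID = udf(setId _)
println(setId("Exklusiv", "42")) // prints "42"
println(setId(null, "42"))       // prints ""
```

This is only my guess at a workaround, assuming the NPE comes from null values in line_item.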