Spark Scala error: "class scala.Any not found"

val schema = df.schema
val x = df.flatMap(r =>
  (0 until schema.length).map { idx =>
    ((idx, r.get(idx)), 1L)
  }
)

This results in the following error:

java.lang.ClassNotFoundException: scala.Any

I'm not sure why. Can anyone help?


Please try rebuilding your project; this looks like an indexing issue in your editor. - Chaitanya
This is executed on the Databricks Spark engine; there is no "rebuild". @ChaitanyaWaikar - jayjay93
The Row.get method returns a value of type Any because it doesn't know the concrete type, but Any is not serializable and is not a valid Spark structured type. If you expect every field to be a String, you can use r.getString(idx) instead. - Tom Lous
I need each type to come through as the schema defines it, @TomLous. Is there no other way? - jayjay93
1 Answer


One way is to cast all columns to string. Note that I also changed r.get(idx) to r.getString(idx) in your code. The following works.

scala> val df = Seq(("ServiceCent4","AP-1-IOO-PPP","241.206.155.172","06-12-18:17:42:34",162,53,1544098354885L)).toDF("COL1","COL2","COL3","EventTime","COL4","COL5","COL6")
df: org.apache.spark.sql.DataFrame = [COL1: string, COL2: string ... 5 more fields]

scala> df.show(1,false)
+------------+------------+---------------+-----------------+----+----+-------------+
|COL1        |COL2        |COL3           |EventTime        |COL4|COL5|COL6         |
+------------+------------+---------------+-----------------+----+----+-------------+
|ServiceCent4|AP-1-IOO-PPP|241.206.155.172|06-12-18:17:42:34|162 |53  |1544098354885|
+------------+------------+---------------+-----------------+----+----+-------------+
only showing top 1 row

scala> df.printSchema
root
 |-- COL1: string (nullable = true)
 |-- COL2: string (nullable = true)
 |-- COL3: string (nullable = true)
 |-- EventTime: string (nullable = true)
 |-- COL4: integer (nullable = false)
 |-- COL5: integer (nullable = false)
 |-- COL6: long (nullable = false)


scala> val schema = df.schema
schema: org.apache.spark.sql.types.StructType = StructType(StructField(COL1,StringType,true), StructField(COL2,StringType,true), StructField(COL3,StringType,true), StructField(EventTime,StringType,true), StructField(COL4,IntegerType,false), StructField(COL5,IntegerType,false), StructField(COL6,LongType,false))

scala> val df2 = df.columns.foldLeft(df){ (acc,r) => acc.withColumn(r,col(r).cast("string")) }
df2: org.apache.spark.sql.DataFrame = [COL1: string, COL2: string ... 5 more fields]

scala> df2.printSchema
root
 |-- COL1: string (nullable = true)
 |-- COL2: string (nullable = true)
 |-- COL3: string (nullable = true)
 |-- EventTime: string (nullable = true)
 |-- COL4: string (nullable = false)
 |-- COL5: string (nullable = false)
 |-- COL6: string (nullable = false)


scala> val x = df2.flatMap(r => (0 until schema.length).map { idx => ((idx, r.getString(idx)), 1l) } )
x: org.apache.spark.sql.Dataset[((Int, String), Long)] = [_1: struct<_1: int, _2: string>, _2: bigint]

scala> x.show(5,false)
+---------------------+---+
|_1                   |_2 |
+---------------------+---+
|[0,ServiceCent4]     |1  |
|[1,AP-1-IOO-PPP]     |1  |
|[2,241.206.155.172]  |1  |
|[3,06-12-18:17:42:34]|1  |
|[4,162]              |1  |
+---------------------+---+
only showing top 5 rows


scala>
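If, as the follow-up comment asks, you would rather not cast the DataFrame's columns up front, a hedged alternative is to stringify each value inside the lambda itself, so the Dataset's element type never contains Any. A minimal Spark-free sketch of the idea, using a plain Seq[Any] to stand in for Row (the sample values are assumptions taken from the answer's demo data):

```scala
// Stand-in for a Spark Row: like Row.get(idx), indexing a Seq[Any] returns Any.
val row: Seq[Any] = Seq("ServiceCent4", 162, 1544098354885L)

// Convert each Any to String immediately, so the element type of the result
// is the encodable ((Int, String), Long) rather than ((Int, Any), Long).
val counts = row.indices.map { idx =>
  ((idx, Option(row(idx)).map(_.toString).orNull), 1L)
}
// counts: Seq[((Int, String), Long)]
```

In Spark the same pattern would be df.flatMap(r => (0 until schema.length).map(idx => ((idx, Option(r.get(idx)).map(_.toString).orNull), 1L))), which leaves the upstream schema untouched while still producing a Dataset with a valid encoder.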
