java.lang.IllegalArgumentException when converting a Spark DF to a Pandas DF with pySpark


Some basic info:

  • Python: 2.7
  • OS: Mac 10.13.2 High Sierra
  • Anaconda-Navigator: version 1.7.0

My basic workflow is:

  1. Do some initial data extraction and transformation using pySpark and Spark DataFrames.
  2. Convert the Spark DataFrame to a Pandas DataFrame so I can plot with libraries like Seaborn. For this I use the .toPandas() function, but it throws a very strange error.

For example, here is a very small Spark DataFrame I tested that produces the same error as my larger DataFrames:

sampleList = [('john', 10000.0), ('sally', 3.0), ('dude', 10.0)]
sparkTestDF = sqlContext.createDataFrame(sampleList, schema=['name', 'denominator'])
sparkTestDF.toPandas()

This results in the error below. Any ideas on how to fix this or work around it?
    Py4JJavaErrorTraceback (most recent call last)
<ipython-input-15-b151034bf9ad> in <module>()
      1 sampleList = [('john', 10000.0),('sally', 3.0),('dude', 10.0)]
      2 sparkTestDF = sqlContext.createDataFrame(sampleList, schema=['name','denominator'])
----> 3 sparkTestDF.toPandas()

/anaconda2/lib/python2.7/site-packages/pyspark/sql/dataframe.pyc in toPandas(self)
   1964                 raise RuntimeError("%s\n%s" % (_exception_message(e), msg))
   1965         else:
-> 1966             pdf = pd.DataFrame.from_records(self.collect(), columns=self.columns)
   1967 
   1968             dtype = {}

/anaconda2/lib/python2.7/site-packages/pyspark/sql/dataframe.pyc in collect(self)
    464         """
    465         with SCCallSiteSync(self._sc) as css:
--> 466             port = self._jdf.collectToPython()
    467         return list(_load_from_socket(port, BatchedSerializer(PickleSerializer())))
    468 

/anaconda2/lib/python2.7/site-packages/py4j/java_gateway.pyc in __call__(self, *args)
   1158         answer = self.gateway_client.send_command(command)
   1159         return_value = get_return_value(
-> 1160             answer, self.gateway_client, self.target_id, self.name)
   1161 
   1162         for temp_arg in temp_args:

/anaconda2/lib/python2.7/site-packages/pyspark/sql/utils.pyc in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

/anaconda2/lib/python2.7/site-packages/py4j/protocol.pyc in get_return_value(answer, gateway_client, target_id, name)
    318                 raise Py4JJavaError(
    319                     "An error occurred while calling {0}{1}{2}.\n".
--> 320                     format(target_id, ".", name), value)
    321             else:
    322                 raise Py4JError(

Py4JJavaError: An error occurred while calling o155.collectToPython.
: java.lang.IllegalArgumentException
    at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
    at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
    at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
    at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
    at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
    at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
    at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
    at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
    at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
    at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
    at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
    at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
    at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
    at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
    at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
    at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
    at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
    at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
    at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
    at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply$mcI$sp(Dataset.scala:3195)
    at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:3192)
    at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:3192)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
    at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:3225)
    at org.apache.spark.sql.Dataset.collectToPython(Dataset.scala:3192)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:564)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.base/java.lang.Thread.run(Thread.java:844)

I think it has something to do with your system setup, because I'm running the same example and it works fine. - Abhishek Choudhary
Yes, and I get the same error even with spark.range(10).collect(). If anyone knows which configuration I need to change, I'd really appreciate it. - bordumb
I'm hitting this error too, also with Anaconda, but with pyspark 2.3.0 and Python 3.6. - Steven Fines
@StevenFines Please let me know if you find a solution! :) - bordumb
I think it's related to running through Anaconda, since that's what we have in common. - Steven Fines
You could compare your environment settings when you run the script as part of a spark-submit call with those when you run it in your conda environment. I've seen the same kind of error message (collect failing even though show worked fine) and noticed illuminating differences between the system paths and environment variables. - Oliver W.
1 Answer

I had exactly the same problem and solved it by setting the JAVA_HOME environment variable to point to a Java 8 JDK. The key part of the error is:
at org.apache.spark.sql.Dataset.collectToPython(Dataset.scala:3192)

That is, the failure happens on the Java side. This is a known issue (see the related Stack Overflow question). The java.base/jdk.internal.reflect frames further down the trace also suggest the JVM being used is Java 9 or newer, which Spark 2.x does not support.
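
To confirm which Java version Spark's JVM is actually running, you can query it through pySpark's Py4J gateway. A minimal sketch, assuming an active SparkSession named spark (spark.sparkContext._jvm is an internal handle, so treat this as a debugging aid only):

# Ask the running JVM for its version; anything other than 1.8.x
# points to the incompatibility described above.
print(spark.sparkContext._jvm.System.getProperty("java.version"))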

You can set JAVA_HOME in your bashrc, in Spark's configuration files, or directly in the notebook. For example, on Ubuntu:

%env JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64/

On a Mac it will look roughly like this:
%env JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_162.jdk/Contents/Home/
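
If you would rather keep everything in Python, you can also export the variable before the JVM is launched. A sketch under the assumption that JDK 8 is installed at the path shown (on macOS, /usr/libexec/java_home -v 1.8 prints the actual location):

import os

# JAVA_HOME must be set before the first SparkSession/SparkContext is
# created, because that is when the JVM gets launched. The path below
# is an example; substitute your own JDK 8 install location.
os.environ["JAVA_HOME"] = "/Library/Java/JavaVirtualMachines/jdk1.8.0_162.jdk/Contents/Home"

from pyspark.sql import SparkSession
spark = SparkSession.builder.master("local[*]").getOrCreate()

# With JDK 8 in place, the original conversion should now work:
sampleList = [('john', 10000.0), ('sally', 3.0), ('dude', 10.0)]
spark.createDataFrame(sampleList, schema=['name', 'denominator']).toPandas()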
