Spark错误: java.lang.NoClassDefFoundError: org/apache/spark/sql/sources/v2/StreamWriteSupport

4

I am using Spark on Hortonworks and hit the exception below when running the following code. I also run a standalone Spark instance on my machine, and the same code works fine there.

Do I need to do anything differently on Hortonworks to resolve the error below? Please help.

[root@sandbox-hdp bin]# ./spark-shell
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
18/08/31 11:36:44 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
Spark context Web UI available at http://172.17.0.2:4041
Spark context available as 'sc' (master = local[*], app id = local-1535715404685).
Spark session available as 'spark'.
Welcome to
          ____              __
         / __/__  ___ _____/ /__
        _\ \/ _ \/ _ `/ __/  '_/
       /___/ .__/\_,_/_/ /_/\_\   version 2.2.0.2.6.3.0-235
          /_/

Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_151)
Type in expressions to have them evaluated.
Type :help for more information.

    scala> :paste
    // Entering paste mode (ctrl-D to finish)

    import org.apache.spark.sql.SQLContext
    val sqlContext = new SQLContext(sc)
    val df = sqlContext.read
    .format("com.databricks.spark.csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("hdfs://sandbox-hdp.hortonworks.com:8020/city.csv")
    df.show()
    df.printSchema()

    // Exiting paste mode, now interpreting.

warning: there was one deprecation warning; re-run with -deprecation for details
java.lang.NoClassDefFoundError: org/apache/spark/sql/sources/v2/StreamWriteSupport
  at java.lang.ClassLoader.defineClass1(Native Method)
  at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
  at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
  at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
  at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
  at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
  at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
  at java.security.AccessController.doPrivileged(Native Method)
  at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:411)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:411)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
  at java.lang.Class.forName0(Native Method)
  at java.lang.Class.forName(Class.java:348)
  at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:370)
  at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
  at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
  at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:43)
  at scala.collection.Iterator$class.foreach(Iterator.scala:893)
  at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
  at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:247)
  at scala.collection.TraversableLike$class.filter(TraversableLike.scala:259)
  at scala.collection.AbstractTraversable.filter(Traversable.scala:104)
  at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:533)
  at org.apache.spark.sql.execution.datasources.DataSource.providingClass$lzycompute(DataSource.scala:89)
  at org.apache.spark.sql.execution.datasources.DataSource.providingClass(DataSource.scala:89)
  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:304)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:156)
  ... 53 elided

Caused by: java.lang.ClassNotFoundException: org.apache.spark.sql.sources.v2.StreamWriteSupport
  at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
  ... 86 more

All the CSV driver jars were copied correctly into Spark's jars folder. I hit the same exception when connecting to Excel, HBase, and Apache Phoenix.

This only happens on Hortonworks.
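One way to confirm this kind of mismatch is a plain-JVM probe for the missing class. `StreamWriteSupport` only exists from Spark 2.3 onward, so on a Spark 2.2 classpath the lookup fails; any data-source jar in Spark's jars folder that was compiled against 2.3 will then crash the `ServiceLoader` scan with exactly the `NoClassDefFoundError` above. A minimal diagnostic sketch (the object and method names here are mine, not from the question):

```scala
// Probe whether a fully-qualified class is visible on the current classpath.
// Run this inside the failing spark-shell: if it prints false for
// StreamWriteSupport, the runtime is Spark < 2.3 and any jar compiled
// against 2.3+ will break ServiceLoader-based data source discovery.
object ClasspathCheck {
  def classPresent(fqcn: String): Boolean =
    try { Class.forName(fqcn); true }
    catch { case _: ClassNotFoundException => false }

  def main(args: Array[String]): Unit = {
    println(classPresent("org.apache.spark.sql.sources.v2.StreamWriteSupport"))
  }
}
```

If this prints `false` while the offending jar expects the class, the fix is to use jars built for the Spark version the cluster actually runs.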


1
Judging from the documentation, this class appears to have been added in Spark 2.3.1, while your shell shows 2.2.0. https://spark.apache.org/docs/2.3.1/api/java/org/apache/spark/sql/sources/v2/StreamWriteSupport.html - Felipe Martins Melo
It finally works after adding the latest Spark code jars :) - Nirmal
2 Answers

2

I had the same problem. I found a mistake in my pom.xml: the Spark version there differed from the one in my environment. So I changed the Spark version to match the environment, and that resolved the error.
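As an illustration, the fix amounts to pinning every Spark artifact in pom.xml to the version the cluster actually runs (2.2.0 here, per the shell banner). This is a hedged sketch of what such a pom fragment might look like, not the answerer's actual file:

```xml
<!-- Keep all Spark artifacts on the same version as the cluster runtime. -->
<!-- The shell banner above reports 2.2.0, so depend on 2.2.x, not 2.3.x. -->
<properties>
  <spark.version>2.2.0</spark.version>
</properties>
<dependencies>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>${spark.version}</version>
    <scope>provided</scope>
  </dependency>
</dependencies>
```

Marking the Spark dependencies as `provided` also keeps your build from bundling a second, mismatched copy of Spark into the application jar.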


1

It worked after adding the latest Spark code jars.


Spark code jars? What does that mean? - undefined
