Apache Spark - Parquet / Snappy compression error

I have a DataFrame read from an Oracle table and now want to save it locally as Parquet with Snappy compression. Saving it as CSV succeeds, but saving it as Parquet fails with:
java.lang.UnsatisfiedLinkError: org.xerial.snappy.SnappyNative.maxCompressedLength(I)I

The Snappy library is on my classpath, and it works for other source types (e.g. flat files).
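
For reference, the write is roughly the following (a simplified sketch; the JDBC connection options and the CSV path are placeholders, only the Parquet output path matches the log below):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class OracleToParquet {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("OracleToParquet")
                .master("local[*]")
                .getOrCreate();

        // Read the Oracle table over JDBC (connection details are placeholders).
        Dataset<Row> df = spark.read()
                .format("jdbc")
                .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL")
                .option("dbtable", "GMS_TEST")
                .option("user", "user")
                .option("password", "password")
                .load();

        // Writing as CSV succeeds.
        df.write().mode(SaveMode.Overwrite).csv("C:/Dev/edi_csv/GMS_TEST");

        // Writing as Parquet with Snappy compression throws the UnsatisfiedLinkError.
        df.write()
                .mode(SaveMode.Overwrite)
                .option("compression", "snappy")
                .parquet("C:/Dev/edi_parquet/GMS_TEST");

        spark.stop();
    }
}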

What can I do to resolve this?

Here is the stack trace:

2017-05-19 08:10:37.398  INFO 7740 --- [rker for task 0] org.apache.hadoop.io.compress.CodecPool  : Got brand-new compressor [.snappy]
2017-05-19 08:11:45.482 ERROR 7740 --- [rker for task 0] org.apache.spark.util.Utils              : Aborting task
java.lang.UnsatisfiedLinkError: org.xerial.snappy.SnappyNative.maxCompressedLength(I)I
    at org.xerial.snappy.SnappyNative.maxCompressedLength(Native Method) ~[snappy-java-1.1.2.6.jar:na]
    at org.xerial.snappy.Snappy.maxCompressedLength(Snappy.java:376) ~[snappy-java-1.1.2.6.jar:na]
    at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67) ~[parquet-hadoop-1.8.1.jar:1.8.1]
    at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81) ~[hadoop-common-2.2.0.jar:na]
    at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92) ~[hadoop-common-2.2.0.jar:na]
    at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112) ~[parquet-hadoop-1.8.1.jar:1.8.1]
    at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:89) ~[parquet-hadoop-1.8.1.jar:1.8.1]
    at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:152) ~[parquet-column-1.8.1.jar:1.8.1]
    at org.apache.parquet.column.impl.ColumnWriterV1.accountForValueWritten(ColumnWriterV1.java:113) ~[parquet-column-1.8.1.jar:1.8.1]
    at org.apache.parquet.column.impl.ColumnWriterV1.write(ColumnWriterV1.java:205) ~[parquet-column-1.8.1.jar:1.8.1]
    at org.apache.parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.addBinary(MessageColumnIO.java:347) ~[parquet-column-1.8.1.jar:1.8.1]
    at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport$$anonfun$org$apache$spark$sql$execution$datasources$parquet$ParquetWriteSupport$$makeWriter$9.apply(ParquetWriteSupport.scala:169) ~[spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport$$anonfun$org$apache$spark$sql$execution$datasources$parquet$ParquetWriteSupport$$makeWriter$9.apply(ParquetWriteSupport.scala:157) ~[spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport$$anonfun$org$apache$spark$sql$execution$datasources$parquet$ParquetWriteSupport$$writeFields$1.apply$mcV$sp(ParquetWriteSupport.scala:114) ~[spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport.org$apache$spark$sql$execution$datasources$parquet$ParquetWriteSupport$$consumeField(ParquetWriteSupport.scala:422) ~[spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport.org$apache$spark$sql$execution$datasources$parquet$ParquetWriteSupport$$writeFields(ParquetWriteSupport.scala:113) ~[spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport$$anonfun$write$1.apply$mcV$sp(ParquetWriteSupport.scala:104) ~[spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport.consumeMessage(ParquetWriteSupport.scala:410) ~[spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport.write(ParquetWriteSupport.scala:103) ~[spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport.write(ParquetWriteSupport.scala:51) ~[spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:121) ~[parquet-hadoop-1.8.1.jar:1.8.1]
    at org.apache.parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:123) ~[parquet-hadoop-1.8.1.jar:1.8.1]
    at org.apache.parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:42) ~[parquet-hadoop-1.8.1.jar:1.8.1]
    at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.writeInternal(ParquetOutputWriter.scala:42) ~[spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:245) ~[spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190) ~[spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188) ~[spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1341) ~[spark-core_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193) [spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129) [spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128) [spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) [spark-core_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.scheduler.Task.run(Task.scala:99) [spark-core_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322) [spark-core_2.11-2.1.1.jar:2.1.1]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_75]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_75]
    at java.lang.Thread.run(Thread.java:745) [na:1.7.0_75]
2017-05-19 08:11:45.484  INFO 7740 --- [rker for task 0] o.a.p.h.InternalParquetRecordWriter      : Flushing mem columnStore to file. allocated memory: 13,812,677
2017-05-19 08:11:45.499  WARN 7740 --- [rker for task 0] org.apache.hadoop.fs.FileUtil            : Failed to delete file or dir [C:\Dev\edi_parquet\GMS_TEST\_temporary\0\_temporary\attempt_20170519081036_0000_m_000000_0\.part-00000-193f8835-6505-4dac-8cb6-0e8c5f3cff1b.snappy.parquet.crc]: it still exists.
2017-05-19 08:11:45.501  WARN 7740 --- [rker for task 0] org.apache.hadoop.fs.FileUtil            : Failed to delete file or dir [C:\Dev\edi_parquet\GMS_TEST\_temporary\0\_temporary\attempt_20170519081036_0000_m_000000_0\part-00000-193f8835-6505-4dac-8cb6-0e8c5f3cff1b.snappy.parquet]: it still exists.
2017-05-19 08:11:45.501  WARN 7740 --- [rker for task 0] o.a.h.m.lib.output.FileOutputCommitter   : Could not delete file:/C:/Dev/edi_parquet/GMS_TEST/_temporary/0/_temporary/attempt_20170519081036_0000_m_000000_0
2017-05-19 08:11:45.504 ERROR 7740 --- [rker for task 0] o.a.s.s.e.datasources.FileFormatWriter   : Job job_20170519081036_0000 aborted.

Are you appending to the output? - koiralo
No, just saving the complete DataFrame with df.write().mode(SaveMode.Overwrite). It turned out the problem only occurs on the Windows machine I was testing on; in a Linux environment it runs fine... - Chris Finlayson
1 Answer

The problem is caused by an incompatibility between the snappy-java version required by Parquet and the one shipped with Spark/Hadoop.
We ran into the same issue using Spark 2.3 on Cloudera.
The solution we used was to download snappy-java-1.1.2.6.jar and place it in Spark's jars folder, which resolves the problem.
This means replacing the snappy-java jar on every node where Spark is installed.

You can find Spark's jars folder at:

  • Cloudera: /opt/cloudera/parcels/SPARK2-{spark-cloudera-version}/lib/spark2/jars
  • HDP: /usr/hdp/{hdp version}/spark2/jars
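
If you want to confirm which snappy-java jar the JVM is actually picking up, and whether its native library loads at all, a small standalone check along these lines can help (this is only a diagnostic sketch, not part of the fix itself):

import java.nio.charset.StandardCharsets;
import org.xerial.snappy.Snappy;

public class SnappyCheck {
    public static void main(String[] args) throws Exception {
        // Print which jar the Snappy class was loaded from, to spot a
        // conflicting snappy-java version on the classpath.
        System.out.println("snappy-java loaded from: "
                + Snappy.class.getProtectionDomain().getCodeSource().getLocation());

        // Force the native library to initialize; this is where the
        // UnsatisfiedLinkError would surface if the versions are mismatched.
        byte[] compressed = Snappy.compress("snappy smoke test".getBytes(StandardCharsets.UTF_8));
        System.out.println("Snappy compress OK, " + compressed.length + " bytes");
    }
}

Comparing the reported jar location against the snappy-java version bundled with your Spark/Hadoop jars should make any version clash obvious.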
