Writing a file to Amazon S3


I am trying the following:

import awscala._, s3._

implicit val s3 = S3()
val bucket = s3.createBucket("acme-datascience-lab")
bucket.put("sample.txt", new java.io.File("sample.txt"))

I get the following error:

Exception in thread "main" java.lang.NoSuchFieldError: EU_CENTRAL_1
    at awscala.Region0$.<init>(Region0.scala:27)
    at awscala.Region0$.<clinit>(Region0.scala)
    at awscala.package$.<init>(package.scala:3)
    at awscala.package$.<clinit>(package.scala)
    at awscala.s3.S3$.apply$default$2(S3.scala:18)
    at com.acme.spark.FlightDelays.HistoricalFlightDelayOutput$.main(HistoricalFlightDelayOutput.scala:164)
    at com.acme.spark.FlightDelays.HistoricalFlightDelayOutput.main(HistoricalFlightDelayOutput.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

The problem occurs on this line:

implicit val s3 = S3()

Here are the contents of my build.sbt file:

import AssemblyKeys._

assemblySettings

name := "acme-get-flight-delays"
version := "0.0.1"
scalaVersion := "2.10.5"

// additional libraries
libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-core_2.10" % "1.6.0" % "provided",
  "org.apache.spark" %% "spark-sql" % "1.6.0",
  "org.apache.spark" %% "spark-hive" % "1.6.0",
  "org.scalanlp" %% "breeze" % "0.11.2",
  "org.scalanlp" %% "breeze-natives" % "0.11.2",
  "net.liftweb" %% "lift-json" % "2.5+",
  "org.apache.hadoop" % "hadoop-client" % "2.6.0",
  "org.apache.hadoop" % "hadoop-aws" % "2.6.0",
  "com.amazonaws" % "aws-java-sdk" % "1.0.002",
  "com.github.seratch" %% "awscala" % "0.5.+"
)

resolvers ++= Seq(
  "Spark Packages Repo" at "http://dl.bintray.com/spark-packages/maven",
  "JBoss Repository" at "http://repository.jboss.org/nexus/content/repositories/releases/",
  "Spray Repository" at "http://repo.spray.cc/",
  "Cloudera Repository" at "https://repository.cloudera.com/artifactory/cloudera-repos/",
  "Akka Repository" at "http://repo.akka.io/releases/",
  "Twitter4J Repository" at "http://twitter4j.org/maven2/",
  "Apache HBase" at "https://repository.apache.org/content/repositories/releases",
  "Twitter Maven Repo" at "http://maven.twttr.com/",
  "scala-tools" at "https://oss.sonatype.org/content/groups/scala-tools",
  "Typesafe repository" at "http://repo.typesafe.com/typesafe/releases/",
  "Second Typesafe repo" at "http://repo.typesafe.com/typesafe/maven-releases/",
  "Mesosphere Public Repository" at "http://downloads.mesosphere.io/maven",
  Resolver.sonatypeRepo("public")
)

mergeStrategy in assembly <<= (mergeStrategy in assembly) { (old) =>
  {
    case m if m.toLowerCase.endsWith("manifest.mf") => MergeStrategy.discard
    case m if m.startsWith("META-INF") => MergeStrategy.discard
    case PathList("javax", "servlet", xs @ _*) => MergeStrategy.first
    case PathList("org", "apache", xs @ _*) => MergeStrategy.first
    case PathList("org", "jboss", xs @ _*) => MergeStrategy.first
    case "about.html"  => MergeStrategy.rename
    case "reference.conf" => MergeStrategy.concat
    case _ => MergeStrategy.first
  }
}

// Configure JAR used with the assembly plug-in
jarName in assembly := "acme-get-flight-delays.jar"

// A special option to exclude Scala itself from our assembly JAR, since Spark
// already bundles in Scala.
assemblyOption in assembly := (assemblyOption in assembly).value.copy(includeScala = false)

Ah, you are including a much older version of the AWS SDK, which will cause conflicts. The current version of AWScala depends on SDK 1.10.77, which will be pulled in automatically if you remove the explicit dependency on aws-java-sdk. Support for eu-central-1 was added in 1.9.3. - Ramesh Sen
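In build.sbt terms, that advice amounts to deleting the explicit SDK line and letting AWScala bring in a compatible aws-java-sdk transitively; a minimal sketch of the relevant entries (versions as in the question):

```scala
libraryDependencies ++= Seq(
  // "com.amazonaws" % "aws-java-sdk" % "1.0.002",  // remove: conflicts with the SDK AWScala needs
  "com.github.seratch" %% "awscala" % "0.5.+"       // pulls in a recent aws-java-sdk transitively
)
```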
2 Answers

You can use the AWS SDK for Java directly, or a wrapper such as AWScala, to do the following:

import awscala._, s3._

implicit val s3 = S3()
val bucket = s3.bucket("your-bucket")
bucket.put("sample.txt", new java.io.File("sample.txt"))
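For comparison, the same upload with the plain AWS SDK for Java (1.x) and no wrapper; a sketch assuming credentials are resolved from the default provider chain:

```scala
import com.amazonaws.services.s3.AmazonS3Client

// Credentials come from the default chain (env vars, ~/.aws/credentials, IAM role)
val client = new AmazonS3Client()
client.putObject("your-bucket", "sample.txt", new java.io.File("sample.txt"))
```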

I get the following error: Exception in thread "main" java.lang.NoSuchFieldError: EU_CENTRAL_1 - Paul Reiners
Can you post the full stack trace and your build.sbt file? - Ramesh Sen

Add the following to your build.sbt file:
libraryDependencies += "com.github.seratch" %% "awscala" % "0.3.+"

The following code will do the job:

implicit val s3 = S3()
s3.setRegion(com.amazonaws.regions.Region.getRegion(com.amazonaws.regions.Regions.US_EAST_1))
val bucket = s3.bucket("acme-datascience-lab")
bucket.get.put("sample.txt", new java.io.File("sample.txt"))
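Note that AWScala's `bucket(...)` returns an `Option[Bucket]`, which is why the snippet above calls `bucket.get`; a slightly safer sketch falls back to creating the bucket when the lookup comes back empty:

```scala
import awscala._, s3._

implicit val s3 = S3()
s3.setRegion(com.amazonaws.regions.Region.getRegion(com.amazonaws.regions.Regions.US_EAST_1))

// Avoid .get on a possible None: create the bucket if it does not exist yet
val bucket = s3.bucket("acme-datascience-lab")
  .getOrElse(s3.createBucket("acme-datascience-lab"))
bucket.put("sample.txt", new java.io.File("sample.txt"))
```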
