I am connecting to a Spark standalone cluster running spark-1.0.0, with one master and two slaves. I ran wordcount.py with spark-submit; it reads its input from HDFS and writes the result back to HDFS. So far everything works and the result is written to HDFS correctly. What worries me is that when I check the stdout of each worker node, it is empty, and I don't know whether it is supposed to be empty. In stderr, I get the following:
stderr log page for Some(app-20140704174955-0002)
Executor Command: "java" "-cp" "::/usr/local/spark-1.0.0/conf:/usr/local/spark-1.0.0/assembly/target/scala-2.10/spark-assembly-1.0.0-hadoop1.2.1.jar:/usr/local/hadoop/conf" "-XX:MaxPermSize=128m" "-Xms512M" "-Xmx512M" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "akka.tcp://spark@master:54477/user/CoarseGrainedScheduler" "0" "slave2" "1" "akka.tcp://sparkWorker@slave2:41483/user/Worker" "app-20140704174955-0002"
========================================
14/07/04 17:50:14 ERROR CoarseGrainedExecutorBackend: Driver Disassociated [akka.tcp://sparkExecutor@slave2:33758] -> [akka.tcp://spark@master:54477] disassociated! Shutting down.
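
For reference, the script follows the usual PySpark word count pattern; a minimal sketch of it is below (the hdfs:// URLs and app name are placeholders, not my exact values). Note that nothing in it prints from inside a task, which may be relevant to the empty stdout:

    from pyspark import SparkContext

    if __name__ == "__main__":
        sc = SparkContext(appName="WordCount")

        # Read the input text from HDFS (placeholder path).
        lines = sc.textFile("hdfs://master:9000/input/data.txt")

        # Split lines into words, pair each word with 1, and sum the counts.
        counts = (lines.flatMap(lambda line: line.split())
                       .map(lambda word: (word, 1))
                       .reduceByKey(lambda a, b: a + b))

        # Write the result back to HDFS (placeholder path).
        counts.saveAsTextFile("hdfs://master:9000/output/wordcount")

        sc.stop()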