I want to receive messages from a topic in Kafka (broker version 0.10.2.1) with Spark (1.6.2) Streaming. I am using the receiver-based approach. The code is as follows:
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import kafka.serializer.DefaultDecoder;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.function.VoidFunction;
import org.apache.spark.storage.StorageLevel;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

import scala.Tuple2;

public class SimpleStreamingApp
{
    public static void main(String[] args) throws Exception
    {
        SparkConf sparkConf = new SparkConf().setAppName("SimpleStreamingApp");
        JavaStreamingContext javaStreamingContext = new JavaStreamingContext(sparkConf, new Duration(5000));

        // one receiver thread for the topic
        Map<String, Integer> topicMap = new HashMap<>();
        topicMap.put("myTopic", 1);

        String zkQuorum = "host1:port1,host2:port2,host3:port3";

        // kafkaParams as configured; several of these keys are flagged by
        // VerifiableProperties (see the warning logs further below)
        Map<String, String> kafkaParamsMap = new HashMap<>();
        kafkaParamsMap.put("bootstraps.server", zkQuorum);
        kafkaParamsMap.put("metadata.broker.list", zkQuorum);
        kafkaParamsMap.put("zookeeper.connect", zkQuorum);
        kafkaParamsMap.put("group.id", "group_name");
        kafkaParamsMap.put("security.protocol", "SASL_PLAINTEXT");
        kafkaParamsMap.put("security.mechanism", "GSSAPI");
        kafkaParamsMap.put("ssl.kerberos.service.name", "kafka");
        kafkaParamsMap.put("key.deserializer", "kafka.serializer.StringDecoder");
        kafkaParamsMap.put("value.deserializer", "kafka.serializer.DefaultDecoder");

        JavaPairReceiverInputDStream<byte[], byte[]> stream = KafkaUtils.createStream(javaStreamingContext,
                byte[].class, byte[].class,
                DefaultDecoder.class, DefaultDecoder.class,
                kafkaParamsMap,
                topicMap,
                StorageLevel.MEMORY_ONLY());

        VoidFunction<JavaPairRDD<byte[], byte[]>> voidFunc = new VoidFunction<JavaPairRDD<byte[], byte[]>>()
        {
            @Override
            public void call(JavaPairRDD<byte[], byte[]> rdd) throws Exception
            {
                // collect() pulls every record of the batch back to the driver
                List<Tuple2<byte[], byte[]>> all = rdd.collect();
                System.out.println("size of rdd: " + all.size());
            }
        };

        stream.foreachRDD(voidFunc);

        javaStreamingContext.start();
        javaStreamingContext.awaitTermination();
    }
}
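As a simpler sanity check, the collect()-based function above could also be replaced with the built-in print() output operation plus a count (a sketch only; printing byte[] records only shows array references, but that is enough to confirm whether data arrives):

// Sketch: print() writes the first 10 records of every batch to the driver's
// stdout and, being an output operation, forces each batch to be computed.
stream.print();

// Count records without shipping whole batches to the driver.
stream.foreachRDD(new VoidFunction<JavaPairRDD<byte[], byte[]>>()
{
    @Override
    public void call(JavaPairRDD<byte[], byte[]> rdd) throws Exception
    {
        System.out.println("records in batch: " + rdd.count());
    }
});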
Kerberos authentication is required to access Kafka. I launch the application with the following command:
spark-submit --verbose \
  --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=jaas.conf" \
  --files jaas.conf,privKey.der \
  --principal <accountName> \
  --keytab <path to keytab file> \
  --master yarn \
  --jars <comma separated path to all jars> \
  --class <fully qualified java main class> \
  <path to jar file containing main class>
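For context, the jaas.conf shipped with --files follows the usual Kafka client JAAS layout; a minimal sketch (the keytab path, principal, and realm are placeholders, not my actual file) might look like:

// Assumed layout of jaas.conf -- a sketch, not the actual file used here.
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="<path to keytab file>"
    principal="<accountName>@<REALM>"
    serviceName="kafka";
};

Note that spark.executor.extraJavaOptions only sets java.security.auth.login.config on the executors; in client mode the driver would presumably need the same system property as well (for example via --driver-java-options).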
The properties contained in the kafkaParams hash map cause Kafka's VerifiableProperties class to emit warning messages in the logs:
INFO KafkaReceiver: connecting to zookeeper: <the correct zookeeper quorum provided in kafkaParams map>
VerifiableProperties: Property auto.offset.reset is overridden to largest
VerifiableProperties: Property enable.auto.commit is not valid.
VerifiableProperties: Property sasl.kerberos.service.name is not valid
VerifiableProperties: Property key.deserializer is not valid
...
VerifiableProperties: Property zookeeper.connect is overridden to ....
Since these properties are not being accepted, I suspect this may be affecting the stream processing.
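For comparison, the receiver-based createStream goes through Kafka's old high-level consumer, which only recognizes old-style consumer property names; a hypothetical parameter map restricted to such keys would look like this (a sketch, not a verified fix):

// Hypothetical kafkaParams using only old (0.8-style) consumer keys, which is
// what the receiver-based API validates via VerifiableProperties; new-consumer
// keys such as bootstrap.servers or key.deserializer are ignored by it.
Map<String, String> oldStyleParams = new HashMap<>();
oldStyleParams.put("zookeeper.connect", zkQuorum);   // the old consumer connects via ZooKeeper
oldStyleParams.put("group.id", "group_name");
oldStyleParams.put("auto.offset.reset", "smallest"); // old-consumer value; "earliest" is new-consumer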
These warning messages do not appear when I launch in cluster mode with --master yarn. Afterwards, I see the following logs repeating every 5 seconds, as configured:

INFO BlockRDD: Removing RDD 4 from persistence list
INFO KafkaInputDStream: Removing blocks of RDD BlockRDD[4] at createStream
INFO ReceivedBlockTracker: Deleting batches ArrayBuffer()
INFO ...
INFO BlockManager: Removing RDD 4
However, I do not see any actual messages printed to the console.

Question: Why does my code not print any actual messages?
My Gradle dependencies are:
compile group: 'org.apache.spark', name: 'spark-core_2.10', version: '1.6.2'
compile group: 'org.apache.spark', name: 'spark-streaming_2.10', version: '1.6.2'
compile group: 'org.apache.spark', name: 'spark-streaming-kafka_2.10', version: '1.6.2'