Kafka SASL ZooKeeper authentication

23

I am running into the following error while enabling authentication between ZooKeeper and the brokers.

[2017-04-18 15:54:10,476] DEBUG Size of client SASL token: 0 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-04-18 15:54:10,476] ERROR cnxn.saslServer is null: cnxn object did not initialize its saslServer properly. (org.apache.zookeeper.server.ZooKeeperServer)
[2017-04-18 15:54:10,478] ERROR SASL authentication failed using login context 'Client'. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2017-04-18 15:54:10,478] DEBUG Received event: WatchedEvent state:AuthFailed type:None path:null (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,478] INFO zookeeper state changed (AuthFailed) (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,478] DEBUG Leaving process event (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,478] DEBUG Closing ZkClient... (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,478] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2017-04-18 15:54:10,478] DEBUG Closing ZooKeeper connected to localhost:2181 (org.I0Itec.zkclient.ZkConnection)
[2017-04-18 15:54:10,478] DEBUG Close called on already closed client (org.apache.zookeeper.ZooKeeper)
[2017-04-18 15:54:10,478] DEBUG Closing ZkClient...done (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,480] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkAuthFailedException: Authentication failure
    at org.I0Itec.zkclient.ZkClient.waitForKeeperState(ZkClient.java:947)
    at org.I0Itec.zkclient.ZkClient.waitUntilConnected(ZkClient.java:924)
    at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1231)
    at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:157)
    at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:131)
    at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:79)
    at kafka.utils.ZkUtils$.apply(ZkUtils.scala:61)
    at kafka.server.KafkaServer.initZk(KafkaServer.scala:329)
    at kafka.server.KafkaServer.startup(KafkaServer.scala:187)
    at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
    at kafka.Kafka$.main(Kafka.scala:67)
    at kafka.Kafka.main(Kafka.scala)
[2017-04-18 15:54:10,482] INFO shutting down (kafka.server.KafkaServer)

The following configuration is given in a JAAS file and passed via KAFKA_OPTS so that it is picked up as a JVM argument:

  KafkaServer {
       org.apache.kafka.common.security.plain.PlainLoginModule required
       username="admin"
       password="admin-secret"
       user_admin="admin-secret";
  };

  Client {
      org.apache.kafka.common.security.plain.PlainLoginModule required
      username="admin"
      password="admin-secret";
  };
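
(For reference, the file is passed to the broker roughly like this; the path and file name below are placeholders rather than my actual ones:)

$ export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf"
$ bin/kafka-server-start.sh config/server.properties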

The Kafka broker's server.properties file has these additional fields set:

zookeeper.set.acl=true
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
ssl.client.auth=required
ssl.endpoint.identification.algorithm=HTTPS
ssl.keystore.location=path
ssl.keystore.password=anything
ssl.key.password=anything
ssl.truststore.location=path
ssl.truststore.password=anything

These are the ZooKeeper properties:

authProvider.1=org.apache.zookeeper.server.auth.DigestAuthenticationProvider
jaasLoginRenew=3600000
requireClientAuthScheme=sasl

OK, so I take it you are not using SSL? - Maximilien Belinga
Right, I don't want SSL between ZooKeeper and the brokers, but SSL needs to be set up for Kafka client communication. - sunder
OK, let me have a look. - Maximilien Belinga
Can you increase the log level? - Maximilien Belinga
Let us continue this discussion in chat. - Maximilien Belinga
2 Answers

56

I found the problem by raising the log level to DEBUG. Basically, proceed with the steps below. I am not using SSL, but you can integrate it easily.

Here are my configuration files:

server.properties

security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN

authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
auto.create.topics.enable=false
broker.id=0
listeners=SASL_PLAINTEXT://localhost:9092
advertised.listeners=SASL_PLAINTEXT://localhost:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600

advertised.host.name=localhost
num.partitions=1
num.recovery.threads.per.data.dir=1
log.flush.interval.messages=30000000
log.flush.interval.ms=1800000
log.retention.minutes=30
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
delete.topic.enable=true
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
super.users=User:admin

zookeeper.properties

dataDir=/tmp/zookeeper
clientPort=2181
maxClientCnxns=0
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000

producer.properties

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
bootstrap.servers=localhost:9092
compression.type=none

consumer.properties

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
group.id=test-consumer-group

Now the most important files, the ones that make sure your servers start properly:

zookeeper_jaas.conf

Server {
   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="admin"
   password="admin-secret"
   user_admin="admin-secret";
};

kafka_server_jaas.conf

KafkaServer {
   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="admin"
   password="admin-secret"
   user_admin="admin-secret";
};

Client {
   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="admin"
   password="admin-secret";
};

Once all of this configuration is in place, do the following in a first terminal window:

Terminal 1 (start the ZooKeeper server)

From the Kafka root directory:

$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/usename/Documents/kafka_2.11-0.10.1.0/config/zookeeper_jaas.conf"
$ bin/zookeeper-server-start.sh config/zookeeper.properties

Terminal 2 (start the Kafka server)

From the Kafka root directory:

$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/usename/Documents/kafka_2.11-0.10.1.0/config/kafka_server_jaas.conf"
$ bin/kafka-server-start.sh config/server.properties
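
Note: since auto.create.topics.enable is set to false in the server.properties above, the topic used by the console clients below has to be created manually first. A rough sketch (reusing the broker's JAAS export so the tool can authenticate to ZooKeeper if needed; adjust the path to your setup):

$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/username/Documents/kafka_2.11-0.10.1.0/config/kafka_server_jaas.conf"
$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test-topic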

[UPDATE START]

kafka_client_jaas.conf

KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin-secret";
};
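
(Side note: on Kafka 0.10.2 and later, the client-side JAAS section can alternatively be supplied inline through the sasl.jaas.config property in producer.properties/consumer.properties instead of a separate file, roughly like this:)

sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret";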

Terminal 3 (start a Kafka consumer)

In the client terminal, export the client JAAS config file and start a consumer:

$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/username/Documents/kafka_2.11-0.10.1.0/kafka_client_jaas.conf"
$ ./bin/kafka-console-consumer.sh --new-consumer --zookeeper localhost:2181 --topic test-topic --from-beginning --consumer.config=config/consumer.properties  --bootstrap-server=localhost:9092

Terminal 4 (start a Kafka producer)

If you also want to produce, do the following in another terminal window:

$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/username/Documents/kafka_2.11-0.10.1.0/kafka_client_jaas.conf"
$ ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test-topic --producer.config=config/producer.properties

[UPDATE END]
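
(As an aside: because the server.properties above enables SimpleAclAuthorizer, with allow.everyone.if.no.acl.found=true as a permissive fallback and User:admin as super user, topic-level ACLs can optionally be managed with kafka-acls.sh. A rough sketch that grants read access on the test topic to a hypothetical principal "alice":)

$ bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:alice --operation Read --topic test-topic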


1
Great! It works now. Thank you so much for your help, I had been struggling with this for two days. - sunder
1
Just curious to know the real cause of the failure. Is it that ZooKeeper expects a Server {} section? - sunder
Exactly, that was the missing piece. - Maximilien Belinga
1
There is a typo in your zookeeper_jaas.conf file: the semicolon at the end of user_admin="admin-secret" is missing. - user2687486
9
I think this needs an update. I tried it and ran into errors, because ZooKeeper does not support PlainLoginModule and uses DigestLoginModule instead. So the following change is needed: in zookeeper_jaas.conf and in the Client section of kafka_server_jaas.conf, replace "org.apache.kafka.common.security.plain.PlainLoginModule required" with "org.apache.zookeeper.server.auth.DigestLoginModule required". - Tushar H

13

You need to create a JAAS config file for ZooKeeper and make ZooKeeper use it.

Create a ZooKeeper JAAS config file that looks like this:

Server {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    user_admin="admin-secret";
};

The user (admin) and password (admin-secret) must match the username and password in the Client section of the Kafka JAAS config file.
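
For comparison, the matching Client section on the Kafka broker side would look roughly like this (same credentials; DigestLoginModule is the module ZooKeeper's SASL/DIGEST-MD5 scheme expects):

Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="admin"
    password="admin-secret";
};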

To make ZooKeeper use the JAAS config file, pass the following JVM flag to ZooKeeper, pointing to the file created above:

-Djava.security.auth.login.config=/path/to/server/jaas/file.conf

If you are using the ZooKeeper that ships with the Kafka package, you can start ZooKeeper like this, assuming your ZooKeeper JAAS config file is at ./config/zookeeper_jaas.conf:

EXTRA_ARGS=-Djava.security.auth.login.config=./config/zookeeper_jaas.conf ./bin/zookeeper-server-start.sh ./config/zookeeper.properties 
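
(Optionally, once the broker has started, you can use the bundled ZooKeeper shell to check that it registered itself with the secured ZooKeeper; the command inside the shell is just an illustrative example:)

$ bin/zookeeper-shell.sh localhost:2181
ls /brokers/ids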

4
As of November 2019, this is the most up-to-date and concise answer! - geekQ
