Unable to run Apache Kafka with supervisor


I have an Ubuntu 16.04 machine with Apache Kafka installed. At the moment I can run it without any problem using a script called start_kafka.sh with the following contents:

JMX_PORT=17264 KAFKA_HEAP_OPTS="-Xms1024M -Xmx3072M" /home/kafka/kafka_2.11-0.10.1.0/bin/kafka-server-start.sh -daemon /home/kafka/kafka_2.11-0.10.1.0/config/server.properties

Now I want to use supervisor to restart the process automatically if it fails, and to start it as soon as the machine boots. The problem is that I cannot get supervisor to start Kafka.
I installed supervisor with pip and placed this configuration file at /etc/supervisord.conf:
; Supervisor config file.
;
; For more information on the config file, please see:
; http://supervisord.org/configuration.html

[unix_http_server]
file=/tmp/supervisor.sock   ; (the path to the socket file)

[supervisord]
logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB        ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10           ; (num of main logfile rotation backups;default 10)
loglevel=info                ; (log level;default info; others: debug,warn,trace)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false               ; (start in foreground if true;default false)
minfds=1024                  ; (min. avail startup file descriptors;default 1024)
minprocs=200                 ; (min. avail process descriptors;default 200)

; the below section must remain in the config file for RPC
; (supervisorctl/web interface) to work, additional interfaces may be
; added by defining them in separate rpcinterface: sections
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL  for a unix socket

[program:kafka]
command=/home/kafka/kafka_2.11-0.10.1.0/start_kafka.sh ; the program (relative uses PATH, can take args)
;process_name=%(program_name)s ; process_name expr (default %(program_name)s)
startsecs=10                   ; # of secs prog must stay up to be running (def. 1)
startretries=3                ; max # of serial start failures when starting (default 3)
;autorestart=unexpected        ; when to restart if exited after running (def: unexpected)
;exitcodes=0,2                 ; 'expected' exit codes used with autorestart (default 0,2)
stopsignal=TERM               ; signal used to kill process (default TERM)
stopwaitsecs=180               ; max num secs to wait b4 SIGKILL (default 10)
stdout_logfile=NONE        ; stdout log path, NONE for none; default AUTO
;environment=A="1",B="2"       ; process environment additions (def no adds)

When I try to start Kafka, I get the following error:
# supervisorctl start kafka
kafka: ERROR (spawn error)

The supervisor log (at /tmp/supervisord.log) contains the following:

2017-01-23 22:10:24,532 INFO spawned: 'kafka' with pid 21311
2017-01-23 22:10:24,536 INFO exited: kafka (exit status 127; not expected)
2017-01-23 22:10:25,542 INFO spawned: 'kafka' with pid 21312
2017-01-23 22:10:25,559 INFO exited: kafka (exit status 127; not expected)
2017-01-23 22:10:27,562 INFO spawned: 'kafka' with pid 21313
2017-01-23 22:10:27,567 INFO exited: kafka (exit status 127; not expected)
2017-01-23 22:10:30,571 INFO spawned: 'kafka' with pid 21314
2017-01-23 22:10:30,576 INFO exited: kafka (exit status 127; not expected)
2017-01-23 22:10:31,578 INFO gave up: kafka entered FATAL state, too many start retries too quickly

I should point out that I have already tried removing the -daemon flag from start_kafka.sh so it would work under supervisor, but with no success.

Does anyone have any ideas?


command=/home/kafka/kafka_2.11-0.10.1.0/start_kafka.sh? Why doesn't it include "config/server.properties"? - amethystic
2 Answers

The following supervisor configuration works for me; it is taken from https://github.com/miguno/wirbelsturm via https://github.com/miguno/puppet-kafka. The main difference is that it uses kafka-run-class.sh instead of kafka-server-start.sh.
Note that you will need to update the various paths to match your setup; for example, you would change /opt/kafka/bin/kafka-run-class.sh to /home/kafka/kafka_2.11-0.10.1.0/bin/kafka-run-class.sh.
[program:kafka-broker]
command=/opt/kafka/bin/kafka-run-class.sh kafka.Kafka /opt/kafka/config/server.properties
numprocs=1
numprocs_start=0
priority=999
autostart=true
autorestart=true
startsecs=10
startretries=999
exitcodes=0,2
stopsignal=INT
stopwaitsecs=120
stopasgroup=true
directory=/
user=kafka
redirect_stderr=false
stdout_logfile=/var/log/supervisor/kafka-broker/kafka-broker.out
stdout_logfile_maxbytes=20MB
stdout_logfile_backups=5
stderr_logfile=/var/log/supervisor/kafka-broker/kafka-broker.err
stderr_logfile_maxbytes=20MB
stderr_logfile_backups=10
environment=JMX_PORT=9999,KAFKA_GC_LOG_OPTS="-Xloggc:/var/log/kafka/daemon-gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps",KAFKA_HEAP_OPTS="-Xms512M -Xmx512M -XX:NewSize=200m -XX:MaxNewSize=200m",KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false",KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseCompressedOops -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true",KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/opt/kafka/config/log4j.properties",KAFKA_OPTS="-XX:CMSInitiatingOccupancyFraction=70 -XX:+PrintTenuringDistribution"
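
One preparation step worth calling out: supervisord writes the child's output to the files named in stdout_logfile and stderr_logfile, and the JVM's GC log in KAFKA_GC_LOG_OPTS points at /var/log/kafka as well, so those directories need to exist before the program can start. A minimal sketch, assuming the kafka user and the paths used in the section above:

# Create the log directories referenced by the [program:kafka-broker] section
sudo mkdir -p /var/log/supervisor/kafka-broker /var/log/kafka
sudo chown kafka /var/log/supervisor/kafka-broker /var/log/kafka

# Pick up the new program section
sudo supervisorctl reread
sudo supervisorctl update   ; with autostart=true this also starts kafka-broker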


I eventually got supervisor working with Kafka by making two modifications:

  • Run Kafka without the -daemon flag, since supervisor needs to manage a non-daemonized (foreground) process
  • Explicitly define the Java path in the supervisor configuration file (see the sketch below for why this matters)
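
For context, exit status 127 is the shell's "command not found" code. A process spawned by supervisord inherits supervisord's environment, which is usually far sparser than an interactive shell's, so the java binary that kafka-server-start.sh eventually invokes may not be on PATH; pointing JAVA_HOME (or PATH) at the JVM in the program section takes care of that. A rough way to reproduce the failure outside supervisor, assuming Java is not on the shell's built-in default PATH, is to run the wrapper with an emptied environment:

# Simulate a stripped-down environment; if java cannot be found, the script exits with 127
env -i /bin/bash /home/kafka/kafka_2.11-0.10.1.0/start_kafka.sh
echo $?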

Here is the working configuration:

start_kafka.sh

JMX_PORT=17264 KAFKA_HEAP_OPTS="-Xms1024M -Xmx3072M" /home/kafka/kafka_2.11-0.10.1.0/bin/kafka-server-start.sh /home/kafka/kafka_2.11-0.10.1.0/config/server.properties
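
Two easy-to-miss details about this wrapper script: supervisord execs the command directly (no intermediate shell), so the file needs a shebang line and the execute bit. A minimal sketch of the same script with both in place (the command itself is unchanged):

#!/usr/bin/env bash
# Run Kafka in the foreground so supervisord can supervise it
JMX_PORT=17264 KAFKA_HEAP_OPTS="-Xms1024M -Xmx3072M" \
  /home/kafka/kafka_2.11-0.10.1.0/bin/kafka-server-start.sh \
  /home/kafka/kafka_2.11-0.10.1.0/config/server.properties

It can then be made executable with chmod +x /home/kafka/kafka_2.11-0.10.1.0/start_kafka.sh.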

supervisord.conf

[unix_http_server]
file=/var/run/supervisor.sock   ; (the path to the socket file)
chmod=0700                       ; socket file mode (default 0700)

[supervisord]
logfile=/var/log/supervisor/supervisord.log ; (main log file;default $CWD/supervisord.log)
pidfile=/var/run/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
childlogdir=/var/log/supervisor            ; ('AUTO' child log dir, default $TEMP)

; the below section must remain in the config file for RPC
; (supervisorctl/web interface) to work, additional interfaces may be
; added by defining them in separate rpcinterface: sections
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///var/run/supervisor.sock ; use a unix:// URL  for a unix socket

; The [include] section can just contain the "files" setting.  This
; setting can list multiple files (separated by whitespace or
; newlines).  It can also contain wildcards.  The filenames are
; interpreted as relative to this file.  Included files *cannot*
; include files themselves.

[include]
files = /etc/supervisor/conf.d/*.conf

[program:kafka]
command=/home/kafka/kafka_2.11-0.10.1.0/start_kafka.sh
directory=/home/kafka/kafka_2.11-0.10.1.0
user=root
autostart=true
autorestart=true
stdout_logfile=/var/log/kafka/stdout.log
stderr_logfile=/var/log/kafka/stderr.log
environment = JAVA_HOME=/usr/lib/jvm/java-8-oracle
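
A short sketch of how this configuration can be applied and verified, assuming supervisord is already running against /etc/supervisord.conf:

# The program section logs to /var/log/kafka, so make sure it exists
sudo mkdir -p /var/log/kafka

# Reload the configuration and (re)start the program
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl restart kafka

# Confirm it stays in RUNNING state and watch the broker output
sudo supervisorctl status kafka
sudo tail -f /var/log/kafka/stdout.log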
