Connection timeout when using a local kafka-connect cluster to connect to a remote database

5
I'm trying to run a local kafka-connect cluster with docker-compose. I need to connect to a remote database, and I'm also using a remote Kafka and Schema Registry. Access to these remote resources from my machine is already allowed.
To start the cluster, from the project folder in my Ubuntu WSL2 terminal, I run the following commands:
docker build -t my-connect:1.0.0 .
docker-compose up

The application runs successfully, but when I try to create a new connector it returns a 500 error and times out. My Dockerfile:
FROM confluentinc/cp-kafka-connect-base:5.5.0

RUN cat /etc/confluent/docker/log4j.properties.template

ENV CONNECT_PLUGIN_PATH="/usr/share/java,/usr/share/confluent-hub-components"
ARG JDBC_DRIVER_DIR=/usr/share/java/kafka/

RUN   confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:5.5.0 \
   && confluent-hub install --no-prompt confluentinc/connect-transforms:1.3.2

ADD java/kafka-connect-jdbc /usr/share/confluent-hub-components/confluentinc-kafka-connect-jdbc/lib/
COPY java/kafka-connect-jdbc/ojdbc8.jar /usr/share/confluent-hub-components/confluentinc-kafka-connect-jdbc/lib/

ENTRYPOINT ["sh","-c","export CONNECT_REST_ADVERTISED_HOST_NAME=$(hostname -I);/etc/confluent/docker/run"] 

My docker-compose.yaml file:

services:
  connect:
    image: my-connect:1.0.0
    ports:
     - 8083:8083
    environment:
      - CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL=http://schema-registry:8081
      - CONNECT_KEY_CONVERTER=io.confluent.connect.avro.AvroConverter
      - CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL=http://schema-registry:8081
      - CONNECT_BOOTSTRAP_SERVERS=broker1.intranet:9092
      - CONNECT_GROUP_ID=kafka-connect
      - CONNECT_INTERNAL_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
      - CONNECT_VALUE_CONVERTER=io.confluent.connect.avro.AvroConverter
      - CONNECT_INTERNAL_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
      - CONNECT_OFFSET_STORAGE_TOPIC=kafka-connect.offset
      - CONNECT_CONFIG_STORAGE_TOPIC=kafka-connect.config
      - CONNECT_STATUS_STORAGE_TOPIC=kafka-connect.status
      - CONNECT_CONNECTOR_CLIENT_CONFIG_OVERRIDE_POLICY=All
      - CONNECT_LOG4J_ROOT_LOGLEVEL=INFO
      - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
      - CONNECT_REST_ADVERTISED_HOST_NAME=localhost

My cluster is up:

~$ curl -X GET http://localhost:8083/
{"version":"5.5.0-ccs","commit":"606822a624024828","kafka_cluster_id":"OcXKHO7eT4m9NBHln6ACKg"}

Connector creation request:

curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors/ -d '
{
    "name": "my-connector",
    "config":  
    { 
    "connector.class" : "io.debezium.connector.oracle.OracleConnector",
    "tasks.max": "1",
    "database.user": "user", 
    "database.password": "pass",    
    "database.dbname":"SID",
    "database.schema":"schema",
    "database.server.name": "dbname",   
    "schema.include.list": "schema",    
    "database.connection.adapter":"logminer",   
    "database.hostname":"databasehost",
    "database.port":"1521"
   }
}'

Error:

{"error_code": 500,"message": "IO Error trying to forward REST request: java.net.SocketTimeoutException: Connect Timeout"}

## LOG
connect_1  | [2021-07-01 19:08:50,481] INFO Database Version: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
connect_1  | Version 19.4.0.0.0 (io.debezium.connector.oracle.OracleConnection)
connect_1  | [2021-07-01 19:08:50,628] INFO Connection gracefully closed (io.debezium.jdbc.JdbcConnection)
connect_1  | [2021-07-01 19:08:50,643] INFO AbstractConfig values:
connect_1  |  (org.apache.kafka.common.config.AbstractConfig)
connect_1  | [2021-07-01 19:09:05,722] ERROR IO error forwarding REST request:  (org.apache.kafka.connect.runtime.rest.RestClient)
connect_1  | java.util.concurrent.ExecutionException: java.net.SocketTimeoutException: Connect Timeout

Testing the connection to the database:

$ telnet databasehost 1521
Trying <ip>...
Connected to databasehost

Testing the connection to the Kafka broker:

$ telnet broker1.intranet 9092
Trying <ip>...
Connected to broker1.intranet

Testing the connection to the remote schema-registry:

$ telnet schema-registry.intranet 8081
Trying <ip>...
Connected to schema-registry.intranet

What am I doing wrong? Do I need to configure anything else to allow connections to this remote database?

1 Answer

3
You need to set rest.advertised.host.name (CONNECT_REST_ADVERTISED_HOST_NAME when using Docker) correctly. This is how a Connect worker communicates with the other workers in the cluster.
For more details, see Common mistakes made when configuring multiple Kafka Connect workers by Robin Moffatt.
In your case, try removing CONNECT_REST_ADVERTISED_HOST_NAME=localhost from your compose file.
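
Purely as an illustration of that suggestion (not part of the original answer), the relevant piece of the compose file would then look like the fragment below; note that the Dockerfile's ENTRYPOINT already exports the container's IP for this setting at startup, so any value set here is overwritten anyway:

services:
  connect:
    image: my-connect:1.0.0
    ports:
     - 8083:8083
    environment:
      # CONNECT_REST_ADVERTISED_HOST_NAME is intentionally not set here: the
      # image's ENTRYPOINT exports the container IP at startup, and an explicit
      # "localhost" would make any worker that forwards a REST request connect
      # to itself instead of this container.
      - CONNECT_BOOTSTRAP_SERVERS=broker1.intranet:9092
      - CONNECT_GROUP_ID=kafka-connect
      # ... remaining CONNECT_* settings unchanged ...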

I already tried that, but when I bring the compose file up with network_mode: host, I get the error: "connect_1 | sh: 1: export: 172.17.0.1: bad variable name". - Malkath
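
A side note on that error (my own reading, not part of the thread): with network_mode: host the container typically sees several addresses, hostname -I prints them space-separated, and the unquoted command substitution in the Dockerfile's ENTRYPOINT makes export treat the second address as an extra argument, hence "bad variable name". A quoted form that keeps only the first address, as sketched below, avoids the word splitting (the addresses mentioned are made up):

# hostname -I may print several space-separated IPs on the host network,
# e.g. "172.18.0.5 172.17.0.1" (illustrative values). Unquoted, the shell
# splits them, so export receives "172.17.0.1" as a second argument and
# fails with "export: 172.17.0.1: bad variable name".
# Quoting and keeping only the first address avoids that:
export CONNECT_REST_ADVERTISED_HOST_NAME="$(hostname -I | awk '{print $1}')"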
1
I solved it, and it was silly: CONNECT_GROUP_ID had the same name as another instance running in the same environment and using the same config topics. After changing it, everything works! Thanks anyway. - Malkath
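
To make that resolution concrete (my own illustrative fragment, not from the thread): two workers sharing the same group.id and internal topics on the same brokers join one Connect cluster, so REST requests get forwarded to whichever worker is the leader, which here was the unreachable other instance. Giving the local worker its own group id and internal topics avoids that; the names below are examples only:

    environment:
      # A group.id and internal topics that no other Connect cluster on the
      # same brokers is already using (example names):
      - CONNECT_GROUP_ID=kafka-connect-local
      - CONNECT_OFFSET_STORAGE_TOPIC=kafka-connect-local.offset
      - CONNECT_CONFIG_STORAGE_TOPIC=kafka-connect-local.config
      - CONNECT_STATUS_STORAGE_TOPIC=kafka-connect-local.status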
@Malkath, you're welcome! Please upvote my answer if it helped. - Iskuskov Alexander
