Spring Kafka - Configuration exception: Invalid value TopicNameStrategy for configuration key subject.name.strategy. Class TopicNameStrategy could not be found.


We have just deployed a Kafka producer to production and are hitting a strange issue that never appeared in our non-production environments. The service is a Spring Boot microservice that receives REST HTTP requests and publishes events to a topic using Spring Kafka. It is hosted on AWS ECS and runs on Java 11. Here is the error:

org.apache.kafka.common.KafkaException: Failed to construct kafka producer
    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:441)
    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:291)
    at org.springframework.kafka.core.DefaultKafkaProducerFactory.createRawProducer(DefaultKafkaProducerFactory.java:743)
    at org.springframework.kafka.core.DefaultKafkaProducerFactory.createKafkaProducer(DefaultKafkaProducerFactory.java:584)
    at org.springframework.kafka.core.DefaultKafkaProducerFactory.doCreateProducer(DefaultKafkaProducerFactory.java:544)
    at org.springframework.kafka.core.DefaultKafkaProducerFactory.createProducer(DefaultKafkaProducerFactory.java:519)
    at org.springframework.kafka.core.DefaultKafkaProducerFactory.createProducer(DefaultKafkaProducerFactory.java:513)
    at org.springframework.kafka.core.KafkaTemplate.getTheProducer(KafkaTemplate.java:683)
    at org.springframework.kafka.core.KafkaTemplate.doSend(KafkaTemplate.java:569)
    at org.springframework.kafka.core.KafkaTemplate.send(KafkaTemplate.java:386)
    at com.nab.ms.hs.lodgement.producer.HardshipCaseSubmitEventProducer.publishHardshipCaseSubmitEvent(HardshipCaseSubmitEventProducer.java:47)
    at com.nab.ms.hs.lodgement.application.CreateHardshipRequestService.lambda$publishHardshipCaseSubmitEvent$0(CreateHardshipRequestService.java:108)
    at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
    at com.nab.ms.hs.lodgement.application.CreateHardshipRequestService.processAccountRequest(CreateHardshipRequestService.java:103)
    at com.nab.ms.hs.lodgement.application.CreateHardshipRequestService.processNewHardshipRequest(CreateHardshipRequestService.java:75)
    at com.nab.ms.hs.lodgement.application.HardshipNewRequestService.lambda$processNewHardshipRequest$0(HardshipNewRequestService.java:46)
    at java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1736)
    at java.base/java.util.concurrent.CompletableFuture$AsyncRun.exec(CompletableFuture.java:1728)
    at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
    at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
    at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
    at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
    at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
Caused by: java.lang.ExceptionInInitializerError: null
    at io.confluent.kafka.serializers.KafkaAvroSerializer.configure(KafkaAvroSerializer.java:50)
    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:368)
    ... 22 common frames omitted
Caused by: org.apache.kafka.common.config.ConfigException: Invalid value io.confluent.kafka.serializers.subject.TopicNameStrategy for configuration key.subject.name.strategy: Class io.confluent.kafka.serializers.subject.TopicNameStrategy could not be found.
    at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:729)

Here are the dependencies reported by `./gradlew dependencies --configuration=runtimeClasspath`:

+--- org.springframework.kafka:spring-kafka:2.7.1
|    |    +--- org.springframework:spring-context:5.3.7 (*)
|    |    +--- org.springframework:spring-messaging:5.3.7
|    |    |    +--- org.springframework:spring-beans:5.3.7 (*)
|    |    |    \--- org.springframework:spring-core:5.3.7 (*)
|    |    +--- org.springframework:spring-tx:5.3.7 (*)
|    |    +--- org.springframework.retry:spring-retry:1.3.1
|    |    +--- org.apache.kafka:kafka-clients:2.7.1
|    |    |    +--- com.github.luben:zstd-jni:1.4.5-6
|    |    |    +--- org.lz4:lz4-java:1.7.1
|    |    |    +--- org.xerial.snappy:snappy-java:1.1.7.7
|    |    |    \--- org.slf4j:slf4j-api:1.7.30
|    |    +--- org.jetbrains.kotlin:kotlin-stdlib:1.4.32
|    |    |    +--- org.jetbrains.kotlin:kotlin-stdlib-common:1.4.32
|    |    |    \--- org.jetbrains:annotations:13.0
|    |    \--- com.google.code.findbugs:jsr305:3.0.2
|    +--- org.apache.avro:avro:1.10.2
|    |    +--- com.fasterxml.jackson.core:jackson-core:2.12.2 -> 2.11.4
|    |    +--- com.fasterxml.jackson.core:jackson-databind:2.12.2 -> 2.11.4 (*)
|    |    +--- org.apache.commons:commons-compress:1.20
|    |    \--- org.slf4j:slf4j-api:1.7.30
|    +--- org.apache.kafka:kafka-clients:2.7.1 (*)
|    +--- org.apache.kafka:kafka-streams:2.7.1 -> 2.6.2
|    |    +--- org.apache.kafka:kafka-clients:2.6.2 -> 2.7.1 (*)
|    |    +--- org.apache.kafka:connect-json:2.6.2
|    |    |    +--- org.apache.kafka:connect-api:2.6.2
|    |    |    |    +--- org.apache.kafka:kafka-clients:2.6.2 -> 2.7.1 (*)
|    |    |    |    \--- org.slf4j:slf4j-api:1.7.30
|    |    |    +--- com.fasterxml.jackson.core:jackson-databind:2.10.5.1 -> 2.11.4 (*)
|    |    |    +--- com.fasterxml.jackson.datatype:jackson-datatype-jdk8:2.10.5 -> 2.11.4 (*)
|    |    |    \--- org.slf4j:slf4j-api:1.7.30
|    |    +--- org.slf4j:slf4j-api:1.7.30
|    |    \--- org.rocksdb:rocksdbjni:5.18.4
|    +--- io.confluent:common-config:6.1.1
|    |    +--- io.confluent:common-utils:6.1.1
|    |    |    \--- org.slf4j:slf4j-api:1.7.30
|    |    \--- org.slf4j:slf4j-api:1.7.30
|    +--- io.confluent:common-utils:6.1.1 (*)
|    +--- io.confluent:kafka-avro-serializer:6.1.1
|    |    +--- org.apache.avro:avro:1.9.2 -> 1.10.2 (*)
|    |    +--- io.confluent:kafka-schema-serializer:6.1.1
|    |    |    +--- io.confluent:kafka-schema-registry-client:6.1.1
|    |    |    |    +--- org.apache.kafka:kafka-clients:6.1.1-ccs -> 2.7.1 (*)
|    |    |    |    +--- org.apache.avro:avro:1.9.2 -> 1.10.2 (*)
|    |    |    |    +--- com.fasterxml.jackson.core:jackson-databind:2.10.5.1 -> 2.11.4 (*)
|    |    |    |    +--- jakarta.ws.rs:jakarta.ws.rs-api:2.1.6
|    |    |    |    +--- org.glassfish.jersey.core:jersey-common:2.31 -> 2.32
|    |    |    |    |    +--- jakarta.ws.rs:jakarta.ws.rs-api:2.1.6
|    |    |    |    |    +--- jakarta.annotation:jakarta.annotation-api:1.3.5
|    |    |    |    |    +--- org.glassfish.hk2.external:jakarta.inject:2.6.1
|    |    |    |    |    \--- org.glassfish.hk2:osgi-resource-locator:1.0.3
|    |    |    |    +--- io.swagger:swagger-annotations:1.6.2
|    |    |    |    \--- io.confluent:common-utils:6.1.1 (*)
|    |    |    \--- io.confluent:common-utils:6.1.1 (*)
|    |    +--- io.confluent:kafka-schema-registry-client:6.1.1 (*)
|    |    \--- io.confluent:common-utils:6.1.1 (*)
|    +--- io.confluent:kafka-schema-registry-client:6.1.1 (*)
|    +--- io.confluent:kafka-streams-avro-serde:6.1.1
|    |    +--- io.confluent:kafka-avro-serializer:6.1.1 (*)
|    |    +--- io.confluent:kafka-schema-registry-client:6.1.1 (*)
|    |    +--- org.apache.avro:avro:1.9.2 -> 1.10.2 (*)
|    |    \--- io.confluent:common-utils:6.1.1 (*)
|    +--- org.springframework.boot:spring-boot-starter:2.4.5 -> 2.4.6 (*)

Here is the producer code:

@Service
@Slf4j
public class EventProducer {

  private final KafkaTemplate<EventKey, HardshipCaseSubmitEvent> hardshipProducer;

  @Value("${app.kafka.topic.hardship.case.submit.event.name}")
  private String topicName;

  // populated elsewhere in the class; elided in the question
  private String correlationId;

  public EventProducer(KafkaTemplate<EventKey, HardshipCaseSubmitEvent> hardshipProducer) {
    this.hardshipProducer = hardshipProducer;
  }

  @SneakyThrows
  public void publishHardshipCaseSubmitEvent(HardshipCaseSubmitEvent hardshipCaseSubmitEvent, HardshipData hardshipData) {

    ListenableFuture<SendResult<EventKey, HardshipCaseSubmitEvent>> future = hardshipProducer.send(topicName,
        EventKey.newBuilder().setCaseId(hardshipData.getHsCaseId()).build(),
        hardshipCaseSubmitEvent);
    future.addCallback(new ListenableFutureCallback<>() {
      @SneakyThrows
      @Override
      public void onFailure(@NonNull Throwable ex) {
        log.error("Exception = " + ex.getMessage() + " publishing hardshipCaseSubmitEvent for meId = "
            + hardshipCaseSubmitEvent.getData().getMeId() + ", correlation id=" + correlationId
            + ", caseId=" + hardshipData.getHsCaseId(), ex);
      }

      @Override
      public void onSuccess(SendResult<EventKey, HardshipCaseSubmitEvent> result) {
        log.info("hardshipCaseSubmitEvent event status = success, partition= {}, offset= {}, meId={}, correlation id={}, caseId={}",
            result.getRecordMetadata().partition(),
            result.getRecordMetadata().offset(),
            hardshipCaseSubmitEvent.getData().getMeId().toString(),
            correlationId,
            hardshipData.getHsCaseId());
      }
    });
    hardshipProducer.flush();
  }
}

It is worth noting that the event is sometimes produced successfully and sometimes fails with the above error. I logged the event bodies to compare them and found no differences. I inspected the war file on the container instance and confirmed that all jar dependencies are present as expected. The subject name strategy is set to TopicNameStrategy, and the same topic is provided in the yml configuration.

Please let me know if anyone has any ideas.

Edit: adding the configuration here.

nabkafka:
  kafka:
    allow.auto.create.topics: false
    schema-registry:
      cache-size: 2048
      auto.register.schemas: false
      key-subject-name-strategy: Topic
      value-subject-name-strategy: Topic
      subject-name-strategy: Topic
      ssl:
        protocol: SSL
        key-store-location: file:${infrastructure.services.ssl.keyStorePath}
        key-store-password: ${infrastructure.services.ssl.keyStorePassword}
        key-password: ${infrastructure.services.ssl.keyStorePassword}
        trust-store-location: file:${infrastructure.services.ssl.trustStorePath}
        trust-store-password: ${infrastructure.services.ssl.trustStorePassword}
        trust-store-type: JKS
        key-store-type: JKS
    producer:
      acks: all
      key-serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
      value-serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
      properties:
        auto.register.schemas: false
    ssl:
      key-store-location: file:${infrastructure.services.ssl.keyStorePath}
      key-store-password: ${infrastructure.services.ssl.keyStorePassword}
      key-password: ${infrastructure.services.ssl.keyStorePassword}
      trust-store-location: file:${infrastructure.services.ssl.trustStorePath}
      trust-store-password: ${infrastructure.services.ssl.trustStorePassword}

Note that across the organization, including in our non-production environments, we use a wrapper over Spring Kafka, and it has been working perfectly fine.

Have the same issue, but it reproduces only when I try to send in a ForkJoinPool common worker thread. - Vichukano
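One way to see why the submitting thread matters is to compare context class loaders. The following stdlib-only demo (hypothetical class name, not from the question) prints the context class loader seen by the caller and by a `ForkJoinPool.commonPool()` worker. In a plain JVM both are typically the system class loader, but inside a Spring Boot fat jar the caller thread uses the launcher's class loader while common-pool workers fall back to the system class loader, which cannot see classes nested inside the jar, such as `io.confluent.kafka.serializers.subject.TopicNameStrategy`:

```java
import java.util.concurrent.CompletableFuture;

public class ContextClassLoaderDemo {

    // Returns the context class loader (TCCL) observed inside a commonPool worker.
    static ClassLoader workerLoader() {
        return CompletableFuture
                .supplyAsync(() -> Thread.currentThread().getContextClassLoader())
                .join();
    }

    public static void main(String[] args) {
        // Class loader of the thread that submits the work
        System.out.println("caller TCCL: " + Thread.currentThread().getContextClassLoader());
        // Class loader of the ForkJoinPool.commonPool() worker that runs the task
        System.out.println("worker TCCL: " + workerLoader());
    }
}
```

This matches the intermittent behaviour in the question: whether the send fails depends on which thread first triggers the lazy serializer configuration.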
1 Answer

I ran into the same problem. As @Vichukano mentioned, it is a class-loader issue related to the executor (a ForkJoinPool in this case). How do you define the DefaultKafkaProducerFactory you are using? Starting with spring-kafka 2.8.0, a flag was added to control whether the serializers are configured when a new producer is created (the consumer factory has the same feature). If you need this, configure your serializers up front and set the flag to false, for example:
  @Bean
  ProducerFactory producerFactory(SchemaRegistryClient registryClient, KafkaProperties kafkaProperties) {
    var serializer = new KafkaAvroSerializer(registryClient);
    var configs = kafkaProperties.buildProducerProperties();

    serializer.configure(configs, false);

    return new DefaultKafkaProducerFactory(configs, new StringSerializer(), serializer, false);
  }
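For anyone stuck on spring-kafka 2.7.x (the version in the dependency tree above), where the `configureSerializers` flag does not exist, another angle is to stop the worker thread from loading classes with the wrong loader in the first place. A sketch with a hypothetical helper name, propagating the caller's context class loader into the async task:

```java
public final class ClassLoaderPropagation {

    private ClassLoaderPropagation() {}

    // Wraps a Runnable so it runs with the submitting thread's context class
    // loader and restores the worker's original loader afterwards. This keeps
    // lazy class loading (such as KafkaAvroSerializer resolving its
    // subject.name.strategy class) on the application class loader even when
    // the task executes on a ForkJoinPool.commonPool() worker.
    public static Runnable withCallerClassLoader(Runnable task) {
        ClassLoader callerLoader = Thread.currentThread().getContextClassLoader();
        return () -> {
            Thread worker = Thread.currentThread();
            ClassLoader original = worker.getContextClassLoader();
            worker.setContextClassLoader(callerLoader);
            try {
                task.run();
            } finally {
                worker.setContextClassLoader(original);
            }
        };
    }
}
```

In the question's code path the send is triggered from `CompletableFuture.runAsync(...)`, so wrapping that Runnable, e.g. `CompletableFuture.runAsync(withCallerClassLoader(() -> ...))`, would be one place to apply it. Caveat: `setContextClassLoader` can throw `SecurityException` on "innocuous" common-pool worker threads when a security manager is installed.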
