Description
The tb-node statefulset pod log keeps printing that a topicId changed. It prints this for every id roughly every 60 seconds. From my understanding this happens when a topic is being recreated with the same id. Is this expected behaviour?
Steps to Reproduce
1. Create a completely empty cluster
2. Create an empty demo thingsboard postgresql database
3. kubectl apply -f thirdparty.yml
4. kubectl apply -f tb-services.yml
5. When everything is initialized, tb-node starts posting these messages every 60 seconds (see the command sketched below).
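The repeating messages can be watched directly in the tb-node pod log. A minimal sketch, assuming a tb-node-0 pod name and a thingsboard namespace (both are assumptions, adjust to your deployment):
# pod name and namespace are assumptions for this sketch
kubectl logs -f tb-node-0 -n thingsboard | grep "topicId changed"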
Expected Behavior
tb-node should not keep printing these messages for the same Kafka topics every 60 seconds.
Actual Behavior
The messages look like this:
....
2023-07-18 11:40:47,193 [kafka-consumer-stats-6-thread-1] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=consumer-stats-loader-client, groupId=consumer-stats-loader-client-group] Resetting the last seen epoch of partition tb_core.1-0 to 0 since the associated topicId changed from null to fwMO8nYKRQGjNtbuUtspfQ
2023-07-18 11:40:47,193 [kafka-consumer-stats-6-thread-1] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=consumer-stats-loader-client, groupId=consumer-stats-loader-client-group] Resetting the last seen epoch of partition tb_core.5-0 to 0 since the associated topicId changed from null to YMBDSCF2R7SrrgxblL5p7A
2023-07-18 11:40:47,193 [kafka-consumer-stats-6-thread-1] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=consumer-stats-loader-client, groupId=consumer-stats-loader-client-group] Resetting the last seen epoch of partition tb_core.7-0 to 0 since the associated topicId changed from null to zR53ZGknRoWhibRlgVM3cQ
....
Full tb-node log: tb-node-0-log.txt
Environment
Kafka version: wurstmeister/kafka:2.13-2.8.1 (I had the same problem with wurstmeister/kafka:2.12-2.2.1)
Cluster nodes: 1 master node, 3 worker nodes
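To rule out the topics actually being deleted and recreated on the broker, one option is to list them from inside the Kafka pod. A rough sketch, assuming the wurstmeister image layout (/opt/kafka/bin); the tb-kafka-0 pod name, namespace and port are assumptions from a typical thirdparty.yml:
# list the ThingsBoard core topics; pod name, namespace and port are assumptions
kubectl exec tb-kafka-0 -n thingsboard -- /opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --list | grep tb_core
If the same topics are listed before and after a 60-second log burst, nothing is being recreated on the broker side.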
Additional Information
Transport nodes don't seem to show the same behaviour at startup, but sometimes they also start spamming this kind of message for some topics after a couple of weeks or months of deployment.
Though it's still not clear if the topicId metadata change in the consumer is caused by some bug in the logging, or if it's a real problem with the Kafka setup.
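One way to tell the two apart from the existing log, sketched under the same assumed pod name and namespace as above: grep a single partition (tb_core.1-0 is taken from the excerpt above) and check whether the reported topicId value ever actually changes.
kubectl logs tb-node-0 -n thingsboard | grep "tb_core.1-0" | grep -o "topicId changed from .*" | sort -u
# a single unique line ("from null to <same id>") would mean the topic keeps its id
# and the message is just the consumer's metadata refresh, not a recreated topic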