Conversation

@OZOOOOOH commented Jul 6, 2025

Problem

Multiple-table sink support has been requested in several issues: #960, #1208, #1194.

Currently, the JDBC sink connector can only route all records to a single table or use topic-based routing, which limits flexibility for complex data pipelines.

Solution

This PR implements a simple and non-intrusive approach to support multiple table sinks by leveraging Kafka message headers.

I am currently operating a data pipeline in production using this feature, successfully routing data from a single Kafka topic through a single connector to over 100 different tables. This demonstrates the scalability and reliability of this approach for complex, multi-table data ingestion scenarios.

Key Implementation Details:

  • When table.name.format is set to __RECORD_HEADER__, the connector activates header-based table routing
  • The connector reads the table name from the Kafka message header whose key is table.name.format (see the sketch after this list)
  • This approach preserves existing functionality while adding new capabilities
  • No breaking changes to current configurations or behavior
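
A minimal sketch of how this lookup can work, assuming the DatabaseDialect and TableId helpers already in this codebase; the method name and error handling here are illustrative, not necessarily the exact code in this PR:

    import org.apache.kafka.connect.errors.ConnectException;
    import org.apache.kafka.connect.header.Header;
    import org.apache.kafka.connect.sink.SinkRecord;
    import io.confluent.connect.jdbc.util.TableId;

    // Illustrative sketch: config and dbDialect stand for the JdbcSinkConfig and
    // DatabaseDialect fields that JdbcDbWriter already holds.
    TableId destinationTable(SinkRecord record) {
      if (JdbcSinkConfig.TABLE_NAME_FORMAT_RECORD_HEADER.equals(config.tableNameFormat)) {
        // The header key deliberately matches the config property name.
        Header header = record.headers().lastWithName("table.name.format");
        if (header == null || header.value() == null) {
          throw new ConnectException(
              "Record lacks the 'table.name.format' header required for header-based routing");
        }
        return dbDialect.parseTableIdentifier(header.value().toString());
      }
      // Existing behavior: substitute the topic name into the format string.
      return dbDialect.parseTableIdentifier(
          config.tableNameFormat.replace("${topic}", record.topic()));
    }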

How it works:

  1. Configure the connector with table.name.format=__RECORD_HEADER__
  2. Kafka producers include a header with key table.name.format and the target table name as the value (see the producer sketch below)
  3. The connector dynamically routes each record to the specified table
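
On the connector side this only requires table.name.format=__RECORD_HEADER__ in the sink configuration. On the producer side, attaching the header is a one-liner; here is a self-contained sketch (topic, table, and payload names are placeholders):

    import java.nio.charset.StandardCharsets;
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class HeaderRoutingProducer {
      public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
          ProducerRecord<String, String> record =
              new ProducerRecord<>("ingest-topic", "key-1", "{\"id\":1,\"name\":\"alice\"}");
          // Header key matches the config property name; the value is the target table.
          record.headers().add("table.name.format",
              "orders".getBytes(StandardCharsets.UTF_8));
          producer.send(record);
        }
      }
    }

Records carrying different header values on the same topic end up in different tables, which is what makes the 100-plus-table fan-out described above possible with a single connector.
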
Test Strategy

Testing done:
  • Unit tests
  • Integration tests
  • System tests
  • Manual tests

Release Plan

- Support dynamic table routing via 'table.name.format' header
Copilot AI review requested due to automatic review settings July 6, 2025 09:47
@OZOOOOOH requested a review from a team as a code owner July 6, 2025 09:47
confluent-cla-assistant bot commented Jul 6, 2025

🎉 All Contributor License Agreements have been signed. Ready to merge.
✅ OZOOOOOH
Please push an empty commit if you would like to re-run the checks to verify CLA status for all contributors.

Copilot AI left a comment

Pull Request Overview

Adds support for dynamic table routing based on a Kafka message header (table.name.format).
Key changes:

  • Extended JdbcSinkConfig with a special constant and updated documentation for header-based table routing.
  • Updated JdbcDbWriter to determine destination tables from record headers when configured.
  • Added unit tests in JdbcSinkTaskTest covering header-based routing with different PK modes and invalid headers.

Reviewed Changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.

  • src/main/java/io/confluent/connect/jdbc/sink/JdbcSinkConfig.java: Added TABLE_NAME_FORMAT_RECORD_HEADER and enhanced the docstring for header routing.
  • src/main/java/io/confluent/connect/jdbc/sink/JdbcDbWriter.java: Implemented determineTableId and header extraction logic.
  • src/test/java/io/confluent/connect/jdbc/sink/JdbcSinkTaskTest.java: New tests for header-based routing (Kafka PK, record-key PK, invalid header).

Comments suppressed due to low confidence (2)

src/main/java/io/confluent/connect/jdbc/sink/JdbcSinkConfig.java:132

  • There's a missing space between the closing parenthesis and the next sentence in the docstring. Consider changing to "a header key ('table.name.format') that contains..." for readability.
      + "a header key ('table.name.format')"

src/test/java/io/confluent/connect/jdbc/sink/JdbcSinkTaskTest.java:575

  • In the Kafka PK mode test, only the kafka_offset field is asserted. Consider adding assertions for kafka_topic and kafka_partition to fully validate all configured primary key columns.
                assertEquals(44, rs.getLong("kafka_offset"));
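
A sketch of what those extra assertions could look like; the expected topic and partition values here are placeholders that depend on the SinkRecord the test constructs:

                assertEquals("atopic", rs.getString("kafka_topic"));
                assertEquals(12, rs.getInt("kafka_partition"));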
