"description": "\nReads a batch of messages from a Kafka broker and waits for the output to acknowledge the writes before updating the Kafka consumer group offset.\n\nThis input should be used in combination with a `legacy_redpanda_migrator` output.\n\nWhen a consumer group is specified this input consumes one or more topics where partitions will automatically balance across any other connected clients with the same consumer group. When a consumer group is not specified topics can either be consumed in their entirety or with explicit partitions.\n\nIt provides the same delivery guarantees and ordering semantics as the `redpanda` input.\n\n== Metrics\n\nEmits a `redpanda_lag` metric with `topic` and `partition` labels for each consumed topic.\n\n== Metadata\n\nThis input adds the following metadata fields to each message:\n\n```text\n- kafka_key\n- kafka_topic\n- kafka_partition\n- kafka_offset\n- kafka_lag\n- kafka_timestamp_ms\n- kafka_timestamp_unix\n- kafka_tombstone_message\n- All record headers\n```\n"
21
+
},
22
+
{
23
+
"name": "legacy_redpanda_migrator_offsets",
24
+
"type": "inputs",
25
+
"status": "deprecated",
26
+
"version": "4.45.0",
27
+
"description": "\nThis input reads consumer group updates via the `OffsetFetch` API and should be used in combination with the `legacy_redpanda_migrator_offsets` output.\n\n== Metadata\n\nThis input adds the following metadata fields to each message:\n\n```text\n- kafka_offset_topic\n- kafka_offset_group\n- kafka_offset_partition\n- kafka_offset_commit_timestamp\n- kafka_offset_metadata\n- kafka_is_high_watermark\n```\n"
28
+
},
29
+
{
30
+
"name": "legacy_redpanda_migrator",
31
+
"type": "outputs",
32
+
"status": "deprecated",
33
+
"version": "4.37.0",
34
+
"description": "\nWrites a batch of messages to a Kafka broker and waits for acknowledgement before propagating it back to the input.\n\nThis output should be used in combination with a `legacy_redpanda_migrator` input identified by the label specified in\n`input_resource` which it can query for topic and ACL configurations. Once connected, the output will attempt to\ncreate all topics which the input consumes from along with their ACLs.\n\nIf the configured broker does not contain the current message topic, this output attempts to create it along with its\nACLs.\n\nACL migration adheres to the following principles:\n\n- `ALLOW WRITE` ACLs for topics are not migrated\n- `ALLOW ALL` ACLs for topics are downgraded to `ALLOW READ`\n- Only topic ACLs are migrated, group ACLs are not migrated\n"
35
+
},
36
+
{
37
+
"name": "legacy_redpanda_migrator_offsets",
38
+
"type": "outputs",
39
+
"status": "deprecated",
40
+
"version": "4.37.0",
41
+
"description": "This output should be used in combination with the `legacy_redpanda_migrator_offsets` input"
42
+
}
43
+
],
44
+
"removedComponents": [
45
+
{
46
+
"name": "redpanda_migrator_offsets",
47
+
"type": "inputs",
48
+
"status": "beta",
49
+
"version": "4.45.0",
50
+
"description": "\nThis input reads consumer group updates via the `OffsetFetch` API and should be used in combination with the `redpanda_migrator_offsets` output.\n\n== Metadata\n\nThis input adds the following metadata fields to each message:\n\n```text\n- kafka_offset_topic\n- kafka_offset_group\n- kafka_offset_partition\n- kafka_offset_commit_timestamp\n- kafka_offset_metadata\n- kafka_is_high_watermark\n```\n"
51
+
},
52
+
{
53
+
"name": "redpanda_migrator_offsets",
54
+
"type": "outputs",
55
+
"status": "beta",
56
+
"version": "4.37.0",
57
+
"description": "This output should be used in combination with the `redpanda_migrator_offsets` input"
58
+
}
59
+
],
60
+
"newFields": [
61
+
{
62
+
"component": "inputs:microsoft_sql_server_cdc",
63
+
"field": "checkpoint_cache_table_name",
64
+
"description": "The multipart identifier for the checkpoint cache table name. If no `checkpoint_cache` field is specified, this input will automatically create a table and stored procedure under the `rpcn` schema to act as a checkpoint cache. This table stores the latest processed Log Sequence Number (LSN) that has been successfully delivered, allowing Redpanda Connect to resume from that point upon restart rather than reconsume the entire change table."
65
+
},
66
+
{
67
+
"component": "inputs:microsoft_sql_server_cdc",
68
+
"field": "checkpoint_cache_key",
69
+
"description": "The key to use to store the snapshot position in `checkpoint_cache`. An alternative key can be provided if multiple CDC inputs share the same cache."
70
+
},
71
+
{
72
+
"component": "inputs:redpanda_migrator",
73
+
"field": "schema_registry",
74
+
"description": "Configuration for schema registry integration. Enables migration of schema subjects, versions, and compatibility settings between clusters."
75
+
},
76
+
{
77
+
"component": "outputs:redpanda_migrator",
78
+
"field": "schema_registry",
79
+
"description": "Configuration for schema registry integration. Enables migration of schema subjects, versions, and compatibility settings between clusters."
80
+
},
81
+
{
82
+
"component": "outputs:redpanda_migrator",
83
+
"field": "consumer_groups"
84
+
},
85
+
{
86
+
"component": "outputs:redpanda_migrator",
87
+
"field": "topic_replication_factor",
88
+
"description": "The replication factor for created topics. If not specified, inherits the replication factor from source topics. Useful when migrating to clusters with different sizes."
89
+
},
90
+
{
91
+
"component": "outputs:redpanda_migrator",
92
+
"field": "sync_topic_acls",
93
+
"description": "Whether to synchronise topic ACLs from source to destination cluster. ACLs are transformed safely: ALLOW WRITE permissions are excluded, and ALLOW ALL is downgraded to ALLOW READ to prevent conflicts."
94
+
},
95
+
{
96
+
"component": "outputs:redpanda_migrator",
97
+
"field": "serverless",
98
+
"description": "Enable serverless mode for Redpanda Cloud serverless clusters. This restricts topic configurations and schema features to those supported by serverless environments."
modules/ai-agents/pages/mcp-server/developer-guide.adoc (+27 −8)
@@ -65,7 +65,7 @@ This command scaffolds your project with the necessary directories and template
 
 The `resources` directory is where you define your tools such as inputs, outputs, and processors. Each tool is defined in its own YAML file. By default, the `example-` files are provided as templates. The `o11y` directory contains configuration for observability, including metrics and tracing.
 
-. Remove the example- files and add your own tools:
+. Remove the `example-` files and add your own tools:
-Then, you can create new YAML files for each tool you want to expose. For example:
-
-[,bash]
-----
-touch resources/processors/weather-lookup.yaml
-touch resources/processors/database-query.yaml
-----
+Then, you can create new YAML files for each tool you want to expose. Make sure each file contains only one component type (input, output, processor, or cache). Put inputs in the `resources/inputs/` directory, outputs in `resources/outputs/`, processors in `resources/processors/`, and caches in `resources/caches/`.
 
 See the next sections for details on customizing these files for your use case.
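Following the revised guidance, a new tool file might look like the sketch below. The file name reuses an example from the removed lines, and the `meta`/`mcp` key names are assumptions for illustration rather than the documented schema; the one-component-per-file rule from the added paragraph is the only constraint taken from this diff.

```yaml
# resources/processors/weather-lookup.yaml
# One component type per file: this file defines only a processor.
label: weather_lookup
meta:
  mcp:
    enabled: true  # key names here are assumed, not confirmed by this diff
    description: "Annotate a request with a lookup timestamp."
processors:
  - mapping: |
      root = this
      root.requested_at = now()
```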
@@ -163,6 +157,31 @@ Each YAML file should contain exactly one component type. The component type is
 | Cache component
 |===
 
+=== Property restrictions by component type
+
+Different component types have different property capabilities when exposed as MCP tools:
+
+[cols="1,2,2"]
+|===
+| Component Type | Property Support | Details
+
+| `input`
+| Only supports `count` property
+| AI clients can specify how many messages to read, but you cannot define custom properties.
+
+| `cache`
+| No custom properties
+| Properties are hardcoded to `key` and `value` for cache operations.
+
+| `output`
+| Custom properties supported
+| AI sees properties as an array for batch operations: `[{prop1, prop2}, {prop1, prop2}]`.
+
+| `processor`
+| Custom properties supported
+| You can define any properties needed for data processing operations.
+|===
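To make the `output` row concrete, here is a hypothetical payload matching the array-of-property-objects shape the table describes. The property names are invented for illustration; the table only specifies that an AI client supplies one object of custom properties per message in the batch.

```yaml
# Hypothetical arguments an AI client could pass to an output tool
# whose custom properties are `topic` and `note` (both invented here).
- topic: "alerts"
  note: "first message in the batch"
- topic: "alerts"
  note: "second message in the batch"
```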