Using the Kafka Connect BigQuery sink connector, we occasionally observe the following error, since the streaming API has a limit on the size of the batch it can write to BigQuery:

com.google.cloud.bigquery.BigQueryException: Request size is too big: 12705398 limitation: 12582912

Would it be possible to make the batch size configurable on the sink connector, so that requests stay under this limit? Kafka already offers several parameters of this kind on the producer side (https://kafka.apache.org/documentation/#producerconfigs).

Thanks @b-goyal. If I understand correctly, max.poll.records controls the number of records that are pulled. That is not exactly what we would like to have, which is a limit on the size in bytes of the batch. It is true that fewer records would probably mean a smaller batch, but this is not an optimal solution.
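For context, here is a minimal sketch of the behaviour being asked for: splitting rows into sub-batches by an estimated byte size before each insertAll call, rather than relying on record count alone. This is not how the connector currently works; the class name, the MAX_REQUEST_BYTES value, and the estimateRowBytes heuristic are illustrative assumptions, and a real implementation would measure the serialized request payload.

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.InsertAllRequest;
import com.google.cloud.bigquery.InsertAllRequest.RowToInsert;
import com.google.cloud.bigquery.InsertAllResponse;
import com.google.cloud.bigquery.TableId;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class SizeLimitedStreamer {

    // Hypothetical configurable threshold: stay safely below BigQuery's
    // ~12 MB streaming-insert request limit reported in the exception above.
    private static final long MAX_REQUEST_BYTES = 9L * 1024 * 1024;

    private final BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

    /**
     * Streams rows to BigQuery, flushing whenever adding the next row would push
     * the estimated request size over MAX_REQUEST_BYTES, instead of sending one
     * arbitrarily large batch.
     */
    public void insertAllInSizeLimitedBatches(TableId table, List<Map<String, Object>> rows) {
        List<RowToInsert> batch = new ArrayList<>();
        long batchBytes = 0;

        for (Map<String, Object> row : rows) {
            long rowBytes = estimateRowBytes(row);
            if (!batch.isEmpty() && batchBytes + rowBytes > MAX_REQUEST_BYTES) {
                flush(table, batch);
                batch = new ArrayList<>();
                batchBytes = 0;
            }
            batch.add(RowToInsert.of(row));
            batchBytes += rowBytes;
        }
        if (!batch.isEmpty()) {
            flush(table, batch);
        }
    }

    // Sends one size-bounded sub-batch via the streaming insert API.
    private void flush(TableId table, List<RowToInsert> batch) {
        InsertAllResponse response =
                bigquery.insertAll(InsertAllRequest.newBuilder(table).setRows(batch).build());
        if (response.hasErrors()) {
            throw new RuntimeException("Insert errors: " + response.getInsertErrors());
        }
    }

    // Very rough estimate based on the row's string form; a production version
    // would size the actual serialized JSON sent to BigQuery.
    private long estimateRowBytes(Map<String, Object> row) {
        return row.toString().length();
    }
}
```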