
Conversation

@aroe559

@aroe559 aroe559 commented Oct 10, 2025

  • Add ability to open extra ports on the daemonset to allow for customisation

Checklist

  • Changelog updated or skip changelog label added
  • Documentation updated
  • Template tests added for new features

Anahera Roestenburg added 2 commits October 13, 2025 13:37
@aroe559 aroe559 marked this pull request as ready for review October 13, 2025 00:57
@aroe559 aroe559 requested a review from a team as a code owner October 13, 2025 00:57
@Gourav2906 Gourav2906 requested a review from Copilot October 14, 2025 15:31

Copilot AI left a comment

Pull Request Overview

Adds configurable extraPorts to the otellogs DaemonSet so users can expose additional container/host ports (e.g. OTLP HTTP, StatsD).

  • Helm values.yaml: introduces commented example extraPorts configuration.
  • Template: injects extraPorts into container ports list when provided.
  • Tests and changelog updated to reflect new feature.
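
A minimal sketch of the kind of configuration this enables in user values (the key path and port entries below are illustrative, not copied from the diff):

otellogs:
  daemonset:
    extraPorts:                # hypothetical key path; see values.yaml in this PR for the actual location
      - name: otlphttp
        containerPort: 4318    # OTLP/HTTP default port, shown only as an example
        protocol: TCP
      - name: statsd
        containerPort: 8125    # StatsD default port, shown only as an example
        protocol: UDP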

Reviewed Changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 3 comments.

Summary per file:

  • deploy/helm/sumologic/templates/logs/collector/otelcol/daemonset.yaml: renders extraPorts under the container ports list conditionally.
  • deploy/helm/sumologic/values.yaml: adds a commented documentation/example block for extraPorts.
  • tests/helm/testdata/goldenfile/logs_otc_daemonset/complex.input.yaml: test input extended with extraPorts to validate rendering.
  • tests/helm/testdata/goldenfile/logs_otc_daemonset/complex.output.yaml: golden output updated to include the rendered extra ports.
  • .changelog/4011.added.txt: changelog entry for the new extraPorts feature.

@rnishtala-sumo
Contributor

@aroe559 thank you for the contribution! A few questions about this change.

Would using additional service monitors cover the use-case that you're looking for?
https://help.sumologic.com/docs/send-data/kubernetes/collecting-metrics/#application-metrics-are-exposed-multiple-endpoints-scenario
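
Roughly, an additional service monitor can be declared in user values along these lines (names, labels, and the port are placeholders):

kube-prometheus-stack:
  prometheus:
    additionalServiceMonitors:
      - name: example-app            # placeholder
        endpoints:
          - port: metrics            # must match a named port on the target Service
        namespaceSelector:
          matchNames:
            - example-namespace      # placeholder
        selector:
          matchLabels:
            app: example-app         # placeholder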

@aroe559
Author

aroe559 commented Oct 14, 2025

@rnishtala-sumo thanks for your review :)

You're correct, the intention is to extend the metric collection functionality of the pipeline. Service monitors don't seem to cover our use case, as we are interested in opening NodePorts to allow pods to push metrics into the pipeline.

I will look into writing an integration test to cover this 🫡

@aroe559 aroe559 marked this pull request as draft October 14, 2025 21:03
@aroe559
Author

aroe559 commented Oct 15, 2025

Hey @rnishtala-sumo, I looked into adding an integration test, and I was hoping you could clarify what you're looking for.

I don't see a strong reason to add an integration test:

  • The change only allows extra ports to be added to the daemonset, and this functionality is already covered by the goldenfile test.
  • The sumologic-kubernetes collector shouldn't care how the ports are used, only that they can be opened. An integration test that sets up metric collection and verifies it seems beyond the scope of the collector.
  • I would consider the change minor, and the docs indicate integration tests are for major changes.
  • I don't see any existing integration tests that check the availability of specific ports.

@aroe559 aroe559 marked this pull request as ready for review October 15, 2025 22:18
@rnishtala-sumo
Contributor

rnishtala-sumo commented Oct 16, 2025

@aroe559 I'm a bit unclear on a few things

  • Which pod metrics are we trying to ingest with this change? Could this be done using pod monitors?
  • Also, the changes made are to the logs daemonset, in the logs pipeline. Was the intention to make these changes to the metrics pipeline?

If you could provide more detail about the metrics we're trying to pull/push into the pipeline, that'll help me assist better.

@aroe559
Author

aroe559 commented Oct 16, 2025

Hey @rnishtala-sumo

Which pod metrics are we trying to ingest with this change? Could this be done using pod monitors?

We are not trying to scrape metrics, but rather open a hostport on the daemonset for pods to push telemetry to. Pod monitors are not suitable for this use case.

Also, the changes made are to the logs daemonset, in the logs pipeline. Was the intention to make these changes to the metrics pipeline?

Correct me if I'm wrong, but the metrics pipeline doesn't have a daemonset, so there is no way for us to achieve the goal.

If you could provide more detail about the metrics we're trying to pull/push into the pipeline, that'll help me assist better.

We want to open a hostport on every node (daemonset) to allow pods to push otel metrics into the collection pipeline.

We think that allowing extension and customisation of the otellogs DaemonSet by opening extra ports is a useful feature to add.
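
To make the intent concrete, an application pod would then push to the node-local port via the downward API, roughly like this (the port is just the OTLP/HTTP default, used as an example):

containers:
  - name: app
    env:
      - name: HOST_IP
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP
      - name: OTEL_EXPORTER_OTLP_ENDPOINT     # standard OTel SDK variable
        value: "http://$(HOST_IP):4318"       # example hostPort opened via extraPorts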

@rnishtala-sumo
Contributor

rnishtala-sumo commented Oct 21, 2025

Correct me if I'm wrong, but the metrics pipeline doesn't have a daemonset

@aroe559 have you looked into using the prometheus-node-exporter? This runs as a daemonset and, depending on the metrics that need to be collected, could prove useful. To enable the node exporter, you can use:

kube-prometheus-stack:
  nodeExporter:
    enabled: true

It currently collects the following metrics by default - https://www.sumologic.com/help/docs/metrics/kubernetes-metrics/#node-exporter-node-exporter-metrics

You might be able to use a sidecar (statsd_exporter) to convert StatsD metrics into Prometheus format and send these metrics via the node exporter.
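
A rough sketch of such a sidecar container (image tag and ports are the statsd_exporter defaults, shown only as an illustration):

containers:
  - name: statsd-exporter
    image: prom/statsd-exporter:v0.26.1   # example tag
    ports:
      - name: statsd
        containerPort: 9125               # default StatsD ingestion port
        protocol: UDP
      - name: metrics
        containerPort: 9102               # default Prometheus scrape port
        protocol: TCP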

@lawrenceong

@rnishtala-sumo The purpose of exposing these ports is multifaceted. They use the OTEL protocol, allowing us to send various OTEL data types (such as traces/metrics) through them. Using Prometheus would be a more roundabout approach. We're not aiming to collect node metrics, as that is already handled elsewhere.

@rnishtala-sumo
Contributor

rnishtala-sumo commented Oct 27, 2025

@lawrenceong there is a tracing endpoint in our solution to push OTLP traces to, documented here - https://www.sumologic.com/help/docs/apm/traces/get-started-transaction-tracing/set-up-traces-collection-for-kubernetes-environments/#pointing-tracing-clients-instrumentation-exporters-to-the-agent-collectors

You can use the above endpoints to push OTLP metrics and traces.
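
For example, an instrumented application can point its OTLP exporter at that endpoint through the standard environment variable (the hostname below is a placeholder; use the actual endpoint from the linked docs):

env:
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://RELEASE-sumologic-otelagent.NAMESPACE:4318"   # placeholder; substitute the actual service and port from the docs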

The statsd exporter is only required to convert statsd metrics into prometheus format. The metrics pipeline has native support for metrics in prometheus format.

We're not aiming to collect node metrics, as that is already handled elsewhere

The prometheus node exporter can be extended (via the statsd sidecar) to receive/export statsd metrics without necessarily exporting node metrics.

Finally, exposing additional ports on the logs collector daemonset does not enable downstream components in the metrics pipeline to ingest additional metrics sent to these ports. The logs daemonset is exclusive to the logging pipeline and does not currently support extending metrics collection capabilities.

Please let us know if you have additional questions. It would also help if we created an issue for this so our team could work with you closely on this feature.

cc: @fguimond @echlebek
