
[Dynamic Instrumentation] Fix stability issues #34340

Draft · GreenMatan wants to merge 4 commits into main from matang/exploration-tests

Conversation

@GreenMatan GreenMatan commented Feb 23, 2025

What does this PR do?

  • Fixed a few nil dereferences that caused crashes (DEBUG-3454)
  • Fixed a race condition between map reads and writes that resulted in a crash (DEBUG-3211); see the sketch after this list
  • A failure during DWARF inspection for one probe used to sabotage all other probes in the service (DEBUG-3230)
  • Similarly, an error while processing probes for one process used to fail binary inspection for all other processes (DEBUG-3205)
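
A minimal sketch of the map read/write fix pattern, assuming the map is guarded by a `sync.RWMutex` (the type and field names here are illustrative, not the actual Go DI code):

```go
package di

import "sync"

type probe struct{ id string }

// probeStore serializes access to its probe map: concurrent readers share
// the read lock while writers take the exclusive lock, so map reads can no
// longer race with map writes.
type probeStore struct {
	mu     sync.RWMutex
	probes map[string]*probe
}

func newProbeStore() *probeStore {
	return &probeStore{probes: make(map[string]*probe)}
}

func (s *probeStore) get(id string) (*probe, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	p, ok := s.probes[id]
	return p, ok
}

func (s *probeStore) put(p *probe) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.probes[p.id] = p
}
```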

Motivation

Improve stability of Go DI

Describe how you validated your changes

I ran our e2e tests and exploration tests locally and verified that the proposed changes improve stability.

@bits-bot
Collaborator

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.

@github-actions github-actions bot added the long review PR is complex, plan time to review it label Feb 23, 2025
@GreenMatan GreenMatan force-pushed the matang/exploration-tests branch from ff13c65 to 2e3cba1 on February 23, 2025 at 14:57
@agent-platform-auto-pr
Contributor

agent-platform-auto-pr bot commented Feb 23, 2025

[Fast Unit Tests Report]

On pipeline 56883287 (CI Visibility). The following jobs did not run any unit tests:

Jobs:
  • tests_windows-x64

If you modified Go files and expected unit tests to run in these jobs, please double-check the job logs. If you think tests should have been executed, reach out to #agent-devx-help.

@agent-platform-auto-pr
Contributor

agent-platform-auto-pr bot commented Feb 23, 2025

Uncompressed package size comparison

Comparison with ancestor 153cd143979ee9acb303aa22608e636442089005

Diff per package
| package | diff | status | size | ancestor | threshold |
|---|---|---|---|---|---|
| datadog-agent-amd64-deb | 0.00MB | | 835.77MB | 835.76MB | 0.50MB |
| datadog-agent-x86_64-rpm | 0.00MB | | 845.56MB | 845.55MB | 0.50MB |
| datadog-agent-x86_64-suse | 0.00MB | | 845.56MB | 845.55MB | 0.50MB |
| datadog-agent-arm64-deb | 0.00MB | | 826.28MB | 826.28MB | 0.50MB |
| datadog-agent-aarch64-rpm | 0.00MB | | 836.05MB | 836.05MB | 0.50MB |
| datadog-dogstatsd-amd64-deb | 0.00MB | | 39.42MB | 39.42MB | 0.50MB |
| datadog-dogstatsd-x86_64-rpm | 0.00MB | | 39.50MB | 39.50MB | 0.50MB |
| datadog-dogstatsd-x86_64-suse | 0.00MB | | 39.50MB | 39.50MB | 0.50MB |
| datadog-dogstatsd-arm64-deb | 0.00MB | | 37.96MB | 37.96MB | 0.50MB |
| datadog-heroku-agent-amd64-deb | 0.00MB | | 443.28MB | 443.28MB | 0.50MB |
| datadog-iot-agent-amd64-deb | 0.00MB | | 62.02MB | 62.02MB | 0.50MB |
| datadog-iot-agent-x86_64-rpm | 0.00MB | | 62.09MB | 62.09MB | 0.50MB |
| datadog-iot-agent-x86_64-suse | 0.00MB | | 62.09MB | 62.09MB | 0.50MB |
| datadog-iot-agent-arm64-deb | 0.00MB | | 59.27MB | 59.27MB | 0.50MB |
| datadog-iot-agent-aarch64-rpm | 0.00MB | | 59.33MB | 59.33MB | 0.50MB |

Decision

✅ Passed

@GreenMatan GreenMatan force-pushed the matang/exploration-tests branch from 2e3cba1 to 08a01e0 on February 23, 2025 at 15:34
@GreenMatan GreenMatan changed the title [Dynamic Instrumentation] Fixed few bugs + added exploration testing [Dynamic Instrumentation] Fix stability issues & introduce Exploration Testing Feb 23, 2025

cit-pr-commenter bot commented Feb 23, 2025

Regression Detector

Regression Detector Results

Metrics dashboard
Target profiles
Run ID: c5c8390e-d514-40b6-bd5d-b6c367afd440

Baseline: 153cd14
Comparison: 62f7f41
Diff

Optimization Goals: ✅ No significant changes detected

Fine details of change detection per experiment

| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| | uds_dogstatsd_to_api_cpu | % cpu utilization | +0.65 | [-0.20, +1.51] | 1 | Logs |
| | tcp_syslog_to_blackhole | ingress throughput | +0.65 | [+0.59, +0.71] | 1 | Logs |
| | quality_gate_idle | memory utilization | +0.24 | [+0.18, +0.29] | 1 | Logs bounds checks dashboard |
| | file_to_blackhole_1000ms_latency | egress throughput | +0.19 | [-0.60, +0.97] | 1 | Logs |
| | file_to_blackhole_500ms_latency | egress throughput | +0.08 | [-0.70, +0.86] | 1 | Logs |
| | quality_gate_idle_all_features | memory utilization | +0.05 | [-0.01, +0.10] | 1 | Logs bounds checks dashboard |
| | file_to_blackhole_100ms_latency | egress throughput | +0.01 | [-0.64, +0.66] | 1 | Logs |
| | file_to_blackhole_300ms_latency | egress throughput | +0.00 | [-0.62, +0.63] | 1 | Logs |
| | file_to_blackhole_0ms_latency | egress throughput | +0.00 | [-0.76, +0.77] | 1 | Logs |
| | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.02, +0.02] | 1 | Logs |
| | file_to_blackhole_0ms_latency_http1 | egress throughput | +0.00 | [-0.81, +0.81] | 1 | Logs |
| | file_to_blackhole_0ms_latency_http2 | egress throughput | -0.00 | [-0.80, +0.80] | 1 | Logs |
| | uds_dogstatsd_to_api | ingress throughput | -0.00 | [-0.27, +0.27] | 1 | Logs |
| | file_to_blackhole_1000ms_latency_linear_load | egress throughput | -0.27 | [-0.74, +0.19] | 1 | Logs |
| | file_tree | memory utilization | -0.39 | [-0.45, -0.33] | 1 | Logs |
| | quality_gate_logs | % cpu utilization | -0.40 | [-3.28, +2.49] | 1 | Logs |

Bounds Checks: ✅ Passed

| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
| | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
| | file_to_blackhole_0ms_latency_http1 | lost_bytes | 10/10 | |
| | file_to_blackhole_0ms_latency_http1 | memory_usage | 10/10 | |
| | file_to_blackhole_0ms_latency_http2 | lost_bytes | 10/10 | |
| | file_to_blackhole_0ms_latency_http2 | memory_usage | 10/10 | |
| | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
| | file_to_blackhole_1000ms_latency_linear_load | memory_usage | 10/10 | |
| | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
| | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
| | file_to_blackhole_300ms_latency | lost_bytes | 10/10 | |
| | file_to_blackhole_300ms_latency | memory_usage | 10/10 | |
| | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
| | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
| | quality_gate_idle | intake_connections | 10/10 | bounds checks dashboard |
| | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
| | quality_gate_idle_all_features | intake_connections | 10/10 | bounds checks dashboard |
| | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
| | quality_gate_logs | intake_connections | 10/10 | |
| | quality_gate_logs | lost_bytes | 10/10 | |
| | quality_gate_logs | memory_usage | 10/10 | |

Explanation

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".
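
As a worked example using the table above: uds_dogstatsd_to_api_cpu has an estimated Δ mean % of +0.65 with a 90.00% CI of [-0.20, +1.51]. It fails criterion 1 (|+0.65| < 5.00%) and criterion 2 (the CI contains zero), so it is not flagged as a regression.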

CI Pass/Fail Decision

Passed. All Quality Gates passed.

  • quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.

@grantseltzer grantseltzer added changelog/no-changelog team/dynamic-instrumentation Dynamic Instrumentation qa/done QA done before merge and regressions are covered by tests labels Feb 23, 2025
@GreenMatan GreenMatan force-pushed the matang/exploration-tests branch 2 times, most recently from 2de5570 to 07334c5 on February 24, 2025 at 15:54
@grantseltzer
Member

I can't get the tests to actually run. Issues with running git:

```
=== RUN   TestIntegration
download /tmp/protobuf-integration-1060272402/src/google.golang.org/protobuf/.cache/protobuf-27.0
    integration_test.go:267: executing (git clone https://github.com/protocolbuffers/protobuf protobuf-27.0): exit status 128
        Cloning into 'protobuf-27.0'...
        error: RPC failed; curl 92 HTTP/2 stream 0 was not closed cleanly: CANCEL (err 8)
        error: 4217 bytes of body are still expected
        fetch-pack: unexpected disconnect while reading sideband packet
        fatal: early EOF
        fatal: fetch-pack: invalid index-pack output
--- FAIL: TestIntegration (118.60s)
FAIL
FAIL	google.golang.org/protobuf	120.028s
FAIL
    exploration_e2e_test.go:1038:
        	Error Trace:	/home/vagrant/datadog-agent/pkg/dynamicinstrumentation/testutil/exploration_e2e_test.go:1038
        	Error:      	Received unexpected error:
        	            	exit status 1
        	Test:       	TestExplorationGoDI
--- FAIL: TestExplorationGoDI (142.58s)
FAIL
exit status 1
FAIL	github.com/DataDog/datadog-agent/pkg/dynamicinstrumentation/testutil	142.615s
```

Regardless, I think dependencies would be better served as git submodules. What do you think?
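
For illustration, that could be as simple as `git submodule add https://github.com/protocolbuffers/protobuf third_party/protobuf` followed by checking out a pinned tag inside the submodule; the `third_party/protobuf` path and the exact tag are assumptions, not choices made in this PR.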

```go
}

func (pt *ProcessTracker) scanProcessTree() error {
	if err := syscall.Kill(-g_cmd.Process.Pid, syscall.SIGSTOP); err != nil {
```
Member

Can you document more the need to use signals for stopping/continuing procs?

My understanding so far is that we download the protobuf library, compile all test binaries, attach probes to all functions in the binaries, then run them and inspect results. Why is it necessary to start a process group and orchestrate with signals instead of running the binaries one at a time?
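
For reference, a minimal sketch of the process-group pattern in question, assuming the runner starts the workload with `Setpgid` so a single signal can pause or resume every test process at once (names here are illustrative, not taken from the test code):

```go
package runner

import (
	"os/exec"
	"syscall"
)

// startGroup launches cmd as the leader of a new process group, so that
// signals sent to the negated group PID reach it and all of its children.
func startGroup(name string, args ...string) (*exec.Cmd, error) {
	cmd := exec.Command(name, args...)
	cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	return cmd, nil
}

// pauseGroup stops every process in cmd's group (a negative PID targets
// the whole group); resumeGroup lets them continue.
func pauseGroup(cmd *exec.Cmd) error  { return syscall.Kill(-cmd.Process.Pid, syscall.SIGSTOP) }
func resumeGroup(cmd *exec.Cmd) error { return syscall.Kill(-cmd.Process.Pid, syscall.SIGCONT) }
```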

Author

@GreenMatan GreenMatan Feb 25, 2025

In order to do what you suggest, I'd have to use Go DI in an unusual way. With Exploration Testing, I wanted to mimic a real-world scenario as closely as I could, including all the process-tracking logic, analysis of binaries with their ProcessInfo struct, etc., essentially treating Go DI and the project under test as two black boxes.
Your recommendation could guide us in creating a new test tailored to map out the scope of our support.

While designing the moving parts of Exploration Testing, I considered a couple of approaches and ultimately chose to run the tests as a black box, without caring much about their nature. The infra is pretty flexible and can easily be adjusted to add more projects. I didn't want to manually import protobuf, figure out how to run it continuously, and be forced to put probes at predefined locations.
I do agree, though, that we face a challenge in determining which functions are actually executing. I don't think this challenge rules out the whole technique just yet; I have a few ideas for how to deal with it (e.g., use static analysis or profiling to determine which symbols should be prioritized for probing).

Having said all of that, I still believe the exploration tests are incredibly useful in their current state. Simply running them helped me uncover 6+ crashes caused by nil dereferences and invalid indexing, 1 race condition, and 2 logical bugs that would otherwise have been really hard to find. Debugging with them was also a blessing. The fact that numerous subtests are involved in the execution of protobuf, some of them very short-lived, also helped uncover these edge cases.
I'm not saying the exploration tests are a finished task as they currently stand, but they are certainly a tool for assessing our readiness for private beta. We could create more tests either by building on this infra or by creating a new one, but the current exploration tests have already proven themselves.

@grantseltzer
Member

grantseltzer commented Feb 24, 2025

  1. Let's split this into two PRs, one for stability fixes and another for exploration testing.

  2. Generally speaking, I'd like to discuss making the testing setup simpler. Maybe we could start with a service we own (or build on top of the existing sample service) that uses protobuf. We could even start multiple instances of it with a runner for stress testing, and each instance could use multiple goroutines for portions of the test. That would make it much easier to know which code paths are called (so we know whether output percentages are accurate) and what event output to expect when comparing for accuracy; a rough sketch of such a runner follows.
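
A rough sketch of that kind of runner, purely hypothetical (the binary path and error handling are invented for illustration):

```go
package runner

import (
	"os/exec"
	"sync"
)

// runInstances starts n copies of a sample service binary and waits for
// all of them to exit, as a simple stress-testing harness.
func runInstances(binPath string, n int) error {
	var wg sync.WaitGroup
	errs := make(chan error, n)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			if err := exec.Command(binPath).Run(); err != nil {
				errs <- err
			}
		}()
	}
	wg.Wait()
	close(errs)
	return <-errs // zero value (nil) if every instance succeeded
}
```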

@GreenMatan GreenMatan force-pushed the matang/exploration-tests branch from 07334c5 to 62f7f41 on February 25, 2025 at 14:22
@github-actions github-actions bot added medium review PR review might take time and removed long review PR is complex, plan time to review it labels Feb 25, 2025
@GreenMatan GreenMatan changed the title [Dynamic Instrumentation] Fix stability issues & introduce Exploration Testing [Dynamic Instrumentation] Fix stability issues Feb 25, 2025
@GreenMatan
Author

I've split the PR into Stability & Exploration Testing. The exploration testing PR is #34423, and it is stacked on top of this PR.

@agent-platform-auto-pr
Contributor

Test changes on VM

Use this command from test-infra-definitions to manually test this PR's changes on a VM:

inv aws.create-vm --pipeline-id=56883287 --os-family=ubuntu

Note: This applies to commit 62f7f41

@agent-platform-auto-pr
Contributor

Static quality checks ✅

Please find below the results from static quality gates

Successful checks

Info

| Result | Quality gate | On disk size | On disk size limit | On wire size | On wire size limit |
|---|---|---|---|---|---|
| | static_quality_gate_agent_deb_amd64 | 808.74MiB | 841.59MiB | 195.28MiB | 210.72MiB |
| | static_quality_gate_agent_deb_arm64 | 799.62MiB | 830.66MiB | 176.82MiB | 192.1MiB |
| | static_quality_gate_agent_rpm_amd64 | 808.88MiB | 841.52MiB | 198.18MiB | 214.52MiB |
| | static_quality_gate_agent_rpm_arm64 | 799.67MiB | 830.66MiB | 178.77MiB | 193.64MiB |
| | static_quality_gate_agent_suse_amd64 | 808.87MiB | 841.52MiB | 198.18MiB | 214.52MiB |
| | static_quality_gate_agent_suse_arm64 | 799.67MiB | 830.66MiB | 178.77MiB | 194.03MiB |
| | static_quality_gate_dogstatsd_deb_amd64 | 37.67MiB | 49.7MiB | 9.78MiB | 20.6MiB |
| | static_quality_gate_dogstatsd_deb_arm64 | 36.27MiB | 48.1MiB | 8.48MiB | 19.1MiB |
| | static_quality_gate_dogstatsd_rpm_amd64 | 37.67MiB | 49.7MiB | 9.79MiB | 20.6MiB |
| | static_quality_gate_dogstatsd_suse_amd64 | 37.67MiB | 49.7MiB | 9.79MiB | 20.6MiB |
| | static_quality_gate_iot_agent_deb_amd64 | 59.23MiB | 69.0MiB | 14.88MiB | 24.8MiB |
| | static_quality_gate_iot_agent_deb_arm64 | 56.59MiB | 66.4MiB | 12.85MiB | 22.8MiB |
| | static_quality_gate_iot_agent_rpm_amd64 | 59.23MiB | 69.0MiB | 14.9MiB | 24.8MiB |
| | static_quality_gate_iot_agent_rpm_arm64 | 56.6MiB | 66.4MiB | 12.85MiB | 22.8MiB |
| | static_quality_gate_iot_agent_suse_amd64 | 59.23MiB | 69.0MiB | 14.9MiB | 24.8MiB |
| | static_quality_gate_docker_agent_amd64 | 893.2MiB | 926.0MiB | 298.81MiB | 317.43MiB |
| | static_quality_gate_docker_agent_arm64 | 907.41MiB | 939.07MiB | 284.54MiB | 302.43MiB |
| | static_quality_gate_docker_agent_jmx_amd64 | 1.07GiB | 1.1GiB | 373.9MiB | 392.54MiB |
| | static_quality_gate_docker_agent_jmx_arm64 | 1.07GiB | 1.1GiB | 355.6MiB | 373.21MiB |
| | static_quality_gate_docker_dogstatsd_amd64 | 45.81MiB | 57.88MiB | 17.28MiB | 28.29MiB |
| | static_quality_gate_docker_dogstatsd_arm64 | 44.45MiB | 56.27MiB | 16.16MiB | 27.06MiB |
| | static_quality_gate_docker_cluster_agent_amd64 | 264.93MiB | 274.78MiB | 106.35MiB | 116.28MiB |
| | static_quality_gate_docker_cluster_agent_arm64 | 280.89MiB | 290.82MiB | 101.18MiB | 111.12MiB |

Comment on lines +247 to +252
```go
if procInfo.TypeMap == nil {
	err := AnalyzeBinary(procInfo)
	if err != nil {
		log.Errorf("couldn't inspect binary: %v\n", err)
		return
	}
```
Member

There are other fields set in procInfo by AnalyzeBinary besides the type map, so I'm not sure this is the correct way to do this check.

I originally thought it would be an issue because, when the binary is inspected, we only analyze types for target functions, so if a config were updated we wouldn't be able to parse types for the new configuration. However, testing this branch doesn't show any issues.

What were the reservations you had about this particular change?

Member

OK, I did further investigation and realized this code doesn't belong here at all. AnalyzeBinary needs to be called for the sake of the RC config manager, but the memory config manager (i.e., what file-based configuration is built on) has separate callbacks for when procs/configs are updated, and those already call AnalyzeBinary.

So remove this call to AnalyzeBinary altogether and add it before the call to applyConfigUpdate() on line 239 (without the nil-check condition); a sketch of the suggested shape follows.
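
A rough sketch of that suggestion, for illustration only (the surrounding function and the applyConfigUpdate call site are assumed here, not copied from the actual file):

```go
// Always (re)analyze the binary before applying the config update,
// instead of gating the call on procInfo.TypeMap == nil.
if err := AnalyzeBinary(procInfo); err != nil {
	log.Errorf("couldn't inspect binary: %v", err)
	return
}
applyConfigUpdate(procInfo, config) // hypothetical call site and arguments
```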
