Conversation


@eliottness eliottness commented Oct 20, 2025

What does this PR do?

  • Implements a complete OpenFeature-compatible feature flag provider for Go that integrates with Datadog Remote Config
  • Supports all OpenFeature flag types: boolean, string, integer, float, and JSON objects (one call per type is sketched below)
  • Provides advanced targeting capabilities with attribute-based conditions and traffic sharding
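
For illustration, one evaluation call per supported type through the OpenFeature Go SDK might look like the sketch below. The flag names, defaults, and attributes are placeholders, and the client is assumed to already be backed by this provider:

package flagsdemo

import (
	"context"
	"fmt"

	"github.com/open-feature/go-sdk/openfeature"
)

// evaluateAllTypes is illustrative only; flag names and defaults are invented.
func evaluateAllTypes(ctx context.Context, client *openfeature.Client) {
	evalCtx := openfeature.NewEvaluationContext("user-123", map[string]interface{}{"country": "FR"})

	enabled, _ := client.BooleanValue(ctx, "new-checkout", false, evalCtx)
	theme, _ := client.StringValue(ctx, "ui-theme", "light", evalCtx)
	limit, _ := client.IntValue(ctx, "rate-limit", 100, evalCtx)
	ratio, _ := client.FloatValue(ctx, "sample-ratio", 0.1, evalCtx)
	cfg, _ := client.ObjectValue(ctx, "widget-config", map[string]interface{}{}, evalCtx)

	fmt.Println(enabled, theme, limit, ratio, cfg)
}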

Motivation

This PR adds the Datadog OpenFeature provider to dd-trace-go, enabling Go applications to evaluate feature flags using the OpenFeature standard
interface.

Testing Instructions

  1. Set environment variable: export DD_EXPERIMENTAL_FLAGGING_PROVIDER_ENABLED=true
  2. Run unit tests: go test ./openfeature/...
  3. Run integration tests: go test -v ./openfeature/...
  4. Test with Remote Config by starting the tracer and creating the provider, as in the sketch below
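
A minimal end-to-end sketch of step 4. The app name, flag name, and targeting key are placeholders; the provider import path is taken from this PR's benchmark output, and NewDatadogProvider from the review snippets further down:

package main

import (
	"context"
	"log"

	"github.com/DataDog/dd-trace-go/v2/ddtrace/tracer"
	ddof "github.com/DataDog/dd-trace-go/v2/openfeature"
	of "github.com/open-feature/go-sdk/openfeature"
)

func main() {
	// Requires DD_EXPERIMENTAL_FLAGGING_PROVIDER_ENABLED=true (see step 1).
	// Remote Config is delivered through the tracer, so start it first.
	if err := tracer.Start(); err != nil {
		log.Fatal(err)
	}
	defer tracer.Stop()

	provider, err := ddof.NewDatadogProvider()
	if err != nil {
		log.Fatal(err)
	}
	if err := of.SetProvider(provider); err != nil {
		log.Fatal(err)
	}

	client := of.NewClient("my-app")
	enabled, err := client.BooleanValue(context.Background(), "example-flag", false,
		of.NewEvaluationContext("user-123", nil))
	log.Println(enabled, err)
}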

System tests were run in DataDog/system-tests#5580.

Benchmarks

goos: linux
goarch: amd64
pkg: github.com/DataDog/dd-trace-go/v2/openfeature
cpu: 12th Gen Intel(R) Core(TM) i9-12900H
BenchmarkEvaluation
BenchmarkEvaluation-20                    	               4850703	       245.4 ns/op	      48 B/op	       3 allocs/op
BenchmarkEvaluationWithVaryingContextSize
BenchmarkEvaluationWithVaryingContextSize/1field
BenchmarkEvaluationWithVaryingContextSize/1field-20         	 4679434	       246.4 ns/op	      48 B/op	       3 allocs/op
BenchmarkEvaluationWithVaryingContextSize/5fields
BenchmarkEvaluationWithVaryingContextSize/5fields-20        	 4783665	       243.7 ns/op	      48 B/op	       3 allocs/op
BenchmarkEvaluationWithVaryingContextSize/10fields
BenchmarkEvaluationWithVaryingContextSize/10fields-20       	 4668084	       266.6 ns/op	      48 B/op	       3 allocs/op
BenchmarkEvaluationWithVaryingContextSize/20fields
BenchmarkEvaluationWithVaryingContextSize/20fields-20       	 4098861	       261.5 ns/op	      48 B/op	       3 allocs/op
BenchmarkEvaluationWithVaryingFlagCounts
BenchmarkEvaluationWithVaryingFlagCounts/5flags
BenchmarkEvaluationWithVaryingFlagCounts/5flags-20          	 4383056	       301.1 ns/op	      48 B/op	       3 allocs/op
BenchmarkEvaluationWithVaryingFlagCounts/10flags
BenchmarkEvaluationWithVaryingFlagCounts/10flags-20         	 4741010	       250.4 ns/op	      48 B/op	       3 allocs/op
BenchmarkEvaluationWithVaryingFlagCounts/50flags
BenchmarkEvaluationWithVaryingFlagCounts/50flags-20         	 4491292	       246.4 ns/op	      48 B/op	       3 allocs/op
BenchmarkEvaluationWithVaryingFlagCounts/100flags
BenchmarkEvaluationWithVaryingFlagCounts/100flags-20        	 4890760	       246.2 ns/op	      48 B/op	       3 allocs/op
BenchmarkConcurrentEvaluations
BenchmarkConcurrentEvaluations-20                           	11141725	       107.9 ns/op	      48 B/op	       3 allocs/op
PASS

@eliottness eliottness changed the title eat(openfeature): add Datadog OpenFeature provider for server-side feature flag evaluation feat(openfeature): add Datadog OpenFeature provider for server-side feature flag evaluation Oct 20, 2025

pr-commenter bot commented Oct 20, 2025

Benchmarks

Benchmark execution time: 2025-10-27 13:21:22

Comparing candidate commit f9ae5bf in PR branch eliottness/ffe with baseline commit 9225b43 in branch main.

Found 0 performance improvements and 1 performance regression! Performance is the same for 5 metrics, 0 unstable metrics.

scenario:BenchmarkStartSpanConfig/scenario_WithStartSpanConfig-24

  • 🟥 execution_time [+84.363ns; +99.037ns] or [+2.406%; +2.824%]

@leoromanovsky

> Benchmarks
>
> Benchmark execution time: 2025-10-20 17:04:18
>
> Comparing candidate commit bdc04fe in PR branch eliottness/ffe with baseline commit 585513f in branch main.
>
> Found 0 performance improvements and 0 performance regressions! Performance is the same for 3 metrics, 0 unstable metrics.

Automated benchmarks are really interesting - will we be able to wire the flag evaluations into them to see if changes cause regressions too? I took a pass at doing this in the Eppo repository for memory allocations, which customers are sensitive to - we'd like to operate with a light touch in our customers' environments: https://github.com/Eppo-exp/golang-sdk/blob/main/.github/workflows/memory-profile-compare.yml


@leoromanovsky leoromanovsky left a comment


Very strong foundation!


datadog-datadog-prod-us1 bot commented Oct 21, 2025

⚠️ Tests

⚠️ Warnings

❄️ 1 New flaky test detected

TestReportHealthMetricsAtInterval from github.com/DataDog/dd-trace-go/v2/ddtrace/tracer (Datadog)
Failed

=== RUN   TestReportHealthMetricsAtInterval
    metrics_test.go:64: 
        	Error Trace:	/home/runner/work/dd-trace-go/dd-trace-go/ddtrace/tracer/metrics_test.go:64
        	Error:      	Condition never satisfied
        	Test:       	TestReportHealthMetricsAtInterval
--- FAIL: TestReportHealthMetricsAtInterval (5.03s)

ℹ️ Info

🧪 All tests passed

This comment will be updated automatically if new data arrives.
🔗 Commit SHA: f9ae5bf

@eliottness eliottness added enhancement quick change/addition that does not need full team approval AI Assisted AI/LLM assistance used in this PR (partially or fully) labels Oct 22, 2025
@github-actions github-actions bot added the apm:ecosystem contrib/* related feature requests or bugs label Oct 22, 2025

eliottness commented Oct 22, 2025

> Automated benchmarks are really interesting - will we be able to wire the flag evaluations into them to see if changes cause regressions too? I took a pass at doing this in the Eppo repository for memory allocations, which customers are sensitive to - we'd like to operate with a light touch in our customers' environments: https://github.com/Eppo-exp/golang-sdk/blob/main/.github/workflows/memory-profile-compare.yml

@leoromanovsky I added benchmarks and registered them into the continuous benchmarking platform. It will take a while for them to show up because the platform needs to gather a lot of data points from runs on main.
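
For reference, a benchmark of that shape in Go might look like the sketch below; the flag name, app name, and client setup are placeholders, not this PR's actual benchmark code:

package openfeature_test

import (
	"context"
	"testing"

	of "github.com/open-feature/go-sdk/openfeature"
)

// Placeholder benchmark: assumes a client already backed by the Datadog
// provider (registration elided) and a boolean flag named "example-flag".
func BenchmarkConcurrentEvaluations(b *testing.B) {
	client := of.NewClient("bench")
	evalCtx := of.NewEvaluationContext("user-123", map[string]interface{}{"plan": "pro"})
	b.ReportAllocs() // surfaces the B/op and allocs/op columns shown above
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			_, _ = client.BooleanValue(context.Background(), "example-flag", false, evalCtx)
		}
	})
}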

@eliottness eliottness marked this pull request as ready for review October 22, 2025 17:14
@eliottness eliottness requested review from a team as code owners October 22, 2025 17:14

darccio commented Oct 23, 2025

@codex review


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.



Comment on lines +89 to +101
// Init initializes the provider. For the Datadog provider,
// this is waiting for the first configuration to be loaded.
func (p *DatadogProvider) Init(openfeature.EvaluationContext) error {
	p.mu.Lock()
	defer p.mu.Unlock()
	for p.configuration == nil {
		p.configChange.Wait()
	}


P1: Avoid blocking forever when no configuration arrives

The provider’s Init method waits on the condition variable until p.configuration becomes non‑nil and never returns otherwise. If Remote Config is unavailable, misconfigured, or simply has no flags for this environment, this wait loop never exits and the OpenFeature client never finishes initialization. In practice this can hang application startup indefinitely even though evaluations could proceed with default values. Consider adding a timeout or returning immediately when Remote Config is unavailable so the provider can be registered and errors can be reported per evaluation instead of deadlocking initialization.
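
For illustration, a bounded wait along the lines Codex suggests could look like the sketch below. The 30-second deadline and error message are invented for the example, time and errors imports are assumed, and the helper goroutine stays blocked on the Cond (leaked) if the deadline fires first - this is a sketch of the suggestion, not what the PR does:

// Sketch only: same fields as the snippet above, with the wait bounded by a deadline.
func (p *DatadogProvider) Init(openfeature.EvaluationContext) error {
	ready := make(chan struct{})
	go func() {
		p.mu.Lock()
		defer p.mu.Unlock()
		for p.configuration == nil {
			p.configChange.Wait()
		}
		close(ready)
	}()
	select {
	case <-ready:
		return nil
	case <-time.After(30 * time.Second): // placeholder deadline
		return errors.New("openfeature: no Remote Config payload received before deadline")
	}
}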


Contributor Author

@leoromanovsky Is this a known issue of the openfeature.StateHandler interface not giving us a way to time out? I am against putting a constant timeout on this wait because the Remote Config API contract has no guaranteed init time.


Not known - when you encounter something like this that you feel we could improve upstream in OpenFeature, could you please add a ticket to https://datadoghq.atlassian.net/browse/FFL-1147? We will follow up with pull requests.

Comment on lines +132 to +138
// Special handling for "id" attribute: if not explicitly provided, use targeting key
if condition.Attribute == "id" && !exists {
	if targetingKey, ok := context[of.TargetingKey].(string); ok {
		attributeValue = targetingKey
		exists = true
	}
}


No special handling is desired here; callers must provide the targeting key.
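
For reference, passing the targeting key explicitly with the OpenFeature Go SDK looks roughly like this; the flag name and attributes are made up, and ctx and client are assumed to exist:

evalCtx := openfeature.NewEvaluationContext(
	"user-123", // targeting key, which the snippet above maps to the "id" attribute
	map[string]interface{}{"country": "FR"},
)
enabled, err := client.BooleanValue(ctx, "example-flag", false, evalCtx)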

Contributor Author

I added this to make one of the system-tests pass 🤔 @khanayan123, what do you think?

// Returns an error if the default configuration of the Remote Config client is NOT working
// In this case, please call tracer.Start before creating the provider.
func NewDatadogProvider() (*DatadogProvider, error) {
	if !internal.BoolEnv(ffeProductEnvVar, false) {
Contributor


@leoromanovsky, are we sure about this activation env var? Please think of the customer implications and experience. IMHO, this is useful for emergency disablement by ops teams (i.e., without touching the source code), and that can be achieved in other ways too.

Signed-off-by: Eliott Bouhana <[email protected]>
@eliottness
Contributor Author

/merge


dd-devflow-routing-codex bot commented Oct 27, 2025

View all feedback in the Devflow UI.

2025-10-27 13:08:26 UTC ℹ️ Start processing command /merge


2025-10-27 13:08:35 UTC ℹ️ MergeQueue: waiting for PR to be ready

This merge request is not mergeable according to GitHub. Common reasons include pending required checks, missing approvals, or merge conflicts — but it could also be blocked by other repository rules or settings.
It will be added to the queue as soon as checks pass and/or get approvals.
Note: if you pushed new commits since the last approval, you may need additional approval.
You can remove it from the waiting list with /remove command.


2025-10-27 14:10:21 UTC ℹ️ MergeQueue: merge request added to the queue

The expected merge time in main is approximately 18m (p90).


2025-10-27 14:26:30 UTC ℹ️ MergeQueue: This merge request was merged

Signed-off-by: Eliott Bouhana <[email protected]>
@dd-mergequeue dd-mergequeue bot merged commit 99623ba into main Oct 27, 2025
291 of 295 checks passed
@dd-mergequeue dd-mergequeue bot deleted the eliottness/ffe branch October 27, 2025 14:26

Labels

  • AI Assisted: AI/LLM assistance used in this PR (partially or fully)
  • apm:ecosystem: contrib/* related feature requests or bugs
  • enhancement: quick change/addition that does not need full team approval
  • mergequeue-status: done
