
Conversation

osmman
Collaborator

osmman commented Sep 5, 2025

PR Type

Tests


Description

  • Replace context.TODO() with t.Context() in test files

  • Update test function signatures to use SpecContext

  • Add EnforceDefaultTimeoutsWhenUsingContexts() to test suites

  • Modernize Gomega usage patterns in tests
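A minimal before/after sketch of the pattern (imports elided; c, key, and obj are placeholder names, not code from this PR):

// Before: package-global assertions bound via RegisterTestingT,
// and a placeholder context.
func TestExample(t *testing.T) {
	gomega.RegisterTestingT(t)
	gomega.Expect(c.Get(context.TODO(), key, obj)).To(gomega.Succeed())
}

// After: a per-test Gomega instance, and Go 1.24's test-scoped
// context that is cancelled automatically when the test finishes.
func TestExample(t *testing.T) {
	g := gomega.NewWithT(t)
	g.Expect(c.Get(t.Context(), key, obj)).To(gomega.Succeed())
}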


Diagram Walkthrough

flowchart LR
  A["context.TODO()"] -- "replace with" --> B["t.Context()"]
  C["BeforeEach/AfterEach"] -- "add" --> D["SpecContext parameter"]
  E["Test suites"] -- "add" --> F["EnforceDefaultTimeoutsWhenUsingContexts()"]
  G["gomega.RegisterTestingT(t)"] -- "replace with" --> H["gomega.NewWithT(t)"]

File Walkthrough

Relevant files: Tests (48 files)
sharding_config_test.go
Replace context.TODO() with t.Context() in sharding config tests
+31/-31 
handle_secret_test.go
Replace context.TODO() with t.Context() in database secret tests
+28/-28 
deployment_test.go
Modernize context usage and Gomega patterns in deployment tests
+33/-36 
generate_cert_test.go
Replace context.TODO() with t.Context() in certificate generation tests
+10/-11 
server_config_test.go
Replace context.TODO() with t.Context() in server config tests
+12/-12 
resolve_keys_test.go
Replace context.TODO() with t.Context() in key resolution tests
+10/-9   
generate_signer_test.go
Replace context.TODO() with t.Context() in signer generation tests
+9/-9     
role_binding_test.go
Modernize context usage and Gomega patterns in role binding tests
+14/-15 
role_test.go
Modernize context usage and Gomega patterns in role tests
+12/-13 
tsa_hot_update_test.go
Update test function signatures to use SpecContext             
+5/-8     
ingress_test.go
Modernize context usage and Gomega patterns in ingress tests
+10/-11 
server_config_test.go
Replace context.TODO() with t.Context() in CTlog server config tests
+4/-5     
rekor_hot_update_test.go
Update test function signatures to use SpecContext             
+4/-7     
rekor_controller_test.go
Update test function signatures to use SpecContext             
+4/-7     
timestampauthority_controller_test.go
Update test function signatures to use SpecContext             
+4/-7     
config_map_test.go
Modernize context usage and Gomega patterns in config map tests
+8/-9     
secret_test.go
Modernize context usage and Gomega patterns in secret tests
+8/-9     
fulcio_hot_update_test.go
Update test function signatures to use SpecContext             
+4/-7     
fulcio_controller_test.go
Update test function signatures to use SpecContext             
+4/-7     
trillian_controller_test.go
Update test function signatures to use SpecContext             
+4/-7     
ctlog_controller_test.go
Update test function signatures to use SpecContext             
+4/-7     
tuf_controller_test.go
Update test function signatures to use SpecContext             
+4/-7     
ctlog_hot_update_test.go
Update test function signatures to use SpecContext             
+4/-7     
service_test.go
Modernize context usage and Gomega patterns in service tests
+7/-7     
tsa.go
Replace context.TODO() with t.Context() in TSA test utilities
+5/-5     
tls_test.go
Modernize context usage and Gomega patterns in TLS tests 
+5/-6     
resolve_pub_key_test.go
Replace context.TODO() with t.Context() in public key resolution tests
+2/-3     
tls_test.go
Replace context.TODO() with t.Context() in log signer TLS tests
+2/-4     
tls_test.go
Replace context.TODO() with t.Context() in log server TLS tests
+2/-4     
handle_keys_test.go
Replace context.TODO() with t.Context() in CTlog key handling tests
+2/-3     
handle_fulcio_root_test.go
Replace context.TODO() with t.Context() in Fulcio root handling tests
+2/-3     
ingress_test.go
Replace context.TODO() with t.Context() in Fulcio ingress tests
+1/-3     
ingress_test.go
Replace context.TODO() with t.Context() in Rekor server ingress tests
+1/-3     
ingress_test.go
Replace context.TODO() with t.Context() in TSA ingress tests
+1/-3     
ingress_test.go
Replace context.TODO() with t.Context() in TUF ingress tests
+1/-3     
ingress_test.go
Replace context.TODO() with t.Context() in Rekor UI ingress tests
+1/-3     
suite.go
Replace context.TODO() with context.Background() in test suite
+2/-2     
generate_signer_test.go
Replace context.TODO() with t.Context() in TSA signer generation tests
+2/-3     
rekor_attestation_test.go
Replace context.TODO() with ctx in Rekor attestation tests
+1/-1     
action_test.go
Replace context.TODO() with t.Context() in RBAC action tests
+1/-1     
action_test.go
Replace context.TODO() with t.Context() in tree action tests
+1/-1     
ntpMonitoring_test.go
Replace context.TODO() with t.Context() in NTP monitoring tests
+1/-1     
suite_test.go
Add EnforceDefaultTimeoutsWhenUsingContexts to CTlog test suite
+1/-0     
suite_test.go
Add EnforceDefaultTimeoutsWhenUsingContexts to Fulcio test suite
+1/-0     
suite_test.go
Add EnforceDefaultTimeoutsWhenUsingContexts to Rekor test suite
+1/-0     
suite_test.go
Add EnforceDefaultTimeoutsWhenUsingContexts to Trillian test suite
+1/-0     
suite_test.go
Add EnforceDefaultTimeoutsWhenUsingContexts to TSA test suite
+1/-0     
suite_test.go
Add EnforceDefaultTimeoutsWhenUsingContexts to TUF test suite
+1/-0     


sourcery-ai bot commented Sep 5, 2025

Reviewer's Guide

This PR refactors the test suite to use test-scoped contexts and per-test Gomega instances, ensuring consistent context propagation across all tests; verify helper signatures and Ginkgo specs are updated accordingly.

File-Level Changes

Standardize context usage in tests
  • Replace context.TODO()/Background() with t.Context() in unit tests
  • Convert Ginkgo BeforeEach/AfterEach/It to accept SpecContext (see the sketch below)
  • Pass the test-provided context into client.Get/Create/Delete and action.CanHandle/Handle calls
  Files: internal/utils/kubernetes/**/*.go, internal/controller/**/*_test.go, internal/testing/common/tsa/tsa.go, internal/action/**/*_test.go
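For the Ginkgo specs, the conversion looks roughly like this (a sketch; suite.Client() and namespace as used elsewhere in this PR):

// Before: a context captured by the closure.
ctx := context.Background()
BeforeEach(func() {
	Expect(suite.Client().Create(ctx, namespace)).To(Succeed())
})

// After: Ginkgo injects a SpecContext that is cancelled on spec
// timeout or interrupt, so API calls stop promptly.
BeforeEach(func(ctx SpecContext) {
	Expect(suite.Client().Create(ctx, namespace)).To(Succeed())
})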
Adopt per-test Gomega instances
  • Remove gomega.RegisterTestingT(t) calls
  • Introduce g := gomega.NewWithT(t) at the start of each test
  • Change gomega.Expect(...) to g.Expect(...)
  Files: internal/utils/kubernetes/**/*_test.go, internal/controller/**/*_test.go, internal/utils/tls/tls_test.go
Update table-driven verify function signatures
  • Add context.Context as the first parameter in verify funcs (as sketched below)
  • Adjust calls to pass ctx into tt.want.verify
  Files: internal/controller/**/*_test.go, internal/utils/kubernetes/**/deployment_test.go
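A compact sketch of the signature change and its call site (shapes taken from the excerpts later in this thread):

// before
verify func(Gomega, client.WithWatch)

// after: the context comes first so verify helpers can make client calls
verify func(context.Context, Gomega, client.WithWatch)

// call site inside each subtest
g := NewWithT(t)
ctx := t.Context()
if tt.want.verify != nil {
	tt.want.verify(ctx, g, c)
}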
Enhance Ginkgo suites with context timeouts
  • Add EnforceDefaultTimeoutsWhenUsingContexts() in suite_test.go files (sketched below)
  • Remove manual context.Background() usage in suites
  Files: internal/controller/*/suite_test.go
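In each suite_test.go the call sits next to the suite bootstrap; a sketch assuming the conventional dot-imports of ginkgo/v2 and gomega:

func TestControllerSuite(t *testing.T) {
	RegisterFailHandler(Fail)
	// By default Gomega disables its default timeouts when Eventually or
	// Consistently receives a context; this call restores the default
	// timeout as an upper bound even for context-aware assertions.
	EnforceDefaultTimeoutsWhenUsingContexts()
	RunSpecs(t, "Controller Suite")
}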

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.



qodo-merge-pro bot commented Sep 5, 2025

PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 3 🔵🔵🔵⚪⚪
🧪 PR contains tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

Assertion Style Consistency

New code switches to instance-local Gomega (NewWithT), which is good; ensure no remaining RegisterTestingT calls or package-level assertions linger in other modified tests, to avoid cross-test interference and flakiness.

const name = "dp"

func TestEnsureTrustedCA(t *testing.T) {
	t.Run("update existing object", func(t *testing.T) {
		g := gomega.NewWithT(t)
		ctx := t.Context()
		c := testAction.FakeClientBuilder().
			WithObjects(&v1.Deployment{
				ObjectMeta: v2.ObjectMeta{Name: name, Namespace: "default"},
				Spec: v1.DeploymentSpec{
					Template: core.PodTemplateSpec{
						Spec: core.PodSpec{
							Containers: []core.Container{
								{
									Name: name, Image: "test",
									Env: []core.EnvVar{
										{Name: "NAME", Value: "VALUE"},
									},
									VolumeMounts: []core.VolumeMount{
										{
											MountPath: "path",
											Name:      "mount",
										},
									},
								},
							},
							Volumes: []core.Volume{
								{
									Name: "mount",
								},
							},
						},
					},
				},
			}).
			Build()

		result, err := kubernetes.CreateOrUpdate(ctx, c,
			&v1.Deployment{ObjectMeta: v2.ObjectMeta{Name: name, Namespace: "default"}},
			TrustedCA(&v1alpha1.LocalObjectReference{Name: "test"}, name),
		)
		g.Expect(err).ToNot(gomega.HaveOccurred())

		g.Expect(result).To(gomega.Equal(controllerutil.OperationResultUpdated))

		existing := &v1.Deployment{}
		g.Expect(c.Get(ctx, client.ObjectKey{Namespace: "default", Name: name}, existing)).To(gomega.Succeed())
		g.Expect(existing.Spec.Template.Spec.Containers[0].Env).To(gomega.HaveLen(2))
		g.Expect(existing.Spec.Template.Spec.Containers[0].Env[0].Name).To(gomega.Equal("NAME"))
		g.Expect(existing.Spec.Template.Spec.Containers[0].Env[0].Value).To(gomega.Equal("VALUE"))

		g.Expect(existing.Spec.Template.Spec.Containers[0].Env[1].Name).To(gomega.Equal("SSL_CERT_DIR"))
		g.Expect(existing.Spec.Template.Spec.Containers[0].Env[1].Value).To(gomega.Equal("/var/run/configs/tas/ca-trust:/var/run/secrets/kubernetes.io/serviceaccount"))

		g.Expect(existing.Spec.Template.Spec.Containers[0].VolumeMounts).To(gomega.HaveLen(2))
		g.Expect(existing.Spec.Template.Spec.Containers[0].VolumeMounts[0].Name).To(gomega.Equal("mount"))
		g.Expect(existing.Spec.Template.Spec.Containers[0].VolumeMounts[0].MountPath).To(gomega.Equal("path"))

		g.Expect(existing.Spec.Template.Spec.Containers[0].VolumeMounts[1].Name).To(gomega.Equal("ca-trust"))
		g.Expect(existing.Spec.Template.Spec.Containers[0].VolumeMounts[1].MountPath).To(gomega.Equal("/var/run/configs/tas/ca-trust"))

		g.Expect(existing.Spec.Template.Spec.Volumes).To(gomega.HaveLen(2))
		g.Expect(existing.Spec.Template.Spec.Volumes[0].Name).To(gomega.Equal("mount"))
		g.Expect(existing.Spec.Template.Spec.Volumes[1].Name).To(gomega.Equal("ca-trust"))
		g.Expect(existing.Spec.Template.Spec.Volumes[1].Projected.Sources).To(gomega.HaveLen(1))
		g.Expect(existing.Spec.Template.Spec.Volumes[1].Projected.Sources[0].ConfigMap.Name).To(gomega.Equal("test"))

	})
}

func TestEnsureTLS(t *testing.T) {
	t.Run("update existing object", func(t *testing.T) {
		g := gomega.NewWithT(t)
		ctx := t.Context()
Verify Function Signature Change

The verify callbacks now accept a context; confirm all call sites pass t.Context() and that verify implementations consistently use the provided ctx instead of any leftover context variables.

	canHandle     bool
	result        *action.Result
	certCondition metav1.ConditionStatus
	verify        func(context.Context, Gomega, rhtasv1alpha1.FulcioStatus, client.WithWatch)
}
tests := []struct {
	name string
	env  env
	want want
}{
	{
		name: "generate new cert with default values",
		env: env{
			certSpec: rhtasv1alpha1.FulcioCert{
				OrganizationName:  "RH",
				OrganizationEmail: "[email protected]",
			},
			status: rhtasv1alpha1.FulcioStatus{},
		},
		want: want{
			canHandle:     true,
			result:        testAction.StatusUpdate(),
			certCondition: metav1.ConditionTrue,
			verify: func(ctx context.Context, g Gomega, fulcio rhtasv1alpha1.FulcioStatus, cli client.WithWatch) {
				g.Expect(fulcio.Certificate.CommonName).ToNot(BeEmpty())
				g.Expect(fulcio.Certificate.OrganizationEmail).To(Equal("[email protected]"))
				g.Expect(fulcio.Certificate.OrganizationName).To(Equal("RH"))
				g.Expect(fulcio.Certificate.PrivateKeyPasswordRef.Name).ToNot(BeEmpty())
				g.Expect(fulcio.Certificate.PrivateKeyRef.Name).ToNot(BeEmpty())
				g.Expect(fulcio.Certificate.CARef.Name).ToNot(BeEmpty())

				scr, err := kubernetes.FindSecret(ctx, cli, "default", FulcioCALabel)
				g.Expect(err).ToNot(HaveOccurred())
				g.Expect(scr.Name).To(Equal(fulcio.Certificate.CARef.Name))
			},
		},
	},
	{
		name: "generate new cert with missing private key",
		env: env{
			certSpec: rhtasv1alpha1.FulcioCert{
				OrganizationName:  "RH",
				OrganizationEmail: "[email protected]",
				PrivateKeyRef: &rhtasv1alpha1.SecretKeySelector{
					LocalObjectReference: rhtasv1alpha1.LocalObjectReference{
						Name: "fulcio-private",
					},
					Key: "private",
				},
			},
			status: rhtasv1alpha1.FulcioStatus{},
		},
		want: want{
			canHandle:     true,
			result:        testAction.Requeue(),
			certCondition: metav1.ConditionFalse,
			verify: func(ctx context.Context, g Gomega, fulcio rhtasv1alpha1.FulcioStatus, cli client.WithWatch) {
				g.Expect(fulcio.Certificate.CommonName).ToNot(BeEmpty())
				g.Expect(fulcio.Certificate.OrganizationEmail).To(Equal("[email protected]"))
				g.Expect(fulcio.Certificate.OrganizationName).To(Equal("RH"))
				g.Expect(fulcio.Certificate.PrivateKeyRef.Name).ToNot(BeEmpty())
				g.Expect(fulcio.Certificate.CARef).To(BeNil())

				_, err := kubernetes.FindSecret(ctx, cli, "default", FulcioCALabel)
				g.Expect(errors.IsNotFound(err)).To(BeTrue())
			},
		},
	},
	{
		name: "generate new cert with provided private key",
		env: env{
			certSpec: rhtasv1alpha1.FulcioCert{
				OrganizationName:  "RH",
				OrganizationEmail: "[email protected]",
				PrivateKeyRef: &rhtasv1alpha1.SecretKeySelector{
					LocalObjectReference: rhtasv1alpha1.LocalObjectReference{
						Name: "fulcio-private",
					},
					Key: "private",
				},
			},
			status: rhtasv1alpha1.FulcioStatus{},
			objects: []client.Object{
				&v1.Secret{
					ObjectMeta: metav1.ObjectMeta{Name: "fulcio-private", Namespace: "default"},
					Data:       map[string][]byte{"private": pemKey},
				},
			},
		},
		want: want{
			canHandle:     true,
			result:        testAction.StatusUpdate(),
			certCondition: metav1.ConditionTrue,
			verify: func(ctx context.Context, g Gomega, fulcio rhtasv1alpha1.FulcioStatus, cli client.WithWatch) {
				g.Expect(fulcio.Certificate.CommonName).ToNot(BeEmpty())
Context Propagation

Multiple verify lambdas and CanHandle invocations now use t.Context(); validate that any client.Get/List/Watch calls uniformly use this ctx to prevent mixing contexts within the same test.

}
type want struct {
	result *action.Result
	verify func(context.Context, Gomega, client.WithWatch, <-chan watch.Event)
}
tests := []struct {
	name string
	env  env
	want want
}{
	{
		name: "create empty sharding config",
		env: env{
			spec: rhtasv1alpha1.RekorSpec{
				Sharding: make([]rhtasv1alpha1.RekorLogRange, 0),
			},
		},
		want: want{
			result: testAction.StatusUpdate(),
			verify: func(ctx context.Context, g Gomega, c client.WithWatch, events <-chan watch.Event) {
				r := rhtasv1alpha1.Rekor{}
				g.Expect(c.Get(ctx, rekorNN, &r)).To(Succeed())
				g.Expect(r.Status.ServerConfigRef).ShouldNot(BeNil())
				g.Expect(r.Status.ServerConfigRef.Name).Should(ContainSubstring(cmName))

				cm := v1.ConfigMap{}
				g.Expect(c.Get(ctx, types.NamespacedName{Name: r.Status.ServerConfigRef.Name, Namespace: rekorNN.Namespace}, &cm)).To(Succeed())
				g.Expect(cm.Data).Should(HaveKeyWithValue(shardingConfigName, ""))

				g.Expect(events).To(HaveLen(1))
				g.Expect(events).To(Receive(
					And(
						WithTransform(getEventType, Equal(watch.Added)),
						WithTransform(getEventObjectName, Equal(cm.Name)),
					)))
			},
		},
	},
	{
		name: "create sharding config with 2 shards",
		env: env{
			spec: rhtasv1alpha1.RekorSpec{
				Sharding: []rhtasv1alpha1.RekorLogRange{
					{
						TreeID:           222222,
						TreeLength:       10,
						EncodedPublicKey: "LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0=",
					},
					{
						TreeID:           333333,
						TreeLength:       20,
						EncodedPublicKey: "LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0=",
					},
				},
			},
		},
		want: want{
			result: testAction.StatusUpdate(),
			verify: func(ctx context.Context, g Gomega, c client.WithWatch, events <-chan watch.Event) {
				r := rhtasv1alpha1.Rekor{}
				g.Expect(c.Get(ctx, rekorNN, &r)).To(Succeed())
				g.Expect(r.Status.ServerConfigRef).ShouldNot(BeNil())
				g.Expect(r.Status.ServerConfigRef.Name).Should(ContainSubstring(cmName))

				cm := v1.ConfigMap{}
				g.Expect(c.Get(ctx, types.NamespacedName{Name: r.Status.ServerConfigRef.Name, Namespace: rekorNN.Namespace}, &cm)).To(Succeed())
				g.Expect(cm.Data).Should(HaveKey(shardingConfigName))

				rlr := make([]rhtasv1alpha1.RekorLogRange, 0)
				g.Expect(yaml.Unmarshal([]byte(cm.Data[shardingConfigName]), &rlr)).To(Succeed())
				g.Expect(rlr).Should(Equal(r.Spec.Sharding))

				g.Expect(events).To(HaveLen(1))
				g.Expect(events).To(Receive(
					And(
						WithTransform(getEventType, Equal(watch.Added)),
						WithTransform(getEventObjectName, Equal(cm.Name)),
					)))
			},
		},
	},
	{
		name: "update sharding config",
		env: env{
			spec: rhtasv1alpha1.RekorSpec{
				Sharding: []rhtasv1alpha1.RekorLogRange{
					{
						TreeID:     111111,
						TreeLength: 10,
					},
					{
						TreeID:     222222,
						TreeLength: 10,
					},
				},
			},
			status: rhtasv1alpha1.RekorStatus{
				ServerConfigRef: &rhtasv1alpha1.LocalObjectReference{Name: cmName + "old"},
			},
			objects: []client.Object{
				&v1.ConfigMap{
					ObjectMeta: metav1.ObjectMeta{
						Namespace: "default",
						Name:      cmName + "old",
					},
					Data: errors.IgnoreError(createShardingConfigData([]rhtasv1alpha1.RekorLogRange{
						{
							TreeID:     111111,
							TreeLength: 10,
						},
					})),
				},
			},
		},
		want: want{
			result: testAction.StatusUpdate(),
			verify: func(ctx context.Context, g Gomega, c client.WithWatch, events <-chan watch.Event) {
				r := rhtasv1alpha1.Rekor{}
				g.Expect(c.Get(ctx, rekorNN, &r)).To(Succeed())
				g.Expect(r.Status.ServerConfigRef).ShouldNot(BeNil())
				g.Expect(r.Status.ServerConfigRef.Name).Should(ContainSubstring(cmName))
				g.Expect(r.Status.ServerConfigRef.Name).ShouldNot(Equal(cmName + "old"))

				cm := v1.ConfigMap{}
				g.Expect(c.Get(ctx, types.NamespacedName{Name: r.Status.ServerConfigRef.Name, Namespace: rekorNN.Namespace}, &cm)).To(Succeed())
				g.Expect(cm.Data).Should(HaveKey(shardingConfigName))

				rlr := make([]rhtasv1alpha1.RekorLogRange, 0)
				g.Expect(yaml.Unmarshal([]byte(cm.Data[shardingConfigName]), &rlr)).To(Succeed())
				g.Expect(rlr).Should(Equal(r.Spec.Sharding))

				g.Expect(events).To(HaveLen(2))
				g.Expect(events).To(Receive(
					And(
						WithTransform(getEventType, Equal(watch.Deleted)),
						WithTransform(getEventObjectName, Equal(cmName+"old")),
					)))
				g.Expect(events).To(Receive(
					And(
						WithTransform(getEventType, Equal(watch.Added)),
						WithTransform(getEventObjectName, Equal(cm.Name)),
					)))
			},
		},
	},
	{
		name: "update empty sharding config",
		env: env{
			spec: rhtasv1alpha1.RekorSpec{
				Sharding: []rhtasv1alpha1.RekorLogRange{
					{
						TreeID:     123456,
						TreeLength: 10,
					},
				},
			},
			status: rhtasv1alpha1.RekorStatus{
				ServerConfigRef: &rhtasv1alpha1.LocalObjectReference{Name: cmName + "old"},
			},
			objects: []client.Object{
				&v1.ConfigMap{
					ObjectMeta: metav1.ObjectMeta{
						Namespace: "default",
						Name:      cmName + "old",
					},
					Data: errors.IgnoreError(createShardingConfigData([]rhtasv1alpha1.RekorLogRange{})),
				},
			},
		},
		want: want{
			result: testAction.StatusUpdate(),
			verify: func(ctx context.Context, g Gomega, c client.WithWatch, events <-chan watch.Event) {
				r := rhtasv1alpha1.Rekor{}
				g.Expect(c.Get(ctx, rekorNN, &r)).To(Succeed())
				g.Expect(r.Status.ServerConfigRef).ShouldNot(BeNil())
				g.Expect(r.Status.ServerConfigRef.Name).Should(ContainSubstring(cmName))
				g.Expect(r.Status.ServerConfigRef.Name).ShouldNot(Equal(cmName + "old"))

				cm := v1.ConfigMap{}
				g.Expect(c.Get(ctx, types.NamespacedName{Name: r.Status.ServerConfigRef.Name, Namespace: rekorNN.Namespace}, &cm)).To(Succeed())
				g.Expect(cm.Data).Should(HaveKey(shardingConfigName))

				rlr := make([]rhtasv1alpha1.RekorLogRange, 0)
				g.Expect(yaml.Unmarshal([]byte(cm.Data[shardingConfigName]), &rlr)).To(Succeed())
				g.Expect(rlr).Should(Equal(r.Spec.Sharding))

				g.Expect(events).To(HaveLen(2))
				g.Expect(events).To(Receive(
					And(
						WithTransform(getEventType, Equal(watch.Deleted)),
						WithTransform(getEventObjectName, Equal(cmName+"old")),
					)))
				g.Expect(events).To(Receive(
					And(
						WithTransform(getEventType, Equal(watch.Added)),
						WithTransform(getEventObjectName, Equal(cm.Name)),
					)))
			},
		},
	},
	{
		name: "spec.sharding == sharding ConfigMap (empty)",
		env: env{
			spec: rhtasv1alpha1.RekorSpec{},
			status: rhtasv1alpha1.RekorStatus{
				ServerConfigRef: &rhtasv1alpha1.LocalObjectReference{Name: cmName + "old"},
			},
			objects: []client.Object{
				&v1.ConfigMap{
					ObjectMeta: metav1.ObjectMeta{
						Namespace: "default",
						Name:      cmName + "old",
					},
					Data: errors.IgnoreError(createShardingConfigData([]rhtasv1alpha1.RekorLogRange{})),
				},
			},
		},
		want: want{
			result: testAction.Continue(),
			verify: func(ctx context.Context, g Gomega, c client.WithWatch, events <-chan watch.Event) {
				r := rhtasv1alpha1.Rekor{}
				g.Expect(c.Get(ctx, rekorNN, &r)).To(Succeed())
				g.Expect(r.Status.ServerConfigRef).ShouldNot(BeNil())
				g.Expect(r.Status.ServerConfigRef.Name).Should(Equal(cmName + "old"))

				cm := v1.ConfigMap{}
				g.Expect(c.Get(ctx, types.NamespacedName{Name: r.Status.ServerConfigRef.Name, Namespace: rekorNN.Namespace}, &cm)).To(Succeed())
				g.Expect(cm.Data).Should(HaveKeyWithValue(shardingConfigName, ""))

				rlr := make([]rhtasv1alpha1.RekorLogRange, 0)
				g.Expect(yaml.Unmarshal([]byte(cm.Data[shardingConfigName]), &rlr)).To(Succeed())
				g.Expect(rlr).Should(BeEmpty())
			},
		},
	},
	{
		name: "spec.sharding == sharding ConfigMap",
		env: env{
			spec: rhtasv1alpha1.RekorSpec{
				Sharding: []rhtasv1alpha1.RekorLogRange{
					{
						TreeID:     111111,
						TreeLength: 10,
					},
				},
			},
			status: rhtasv1alpha1.RekorStatus{
				ServerConfigRef: &rhtasv1alpha1.LocalObjectReference{Name: cmName + "old"},
			},
			objects: []client.Object{
				&v1.ConfigMap{
					ObjectMeta: metav1.ObjectMeta{
						Namespace: "default",
						Name:      cmName + "old",
					},
					Data: errors.IgnoreError(createShardingConfigData([]rhtasv1alpha1.RekorLogRange{
						{
							TreeID:     111111,
							TreeLength: 10,
						},
					})),
				},
			},
		},
		want: want{
			result: testAction.Continue(),
			verify: func(ctx context.Context, g Gomega, c client.WithWatch, events <-chan watch.Event) {
				r := rhtasv1alpha1.Rekor{}
				g.Expect(c.Get(ctx, rekorNN, &r)).To(Succeed())
				g.Expect(r.Status.ServerConfigRef).ShouldNot(BeNil())
				g.Expect(r.Status.ServerConfigRef.Name).Should(Equal(cmName + "old"))

				cm := v1.ConfigMap{}
				g.Expect(c.Get(ctx, types.NamespacedName{Name: r.Status.ServerConfigRef.Name, Namespace: rekorNN.Namespace}, &cm)).To(Succeed())
				g.Expect(cm.Data).Should(HaveKey(shardingConfigName))

				rlr := make([]rhtasv1alpha1.RekorLogRange, 0)
				g.Expect(yaml.Unmarshal([]byte(cm.Data[shardingConfigName]), &rlr)).To(Succeed())
				g.Expect(rlr).Should(Equal(r.Spec.Sharding))

				g.Expect(events).To(BeEmpty())
			},
		},
	},
	{
		name: "status.serverConfigRef not found",
		env: env{
			spec: rhtasv1alpha1.RekorSpec{},
			status: rhtasv1alpha1.RekorStatus{
				ServerConfigRef: &rhtasv1alpha1.LocalObjectReference{Name: cmName + "deleted"},
			},
		},
		want: want{
			result: testAction.StatusUpdate(),
			verify: func(ctx context.Context, g Gomega, c client.WithWatch, events <-chan watch.Event) {
				r := rhtasv1alpha1.Rekor{}
				g.Expect(c.Get(ctx, rekorNN, &r)).To(Succeed())
				g.Expect(r.Status.ServerConfigRef).ShouldNot(BeNil())
				g.Expect(r.Status.ServerConfigRef.Name).ShouldNot(Equal(cmName + "deleted"))
				g.Expect(r.Status.ServerConfigRef.Name).Should(ContainSubstring(cmName))

				cm := v1.ConfigMap{}
				g.Expect(c.Get(ctx, types.NamespacedName{Name: r.Status.ServerConfigRef.Name, Namespace: rekorNN.Namespace}, &cm)).To(Succeed())
				g.Expect(cm.Data).Should(HaveKeyWithValue(shardingConfigName, ""))

				g.Expect(events).To(HaveLen(1))
				g.Expect(events).To(Receive(
					And(
						WithTransform(getEventType, Equal(watch.Added)),
						WithTransform(getEventObjectName, Equal(cm.Name)),
					)))
			},
		},
	},
	{
		name: "delete unassigned sharding configmap",
		env: env{
			spec:   rhtasv1alpha1.RekorSpec{},
			status: rhtasv1alpha1.RekorStatus{},
			objects: []client.Object{
				&v1.ConfigMap{
					ObjectMeta: metav1.ObjectMeta{
						Namespace: "default",
						Name:      cmName + "old",
						Labels:    shardingConfigLabels,
					},
					Data: map[string]string{shardingConfigName: ""},
				},
			},
		},
		want: want{
			result: testAction.StatusUpdate(),
			verify: func(ctx context.Context, g Gomega, c client.WithWatch, events <-chan watch.Event) {
				r := rhtasv1alpha1.Rekor{}
				g.Expect(c.Get(ctx, rekorNN, &r)).To(Succeed())
				g.Expect(r.Status.ServerConfigRef).ShouldNot(BeNil())
				g.Expect(r.Status.ServerConfigRef.Name).ShouldNot(Equal(cmName + "old"))

				g.Expect(events).To(HaveLen(2))
				g.Expect(events).To(Receive(
					And(
						WithTransform(getEventType, Equal(watch.Added)),
						WithTransform(getEventObjectName, Equal(r.Status.ServerConfigRef.Name)),
					)))
				g.Expect(events).To(Receive(
					And(
						WithTransform(getEventType, Equal(watch.Deleted)),
						WithTransform(getEventObjectName, Equal(cmName+"old")),
					)))
			},
		},
	},
	{
		name: "remove invalid config and keep other CM",
		env: env{
			spec:   rhtasv1alpha1.RekorSpec{},
			status: rhtasv1alpha1.RekorStatus{},
			objects: []client.Object{
				&v1.ConfigMap{
					ObjectMeta: metav1.ObjectMeta{
						Namespace: "default",
						Name:      "keep",
						Labels:    labels.For(actions.ServerComponentName, actions.ServerDeploymentName, "rekor"),
					},
					Data: map[string]string{},
				},
				&v1.ConfigMap{
					ObjectMeta: metav1.ObjectMeta{
						Namespace: "default",
						Name:      cmName + "old",
						Labels:    shardingConfigLabels,
					},
					Data: map[string]string{shardingConfigName: "fake"},
				},
			},
		},
		want: want{
			result: testAction.StatusUpdate(),
			verify: func(ctx context.Context, g Gomega, c client.WithWatch, events <-chan watch.Event) {
				r := rhtasv1alpha1.Rekor{}
				g.Expect(c.Get(ctx, rekorNN, &r)).To(Succeed())
				g.Expect(r.Status.ServerConfigRef).ShouldNot(BeNil())
				g.Expect(r.Status.ServerConfigRef.Name).Should(Not(Equal(cmName + "old")))

				g.Expect(c.Get(ctx, types.NamespacedName{Name: cmName + "old", Namespace: rekorNN.Namespace}, &v1.ConfigMap{})).To(HaveOccurred())
				g.Expect(c.Get(ctx, types.NamespacedName{Name: "keep", Namespace: rekorNN.Namespace}, &v1.ConfigMap{})).To(Succeed())

				g.Expect(events).To(HaveLen(2))
				g.Expect(events).To(Receive(
					And(
						WithTransform(getEventType, Equal(watch.Added)),
						WithTransform(getEventObjectName, Equal(r.Status.ServerConfigRef.Name)),
					)))
				g.Expect(events).To(Receive(
					And(
						WithTransform(getEventType, Equal(watch.Deleted)),
						WithTransform(getEventObjectName, Equal(cmName+"old")),
					)))
			},
		},
	},
}
for _, tt := range tests {
	t.Run(tt.name, func(t *testing.T) {
		g := NewWithT(t)
		ctx := t.Context()
		instance := &rhtasv1alpha1.Rekor{
			ObjectMeta: metav1.ObjectMeta{
				Name:      "rekor",
				Namespace: "default",
			},
			Spec:   tt.env.spec,
			Status: tt.env.status,
		}

		meta.SetStatusCondition(&instance.Status.Conditions,
			metav1.Condition{Type: constants.Ready, Reason: constants.Creating},
		)

		c := testAction.FakeClientBuilder().
			WithObjects(instance).
			WithStatusSubresource(instance).
			WithObjects(tt.env.objects...).
			Build()

		watchCm, err := c.Watch(ctx, &v1.ConfigMapList{}, client.InNamespace("default"))
		g.Expect(err).To(Not(HaveOccurred()))

		a := testAction.PrepareAction(c, NewShardingConfigAction())

		if got := a.Handle(ctx, instance); !reflect.DeepEqual(got, tt.want.result) {
			t.Errorf("CanHandle() = %v, want %v", got, tt.want.result)
		}
		watchCm.Stop()
		if tt.want.verify != nil {
			tt.want.verify(ctx, g, c, watchCm.ResultChan())
		}


sourcery-ai bot left a comment


Hey there - I've reviewed your changes - here's some feedback:

  • Consider calling t.Parallel() at the top of your individual TestXxx functions to allow tests to run concurrently and speed up the suite.
  • There’s a lot of repeated boilerplate initializing g := NewWithT(t) and ctx := t.Context(); you could extract that into a small helper or test setup function to reduce duplication.
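One possible shape for the suggested helper (hypothetical name newTestEnv):

func newTestEnv(t *testing.T) (*gomega.WithT, context.Context) {
	t.Helper()
	return gomega.NewWithT(t), t.Context()
}

// usage at the top of each subtest:
// g, ctx := newTestEnv(t)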
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- Consider calling t.Parallel() at the top of your individual TestXxx functions to allow tests to run concurrently and speed up the suite.
- There’s a lot of repeated boilerplate initializing `g := NewWithT(t)` and `ctx := t.Context()`; you could extract that into a small helper or test setup function to reduce duplication.

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.


qodo-merge-pro bot commented Sep 5, 2025

PR Code Suggestions ✨

Explore these optional code suggestions:

Category: High-level
Confirm toolchain and deps versions

The PR relies on testing.T.Context(), SpecContext, and
EnforceDefaultTimeoutsWhenUsingContexts(), which require newer Go, Ginkgo,
and Gomega versions. Ensure go.mod and CI images pin a compatible Go
toolchain (testing.T.Context was introduced in Go 1.24) and sufficiently
recent Ginkgo v2 and Gomega releases exposing these APIs, otherwise the
test suite will not compile or will behave inconsistently.

Examples:

internal/controller/ctlog/ctlog_controller_test.go [59-82]
		BeforeEach(func(ctx SpecContext) {
			By("Creating the Namespace to perform the tests")
			err := suite.Client().Create(ctx, namespace)
			Expect(err).To(Not(HaveOccurred()))
		})

		AfterEach(func(ctx SpecContext) {
			By("removing the custom resource for the Kind CTlog")
			found := &v1alpha1.CTlog{}
			err := suite.Client().Get(ctx, typeNamespaceName, found)

 ... (clipped 14 lines)
internal/controller/ctlog/suite_test.go [45]
	EnforceDefaultTimeoutsWhenUsingContexts()

Solution Walkthrough:

Before:

// In test files
import "context"

func TestSomething(t *testing.T) {
    // ...
    a.CanHandle(context.TODO(), &instance)
    // ...
}

// In Ginkgo suite
var _ = Describe("...", func() {
    ctx := context.Background()

    BeforeEach(func() {
        // uses ctx from closure
        suite.Client().Create(ctx, ...)
    })
})

After:

// In test files
func TestSomething(t *testing.T) {
    // ...
    a.CanHandle(t.Context(), &instance)
    // ...
}

// In Ginkgo suite
var _ = Describe("...", func() {
    BeforeEach(func(ctx SpecContext) {
        // uses ctx from SpecContext
        suite.Client().Create(ctx, ...)
    })
})

// In suite_test.go
EnforceDefaultTimeoutsWhenUsingContexts()
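
A go.mod fragment for the pinning this suggestion asks about (version numbers are illustrative assumptions; verify the actual minimums against each project's changelog):

go 1.24 // testing.T.Context was added in Go 1.24

require (
	github.com/onsi/ginkgo/v2 v2.23.0 // SpecContext-aware setup and spec nodes
	github.com/onsi/gomega v1.36.0 // provides EnforceDefaultTimeoutsWhenUsingContexts
)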
Suggestion importance[1-10]: 9


Why: The suggestion correctly identifies that the PR introduces dependencies on newer versions of Go and Ginkgo, and raises a critical point about ensuring the build environment and dependencies are updated, which is essential for the project's stability.

Impact: High

osmman marked this pull request as draft September 5, 2025 15:13
