KEP-4969: Cluster Domain Downward API #4972

Open
wants to merge 12 commits into
base: master

Conversation

nightkr

@nightkr nightkr commented Nov 21, 2024

  • One-line PR description: Initial KEP draft
  • Other comments:

@k8s-ci-robot k8s-ci-robot added kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. sig/network Categorizes an issue or PR as relevant to SIG Network. labels Nov 21, 2024
@k8s-ci-robot
Contributor

Welcome @nightkr!

It looks like this is your first PR to kubernetes/enhancements 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/enhancements has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Nov 21, 2024
@k8s-ci-robot
Contributor

Hi @nightkr. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. label Nov 21, 2024
@nightkr nightkr mentioned this pull request Nov 21, 2024
Contributor

@lfrancke lfrancke left a comment

just two minor nitpicks/typos

Comment on lines 163 to 164
Currently there is no way for cluster workloads to query for this domain name,
leaving them either use relative domain names or take it as manual configuration.
Contributor

Suggested change
Currently there is no way for cluster workloads to query for this domain name,
leaving them either use relative domain names or take it as manual configuration.
Currently, there is no way for cluster workloads to query this domain name,
leaving them to either use relative domain names or configure it manually.

Author

6c655bf

I left "query for this" as-is, because I read the revised variant as "ask what the domain name (that I already know) means" rather than "ask what the domain name is".

Currently there is no way for cluster workloads to query for this domain name,
leaving them either use relative domain names or take it as manual configuration.

This KEP proposes adding a new Downward API for that workloads can use to request it.
Contributor

Suggested change
This KEP proposes adding a new Downward API for that workloads can use to request it.
This KEP proposes adding a new Downward API that workloads can use to request it.


@k8s-ci-robot k8s-ci-robot added the do-not-merge/invalid-commit-message Indicates that a PR should not merge because it has an invalid commit message. label Nov 21, 2024
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/invalid-commit-message Indicates that a PR should not merge because it has an invalid commit message. label Nov 21, 2024
Comment on lines +333 to +334
- `nodePropertyRef` (@aojea)
- `runtimeConfigs` (@thockin)
Member

I guess this is out of scope of this KEP, but this change sets a precedent for other configs that could be passed down into the Pod.
I wonder if someone with more experience than me has an idea/vision of what that could look like, which may then determine which name to decide on?

Author

Looking forward, I suspect that both cases will become relevant eventually. This property just occupies an awkward spot since it doesn't really have one clean owner.

Member

We have tried not to define magic words, neither as env vars nor as sources, but as sources is far less worrisome than as env.

If this is the path we go, it's not so bad. I don't know if I would call it clusterPropertyRef since there's no cluster property to refer to. But something like runtimeProperty isn't egregious.

@adrianmoisey
Member

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Nov 22, 2024
@nightkr
Author

nightkr commented Nov 22, 2024

/retest

clusterPropertyRef: clusterDomain
```

`foo` can now perform the query by running `curl http://bar.$NAMESPACE.svc.$CLUSTER_DOMAIN/`.
Contributor

@sftim sftim Nov 22, 2024

Extra credit: define a command line argument that relies on interpolating $(CLUSTER_DOMAIN)

Member

I honestly forgot that we do env expansion on command and args - I had to go look it up. It SHOULD work.
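
For illustration, a minimal sketch of what that interpolation could look like, assuming the `clusterPropertyRef: clusterDomain` source proposed in this KEP (not an existing field):

```yaml
env:
  - name: NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: CLUSTER_DOMAIN
    valueFrom:
      # proposed by this KEP; not part of the current Pod API
      clusterPropertyRef: clusterDomain
args:
  # the kubelet expands $(VAR) references in args against the container's env
  - "--backend-url=http://bar.$(NAMESPACE).svc.$(CLUSTER_DOMAIN)/"
```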

environments, since `node-b` might not be able to resolve `cluster.local`
FQDNs correctly.

For this KEP to make sense, this would have to be explicitly prohibited.
Contributor

I don't know if that's true.

For Pod consumers, the downward API is implemented by the kubelet, so each kubelet can expose its local view of the cluster domain.
We would still strongly recommend against having multiple cluster domains defined across your cluster - anything else sounds really unwise - but technically it can be made to work.

Author

Yeah, it's one of those... it would be implementable without making such a declaration, but it would likely be another thing leading to pretty confusing behaviour. Maybe the language can be softened somewhat.

Contributor

We can mention the need to emphasize that the existing thing, already a bad idea, is even more of a bad idea.

Member

Even if we write a document that says "setting these differently within a cluster is prohibited" that does ~nothing to enforce that clusters will actually follow that behavior, and what clusters actually do is what drives consistency.

We have conformance, but I wouldn't be terribly enthused about a test that attempts to validate every node, those are meant to catch API breaks, not runtime configuration.

I don't see why a pod wouldn't work fine; all that needs to be true is that the cluster domain reported to the pod is routable on the network to get to services. The consistency part doesn't actually seem relevant.

Comment on lines +298 to +303
<!--
What are the caveats to the proposal?
What are some important details that didn't come across above?
Go in to as much detail as necessary here.
This might be a good place to talk about core concepts and how they relate.
-->
Contributor

A related detail.

  • A kubelet that is formally aware of its cluster domain could report back either the cluster domain value, or a hash of it, via .status.
  • A kubelet that is formally aware of its cluster domain could report back either the cluster domain value, or a hash of it, via a somethingz HTTP endpoint.
  • A kubelet that is formally aware of its cluster domain could report back either the cluster domain value, or a hash of it, via a Prometheus static-series label
    eg kubelet_cluster_domain{domain_name="cluster.example"} 1

If we decide to make the API server aware of cluster domain, adding that info could help with troubleshooting and general observability.

Author

It's already available via the kubelet's /configz (mentioned in Alternatives). I don't have a strong opinion either way on adding it to the Node status.

Comment on lines +899 to +905
The ConfigMap written by k3s[^prior-art-k3s] could be blessed, requiring that
all other distributions also provide it. However, this would require additional
migration effort from each distribution.

Additionally, this would be problematic to query for: users would have to query
it manually using the Kubernetes API (since ConfigMaps cannot be mounted across
Namespaces), and users would require RBAC permission to query wherever it is stored.
Contributor

I thought centralized management was a non-goal though? I recommend highlighting that this alternative isn't aligned with this KEP's goals.

Author

In terms of the non-goal I was mostly referring to configuring the kubelet itself (which would have been a retread of #281). Maybe that could be clarified.

Contributor

I'd clarify that. I took the non-goal to mean that the value might be aligned across the cluster, but that this KEP explicitly avoids having Kubernetes help with that.


Comment on lines +909 to +910
This roughly shares the arguments for/against as [the ConfigMap alternative](#alternative-configmap),
although it would allow more precise RBAC policy targeting.
Contributor

I thought centralized management was a non-goal though? I recommend highlighting that this alternative isn't aligned with this KEP's goals.

# The following PRR answers are required at alpha release
# List the feature gate name and the components for which it must be enabled
feature-gates:
- name: MyFeature
Contributor

We could pick the feature gate name even at provisional stage.

Author

Sure. PodClusterDomain would align with PodHostIPs (the only other downward API feature gate listed on https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/), but happy to hear other takes.

Contributor

SGTM, especially for provisional.



## Design Details

A new Downward API `clusterPropertyRef: clusterDomain` would be introduced, which can be projected into an environment variable or a volume file.
Contributor

What's this a reference to?

Author

I was mostly trying to be consistent with fieldRef and resourceFieldRef, which also don't quite correspond to the pod/container objects (since they're not quite 1:1), but I'm certainly not married to it.

Contributor

It's actually a node property, and making it a cluster-wide property is an explicit non-goal of the KEP.

Author

Tried to clarify the non-goal in c3f33dd

@bowei
Member

bowei commented Jan 16, 2025

/cc @bowei

@aojea
Member

aojea commented Jan 21, 2025

Ok, I had a lot of fun trying to figure out this puzzle. All the solutions we are exploring sound like workarounds, when setting the domainname within the UTS namespace seems the most appropriate and semantically correct way to handle the Kubernetes cluster domain within a pod: the UTS namespace is specifically designed to manage hostname and domain name information.

The OCI specification introduced the domainname field in version 1.1.0

containerd already implements this feature to set the domainname (containerd/containerd#7869), and I opened an issue in CRI-O to implement it (cri-o/cri-o#8927); it seems relatively simple to do.

The hack in #4972 (comment) works because we mount /etc/hosts, not because it appends hostname + domainname:

	// The value of hostname is the short host name and it is sent to makeMounts to create /etc/hosts file.
	hostname, hostDomainName, err := kl.GeneratePodHostNameAndDomain(pod)
	if err != nil {
		return nil, nil, err
	}

So, if we agree to set the UTS namespace domainname with the cluster domain, the work to do it should be:

  1. Define the user-facing behavior: setting the domainname to the cluster domain by default or via a pod.spec field (see the sketch after this comment),
  2. Enhance the CRI API to pass the domainname; the kubelet communicates with the container runtime via the CRI API, and already sends the hostname via the PodSandboxConfig
    https://github.com/kubernetes/cri-api/blob/a2aeca53612b08980d3fc97a81617190f85d569a/pkg/apis/runtime/v1/api.proto#L454-L462
// PodSandboxConfig holds all the required and optional fields for creating a
// sandbox.
message PodSandboxConfig {
    // Metadata of the sandbox. This information will uniquely identify the
    // sandbox, and the runtime should leverage this to ensure correct
    // operation. The runtime may also use this information to improve UX, such
    // as by constructing a readable name.
    PodSandboxMetadata metadata = 1;
    // Hostname of the sandbox. Hostname could only be empty when the pod
    // network namespace is NODE.
    string hostname = 2;
  3. Instruct the container runtimes to set the new field domainname on the pod sandbox if it is set

@thockin WDYT, I see you already touched on this a long time ago moby/moby#14282 (comment)
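
Purely as an illustration of step 1 above, a hypothetical pod.spec shape for the opt-in could look like the following; the field name here is invented and not part of any current or proposed API:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  hostname: foo
  # hypothetical opt-in knob, not part of the current Pod API:
  setClusterDomainAsDomainname: true
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
```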

@sftim
Contributor

sftim commented Jan 21, 2025

I like the UTS namespace idea, but bear in mind Windows nodes (and Pods) have DNS but don't have UTS namespaces.

Also, the API server doesn't - right now - know the cluster domain name, so defaulting within the Pod API is tricky.

@aojea
Member

aojea commented Jan 21, 2025

I like the UTS namespace idea, but bear in mind Windows nodes (and Pods) have DNS but don't have UTS namespaces.

Windows does a lot of things differently and does not implement all pod features, so I would not mix it in here

Also, the API server doesn't - right now - know the cluster domain name, so defaulting within the Pod API is tricky.

The domain name is a kubelet property; as discussed in other places (#4972 (comment)), it is hard to move it to a cluster-global property. My point is that the kubelet sends domainname via the CRI API with its configured cluster domain; the question is how to expose that.

@sftim
Contributor

sftim commented Jan 21, 2025

If we set hostname and subdomain within the right part of a Pod spec, do we not already pass these in via CRI? If not, that could almost be a separate KEP because it sounds like a good idea in its own right.

However, even if a Pod has a custom domain name important-host-42.public.example, it might still want to discover its backing services and that requires making it aware of the cluster domain name. We can't use the UTS namespace for that purpose because the cluster domain may not be public.example.

@nightkr
Author

nightkr commented Jan 21, 2025

Using the NIS domain could lead to confusion, since it's currently just forwarded from the host system (tested under Arch, v1.31.4+k3s1): sysctl kernel.domainname=asdf && kubectl run --rm -i foo --image=archlinux -- sh -c 'pacman -Sy inetutils --noconfirm && hostname --nis'. There wouldn't be any way for the workload to determine whether the domain name actually means anything.

@thockin
Member

thockin commented Jan 21, 2025

I am not sure what all the implications of setting domain name are. I will have to go read more. For example, should it include namespace? Does nsswitch consider it?

@aojea
Member

aojea commented Jan 22, 2025

Using the NIS domain could lead to confusion, since it's currently just forwarded from the host system

@nightkr that does not match my observation and the definition of uts_namespaces(7)

UTS namespaces provide isolation of two system identifiers: the
hostname and the NIS domain name. These identifiers are set
using sethostname(2) and setdomainname(2), and can be retrieved
using uname(2), gethostname(2), and getdomainname(2). Changes
made to these identifiers are visible to all other processes in
the same UTS namespace, but are not visible to processes in other
UTS namespaces.

is it possible that the runtime of that k3s distro is not setting the UTS namespaces for the pods and inheriting the host UTS namespace?

I am not sure what all the implications of setting domain name are. I will have to go read more. For example, should it include namespace? Does nsswitch consider it?

@thockin that is why I do think this should be opt-in; this subsystem is complicated and different distros make different decisions. From the hostname -h description:

Description:
This command can get or set the host name or the NIS domain name. You can
also get the DNS domain or the FQDN (fully qualified domain name).
Unless you are using bind or NIS for host lookups you can change the
FQDN (Fully Qualified Domain Name) and the DNS domain name (which is
part of the FQDN) in the /etc/hosts file.

IIUC it is possible to configure the pod to use the NIS domain name if you configure your nsswitch.conf to do that

@nightkr
Author

nightkr commented Jan 22, 2025

is it possible that the runtime of that k3s distro is not setting the UTS namespaces for the pods and inheriting the host UTS namespace?

@aojea At least in the k3s environment I tested against, the domain is copied from the host when creating the container. If I change the host's domain afterwards then the container's domain stays the same (until it is deleted and recreated, anyway).

Member

@thockin thockin left a comment

In #4972 (comment) Antonio argues for using the NIS domainname. With my 30 years of Linux sysadmin experience, I cannot tell you the exact implications of that. It's so poorly documented that it feels like asking for trouble.

That said, docker does it. The standard Linux tools at least seem to look at it.

$ docker run --privileged -ti ubuntu sh -c "hostname; hostname -f; domainname; dnsdomainname"
0958e719bd8b
0958e719bd8b
(none)

$ docker run --privileged -ti --hostname foo ubuntu sh -c "hostname; hostname -f; domainname; dnsdomainname"
foo
foo
(none)

$ docker run --privileged -ti --hostname foo.example.com ubuntu sh -c "hostname; hostname -f; domainname; dnsdomainname"
foo.example.com
foo.example.com
(none)
example.com

$ docker run --privileged -ti --hostname foo --domainname example.com ubuntu sh -c "hostname; hostname -f; domainname; dnsdomainname"
foo
foo.example.com
example.com
example.com

$ docker run --privileged -ti --hostname foo.bar --domainname example.com ubuntu sh -c "hostname; hostname -f; domainname; dnsdomainname"
foo.bar
hostname: No address associated with hostname
example.com
dnsdomainname: No address associated with hostname

Using --hostname and --domainname is the only thing that got close to what I expected. I straced these commands and, frankly, it's all over the place as to what's happening. :)

Where this goes wrong for me is that these are trying to divine the Pod's FQDN, when we have never defined that in kubernetes. I think we could/should but see other comments about DNS schema and evolution. If we had an FQDN for pods it would PROBABLY be something like <hostname>.<namespace>.pod.<zone>. You are not trying to learn the pod's FQDN, you are trying to learn the zone.

So I have another alternative. It's the first time I'm writing it down, so bear with me.

This belongs in status.

Why not put the effective DNS Config (PodDNSConfig plus) into status?

type PodDNSStatus struct {
    Hostname string

    Zone string

    Nameservers []string

    Searches []string

    Options []PodDNSConfigOption
}

Then this becomes just one more fieldRef: status.dns.zone or something like that.

I get the freedom to one day fix DNS. People who use custom DNS configs get their own data here. It can vary by kubelet if someone wants to do that.

Now, shoot me down?
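
If something like that landed, consuming it could be just another downward API reference; a hedged sketch, assuming the hypothetical `status.dns.zone` field from the PodDNSStatus idea above:

```yaml
env:
  - name: CLUSTER_ZONE
    valueFrom:
      fieldRef:
        # hypothetical field path; assumes the PodDNSStatus sketch above
        fieldPath: status.dns.zone
```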


## Summary

All Kubernetes Services (and many Pods) have Fully Qualified Domain Names (FQDNs)
Member

Strictly speaking, Services do not have a DNS name. Services have a name. Most Kube clusters use DNS to expose services, by mapping them into a DNS zone with the schema you listed. This is not actually a requirement, though it is so pervasive it probably is de facto required.

Contributor

Can you pass conformance without cluster DNS? I never checked.

Member

After 10 years of Hyrum's Law, perhaps not, but it doesn't mean we should couple further :)

Member

Can you pass conformance without cluster DNS? I never checked.

When we test services we are generally using clusterIP.

... but we do have [It] [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] :-)

Which checks for services by DNS, though it's relying on the search paths (or an implementation that just provides these narrowly, I suppose)
https://github.com/kubernetes/kubernetes/blob/0e9ca10eebdead0ef9ef54a6754ce70161c2b1e9/test/e2e/network/dns.go#L103-L109

Member

(all conformance tests are in https://github.com/kubernetes/kubernetes/blob/0e9ca10eebdead0ef9ef54a6754ce70161c2b1e9/test/conformance/testdata/conformance.yaml)

You definitely need functioning DNS that resolves to things, but you don't specifically need something like .cluster.local AFAICT, you could be resolving entries like kubernetes.default directly, in theory, and still pass all the ones I've peeked at.

Comment on lines +167 to +168
Currently, there is no way for cluster workloads to query for this domain name,
leaving them to either use relative domain names or configure it manually.
Member

...because it is not actually part of Kubernetes. Except that we conflated it with kubelet and pod hostname and subdomain fields and ... :)

fieldRef: metadata.namespace
- name: CLUSTER_DOMAIN
valueFrom:
clusterPropertyRef: clusterDomain
Member

I very much hope that one day we will have the time and wherewithal to revisit DNS. The existing schema was intended as a demonstration, and that's how much thought went into it. We've been making it work ever since, but I think we can do better.

I think that if we wanted to do that, we would need pods to opt in to an alternate DNS configuration. We have PodDNSConfig which allows one to configure it manually, so it is already sort of possible, but that should be opaque to us - we can't reasonably go poking around in there and making assumptions (e.g. parsing search paths).

The net result is that the cluster zone (not sure why we didn't use that term) is neither a cluster config nor a kubelet config -- it's a pod config, or at least the pod is an indirection into one of the others.
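
For reference, the existing manual escape hatch mentioned above (PodDNSConfig with `dnsPolicy: None`) looks roughly like this; the values are illustrative only:

```yaml
spec:
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 10.96.0.10            # illustrative cluster DNS address
    searches:
      - my-ns.svc.cluster.example
      - svc.cluster.example
      - cluster.example
    options:
      - name: ndots
        value: "5"
```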



Why should this KEP _not_ be implemented?
-->

## Alternatives
Member

This can work, but I don't love it. If you need a solution NOW, probably do this.

@sftim
Contributor

sftim commented Jan 31, 2025

Where this goes wrong for me is that these are trying to divine the Pod's FQDN, when we have never defined that in kubernetes. I think we could/should but see other comments about DNS schema and evolution. If we had an FQDN for pods it would PROBABLY be something like <hostname>.<namespace>.pod.<zone>. You are not trying to learn the pod's FQDN, you are trying to learn the zone.

With CoreDNS we do have an A record for each Pod, but only for Pods that back a Service - see https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#a-aaaa-records-1

fieldRef: metadata.namespace
- name: CLUSTER_DOMAIN
valueFrom:
clusterPropertyRef: clusterDomain
Member

Maybe I'm missing something, but as I understand it kubeadm will always initialize it to the value it gets from its ClusterConfiguration before it even launches the kubelet?

kubeadm has a cluster-wide shared kubelet config, I don't think the point was "This is how kubeadm works" it was "clusters do not all work the same, and there is no portable guarantee that this is a cluster-wide concept as opposed to a kubelet option (and many clusters do not use kubeadm)".

Some properties will always need to be consistent for the kubelet to be in a valid state (apiserver URL, certificates, etc).

Eh? Taking this example ... You could easily make a cluster where different kubelets have different API server URLs, certificates, etc. Some clusters even do use a local proxy for example.

We don't make any assumptions about this AFAIK, those details are local to this kubelet instance.

They could also have clusters with overlapping pod CIDRs. At some point we have to delineate what's a valid state for the cluster to be in and what isn't.

Well, taking that example, pod CIDRs aren't necessarily a property managed by Kubernetes core either, lots of clusters use BYO IPAM for pod IPs.

I'm fine with declaring that split domain configurations are valid and supported (or that there is an intention to work towards that at some point). But that's not the impression that I got from either you or @thockin so far.

I read this more as: We haven't declared this one way or another yet, maybe let's not declare that this is a cluster property because that's not necessarily true and we're not prepared to enforce it (e.g. through some sort of conformance test).


fieldRef: metadata.namespace
- name: CLUSTER_DOMAIN
valueFrom:
clusterPropertyRef: clusterDomain
Member

SERVICES_DOMAIN seems like a more accurate name, though of course some users may already be familiar with the kubelet option

Also, as long as the reported domain works for the pod to reach services, I don't think it matters whether it's identical across machines or not.

If your goal is to have one pod read something to then be consumed by other pods, then we are in fact trying to read cluster-wide config by way of a kubelet option, and THAT seems like a broken concept without actually moving it into something cluster-scoped.

- `nodePropertyRef` (@aojea)
- `runtimeConfigs` (@thockin)

This also implies a decision about who "owns" the setting, the cluster as a whole or the individual kubelet.
Member

That actually seems orthogonal to exposing this to the pods. We could expose the service domain as reported to kubelet without requiring this.

@thockin
Member

thockin commented Jan 31, 2025

@sftim you can't say "we have an A record for each Pod" and then "but only for some Pods". :)

What I really meant is that we (kubernetes) do not define a "standard" name for all pods, only for pods in the context of a service

@sftim
Contributor

sftim commented Jan 31, 2025

@sftim you can't say "we have an A record for each Pod" and then "but only for some Pods". :)

Thought I'd qualified that adequately. Each Pod that backs some Service where the cluster uses CoreDNS with the right options. And this is very much not good enough. We also can't use this to set FQDN because one Pod can back multiple Services.

@thockin
Member

thockin commented Feb 6, 2025

Pinging - time is running out for 1.33

@thockin
Member

thockin commented Feb 10, 2025

one more ping

@nightkr
Author

nightkr commented Feb 10, 2025

Apologies, I've been distracted elsewhere lately. I'll try to get back on the train tomorrow.

@thockin
Member

thockin commented Feb 11, 2025

@nightkr - at risk of spamming the same point, I want to make sure you see it since we are in far-apart timezones and I don't want to miss the boat just because of that.

See #4972 (comment) - what do you think about status for this?

Also @aojea and @BenTheElder

@nightkr
Author

nightkr commented Feb 11, 2025

@thockin No worries, replied in there. I really wish GitHub supported treating any comment as a thread (à la GitLab), not just code comments...

Member

@BenTheElder BenTheElder left a comment

Technically, if you're relying on some controller code reading this from within pod A to generate a configmap for other controllers (?) and then assuming it could be used from another pod, couldn't you already read /etc/resolv.conf in the controller's container, which this kubelet flag currently controls (the search field)?

While not defined by us, this is a portable format.

You would take the second entry of search after $namespace.$service-domain which should be $service-domain (assuming you haven't customized the pod DNS)

the cluster domain is tedious and error-prone, discouraging application
developers from using them.

Many distributions already provide ways to query for the cluster domain (such as
Member

Noting: The kubeadm config is still Beta, and has recently shipped breaking changes. I don't think I'd consider it a bug if this became unavailable in kind because it's non-portable and beta. This is interesting prior art though.

#### Story 1

The Pod `foo` needs to access its sibling Service `bar` in the same namespace.
It adds two `env` bindings:
Member

Our operators generate configmaps with all the details you'll need to connect to the service managed by the operators (including URLs). Our operators' pod manifests don't know what specific CR objects they'll end up managing.

I'm not following why that is blocked on FQDN rather than $service.$namespace; can you elaborate on the use cases, as suggested by danwinship?

@BenTheElder
Member

BenTheElder commented Feb 11, 2025

There's another possible use case for having the service FQDN: less pressure on DNS lookups. You can use dnsConfig to lower ndots instead, though.

We currently use both (hardcoded FQDN for a service in a specific cluster) in Kubernetes's CI to reduce DNS flakes.
(To be clear: We host the CI infrastructure / services on Kubernetes, I'm talking about those clusters, not the temporary clusters being end to end tested when testing changes to Kubernetes, those are not using any workarounds)
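
A minimal sketch of the dnsConfig ndots tweak mentioned above, with an illustrative value:

```yaml
spec:
  dnsConfig:
    options:
      - name: ndots
        value: "1"   # names with at least one dot are tried as absolute first
```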

@thockin
Member

thockin commented Feb 11, 2025

The main discussion is happening in this thread: https://github.com/kubernetes/enhancements/pull/4972/files#r1909211177 which is not easily found from the main page here.

@nightkr
Author

nightkr commented Feb 12, 2025

While not defined by us, this is a portable format.

You would take the second entry of search after $namespace.$service-domain which should be $service-domain (assuming you haven't customized the pod DNS)

The overall format is portable, yes, but the specific order isn't. I wouldn't feel comfortable relying on the second value specifically always having a particular magical meaning.

@thockin
Member

thockin commented Feb 12, 2025

We discussed on slack that we can let this one slide to next release, is that still your plan?

@BenTheElder
Member

BenTheElder commented Feb 12, 2025

The overall format is portable, yes, but the specific order isn't. I wouldn't feel comfortable relying on the second value specifically always having a particular magical meaning.

kubelet synthesizes this file and controls the ordering, are you seeing otherwise?

I agree that it has a portability risk, in theory, but IMHO this whole feature does, and in practice, I can't see kubelet breaking the ordering.

@nightkr
Author

nightkr commented Feb 12, 2025

We discussed on slack that we can let this one slide to next release, is that still your plan?

Yes.

kubelet synthesizes this file and controls the ordering, are you seeing otherwise?

No, I'm not.

I agree that it has a portability risk, in theory, but IMHO this whole feature does, and in practice, I can't see kubelet breaking the ordering.

Respectfully, I have to disagree. DNS is the ~one API that we expect every single app to rely on in some form or shape, but I don't see it as impossible at all that a future release would add an option somewhere for prepending search entries or enabling something like CoreDNS autopath that moves the responsibility for search resolution out of resolv.conf entirely.
