KEP-4969: Cluster Domain Downward API #4972
base: master
Conversation
nightkr
commented
Nov 21, 2024
- One-line PR description: Initial KEP draft
- Issue link: Cluster Domain Downward API #4969
- Other comments:
Welcome @nightkr!
Hi @nightkr. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
just two minor nitpicks/typos
Currently there is no way for cluster workloads to query for this domain name,
leaving them either use relative domain names or take it as manual configuration.
Suggested change:
Currently there is no way for cluster workloads to query for this domain name,
leaving them either use relative domain names or take it as manual configuration.
→
Currently, there is no way for cluster workloads to query this domain name,
leaving them to either use relative domain names or configure it manually.
I left "query for this" as-is, because I read the revised variant as "ask what the domain name (that I already know) means" rather than "ask what the domain name is".
Currently there is no way for cluster workloads to query for this domain name,
leaving them either use relative domain names or take it as manual configuration.

This KEP proposes adding a new Downward API for that workloads can use to request it. |
Suggested change:
This KEP proposes adding a new Downward API for that workloads can use to request it.
→
This KEP proposes adding a new Downward API that workloads can use to request it.
- `nodePropertyRef` (@aojea)
- `runtimeConfigs` (@thockin)
I guess this is out of scope for this KEP, but this change sets a precedent for other configs that could be passed down into the Pod.
I wonder if someone with more experience than me has an idea/vision of what that could look like, which may then determine what name to decide on?
Looking forward, I suspect that both cases will become relevant eventually. This property just occupies an awkward spot since it doesn't really have one clean owner.
We have tried not to define magic words, neither as env vars nor as sources, but as sources it is far less worrisome than as env.
If this is the path we go, it's not so bad. I don't know if I would call it `clusterPropertyRef`, since there's no cluster property to refer to. But something like `runtimeProperty` isn't egregious.
/ok-to-test
/retest
clusterPropertyRef: clusterDomain
```

`foo` can now perform the query by running `curl http://bar.$NAMESPACE.svc.$CLUSTER_DOMAIN/`.
Extra credit: define a command line argument that relies on interpolating $(CLUSTER_DOMAIN)
I honestly forgot that we do env expansion on `command` and `args` - I had to go look it up. It SHOULD work.
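For illustration, a minimal sketch of what that interpolation could look like, assuming the `clusterPropertyRef` field proposed by this KEP (the image and names are placeholders):

```yaml
# Sketch only: clusterPropertyRef is the field proposed by this KEP, not an existing API.
# $(VAR) references in command/args are expanded by the kubelet from the env entries above them.
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
    - name: client
      image: curlimages/curl   # placeholder image
      env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: CLUSTER_DOMAIN
          valueFrom:
            clusterPropertyRef: clusterDomain   # proposed by this KEP
      command: ["curl"]
      args: ["http://bar.$(NAMESPACE).svc.$(CLUSTER_DOMAIN)/"]
```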
environments, since `node-b` might not be able to resolve `cluster.local`
FQDNs correctly.

For this KEP to make sense, this would have to be explicitly prohibited.
I don't know if that's true.
For Pod consumers, the downward API is implemented by the kubelet, so each kubelet can expose its local view of the cluster domain.
We would still strongly recommend against having multiple cluster domains defined across your cluster - anything else sounds really unwise - but technically it can be made to work.
Yeah, it's one of those... it would be implementable without making such a declaration, but it would likely be another thing leading to pretty confusing behaviour. Maybe the language can be softened somewhat.
We can mention the need to emphasize that the existing thing, already a bad idea, is even more of a bad idea.
Even if we write a document that says "setting these differently within a cluster is prohibited" that does ~nothing to enforce that clusters will actually follow that behavior, and what clusters actually do is what drives consistency.
We have conformance, but I wouldn't be terribly enthused about a test that attempts to validate every node, those are meant to catch API breaks, not runtime configuration.
I don't see why a pod wouldn't work fine; all that needs to be true is that the cluster domain reported to the pod is routable on the network to reach services. The consistency part doesn't actually seem relevant.
<!--
What are the caveats to the proposal?
What are some important details that didn't come across above?
Go in to as much detail as necessary here.
This might be a good place to talk about core concepts and how they relate.
-->
A related detail.
A kubelet that is formally aware of its cluster domain could report back either the cluster domain value, or a hash of it, via:
- `.status`
- a somethingz HTTP endpoint
- a Prometheus static-series label, e.g. `kubelet_cluster_domain{domain_name="cluster.example"} 1`
If we decide to make the API server aware of cluster domain, adding that info could help with troubleshooting and general observability.
It's already available via the kubelet's /configz (mentioned in Alternatives). I don't have a strong opinion either way on adding it to the Node status.
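For reference, a sketch of how one can already inspect a kubelet's effective clusterDomain via /configz (assumes RBAC access to the nodes/proxy subresource; "node-a" is a placeholder node name, and the `.kubeletconfig` wrapper reflects the usual response shape, which may vary by version):

```sh
# Query the kubelet's /configz through the API server's node proxy and pull out clusterDomain.
kubectl get --raw "/api/v1/nodes/node-a/proxy/configz" | jq -r '.kubeletconfig.clusterDomain'
```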
Co-authored-by: Tim Bannister <[email protected]>
The ConfigMap written by k3s[^prior-art-k3s] could be blessed, requiring that
all other distributions also provide it. However, this would require additional
migration effort from each distribution.

Additionally, this would be problematic to query for: users would have to query
it manually using the Kubernetes API (since ConfigMaps cannot be mounted across
Namespaces), and users would require RBAC permission to query wherever it is stored.
I thought centralized management was a non goal though? I recommend highlighting that this alternative isn't aligned with this KEP's goals.
In terms of the non-goal I was mostly referring to configuring the kubelet itself (which would have been a retread of #281). Maybe that could be clarified.
I'd clarify that. I took the non-goal to mean that the value might be aligned across the cluster, but that this KEP explicitly avoids having Kubernetes help with that.
This roughly shares the arguments for/against as [the ConfigMap alternative](#alternative-configmap),
although it would allow more precise RBAC policy targeting.
I thought centralized management was a non goal though? I recommend highlighting that this alternative isn't aligned with this KEP's goals.
# The following PRR answers are required at alpha release
# List the feature gate name and the components for which it must be enabled
feature-gates:
  - name: MyFeature
We could pick the feature gate name even at provisional stage.
Sure. `PodClusterDomain` would align with `PodHostIPs` (the only other downward API feature gate listed on https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/), but happy to hear other takes.
SGTM, especially for provisional.
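For illustration, the kep.yaml stanza could then read as below; the component list is an assumption, not something settled in this thread:

```yaml
# Sketch of the kep.yaml stanza, assuming the PodClusterDomain gate name suggested above.
feature-gates:
  - name: PodClusterDomain
    components:
      - kube-apiserver
      - kubelet
```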
## Design Details

A new Downward API `clusterPropertyRef: clusterDomain` would be introduced, which can be projected into an environment variable or a volume file.
What's this a reference to?
I was mostly trying to be consistent with `fieldRef` and `resourceFieldRef`, which also don't quite correspond to the pod/container objects (since they're not quite 1:1), but I'm certainly not married to it.
It's actually a node property, and making it a cluster-wide property is an explicit non-goal of the KEP.
Tried to clarify the non-goal in c3f33dd
/cc @bowei
Ok, I had a lot of fun trying to figure out this puzzle, all the solutions we are exploring sounds like workarounds, when setting the The OCI specification introduced the
The hack in #4972 (comment) works because we mount the /etc/hosts, not because it appends
So, if we agree to set the UTS namespace
@thockin WDYT, I see you already touched on this a long time ago moby/moby#14282 (comment)
I like the UTS namespace idea, but bear in mind Windows nodes (and Pods) have DNS but don't have UTS namespaces. Also, the API server doesn't - right now - know the cluster domain name, so defaulting within the Pod API is tricky.
Windows does a lot of things differently and does not implement all pod features, so I would not mix it in here.
The domain name is a kubelet property; as discussed in other places, it is hard to move it to a cluster-global property (#4972 (comment)). My point is that the kubelet sends the domainname via the CRI API with its configured cluster domain; the question is how to expose that.
If we set However, even if a Pod has a custom domain name
Using the NIS domain could lead to confusion, since it's currently just forwarded from the host system (tested under Arch, v1.31.4+k3s1):
I am not sure what all the implications of setting domain name are. I will have to go read more. For example, should it include namespace? Does nsswitch consider it?
@nightkr that does not match my observation and the definition of uts_namespaces(7)
is it possible that the runtime of that k3s distro is not setting the UTS namespaces for the pods and inheriting the host UTS namespace?
@thockin that is why I do think this should be opt-in, this subsystem is complicated and multiple distros take different decisions, from the
that IIUIC it is possible to configure the pod to use the NIS domain name if you configure your nsswitch.conf for doing that
@aojea At least in the k3s environment I tested against, the domain is copied from the host when creating the container. If I change the host's domain afterwards then the container's domain stays the same (until it is deleted and recreated, anyway).
In #4972 (comment) Antonio argues for using the NIS domainname. With my 30 years of Linux sysadmin experience, I cannot tell you the exact implications of that. It's so poorly documented that it feels like asking for trouble.
That said, docker does it. The standard Linux tools at least seem to look at it.
$ docker run --privileged -ti ubuntu sh -c "hostname; hostname -f; domainname; dnsdomainname"
0958e719bd8b
0958e719bd8b
(none)
$ docker run --privileged -ti --hostname foo ubuntu sh -c "hostname; hostname -f; domainname; dnsdomainname"
foo
foo
(none)
$ docker run --privileged -ti --hostname foo.example.com ubuntu sh -c "hostname; hostname -f; domainname; dnsdomainname"
foo.example.com
foo.example.com
(none)
example.com
$ docker run --privileged -ti --hostname foo --domainname example.com ubuntu sh -c "hostname; hostname -f; domainname; dnsdomainname"
foo
foo.example.com
example.com
example.com
$ docker run --privileged -ti --hostname foo.bar --domainname example.com ubuntu sh -c "hostname; hostname -f; domainname; dnsdomainname"
foo.bar
hostname: No address associated with hostname
example.com
dnsdomainname: No address associated with hostname
Using --hostname and --domainname is the only thing that got close to what I expected. I straced these commands and, frankly, it's all over the place as to what's happening. :)
Where this goes wrong for me is that these are trying to divine the Pod's FQDN, when we have never defined that in kubernetes. I think we could/should but see other comments about DNS schema and evolution. If we had an FQDN for pods it would PROBABLY be something like `<hostname>.<namespace>.pod.<zone>`. You are not trying to learn the pod's FQDN, you are trying to learn the zone.
So I have another alternative. It's the first time I write it so bear with me.
This belongs in `status`.
Why not put the effective DNS Config (`PodDNSConfig` plus) into status?
type PodDNSStatus struct {
Hostname string
Zone string
Nameservers []string
Searches []string
Options []PodDNSConfigOption
}
Then this becomes just one more `fieldRef: status.dns.zone` or something like that.
I get the freedom to one day fix DNS. People who use custom DNS configs get their own data here. It can vary by kubelet if someone wants to do that.
Now, shoot me down?
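For illustration only, a hypothetical manifest using that idea might look like the sketch below; it assumes `fieldRef` were extended to accept the proposed `status.dns.zone` path, which does not exist today:

```yaml
# Hypothetical sketch: status.dns.zone is the field proposed in the comment above, not a real path.
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
    - name: app
      image: busybox   # placeholder image
      command: ["sleep", "infinity"]
      env:
        - name: CLUSTER_ZONE
          valueFrom:
            fieldRef:
              fieldPath: status.dns.zone   # would resolve to the pod's effective DNS zone
```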
## Summary

All Kubernetes Services (and many Pods) have Fully Qualified Domain Names (FQDNs)
Strictly speaking, Services do not have a DNS name. Services have a name. Most Kube clusters use DNS to expose services, by mapping them into a DNS zone with the schema you listed. This is not actually a requirement, though it is so pervasive it probably is de facto required.
Can you pass conformance without cluster DNS? I never checked.
After 10 years of Hyrum's Law, perhaps not, but it doesn't mean we should couple further :)
Can you pass conformance without cluster DNS? I never checked.
When we test services we are generally using clusterIP.
... but `[It] [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]` does exist :-)
Which checks for services by DNS, though it's relying on the search paths (or an implementation that just provides these narrowly, I suppose).
https://github.com/kubernetes/kubernetes/blob/0e9ca10eebdead0ef9ef54a6754ce70161c2b1e9/test/e2e/network/dns.go#L103-L109
(all conformance tests are in https://github.com/kubernetes/kubernetes/blob/0e9ca10eebdead0ef9ef54a6754ce70161c2b1e9/test/conformance/testdata/conformance.yaml)
You definitely need functioning DNS that resolves to things, but you don't specifically need something like `.cluster.local`. AFAICT, you could be resolving entries like `kubernetes.default` directly, in theory, and still pass all the ones I've peeked at.
Currently, there is no way for cluster workloads to query for this domain name,
leaving them to either use relative domain names or configure it manually.
...because it is not actually part of Kubernetes. Except that we conflated it with kubelet and the pod `hostname` and `subdomain` fields and ... :)
fieldRef: metadata.namespace
- name: CLUSTER_DOMAIN
  valueFrom:
    clusterPropertyRef: clusterDomain
I very much hope that one day we will have the time and wherewithal to revisit DNS. The existing schema was intended as a demonstration, and that's how much thought went into it. We've been making it work ever since, but I think we can do better.
I think that if we wanted to do that, we would need pods to opt in to an alternate DNS configuration. We have `PodDNSConfig`, which allows one to configure it manually, so it is already sort of possible, but that should be opaque to us - we can't reasonably go poking around in there and making assumptions (e.g. parsing search paths).
The net result is that the cluster zone (not sure why we didn't use that term) is neither a cluster config nor a kubelet config -- it's a pod config, or at least the pod is an indirection into one of the others.
Why should this KEP _not_ be implemented?
-->

## Alternatives
This can work, but I don't love it. If you need a solution NOW, probably do this.
With CoreDNS we do have an A record for each Pod, but only for Pods that back a Service - see https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#a-aaaa-records-1
fieldRef: metadata.namespace
- name: CLUSTER_DOMAIN
  valueFrom:
    clusterPropertyRef: clusterDomain
Maybe I'm missing something, but as I understand it kubeadm will always initialize it to the value it gets from its ClusterConfiguration before it even launches the kubelet?
kubeadm has a cluster-wide shared kubelet config, I don't think the point was "This is how kubeadm works" it was "clusters do not all work the same, and there is no portable guarantee that this is a cluster-wide concept as opposed to a kubelet option (and many clusters do not use kubeadm)".
Some properties will always need to be consistent for the kubelet to be in a valid state (apiserver URL, certificates, etc).
Eh? Taking this example ... You could easily make a cluster where different kubelets have different API server URLs, certificates, etc. Some clusters even do use a local proxy for example.
We don't make any assumptions about this AFAIK, those details are local to this kubelet instance.
They could also have clusters with overlapping pod CIDRs. At some point we have to delineate what's a valid state for the cluster to be in and what isn't.
Well, taking that example, pod CIDRs aren't necessarily a property managed by Kubernetes core either, lots of clusters use BYO IPAM for pod IPs.
I'm fine with declaring that split domain configurations are valid and supported (or that there is an intention to work towards that at some point). But that's not the impression that I got from either you or @thockin so far.
I read this more as: We haven't declared this one way or another yet, maybe let's not declare that this is a cluster property because that's not necessarily true and we're not prepared to enforce it (e.g. through some sort of conformance test).
fieldRef: metadata.namespace
- name: CLUSTER_DOMAIN
  valueFrom:
    clusterPropertyRef: clusterDomain
`SERVICES_DOMAIN` seems like a more accurate name, though of course some users may already be familiar with the kubelet option.
Also, as long as the reported domain works for the pod to reach services, I don't think it matters that we attempt to assume if it's identical across machines or not.
If your goal is to have one pod read something to then be consumed by other pods, then we are in fact trying to read cluster-wide config by way of a kubelet option, and THAT seems like a broken concept without actually moving it into something cluster-scoped.
- `nodePropertyRef` (@aojea)
- `runtimeConfigs` (@thockin)

This also implies a decision about who "owns" the setting, the cluster as a whole or the individual kubelet.
That actually seems orthogonal to exposing this to the pods. We could expose the service domain as reported to kubelet without requiring this.
@sftim you can't say "we have an A record for each Pod" and then "but only for some Pods". :) What I really meant is that we (kubernetes) do not define a "standard" name for all pods, only for pods in the context of a service
Thought I'd qualified that adequately. Each Pod that backs some Service where the cluster uses CoreDNS with the right options. And this is very much not good enough. We also can't use this to set FQDN because one Pod can back multiple Services.
Pinging - time is running out for 1.33
one more ping
Apologies, I've been distracted elsewhere lately. I'll try to get back on the train tomorrow.
@nightkr - at risk of spamming the same point, I want to make sure you see it since we are on far-away timezones and I don't want to miss the boat just because of that. See #4972 (comment) - what do you think about it? Also @aojea and @BenTheElder
@thockin No worries, replied in there. I really wish GitHub supported treating any comment as a thread (à la GitLab), not just code comments...
Technically if you're relying on some controller code reading this from within pod A to generate a configmap for other controllers (?) and then assuming it could be used from another pod, couldn't you already read the `/etc/resolv.conf` in the controller's container, that this kubelet flag currently controls? (the search field?)
While not defined by us, this is a portable format.
You would take the second entry of `search` after `$namespace.$service-domain`, which should be `$service-domain` (assuming you haven't customized the pod DNS).
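A minimal sketch of that approach, assuming the default kubelet-generated search ordering (`<namespace>.svc.<clusterDomain> svc.<clusterDomain> <clusterDomain> ...`); it breaks if the Pod uses a custom dnsConfig:

```sh
# The second search entry is svc.<clusterDomain>; stripping the "svc." prefix
# recovers the cluster domain as the pod's kubelet configured it.
cluster_domain=$(awk '/^search/ {print $3}' /etc/resolv.conf | sed 's/^svc\.//')
echo "$cluster_domain"
```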
the cluster domain is tedious and error-prone, discouraging application
developers from using them.

Many distributions already provide ways to query for the cluster domain (such as
Noting: The kubeadm config is still Beta, and has recently shipped breaking changes. I don't think I'd consider it a bug if this became unavailable in `kind`, because it's non-portable and beta. This is interesting prior art though.
#### Story 1

The Pod `foo` needs to access its sibling Service `bar` in the same namespace.
It adds two `env` bindings:
Our operators generate configmaps with all the details you'll need to connect to the service managed by the operators (including URLs). Our operators' pod manifests don't know what specific CR objects they'll end up managing.
I'm not following why that is blocked on FQDN rather than `$service.$namespace`; can you elaborate the use cases as suggested by danwinship?
There's another possible use case for having the service FQDN: less pressure on DNS lookups, though you can use dnsConfig to lower ndots instead. We currently use both (hardcoded FQDN for a service in a specific cluster) in Kubernetes's CI to reduce DNS flakes.
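For illustration, a sketch of the dnsConfig approach mentioned above; the ndots value and pod/image names are placeholders:

```yaml
# Sketch: lowering ndots so names with at least that many dots are tried as absolute first,
# reducing search-path lookups against cluster DNS.
apiVersion: v1
kind: Pod
metadata:
  name: low-ndots
spec:
  dnsConfig:
    options:
      - name: ndots
        value: "2"
  containers:
    - name: app
      image: busybox   # placeholder image
      command: ["sleep", "infinity"]
```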
The main discussion is happening in this thread: https://github.com/kubernetes/enhancements/pull/4972/files#r1909211177 which is not easily found from the main page here.
The overall format is portable, yes, but the specific order isn't. I wouldn't feel comfortable relying on the second value specifically always having a particular magical meaning.
We discussed on slack that we can let this one slide to next release, is that still your plan?
kubelet synthesizes this file and controls the ordering, are you seeing otherwise? I agree that it has a portability risk, in theory, but IMHO this whole feature does, and in practice, I can't see kubelet breaking the ordering.
Yes.
No, I'm not.
Respectfully, I have to disagree. DNS is the ~one API that we expect every single app to rely on in some form or shape, but I don't see it as impossible at all that a future release would add an option somewhere for prepending search entries or enabling something like CoreDNS autopath that moves the responsibility for search resolution out of resolv.conf entirely.