
Conversation

afbjorklund (Member)

The configuration was not read, because it was still using the old format for config version 2 (the current config version is 3).

The configuration is automatically migrated from the old version, so we can use the same settings as the upstream docs:

https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd
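
The migration can also be previewed directly in the guest, roughly like this (a sketch, assuming a containerd 2.x binary where the config migrate subcommand is available):

# Print the existing configuration converted to the latest config version
# (output goes to stdout only; nothing is rewritten on disk)
containerd config migrate

# Print the built-in defaults for the new version, for comparison
containerd config default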


There is a very minor difference in the "pause" version, which only mattered for Windows, and then there is the systemd cgroup setting...

@@ -47,7 +47,7 @@
     use_local_image_pull = false
 
     [plugins.'io.containerd.cri.v1.images'.pinned_images]
-      sandbox = 'registry.k8s.io/pause:3.10'
+      sandbox = 'registry.k8s.io/pause:3.10.1'
 
     [plugins.'io.containerd.cri.v1.images'.registry]
       config_path = ''
@@ -105,6 +105,7 @@
             NoNewKeyring = false
             Root = ''
             ShimCgroup = ''
+            SystemdCgroup = true
 
     [plugins.'io.containerd.cri.v1.runtime'.cni]
       bin_dir = ''

But you still get a warning from kubeadm, since it hasn't fully handed over the sandbox detection to the CRI:

https://github.com/kubernetes/kubernetes/blob/release-1.34/cmd/kubeadm/app/preflight/checks.go#L830
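
For example, the sandbox image that the CRI actually reports can be compared with what kubeadm expects (a sketch; the exact JSON field names depend on the runtime):

# Ask the running CRI for its effective configuration and look for the sandbox image
crictl info | grep -i sandbox

# List the images (including pause) that kubeadm expects for this Kubernetes version
kubeadm config images list | grep pause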

The cgroup setting comes from here; it has been hardcoded to systemd in kubeadm, but not in containerd:

https://github.com/containerd/containerd/blob/main/docs/cri/config.md
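
A quick way to check that both sides ended up on the systemd cgroup driver (a sketch, assuming a kubeadm-provisioned node with the default file locations):

# kubeadm writes cgroupDriver: systemd into the kubelet configuration by default
grep cgroupDriver /var/lib/kubelet/config.yaml

# containerd does not default SystemdCgroup to true, so it has to be set explicitly
containerd config dump | grep SystemdCgroup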

Also the indent of the config was too much

Signed-off-by: Anders F Björklund <[email protected]>
@jandubois (Member) left a comment

Thanks, LGTM

@jandubois jandubois merged commit 852c0bc into lima-vm:master Oct 13, 2025
63 of 64 checks passed
@AkihiroSuda AkihiroSuda added this to the v2.0.0 milestone Oct 14, 2025
@AkihiroSuda (Member)

The main containerd config is now version 3

Our config is still version 2:

if [ "${LIMA_CIDATA_CONTAINERD_SYSTEM}" = 1 ]; then
if [ ! -e /etc/containerd/config.toml ]; then
mkdir -p /etc/containerd
cat >"/etc/containerd/config.toml" <<EOF
version = 2
# TODO: remove imports after upgrading containerd to v2.2, as
# conf.d is set by default since v2.2.
imports = ['/etc/containerd/conf.d/*.toml']
[plugins."io.containerd.grpc.v1.cri"]
enable_cdi = true
[proxy_plugins]
[proxy_plugins."stargz"]
type = "snapshot"
address = "/run/containerd-stargz-grpc/containerd-stargz-grpc.sock"
EOF
fi
if [ ! -e /etc/buildkit/buildkitd.toml ]; then
mkdir -p /etc/buildkit
cat >"/etc/buildkit/buildkitd.toml" <<EOF
[worker.oci]
enabled = false
[worker.containerd]
enabled = true
namespace = "${CONTAINERD_NAMESPACE}"
snapshotter = "${CONTAINERD_SNAPSHOTTER}"
EOF
fi
systemctl enable --now containerd buildkit stargz-snapshotter
fi
if [ "${LIMA_CIDATA_CONTAINERD_USER}" = 1 ]; then
if [ ! -e "${LIMA_CIDATA_HOME}/.config/containerd/config.toml" ]; then
mkdir -p "${LIMA_CIDATA_HOME}/.config/containerd"
cat >"${LIMA_CIDATA_HOME}/.config/containerd/config.toml" <<EOF
version = 2
[plugins."io.containerd.grpc.v1.cri"]
enable_cdi = true
[proxy_plugins]
[proxy_plugins."fuse-overlayfs"]
type = "snapshot"
address = "/run/user/${LIMA_CIDATA_UID}/containerd-fuse-overlayfs.sock"
[proxy_plugins."stargz"]
type = "snapshot"
address = "/run/user/${LIMA_CIDATA_UID}/containerd-stargz-grpc/containerd-stargz-grpc.sock"
EOF
chown -R "${LIMA_CIDATA_USER}" "${LIMA_CIDATA_HOME}/.config"
fi

The configuration was not read, because it was still using the old format for config version 2 (the current config version is 3).

Doesn't seem true
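
The config version that containerd effectively ends up with after merging the imports can be checked directly (a sketch):

# Print the fully merged configuration that containerd actually uses;
# the version line near the top shows the resolved config version
containerd config dump | head -n 20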

@jandubois (Member)

Doesn't seem true

Yeah, but this PR seems to work for any version of the main config because it specifies the k8s.toml version separately. So this is still fine, or am I missing something?

@afbjorklund (Member, Author)

It should work either way

@afbjorklund (Member, Author)

But it was still version 2

@afbjorklund changed the title from "The main containerd config is now version 3" to "The main containerd config is soon version 3" on Oct 14, 2025