
Rootless containers in the same pod can't communicate with each other #25372

Closed
haithcockce opened this issue Feb 20, 2025 · 6 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@haithcockce

Issue Description

Describe your issue

In my rootless compose setup, two running podman containers in the same pod cannot reach each other via localhost, 0.0.0.0, or 127.0.0.1, though they can reach each other via the containers' respective IP addresses or container names. Connection attempts fail with "connection refused" (found while connecting Python to MongoDB) or "Could not connect to server" (via the reproducer provided). I cannot reproduce this when running the containers manually with podman run.

Steps to reproduce the issue


  1. Create a rootless setup
  2. Create compose yaml with the following
    version: '3'
    services:
      webserver:
        image: quay.io/libpod/banner
        container_name: webserver-compose
      client:
        image: alpine
        container_name: client-compose
        command: sh -c "apk add curl && curl http://0.0.0.0:80"
        depends_on:
          - webserver
    
  3. podman compose up

Describe the results you received


╰─ podman compose up
9921d14374586416876309b6c06ac01e33ef406d5e716715b1486f9076796ecf
739b63de72d479482c97a00be8fbd82df88fbbb1eb8d3905ac424aefcf879530
fe47b692cc9af314e6249c3522796ac419c55566a412b012763036ff3295139e
[client]    | fetch https://dl-cdn.alpinelinux.org/alpine/v3.21/main/x86_64/APKINDEX.tar.gz
[client]    | fetch https://dl-cdn.alpinelinux.org/alpine/v3.21/community/x86_64/APKINDEX.tar.gz
[client]    | (1/9) Installing brotli-libs (1.1.0-r2)
[client]    | (2/9) Installing c-ares (1.34.3-r0)
[client]    | (3/9) Installing libunistring (1.2-r0)
[client]    | (4/9) Installing libidn2 (2.3.7-r0)
[client]    | (5/9) Installing nghttp2-libs (1.64.0-r0)
[client]    | (6/9) Installing libpsl (0.21.5-r3)
[client]    | (7/9) Installing zstd-libs (1.5.6-r2)
[client]    | (8/9) Installing libcurl (8.12.1-r0)
[client]    | (9/9) Installing curl (8.12.1-r0)
[client]    | Executing busybox-1.37.0-r12.trigger
[client]    | OK: 12 MiB in 24 packages
[client]    |   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
[client]    |                                  Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
[client]    | curl: (7) Failed to connect to 0.0.0.0 port 80 after 0 ms: Could not connect to server

Describe the results you expected


[client]    | fetch https://dl-cdn.alpinelinux.org/alpine/v3.21/main/x86_64/APKINDEX.tar.gz
[client]    | fetch https://dl-cdn.alpinelinux.org/alpine/v3.21/community/x86_64/APKINDEX.tar.gz
[client]    | (1/9) Installing brotli-libs (1.1.0-r2)
[client]    | (2/9) Installing c-ares (1.34.3-r0)
[client]    | (3/9) Installing libunistring (1.2-r0)
[client]    | (4/9) Installing libidn2 (2.3.7-r0)
[client]    | (5/9) Installing nghttp2-libs (1.64.0-r0)
[client]    | (6/9) Installing libpsl (0.21.5-r3)
[client]    | (7/9) Installing zstd-libs (1.5.6-r2)
[client]    | (8/9) Installing libcurl (8.12.1-r0)
[client]    | (9/9) Installing curl (8.12.1-r0)
[client]    | Executing busybox-1.37.0-r12.trigger
[client]    | OK: 12 MiB in 24 packages
[client]    |   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
[client]    |                                  Dload  Upload   Total   Spent    Left  Speed
[client]    |    ___          __              
[client]    |   / _ \___  ___/ /_ _  ___ ____ 
[client]    |  / ___/ _ \/ _  /  ' \/ _ `/ _ \
[client]    | /_/   \___/\_,_/_/_/_/\_,_/_//_/
[client]    | 
100   133  100   133    0     0  97435      0 --:--:-- --:--:-- --:--:--  129k

podman info output



╰─ podman version
Client:       Podman Engine
Version:      5.3.2
API Version:  5.3.2
Go Version:   go1.23.4
Built:        Tue Jan 21 17:00:00 2025
OS/Arch:      linux/amd64

╰─ podman info
host:
  arch: amd64
  buildahVersion: 1.38.1
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.12-3.fc41.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.12, commit: '
  cpuUtilization:
    idlePercent: 94.67
    systemPercent: 2.09
    userPercent: 3.23
  cpus: 12
  databaseBackend: boltdb
  distribution:
    distribution: fedora
    variant: workstation
    version: "41"
  eventLogger: journald
  freeLocks: 1983
  hostname: callisto
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.10.10-1.surface.fc40.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 2165391360
  memTotal: 15978881024
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.13.1-1.fc41.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.13.1
    package: netavark-1.13.1-1.fc41.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.13.1
  ociRuntime:
    name: crun
    package: crun-1.19.1-1.fc41.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.19.1
      commit: 3e32a70c93f5aa5fea69b50256cca7fd4aa23c80
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20250121.g4f2c8e7-2.fc41.x86_64
    version: |
      pasta 0^20250121.g4f2c8e7-2.fc41.x86_64
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  rootlessNetworkCmd: slirp4netns
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.3.1-1.fc41.x86_64
    version: |-
      slirp4netns version 1.3.1
      commit: e5e368c4f5db6ae75c2fce786e31eef9da6bf236
      libslirp: 4.8.0
      SLIRP_CONFIG_VERSION_MAX: 5
      libseccomp: 2.5.5
  swapFree: 1995706368
  swapTotal: 8589930496
  uptime: 191h 15m 55.00s (Approximately 7.96 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
store:
  configFile: /home/haithcockce/.config/containers/storage.conf
  containerStore:
    number: 7
    paused: 0
    running: 4
    stopped: 3
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/haithcockce/.local/share/containers/storage
  graphRootAllocated: 254356226048
  graphRootUsed: 172820185088
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 32
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/haithcockce/.local/share/containers/storage/volumes
version:
  APIVersion: 5.3.2
  Built: 1737504000
  BuiltTime: Tue Jan 21 17:00:00 2025
  GitCommit: ""
  GoVersion: go1.23.4
  Os: linux
  OsArch: linux/amd64
  Version: 5.3.2

╰─ rpm -q podman
podman-5.3.2-1.fc41.x86_64

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

No

Additional environment details


It appears the two containers run in two different network namespaces.

╰─ watch -n 2 "podman inspect --format '{{.NetworkSettings.SandboxKey}}' client-compose; podman inspect --format '{{.NetworkSettings.SandboxKey}}' webserver-compose"
Every 2.0s: podman inspect --format '{{.NetworkSet...  callisto: Wed Feb 19 21:42:17 2025

/run/user/1000/netns/netns-8aa8b555-9037-497e-738d-6013024869b4
/run/user/1000/netns/netns-3c655036-06e9-924d-3d2a-96bb4180669a


@haithcockce haithcockce added the kind/bug Categorizes issue or PR as related to a bug. label Feb 20, 2025
@haithcockce
Author

Small disclaimer: I am fairly new to podman and somewhat new to container workloads overall. It is entirely possible I messed up my configuration somewhere.

@Luap99
Member

Luap99 commented Feb 20, 2025

podman compose is just a wrapper that calls podman-compose or docker-compose, and AFAIK neither of them creates pods by default, so I am not sure why you think compose used pods.

@Luap99 Luap99 closed this as not planned Won't fix, can't repro, duplicate, stale Feb 20, 2025
@haithcockce
Author

The default behavior of podman compose is to create an empty pod (no pause container) and put the containers into it. The pod's name defaults to pod_NAME, where NAME is the project name or is derived from the compose file itself. In fact, recent changes require passing --in-pod false if you plan to set userns: keep-id in the compose file or as a parameter.

The reproducer provided is sufficient to show this behavior: run it, then check podman pod ps and podman ps --pods afterward.
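For example, something along these lines should confirm pod membership (the pod name pod_myproject is an assumption; the actual name depends on the compose project/directory name):

```shell
# After `podman compose up`, list pods; compose should have created one
# named pod_<project>.
podman pod ps

# Show each container together with the pod it belongs to.
podman ps --pods

# Inspect which namespaces the pod is configured to share; the absence of
# "net" here would explain containers landing in separate network namespaces.
podman pod inspect --format '{{.SharedNamespaces}}' pod_myproject
```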

@Luap99
Member

Luap99 commented Feb 20, 2025

A pod alone does not mean the netns is shared. If the pod is created without an infra container, or without a shared netns (see --share), then each container ends up in its own netns. Also, if a container is run with --network, that overrides the pod's netns and a new netns is created as well.
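A minimal sketch of the --share point (pod and container names here are made up for illustration):

```shell
# Default: the pod's infra container shares its network namespace with
# members, so both containers see the same localhost.
podman pod create --name shared-pod
podman run -d --pod shared-pod --name web quay.io/libpod/banner
podman run --rm --pod shared-pod alpine wget -qO- http://127.0.0.1:80

# With net dropped from --share, members no longer share a network
# namespace, and localhost no longer reaches the other container.
podman pod create --name unshared-pod --share ipc,uts
podman run -d --pod unshared-pod --name web2 quay.io/libpod/banner
podman run --rm --pod unshared-pod alpine wget -qO- http://127.0.0.1:80  # fails
```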

And as I said, podman-compose is a different project, so a podman-compose command is not a reproducer to me.
If they run in different network namespaces, then that is because compose configured it that way. In the rare case that this is actually a podman problem, I need to see the actual podman commands that are used to create the pod/containers.

@baude
Member

baude commented Feb 20, 2025

@haithcockce here is a good demonstration of what you are trying to do, I think. I used a podman version from the main branch.

676711f8a92239b04774f040eba9ed4b18f8620322b434c2d403202a06e6083f

Above we create a pod called foobar and run nginx in it.
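The exact pod-creation command is not shown above; a command along these lines (image reference and flags are assumptions) produces that kind of output:

```shell
# Implicitly create a pod named foobar and run nginx detached inside it;
# podman prints the new container's ID, as in the output above.
podman run -dt --pod new:foobar docker.io/library/nginx
```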

❯ podman run -it --rm --pod foobar docker.io/nicolaka/netshoot curl http://localhost
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Above we run a container that has curl in it, and it connects perfectly. By default, pods created in podman do share a network namespace. Paul is basically saying we have no idea what podman-compose did, and he's hoping you will try a reproducer outside the context of podman-compose so we have a podman-only reproducer.

@haithcockce
Author

Thank you for that clarification, and my apologies; as I noted, I'm a bit newer to podman and wasn't aware podman-compose was a separate project, given they are under the same containers group on GitHub. Indeed I cannot reproduce it outside of podman compose, so I will open a ticket with that project. Thank you all!
