Using --opt o=size=10M on Podman Local Volumes Passes Unsupported 'size' Mount Option on XFS #25368

Open
ak89224 opened this issue Feb 19, 2025 · 9 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.



ak89224 commented Feb 19, 2025

Issue Description

When using the local volume driver with the --opt o=size=10M option on XFS-backed storage (with project quotas enabled), Podman correctly creates the volume and assigns an XFS project quota. However, during container startup, Podman erroneously passes the “size” option as a mount parameter to the XFS mount command. Since XFS does not recognize a “size” mount option, the container fails to start with the error:

mount: /mnt/data/containers/storage/volumes/testVolume/_data: fsconfig system call failed: xfs: Unknown parameter 'size'

Steps to reproduce the issue

  1. Prepare an XFS Partition with Project Quotas:
    • Format the device (e.g., /dev/sdb1) as XFS and mount it at /mnt/data with prjquota enabled (a consolidated command sketch follows these steps).
    • /etc/fstab entry:
      /dev/sdb1 /mnt/data/  xfs defaults,x-systemd.device-timeout=0,pquota 1 2
      
    • Mount the partition using:
      mount -a
  2. Configure Podman to Use the XFS Partition:
    • Edit /etc/containers/storage.conf to set graphroot:
      [storage]
      driver = "overlay"
      graphroot = "/mnt/data/containers/storage"
    • Restart Podman if necessary.
  3. Create a Volume with a Size Option:
    • Create a volume with:
      podman volume create --driver local --opt o=size=10M testVolume
    • Verify that a project quota is set (e.g., a hard limit of ~10MB) via:
      xfs_quota -x -c "report -p" /mnt/data
  4. Attempt to Run a Container Using the Volume:
    • Start a container with:
      podman run --rm -v testVolume:/data busybox sh -c "dd if=/dev/zero of=/data/testfile bs=1M count=15"
    • The container fails with an error:
      mount: /mnt/data/containers/storage/volumes/testVolume/_data: fsconfig system call failed: xfs: Unknown parameter 'size'
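
A consolidated sketch of the reproduction above. The mkfs.xfs step is an assumption (step 1 only says to format the device as XFS); the device, paths, and volume name are taken from the steps:

  # format and mount the XFS partition with project quotas enabled (assumes /dev/sdb1 can be wiped)
  mkfs.xfs -f /dev/sdb1
  echo '/dev/sdb1 /mnt/data/ xfs defaults,x-systemd.device-timeout=0,pquota 1 2' >> /etc/fstab
  mkdir -p /mnt/data && mount -a

  # after pointing graphroot at /mnt/data/containers/storage in /etc/containers/storage.conf:
  podman volume create --driver local --opt o=size=10M testVolume
  xfs_quota -x -c "report -p" /mnt/data    # a ~10MB hard limit should be listed for the volume

  # per step 4 above, this fails with: fsconfig system call failed: xfs: Unknown parameter 'size'
  podman run --rm -v testVolume:/data busybox sh -c "dd if=/dev/zero of=/data/testfile bs=1M count=15"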
      

Describe the results you received

Actual Behavior:

  • When a volume is created with --opt o=size=10M, Podman sets the project quota as expected.
  • However, at container startup, Podman issues a mount command that includes -o size=10M, which is rejected by the XFS mount system call, causing the container to fail to start.

Debug Logs and Analysis:

  • Debug Log Snippet: podman volume create --opt device=/dev/sdb1 --opt type=xfs --opt o=size=10M testvol4
[root@fedora abhi]# podman --log-level=debug volume create --opt device=/dev/sdb1 --opt type=xfs -o=o=size=10M testvol4
INFO[0000] podman filtering at log level debug
DEBU[0000] Called create.PersistentPreRunE(podman --log-level=debug volume create --opt device=/dev/sdb1 --opt type=xfs -o=o=size=10M testvol4)
DEBU[0000] Using conmon: "/usr/bin/conmon"
INFO[0000] Using sqlite as database backend
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /mnt/data/containers/storage
DEBU[0000] Using run root /run/containers/storage
DEBU[0000] Using static dir /mnt/data/containers/storage/libpod
DEBU[0000] Using tmp dir /run/libpod
DEBU[0000] Using volume path /mnt/data/containers/storage/volumes
DEBU[0000] Using transient store: false
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] overlay: imagestore=/usr/lib/containers/storage
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that metacopy is being used
DEBU[0000] NewControl(/mnt/data/containers/storage/overlay): nextProjectID = 100001
DEBU[0000] Cached value indicated that native-diff is not being used
INFO[0000] Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled
DEBU[0000] backingFs=xfs, projectQuotaSupported=true, useNativeDiff=false, usingMetacopy=true
DEBU[0000] Initializing event backend journald
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument
DEBU[0000] Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument
DEBU[0000] Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/crun"
INFO[0000] Setting parallel job count to 13
DEBU[0000] Validating options for local driver
DEBU[0000] NewControl(/mnt/data/containers/storage/volumes): nextProjectID = 200003
DEBU[0000] Setting quota project ID 200003 on /mnt/data/containers/storage/volumes/testvol4
DEBU[0000] SetQuota path=/mnt/data/containers/storage/volumes/testvol4, size=10000000, inodes=0, projectID=200003
testvol4
DEBU[0000] Called create.PersistentPostRunE(podman --log-level=debug volume create --opt device=/dev/sdb1 --opt type=xfs -o=o=size=10M testvol4)
DEBU[0000] Shutting down engines
INFO[0000] Received shutdown.Stop(), terminating!        PID=87400
  • Volume Inspect:
[root@fedora abhi]# podman volume inspect testvol4
[
     {
          "Name": "testvol4",
          "Driver": "local",
          "Mountpoint": "/mnt/data/containers/storage/volumes/testvol4/_data",
          "CreatedAt": "2025-02-20T01:26:43.831051058+05:30",
          "Labels": {},
          "Scope": "local",
          "Options": {
               "SIZE": "10M",
               "device": "/dev/sdb1",
               "o": "size=10M",
               "type": "xfs"
          },
          "MountCount": 0,
          "NeedsCopyUp": true,
          "NeedsChown": true,
          "LockNumber": 3
     }
]

  • Debug Log Snippet: podman run
  DEBU[0000] Running mount command: /usr/bin/mount -o size=10M -t xfs /dev/sdb1 /mnt/data/containers/storage/volumes/testvol4/_data
  DEBU[0000] Mount command failed with exit status 32
...
...
...
DEBU[0000] ExitCode msg: "mounting volume testvol4 for container a44a8a561b8148341e6db6aef76f1a25ffcbb620ba7d628138ad6efb1e155f58: mount: /mnt/data/containers/storage/volumes/testvol4/_data: fsconfig system call failed: xfs: unknown parameter 'size'.\n       dmesg(1) may have more information after failed mount system call.\n"
Error: mounting volume testvol4 for container a44a8a561b8148341e6db6aef76f1a25ffcbb620ba7d628138ad6efb1e155f58: mount: /mnt/data/containers/storage/volumes/testvol4/_data: fsconfig system call failed: xfs: Unknown parameter 'size'.
       dmesg(1) may have more information after failed mount system call.

  • Analysis:
    • Although Podman’s volume creation process correctly sets up XFS project quotas (as confirmed by xfs_quota), it later passes the “size=10M” mount option when mounting the volume inside the container.
    • XFS does not recognize any mount option named “size”; project quotas are managed via the quota system, not as a mount parameter.
    • As a workaround, if the volume is created without the --opt o=size=10M option, Podman mounts the volume successfully, and quotas can be enforced manually using XFS tools (see the sketch below).
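
A minimal sketch of that manual workaround, assuming project quotas are enabled on /mnt/data as in the reproduction steps; the project name "testvolproj" and the reuse of project ID 200003 are illustrative, not something Podman sets up itself:

  podman volume create --driver local testVolume
  echo "200003:/mnt/data/containers/storage/volumes/testVolume/_data" >> /etc/projects
  echo "testvolproj:200003" >> /etc/projid
  xfs_quota -x -c "project -s testvolproj" /mnt/data           # tag the directory tree with the project ID
  xfs_quota -x -c "limit -p bhard=10m testvolproj" /mnt/data   # enforce a 10MB hard block limit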

Describe the results you expected

Expected Behavior:

  • Podman should use the --opt o=size=10M parameter to set the XFS project quota on the volume (which it does) but should not pass a “size=10M” option to the mount system call when starting a container.
  • The container should mount the volume normally, and the XFS quota should limit writes to ~10MB without causing a mount error.
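
For illustration only, the expected run (not actual output) would look roughly like this once the quota is enforced on the volume's data directory:

  podman run --rm -v testVolume:/data busybox sh -c "dd if=/dev/zero of=/data/testfile bs=1M count=15"
  # expected: the volume mounts normally and dd stops with "No space left on device" after roughly 10MB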

podman info output

host:
  arch: amd64
  buildahVersion: 1.38.1
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - rdma
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.12-3.fc41.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.12, commit: '
  cpuUtilization:
    idlePercent: 97.8
    systemPercent: 1.88
    userPercent: 0.32
  cpus: 4
  databaseBackend: sqlite
  distribution:
    distribution: fedora
    variant: workstation
    version: "41"
  eventLogger: journald
  freeLocks: 2044
  hostname: fedora
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 6.12.11-200.fc41.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 8684425216
  memTotal: 16758345728
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.13.1-1.fc41.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.13.1
    package: netavark-1.13.1-1.fc41.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.13.1
  ociRuntime:
    name: crun
    package: crun-1.19.1-1.fc41.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.19.1
      commit: 3e32a70c93f5aa5fea69b50256cca7fd4aa23c80
      rundir: /run/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20250121.g4f2c8e7-2.fc41.x86_64
    version: |
      pasta 0^20250121.g4f2c8e7-2.fc41.x86_64
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 8589930496
  swapTotal: 8589930496
  uptime: 60h 45m 7.00s (Approximately 2.50 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
store:
  configFile: /usr/share/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.imagestore: /usr/lib/containers/storage
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /mnt/data/containers/storage
  graphRootAllocated: 21406679040
  graphRootUsed: 464166912
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Supports shifting: "true"
    Supports volatile: "true"
    Using metacopy: "true"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 1
  runRoot: /run/containers/storage
  transientStore: false
  volumePath: /mnt/data/containers/storage/volumes
version:
  APIVersion: 5.3.2
  Built: 1737504000
  BuiltTime: Wed Jan 22 05:30:00 2025
  GitCommit: ""
  GoVersion: go1.23.4
  Os: linux
  OsArch: linux/amd64
  Version: 5.3.2

Podman in a container

No

Privileged Or Rootless

Privileged

Upstream Latest Release

Yes

Additional environment details

Environment:

  • Podman Version: 5.3.2
  • Operating System: Fedora 41
  • Filesystem: XFS with project quotas enabled (mounted with prjquota)
  • Podman Storage Configuration:
    • graphroot = "/mnt/data/containers/storage"

Additional information

Impact:

  • This issue prevents the use of the --opt o=size=10M feature for local volumes on XFS-backed storage in Podman, limiting the ability to enforce per-volume storage limits automatically.
  • The only workaround is to create the volume without the size option and manually apply XFS quotas after volume creation, which is not ideal for automated or production environments.

Additional Information:

  • The issue appears to be isolated to the way Podman translates the --opt o=size=10M option into mount options for XFS.
  • Similar behavior is observed even though the XFS project quotas are properly applied at volume creation.
  • Debug logs indicate that the unsupported mount option “size=10M” is passed during the container’s volume mount process.
@ak89224 ak89224 added the kind/bug Categorizes issue or PR as related to a bug. label Feb 19, 2025

ak89224 commented Feb 19, 2025

FYI,
@mheon , @rhatdan


mheon commented Feb 19, 2025

It's not podman volume create --opt size= but podman volume create --opt o=size= - this is documented in the manpage.

Still, I find it very curious that size= is not being rejected outright - per the manpage it's not valid syntax and we should be rejecting it. So that's still a bug.
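
To make the distinction concrete (volume names here are illustrative):

  podman volume create --driver local --opt o=size=10M quotaVol   # documented form: sets an XFS project quota
  podman volume create --driver local --opt size=10M sizeVol      # not valid per the manpage; should be rejected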


mheon commented Feb 19, 2025

Alternatively we could just add more native support for size= to do the same thing as o=size= - but we'd have to handle both being set at the same time which would be awkward.


ak89224 commented Feb 19, 2025

It's not podman volume create --opt size= but podman volume create --opt o=size= - this is documented in the manpage.

Still, I find it very curious that size= is not being rejected outright - per the manpage it's not valid syntax and we should be rejecting it. So that's still a bug.

It's not --opt size; if you look at the logs, I tried with both -o=o=size=10M and --opt o=size=10M (both are the same thing).

I made a typo in the theory part and will remove it.

@ak89224 ak89224 changed the title Using --opt size on Podman Local Volumes Passes Unsupported 'size' Mount Option on XFS Using --opt o=size=10M on Podman Local Volumes Passes Unsupported 'size' Mount Option on XFS Feb 19, 2025

ak89224 commented Feb 19, 2025

It's not podman volume create --opt size= but podman volume create --opt o=size= - this is documented in the manpage.

Still, I find it very curious that size= is not being rejected outright - per the manpage it's not valid syntax and we should be rejecting it. So that's still a bug.

And yes, I can confirm that the --opt size= syntax is not rejected; it just creates a volume without any quota.
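
A quick way to confirm that from the host (the volume name is illustrative):

  podman volume create --driver local --opt size=10M sizeOnlyVol   # accepted without error
  xfs_quota -x -c "report -p" /mnt/data                            # no new hard limit should appear for this volume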


Luap99 commented Feb 20, 2025

Your reproducer says

podman volume create --driver local --opt o=size=10M testVolume

But your logs show

podman --log-level=debug volume create --opt device=/dev/sdb1 --opt type=xfs -o=o=size=10M testvol4

These are two different things: the first creates a normal bind-mount volume and tries to enable quotas; the second mounts a new filesystem (xfs), and all the o=... options are passed as mount options, as documented.

So I would say it is very much expected that the second command does not work. The first, however, should work per the docs.
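
For reference, the two commands from this thread side by side:

  # 1) local bind-mount volume: Podman itself sets an XFS project quota on the volume directory
  podman volume create --driver local --opt o=size=10M testVolume
  # 2) mounted-filesystem volume: device/type/o= are handed to mount(8), so size=10M reaches XFS
  #    as a mount option and is rejected
  podman volume create --opt device=/dev/sdb1 --opt type=xfs --opt o=size=10M testvol4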


ak89224 commented Feb 20, 2025

Your reproducer says

podman volume create --driver local --opt o=size=10M testVolume

But your logs show

podman --log-level=debug volume create --opt device=/dev/sdb1 --opt type=xfs -o=o=size=10M testvol4

These are two different things: the first creates a normal bind-mount volume and tries to enable quotas; the second mounts a new filesystem (xfs), and all the o=... options are passed as mount options, as documented.

So I would say it is very much expected that the second command does not work. The first, however, should work per the docs.

Thanks, @Luap99

With the first option, which creates a normal bind mount, Podman does not pass the mount options forward.

But with this I ran into another issue: the volume quota option (--opt o=size=... when creating volumes with the local driver on an XFS filesystem) is not enforced when writing to the volume from within a running container.

Steps to reproduce:

  1. Create Podman volume with quota: podman --log-level=debug volume create --driver local --opt o=size=10M testVol8
  2. Run container mounting the volume and write more than quota:
[root@fedora testVol8]# podman run --rm -v testVol8:/data busybox sh -c "dd if=/dev/zero of=/data/testfile bs=1M count=15"
15+0 records in
15+0 records out
15728640 bytes (15.0MB) copied, 0.191835 seconds, 78.2MB/s
[root@fedora testVol8]#
  3. The dd command completes successfully, writing more than the 10MB quota to the volume.
  4. The project quota still reports as unused: #200008 0 9768 9768 00 [--------]
[root@fedora abhi]# xfs_quota -x -c "report -p" /mnt/data
Project quota on /mnt/data (/dev/sdb1)
                               Blocks
Project ID       Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
#0              65876          0          0     00 [--------]
storage             4          0          0     00 [--------]
volumes             0       9768       9768     00 [--------]
#200001             0       9768       9768     00 [--------]
#200002             0       9768       9768     00 [--------]
#200003             0       9768       9768     00 [--------]
#200004             0       9768       9768     00 [--------]
#200005             0       9768       9768     00 [--------]
#200006             0       9768       9768     00 [--------]
#200007             0       9768       9768     00 [--------]
#200008             0       9768       9768     00 [--------]
  5. Direct host write test, to check whether the quota is set at all:
[root@fedora abhi]# cd /mnt/data/containers/storage/volumes/testVol8
[root@fedora testVol8]#
[root@fedora testVol8]#
[root@fedora testVol8]#
[root@fedora testVol8]# ll
total 0
drwxr-xr-x. 2 root root 22 Feb 21 00:37 _data
[root@fedora testVol8]# dd if=/dev/zero of=testfile_host bs=1M count=11
dd: error writing 'testfile_host': No space left on device
10+0 records in
9+0 records out
9895936 bytes (9.9 MB, 9.4 MiB) copied, 0.158613 s, 62.4 MB/s
  6. Writing to the volume directory directly on the host (as root) does trigger the XFS project quota, and writes fail after the limit is reached.
[root@fedora testVol8]# xfs_quota -x -c "report -p" /mnt/data
Project quota on /mnt/data (/dev/sdb1)
                               Blocks
Project ID       Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
#0              65876          0          0     00 [--------]
storage             4          0          0     00 [--------]
volumes             0       9768       9768     00 [--------]
#200001             0       9768       9768     00 [--------]
#200002             0       9768       9768     00 [--------]
#200003             0       9768       9768     00 [--------]
#200004             0       9768       9768     00 [--------]
#200005             0       9768       9768     00 [--------]
#200006             0       9768       9768     00 [--------]
#200007             0       9768       9768     00 [--------]
#200008          9664       9768       9768     00 [--------]
  7. This points to Podman setting the quota during volume creation but not enforcing it at container runtime.
  8. Then something very interesting: I am able to write inside the _data directory of the Podman volume even after the quota should already have been exceeded. This is unexpected.
[root@fedora testVol8]# dd if=/dev/zero of=_data/testfile_host bs=1M count=11
11+0 records in
11+0 records out
11534336 bytes (12 MB, 11 MiB) copied, 0.0782921 s, 147 MB/s
[root@fedora testVol8]# xfs_quota -x -c "report -p" /mnt/data
Project quota on /mnt/data (/dev/sdb1)
                               Blocks
Project ID       Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
#0              77140          0          0     00 [--------]
storage             4          0          0     00 [--------]
volumes             0       9768       9768     00 [--------]
#200001             0       9768       9768     00 [--------]
#200002             0       9768       9768     00 [--------]
#200003             0       9768       9768     00 [--------]
#200004             0       9768       9768     00 [--------]
#200005             0       9768       9768     00 [--------]
#200006             0       9768       9768     00 [--------]
#200007             0       9768       9768     00 [--------]
#200008          9664       9768       9768     00 [--------]
  9. Looking at the libpod source, I found that when Podman creates a local volume (driver local), it creates a directory structure like this within the volume path (/mnt/data/containers/storage/volumes in this case):
/mnt/data/containers/storage/volumes/
└── testVol8/
    └── _data/
  • testVol8: This is the volume-name directory. Based on the podman volume create logs below, this is what the project quota is applied to.
DEBU[0000] Validating options for local driver
DEBU[0000] NewControl(/mnt/data/containers/storage/volumes): nextProjectID = 200008
DEBU[0000] Setting quota project ID 200008 on /mnt/data/containers/storage/volumes/testVol8
DEBU[0000] SetQuota path=/mnt/data/containers/storage/volumes/testVol8, size=10000000, inodes=0, projectID=200008
testVol8
DEBU[0000] Called create.PersistentPostRunE(podman --log-level=debug volume create --driver local --opt o=size=10M testVol8)
DEBU[0000] Shutting down engines
INFO[0000] Received shutdown.Stop(), terminating!        PID=142619
  • _data: This is a subdirectory inside the testVol8 directory. This is where the actual volume data is stored.

My Hypothesis:

  1. The XFS project quota is being applied by Podman to the volume's top-level directory (/mnt/data/containers/storage/volumes/testVol8), but the actual volume data is stored within the _data subdirectory (/mnt/data/containers/storage/volumes/testVol8/_data).
  2. Because the quota is not applied to the _data subdirectory or recursively enforced within testVol8, writing to _data bypasses the quota enforcement.

@Luap99 , @mheon
The most probable root cause is that the fix in the PR "Set quota on volume root directory, not _data" conflicts with the containers/storage quota management driver commit that guarantees only child files inherit the project ID, not subdirectories (stripProjectInherit strips the project inherit flag from a directory).
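
A possible way to check this on the host, using paths and the project ID from the logs above (xfs_io's lsproj prints the project ID of a file or directory):

  xfs_io -c "lsproj" /mnt/data/containers/storage/volumes/testVol8         # expected: projid = 200008
  xfs_io -c "lsproj" /mnt/data/containers/storage/volumes/testVol8/_data   # if this prints projid = 0, writes there are not counted against the quota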


mheon commented Feb 20, 2025

stripProjectInherit is only used for the top-level directory. The _data directories are not stripped of the inherit flag.


ak89224 commented Feb 21, 2025

stripProjectInherit is only used for the top-level directory. The _data directories are not stripped of the inherit flag.

Ah, I see. Yes, but this behavior is still unexpected!

Also, the direct write on the host confirms that, at the filesystem level, the XFS quota is only enforced on the volume-name directory, not on its child subdirectories (_data).

@mheon @Luap99
Could you please have a look at the logs and point out what could have gone wrong here during container creation? Snippets of the logs from the container run are below.

-------------- Logs below (relevant parts are marked with **) --------------

[root@fedora abhi]# podman --log-level=debug run --rm -v testVol8:/data busybox sh -c "dd if=/dev/zero of=/data/testfile bs=1M count=15"
INFO[0000] podman filtering at log level debug
**DEBU[0000] Called run.PersistentPreRunE(podman --log-level=debug run --rm -v testVol8:/data busybox sh -c dd if=/dev/zero of=/data/testfile bs=1M count=15)**
DEBU[0000] Using conmon: "/usr/bin/conmon"
INFO[0000] Using sqlite as database backend
**DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /mnt/data/containers/storage
DEBU[0000] Using run root /run/containers/storage
DEBU[0000] Using static dir /mnt/data/containers/storage/libpod
DEBU[0000] Using tmp dir /run/libpod
DEBU[0000] Using volume path /mnt/data/containers/storage/volumes**
DEBU[0000] Using transient store: false
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] overlay: imagestore=/usr/lib/containers/storage
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that metacopy is being used
**DEBU[0000] NewControl(/mnt/data/containers/storage/overlay): nextProjectID = 100001
DEBU[0000] Cached value indicated that native-diff is not being used
INFO[0000] Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled
DEBU[0000] backingFs=xfs, projectQuotaSupported=true, useNativeDiff=false, usingMetacopy=true**
DEBU[0000] Initializing event backend journald
DEBU[0000] Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
DEBU[0000] Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/crun"
INFO[0000] Setting parallel job count to 13
DEBU[0000] Pulling image busybox (policy: missing)
DEBU[0000] Looking up image "busybox" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] }
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/000-shortnames.conf"
DEBU[0000] Trying "docker.io/library/busybox:latest" ...
DEBU[0000] parsed reference into "[overlay@/mnt/data/containers/storage+/run/containers/storage:overlay.imagestore=/usr/lib/containers/storage,overlay.mountopt=nodev,metacopy=on]@af47096251092caf59498806ab8d58e8173ecf5a182f024ce9d635b5b4a55d66"
DEBU[0000] Found image "busybox" as "docker.io/library/busybox:latest" in local containers storage
DEBU[0000] Found image "busybox" as "docker.io/library/busybox:latest" in local containers storage ([overlay@/mnt/data/containers/storage+/run/containers/storage:overlay.imagestore=/usr/lib/containers/storage,overlay.mountopt=nodev,metacopy=on]@af47096251092caf59498806ab8d58e8173ecf5a182f024ce9d635b5b4a55d66)
DEBU[0000] exporting opaque data as blob "sha256:af47096251092caf59498806ab8d58e8173ecf5a182f024ce9d635b5b4a55d66"
DEBU[0000] Looking up image "docker.io/library/busybox:latest" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] }
DEBU[0000] Trying "docker.io/library/busybox:latest" ...
DEBU[0000] parsed reference into "[overlay@/mnt/data/containers/storage+/run/containers/storage:overlay.imagestore=/usr/lib/containers/storage,overlay.mountopt=nodev,metacopy=on]@af47096251092caf59498806ab8d58e8173ecf5a182f024ce9d635b5b4a55d66"
DEBU[0000] Found image "docker.io/library/busybox:latest" as "docker.io/library/busybox:latest" in local containers storage
**DEBU[0000] Found image "docker.io/library/busybox:latest" as "docker.io/library/busybox:latest" in local containers storage ([overlay@/mnt/data/containers/storage+/run/containers/storage:overlay.imagestore=/usr/lib/containers/storage,overlay.mountopt=nodev,metacopy=on]@af47096251092caf59498806ab8d58e8173ecf5a182f024ce9d635b5b4a55d66)
DEBU[0000] exporting opaque data as blob "sha256:af47096251092caf59498806ab8d58e8173ecf5a182f024ce9d635b5b4a55d66"
DEBU[0000] User mount testVol8:/data options []**
DEBU[0000] Looking up image "busybox" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] }
DEBU[0000] Trying "docker.io/library/busybox:latest" ...
DEBU[0000] parsed reference into "[overlay@/mnt/data/containers/storage+/run/containers/storage:overlay.imagestore=/usr/lib/containers/storage,overlay.mountopt=nodev,metacopy=on]@af47096251092caf59498806ab8d58e8173ecf5a182f024ce9d635b5b4a55d66"
DEBU[0000] Found image "busybox" as "docker.io/library/busybox:latest" in local containers storage
DEBU[0000] Found image "busybox" as "docker.io/library/busybox:latest" in local containers storage ([overlay@/mnt/data/containers/storage+/run/containers/storage:overlay.imagestore=/usr/lib/containers/storage,overlay.mountopt=nodev,metacopy=on]@af47096251092caf59498806ab8d58e8173ecf5a182f024ce9d635b5b4a55d66)
DEBU[0000] exporting opaque data as blob "sha256:af47096251092caf59498806ab8d58e8173ecf5a182f024ce9d635b5b4a55d66"
DEBU[0000] Inspecting image af47096251092caf59498806ab8d58e8173ecf5a182f024ce9d635b5b4a55d66
DEBU[0000] exporting opaque data as blob "sha256:af47096251092caf59498806ab8d58e8173ecf5a182f024ce9d635b5b4a55d66"
DEBU[0000] Inspecting image af47096251092caf59498806ab8d58e8173ecf5a182f024ce9d635b5b4a55d66
DEBU[0000] Inspecting image af47096251092caf59498806ab8d58e8173ecf5a182f024ce9d635b5b4a55d66
DEBU[0000] Inspecting image af47096251092caf59498806ab8d58e8173ecf5a182f024ce9d635b5b4a55d66
DEBU[0000] using systemd mode: false
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json"
DEBU[0000] Successfully loaded 1 networks
DEBU[0000] Allocated lock 8 for container b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb
DEBU[0000] exporting opaque data as blob "sha256:af47096251092caf59498806ab8d58e8173ecf5a182f024ce9d635b5b4a55d66"
**DEBU[0000] Cached value indicated that idmapped mounts for overlay are supported
DEBU[0000] Setting quota project ID 100001 on /mnt/data/containers/storage/overlay/2e33f3df776d5a10f3481b06d47abcd52ce6d12b94f3a152ee97201ca7a9d1ea
DEBU[0000] SetQuota path=/mnt/data/containers/storage/overlay/2e33f3df776d5a10f3481b06d47abcd52ce6d12b94f3a152ee97201ca7a9d1ea, size=0, inodes=0, projectID=100001
DEBU[0000] Created container "b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb"
DEBU[0000] Container "b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb" has work directory "/mnt/data/containers/storage/overlay-containers/b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb/userdata"
DEBU[0000] Container "b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb" has run directory "/run/containers/storage/overlay-containers/b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb/userdata"**
DEBU[0000] Not attaching to stdin
INFO[0000] Received shutdown.Stop(), terminating!        PID=142751
DEBU[0000] Enabling signal proxying
DEBU[0000] Cached value indicated that volatile is being used
**DEBU[0000] overlay: mount_data=lowerdir=/mnt/data/containers/storage/overlay/l/6YJOUZBC74AJBDOVC6MU2YUYKA,upperdir=/mnt/data/containers/storage/overlay/2e33f3df776d5a10f3481b06d47abcd52ce6d12b94f3a152ee97201ca7a9d1ea/diff,workdir=/mnt/data/containers/storage/overlay/2e33f3df776d5a10f3481b06d47abcd52ce6d12b94f3a152ee97201ca7a9d1ea/work,nodev,metacopy=on,volatile,context="system_u:object_r:container_file_t:s0:c632,c636"
DEBU[0000] Mounted container "b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb" at "/mnt/data/containers/storage/overlay/2e33f3df776d5a10f3481b06d47abcd52ce6d12b94f3a152ee97201ca7a9d1ea/merged"
DEBU[0000] Going to mount named volume testVol8
DEBU[0000] Copying up contents from container b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb to volume testVol8
DEBU[0000] Created root filesystem for container b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb at /mnt/data/containers/storage/overlay/2e33f3df776d5a10f3481b06d47abcd52ce6d12b94f3a152ee97201ca7a9d1ea/merged
DEBU[0000] Made network namespace at /run/netns/netns-20d8cf0e-ca95-3071-1d7e-e9a5deb85bbb for container b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb**
[DEBUG netavark::network::validation] Validating network namespace...
[DEBUG netavark::commands::setup] Setting up...
[INFO  netavark::firewall] Using nftables firewall driver
[DEBUG netavark::network::bridge] Setup network podman
[DEBUG netavark::network::bridge] Container interface name: eth0 with IP addresses [10.88.0.6/16]
[DEBUG netavark::network::bridge] Bridge name: podman0 with IP addresses [10.88.0.1/16]
[DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.ip_forward to 1
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv4/conf/podman0/rp_filter to 2
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv6/conf/eth0/autoconf to 0
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv4/conf/eth0/arp_notify to 1
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv4/conf/eth0/rp_filter to 2
[INFO  netavark::network::netlink] Adding route (dest: 0.0.0.0/0 ,gw: 10.88.0.1, metric 100)
[DEBUG netavark::firewall::nft] Matched Rule { family: INet, table: "netavark", chain: "POSTROUTING", expr: [Match(Match { left: BinaryOperation(AND(Named(Meta(Meta { key: Mark })), Number(8192))), right: Number(8192), op: EQ }), Masquerade(None)], handle: Some(11), index: None, comment: None }
[DEBUG netavark::firewall::nft] Matched Rule { family: INet, table: "netavark", chain: "NETAVARK-HOSTPORT-SETMARK", expr: [Mangle(Mangle { key: Named(Meta(Meta { key: Mark })), value: BinaryOperation(OR(Named(Meta(Meta { key: Mark })), Number(8192))) })], handle: Some(12), index: None, comment: None }
[DEBUG netavark::firewall::nft] Matched Rule { family: INet, table: "netavark", chain: "PREROUTING", expr: [Match(Match { left: Named(Fib(Fib { result: Type, flags: {Daddr} })), right: String("local"), op: EQ }), Jump(JumpTarget { target: "NETAVARK-HOSTPORT-DNAT" })], handle: Some(13), index: None, comment: None }
[DEBUG netavark::firewall::nft] Matched Rule { family: INet, table: "netavark", chain: "OUTPUT", expr: [Match(Match { left: Named(Fib(Fib { result: Type, flags: {Daddr} })), right: String("local"), op: EQ }), Jump(JumpTarget { target: "NETAVARK-HOSTPORT-DNAT" })], handle: Some(14), index: None, comment: None }
[DEBUG netavark::firewall::nft] Matched Rule { family: INet, table: "netavark", chain: "FORWARD", expr: [Match(Match { left: Named(CT(CT { key: "state", family: None, dir: None })), right: String("invalid"), op: IN }), Drop(None)], handle: Some(15), index: None, comment: None }
[DEBUG netavark::firewall::nft] Matched Rule { family: INet, table: "netavark", chain: "FORWARD", expr: [Jump(JumpTarget { target: "NETAVARK-ISOLATION-1" })], handle: Some(16), index: None, comment: None }
[DEBUG netavark::firewall::nft] Matched Rule { family: INet, table: "netavark", chain: "NETAVARK-ISOLATION-3", expr: [Jump(JumpTarget { target: "NETAVARK-ISOLATION-2" })], handle: Some(18), index: None, comment: None }
[DEBUG netavark::firewall::firewalld] Adding firewalld rules for network 10.88.0.0/16
[DEBUG netavark::firewall::firewalld] Adding subnet 10.88.0.0/16 to zone trusted as source
[INFO  netavark::firewall::nft] Creating container chain nv_2f259bab_10_88_0_0_nm16
[DEBUG netavark::commands::setup] {
        "podman": StatusBlock {
            dns_search_domains: Some(
                [],
            ),
            dns_server_ips: Some(
                [],
            ),
            interfaces: Some(
                {
                    "eth0": NetInterface {
                        mac_address: "06:39:f7:60:00:4a",
                        subnets: Some(
                            [
                                NetAddress {
                                    gateway: Some(
                                        10.88.0.1,
                                    ),
                                    ipnet: 10.88.0.6/16,
                                },
                            ],
                        ),
                    },
                },
            ),
        },
    }
[DEBUG netavark::commands::setup] Setup complete
DEBU[0000] /proc/sys/crypto/fips_enabled does not contain '1', not adding FIPS mode bind mounts
DEBU[0000] Setting Cgroups for container b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb to machine.slice:libpod:b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d
**DEBU[0000] Workdir "/" resolved to host path "/mnt/data/containers/storage/overlay/2e33f3df776d5a10f3481b06d47abcd52ce6d12b94f3a152ee97201ca7a9d1ea/merged"
DEBU[0000] Created OCI spec for container b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb at /mnt/data/containers/storage/overlay-containers/b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb/userdata/config.json**
DEBU[0000] /usr/bin/conmon messages will be logged to syslog
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb -u b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb -r /usr/bin/crun -b /mnt/data/containers/storage/overlay-containers/b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb/userdata -p /run/containers/storage/overlay-containers/b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb/userdata/pidfile -n silly_beaver --exit-dir /run/libpod/exits --persist-dir /run/libpod/persist/b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb --full-attach -s -l journald --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /mnt/data/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /mnt/data/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.imagestore=/usr/lib/containers/storage --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --stopped-only --exit-command-arg --rm --exit-command-arg b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb]"
INFO[0000] Running conmon under slice machine.slice and unitName libpod-conmon-b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb.scope
DEBU[0001] Received: 142797
INFO[0001] Got Conmon PID as 142790
DEBU[0001] Created container b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb in OCI runtime
DEBU[0001] Adding nameserver(s) from network status of '[]'
DEBU[0001] Adding search domain(s) from network status of '[]'
DEBU[0001] found local resolver, using "/run/systemd/resolve/resolv.conf" to get the nameservers
**DEBU[0001] Attaching to container b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb
DEBU[0001] Starting container b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb with command [sh -c dd if=/dev/zero of=/data/testfile bs=1M count=15]
DEBU[0001] Started container b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb
DEBU[0001] Notify sent successfully
15+0 records in
15+0 records out
15728640 bytes (15.0MB) copied, 0.207084 seconds, 72.4MB/s
DEBU[0002] Checking if container b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb should restart**
DEBU[0002] Removing container b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb
DEBU[0002] Cleaning up container b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb
DEBU[0002] Tearing down network namespace at /run/netns/netns-20d8cf0e-ca95-3071-1d7e-e9a5deb85bbb for container b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb
[DEBUG netavark::commands::teardown] Tearing down..
[INFO  netavark::firewall] Using nftables firewall driver
[INFO  netavark::network::bridge] removing bridge podman0
[DEBUG netavark::firewall::nft] Matched Rule { family: INet, table: "netavark", chain: "INPUT", expr: [Match(Match { left: Named(Payload(PayloadField(PayloadField { protocol: "ip", field: "saddr" }))), right: Named(Prefix(Prefix { addr: String("10.88.0.0"), len: 16 })), op: EQ }), Match(Match { left: Named(Meta(Meta { key: L4proto })), right: Named(Set([Element(String("tcp")), Element(String("udp"))])), op: EQ }), Match(Match { left: Named(Payload(PayloadField(PayloadField { protocol: "th", field: "dport" }))), right: Number(53), op: EQ }), Accept(None)], handle: Some(59), index: None, comment: None }
[DEBUG netavark::firewall::nft] Matched Rule { family: INet, table: "netavark", chain: "FORWARD", expr: [Match(Match { left: Named(Payload(PayloadField(PayloadField { protocol: "ip", field: "daddr" }))), right: Named(Prefix(Prefix { addr: String("10.88.0.0"), len: 16 })), op: EQ }), Match(Match { left: Named(CT(CT { key: "state", family: None, dir: None })), right: List([String("established"), String("related")]), op: IN }), Accept(None)], handle: Some(60), index: None, comment: None }
[DEBUG netavark::firewall::nft] Matched Rule { family: INet, table: "netavark", chain: "FORWARD", expr: [Match(Match { left: Named(Payload(PayloadField(PayloadField { protocol: "ip", field: "saddr" }))), right: Named(Prefix(Prefix { addr: String("10.88.0.0"), len: 16 })), op: EQ }), Accept(None)], handle: Some(61), index: None, comment: None }
[DEBUG netavark::firewall::nft] Matched Rule { family: INet, table: "netavark", chain: "POSTROUTING", expr: [Match(Match { left: Named(Payload(PayloadField(PayloadField { protocol: "ip", field: "saddr" }))), right: Named(Prefix(Prefix { addr: String("10.88.0.0"), len: 16 })), op: EQ }), Jump(JumpTarget { target: "nv_2f259bab_10_88_0_0_nm16" })], handle: Some(62), index: None, comment: None }
[DEBUG netavark::firewall::nft] Removing 4 rules
[DEBUG netavark::firewall::nft] Found chain nv_2f259bab_10_88_0_0_nm16
[DEBUG netavark::firewall::firewalld] Removing firewalld rules for IPs 10.88.0.0/16
[DEBUG netavark::firewall::nft] Matched Rule { family: INet, table: "netavark", chain: "NETAVARK-ISOLATION-3", expr: [Match(Match { left: Named(Meta(Meta { key: Oifname })), right: String("podman0"), op: EQ }), Drop(None)], handle: Some(54), index: None, comment: None }
[DEBUG netavark::firewall::nft] Removing 1 isolation rules for network
[DEBUG netavark::commands::teardown] Teardown complete
DEBU[0002] Successfully cleaned up container b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb
DEBU[0002] Unmounted container "b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb"
DEBU[0002] Removing all exec sessions for container b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb
DEBU[0002] Container b569e996534b169f805d357959e2b0cb113d9067a4e13766ccb05fb8ca0bfeeb storage is already unmounted, skipping...
DEBU[0002] Called run.PersistentPostRunE(podman --log-level=debug run --rm -v testVol8:/data busybox sh -c dd if=/dev/zero of=/data/testfile bs=1M count=15)
DEBU[0002] Shutting down engines
