
dataset and zpool size major mismatch + temporary data loss - yet zpool has no errors #17224

Open · NearSquire opened this issue Apr 7, 2025 · 0 comments
Labels: Type: Defect (incorrect behavior, e.g. crash, hang)

System information

Distribution Name: Debian
Distribution Version: 12
Kernel Version: 6.1.0-32-amd64
Architecture: x86_64
OpenZFS Version: 2.3.1
Processor: AMD GX-424
Memory: 16 GB non-ECC

Describe the problem you're observing

Dataset and pool sizes don't match at all.
There is no USED data in the dataset, while the zpool reports 60+ GB of USED data.
Files were also temporarily lost after a zfs send (syncoid).

I discovered this anomaly when I tried to do an initial syncoid send of the encrypted dataset using the -w (raw send) option.

It finished immediately, despite there being what should have been 60+ GB of data.
When I then returned to check mymirror/userdata, it was empty, to my alarm.
I know the data was there beforehand; I had checked with ls.
No errors were reported on the pool, and the key was loaded. The dataset showed a very small size, as you can see below.
I decided to try a scrub anyway, but it did not help, since according to the zpool there was nothing to repair. To avoid any further rot I shut down the NAS.
Today, after returning, the zpool size and dataset size still do not match, but I could see my data again, so I backed it up to another disk with a normal cp.

In the data I collected, please check out the following (a sketch of additional cross-check commands follows the list):

  • the zfs list -t all output for the pool mymirror: the pool is using 63.7 GB of data, but the dataset mymirror/userdata is using only 148 K.
  • the du output from the dataset's mountpoint (after the restart): 66824002 (1 KiB blocks by default, i.e. roughly the 63.7 GB the pool reports).
  • zpool status says the pool is healthy; there were no errors before the last scrub either.
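
For completeness, here is a sketch of the cross-checks I would run to narrow down where the space is actually being charged. I did not capture these at the time, so they are only a suggestion, using the dataset names from my setup:

# Is the child dataset actually mounted, and which dataset accounts for the space?
zfs list -r -o name,used,usedbydataset,usedbysnapshots,refer,mounted,mountpoint mymirror

# Compare what the filesystem layer reports at each mountpoint with the dataset accounting
df -h /mymirror /mymirror/userdata
du -sh /mymirror/userdata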

Describe how to reproduce the problem

I don't know. I have included the history of the pool below.
Maybe syncoid using --sendoptions=w had something to do with it, maybe not; a sketch of the invocation follows.
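
For reference, the send was started roughly like this. The target host and pool/dataset names are placeholders, not my exact values; only the source dataset and the raw-send option are as described above:

syncoid --sendoptions=w mymirror/userdata root@backuphost:backuppool/userdata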

Include any warning/errors/backtraces from the system logs

jan@igel340:/mymirror/userdata$ sudo zfs --version
zfs-2.3.1-1~bpo12+1
zfs-kmod-2.3.1-1~bpo12+1
jan@igel340:/mymirror/userdata$ uname -a
Linux igel340 6.1.0-32-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.129-1 (2025-03-06) x86_64 GNU/Linux
jan@igel340:/mymirror/userdata$ cat /etc/os-release 
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
jan@igel340:/mymirror/userdata$ zfs list -t all
NAME                                                             USED  AVAIL  REFER  MOUNTPOINT
mymirror                                                        63.7G   704G  63.7G  /mymirror
mymirror/userdata                                                148K   704G    98K  /mymirror/userdata
mymirror/userdata@autosnap_2025-04-01_08:12:20_yearly              0B      -    98K  -
mymirror/userdata@autosnap_2025-04-01_08:12:20_monthly             0B      -    98K  -
mymirror/userdata@autosnap_2025-04-01_08:12:20_daily               0B      -    98K  -
mymirror/userdata@autosnap_2025-04-02_08:05:37_daily               0B      -    98K  -
mymirror/userdata@autosnap_2025-04-03_00:00:07_daily               0B      -    98K  -
mymirror/userdata@autosnap_2025-04-04_00:00:13_daily               0B      -    98K  -
mymirror/userdata@autosnap_2025-04-05_00:00:13_daily               0B      -    98K  -
mymirror/userdata@autosnap_2025-04-06_00:00:12_daily               0B      -    98K  -
mymirror/userdata@syncoid_igel340_2025-04-06:09:42:20-GMT01:00     0B      -    98K  -
mymirror/userdata@autosnap_2025-04-07_10:54:32_daily               0B      -    98K  -
mystripe                                                        5.53T  65.5G    24K  /mystripe
mystripe/data                                                   2.49T  65.5G  2.33T  /mystripe/data
...
jan@igel340:/mymirror/userdata$ sudo zpool status mymirror -v
  pool: mymirror
 state: ONLINE
  scan: scrub repaired 0B in 00:07:29 with 0 errors on Sun Apr  6 11:36:19 2025
config:

	NAME         STATE     READ WRITE CKSUM
	mymirror     ONLINE       0     0     0
	  mirror-0   ONLINE       0     0     0
	    mirror0  ONLINE       0     0     0
	    mirror1  ONLINE       0     0     0

errors: No known data errors


jan@igel340:/mymirror/userdata$ du
.....
66819420	./jan
66824002	.
jan@igel340:/mymirror/userdata$ 
jan@igel340:/mymirror/userdata$ sudo zpool history  mymirror
History for 'mymirror':
2025-03-07.19:57:27 zpool create mymirror mirror -f /dev/disk/by-partlabel/mirror0 /dev/disk/by-partlabel/mirror1
2025-03-07.20:14:59 zfs create -o encryption=on -o compression=lz4 -o keyformat=passphrase mymirror/userdata
2025-03-09.00:24:02 zpool scrub mymirror
2025-03-31.18:18:17 zpool export mymirror
2025-03-31.18:26:08 zpool import mymirror
2025-04-01.09:12:20 zfs snapshot mymirror/userdata@autosnap_2025-04-01_08:12:20_yearly
2025-04-01.09:12:20 zfs snapshot mymirror/userdata@autosnap_2025-04-01_08:12:20_monthly
2025-04-01.09:12:20 zfs snapshot mymirror/userdata@autosnap_2025-04-01_08:12:20_daily
2025-04-01.16:25:50 zfs snapshot mymirror/userdata@syncoid_igel340_2025-04-01:16:25:50-GMT01:00
2025-04-01.18:03:33 zfs snapshot mymirror/userdata@syncoid_igel340_2025-04-01:18:03:33-GMT01:00
2025-04-01.18:03:56 zfs snapshot mymirror/userdata@syncoid_igel340_2025-04-01:18:03:56-GMT01:00
2025-04-02.09:05:37 zfs snapshot mymirror/userdata@autosnap_2025-04-02_08:05:37_daily
2025-04-03.01:00:07 zfs snapshot mymirror/userdata@autosnap_2025-04-03_00:00:07_daily
2025-04-04.01:00:13 zfs snapshot mymirror/userdata@autosnap_2025-04-04_00:00:13_daily
2025-04-05.01:00:13 zfs snapshot mymirror/userdata@autosnap_2025-04-05_00:00:13_daily
2025-04-06.01:00:12 zfs snapshot mymirror/userdata@autosnap_2025-04-06_00:00:12_daily
2025-04-06.09:14:02 zfs snapshot mymirror/userdata@syncoid_igel340_2025-04-06:09:14:02-GMT01:00
2025-04-06.09:15:00 zfs snapshot mymirror/userdata@syncoid_igel340_2025-04-06:09:15:00-GMT01:00
2025-04-06.09:17:18 zfs load-key mymirror/userdata
2025-04-06.09:17:23 zfs snapshot mymirror/userdata@syncoid_igel340_2025-04-06:09:17:23-GMT01:00
2025-04-06.09:17:25 zfs destroy mymirror/userdata@syncoid_igel340_2025-04-01:18:03:33-GMT01:00
2025-04-06.09:17:25 zfs destroy mymirror/userdata@syncoid_igel340_2025-04-06:09:14:02-GMT01:00
2025-04-06.09:17:25 zfs destroy mymirror/userdata@syncoid_igel340_2025-04-01:16:25:50-GMT01:00
2025-04-06.09:17:25 zfs destroy mymirror/userdata@syncoid_igel340_2025-04-06:09:15:00-GMT01:00
2025-04-06.09:17:25 zfs destroy mymirror/userdata@syncoid_igel340_2025-04-01:18:03:56-GMT01:00
2025-04-06.09:19:05 zfs snapshot mymirror/userdata@syncoid_igel340_2025-04-06:09:19:05-GMT01:00
2025-04-06.09:19:06 zfs destroy mymirror/userdata@syncoid_igel340_2025-04-06:09:17:23-GMT01:00
2025-04-06.09:20:49 zfs snapshot mymirror/userdata@syncoid_igel340_2025-04-06:09:20:49-GMT01:00
2025-04-06.09:20:50 zfs destroy mymirror/userdata@syncoid_igel340_2025-04-06:09:19:05-GMT01:00
2025-04-06.09:27:39 zfs snapshot mymirror/userdata@syncoid_igel340_2025-04-06:09:27:39-GMT01:00
2025-04-06.09:27:40 zfs destroy mymirror/userdata@syncoid_igel340_2025-04-06:09:20:49-GMT01:00
2025-04-06.09:31:57 zfs snapshot mymirror/userdata@syncoid_igel340_2025-04-06:09:31:56-GMT01:00
2025-04-06.09:34:31 zfs snapshot mymirror/userdata@syncoid_igel340_2025-04-06:09:34:30-GMT01:00
2025-04-06.09:34:32 zfs destroy mymirror/userdata@syncoid_igel340_2025-04-06:09:31:56-GMT01:00
2025-04-06.09:34:32 zfs destroy mymirror/userdata@syncoid_igel340_2025-04-06:09:27:39-GMT01:00
2025-04-06.09:38:59 zfs snapshot mymirror/userdata@syncoid_igel340_2025-04-06:09:38:59-GMT01:00
2025-04-06.09:39:00 zfs destroy mymirror/userdata@syncoid_igel340_2025-04-06:09:34:30-GMT01:00
2025-04-06.09:42:20 zfs snapshot mymirror/userdata@syncoid_igel340_2025-04-06:09:42:20-GMT01:00
2025-04-06.09:42:20 zfs send -I mymirror/userdata@syncoid_igel340_2025-04-06:09:38:59-GMT01:00 mymirror/userdata@syncoid_igel340_2025-04-06:09:42:20-GMT01:00
2025-04-06.09:42:21 zfs destroy mymirror/userdata@syncoid_igel340_2025-04-06:09:38:59-GMT01:00
2025-04-06.11:29:00 zpool scrub mymirror
2025-04-07.11:54:32 zfs snapshot mymirror/userdata@autosnap_2025-04-07_10:54:32_daily
2025-04-07.12:32:32 zpool upgrade mymirror