Ticket #4606 (new defect)
Wrong information on copy/move window on zfs with deduplication on
| Reported by: | AdamK | Owned by: | |
|---|---|---|---|
| Priority: | major | Milestone: | Future Releases |
| Component: | mc-core | Version: | 4.8.29 |
| Keywords: | | Cc: | |
| Blocked By: | | Blocking: | |
| Branch state: | no branch | Votes for changeset: | |
Description (last modified by zaytsev)
I've been copying and moving data inside a single ZFS pool.
The pool was configured with deduplication (fast dedup).
The information in the copy/move dialog (Total, ETA, per-file progress bar) was totally wrong: Total always showed 0 / X (X was correct), ETA always showed 1s left, and the per-file progress bar showed nothing.
$ LC_MESSAGES=C mc -F
Home directory: /mnt/cs01/home_adamk
Profile root directory: /mnt/cs01/home_adamk

[System data]
    Config directory: /etc/mc/
    Data directory: /usr/share/mc/
    File extension handlers: /usr/lib/mc/ext.d/
    VFS plugins and scripts: /usr/lib/mc/
        extfs.d: /usr/lib/mc/extfs.d/
        fish: /usr/lib/mc/fish/

[User data]
    Config directory: /mnt/cs01/home_adamk/.config/mc/
    Data directory: /mnt/cs01/home_adamk/.local/share/mc/
        skins: /mnt/cs01/home_adamk/.local/share/mc/skins/
        extfs.d: /mnt/cs01/home_adamk/.local/share/mc/extfs.d/
        fish: /mnt/cs01/home_adamk/.local/share/mc/fish/
        mcedit macros: /mnt/cs01/home_adamk/.local/share/mc/mc.macros
        mcedit external macros: /mnt/cs01/home_adamk/.local/share/mc/mcedit/macros.d/macro.*
    Cache directory: /mnt/cs01/home_adamk/.cache/mc/

$ mc --configure-options
'--build=x86_64-linux-gnu' '--prefix=/usr' '--includedir=${prefix}/include' '--mandir=${prefix}/share/man' '--infodir=${prefix}/share/info' '--sysconfdir=/etc' '--localstatedir=/var' '--disable-option-checking' '--libdir=${prefix}/lib/x86_64-linux-gnu' '--runstatedir=/run' '--disable-maintainer-mode' '--disable-dependency-tracking' 'AWK=awk' 'X11_WWW=x-www-browser' '--libexecdir=/usr/lib' '--with-x' '--with-screen=slang' '--disable-rpath' '--disable-static' '--disable-silent-rules' '--enable-aspell' '--enable-vfs-sftp' '--enable-vfs-undelfs' '--enable-tests' 'build_alias=x86_64-linux-gnu' 'CFLAGS=-g -O2 -ffile-prefix-map=/build/mc-3Uz4Lz/mc-4.8.29=. -fstack-protector-strong -Wformat -Werror=format-security' 'LDFLAGS=-Wl,-z,relro -Wl,-z,now -Wl,--as-needed' 'CPPFLAGS=-Wdate-time -D_FORTIFY_SOURCE=2'
Attachments
Change History
comment:1 Changed 6 weeks ago by zaytsev
- Version changed from master to 4.8.29
- Description modified (diff)
comment:2 Changed 6 weeks ago by zaytsev
We don't have any systems with ZFS to try. What was the host system? It seems that you are using 4.8.29 and not master like you indicated. Just a few days ago this code was re-written in master. Can you try master and confirm whether the new version works better or not?
comment:3 Changed 6 weeks ago by AdamK
Host system is Linux truenas 6.6.44-production+truenas #1 SMP PREEMPT_DYNAMIC Fri Nov 8 18:37:36 UTC 2024 x86_64 GNU/Linux
TrueNAS is based on Debian but pretty limited in customization (package management is disabled, no compiler), so I can't try mc master.
comment:4 Changed 6 weeks ago by zaytsev
Okay, so this is ZoL, which further complicates matters. I have ZFS on Solaris, but so far I haven't noticed anything like that. Is there an easy way to reproduce the issue?
Maybe you can do a static build on a different system and then copy it to TrueNAS... We can keep this ticket open in case someone else wants to try to investigate, but at the moment it's not even clear whether the problem still exists with the latest code.
comment:5 follow-up: ↓ 8 Changed 6 weeks ago by AdamK
I can do a static build if instructed how.
A screenshot showing the issue is attached. Total shows 0, but in fact most of the data (more than half by size) has been copied; ETA shows 1s left to finish (and has been showing that the whole time).
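For reference, a static(-ish) build could be attempted on any Debian-like machine and the resulting binary copied over. This is only a hedged sketch: the repository URL and flags are assumptions, and fully static linking against GLib/S-Lang often needs extra work.

```shell
# Hypothetical static-build attempt on a separate build host (needs git,
# autotools, and the mc build dependencies installed).
git clone https://github.com/MidnightCommander/mc.git
cd mc
./autogen.sh                        # generates ./configure
./configure --prefix="$HOME/mc-static" --with-screen=slang LDFLAGS="-static"
make -j"$(nproc)" && make install
# Then copy $HOME/mc-static/bin/mc to the TrueNAS box, e.g. with scp.
```

If `-static` fails to link because of missing static libraries, a dynamically linked binary built against old enough system libraries may also run on TrueNAS.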
comment:6 Changed 6 weeks ago by zaytsev
But how can we reproduce that? Is there a way to create a small ZFS volume and copy some files to see this? Which commands does one have to use?
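In case it helps reproduction attempts: a small file-backed pool with dedup and LZ4 enabled can be created along these lines. This is a hedged sketch; the pool name, backing-file path, and sizes are made up, and it requires root plus an OpenZFS installation.

```shell
# Illustrative repro setup -- names, paths, and sizes are hypothetical.
truncate -s 2G /tmp/zpool-test.img           # sparse backing file for the vdev
zpool create testpool /tmp/zpool-test.img    # small file-backed test pool
zfs set dedup=on testpool                    # deduplication, as on the reporter's pool
zfs set compression=lz4 testpool             # LZ4 compression, per comment:7
dd if=/dev/urandom of=/testpool/test.bin bs=1M count=512   # incompressible test data
# ... copy /testpool/test.bin elsewhere within the pool in mc and watch the dialog ...
zpool destroy testpool                       # clean up
rm /tmp/zpool-test.img
```

Whether such a small pool exercises the fast_dedup code path depends on the OpenZFS version; on recent releases the feature is enabled by default on new pools.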
comment:7 Changed 6 weeks ago by AdamK
I do not know whether any ZFS volume options are important. This is how it is set up:
NAME  PROPERTY                       VALUE                  SOURCE
cs01  size                           50.9T                  -
cs01  capacity                       86%                    -
cs01  altroot                        /mnt                   local
cs01  health                         ONLINE                 -
cs01  guid                           6910343543503356744    -
cs01  version                        -                      default
cs01  bootfs                         -                      default
cs01  delegation                     on                     default
cs01  autoreplace                    off                    default
cs01  cachefile                      /data/zfs/zpool.cache  local
cs01  failmode                       continue               local
cs01  listsnapshots                  off                    default
cs01  autoexpand                     on                     local
cs01  dedupratio                     1.92x                  -
cs01  free                           6.98T                  -
cs01  allocated                      43.9T                  -
cs01  readonly                       off                    -
cs01  ashift                         12                     local
cs01  comment                        -                      default
cs01  expandsize                     -                      -
cs01  freeing                        0                      -
cs01  fragmentation                  14%                    -
cs01  leaked                         0                      -
cs01  multihost                      off                    default
cs01  checkpoint                     -                      -
cs01  load_guid                      17775713548141664304   -
cs01  autotrim                       on                     local
cs01  compatibility                  off                    default
cs01  bcloneused                     24.5T                  -
cs01  bclonesaved                    24.5T                  -
cs01  bcloneratio                    2.00x                  -
cs01  dedup_table_size               161M                   -
cs01  dedup_table_quota              auto                   default
cs01  feature@async_destroy          enabled                local
cs01  feature@empty_bpobj            active                 local
cs01  feature@lz4_compress           active                 local
cs01  feature@multi_vdev_crash_dump  enabled                local
cs01  feature@spacemap_histogram     active                 local
cs01  feature@enabled_txg            active                 local
cs01  feature@hole_birth             active                 local
cs01  feature@extensible_dataset     active                 local
cs01  feature@embedded_data          active                 local
cs01  feature@bookmarks              enabled                local
cs01  feature@filesystem_limits      enabled                local
cs01  feature@large_blocks           enabled                local
cs01  feature@large_dnode            enabled                local
cs01  feature@sha512                 enabled                local
cs01  feature@skein                  enabled                local
cs01  feature@edonr                  enabled                local
cs01  feature@userobj_accounting     active                 local
cs01  feature@encryption             enabled                local
cs01  feature@project_quota          active                 local
cs01  feature@device_removal         enabled                local
cs01  feature@obsolete_counts        enabled                local
cs01  feature@zpool_checkpoint       enabled                local
cs01  feature@spacemap_v2            active                 local
cs01  feature@allocation_classes     enabled                local
cs01  feature@resilver_defer         enabled                local
cs01  feature@bookmark_v2            enabled                local
cs01  feature@redaction_bookmarks    enabled                local
cs01  feature@redacted_datasets      enabled                local
cs01  feature@bookmark_written       enabled                local
cs01  feature@log_spacemap           active                 local
cs01  feature@livelist               enabled                local
cs01  feature@device_rebuild         enabled                local
cs01  feature@zstd_compress          enabled                local
cs01  feature@draid                  enabled                local
cs01  feature@zilsaxattr             active                 local
cs01  feature@head_errlog            active                 local
cs01  feature@blake3                 enabled                local
cs01  feature@block_cloning          active                 local
cs01  feature@vdev_zaps_v2           active                 local
cs01  feature@redaction_list_spill   enabled                local
cs01  feature@raidz_expansion        active                 local
cs01  feature@fast_dedup             active                 local
These options are the defaults when creating a ZFS pool in TrueNAS, plus deduplication and fast_dedup. Compression is set to LZ4; I do not remember whether that's the default.
I'm copying and moving data within a single dataset of that pool.
comment:8 in reply to: ↑ 5 Changed 6 weeks ago by andrew_b
Replying to AdamK:
A screenshot showing the issue is attached. Total shows 0, but in fact most of the data (more than half by size) has been copied; ETA shows 1s left to finish (and has been showing that the whole time).
You're copying 21T (21*2^40) bytes. If you copy fewer than 21T (say, 21G) bytes, is the progress info correct? If yes, could you please try to find the boundary where the info becomes wrong?
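One way to search for such a boundary is to prepare test files at power-of-two sizes and copy each one in mc, noting where the dialog first misbehaves. A hedged sketch (the directory and file names are made up; run it inside the affected dataset):

```shell
# Hypothetical boundary-search helper: create sparse files at
# power-of-two sizes, then copy each with mc and note where the
# progress information first breaks.
set -e
testdir=boundary-test            # made-up name; place it inside the affected dataset
mkdir -p "$testdir"
for exp in 30 31 32 33 34; do    # 1 GiB, 2 GiB, 4 GiB, 8 GiB, 16 GiB
    truncate -s "$((1 << exp))" "$testdir/test-2e$exp.bin"
done
ls -l "$testdir"
```

Sparse files cost almost no real space; if deduplication of all-zero blocks skews the test, the files can instead be filled from /dev/urandom with dd. Sizes around 2^31 and 2^32 bytes would be the usual suspects if some counter were kept in a 32-bit type.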
comment:10 Changed 6 weeks ago by AdamK
I'm now copying 1.7TB; same effect. I may try to find whether there is a boundary after I'm done with some maintenance tasks.
$ mc --version
GNU Midnight Commander 4.8.29
Built with GLib 2.74.5
Built with S-Lang 2.3.3 with terminfo database
Built with libssh2 1.10.0
With builtin Editor and Aspell support
With subshell support as default
With support for background operations
With mouse support on xterm and Linux console
With support for X11 events
With internationalization support
With multiple codepages support
With ext2fs attributes support
Virtual File Systems: cpiofs, tarfs, sfs, extfs, ext2undelfs, ftpfs, sftpfs, fish
Data types: char: 8; int: 32; long: 64; void *: 64; size_t: 64; off_t: 64;
comment:11 Changed 6 weeks ago by AdamK
I've just done some smaller copy operations (11GB, 1GB, etc.) and it looks the same.