mark wrote:
Hi, folks,
I'm testing zfs. I'd created a raidz2 zpool and ran a large backup onto it. Then I pulled one drive (it's an 11-drive pool with one hot spare), and it resilvered onto the hot spare. zpool status -x shows me:

 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-4J
  scan: resilvered 1.91T in 29h33m with 0 errors on Tue Jun 11 15:45:59 2019
config:
        NAME           STATE     READ WRITE CKSUM
        export1        DEGRADED      0     0     0
          raidz2-0     DEGRADED      0     0     0
            sda        ONLINE        0     0     0
            spare-1    DEGRADED      0     0     0
              sdb      UNAVAIL       0     0     0
              sdl      ONLINE        0     0     0
            sdc        ONLINE        0     0     0
            sdd        ONLINE        0     0     0
            sde        ONLINE        0     0     0
            sdf        ONLINE        0     0     0
            sdg        ONLINE        0     0     0
            sdh        ONLINE        0     0     0
            sdi        ONLINE        0     0     0
            sdj        ONLINE        0     0     0
            sdk        ONLINE        0     0     0
        spares
          sdl          INUSE     currently in use
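For context, the layout is roughly what you'd get from a create command along these lines (a sketch from memory, not my exact command; I'm assuming whole-disk names and no extra options):

    # 11-disk raidz2 vdev plus sdl as a hot spare, matching the config above
    zpool create export1 raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk spare sdl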
But when I try 'zpool replace export1 /dev/sdb1', it says nope:

    invalid vdev specification
    use '-f' to override the following errors:
    /dev/sdb1 is part of active pool 'export1'
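If I have the command right, one sanity check is to dump the ZFS labels on the disk, to see whether it really does still carry this pool's metadata (device name taken from my replace attempt above):

    # print any vdev labels zdb finds on the partition; if they name export1,
    # the disk is the same member the pool already knows about
    zdb -l /dev/sdb1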
Any idea what I'm doing wrong?
Never mind. More googling with different search terms showed me that, in this case, I had to use 'zpool online export1 /dev/sdb1'.
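So, roughly, the sequence that got me back to a healthy pool (my pool and device names; the detach step is my understanding of how to return the spare if it doesn't go back to AVAIL on its own):

    zpool online export1 /dev/sdb1   # tell ZFS the original disk is back
    zpool status export1             # watch the resilver onto sdb complete
    zpool detach export1 sdl         # then return the hot spare to the spares list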
I would have thought that zfs would understand this automatically, and not need me to tell it, but....
mark