[CentOS] ZFS on Linux in production?

Lists lists at benjamindsmith.com
Fri Oct 25 17:33:04 UTC 2013


On 10/24/2013 11:18 PM, Warren Young wrote:
> - vdev, which is a virtual device, something like a software RAID.  It is one or more disks, configured together, typically with some form of redundancy.
>
> - pool, which is one or more vdevs, which has a capacity equal to all of its vdevs added together.

Thanks for the clarification of terms.
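So if I follow, the layout I'm after would be built as one pool of two independent mirror vdevs. A sketch of my understanding (pool name `tank` and the /dev/sdX device names are placeholders, not our actual hardware):

```shell
# Two mirror vdevs in one pool: capacities add across vdevs,
# while each vdev is limited by its smallest member disk.
#   mirror 1: A1 <-> A2 (2x 1 TB -> 1 TB usable)
#   mirror 2: B1 <-> B2 (2x 2 TB -> 2 TB usable)
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

# Verify the resulting layout.
zpool status tank
```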

> You would have 3 TB *if* you configured these disks as two separate vdevs.
>
> If you tossed all four disks into a single vdev, you could have only 2 TB because the smallest disk in a vdev limits the total capacity.
>
> (This is yet another way ZFS isn't like a Drobo[*], despite the fact that a lot of people hype it as if it were the same thing.)

Two separate vdevs is pretty much what I was after. Drobo: another 
interesting option :)

>> Are you suggesting we add a couple of
>> 4 TB drives:
>>
>> A1 <-> A2 = 2x 1TB drives, 1 TB redundant storage.
>> B1 <-> B2 = 2x 2TB drives, 2 TB redundant storage.
>> C1 <-> C2 = 2x 4TB drives, 4 TB redundant storage.
>>
>> Then wait until ZFS moves A1/A2 over to C1/C2 before removing A1/A2? If
>> so, that's capability I'm looking for.
> No.  ZFS doesn't let you remove a vdev from a pool once it's been added, without destroying the pool.
>
> The supported method is to add disks C1 and C2 to the *A* vdev, then tell ZFS that C1 replaces A1, and C2 replaces A2.  The filesystem will then proceed to migrate the blocks in that vdev from the A disks to the C disks. (I don't remember if ZFS can actually do both in parallel.)
>
> Hours later, when that replacement operation completes, you can kick disks A1 and A2 out of the vdev, then physically remove them from the machine at your leisure.  Finally, you tell ZFS to expand the vdev.
>
> (There's an auto-expand flag you can set, so that last step can happen automatically.)
>
> If you're not seeing the distinction, it is that there never were 3 vdevs at any point during this upgrade.  The two C disks are in the A vdev, which never went away.

I see the distinction between vdevs and block devices. Still, the process 
you outline is *exactly* the capability I'm looking for, semantics aside.
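If I've read the procedure right, it would go something like the sketch below (device names hypothetical; this is my reading of the zpool man page, not something I've run yet):

```shell
# Optional: let the vdev grow automatically once both replacements finish.
zpool set autoexpand=on tank

# Tell ZFS that C1 replaces A1 and C2 replaces A2 within the A mirror.
# Each old disk is detached automatically when its resilver completes.
zpool replace tank /dev/sda /dev/sde
zpool replace tank /dev/sdb /dev/sdf

# Watch resilver progress; pull the A disks once they drop out of status.
zpool status tank

# If autoexpand was off, expand onto the new disks' capacity manually.
zpool online -e tank /dev/sde /dev/sdf
```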

> Yes, implicit in my comments was that you were using XFS or ext4 with some sort of RAID (Linux md RAID or hardware) and Linux's LVM2.
>
> You can use XFS and ext4 without RAID and LVM, but if you're going to compare to ZFS, you can't fairly ignore these features just because it makes ZFS look better.

I've had good results with Linux's software RAID + ext[2-4].  For example, 
I *love* that in a worst-case scenario you can mount a RAID 1 member 
drive directly. LVM2, on the other hand, complicates administration 
terribly. The widely touted, simplified administration of ZFS is quite 
attractive to me.
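To spell out that worst-case trick (device and mount point are placeholders; note this depends on where the md metadata lives):

```shell
# An md RAID 1 member created with metadata at the END of the device
# (--metadata=0.90 or 1.0) is a plain filesystem image, so it can be
# mounted read-only without assembling the array at all.
# With the newer default 1.2 metadata (stored near the START of the
# member) this does NOT work -- the array must be assembled first.
mount -o ro /dev/sdb1 /mnt/rescue
```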

I'm just trying to find the best tool for the job. That may well end up 
being Drobo!
