What's the right way to set up >2TB partitions for raid1 autoassembly? I don't need to boot from this but I'd like it to come up and mount automatically at boot.
On 10/12/11 8:48 AM, Les Mikesell wrote:
What's the right way to set up >2TB partitions for raid1 autoassembly? I don't need to boot from this but I'd like it to come up and mount automatically at boot.
Disks larger than 2TB have to be partitioned as GPT rather than MBR, so old MBR tools like fdisk are useless; use parted.
I would build an LVM volume group from the multi-terabyte volumes using LVM mirroring, then create an XFS filesystem on a logical volume in that VG. XFS seems 100% stable and high-performance on CentOS 6.
On Wed, Oct 12, 2011 at 11:09 AM, John R Pierce pierce@hogranch.com wrote:
Disks larger than 2TB have to be partitioned as GPT rather than MBR, so old MBR tools like fdisk are useless; use parted.
I would build an LVM volume group from the multi-terabyte volumes using LVM mirroring, then create an XFS filesystem on a logical volume in that VG. XFS seems 100% stable and high-performance on CentOS 6.
What I'm looking for is the way to make md raid autodetect and assemble on boot. With fdisk, you would set the partition type to FD for that, but I don't see an equivalent in parted.
On Wed, Oct 12, 2011 at 12:14 PM, Les Mikesell lesmikesell@gmail.com wrote:
What I'm looking for is the way to make md raid autodetect and assemble on boot. With fdisk, you would set the partition type to FD for that, but I don't see an equivalent in parted.
With GPT that's set using flags. This is done in parted with the command "set <partition number> raid on" -- David
On Wed, Oct 12, 2011 at 12:20 PM, David Miller david3d@gmail.com wrote:
With GPT that's set using flags. This is done in parted with the command "set <partition number> raid on"
I just thought of one other thing: you'll want to read up on the BIOS boot partition if these large drives are being used for the GRUB boot loader. -- David
On Wed, Oct 12, 2011 at 11:22 AM, David Miller david3d@gmail.com wrote:
With GPT that's set using flags. This is done in parted with the command "set <partition number> raid on"
I just thought of one other thing: you'll want to read up on the BIOS boot partition if these large drives are being used for the GRUB boot loader.
Not booting from them, but is there a problem with kernel autoassembly on large partitions? I see this in dmesg:
md: Autodetecting RAID arrays.
md: invalid raid superblock magic on sde1
md: sde1 has invalid sb, not importing!
But mdadm --assemble /dev/md5 /dev/sde1 works fine after booting. (The array was created with a missing member which hasn't been added yet).
On Wed, Oct 12, 2011 at 9:28 AM, Les Mikesell lesmikesell@gmail.com wrote:
Not booting from them, but is there a problem with kernel autoassembly on large partitions? I see this in dmesg:
md: Autodetecting RAID arrays.
md: invalid raid superblock magic on sde1
md: sde1 has invalid sb, not importing!
But mdadm --assemble /dev/md5 /dev/sde1 works fine after booting. (The array was created with a missing member which hasn't been added yet).
When using mdadm you don't even have to partition a drive if you are using the whole thing. Just make sure the block device is labeled GPT in parted if it is not already. After creating the RAID1, get the UUID of the array with mdadm -D /dev/md5. Then use the following format in /etc/mdadm.conf to have it auto-assemble at boot.
ARRAY /dev/md5 devices=/dev/sd[e-f] uuid=$UUID_FROM_MDADM
David C Miller.
On Wed, Oct 12, 2011 at 11:37 AM, David C. Miller millerdc@fusion.gat.com wrote:
When using mdadm you don't even have to partition a drive if you are using the whole thing. Just make sure the block device is labeled GPT in parted if it is not already. After creating the RAID1, get the UUID of the array with mdadm -D /dev/md5. Then use the following format in /etc/mdadm.conf to have it auto-assemble at boot.
ARRAY /dev/md5 devices=/dev/sd[e-f] uuid=$UUID_FROM_MDADM
Thanks - I already had the RAID and the filesystem (and data) on a partition; it just wasn't restarting at boot. The approach above, with the device names of the partitions, does bring it up in time to be mounted from /etc/fstab. That's better than nothing, but I'd really prefer kernel autoassembly, because the disks are all in swappable enclosures and I move things around once in a while. Does that just not work for large partitions? I see that 'cat /proc/mdstat' says 'super 1.0' on this device and none of the others.
On 10/12/11 9:14 AM, Les Mikesell wrote:
What I'm looking for is the way to make md raid autodetect and assemble on boot. With fdisk, you would set the partition type to FD for that, but I don't see an equivalent in parted.
set 1 raid on
but I was suggesting forgoing mdraid entirely, and using lvm mirroring (lvcreate -m 1 ... vgname)
On Wed, Oct 12, 2011 at 11:24 AM, John R Pierce pierce@hogranch.com wrote:
set 1 raid on
but I was suggesting forgoing mdraid entirely, and using lvm mirroring (lvcreate -m 1 ... vgname)
LVM seems unnecessarily complicated unless mdraid is broken on large devices. And I already have data on the md partition (I'm moving things from a pair of 1.5TB drives to 3TB ones). If md isn't going to work, can I put LVM on a single drive, copy the data over, then add the mirror (reusing the drive that now holds the md partition and the data)? The old disks are still around but not in the machine now.
On 10/12/11 9:36 AM, Les Mikesell wrote:
LVM seems unnecessarily complicated unless mdraid is broken on large devices. And I already have data on the md partition (I'm moving things from a pair of 1.5TB drives to 3TB ones). If md isn't going to work, can I put LVM on a single drive, copy the data over, then add the mirror (reusing the drive that now holds the md partition and the data)? The old disks are still around but not in the machine now.
I believe (from memory) you'd add the 2nd drive to the volume group, then
vgextend vgname /dev/(newdisk)
lvconvert -m 1 logicalvolumename
One nice thing about LVM mirroring is that you can do it with a mix of drive sizes without having to worry about geometry or layout; it will simply ensure that each block exists on two devices.