I just got a new server with a Dell MD-1000 SAS unit and six 750 GB drives, now initializing in RAID 10, which will give me just about 2 terabytes.
I vaguely recall reading that fdisk wasn't suitable for partitioning and wonder if I shouldn't be using parted instead. I am also wondering if I should use LVM or just mkfs to create the filesystem. Anyone have suggestions before I blunder in?
Thanks
Craig
> I vaguely recall reading that fdisk wasn't suitable for partitioning and wonder if I shouldn't be using parted instead. I am also wondering if I should use LVM or just mkfs to create the filesystem. Anyone have suggestions before I blunder in?
fdisk can't do GPT, which is what you need for partitions larger than 2 TB, so you use parted w/ a gpt label.
If it were mine, I'd partition it as one big chunk and mark it LVM, then carve it up so you have the flexibility. After all, you already made it one array, so making multiple partitions out of it only limits you down the road.
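Something like this should do it (untested, and assuming the array shows up as /dev/sdb):

# parted /dev/sdb mklabel gpt
# parted /dev/sdb mkpart primary 1M 100%
# parted /dev/sdb set 1 lvm on

That gets you a GPT label, one partition spanning the disk, and the LVM flag set so it's obvious later what the partition holds.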
YMMV, jlc
On Oct 14, 2008, at 10:36 PM, Craig White craigwhite@azapple.com wrote:
> I just got a new server with a Dell MD-1000 SAS unit and six 750 GB drives, now initializing in RAID 10, which will give me just about 2 terabytes.
> I vaguely recall reading that fdisk wasn't suitable for partitioning and wonder if I shouldn't be using parted instead. I am also wondering if I should use LVM or just mkfs to create the filesystem. Anyone have suggestions before I blunder in?
Just pvcreate the whole disk and forgo partitioning it. Then create a vg out of it and start creating lvs.
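Roughly like this, assuming the array is /dev/sdb (the names here are just examples):

# pvcreate /dev/sdb
# vgcreate bigvg /dev/sdb
# lvcreate -n data -l 100%FREE bigvg

(-l 100%FREE needs a reasonably recent lvm2; on older versions, pass -l with the Total PE figure that vgdisplay reports.)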
-Ross
> Just pvcreate the whole disk and forgo partitioning it. Then create a vg out of it and start creating lvs.
Hey Ross, I thought it was best practice to create an LVM partition so that the disk would be recognizable under all circumstances, such as if the volume was moved? Is that not really "best practice" anymore?
jlc
>> Just pvcreate the whole disk and forgo partitioning it. Then create a vg out of it and start creating lvs.
> Hey Ross, I thought it was best practice to create an LVM partition so that the disk would be recognizable under all circumstances, such as if the volume was moved? Is that not really "best practice" anymore?
I'm not Ross, but I'll chime in: I heartily recommend creating an LVM partition rather than using the entire disk. It will cover you for those times when you are booting off the Rescue or Install CD. When anaconda(?) sees an LVM "formatted" disk, it thinks it's garbage because there's no valid partition table. It then asks you if you want to format the disk (or words to that effect; I forget the precise details), with the default set to YES (WTH? Default option is the most dangerous? That's nutty).
From experience I can tell you that selecting "Yes" is really, really bad for your LVM metadata. Let's just say it took a while to restore the 1TB of data the time I mis-clicked YES thanks to a finger twitch at the wrong moment while moving the mouse.
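For what it's worth, lvm2 keeps text copies of the metadata under /etc/lvm/backup and /etc/lvm/archive, so depending on what got clobbered you may be able to recover with something like (the VG name and archive filename here are just examples):

# vgcfgrestore --list MyVG
# vgcfgrestore -f /etc/lvm/archive/MyVG_00001.vg MyVG

If the PV label itself was wiped, you'd first recreate it with pvcreate --uuid <old uuid> --restorefile <archive file> before the vgcfgrestore.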
<Sigh>
Craig Miskell
On Oct 15, 2008, at 10:15 AM, "Joseph L. Casale" <JCasale@activenetwerx.com> wrote:
>> Just pvcreate the whole disk and forgo partitioning it. Then create a vg out of it and start creating lvs.
> Hey Ross, I thought it was best practice to create an LVM partition so that the disk would be recognizable under all circumstances, such as if the volume was moved? Is that not really "best practice" anymore?
Well, if you mean recognizable by other OSes, then does it matter whether the other OS can't see the disk at all, or can see the disk but not access the data?
When it comes to terabytes, MBR just doesn't cut it, and GPT isn't widely recognized either; it's a PITA to implement and still fraught with pitfalls between implementations.
Or did I not understand you properly?
-Ross
On Wed, 2008-10-15 at 09:47 -0400, Ross Walker wrote:
> On Oct 14, 2008, at 10:36 PM, Craig White craigwhite@azapple.com wrote:
>> I just got a new server with a Dell MD-1000 SAS unit and six 750 GB drives, now initializing in RAID 10, which will give me just about 2 terabytes.
>> I vaguely recall reading that fdisk wasn't suitable for partitioning and wonder if I shouldn't be using parted instead. I am also wondering if I should use LVM or just mkfs to create the filesystem. Anyone have suggestions before I blunder in?
> Just pvcreate the whole disk and forgo partitioning it. Then create a vg out of it and start creating lvs.
---- OK - makes sense but I am a bit confused here.
I have done the pvcreate and tested lvcreate but wonder about '--physicalextentsize' because in the man page, it states, "The default of 4 MB leads to a maximum logical volume size of around 256GB" which makes me think that if I want one volume when this is all done, I have to increase that value.
# fdisk -l /dev/sdb
Disk /dev/sdb: 2248.8 GB, 2248818032640 bytes
255 heads, 63 sectors/track, 273403 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table
So, thinking that I would need a volume just under 10 times the maximum of 256GB, I would have to set the physical extent size to 64 MB (32 MB not being quite large enough, and 64 thus being the next increment in powers of 2).
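If so, I assume the command would be something like:

# vgcreate -s 64M VolGroup10 /dev/sdb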
Does this make sense?
Craig
On Wed, Oct 15, 2008 at 09:52:03AM -0700, Craig White wrote:
> I have done the pvcreate and tested lvcreate but wonder about '--physicalextentsize' because in the man page, it states, "The default of 4 MB leads to a maximum logical volume size of around 256GB" which makes me think that if I want one volume when this is all done, I have to increase that value.
The man page for "vgcreate" says "there is a limit of 65534 extents in each logical volume" but only for *lvm1* format. lvm2 format doesn't have such restrictions.
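(That's also where the 256GB figure comes from: 65534 extents x 4 MB/extent is roughly 256GB. With lvm2 there is no such extent-count ceiling, so the default 4 MB extents are fine even for multi-terabyte LVs.)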
I used default values for my 4 TB array (5 x 1 TB disks in an md raid5) under CentOS 4.
% fdisk -l /dev/md3
Disk /dev/md3: 4000.8 GB, 4000808697856 bytes
2 heads, 4 sectors/track, 976759936 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
% pvdisplay /dev/md3
  --- Physical volume ---
  PV Name               /dev/md3
  VG Name               Raid5
  PV Size               3.64 TB / not usable 320.00 KB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              953867
  Free PE               0
  Allocated PE          953867
  PV UUID               NngvXK-4tqJ-xNtG-UnDL-Rin0-RHIl-xZ2wzI
% vgdisplay Raid5
  --- Volume group ---
  VG Name               Raid5
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               3.64 TB
  PE Size               4.00 MB
  Total PE              953867
  Alloc PE / Size       953867 / 3.64 TB
  Free  PE / Size       0 / 0
  VG UUID               mKSI0h-26i7-5LK5-vwpX-GY3a-Bjiv-xX4q8n
% lvdisplay Raid5/Media
  --- Logical volume ---
  LV Name                /dev/Raid5/Media
  VG Name                Raid5
  LV UUID                c8x4Ip-R1wq-n9An-NM6B-IuBs-U61L-kfVgAU
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                3.64 TB
  Current LE             953867
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:5
Does this make sense?
Try using "pvcreate", "vgcreate" and "lvcreate" with no special options and see what happens. It worked for me!
The default in CentOS should be lvm2; you can see that's what was created on mine by the "Format" line in the vgdisplay output.
  Format                lvm2
On Wed, 2008-10-15 at 13:34 -0400, Stephen Harris wrote:
> On Wed, Oct 15, 2008 at 09:52:03AM -0700, Craig White wrote:
>> I have done the pvcreate and tested lvcreate but wonder about '--physicalextentsize' because in the man page, it states, "The default of 4 MB leads to a maximum logical volume size of around 256GB" which makes me think that if I want one volume when this is all done, I have to increase that value.
> The man page for "vgcreate" says "there is a limit of 65534 extents in each logical volume" but only for *lvm1* format. lvm2 format doesn't have such restrictions.
> I used default values for my 4 TB array (5 x 1 TB disks in an md raid5) under CentOS 4.
SNIP...
> Try using "pvcreate", "vgcreate" and "lvcreate" with no special options and see what happens. It worked for me!
> The default in CentOS should be lvm2; you can see that's what was created on mine by the "Format" line in the vgdisplay output.
>   Format                lvm2
---- OK - well, I can delete the vg and the lv that I created and just consider them practice. I'm not really sure what the difference would be having the physical extent size as 64 MB versus 4 MB.
I did run into a snag that I don't fully understand while trying to make the filesystem though...
# mke2fs -v -j -l 2TbVol /dev/VolGroup10/2TbVol
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
274513920 inodes, 549011456 blocks
27450572 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
16755 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616,
        78675968, 102400000, 214990848, 512000000
read_bad_blocks_file: No such file or directory while trying to open 2TbVol
Obviously there is no physical disk to read a bad blocks file from and I don't see in the man page for mke2fs any way to tell it to ignore bad blocks or not search for the file.
Craig
> I'm not really sure what the difference would be having the physical extent size as 64 MB versus 4 MB.
It's the smallest unit of space from a PV that can be allocated to an LV. Think of it like the Allocation Unit Size on a filesystem. Also, as pointed out above, some limitations arise in different versions of LVM. RH defaults to 32M, I believe...
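So with 64M extents, for example, any LV size that isn't a multiple of 64M gets rounded up to one; something like this (VG name borrowed from your earlier mail):

# lvcreate -L 100M -n test VolGroup10
  Rounding up size to full physical extent 128.00 MB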
On Wed, Oct 15, 2008 at 11:13:08AM -0700, Craig White wrote:
> I did run into a snag that I don't fully understand while trying to make the filesystem though...
> # mke2fs -v -j -l 2TbVol /dev/VolGroup10/2TbVol
Were you trying to specify a label? If so, use the -L option, not -l
Were you trying to specify a bad block list? Why? I'd be VERY surprised if you need to do this.
So you either meant
    mke2fs -v -j /dev/VolGroup10/2TbVol
OR
    mke2fs -v -j -L 2TbVol /dev/VolGroup10/2TbVol
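Afterwards you can double-check with e2label, which prints the label when given just the device:

# e2label /dev/VolGroup10/2TbVol
2TbVol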
On Wed, 2008-10-15 at 14:34 -0400, Stephen Harris wrote:
> On Wed, Oct 15, 2008 at 11:13:08AM -0700, Craig White wrote:
>> I did run into a snag that I don't fully understand while trying to make the filesystem though...
>> # mke2fs -v -j -l 2TbVol /dev/VolGroup10/2TbVol
> Were you trying to specify a label? If so, use the -L option, not -l
> Were you trying to specify a bad block list? Why? I'd be VERY surprised if you need to do this.
> So you either meant
>     mke2fs -v -j /dev/VolGroup10/2TbVol
> OR
>     mke2fs -v -j -L 2TbVol /dev/VolGroup10/2TbVol
---- duh...thanks :::blush:::
too many man pages ;-)
Thanks
Craig