Hello, sorry for the long email; this issue is a little hard to explain. The gist of it is that the Ubuntu version of parted allowed me to do something that perhaps should not be allowed, i.e. creating partitions on a 2.7TB drive when the partition table is *msdos* rather than *gpt*.
I am trying to configure two identical servers. Both are Dell PowerEdge 2970 machines with 6 disks configured as RAID 5 with one hot spare, and both give me 2.726TB of space after the RAID 5 is configured. There are slight differences in the BIOS versions and the firmware of the LSI disk controller, etc., but I'm not sure that matters in this case.
Now, I set up server "A" a few months ago, and for some reason that I don't remember now I resorted to using an Ubuntu 64-bit LiveCD to create the partitions. Since the disk is larger than 2TB, I had to use 'parted' to create the partitions. So I happily created the partitions I wanted, which are below:
/      50 GB
/var   20 GB
/data  the remainder (a large partition)
Now, after a few months I forgot all about the Ubuntu LiveCD and tried to set up server "B" using the CentOS 5.3 x86_64 CD. However, the installer immediately complained that "this disk is using a GPT partition table and this computer cannot boot using GPT", and it keeps saying this no matter what I do. I've tried creating a separate /boot partition, using LVM for everything, etc., but nothing works. Even "dd" did not give me much luck, although perhaps I should try deleting the "end" of the disk rather than the beginning?
I have now noticed that when I press Ctrl-Alt-F2 on server "B" during the CentOS install and attempt to use 'parted', it says that the "label" type is "gpt". It allows me to create new partitions, etc., but that's no help because the installer keeps complaining that the machine will not boot with a GPT partition table. So I really need to use an "msdos" partition type to boot this machine successfully, but the CentOS version of parted will not allow me to do that.
The additional mystery is that if I check server "A", which I partitioned a few months ago using Ubuntu, the "label" type is "msdos"!! How is that possible? In addition, if I use the CentOS CD and try to use parted on server "A" now, it gives the following error:
---------------------------
# parted
GNU Parted 1.8.1
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Error: msdos labels do not support devices that have more than 4294967295 sectors.
---------------------------
BUT if I reboot this same server (server "A") again using the Ubuntu LiveCD, parted works just fine! It prints the label type as "msdos", it prints the above partition table correctly, and it gives me the right size for it and everything.
So what is going on here? Is the Ubuntu parted somehow buggy, allowing me to do something dangerous that I will regret later, or can I just ignore the label setting in parted, continue to set up server "B" the same as "A", and hope for the best?
I would appreciate any insights.
2010/2/23 Khusro Jaleel mailing-lists@kerneljack.com:
I think it is not possible to create partitions larger than 2TB without a GPT partition table.
Of course, you can create a small boot partition and some LVM partitions and combine them into one big LVM volume, or use another drive for booting and use a GPT partition table for the big disk.
-- Eero
At Tue, 23 Feb 2010 17:45:34 +0200 CentOS mailing list centos@centos.org wrote:
Random odd thought: it sounds like you are using a hardware RAID controller (LSI)? I know that the old Mylex RAID controllers would allow you to create multiple *logical* disks on top of a RAID set. Can you do this with the LSI RAID controller? If so, what I would do is create two logical disks, one 'small' (say 20 gig or so) and one large (whatever is left). Then install CentOS on the 20 gig logical disk, using an MS-DOS partition table as CentOS wants to do (I'd do four partitions: /boot, swap, /, and /home). *Don't* even try to partition the big disk. Just make it an LVM PV, then create a VG with this physical volume and carve out logical volumes as needed.
On Tue, 23 Feb 2010 at 3:38pm, Khusro Jaleel wrote
Now, after a few months I forgot all about the Ubuntu LiveCD and tried to set up server "B" using the CentOS 5.3 x86_64 CD. However, the installer immediately complained that "this disk is using a GPT partition table and this computer cannot boot using GPT", and it keeps saying this no matter what I do. I've tried creating a separate /boot partition, using LVM for everything, etc., but nothing works. Even "dd" did not give me much luck, although perhaps I should try deleting the "end" of the disk rather than the beginning?
There are 2 facts at play here:
1) Any device larger than 2TB must use a GPT disklabel.
2) You cannot boot from a device with a GPT disklabel.
None of the tricks you mention above will work. What you need to do is use the RAID card BIOS to divide the array into multiple devices. Most decent RAID cards will either "auto-carve" arrays into <2TB chunks or let you create a small "boot-drive". The latter is preferable, IMO. If your RAID card doesn't offer such an option, then you'll need to either remove some disks from the array to use as boot drives or add more drives to the system.
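For what it's worth, fact 1 falls straight out of the on-disk format: MBR/msdos partition entries store start and length as 32-bit sector counts, and the sectors here are 512 bytes. A quick sanity check of the resulting ceiling (just arithmetic, not from the thread):

```shell
# Largest device an msdos/MBR label can fully address:
# 2^32 - 1 sectors of 512 bytes each, shown in whole GiB.
max_sectors=4294967295
echo $(( max_sectors * 512 / 1024 / 1024 / 1024 ))   # prints 2047
```

That ~2TiB ceiling is exactly the limit a 2.7TB array blows past.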
The additional mystery is that if I check server "A" which I partitioned a few months ago using Ubuntu, the "label" type is "msdos"!! How is that possible? In addition if I use the CentOS CD and try to use parted on server "A" now, it gives the following error:
Weird things happen when trying to boot from GPT labeled devices, including all sorts of data-loss scenarios.
On Feb 23, 2010, at 9:38 AM, Khusro Jaleel wrote:
I am trying to configure 2 identical servers, both are Dell Poweredge 2970 machines with 6 disks in them configured as a RAID 5 with one hotspare, and both give me 2.726TB of space after the RAID 5 is configured. There are slight differences between the BIOS versions and Firmware versions of the LSI disk controller, etc but I'm not sure that matters in this case.
I had this exact issue with the exact hardware and setup you describe above.
So what is going on here?
I believe that the issue is with Anaconda (the installer).
Is the Ubuntu parted somehow buggy and allowing me to do something dangerous that I will regret later, or can I just ignore the label setting in parted and continue to setup Server "B" the same as "A" and hope for the best?
Here is what I ended up doing...
In the hardware RAID controller I set up a "virtual drive" of 50GB and another virtual drive made from the remaining space.
Here is a nice ASCII description of the RAID setup.
0 - 750GB SATA --- Global Hot spare
1 - 750GB SATA ----\
2 - 750GB SATA -----\
3 - 750GB SATA ------> RAID 5 == ~2.7 TB
4 - 750GB SATA -----/
5 - 750GB SATA ----/
VD = Virtual Disk
VD 0 (sda) - 50GB (boot volume [/boot, /, /home, /tmp, etc.])
VD 1 (sdb) - <remaining space> ("data" storage, /var)
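As a sanity check on those sizes (not from the thread, just arithmetic): one of the six disks is the hot spare, and RAID 5 over the remaining five yields four disks' worth of capacity; 750GB is the decimal marketing size.

```shell
# 4 usable disks x 750 GB (decimal), reported back in binary TiB,
# which is the unit most tools display.
awk 'BEGIN { printf "%.3f\n", 4 * 750e9 / (1024 * 1024 * 1024 * 1024) }'
```

That lands within a whisker of the 2.726TB the controller reports; the small gap is presumably controller metadata overhead.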
After saving those changes within the RAID controller I then booted the CentOS 5.4 x86_64 installer and installed away.
Hope that helps.
- tim
Thanks to all of you for your help, and especially Tim Shubitz, who faced the same problem; his solution worked perfectly for me.
However, now that I have properly created a GPT partition of 2.7TB, which filesystem is best for it? This filesystem will be used to store backups of various other Linux systems, so the files will mostly be small; however, some systems do host big movie files, and sometimes SVN dumps and DB dumps can get a little big. I am going to be using rsnapshot to do the backups, so perhaps I should be careful about the number of inodes I create and try to maximise them?
I am thinking of using XFS, but am not sure. I seem to have heard in the past that one should avoid ext3 on such huge filesystems, but I can't find a reference or proper justification for it. JFS is another option, but some mailing list threads online say it has lost data for them, so I'm a bit confused as to what is best to use in my scenario.
As for XFS, I have read that a UPS is necessary, and this is not a problem since these machines are already connected to a UPS (and that UPS has a backup as well).
Any help appreciated, thanks,
Khusro
If it helps: we're running several 10-15TB filesystems with ReiserFS and have never had even the slightest problem with it. The current limit, however, is 16TB even for ext4. Or, to be more precise: the filesystem could handle more than 16TB easily; however, the tools needed to create the filesystem (aka mkfs.ext4) have a hard-coded 32-bit limit that was identified almost three years ago and has still not been fixed. Which should not affect your 2.7TB partition, though ;)
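Martin's 16TB figure is just another 32-bit ceiling: with the default 4KiB block size, a 32-bit block number can address at most (a quick check, assuming 4KiB blocks):

```shell
# 2^32 blocks x 4096 bytes per block, expressed in TiB
echo $(( 4294967296 * 4096 / 1024 / 1024 / 1024 / 1024 ))   # prints 16
```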
Martin
On Mon, 8 Mar 2010, Alan Hoffmeister wrote:
JFS. http://www.debian-administration.org/articles/388
Ext4 seems to be missing from that benchmark. I'd go ext4, imho.
On 8 Mar 2010, at 16:09, Alan Hoffmeister wrote:
The article you linked to suggests XFS, though? I'm also now thinking about ext3 with "mkfs.ext3 -T news", for example, so that I get more inodes?
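For context (generic mkfs arithmetic, not something from the thread): `-T news` just selects a preset with a smaller bytes-per-inode ratio from `/etc/mke2fs.conf`, and `mkfs.ext3 -i` sets the ratio directly. Roughly what different ratios would yield on this array, using the byte count from the fdisk output earlier (the ratios below are illustrative, not recommendations):

```shell
# Approximate inode counts for a ~2.7TB filesystem at several
# bytes-per-inode ratios (16384 is a common ext3 default).
fs_bytes=2998424043520
for ratio in 16384 8192 4096; do
  echo "$ratio bytes/inode -> $(( fs_bytes / ratio )) inodes"
done
```

More inodes cost space and fsck time, so cranking the ratio down only makes sense if rsnapshot's many small files and hardlink trees would actually exhaust the default.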
On Monday 08 March 2010, Khusro Jaleel wrote:
My thoughts on this are roughly:
* 2.7T isn't really big; ext3, XFS, JFS, etc. should all be fine
* We've run XFS a lot, but still, it's a lot less mainstream than ext3
* Ext4 is still a tech preview in 5.4
* We have a lot of data on Lustre-style ext3 (in the range 4-8T), no issues
Boils down to: use what you're comfortable with (XFS is typically faster for us, but ext3 certainly won't break down at this scale).
/Peter
Thanks for your replies, just to clear things up, here is what I am seeing.
If I reboot server "A" with the Ubuntu LiveCD, I get:
----------------------------------------
# parted /dev/sda p
Model: DELL PERC 5/i (scsi)
Disk /dev/sda: 2998GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number  Start   End     Size    Type     File system  Flags
 1      32.3kB  53.7GB  53.7GB  primary  ext3
 2      53.7GB  62.3GB  8595MB  primary  linux-swap
 3      62.3GB  83.8GB  21.5GB  primary  ext3
 4      83.8GB  2199GB  2115GB  primary  xfs
# fdisk -l
Disk /dev/sda: 2998.4 GB, 2998424043520 bytes
255 heads, 63 sectors/track, 364537 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x852b68e5
   Device Boot    Start       End      Blocks      Id  System
/dev/sda1             1      6528      52436128+   83  Linux
/dev/sda2          6529      7573       8393962+   82  Linux swap / Solaris
/dev/sda3          7574     10185      20980890    83  Linux
/dev/sda4         10186    267349    2065669830    83  Linux
----------------------------------------
Now when I try this with CentOS, I get:
----------------------------------------
Error: msdos labels do not support devices that have more than 4294967295 sectors.
----------------------------------------
straight away. I understand what you guys are saying about GPT and not being able to boot off it, etc., but how did I end up in this situation? And is this dangerous?
I am thinking: if this is possible, why not try to set up the second server the same way? But it just feels wrong that Ubuntu allows this; if CentOS does not, there must be a good reason.
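The magic number in that error is 2^32 - 1 sectors, so you can check the array against it directly using the byte count from the fdisk output (a quick sketch; on a live system `blockdev --getsz /dev/sda` would report the sector count as well):

```shell
bytes=2998424043520               # from the fdisk -l output above
sectors=$(( bytes / 512 ))        # the array's 512-byte sector count
echo "$sectors"
echo $(( sectors > 4294967295 ))  # 1 means: too big for an msdos label
```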
On Tue, 23 Feb 2010 at 4:11pm, Khusro Jaleel wrote
straight away. I understand what you guys are saying about GPT and not being able to boot off it, etc but how did I end up in this situation?
There's an old saying that Unix gives you enough rope to hang yourself with...
And is this dangerous?
Yes. Absolutely yes. One day you'll reboot and your partition table (and all your data) will be gone and unrecoverable. Trust me.
I am thinking that if this is possible, why not try and setup the second server the same way? But it just feels wrong that Ubuntu allows this and if CentOS does not, there must be a good reason.
And that reason is that it *will* die horribly and eat your data. Set up the small logical drive in the RAID BIOS as another poster detailed so nicely. Now. Before now.
Thanks, that is what I thought might be the case.
*sigh* It's a good thing I found this out; however, I'll now have to do a painful migration of the data off that drive and re-partition, which is just a pain. I wish Ubuntu had *stopped* me the first time and told me that *this is not possible!*
At Tue, 23 Feb 2010 16:11:48 +0000 CentOS mailing list centos@centos.org wrote:
I'm guessing one of these things is going on:
A) Ubuntu has *patched* versions of parted and fdisk that disable their error checking (!).
B) Ubuntu has new versions of parted and fdisk that are more liberal than the (older) versions shipped with CentOS.
What does
parted -v and fdisk -v
display under the Ubuntu LiveCD? BTW, which version of Ubuntu are you using? Hardy (8.04) or Karmic (9.10)?
On 23 Feb 2010, at 18:02, Robert Heller wrote:
Yes, that is possible. The versions are:
# parted -v: 1.8.8
# fdisk -v: util-linux-ng 2.14.2
Khusro Jaleel wrote:
You realize that you're utilizing just 2TiB of that 2.7TiB drive, right? It looks like the tools in Ubuntu simply partitioned as much of the drive as they could address with an msdos label and let the rest go to waste.
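You can see the stranded space in the earlier parted listing: the device is 2998GB but the last partition ends at 2199GB, i.e. roughly:

```shell
# GB past the end of the last msdos-addressable partition
echo $(( 2998 - 2199 ))   # prints 799
```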
On 23 Feb 2010, at 23:41, Robert Nichols wrote:
Yes, I'll fix this next time once I re-partition the whole lot.