Hi All,
Sorry if this has been answered many times, but I have been going through a lot of pages (via Google search), and the more I search, the more confusing it gets.
I have a server with six 750 GB SATA disks in hardware RAID 5.
I plan to allocate the space as follows:
swap 8 GB, /boot 100 MB, / 20 GB, and the remaining space to /data{1,2,3,...,N} (equal sizes).
However, after the installation and reboot, I got an error about a bad partition for /data8.
I had hit the 2 TB limit.
Then I found this page at http://www.knowplace.org/pages/howtos/linux_large_filesystems_support.php
which describes using parted, LVM2, and XFS.
If I understand it correctly, I need one disk to host the CentOS installation, and I can use the other five disks in a RAID array (label type gpt...).
Is it not possible to partition and use the existing RAID 5 volume?
I am really not sure how to proceed with this big-disk problem.
Any ideas/links will really help.
Thank you.
Regards, A.S
Anup Shukla wrote:
I have a server with six 750 GB SATA disks in hardware RAID 5.
However, after the installation and reboot, I got an error about a bad partition for /data8.
I had hit the 2 TB limit.
If I understand it correctly, I need one disk to host the CentOS installation, and I can use the other five disks in a RAID array (label type gpt...).
Is it not possible to partition and use the existing RAID 5 volume?
My understanding is that GRUB and LILO cannot currently boot from GPT-labeled disks. Given the size of currently available disks this will probably change soon, but for now you need a small partition to boot from on a large disk.
James A. Peltier wrote:
My understanding is that GRUB and LILO cannot currently boot from GPT-labeled disks; for now you need a small partition to boot from on a large disk.
Sorry, a bit quick on the trigger, but essentially, if you wanted to use a single RAID 5 volume of this size (even configured as you describe), the GPT label on the volume is what would get you, because of the boot loader.
The use of LVM and XFS just has to do with the way they handle larger disks. With LVM you can lay out the disks in a more fine-tuned manner, which lets you get around some limitations in certain file systems. XFS is recommended because it is a very good performer and was designed to handle large file systems from its inception. Feel free to use JFS, ReiserFS, or whatever file system you like.
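For illustration, assuming the big data volume shows up as /dev/sdb (device and mount-point names are only examples), the parted + XFS route looks roughly like this:

  parted /dev/sdb mklabel gpt              # the GPT label is what allows >2 TB
  parted /dev/sdb mkpart primary 0% 100%   # one big data partition
  mkfs.xfs /dev/sdb1
  mount /dev/sdb1 /data1

You could just as well put LVM on that GPT partition (or on the raw device) instead of a filesystem directly, if you want to carve the space up later.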
James A. Peltier wrote:
My understanding is that GRUB and LILO cannot currently boot from GPT-labeled disks... if you wanted to use a single RAID 5 volume of this size, the GPT label on the volume is what would get you, because of the boot loader.
I think it has finally got into my head now. :)
From what I understand (after your replies and some more googling), GRUB cannot boot from GPT-labeled drives, so no matter how I partition them, it just won't boot.
So finally, I am putting in a 300 GB SATA disk to act as the "system" drive, and will use the other 750 GB disks as the big RAID 5 volume (XFS).
Yes, I lose everything if the 300 GB drive fails, but I think I can do something about that later.
Thanks for the replies.
Regards, A.S
Anup Shukla wrote:
From what I understand (after your replies and some more googling), GRUB cannot boot from GPT-labeled drives, so no matter how I partition them, it just won't boot.
So finally, I am putting in a 300 GB SATA disk to act as the "system" drive, and will use the other 750 GB disks as the big RAID 5 volume (XFS).
I know that XFS gets all the press about being a great performing file system ... but if you want the best stability on CentOS, you should at least consider ext3 instead.
I have worked very hard to get stable code for xfs in centos-4 and centos-5, and lots of people use it, but (IMHO) ext3 is still much more stable with the CentOS Kernels.
That is my $0.02 ... I'm sure other people will tell you I am all hosed up :D
Thanks, Johnny Hughes
Johnny Hughes wrote:
I know that XFS gets all the press about being a great performing file system ... but if you want the best stability on CentOS, you should at least consider ext3 instead.
EXT3 performance is lacking in many areas and its support for larger file systems is still a problem. However, it is rock solid and hopefully EXT4 will address the performance and file system limit issues.
James A. Peltier wrote:
EXT3 performance is lacking in many areas and its support for larger file systems is still a problem. However, it is rock solid and hopefully EXT4 will address the performance and file system limit issues.
I don't disagree with that assessment; however, newer versions of ext3 have switches you can use to improve performance, and they work on bigger file systems.
Still, ext3 support is indeed lacking on larger filesystems and yes, hopefully ext4 will address this.
But ... still, if you are spending a fortune on HUGE drives for an enterprise file system, I would think you should at least see whether ext3 will meet your needs before automatically shifting to XFS. I have seen many a filesystem become unrecoverable with XFS, especially on 4K-stack systems (which CentOS i386 is).
Believe me, I have personally put a lot of time and effort into the xfs filesystem modules that are in CentOS Plus and CentOS Extras ... and I use them in some places, but I just want to be on record saying that ext3 is more stable and I recommend its use unless it just _WILL_NOT_WORK_, that's all :D
Thanks, Johnny Hughes
Johnny Hughes wrote:
Believe me, I have personally put a lot of time and effort into the xfs filesystem modules that are in CentOS Plus and CentOS Extras ... and I use them in some places, but I just want to be on record saying that ext3 is more stable and I recommend its use unless it just _WILL_NOT_WORK_, that's all :D
I am not an expert in filesystems, but yes, in all these years on Linux, I have never had ext3 go bad on me.
In fact, I have never tried any other filesystem to date.
Going by everyone's comments and views, I would prefer to go with ext3.
The drive is a large one, but I have no particular need to make one big partition on it.
I can just as well have several smaller partitions (in fact that's what I want), with each partition being the data store for one mogstored daemon.
Now, I am not sure if that's the best possible solution, but I still have time to implement it and do my benchmarks.
For now, ext3 is surely the filesystem of choice. I cannot afford to lose anything that's going to be stored on this server.
A big thanks to everyone.
Regards, A.S
I know that XFS gets all the press about being a great performing file system ... but if you want the best stability on CentOS, you should at least consider ext3 instead.
+1
I have worked very hard to get stable code for xfs in centos-4 and centos-5, and lots of people use it, but (IMHO) ext3 is still much more stable with the CentOS Kernels.
That is my $0.02 ... I'm sure other people will tell you I am all hosed up :D
I will be telling them to wait for a power loss, wait for the XFS code to shut down one of its filesystems for no reason, take a good look at the never-ending stream of bug fixes in the mainline kernel, take a look at the kernel developers who have openly announced they want nothing to do with the XFS codebase, and note that the XFS code is the largest of any filesystem, due to all the workarounds needed to deal with Linux's different VM and other subsystems.
There is a reason why XFS is not that stable, but it sure is the fastest there is for writes.
I will be telling them to wait for a power loss, wait for the XFS code to shut down one of its filesystems for no reason, take a good look at the never-ending stream of bug fixes in the mainline kernel...
I have a couple of mission-critical servers (3 TB) that I formatted with JFS. I have been completely happy with the results and have yet to see any filesystem corruption.
A JFS Fsck on the drive takes only a few seconds even after a crash.
I have created and moved various large files without a problem. I have also pulled the plug during write intensive operations.
Just wanted to add another vote for JFS. :)
Shawn
Shawn Everett wrote:
I have a couple of mission-critical servers (3 TB) that I formatted with JFS. I have been completely happy with the results and have yet to see any filesystem corruption.
Great! JFS takes second place on all benchmarks: writes, reads ... you name it. The only question I had was whether it was stable, but I had yet to hear about it being used.
http://untroubled.org/benchmarking/2004-04/
A bit old but I doubt things have changed much since then.
A JFS Fsck on the drive takes only a few seconds even after a crash.
I have created and moved various large files without a problem. I have also pulled the plug during write intensive operations.
Just wanted to add another vote for JFS. :)
+1 :-D
On Tuesday 23 October 2007, Anup Shukla wrote: ...
I think it has finally got into my head now. :)
From what I understand (after your replies and some more googling), GRUB cannot boot from GPT-labeled drives, so no matter how I partition them, it just won't boot.
Correct.
So finally, I am putting in a 300 GB SATA disk to act as the "system" drive, and will use the other 750 GB disks as the big RAID 5 volume (XFS).
That will work. Another way is to see whether the RAID controller can present two volumes from your RAID 5: one small (for the OS) and one big (for the large GPT filesystem). If that works, you get one device on which you can use msdos partitions and boot from, and one (>2 TB) on which you use GPT (or simply LVM directly on the device).
/Peter
Peter Kjellstrom wrote:
That will work. Another way is to see whether the RAID controller can present two volumes from your RAID 5: one small (for the OS) and one big (for the large GPT filesystem). If that works, you get one device on which you can use msdos partitions and boot from, and one (>2 TB) on which you use GPT (or simply LVM directly on the device).
Yes, I thought about it, but the Dell PERC does not seem to be able to do that; at least that is what I have found out so far. I wish it were possible.
Just in case anyone knows better, please let me know. I have a Dell PE2950.
Regards, A.S
Anup Shukla wrote:
Yes, I thought about it, but the Dell PERC does not seem to be able to do that; at least that is what I have found out so far. I wish it were possible.
Just in case anyone knows better, please let me know. I have a Dell PE2950.
Which Dell PERC? They make dozens under that name.
If it is the PERC 5e, then yes, you can create multiple LUNs out of an array.
That will work. Another way is to see whether the RAID controller can present two volumes from your RAID 5: one small (for the OS) and one big (for the large GPT filesystem). If that works, you get one device on which you can use msdos partitions and boot from, and one (>2 TB) on which you use GPT (or simply LVM directly on the device).
IMHO, I would recommend using two internal drives with a software mirror for the CentOS install and keeping your external array completely out of the OS install.
I use LVM for all volumes and the ext3 file system for the OS volumes; you can pick whichever file system you want for your data volumes. I'd probably stick with ext3, or maybe JFS if it isn't too cumbersome to get going.
My server disk config of choice in kickstart speak:
part raid.1 --noformat --onpart sda1
part raid.2 --noformat --onpart sdb1
part raid.3 --noformat --onpart sda2
part raid.4 --noformat --onpart sdb2

raid /boot --useexisting --fstype ext3 --level=RAID1 --device=md0 raid.1 raid.2
raid pv.1 --noformat --useexisting --fstype "physical volume (LVM)" --level=RAID1 --device=md1 raid.3 raid.4

volgroup CentOS --noformat --useexisting --pesize=32768 pv.1
logvol / --useexisting --fstype ext3 --name=root --vgname=CentOS --size=8192
logvol swap --useexisting --fstype swap --name=swap --vgname=CentOS --size=4096
That setup will yield an initial 100 MB /boot, an 8 GB / and 4 GB of swap, and leave the rest of the space free for future use.
You can then create a separate VG out of your data array and sub-divide it into smaller LVs formatted for the FS of choice.
Don't allocate all the storage initially, just what you need to get started; you can always extend your volumes later relatively easily, but shrinking is far more troublesome.
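For example (the volume group and LV names here are only placeholders), growing a logical volume and its ext3 filesystem later is typically just:

  lvextend -L +100G /dev/dataVG/data1   # add 100G to the logical volume
  resize2fs /dev/dataVG/data1           # then grow the ext3 filesystem into it

whereas shrinking means unmounting, running resize2fs down to a smaller size, and then lvreduce, with much more room for error.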
-Ross
Anup Shukla wrote:
So finally, I am putting in a 300 GB SATA disk to act as the "system" drive, and will use the other 750 GB disks as the big RAID 5 volume (XFS).
If you use a hardware RAID adapter, you can make two LUNs from the disks. So make one big RAID5 array but two logical drives. I would still use LVM anyway for management down the road.
Not all hardware RAID adapters support this, but if yours does, you will get data protection for "free" on your system drive.
//Morten
Morten Torstensen wrote:
If you use a hardware RAID adapter, you can make two LUNs from the disks. So make one big RAID5 array but two logical drives. I would still use LVM anyway for management down the road.
That's making me feel miserable... ;)
I had this thought myself, then a message on the list said the same, and now one more message says the same.
I am literally scavenging through Dell PERC guides to find out if and how this can be done.
I hope this is possible with the Dell PERC.
Regards, A.S
on 10/23/2007 3:21 AM Anup Shukla spake the following:
I am literally scavenging through Dell PERC guides to find out if and how this can be done.
I hope this is possible with the Dell PERC.
If you have it, I think the option is called FlexRAID on a PERC.
on 10/23/2007 2:06 AM Anup Shukla spake the following:
So finally, I am putting in a 300 GB SATA disk to act as the "system" drive, and will use the other 750 GB disks as the big RAID 5 volume (XFS).
Yes, I lose everything if the 300 GB drive fails, but I think I can do something about that later.
You did say hardware RAID 5, right? You just need to have the /boot partition at the beginning of the array, and your RAID card should have some sort of carving feature. Make sure you have multi-LUN support going, and with your kernel and initrd in the /boot partition you should be able to boot.
Anup Shukla wrote:
If I understand it correctly, I need one disk to host the CentOS installation, and I can use the other five disks in a RAID array (label type gpt...).
Is it not possible to partition and use the existing RAID 5 volume?
The Dell PERC 5/i "does" have an option to create multiple LUNs. So I have been quite moronic in not trying to apply logic initially. For now, I am creating a small "system" disk and multiple 500 GB "data" disks.
I do not really need to have one big partition for the data.
So this is a much simpler setup, I believe, with no chance of hitting the 2 TB limit.
A big thanks to everyone who guided me, and my apologies if this qualifies as a "waste of time" post.
This, for sure, is the best list i have ever experienced.
Thank you all again.
Regards, A.S
On Wednesday 24 October 2007, Anup Shukla wrote: ...
The Dell PERC 5/i "does" have an option to create multiple LUNs. So I have been quite moronic in not trying to apply logic initially. For now, I am creating a small "system" disk and multiple 500 GB "data" disks.
Reconsider the "multiple 500 GB" part. Slicing up a RAID set like that typically has bad performance effects (how bad depends on the controller). This results from the fact that Linux now treats the several parts of your one RAID set as devices to be scheduled independently.
/Peter
Peter Kjellstrom wrote:
Reconsider the "multiple 500 GB" part. Slicing up a RAID set like that typically has bad performance effects (how bad depends on the controller). This results from the fact that Linux now treats the several parts of your one RAID set as devices to be scheduled independently.
OK, looks like I am not done yet, then. I would like to spend some more time doing performance benchmarks, but unfortunately time is the constraint here for me.
Still, given the suggestion, I will surely try to reduce the number of slices.
Thank you for the information.
Regards, A.S
Anup Shukla wrote:
Still, given the suggestion, I will surely try to reduce the number of slices.
I would make one system LUN at, say, 20 GB and one data LUN with the rest of the RAID 5 space.
On the system LUN I would make a /boot filesystem and an LVM partition with at least a / filesystem and swap. Usually I make /, /usr, /opt, /home, /var and /tmp, but it varies a bit depending on what kind of machine it is.
The data LUN I would use as a PV directly for LVM and not bother with partitions at all.
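In command terms that is roughly (assuming the data LUN shows up as /dev/sdc; the device and names are only examples):

  pvcreate /dev/sdc            # no partition table on the data LUN at all
  vgcreate dataVG /dev/sdc
  lvcreate -L 500G -n data1 dataVG
  mkfs.ext3 /dev/dataVG/data1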
//Morten
Morten Torstensen wrote:
I would make one system LUN at, say, 20 GB and one data LUN with the rest of the RAID 5 space.
On the system LUN I would make a /boot filesystem and an LVM partition with at least a / filesystem and swap. Usually I make /, /usr, /opt, /home, /var and /tmp, but it varies a bit depending on what kind of machine it is.
The data LUN I would use as a PV directly for LVM and not bother with partitions at all.
I created 500 GB slices, partitioned and mounted them, then did a simple "time dd if=/dev/zero of=/mnt/data1 bs=1k count=1200000". This gave me a speed of over 150 MB/s.
Then I deleted the entire RAID setup and recreated two LUNs: 30 GB, and whatever was left.
I created a PV on the bigger drive, with one VG and three LVs of equal size.
I formatted and ran the dd command again. The speed is 130 MB/s now.
It's a bit confusing. Does LVM slow things down? Or did I do something that is not really relevant for checking I/O speed?
I used "mkfs.ext3 -m0 -E stride=96 -O dir_index /dev/sdb1" ... I have a RAID 5 volume consisting of 6 disks with a stripe size of 64k. I hope the stride=96 is optimal.
Should I stick with LVM, or go back to the older way?
Thank you.
Regards, A.S
Anup Shukla wrote:
Should I stick with LVM, or go back to the older way?
On second thought, I have gone completely off-topic now; this isn't about CentOS anymore.
So it would be appropriate for me to end this topic here.
Thanks for the help.
Regards, A.S
Anup Shukla wrote:
I formatted and ran the dd command again. The speed is 130 MB/s now.
It can vary quite a bit depending on where you hit the disk. Remember, what you are testing is just how fast dd can read from /dev/zero and write to a file in a filesystem with 1k blocks. How that maps to real performance is another matter.
It's a bit confusing. Does LVM slow things down? Or did I do something that is not really relevant for checking I/O speed?
LVM adds very little overhead. File placement on the disk can have more to do with raw I/O bandwidth than anything else in this particular scenario.
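If you want the dd numbers to be a bit more comparable between runs, one common trick is to use a larger block size and include the final flush in the timing, for example (the file name and count are only examples):

  time sh -c 'dd if=/dev/zero of=/mnt/data1/ddtest bs=1M count=2048 && sync'
  rm /mnt/data1/ddtest

Otherwise part of the data may still be sitting in the page cache when dd exits.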
I used "mkfs.ext3 -m0 -E stride=96 -O dir_index /dev/sdb1" ... I have a RAID 5 volume consisting of 6 disks with a stripe size of 64k. I hope the stride=96 is optimal.
It depends on what you want... With 4k blocks in ext3 (the default), a stride of 16 (16 times 4k is 64k) would read one stripe unit from a single disk. With 6 disks, 16 times 6 is 96, so for every such I/O you hit each disk once and read one stripe unit from each. Ditto for writes. In general that is a good start.
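Spelling out that arithmetic with the numbers from this thread (and noting, for what it's worth, that the mke2fs man page describes stride as the per-disk figure, i.e. 16 here, while 96 is the span across all six spindles):

  # 64k chunk per disk / 4k ext3 block = 16 blocks per disk chunk
  # 16 blocks * 6 disks                = 96 blocks across the whole set
  mkfs.ext3 -m0 -E stride=16 -O dir_index /dev/sdb1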
Should I stick with LVM, or go back to the older way?
I would never really consider not using LVM. The flexibility it adds is essential for managing your disks.
//Morten
on 10/23/2007 10:16 PM Anup Shukla spake the following:
The Dell PERC 5/i "does" have an option to create multiple LUNs. For now, I am creating a small "system" disk and multiple 500 GB "data" disks.
I would not make the data partitions that small, as Linux will play hell thinking they really are separate drives, and will slow down. Make them just under the 2 TB size.