Hi,
I have a server with two disks. I installed CentOS 5.9 with RAID1: I created /dev/md0 to hold "/" and /dev/md1 for swap, and nothing else. GRUB is installed on /dev/md0. After the successful installation, the server does not boot. I don't see the boot loader, just a blinking cursor on a blank screen.
What have I done wrong?
Thanks,
Paras
On 03/07/2013 05:30 PM, thus Paras pradhan spake:
Hi,

> I created /dev/md0 to hold "/" and /dev/md1 for swap, and nothing else.
> GRUB is installed on /dev/md0. After the successful installation, the
> server does not boot.
> What have I done wrong?

Have you paid attention to 'Section Two' here?
http://wiki.centos.org/HowTos/SoftwareRAIDonCentOS5
HTH,
Timo
I don't get a GRUB prompt, so I can't issue "c".
Paras.
On Thu, Mar 7, 2013 at 10:33 AM, Timo Schoeler timo.schoeler@riscworks.net wrote:
> Have you paid attention to 'Section Two' here?
> http://wiki.centos.org/HowTos/SoftwareRAIDonCentOS5
One question:
During the install, do I install grub on sda or md0?
Paras.
On Thu, Mar 7, 2013 at 10:43 AM, Paras pradhan pradhanparas@gmail.com wrote:
> I don't get a GRUB prompt, so I can't issue "c".
On Thu, Mar 7, 2013 at 12:00 PM, Paras pradhan pradhanparas@gmail.comwrote:
> One question:
> During the install, do I install grub on sda or md0?
Install GRUB on sda and sdb. Installing GRUB on the MBR of both disks
ensures that your system can still boot if one disk has failed.
Although the Linux OS sees those two drives as a software RAID1, GRUB
looks at a single drive when booting.
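For what it's worth, here is a minimal sketch from the GRUB legacy shell;
it assumes the md0 members are the first partition on each disk, so adjust
(hd0,0) to match your layout:

  grub
  grub> device (hd0) /dev/sda    <- map the first disk as (hd0)
  grub> root (hd0,0)             <- partition holding the grub files
  grub> setup (hd0)              <- write GRUB to /dev/sda's MBR
  grub> device (hd0) /dev/sdb    <- now pretend the second disk is (hd0)
  grub> root (hd0,0)
  grub> setup (hd0)              <- write GRUB to /dev/sdb's MBR
  grub> quit

Mapping each disk to (hd0) in turn means either disk can boot the system
on its own if the other one has failed.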
I have a CentOS 5.4 installation with grub on /dev/md0 and no problem at all. My primary disk failed; I replaced the disk, again with no problem at all. What has changed in the 5.9 and 6 releases? It's not easy anymore.
Paras.
On Thu, Mar 7, 2013 at 11:17 AM, SilverTip257 silvertip257@gmail.com wrote:
> Install GRUB on sda and sdb. Installing GRUB on the MBR of both disks
> ensures that your system can still boot if one disk has failed.
On Thu, Mar 7, 2013 at 12:17 PM, SilverTip257 silvertip257@gmail.comwrote:
> Although the Linux OS sees those two drives as a software RAID1, GRUB
> looks at a single drive when booting.
When you boot to your rescue CD, check the metadata version of your RAID1 array. You might reply back with the output from "mdadm -D --scan".
This is similar to what I saw when using an unsupported metadata version (too new for the GRUB version). If you didn't set up any partitions by hand, I would expect Anaconda to have set the proper metadata version.
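To illustrate (a sketch; the device names are examples): GRUB legacy can
only boot from arrays with the old 0.90 superblock, which lives at the end
of each member so the filesystem still looks native to the boot loader.
The newer 1.1/1.2 formats put the superblock at the start and break that.

  # Show each array and its metadata version:
  mdadm -D --scan
  mdadm -D /dev/md0 | grep -i version

  # When creating an array by hand for the boot volume, force the old
  # superblock format:
  mdadm --create /dev/md0 --metadata=0.90 --level=1 \
        --raid-devices=2 /dev/sda1 /dev/sdb1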
-- ---~~.~~--- Mike // SilverTip257 //
On Thu, Mar 7, 2013 at 9:33 AM, Timo Schoeler timo.schoeler@riscworks.net wrote:
> Have you paid attention to 'Section Two' here?
> http://wiki.centos.org/HowTos/SoftwareRAIDonCentOS5
Does the same warning about it not being recommended to use software RAID also apply to CentOS 6?
Thanks, Dave
On 03/07/2013 06:29 PM, Dave Johansen wrote:
> Does the same warning about it not being recommended to use software
> RAID also apply to CentOS 6?
Dave, I've been using software RAID with every type of Red Hat distro (RH/CentOS/Fedora) for over 10 years without any serious difficulties. I don't quite understand the logic behind all these negative statements about software RAID on that wiki page. The worst I get into is having to boot from a bootdisk if the MBR gets corrupted for any reason. No big deal; just rerun grub.
Gerry
On 3/7/2013 3:35 PM, Gerry Reno wrote:
> I don't quite understand the logic behind all these negative statements
> about software RAID on that wiki page.
Have you been putting /boot on an mdraid? That's what the article is recommending against.
I've always put a static /boot on each drive, then made the REST of the drive an mdraid mirror and put LVM in it. All that needs to be done is to rsync the primary /boot to the backup following any kernel updates.
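Something along these lines (a sketch; /dev/sdb1 and the /boot2 mount
point are just example names for the second drive's standalone boot
partition):

  # After a kernel update, mirror the live /boot onto the backup
  # boot partition.
  mount /dev/sdb1 /boot2
  rsync -a --delete /boot/ /boot2/
  umount /boot2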
On Thu, 7 Mar 2013, John R Pierce wrote:
>> I've been using software RAID with every type of Red Hat distro
>> (RH/CentOS/Fedora) for over 10 years without any serious difficulties.
+1
> Have you been putting /boot on an mdraid? That's what the article is
> recommending against.
I don't understand why. As long as you remember to install grub on each drive, you're good to go.
Steve
On 03/07/2013 06:40 PM, John R Pierce wrote:
> Have you been putting /boot on an mdraid? That's what the article is
> recommending against.
Yes, I have had /boot on /dev/md0 many times. Some distros (anacondas) give great problems with this. For those I just create the entire filesystem outside of anaconda and then tell it to use the existing Linux partitions. Works fine.
Gerry
On Thu, Mar 7, 2013 at 5:40 PM, John R Pierce pierce@hogranch.com wrote:
> Have you been putting /boot on an mdraid? That's what the article is
> recommending against.
I've put /boot on md raid1 on a lot of machines (always drives small enough to be MBR-based) and never had any problem with the partition looking enough like a native one for grub to boot it. The worst thing I've seen is that some machines change their idea of BIOS disk 0 and 1 when the first one fails, so your grub setup might be wrong even after you do it on the 2nd disk -- and that would be the same with or without raid. As long as you are prepared to boot from a rescue disk, you can fix it easily anyway.
On 03/07/2013 06:52 PM, Les Mikesell wrote:
> As long as you are prepared to boot from a rescue disk, you can fix it
> easily anyway.
Good point, Les. Rescue disks and bootdisks are critical if you're going to use software raid.
On Thu, Mar 7, 2013 at 6:54 PM, Gerry Reno greno@verizon.net wrote:
> On 03/07/2013 06:52 PM, Les Mikesell wrote:
>> I've put /boot on md raid1 on a lot of machines (always drives small
>> enough to be MBR-based) and never had any problem with the partition
>> looking enough like a native one for grub to boot it.
No problems here either - I have had /boot on software raid1 on quite a few systems past and present.
>> The worst thing I've seen is that some machines change their idea of
>> BIOS disk 0 and 1 when the first one fails.

If I do have a drive fail, I can frequently hot-remove the failed drive and hot-add the replacement to get it resyncing without powering off.
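Roughly like this (a sketch; it assumes the failed member is /dev/sdb1 in
/dev/md0 and that the controller/backplane supports hot-swap):

  # Drop the failed member out of the array.
  mdadm /dev/md0 --fail /dev/sdb1
  mdadm /dev/md0 --remove /dev/sdb1
  # Swap the disk, clone the MBR partition table from the survivor,
  # then re-add the member and watch it resync.
  sfdisk -d /dev/sda | sfdisk /dev/sdb
  mdadm /dev/md0 --add /dev/sdb1
  cat /proc/mdstat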
> Good point, Les. Rescue disks and bootdisks are critical if you're
> going to use software raid.
I think we could argue that rescue disks are a necessity regardless of whether one is using software raid or not. :)
On Fri, Mar 8, 2013 at 6:14 AM, SilverTip257 silvertip257@gmail.com wrote:
> I think we could argue that rescue disks are a necessity regardless of
> whether one is using software raid or not. :)
Thanks for all of the helpful info, and now I have a follow-on question. I have a Dell M6500 that I've been running as a RAID1 using the BIOS RAID on RHEL 5. The issue is that when you switch one of the drives, the BIOS renames the RAID and then RHEL 5 doesn't recognize it anymore. So here are my questions:
1) Has this issue of handling the renaming been resolved in RHEL 6? (my
   guess is no)
2) Would a software RAID be a better choice than using the BIOS RAID?
3) If a software RAID is the better choice, is there going to be an
   impact on performance/stability/etc.?
Thanks, Dave
On Tue, Mar 12, 2013 at 1:28 PM, Dave Johansen davejohansen@gmail.comwrote:
> I have a Dell M6500 that I've been running as a RAID1 using the BIOS
> RAID on RHEL 5. The issue is that when you switch one of the drives,
> the BIOS renames the RAID and then RHEL 5 doesn't recognize it anymore.
>
> 1) Has this issue of handling the renaming been resolved in RHEL 6?
>    (my guess is no)
I've not seen any weird bios naming issues.
> 2) Would a software RAID be a better choice than using the BIOS RAID?
It depends! In another thread on this list someone said they prefer the reliable Linux toolset over the manufacturer tools/RAID controllers.
In a way it comes down to what you can afford and what you are comfortable with. And then there are hardware RAID controllers with dedicated chips onto which the RAID operations are offloaded.
> 3) If a software RAID is the better choice, is there going to be an
>    impact on performance/stability/etc.?
Supposedly hardware RAID performs better. I've not done any tests to quantify this statement I've heard others make.
Since it's software RAID, the OS will use a few CPU cycles to handle the softraid. But I doubt you'll miss those CPU cycles ... I haven't, and I have a mix of hardware RAID and software RAID systems. In the end it will come down to drive performance, in terms of how many IOPS you can squeeze out of your drives.
Make certain to set up checks for your array health and disk health (smartd). If you use hardware RAID, many controllers don't allow directly accessing the drives with smartctl ... you have to use a vendor binary/utility (or an open source one if available).
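For instance (a sketch of the kind of checks I mean; the schedule and
mail address are examples):

  # Have mdadm watch the arrays and mail root on failure events.
  mdadm --monitor --scan --daemonise --mail=root
  # Quick manual checks:
  cat /proc/mdstat
  smartctl -H /dev/sda
  smartctl -H /dev/sdb
  # Example /etc/smartd.conf entries: monitor everything, run a long
  # self-test every Sunday at 2am, mail root on trouble:
  #   /dev/sda -a -s L/../../7/02 -m root
  #   /dev/sdb -a -s L/../../7/02 -m root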
And then there's aligning your hardware RAID boundaries... [0] [1] :)
[0] http://www.mysqlperformanceblog.com/2011/06/09/aligning-io-on-a-hard-disk-ra...
[1] http://www.mysqlperformanceblog.com/2011/12/16/setting-up-xfs-the-simple-edi...
On Tue, Mar 12, 2013 at 5:04 PM, SilverTip257 silvertip257@gmail.com wrote:
>> 1) Has this issue of handling the renaming been resolved in RHEL 6?
>>    (my guess is no)
>
> I've not seen any weird bios naming issues.
It's an Intel Software RAID, and whenever either of the drives is switched, the system won't boot at all because it says it can't find the device. I end up having to boot to a rescue disk and then manually edit the init image to get it to boot.
Thanks to everyone for all the info and help. It sounds like the best solution is a software RAID with /boot being a non-RAID partition that is then rsynced onto the second drive, so I'll give that a whirl.
Thanks again, Dave
On Thu, Mar 21, 2013 at 10:04 PM, Dave Johansen davejohansen@gmail.comwrote:
> Thanks to everyone for all the info and help. It sounds like the best
> solution is a software RAID with /boot being a non-RAID partition that
> is then rsynced onto the second drive, so I'll give that a whirl.
Just a bit of emphasis here:
I've had success with /boot being part of a software RAID1 array and installing GRUB on the MBR of both disks. That way, if the main/first disk fails, the other disk still has GRUB installed and the system will boot when rebooted.
The members of a RAID1 are viewed as single disks by GRUB. That is, it boots off the _first drive_ -- it is not until the Linux kernel has loaded and mdadm has assembled the drives that you actually have a RAID1. Since GRUB only boots off one drive, it is prudent to install GRUB on both disks when set up this way.
Putting /boot on a RAID1 software array saves you from having to rsync /boot to the other drive, and from booting to a rescue CD to install GRUB on the new drive after the primary drive dies. The above configuration is some work up front, but less hassle in the wake of a drive failure.
Try it on a test system if you don't trust the configuration ... it will boot. :) I have this configuration on fifteen or more systems (rough estimate) and some have been in service for years.
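A quick way to convince yourself on a throwaway test box (a sketch; the
device names are examples):

  # Fail and remove the first disk's member, then reboot with that disk
  # detached -- the machine should come up off the second disk's GRUB
  # with a degraded md0.
  mdadm /dev/md0 --fail /dev/sda1
  mdadm /dev/md0 --remove /dev/sda1
  cat /proc/mdstat   # shows the array running on one member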
Best Regards,
On Fri, Mar 22, 2013 at 5:16 AM, SilverTip257 silvertip257@gmail.com wrote:
> I've had success with /boot being part of a software RAID1 array and
> installing GRUB on the MBR of both disks. [...] The above configuration
> is some work up front, but less hassle in the wake of a drive failure.
That does sound like a simpler solution in the longer term, and I'm more concerned with maintenance/use than with the difficulty of setting it up, so I will give this a whirl.
Thanks,
Dave
On 03/12/2013 10:28 AM, Dave Johansen wrote:
> 1) Has this issue of handling the renaming been resolved in RHEL 6?
>    (my guess is no)
> 2) Would a software RAID be a better choice than using the BIOS RAID?
Almost certainly, yes.
> 3) If a software RAID is the better choice, is there going to be an
>    impact on performance/stability/etc.?
You'd have to measure it on your own workload, but I wouldn't expect any. Hardware RAID frequently offers a significant performance benefit as a result of having a battery-backed write cache: as long as you don't fill the cache, writing to the RAID card's RAM is very fast, and writes go to disk as idle time becomes available. Your BIOS RAID probably doesn't have a write cache, and thus is probably no better than software RAID for performance.