Hello all,
I've got CentOS 5.2 installed on an old Pentium III server. The server is used by a club at my school, and they are rapidly running out of room on the internal IDE drives that are configured in software RAID 1.
I have a Silicon Image SATA controller in the server, and I was planning on adding two more SATA hard drives to the computer.
Is there a GUI like Disk Druid for initializing hard drives in software RAID 1 *after* the system has been installed?
Thanks, Hal
on 12-2-2008 9:05 AM Hal Martin spake the following:
Hello all,
I've got CentOS 5.2 installed on an old Pentium III server. The server is used by a club at my school, and they are rapidly running out of room on the internal IDE drives that are configured in software RAID 1.
I have a Silicon Image SATA controller in the server, and I was planning on adding two more SATA hard drives to the computer.
Is there a GUI like Disk Druid for initializing hard drives in software RAID 1 *after* the system has been installed?
Thanks, Hal
There is Webmin if you want something easy to admin the server. But the commandline parameters aren't that hard to master.
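For instance, creating and inspecting a mirror from the command line is only a couple of commands (the device names below are examples, not the poster's actual drives):

```shell
# Create a RAID 1 array from two partitions (example device names).
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Inspect the array and watch the initial synchronization.
mdadm --detail /dev/md1
cat /proc/mdstat
```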
Is there a GUI like Disk Druid for initializing hard drives in software RAID 1 *after* the system has been installed?
no need -
"
5.6 RAID-1
You have two devices of approximately the same size, and you want the two to be mirrors of each other. Eventually you may have more devices, which you want to keep as stand-by spare-disks that will automatically become part of the mirror if one of the active devices breaks.
Set up the |/etc/raidtab| file like this:
raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        persistent-superblock   1
        device                  /dev/sdb6
        raid-disk               0
        device                  /dev/sdc5
        raid-disk               1
If you have spare disks, you can add them to the end of the device specification like
device /dev/sdd5 spare-disk 0
Remember to set the |nr-spare-disks| entry correspondingly.
Ok, now we're all set to start initializing the RAID. The mirror must be constructed, i.e. the contents (however unimportant now, since the device is still not formatted) of the two devices must be synchronized.
Issue the
mkraid /dev/md0
command to begin the mirror initialization.
Check out the |/proc/mdstat| file. It should tell you that the /dev/md0 device has been started, that the mirror is being reconstructed, and an ETA of the completion of the reconstruction.
Reconstruction is done using idle I/O bandwidth. So, your system should still be fairly responsive, although your disk LEDs should be glowing nicely.
The reconstruction process is transparent, so you can actually use the device even though the mirror is currently under reconstruction.
Try formatting the device while the reconstruction is running. It will work. You can also mount it and use it while reconstruction is running. Of course, if the wrong disk breaks while the reconstruction is running, you're out of luck."
Taken from
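(For comparison, the modern replacement for the raidtab/mkraid procedure quoted above is a single mdadm command; the partition names follow the quoted example:)

```shell
# mdadm replaces both /etc/raidtab and mkraid in one step.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb6 /dev/sdc5

# As with mkraid, /proc/mdstat shows the mirror rebuilding and an ETA.
cat /proc/mdstat
```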
Tom Brown wrote on Tue, 02 Dec 2008 17:29:06 +0000:
unfortunately that and the mini-howto are both very much outdated. Much of what it mentions (like mkraid, /etc/raidtab) is not part of the distro anymore. You use mdadm nowadays. The parts that contain mdadm commands are still valid.
Does this "Silicon Image SATA controller" not include Hardware RAID by chance? The basic steps for software-RAID are: - decide about the partitions the RAID devices will be based on - if it is used only for data you may probably want to have just one RAID partition: mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb (I assume you can use sda and sdb. I always use several RAID partitions, so I never used the whole disk.) put LVM on it: pvcreate /dev/md0 vgcreate myvolumegroupname /dev/md0 - creates vg myvolumegroupname on it - start adding your logical volumes: lvcreate -L50G --name myname myvolumegroupname - adds a 50G logical volume named myname - format that lv: mkfs.ext3 /dev/myvolumegroupname/myname - copy any data you like from the old drives - add the mount points to fstab - if you don't boot from the RAID partition you are done now
Kai
Kai Schaetzl wrote:
Does this "Silicon Image SATA controller" not include Hardware RAID by chance?
Silicon Image... good old Fake RAID.
To use the Silicon Image Fake RAID you will need to reboot and enter the Silicon Image BIOS, where you can configure the RAID settings. Once the RAID is initialized, you can format it within the OS.
I would recommend against this, though. You should use mdadm as recommended previously, as Linux software RAID beats Fake RAID in compatibility, portability, and performance.
As far as a GUI partitioner goes, other than Disk Druid, which only appears during installation, I haven't found any that are included in the CentOS repositories.
You might want to try:
* gparted (Gnome Partition Editor) [1]
* qtparted (graphical frontend for parted) [2]
You can obtain RPMs for both from DAG.
Kenneth
[1] http://dag.wieers.com/packages/gparted/ [2] http://dag.wieers.com/packages/qtparted/
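(Assuming the DAG repository is already configured on the box, installing either one should be a single yum command:)

```shell
yum install gparted
# or
yum install qtparted
```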
Just along these lines, would it be possible for me to break the RAID 1 on the two internal drives into RAID 0, and then mirror that new RAID 0 array onto a SATA drive using RAID 1, without losing any data?
I used JFS as the file system for the RAID 1 array, so that may have to be changed to XFS, as you cannot dynamically expand JFS to the best of my knowledge.
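(As an aside: once the underlying device has been enlarged, XFS can be grown while mounted; the mount point below is hypothetical:)

```shell
# Grow a mounted XFS filesystem to fill its enlarged device.
xfs_growfs /mnt/data
```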
-Hal
Kai Schaetzl wrote:
Tom Brown wrote on Tue, 02 Dec 2008 17:29:06 +0000:
unfortunately that and the mini-howto are both very much outdated. Much of what it mentions (like mkraid, /etc/raidtab) is not part of the distro anymore. You use mdadm nowadays. The parts that contain mdadm commands are still valid.
Does this "Silicon Image SATA controller" not include Hardware RAID by chance? The basic steps for software-RAID are:
- decide about the partitions the RAID devices will be based on
- if it is used only for data you may probably want to have just one RAID partition:
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
  (I assume you can use sda and sdb. I always use several RAID partitions, so I never used the whole disk.)
- put LVM on it:
  pvcreate /dev/md0
  vgcreate myvolumegroupname /dev/md0
- creates vg myvolumegroupname on it
- start adding your logical volumes: lvcreate -L50G --name myname myvolumegroupname
- adds a 50G logical volume named myname
- format that lv: mkfs.ext3 /dev/myvolumegroupname/myname
- copy any data you like from the old drives
- add the mount points to fstab
- if you don't boot from the RAID partition you are done now
Kai
on 12-17-2008 5:42 PM Hal Martin spake the following:
Just along these lines, would it be possible for me to break the RAID 1 on the two internal drives into RAID 0, and then mirror that new RAID 0 array onto a SATA drive using RAID 1, without losing any data?
I used JFS as the file system for the RAID 1 array, so that may have to be changed to XFS, as you cannot dynamically expand JFS to the best of my knowledge.
-Hal
If the drive has LVM partitions, I don't know if they can be expanded reliably yet. I didn't have any luck the last time I tried.
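For reference, the usual sequence for growing a logical volume and the ext3 filesystem on it is below (names follow Kai's earlier examples; whether it works reliably on a given setup is, as noted, another matter):

```shell
# Add 20G to the logical volume, then grow the filesystem into it.
lvextend -L+20G /dev/myvolumegroupname/myname
resize2fs /dev/myvolumegroupname/myname
```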