I have a system running CentOS 6.3, with a SCSI attached RAID:
http://www.raidweb.com/index.php/2012-10-24-12-40-09/janus-ii-scsi/2012-10-2...
For disaster recovery purposes, I want to build up a spare system which could take the place of the server hosting the RAID above.
But here's what I see:
# fdisk -l /dev/sdc
WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sdc: 44004.7 GB, 44004691814400 bytes
255 heads, 63 sectors/track, 5349932 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 524288 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1      267350  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.
#
But here are the partitions I have:
# df -k | grep sdc
/dev/sdc1    15379809852  8627488256  6596071608  57% /space01
/dev/sdc2     6248052728   905001184  5279574984  15% /space02
/dev/sdc5     8175038780  2418326064  5673659088  30% /space03
/dev/sdc4     6248052728  1444121916  4740454252  24% /space04
/dev/sdc3     6248052728  1886640284  4297935884  31% /space05
#
How can I build up a new system to be ready for this existing RAID? Or will the latest/greatest CentOS just know what to do, and allow me to simply copy /etc/fstab over and have it respected?
Thanks!
Regards, Joseph Spenner
On 9/10/2013 9:52 AM, Joseph Spenner wrote:
> How can I build up a new system to be ready for this existing RAID? Or
> will the latest/greatest CentOS just know what to do, and allow me to
> simply copy /etc/fstab over and have it respected?
Do what that error said: use parted.
parted /dev/sda print

to list the partitions on device /dev/sda ('parted -l' lists the layout of all block devices).
fdisk is deprecated
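For example, to see the GPT layout on the array (a sketch; /dev/sdc is taken from the fdisk output above and may enumerate differently on another host):

parted -s /dev/sdc unit GB print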
From: John R Pierce pierce@hogranch.com
To: centos@centos.org
Sent: Tuesday, September 10, 2013 10:11 AM
Subject: Re: [CentOS] large SCSI RAID, replacing server
> Do what that error said: use parted.
>
> fdisk is deprecated
Thanks for the reply!
But is there a way to stage the new system so that all I need to do is move the RAID from the old system to the new one? Or do I need to do anything at all? I'm not sure whether the existing system has some special packages that make it able to use those large partitions; it doesn't appear to have 'parted' installed.
That's why I was curious if the latest/greatest CentOS would know what to do.
On 9/10/2013 10:23 AM, Joseph Spenner wrote:
> But is there a way to stage the new system so that all I need to do
> is move the RAID from the old system to the new one?
what kind of raid is this? hardware, mdraid, what? if hardware raid, then what sort of hardware raid controller, and does that controller support moving volumes between systems?
are both systems totally identical hardware? in that case, I'd simply move ALL the disks, including the system volume.
----- Original Message -----
| are both systems totally identical hardware? in that case, I'd simply
| move ALL the disks, including the system volume.
I agree here. Looking to the future, consider using e2label to assign or change a label on an ext2/3/4 file system, or xfs_admin -L <label> /dev/sdc1 for XFS, and then move to mounting by file system label instead of by device name. It will avoid the device enumeration problems discussed earlier.
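A minimal sketch of those commands (the device paths and label names are assumptions based on the df output above; xfs_admin needs the file system unmounted):

e2label /dev/sdc2 space02          # ext2/3/4: can be labeled while mounted
umount /space01
xfs_admin -L space01 /dev/sdc1     # XFS: label only while unmounted
mount LABEL=space01 /space01       # mount by label instead of device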
On Tue, Sep 10, 2013 at 12:47 PM, James A. Peltier jpeltier@sfu.ca wrote:
> I agree here. Looking to the future, consider e2label or xfs_admin -L
> and then move to mounting by file system label instead of by device
> name.
Or make it worse, depending on how well and how centrally you can track things like that.
----- Original Message -----
| Or make it worse, depending on how well and how centrally you can
| track things like that.
I was referring more to the context of these two machines and swapping the disks. In this case it would be simple; on a larger scale, yes, it can be problematic.

Still, it seems like this case is an offline cold backup and not an online hot swap-in. If the latter is what is desired, I would recommend something completely different than this.
On Tue, Sep 10, 2013 at 12:11 PM, John R Pierce pierce@hogranch.com wrote:
> Do what that error said: use parted.
>
> fdisk is deprecated
And note that the /dev/sd + letter names are detection-order dependent. Another box will detect the same partitions on the raid but they might end up with different names. Assuming you don't boot from the disk, that is something you can fix with a little fiddling after you see where it ends up on the new host, but it can be inconvenient - and you have to know how to remount your root disk rw so you can fix it when a bad fstab entry makes the boot fail.
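When that happens, the usual recovery from the maintenance shell is something like this (a sketch):

mount -o remount,rw /    # root comes up read-only after a failed boot
blkid                    # see what names/UUIDs the disks actually got
vi /etc/fstab            # fix the bad entry, then reboot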
----- Original Message -----
| How can I build up a new system to be ready for this existing RAID?
| Or will the latest/greatest CentOS just know what to do, and allow me
| to simply copy /etc/fstab over and have it respected?
Personally, I wouldn't (and don't) partition the disk(s) at all. Instead, I use LVM to manage the storage. This allows far greater flexibility and lets you provision the disks better. I'd recommend you use LVM on your new machine to create the volumes, and mount by file system label instead of by physical device path such as /dev/sdc1. Volume labels are not system specific, although conflicting labels between disks could be problematic.
pvcreate /dev/sdc
vgcreate DATA /dev/sdc
lvcreate -L 15379809852K -n space01 DATA    # sizes in 1K blocks, per the df output
lvcreate -L 6248052728K -n space02 DATA
...
mkfs.xfs -L space01 /dev/DATA/space01
mkfs.xfs -L space02 /dev/DATA/space02
...
then in /etc/fstab
LABEL=space01  /space01  xfs  defaults  0 0
LABEL=space02  /space02  xfs  defaults  0 0
...
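Once those entries are in place, something like this should bring it all up (mount points assumed from the df output earlier in the thread):

mkdir -p /space01 /space02
mount -a                 # mount everything listed in /etc/fstab
df -h | grep space       # verify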
This of course all depends on what you're trying to accomplish, but I would certainly recommend moving away from raw partitions. If you're just rsync'ing the data over, the sizes don't have to match; they only have to be large enough to hold the data.
Currently I can see that you have well over-provisioned the amount of space required, and the layout is fixed: /space01, which is much larger than the others, has no easy way to grow, so how would you shrink the other partitions to make room? With LVM you would provision each volume with some headroom over what's there now, and then grow the file systems as needed.
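Growing /space01 later would then be roughly (a sketch, assuming the DATA volume group above has free extents):

lvextend -L +1T /dev/DATA/space01    # add 1 TiB to the logical volume
xfs_growfs /space01                  # XFS grows while mounted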
As a side note, you may want to investigate Gluster or DRBD to actually replicate the data across the nodes, giving you a more "true" replication and fail-over configuration.