I have 4 ST2000DL003-9VT166 (2 Tbyte) disks in a RAID 5 array. Because of their size I built them as GPT partitioned disks. They were originally built on a CentOS 5.x machine, but more recently they were plugged into a CentOS 6.2 machine, where they were detected just fine, e.g.:
    % parted /dev/sdj print
    Model: ATA ST2000DL003-9VT1 (scsi)
    Disk /dev/sdj: 2000GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt

    Number  Start   End     Size    File system  Name        Flags
     3      2097kB  2000GB  2000GB               Linux RAID  raid
The MBR is empty except for the standard "protective" partition:
    % fdisk -l /dev/sdj

    WARNING: GPT (GUID Partition Table) detected on '/dev/sdj'! The util fdisk doesn't support GPT. Use GNU Parted.

    Disk /dev/sdj: 2000.4 GB, 2000398934016 bytes
    256 heads, 63 sectors/track, 242251 cylinders
    Units = cylinders of 16128 * 512 = 8257536 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdj1               1      242252  1953514583+  ee  GPT
    % dd if=/dev/sdj count=1 2> /dev/null | hexdump
    0000000 0000 0000 0000 0000 0000 0000 0000 0000
    *
    00001c0 0002 ffee ffff 0001 0000 88af e8e0 0000
    00001d0 0000 0000 0000 0000 0000 0000 0000 0000
    *
    00001f0 0000 0000 0000 0000 0000 0000 0000 aa55
    0000200
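As a sanity check on that dump: the first MBR partition slot starts at byte 446 and its type byte is 4 bytes in, so offset 450 should read back as the 0xee protective type (hexdump prints little-endian 16-bit words, which is why it shows up as "ffee" above):

    % dd if=/dev/sdj bs=1 skip=450 count=1 2> /dev/null | hexdump -C

That does print a single ee byte here, as expected.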
So far, so normal. This works fine under 2.6.32-220.23.1.el6.x86_64:

    % cat /proc/mdstat
    Personalities : [raid1] [raid10] [raid6] [raid5] [raid4]
    md127 : active raid5 sdj3[2] sdi2[1] sdk4[3] sdh1[0]
          5860537344 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
However, I just patched to CentOS 6.3, and on reboot this array failed to assemble. The 2.6.32-279 kernel complained that /dev/sdj was too similar to /dev/sdj3. But if I reboot back into -220.23.1 then it works.
And, indeed, if I run "mdadm --examine /dev/sdj" then it _does_ look like mdadm thinks the raw device is part of the array! But with a different name...

    % mdadm --examine /dev/sdj
    /dev/sdj:
              Magic : a92b4efc
            Version : 0.90.00
               UUID : 79b0ccbb:8f11154b:7df88b83:4a0b6975
      Creation Time : Thu Sep  8 22:14:01 2011
         Raid Level : raid5
      Used Dev Size : 1953512448 (1863.01 GiB 2000.40 GB)
         Array Size : 5860537344 (5589.04 GiB 6001.19 GB)
       Raid Devices : 4
      Total Devices : 4
    Preferred Minor : 5

        Update Time : Tue Jul 10 09:10:38 2012
              State : clean
     Active Devices : 4
    Working Devices : 4
     Failed Devices : 0
      Spare Devices : 0
           Checksum : 8cc92232 - correct
             Events : 36410

             Layout : left-symmetric
         Chunk Size : 64K

          Number   Major   Minor   RaidDevice State
    this     2       8      147        2      active sync   /dev/sdj3

       0     0       8      113        0      active sync   /dev/sdh1
       1     1       8      130        1      active sync   /dev/sdi2
       2     2       8      147        2      active sync   /dev/sdj3
       3     3       8      164        3      active sync   /dev/sdk4
This is the same output as if I'd run it against /dev/sdj3.
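My working theory: the version 0.90 superblock lives in the last 64 KiB-aligned 64 KiB block of the device, i.e. at sector ((size_in_sectors & ~127) - 128). /dev/sdj3 apparently starts at sector 4096 (a multiple of 128) and runs almost to the end of the disk, so the partition's superblock would land exactly on the sector where a whole-disk scan would also look. A quick way to test that (a bash sketch; the offset formula is my assumption, and blockdev --getsz reports the size in 512-byte sectors):

    for dev in /dev/sdj /dev/sdj3; do
        sz=$(blockdev --getsz $dev)
        # checksum the 64K superblock candidate at the end of each device
        dd if=$dev skip=$(( (sz & ~127) - 128 )) count=128 2> /dev/null | md5sum
    done

If the two checksums match, both device nodes are showing the same physical superblock.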
What's odd is that I also have 4 * 1 Tbyte disks in another RAID array, but those are partitioned with standard MBR partitions, and they are not reporting this problem.
    % dd if=/dev/sdd count=1 2> /dev/null | hexdump
    0000000 0000 0000 0000 0000 0000 0000 0000 0000
    *
    00001b0 0000 0000 0000 0000 0000 0000 0000 0100
    00001c0 0001 fefd ffff 003f 0000 5982 7470 0000
    00001d0 0000 0000 0000 0000 0000 0000 0000 0000
    *
    00001f0 0000 0000 0000 0000 0000 0000 0000 aa55
    0000200
    % mdadm --examine /dev/sdd
    /dev/sdd:
       MBR Magic : aa55
    Partition[0] :   1953520002 sectors at           63 (type fd)
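That would fit the theory above: /dev/sdd1 starts at sector 63, which is not a multiple of 128, so the sector holding the partition's superblock can never coincide with the sector a whole-disk scan would check (the former is always 63 mod 128, the latter 0 mod 128). The same kind of check, under the same offset assumption:

    # absolute sector of the 0.90 superblock candidate, as seen from
    # the whole disk vs. from the partition (which starts at sector 63)
    dsz=$(blockdev --getsz /dev/sdd)
    psz=$(blockdev --getsz /dev/sdd1)
    echo "whole disk: sector $(( (dsz & ~127) - 128 ))"
    echo "partition : sector $(( 63 + (psz & ~127) - 128 ))"

Those two numbers can never be equal, which would explain why --examine on /dev/sdd only reports the MBR.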
So, finally, the question: is there anything I can do inside the MBR of these GPT disks to stop mdadm from thinking the whole disk is part of the array? Or is there anything else I can do that'll let the new kernel assemble the array?
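One idea I've had, but not yet tested: pin mdadm to the partitions in /etc/mdadm.conf so the scan never even considers the raw disks. Something like the following (the DEVICE pattern is just my guess at a glob that matches partitions but not whole disks, and it would presumably also need to end up in the initramfs to affect assembly at boot):

    # /etc/mdadm.conf -- scan partitions only, never whole disks
    DEVICE /dev/sd[a-z][0-9]
    ARRAY /dev/md127 UUID=79b0ccbb:8f11154b:7df88b83:4a0b6975

Would that be enough, or is there a cleaner fix?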
Thanks!