I've mentioned this problem before but put off doing anything about it and maybe now someone can suggest the best solution.
I have a 3-member RAID1 set where one of the members is periodically swapped and rotated offsite. The filesystem contains a backuppc archive which has millions of hardlinks that make it impractical to copy with a file-oriented approach. The current filesystem is ext3 with one partition that uses the entire disk capacity (no lvm). It works as is, but...
I'd like to use a laptop-size drive for the swapped member, and the only ones available that match the size have 4k sectors. I have swappable, trayless SATA bays available for both drive sizes. The problem is that with the current partition layout, the drive with 4k sectors takes more than a day to re-sync, even though on read access its speed is a match for the full-sized drives that sync in a few hours.
My questions for any filesystem experts are:
Is there a way to adjust the existing md partitions to get the right alignment for 4k sectors without having to do a file-oriented copy to new partitions? A resize + a dd copy to shift the position might be feasible time-wise if that would work.
Is it worth converting to ext4?
Is there a difference between doing this on 5.6 or 6.x?
If I start over from scratch with 6.x, will the partitioning tools automatically align for 4k sector drives (with/without lvm?)?
On Mon, 25 Jul 2011, Les Mikesell wrote:
My questions for any filesystem experts are:
Is there a way to adjust the existing md partitions to get the right alignment for 4k sectors without having to do a file-oriented copy to new partitions? A resize + a dd copy to shift the position might be feasible time-wise if that would work.
no expert here, but I have the scars across my back from pulling arrows out, as a pioneer
We have hit the issue on our storage backend, which runs ext4, and on some of our dom0s built before 4k sector alignment was generally acknowledged and known to be potentially in play
We have some non-conformant units, and after searching, concluded that a 'wipe and rebuild' was the most time-efficient process for us -- YMMV
Is it worth converting to ext4?
ext4 is pleasant in some large filesystem cases, but probably overkill as a blanket option.
Certainly it is way overkill for domU as a general rule, as it makes for a more fragile image: generic tools are less likely to work without higher version and skill levels when a filesystem gets horked up and a repair expedition has to be mounted ... we had an issue where a 'dirty' filesystem that would not fsck kept showing up in a nightly backup exception report, and we ended up manually repairing what should have been repairable automatically
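(For what it's worth, the kind of repair expedition meant here looks something like this -- an untested sketch, and the device name is made up:)

  # force a full check of the domU image and accept the suggested fixes;
  # needs an e2fsprogs new enough to understand the ext4 features in use
  e2fsck -f -y /dev/vg0/domU_root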
Is there a difference between doing this on 5.6 or 6.x?
in C5, it took extra effort to use the technology preview; in C6 it is natively available
If I start over from scratch with 6.x, will the partitioning tools automatically align for 4k sector drives (with/without lvm?)?
no idea if gparted does this by default -- it does not in all versions; certainly fdisk did not -- 4k alignment is on our deployment checklist, and we are manually checking partitioning to make sure, when we are rebuilding boxes
-- Russ herrold
On Mon, 2011-07-25 at 13:23 -0400, R P Herrold wrote:
On Mon, 25 Jul 2011, Les Mikesell wrote:
<snip>
If I start over from scratch with 6.x, will the partitioning tools automatically align for 4k sector drives (with/without lvm?)?
no idea if gparted does this by default -- it does not in all versions; certainly fdisk did not -- 4k alignment is on our deployment checklist, and we are manually checking partitioning to make sure, when we are rebuilding boxes
I can only comment on the last section
I have built CentOS 6.0 on an SSD using the F15 version of gdisk. man gdisk shows, for the 'l' option:

"Change the sector alignment value. Disks with more logical sectors per physical sectors (such as some Western Digital models introduced in December of 2009) and some RAID configurations can suffer performance problems if partitions are not aligned properly for their internal data structures. On new disks, GPT fdisk attempts to align partitions on 2048-sector (1MiB) boundaries by default, which optimizes performance for both of these disk types."
Only straight ext4 partitions (no LVM). I have seen no problems so far ...
Number  Start (sector)  End (sector)  Size        Code  Name
     1            2048          4095  1024.0 KiB  EF02  BIOS boot partition
     2            4096       2101247  1024.0 MiB  0700  Linux/Windows data
     3         2101248       6295551  2.0 GiB     8200  Linux swap
     4         6295552      69210111  30.0 GiB    0700  Linux/Windows data
     5        69210112     132124671  30.0 GiB    0700  Linux/Windows data
     6       132124672     174067711  20.0 GiB    0700  Linux/Windows data
     7       174067712     468862094  140.6 GiB   0700  Linux/Windows data
John
John Austin wrote:
On Mon, 2011-07-25 at 13:23 -0400, R P Herrold wrote:
On Mon, 25 Jul 2011, Les Mikesell wrote:
My questions for any filesystem experts are:
Is there a way to adjust the existing md partitions to get the right alignment for 4k sectors without having to do a file-oriented copy to new partitions? A resize + a dd copy to shift the position might be feasible time-wise if that would work.
<snip>
We have some non-conformant units, and after searching, concluded that a 'wipe and rebuild' was the most time-efficient process for us -- YMMV
Is it worth converting to ext4?
ext4 is pleasant in some large filesystem cases, but probably overkill as a blanket option.
<snip>
If I start over from scratch with 6.x, will the partitioning tools automatically align for 4k sector drives (with/without lvm?)?
no idea if gparted does this by default -- it does not in all versions; certainly fdisk did not -- 4k alignment is on our deployment checklist, and we are manually checking partitioning to make sure, when we are rebuilding boxes
<snip> <rant> I think it was when I was building a 6.0 box a couple weeks ago, but I'd partition, it would do an mkfs... and *then* tell me it wasn't aligned, and I played with it several times, and it absolutely would NOT align it, nor offer to do so.
mark
--On Monday, July 25, 2011 01:56:38 PM -0400 m.roth@5-cent.us wrote:
I think it was when I was building a 6.0 box a couple weeks ago, but I'd partition, it would do an mkfs... and *then* tell me it wasn't aligned, and I played with it several times, and it absolutely would NOT align it, nor offer to do so.
When I was building out a 6.0 box a few days ago using 4k sector drives, I first booted into rescue mode and partitioned using fdisk via:

  fdisk -uc -H 224 -S 56 /dev/sd{a,b,c,d}

(I'm not sure, but the -H and -S might be irrelevant due to the -uc.)
The first partition then defaulted to starting at sector 2048 (one MB), size of 200MB. On all disks, one other partition was created holding the remainder of the disk. All partition types were set to 0xfd.
I then booted the install disk normally and eventually things got configured so that partition 1 on all drives makes up a 200MB mirrored /dev/md0 for /boot, and everything else went into /dev/md1 as RAID6.
As far as I can tell, I've not buggered things up ...
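(A quick way to double-check, if anyone wants it -- just a sketch using the device and array names above:)

  fdisk -lu /dev/sda       # start sectors should be divisible by 8 (2048 here)
  mdadm --detail /dev/md0  # confirm the members and state of the mirror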
Devin
Devin Reade wrote:
--On Monday, July 25, 2011 01:56:38 PM -0400 m.roth@5-cent.us wrote:
I think it was when I was building a 6.0 box a couple weeks ago, but I'd partition, it would do an mkfs... and *then* tell me it wasn't aligned, and I played with it several times, and it absolutely would NOT align it, nor offer to do so.
When I was building out a 6.0 box a few days ago using 4k sector drives, I first booted into rescue mode and partitioned using fdisk via:

  fdisk -uc -H 224 -S 56 /dev/sd{a,b,c,d}

(I'm not sure, but the -H and -S might be irrelevant due to the -uc.)
<snip> No joy - I think I have to use parted - the drive was too big for fdisk.
mark
On Mon, Jul 25, 2011 at 1:10 PM, Les Mikesell lesmikesell@gmail.com wrote:
<snip>
I've wondered many times, though haven't tried it, if the issues with hard links and backuppc could be solved by using a container file with a loopback mount, and then that file could be moved around as needed without running into hard-link issues.
In this case, you could format the external drive in the optimal mode for 4k sectors, then create a container file and mount it using loopback. Then add the loopback device to the mdraid and have it sync.
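Something like this, I'd imagine (untested sketch; the paths, device names, and size are made up, and the container has to be at least as large as the existing members):

  mount /dev/sdi1 /mnt/offsite                # filesystem created 4k-aligned
  # sparse-allocate a container at least as large as the other raid members
  dd if=/dev/zero of=/mnt/offsite/member.img bs=1M seek=715403 count=0
  losetup /dev/loop0 /mnt/offsite/member.img
  mdadm --add /dev/md0 /dev/loop0             # resyncs like any other member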
-☙ Brian Mathis ❧-
On 7/25/2011 1:42 PM, Brian Mathis wrote:
I've wondered many times, though haven't tried it, if the issues with hard links and backuppc could be solved by using a container file with a loopback mount, and then that file could be moved around as needed without running into hard-link issues.
In this case, you could format the external drive in the optimal mode for 4k sectors, then create a container file and mount it using loopback. Then add the loopback device to the mdraid and have it sync.
It doesn't really help with the problem as it stands, which is that the target disk (a swappable sata, not really external) has no extra space that would permit shifting the alignment. It might work to shrink the existing size, then partition the new drives with the right offset, but I may just start from scratch and keep the old drives around in case I need the old history.
Les Mikesell wrote:
<snip>
I thought this was a 3-disk RAID1? Can't you repartition the hotswap disk and still have the data on the other 2? Why would you need to shrink the existing partition? Just blow it away and resync the data once you rebuild the disk.
-☙ Brian Mathis ❧-
On 7/25/2011 4:05 PM, Brian Mathis wrote:
<snip>
I thought this was a 3-disk RAID1? Can't you repartition the hotswap disk and still have the data on the other 2? Why would you need to shrink the existing partition? Just blow it away and resync the data once you rebuild the disk.
The disk I want to add is the same size as the existing disks if expressed in 512 byte sectors - and they have one partition taking all of the disk space. If I add a leading offset to get the 4k alignment, there won't be enough room for the existing partition size.
On 07/25/11 2:17 PM, Les Mikesell wrote:
The disk I want to add is the same size as the existing disks if expressed in 512 byte sectors - and they have one partition taking all of the disk space. If I add a leading offset to get the 4k alignment, there won't be enough room for the existing partition size.
You sure it's that tight? Different brand and model 1TB (or whatever) drives vary all over the place in actual size; generally newer ones are a hair bigger than older ones. You need at most 7 sectors to achieve 4-kilobyte alignment.
On 7/25/2011 4:26 PM, John R Pierce wrote:
On 07/25/11 2:17 PM, Les Mikesell wrote:
The disk I want to add is the same size as the existing disks if expressed in 512 byte sectors - and they have one partition taking all of the disk space. If I add a leading offset to get the 4k alignment, there won't be enough room for the existing partition size.
You sure it's that tight? Different brand and model 1TB (or whatever) drives vary all over the place in actual size; generally newer ones are a hair bigger than older ones. You need at most 7 sectors to achieve 4-kilobyte alignment.
The full sized disks are Seagates:
Host: scsi7 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: ST3750640NS      Rev: 3.AE

and fdisk sees this:

Disk /dev/sdh: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
The 2.5" ones are WD's: Host: scsi9 Channel: 00 Id: 00 Lun: 00 Vendor: ATA Model: WDC WD7500BPVT-0 Rev: 01.0 Type: Direct-Access ANSI SCSI revision: 05 fdisk: Disk /dev/sdi: 750.1 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes
Don't see any extra space there unless you can shift the partition start forwards.
There's a very new 1TB drive that might fit in the swappable bays (a cute little thing that fits two in a floppy-drive space), but when I got these, 750GB was as large as you could go without adding extra height.
On 07/25/11 2:44 PM, Les Mikesell wrote:
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
where is your existing partition starting?
if it's on a track or cylinder boundary... then sure, you can move it forward by using something that will let you partition by sectors.
On 7/25/2011 5:33 PM, John R Pierce wrote:
On 07/25/11 2:44 PM, Les Mikesell wrote:
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
where is your existing partition starting?
if it's on a track or cylinder boundary... then sure, you can move it forward by using something that will let you partition by sectors.
   Device Boot      Start         End      Blocks   Id  System
/dev/sdh1                1       91201   732572001   fd  Linux raid autodetect
It doesn't need to boot. And the 3rd member doesn't need to autodetect, although I do want to be able to mount it independently if needed. Should it work to use the raw disk instead of a partition?
On 07/25/11 3:54 PM, Les Mikesell wrote:
   Device Boot      Start         End      Blocks   Id  System
/dev/sdh1                1       91201   732572001   fd  Linux raid autodetect
It doesn't need to boot. And the 3rd member doesn't need to autodetect, although I do want to be able to mount it independently if needed. Should it work to use the raw disk instead of a partition?
thats by "cylinder", which is an old MSDOS legacy thing. I believe parted and probably some other programs let you partition by sector instead.
you previously wrote...
On 07/25/11 2:44 PM, Les Mikesell wrote:
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
So a 'cylinder' is 8225 KB per that; simply cutting that down by a few KB will get you on your 4K boundary. Right now a cylinder is 255*63 = 16065 sectors, which is most certainly not divisible by 8 (8 512-byte sectors is 4K bytes).
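(A quick sanity check in shell arithmetic -- rounding the old start sector 63 up to the next 8-sector boundary gives the offset that comes up later in this thread:)

  echo $(( (63 + 7) / 8 * 8 ))   # prints 64, the nearest 4k-aligned start sector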
From: John R Pierce pierce@hogranch.com
thats by "cylinder", which is an old MSDOS legacy thing. I believe parted and probably some other programs let you partition by sector instead.
In my kickstart pre script, I use:

  ... | sfdisk -H $HEADS -S $SECTORS -uS --force -L $DEVICE

For SSDs, I saw the recommended respective values: 224 56 (or 32 32). fdisk also has -u for sectors unit, and -H/-S to force a fake geometry. But, from the fdisk man page:

"-b sectorsize
Specify the sector size of the disk. Valid values are 512, 1024, or 2048. (Recent kernels know the sector size. Use this only on old kernels or to override the kernel's ideas.)"

Which would seem to imply that fdisk is limited to 2K sector sizes?
I do not deal with drives above 2TB though... But I must admit that I am still a bit confused by all these alignments...
JD
On Tuesday, July 26, 2011 05:21:58 AM John Doe wrote:
From: John R Pierce pierce@hogranch.com
thats by "cylinder", which is an old MSDOS legacy thing. I believe parted and probably some other programs let you partition by sector instead.
In my kickstart pre script, I use:

  ... | sfdisk -H $HEADS -S $SECTORS -uS --force -L $DEVICE

For SSDs, I saw the recommended respective values: 224 56 (or 32 32). fdisk also has -u for sectors unit, and -H/-S to force a fake geometry.
But I must admit that I am still a bit confused by all these alignments...
The key thing is to be sector-aligned per physical drive; align to eight-sector blocks; a starting sector of 56 would work. With RAID and LVM, alignment to chunks or stripes is desirable.
Forget CHS specifications; they haven't been valid for years anyway. Think LBA and only LBA and you'll be fine. No drive made actually has 255 heads, or a constant 63 sectors per track, either, for that matter. All mechanical hard drives made these days employ ZBR (zone bit recording) with a variable number of sectors per track, fewer than ten heads (or 12, in the case of some 15K RPM FC and SCSI drives that I know about; I have some 15K RPM 36GB SCSI drives with six physical platters and 12 genuine physical heads, all in a half-height 3.5 inch form-factor), and many thousands of cylinders.
SSDs don't even have heads or tracks, and thus those specifications are meaningless and need to just go away. It's LBA all the way, and the critical alignment is to erase-block size.
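(On a reasonably recent kernel you can see what the drive actually reports -- a sketch, with a made-up device name:)

  cat /sys/block/sdi/queue/physical_block_size  # 4096 on a 4k "Advanced Format" drive
  cat /sys/block/sdi/queue/logical_block_size   # typically 512 on 512-emulation drives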
See the following articles for more, and better, information; they go into a lot more detail than I have time for:
http://www.ibm.com/developerworks/linux/library/l-4kb-sector-disks/index.htm...
http://www.ocztechnologyforum.com/forum/showthread.php?48309-Partition-align...
(yes, the thread says windows, but the particular post is about Linux)
http://www.tcpdump.com/kb/os/windows/disk-alignment/into.html (has some good illustrations that are relevant on Linux, even though the article is about Windows)
And there are more; those were all on the first page of a Google search for the terms 'sector alignment linux' (no quotes).
On 07/25/2011 10:10 AM, Les Mikesell wrote:
<snip>
My questions for any filesystem experts are:
Is there a way to adjust the existing md partitions to get the right alignment for 4k sectors without having to do a file-oriented copy to new partitions? A resize + a dd copy to shift the position might be feasible time-wise if that would work.
Is it worth converting to ext4?
Is there a difference between doing this on 5.6 or 6.x?
If I start over from scratch with 6.x, will the partitioning tools automatically align for 4k sector drives (with/without lvm?)?
For LVM, see the --dataalignment and --dataalignmentoffset options. For md devices, my understanding is that the raid superblock is at the end of the partition, so the data is aligned with wherever the partition starts. I verified this using:

  hexdump /dev/md1 | head -6
  hexdump /dev/sda4 | head -6
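(An example of those options -- just a sketch with a made-up device, aligning the data area on the 1MiB boundary discussed elsewhere in this thread:)

  pvcreate --dataalignment 1m /dev/sdb1
  pvs -o +pe_start /dev/sdb1   # shows where the data area actually begins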
Nataraj
On 2011-07-25 19:10, Les Mikesell wrote:
My questions for any filesystem experts are:
Is there a way to adjust the existing md partitions to get the right alignment for 4k sectors without having to do a file-oriented copy to new partitions? A resize + a dd copy to shift the position might be feasible time-wise if that would work.
I think so. Your partition starts at sector 63; you need anything divisible by 8, so:
- 64 is my expectation,
- 56 is a fallback solution if the partition does not fit on the disk with a 64-sector offset,
- 2048 would be perfect (1MiB alignment is currently preferred by CentOS 6 and many other OSs).
To be on the safe side, take the disk out of the array (mdadm -f /dev/md0 /dev/sdX1 ; mdadm -r /dev/md0 /dev/sdX1) and clear superblock using mdadm --zero-superblock /dev/sdX1.
Then repartition the disk using fdisk with the following commands:

fdisk /dev/sdX
  u  -- display units are sectors
  c  -- no DOS compatibility (== no cylinder rounding, which you definitely want)
  o  -- new DOS partition table
  n  -- new partition
  p  -- primary
  1  -- partition 1
  64 -- starting offset
  1465144065 -- exact ending sector here, because (just to be on the safe side) you do not want a larger partition on a rescue disk than on a base disk. Your partition sdh1 has 732572001 1k-blocks, as you wrote in one e-mail: multiply by 2 (sectors), add 64 (the starting offset), subtract 1 because the offset is inclusive, and you get 2*732572001+64-1 = 1465144065. If fdisk complains that this is too much, then offset 64 cannot be used and you need to repeat the procedure with offset 56 (don't forget to recalculate the ending sector).
  t  -- type
  fd -- linux raid autodetect
  w
mdadm -a /dev/mdX /dev/sdX1
And everything should be fine.
Is it worth converting to ext4?
I don't know.
Is there a difference between doing this on 5.6 or 6.x?
Yes, CentOS 6 by default aligns partitions on the disk automatically on 1MiB (2048-sector) boundaries. C6 LVM also aligns lv's on 1MiB boundaries relative to the pv start. Finally, md in CentOS 6 uses 512KiB chunks and aligns data on this boundary (the default md superblock in the CentOS 6 installer is 1.1, so it is at the beginning of the partition), so it is also OK.
If I start over from scratch with 6.x, will the partitioning tools automatically align for 4k sector drives (with/without lvm?)?
Yes. But I always check that to be sure :)
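(One way to do that check, as a sketch -- the parted shipped with EL6 can test alignment directly; the device name is made up:)

  parted /dev/sda align-check opt 1  # reports whether partition 1 is optimally aligned
  fdisk -lu /dev/sda                 # start sectors should be divisible by 8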
Andrzej
On 7/26/2011 3:11 AM, Andrzej Szymanski wrote:
<snip>
Thank you! That seems to have worked, but now I'm curious as to why the partition on the old drives didn't go to the end of the disk - which I had expected would have left no extra room. Was the DOS-style rounding computing the end of a cylinder wrong?
fdisk -lu /dev/sdh (old 3.5")

Disk /dev/sdh: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
Units = sectors of 1 * 512 = 512 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdh1               63  1465144064   732572001   fd  Linux raid autodetect
fdisk -lu /dev/sdi (new 2.5")

Disk /dev/sdi: 750.1 GB, 750156374016 bytes
1 heads, 1 sectors/track, 1465149168 cylinders, total 1465149168 sectors
Units = sectors of 1 * 512 = 512 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdi1               64  1465144065   732572001   fd  Linux raid autodetect
The laptop drive is still slower, but not 10x slower like before. I did try something like this earlier trying for a 56 sector offset but must have done something wrong.
On 07/26/2011 03:53 PM, Les Mikesell wrote:
Thank you! That seems to have worked, but now I'm curious as to why the partition on the old drives didn't go to the end of the disk - which I had expected would have left no extra room. Was the DOS-style rounding computing the end of a cylinder wrong?
fdisk -lu /dev/sdh (old 3.5")

Disk /dev/sdh: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
Units = sectors of 1 * 512 = 512 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdh1               63  1465144064   732572001   fd  Linux raid autodetect
Simple. The drive does not contain an integral number of those arbitrary 255H x 63S cylinders, and with the previous cylinder-aligned partitioning the partition extended only to the end of the last full cylinder.
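(The arithmetic bears that out -- a quick shell check using the numbers above:)

  echo $(( 91201 * 16065 ))              # 1465144065 sectors in 91201 full cylinders
  echo $(( 1465149168 - 91201 * 16065 )) # 5103 sectors left over past the last full cylinder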
Are there differences in the way CentOS 6.0 handles md raid1 arrays compared to earlier versions? After getting my drive with 4k sectors partitioned with a 64-sector starting offset and working under 5.6 with about a 20MB/sec sync rate, I booted the 6.0 livecd to see if there would be any improvement. Instead of autodetecting the pairing, it assembled each partition into its own md device with missing members, and the sync of the drive with the 4k sectors would only go at about 4MB/sec (regardless of what I put in /proc/sys/dev/raid/speed_limit_min and _max).
What was worse was that booting back to 5.6 left most of the raid pairs broken, some still having numbers in the md120 range, and one of the drives with an unrecognizable partition table. I think I can repair everything, but did I miss something about this in release notes somewhere?
What was worse was that booting back to 5.6 left most of the raid pairs broken, some still having numbers in the md120 range, and one of the drives with an unrecognizable partition table. I think I can repair everything, but did I miss something about this in release notes somewhere?
I just went through this very scenario yesterday on a recovery box; in the end, troubleshooting wasn't worth it, so I recreated the sets in C6 and re-populated the data back from the master.
Fortunately, it's my habit to always mount by a reliable means like UUID or forced friendly mpath names, and I am glad I did; my md0 became md127 after the first reboot, and any attempts to stop and re-assemble with the name I had were quashed at next boot.
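(The usual workaround, sketched -- the config path is the stock EL one, and the fstab line is only an example:)

  # record the arrays so assembly uses stable names across boots
  mdadm --detail --scan >> /etc/mdadm.conf
  # and mount by filesystem UUID in /etc/fstab rather than by device name, e.g.:
  # UUID=0123abcd-...  /backup  ext4  defaults  0 2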
Chalked that one up to early-adopter woes and called it fixed :)
jlc
On 7/26/2011 3:11 AM, Andrzej Szymanski wrote:
<snip>
One more follow-up on this. It did work fine, but it turned out that the disk I was using was defective, which is probably what threw off my earlier attempts to get the alignment right. It would start with some promising values for the resync speed, but it kept slowing down more and more as it went, and eventually it got to the point where errors were reported. I returned it, and the replacement is a pretty close match to the 3.5" drives in speed.