I have just finished creating an array on our new enclosure and our CentOS 5 server has recognized it. It shows as the full 6tb in the LSI configuration utility as well as when I ran fdisk:
[root@HOST sbin]# fdisk /dev/sdb
Note: sector size is 2048 (not 512)
The number of cylinders for this disk is set to 182292. There is nothing wrong with that, but this is larger than 1024, and could in certain setups cause problems with: 1) software that runs at boot time (e.g., old versions of LILO) 2) booting and partitioning software from other OSs (e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/sdb: 5997.6 GB, 5997628227584 bytes
255 heads, 63 sectors/track, 182292 cylinders
Units = cylinders of 16065 * 2048 = 32901120 bytes
Device Boot Start End Blocks Id System
I then created a partition in fdisk and it appeared to work until I formatted it (here is the output of the formatting):
[root@HOST ~]# mkfs -t ext2 -j /dev/sdb1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
195264512 inodes, 390518634 blocks
19525931 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
11918 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
When I did a df I got the following (only the array entry is included):
[root@HOST etc]# df -h
Filesystem            Size  Used Avail Use% Mounted on
...
/dev/sdb1             1.5T  198M  1.5T   1% /home1
I then tried removing it and working with parted.
[root@HOST etc]# parted /dev/sdb
Warning: Device /dev/sdb has a logical sector size of 2048. Not all parts of GNU Parted support this at the moment, and the working code is HIGHLY EXPERIMENTAL.
GNU Parted 1.8.1 Using /dev/sdb Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) p Error: Unable to open /dev/sdb - unrecognised disk label. (parted) mklabel gpt *** glibc detected *** <unknown>: double free or corruption (!prev): 0x0000000016760800 *** ======= Backtrace: ========= /lib64/libc.so.6[0x3435c6f4f4] /lib64/libc.so.6(cfree+0x8c)[0x3435c72b1c] /usr/lib64/libparted-1.8.so.0[0x3436c1a5c5] /usr/lib64/libparted-1.8.so.0[0x3436c48a54] ======= Memory map: ======== 00400000-00410000 r-xp 00000000 08:05 130761 /sbin/parted 00610000-00611000 rw-p 00010000 08:05 130761 /sbin/parted 00611000-00612000 rw-p 00611000 00:00 0 00810000-00812000 rw-p 00010000 08:05 130761 /sbin/parted 1673d000-1677f000 rw-p 1673d000 00:00 0 3435800000-343581a000 r-xp 00000000 08:05 4765445 /lib64/ld-2.5.so 3435a19000-3435a1a000 r--p 00019000 08:05 4765445 /lib64/ld-2.5.so 3435a1a000-3435a1b000 rw-p 0001a000 08:05 4765445 /lib64/ld-2.5.so 3435c00000-3435d46000 r-xp 00000000 08:05 4765452 /lib64/libc-2.5.so 3435d46000-3435f46000 ---p 00146000 08:05 4765452 /lib64/libc-2.5.so 3435f46000-3435f4a000 r--p 00146000 08:05 4765452 /lib64/libc-2.5.so 3435f4a000-3435f4b000 rw-p 0014a000 08:05 4765452 /lib64/libc-2.5.so 3435f4b000-3435f50000 rw-p 3435f4b000 00:00 0 3436000000-3436013000 r-xp 00000000 08:05 4765602 /lib64/libdevmapper.so.1.02 3436013000-3436213000 ---p 00013000 08:05 4765602 /lib64/libdevmapper.so.1.02 3436213000-3436215000 rw-p 00013000 08:05 4765602 /lib64/libdevmapper.so.1.02 3436400000-3436402000 r-xp 00000000 08:05 4765458 /lib64/libdl-2.5.so 3436402000-3436602000 ---p 00002000 08:05 4765458 /lib64/libdl-2.5.so 3436602000-3436603000 r--p 00002000 08:05 4765458 /lib64/libdl-2.5.so 3436603000-3436604000 rw-p 00003000 08:05 4765458 /lib64/libdl-2.5.so 3436800000-3436802000 r-xp 00000000 08:05 4765478 /lib64/libuuid.so.1.2 3436802000-3436a02000 ---p 00002000 08:05 4765478 /lib64/libuuid.so.1.2 3436a02000-3436a03000 rw-p 00002000 08:05 4765478 /lib64/libuuid.so.1.2 3436c00000-3436c5c000 r-xp 00000000 08:05 464724 /usr/lib64/libparted-1.8.so.0.0. 1 3436c5c000-3436e5b000 ---p 0005c000 08:05 464724 /usr/lib64/libparted-1.8.so.0.0. 1 3436e5b000-3436e5f000 rw-p 0005b000 08:05 464724 /usr/lib64/libparted-1.8.so.0.0. 1 3436e5f000-3436e60000 rw-p 3436e5f000 00:00 0 3437000000-3437035000 r-xp 00000000 08:05 463236 /usr/lib64/libreadline.so.5.1 3437035000-3437234000 ---p 00035000 08:05 463236 /usr/lib64/libreadline.so.5.1 3437234000-343723c000 rw-p 00034000 08:05 463236 /usr/lib64/libreadline.so.5.1 343723c000-343723d000 rw-p 343723c000 00:00 0 343b000000-343b00d000 r-xp 00000000 08:05 4765466 /lib64/libgcc_s-4.1.2-20070626.s o.1 343b00d000-343b20d000 ---p 0000d000 08:05 4765466 /lib64/libgcc_s-4.1.2-20070626.s o.1 343b20d000-343b20e000 rw-p 0000d000 08:05 4765466 /lib64/libgcc_s-4.1.2-20070626.s o.1 343c000000-343c03b000 r-xp 00000000 08:05 4765743 /lib64/libsepol.so.1 343c03b000-343c23b000 ---p 0003b000 08:05 4765743 /lib64/libsepol.so.1 343c23b000-343c23c000 rw-p 0003b000 08:05 4765743 /lib64/libsepol.so.1 343c23c000-343c246000 rw-p 343c23c000 00:00 0 343c400000-343c415000 r-xp 00000000 08:05 4765744 /lib64/libselinux.so.1 343c415000-343c615000 ---p 00015000 08:05 4765744 /lib64/libselinux.so.1 343c615000-343c617000 rw-p 00015000 08:05 4765744 /lib64/libselinux.so.1 343c617000-343c618000 rw-p 343c617000 00:00 0 3448e00000-3448e4e000 r-xp 00000000 08:05 464701 /usr/lib64/libncurses.so.5.5 3448e4e000-344904e000 ---p 0004e000 08:05 464701 Aborted
Any suggestions or direction would be appreciated.
Thank you, Rob
I would seriously start thinking about using LVM on such a large storage unit.
You can't use an MBR partition table on a volume that large; there is a 2TB disk size limit and a 2TB partition size limit for MBR, so you must use GPT.
There is a real lack of reliable and easy GPT tools under Linux. parted can read GPT partition tables, but AFAIK it cannot create them.
LVM can handle volumes of extremely large size (64-bit), so you shouldn't run into any problems there and you can create file systems directly in LVs of 2TB+.
-Ross
________________________________
From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Rob Lines
Sent: Monday, February 04, 2008 11:34 AM
To: CentOS mailing list
Subject: [CentOS] Large RAID volume issues
On Mon, 4 Feb 2008 at 11:56am, Ross S. W. Walker wrote
You can't use an MBR partition table on a volume that large; there is a 2TB disk size limit and a 2TB partition size limit for MBR, so you must use GPT.
For completeness' sake, MBR=master boot record, not a partition table. The standard type of partition table is msdos. And, yes, it cannot handle devices >2TiB.
There is a real lack of reliable and easy GPT tools under Linux. parted can read GPT partition tables, but AFAIK it cannot create them.
Incorrect. parted has no issue creating and managing gpt disklabels.
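E.g., something along these lines should work on a scratch disk (sdX is just a placeholder, and the start/end values are only an example):

# parted /dev/sdX
(parted) mklabel gpt
(parted) mkpart primary 0 100%
(parted) print

That writes a gpt disklabel and a single partition spanning the disk.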
Joshua Baker-LePain wrote:
On Mon, 4 Feb 2008 at 11:56am, Ross S. W. Walker wrote
You can't use an MBR partition table on a volume that large; there is a 2TB disk size limit and a 2TB partition size limit for MBR, so you must use GPT.
For completeness' sake, MBR=master boot record, not a partition table. The standard type of partition table is msdos. And, yes, it cannot handle devices >2TiB.
Yes, MBR is the master boot record; it contains the boot loader and the partition table for the primary partitions (the extended partition table is kept in the first sector of the primary partition marked as the extended partition container). The only partition table type that can be kept in the MBR is the msdos/BIOS partition table, so when one says MBR one typically means the msdos partition table.
A GPT partition table is kept further into the disk, but it also keeps a "compatibility" MBR for BIOS-based systems, including a GPT-aware boot loader in the MBR to read and boot the GPT table. EFI-based systems don't use the MBR, as they read the GPT table directly and have the boot loader built in, so one will not likely see an MBR on a pure EFI-based system.
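One way to see that compatibility MBR in practice, on a disk that already carries a gpt label (sdX is a placeholder):

# fdisk -l /dev/sdX

should list a single protective partition of type ee ("EFI GPT") spanning the disk, and

# dd if=/dev/sdX bs=512 count=1 | hexdump -C | tail -4

should show the 55 aa boot signature in the last two bytes of that first sector.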
There is a real lack of reliable and easy GPT tools under Linux. parted can read GPT partition tables, but AFAIK it cannot create them.
Incorrect. parted has no issue creating and managing gpt disklabels.
Good to know. Last I used parted it was only able to read GPT tables, not create or modify them.
-Ross
On Mon, 4 Feb 2008 at 11:33am, Rob Lines wrote
I have just finished creating an array on our new enclosure and our CentOS 5 server has recognized it. It shows as the full 6tb in the LSI configuration utility as well as when I ran fdisk:
[root@HOST etc]# parted /dev/sdb
Warning: Device /dev/sdb has a logical sector size of 2048. Not all parts of GNU Parted support this at the moment, and the working code is HIGHLY EXPERIMENTAL.
This would appear to be your problem. Unless you have strong reasons to use 2K sectors, I'd change them to the much more standard 512.
After that, parted should have no issues whatsoever.
This would appear to be your problem. Unless you have strong reasons to use 2K sectors, I'd change them to the much more standard 512.
After that, parted should have no issues whatsoever.
In looking back through the configuration, the 2 KB sectors were set on the array via the Variable Sector Size option, as that was what was suggested for slices larger than 2 TB. The other option on the array was using 16-byte CDBs (Command Descriptor Blocks), but that option is not compatible with the LSI cards according to the manufacturer.
I would guess that scaling it back to 1 KB sectors would not really help either.
As an aside, for the LVM option, is there anything to be aware of when using it on such a large drive?
Thank you everyone for the information and help.
Rob
Rob Lines wrote:
This would appear to be your problem. Unless you have strong reasons to use 2K sectors, I'd change them to the much more standard 512. After that, parted should have no issues whatsoever.
In looking back through the configuration, the 2 KB sectors were set on the array via the Variable Sector Size option, as that was what was suggested for slices larger than 2 TB. The other option on the array was using 16-byte CDBs (Command Descriptor Blocks), but that option is not compatible with the LSI cards according to the manufacturer.
I would guess that scaling it back to 1 KB sectors would not really help either.
As an aside, for the LVM option, is there anything to be aware of when using it on such a large drive?
with LVM, you could join several smaller logical drives, maybe 1TB each, into a single volume set, which could then contain various file systems.
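Roughly, assuming the enclosure exported three ~1TB LUNs as /dev/sdb, /dev/sdc and /dev/sdd (the device, VG and LV names here are only placeholders):

pvcreate /dev/sdb /dev/sdc /dev/sdd
vgcreate bigvg /dev/sdb /dev/sdc /dev/sdd
lvcreate -L 2500G -n home1 bigvg
mkfs -t ext2 -j /dev/bigvg/home1

and you can grow the LV and filesystem later as more space is needed.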
On Feb 4, 2008 3:16 PM, John R Pierce pierce@hogranch.com wrote:
with LVM, you could join several smaller logical drives, maybe 1TB each, into a single volume set, which could then contain various file systems.
That looks like it may be the result. The main reason was to keep the amount of overhead and 'stuff' required to revive it in the event of a server issue to a minimum. That was one of the reasons for going with an enclosure that handles all the RAID internally and just presents to the server as a single drive. We had been trying to avoid LVM as we had run into problems using knoppix recovering it in the past.
It looks like we will probably just end up breaking it up into smaller chunks unless I can find a way for the enclosure to use 512 sectors and still have greater than 2 tb volumes.
Rob Lines wrote:
On Feb 4, 2008 3:16 PM, John R Pierce pierce@hogranch.com wrote:
with LVM, you could join several smaller logical drives, maybe 1TB each, into a single volume set, which could then contain various file systems.
That looks like it may be the result. The main reason was to keep the amount of overhead and 'stuff' required to revive it in the event of a server issue to a minimum. That was one of the reasons for going with an enclosure that handles all the RAID internally and just presents to the server as a single drive. We had been trying to avoid LVM as we had run into problems using knoppix recovering it in the past.
It looks like we will probably just end up breaking it up into smaller chunks unless I can find a way for the enclosure to use 512 sectors and still have greater than 2 tb volumes.
LVM is very well supported these days.
In fact I default to LVM for all my OS and external storage configurations here, as it provides greater flexibility and manageability than raw disks/partitions.
-Ross
On Feb 4, 2008 3:34 PM, Ross S. W. Walker rwalker@medallion.com wrote:
Rob Lines wrote:
On Feb 4, 2008 3:16 PM, John R Pierce pierce@hogranch.com wrote:
with LVM, you could join several smaller logical drives, maybe 1TB each, into a single volume set, which could then contain various file systems.
That looks like it may be the result. The main reason was to keep the amount of overhead and 'stuff' required to revive it in the event of a server issue to a minimum. That was one of the reasons for going with an enclosure that handles all the RAID internally and just presents to the server as a single drive. We had been trying to avoid LVM as we had run into problems using knoppix recovering it in the past.
It looks like we will probably just end up breaking it up into smaller chunks unless I can find a way for the enclosure to use 512 sectors and still have greater than 2 tb volumes.
LVM is very well supported these days.
In fact I default to LVM for all my OS and external storage configurations here, as it provides greater flexibility and manageability than raw disks/partitions.
How easy is it to migrate to a new OS install? Given the situation I described, with a single 6 TB 'drive' using LVM, if the server goes down and we have to rebuild it from scratch or move the storage to another machine (all using CentOS 5), how easy is that?
We are still checking with the vendor for a solution to move back to the 512 sectors rather than the 2k ones. Hopefully they come up with something.
Thanks, Rob
Rob Lines wrote:
On Feb 4, 2008 3:34 PM, Ross S. W. Walker rwalker@medallion.com wrote:
Rob Lines wrote:
On Feb 4, 2008 3:16 PM, John R Pierce pierce@hogranch.com wrote:
with LVM, you could join several smaller logical drives, maybe 1TB each, into a single volume set, which could then contain various file systems.
That looks like it may be the result. The main reason was to keep the amount of overhead and 'stuff' required to revive it in the event of a server issue to a minimum. That was one of the reasons for going with an enclosure that handles all the RAID internally and just presents to the server as a single drive. We had been trying to avoid LVM as we had run into problems using knoppix recovering it in the past.
It looks like we will probably just end up breaking it up into smaller chunks unless I can find a way for the enclosure to use 512 sectors and still have greater than 2 tb volumes.
LVM is very well supported these days.
In fact I default to LVM for all my OS and external storage configurations here, as it provides greater flexibility and manageability than raw disks/partitions.
How easy is it to migrate to a new OS install? Given the situation I described, with a single 6 TB 'drive' using LVM, if the server goes down and we have to rebuild it from scratch or move the storage to another machine (all using CentOS 5), how easy is that?
To move an external array to a new server is as easy as plugging it in and importing the volume group (vgimport).
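Roughly (VG_Name is just a placeholder): on the old box, deactivate and export the group before unplugging the array,

vgchange -an VG_Name
vgexport VG_Name

then on the new box, once the array is attached,

vgscan
vgimport VG_Name
vgchange -ay VG_Name

and the LVs show up under /dev/VG_Name/ ready to mount.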
Typically I name my OS volume groups "CentOS" and give semi-descriptive names to my external array volume groups, such as "Exch-SQL" or "VM_Guests".
You could also have a hot-standby server activate the volume group via heartbeat if the first server goes down, provided your storage allows multiple initiators to attach to it.
We are still checking with the vendor for a solution to move back to the 512 sectors rather than the 2k ones. Hopefully they come up with something.
I wish you luck here, but in my experience once an array is created with a set sector size or chunk size, changing these usually involves re-creating the array.
LVM might be able to handle the 2 KB sector size, though; there is no need to create any partition on the disk. But future migration compatibility could be questionable.
To create a VG out of it:
pvcreate /dev/sdb
then,
vgcreate "VG_Name" /dev/sdb
then,
lvcreate -L 4T -n "LV_Name" "VG_Name"
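then you can put a filesystem straight on the LV and mount it, e.g. (sticking with the placeholder names above):

mkfs -t ext2 -j /dev/VG_Name/LV_Name
mount /dev/VG_Name/LV_Name /home1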
If you get a new external array, say it's /dev/sdc, and you want to move all the data from the old one to the new one online and then remove the old one:
pvcreate /dev/sdc
vgextend "VG_Name" /dev/sdc
pvmove /dev/sdb /dev/sdc
vgreduce "VG_Name" /dev/sdb
pvremove /dev/sdb
Then take /dev/sdb offline.
-Ross
PS You might want to remove any existing MBR/GPT stuff off of /dev/sdb before you pvcreate it, with:
dd if=/dev/zero of=/dev/sdb bs=512 count=63
That will wipe the first track which should do it.
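If you want to double-check it is gone (hexdump is just one way to look):

dd if=/dev/sdb bs=512 count=1 2>/dev/null | hexdump -C

should come back all zeros for the first sector afterwards.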
On Feb 4, 2008 4:49 PM, Ross S. W. Walker rwalker@medallion.com wrote:
Luckily the array is empty at the moment, as we are only at the phase of building/mounting. As it turns out, there was a typo in the documentation and the array is supposed to be created using the 16-byte CDB option, which will allow it to use 512-byte sectors. (Apparently there was a rogue "not" in the docs, changing "If you have an LSI card you should use CDB" to "If you have an LSI card you should not use CDB.") So this will be our next attempt and we will go from there.
Thank you very much for the help and the quick lesson on LVM. Neat stuff we will have to look at.
Rob