Hi,
Is it possible to create one RAID volume of 3.3 TB (3ware 9550SX-8LP with 8 × WD5000YS drives) and create an ext3 file system on it? (CentOS 4.3, 64-bit)
The maximum I am getting is 1.24 TB, and fdisk says no more space is available. But when I run fdisk /dev/sda it lists the full size of 3.3 TB. I just cannot create partitions totalling more than 1.24 TB.
Any idea?
Thanks
Rajeev
On 18/08/06, Rajeev R Veedu rajeev@cracknell.com wrote:
Hi,
Is it possible to create one RAID volume of 3.3 TB (3ware 9550SX-8LP with 8 × WD5000YS drives) and create an ext3 file system on it? (CentOS 4.3, 64-bit)
The maximum I am getting is 1.24 TB, and fdisk says no more space is available. But when I run fdisk /dev/sda it lists the full size of 3.3 TB. I just cannot create partitions totalling more than 1.24 TB.
I think there are limitations on the partition size fdisk can handle. I believe the canonical way of creating larger partitions is with "parted" but I can't say I've ever needed it.
Will.
On Friday 18 August 2006 13:52, Will McDonald wrote:
I think there are limitations on the partition size fdisk can handle. I believe the canonical way of creating larger partitions is with "parted" but I can't say I've ever needed it.
This is basically correct, but the limit is not really in fdisk; it is in the DOS partition table format. Two ways to go: 1) use parted and the GPT format, or 2) don't use partitions at all and just run LVM directly on the device (pvcreate /dev/sda ...).
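Peter's point about the DOS partition table is easy to check with arithmetic: an MBR partition entry stores the start and length as 32-bit sector counts, which with the classic 512-byte sector caps any single partition at 2 TiB:

```python
# MBR/DOS partition entries hold the start LBA and sector count in
# 32-bit fields; with 512-byte sectors that limits any single
# partition to 2 TiB.
SECTOR_SIZE = 512        # bytes per sector
MAX_SECTORS = 2 ** 32    # largest value a 32-bit sector-count field can hold

max_bytes = MAX_SECTORS * SECTOR_SIZE
print(max_bytes)              # 2199023255552 bytes
print(max_bytes / 2 ** 40)    # 2.0 TiB -- well short of a 3.3 TB array
```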
/Peter
Rajeev R Veedu wrote:
<snip>
You should try creating an LVM volume instead of a partition. With LVM you don't need to create a partition at all; you can use the whole device.
from lvm2 faq:
4.1.13. What is the maximum size of a single LV?
The answer to this question depends upon the CPU architecture of your computer and the kernel you are running:
For 2.4 based kernels, the maximum LV size is 2TB. For some older kernels, however, the limit was 1TB due to signedness problems in the block layer. Red Hat Enterprise Linux 3 Update 5 has fixes to allow the full 2TB LVs. Consult your distribution for more information in this regard.
For 32-bit CPUs on 2.6 kernels, the maximum LV size is 16TB.
For 64-bit CPUs on 2.6 kernels, the maximum LV size is 8EB. (Yes, that is a very large number.)
cheers,
Nick
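Putting Nicholas's suggestion together with the pvcreate hint earlier in the thread, a whole-device LVM setup would look roughly like this. This is only a sketch: the volume group and LV names are made up, the size is illustrative, and everything must run as root against the real 3ware device.

```shell
# Use the entire RAID device as an LVM physical volume -- no partition
# table involved, so the 2 TiB MS-DOS limit never applies.
pvcreate /dev/sda

# Create a volume group on it (the name "vg_data" is just an example).
vgcreate vg_data /dev/sda

# Carve out one big logical volume; adjust -L to taste, or run
# "vgdisplay vg_data" to see how many extents are free.
lvcreate -L 3200G -n lv_data vg_data

# ext3 on top, same as for a plain partition.
mkfs -t ext3 /dev/vg_data/lv_data
```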
On Fri, 2006-08-18 at 10:06 -0300, Nicholas Anderson wrote:
Rajeev R Veedu wrote:
<snip>
You should try creating an LVM volume instead of a partition. With LVM you don't need to create a partition at all; you can use the whole device.
You don't need LVM either; you can use the whole raw disk. *BUT* using LVM has some advantages, like being able to extend the volume group with new/other disks as needed.
Unless there are some *severe* performance concerns related to LVM, I would go with that too, as Nicholas suggests.
Just wanted you to know that you have the option of no partition and no LVM.
BTW, if it is a bootable unit, you will probably want at least one small partition, or you will need LVM; a raw disk alone will not suffice.
<snip>
HTH
On Friday 18 August 2006 15:17, William L. Maltby wrote:
... Just wanted you to know that you had the option of no partition and no lvm.
I agree that this should work, but I've seen some strange behaviour when trying it with large devices, and would therefore recommend against it.
/Peter
On Fri, 2006-08-18 at 15:39 +0200, Peter Kjellström wrote:
On Friday 18 August 2006 15:17, William L. Maltby wrote:
... Just wanted you to know that you had the option of no partition and no lvm.
I agree that this should work, but I've seen some strange behaviour when trying it with large devices, and would therefore recommend against it.
Even more: raw has only one basic advantage, maximum space and speed. But LVM seems to have little enough overhead that all its advantages become predominant in my mind. I never use raw anymore, regardless of size. I am too much in love with things like vgextend, vgreduce, ... which allow easy adjustments as the environmental needs change.
I am even using it as a cheap, fast SOHO backup and "instant recover" vehicle. A little more dev and I'm done (for *my* needs).
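The vgextend workflow William praises is short enough to sketch. Device and volume names here are hypothetical, and `ext2online` is the online ext3 grow tool that shipped with CentOS 4 (later distributions use resize2fs):

```shell
# A new disk appears as /dev/sdb: make it a PV and fold it into the VG.
pvcreate /dev/sdb
vgextend vg_data /dev/sdb

# Grow the logical volume into the new space...
lvextend -L +500G /dev/vg_data/lv_data

# ...then grow the mounted ext3 filesystem to match.
ext2online /dev/vg_data/lv_data
```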
<snip sig stuff>
William L. Maltby wrote:
Even more: raw has only one basic advantage, maximum space and speed. But LVM seems to have little enough overhead that all its advantages become predominant in my mind. I never use raw anymore, regardless of size. I am too much in love with things like vgextend, vgreduce, ... which allow easy adjustments as the environmental needs change.
Now, if the LVM implementation in Linux supported mirroring, that would be great. Perfect as replacement of md-raid1 and perfect for drive migrations where you don't have to move real live data, as you just move mirrors.
And define /boot to always start in PE #1, and you could do away with partitions altogether with an easy addition to grub.
On Fri, 2006-08-18 at 16:29 +0200, Morten Torstensen wrote:
William L. Maltby wrote:
<snip>
Now, if the LVM implementation in Linux supported mirroring, that would be great. Perfect as replacement of md-raid1 and perfect for drive migrations where you don't have to move real live data, as you just move mirrors.
I know we've wandered off-topic here, but I have to say that I recently saw something about an LVM mirroring implementation. I bet a quick Google search will get you to it; I don't recall where I saw it, though.
<snip daydreams about grub boot with no partitions needed ;-) >
Morten Torstensen wrote:
Now, if the LVM implementation in Linux supported mirroring, that would be great. Perfect as replacement of md-raid1 and perfect for drive migrations where you don't have to move real live data, as you just move mirrors.
lvm2 does, now, support mirrors.
Karanbir Singh wrote:
Morten Torstensen wrote:
Now, if the LVM implementation in Linux supported mirroring, that would be great.
lvm2 does, now, support mirrors.
I see that is a new feature in the lvm2 tools in the upstream provider's Update 4. I tried to Google up some info, but it all drowns in md mirroring and old info. I hope they have implemented 3-way mirrors and on-the-fly increase and reduction of mirrors, like you have in AIX LVM. Too bad it was based on HP-UX LVM originally, but that is water under the bridge.
With LVM it is pretty easy to do mirroring, since all you have to do is map more than one PE to each LE. I'm looking forward to testing this feature. It could also deprecate utilities like pvmove, since you would not really need them anymore.
On Sat, 2006-08-19 at 03:44, Morten Torstensen wrote:
lvm2 does, now, support mirrors.
With LVM it is pretty easy to do mirroring, since all you have to do is map more than one PE to each LE. I'm looking forward to testing this feature. It could also deprecate utilities like pvmove, since you would not really need them anymore.
Can you add a mirror to any LV after creating it, without making provisions ahead of time as you must when setting up RAID devices? I have found it useful to create 'broken RAID' devices that can be mirrored on demand to external drives, and I have been thinking about doing it over iSCSI too. However, it doesn't seem possible to install the system on a RAID with missing devices.
Les Mikesell wrote:
Can you add a mirror to any LV after creating it, without making provisions ahead of time as you must when setting up RAID devices?
Not sure how the lvm2 implementation in Linux works, as I have not seen any docs on how it is supposed to behave. In AIX, you do it on the fly with mklvcopy: you can make up to two copies (so three in total, one "original" and two copies), and you remove one with rmlvcopy. This can be done on the fly regardless of the filesystem, or whether it is mounted, since it all occurs at the LVM level, below the filesystem.
I have found it useful to create 'broken raid' devices that could be mirrored on demand to external drives and have been thinking about doing it over iscsi too. However it doesn't seem possible to install the system on a raid with missing devices.
Yes, this is a use for the feature: a kind of simple snapshot where you can make a portable copy of a filesystem (after the copy, you split the PV off from the VG).
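For the curious, the lvm2 mirroring discussed here is driven from lvcreate. This is a sketch only: the volume group name is made up, and the flag spellings should be checked against lvcreate(8) on the lvm2 version actually installed.

```shell
# Create a logical volume with one mirror copy (-m 1).  By default the
# mirror log lives on a small area of a third PV; --corelog keeps it in
# memory instead, at the cost of a full resync after every reboot.
lvcreate -m 1 --corelog -L 100G -n lv_mirrored vg_data

# lvs reports the mirror copies and their sync progress.
lvs vg_data
```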
Rajeev R Veedu wrote:
<snip>
One point that hasn't been brought up in this thread, and I'd like to mention it here, is that fdisk is actually deprecated; use parted instead, and as Peter has pointed out already, you need a GPT partition table type.
Also, since it's been referred to: 3ware provides its own means of supporting large volumes, called carving.
Hi,
Thanks to all who helped me solve the problem. I used parted and made the disk 3.2 TB, then created the ext3 file system with mkfs -t ext3 /dev/sda1.
Thanks once again for the suggestions and support
Rajeev
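For the archives, the parted session Rajeev describes would look roughly like this. Unit syntax differs between parted versions; older releases take partition ends in megabytes rather than percentages, so treat the exact arguments as illustrative.

```shell
# Write a GPT label -- this destroys any existing partition table!
parted /dev/sda mklabel gpt

# One partition spanning the whole device (on old parted the end is
# given in MB, e.g. "mkpart primary 0 3300000").
parted /dev/sda mkpart primary 0% 100%

# ext3 on the new partition, as in the original post.
mkfs -t ext3 /dev/sda1
```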
-----Original Message-----
From: Karanbir Singh
Sent: Saturday, August 19, 2006 2:35 AM
To: CentOS mailing list
Subject: Re: [CentOS] FW: FDISK Help please in Centos 4.3
<snip>