I'm new to lvm. I decided to decrease the space of a logical volume. So I did a:

$ df -m
Filesystem                      1M-blocks  Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00      1953   251      1602  14% /
/dev/sda2                             494    21       448   5% /boot
tmpfs                                1014     0      1014   0% /dev/shm
/dev/mapper/VolGroup00-LogVol05     48481  6685     39295  15% /home
/dev/mapper/VolGroup00-LogVol03       961    18       894   2% /tmp
/dev/mapper/VolGroup00-LogVol01      7781  2051      5329  28% /usr
/dev/mapper/VolGroup00-LogVol02      5239   327      4642   7% /var
$ sudo lvm lvreduce -L -1000M /dev/VolGroup00/LogVol05
  Rounding up size to full physical extent 992.00 MB
  WARNING: Reducing active and open logical volume to 47.91 GB
           THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce LogVol05? [y/n]: y
  Reducing logical volume LogVol05 to 47.91 GB
  Logical volume LogVol05 successfully resized
$ df -m
Filesystem                      1M-blocks  Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00      1953   251      1602  14% /
/dev/sda2                             494    21       448   5% /boot
tmpfs                                1014     0      1014   0% /dev/shm
/dev/mapper/VolGroup00-LogVol05     48481  6685     39295  15% /home
/dev/mapper/VolGroup00-LogVol03       961    18       894   2% /tmp
/dev/mapper/VolGroup00-LogVol01      7781  2051      5329  28% /usr
/dev/mapper/VolGroup00-LogVol02      5239   327      4642   7% /var
Note that "df" shows the same size available. This probably means that the 2 "systems" aren't talking to each other (or my lvm command failed).
When I rebooted, things failed, going into "repair filesystem" mode. I tried:

fsck /dev/VolGroup00/LogVol05

but after a while it started giving block errors, specifically "Error reading block <block-number> (Invalid argument) while doing inode scan. Ignore error<y>?"

I held down the <Enter> key for a while in hopes that I'd be able to get through the errors, but no joy. I finally cancelled it.
I can rebuild the server; it's no big deal. In fact, the logical volume that went bad isn't a big deal data-wise, and I shouldn't need that data to bring up the server itself. I shouldn't need to mount it. So can I still save this? === Al
On Wed, May 02, 2007 at 06:59:26PM -0700, Al Sparks enlightened us:
Note that "df" shows the same size available. This probably means that the two "systems" aren't talking to each other (or my lvm command failed). [...] So can I still save this?
Did you resize the filesystem, too?
Matt
--- Matt Hyclak hyclak@math.ohiou.edu wrote:
Did you resize the filesystem, too?
Matt
Nope. How do you do that? === Al
On Wed, May 02, 2007 at 11:06:31PM -0700, Al Sparks enlightened us:
Did you resize the filesystem, too?
Matt
Nope. How do you do that?
resize2fs would be a good guess. Usually this is done *before* you shrink the disk out from underneath it. If anything was in those sectors you removed from the LV, you might be out of luck.
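Roughly, and assuming /home is ext3 on /dev/VolGroup00/LogVol05 and can be unmounted (the sizes below are only placeholders), the order looks something like this:

# unmount and force a check first; resize2fs refuses to shrink an unchecked filesystem
umount /home
e2fsck -f /dev/VolGroup00/LogVol05

# shrink the filesystem to a bit *less* than the size the LV will end up at
resize2fs /dev/VolGroup00/LogVol05 46G

# only now shrink the LV down to the target size
lvreduce -L 47G /dev/VolGroup00/LogVol05

mount /home

The exact numbers don't matter; the point is that the filesystem must never be larger than the LV underneath it.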
Matt
--- Matt Hyclak hyclak@math.ohiou.edu wrote:
resize2fs would be a good guess. Usually this is done *before* you shrink the disk out from underneath it. If anything was in those sectors you removed from the LV, you might be out of luck.
Matt
Well, I was able to get my server back up.
First, I had to boot it from a recovery CD (standard CentOS) and comment out the bad volume's mount entry in /etc/fstab.
When the server came back up, I tried using resize2fs to resize. resize2fs would not let me resize until I had manually run fsck, and fsck kept failing because the filesystem and logical volume block counts didn't match (been there, done that, no joy). So I increased the logical volume back to its original size to make them match, ran fsck again, and everything checked out. I was then able to use resize2fs to decrease the filesystem size, and then run lvreduce to decrease the volume size.
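From memory, the sequence looked roughly like this (the 992M matches what lvreduce said it had rounded to; getting the final sizes to line up took some trial and error, as I describe below):

# grow the LV back by the amount lvreduce took away, so fs and LV match again
lvextend -L +992M /dev/VolGroup00/LogVol05

# now the check completes cleanly
e2fsck -f /dev/VolGroup00/LogVol05

# shrink the filesystem first, then the LV
resize2fs /dev/VolGroup00/LogVol05 47G
lvreduce -L 47G /dev/VolGroup00/LogVol05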
However, because neither application seemed to allow me to specify sizes in blocks, I worked in megabytes and would end up within a few blocks of a match. I then had to estimate how many more MB (plus or minus) I needed to get the two in sync.
Kind of a pain. Anyway, I appreciate the help. Nice to have these tools available to use. === Al
Al Sparks spake the following on 5/3/2007 6:39 PM:
I was then able to use resize2fs to decrease the filesystem size, and then run lvreduce to decrease the volume size. However, because neither application seemed to allow me to specify sizes in blocks, I worked in megabytes and would end up within a few blocks of a match.
If it worked for you, then you could resize2fs to "smaller" than you need, resize the LV, and then run resize2fs with no size parameter, which I believe will grow the filesystem to use all the available space. Yes, it is one extra step, but it should work. I haven't used resize2fs in so long that I didn't think it worked on ext3. I guess past pains don't always carry into the future.
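Something like this for that last step (same LV name as in the earlier posts, untested here):

# after shrinking the fs below the target and lvreduce'ing the LV to the target,
# resize2fs with no size argument grows the filesystem to exactly fill the LV
resize2fs /dev/VolGroup00/LogVol05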
On Thu, May 3, 2007 2:06 am, Al Sparks wrote:
Did you resize the filesystem, too?
Matt
Nope. How do you do that? === Al
Take a look at http://tldp.org/HOWTO/LVM-HOWTO/ . Lots of good examples.
Al Sparks spake the following on 5/2/2007 6:59 PM:
$ sudo lvm lvreduce -L -1000M /dev/VolGroup00/LogVol05
  [...]
  WARNING: Reducing active and open logical volume to 47.91 GB
           THIS MAY DESTROY YOUR DATA (filesystem etc.)
LVM even warned you --IN CAPS-- "THIS MAY DESTROY YOUR DATA". I guess it was right. I haven't had much luck with reducing a volume below its initial size. I usually make a new LV and rsync or cp -a the data over to it. I try to leave some free space just for this. Or add a drive temporarily.
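Assuming there is free space left in the VG (the names and sizes below are made up), that is roughly:

# carve a new LV out of the free space and put a filesystem on it
lvcreate -L 50G -n home2 VolGroup00
mkfs.ext3 /dev/VolGroup00/home2

# copy the data over, preserving ownership, permissions and timestamps
mount /dev/VolGroup00/home2 /mnt
rsync -aH /home/ /mnt/        # or: cp -a /home/. /mnt/

# once you're happy with the copy, swap the mounts (and /etc/fstab) and drop the old LV
umount /mnt /home
mount /dev/VolGroup00/home2 /home
lvremove /dev/VolGroup00/LogVol05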
-----Original Message-----
From: Scott Silva
Sent: Friday, May 04, 2007 1:16 PM
Subject: [CentOS] Re: LVM Resizing Problem
LVM even warned you --IN CAPS-- "THIS MAY DESTROY YOUR DATA". I guess it was right. I haven't had much luck with reducing a volume below its initial size. I usually make a new LV and rsync or cp -a the data over to it. I try to leave some free space just for this. Or add a drive temporarily.
Were the LV calculations done in the VG's extent size unit?
Most people forget that LVM rounds to the nearest whole extent in its calculations, which I believe is 4MB by default, so care must be taken to make sure the filesystem fits comfortably inside the LV first.
-Ross
On May 6, 2007, at 8:14 AM, Ross S. W. Walker wrote:
Were the LV calculations done in the VG's extent size unit?
Most people forget that LVM rounds to the nearest whole extent in its calculations, which I believe is 4MB by default, so care must be taken to make sure the filesystem fits comfortably inside the LV first.
Is there any tool which is aware of both the filesystem and LVM layers and can correctly ensure the filesystem fits?
The filesystem on my only large disk array is corrupt, presumably due to some problem in one of the Fedora Core 6 update kernels. I rebooted into single user mode, fscked (which found a huge number of errors) and rebooted and it's still complaining. So it's time to start over with a fresh filesystem on a more trustworthy dom0 system (CentOS 5). I don't have anything vital stored only there, but there are a number of large files I'd like to save if possible. They don't fit anywhere else.
Here's my plan:
1. boot a CentOS 5 DVD in rescue mode, fsck the filesystem again
2. shrink the existing filesystem and LV (crossing my fingers)
3. install CentOS 5 to a new LV and filesystem
4. copy whatever's left of these files
5. delete the old LV
6. expand the new filesystem and LV
Never done steps #2 and #6 before, and I want to give it as much chance of success as possible with an already-screwed-up filesystem. I see the section in the LVM HOWTO [1], but it doesn't mention the sort of gotcha you're describing. I'm skeptical of LVM's documentation in general. It doesn't even mention RAID (md), instead including some incredibly stupid recipes virtually guaranteed to lose all your data when one disk out of many fails [2].
[1] - http://www.tldp.org/HOWTO/LVM-HOWTO/reducelv.html [2] - http://www.tldp.org/HOWTO/LVM-HOWTO/recipeadddisk.html
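For step 6, I'm assuming the usual grow sequence applies, i.e. the reverse order of a shrink (untested on my setup; the LV name and size are placeholders):

# grow the new LV into the space freed by deleting the old one
lvextend -L +100G /dev/VolGroup00/LogVolNew

# then grow the filesystem to fill it; with no size argument resize2fs uses the whole LV
resize2fs /dev/VolGroup00/LogVolNew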
Cheers, Scott
On Sunday 06 May 2007, Scott Lamb wrote:
Is there any tool which is aware of both the filesystem and LVM layers and can correctly ensure the filesystem fits?
You could always look at using system-config-lvm
On Sun, 6 May 2007, Scott Lamb wrote:
Is there any tool which is aware of both the filesystem and LVM layers and can correctly ensure the filesystem fits?
No tool as such that I am aware of, but bc is your good friend. Use lvdisplay rather than lvs, pvdisplay, etc., and count in LE (logical extents) rather than MB or GB.
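For example (the extent size and numbers below are only illustrative):

# extent size of the VG, and the LV's current size in extents
vgdisplay VolGroup00 | grep 'PE Size'
lvdisplay /dev/VolGroup00/LogVol05 | grep 'Current LE'

# say the PE size is 32 MB and you want the LV to end up at 1500 extents:
echo '1500 * 32' | bc          # 48000 MB

# shrink the filesystem to at most that size, then the LV to exactly that many extents
resize2fs /dev/VolGroup00/LogVol05 48000M
lvreduce -l 1500 /dev/VolGroup00/LogVol05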
- shrink the existing filesystem and LV (crossing my fingers)
You cannot shrink ext3 until it fscks clean, as far as I remember.
Never done steps #2 and #6 before, and I want to give it as much chance of success as possible with an already-screwed-up filesystem.
I would suggest you add an external HDD (e.g. USB2, or IDE/SATA with a USB2/PATA+SATA converter) and copy your data to it. If the filesystem is screwed up, resizing can only make its state worse.
I see the section in the LVM HOWTO [1], but it doesn't mention the sort of gotcha you're describing. I'm skeptical of LVM's documentation in general. It doesn't even mention RAID (md), instead including some incredibly stupid recipes virtually guaranteed to lose all your data when one disk out of many fails [2].
Could you please elaborate or give some pointers about the reasons for those recipes being stupid, etc.?
[1] - http://www.tldp.org/HOWTO/LVM-HOWTO/reducelv.html [2] - http://www.tldp.org/HOWTO/LVM-HOWTO/recipeadddisk.html
Cheers, Scott
On May 7, 2007, at 12:56 AM, Wojtek.Pilorz wrote:
No tool as such that I am aware of, but bc is your good friend. Use lvdisplay rather than lvs, pvdisplay, etc., and count in LE (logical extents) rather than MB or GB.
Okay. I will just do that carefully.
Shawn, thanks for the system-config-lvm suggestion. I don't think it will be too helpful in my specific situation (probably not available from the rescue disk), but it's good to know about in general.
You cannot shrink ext3 until it fscks clean, as far as I remember.
I think fsck reported the filesystem as clean... the kernel was still complaining, though.
I would suggest you add an external HDD (e.g. USB2, or IDE/SATA with a USB2/PATA+SATA converter) and copy your data to it. If the filesystem is screwed up, resizing can only make its state worse.
Definitely a more reliable way, but I don't want to spend money on this project. If I lose the files, life will go on.
I see the section in the LVM HOWTO [1], but it doesn't mention the sort of gotcha you're describing. I'm skeptical of LVM's documentation in general. It doesn't even mention RAID (md), instead including some incredibly stupid recipes virtually guaranteed to lose all your data when one disk out of many fails [2].
Could you please elaborate or give some pointers about the reasons for those recipes being stupid, etc.?
Sure. In the "add disk" recipe:
* the "dev" volume group will fail if /dev/sda, /dev/sdd, /dev/sdf, or /dev/sdg fail. * the "sales" volume group will fail if /dev/sdb or /dev/sde fail. * the "ops" volume group will fail if /dev/sdb or /dev/sde fail.
LVM does not do RAID, so adding drives makes failures more likely.
I brought this up on the LVM list [3]. One person came up with a nice recipe on how to use LVM+RAID, but there wasn't a consensus that it would be good to stop misleading people into destroying their disks.
[1] - http://www.tldp.org/HOWTO/LVM-HOWTO/reducelv.html [2] - http://www.tldp.org/HOWTO/LVM-HOWTO/recipeadddisk.html
[3] - http://marc.info/?l=linux-lvm&m=115879450212535&w=2
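Not the recipe from [3] verbatim, but the general shape of LVM on top of md (device names made up) is:

# build the redundancy at the md layer first
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]1

# then hand the single md device to LVM
pvcreate /dev/md0
vgcreate vg_data /dev/md0
lvcreate -L 500G -n data vg_data
mkfs.ext3 /dev/vg_data/data

That way a single disk failure degrades the array instead of taking out the whole volume group.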
On May 7, 2007, at 12:56 AM, Wojtek.Pilorz wrote:
I would suggest you add an external HDD (e.g. USB2, or IDE/SATA with a USB2/PATA+SATA converter) and copy your data to it. If the filesystem is screwed up, resizing can only make its state worse.
On second thought, there is a way I can do this without buying more drives or resizing filesystems. I can just degrade my RAID-5 array by pulling one of the drives and formatting it standalone. I think the stuff I want will (just barely) fit on a single drive.
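With software RAID that is basically the following (device names made up, and it obviously only works for as long as the remaining drives hold out):

# mark one member failed and remove it; the array keeps running degraded
mdadm /dev/md0 --fail /dev/sdd1
mdadm /dev/md0 --remove /dev/sdd1

# reuse the freed drive as standalone scratch space for the copy
mkfs.ext3 /dev/sdd1
mount /dev/sdd1 /mnt

# when finished, hand it back and let md rebuild the parity
umount /mnt
mdadm /dev/md0 --add /dev/sdd1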
Scott Lamb wrote:
On second thought, there is a way I can do this without buying more drives or resizing filesystems. I can just degrade my RAID-5 array by pulling one of the drives and formatting it standalone. I think the stuff I want will (just barely) fit on a single drive.
Isn't a degraded raid5 horribly slow?
Russ
On May 7, 2007, at 11:56 AM, Ruslan Sivak wrote:
Isn't a degraded raid5 horribly slow?
Probably. That's okay...I'm a patient guy, and I'll go back to full RAID-5 after I'm done backing up / reinstalling / restoring.
On May 7, 2007, at 11:56 AM, Ruslan Sivak wrote:
Isn't a degraded raid5 horribly slow?
For the record, my copy is going slowly. I'm copying files from three out of four RAID-5 drives to the non-RAID drive now, and the RAID is the bottleneck, not the single drive. From eyeballing the iostat output, an md device composed of three ST3320620AS drives is reading anywhere from 30,000-65,000 blocks/sec (individual devices reading 18,000-35,000 blocks/sec), and the other drive (while mostly idle) occasionally shoots up to writes of 120,000 blocks/sec. (I think block size = sector size = 512 bytes, so md read performance of 15 MB/sec - 33 MB/sec. Not good.)
The time seems to be going into seeks. The CPU's mostly idle (this is software RAID, so that means the parity calculations aren't the problem), and my ears and the "tps" column of iostat say there's a lot of seeking going on. I assume that's due to some combination of degraded RAID-5, the extra LVM layer, fragmentation, the filesystem damage, and possibly a few other processes accessing the RAID array since I've booted off it.
In particular with the fragmentation: some of the larger files were downloaded with rtorrent. I wonder if its access pattern of ftruncate()ing files to their full length (creating holes) and then filling them in with mmap()-based writes encourages the operating system to fragment the file. It also seems to be unusual enough to trigger kernel bugs - there was one recently about data corruption. I wonder if it also triggered whatever bug caused my filesystem to be corrupted.
In any case, the copy's not so slow that it won't finish tonight, which is good enough for me.