Can anyone point me in the right direction for correcting errors on an HD when using LVM? I've tried e2fsck and it indicates a bad block. I've tried with -b 8193, 16384, and 32768, and no good.
I've found some info about reiserfsck on Google, but this utility doesn't seem to be included in CentOS 4.3. I did find it on my old FC1 box.
I am thinking now I really should have gone with just regular 83 Linux ext3 partitions. Arrgghhh.
And if I want to switch to 83 Linux instead of 8e LVM, what's the best way, or at least a feasible way? I can pop another drive in if I need to move data around, but I don't see how, as I can't mount the LVM partition (hda2).
Oh, and the errors I am getting in /var/log/messages are:
kernel: hda: dma_intr: status=0x51 { DriveReady SeekComplete Error }
kernel: hda: dma_intr: error=0x84 { DriveStatusError BadCRC }
On Thu, June 29, 2006 12:05 am, Paul wrote:
Can anyone point me in the right direction for correcting errors on an HD when using LVM? I've tried e2fsck and it indicates a bad block. I've tried with -b 8193, 16384, and 32768, and no good.
I've found some info about reiserfsck on Google, but this utility doesn't seem to be included in CentOS 4.3. I did find it on my old FC1 box.
I am thinking now I really should have gone with just regular 83 Linux ext3 partitions. Arrgghhh.
And if I want to switch to 83 Linux instead of 8e LVM, what's the best way, or at least a feasible way? I can pop another drive in if I need to move data around, but I don't see how, as I can't mount the LVM partition (hda2).
kernel: hda: dma_intr: status=0x51 { DriveReady SeekComplete Error }
kernel: hda: dma_intr: error=0x84 { DriveStatusError BadCRC }
Okay, there are a couple of issues here... First off, are you running the fsck on the partition or on the logical volume? You can't run fsck on an LVM physical volume because it doesn't hold a standard file system; it holds LVM metadata. If you run fsck, it has to be on the logical volume (/dev/VolGroupName/LogVolWhatever).
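Something like this, as a rough sketch (the stock CentOS names are assumed; check yours with lvscan or lvdisplay first):

  lvscan                                  # list the real LV paths on this box
  umount /dev/VolGroup00/LogVol00         # only fsck an unmounted (or read-only) FS;
                                          # if it's the root FS, boot rescue media instead
  e2fsck -f /dev/VolGroup00/LogVol00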
Now... You seem discouraged with LVM... The above errors point to a failing drive. Are you doing any sort of RAID or similar? LVM2 in Linux is used to manipulate partitions on the fly and allows for all sorts of spanning, moving, expanding, and things like that. It isn't like IBM's EVMS, where you can mirror volumes and do fun stuff like that. So if you want to protect your data you still need some sort of RAID. Going back to plain ext3 wouldn't help you in this case, because it's a hardware failure that's causing your troubles. It's similar to running RAID 0 and losing one drive.
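For what it's worth, a rough sketch of what LVM on top of a Linux software RAID 1 mirror (mdadm) looks like; the device names are made up, and this is a fresh-setup layout, not something you can retrofit onto a live single disk without a migration:

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda2 /dev/hdb2
  pvcreate /dev/md0                       # the mirror becomes the LVM physical volume
  vgcreate VolGroup00 /dev/md0
  lvcreate -L 10G -n LogVol00 VolGroup00
  mkfs.ext3 /dev/VolGroup00/LogVol00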
j
On Thu, 2006-06-29 at 00:05 -0400, Paul wrote:
Can anyone point me in the right direction for correcting errors on an HD when using LVM? I've tried e2fsck and it indicates a bad block. I've tried with -b 8193, 16384, and 32768, and no good.
What Jason said, essentially. E2fsck on /dev/VolGroup00/LogVol00 or whatever. I have just finished reading about 80% of all I found on the web about it (not lists, but HOWTOs, man pages, ...) and I feel it has many advantages over the "old way". And I *am* an "old way" type myself and, theoretically, don't easily change.
I've found some info about reiserfsck on Google, but this utility doesn't seem to be included in CentOS 4.3. I did find it on my old FC1 box.
Would this even be useful on an ext2/3 partition?
I am thinking now I really should have gone with just regular 83 Linux ext3 partitions. Arrgghhh.
And if I want to switch to 83 Linux instead of 8e LVM, what's the best way, or at least a feasible way? I can pop another drive in if I need to move data around, but I don't see how, as I can't mount the LVM partition (hda2).
Save what you can by mounting the *LVM* device, not the underlying physical partition. It sounds as if you *may* be making judgments in ignorance (no slam here, but if you haven't read up on the stuff, you're at a disadvantage, as with anything complex).
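Roughly like this (the names below are the CentOS defaults; substitute whatever vgscan reports on your box):

  vgscan                                  # find the volume group
  vgchange -ay VolGroup00                 # activate it so /dev/VolGroup00/* appears
  mount /dev/VolGroup00/LogVol00 /mnt     # mount the logical volume, *not* /dev/hda2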
There are several options, depending on just how badly the hardware check nails you. If the e2fsck on /dev/VolGroup00/LogVol00 (or whatever) manages to get the FS to a coherent state with minimal data loss, you can add a PV (physical volume; it could be a different partition on the same drive, or a new drive) to the volume group after appropriate setup, and then use pvmove to migrate the physical extents, all or part, from the bad drive to another (although I don't know if I would do that). That may be all you need to do, but I would not stop there.
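The pvmove part, very roughly (device names are examples only; read the man pages before doing this for real):

  pvcreate /dev/hdb1                      # set up the new partition as a PV
  vgextend VolGroup00 /dev/hdb1           # add it to the existing volume group
  pvmove /dev/hda2 /dev/hdb1              # migrate the extents off the suspect drive
  vgreduce VolGroup00 /dev/hda2           # then drop the old PV from the group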
Beyond that, you can make a boot and a root partition on the new drive, copy the /boot and / content, do an appropriate grub install, maybe reset the jumpers on the new drive... and more.
I just finished doing this setup for LVM and now have full boot and run from hda and hdd. I can either change boot drive in BIOS or boot current and edit grub to run off other root FS. With LVM snapshot feature, keeping in sync will be a breeze.
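The snapshot piece looks something like this (sizes and names made up; the snapshot only needs room for the blocks that change while it exists):

  lvcreate -s -L 1G -n rootsnap /dev/VolGroup00/LogVol00
  mount /dev/VolGroup00/rootsnap /mnt/snap    # a frozen copy to rsync/dump from
  # ... copy from /mnt/snap to the second drive ...
  umount /mnt/snap
  lvremove /dev/VolGroup00/rootsnap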
I have scripts I would share, but you must keep in mind this is my first use of LVM and it may not be optimal or even mostly correct. But it is working and I can see great things for my use of LVM.
I'd be glad to share the scripts with the list, if "The List" so desires, or privately. 60KB uncompressed; it does almost everything but the grub-install - just haven't automated and tested that - and the needed initrd modification. You won't need that if you are just replacing the drive.
Let me know if you want them.
HTH
On Thu, June 29, 2006 7:27 am, William L. Maltby wrote:
On Thu, 2006-06-29 at 00:05 -0400, Paul wrote:
Can anyone point me in the right direction for correcting errors on an HD when using LVM? I've tried e2fsck and it indicates a bad block. I've tried with -b 8193, 16384, and 32768, and no good.
What Jason said, essentially. E2fsck on /dev/VolGroup00/LogVol00 or whatever. I have just finished reading about 80% of all I found on the web about it (not lists, but HOWTOs, man pages, ...) and I feel it has many advantages over the "old way". And I *am* an "old way" type myself and, theoretically, don't easily change.
I've found some info about reiserfsck on Google, but this utility doesn't seem to be included in CentOS 4.3. I did find it on my old FC1 box.
Would this even be useful on an ext2/3 partition?
I am thinking now I really should have gone with just regular 83 Linux ext3 partitions. Arrgghhh.
And if I want to switch to 83 Linux instead of 8e LVM, what's the best way, or at least a feasible way? I can pop another drive in if I need to move data around, but I don't see how, as I can't mount the LVM partition (hda2).
Save what you can by mounting the *LVM* device, not the underlying physical partition. It sounds as if you *may* be making judgments in ignorance (no slam here, but if you haven't read up on the stuff, you're at a disadvantage, as with anything complex).
Yeah, probably more of a learning curve. Sometimes I get frustrated, then soon after all my over-excitement I catch on, and it's easy after that.
There are several options, depending on just how badly the hardware check nails you. If the e2fsck on /dev/VolGroup00/LogVol00 (or whatever) manages to get the FS to a coherent state with minimal data loss, you can add a PV (physical volume; it could be a different partition on the same drive, or a new drive) to the volume group after appropriate setup, and then use pvmove to migrate the physical extents, all or part, from the bad drive to another (although I don't know if I would do that). That may be all you need to do, but I would not stop there.
Beyond that, you can make a boot and a root partition on the new drive, copy the /boot and / content, do an appropriate grub install, maybe reset the jumpers on the new drive... and more.
I just finished doing this setup for LVM and now have full boot and run from hda and hdd. I can either change boot drive in BIOS or boot current and edit grub to run off other root FS. With LVM snapshot feature, keeping in sync will be a breeze.
I have scripts I would share, but you must keep in mind this is my first use of LVM and it may not be optimal or even mostly correct. But it is working and I can see great things for my use of LVM.
I'd be glad to share the scripts with the list, if "The List" so desires, or privately. 60KB uncompressed; it does almost everything but the grub-install - just haven't automated and tested that - and the needed initrd modification. You won't need that if you are just replacing the drive.
Let me know if you want them.
Thanks for all the info. I'm going to practice up on LVM on my test box. I am totally clueless on it. I am curious if I can change volume size on the fly, now that would be of use.
What's funny is that after I did all the troubleshooting on it last night, now it's fine, no more CRC errors. So I am thinking that the corrupt data was on the boot slice. But I still gotta know how to work with LVM.
And yes, send me your scripts. I like studying various code. I'm always learning. Thanks!
On Thu, 2006-06-29 at 17:44 -0400, Paul wrote:
On Thu, June 29, 2006 7:27 am, William L. Maltby wrote:
On Thu, 2006-06-29 at 00:05 -0400, Paul wrote:
Can anyone point me in the right direction for correcting errors on an HD when using LVM? <snip>
I just finished doing this setup for LVM and now have full boot and run from hda and hdd.<snip>
I'd be glad to share the scripts with the list, if "The List" so desires, or privately. 60KB uncompressed, does almost everything but the grub-install - just haven't automated and tested it - and the needed initrd modification. You won't need that if you just are replacing the drive.
Let me know if you want them.
Thanks for all the info. I'm going to practice up on LVM on my test box. I am totally clueless on it. I am curious if I can change volume size on the fly, now that would be of use.
Yes, you can. Go to... hang tight a sec ...
http://www.tldp.org/HOWTO/LVM-HOWTO/
as a decent starting point.
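The short version, as a rough sketch (sizes made up; it assumes free extents in the VG, which vgdisplay will show, and IIRC on CentOS 4 the online ext3 grow tool is ext2online rather than resize2fs):

  lvextend -L +1G /dev/VolGroup00/LogVol00
  ext2online /dev/VolGroup00/LogVol00     # grow the ext3 FS to match the new LV size
  # shrinking is the reverse order (shrink the FS first, then lvreduce)
  # and needs the FS unmounted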
What's funny is that after I did all the troubleshooting on it last night, now it's fine, no more CRC errors.
I had a similar problem on my (now) hdd drive when I was booting it as hda. After a shutdown for a while, I might get a silent freeze, or might get a CRC error after it started uncompressing; I just kept playing and eventually it would boot. My next step, now that I have a fall-back, is to isolate things (cable, drive, IDE port, ... whatever) so I know what to fix/replace. If it was just a few "infant mortality" sectors making their appearance, the drive might be good for years, based on my experience. If new bad sectors keep appearing, it's a boat anchor in the making...
So I am thinking that the corrupt data was on the boot slice. But I still gotta know how to work with LVM.
And yes, send me your scripts. I like studying various code. I'm always learning. Thanks!
Nothing fancy here since focus was on LVM learning and getting my butt covered with a new drive and fall-back.
Want them e-mailed to you (is the From address good, and does an attachment get through your firewall OK?), or I can see if I can post them at the web page Road Runner gives me (allow a day or two to get to it - big trees getting felled and I'm multiplexing).
On Thu, 2006-06-29 at 00:05 -0400, Paul wrote:
Can anyone point me in the right direction for correcting errors on an HD when using LVM? I've tried e2fsck and it indicates a bad block. I've tried with -b 8193, 16384, and 32768, and no good.
Keep in mind that backup superblocks will differ depending on file system size (IIRC) and block size (IIRC again) and who knows what? If your FS is "large" or ... anyway, here are my backup superblocks for an FS made as indicated:
################# m k e 2 f s & m k d i r ################
Filesystem label=/Home01
OS type: Linux
Block size=2048 (log=1)
Fragment size=2048 (log=1)
524288 inodes, 2097152 blocks
20971 blocks (1.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=538968064
128 block groups
16384 blocks per group, 16384 fragments per group
4096 inodes per group
Superblock backups stored on blocks:
        16384, 49152, 81920, 114688, 147456, 409600, 442368, 802816, 1327104, 2048000
Writing inode tables: 0/128...127/128 done
Creating journal (8192 blocks): done
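If you need to see where the backups land for an existing FS without redoing mke2fs for real, something like this should do it (mke2fs -n only reports what it would do, it doesn't write, but double-check that flag on your version before pointing it at a live device):

  mke2fs -n -b 4096 /dev/VolGroup00/LogVol00               # dry run: prints the backup locations
  dumpe2fs /dev/VolGroup00/LogVol00 | grep -i superblock   # or read them off the FS itself
  e2fsck -b 32768 -B 4096 /dev/VolGroup00/LogVol00         # then use a backup, with matching block size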
<snip>
HTH
On Thu, 2006-06-29 at 08:18 -0400, William L. Maltby wrote:
On Thu, 2006-06-29 at 00:05 -0400, Paul wrote:
Can anyone point me in the right direction for correcting errors on an HD when using LVM? I've tried e2fsck and it indicates a bad block. I've tried with -b 8193, 16384, and 32768, and no good.
Keep in mind that backup superblocks will differ depending on file system size (IIRC) and block size (IIRC again) and who knows what? If your FS is "large" or ... anyway, here are my backup superblocks for an FS made as indicated <snip>
And to prove I've not ingested enough coffee yet, here's the 4KB blocksize one I meant to include with the other post.
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
410400 inodes, 2441216 blocks
122060 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2499805184
75 block groups
32768 blocks per group, 32768 fragments per group
5472 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
OK, I'm still trying to solve this. The server has been up rock steady, but the errors concern me. I built this on a test box months ago, and now that I think about it, I may have built it originally on a drive from a different manufacturer, although about the same size (20 GB). This may have something to do with it. What is the easiest way to get these errors taken care of? I've tried e2fsck, and also ran fsck on Vol00. Looks like I made a fine mess of things. Is there a way to fix it without reloading CentOS? Here are some outputs:
snapshot from /var/log/messages:
Jul 12 04:03:21 hostname kernel: hda: dma_intr: status=0x51 { DriveReady SeekComplete Error }
Jul 12 04:03:21 hostname kernel: hda: dma_intr: error=0x84 { DriveStatusError BadCRC }
Jul 12 04:03:21 hostname kernel: ide: failed opcode was: unknown
sfdisk -l:
Disk /dev/hda: 39870 cylinders, 16 heads, 63 sectors/track
Warning: The partition table looks like it was made for C/H/S=*/255/63 (instead of 39870/16/63).
For this listing I'll assume that geometry.
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/hda1   *      0+     12      13-    104391   83  Linux
/dev/hda2         13    2500    2488  19984860   8e  Linux LVM
/dev/hda3          0       -       0         0    0  Empty
/dev/hda4          0       -       0         0    0  Empty
Warning: start=63 - this looks like a partition rather than the entire disk. Using fdisk on it is probably meaningless. [Use the --force option if you really want this]
sfdisk -lf
Disk /dev/hda: 39870 cylinders, 16 heads, 63 sectors/track
Warning: The partition table looks like it was made for C/H/S=*/255/63 (instead of 39870/16/63).
For this listing I'll assume that geometry.
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/hda1   *      0+     12      13-    104391   83  Linux
/dev/hda2         13    2500    2488  19984860   8e  Linux LVM
/dev/hda3          0       -       0         0    0  Empty
/dev/hda4          0       -       0         0    0  Empty
Disk /dev/hda1: 207 cylinders, 16 heads, 63 sectors/track
sfdisk: ERROR: sector 0 does not have an msdos signature
 /dev/hda1: unrecognized partition
No partitions found
Disk /dev/hda2: 39652 cylinders, 16 heads, 63 sectors/track
sfdisk: ERROR: sector 0 does not have an msdos signature
 /dev/hda2: unrecognized partition
No partitions found

Disk /dev/dm-0: cannot get geometry
Disk /dev/dm-0: 0 cylinders, 0 heads, 0 sectors/track
sfdisk: ERROR: sector 0 does not have an msdos signature
 /dev/dm-0: unrecognized partition
No partitions found

Disk /dev/dm-1: cannot get geometry
Disk /dev/dm-1: 0 cylinders, 0 heads, 0 sectors/track
sfdisk: ERROR: sector 0 does not have an msdos signature
 /dev/dm-1: unrecognized partition
No partitions found
On Wed, 2006-07-12 at 19:33 -0400, Paul wrote:
OK, I'm still trying to solve this. The server has been up rock steady, but the errors concern me. I built this on a test box months ago, and now that I think about it, I may have built it originally on a drive from a different manufacturer, although about the same size (20 GB). This may have something to do with it. What is the easiest way to get these errors taken care of? I've tried e2fsck, and also ran fsck on Vol00. Looks like I made a fine mess of things. Is there a way to fix it without reloading
AFAIK, there is no "easiest way". From my *limited* knowledge, you have a couple different problems (maybe) and they are not identified. I'll offer some guesses and suggestions, but without my own hard-headed stubbornness in play, results are even more iffy.
CentOS? Here are some outputs:
snapshot from /var/log/messages:
Jul 12 04:03:21 hostname kernel: hda: dma_intr: status=0x51 { DriveReady SeekComplete Error }
Jul 12 04:03:21 hostname kernel: hda: dma_intr: error=0x84 { DriveStatusError BadCRC }
Jul 12 04:03:21 hostname kernel: ide: failed opcode was: unknown
I've experienced these regularly on a certain brand of older drive (*really* older, probably not your situation). Maxtor IIRC. Anyway, the problem occurred mostly on cold boot or when re-spinning the drive after it slept. It apparently had a really *slow* spin up speed and timeout would occur (not handled in the protocol I guess), IIRC.
Your post doesn't mention if this might be related. If all your log occurrences tend to indicate it happens only after long periods of inactivity, or upon cold boot, it might not be an issue. But even there, hdparm might have some help. Also, if it does seem to be only on cold boot or after long periods of "sleeping", is it possible that a bunch of things starting at the same time are taxing the power supply? Is the PS "weak"? Remember that PSs must not only have a maximum wattage sufficient to support the maximum draw of all devices at the same time (plus a margin for safety), but also that the various 5/12 volt lines are limited. Different PSs have different limits on those lines and often they are not published on the PS label. Lots of 12 or 5 volt draws at the same time (as happens in a non-sequenced start-up) might be producing an unacceptable voltage or amperage drop.
Is your PCI bus 33/66/100 MHz? Do you get messages on boot saying "assume 33MHz.... use idebus=66"? I hear it's OK to have an idebus param that is too fast, but it's a problem if your bus is faster than what the kernel thinks it is.
Re-check and make sure all cables are well-seated and that power is well connected. Speaking of cables, is it new or "old"? Maybe the cable has a small intermittent break? Try replacing the cable. Try using an 80-conductor (UDMA?) cable, if not using that already. If the problem is only on cold boot, can you get a DC volt-meter on the power connector? If so, look for the voltages to "sag". That might tell you that you are taxing your PS. Or use the labels, do the math, and calculate if you are close to the max wattage in a worst-case scenario.
I suggest using hdparm (*very* carefully) to see if the problem can be replicated on demand. Take the drive into various reduced-power modes and restart it and see if the problem is fairly consistent.
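Something along these lines (careful, hdparm can wedge a drive; do it at the console with nothing important mounted):

  hdparm -C /dev/hda                      # report the current power state
  hdparm -y /dev/hda                      # force the drive into standby (spin down)
  hdparm -C /dev/hda                      # should now report standby
  dd if=/dev/hda of=/dev/null bs=1M count=1   # force a spin-up, then watch /var/log/messages
  hdparm -i /dev/hda                      # and see which DMA/UDMA mode got selected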
sfdisk -l:
Disk /dev/hda: 39870 cylinders, 16 heads, 63 sectors/track
Warning: The partition table looks like it was made for C/H/S=*/255/63 (instead of 39870/16/63).
For this listing I'll assume that geometry.
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/hda1   *      0+     12      13-    104391   83  Linux
/dev/hda2         13    2500    2488  19984860   8e  Linux LVM
/dev/hda3          0       -       0         0    0  Empty
/dev/hda4          0       -       0         0    0  Empty
Warning: start=63 - this looks like a partition rather than the entire disk. Using fdisk on it is probably meaningless. [Use the --force option if you really want this]
What does your BIOS show for this drive? It's likely here that the drive was labeled (or copied from a drive that was labeled) in another machine. The "key" for me is the "255" vs. "16". The only fix here (not important to do it though) is to get the drive properly labeled for this machine. B/u data, make sure BIOS is set correctly, fdisk (or sfdisk) it to get partitions correct.
WARNING! Although this can be done "live", use sfdisk -l -uS to get starting sector numbers and make the partitions match. When you re-label at "255", some of the calculated translations internal to the drivers(?) might change (Do things *still* translate to CHS on modern drives? I'll need to look into that some day. I bet not.). Also, the *desired* starting and ending sectors of the partitions are likely to change. What I'm saying is that the final partitioning will likely be "non-standard" in layout and laying in wait to bite your butt.
I would backup the data, change BIOS, sfdisk it (or fdisk or cfdisk, or any other partitioner, your choice). If system is hot, sfdisk -R will re-read the params and get them into the kernel. Then reload data (if needed). If it's "hot", single user, or run level 1, mounted "ro", of course. Careful reading of sfdisk can allow you to script and test (on another drive) parts of this.
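A rough outline of the sfdisk part, untested here, so keep a copy of the old table somewhere safe first:

  sfdisk -l -uS /dev/hda                  # note the exact starting sectors
  sfdisk -d /dev/hda > hda.table          # dump the current table in sfdisk input format
  # edit a copy of hda.table to suit the new geometry, then write it back:
  sfdisk --force /dev/hda < hda.table.new
  sfdisk -R /dev/hda                      # have the kernel re-read the new table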
Easy enough so far? >:-)
sfdisk -lf
The "f" does you no good here, as you can see. It is really useful only when trying to change disk label. What would be useful (maybe) to you is "-uS".
<snip>
HTH
On Thu, July 13, 2006 9:50 am, William L. Maltby wrote:
On Wed, 2006-07-12 at 19:33 -0400, Paul wrote:
OK, I'm still trying to solve this. The server has been up rock steady, but the errors concern me. I built this on a test box months ago, and now that I think about it, I may have built it originally on a drive from a different manufacturer, although about the same size (20 GB). This may have something to do with it. What is the easiest way to get these errors taken care of? I've tried e2fsck, and also ran fsck on Vol00. Looks like I made a fine mess of things. Is there a way to fix it without reloading
AFAIK, there is no "easiest way". From my *limited* knowledge, you have a couple different problems (maybe) and they are not identified. I'll offer some guesses and suggestions, but without my own hard-headed stubbornness in play, results are even more iffy.
CentOS? Here are some outputs:
snapshot from /var/log/messages:
Jul 12 04:03:21 hostname kernel: hda: dma_intr: status=0x51 { DriveReady SeekComplete Error }
Jul 12 04:03:21 hostname kernel: hda: dma_intr: error=0x84 { DriveStatusError BadCRC }
Jul 12 04:03:21 hostname kernel: ide: failed opcode was: unknown
I've experienced these regularly on a certain brand of older drive (*really* older, probably not your situation). Maxtor IIRC. Anyway, the problem occurred mostly on cold boot or when re-spinning the drive after it slept. It apparently had a really *slow* spin up speed and timeout would occur (not handled in the protocol I guess), IIRC.
This is definitely a symptom. I wonder if LVM has anything to do with it? I'm running an "IBM-DTLA-307020" (20gig). I was previously running an "IBM-DTLA-307015" on FC1 on ext3 partitions and never had a problem.
When I find the time, I am just going to reload CentOS 4.3 on ext3 partitions, restore the data, and see how it goes.
<snip>
sfdisk -l:
Disk /dev/hda: 39870 cylinders, 16 heads, 63 sectors/track
Warning: The partition table looks like it was made for C/H/S=*/255/63 (instead of 39870/16/63).
For this listing I'll assume that geometry.
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/hda1   *      0+     12      13-    104391   83  Linux
/dev/hda2         13    2500    2488  19984860   8e  Linux LVM
/dev/hda3          0       -       0         0    0  Empty
/dev/hda4          0       -       0         0    0  Empty
Warning: start=63 - this looks like a partition rather than the entire disk. Using fdisk on it is probably meaningless. [Use the --force option if you really want this]
What does your BIOS show for this drive? It's likely here that the drive was labeled (or copied from a drive that was labeled) in another machine. The "key" for me is the "255" vs. "16". The only fix here (not important to do it though) is to get the drive properly labeled for this machine. B/u data, make sure BIOS is set correctly, fdisk (or sfdisk) it to get partitions correct.
WARNING! Although this can be done "live", use sfdisk -l -uS to get starting sector numbers and make the partitions match. When you re-label at "255", some of the calculated translations internal to the drivers(?) might change (Do things *still* translate to CHS on modern drives? I'll need to look into that some day. I bet not.). Also, the *desired* starting and ending sectors of the partitions are likely to change. What I'm saying is that the final partitioning will likely be "non-standard" in layout and laying in wait to bite your butt.
I would backup the data, change BIOS, sfdisk it (or fdisk or cfdisk, or any other partitioner, your choice). If system is hot, sfdisk -R will re-read the params and get them into the kernel. Then reload data (if needed). If it's "hot", single user, or run level 1, mounted "ro", of course. Careful reading of sfdisk can allow you to script and test (on another drive) parts of this.
I really want to try some of this, but not until I have a hot standby HD ready to throw in if it gets hosed. I'm hosting some stuff and would like to be known for reliable 24x7 service.
Easy enough so far? >:-)
Yeah, piece of cake. Thanks for sharing your knowledge! I do need to play around with LVM more and get comfortable with it. LVM seems to be somewhere between Solaris metadb's and ZFS.
sfdisk -lf
The "f" does you no good here, as you can see. It is really useful only when trying to change disk label. What would be useful (maybe) to you is "-uS".
<snip>
HTH
Bill