I'm trying to extend a logical volume and I'm proceeding as follows:
1- Run the `fdisk -l` command; this is the output:
    Disk /dev/sda: 85.9 GB, 85899345920 bytes
    255 heads, 63 sectors/track, 10443 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00054fc6

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1          64      512000   83  Linux
    Partition 1 does not end on cylinder boundary.
    /dev/sda2              64       10444    83371008   8e  Linux LVM

    Disk /dev/mapper/vg_devserver-lv_swap: 4194 MB, 4194304000 bytes
    255 heads, 63 sectors/track, 509 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    Disk /dev/mapper/vg_devserver-lv_root: 27.5 GB, 27523022848 bytes
    255 heads, 63 sectors/track, 3346 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
2- Run `fdisk /dev/sda` and print the partition table using `p`:
    Disk /dev/sda: 85.9 GB, 85899345920 bytes
    255 heads, 63 sectors/track, 10443 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00054fc6

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1          64      512000   83  Linux
    Partition 1 does not end on cylinder boundary.
    /dev/sda2              64       10444    83371008   8e  Linux LVM
3- Try to create the partition by running:
    Command (m for help): n
    Command action
       e   extended
       p   primary partition (1-4)
    p
    Partition number (1-4): 3
    No free sectors available
I also checked the free space using vgdisplay, looking at the "Free PE / Size" line near the end, and it seems I have free space available (Free PE / Size 7670 / 29.96 GiB), so I tried to extend the LV with the command `lvextend -L+29G /dev/vg_devserver/lv_root`, but I got some errors and don't know where to go from here. The first error I see on the console is this:

    /dev/root: read failed after 0 of 4096 at 27522957312: Input/output error
    /dev/root: read failed after 0 of 4096 at 27523014656: Input/output error
    Couldn't find device with uuid vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS.
    Cannot change VG vg_devserver while PVs are missing.
    Consider vgreduce --removemissing.
Then, following the suggestion in that output, I ran `vgreduce --removemissing vg_devserver`, but got this error:

    WARNING: Partial LV lv_root needs to be repaired or removed.
    There are still partial LVs in VG vg_devserver.
    To remove them unconditionally use: vgreduce --removemissing --force.
    Proceeding to remove empty missing PVs.

So I changed the command to the one suggested, but once again got another message:

    Removing partial LV lv_root.
    Logical volume vg_devserver/lv_root contains a filesystem in use.

At this point I don't know what else to do. Can anyone give me some ideas or help?
Don't kill me if this is something basic; I'm not a Linux admin or an advanced Linux user, just a developer trying to set up a development environment. How can I get this done? What am I doing wrong?
I'm following [this][1] guide because my filesystem is Ext4. [This][2] one is helpful too, but it applies to Ext3 only.
[1]: http://www.uptimemadeeasy.com/vmware/grow-an-ext4-filesystem-on-a-vmware-esx...
[2]: http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&...
Reboot your system, then run fdisk /dev/sda
Then run p, n, p, 3, 8e ... and so on
On Mon, Oct 27, 2014 at 3:54 PM, Zhang, Jonathan zhangj@evergreen.edu wrote:
Reboot your system, then run fdisk /dev/sda
Then run p, n, p, 3
I can't get past this point; it says:

    Partition number (1-4): 3
    No free sectors available

Why?
On Mon, Oct 27, 2014 at 3:56 PM, reynierpm@gmail.com reynierpm@gmail.com wrote:
I'm trying to extend a logical volume and I'm proceeding as follows:
1- Run the `fdisk -l` command; this is the output:
This is for actual partitions, not LVM, which seems to be what you want per the rest of your message.
2- Run `fdisk /dev/sda` and print the partition table using `p`:

    Partition number (1-4): 3
    No free sectors available
It's telling you the truth. Sounds like you want another Logical Volume (LV), not a partition.
I also checked the free space using vgdisplay, looking at the "Free PE / Size" line near the end, and it seems I have free space available (Free PE / Size 7670 / 29.96 GiB), so I tried to extend the LV with the command `lvextend -L+29G /dev/vg_devserver/lv_root`, but I got some errors and
Unless you know what you're doing, you _really_ shouldn't do this anywhere but in a VM where you won't lose your data. The first rule of LVM resizing is to get the order right: when shrinking, shrink the file system before the LV "container"; when growing, grow the LV first and then the file system.
Remember there are a few "layers" here you have to keep in mind:

    disk --> partition --> LVM Phys Volume --> LVM Vol Group --> LVM Logical Vol --> File System (ext4, xfs, etc)
If there are free extents in the VG, then you can probably create an LV. It depends on the extent size (defaults can vary between releases and/or an admin's configuration).
don't know where to go from here. The first error I see on the console is this:

    /dev/root: read failed after 0 of 4096 at 27522957312: Input/output error
    /dev/root: read failed after 0 of 4096 at 27523014656: Input/output error
    Couldn't find device with uuid vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS.
    Cannot change VG vg_devserver while PVs are missing.
    Consider vgreduce --removemissing.
Then, following the suggestion in that output, I ran `vgreduce --removemissing vg_devserver`, but got this error:

    WARNING: Partial LV lv_root needs to be repaired or removed.
    There are still partial LVs in VG vg_devserver.
    To remove them unconditionally use: vgreduce --removemissing --force.
    Proceeding to remove empty missing PVs.

So I changed the command to the one suggested, but once again got another message:

    Removing partial LV lv_root.
    Logical volume vg_devserver/lv_root contains a filesystem in use.

At this point I don't know what else to do. Can anyone give me some ideas or help?
Sounds like you destroyed one or more of your LVs through all this.
Please read the following documentation before forging further ahead. And you might spin up a VM or live CD to experiment with LVM operations before going any further as well.
- speaks about extents [0]
- read the entire Chapter 2 on LVM [1] as it applies to your scenario (ex: snapshots probably don't)
- dated/older, but it may prove helpful [2]
[0] https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/htm...
[1] https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/htm...
[2] http://www.tldp.org/HOWTO/html_single/LVM-HOWTO/
Hi SilverTip, nice answer and very helpful. I'll try to get some more help here since, as I said in the main post, I'm not a Linux expert or an administrator, just a developer trying to set up a development environment, so ...
It's telling you the truth.
Sounds like you want another Logical Volume (LV), not a partition.
You're right, what I need is a new LV, but how do I do that?
Sounds like you destroyed one or more of your LVs through all this.
Probably, and I'm pretty sure I did it :-(
Please read the following documentation before forging further ahead. And you might spin up a VM or live CD to experiment with LVM operations before going any further as well.
- speaks about extents [0]
- read the entire Chapter 2 on LVM [1] as it applies to your scenario (ex: snapshots probably don't)
- dated/older, but it may prove helpful [2]
[0] https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/htm...
[1] https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/htm...
[2] http://www.tldp.org/HOWTO/html_single/LVM-HOWTO/
Fine, I read it, but doubts still persist in my mind. First, I'm running the OS in a VMware Workstation VM and I wouldn't like to lose everything I have there, since then I'd need to reconfigure it all from scratch; but if there is no other option to save my mess, then we should go through it. Now I'm almost sure what I need here is a "linear volumes" configuration. Why? Because my VM disk had 30GB at first and I have now resized it to 80GB, and that's the space I want to see in my Linux but can't get. In order to get it working again, what steps should I follow? That's my concern and what I haven't got clear at all.
Thanks
On 10/27/2014 07:42 PM, reynierpm@gmail.com wrote:
Fine, I read it, but doubts still persist in my mind. First, I'm running the OS in a VMware Workstation VM and I wouldn't like to lose everything I have there, since then I'd need to reconfigure it all from scratch; but if there is no other option to save my mess, then we should go through it.
If I were in your position, I think I would:
* Create a new, 80GB disk using VMware
* Partition that disk into your /boot and LVM partitions
* pvcreate
* vgcreate
* lvcreate the disk structure you want on your new disk, making sure all LVs are at least a little bigger than the old ones
* use dd to copy each old volume to the corresponding new one
* use resize2fs to expand your file system to the full size of each of the LVs you created
* detach the old virtual disk from your VM
* reboot, and see if you succeeded
If I forgot something here, hopefully someone else will chime in. The idea is to dump your corrupted LVM structure without losing its content.
Ted Miller Elkhart, IN, USA
Uppsss, I think this gets more and more advanced all the time, but here I go ... more doubts.
On Mon, Oct 27, 2014 at 9:20 PM, Ted Miller tedlists@sbcglobal.net wrote:
If I were in your position, I think I would:
- Create a new, 80GB disk using VMWare
No problem at all
- Partition that disk into your /boot and LVM partitions
How do I do that out of the box? I mean, I should mount that disk in the VM and partition it from there, right?
- pvcreate
- vgcreate
Ok, create physical volume and volume group
- lvcreate the disk structure you want on your new disk, making sure all LVs are at least a little bigger than the old ones
Here I get lost. What structure should I create here? I only have one LV, lv_root. You mean create the same one, and of course bigger than the old one, right?
- use dd to copy each old volume to the corresponding new one
And here I declare myself completely lost; this is the first time I've seen this command and I don't know how to use it.
- use resize2fs to expand your file system to the full size of each of the LVs you created
- detach old virtual disk from your VM
- reboot, and see if you succeeded
On 10/27/2014 02:56 PM, reynierpm@gmail.com wrote:
I also checked the free space using vgdisplay, looking at the "Free PE / Size" line near the end, and it seems I have free space available (Free PE / Size 7670 / 29.96 GiB), so I tried to extend the LV with the command `lvextend -L+29G /dev/vg_devserver/lv_root`, but I got some errors and don't know where to go from here. The first error I see on the console is this:

    /dev/root: read failed after 0 of 4096 at 27522957312: Input/output error
    /dev/root: read failed after 0 of 4096 at 27523014656: Input/output error
    Couldn't find device with uuid vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS.
    Cannot change VG vg_devserver while PVs are missing.
    Consider vgreduce --removemissing.
Those I/O errors are alarming. They suggest that you have a disk that is failing. Does anything about disk sda appear in /var/log/messages when you do that? You should indeed have 29GB available for growing lv_root, but perhaps the disk error is what is preventing the tool from finding the LV's UUID.
On Mon, Oct 27, 2014 at 11:21 PM, Robert Nichols <rnicholsNOSPAM@comcast.net
wrote:
If I search /var/log/messages for "uuid" with grep, this is what I get:
    # cat /var/log/messages | grep uuid
    Oct 27 17:56:08 localhost kernel: dracut: Couldn't find device with uuid vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS.
    Oct 27 17:56:08 localhost kernel: dracut: Couldn't find device with uuid vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS.
    Oct 27 17:56:08 localhost kernel: dracut: Couldn't find device with uuid vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS.
    Oct 27 17:56:08 localhost kernel: dracut: Couldn't find device with uuid vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS.
And if I grep for sda instead, I get this:

    # cat /var/log/messages | grep sda
    Oct 27 17:56:08 localhost kernel: sd 2:0:1:0: [sda] 167772160 512-byte logical blocks: (85.8 GB/80.0 GiB)
    Oct 27 17:56:08 localhost kernel: sd 2:0:1:0: [sda] Write Protect is off
    Oct 27 17:56:08 localhost kernel: sd 2:0:1:0: [sda] Cache data unavailable
    Oct 27 17:56:08 localhost kernel: sd 2:0:1:0: [sda] Assuming drive cache: write through
    Oct 27 17:56:08 localhost kernel: sd 2:0:1:0: [sda] Cache data unavailable
    Oct 27 17:56:08 localhost kernel: sd 2:0:1:0: [sda] Assuming drive cache: write through
    Oct 27 17:56:08 localhost kernel: sda: sda1 sda2
    Oct 27 17:56:08 localhost kernel: sd 2:0:1:0: [sda] Cache data unavailable
    Oct 27 17:56:08 localhost kernel: sd 2:0:1:0: [sda] Assuming drive cache: write through
    Oct 27 17:56:08 localhost kernel: sd 2:0:1:0: [sda] Attached SCSI disk
    Oct 27 17:56:08 localhost kernel: dracut: Scanning devices sda2 for LVM logical volumes vg_devserver/lv_root vg_devserver/lv_swap
    Oct 27 17:56:08 localhost kernel: dracut: Scanning devices sda2 for LVM logical volumes vg_devserver/lv_root vg_devserver/lv_swap
    Oct 27 17:56:08 localhost kernel: EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts:
what do you get from the commands:
    pvs -v
    vgs -v
    lvs
and, if pvs shows any /dev/mdXX devices, the output of mdadm --detail /dev/mdXX? Example output:
    # pvs -v
        Scanning for physical volume names
      PV         VG        Fmt  Attr PSize   PFree  DevSize PV UUID
      /dev/md127 vgdata    lvm2 a--    1.82t 19.68g   1.82t pPuDNs-AVQ8-92tw-TXcT-WWyD-nPhQ-dZqpx0
      /dev/sda2  vg_myhost lvm2 a--  476.45g  5.36g 476.45g EWe4ws-1Z6S-v9d6-gvQ2-e7QE-K58b-Sd1W5z

    # mdadm --detail /dev/md127
    /dev/md127:
            Version : 1.2
      Creation Time : Sat Jun 14 13:18:25 2014
         Raid Level : raid1
         Array Size : 1953383232 (1862.89 GiB 2000.26 GB)
      Used Dev Size : 1953383232 (1862.89 GiB 2000.26 GB)
       Raid Devices : 2
      Total Devices : 2
        Persistence : Superblock is persistent

        Update Time : Mon Oct 27 22:35:55 2014
              State : clean
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0

               Name : myhost:0 (local to host myhost)
               UUID : d9c90fda:9a0e5d4f:d27cf1f6:19d0b43a
             Events : 441

        Number   Major   Minor   RaidDevice State
           0       8       17        0      active sync   /dev/sdb1
           1       8       33        1      active sync   /dev/sdc1

    # vgs -v
        Finding all volume groups
        Finding volume group "vgdata"
        Finding volume group "vg_myhost"
      VG        Attr   Ext   #PV #LV #SN VSize   VFree  VG UUID                                VProfile
      vg_myhost wz--n- 4.00m   1   6   0 476.45g  5.36g cX6DQy-iDY2-mL0Q-zM1m-pgf2-kLdE-8zWtIG
      vgdata    wz--n- 4.00m   1   1   0   1.82t 19.68g USqdKh-VIv7-TrCE-2RXn-52oG-7Qed-01URWg

    # lvs
      LV       VG        Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
      lv_home  vg_myhost -wi-ao----  29.30g
      lv_root  vg_myhost -wi-ao----  50.00g
      lv_swap  vg_myhost -wi-ao----  11.80g
      lvimages vg_myhost -wi-ao---- 150.00g
      lvpgsql  vg_myhost -wi-ao----  30.00g
      lvtest   vg_myhost -wi-a----- 200.00g
      lvhome2  vgdata    -wi-ao----   1.80t
On Tue, Oct 28, 2014 at 1:08 AM, John R Pierce pierce@hogranch.com wrote:
Hi John, here are the results:
    # pvs -v
        Scanning for physical volume names
      /dev/root: read failed after 0 of 4096 at 27522957312: Input/output error
      /dev/root: read failed after 0 of 4096 at 27523014656: Input/output error
        Wiping cache of LVM-capable devices
      /dev/root: read failed after 0 of 4096 at 27522957312: Input/output error
      /dev/root: read failed after 0 of 4096 at 27523014656: Input/output error
      Couldn't find device with uuid vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS.
      There are 1 physical volumes missing.
      There are 1 physical volumes missing.
      There are 1 physical volumes missing.
      PV             VG           Fmt  Attr PSize  PFree  DevSize PV UUID
      /dev/sda2      vg_devserver lvm2 a--  29.51g     0   79.51g ij17Vf-kY56-jfg0-j769-Z1E5-Nk7C-RqYG18
      unknown device vg_devserver lvm2 a-m  29.99g 29.96g      0  vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS

    # vgs -v
        Finding all volume groups
      /dev/root: read failed after 0 of 4096 at 27522957312: Input/output error
      /dev/root: read failed after 0 of 4096 at 27523014656: Input/output error
        Finding volume group "vg_devserver"
        Wiping cache of LVM-capable devices
      /dev/root: read failed after 0 of 4096 at 27522957312: Input/output error
      /dev/root: read failed after 0 of 4096 at 27523014656: Input/output error
      Couldn't find device with uuid vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS.
      There are 1 physical volumes missing.
      There are 1 physical volumes missing.
      VG           Attr   Ext   #PV #LV #SN VSize  VFree  VG UUID                                VProfile
      vg_devserver wz-pn- 4.00m   2   2   0 59.50g 29.96g VidRBE-37ri-HYfd-Sd1x-6lZX-ph9I-F3wvCG

    # lvs
      /dev/root: read failed after 0 of 4096 at 27522957312: Input/output error
      /dev/root: read failed after 0 of 4096 at 27523014656: Input/output error
      Couldn't find device with uuid vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS.
      LV      VG           Attr       LSize  Pool Origin Data%  Move Log Cpy%Sync Convert
      lv_root vg_devserver -wi-ao--p- 25.63g
      lv_swap vg_devserver -wi-ao---- 3.91g
And there is no mdXX device, so I omitted that command.
On 10/28/2014 07:57 AM, reynierpm@gmail.com wrote:
Something very strange is going on. /dev/root is presumably /dev/mapper/vg_devserver-lv_root. It is odd that LVM should be looking inside that LV for another PV, and disturbing that there should be an I/O error at those offsets, which are within the 27523022848-byte size of /dev/mapper/vg_devserver-lv_root.
Please post the output from ls -l /dev/root /dev/mapper and the contents of the file /etc/lvm/backup/vg_devserver.
On Tue, Oct 28, 2014 at 10:31 AM, Robert Nichols <rnicholsNOSPAM@comcast.net> wrote:
Here,
    # ls -l /dev/root /dev/mapper
    lrwxrwxrwx 1 root root 4 Oct 27 17:55 /dev/root -> dm-2

    /dev/mapper:
    total 0
    crw-rw---- 1 root root 10, 58 Oct 27 17:55 control
    lrwxrwxrwx 1 root root      7 Oct 27 17:55 vg_devserver-lv_root -> ../dm-2
    lrwxrwxrwx 1 root root      7 Oct 27 17:55 vg_devserver-lv_root-missing_1_0 -> ../dm-1
    lrwxrwxrwx 1 root root      7 Oct 27 17:55 vg_devserver-lv_swap -> ../dm-0
    # cat /etc/lvm/backup/vg_devserver
    # Generated by LVM2 version 2.02.100(2)-RHEL6 (2013-10-23): Mon Oct 20 15:27:17 2014

    contents = "Text Format Volume Group"
    version = 1

    description = "Created *after* executing 'vgreduce --removemissing vg_devserver'"

    creation_host = "localhost"    # Linux localhost 2.6.32-431.29.2.el6.x86_64 #1 SMP Tue Sep 9 21:36:05 UTC 2014 x86_64
    creation_time = 1413835037     # Mon Oct 20 15:27:17 2014

    vg_devserver {
        id = "VidRBE-37ri-HYfd-Sd1x-6lZX-ph9I-F3wvCG"
        seqno = 6
        format = "lvm2"            # informational
        status = ["RESIZEABLE", "READ", "WRITE"]
        flags = []
        extent_size = 8192         # 4 Megabytes
        max_lv = 0
        max_pv = 0
        metadata_copies = 0

        physical_volumes {

            pv0 {
                id = "ij17Vf-kY56-jfg0-j769-Z1E5-Nk7C-RqYG18"
                device = "/dev/sda2"    # Hint only

                status = ["ALLOCATABLE"]
                flags = []
                dev_size = 61888512    # 29.5107 Gigabytes
                pe_start = 2048
                pe_count = 7554        # 29.5078 Gigabytes
            }

            pv1 {
                id = "vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS"
                device = "unknown device"    # Hint only

                status = ["ALLOCATABLE"]
                flags = ["MISSING"]
                dev_size = 62906520    # 29.9962 Gigabytes
                pe_start = 2048
                pe_count = 7678        # 29.9922 Gigabytes
            }
        }

        logical_volumes {

            lv_root {
                id = "Ee2apF-rS1m-7Xny-4XYk-u2ZM-lMp7-IBQPnL"
                status = ["READ", "WRITE", "VISIBLE"]
                flags = []
                creation_host = "devserver"
                creation_time = 1361816267    # 2013-02-25 13:47:47 -0430
                segment_count = 2

                segment1 {
                    start_extent = 0
                    extent_count = 6554    # 25.6016 Gigabytes

                    type = "striped"
                    stripe_count = 1       # linear

                    stripes = [
                        "pv0", 0
                    ]
                }
                segment2 {
                    start_extent = 6554
                    extent_count = 8       # 32 Megabytes

                    type = "striped"
                    stripe_count = 1       # linear

                    stripes = [
                        "pv1", 0
                    ]
                }
            }

            lv_swap {
                id = "hW0BAn-BzE4-aHXm-b18h-8rg0-JyBS-ycbOVd"
                status = ["READ", "WRITE", "VISIBLE"]
                flags = []
                creation_host = "devserver"
                creation_time = 1361816271    # 2013-02-25 13:47:51 -0430
                segment_count = 1

                segment1 {
                    start_extent = 0
                    extent_count = 1000    # 3.90625 Gigabytes

                    type = "striped"
                    stripe_count = 1       # linear

                    stripes = [
                        "pv0", 6554
                    ]
                }
            }
        }
    }
    PV             VG           Fmt  Attr PSize  PFree  DevSize PV UUID
    /dev/sda2      vg_devserver lvm2 a--  29.51g     0   79.51g ij17Vf-kY56-jfg0-j769-Z1E5-Nk7C-RqYG18
    unknown device vg_devserver lvm2 a-m  29.99g 29.96g      0  vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS
Looking at the fdisk output pasted in your first mail, I don't see more than 2 partitions on your first disk, /dev/sda. The first partition, /dev/sda1, is allocated for /boot, and the second partition, /dev/sda2, holds the PV/VG/LV where your root resides. However, the above output indicates you had 2 PVs, which contradicts the information you have shared so far.

I assume that you added an additional disk, then extended the VG, but later removed it? Or tried another vgcreate (AFAIK lvm shouldn't allow this, but a wild guess) on the actual PV, /dev/sda2?
Cheers,
On Tue, Oct 28, 2014 at 10:48 AM, Dominic Geevarghese share2dom@gmail.com wrote:
You're right; at some moment I added a new disk and tried to extend the VG, and yes, I later removed it while trying to do the same from a GParted Live CD, and I messed it all up. This is what I'm trying to fix, if it's possible. So what procedure should I follow given this extra info? (Apologies, perhaps I should have said this from the beginning.)
OK. I think I should know:
- whether you rebooted your machine/VM right after extending the disk? If not, please don't reboot now :)
- other than attaching an additional disk, whether you tried any lvm/dd commands or any other procedure that's missing from your previous update?
On 10/28/2014 10:30 AM, reynierpm@gmail.com wrote:
Yes, that would have helped a lot. I can see in the file /etc/lvm/backup/vg_devserver that the lv_root LV has been extended by a mere 32 megabytes (yes, mega) on the missing physical volume. Do you still have the additional disk? The best thing to do would be to put it back, make a partition on it, and then re-create the missing PV. I'll assume that the new partition is /dev/sdb1. Adjust all references in the following if it is something else.
Make a copy of /etc/lvm/backup/vg_devserver as vg_devserver.bak, and then run
    pvcreate -v --restorefile vg_devserver.bak \
             --uuid "vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS" \
             /dev/sdb1
(I've broken that up into multiple lines to avoid word wrap problems.)
You should then be able to run pvs and lvs successfully. To properly remove this PV from the LVM structure, run
    lvreduce --extents -8 /dev/vg_devserver/lv_root
    vgreduce vg_devserver /dev/sdb1
    pvremove /dev/sdb1
That lvreduce will get rid of the 32MB that were allocated on that added PV. You will probably get a warning about possible data loss and will have to confirm. The vgreduce and pvremove should then proceed without any issues.
On Tue, Oct 28, 2014 at 11:45 AM, Robert Nichols <rnicholsNOSPAM@comcast.net> wrote:
Yes, that would have helped a lot. I can see in the file /etc/lvm/backup/vg_devserver that the lv_root LV has been extended by a mere 32 megabytes (yes, mega) on the missing physical volume. Do you still have the additional disk? The best thing to do would be to put it back, make a partition on it, and then re-create the missing PV. I'll assume that the new partition is /dev/sdb1. Adjust all references in the following if it is something else.
I don't think I have that disk anymore. Take a look at this image: http://imgur.com/jMuhM03 . This is how my disk looks.
Make a copy of /etc/lvm/backup/vg_devserver as vg_devserver.bak, and then run

    pvcreate -v --restorefile vg_devserver.bak \
             --uuid "vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS" \
             /dev/sdb1
This is the result of the command above:
    # pvcreate -v --restorefile /etc/lvm/backup/vg_devserver.bak --uuid "vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS" /dev/sdb1
      /dev/root: read failed after 0 of 4096 at 27522957312: Input/output error
      /dev/root: read failed after 0 of 4096 at 27523014656: Input/output error
        Wiping cache of LVM-capable devices
      /dev/root: read failed after 0 of 4096 at 27522957312: Input/output error
      /dev/root: read failed after 0 of 4096 at 27523014656: Input/output error
      Couldn't find device with uuid vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS.
        Wiping cache of LVM-capable devices
      /dev/root: read failed after 0 of 4096 at 27522957312: Input/output error
      /dev/root: read failed after 0 of 4096 at 27523014656: Input/output error
      Device /dev/sdb1 not found (or ignored by filtering).
On 10/28/2014 11:42 AM, reynierpm@gmail.com wrote:
On Tue, Oct 28, 2014 at 11:45 AM, Robert Nichols <rnicholsNOSPAM@comcast.net
wrote:
Yes, that would have helped a lot. I can see in file /etc/lvm/backup/vg_devserver that the lv_root LV has been extended by a mere 32 Megabytes (yes, Mega) on the missing physical volume. Do you still have the additional disk? The best thing to do would be to put it back, make a partition on it, and then re-create the missing PV. I'll assume that the new partition is /dev/sdb1. Adjust all references in the following if it si something else.
I don't think I have that disk alive, take a look to this image http://imgur.com/jMuhM03 this is how my disk looks like
Make a copy of /etc/lvm/backup/vg_devserver on vg_devserver.bak, and then run
pvcreate -v --restorefile vg_devserver.bak \ --uuid "vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS" \ /dev/sdb1
This is the result of the command above:
# pvcreate -v --restorefile /etc/lvm/backup/vg_devserver.bak --uuid "vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS" /dev/sdb1 /dev/root: read failed after 0 of 4096 at 27522957312: Input/output error /dev/root: read failed after 0 of 4096 at 27523014656: Input/output error Wiping cache of LVM-capable devices /dev/root: read failed after 0 of 4096 at 27522957312: Input/output error /dev/root: read failed after 0 of 4096 at 27523014656: Input/output error Couldn't find device with uuid vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS. Wiping cache of LVM-capable devices /dev/root: read failed after 0 of 4096 at 27522957312: Input/output error /dev/root: read failed after 0 of 4096 at 27523014656: Input/output error Device /dev/sdb1 not found (or ignored by filtering).
You will have to find the archived configuration from before you tried to extend that LV, and restore that configuration. Let's see what files you have. Post the output from
grep -H 'description =' /etc/lvm/archive/vg_devserver*
On Tue, Oct 28, 2014 at 1:05 PM, Robert Nichols rnicholsNOSPAM@comcast.net wrote:
Here,
    # grep -H 'description =' /etc/lvm/archive/vg_devserver*
    /etc/lvm/archive/vg_devserver_00000-1387802225.vg:description = "Created *before* executing '/sbin/vgs --noheadings -o name --config 'log{command_names=0 prefix=" "}''"
    /etc/lvm/archive/vg_devserver_00001-1037597683.vg:description = "Created *before* executing 'vgextend vg_devserver /dev/sda3'"
    /etc/lvm/archive/vg_devserver_00002-1876503.vg:description = "Created *before* executing 'lvextend -L+29.99 /dev/vg_devserver/lv_root'"
    /etc/lvm/archive/vg_devserver_00003-1263624397.vg:description = "Created *before* executing 'vgreduce --removemissing vg_devserver'"
    /etc/lvm/archive/vg_devserver_00004-313693030.vg:description = "Created *before* executing 'vgreduce --removemissing vg_devserver --force'"
On 10/28/2014 12:42 PM, reynierpm@gmail.com wrote:
That's all there are??? By default that archive should go back a minimum of 30 days and a minimum of 10 files, whichever is larger. All I see here is a history of your recovery efforts. Did you make that change from a live CD, or something?
If necessary I could edit the backup file that you have now and clean out all references to the missing PV, but I'd rather not do that if there is a safer way.
On Tue, Oct 28, 2014 at 1:43 PM, Robert Nichols rnicholsNOSPAM@comcast.net wrote:
That's all there are??? By default that archive should go back a minimum of 30 days and a minimum of 10 files, whichever is larger. All I see here is a history of your recovery efforts. Did you make that change from a live CD, or something?
Yes, that is all, and yes, I made the changes from a GParted Live CD :-\
If necessary I could edit the backup file that you have now and clean out all references to the missing PV, but I'd rather not do that if there is a safer way.
Ok, I attached the file here, but it would also be nice to extend the LVM so it can grow until no empty space is left.
On 10/28/2014 01:25 PM, reynierpm@gmail.com wrote:
The attachment didn't come through, but I can use the file that you posted before. The PV you have is completely used, but restoring this configuration should get you back to a state where you can try again to extend the VG to a new PV. You can do _all_ of that online -- no need to resort to a live CD. And this time, don't remove or overwrite that new PV until you have successfully purged it from the LVM structure.
Updated file attached as LVMconfig.new . All I have done is comment out two blocks of lines referring to "pv1" and change the segment count to "1" for lv_root. You will need to run
vgcfgrestore -v --file LVMconfig.new vg_devserver
You might want to try it first including the "--test" option to see what it is going to do.
On Tue, Oct 28, 2014 at 2:17 PM, Robert Nichols rnicholsNOSPAM@comcast.net wrote:
Well, it seems not to be working:
    # vgcfgrestore -v --file LVMconfig.new vg_devserver --test
    File descriptor 7 (pipe:[18995]) leaked on vgcfgrestore invocation. Parent PID 3790: bash
      TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
      Parse error at byte 81 (line 2): unexpected token
      Couldn't read volume group metadata.
      Restore failed.
      Test mode: Wiping internal cache
        Wiping internal VG cache
Never mind, I fixed it. This is the result of running the command in test mode:
    # vgcfgrestore -v --file LVMconfig.new vg_devserver --test
    File descriptor 7 (pipe:[18995]) leaked on vgcfgrestore invocation. Parent PID 3790: bash
      TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
      /dev/root: read failed after 0 of 4096 at 27522957312: Input/output error
      /dev/root: read failed after 0 of 4096 at 27523014656: Input/output error
      Restored volume group vg_devserver
      Test mode: Wiping internal cache
        Wiping internal VG cache
Should I go ahead?
On 10/28/2014 01:58 PM, reynierpm@gmail.com wrote:
That looks good. I understand the I/O error now. It is from an attempt to read from the missing PV. The one question I have is, "Did you ever resize the root filesystem?" I suspect not, since you would have seen errors from the missing blocks. If you _did_, then it should be "interesting" trying to get the filesystem back into its available space. In any case, fixing the LVM structure has to come first, so I say go ahead without the "--test" option.
On Tue, Oct 28, 2014 at 2:47 PM, Robert Nichols rnicholsNOSPAM@comcast.net wrote:
Ok, done:
    # vgcfgrestore -v --file LVMconfig.new vg_devserver
    File descriptor 7 (pipe:[18995]) leaked on vgcfgrestore invocation. Parent PID 3790: bash
      /dev/root: read failed after 0 of 4096 at 27522957312: Input/output error
      /dev/root: read failed after 0 of 4096 at 27523014656: Input/output error
      Restored volume group vg_devserver
What's next?
On 10/28/2014 02:47 PM, reynierpm@gmail.com wrote:
See if things look sane. Do vgs and lvs run without error now?
Well, I rebooted the VM and now it does not boot up; I get a kernel panic. Seems like this is bad. See attached image.
On 10/28/2014 03:06 PM, reynierpm@gmail.com wrote:
Again, your attachments aren't coming through. Did you try running vgs and lvs before rebooting? What was the result?
On Tue, Oct 28, 2014 at 4:08 PM, Robert Nichols rnicholsNOSPAM@comcast.net wrote:
That's weird maybe the list is configured to not allow attachments and no I didn't try any just reboot after the latest command and now get kernel panic, VM do not start
On 10/28/2014 03:45 PM, reynierpm@gmail.com wrote:
The list does allow attachments. The file I sent you was an attachment.
I wish you hadn't jumped right into rebooting. My only consolation is that the result you got is probably what would have happened had you tried to reboot _without_ changing anything.
Are you seeing any messages prior to the kernel panic? If not, reboot and press TAB to see the GRUB menu. Press "a" to get to the kernel parameters and then remove the parameters "rhgb" and "quiet". Then you will get to see what leads to the panic.
On Tue, Oct 28, 2014 at 4:40 PM, Robert Nichols rnicholsNOSPAM@comcast.net wrote:
The list does allow attachments. The file I sent you was an attachment.
I do not know why my attachments aren't coming through, then. Anyway, I uploaded the image here: http://imgur.com/B7YWY10
I wish you hadn't jumped right into rebooting. My only consolation is that the result you got is probably what would have happened had you tried to reboot _without_ changing anything.
Yes; before this latest change I had rebooted several times and the system always made it through.
Are you seeing any messages prior to the kernel panic? If not, reboot and press TAB to see the GRUB menu. Press "a" to get to the kernel parameters and then remove the parameters "rhgb" and "quiet". Then you will get to see what leads to the panic.
I left you the screenshot at the image link. Anyway, it's a VM; the only problem is that I lost all the data and need to reinstall everything from scratch. But CentOS 6.6 recently came out, so I'll give that one a try, this time creating the LVM properly with plenty of space, and I'll keep this email thread safe for the future.
Thanks to everyone here on the list, great support.