I did the following test:
###############################################
1.
Computer with CentOS 7.5 installed on hard drive /dev/sda.
Added two hard drives to the computer: /dev/sdb and /dev/sdc.
Created a new logical volume in RAID-1 using Red Hat System Storage Manager:
ssm create --fstype xfs -r 1 /dev/sdb /dev/sdc /mnt/data
Everything works. /dev/lvm_pool/lvol001 is mounted to /mnt/data. Files and folders can be copied/moved, read/written on /mnt/data.
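For anyone repeating this, a quick way to confirm the new LV really is raid1 and to see which physical volumes back it (just a verification sketch; lvm_pool is the pool/VG name ssm created here):

lvs -a -o lv_name,segtype,devices lvm_pool   # segtype should show raid1
pvs                                          # /dev/sdb and /dev/sdc should be listed as PVs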
###############################################
2.
I erased CentOS 7.5 from /dev/sda by writing zeros to the drive with dd, reinstalled CentOS 7 on /dev/sda, then ran yum update, rebooted, and ran yum install system-storage-manager.
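For reference, the wipe step was roughly the following (run from the installer's rescue environment, since you cannot zero the disk you are booted from; the 1M block size is just what I happened to use):

dd if=/dev/zero of=/dev/sda bs=1M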
Red Hat System Storage Manager listed all existing volumes on the computer:
[root@localhost]# ssm list
--------------------------------------------------------------------------------------
Volume                 Pool      Volume size  FS    FS size       Free  Type    Mount point
--------------------------------------------------------------------------------------
/dev/cl/root           cl          65.00 GB   xfs   64.97 GB   63.67 GB  linear  /
/dev/cl/swap           cl           8.00 GB                              linear
/dev/lvm_pool/lvol001  lvm_pool   200.00 GB   xfs  199.90 GB  184.53 GB  raid1   /mnt/data
/dev/cl/home           cl         200.00 GB   xfs  199.90 GB  199.87 GB  linear  /home
/dev/sda1                           4.00 GB   xfs    3.99 GB    3.86 GB  part    /boot
--------------------------------------------------------------------------------------
So far, so good. The new CentOS 7 install can see the logical volume.
Mounted the volume:
ssm mount -t xfs /dev/lvm_pool/lvol001 /mnt/data
Works. cd to /mnt/data and I can see the files left on the volume from the previous tests. Moving/copying/reading/writing -- works.
###################################################
3. Is it safe to assume that, when using Red Hat System Storage Manager, it is not necessary to use the LVM commands vgexport and vgimport to move two physical drives containing a RAID-1 logical volume from one computer to another?
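For reference, the traditional LVM procedure I have in mind would be roughly the following (a sketch only; lvm_pool is the VG name from the ssm output above):

On the old machine:
umount /mnt/data
vgchange -an lvm_pool
vgexport lvm_pool

Move the two drives to the new machine, then:
pvscan
vgimport lvm_pool
vgchange -ay lvm_pool
mount /dev/lvm_pool/lvol001 /mnt/data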
Thanks for your help and guidance.
Maybe not a good assumption after all --
I can no longer boot using kernel 3.10.0-514 or 3.10.0-862.
boot.log shows:
Dependency failed for /mnt/data.
Dependency failed for Local File Systems.
Dependency failed for Mark the need to relabel after reboot.
Dependency failed for Migrate local SELinux policy changes from the old store structure to the new structure.
Dependency failed for Relabel all filesystems, if necessary.
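A guess in hindsight: marking the /mnt/data entry nofail in /etc/fstab should keep a failed mount from dragging down Local File Systems and blocking the boot. Something like this (using whichever device path turns out to be correct):

/dev/lvm_pool/lvol001  /mnt/data  xfs  defaults,nofail  0 0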
When I change /etc/fstab from /dev/mapper/lvol001 to /dev/lvm_pool/lvol001, kernel 3.10.0-514 will boot.
Kernel 3.10.0-862 hangs and will not boot.
Is that first entry /dev/mapper/lvol001 right?
I'd expect /dev/mapper/lvm_pool-lvol001
ssm list shows -
/dev/lvm_pool/lvol001
When I place /dev/lvm_pool/lvol001 into /etc/fstab the computer will boot using kernel 514. Kernel 862 still hangs/panics.
I don't have an answer to why kernel 514 is not booting, but what I was trying to say is:
/dev/lvm_pool/lvol001 and /dev/mapper/lvm_pool-lvol001 are both symlinks to the same /dev/dm-X device file. You can use either name, but the one you listed was missing the volume group name.
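A quick way to check that both names resolve to the same underlying device (the dm-X number will vary from system to system):

ls -l /dev/lvm_pool/lvol001 /dev/mapper/lvm_pool-lvol001
readlink -f /dev/lvm_pool/lvol001 /dev/mapper/lvm_pool-lvol001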
Kernel 514 does boot. Kernel 862 hangs/panics. I will try both entries from your example above on kernel 514 to confirm. If both work, I'll also try them on kernel 862 to see if one of them works. Thanks for your help.
/dev/lvm_pool/lvol001 and /dev/mapper/lvm_pool-lvol001 work with kernel 514.
They don't work with kernel 862.
The googling continues . . .
Cannot get System Storage Manager (ssm) to create the RAID-1 array with a logical volume and XFS file system in one step. Cannot find my error or omission. The 862 kernel crashes on reboot every time. I went back to plain LVM on top of RAID and everything worked on the first try --- man page review and implementation complete in under 30 minutes. I'm giving myself permission to let it be. :-)
Tested. Confirmed. Works --
fdisk /dev/sdb: create primary partition 1, type fd, write to disk and exit.
fdisk /dev/sdc: create primary partition 1, type fd, write to disk and exit.

[root@localhost ~]# systemctl reboot
[root@localhost ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
[root@localhost ~]# cat /proc/mdstat
[root@localhost ~]# systemctl reboot
[root@localhost ~]# ssm create --fstype xfs -p alpha -n charlie /dev/md0 /mnt/data

Add the following to /etc/fstab:
/dev/mapper/alpha-charlie /mnt/data xfs defaults 0 0

[root@localhost ~]# systemctl reboot

Copy/move/read/write to/from /mnt/data --- yes to all.
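One optional extra step that seems worth noting, so /dev/md0 is assembled the same way on every boot (paths are the CentOS 7 defaults; adjust if yours differ):

mdadm --detail --scan >> /etc/mdadm.conf
dracut -f    # rebuild the initramfs for the running kernel so it picks up the array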
Tried --
umount -t xfs /mnt/data
vgchange -a n lvm_pool
vgexport lvm_pool
vgimport lvm_pool
Rebooted and kernel 862 still panics/hangs. Can boot into kernel 514.