My ThinkStation runs CentOS 7, which I installed on a BIOS RAID 0 setup with two identical 256 GB SSDs after removing Windows. It runs fine, but I just discovered something in gparted that does not seem right:
- Launching gparted, it complains "invalid argument during seek for read on /dev/md126", and when I click Ignore I get another error: "The backup GPT table is corrupt, but the primary appears OK, so that will be used." I click OK, whereupon I again see the second error message. I then see "Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 6832 blocks) or continue with the current setting?" I click Fix, but nothing seems to happen. I am not sure what /dev/md126 is, but it is the exact same size as sda and sdb, which I believe are the two RAID disks. I also have two other hard disks which seem to be fine, one using XFS, the other ZFS.
Does this look familiar to anyone? Given the error messages it seems this is something I ought to fix sooner rather than later. Any idea what I should do?
Thanks.
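For reference, one way to look at what gparted is complaining about from the command line, assuming the parted and gdisk packages are installed (all of these only read and report, they change nothing):

  # partition table of the assembled RAID device, printed in sectors
  parted /dev/md126 unit s print
  # gdisk reports the state of both the primary and the backup GPT header
  gdisk -l /dev/md126
  # compare with what a bare member disk looks like on its own
  gdisk -l /dev/sdb

If the GPT actually describes the whole of /dev/md126, then warnings produced while scanning a single member disk such as /dev/sdb are not necessarily a sign of damage, since one member does not end where the whole array ends.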
I'm a bit confused what you have here. Did you mix pseudo hardware RAID (BIOS RAID 0) with software RAID here? Because /dev/md126 clearly is part of a software RAID.
Regards, Simon
Once upon a time, Simon Matter simon.matter@invoca.ch said:
I'm a bit confused what you have here. Did you mix pseudo hardware RAID (BIOS RAID 0) with software RAID here? Because /dev/md126 clearly is part of a software RAID.
IIRC the old dmraid support for motherboard RAID has been phased out, but mdraid has grown support for Intel (and maybe some other vendors'?) common motherboard RAID. So /dev/md<foo> hasn't inherently meant "Linux software RAID" for a while now.
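A quick way to tell whether an md device is one of these firmware (Intel IMSM) arrays rather than native Linux software RAID, assuming mdadm is installed (read-only commands):

  # an IMSM volume shows a "Container :" line; native arrays show a
  # metadata version such as 0.90 or 1.2 instead
  mdadm --detail /dev/md126
  # dumps the on-disk metadata of a member; IMSM members carry an Intel signature
  mdadm --examine /dev/sda
  # reports what the platform's option ROM / firmware RAID supports, if anything
  mdadm --detail-platform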
On 10/23/2020 10:07 AM, Chris Adams wrote:
IIRC the old dmraid support for motherboard RAID has been phased out, but mdraid has grown support for Intel (and maybe some other vendors'?) common motherboard RAID. So /dev/md<foo> hasn't inherently meant "Linux software RAID" for a while now.
Today LibreOffice locked my computer up solid and I had to cycle power to start it again. Unfortunately the system then failed to boot and I ended up in the dracut shell, with which I am wholly unfamiliar... Although this could be considered a learning experience, it is one I could have done without, since I was also running a long job that had already run for 10 days and needed another four days to complete...
Anyway, after googling and fiddling around I think I identified the culprit as the kernel command line in the grub menu. I have a ThinkStation where two identical SSDs are configured as RAID in the BIOS (see above), and the grub command line has two rd.md.uuid= statements with the UUIDs of the two disks; this is further complicated by the fact that the disk is encrypted.
After fiddling I discovered that if I removed the rd.md.uuid= statements from the grub command line and replaced them with a single rd.md=0, I could boot. However, it seems that only one of the disks is decrypted and used, not the second one. I also tried rd.md.auto, with the same outcome as with rd.md=0.
I was not able to boot any of the three older kernels, nor the rescue option; they all failed for the same reason.
Does anyone have a suggestion on how I can modify the grub boot options so that (1) the system will boot /and/ (2) it uses the two identical SSDs as the BIOS RAID, as it did before this happened?
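In case it helps, here is a rough sketch of how the rd.md.uuid= values could be recovered and a corrected command line made persistent on a BIOS-boot CentOS 7 install; the exact UUIDs and entries obviously depend on the system, so treat this as an outline rather than a recipe:

  # list the arrays mdadm knows about, including the UUID= fields;
  # rd.md.uuid= expects that value without the "UUID=" prefix
  mdadm --detail --scan
  # show what the currently running kernel was actually booted with
  cat /proc/cmdline
  # put the corrected rd.md.uuid=... entries into GRUB_CMDLINE_LINUX
  vi /etc/default/grub
  # regenerate the grub configuration (BIOS layout; EFI uses a different path)
  grub2-mkconfig -o /boot/grub2/grub.cfg
  # rebuild the initramfs so dracut picks up the current md/LUKS configuration
  dracut -f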
On 10/23/2020 03:29 AM, Simon Matter wrote:
I'm a bit confused what you have here. Did you mix pseudo hardware RAID (BIOS RAID 0) with software RAID here? Because /dev/md126 clearly is part of a software RAID.
Regards, Simon
Not that I know of, but how do I check my configuration?
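One way to check, assuming mdadm is installed (both commands only report state):

  # block device tree: shows whether md126 really sits on top of sda and sdb
  lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
  # "external:imsm" here indicates BIOS/firmware RAID assembled by mdraid;
  # a plain metadata version like 1.2 would indicate native Linux software RAID
  cat /proc/mdstat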