Hi, folks,
I'm building a new box, and I want three partitions - /boot, /, and swap, on *one* RAID 1, not three separate partitions. Other than <alt-f2> mdadm..., *is* there any way in the graphical installer to do this? All I see is a way to make three separate partitions.
Pointers to links happily accepted.
mark, back to googling
On 24.01.2017 at 17:33, m.roth@5-cent.us wrote: I'm building a new box, and I want three partitions - /boot, /, and swap, on *one* RAID 1, not three separate partitions.
the first sentence is in conflict with the last one ("I want three partitions" vs. "not three separate partitions")
Other than <alt-f2> mdadm..., *is* there any way in the graphical installer to do this? All I see is a way to make three separate partitions.
Pointers to links happily accepted.
A hardware RAID will provide one device that can be partitioned; or use LVM on top of one MD RAID (the software version).
-- LF
On 01/24/2017 08:33 AM, m.roth@5-cent.us wrote:
I'm building a new box, and I want three partitions - /boot, /, and
swap, on *one* RAID 1, not three separate partitions. Other than <alt-f2> mdadm..., *is* there any way in the graphical installer to do this? All I see is a way to make three separate partitions.
I don't know the answer to that question, but what I can tell you is that I handle software RAID setup in kickstart, creating the partitions manually, so that I can replace drives using a shell script. Making the partitions predictable means there's less of a chance that I'll make errors during drive replacement, and that I can pass that duty on to less experienced co-workers without worrying about it.
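For reference, a predictable-partition kickstart storage layout along those lines might look like the fragment below. This is a hypothetical sketch, not Gordon's actual file; the sizes, ext4, and the LVM-free layout are my assumptions:

```
# Hypothetical kickstart fragment: fixed, predictable partitions,
# three MD RAID 1 arrays built from matching halves on sda/sdb
part raid.01 --size=1024 --ondisk=sda
part raid.02 --size=1024 --ondisk=sdb
part raid.11 --size=8192 --ondisk=sda
part raid.12 --size=8192 --ondisk=sdb
part raid.21 --size=1 --grow --ondisk=sda
part raid.22 --size=1 --grow --ondisk=sdb
raid /boot --level=RAID1 --device=md0 --fstype=ext4 raid.01 raid.02
raid swap  --level=RAID1 --device=md1 --fstype=swap raid.11 raid.12
raid /     --level=RAID1 --device=md2 --fstype=ext4 raid.21 raid.22
```

Because the partition numbers and sizes are identical on both disks, a drive-replacement script only has to copy the partition table and re-add members.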
Gordon Messmer wrote:
On 01/24/2017 08:33 AM, m.roth@5-cent.us wrote:
I'm building a new box, and I want three partitions - /boot, /, and
swap, on *one* RAID 1, not three separate partitions. Other than <alt-f2> mdadm..., *is* there any way in the graphical installer to do this? All I see is a way to make three separate partitions.
If that wasn't clear, I meant to make the two drives into a single RAID 1, *then* partition that for root, swap, and boot.
I don't know the answer to that question, but what I can tell you is that I handle software RAID setup in kickstart, creating the partitions manually, so that I can replace drives using a shell script. Making the
<snip> Trouble is, it's for this one box. Next box, or the one after, will be happy with the kickstart as it is.
The solved part: I did the <alt-F2> and created the RAID 1. I went back to the GUI, and tried to rescan... it didn't find it, didn't show any drives, then it showed the two real drives... then it gagged, and crashed, and rebooted. HOWEVER, when I tried the next time, anaconda's probing found the RAID, and I'm installing now.
*phew*
mark
On Tue, January 24, 2017 1:10 pm, m.roth@5-cent.us wrote:
Gordon Messmer wrote:
On 01/24/2017 08:33 AM, m.roth@5-cent.us wrote:
I'm building a new box, and I want three partitions - /boot, /, and
swap, on *one* RAID 1, not three separate partitions. Other than <alt-f2> mdadm..., *is* there any way in the graphical installer to do this? All I see is a way to make three separate partitions.
If that wasn't clear, I meant to make the two drives into a single RAID 1, *then* partition that for root, swap, and boot.
I don't know the answer to that question, but what I can tell you is that I handle software RAID setup in kickstart, creating the partitions manually, so that I can replace drives using a shell script. Making the
<snip> Trouble is, it's for this one box. Next box, or the one after, will be happy with the kickstart as it is.
The solved part: I did the <alt-F2> and created the RAID 1.
The trouble is: this will not be part of your /root/initial-setup-ks.cfg file after installation completes. That means you will have to do that part (which basically makes the drives members of a software RAID mirror) before the kickstart installation on each box. Or you may need to add this part at the very top of your kickstart file, followed by something like
mdadm --assemble --scan
- though I'm not certain to what extent it is doable. I'm a "hardware RAID" guy...
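For what it's worth, that %pre idea could be sketched roughly like this. A hypothetical fragment only; the device names and metadata choice are assumptions, and creating the array destroys whatever is on those disks:

```
%pre
# Mirror the two raw disks before anaconda partitions anything.
# WARNING: destroys existing data on sda/sdb.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --metadata=1.0 /dev/sda /dev/sdb
# Make sure the installer sees the assembled array.
mdadm --assemble --scan
%end
```

Metadata 1.0 is chosen here because it stores the superblock at the end of the device, which keeps the start of the disk looking like a plain disk to the BIOS; that choice is my assumption, not something the thread confirms works.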
Valeri
I went back to the GUI, and tried to rescan... it didn't find it, didn't show any drives, then it showed the two real drives... then it gagged, and crashed, and rebooted. HOWEVER, when I tried the next time, anaconda's probing found the RAID, and I'm installing now.
*phew*
mark
CentOS mailing list CentOS@centos.org https://lists.centos.org/mailman/listinfo/centos
++++++++++++++++++++++++++++++++++++++++ Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247 ++++++++++++++++++++++++++++++++++++++++
So, it installed happily.
Then wouldn't boot. No problem, I'll bring it up with pxe, then chroot and grub2-install.
Um, nope. I edited the device map from hd0 and hd1 being the RAID to /dev/sda and /dev/sdb, then ran grub2-install. It now tells me it can't identify the filesystem on hd0, and can't perform a safety check, and gives up.
What am I missing? Google is not giving me any answers....
mark
On Tue, January 24, 2017 4:14 pm, m.roth@5-cent.us wrote:
So, it installed happily.
Then wouldn't boot. No problem, I'll bring it up with pxe, then chroot and grub2-install.
Um, nope. I edited the device map from hd0 and hd1 being the RAID to /dev/sda and /dev/sdb, then ran grub2-install. It now tells me it can't identify the filesystem on hd0, and can't perform a safety check, and gives up.
This is an interesting logical contradiction (unless things progressed much farther than what I last read):
If you want to boot off your RAID 1 device, you need the software RAID code already running, i.e. the kernel already loaded; but to load the kernel in the first place, you needed md0 (or whichever device) to already exist so there was something to load it from...
The only way around that I remember people using was: cut a small partition off the drive to keep as a regular partition, and put /boot on it. The rest of the drive can be a different partition which participates in software RAID. For a mirror (RAID 1), I remember people cutting the same piece off the beginning of both drives; one is always the active /boot (the other can be maintained as a copy of it, but if you lose the first drive, you will need to install a grub boot sector on the second drive pointing to the /boot copy on that drive for loading the initial ramdisk).
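As a sketch of that scheme (sizes and device names are illustrative only, assuming two identical blank drives and a BIOS/MBR setup):

```
# Identical layout on both drives: sdX1 = plain /boot, sdX2 = RAID member.
# Shown for sda; repeat for sdb. WARNING: destroys existing data.
parted -s /dev/sda mklabel msdos \
    mkpart primary ext4 1MiB 513MiB \
    mkpart primary 513MiB 100% \
    set 1 boot on
# sda1 stays a plain ext4 /boot; sda2 joins the mirror for everything else.
mkfs.ext4 /dev/sda1
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
```

The second drive's copy of /boot then has to be kept in sync by hand (or by a script), which is the maintenance burden Valeri alludes to.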
Anyway, good luck. Getting a hardware RAID controller will be waaay less hassle at all stages of your machine's life.
Valeri
What am I missing? Google is not giving me any answers....
mark
On 01/24/2017 02:14 PM, m.roth@5-cent.us wrote:
So, it installed happily. Then wouldn't boot.
What did the storage configuration look like, exactly? I'd guess that you put one partition on each disk, combined those in a RAID 1 MD array, made that an LVM physical volume, and then created filesystems and swap on LVs. But that's a lot of guesses. Did you use MBR partitions or GPT? Are you booting under BIOS or UEFI? Where do your partitions start? Did you create a standard MD RAID volume and LVM, or a partitionable RAID volume and partitions?
On 01/24/17 19:00, Gordon Messmer wrote:
On 01/24/2017 02:14 PM, m.roth@5-cent.us wrote:
So, it installed happily. Then wouldn't boot.
What did the storage configuration look like, exactly? I'd guess that you put one partition on each disk, combined those in a RAID 1 MD array, made that an LVM physical volume, and then created filesystems and swap on LVs. But that's a lot of guesses. Did you use MBR partitions or GPT? Are you booting under BIOS or UEFI? Where do your partitions start? Did you create a standard MD RAID volume and LVM, or a partitionable RAID volume and partitions?
No. Brand new machine, pulled it out of the box and racked it. NOTHING on the internal SSDs. Made an md RAID 0 on the raw disks - /dev/sda /dev/sdb. No partitions, nothing. However, when I bring it up, fdisk shows an MBR with no partitions. I can, however, mount /dev/md127p3 as /mnt/sysimage, and all is there.
Did I need to make a single partition, on each drive, and then make the RAID 1 out of *those*? I don't think I need to have /boot not on a RAID.
mark
You didn't answer all of the questions I asked, but I'll answer as best I can with the information you gave.
On 01/25/2017 04:47 AM, mark wrote:
Made an md RAID 0 on the raw disks - /dev/sda /dev/sdb. No partitions, nothing.
OK, so right off the bat we have to note that this is not a configuration supported by Red Hat. It is possible to set such a system up, but it may require advanced knowledge of grub2 and mdadm. Because the vendor doesn't support this configuration, and as you've seen, the tools don't always parse out the information they need, you'll forever be responsible for fixing any boot problems that come up. Do you really want that?
I sympathize. I wanted to use full disk RAID, too. I thought that replacing disks would be much easier this way, since there'd just be one md RAID device to manage. That was an attractive option after working with hardware RAID controllers that were easy to manage but expensive, unreliable, and performed very poorly in some conditions. But after a thorough review, I found my earlier suggestion of partitioned RAID with the kickstart and RAID management script I provided was the least work for me, in the long term.
However, when I bring it up, fdisk shows an MBR with no partitions. I can, however, mount /dev/md127p3 as /mnt/sysimage, and all is there.
I assume you're booting with BIOS, then?
One explanation for fdisk showing nothing is that you're using GPT instead of MBR (I think). In order to boot on such a system, you'd need a bios_boot partition at the beginning of the RAID volume to provide enough room for grub2 not to stomp on the first partition with a filesystem.
The other explanation that comes to mind is that you're using an mdadm metadata version stored at the beginning of the drive instead of the end. Do you know what metadata version you used?
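One way to check, on a system where the array already exists, is to read the superblock directly. Metadata versions 1.1 and 1.2 live near the start of the device, while 0.90 and 1.0 live at the end:

```
# Superblock version as recorded on the member device
mdadm --examine /dev/sda | grep -i version
# Same information from the assembled array's point of view
mdadm --detail /dev/md127 | grep -i version
```

Both commands need root, and the device names here are just the ones from this thread.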
Did I need to make a single partition, on each drive, and then make the RAID 1 out of *those*? I don't think I need to have /boot not on a RAID.
That's one option, but it still won't be a supported configuration.
Gordon Messmer wrote:
You didn't answer all of the questions I asked, but I'll answer as best I can with the information you gave.
Manitu ate my email, *again*.
On 01/25/2017 04:47 AM, mark wrote:
Made an md RAID 0 on the raw disks - /dev/sda /dev/sdb. No partitions, nothing.
OK, so right off the bat we have to note that this is not a configuration supported by Red Hat. It is possible to set such a system up, but it may require advanced knowledge of grub2 and mdadm. Because
<snip>
I sympathize. I wanted to use full disk RAID, too. I thought that
<snip> Thank you.
However, when I bring it up, fdisk shows an MBR with no partitions. I can, however, mount /dev/md127p3 as /mnt/sysimage, and all is there.
I assume you're booting with BIOS, then?
Yup.
One explanation for fdisk showing nothing is that you're using GPT instead of MBR (I think). In order to boot on such a system, you'd need
Nope. fdisk sees it as an MBR. The SSDs are only 128G. They just run the server, and the LSI card takes care of the 12 hot-swap drives.... <g> (It's a storage server.)
a bios_boot partition at the beginning of the RAID volume to provide enough room for grub2 not to stomp on the first partition with a filesystem.
The other explanation that comes to mind is that you're using an mdadm metadata version stored at the beginning of the drive instead of the end. Do you know what metadata version you used?
I took CentOS 7's default for mdadm.
Did I need to make a single partition, on each drive, and then make the RAID 1 out of *those*? I don't think I need to have /boot not on a RAID.
That's one option, but it still won't be a supported configuration.
Yeah, I see. Well, time to go rebuild, and this time with three separate RAID 1 partitions....
mark
On Tue, 2017-01-24 at 17:14 -0500, m.roth@5-cent.us wrote:
So, it installed happily.
Then wouldn't boot. No problem, I'll bring it up with pxe, then chroot and grub2-install.
Um, nope. I edited the device map from hd0 and hd1 being the RAID to /dev/sda and /dev/sdb, then ran grub2-install. It now tells me it can't identify the filesystem on hd0, and can't perform a safety check, and gives up.
What am I missing? Google is not giving me any answers....
Surely, if you are using software RAID, then you should configure that RAID in anaconda, which will then cope with setting up the partitions to allow booting. Basically it needs a small non-RAID partition to hold /boot on the boot disk.
Remember that the boot sequence is generally: the BIOS reads the MBR and executes it; the MBR code reads the kernel from /boot and executes it (yes, it's more complicated than that). If the MBR code doesn't know how to read a RAID partition, then it's going to fail; that's why you have a small non-RAID partition to hold /boot.
Hardware RAID is different because it interfaces at the BIOS level so the MBR code doesn't need to know how to specifically read it.
P.
In article 1485342377.3072.6.camel@biggs.org.uk, Pete Biggs pete@biggs.org.uk wrote:
On Tue, 2017-01-24 at 17:14 -0500, m.roth@5-cent.us wrote:
So, it installed happily.
Then wouldn't boot. No problem, I'll bring it up with pxe, then chroot and grub2-install.
Um, nope. I edited the device map from hd0 and hd1 being the RAID to /dev/sda and /dev/sdb, then ran grub2-install. It now tells me it can't identify the filesystem on hd0, and can't perform a safety check, and gives up.
What am I missing? Google is not giving me any answers....
Surely, if you are using software RAID, then you should configure that RAID in anaconda, which will then cope with setting up the partitions to allow booting. Basically it needs a small non-RAID partition to hold /boot on the boot disk.
Remember that the boot sequence is generally: the BIOS reads the MBR and executes it; the MBR code reads the kernel from /boot and executes it (yes, it's more complicated than that). If the MBR code doesn't know how to read a RAID partition, then it's going to fail; that's why you have a small non-RAID partition to hold /boot.
Hardware RAID is different because it interfaces at the BIOS level so the MBR code doesn't need to know how to specifically read it.
If you are using RAID 1 kernel mirroring, you can do that with /boot too, and Grub finds the kernel just fine. I've done it many times:
1. Primary partition 1, type FD, size 200M: /dev/sda1 and /dev/sdb1.
2. Create /dev/md0 as RAID 1 from /dev/sda1 and /dev/sdb1.
3. Assign /dev/md0 to /boot, ext3 format (presumably ext4 would work too?).
4. Make sure to set up both drives separately in grub.
Typically I then go on to have /dev/sda2+/dev/sdb2 => /dev/md1 => swap, and /dev/sda3+/dev/sdb3 => /dev/md2 => /
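Tony's steps could be sketched roughly as follows. This is a sketch, not a tested recipe: the device names are assumed, and I've used the grub2 command names even though on the C4-C6 systems Tony describes the legacy grub equivalents would apply:

```
# Steps 1+2: mirror the two type-FD boot partitions
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# Step 3: ext3 /boot on the mirror
mkfs.ext3 /dev/md0
# Step 4: boot sector on each drive, so either disk can boot alone
grub2-install /dev/sda
grub2-install /dev/sdb
# Then the swap and root mirrors
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md1
mkfs.ext3 /dev/md2
```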
Cheers Tony
On 26/01/17 05:46, Tony Mountifield wrote:
In article 1485342377.3072.6.camel@biggs.org.uk, Pete Biggs pete@biggs.org.uk wrote:
On Tue, 2017-01-24 at 17:14 -0500, m.roth@5-cent.us wrote:
So, it installed happily.
Then wouldn't boot. No problem, I'll bring it up with pxe, then chroot and grub2-install.
Um, nope. I edited the device map from hd0 and hd1 being the RAID to /dev/sda and /dev/sdb, then ran grub2-install. It now tells me it can't identify the filesystem on hd0, and can't perform a safety check, and gives up.
What am I missing? Google is not giving me any answers....
Surely, if you are using software RAID, then you should configure that RAID in anaconda, which will then cope with setting up the partitions to allow booting. Basically it needs a small non-RAID partition to hold /boot on the boot disk.
Remember that the boot sequence is generally: the BIOS reads the MBR and executes it; the MBR code reads the kernel from /boot and executes it (yes, it's more complicated than that). If the MBR code doesn't know how to read a RAID partition, then it's going to fail; that's why you have a small non-RAID partition to hold /boot.
Hardware RAID is different because it interfaces at the BIOS level so the MBR code doesn't need to know how to specifically read it.
If you are using RAID 1 kernel mirroring, you can do that with /boot too, and Grub finds the kernel just fine. I've done it many times:
- Primary partition 1 type FD, size 200M. /dev/sda1 and /dev/sdb1.
I think it wiser to have /boot at 1 GB nowadays.
- Create /dev/md0 as RAID 1 from /dev/sda1 and /dev/sdb1.
- Assign /dev/md0 to /boot, ext3 format (presumably ext4 would work too?)
- Make sure to set up both drives separately in grub.
Typically I then go on to have /dev/sda2+/dev/sdb2 => /dev/md1 => swap, and /dev/sda3+/dev/sdb3 => /dev/md2 => /
Cheers Tony
If you are using RAID 1 kernel mirroring, you can do that with /boot too, and Grub finds the kernel just fine. I've done it many times:
Hmm, OK. I wonder why anaconda doesn't do it then.
Reading various websites, it looks like grub2 can do it, but you have to make sure that various grub modules are installed first - i.e. do something like
grub-install --modules='biosdisk ext2 msdos raid mdraid' /dev/xxx
I don't know if they are added by default these days.
The other gotcha is, of course, that the boot sectors aren't RAID'd - so if /dev/sda goes, replacing it will make the system unbootable since it doesn't contain the boot sectors. Hot swap will keep the system running but you have to remember to re-install the correct boot sector before reboot. If you have to bring the machine down to change the disk, then things could get interesting!
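A replacement procedure along those lines might look like this (a sketch, assuming /dev/sda failed and was swapped while /dev/sdb stayed healthy, and MBR partitioning as in Tony's layout):

```
# Copy the surviving drive's partition table to the new disk
sfdisk -d /dev/sdb | sfdisk /dev/sda
# Re-add the new partitions to their arrays and let them resync
mdadm --manage /dev/md0 --add /dev/sda1
mdadm --manage /dev/md1 --add /dev/sda2
mdadm --manage /dev/md2 --add /dev/sda3
# Reinstall the boot sector the new disk is missing
grub2-install /dev/sda
# Watch the resync
cat /proc/mdstat
```

Forgetting the grub2-install step is exactly the gotcha described above: the arrays resync fine, but the new disk can't boot the machine on its own.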
P.
In article 1485416344.2047.1.camel@biggs.org.uk, Pete Biggs pete@biggs.org.uk wrote:
If you are using RAID 1 kernel mirroring, you can do that with /boot too, and Grub finds the kernel just fine. I've done it many times:
Hmm, OK. I wonder why anaconda doesn't do it then.
Reading various websites, it looks like grub2 can do it, but you have to make sure that various grub modules are installed first - i.e. do something like
grub-install --modules='biosdisk ext2 msdos raid mdraid' /dev/xxx
I don't know if they are added by default these days.
I don't know, but I've never had to do it, when using plain mirroring, on either C4, C5 or C6. I can imagine you would need to if /boot was RAID 0 striped, if indeed that is even possible.
The other gotcha is, of course, that the boot sectors aren't RAID'd - so if /dev/sda goes, replacing it will make the system unbootable since it doesn't contain the boot sectors. Hot swap will keep the system running but you have to remember to re-install the correct boot sector before reboot. If you have to bring the machine down to change the disk, then things could get interesting!
Yup, been there, done that. So long as you use grub to install the boot sector on both drives, then you can always tell the BIOS to boot from the other drive to bring the system up after replacing the first disk.
Anaconda doesn't set up the boot sector on the second drive by default, so I put some grub commands in the post-install section of kickstart to do so.
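On the grub-legacy systems Tony describes (C4 through C6), such a %post fragment typically looked something like the following. This is a hypothetical reconstruction, not Tony's actual kickstart; the exact device mapping depends on the machine:

```
%post
# Install the legacy-grub boot sector on the second drive,
# pointing it at the copy of /boot on that drive (hd1,0).
grub --batch <<'EOF'
device (hd1) /dev/sdb
root (hd1,0)
setup (hd1)
EOF
%end
```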
Cheers Tony
On 01/26/2017 01:40 AM, Tony Mountifield wrote:
Anaconda doesn't set up the boot sector on the second drive by default, so I put some grub commands in the post-install section of kickstart to do so.
I can't attest that it *works* (mostly since I use UEFI everywhere possible) but anaconda definitely attempts to install grub on each drive with a copy of /boot:
https://github.com/rhinstaller/anaconda/blob/master/pyanaconda/bootloader.py
In article 5ef97952-14c0-6ad2-0803-c24691a6816b@gmail.com, Gordon Messmer gordon.messmer@gmail.com wrote:
On 01/26/2017 01:40 AM, Tony Mountifield wrote:
Anaconda doesn't set up the boot sector on the second drive by default, so I put some grub commands in the post-install section of kickstart to do so.
I can't attest that it *works* (mostly since I use UEFI everywhere possible) but anaconda definitely attempts to install grub on each drive with a copy of /boot:
https://github.com/rhinstaller/anaconda/blob/master/pyanaconda/bootloader.py
Thanks, that's interesting to know. When I first started doing this it was on CentOS 4, and I'm pretty sure the second drive didn't get grubbed back then, which would be what prompted me to add the post-install grub for the second drive at that time.
I never went back to check whether the need had been obviated in CentOS 5 or 6.
Cheers Tony