[CentOS] "multi-boot" drive partitioning

William L. Maltby CentOS4Bill at triad.rr.com
Tue Dec 18 12:58:32 UTC 2007


On Mon, 2007-12-17 at 21:20 -0600, Frank Cox wrote:
> I want to set up a multiple "mode" computer with four separate Centos
> installations on it.  The objective here is to have a "spare computer" that I
> can boot up into any of four "modes" depending on what I'm swapping it in for at
> the moment.  For example, I want to be able to boot it up as a webserver, or as
> a fileserver, or as a LTSP-enabled application server.  And so on.
> 
> I have a computer here with two 300GB hard drives in it, which I plan to split
> into four 150GB partitions, one for each of my "modes".  And I want to install
> Centos separately and independently into each partition, so I can just tell
> Grub to boot up using whatever partition I choose.
> 
> What is the best way to accomplish this?  I have a bad feeling that the drive
> partitioning tool is going to complain about having multiple / partitions
> unless I take steps to avoid that.
> 

Fajar's post provides a simple way and seems to be what you're looking
for. The key is that one boot image can have different root file systems
specified using the same kernel. So, configurations can be "pre-loaded"
in each FS to look like the failed node. BTW, adding one more
configuration that has the sole purpose of acquiring and applying
changes to the other local images would be a good idea. It's only
natural that over time some configuration changes will occur and one
will forget or make an error when trying to replicate those changes to
the spare.
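To make the one-kernel/many-roots idea concrete, here is a sketch of what the GRUB legacy menu.lst might look like (the partition names and kernel version are illustrative assumptions, not taken from the original posts):

```
# /boot/grub/menu.lst sketch -- one shared /boot, same kernel image,
# different root= for each "mode".  Device names and the kernel
# version below are assumptions; adjust to your layout.
default=0
timeout=10

title CentOS (webserver mode, root on /dev/sda2)
    root (hd0,0)
    kernel /vmlinuz-2.6.9-55.EL ro root=/dev/sda2
    initrd /initrd-2.6.9-55.EL.img

title CentOS (fileserver mode, root on /dev/sda3)
    root (hd0,0)
    kernel /vmlinuz-2.6.9-55.EL ro root=/dev/sda3
    initrd /initrd-2.6.9-55.EL.img

title CentOS (LTSP application server mode, root on /dev/sdb2)
    root (hd0,0)
    kernel /vmlinuz-2.6.9-55.EL ro root=/dev/sdb2
    initrd /initrd-2.6.9-55.EL.img
```

Each stanza boots the same kernel but mounts a different partition as /, so each partition can be pre-loaded with the configuration for its "mode".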

However, as mentioned by another poster, letting it sit idle may lead to
it being non-operational when needed. CMOS batteries expire, components
age, dust collects, power surges get past various protections, etc.

I did something similar to this back in 199(twoish?), with a network
involved, on a real UNIX 5.X system that had NFS/RFS available, to
protect my client in case of HD failure (the most likely scenario then,
IMO).

All nodes ran continuously and user databases were distributed. Each
node was sized to hold both the OS and a copy of the two largest live
databases on any node (hoping that only one node would fail at a time,
but allowing for two). A cron entry did a cross-copy of any changed
components (excluding node-specific ones) during the wee hours of the
morning and a small report was generated that let us be sure the
"backups" had completed successfully. A tape backup of all that on the
least loaded node was then done in background at low priority.
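A modern equivalent of that nightly cross-copy-plus-report might look something like this (rsync in place of the RFS-era tooling; the hostnames, paths, schedule, and exclude list are all illustrative assumptions):

```
# /etc/cron.d/cross-copy sketch -- pull changed data from node1 in the
# wee hours, excluding node-specific files, and append a one-line
# success/failure record to a small report.  All names are assumptions.
30 3 * * * root rsync -a --delete \
    --exclude='etc/sysconfig/network*' \
    --exclude='etc/fstab' \
    node1:/srv/data/ /srv/spare/node1/ \
    >> /var/log/cross-copy.log 2>&1 \
    && echo "$(date '+\%F \%T') node1 copy OK" >> /var/log/cross-copy-report \
    || echo "$(date '+\%F \%T') node1 copy FAILED" >> /var/log/cross-copy-report
```

Checking /var/log/cross-copy-report each morning serves the same purpose as the small report described above: confirming the "backups" completed before you need them.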

It was not intended that the end user could automatically bring the
"spare" into the mix as a replacement for a failed unit, but it was
intended that I could quickly adjust it to do so. This required only
changing the shared resources (RFS shares at the time) on the
replacement and activating those shares. IIRC, I also changed the node
reference on the "clients". Anyway, with RFS, that was quick and easy. I
liked RFS a lot and was sorry to see it eventually go away.

Sure enough, a drive failed eventually and I came away looking like a
hero.

With today's equipment, you should be able to have the spare "sleep",
awaken, receive any changes you like, and go back to sleep. A small
report before it re-sleeps will let you verify that all is well with it.
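Such a wake/sync/sleep cycle could be driven from an always-on node along these lines (a sketch only: the MAC address, hostname, paths, and boot delay are assumptions, and it presumes the wakeonlan utility plus wake-on-LAN enabled in the spare's BIOS/NIC):

```
#!/bin/sh
# Sketch: wake the spare, push changes, capture a short report, power off.
wakeonlan 00:11:22:33:44:55                  # magic packet to the spare's NIC
sleep 120                                    # allow time for it to boot
rsync -a /srv/configs/ spare:/srv/configs/ \
    && echo "$(date) spare sync OK"  >> /var/log/spare-report \
    || echo "$(date) spare sync FAILED" >> /var/log/spare-report
ssh spare poweroff                           # back to sleep until next time
```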

-- 
Bill



