Hi All,
I have two identical computers with CentOS 5.0 and lampp installed. Is there any high-availability solution for CentOS, so that when one computer goes offline, the other can act as a backup, transparently to users?
Regards, Tong
On Sat, 2007-05-12 at 20:28 +0800, Fung wrote:
Hi All,
I have two identical computers with CentOS 5.0 and lampp installed. Is there any high-availability solution for CentOS, so that when one computer goes offline, the other can act as a backup, transparently to users?
DRBD and Heartbeat can do that, but I'm not sure they are already built for CentOS5 (they are in the extras repository of CentOS 4.x)
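For anyone curious what that pairing looks like in practice, here is a minimal Heartbeat v1-style sketch; the node names, interface, and virtual IP are placeholder examples, not taken from this thread:

```
# /etc/ha.d/ha.cf -- two-node heartbeat config (sketch)
keepalive 2        # send a heartbeat every 2 seconds
deadtime 30        # declare the peer dead after 30 seconds of silence
bcast eth0         # heartbeat over broadcast on eth0
auto_failback on   # resources return to the preferred node when it recovers
node web1 web2     # must match `uname -n` on each box

# /etc/ha.d/haresources -- resources the preferred node (web1) owns
web1 IPaddr::192.168.0.100 httpd
```

You also need a matching /etc/ha.d/authkeys on both nodes, and DRBD would sit underneath this to keep the data itself in sync.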
-- Bernard Lheureux
On May 12, 2007, at 8:28 AM, Fung wrote:
I have two identical computers with CentOS 5.0 and lampp installed. Is there any high-availability solution for CentOS, so that when one computer goes offline, the other can act as a backup, transparently to users?
You should be able to build a failover system using Cluster Suite, provided you put up a third box to be used as shared storage. Here's the link to the announcement of the rebuild of CS/GFS for CentOS 4:
http://www.centos.org/modules/news/article.php?storyid=108
I imagine we'll see a similar announcement when CS/GFS is rebuilt for CentOS 5.
-steve
-- If this were played upon a stage now, I could condemn it as an improbable fiction. - Fabian, Twelfth Night, III,v
Steve Huff wrote:
You should be able to build a failover system using Cluster Suite, provided you put up a third box to be used as shared storage. Here's the link to the announcement of the rebuild of CS/GFS for CentOS 4:
If you set up a third box to be the shared storage, doesn't that now become the single point of failure?
Russ
Ruslan Sivak wrote:
If you set up a third box to be the shared storage, doesn't that now become the single point of failure?
Enterprise-grade shared storage (a SAN) is generally fully redundant. They use dual storage controllers, ideally in an active/active configuration. You use dual Fibre Channel cards on each host, going to two separate FC switches, and both switches go to both storage controllers. Each drive bay has fibre loops to both controllers, and Fibre Channel drives have dual ports so both controllers can talk to them. Each storage chassis has redundant power, ideally wired to separate UPS systems. This is all rather expensive, but there is no single point of failure. Combined with the fact that every component of this sort of SAN is engineered for very high uptime in the first place, you can achieve five-nines or better reliability.
On May 14, 2007, at 10:25 AM, Ruslan Sivak wrote:
Steve Huff wrote:
If you set up a third box to be the shared storage, doesn't that now become the single point of failure?
Short answer: maybe. :)
Longer answer: If you set up your shared storage according to upstream's guidelines, as described in the documentation (http://mirror.centos.org/centos/4/docs/html/rh-cs-en-4/ch-hardware.html#TB-HARDWARE-NOSPOF), then you provide at least two channels of communication between each component in the cluster. In addition, you choose a platform for shared storage that provides some redundancy of its own, whether it's multi-controller HW RAID, or multiple storage nodes on a SAN, or what have you.
CS/GFS operates under the assumption that your shared storage is fault-tolerant; its job is to make your services fault-tolerant. Is the recommended "no single point of failure" configuration proof against your data center burning down, or against a madman with an axe? Unlikely. Will it allow you to host services in a way that is considerably more robust and flexible than hosting them on a single box? Yes.
-Steve
Steve Huff wrote:
CS/GFS operates under the assumption that your shared storage is fault-tolerant; its job is to make your services fault-tolerant. Is the recommended "no single point of failure" configuration proof against your data center burning down, or against a madman with an axe? Unlikely. Will it allow you to host services in a way that is considerably more robust and flexible than hosting them on a single box? Yes.
-Steve
I am currently running a redundant environment on Windows by having two boxes with Apache and having the data (images) synced automatically between servers using FRS (File Replication Service). This works well most of the time, except when it breaks, at which point I need to resync the two servers, which usually takes days.
I would like to set up something similar using Linux. I don't have the budget for a SAN/NAS, and even having a third server as storage would probably not be worth it, although we could possibly go with that. The problem is that it would be a single point of failure.
Is there some service/filesystem in Linux that allows for the automatic replication of files, to make a fault-tolerant environment possible with only two servers? Basically, whenever a file is updated on a certain filesystem (a certain folder), the file gets synced over to the other system.
Russ
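A crude stopgap for the two-server case described above (not a real-time replicating filesystem, just a periodic one-way push) would be rsync on a cron schedule; the path and peer hostname here are placeholders:

```
# crontab entry (sketch): push the image tree to the peer every 5 minutes.
# One-way only -- writes must happen on this node for this to be safe.
*/5 * * * * rsync -az --delete /var/www/images/ web2:/var/www/images/
```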
-----Original Message-----
From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Ruslan Sivak
Sent: Monday, May 14, 2007 10:56 AM
To: CentOS mailing list
Subject: Re: [CentOS] HA with CentOS
You can check out DRBD, it does block-level replication of data.
-Ross
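For reference, a minimal DRBD resource definition might look like the following sketch (DRBD 8.x syntax; hostnames, devices, and IPs are examples only):

```
# /etc/drbd.conf (sketch)
resource r0 {
  protocol C;                # fully synchronous replication
  on web1 {                  # must match `uname -n`
    device    /dev/drbd0;    # replicated block device the filesystem sits on
    disk      /dev/sda7;     # local backing partition
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on web2 {
    device    /dev/drbd0;
    disk      /dev/sda7;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

You then mount the filesystem on /dev/drbd0 rather than the raw partition, and only on whichever node is currently Primary.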
On Mon, 2007-05-14 at 10:55 -0400, Ruslan Sivak wrote:
I am currently running a redundant environment on Windows by having two boxes with Apache and having the data (images) synced automatically between servers using FRS (File Replication Service). This works well most of the time, except when it breaks, at which point I need to resync the two servers, which usually takes days.
I would like to set up something similar using Linux. I don't have the budget for a SAN/NAS, and even having a third server as storage would probably not be worth it, although we could possibly go with that. The problem is that it would be a single point of failure.
Is there some service/filesystem in Linux that allows for the automatic replication of files, to make a fault-tolerant environment possible with only two servers? Basically, whenever a file is updated on a certain filesystem (a certain folder), the file gets synced over to the other system.
Russ
CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
DRBD and Heartbeat seem pretty solid together for cheap, effective high availability. We've been using them for our production FTP servers, which handle hundreds of thousands of transactions a day, both uploads and downloads. We fail over between the two every six months and haven't had any problems on CentOS 4.3; they've actually been up for several hundred days now. There is actually a yum group named drbd-heartbeat in the CentOS extras repository, but I don't see that it is available in CentOS 5.0. Does anyone know if these packages will be available in any of the CentOS 5.0 yum repositories?
I think they were just built for C5 and are currently in the testing repo.
Akemi
On Mon, 2007-05-14 at 10:55 -0700, Akemi Yagi wrote:
I think they were just built for C5 and are currently in the testing repo.
Akemi
Ah, so the testing repository (for CentOS 5) is now separate from the mainstream repositories and is located at dev.centos.org. I did not know that. Do you happen to know how/if packages in the testing repository will be promoted into the "official" CentOS repositories that are so easily rsync'd against? Will they be promoted to the extras repository in CentOS 5? Thanks.
Scott McClanahan spake the following on 5/14/2007 11:19 AM:
Ah, so the testing repository (for CentOS 5) is now separate from the mainstream repositories and is located at dev.centos.org. I did not know that. Do you happen to know how/if packages in the testing repository will be promoted into the "official" CentOS repositories that are so easily rsync'd against? Will they be promoted to the extras repository in CentOS 5? Thanks.
After they are tested, they usually get moved into the appropriate place in the tree. But they will not get moved until some people test them and give some positive feedback on their usability and functionality.
Scott McClanahan wrote:
DRBD and Heartbeat seem pretty solid together for cheap, effective high availability. We've been using them for our production FTP servers, which handle hundreds of thousands of transactions a day, both uploads and downloads. We fail over between the two every six months and haven't had any problems on CentOS 4.3; they've actually been up for several hundred days now. There is actually a yum group named drbd-heartbeat in the CentOS extras repository, but I don't see that it is available in CentOS 5.0. Does anyone know if these packages will be available in any of the CentOS 5.0 yum repositories?
Looks interesting. I will have to try them out once they're in the stable repo. It looks as if, as of DRBD 8.0.0, you can use it together with GFS and run both nodes as primary. Would Heartbeat still be needed? Can Heartbeat work with VM boxes?
Russ
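For what it's worth, dual-primary mode in DRBD 8.x is enabled in the resource's net section, along the lines of the fragment below (a sketch, with a hypothetical resource name). You still need a cluster filesystem such as GFS on top, plus fencing, and something like Heartbeat or Cluster Suite is still needed to manage the shared IP and services:

```
# drbd.conf fragment (sketch) -- dual-primary for use with GFS
resource r0 {
  net {
    allow-two-primaries;     # both nodes may be Primary simultaneously
  }
  startup {
    become-primary-on both;  # promote both nodes at startup
  }
}
```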
On Mon, 2007-05-14 at 13:08 -0400, Scott McClanahan wrote:
There is clustering included in CentOS-5 ... see the guides for using C5 Clustering here:
There is a testing DRBD / Heartbeat for CentOS-5 in the testing repository:
http://dev.centos.org/centos/5/
Thanks, Johnny Hughes
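If the testing packages are laid out like the other repositories, a yum config along these lines should pull them in; note that the exact baseurl path is a guess based on the dev.centos.org link above, so verify it in a browser first:

```
# /etc/yum.repos.d/c5-testing.repo (sketch; verify the path exists)
[c5-testing]
name=CentOS-5 Testing
baseurl=http://dev.centos.org/centos/5/testing/$basearch/
enabled=0
gpgcheck=0
```

With enabled=0, you would pull from it explicitly, e.g. `yum --enablerepo=c5-testing install drbd heartbeat`.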