Hello all,
At our office I have a server running 3 Xen domains: mail server, etc.
I want to make this setup more redundant.
There are a few howtos on the combination of Xen, DRBD, and heartbeat. That is probably the best way.
Another option I am looking at is a piece of shared storage, a machine running CentOS with a large software RAID 5 array.
What is the best means of sharing the storage? I would really like to use a combination of an iSCSI target server, and GFS or OCFS.
But the iSCSI target server in the CentOS repos is a 'technology preview'.
Have any of you used the iSCSI target server in a production environment yet?
Is NFS an option?
Kind regards, Coert Waagmeester
Briefly: iet has been rock stable for me. It just runs forever... I have only used NFS under VMware; it worked well.
jlc
jlc,
what has been rock stable?
can you be more specific on the implementation?
are you saying "it" or "iet"?
if "iet", what is that?
;-)
- rh
On Thu, 2009-06-11 at 13:35 -0700, RobertH wrote:
jlc was talking about the iSCSI target server I think...
Sorry buddy, I meant "iSCSI Enterprise Target" @ http://iscsitarget.sourceforge.net/ This project is fortunate enough to have the developers and some very bright and experienced members active in the list. It was actually one of the places I started when learning Linux, and I received some invaluable help from two specific people.
This is one of the few pieces of software this critical that I have gotten into the bad habit of not monitoring; it just bloody runs w/o issue.
If you need a target, I can at least say this one is rock stable and any issues are dealt with fast!
HTH, jlc
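(If it helps anyone evaluating IET: a minimal target definition in /etc/ietd.conf is only a few lines. This is a sketch; the IQN and backing volume below are made-up placeholders.)

    # /etc/ietd.conf -- minimal sketch, placeholder names throughout
    Target iqn.2009-06.com.example:storage.xen1
        # Export one LVM volume as LUN 0; blockio does direct block I/O
        Lun 0 Path=/dev/vg0/xen1,Type=blockio

(After restarting ietd, a CentOS initiator with iscsi-initiator-utils installed can discover it with: iscsiadm -m discovery -t sendtargets -p <server-ip>.)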
On Jun 11, 2009, at 5:21 PM, "Joseph L. Casale" <JCasale@activenetwerx.com> wrote:
It is rock stable, though it needs to be brought current with asynchronous notifications, persistent reservations, error recovery level 2, and multiple connections per session.
So if any developer wants to get their feet wet with the iSCSI protocol and kernel driver development, please subscribe to the iSCSI Enterprise Target list.
Also, when testing it out, use the 'noop' or 'deadline' I/O schedulers on the backend target storage, as there is currently a performance issue with the 'cfq' scheduler.
It compiles clean on most architectures.
-Ross
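(Regarding the scheduler tip above: the scheduler can be checked and switched per block device at runtime. A sketch, with sdb standing in for whatever disk backs the target:)

    # Show the available schedulers; the active one is in brackets
    cat /sys/block/sdb/queue/scheduler
    # Switch to deadline; this does not survive a reboot, so add
    # elevator=deadline to the kernel line in grub.conf to make it stick
    echo deadline > /sys/block/sdb/queue/scheduler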
I use NFS - a stable solution - but if you're looking more for redundancy, use the DRBD and heartbeat solution you mentioned. I have quite a few systems running this; it works very well. You may also use RAID with DRBD. Sorry, I have never used an iSCSI target; it looks interesting though.
~Ron
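(For anyone weighing the NFS route, the server side is small. A sketch, with the export path and subnet as placeholders:)

    # /etc/exports -- path and subnet are placeholders
    # no_root_squash so dom0's root can manage the Xen images
    /srv/xen  192.168.1.0/24(rw,sync,no_root_squash)

    # Apply the export list and start the NFS services
    exportfs -ra
    service nfs start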
Coert Waagmeester wrote:
Another option I am looking at is a piece of shared storage, a machine running CentOS with a large software RAID 5 array.
How large? Depending on the size, RAID6 is the better option: with >= 1 TB disks, a rebuild can take longer than the statistical average time before another disk fails.
What is the best means of sharing the storage? I would really like to use a combination of an iSCSI target server, and GFS or OCFS.
If you don't already do GFS (and have been doing so for years), I'd say you'd better only do it in a configuration that is supported either by Red Hat (e.g. with RHEL) or by some competent third party that can help you over the pitfalls. Otherwise you are on your own, with only the GFS mailing list, yourself, and your keyboard ;-)
Rainer
On Thu, 2009-06-11 at 17:14 +0200, Rainer Duffner wrote:
I am starting with 4 1TB SATA disks.
With RAID 6 that will give me 2 TB, right?
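(For the arithmetic: RAID6 keeps two disks' worth of parity, so usable space is (N - 2) x disk size = (4 - 2) x 1 TB = 2 TB; the same four disks in RAID5 would give 3 TB.)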
Will OCFS be easier than GFS?
I've always liked the 'low-tech' way of using RAID1 in a box with swappable disks and keeping a spare chassis handy. In the most common scenario of a single drive failure you will keep running at full speed and can replace the drive at your convenience. For a less likely motherboard or power supply failure, you move the disks to the other chassis and are up in the time it takes to reboot. And if the whole thing melts, you can recover the data off of any single disk you have left. You still need backups, of course, and you need an on-site person to swap drives. You could automate it a bit more with drbd and a heartbeat failover to keep the standby live, but that adds a lot of complexity and more things to go wrong.
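(A sketch of the mdadm side of that low-tech setup; device names are placeholders:)

    # Build the two-disk mirror
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    # Health check: a failed member shows up flagged (F) here
    cat /proc/mdstat
    # After swapping in a replacement disk, re-add it to the mirror
    mdadm /dev/md0 --add /dev/sdb1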
2009/6/11 Coert Waagmeester lgroups@waagmeester.co.za:
Hi,
There are a few howtos on the combination of Xen, DRBD, and heartbeat. That is probably the best way.
I am using a combination of DRBD+GFS. Since v8.2, DRBD [1] can be configured in dual-primary mode [2]. You can mount your local partitions in r/w mode using a Distributed Lock Manager and GFS. It works pretty well in my case; both my partitions are correctly replicated at the block-device level. Please note that with this solution you have to configure a fence device to preserve file system integrity. The DRBD documentation contains everything you need to implement this solution.
[1] http://www.drbd.org/users-guide/
[2] http://www.drbd.org/users-guide/s-dual-primary-mode.html
Cheers
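(The dual-primary bits of drbd.conf are small. A sketch for DRBD 8.x; the resource name, hostnames, devices, and addresses below are all placeholders:)

    resource r0 {
      net {
        # required for dual-primary operation
        allow-two-primaries;
      }
      startup {
        # promote both nodes when the cluster comes up
        become-primary-on both;
      }
      on node1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7788;
        meta-disk internal;
      }
      on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }

(And, as noted above, do not run dual-primary without fencing configured; a split brain will corrupt the shared filesystem.)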
On Fri, 2009-06-12 at 22:59 +0200, Giuseppe Fuggiano wrote:
Hello,
Thanks, I will give this a bash; I am trying to set up GFS now (very hairy!).
What are your opinions on OCFS and GlusterFS? Or am I better off sticking with GFS?
Kind regards, Coert
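(For anyone following along: once cman and the DLM are up, the GFS2 steps themselves are short. A sketch; the cluster name, filesystem label, and mount point are placeholders:)

    # DLM locking, label <clustername>:<fsname>, one journal per node (-j 2)
    mkfs.gfs2 -p lock_dlm -t mycluster:gfs0 -j 2 /dev/drbd0
    # Mount on both (dual-primary) nodes
    mount -t gfs2 /dev/drbd0 /srv/xen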
Coert Waagmeester wrote:
Have it set up by someone who knows what he's doing and who can bail you out in case it goes boom.
Otherwise, you just introduce another layer (or two) of complexity that gives you no additional uptime over what a simple, solid server from HP/IBM/Sun (or maybe even Dell) plus a UPS will give you.
What were your primary reasons for outages over the last two years?
Rainer
2009/6/11 Coert Waagmeester lgroups@waagmeester.co.za:
Hi,
I want to make this setup more redundant.
Have you considered something like FreeNAS or even OpenFiler? They're both install-from-CD-and-use NAS server platforms and work great.
Thanks, good suggestion. I have used FreeNAS and I like it; we did use it here in the past, but the Windows side of our network would not back up 100% properly, no big surprise to me. So that is how the big fat 5 TB HP storage array became a Windows server and CentOS was left out in the cold to figure out how to mount it like a second-class citizen. :-) Thanks, Rudi.
LK
On Thu, 2009-06-18 at 15:28 +0200, Rainer Duffner wrote:
We have not really had major outages yet... Most of our stuff is already at least RAID 1 (even Windblows). I might just be a little too paranoid.
I have decided that at first I will be going for the DRBD solution. With the current hardware I have, it will be the easiest solution.
Have any of you guys used GlusterFS or OCFS yet?