Hi!
As some of you might know, Apple has discontinued its Xserve servers as of January 31st, 2011.
We have a server rack with 12 Xserves, ranging from dual G5s to latest-generation dual quad-core Xeons, 3 Xserve RAIDs and one ActiveRAID 16 TB disk enclosure. We also use Xsan to access a shared file system among the servers. Services are run from this shared filesystem, spread across the servers. Some LUNs on the Fibre Channel network are accessed directly and mounted on a case-by-case basis. Those RAID volumes are partitioned with a GUID partition map and apple_label type volumes, so they can be mounted by name with mount_hfs.
We were on the verge of upgrading at least 6 of our servers in a separate location (as a backup site), with another SAN, the same applications, etc. But this announcement has introduced a bit of a delay. We do have several servers running CentOS (about 10 or so), on Intel server platforms.
Now with this said, I am searching for documentation on operating a SAN under Linux. We are looking at Quantum's StorNext FS2 product for the SAN itself.
And I am searching for info about accessing volumes on a Fibre Channel network by label. I know I can label individual ext3 partitions, but how do I do so on a RAID array via Fibre Channel?
Basically, I'm searching for a Linux starter guide to Fibre Channel storage.
Thanks for any insight.
Nicolas
On Nov 5, 2010, at 7:34 PM, "Nicolas Ross" rossnick-lists@cybercat.ca wrote:
As some of you might know, Apple has discontinued its Xserve servers as of January 31st, 2011.
[...]
Now with this said, I am searching for documentation on operating a SAN under Linux. We are looking at Quantum's StorNext FS2 product for the SAN itself.
[...]
Basically, I'm searching for a Linux starter guide to Fibre Channel storage.
You could also look at Nexenta to replace the OS on the SAN head servers, load up their RAM, put in SSD drives and make the storage live for another 5 years until you can find an alternative.
As for the other servers, virtualize, virtualize, virtualize. Then you don't really have to worry about the hardware being discontinued any more.
-Ross
On 11/05/2010 04:34 PM, Nicolas Ross wrote:
Now with this said, I am searching for documentation on operating a SAN under Linux. We are looking at Quantum's StorNext FS2 product for the SAN itself.
I'm not sure how much help you'll get from the community. StorNext is a proprietary product that appears to have its own drivers and management tools. If you want documentation, ask the vendor for it.
And I am searching for info about accessing volumes on a Fibre Channel network by label. I know I can label individual ext3 partitions, but how do I do so on a RAID array via Fibre Channel?
Well, on standard SAN products you'll see block devices corresponding to the volumes that you've exported from the SAN to the host system. You can create filesystems on them and label those just like you would any other block device.
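For example, something along these lines (a rough sketch; the device name and label are made up, and you'd adapt the partitioning to taste):

# assuming the exported LUN shows up as /dev/sdb
parted -s /dev/sdb mklabel gpt                  # GUID partition map
parted -s /dev/sdb mkpart primary ext3 0% 100%
mkfs.ext3 /dev/sdb1
e2label /dev/sdb1 datavol01                     # set the filesystem label
mount LABEL=datavol01 /mnt/datavol01            # mount by label, not device name
ls -l /dev/disk/by-label/                       # udev also exposes labels here

That way the mount doesn't care whether the LUN shows up as /dev/sdb or /dev/sdf after a rescan.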
Thanks,
On 11/05/2010 04:34 PM, Nicolas Ross wrote:
Now with this said, I am searching for documentation on operating a SAN under Linux. We are looking at Quantum's StorNext FS2 product for the SAN itself.
I'm not sure how much help you'll get from the community. StorNext is a proprietary product that appears to have its own drivers and management tools. If you want documentation, ask the vendor for it.
Is there any other solution for building a SAN under Linux?
On 11/07/10 3:33 AM, Nicolas Ross wrote:
Thanks,
On 11/05/2010 04:34 PM, Nicolas Ross wrote:
Now with this said, I am searching for documentation on operating a SAN under Linux. We are looking at Quantum's StorNext FS2 product for the SAN itself.
I'm not sure how much help you'll get from the community. StorNext is a proprietary product that appears to have its own drivers and management tools. If you want documentation, ask the vendor for it.
Is there any other solution for building a SAN under Linux?
Openfiler ... while it supports a full range of NAS (file server) features, it also has iSCSI block SAN support.
Nicolas Ross wrote:
Thanks,
On 11/05/2010 04:34 PM, Nicolas Ross wrote:
Now with this said, I am searching for documentation on operating a SAN under Linux. We are looking at Quantum's StorNext FS2 product for the SAN itself.
I'm not sure how much help you'll get from the community. StorNext is a proprietary product that appears to have its own drivers and management tools. If you want documentation, ask the vendor for it.
Is there any other solution for building a SAN under Linux?
We're using a somewhat aged HP StorageWorks EVA3000 SAN and a 2 Gb Fibre Channel infrastructure with our CentOS 4 servers running the Red Hat Cluster Suite to support several instances of Oracle. The hardware includes QLogic FC controllers and Brocade FC switches. It actually works quite well, though the versions of RHCS for RHEL/CentOS 4 are a bit complicated for today's needs.
We are currently working to migrate this to an EMC CX4 SAN on an 8 Gb Fibre Channel infrastructure with Dell blade servers. We're using RHEL 5 and Oracle's cluster toolkit, and it seems quite an improvement over RHCS and GFS2. OCFS2 seems to have caught up with GFS2 as far as capabilities go and is laughably simple to configure compared to RHCS 4. Of course, with it working so well we haven't had much opportunity to develop troubleshooting skills.
We also use a LOT of iSCSI SAN connections, using either iSCSI servers from HP or Dell or general-purpose machines running Openfiler. Performance isn't quite up to the 8 Gb/s SAN speeds, but with Gigabit Ethernet and jumbo frames it's pretty respectable.
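For what it's worth, attaching a target with open-iscsi is about this simple (IP, IQN and interface name are made up):

ifconfig eth1 mtu 9000    # jumbo frames on the storage interface
service iscsi start
iscsiadm -m discovery -t sendtargets -p 192.168.10.20
iscsiadm -m node -T iqn.2010-11.com.example:storage.lun1 -p 192.168.10.20 --login
# the LUN then shows up as an ordinary block device, e.g. /dev/sdc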
Perhaps FreeNAS would fit the bill?
Sent from my iPhone
On Nov 8, 2010, at 6:52 PM, Gordon Messmer yinyang@eburg.com wrote:
On 11/07/2010 03:33 AM, Nicolas Ross wrote:
Is there any other solution for building a SAN under Linux?
None of my customers use a SAN right now. I have some friends who speak pretty highly of their Dell SAN gear (re-branded EMC CX300) with QLogic HBAs.
On 11/09/2010 12:58 AM, Tim Dunphy wrote:
Perhaps FreeNAS would fit the bill?
Sent from my iPhone
On Nov 8, 2010, at 6:52 PM, Gordon Messmer yinyang@eburg.com wrote:
On 11/07/2010 03:33 AM, Nicolas Ross wrote:
Is there any other solution for building a SAN under Linux?
How about openfiler: http://www.openfiler.com/
Regards, Patrick
On 11/08/2010 04:06 PM, Patrick Lists wrote:
On 11/09/2010 12:58 AM, Tim Dunphy wrote:
Perhaps FreeNAS would fit the bill? http://freenas.org/features
How about openfiler: http://www.openfiler.com/
I don't believe either of those support exporting volumes over Fibre Channel. You could do iSCSI, but your performance wouldn't be as good.
All the tools to build a SAN are available out there, but it is a more complicated problem than you might expect to develop one. For example, you could have GNU/Linux or Solaris 10 hosts that act as dumb iSCSI target enclosures with a bunch of disks in them. This device could have FC, 1 or 10 GbE, InfiniBand or some other adapter and attach to your existing storage fabric. That's the easy part!
You then need a method for dealing with the high availability aspect. You need to be able to fence the storage while a fail-over is taking place. You need to (maybe) move MAC addresses and other storage IP bits. This is the hard part! Getting this right is not necessarily a trivial task.
Products like FreeNAS and OpenFiler only target the easy stuff.
Now, you could virtualize the head node using something like Xen, KVM or VMWare, then in case of failure or administrative necessity, fail over to the other physical hardware, which *may* negate some of the hard bits, but you would need to *really* test it in the most common scenarios to see if it fits your bill.
I have a solution that is currently centered around commodity storage bricks (Dell R510), flash PCI-E controllers, 1 or 10GbE (on separate Jumbo Frame Data Tier) and Solaris + ZFS.
So far it has worked out really well. Each R510 is a box with a fair bit of memory, running OpenIndiana for ZFS/RAIDZ3/disk dedup/iSCSI. Each brick is fully populated and in a RAIDZ2 configuration with 1 hot spare. Some have SSDs; most have SAS or SATA. I export this storage pool as a single iSCSI target, and I attach each of these targets to the SAN pool and provision from there.
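Roughly, each brick gets built like this (device names, sizes and the GUID are illustrative, not my exact config):

zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 spare c1t6d0
zfs set dedup=on tank                         # disk dedup
zfs create -V 2T tank/lun0                    # a zvol to export
sbdadm create-lu /dev/zvol/rdsk/tank/lun0     # make it a COMSTAR logical unit
stmfadm add-view 600144f000000000000000000000cafe   # the GUID sbdadm printed
itadm create-target                           # the iSCSI target the SAN side attaches to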
I have two VMWare physical machines which are identically configured. If I need to perform administrative maintenance on the boxes I can migrate the host over to the other machine. This works for me, but it took a really long time to develop the solution and for the cost of my time it *might* have been cheaper to just buy some package deal.
It was a hell of a lot of fun learning though. ;)
-- James A. Peltier Systems Analyst (FASNet), VIVARIUM Technical Director Simon Fraser University - Burnaby Campus Phone : 778-782-6573 Fax : 778-782-3045 E-Mail : jpeltier@sfu.ca Website : http://www.fas.sfu.ca | http://vivarium.cs.sfu.ca http://blogs.sfu.ca/people/jpeltier MSN : subatomic_spam@hotmail.com
On 11/08/10 4:29 PM, James A. Peltier wrote:
You then need a method for dealing with the high availability aspect. You need to be able to fence the storage while a fail-over is taking place. You need to (maybe) move MAC addresses and other storage IP bits. This is the hard part! Getting this right is not necessarily a trivial task.
Products like FreeNAS and OpenFiler only target the easy stuff.
The -really- hard part is maintaining write cache coherency across redundant storage servers. That's what separates the pros from the toys.
BTW, the product the OP was asking about, Quantum StorNext: I did a bit of reading on that, and it's a distributed network storage system. Quantum is selling it as "Data Sharing and Archive" software; it's not a conventional SAN at all. It looks like it's being sold to the TV post-production industry, using Apple Xsan and Xserve based storage. Quantum acquired it when they picked up ADIC, who in turn had bought the original company that developed it.
On 11/8/10 6:29 PM, James A. Peltier wrote:
I have a solution that is currently centered around commodity storage bricks (Dell R510), flash PCI-E controllers, 1 or 10GbE (on separate Jumbo Frame Data Tier) and Solaris + ZFS.
So far it has worked out really well. Each R510 is a box with a fair bit of memory, running OpenIndiana for ZFS/RAIDZ3/disk dedup/iSCSI. Each brick is fully populated and in a RAIDZ2 configuration with 1 hot spare. Some have SSDs; most have SAS or SATA. I export this storage pool as a single iSCSI target, and I attach each of these targets to the SAN pool and provision from there.
I have two VMWare physical machines which are identically configured. If I need to perform administrative maintenance on the boxes I can migrate the host over to the other machine. This works for me, but it took a really long time to develop the solution and for the cost of my time it *might* have been cheaper to just buy some package deal.
It was a hell of a lot of fun learning though. ;)
Did you look at Nexentastor for this? You might need the commercial version for a fail-over set but I think the basic version is free up to a fairly large size.
On 11/8/10 6:29 PM, James A. Peltier wrote:
Did you look at Nexentastor for this? You might need the commercial version for a fail-over set but I think the basic version is free up to a fairly large size.
12T, IIRC. That's not exactly great, IMO. You get that with a RAID10 over two populated 16-bay Promise chassis.
Rainer
On 11/9/10 2:53 AM, rainer@ultra-secure.de wrote:
Did you look at Nexentastor for this? You might need the commercial version for a fail-over set but I think the basic version is free up to a fairly large size.
12T, IIRC. That's not exactly great, IMO. You get that with a RAID10 over two populated 16-bay Promise chassis.
Ummm, OK - I guess times have changed... Is the commercial pricing reasonable if you go larger compared to turnkey hardware?
On 11/9/10 2:53 AM, rainer@ultra-secure.de wrote:
Did you look at Nexentastor for this? You might need the commercial version for a fail-over set but I think the basic version is free up to a fairly large size.
12T, IIRC. That's not exactly great, IMO. You get that with a RAID10 over two populated 16-bay Promise chassis.
Ummm, OK - I guess times have changed... Is the commercial pricing reasonable if you go larger compared to turnkey hardware?
I guess it depends on your definition of "reasonable" ;-) The smallest silver-edition license you can buy is 1100 bucks one-time and 65 for the renewal every year.
Personally, I would really try to find the funding for a "real" Sun Open/Integrated Storage box, even if Oracle has probably hiked the prices by now. Those S7000s can do everything (NFS, CIFS, iSCSI, FC), and very fast.
Cost is per TB. Would kill me here when one user occupies 150TB just themselves.
Perhaps FreeNAS would fit the bill?
Thanks for the suggestions (others also), but I don't believe it'll do. We need to be able to access the file system directly via FC so we can lock files across systems. Pretty much like Xsan, but not on Apple. Xsan is really StorNext from Quantum, but at half the price per node. So we are searching for an alternative to Xsan, on Linux.
For those who don't know Xsan: you can access a Fibre Channel volume directly and simultaneously among many client computers or servers. Access, locking and other tasks are handled by a metadata controller that is responsible for keeping things together. No controller, no volume; hence a failover controller is needed.
So from what I've read so far, I'll be stuck with StorNext.
Some nodes have dedicated volumes on the Fibre Channel network, and I believe from what I've read that I could replicate without too much difficulty what we've done with the GUID partition and Apple label volumes.
Thanks, Nicolas
On Tue, Nov 9, 2010 at 4:36 AM, Nicolas Ross rossnick-lists@cybercat.ca wrote:
Thanks for the suggestions (others also), but I don't believe it'll do. We need to be able to access the file system directly via FC so we can lock files across systems. Pretty much like Xsan, but not on Apple. [...] So we are searching for an alternative to Xsan, on Linux.
Take a look @ Gluster / GlusterFS, it may just do what you need?
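Something like this gets you a replicated volume that every client mounts directly (server and volume names are made up):

gluster volume create shared replica 2 server1:/export/brick1 server2:/export/brick1
gluster volume start shared
mount -t glusterfs server1:/shared /mnt/shared    # on each client

It runs over Ethernet rather than FC, but it does do POSIX locking across clients.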
On Mon, 8 Nov 2010 at 9:36pm, Nicolas Ross wrote
Thanks for the suggestions (others also), but I don't believe it'll do. We need to be able to access the file system directly via FC so we can lock files across systems. Pretty much like Xsan, but not on Apple. Xsan is really StorNext from Quantum, but at half the price per node. So we are searching for an alternative to Xsan, on Linux.
For those who don't know Xsan: you can access a Fibre Channel volume directly and simultaneously among many client computers or servers. Access, locking and other tasks are handled by a metadata controller that is responsible for keeping things together. No controller, no volume; hence a failover controller is needed.
Have you looked at Red Hat's GFS? That seems to fit at least a portion of your needs (I don't use it, so I don't know all that it does).
On 11/09/2010 12:13 PM, Joshua Baker-LePain wrote:
Have you looked at Red Hat's GFS? That seems to fit at least a portion of your needs (I don't use it, so I don't know all that it does).
Good point Joshua,
I was reading this thread and wondering how come no one brought up the fact that you can achieve the entire desired feature set just using the components already included in CentOS-5.
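For example, once cman and the cluster.conf are in place, putting GFS2 on a shared FC LUN is essentially just this (cluster name, device and journal count are placeholders):

mkfs.gfs2 -p lock_dlm -t mycluster:gfsvol -j 12 /dev/mapper/san_lun1
mount -t gfs2 /dev/mapper/san_lun1 /mnt/shared    # on every node

(-j is the number of journals; you need one per node that will mount the filesystem.)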
- KB
On Tue, Nov 9, 2010 at 2:35 PM, Karanbir Singh mail-lists@karan.org wrote:
On 11/09/2010 12:13 PM, Joshua Baker-LePain wrote:
Have you looked at Red Hat's GFS? That seems to fit at least a portion of your needs (I don't use it, so I don't know all that it does).
Good point Joshua,
I was reading this thread and wondering how come no one brought up the fact that you can achieve the entire desired feature set just using the components already included in CentOS-5.
- KB
KB, I think the OP is looking for the nice set of userland tools that were included with OS X Server.
KB, I think the OP is looking for the nice set of userland tools that were included with OS X Server.
Pretty much.
Since we were about to purchase about 8 new Xserves to build a new Xsan on top of an ActiveRAID enclosure with 16 1 TB disks as our new production environment, we are exploring other possibilities, open source (preferably) or not.
I took a quick look at Red Hat's GFS, and it seems promising and does pretty much what Xsan can do. I'll dig more into this.
Regards,
On 11/09/2010 12:13 PM, Joshua Baker-LePain wrote:
Have you looked at Red Hat's GFS? That seems to fit at least a portion of your needs (I don't use it, so I don't know all that it does).
Good point Joshua,
I was reading this thread and wondering how come no one brought up the fact that you can achieve the entire desired feature set just using the components already included in CentOS-5.
But there is no GFS for OSX, IIRC.
Rainer
On 11/09/2010 12:40 PM, rainer@ultra-secure.de wrote:
I was reading this thread and wondering how come no one brought up the fact that you can achieve the entire desired feature set just using the components already included in CentOS-5.
But there is no GFS for OSX, IIRC.
The last comment from Nicolas indicates he's looking at block device level support from a single remote storage setup.
If you need a filesystem, use whatever you want and whatever works for your platform :)
Have you looked at Red Hat's GFS? That seems to fit at least a portion of your needs (I don't use it, so I don't know all that it does).
I've spent the better part of the last day reading documentation on GFS2 on Red Hat's site.
My god, that's pretty much what I'm looking for... To the point that I'll probably be ordering a pair of Intel 1U rack servers, 2 LSI Fibre Channel cards and a Fibre Channel switch to begin experimenting with this...
The documentation is very technical, and I'm OK with that, but it seems to miss some starting points. For instance, there's a part about the required number of journals to create and the size of those, but I cannot find a suggested size or any rule of thumb for those...
That made my day ;-)
On 11/09/2010 08:32 PM, Nicolas Ross wrote:
The documentation is very technical, and I'm OK with that, but it seems to miss some starting points. For instance, there's a part about the required number of journals to create and the size of those, but I cannot find a suggested size or any rule of thumb for those...
The linux-cluster mailing list is super friendly, has both developers and consumers of the entire RHCS & associated packages, and is CentOS friendly :) I seriously recommend that anyone looking to do any sort of work with this toolchain be on that list.
- KB
The linux-cluster mailing list is super friendly, has both developers and consumers of the entire RHCS & associated packages, and is CentOS friendly :) I seriously recommend that anyone looking to do any sort of work with this toolchain be on that list.
Thanks, I'll surely make a visit there. But there's not much activity...
The linux-cluster mailing list is super friendly, has both developers and consumers of the entire RHCS & associated packages, and is CentOS friendly :) I seriously recommend that anyone looking to do any sort of work with this toolchain be on that list.
Thanks, I'll surely make a visit there. But there's not much activity...
I was looking at marc.info's linux-cluster mailing list; the last post was from 2009. Maybe that was an old list... I found the linux-cluster list on Red Hat's site; there is much more activity there...
On another note, on the same subject (Xserves being discontinued): one feature we use heavily on our OS X Server is the ability to load / unload periodic jobs with launchd.
With it we're able to schedule jobs, let's say every 5 minutes, and so on. One could say I could do something like "*/5 * * * * /path/to/job" in crontab. True, but the big advantage of launchd in that matter is that it's 5 minutes between jobs. So if the job takes 6 minutes, we will never have the same job running twice at the same time.
We even have a job that is scheduled to run every 60 seconds, but can take 2 hours to complete.
Is there any scheduler under Linux that approaches this?
Thanks,
On 11/11/2010 2:32 PM, Nicolas Ross wrote:
On another note, on the same subject (Xserves being discontinued): one feature we use heavily on our OS X Server is the ability to load / unload periodic jobs with launchd.
With it we're able to schedule jobs, let's say every 5 minutes, and so on. One could say I could do something like "*/5 * * * * /path/to/job" in crontab. True, but the big advantage of launchd in that matter is that it's 5 minutes between jobs. So if the job takes 6 minutes, we will never have the same job running twice at the same time.
We even have a job that is scheduled to run every 60 seconds, but can take 2 hours to complete.
Is there any scheduler under Linux that approaches this?
The simple-minded approach, if you just want to serialize them, is to run a script that does the job, then schedules another run of itself with 'at now' or 'at now + interval'. There are much more complex schedulers too. Depending on the kind of jobs, you might roll your own cross-platform job manager starting with Hudson. It is really intended to do software build jobs with slave agents on different machines, but it will run any job you give it, and there are a large number of plug-in extensions. http://hudson-ci.org/
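A minimal sketch of the 'at' idea (the job path is made up):

#!/bin/sh
# the next run is queued only once this one has finished,
# so runs can never overlap
/usr/local/bin/do_work
echo "$0" | at now + 5 minutes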
On Thu, Nov 11, 2010, Nicolas Ross wrote:
On another note, on the same subject (Xserves being discontinued): one feature we use heavily on our OS X Server is the ability to load / unload periodic jobs with launchd.
With it we're able to schedule jobs, let's say every 5 minutes, and so on. One could say I could do something like "*/5 * * * * /path/to/job" in crontab. True, but the big advantage of launchd in that matter is that it's 5 minutes between jobs. So if the job takes 6 minutes, we will never have the same job running twice at the same time.
We even have a job that is scheduled to run every 60 seconds, but can take 2 hours to complete.
Is there any scheduler under Linux that approaches this?
There are various ways of handling this type of problem. One consideration is whether it's OK for a job to start if the previous job has not completed. This is application specific, and I don't know of any scheduler that does this (enlighten me if there is :-).
I have seen cases of daily processing that do things like update the ``locate'' database which may well not complete within 24 hours on large file systems. Without checking for completion of the previous day's run, this can end up creating problems.
For shell scripting, we often use the ``shlock'' program which I got originally from the ``inn'' news software. There's a perl module LockFile::Simple that handles this for perl, and I've hacked a python implementation of that module. These all write the pid of the controlling process to a lockfile which can be read to test for stale jobs if the original job didn't properly remove its lock file.
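Our shell scripts typically start out something like this (lock path and job are illustrative):

#!/bin/sh
LOCK=/var/run/myjob.lock
if shlock -p $$ -f "$LOCK"; then
    # got the lock; do the real work
    /usr/local/bin/do_work
    rm -f "$LOCK"
else
    echo "myjob already running, pid `cat $LOCK`" >&2
    exit 1
fi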
Bill
On 11/11/10 12:32 PM, Nicolas Ross wrote:
We even have a job that is scheduled to run every 60 seconds, but can take 2 hours to complete.
Is there any scheduler under Linux that approaches this?
You don't even really need a scheduler for that.
put the job in a loop like...
while true; do
    your stuff
    sleep 60
done
On 11/11/2010 03:45 PM, John R Pierce wrote:
put the job in a loop like...
while true; do
    your stuff
    sleep 60
done
Sure, but you also need to start the loop and make sure it doesn't die. You could use a script like this to repeat a script and then wait:
---
#!/bin/sh
# repeat.sh: run a command, then schedule the next run of itself
delay="$1"
shift

# run the job
"${@}"

# queue the next run only after the job has finished
at now + "$delay" <<EOF
"$0" "$delay" "${@}"
EOF
---
Run "repeat.sh 5m /path/to/whatever -args". The script will run that script and args, then schedule itself to run again in at. The script takes care of both running the job you specify and inserting itself into the system's scheduler.
Sure, but you also need to start the loop and make sure it doesn't die. You could use a script like this to repeat a script and then wait:
#!/bin/sh
# repeat.sh: run a command, then schedule the next run of itself
delay="$1"
shift

# run the job
"${@}"

# queue the next run only after the job has finished
at now + "$delay" <<EOF
"$0" "$delay" "${@}"
EOF
Run "repeat.sh 5m /path/to/whatever -args". The script will run that script and args, then schedule itself to run again in at. The script takes care of both running the job you specify and inserting itself into the system's scheduler.
That is clever... I will certainly retain that idea. I may also make something basic in PHP/MySQL...
Thanks for the tip...
Can't wait to install CentOS 6. I'll try RHEL 6 Beta 2 on the first two nodes I will receive tomorrow, to start playing with directory services and clustering.
On Thu, Nov 11, 2010 at 05:39:15PM -0800, Gordon Messmer wrote:
On 11/11/2010 03:45 PM, John R Pierce wrote:
put the job in a loop like...
while true; do
    your stuff
    sleep 60
done
Sure, but you also need to start the loop and make sure it doesn't die.
Put in /etc/inittab:
ms:2345:respawn:/path/to/my/loop_script
(where "ms" is unique).
If the loop dies then init will restart it for you.
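The loop script itself can stay trivial (job path is made up):

#!/bin/sh
# /path/to/my/loop_script -- init respawns this if it ever exits
while true; do
    /usr/local/bin/do_work
    sleep 60
done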
while true; do
    your stuff
    sleep 60
done
Sure, but you also need to start the loop and make sure it doesn't die.
Put in /etc/inittab:
ms:2345:respawn:/path/to/my/loop_script
(where "ms" is unique).
If the loop dies then init will restart it for you.
I thought of that, and I will be needing something like this, since I have some services that need to be restarted in the event of them dying or being killed.
But I'm not that comfortable scripting a modification of the inittab to activate / deactivate services on a server-by-server basis.
From: Nicolas Ross rossnick-lists@cybercat.ca
while true; do
    your stuff
    sleep 60
done
Sure, but you also need to start the loop and make sure it doesn't die.
Put in /etc/inittab:
ms:2345:respawn:/path/to/my/loop_script
(where "ms" is unique). If the loop dies then init will restart it for you.
I thought of that, and I will be needing something like this, since I have some services that need to be restarted in the event of them dying or being killed. But I'm not that comfortable scripting a modification of the inittab to activate / deactivate services on a server-by-server basis.
I usually use a lock file and put an entry in cron (to try to relaunch the script every xxx in case it crashed). But that means taking care of removing the leftover lock file if the script crashed... Something like:
if [ -f "$LOCK_FILE" ]; then PID=`cat "$LOCK_FILE"` if [ ! -d "/proc/$PID" ]; then rm -f "$LOCK_FILE" if [ -f "$STOP_FILE" ]; then rm -f "$STOP_FILE"; fi exit 1 fi exit 1 else echo $$ > "$LOCK_FILE" fi
while [ ! -f "$STOP_FILE" ]; do your stuff sleep 60 done
rm -f "$STOP_FILE" rm -f "$LOCK_FILE"
JD
I thought of that, and I will be needing something like this, since I have some services that need to be restarted in the event of them dying or being killed.
But I'm not that comfortable scripting a modification of the inittab to activate / deactivate services on a server-by-server basis.
Have a look at monit. We are using it for restarting critical services (heartbeat, postfix, apache) if they fail, and it has been simple to set up and stable.
The most useful thing is that it emails you whenever the PID of the process changes or it has to restart it.
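A minimal stanza looks about like this (pidfile and commands are the stock CentOS ones; adjust to taste):

check process httpd with pidfile /var/run/httpd.pid
    start program = "/sbin/service httpd start"
    stop program = "/sbin/service httpd stop"

monit then restarts httpd if the process disappears and (with 'set alert' configured) mails you about it.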
On Tue, 2010-11-09 at 15:32 -0500, Nicolas Ross wrote:
Have you looked at Red Hat's GFS? That seems to fit at least a portion of your needs (I don't use it, so I don't know all that it does).
I've spent the better part of the last day reading documentation on GFS2 on Red Hat's site.
My god, that's pretty much what I'm looking for... To the point that I'll probably be ordering a pair of Intel 1U rack servers, 2 LSI Fibre Channel cards and a Fibre Channel switch to begin experimenting with this...
The documentation is very technical, and I'm OK with that, but it seems to miss some starting points. For instance, there's a part about the required number of journals to create and the size of those, but I cannot find a suggested size or any rule of thumb for those...
That made my day ;-)
---- http://sources.redhat.com/cluster/doc/nfscookbook.pdf
It will give you a better view than the docs themselves will. It centers on NFS, but the basic principles are all there.
John
On 11/9/2010 2:32 PM, Nicolas Ross wrote:
Have you looked at Red Hat's GFS? That seems to fit at least a portion of your needs (I don't use it, so I don't know all that it does).
I've spent the better part of the last day reading documentation on GFS2 on Red Hat's site.
My god, that's pretty much what I'm looking for... To the point that I'll probably be ordering a pair of Intel 1U rack servers, 2 LSI Fibre Channel cards and a Fibre Channel switch to begin experimenting with this...
The documentation is very technical, and I'm OK with that, but it seems to miss some starting points. For instance, there's a part about the required number of journals to create and the size of those, but I cannot find a suggested size or any rule of thumb for those...
That made my day ;-)
Do you have to have something that looks exactly like a file system, or could your application use something that's more cloud-database-like: http://www.basho.com/Riak.html ? It doesn't seem to have any concept of locking, though.
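Its interface is plain HTTP, so storing and fetching a value is just (host and bucket are made up):

curl -X PUT -H "Content-Type: text/plain" \
    -d "some value" http://riak.example.com:8098/riak/jobs/job1
curl http://riak.example.com:8098/riak/jobs/job1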