Has anybody tried or knows if it is possible to create a MD RAID1 device using networked iSCSI devices like those created using OpenFiler?
The idea I'm thinking of here is to use two OpenFiler servers, each with physical drives in RAID 1, to export iSCSI virtual devices, and run CentOS guest VMs off an MD RAID 1 device built on top of them. Theoretically, this setup would survive both a single physical drive failure and a machine failure on the storage side, with a much shorter failover time than, say, using heartbeat.
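In rough terms, something like this on the CentOS side is what I have in mind (just a sketch; the portal addresses and device names are made-up examples, and it assumes open-iscsi and mdadm):

    # log in to the LUN exported by each OpenFiler box
    iscsiadm -m discovery -t sendtargets -p 192.168.1.11
    iscsiadm -m discovery -t sendtargets -p 192.168.1.12
    iscsiadm -m node -p 192.168.1.11 --login
    iscsiadm -m node -p 192.168.1.12 --login

    # say the two LUNs appear as /dev/sdb and /dev/sdc; mirror them with md
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

    # then put a filesystem (or LVM for the VM images) on /dev/md0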
Or is this yet another stupid idea again from me? :D
On 28/06/2010 20:13, Emmanuel Noobadmin wrote:
Has anybody tried or knows if it is possible to create a MD RAID1 device using networked iSCSI devices like those created using OpenFiler?
I don't use OpenFiler, but I run an mdraid-10 (which isn't RAID 10) off locally mounted remote storage exported over iSCSI from CentOS 5 machines.
What did you try? How did you fail?
- KB
On 6/29/10, Karanbir Singh mail-lists@karan.org wrote:
On 28/06/2010 20:13, Emmanuel Noobadmin wrote:
Has anybody tried or knows if it is possible to create a MD RAID1 device using networked iSCSI devices like those created using OpenFiler?
I dont use openfiler, but I run a mdraid-10 ( which isnt raid10 ), off locally mounted, remote storage exported over iscsi from centos-5 machines.
What did you try ? how did you fail ?
I haven't tried it yet, still researching which way to go (I looked into Lustre, then GlusterFS, and now this). Coincidentally, I just googled my way to your post in the mailing list archive and was about to ask: since your setup worked, wouldn't that imply that it should also work with OpenFiler providing the iSCSI connection?
Or would openfiler even be necessary?
On 28/06/2010 20:31, Emmanuel Noobadmin wrote:
I haven't tried it yet, still researching on which way to go (looked into Lustre, then glusterFS, then now this). Coincidentally, I just
Not sure what you are trying to do here. Lustre, GlusterFS, GFS etc. solve a different problem than imported remote storage (which is essentially what iSCSI will give you).
googled my way to your post in the mailing list archive and was about to ask: since your setup worked, wouldn't that imply that it should also work with OpenFiler providing the iSCSI connection?
I don't see why not. But you don't need OpenFiler to give you iSCSI capability. CentOS 5.1+ has had the ability to export an iSCSI target itself, with all the tooling built in.
Or would openfiler even be necessary?
Depends on what you want; perhaps http://slimphpiscsipan.sourceforge.net/ might be all that is needed in your case? I prefer using tgtadm on the CLI. It also means that I can script my storage box's configs and wrap them into Puppet manifests.
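For the archives, a bare-bones tgtadm sketch of what that looks like (scsi-target-utils with tgtd running; the target name, backing device and initiator setting here are placeholders):

    tgtadm --lld iscsi --op new --mode target --tid 1 \
        --targetname iqn.2010-06.org.example:storage.disk1
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
        --backing-store /dev/vg0/lun1
    tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL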
- KB
I don't see why not. But you don't need OpenFiler to give you iSCSI capability. CentOS 5.1+ has had the ability to export an iSCSI target itself, with all the tooling built in.
AFAIK, OpenFiler is CentOS/rPath based and has a web-based administration tool, which is why some people use it.
On 6/28/2010 2:50 PM, Karanbir Singh wrote:
On 28/06/2010 20:31, Emmanuel Noobadmin wrote:
I haven't tried it yet, still researching on which way to go (looked into Lustre, then glusterFS, then now this). Coincidentally, I just
Not sure what you are trying to do here. Lustre, glusterfs, gfs etc solve a different problem than imported remote storage ( which is essentially what iscsi will give you )
I think he is looking for redundant, failover remote storage (i.e. mirrored copies from different iSCSI hosts), sort of like DRBD but with both copies remote. I think it should work, but the timing might be tricky on retries vs. failure on the iSCSI connections.
On 28/06/2010 20:55, Les Mikesell wrote:
I think he is looking for redundant, failover remote storage (i.e. mirrored copies from different iSCSI hosts), sort of like DRBD but with both copies remote. I think it should work, but the timing might be tricky on retries vs. failure on the iSCSI connections.
Yes, that's what I run. We import 5 remote iSCSI connections that come from 5 different hosts and mdraid10 them locally on one machine; that's our 'storage' box. Each of the 5 machines doing the iSCSI target exports runs RAID 0 itself across 4 disks.
As long as latency is kept low, things work fine. The DM/MD stack works fairly well even when individual block devices have slightly different traits, e.g. when you have a mix of 5900 and 7200 rpm disks in the same machine.
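For reference, the local assembly is nothing exotic; assuming the five imported LUNs show up as /dev/sdb through /dev/sdf, it is roughly:

    mdadm --create /dev/md0 --level=10 --raid-devices=5 /dev/sd[b-f]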
- KB
On 6/29/10, Karanbir Singh mail-lists@karan.org wrote:
On 28/06/2010 20:55, Les Mikesell wrote:
I think he is looking for redundant, failover remote storage (i.e. mirrored copies from different iSCSI hosts), sort of like DRBD but with both copies remote. I think it should work, but the timing might be tricky on retries vs. failure on the iSCSI connections.
yes, thats what I run. We import 5 remote iscsi connections that come from 5 different hosts, and mdraid10 them locally on one machine - thats our 'storage' box. Each of the 5 machines doing the iscsi target exports run raid-0 themselves across 4 disks.
Part of my concern with such a setup is that the whole data system goes down too if the storage box dies, due to say a blown PSU or a motherboard problem. Doesn't it?
On 28/06/2010 21:30, Emmanuel Noobadmin wrote:
Part of my concern with such a setup is the whole data system goes down too if the storage box dies due to say blown PSU or motherboard problem. Doesn't it?
Depends on how you set it up. If you have 2 machines (disk nodes) exporting iSCSI and 1 machine (data node) doing the import and setting up a RAID 1, you can afford to have one of the two disk nodes down. You *can't* afford to have the data node down; that's where the filesystem lives. You can potentially have the same disks from the disk nodes imported to a standby data node, using something like DRBD over the mdraid setup. Alternatively, you can look at using a clustered filesystem and have it go X-way. But then you may as well use something like gnbd with gfs2 instead(!).
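On the data node, md copes with a disk node disappearing the same way it copes with a dead local disk; roughly (device names assumed):

    mdadm --manage /dev/md0 --fail /dev/sdc     # mark the lost iSCSI member failed
    mdadm --manage /dev/md0 --remove /dev/sdc
    # ...once the disk node and its iSCSI session are back:
    mdadm --manage /dev/md0 --add /dev/sdc
    cat /proc/mdstat                            # watch the resync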
Yes, lots of options and different ways of doing the same thing. So start at the top, make a list of all the problems you are trying to solve, then split that into 3 segments:
- Must have
- Good to have
- Don't really need
And then evaluate what solutions meet your requirements best.
- KB
On 6/29/10, Karanbir Singh mail-lists@karan.org wrote:
Depends on how you set it up, if you have 2 machines ( disk nodes ), exporting iscsi. 1 machine ( data node ) doing the import and sets up a raid1; you can afford to have one of those two machines down. You *cant* afford to have the data-node down. Thats where the filesystem lives. You can potentially have the same disks from the disk-nodes imported to a standby data node using something like drbd over the mdraid setup. Alternatively, you can look at using a clustered filesystem and have it go X way. But then you may as well use something like gnbd with gfs2 instead(!).
Looking up gfs2 was what led me to GlusterFS actually, and because GlusterFS had all the RAID stuff pointed out upfront, I stopped reading about gfs2. Googling Gluster then led to OpenFiler, which seemed like a simpler way to achieve the objectives.
Yes, lots of options and different ways of doing the same thing. So start at the top, make a list of all the problems you are trying to solve. then split that into 3 segments:
- Must have
- Good to have
- Dont really need
Must have
- low cost: clients have a budget, which is why mirroring all the machines is not an option
- data redundancy: application servers can go down, but data must not be lost/corrupted
- expandable capacity
- works with VMs
- doable by a noob admin :D
Good to have
- able to add/restore capacity without needing to take down the whole setup
- application server redundancy
- web UI for remote management
I've done mostly LVM + mdraid setups so far, hence the OpenFiler + remote iSCSI RAID route looks to fit the above and is the simplest option (fewer new things to learn/mess up), compared to most of the others, which seem to need multiple components working together.
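Roughly what I have in mind, carrying the familiar layout over (names are just examples): LVM on top of the mirrored iSCSI device, so capacity can be grown later without touching the application setup.

    pvcreate /dev/md0
    vgcreate vg_data /dev/md0
    lvcreate -L 200G -n lv_vm vg_data

    # later, a second mirrored iSCSI pair (/dev/md1) can be added to grow things
    pvcreate /dev/md1
    vgextend vg_data /dev/md1
    lvextend -L +200G /dev/vg_data/lv_vm
    resize2fs /dev/vg_data/lv_vm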
On Tuesday, June 29, 2010 04:53 AM, Emmanuel Noobadmin wrote:
On 6/29/10, Karanbir Singhmail-lists@karan.org wrote:
Depends on how you set it up, if you have 2 machines ( disk nodes ), exporting iscsi. 1 machine ( data node ) doing the import and sets up a raid1; you can afford to have one of those two machines down. You *cant* afford to have the data-node down. Thats where the filesystem lives. You can potentially have the same disks from the disk-nodes imported to a standby data node using something like drbd over the mdraid setup. Alternatively, you can look at using a clustered filesystem and have it go X way. But then you may as well use something like gnbd with gfs2 instead(!).
Looking up gfs2 was what lead me to glusterFS actually and because glusterFS had all the RAID stuff pointed out upfront, I stopped reading about gfs2. Googling gluster then lead to openFiler which then seemed like a simpler way to achieve the objectives.
No ACLs on Gluster... but I suppose you have no need for ACL support...
On 6/29/10, Karanbir Singh mail-lists@karan.org wrote:
On 28/06/2010 20:31, Emmanuel Noobadmin wrote:
I haven't tried it yet, still researching on which way to go (looked into Lustre, then glusterFS, then now this). Coincidentally, I just
Not sure what you are trying to do here. Lustre, glusterfs, gfs etc solve a different problem than imported remote storage ( which is essentially what iscsi will give you )
As my username suggests, I don't know what I'm doing. Server admin/setup is secondary to my primary job of writing web-based applications.
I'm trying to figure out a setup that would allow me to add VM guests on more than two VM servers and provide data redundancy to these without having to add physical machines unnecessarily.
With just two machines, I could simply mirror them. But if I have more VM guests than they can comfortably handle (or more than I am comfortable with), 3 servers seem a bit more tricky. Also, if I need more storage capacity than processing power, which is usually the case due to backup and history data, each physical server has its limits.
So I figured I might as well try to find a single setup (less administrative headache for the amateur admin) with a VM/data cluster that can survive a single drive failure per machine as well as a single machine failure. I would then use this setup for the current client's needs to try things out, before the next project, which will really need the flexibility and redundancy.
I don't see why not. But you don't need OpenFiler to give you iSCSI capability. CentOS 5.1+ has had the ability to export an iSCSI target itself, with all the tooling built in.
I'm not sure yet, since OpenFiler seems to provide a few more options: if I'm not mistaken, the ability to do soft RAID 5/6 across multiple machines and remote block replication. So theoretically, I'm thinking that with OpenFiler presenting a frontend to the application servers, I could increase storage without having to mess with the application server setup.
i.e. they would still see a pair of iSCSI devices from the OpenFiler servers, which in turn RAID 5/6 the iSCSI servers (physical or VM initially) that provide the storage, and capacity can be increased at any time, transparently to the application servers.
On 6/28/2010 3:25 PM, Emmanuel Noobadmin wrote:
I don't see why not. But you don't need OpenFiler to give you iSCSI capability. CentOS 5.1+ has had the ability to export an iSCSI target itself, with all the tooling built in.
I'm not sure yet since openFiler seems to provide a few more options, if I'm not mistaken the ability to soft RAID 5/6 on multiple machines and remote block duplication. So theoretically, I'm thinking with openFiler presenting a frontend to the application servers, I could increase storage without having to mess with the application server setup.
If you are looking at openfiler, you might also want to consider nexentastor. Their community edition is free for up to 12TB of storage. It's an OpenSolaris/ZFS based system with web management, able to export cifs/nfs/ftp/sftp/iscsi with support for snapshots, deduplication, compression, etc. I haven't used it beyond installing in a VM and going through some options, but it looks more capable than anything else I've seen for free.
On 6/29/10, Les Mikesell lesmikesell@gmail.com wrote:
If you are looking at openfiler, you might also want to consider nexentastor. Their community edition is free for up to 12TB of storage. It's an OpenSolaris/ZFS based system with web management, able to export cifs/nfs/ftp/sftp/iscsi with support for snapshots, deduplication, compression, etc. I haven't used it beyond installing in a VM and going through some options, but it looks more capable than anything else I've seen for free.
Thanks for the info; it looks quite interesting and seems like a simpler option, given the claim of an easy setup wizard doing things in 15 minutes.
The only problem is that their HA is commercial only and costs more than the entire hardware budget I've got for this. Crucially, it relies on a failover/heartbeat kind of arrangement. According to some sources, the failover delay of a few seconds will cause certain services/apps to fail/lock up. Not an issue for the immediate need, but a major no-no for the other project I have in the pipeline.
Which is why I was thinking of MD RAID 1 on the application server side: no failover delay if one of the data servers fails to respond in time.
On Tuesday, June 29, 2010 10:53 AM, Emmanuel Noobadmin wrote:
On 6/29/10, Les Mikeselllesmikesell@gmail.com wrote:
If you are looking at openfiler, you might also want to consider nexentastor. Their community edition is free for up to 12TB of storage. It's an OpenSolaris/ZFS based system with web management, able to export cifs/nfs/ftp/sftp/iscsi with support for snapshots, deduplication, compression, etc. I haven't used it beyond installing in a VM and going through some options, but it looks more capable than anything else I've seen for free.
Thanks for the info, it looks quite interesting and seems like a simpler option given the claim of easy setup wizard doing things in 15 minutes.
The only problem is their HA is commercial only and costs more than the entire hardware budget I've got for this. Crucially, it relies on a failover/heartbeat kind of arrangement. According to some sources, the failover delay of a few seconds will cause certain services/apps to fail/lock up. Not an issue for the immediate need but will be a major no no for the other project I have in the pipeline.
So install Nexenta CP2/CP3 then. That's completely free and ZFS has its own web interface...
On 6/29/10, Christopher Chan christopher.chan@bradbury.edu.hk wrote:
The only problem is their HA is commercial only and costs more than the entire hardware budget I've got for this. Crucially, it relies on a failover/heartbeat kind of arrangement. According to some sources, the failover delay of a few seconds will cause certain services/apps to fail/lock up. Not an issue for the immediate need but will be a major no no for the other project I have in the pipeline.
So install Nexenta CP2/CP3 then. That's completely free and ZFS has its own web interface...
Sorry, I'm a little braindead by now, but how would the Nexenta Core Platform (I assume this is the CP you are referring to) solve the failover delay problem, since it would still be relying on heartbeat to do the failover monitoring, right?
Or do you mean to use NCP for the storage units, relying on ZFS to do the disk management and to export iSCSI interfaces for the application server to use as MD RAID 1 members?
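i.e. on each NCP/OpenSolaris storage box, something roughly like this (pool, volume and disk names are made up; this uses the older shareiscsi property, whereas newer builds would use COMSTAR instead):

    zpool create tank raidz2 c0t1d0 c0t2d0 c0t3d0 c0t4d0
    zfs create -V 500G tank/lun0
    zfs set shareiscsi=on tank/lun0

and then the CentOS application server logs in to both boxes with iscsiadm and mirrors the two LUNs with mdadm as above.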
On Tuesday, June 29, 2010 01:23 PM, Emmanuel Noobadmin wrote:
On 6/29/10, Christopher Chanchristopher.chan@bradbury.edu.hk wrote:
The only problem is their HA is commercial only and costs more than the entire hardware budget I've got for this. Crucially, it relies on a failover/heartbeat kind of arrangement. According to some sources, the failover delay of a few seconds will cause certain services/apps to fail/lock up. Not an issue for the immediate need but will be a major no no for the other project I have in the pipeline.
So install Nexenta CP2/CP3 then. That's completely free and ZFS has its own web interface...
Sorry, a little braindead by now but how would the Nexenta Core Platform (I assume this is the CP you are referring to), solve the failover delay problem since it would still be relying on HB to do failover monitoring right?
Or do you mean to use NCP for the storage units, relying on ZFS to do the disk management and export iSCSI interfaces to the application to use as MD RAID 1 members?
raid1/iscsi if you have a single host accessing the data or gluster if you have more than one host accessing the data...
Nexentastor has a HA distributed filesystem? Gotta take a closer look at that.
On 6/29/10, Christopher Chan christopher.chan@bradbury.edu.hk wrote:
raid1/iscsi if you have a single host accessing the data or gluster if you have more than one host accessing the data...
This is starting to look really complicated: NCP storage units on ZFS -> iSCSI to a Gluster unit on ext3 (since Gluster doesn't do ZFS) -> multiple application hosts.
Wouldn't using both NCP/ZFS with Gluster be redundant, since Gluster does clustered storage to begin with?
I think I might be overcomplicating things here.
Reading up more on Gluster, it seems that I could simply put a Gluster client on the application server, mount a volume mirrored across two Gluster servers, and let Gluster handle the failover transparently.
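i.e. something roughly like this (using the gluster CLI from the newer 3.1-style releases; the 3.0.x series used glusterfs-volgen config files instead, and the hostnames/paths here are placeholders):

    # on one of the two storage servers
    gluster peer probe server2
    gluster volume create vmdata replica 2 \
        server1:/export/brick1 server2:/export/brick1
    gluster volume start vmdata

    # on the application server, via the native client, which talks to both
    # bricks and handles the replication/failover itself
    mount -t glusterfs server1:/vmdata /mnt/vmdata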
Emmanuel Noobadmin wrote:
On 6/29/10, Christopher Chan christopher.chan@bradbury.edu.hk wrote:
raid1/iscsi if you have a single host accessing the data or gluster if you have more than one host accessing the data...
This is starting to look really complicated with NCP Storage units on zfs -> iscsi to gluster unit ext3 since gluster doesn't do zfs -> multiple application host.
Gluster doesn't care about the underlying filesystem... it doesn't support ACLs yet for a reason.
Wouldn't using both ncp/zfs with gluster be redundant since gluster does cluster storage to begin with?
??? what cluster storage on ncp???
I think I might be overcomplicating things here.
Reading up more on Gluster, it seems that I could simply put a Gluster client on the application server, mount a volume mirrored across two Gluster servers, and let Gluster handle the failover transparently.
/me nods
On 6/29/10, Chan Chung Hang Christopher christopher.chan@bradbury.edu.hk wrote:
gluster don't care about underlying filesystem...it don't support acl yet for a reason
Could you elaborate on that? Although at the moment I don't appear to have a need for ACL on the storage, it is always good to be aware of any potential pitfalls.
I think I might be overcomplicating things here.
Reading up more on Gluster, it seems that I could simply put a Gluster client on the application server, mount a volume mirrored across two Gluster servers, and let Gluster handle the failover transparently.
/me nods
Thanks for the confirmation :)
Also, just for the benefit of whoever else looks at the archives in the future: I just found this link, which seems to confirm that Gluster can be used to share active/active failover storage to multiple machines by running it on the machines themselves, and it gives the steps/commands to do it on cloud VMs.
http://rackerhacker.com/2010/05/27/glusterfs-on-the-cheap-with-rackspaces-cl...
Emmanuel Noobadmin wrote:
On 6/29/10, Chan Chung Hang Christopher christopher.chan@bradbury.edu.hk wrote:
gluster don't care about underlying filesystem...it don't support acl yet for a reason
Could you elaborate on that? Although at the moment I don't appear to have a need for ACL on the storage, it is always good to be aware of any potential pitfalls.
No POSIX ACL support, let alone NFSv4 ACL support. That's the sole reason why I have not yet gone with Linux Samba frontends and OpenSolaris ZFS backends glued together with Gluster. It does support POSIX permissions, but that is not specific enough nor close enough to NTFS security.
Other than that, I would have given GlusterFS a go a long time ago.
I think I might be overcomplicating things here.
Reading up more on Gluster, it seems that I could simply put a Gluster client on the application server, mount a volume mirrored across two Gluster servers, and let Gluster handle the failover transparently.
/me nods
Thanks for the confirmation :)
Also, just for the benefit of whoever else looks at the archives in the future: I just found this link, which seems to confirm that Gluster can be used to share active/active failover storage to multiple machines by running it on the machines themselves, and it gives the steps/commands to do it on cloud VMs.
http://rackerhacker.com/2010/05/27/glusterfs-on-the-cheap-with-rackspaces-cl...
Define cheap. Like these...er...hmm...creative chums here?
http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-chea...
Or how about USD 7850 for a 4U, 36-bay box (loaded with 12 x 1TB - not going full out :-( ), with a multipathing dual SAS host controller + SAS backplane, a 4-port gigabit Intel NIC + a dual gigabit Intel NIC, 16GB ECC DDR2 RAM, multiple HT3 links and dual 6-core CPUs? A future 45-bay 4U SAS storage box is possible too.
No, not putting CentOS 5 on that. :-( Not trusting RAID 5/6; raidz2/raidz3 it is going to be.
On 6/29/10, Chan Chung Hang Christopher christopher.chan@bradbury.edu.hk wrote:
Define cheap. Like these...er...hmm...creative chums here?
http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-chea...
Or how about 7850USD for a 4U, 36 bay ( loaded with 12 x 1TB - not going full out :-( ), multipathing dual SAS host controller + sas backplane, 4 port GB Intel NIC + dual GB Intel NIC, 16GB ECC DDR2 RAM, multiple HT3 links, dual 6 core cpu box? Future 45 bay 4U SAS storage box possible too.
LOL, I was expecting that to come up soon. But unfortunately, as mentioned previously somewhere, the entire hardware budget acceptable to the client is less than USD 5000 for the application server and expandable redundant storage. :(
I'm just thankful they paid for a Gigabit switch previously!
No, not putting Centos 5 on that. :-( Not trusting raid5/6. raidz2/raidz3 it is going to be.
Solaris?
On Tuesday, June 29, 2010 11:25 PM, Emmanuel Noobadmin wrote:
On 6/29/10, Chan Chung Hang Christopher christopher.chan@bradbury.edu.hk wrote:
Define cheap. Like these...er...hmm...creative chums here?
http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-chea...
Or how about 7850USD for a 4U, 36 bay ( loaded with 12 x 1TB - not going full out :-( ), multipathing dual SAS host controller + sas backplane, 4 port GB Intel NIC + dual GB Intel NIC, 16GB ECC DDR2 RAM, multiple HT3 links, dual 6 core cpu box? Future 45 bay 4U SAS storage box possible too.
LOL, I was expecting that to come up soon. But unfortunately as mentioned previously somewhere, my entire hardware budget acceptable to the client is less than USD5000 for the application server and expandable redundant storage. :(
So cut appropriate corners to fit. Just not like Backblaze; theirs is decidedly crap cobbled together.
I'm just thankful they paid for a Gigabit switch previously!
D-Link? :-D. I had to get D-Links when money was a bit tighter but now I have HP Procurve 9210al switches.
/me stomps on Cisco crap.
No, not putting Centos 5 on that. :-( Not trusting raid5/6. raidz2/raidz3 it is going to be.
Solaris?
Either OpenSolaris or Nexenta. Hey, I thought we were supposed to be running cheap aka freeloading?
On 6/30/10, Christopher Chan christopher.chan@bradbury.edu.hk wrote:
So cut appropriate corners to fit. Just not like Backblaze. Their's is decidedly crap hobbled together.
With the kind of budget I have to work with, the corners are already looking very rounded.
I'm just thankful they paid for a Gigabit switch previously!
D-Link? :-D. I had to get D-Links when money was a bit tighter but now I have HP Procurve 9210al switches.
/me stomps on Cisco crap.
D-Link has always been decent to me, so that's what I usually go for if available. I've heard people mention the HP ProCurves as really good stuff for many years, but I also thought Cisco was the industry standard?
No, not putting Centos 5 on that. :-( Not trusting raid5/6. raidz2/raidz3 it is going to be.
Solaris?
Either OpenSolaris or Nexenta. Hey, I thought we were supposed to be running cheap aka freeloading?
I am, though I'd actually prefer to make the client pay for things so that I can get somebody who knows the stuff to config/fix it. So it's always good, at least IMO, to know what the paid options are in case the clients ever cough up the budget for that.
But with the local SME mentality, cheap is usually the first thing they want to see, until they learn their lesson by losing data without RAID (it's amazing how many "servers" I come across without even RAID 1) or losing work without a UPS.
D-Link? :-D. I had to get D-Links when money was a bit tighter but now I have HP Procurve 9210al switches.
/me stomps on Cisco crap.
D-Link had always been decent to me so that's what I usually go for if available. I've heard people mentioned the HP ProCurves for many years as really good stuff but I also thought Cisco was the industry standard?
I've heard good things about the HP ProCurves as well. Both my local vendors claim good success with them, and the lifetime warranty isn't so shabby either.
Barring that, I've had good luck with Linksys over the years. We just recently installed a 48-port gigabit switch in the office that set us back around $900. The equivalent ProCurve was priced at around $3000.
On Wednesday, June 30, 2010 11:43 AM, Drew wrote:
D-Link? :-D. I had to get D-Links when money was a bit tighter but now I have HP Procurve 9210al switches.
/me stomps on Cisco crap.
D-Link had always been decent to me so that's what I usually go for if available. I've heard people mentioned the HP ProCurves for many years as really good stuff but I also thought Cisco was the industry standard?
I've heard good things about the HP Procurve's as well. Both my local vendors claim good success with them and the lifetime warrranty isn't so shabby either.
Barring that I've had good luck with Linksys over the years. We just recently installed a 48port gigabit switch in the office that set us back around $900. Equivalent Procurve was priced at around $3000.
???
For $3000 I can get PoE+, 48 port gigabit + two expansion slots (empty), vlan, QoS, routing, yada, yada.
What are you getting for your Linksys? Model?
Barring that I've had good luck with Linksys over the years. We just recently installed a 48port gigabit switch in the office that set us back around $900. Equivalent Procurve was priced at around $3000.
???
For $3000 I can get PoE+, 48 port gigabit + two expansion slots (empty), vlan, QoS, routing, yada, yada.
What are you getting for your Linksys? Model?
Linksys SLM2048 SMB Smart switch. 48port gigabit + 2 SFP slots, VLAN, QoS, etc.
I have to make a slight correction to my price. The equivalent ProCurve we could find was the 2510G-48, which was quoted at around $1800. The $3000 I mixed up was for a Cisco managed switch.
It's been one of those days. ;)
On Wednesday, June 30, 2010 01:14 PM, Drew wrote:
Barring that I've had good luck with Linksys over the years. We just recently installed a 48port gigabit switch in the office that set us back around $900. Equivalent Procurve was priced at around $3000.
???
For $3000 I can get PoE+, 48 port gigabit + two expansion slots (empty), vlan, QoS, routing, yada, yada.
What are you getting for your Linksys? Model?
Linksys SLM2048 SMB Smart switch. 48port gigabit + 2 SFP slots, VLAN, QoS, etc.
I have to make a slight correction to my price. The equivalent Procurve we could find was the 2510G-48 which was quoted at around $1800. The $3000 I mixed up was for a Cisco managed switch.
It's been one of those days. ;)
Maybe not... it looks like the 2910al 48-port gigabit switch goes for a rather hefty price according to the ProCurve page... $4500?!?!?!
I don't remember paying anywhere near that amount and for a PoE+ version at that.
On Wednesday, June 30, 2010 10:53 AM, Emmanuel Noobadmin wrote:
On 6/30/10, Christopher Chanchristopher.chan@bradbury.edu.hk wrote:
So cut appropriate corners to fit. Just not like Backblaze. Their's is decidedly crap hobbled together.
With the kind of budget I have to work with, things are already looking very rounded already.
That brings back memories... running dozens of PIII boxes to handle 200 million email transactions and between 1 million and 4 million actual deliveries daily.
I'm just thankful they paid for a Gigabit switch previously!
D-Link? :-D. I had to get D-Links when money was a bit tighter but now I have HP Procurve 9210al switches.
/me stomps on Cisco crap.
D-Link had always been decent to me so that's what I usually go for if available. I've heard people mentioned the HP ProCurves for many years as really good stuff but I also thought Cisco was the industry standard?
The guys selling the HP Procurves were surprised that I could name the stuff. :-D
With D-Link it really depends, I think. Back in the days when I was doing MTA admin, the network chum called the D-Links double-dealing switches. But then, he had trouble with a leaky 3Com switch too, so in the end it was rather hard to pin down a good maker.
One of my managers then called Cisco switches crapco switches. If you don't have a support contract, then they probably would be. There are hundreds of firmware versions, and running the latest is not always the solution if you are lucky enough to get problems. Nah, Cisco stuff is way too expensive compared to other stuff available. Even if I had to run a hybrid data/VoIP multi-vlan network, I wouldn't go Cisco.
/me prepares hydrogen bomb for HQ that more or less mandated a Cisco VoIP solution over asterisk + sip phones.
No, not putting Centos 5 on that. :-( Not trusting raid5/6. raidz2/raidz3 it is going to be.
Solaris?
Either OpenSolaris or Nexenta. Hey, I thought we were supposed to be running cheap aka freeloading?
I am, not that I wouldn't prefer to make the client pay for things so that I can actually get somebody who knows the thing to config/fix it. So it's always good, at least IMO, to know what the paid options are if the clients ever cough up the budget for that.
:-D. I have contemplated getting the OpenSolaris support contract, but it has apparently been axed, it seems, along with the OpenSolaris distro.
But with the local SME mentality, cheap is usually the first thing they want to see until they learnt their lesson like losing data without RAID (it's amazing how many "servers" I come across without even RAID 1) or losing work without UPS.
Sounds exactly like the mentality in Hong Kong too. I mean, even the bigger companies with Asian managers have a similar mentality. The IT department is always under-budgeted, under-manned, and public enemy number one when cost-cutting.
On Wed, Jun 30, 2010 at 11:59 AM, Christopher Chan christopher.chan@bradbury.edu.hk wrote:
Sounds exactly like the mentality in Hong Kong too. I mean, even the bigger companies with Asian managers have a similar mentality. The IT department is always the under-budgeted, under-manned and public enemy number one when cost-cutting.
Not too surprised the mentality is similar, I'm in Asia and just a few hours away by plane.
Despite my putting out cost estimates to management, they just won't accept that spending a few dollars more now would reap 10x the cost savings over the next couple of years. Somehow, they seem to prefer gambling on the possibility of paying a couple of hundred bucks for emergency service calls and maybe a grand for data recovery, rather than spending another hundred or so on an extra hard disk now.
Emmanuel Noobadmin wrote:
On Wed, Jun 30, 2010 at 11:59 AM, Christopher Chan christopher.chan@bradbury.edu.hk wrote:
Sounds exactly like the mentality in Hong Kong too. I mean, even the bigger companies with Asian managers have a similar mentality. The IT department is always the under-budgeted, under-manned and public enemy number one when cost-cutting.
Not too surprised the mentality is similar, I'm in Asia and just a few hours away by plane.
Despite putting out cost estimates to management, they just won't accept that spending a few dollars more now would reap 10x the cost savings over the next couple of years. Somehow, they seem to prefer gambling with the possibility of paying a couple of hundred bucks for emergency service calls and maybe a grand for data recovery than spending another hundred or so on an extra hard disk now.
One thing you can do on the cheap is set up nightly backups with backuppc. It can run on a machine that does something else in the daytime if necessary and its pooling and compression scheme will store about 10x the history you would expect. You need backups anyway since even complex redundancy schemes have modes of failure that can lose things.
Or, I suppose you could roll your own with rsync to a ZFS filesystem with de-dup, compression, and snapshots set up.
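The roll-your-own variant is roughly (pool, path and host names are examples; dedup needs a fairly recent pool version and plenty of RAM):

    zfs create tank/backups
    zfs set compression=on tank/backups
    zfs set dedup=on tank/backups
    rsync -aH --delete root@appserver:/srv/data/ /tank/backups/appserver/
    zfs snapshot tank/backups@$(date +%Y%m%d)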
On 6/30/10, Les Mikesell lesmikesell@gmail.com wrote:
One thing you can do on the cheap is set up nightly backups with backuppc. It can run on a machine that does something else in the daytime if necessary and its pooling and compression scheme will store about 10x the history you would expect. You need backups anyway since even complex redundancy schemes have modes of failure that can lose things.
Or, I suppose you could roll your own with rsync to a zfs filesystem with du-dup, compression, and snapshots set up.
Thanks for that suggestion. Right now I have a script that I use on several machines; it basically runs at around 5am (depending on what other cron jobs are scheduled), tars and gzips the data folders, then moves the archives onto a USB HDD. The clients swap out that drive every few days or weeks (depending on who) when the script sends an email alert that it's full.
But proper software meant to do that sounds like a better idea :D
On 6/30/2010 11:02 AM, Emmanuel Noobadmin wrote:
On 6/30/10, Les Mikeselllesmikesell@gmail.com wrote:
One thing you can do on the cheap is set up nightly backups with backuppc. It can run on a machine that does something else in the daytime if necessary and its pooling and compression scheme will store about 10x the history you would expect. You need backups anyway since even complex redundancy schemes have modes of failure that can lose things.
Or, I suppose you could roll your own with rsync to a zfs filesystem with du-dup, compression, and snapshots set up.
Thanks for that suggestion. Right now I have a script that I used on several machines that basically runs at around 5am (depending on what other cronjobs are scheduled) that tarzip the datafolders, then move the archives into a USB HDD. The clients swap out that drive every few days or weeks (depending on who) when the script sends an email alert that it's full.
But a proper software meant to do that sounds like a better idea :D
Not only a better idea, but easier as well. See the details at http://backuppc.sourceforge.net/ but you'd probably want to install from the epel package. A hint, though: the packaged version has already configured where the archive resides and because of the hardlinks it has to be a single filesystem. So, if you mount some big disk/raid as /var/lib/backuppc _before_ you install the rpm you'll avoid some messy contortions. And you'll likely accumulate so many files/links that it won't be practical to copy the filesystem except with image methods. You might want to make a 3-member RAID1 with one device 'missing'. Then you can periodically add a matching external disk (esata is fastest), let it sync, then fail and remove it for offsite storage.
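In mdadm terms the rotation trick is roughly (device names assumed):

    mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 missing
    # mount /dev/md1 as /var/lib/backuppc and run backups normally; then,
    # whenever the external disk is attached:
    mdadm --manage /dev/md1 --add /dev/sdc1
    cat /proc/mdstat                             # wait for the sync to finish
    mdadm --manage /dev/md1 --fail /dev/sdc1
    mdadm --manage /dev/md1 --remove /dev/sdc1   # then take it offsite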
On Tue, Jun 29, 2010 at 10:09 AM, Emmanuel Noobadmin centos.admin@gmail.com wrote:
On 6/29/10, Christopher Chan christopher.chan@bradbury.edu.hk wrote:
raid1/iscsi if you have a single host accessing the data or gluster if you have more than one host accessing the data...
This is starting to look really complicated with NCP Storage units on zfs -> iscsi to gluster unit ext3 since gluster doesn't do zfs -> multiple application host.
Wouldn't using both ncp/zfs with gluster be redundant since gluster does cluster storage to begin with?
I think I might be overcomplicating things here.
Reading up more on Gluster, it seems that I could simply put a Gluster client on the application server, mount a volume mirrored across two Gluster servers, and let Gluster handle the failover transparently.
Emmanuel,
I'm very interested to find out what happened with this project and what you ended up doing.
Christopher Chan wrote:
On Tuesday, June 29, 2010 10:53 AM, Emmanuel Noobadmin wrote:
On 6/29/10, Les Mikeselllesmikesell@gmail.com wrote:
If you are looking at openfiler, you might also want to consider nexentastor. Their community edition is free for up to 12TB of storage. It's an OpenSolaris/ZFS based system with web management, able to export cifs/nfs/ftp/sftp/iscsi with support for snapshots, deduplication, compression, etc. I haven't used it beyond installing in a VM and going through some options, but it looks more capable than anything else I've seen for free.
Thanks for the info, it looks quite interesting and seems like a simpler option given the claim of easy setup wizard doing things in 15 minutes.
The only problem is their HA is commercial only and costs more than the entire hardware budget I've got for this. Crucially, it relies on a failover/heartbeat kind of arrangement. According to some sources, the failover delay of a few seconds will cause certain services/apps to fail/lock up. Not an issue for the immediate need but will be a major no no for the other project I have in the pipeline.
So install Nexenta CP2/CP3 then. That's completely free and ZFS has its own web interface...
Or run two Nexentastor (free community edition) instances not configured for HA, and do what you planned with MD RAID on their iSCSI targets.
On 06/28/10 7:53 PM, Emmanuel Noobadmin wrote:
The only problem is their HA is commercial only and costs more than the entire hardware budget I've got for this. Crucially, it relies on a failover/heartbeat kind of arrangement. According to some sources, the failover delay of a few seconds will cause certain services/apps to fail/lock up. Not an issue for the immediate need but will be a major no no for the other project I have in the pipeline.
Which is why I was thinking of MD RAID 1 on the application server side: no failover delay if one of the data servers fails to respond in time.
iSCSI gets REAL sketchy on network failures. It takes at least several TCP timeouts before it gives up and returns an error. I do hope both storage servers have ECC so you're not mirroring a soft bit error at an inopportune time (Solaris ZFS would cope gracefully with this, but likes iSCSI timeouts even less than dmraid).
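With open-iscsi, how long the initiator blocks before returning an error is tunable in /etc/iscsi/iscsid.conf; lowering these lets md mark the mirror failed sooner instead of hanging (the values below are illustrative, not recommendations):

    node.session.timeo.replacement_timeout = 15
    node.conn[0].timeo.noop_out_interval = 5
    node.conn[0].timeo.noop_out_timeout = 5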
On 06/28/10 12:13 PM, Emmanuel Noobadmin wrote:
Has anybody tried or knows if it is possible to create a MD RAID1 device using networked iSCSI devices like those created using OpenFiler?
The idea I'm thinking of here is to use two OpenFiler servers with physical drives in RAID 1, to create iSCSI virtual devices and run CentOS guest VMs off the MD RAID 1 device. Since theoretically, this setup would survive both a single physical drive failure as well as a machine failure on the storage side with a much shorter failover time than say using heartbeat.
I considered much the same a couple of years ago; it's certainly doable... But after playing with it a bit in the lab, I moved on to something more robust...
The downsides are: A) iSCSI on homebrew systems like OpenFiler tends to be less than rock-solid reliable, and B) upon a 'failure', the rebuild will require remirroring the whole volume, which is going to take quite a while across two iSCSI targets.
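One partial mitigation for B): a write-intent bitmap on the md array lets a member that drops out and comes back resync only the dirty regions rather than the whole volume (device names assumed):

    mdadm --grow /dev/md0 --bitmap=internal
    # a member that dropped out can then come back with a quick partial resync:
    mdadm --manage /dev/md0 --re-add /dev/sdc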