Simple, it's only a NAS device, and not really a file server / web server / database server as well.
Here is something I am currently looking at, and I am wondering if you'd considered it or if anyone here has done it.
I've got a bunch of existing hardware - really good IBM stuff that is all installed with CentOS (with a few exceptions). We want to move to virtualization, but the direct-attached storage is a bit of an issue, because with VMware ESXi on local disk you cannot do some of the fancy stuff like vMotion (moving a live, running server from one host to another).
So of course Openfiler comes to mind. However, most of this hardware has pretty significant CPU and RAM horsepower, so just running it as an Openfiler box would be quite wasteful. What I see some people doing is:
- install ESXi (4.0) onto the bare metal
- install Openfiler as a virtual machine
- give Openfiler all the disk
- serve the disk out to the other VMs via Openfiler
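As I understand it, under the Openfiler web GUI that last step boils down to roughly this (untested, and every name here is made up) - LVM over the passed-through disks plus an iSCSI Enterprise Target export, which ESXi then adds through its software initiator and formats as a VMFS datastore:

# inside the openfiler VM; /dev/sdb and /dev/sdc are the disks handed to it
pvcreate /dev/sdb /dev/sdc
vgcreate vg_store /dev/sdb /dev/sdc
lvcreate -L 500G -n lv_vmfs vg_store
# Openfiler drives IET for the iSCSI part; done by hand it is roughly:
ietadm --op new --tid=1 --params Name=iqn.2009-10.local.openfiler:vmfs
ietadm --op new --tid=1 --lun=0 --params Path=/dev/vg_store/lv_vmfs,Type=blockio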
This may seem redundant vs. just doing it without Openfiler, but as mentioned, you only get a lot of the fancy features with virtualized disk.
I am about to do some benchmarks on this setup to see what percentage of performance I give up by doing it this way.
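Nothing fancy planned - probably something along these lines first (the device names are placeholders, and the dd runs destroy whatever they point at), then bonnie++ or iozone for a more realistic mixed workload:

# raw throughput straight to the local disk (destructive!):
dd if=/dev/zero of=/dev/sdb bs=1M count=4096 oflag=direct
# same test from a VM whose disk comes back through the openfiler VM:
dd if=/dev/zero of=/dev/sdc bs=1M count=4096 oflag=direct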
Alan McKay wrote:
This may seem redundant vs. just doing it without Openfiler, but as mentioned, you only get a lot of the fancy features with virtualized disk.
Doing that for the most part defeats the purpose of using things like vMotion in the first place, that is, being able to evacuate a system to perform hardware/software maintenance on it.
Myself, I have 12 vSphere ESXi systems deployed at remote sites using local storage; they run web servers mostly and are redundant, so I don't need things like vMotion. Local storage certainly does restrict flexibility.
Overcomplicating things is likely to do more harm than good, and you'll likely regret going down that path at some point, so save yourself some trouble and don't try. Get a real storage system, or build one, to do that kind of thing.
Cheap ones include (won't vouch for any of them personally):
http://www.infortrend.com/
http://h18006.www1.hp.com/storage/disk_storage/msa_diskarrays/index.html
http://www.xyratex.com/Products/storage-systems/raid.aspx
Or build/buy a system to run Openfiler. At my last company I had a quad-proc system with a few HP MSA shelves that ran Openfiler, though the software upgrade process for Openfiler was so scary I never upgraded it, and more than one kernel at the time panicked at boot. I'm sure it's improved since then (2 years ago).
nate
Absolutely CRITICAL to any SAN implementation is that the storage controller (the iSCSI target, be it Openfiler or whatever) remain 100% rock-solid stable at all times.
You can NOT reboot a shared storage controller without shutting all client systems down first (or at least unmounting all SAN volumes).
It's non-trivial to implement a high-availability (active/standby) storage controller with iSCSI. Very hard, in fact.
Commercial SANs are fully redundant, with redundant Fibre Channel cards on each client and storage controller, redundant Fibre Channel switches, redundant paths from the storage controllers to the actual drive arrays, etc. Many of them shadow the write-back cache, so if one controller fails the other has any cached write blocks and can post them to the disk spindles transparently, maintaining complete data coherency. Trying to achieve this level of 0.99999 uptime/reliability with commodity hardware and software is not easy.
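The client-side path redundancy is at least approachable with dm-multipath; a minimal, untested sketch (the blacklist entry and option values are only illustrative, not a recommendation):

cat > /etc/multipath.conf <<'EOF'
defaults {
        user_friendly_names yes
        failback immediate
}
blacklist {
        devnode "^sda$"
}
EOF
chkconfig multipathd on
service multipathd start
multipath -ll    # each LUN should show one path per HBA/switch

None of that buys you the controller-side mirrored write cache, of course - that still takes real SAN hardware.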
John R Pierce wrote:
Absolutely CRITICAL to any SAN implementation is that the storage controller (the iSCSI target, be it Openfiler or whatever) remain 100% rock-solid stable at all times.
You can NOT reboot a shared storage controller without shutting all client systems down first (or at least unmounting all SAN volumes).
It's non-trivial to implement a high-availability (active/standby) storage controller with iSCSI. Very hard, in fact.
Commercial SANs are fully redundant, with redundant Fibre Channel cards on each client and storage controller, redundant Fibre Channel switches, redundant paths from the storage controllers to the actual drive arrays, etc. Many of them shadow the write-back cache, so if one controller fails the other has any cached write blocks and can post them to the disk spindles transparently, maintaining complete data coherency. Trying to achieve this level of 0.99999 uptime/reliability with commodity hardware and software is not easy.
Has anyone tried doing it with this: http://www.nexenta.com/corp/ ?
John R Pierce wrote:
Absolutely CRITICAL to any SAN implementation is that the storage controller (the iSCSI target, be it Openfiler or whatever) remain 100% rock-solid stable at all times.
You can NOT reboot a shared storage controller without shutting all client systems down first (or at least unmounting all SAN volumes).
It's non-trivial to implement a high-availability (active/standby) storage controller with iSCSI. Very hard, in fact.
Dell has recently announced a product that may help a lot with this. They call it virtualized iSCSI devices; see http://www.cns-service.com/equallogic/pdfs/WP910_Virtualized_iSCSI_SANs.pdf
Commercial SANs are fully redundant, with redundant Fibre Channel cards on each client and storage controller, redundant Fibre Channel switches, redundant paths from the storage controllers to the actual drive arrays, etc. Many of them shadow the write-back cache, so if one controller fails the other has any cached write blocks and can post them to the disk spindles transparently, maintaining complete data coherency. Trying to achieve this level of 0.99999 uptime/reliability with commodity hardware and software is not easy.
Clint Dilks wrote:
Dell has recently announced a product that may help a lot with this. They call it virtualized iSCSI devices; see http://www.cns-service.com/equallogic/pdfs/WP910_Virtualized_iSCSI_SANs.pdf
Getting OT but
http://www.vmware.com/appliances/directory/92113
http://h18006.www1.hp.com/products/storage/software/vsa/index.html
Network RAID – only available from HP
* The ultimate in high availability: Traditional hardware RAID and redundant components are simply not good enough when it comes to your data. Only HP offers Network RAID, which protects you from outside-the-box issues such as human error, power, cooling and networking issues.
* Per-volume redundancy: You control which volumes will be protected by Network RAID.
* Space-efficient protection: Network RAID does require additional capacity. But oh is it worth it! Also, VSA offers other efficiencies that more than compensate for this extra storage. Thin provisioning is a great example. Provision volumes at their actual size and allocate additional storage only as it's required.
Getting OT but
http://www.vmware.com/appliances/directory/92113
http://h18006.www1.hp.com/products/storage/software/vsa/index.html
That's not off topic for me - that's where I started in fact :-) But the HP sales reps evidently do not want to sell the product because nobody has gotten back to me yet. What I was wondering is if Openfiler could do the same thing, but it sounds like it cannot.
Network RAID – only available from HP
nate wrote:
Network RAID – only available from HP
Fantasy ideas (that is, I've only thought of this and never tried it). YMMV, caveat emptor, objects in mirror may be closer than they appear, etc., etc.
1) Two iSCSI servers, each with identical storage. Each offers the same-sized iSCSI target to the host. The host uses mdraid 1 to mirror these.
2) Two iSCSI servers, each with identical storage, configured as an active/standby cluster using conventional cluster management software. The active 'master' replicates block storage to the 'slave' using DRBD.
With 1) there are questions about how well the mdraid will recover from situations where one storage server is offline for some period. I'd feel much warmer about this if there were block checksumming and timestamping in the RAID, a la ZFS.
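Something like this is what I have in mind for 1) - completely untested, and the portals, IQNs and device names are invented:

iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m discovery -t sendtargets -p 192.168.1.11
iscsiadm -m node --login
# say the two LUNs show up as /dev/sdb and /dev/sdc; an internal
# write-intent bitmap at least keeps the resync after one filer has
# been offline down to the changed blocks rather than a full re-mirror
mdadm --create /dev/md0 --level=1 --raid-devices=2 --bitmap=internal /dev/sdb /dev/sdc
mkfs.ext3 /dev/md0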
With 2) there are write-fencing issues I'd be uncomfortable with; on a fenced write, you'd not want to acknowledge the operation as committed until the DRBD slave has flushed its buffers.
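And a rough, equally untested sketch of 2), assuming DRBD 8.x on a pair of boxes called filer1/filer2 (all names, addresses and paths are invented, and the config file location varies by version):

cat >> /etc/drbd.conf <<'EOF'
resource r0 {
  protocol C;    # a write is not acknowledged until it has reached the disk on both nodes
  on filer1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.1:7789;
    meta-disk internal;
  }
  on filer2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7789;
    meta-disk internal;
  }
}
EOF
# on both nodes:
drbdadm create-md r0
drbdadm up r0
# on whichever node starts out as master (initial sync only):
drbdadm -- --overwrite-data-of-peer primary r0
# the iSCSI target (IET or the like) then exports /dev/drbd0 from the
# primary, and the cluster manager moves the target IP on failover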
On 10/21/2009 03:34 AM, John R Pierce wrote:
- Two iSCSI servers, each with identical storage. Each offers the same-sized iSCSI target to the host. The host uses mdraid 1 to mirror these.
I have this running in production, only not with 2 machines but with 4, doing RAID 10 (not mdraid10, but two conventional RAID 1 sets striped with RAID 0).
Alan McKay wrote:
Getting OT but
http://www.vmware.com/appliances/directory/92113
http://h18006.www1.hp.com/products/storage/software/vsa/index.html
That's not off topic for me - that's where I started in fact :-) But the HP sales reps evidently do not want to sell the product because nobody has gotten back to me yet. What I was wondering is if Openfiler could do the same thing, but it sounds like it cannot.
Some other divisions in my company seem to like these: http://www-03.ibm.com/systems/storage/disk/xiv/ but they are a little out of my league.
Karanbir Singh wrote:
On 10/21/2009 03:34 AM, John R Pierce wrote:
- Two iSCSI servers, each with identical storage. Each offers the same-sized iSCSI target to the host. The host uses mdraid 1 to mirror these.
I have this running in production, only not with 2 machines but with 4, doing RAID 10 (not mdraid10, but two conventional RAID 1 sets striped with RAID 0).
I thought the original object was to make the space available to multiple VMware ESX(i) servers so you could vMotion guests among them. Can ESX construct RAIDs out of multiple iSCSI sources?
Some other divisions in my company seem to like these: http://www-03.ibm.com/systems/storage/disk/xiv/ but they are a little out of my league.
I did not have to read past "high end" to know I cannot afford it.
My entire IT budget is about $50K / year!
I thought the original object was to make the space available to multiple VMware ESX(i) servers so you could vMotion guests among them. Can ESX construct RAIDs out of multiple iSCSI sources?
Well, my original post may have been a bit vague because I do not really know what I am looking for :-)
I have a bunch of local disk all over the place. I eventually want to virtualize everything, so I'd like a way to virtualize that local disk and do fancy things with it :-)
On 10/21/2009 03:55 AM, Les Mikesell wrote:
I have this running in production, only not with 2 machines but with 4, doing RAID 10 (not mdraid10, but two conventional RAID 1 sets striped with RAID 0).
I thought the original object was to make the space available to multiple VMware ESX(i) servers so you could vMotion guests among them. Can ESX construct RAIDs out of multiple iSCSI sources?
I have no idea :) My setup has no VMware or any other form of virtualisation in place there. The iSCSI block devices are exported from 4 different machines (each running an Areca 1220 with 8 disks) and imported into a single machine, the storage head, where mdraid does my RAID 10 across those iSCSI devices. This new volume is then exported over a series of NFS mount points to various machines on the same subnet.
My point was just that this kind of distributed setup is possible and works well, as long as you can document and monitor the various pieces well.
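In rough strokes the head side looks something like this (the addresses, device names and filesystem here are invented for illustration, not what I actually run):

for p in 10.0.0.11 10.0.0.12 10.0.0.13 10.0.0.14; do
    iscsiadm -m discovery -t sendtargets -p $p
done
iscsiadm -m node --login
# suppose the four imported LUNs land on /dev/sd[b-e]: two RAID 1 pairs, striped
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd /dev/sde
mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md1 /dev/md2
mkfs.ext3 /dev/md10
mount /dev/md10 /srv/store
echo '/srv/store 10.0.0.0/24(rw,async,no_root_squash)' >> /etc/exports
exportfs -ra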
Alan McKay wrote:
I thought the original object was to make the space available to multiple VMware ESX(i) servers so you could vMotion guests among them. Can ESX construct RAIDs out of multiple iSCSI sources?
Well, my original post may have been a bit vague because I do not really know what I am looking for :-)
I have a bunch of local disk all over the place. I eventually want to virtualize everything, so I'd like a way to virtualize that local disk and do fancy things with it :-)
If you don't need vMotion you could just use a small local disk to boot the guest OS and let the guest do the iSCSI connections itself for the main part of its storage - in which case software RAID across different targets should work.
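Inside the guest that amounts to something like this (untested; the IQNs, portals and mount point are placeholders) so the sessions and the md device come back after a reboot:

iscsiadm -m node -T iqn.2009-10.example:store1 -p 192.168.1.10 --op update -n node.startup -v automatic
iscsiadm -m node -T iqn.2009-10.example:store2 -p 192.168.1.11 --op update -n node.startup -v automatic
mdadm --detail --scan >> /etc/mdadm.conf
echo '/dev/md0  /data  ext3  _netdev,defaults  0 0' >> /etc/fstab
chkconfig iscsi on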
If you don't need vMotion you could just use a small local disk to boot the guest OS and let the guest do the iSCSI connections itself for the main part of its storage - in which case software RAID across different targets should work.
vMotion is a great selling feature of virtualization to win over naysayers :-)
However, once I get a quote I'll probably find out I cannot afford it anyway.
Alan McKay wrote:
If you don't need vMotion you could just use a small local disk to boot the guest OS and let the guest do the iSCSI connections itself for the main part of its storage - in which case software RAID across different targets should work.
vMotion is a great selling feature of virtualization to win over naysayers :-)
However, once I get a quote I'll probably find out I cannot afford it anyway.
It does sound like a fun thing to have, but for most of the things where you'd want it, you really need a load-balanced farm that can tolerate a single machine being down for a while anyway.
Alan McKay wrote:
vMotion is a great selling feature of virtualization to win over naysayers :-)
Oh so OT but I can't resist!
It can also be a good way to kill off the prospects of using VMware if budgets are tight. When people think VMware, most of them instantly think Enterprise version and several thousand dollars per CPU - sort of like when people think Oracle they instantly think Enterprise Edition and $40k+ per CPU.
I've been running VMware since 1999 and have never used vMotion beyond basic eval stuff.
Currently I've got more than 350 VMs running in production and QA, and none of them can do vMotion. It's a combination of ESX/ESXi, version 3.5 and version 4.0. In total about 34 servers, 12 of which are off site (using the local storage mentioned earlier); the rest use Fibre Channel to a high-end storage array.
From a blog entry I wrote recently -
http://www.techopsguys.com/2009/08/25/cheap-vsphere-installation-managable-b...
However, once I get a quote I'll probably find out I cannot afford it anyway.
See..
The core set of VMware features has gone down in price (not taking into account ESXi) more than 90% in the past two and a half years. For me and my 350+ VMs that's a steal.
That said, I am pushing to go to an Advanced version next year with new HP c-Class blades and 10GbE. But I sold the company on VMware with the lower-end stuff and have made it perform flawlessly for the past 14 months or so. Now they see the benefits and want the higher-end stuff for production anyway, and to get production onto modern systems - it is currently running on 4+ year old HP DL585 G1s.
nate
John R Pierce wrote:
nate wrote:
Network RAID – only available from HP
Fantasy ideas (that is, I've only thought of this and never tried it). YMMV, caveat emptor, objects in mirror may be closer than they appear, etc., etc.
- Two iSCSI servers, each with identical storage. Each offers the same-sized iSCSI target to the host. The host uses mdraid 1 to mirror these.
I've done it experimentally. It worked. Obviously you can't share a RAID volume with multiple client hosts.
- Two iSCSI servers, each with identical storage, configured as an active/standby cluster using conventional cluster management software. The active 'master' replicates block storage to the 'slave' using DRBD.
I've done this too (active/active using RH Cluster and GFS). I finally gave up because I just couldn't get reliable multipathing. The paths would keep locking up in weird ways, fail to come up (randomly), and cause all kinds of hate and discontent. :(
On Oct 20, 2009, at 6:47 PM, Alan McKay alan.mckay@gmail.com wrote:
I thought the original object was to make the space available to multiple VMware ESX(i) servers so you could vMotion guests among them. Can ESX construct RAIDs out of multiple iSCSI sources?
Well, my original post may have been a bit vague because I do not really know what I am looking for :-)
I have a bunch of local disk all over the place. I eventually want to virtualize everything, so I'd like a way to virtualize that local disk and do fancy things with it :-)
If you have a bunch of dispersed local disks you could export them via iSCSI, AoE or NBD to a head server, which could then use mdraid and LVM to aggregate that storage and share it out again to clients, ESX hosts and VMs via NFS/CIFS or iSCSI, depending on their needs.
I myself use XFS over NFS for guest OS disks, then have iSCSI provide storage for the apps inside the guest VMs.
With a good storage controller this proved to be both the best-performing and the easiest to implement.
If you are aggregating dispersed storage you could even use simple network storage protocols like AoE or NBD on the local storage servers, and use RAID 10 on the head server to make it fault tolerant.
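A rough sketch of that AoE variant (untested here; the shelf/slot numbers, interfaces, devices and sizes are all arbitrary):

# on each box donating a local disk (aoetools' vblade; give each box its own shelf number):
vblade 0 1 eth0 /dev/sdb &
# on the head server:
modprobe aoe
aoe-discover
ls /dev/etherd/          # exports appear as e<shelf>.<slot>
# aggregate with mdraid RAID 10 and carve it up with LVM
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/etherd/e0.1 /dev/etherd/e1.1 /dev/etherd/e2.1 /dev/etherd/e3.1
pvcreate /dev/md0
vgcreate vg_agg /dev/md0
lvcreate -L 200G -n lv_share vg_agg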
-Ross
On Wed, Oct 21, 2009 at 12:14 AM, Karanbir Singh mail-lists@karan.org wrote:
I have this running in production, only not with 2 machines but with 4, doing RAID 10 (not mdraid10, but two conventional RAID 1 sets striped with RAID 0).
-- Karanbir Singh : http://www.karan.org/ : 2522219@icq
Hi Karanbir,
Would you mind sharing some of your tips on setting up such a system, please?