Hi there, I have one server acting as an iSCSI target running Windows Storage Server R2 SP2, and the other server running CentOS as an initiator. They are connected to a switch over a 1 Gbit Ethernet connection. The target is a Dell NF600 and the CentOS server is a PowerEdge R900. We want to move this configuration to an FC-based installation using a Dell QLE2462 HBA (this is the HBA we can get here).
So, I would like to ask before I make a mistake... :)
If I purchase an Ethernet fiber switch and add a Dell QLE2462 HBA to both servers and connect the servers to this Ethernet switch, will I be able to use this configuration as an iSCSI target/initiator? Will I be able to add a new server (initiator) to this configuration?
Or is the whole thing totally impossible?
Erick Perez wrote:
If I purchase an Ethernet fiber switch and add a Dell QLE2462 HBA to both servers and connect the servers to this Ethernet switch, will I be able to use this configuration as an iSCSI target/initiator?
No, the QLE2462 is a Fibre Channel HBA, not a fiber Ethernet card. Completely different, incompatible protocols.
If you want iSCSI HBAs you'll want to look at the QLogic QLE4062C 1 Gbps iSCSI HBAs.
If you want to use Fibre Channel you should have a Fibre Channel array; using Dell as an example, something like the AX4-5F would work. I've never personally seen or heard of someone using Fibre Channel to connect two traditional servers directly to each other.
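To see what the OS actually thinks that card is, on the CentOS box (assuming the stock qla2xxx driver; host numbers will vary) something like:

  lspci | grep -i qlogic
  ls /sys/class/fc_host/
  cat /sys/class/fc_host/host*/port_name

will show a Fibre Channel host with a WWPN rather than a network interface; there's no ethX to put an IP address on.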
nate
On Mar 18, 2009, at 6:14 PM, Erick Perez eaperezh@gmail.com wrote:
FC != Ethernet, and IP doesn't run over FC.
So no, this won't work.
You could look at setting up the CentOS box as an FC initiator to connect to the fiber storage and bypass the whole storage server mess.
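Roughly, once a QLE2462 is in the R900 and the array is zoned to it, it's just (the host number and /dev/sdb are examples, yours will differ):

  modprobe qla2xxx
  echo "- - -" > /sys/class/scsi_host/host1/scan
  cat /proc/scsi/scsi
  fdisk -l /dev/sdb

and the FC LUNs show up as ordinary SCSI disks you can put a filesystem or LVM on.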
We got rid of MSSS at my work due to its unreliable performance, and the fact that it needed a reboot monthly for security updates, which is a pain when you have a cluster and a bunch of virtualization hosts running off it. The reboot was like powering down 50% of the network once a month!
Switched to CentOS as the storage server and didn't look back.
-Ross
Nate, Ross. Thanks for the information. I now understand the difference.
Ross: I can't ditch MSSS since it is a government purchase, so I *must* use it until something breaks and budget is assigned; maybe in two years we can buy something else. The previous boss purchased this equipment, and I guess an HP EVA, NetApp or some other sort of NAS/SAN equipment was better suited for the job... but go figure!
Nate: The whole idea is to use the MSSServer and connect several servers to it. It has 5 available slots, so a bunch of cards can be placed there.
I think (after reading your comments) that I can install 2 dual-port 10 Gb network cards in the MSSS, configure it for jumbo frames (9k), and then put 10 Gb network cards in the servers that will connect to this MSSS and also enable 9k frames. All this, of course, connected to a good 10 Gb switch with a good backplane. I'm currently using 1 Gb, so switching to fiber at 1 Gb will not provide a lot of gain.
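On the CentOS side I'm thinking of something like this (eth2 is just a placeholder for whichever port faces the storage switch, and the switch ports need jumbo frames enabled as well):

  ip link set dev eth2 mtu 9000
  ping -M do -s 8972 <address of the MSSS>

with MTU=9000 added to /etc/sysconfig/network-scripts/ifcfg-eth2 to make it persistent; the ping with don't-fragment set and an 8972-byte payload (9000 minus the 28 bytes of IP/ICMP headers) verifies the 9k path end to end.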
Using IOMeter we saw that we will not incur IOWait due to slow hard disks.
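(On the CentOS initiator I can sanity-check that with iostat -x 5 from the sysstat package and watch %iowait, await and %util while the load runs.)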
We just can't trash the MSSS... sorry Ross.
Erick Perez wrote:
Don't set your expectations too high on performance, no matter what network card or HBA you have in the server. Getting high performance is a lot more than just fast hardware; your software needs to take advantage of it. MSSServer I've never heard of, so I'm not sure what it is. My storage array here at work is a 3PAR T400, which at max capacity can run roughly 26 gigabits per second of Fibre Channel throughput to servers (and another 26 Gbit to the disks), which would likely require at least 640 15,000 RPM drives to provide that level of throughput (the current max of my array is 640 drives; I'm expecting to go to 1,280 within a year).
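Back of the envelope: 26 Gbit/s is roughly 3.25 GB/s, and spread over 640 spindles that's only about 5 MB/s per drive, which is roughly what a 15,000 RPM disk will sustain on a random or mixed workload. The spindles, not the interconnect, are what you run out of.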
This T400 is among the fastest storage systems on the planet, in the top 5 or 6 I believe. My configuration is by no means fully decked out, but I have plenty of room to grow into it. Currently it has 200 x 750 GB SATA-II disks, and with I/O evenly distributed across all 200 drives I can get about 11 Gbit of total throughput (about 60% to disk and 40% to servers). 24 GB of cache, ultra-fast ASICs for RAID. The limitation in my system is the disks; the controllers don't break a sweat.
You can have twice as many drives that are twice as fast as these SATA disks and still get poorer performance; it's all about the software and architecture. I know this because we've been busy migrating off just such a system onto this new one for weeks now.
So basically, what I'm trying to say is: just because you might get a couple of 10-gig cards for a storage server, don't expect anywhere near 10-gig performance for most workloads.
There's a reason why companies like BlueArc charge $150k for 10-gig-capable NFS head units (that is the head unit only, no disks); getting that level of performance isn't easy.
nate
On Mar 18, 2009, at 6:56 PM, Erick Perez eaperezh@gmail.com wrote:
Well, I understand where you're coming from. If you can't get rid of MSSS, you can still leverage its strengths by having it serve CIFS shares and other Windows services while feeding it the FC storage from a Linux host via 10 GbE iSCSI.
While Microsoft's iSCSI initiator is very good, I can't say the same about their target, which is 25-33% slower than the IET target on CentOS. That probably has to do with running file-based targets off of NTFS partitions instead of raw disks. You could run another target on it, but I don't think Redmond would support that.
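For what it's worth, exporting a raw disk with IET on CentOS is just a couple of lines in /etc/ietd.conf (the IQN and device name here are only examples):

  Target iqn.2009-03.example.centos:export0
      Lun 0 Path=/dev/sdb,Type=blockio

Type=blockio goes straight to the block device, while Type=fileio would route I/O through the page cache. Restart ietd and point the Windows initiator at it.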
-Ross