To extend space on our 64-bit CentOS FTP servers, we are considering setting them up to work with an existing Promise RAID system via iSCSI.
-just curious if anyone here knows if iSCSI is fast enough to serve up all the images?
-might it be something where we get dedicated cards and put the iSCSI traffic on its own VLAN?
Just curious if this would hold up under a lot of traffic, like weather images and the like...
-karl
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Karl R. Balsmeier Sent: Tuesday, May 22, 2007 2:28 PM To: centos@centos.org Subject: [CentOS] Vsftpd & Iscsi - fast enough
To extend space on our 64-bit CentOS FTP servers, we are considering setting them up to work with an existing Promise RAID system via iSCSI.
-just curious if anyone here knows if iSCSI is fast enough to serve up all the images?
-might it be something where we get dedicated cards and put the iSCSI traffic on its own VLAN?
Just curious if this would hold up under a lot of traffic, like weather images and the like...
The ability of iSCSI to support high throughput depends on:
1) How the back-end storage being served up by iSCSI is configured
2) How the network interconnects between the iSCSI targets and initiators are configured
3) How well the FTP software does at reading the data from disk and pumping it out the network
1 Gbps Ethernet can handle up to ~115 MB/s per interface. Using MPIO round-robin over several interfaces, you can continue to add throughput if the application can scale well across these multiple paths.
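As a quick sanity check on those numbers, here is a back-of-the-envelope sketch; the ~8% protocol overhead figure is an assumption for illustration, not a measurement:

```shell
#!/bin/sh
# 1 Gbps Ethernet: 10^9 bits/s divided by 8 bits/byte = 125 MB/s raw.
raw=$((1000000000 / 8 / 1000000))
echo "raw line rate:   ${raw} MB/s"

# TCP/IP and Ethernet framing cost several percent; assuming ~8%
# overhead lands near the 115 MB/s figure quoted above.
usable=$((raw * 92 / 100))
echo "usable estimate: ${usable} MB/s"

# With MPIO round-robin, each extra path adds roughly another
# interface's worth, e.g. four paths:
echo "4-path ceiling:  $((usable * 4)) MB/s"
```

Real throughput also depends on the disks behind the target, so treat these as ceilings, not expectations.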
-Ross
______________________________________________________________________ This e-mail, and any attachments thereto, is intended only for use by the addressee(s) named herein and may contain legally privileged and/or confidential information. If you are not the intended recipient of this e-mail, you are hereby notified that any dissemination, distribution or copying of this e-mail, and any attachments thereto, is strictly prohibited. If you have received this e-mail in error, please immediately notify the sender and permanently delete the original and any copy or printout thereof.
On 5/22/07, Ross S. W. Walker rwalker@medallion.com wrote:
The ability of iSCSI to support high throughput depends on:
- How the back-end storage being served up by iSCSI is configured
- How the network interconnects between the iSCSI targets and initiators are configured
- How well the FTP software does at reading the data from disk and pumping it out the network
1 Gbps Ethernet can handle up to ~115 MB/s per interface. Using MPIO round-robin over several interfaces, you can continue to add throughput if the application can scale well across these multiple paths.
I'm a little fuzzy on this Mb vs MB issue - which one is megaBITS and which is megaBYTES, and is this a standard convention or ???
Thanks.
On 5/22/07, Mark Hull-Richter mhullrich@gmail.com wrote:
On 5/22/07, Ross S. W. Walker rwalker@medallion.com wrote:
The ability of iSCSI to support high throughput depends on:
- How the back-end storage being served up by iSCSI is configured
- How the network interconnects between the iSCSI targets and initiators are configured
- How well the FTP software does at reading the data from disk and pumping it out the network
1 Gbps Ethernet can handle up to ~115 MB/s per interface. Using MPIO round-robin over several interfaces, you can continue to add throughput if the application can scale well across these multiple paths.
I'm a little fuzzy on this Mb vs MB issue - which one is megaBITS and which is megaBYTES, and is this a standard convention or ???
Thanks.
20 years ago, Megabit was 2^20 bits (Mb) and Megabyte was 2^20 bytes (MB). The IEC redid the units later (IEC 60027-2) to deal with the fact that Mega has a scientific definition of 10^6. This also allows the Hard-drive conspiracy to undersell you the number of bits on a disk. Nowadays, Mb is supposed to mean 10^6 bits, and a mebibit (Mib) means 2^20 bits.
Thus you end up with a gigabit card rated in powers of 10 (10^9 bits) while the OS measures in powers of 2 (2^20-bit units).
References http://en.wikipedia.org/wiki/Megabit http://en.wikipedia.org/wiki/Megabyte
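The decimal/binary split is easy to check numerically; a small illustrative sketch:

```shell
#!/bin/sh
# SI (decimal) vs IEC (binary) prefixes, per the pages referenced above.
mb=1000000                 # 10^6  -- megabit/megabyte (SI)
mib=$((1024 * 1024))       # 2^20  -- mebibit/mebibyte (IEC)
echo "1 Mb  (SI)  = ${mb} bits,  1 Mib (IEC) = ${mib} bits"

# The discrepancy grows with the prefix: ~4.9% at mega,
# ~7.4% at giga (2^30 vs 10^9) -- integer math below rounds down.
gib=$((1024 * 1024 * 1024))
gb=1000000000
echo "GiB exceeds GB by $(( (gib - gb) * 100 / gb ))%"
```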
On Tue, May 22, 2007 at 01:02:15PM -0600, Stephen John Smoogen wrote:
20 years ago, Megabit was 2^20 bits (Mb) and Megabyte was 2^20 bytes (MB).
What? All network performance numbers were always given in Mbps (megabits per second, 10^6).
The SI (ISO?) redid the units later to deal with the fact that Mega has a scientific definition of 10^6. This also allows the Hard-drive conspiracy to undersell you the number of bits on a disk. Nowadays, Mb is supposed to mean 10^6 bits, and a Mibit means 2^20 bits.
The hard-drive manufacturers didn't pioneer the *ibyte. I agree that for such things as RAM and HDDs, which use power-of-2 units (bytes or words, and sectors), megabytes as 2^20 would be better. But I can't blame them for sticking with official standards.
Also, as your link points out, FDDs mixed the terms. 720 KB: 720*1024 bytes; 1.44 MB: 1.44*1000*1024 bytes.
Thus you end up with a gigabit card rated in powers of 10 (10^9 bits) while the OS measures in powers of 2 (2^20-bit units).
The standard for networking has always been 10^6 bits per second, or packets per second. Some protocols don't even align on 8-bit boundaries.
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Stephen John Smoogen Sent: Tuesday, May 22, 2007 3:02 PM To: CentOS mailing list Subject: Re: [CentOS] Vsftpd & Iscsi - fast enough
On 5/22/07, Mark Hull-Richter mhullrich@gmail.com wrote:
On 5/22/07, Ross S. W. Walker rwalker@medallion.com wrote:
The ability of iSCSI to support high throughput depends on:
- How the back-end storage being served up by iSCSI is configured
- How the network interconnects between the iSCSI targets and initiators are configured
- How well the FTP software does at reading the data from disk and pumping it out the network
1 Gbps Ethernet can handle up to ~115 MB/s per interface. Using MPIO round-robin over several interfaces, you can continue to add throughput if the application can scale well across these multiple paths.
I'm a little fuzzy on this Mb vs MB issue - which one is megaBITS and which is megaBYTES, and is this a standard convention or ???
Thanks.
20 years ago, Megabit was 2^20 bits (Mb) and Megabyte was 2^20 bytes (MB). The IEC redid the units later (IEC 60027-2) to deal with the fact that Mega has a scientific definition of 10^6. This also allows the Hard-drive conspiracy to undersell you the number of bits on a disk. Nowadays, Mb is supposed to mean 10^6 bits, and a mebibit (Mib) means 2^20 bits.
Thus you end up with a gigabit card rated in powers of 10 (10^9 bits) while the OS measures in powers of 2 (2^20-bit units).
References http://en.wikipedia.org/wiki/Megabit http://en.wikipedia.org/wiki/Megabyte
I always thought 10^6 was Mib/MiB and 2^20 was Mb/MB to keep the older manuals and papers consistent, but that doesn't seem to match the wikipedia... Is the wikipedia correct?
On Tue, May 22, 2007 at 03:14:17PM -0400, Ross S. W. Walker wrote:
http://en.wikipedia.org/wiki/Megabit http://en.wikipedia.org/wiki/Megabyte
I always thought 10^6 was Mib/MiB and 2^20 was Mb/MB to keep the older manuals and papers consistent, but that doesn't seem to match the wikipedia... Is the wikipedia correct?
Yes: http://www.iec.ch/zone/si/si_bytes.htm http://physics.nist.gov/cuu/Units/binary.html
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Luciano Rocha Sent: Tuesday, May 22, 2007 3:29 PM To: CentOS mailing list Subject: Re: [CentOS] Vsftpd & Iscsi - fast enough
On Tue, May 22, 2007 at 03:14:17PM -0400, Ross S. W. Walker wrote:
http://en.wikipedia.org/wiki/Megabit http://en.wikipedia.org/wiki/Megabyte
I always thought 10^6 was Mib/MiB and 2^20 was Mb/MB to keep the older manuals and papers consistent, but that doesn't seem to match the wikipedia... Is the wikipedia correct?
Yes: http://www.iec.ch/zone/si/si_bytes.htm http://physics.nist.gov/cuu/Units/binary.html
Well there goes the neighborhood... Now I have to talk memory in MiB and GiB and comm and storage in MB and GB.
Anyways datacom and storage has always been base 10, why, well I'll leave that to the conspiracy theorists.
-Ross
Ross S. W. Walker wrote:
Well there goes the neighborhood... Now I have to talk memory in MiB and GiB and comm and storage in MB and GB.
Anyways datacom and storage has always been base 10, why, well I'll leave that to the conspiracy theorists.
Perhaps it would be possible to leave the semantics games alone and just answer the guy's question? I don't have any personal experience with iSCSI or I would try to do that.
Best,
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of chrism@imntv.com Sent: Tuesday, May 22, 2007 7:07 PM To: CentOS mailing list Subject: Re: [CentOS] Vsftpd & Iscsi - fast enough
Ross S. W. Walker wrote:
Well there goes the neighborhood... Now I have to talk memory in MiB and GiB and comm and storage in MB and GB.
Anyways datacom and storage has always been base 10, why, well I'll leave that to the conspiracy theorists.
Perhaps it would be possible to leave the semantics games alone and just answer the guy's question? I don't have any personal experience with iSCSI or I would try to do that.
The question was answered earlier and what does your comment contribute?
Jeez, there is nothing like a me-too troll to suck the fun out of a thread.
-Ross
Ross S. W. Walker wrote:
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of chrism@imntv.com Sent: Tuesday, May 22, 2007 7:07 PM To: CentOS mailing list Subject: Re: [CentOS] Vsftpd & Iscsi - fast enough
Ross S. W. Walker wrote:
Well there goes the neighborhood... Now I have to talk memory in MiB and GiB and comm and storage in MB and GB.
Anyways datacom and storage has always been base 10, why, well I'll leave that to the conspiracy theorists.
Perhaps it would be possible to leave the semantics games alone and just answer the guy's question? I don't have any personal experience with iSCSI or I would try to do that.
The question was answered earlier and what does your comment contribute?
Jeez, there is nothing like a me-too troll to suck the fun out of a thread.
-Ross
well, he (Chris) does have a point. I was excited to see my question get so many responses. But I only got one enterprise-relevant answer, from Matt Shields (thanks matt!). We definitely got side-tracked here, har. Let us chase down this iSCSI vs rsync pros/cons question a little more...
I have sort of ruled out doing GFS over iSCSI because of all the moving parts and some inherent dangers similar to what Matt mentions about stopping data flows on such designs. Am I that much better off to ditch even the iSCSI component and just stick with the current setup that relies on rsync?
I'd like to ditch rsync, and latency isn't that *big* of an issue because those boxes don't push more than 10-20 megs of data normally. So is there a case where I could extend onto iSCSI and see some benefits vs. staying with the FTP server pair & rsync? I sort of asked some of this in another thread, but curious about what folks have to say.
-essentially the question is 'what's after rsync, when you don't have a fibre channel budget, and don't want to stoop so low as ATA-over-Ethernet (AoE)'?
here's the GFS over iSCSI strategy....
essentially, you're going to abstract the one filesystem, pretending it's "out" of any of the hosts (it can still be physically in the one host), and create a GFS volume on it (by creating one or more PV's into the GFS LVM).
then configure the other cluster members to mount the GFS cluster-fs volume/filesystem using the GFS services over the iSCSI protocol.
that will allow all of the FTP servers to mount the shared volume read/write concurrently. no more rsync.
i'd use the iSCSI hba to expose the backup volume as an iSCSI target and attach to it from the other set members. to do it right, you'll need to put up two additional hosts and install GFS & iSCSI services on all of them.
you'll need to be using GbE, preferably channel-aggregated, if possible, between the cluster members. reads won't be as fast as direct-attach scsi raid, but there won't be any rsync/cross-copy latency.
if load is not too high on a daily basis, maybe one GbE per host dedicated to the iSCSI/GFS sync/locking traffic, and another to reach the outside world -----snip----------
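A rough sketch of the commands that strategy implies on CentOS (iscsi-initiator-utils plus the GFS/cluster packages installed). The target IP, IQN, device name and cluster name are invented for illustration; treat this as a configuration outline, not a tested runbook:

```shell
# On each FTP server: discover and log in to the iSCSI target.
iscsiadm -m discovery -t sendtargets -p 192.168.10.5
iscsiadm -m node -T iqn.2007-05.net.example:ftpstore -p 192.168.10.5 --login

# On one node only: put LVM on the new block device (here /dev/sdb)
# and make a GFS filesystem with one journal per cluster node (-j 3).
# (A shared LVM setup also wants clvmd running; omitted here.)
pvcreate /dev/sdb
vgcreate ftpvg /dev/sdb
lvcreate -n ftplv -l 100%FREE ftpvg
gfs_mkfs -p lock_dlm -t ftpcluster:ftpstore -j 3 /dev/ftpvg/ftplv

# On every node: mount the same volume read/write concurrently.
mount -t gfs /dev/ftpvg/ftplv /var/ftp/pub
```

With that in place each FTP server reads and writes the shared tree directly, which is what removes the rsync step.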
CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Karl R. Balsmeier Sent: Tuesday, May 22, 2007 11:01 PM To: CentOS mailing list Subject: Re: [CentOS] Vsftpd & Iscsi - fast enough ++
Ross S. W. Walker wrote:
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of chrism@imntv.com Sent: Tuesday, May 22, 2007 7:07 PM To: CentOS mailing list Subject: Re: [CentOS] Vsftpd & Iscsi - fast enough
Ross S. W. Walker wrote:
Well there goes the neighborhood... Now I have to talk memory in MiB and GiB and comm and storage in MB and GB.
Anyways datacom and storage has always been base 10, why, well I'll leave that to the conspiracy theorists.
Perhaps it would be possible to leave the semantics games alone and just answer the guy's question? I don't have any personal experience with iSCSI or I would try to do that.
The question was answered earlier and what does your comment contribute?
Jeez, there is nothing like a me-too troll to suck the fun out of a thread.
-Ross
well, he (Chris) does have a point. I was excited to see my question get so many responses. But I only got one enterprise-relevant answer, from Matt Shields (thanks matt!). We definitely got side-tracked here, har. Let us chase down this iSCSI vs rsync pros/cons question a little more...
I have sort of ruled out doing GFS over iSCSI because of all the moving parts and some inherent dangers similar to what Matt mentions about stopping data flows on such designs. Am I that much better off to ditch even the iSCSI component and just stick with the current setup that relies on rsync?
Let the list know what your current setup is.
If you are looking to have a single back-end storage system shared with multiple front-end FTP servers, then you have a number of choices:
- Use a cluster filesystem like GFS/OCFS etc. with a shared storage system, either SCSI, Fiber or iSCSI.
- Utilize a network file system to a shared storage server, either NFS, CIFS, or some other network file system protocol.
I'd like to ditch rsync, and latency isn't that *big* of an issue because those boxes don't push more than 10-20 megs of data normally. So is there a case where I could extend onto iSCSI and see some benefits vs. staying with the FTP server pair & rsync? I sort of asked some of this in another thread, but curious about what folks have to say.
There are a lot of possibilities and not one single one will be perfect. The trick is finding the one that handles the core problem you have.
-essentially the question is 'what's after rsync, when you don't have a fibre channel budget, and don't want to stoop so low as ATA-over-Ethernet (AoE)'?
There are a lot of choices even when you take Fiber out of the picture.
here's the GFS over iSCSI strategy....
essentially, you're going to abstract the one filesystem, pretending it's "out" of any of the hosts (it can still be physically in the one host), and create a GFS volume on it (by creating one or more PV's into the GFS LVM).
then configure the other cluster members to mount the GFS cluster-fs volume/filesystem using the GFS services over the iSCSI protocol.
that will allow all of the FTP servers to mount the shared volume read/write concurrently. no more rsync.
True, and if you are using a shared block storage solution then you will need to use a clustered file system, but there is another solution too...
i'd use the iSCSI hba to expose the backup volume as an iSCSI target and attach to it from the other set members. to do it right, you'll need to put up two additional hosts and install GFS & iSCSI services on all of them.
Yes, in effect creating a traditional, active-active FTP cluster.
you'll need to be using GbE, preferably channel-aggregated, if possible, between the cluster members. reads won't be as fast as direct-attach scsi raid, but there won't be any rsync/cross-copy latency.
I utilize adaptive load balancing (ALB) bonding with iSCSI, with good results from multiple initiators. 802.3ad or traditional aggregated links don't do so hot, as they balance per-path instead of per-packet.
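For reference, the balance-alb mode described here is mode 6 of the Linux bonding driver; a minimal sketch of the CentOS-style configuration (interface names are placeholders):

```shell
# /etc/modprobe.conf -- load the bonding driver in adaptive load
# balancing mode (balance-alb); miimon=100 polls link state every 100ms.
alias bond0 bonding
options bond0 mode=balance-alb miimon=100

# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1):
#   DEVICE=eth0
#   MASTER=bond0
#   SLAVE=yes
#   BOOTPROTO=none
#   ONBOOT=yes
```

Unlike 802.3ad, balance-alb needs no special switch support.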
if load is not too high on a daily basis, maybe one GbE per host dedicated to the iSCSI/GFS sync/locking traffic, and another to reach the outside world
How about this idea for size:
iSCSI storage server backend serving up volumes to a Xen server front-end which then provides NFS/CIFS network file system access to multiple local PV FTP servers.
You can then add a second iSCSI server later and use DRBD 8.X in a multiple primary setup doing active-active block-level replication between two storage servers, which then provides storage for two Xen server front-ends that can use something like heartbeat to fail-over virtual machines in the event of a Xen server failure.
Then you can use MPIO in round-robin or fail-over mode (or a combination) between the two back-end iSCSI servers and the two front-end Xen servers.
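The MPIO piece of that design maps onto Linux device-mapper multipath; a minimal /etc/multipath.conf sketch (the WWID and alias are invented for illustration):

```shell
# /etc/multipath.conf -- treat all paths to the LUN as one group and
# round-robin I/O across them; fail back as soon as a path returns.
defaults {
        path_grouping_policy    multibus
        path_selector           "round-robin 0"
        failback                immediate
}
multipaths {
        multipath {
                wwid    36006016012345678000000000000abcd   # hypothetical
                alias   ftpstore
        }
}
```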
-----snip----------
<snip>
Ross S. W. Walker wrote:
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Karl R. Balsmeier Sent: Tuesday, May 22, 2007 2:28 PM To: centos@centos.org Subject: [CentOS] Vsftpd & Iscsi - fast enough
To extend space on our 64-bit CentOS FTP servers, we are considering setting them up to work with an existing Promise RAID system via iSCSI.
-just curious if anyone here knows if iSCSI is fast enough to serve up all the images?
-might it be something where we get dedicated cards and put the iSCSI traffic on its own VLAN?
Just curious if this would hold up under a lot of traffic, like weather images and the like...
The ability of iSCSI to support high throughput depends on:
- How the back-end storage being served up by iSCSI is configured
- How the network interconnects between the iSCSI targets and initiators are configured
- How well the FTP software does at reading the data from disk and pumping it out the network
1 Gbps Ethernet can handle up to ~115 MB/s per interface. Using MPIO round-robin over several interfaces, you can continue to add throughput if the application can scale well across these multiple paths.
-Ross
hey this part is fascinating, -so how would one practically deploy this, -say 4 GbE NICs and some supported hardware? for traffic 100 - 200 megs daily perhaps this is too much? On the storage side, not sure if MPIO will auto-detect the device, but maybe it'll see it. -wonder if vsftpd would play well with all of this.
-krb
Karl R. Balsmeier wrote:
The ability of iSCSI to support high throughput depends on:
- How the back-end storage being served up by iSCSI is configured
- How the network interconnects between the iSCSI targets and initiators are configured
- How well the FTP software does at reading the data from disk and pumping it out the network
1 Gbps Ethernet can handle up to ~115 MB/s per interface. Using MPIO round-robin over several interfaces, you can continue to add throughput if the application can scale well across these multiple paths.
-Ross
hey this part is fascinating, -so how would one practically deploy this, -say 4 GbE NICs and some supported hardware? for traffic 100 - 200 megs daily perhaps this is too much?
There's no such thing as 'too fast', but do you really need to complete your daily transfer in less than a second? On the practical side the underlying disks aren't going to be that fast anyway.
On the storage side, not sure if MPIO will auto-detect the device, but maybe it'll see it. -wonder if Vsftpd would play well with all of this.
If you put a filesystem on an iscsi target and mount it, vsftpd won't know/care about the actual device type.
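To make that concrete, a hypothetical sequence once the iSCSI session is up (device name and mount point invented for illustration):

```shell
# The iSCSI LUN appears as an ordinary block device, e.g. /dev/sdb.
mkfs.ext3 /dev/sdb1
mount /dev/sdb1 /var/ftp/pub

# For boot time, use _netdev in /etc/fstab so the mount waits for
# networking (and the iscsi service) to come up first:
#   /dev/sdb1  /var/ftp/pub  ext3  _netdev,defaults  0 0
```

vsftpd just serves files from /var/ftp/pub; nothing in its configuration has to know the backing store is iSCSI.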
On 5/23/07, Les Mikesell lesmikesell@gmail.com wrote:
hey this part is fascinating, -so how would one practically deploy this, -say 4 GbE NICs and some supported hardware? for traffic 100 - 200 megs daily perhaps this is too much?
There's no such thing as 'too fast', but do you really need to complete your daily transfer in less than a second? On the practical side the underlying disks aren't going to be that fast anyway.
You might if you have thousands of requests per second!!!
As a side note, when using the Promise VTrak and iSCSI it supports multipath.
-matt
Matt Shields wrote:
On 5/23/07, Les Mikesell lesmikesell@gmail.com wrote:
hey this part is fascinating, -so how would one practically deploy this, -say 4 GbE NICs and some supported hardware? for traffic 100 - 200 megs daily perhaps this is too much?
There's no such thing as 'too fast', but do you really need to complete your daily transfer in less than a second? On the practical side the underlying disks aren't going to be that fast anyway.
You might if you have thousands of requests per second!!!
A 200 Meg file is likely to be completely cached in the ftp server's RAM - and the cheapest way to get performance is to be sure that happens.
As a side note, when using the Promise VTrak and iSCSI it supports multipath.
It would probably be simpler to provide a separate interface or two for the ftp server <-> storage network than to go too crazy with multipathing. And for an ftp server you shouldn't need a great deal more speed on the filesystem side than you have on client connection side.
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Les Mikesell Sent: Wednesday, May 23, 2007 9:10 AM To: CentOS mailing list Subject: Re: [CentOS] Vsftpd & Iscsi - fast enough
Matt Shields wrote:
On 5/23/07, Les Mikesell lesmikesell@gmail.com wrote:
hey this part is fascinating, -so how would one practically deploy this, -say 4 GbE NICs and some supported hardware? for traffic 100 - 200 megs daily perhaps this is too much?
There's no such thing as 'too fast', but do you really need to complete your daily transfer in less than a second? On the practical side the underlying disks aren't going to be that fast anyway.
You might if you have thousands of requests per second!!!
A 200 Meg file is likely to be completely cached in the ftp server's RAM - and the cheapest way to get performance is to be sure that happens.
Yes, cache is king here, whether it be FTP, CIFS, or SQL DBs.
Think 64-bit, and as much RAM as you can afford/fit into it.
As a side note, when using the Promise VTrak and iSCSI it supports multipath.
It would probably be simpler to provide a separate interface or two for the ftp server <-> storage network than to go too crazy with multipathing. And for an ftp server you shouldn't need a great deal more speed on the filesystem side than you have on client connection side.
How about two 4-port e1000 cards, PCI-X 133 if you have 2 PCI-X 133 slots... If that is over-kill, then 2 2-port cards.
You can then mix-up bonding and multi-pathing for SAN and Internet traffic and have 2 separate cards for redundancy, though I have yet to see a network card fail; in my experience memory, storage HBAs/disks, graphics cards and the occasional motherboard seem to be the biggest culprits.
I would definitely keep the SAN traffic, Internet traffic and system management traffic separate.
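One way to carve out that separation on CentOS is 802.1q tagged interfaces rather than extra cards; a sketch of an ifcfg file for a dedicated SAN VLAN (VLAN ID and addressing invented for illustration):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth2.100
DEVICE=eth2.100
VLAN=yes
BOOTPROTO=none
IPADDR=192.168.100.21
NETMASK=255.255.255.0
ONBOOT=yes
```

Physically separate NICs, as suggested above, are still the stronger isolation; VLANs only separate the traffic logically.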
-Ross
We have a number of the Promise iSCSI M500i chassis and we love them. Excellent speed. We currently have them on the same gig switches that our servers are on and haven't noticed any problem (yet). The one thing I would tell you: never, never, never restart the Linux networking service while data is being written to one of the volumes. We corrupted a volume by restarting networking. First unmount that volume, then restart networking.
-matt
On 5/22/07, Karl R. Balsmeier karl@klxsystems.net wrote:
To extend space on our 64-bit CentOS FTP servers, we are considering setting them up to work with an existing Promise RAID system via iSCSI.
-just curious if anyone here knows if iSCSI is fast enough to serve up all the images?
-might it be something where we get dedicated cards and put the iSCSI traffic on its own VLAN?
Just curious if this would hold up under a lot of traffic, like weather images and the like...
-karl
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Matt Shields Sent: Tuesday, May 22, 2007 3:50 PM To: CentOS mailing list Subject: Re: [CentOS] Vsftpd & Iscsi - fast enough
We have a number of the Promise iSCSI M500i chassis and we love them. Excellent speed. We currently have them on the same gig switches that our servers are on and haven't noticed any problem (yet). The one thing I would tell you: never, never, never restart the Linux networking service while data is being written to one of the volumes. We corrupted a volume by restarting networking. First unmount that volume, then restart networking.
Are these iSCSI target enclosures or HBAs? SAS/SATA?
Interested in knowing more.
There is also iSCSI Enterprise Target. I use it here with good results.
<snip>
SATA array with iSCSI
On 5/22/07, Ross S. W. Walker rwalker@medallion.com wrote:
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Matt Shields Sent: Tuesday, May 22, 2007 3:50 PM To: CentOS mailing list Subject: Re: [CentOS] Vsftpd & Iscsi - fast enough
We have a number of the Promise iSCSI M500i chassis and we love them. Excellent speed. We currently have them on the same gig switches that our servers are on and haven't noticed any problem (yet). The one thing I would tell you: never, never, never restart the Linux networking service while data is being written to one of the volumes. We corrupted a volume by restarting networking. First unmount that volume, then restart networking.
Are these iSCSI target enclosures or HBAs? SAS/SATA?
Interested in knowing more.
There is also iSCSI Enterprise Target. I use it here with good results.
<snip>