Has anyone successfully set up and used CentOS as an iSCSI server? I'm trying to set up a server with 4x500GB HDDs, configured in RAID 10, to act as an iSCSI server for a virtualization project, but I can't find a decent howto on setting up an iSCSI server using CentOS.
I would like to set up something like OpenFiler, but we also need to do some other stuff that OpenFiler doesn't support, so I would prefer to export some of the HDD space (about 500GB) as iSCSI LUNs.
Rudi Ahlers wrote:
Has anyone successfully set up and used CentOS as an iSCSI server? I'm trying to set up a server with 4x500GB HDDs, configured in RAID 10, to act as an iSCSI server for a virtualization project, but I can't find a decent howto on setting up an iSCSI server using CentOS.
I would like to set up something like OpenFiler, but we also need to do some other stuff that OpenFiler doesn't support, so I would prefer to export some of the HDD space (about 500GB) as iSCSI LUNs.
Can I suggest ZFS on Solaris/OpenSolaris? A real breeze to set up.
As for Linux, it has been a while, but are there still two iscsi-target implementations? Has either of them made it into the mainline kernel (mainline Linux, not Red Hat - although if Red Hat supports one implementation, I guess it does not really matter whether mainline has it or not)?
Chan Chung Hang Christopher wrote:
Rudi Ahlers wrote:
Has anyone successfully set up and used CentOS as an iSCSI server? I'm trying to set up a server with 4x500GB HDDs, configured in RAID 10, to act as an iSCSI server for a virtualization project, but I can't find a decent howto on setting up an iSCSI server using CentOS.
I would like to set up something like OpenFiler, but we also need to do some other stuff that OpenFiler doesn't support, so I would prefer to export some of the HDD space (about 500GB) as iSCSI LUNs.
Can I suggest ZFS on Solaris/OpenSolaris? A real breeze to set up.
Indeed. But the problem is: this is a CentOS list and I'm afraid people just don't want to hear an answer that involves installing a different OS. Just like Windoze users don't want to hear about other OSs ;-)
As for Linux, it has been a while, but are there still two iscsi-target implementations? Has either of them made it into the mainline kernel (mainline Linux, not Red Hat - although if Red Hat supports one implementation, I guess it does not really matter whether mainline has it or not)?
CentOS inherits Red Hat's implementation (I don't know the details). We use only the iSCSI initiator, though, and that part seems to work OK for what we use it for.
Rainer
Rainer Duffner wrote:
Chan Chung Hang Christopher wrote:
Rudi Ahlers wrote:
Has anyone successfully set up and used CentOS as an iSCSI server? I'm trying to set up a server with 4x500GB HDDs, configured in RAID 10, to act as an iSCSI server for a virtualization project, but I can't find a decent howto on setting up an iSCSI server using CentOS.
I would like to set up something like OpenFiler, but we also need to do some other stuff that OpenFiler doesn't support, so I would prefer to export some of the HDD space (about 500GB) as iSCSI LUNs.
Can I suggest ZFS on Solaris/OpenSolaris? A real breeze to set up.
Indeed. But the problem is: this is a CentOS list and I'm afraid people just don't want to hear an answer that involves installing a different OS. Just like Windoze users don't want to hear about other OSs ;-)
Even if there are no CentOS solutions besides roll-your-own? Too bad. I am all for using the right tool for the job. The brand of the tool does not really matter.
As for Linux, it has been a while, but are there still two iscsi-target implementations? Has either of them made it into the mainline kernel (mainline Linux, not Red Hat - although if Red Hat supports one implementation, I guess it does not really matter whether mainline has it or not)?
CentOS inherits Red Hat's implementation (I don't know the details). We use only the iSCSI initiator, though, and that part seems to work OK for what we use it for.
However, the OP is looking for an iscsi-target... which, if I am not wrong, does not quite exist yet in CentOS/RHEL.
You're right, I am looking to set up an iSCSI target, but I couldn't find a working tutorial, and this is very, very new to me.
The one I found is this one:
http://www.cyberciti.biz/tips/howto-setup-linux-iscsi-target-sanwith-tgt.htm...
Yet, I can't get it to work.
[root@localhost ~]# iscsiadm -m discovery -t sendtargets -p 192.168.0.53
iscsiadm: cannot make connection to 192.168.0.53:3260 (111)
iscsiadm: connection to discovery address 192.168.0.53 failed
iscsiadm: cannot make connection to 192.168.0.53:3260 (111)
iscsiadm: connection to discovery address 192.168.0.53 failed
iscsiadm: caught SIGINT, exiting...
[root@localhost ~]# iscsiadm -m discovery -t sendtargets -p 127.0.0.1
iscsiadm: cannot make connection to 127.0.0.1:3260 (111)
iscsiadm: connection to discovery address 127.0.0.1 failed
iscsiadm: cannot make connection to 127.0.0.1:3260 (111)
iscsiadm: connection to discovery address 127.0.0.1 failed
iscsiadm: caught SIGINT, exiting...
Restarting /etc/init.d/iscsid doesn't give me any errors, and /var/log/messages doesn't either:
[root@localhost ~]# tail -f /var/log/messages
Sep 8 17:41:34 localhost iscsid: iSCSI daemon with pid=13986 started!
Sep 8 17:44:40 localhost kernel: Removing netfilter NETLINK layer.
Sep 8 17:49:07 localhost iscsid: iscsid shutting down.
Sep 8 17:49:07 localhost kernel: Loading iSCSI transport class v2.0-724.
Sep 8 17:49:07 localhost kernel: iscsi: registered transport (tcp)
Sep 8 17:49:07 localhost kernel: iscsi: registered transport (iser)
Sep 8 17:49:07 localhost iscsid: iSCSI logger with pid=14373 started!
Sep 8 17:49:08 localhost iscsid: transport class version 2.0-724. iscsid version 2.0-868
Sep 8 17:49:08 localhost iscsid: iSCSI daemon with pid=14374 started!
Sep 8 17:49:13 localhost iscsid: send fail Connection refused
Sep 8 17:54:47 localhost iscsid: iscsid shutting down.
Sep 8 17:54:47 localhost kernel: Loading iSCSI transport class v2.0-724.
Sep 8 17:54:47 localhost kernel: iscsi: registered transport (tcp)
Sep 8 17:54:47 localhost kernel: iscsi: registered transport (iser)
Sep 8 17:54:47 localhost iscsid: iSCSI logger with pid=14603 started!
Sep 8 17:54:48 localhost iscsid: transport class version 2.0-724. iscsid version 2.0-868
Sep 8 17:54:48 localhost iscsid: iSCSI daemon with pid=14604 started!
The only changes I've made are these:
# To enable CHAP authentication set node.session.auth.authmethod
# to CHAP. The default is None.
node.session.auth.authmethod = CHAP

# To set a CHAP username and password for initiator
# authentication by the target(s), uncomment the following lines:
node.session.auth.username = nas1
node.session.auth.password = password
discovery.sendtargets.auth.username = nas1
discovery.sendtargets.auth.password = password
The only errors I get are from the iscsi service itself:
[root@localhost ~]# /etc/init.d/iscsi start
iscsid (pid 15037 15036) is running...
Setting up iSCSI targets: iscsiadm: No records found!  [ OK ]
[root@localhost ~]#
On Tue, Sep 8, 2009 at 1:35 PM, Joseph L. Casale <JCasale@activenetwerx.com> wrote:
[root@localhost ~]# /etc/init.d/iscsi start
iscsid (pid 15037 15036) is running...
Setting up iSCSI targets: iscsiadm: No records found!
Well, have you exported any block devices? Also don't forget to open the firewall.
No. I don't know what exactly to set up, which is why I'm looking for a howto :)
The server is set up with software RAID 1+0 (i.e. RAID 1 + LVM), and I have a 900GB LVM volume as /data.
How, or where would I export that as an iSCSI volume?
[root@localhost ~]# lvscan
ACTIVE   '/dev/nas1/root' [20.00 GB] inherit
ACTIVE   '/dev/nas1/data' [883.41 GB] inherit
ACTIVE   '/dev/nas1/home' [20.00 GB] inherit
ACTIVE   '/dev/nas1/swap' [8.00 GB] inherit
The firewall is off, for now.
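For what it's worth, the (111) in the iscsiadm output above is ECONNREFUSED: nothing is listening on port 3260 at all, which points at the target side rather than at the initiator or the firewall. A quick sanity check on the target box (standard CentOS 5 tools):

[root@localhost ~]# netstat -tln | grep 3260    # no output means nothing is listening
[root@localhost ~]# service tgtd status         # is the target daemon running at all?
[root@localhost ~]# chkconfig --list tgtd       # will it start again after a reboot?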
No. I don't know what exactly to set up, which is why I'm looking for a howto :)
Rudi, how exactly did you expect the daemon to know what you wanted to export without telling it? Did you expect starting the daemon w/o any config to yield no errors? :)
For RHEL's target: http://kbase.redhat.com/faq/docs/DOC-15154
Maybe what you missed from the tutorial you posted at http://www.cyberciti.biz/tips/howto-setup-linux-iscsi-target-sanwith-tgt.htm... is that the config isn't persistent when you enter commands at the console. If you restart the service, it comes up unconfigured.
jlc
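To illustrate Joseph's point about persistence: that howto drives tgtd with runtime tgtadm commands, roughly like the sketch below (the IQN is a made-up example; the backing device matches the lvscan output above). Everything created this way lives only in the daemon's memory and vanishes when tgtd restarts:

# create target id 1 (the IQN here is an invented example)
tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2009-09.com.example:nas1.data
# attach the LV as LUN 1 of that target
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/nas1/data
# allow any initiator to connect (fine for testing only)
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

To survive a service restart, the equivalent definition has to live in /etc/tgt/targets.conf instead (an example appears later in the thread).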
On Mon, Sep 7, 2009 at 5:04 PM, Chan Chung Hang Christopher <christopher.chan@bradbury.edu.hk> wrote:
Rudi Ahlers wrote:
Has anyone successfully set up and used CentOS as an iSCSI server? I'm trying to set up a server with 4x500GB HDDs, configured in RAID 10, to act as an iSCSI server for a virtualization project, but I can't find a decent howto on setting up an iSCSI server using CentOS.
I would like to set up something like OpenFiler, but we also need to do some other stuff that OpenFiler doesn't support, so I would prefer to export some of the HDD space (about 500GB) as iSCSI LUNs.
Can I suggest ZFS on Solaris/OpenSolaris? A real breeze to set up.
As for Linux, it has been a while, but are there still two iscsi-target implementations? Has either of them made it into the mainline kernel (mainline Linux, not Red Hat - although if Red Hat supports one implementation, I guess it does not really matter whether mainline has it or not)?
Chan, I already have CentOS 5.3 set up, and we need to use it as far as possible, due to some of the other software that we'll be using.
On Mon, 7 Sep 2009, Chan Chung Hang Christopher wrote:
Chan, I already have CentOS 5.3 set up, and we need to use it as far as possible, due to some of the other software that we'll be using.
See Joseph Casale's post then. It is not quite available on CentOS. Roll-your-own is the name of the game.
As I replied earlier, this is not true and has not been true for over a year.
On Tue, Sep 8, 2009 at 4:24 AM, Jim Wildman <jim@rossberry.com> wrote:
On Mon, 7 Sep 2009, Chan Chung Hang Christopher wrote:
Chan, I already have CentOS 5.3 set up, and we need to use it as far as possible, due to some of the other software that we'll be using.
See Joseph Casale's post then. It is not quite available on CentOS. Roll-your-own is the name of the game.
As I replied earlier, this is not true and has not been true for over a year.
Can I suggest ZFS on Solaris/OpenSolaris? A real breeze to set up.
As for Linux, it has been a while, but are there still two iscsi-target implementations? Has either of them made it into the mainline kernel (mainline Linux, not Red Hat - although if Red Hat supports one implementation, I guess it does not really matter whether mainline has it or not)?
There are serious performance issues with ZFS under iSCSI on Solaris/OpenSolaris at the moment, which require gobs of cash to fix. See rbourbon's post in this thread: http://opensolaris.org/jive/thread.jspa?threadID=111286&tstart=0
As for iSCSI on CentOS, I use IET rather than tgt, as the boxed instance leaves lots to be done manually. IET is actively developed by some bright people and is well tested, widely used, and stable. I can assure you IET works rock solid; I have it exporting block devices to ESXi, *nix, and Windows without ever missing a beat.
Also, according to http://kbase.redhat.com/faq/docs/DOC-15154, tgt is still only a Technology Preview, so you wouldn't expect it to be complete yet.
jlc
Joseph L. Casale wrote:
Can I suggest ZFS on Solaris/OpenSolaris? A real breeze to set up.
As for Linux, it has been a while, but are there still two iscsi-target implementations? Has either of them made it into the mainline kernel (mainline Linux, not Red Hat - although if Red Hat supports one implementation, I guess it does not really matter whether mainline has it or not)?
There are serious performance issues with ZFS under iSCSI on Solaris/OpenSolaris at the moment, which require gobs of cash to fix. See rbourbon's post in this thread: http://opensolaris.org/jive/thread.jspa?threadID=111286&tstart=0
As for iSCSI on CentOS, I use IET rather than tgt, as the boxed instance leaves lots to be done manually. IET is actively developed by some bright people and is well tested, widely used, and stable. I can assure you IET works rock solid; I have it exporting block devices to ESXi, *nix, and Windows without ever missing a beat.
Thanks for the update.
Also, according to http://kbase.redhat.com/faq/docs/DOC-15154, tgt is still only a Technology Preview, so you wouldn't expect it to be complete yet.
Did you install your iet from rpms or something then?
On Sep 7, 2009, at 11:40 AM, "Joseph L. Casale" <JCasale@activenetwerx.com> wrote:
Did you install your iet from rpms or something then?
No, but it looks like Ross Walker has created an updated spec in the source. It's the *only* thing I don't use an RPM for, as there isn't anyone with an updated repo; I think ATrpms is behind, but I haven't checked recently.
I don't remember if the RPM spec was in the 0.4.17 release or whether it was added to the SVN shortly after. There is also a dkms conf file for adding it to DKMS.
We're almost ready to cut a 0.4.18 release, which has much-improved compatibility geared towards ESX/XenServer and MSCS. I'm hoping this month.
Personally, I like DKMS at the moment, but I'm going to try to get a kABI-tracking version into 'extras' shortly after 0.4.18 is released.
It really is trivial to compile and/or build an RPM out of.
-Ross
Joseph L. Casale wrote:
Did you install your iet from rpms or something then?
No, but it looks like Ross Walker has created an updated spec in the source. It's the *only* thing I don't use an RPM for, as there isn't anyone with an updated repo; I think ATrpms is behind, but I haven't checked recently.
It appears from the link Ralph posted that the Technology Preview has now become a Product Enhancement:
http://rhn.redhat.com/errata/RHEA-2009-0099.html
Maybe that takes away some of the manual stuff. Joseph vouches for IET's stability. Has anybody given tgt a run and can comment on tgt's performance and stability?
On Sep 7, 2009, at 8:12 PM, Christopher Chan <christopher.chan@bradbury.edu.hk> wrote:
Joseph L. Casale wrote:
Did you install your iet from rpms or something then?
No, but it looks like Ross Walker has created an updated spec in the source. It's the *only* thing I don't use an RPM for, as there isn't anyone with an updated repo; I think ATrpms is behind, but I haven't checked recently.
It appears from the link Ralph posted that the Technology Preview has now become a Product Enhancement:
http://rhn.redhat.com/errata/RHEA-2009-0099.html
Maybe that takes away some of the manual stuff. Joseph vouches for IET's stability. Has anybody given tgt a run and can comment on tgt's performance and stability?
TGT is stable; performance is slightly less than IET's due to running in user space (no zero copy), and configuration is slightly more complex because it is a general SCSI target rather than a dedicated iSCSI target.
It depends on one's needs and preferences.
-Ross
From: Rudi Ahlers <Rudi@SoftDux.com>
Has anyone successfully set up and used CentOS as an iSCSI server? I'm trying to set up a server with 4x500GB HDDs, configured in RAID 10, to act as an iSCSI server for a virtualization project, but I can't find a decent howto on setting up an iSCSI server using CentOS.
Google with 'centos iscsi howto' returns quite a few howtos... none of them work?
JD
Has anyone successfully set up and used CentOS as an iSCSI server? I'm trying to set up a server with 4x500GB HDDs, configured in RAID 10, to act as an iSCSI server for a virtualization project, but I can't find a decent howto on setting up an iSCSI server using CentOS.
I would like to set up something like OpenFiler, but we also need to do some other stuff that OpenFiler doesn't support, so I would prefer to export some of the HDD space (about 500GB) as iSCSI LUNs.
Yes, just last week I set this up.
yum install scsi-target-utils
chkconfig tgtd on
edit /etc/tgt/targets.conf
service tgtd start
Works from a Windows client just fine.
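For anyone following along, a minimal /etc/tgt/targets.conf for this recipe might look like the sketch below. The IQN is a made-up example, the backing store reuses the LV from earlier in the thread, and the incominguser line mirrors the CHAP credentials from the iscsid.conf snippet the OP posted:

<target iqn.2009-09.com.example:nas1.data>
    backing-store /dev/nas1/data    # block device exported as LUN 1
    incominguser nas1 password      # CHAP user/password the initiator must present
</target>

After `service tgtd start`, the client side is the usual open-iscsi two-step (portal address invented for illustration):

iscsiadm -m discovery -t sendtargets -p 192.168.0.53
iscsiadm -m node -T iqn.2009-09.com.example:nas1.data -p 192.168.0.53 --login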
On Mon, 7 Sep 2009, Rudi Ahlers wrote:
Has anyone successfully set up and used CentOS as an iSCSI server? I'm trying to set up a server with 4x500GB HDDs, configured in RAID 10, to act as an iSCSI server for a virtualization project, but I can't find a decent howto on setting up an iSCSI server using CentOS.
I would like to set up something like OpenFiler, but we also need to do some other stuff that OpenFiler doesn't support, so I would prefer to export some of the HDD space (about 500GB) as iSCSI LUNs.
See slides and scripts here: http://www.colug.net/notes/0810mtg/
On Mon, Sep 7, 2009 at 10:57 AM, Rudi Ahlers <Rudi@softdux.com> wrote:
I would like to set up something like OpenFiler, but we also need to do some other stuff that OpenFiler doesn't support, so I would prefer to export some of the HDD space (about 500GB) as iSCSI LUNs.
Sorry for the thread necromancy here, but can you tell me what was missing from OpenFiler that you needed?
I'm looking down this path right now and will be sending a few related questions to the list.
On Tue, Oct 20, 2009 at 6:09 PM, Alan McKay <alan.mckay@gmail.com> wrote:
On Mon, Sep 7, 2009 at 10:57 AM, Rudi Ahlers <Rudi@softdux.com> wrote:
I would like to set up something like OpenFiler, but we also need to do some other stuff that OpenFiler doesn't support, so I would prefer to export some of the HDD space (about 500GB) as iSCSI LUNs.
Sorry for the thread necromancy here, but can you tell me what was missing from OpenFiler that you needed?
I'm looking down this path right now and will be sending a few related questions to the list.
Simple: it's only a NAS device, and not really a file server / web server / database server as well. The purpose I needed it for was to replace SMB on the network, and iSCSI seemed like a good alternative. The server in question is a dev server, which I thought would be beneficial to set up as an iSCSI server as well, connecting other servers to its storage and thus consolidating the storage on it :)
Rudi Ahlers wrote:
Simple: it's only a NAS device, and not really a file server / web server / database server as well. The purpose I needed it for was to replace SMB on the network, and iSCSI seemed like a good alternative. The server in question is a dev server, which I thought would be beneficial to set up as an iSCSI server as well, connecting other servers to its storage and thus consolidating the storage on it :)
Whoa. iSCSI is *NOT* a NAS/SMB replacement.
iSCSI is a SAN replacement, a low-budget (and lower-performance) alternative to Fibre Channel. A given iSCSI target volume can only be accessed by a single initiator (client) at a time, unless you're running some sort of cluster file system that supports shared block devices.
On Tue, Oct 20, 2009 at 10:12 PM, John R Pierce <pierce@hogranch.com> wrote:
Rudi Ahlers wrote:
Simple: it's only a NAS device, and not really a file server / web server / database server as well. The purpose I needed it for was to replace SMB on the network, and iSCSI seemed like a good alternative. The server in question is a dev server, which I thought would be beneficial to set up as an iSCSI server as well, connecting other servers to its storage and thus consolidating the storage on it :)
Whoa. iSCSI is *NOT* a NAS/SMB replacement.
iSCSI is a SAN replacement, a low-budget (and lower-performance) alternative to Fibre Channel. A given iSCSI target volume can only be accessed by a single initiator (client) at a time, unless you're running some sort of cluster file system that supports shared block devices.
John, you're right. iSCSI isn't an SMB replacement, as I have learned through all of this. SMB is good for sharing data between many PCs, and even servers, but from what I understand it's also slower than iSCSI and won't allow me to scale the storage by simply adding another cheap server to the network. With iSCSI I could / should be able to do that.
Or am I approaching this from a different angle? If I wanted to set up a server to serve content (in this case file storage, www, email & SQL) to a network of computers, would iSCSI have served the purpose? Or should I have kept using SMB? I am looking for a way to quickly expand the whole setup, though. If we need more space, then I just want to add another cheap server with a 1TB HDD and have it available on the network. Is my impression correct that I could use iSCSI, probably together with XFS, to accomplish this?
Rudi Ahlers wrote:
John, you're right. iSCSI isn't an SMB replacement, as I have learned through all of this. SMB is good for sharing data between many PCs, and even servers, but from what I understand it's also slower than iSCSI and won't allow me to scale the storage by simply adding another cheap server to the network. With iSCSI I could / should be able to do that.
iSCSI is just a protocol. It doesn't say anything about the underlying storage. http://en.wikipedia.org/wiki/ISCSI
You can have a SAN (HP EVA, EMC, whatever) and still have it served via iSCSI by an additional piece of hardware you buy. Or you can have a NAS like NetApp and buy another of those ridiculously expensive licenses, and then you can serve iSCSI with that, too.
What your storage looks like doesn't matter. The initiators just talk the iSCSI-protocol to the target.
Or am I approaching this from a different angle? If I wanted to set up a server to serve content (in this case file storage, www, email & SQL) to a network of computers, would iSCSI have served the purpose?
Yes. Although, since iSCSI uses the Ethernet network, you need good switches, because your actual data traffic has to go through the same network.
Or should I have kept using SMB? I am looking for a way to quickly expand the whole setup, though. If we need more space, then I just want to add another cheap server with a 1TB HDD and have it available on the network. Is my impression correct that I could use iSCSI, probably together with XFS, to accomplish this?
No, it doesn't quite work like that - at least not in any trivial setup that doesn't involve some storage-virtualization software.
If you can afford it, NetApp is a good solution for what you want to achieve. Or try one of the new SUN storage boxes.
If that is out of your (financial) league, you can build the same functionality as the SUN OpenStorage boxes with your own hardware and OpenSolaris - although you will not have the extensive analytics and the ease of use of the GUI. I wouldn't use any of the cheap SOHO NASes mentioned before in this thread. Build your own from HP, Dell or IBM hardware and preferably OpenSolaris. (I know, sounds weird on a CentOS-list).
For example: if your cheap "NAS"'s storage controller dies, are you sure that the replacement unit's controller can actually read the data? Also, if you have data on iSCSI, you really need hardware with almost no unplanned downtime. Even planned downtime can be difficult to manage, because so many servers depend on the iSCSI targets, and you'd have to shut down maybe a dozen or more servers for that single reboot.
Centralized storage is nice for management and backup, offers a lot of possibilities regarding efficiency and utilization - but tends to create single points of failure that just don't exist with direct attached storage.
Rainer
On Wed, Oct 21, 2009 at 10:36 AM, Rainer Duffner <rainer@ultra-secure.de> wrote:
Rudi Ahlers wrote:
John, you're right. iSCSI isn't an SMB replacement, as I have learned through all of this. SMB is good for sharing data between many PCs, and even servers, but from what I understand it's also slower than iSCSI and won't allow me to scale the storage by simply adding another cheap server to the network. With iSCSI I could / should be able to do that.
iSCSI is just a protocol. It doesn't say anything about the underlying storage. http://en.wikipedia.org/wiki/ISCSI
You can have a SAN (HP EVA, EMC, whatever) and still have it served via iSCSI by an additional piece of hardware you buy. Or you can have a NAS like NetApp and buy another of those ridiculously expensive licenses, and then you can serve iSCSI with that, too.
What your storage looks like doesn't matter. The initiators just talk the iSCSI-protocol to the target.
Or am I approaching this from a different angle? If I wanted to set up a server to serve content (in this case file storage, www, email & SQL) to a network of computers, would iSCSI have served the purpose?
Yes. Although, since iSCSI uses the Ethernet network, you need good switches, because your actual data traffic has to go through the same network.
Or should I have kept using SMB? I am looking for a way to quickly expand the whole setup, though. If we need more space, then I just want to add another cheap server with a 1TB HDD and have it available on the network. Is my impression correct that I could use iSCSI, probably together with XFS, to accomplish this?
No, it doesn't quite work like that - at least not in any trivial setup that doesn't involve some storage-virtualization software.
If you can afford it, NetApp is a good solution for what you want to achieve. Or try one of the new SUN storage boxes.
If that is out of your (financial) league, you can build the same functionality as the SUN OpenStorage boxes with your own hardware and OpenSolaris - although you will not have the extensive analytics and the ease of use of the GUI. I wouldn't use any of the cheap SOHO NASes mentioned before in this thread. Build your own from HP, Dell or IBM hardware and preferably OpenSolaris. (I know, sounds weird on a CentOS-list).
For example: if your cheap "NAS"'s storage controller dies, are you sure that the replacement unit's controller can actually read the data? Also, if you have data on iSCSI, you really need hardware with almost no unplanned downtime. Even planned downtime can be difficult to manage, because so many servers depend on the iSCSI targets, and you'd have to shut down maybe a dozen or more servers for that single reboot.
Centralized storage is nice for management and backup, offers a lot of possibilities regarding efficiency and utilization - but tends to create single points of failure that just don't exist with direct attached storage.
Rainer
Hi Rainer,
I honestly don't want to spend a lot of cash on a proprietary system like NetApp, and actually want to use a lot of old tower machines (i.e. limited space for hard drives, no redundancy, slower CPUs, etc.) that we already have. CentOS is my OS of choice, and I don't know Solaris at all. I could probably give it a go, but not right now.
The setup I'm hoping to achieve is as follows: We develop a lot of PHP + MySQL based intranet and internet applications, so the main server currently runs Apache + PHP + MySQL + Zend, etc.
Some of the applications require large volumes of data, which is currently saved on the Samba server. This makes it easy, as anyone on the LAN can add / remove data on the SMB server, and the PHP app can also access it. But I still have a problem: if the storage runs out and I add another box to the network, then it's a different server with a new storage point - not ideal.
I was hoping, with iSCSI, to join these storage servers into one large storage volume, together with XFS (or ClusterFS / GlusterFS?), and thus have everyone connect to one "central server" for development, file storage and even email. Everything runs on Gigabit switches, so that's not a problem, and redundancy isn't the highest issue either; I'm not too concerned with that for this particular project.
So, trying to use existing hardware, and preferably CentOS (I would prefer not to reinstall the server right now), what else (if iSCSI isn't right) should I use if I want to consolidate the storage of a few Linux machines and export it over the LAN to various workstations?
Another project altogether though would require a similar setup with cheap central storage server(s) at a data centre - but this will purely be a storage server for XEN virtual machines to connect to, and store backup data. For this OpenFiler works very well at the moment.
Rudi Ahlers wrote:
Hi Rainer,
I honestly don't want to spend a lot of cash on a proprietary system like NetApp, and actually want to use a lot of old tower machines (i.e. limited space for hard drives, no redundancy, slower CPUs, etc.) that we already have. CentOS is my OS of choice, and I don't know Solaris at all. I could probably give it a go, but not right now.
The setup I'm hoping to achieve is as follows: We develop a lot of PHP + MySQL based intranet and internet applications, so the main server currently runs Apache + PHP + MySQL + Zend, etc.
Some of the applications require large volumes of data, which is currently saved on the Samba server. This makes it easy, as anyone on the LAN can add / remove data on the SMB server, and the PHP app can also access it. But I still have a problem: if the storage runs out and I add another box to the network, then it's a different server with a new storage point - not ideal.
That means you either need a bigger central server or something like pNFS or Lustre:
http://www.opensolaris.org/os/project/nfsv41/
http://en.wikipedia.org/wiki/Lustre_(file_system)
The latter is also owned by Sun now.
The stuff you want is really mainly found in gear provided by vendors that supply storage for HPC-clusters...
So, trying to use existing hardware, and preferably CentOS (I would prefer not to reinstall the server right now), what else (if iSCSI isn't right) should I use if I want to consolidate the storage of a few Linux machines and export it over the LAN to various workstations?
I'm not sure what the status of pNFS is in Linux (given the fact that NFS on Linux has only relatively recently "matured").
cheers, Rainer
On Oct 21, 2009, at 5:38 AM, Rudi Ahlers <rudiahlers@gmail.com> wrote:
Hi Rainer,
I honestly don't want to spend a lot of cash on a proprietary system like NetApp, and actually want to use a lot of old tower machines (i.e. limited space for hard drives, no redundancy, slower CPUs, etc.) that we already have. CentOS is my OS of choice, and I don't know Solaris at all. I could probably give it a go, but not right now.
The setup I'm hoping to achieve is as follows: We develop a lot of PHP + MySQL based intranet and internet applications, so the main server currently runs Apache + PHP + MySQL + Zend, etc.
Some of the applications require large volumes of data, which is currently saved on the Samba server. This makes it easy, as anyone on the LAN can add / remove data on the SMB server, and the PHP app can also access it. But I still have a problem: if the storage runs out and I add another box to the network, then it's a different server with a new storage point - not ideal.
I was hoping, with iSCSI, to join these storage servers into one large storage volume, together with XFS (or ClusterFS / GlusterFS?), and thus have everyone connect to one "central server" for development, file storage and even email. Everything runs on Gigabit switches, so that's not a problem, and redundancy isn't the highest issue either; I'm not too concerned with that for this particular project.
So, trying to use existing hardware, and preferably CentOS (I would prefer not to reinstall the server right now), what else (if iSCSI isn't right) should I use if I want to consolidate the storage of a few Linux machines and export it over the LAN to various workstations?
Another project altogether though would require a similar setup with cheap central storage server(s) at a data centre - but this will purely be a storage server for XEN virtual machines to connect to, and store backup data. For this OpenFiler works very well at the moment.
Rudi,
How about exporting these dispersed storage units via NBD or AoE to an iSCSI head server that can create a redundant array out of them using mdraid, and then re-export via iSCSI or NFS/CIFS?
You might be able to put the NBD/AoE functionality on a PXE boot image, so that if a machine boots that image it automatically exports all 'sd' devices as network block devices, and the head server can use those to build an array of network block devices.
-Ross
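A rough sketch of what Ross describes, using the classic nbd userland (host names, port, and device names are all invented for illustration):

# on each old tower: export the local disk as a network block device
nbd-server 2000 /dev/sda

# on the head server: attach each export, then build one md array over them
nbd-client tower1 2000 /dev/nbd0
nbd-client tower2 2000 /dev/nbd1
nbd-client tower3 2000 /dev/nbd2
nbd-client tower4 2000 /dev/nbd3
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3

The resulting /dev/md0 could then be carved up with LVM and re-exported via tgtd, NFS or Samba as usual.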
Rudi Ahlers wrote:
John, you're right. iSCSI isn't an SMB replacement, as I have learned through all of this. SMB is good for sharing data between many PCs, and even servers, but from what I understand it's also slower than iSCSI and won't allow me to scale the storage by simply adding another cheap server to the network. With iSCSI I could / should be able to do that.
Or am I approaching this from a different angle? If I wanted to set up a server to serve content (in this case file storage, www, email & SQL) to a network of computers, would iSCSI have served the purpose? Or should I have kept using SMB? I am looking for a way to quickly expand the whole setup, though. If we need more space, then I just want to add another cheap server with a 1TB HDD and have it available on the network. Is my impression correct that I could use iSCSI, probably together with XFS, to accomplish this?
You can, if you connect the iscsi block devices into one machine that can combine them in one or more md raid devices, put a filesystem on them, and export via nfs and/or smb to the systems that want shared space. However, the system exporting the filesystem becomes a single point of failure and you'd probably want a separate LAN with gigabit and jumbo frames for the iscsi connections for performance. In these days of cheap 2TB drives, it's pretty easy to just cram whatever storage you need into one box - or add an external drive case if it won't fit. Why not just toss an 8-port pci-X SATA card in one of those towers and fill the bays with drives?
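A sketch of Les's suggestion, assuming each storage box already exports an iSCSI target as discussed earlier (addresses and device names are invented; the /dev/sd* names depend on discovery order):

# head server: log in to each storage box's target
iscsiadm -m discovery -t sendtargets -p 192.168.0.51
iscsiadm -m node -p 192.168.0.51 --login
# ...repeat for the other boxes; each login adds a new /dev/sd* device

# combine the remote disks, then share the result over NFS
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext3 /dev/md0
mount /dev/md0 /export
echo '/export 192.168.0.0/24(rw,sync)' >> /etc/exports
service nfs restart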
On Wed, Oct 21, 2009 at 7:51 AM, Les Mikesell <lesmikesell@gmail.com> wrote:
You can, if you connect the iscsi block devices into one machine that can combine them in one or more md raid devices, put a filesystem on them, and export via nfs and/or smb to the systems that want shared space. However, the
If you did this - took a handful of machines, exported their storage via iSCSI, and had a single server taking each of those iSCSI-exported drives and combining them into a single giant md device - would the theory of redundancy still hold?
Say I had 4 devices with 500 GB drives exported using iSCSI. If a single larger server took those four iSCSI-exported drives and created one md RAID 5 device, could a single server be turned off, just degrading the array until it was either replaced entirely or brought back online?
-jonathan
Jonathan Moore wrote:
On Wed, Oct 21, 2009 at 7:51 AM, Les Mikesell <lesmikesell@gmail.com> wrote:
You can, if you connect the iscsi block devices into one machine that can combine them in one or more md raid devices, put a filesystem on them, and export via nfs and/or smb to the systems that want shared space. However, the
If you did this - took a handful of machines, exported their storage via iSCSI, and had a single server taking each of those iSCSI-exported drives and combining them into a single giant md device - would the theory of redundancy still hold?
Say I had 4 devices with 500 GB drives exported using iSCSI. If a single larger server took those four iSCSI-exported drives and created one md RAID 5 device, could a single server be turned off, just degrading the array until it was either replaced entirely or brought back online?
I suspect so. After all, it is just seen as a disk as far as md is concerned, and it will do the same thing it normally would if you unplugged a single disk from the array.
Chan Chung Hang Christopher wrote:
I suspect so. After all, it is just seen as a disk as far as md is concerned, and it will do the same thing it normally would if you unplugged a single disk from the array.
But the latency over the net is much higher. Who knows if the kernel can handle this in all situations?
Rainer
On Wed, Oct 21, 2009 at 8:47 AM, Rainer Duffner <rainer@ultra-secure.de> wrote:
But the latency over the net is much higher. Who knows if the kernel can handle this in all situations?
I could see it taking longer to notice a failed disk than it normally *should*. I wonder what type of impact this would have.
-jonathan
On Oct 21, 2009, at 9:47 AM, Rainer Duffner <rainer@ultra-secure.de> wrote:
Chan Chung Hang Christopher wrote:
I suspect so. After all, it is just seen as a disk as far as md is concerned, and it will do the same thing it normally would if you unplugged a single disk from the array.
But the latency over the net is much higher. Who knows if the kernel can handle this in all situations?
I'm sure the kernel can handle the slowness; it's the cache consistency one has to be careful with in these setups. With so many caching devices in the chain, one must make sure the write and read caches are consistent throughout.
-Ross
Ross Walker wrote:
On Oct 21, 2009, at 9:47 AM, Rainer Duffner <rainer@ultra-secure.de> wrote:
Chan Chung Hang Christopher wrote:
I suspect so. After all, it is just seen as a disk as far as md is concerned, and it will do the same thing it normally would if you unplugged a single disk from the array.
But the latency over the net is much higher. Who knows if the kernel can handle this in all situations?
I'm sure the kernel can handle the slowness; it's the cache consistency one has to be careful with in these setups. With so many caching devices in the chain, one must make sure the write and read caches are consistent throughout.
Journaled file systems should take care of the consistency issues. However, you are adding some new failure modes, and making this work depends on the right software layers seeing errors at the right time. A target disk error will probably propagate back quickly so that md can kick the device out, but what if the error is in the network connection or the OS disk on the target? Will you sit through 20 minutes of TCP retries before the upper layers see an error - and what kind of error will they get?
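For what it's worth, the initiator side does bound this: open-iscsi has timeout knobs in /etc/iscsi/iscsid.conf (the values below are the usual defaults, worth double-checking against your version):

node.conn[0].timeo.noop_out_interval = 5       # ping the target every 5 seconds
node.conn[0].timeo.noop_out_timeout = 5        # declare the connection dead after 5s of silence
node.session.timeo.replacement_timeout = 120   # queue I/O this long before failing it
                                               # upward (to md, in this scenario)

So a dead target should surface as an I/O error within a couple of minutes rather than after endless TCP retries - though, as Les says, which layer sees the error first still matters.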
On Oct 21, 2009, at 11:16 AM, Les Mikesell <lesmikesell@gmail.com> wrote:
Ross Walker wrote:
On Oct 21, 2009, at 9:47 AM, Rainer Duffner <rainer@ultra-secure.de> wrote:
Chan Chung Hang Christopher wrote:
I suspect so. After all, it is just seen as a disk as far as md is concerned, and it will do the same thing it normally would if you unplugged a single disk from the array.
But the latency over the net is much higher. Who knows if the kernel can handle this in all situations?
I'm sure the kernel can handle the slowness; it's the cache consistency one has to be careful with in these setups. With so many caching devices in the chain, one must make sure the write and read caches are consistent throughout.
Journaled file systems should take care of the consistency issues.
I'm not talking about data missing due to target/device failure; I'm talking about wrong data being returned from cache because the caches between the head server and the backing store don't agree. No journal can help that. ZFS would help identify it, but not prevent or repair it, as the wrong data would keep coming back.
-Ross
Rainer Duffner wrote:
Chan Chung Hang Christopher wrote:
I suspect so. After all, it is just seen as a disk as far as md is concerned, and it will do the same thing it normally would if you unplugged a single disk from the array.
But the latency over the net is much higher. Who knows if the kernel can handle this in all situations?
Well, if the higher latency were a problem, I suspect you would see its effects long before you even tried to 'pull' an iscsi-target.
Say I had 4 devices with 500 GB drives exported using iSCSI. If a single larger server took those four iSCSI-exported drives and created one md RAID 5 device, could a single server be turned off, just degrading the array until it was either replaced entirely or brought back online?
How long would it take to replicate and rebuild a volume across the LAN via iSCSI? If each of those slices was 500GB, you'd be reading 3 x 500GB and writing 500GB before the drive was resynced. Note: a single drive on each storage controller in this scenario won't even achieve 1GigE speeds.
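Very rough numbers: GigE tops out around 110-120 MB/s on the wire, and a single SATA drive of this class manages maybe 60-70 MB/s sustained, so writing the 500GB slice alone is on the order of 500,000 MB / 60 MB/s ≈ 8,300 seconds, i.e. well over two hours - and that's before the three parallel reads start competing for the same link and spindles.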
John R Pierce wrote:
Say I had 4 devices with 500 GB drives exported using iSCSI. If a single larger server took those four iSCSI-exported drives and created one md RAID 5 device, could a single server be turned off, just degrading the array until it was either replaced entirely or brought back online?
How long would it take to replicate and rebuild a volume across the LAN via iSCSI? If each of those slices was 500GB, you'd be reading 3 x 500GB and writing 500GB before the drive was resynced. Note: a single drive on each storage controller in this scenario won't even achieve 1GigE speeds.
I'd think raid1 or 1+0 would be a better choice - these continue at full speed with a missing member. You can continue to run while the raid rebuilds, although if the drive is very busy you end up with the rebuild fighting for head position and slowing things down.
On Oct 21, 2009, at 11:43 AM, John R Pierce <pierce@hogranch.com> wrote:
Say I had 4 devices with 500 GB drives exported using iSCSI. If a single larger server took those four iSCSI-exported drives and created one md RAID 5 device, could a single server be turned off, just degrading the array until it was either replaced entirely or brought back online?
How long would it take to replicate and rebuild a volume across the LAN via iSCSI? If each of those slices was 500GB, you'd be reading 3 x 500GB and writing 500GB before the drive was resynced. Note: a single drive on each storage controller in this scenario won't even achieve 1GigE speeds.
Worthy of testing just to see how it performs, but I definitely would stick with raid6 or 10 here.
-Ross