I am currently going through the process of installing/configuring an iSCSI target and cannot find a good write-up on how to prepare the disks on the server. I would like to mirror the two disks and present them to the client. Mirroring isn't the question; it's how I go about it that's the problem. When I partitioned the two drives and mirrored them together, then presented the result to the client, it showed up on the client as a disk with no partition on it. Should I partition the drive again and then lay the file system down on top of that? Or should I delete the partitions on the target server and just have sda and sdb mirrored, then, when the client attaches the disk, partition it (/dev/sdc1) and write the file system?
Thanks
on 13:56 Mon 07 Feb, Jason Brown (jason.brown@millbrookprinting.com) wrote:
I am currently going through the process of installing/configuring an iSCSI target and cannot find a good write-up on how to prepare the disks on the server.
<...>
What are you using for your iSCSI target (storage array)?
Generally, RAID management of the storage is managed on the target side. You'd use the target's native abilities to create/manage RAID arrays to configure one or more physical disks as desired. If you're using a dedicated vendor product, it should offer these capabilities through some interface or another.
The iSCSI device is presented to the initiator (the host mounting the array) as a block storage device. That's going to be an unpartitioned device. You can either partition it further or use it as raw storage. If you're partitioning it and using multipath, you'll have to muck with kpartx to make multipath aware of the partitions.
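For example (a minimal sketch, with a hypothetical multipath device name -- the real name comes from your multipath configuration):

    # Create partition mappings for the multipath device under /dev/mapper
    # (they typically show up as mpath0p1, mpath0p2, and so on).
    kpartx -a /dev/mapper/mpath0

    # List the mappings kpartx would create, without adding them.
    kpartx -l /dev/mapper/mpath0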
We've elected to skip this locally and create a filesystem on the iSCSI device directly.
Creating and mounting filesystems are both generally managed on the initiator.
Truth is, there's a lot of flexibility with iSCSI, but not a lot of guidance as to best practices that I could find. Vendor docs have tended to be very poor. Above is my recommendation, and should generally work. Alternate configurations are almost certainly possible, and may be preferable.
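For what it's worth, the initiator side of that looks roughly like this on CentOS 5. This is only a sketch: the portal address, IQN and device name below are placeholders, and the device may well show up as a /dev/mapper multipath device rather than a bare /dev/sdX.

    # Discover targets offered by the portal, then log in to one of them.
    iscsiadm -m discovery -t sendtargets -p 192.168.10.1
    iscsiadm -m node -T iqn.2011-02.com.example:storage.lun0 -p 192.168.10.1 --login

    # Watch dmesg / /var/log/messages for the new block device, then put a
    # filesystem directly on it (no partition table), per the above.
    mkfs.ext3 /dev/sdc

    # Mount it; in /etc/fstab use the _netdev option so the mount waits for
    # networking and is handled by the netfs init script.
    mkdir -p /mnt/iscsi
    mount /dev/sdc /mnt/iscsi
    #   /dev/sdc   /mnt/iscsi   ext3   _netdev,defaults   0 0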
On Mon, Feb 7, 2011 at 2:36 PM, Dr. Ed Morbius dredmorbius@gmail.com wrote:
on 13:56 Mon 07 Feb, Jason Brown (jason.brown@millbrookprinting.com) wrote:
I am currently going through the process of installing/configuring an iSCSI target and cannot find a good write-up on how to prepare the disks on the server.
<...>
What are you using for your iSCSI target (storage array)?
<...>
Truth is, there's a lot of flexibility with iSCSI, but not a lot of guidance as to best practices that I could find. Vendor docs have tended to be very poor. Above is my recommendation, and should generally work. Alternate configurations are almost certainly possible, and may be preferable.
If a best practices doc could be handed to you right now, what would you like it to contain?
I would suspect that it would differ depending on whether you're setting up an initiator or a target, so maybe start by splitting it into two sections.
I would be happy to draft something up and put it on a wiki somewhere, but I would need a list of talking points to start with.
-Ross
on 15:19 Mon 07 Feb, Ross Walker (rswwalker@gmail.com) wrote:
On Mon, Feb 7, 2011 at 2:36 PM, Dr. Ed Morbius dredmorbius@gmail.com wrote:
on 13:56 Mon 07 Feb, Jason Brown (jason.brown@millbrookprinting.com) wrote:
I am currently going through the process of installing/configuring an iSCSI target and cannot find a good write-up on how to prepare the disks on the server.
<...>
What are you using for your iSCSI target (storage array)?
<...>
Truth is, there's a lot of flexibility with iSCSI, but not a lot of guidance as to best practices that I could find. Vendor docs have tended to be very poor. Above is my recommendation, and should generally work. Alternate configurations are almost certainly possible, and may be preferable.
If a best practices doc could be handed to you right now, what would you like it to contain?
<grin>
I've got about 35 pages of such a document sitting on my local HD now. I'm negotiating with management about releasing some of it in one form or another.
It's based on specific vendor experience (Dell MD3200i) and CentOS. Among the problems: I suspect there's a wide range of experiences and configuration issues out there.
I would suspect that it would differ depending on whether you're setting up an initiator or a target, so maybe start by splitting it into two sections.
Absolutely -- if you're setting up your own target, you've got a set of problems (and opportunities) not facing those with vendor-supplied kit with its various capabilities and limitations.
Basics:
- Terminology. I found the Wikipedia article and the Open-iSCSI README to be particularly good here.
- Presentation of the devices. It took us a while to realize that we were going to get both a set of raw SCSI devices, AND a set of multipath (/dev/dm-<number>) devices, and that it was the latter we wanted to play with. We spent a ridiculous amount of time trying to debug and understand what turned out to be expected and correct behavior, though this was either undocumented or very unclearly documented. How much of this is specific to the MD3xxxi kit with its multiple ports and controllers isn't clear to me.
- Basic components. For us these were vendor tools (and identifying which of these were relevant was itself non-trivial), iscsiadm, multipath, and the CentOS netfs config scripts / environment. The device-mapper-multipath docs are shite.
- Overview of target and initiator setup: physical setup, cabling, network topology (a dedicated switched LAN or VLAN being recommended, away from core network traffic).
- As appropriate: multipath, its role, where it is/isn't needed, what it provides.
- Target-side set-up, including virtual disk creation, RAID configuration, network configuration, host-to-LUN mapping, IQN generation / identification, CHAP configuration (if opted), etc.
I'd slice this into Linux/CentOS target configuration (specific tasks), preceded by a more generic "things you'll want to take care of" section. If people/vendors want to fill in their specific configuration procedures in additional sub-sections, that would be an option.
- Initiator-side set-up: installation of base packages, verifying iscsi functionality, verifying multipath functionality, verifying netfs functionality. Preparing storage (partitioning, filesystem creation).
- Target discovery from the initiator. CHAP configuration (if opted), session log-in, state querying, log-out. DiscoveryDB querying, update, manipulation.
- Configuring/disabling persistent / on-login iSCSI sessions (see the sketch following this list).
- Not applicable to us, but booting to an iscsi-mounted root filesystem.
- Log / dmesg error/informative messages, interpretation, debugging.
- What to do when things go wrong. An all too infrequent documentation section.
- Monitoring and assessment. "How can I tell if I'm properly configured", target/initiator-side error/issue reporting. Good ways to tie into existing reporting systems/regimes (SNMP, Nagios, Cacti, ...).
- Performance issues.
- Security. Best I can tell this divides into:
  - Session authentication (CHAP).
  - Target/initiator host security (specific to protocols used to access either).
  - Physical and network security -- not necessarily related, but the point being that iSCSI traffic and hardware should generally be isolated. I'm not aware of encrypted network transports, though I suspect some sort of socket forwarding or IPsec could be layered in.
  - Filesystem encryption. I'd suspect this would be managed client-side, though implications for network datastreams should probably be noted. Hrm... if it _is_ handled client-side, the network traffic should be ciphertext, not clear.
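As a concrete (if minimal) illustration of the discovery/log-in and persistent-session items above -- the portal address and IQN are placeholders, and CHAP is omitted:

    # Discovery: query the portal and populate the discovery/node DB.
    iscsiadm -m discovery -t sendtargets -p 192.168.10.1:3260

    # Log in to / out of a specific target, and list active sessions.
    iscsiadm -m node -T iqn.2011-02.com.example:array.lun0 -p 192.168.10.1:3260 --login
    iscsiadm -m session
    iscsiadm -m node -T iqn.2011-02.com.example:array.lun0 -p 192.168.10.1:3260 --logout

    # Make the session persistent across reboots (or set it back to manual).
    iscsiadm -m node -T iqn.2011-02.com.example:array.lun0 \
        -o update -n node.startup -v automatic
    iscsiadm -m node -T iqn.2011-02.com.example:array.lun0 \
        -o update -n node.startup -v manual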
If nothing else, this should make a good framework for developing useful docs -- the Christmas tree from which to hang the ornaments. It's rather much how my own document started -- unanswered questions / unclear elements of the configuration, filled in by research and experience.
I would be happy to draft something up and put it on a wiki somewhere, but I would need a list of talking points to start with.
How's this do you?
Dr. Ed Morbius wrote:
on 15:19 Mon 07 Feb, Ross Walker (rswwalker@gmail.com) wrote:
On Mon, Feb 7, 2011 at 2:36 PM, Dr. Ed Morbius dredmorbius@gmail.com wrote:
<snip>
If a best practices doc could be handed to you right now, what would you like it to contain?
<snip>
Excellent -- just reading the synopsis helped me get some things straight. Any chance of adding this to the CentOS wiki HowTo section? I, for one, would be a grateful recipient.
On 02/07/2011 05:09 PM, Dr. Ed Morbius wrote:
on 15:19 Mon 07 Feb, Ross Walker (rswwalker@gmail.com) wrote:
On Mon, Feb 7, 2011 at 2:36 PM, Dr. Ed Morbius dredmorbius@gmail.com wrote:
on 13:56 Mon 07 Feb, Jason Brown (jason.brown@millbrookprinting.com) wrote:
I am currently going through the process of installing/configuring an iSCSI target and cannot find a good write-up on how to prepare the disks on the server.
<...>
Overview of target and initiator setup: physical setup, cabling, network topology (a dedicated switched LAN or VLAN being recommended, away from core network traffic).
As appropriate: multipath, its role, where it is/isn't needed, what it provides.
<...>
In our configuration, we are going to have our iSCSI targets and initiators all connected to the same layer 3 switch and isolate the iSCSI traffic on separate networks. Would it be beneficial to also set up multipath for this as well?
on 16:28 Tue 08 Feb, Jason Brown (jason.brown@millbrookprinting.com) wrote:
On 02/07/2011 05:09 PM, Dr. Ed Morbius wrote:
on 15:19 Mon 07 Feb, Ross Walker (rswwalker@gmail.com) wrote:
On Mon, Feb 7, 2011 at 2:36 PM, Dr. Ed Morbius dredmorbius@gmail.com wrote:
on 13:56 Mon 07 Feb, Jason Brown (jason.brown@millbrookprinting.com) wrote:
I am currently going through the process of installing/configuring an iSCSI target and cannot find a good write up on how to prepare the disks
<...>
What are you using for your iSCSI target (storage array)?
<...>
Truth is, there's a lot of flexibility with iSCSI, but not a lot of guidance as to best practices that I could find. Vendor docs have tended to be very poor. Above is my recommendation, and should generally work. Alternate configurations are almost certainly possible, and may be preferable.
If a best practices doc could be handed to you right now, what would you like it to contain?
<grin>
I've got about 35 pages of that document sitting on my local HD now. Negotiating with management about releasing some of it in some form or another.
<...>
I would be happy to draft something up and put it on a wiki somewhere, but I would need a list of talking points to start with.
How's this do you?
In our configuration, we are going to have our iSCSI targets and initiators all connected to the same layer 3 switch and isolate the iSCSI traffic on separate networks. Would it be beneficial to also set up multipath for this as well?
That's pushing the limits of my knowledge/understanding.
Multipath aggregates multiple pathways to a data store. In the case of the Dell equipment mentioned in my post, there are two controllers, with 4 TOE/NIC cards each, offering 8 pathways to each target storage LUN.
Multipath aggregates all 8 pathways to a single target, and provides both performance and availability enhancements by utilizing these pathways in turn (defaulting to round-robin sequencing), and presumably disabling use of any pathway(s) which become unavailable (whether or not any monitoring/alerting of this failover/fail-out is possible would be very useful to know).
It's also possible to configure multiple initiator pathways, though in our case we've already aggregated multiple NICs into a bonded ethernet device.
From the description you've provided, I don't think you've got a multipath configuration. I don't know what would happen if you attempted to set up multipath, but presuming not too much magic smoke escapes, I'd be interested in finding out.
Presumably you'd have to configure /etc/multipath.conf appropriately to pick up the target(s).
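For reference, a bare-bones /etc/multipath.conf along those lines -- strictly a sketch, since the blacklist, the WWIDs and any device-specific settings (path checker, priorities, etc.) depend on your hardware and would normally come from the vendor docs:

    defaults {
            user_friendly_names yes
    }

    # Keep local (non-iSCSI) disks out of multipath; adjust to your hardware.
    blacklist {
            devnode "^sda$"
    }

    multipaths {
            multipath {
                    # WWID of the LUN, as reported by scsi_id or "multipath -ll".
                    wwid    <WWID of the LUN>
                    alias   iscsi_lun0
            }
    }

After reloading multipathd, "multipath -ll" should list each LUN with its aggregated paths and their states.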
On Feb 8, 2011, at 4:28 PM, Jason Brown jason.brown@millbrookprinting.com wrote:
In our configuration, we are going to have our iSCSI targets and initiators all connected to the same layer 3 switch and isolate the iSCSI traffic on separate networks. Would it be beneficial to also set up multipath for this as well?
Most definitely.
At the very least mpio would provide redundancy even if you don't plan on doing round-robin for scalability.
-Ross
On 02/07/2011 02:36 PM, Dr. Ed Morbius wrote:
on 13:56 Mon 07 Feb, Jason Brown (jason.brown@millbrookprinting.com) wrote:
I am currently going through the process of installing/configuring an iSCSI target and cannot find a good write-up on how to prepare the disks on the server.
<...>
What are you using for your iSCSI target (storage array)?
<...>
I realize now that I left out quite a bit of information from my OP.
The iSCSI target is going to be used for storage; it has 13 disks to be presented, and I would like these split up between two LUNs: one which will be mirrored, and a second which will be RAID 5, with each LUN going to a different server. The iSCSI target will run CentOS 5.5 with software RAID, and the initiators will be CentOS 5.5 as well.
Here are the steps that I tried when I set this up:
1) Created partitions on sdh and sdi
2) RAID'ed sdh1 and sdi1 together
3) Attached the disk on the client, where it showed up as sdc (this started the confusion as to why it wasn't sdc1)
4) Partitioned sdc to create sdc1
5) Created an LV on sdc1
6) Mounted everything fine and added the LV to fstab
7) Rebooted the server and SELinux freaked out, which put the system into rescue mode
8) Removed the storage from fstab, created '/.autorelabel' and rebooted
9) When the system came back up, the LV was not in /dev; however, I could see it in lvdisplay
Since then I've tried looking for a best-practice guide on how to properly configure the disks, but have not been able to find anything.
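For reference, the manual bring-up sequence for an LVM volume that lives on an iSCSI disk looks roughly like the following (the VG, LV and mount point names are placeholders). If the iSCSI session isn't up yet when LVM scans at boot, the VG won't be activated automatically, which may be one reason the LV was missing from /dev:

    # Make sure the iSCSI session is actually logged in.
    iscsiadm -m session

    # Rescan now that the iSCSI disk is present, then activate the VG.
    pvscan
    vgscan
    vgchange -ay vg_iscsi

    # The LV device nodes should now exist under /dev/vg_iscsi/ and
    # /dev/mapper/, and can be mounted (use _netdev in /etc/fstab).
    mount /dev/vg_iscsi/lv_data /mnt/data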
On Mon, Feb 7, 2011 at 3:32 PM, Jason Brown jason.brown@millbrookprinting.com wrote:
On 02/07/2011 02:36 PM, Dr. Ed Morbius wrote:
on 13:56 Mon 07 Feb, Jason Brown (jason.brown@millbrookprinting.com) wrote:
I am currently going through the process of installing/configuring an iSCSI target and cannot find a good write-up on how to prepare the disks on the server.
<...>
The iSCSI target is going to be used for storage; it has 13 disks to be presented, and I would like these split up between two LUNs: one which will be mirrored, and a second which will be RAID 5, with each LUN going to a different server. The iSCSI target will run CentOS 5.5 with software RAID, and the initiators will be CentOS 5.5 as well.
<...>
Here's what I would do.
Take the 13 disks and create one big RAID set, RAID10 or RAID50, with a hot spare in it.
Use LVM: make it a PV and carve 2 LVs out of it. Don't allocate all the space up front -- just enough to cover current usage and growth for 6 months or so. Then you can add space to either LV from the VG pool as it's needed, but do it in large 6-month chunks to minimize fragmentation.
Export the LVs to the servers and voila! You're done.
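A rough sketch of that, with placeholder device names, sizes and VG/LV names (adjust the RAID level and layout to taste):

    # One big array across 12 disks plus a hot spare (RAID10 shown here;
    # sdb..sdn are assumed to be the 13 data disks).
    mdadm --create /dev/md0 --level=10 --raid-devices=12 \
          --spare-devices=1 /dev/sd[b-n]

    # Make it an LVM physical volume and carve out two LVs, leaving
    # plenty of free space in the VG for later growth.
    pvcreate /dev/md0
    vgcreate vg_iscsi /dev/md0
    lvcreate -L 500G -n lv_serverA vg_iscsi
    lvcreate -L 500G -n lv_serverB vg_iscsi

    # Grow either one later; the initiator then grows its filesystem.
    lvextend -L +250G /dev/vg_iscsi/lv_serverA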
-Ross
On Mon, Feb 7, 2011 at 1:56 PM, Jason Brown jason.brown@millbrookprinting.com wrote:
I am currently going through the process of installing/configuring an iSCSI target and cannot find a good write-up on how to prepare the disks on the server.
<...>
Whatever you export, the whole disk, partition or logical volume, the initiator will see as a whole disk.
So if you mirror sdaX and sdbX and export md0 the initiator will see a disk the size and contents of sdaX/sdbX.
Just create the filesystem on the disk on the initiator and use it there.
REMEMBER: iSCSI isn't a way for multiple initiators to share the same disk (though they can using specialized clustering file systems), it is a way for multiple initiators to share the same disk subsystem.
You can't access the file system from both the target-side and initiator-side at once or it will corrupt the file system. If that's what you want then you want NFS or Samba and not iSCSI.
-Ross
On 02/07/2011 03:26 PM, Ross Walker wrote:
On Mon, Feb 7, 2011 at 1:56 PM, Jason Brown jason.brown@millbrookprinting.com wrote:
I am currently going through the process of installing/configuring an iSCSI target and cannot find a good write-up on how to prepare the disks on the server.
<...>
Whatever you export, the whole disk, partition or logical volume, the initiator will see as a whole disk.
So if you mirror sdaX and sdbX and export md0 the initiator will see a disk the size and contents of sdaX/sdbX.
Just create the filesystem on the disk on the initiator and use it there.
REMEMBER: iSCSI isn't a way for multiple initiators to share the same disk (though they can using specialized clustering file systems), it is a way for multiple initiators to share the same disk subsystem.
You can't access the file system from both the target-side and initiator-side at once or it will corrupt the file system. If that's what you want then you want NFS or Samba and not iSCSI.
-Ross
Well, my first question would be: do you really need to partition the disks on the target, or can you just RAID them together (i.e. sdb/sdc and not sdb1/sdc1)? Then create your md0 based off of the two whole drives. Once that is done, export md0 in /etc/tgt/targets.conf to present to the clients.
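For illustration, the export side of that would look roughly like the following in /etc/tgt/targets.conf under scsi-target-utils/tgtd -- the IQN, backing store and initiator address are placeholders:

    <target iqn.2011-02.com.example:storage.serverA>
        # Whole md device (or an LV) exported as a single LUN.
        backing-store /dev/md0
        # Restrict which initiator may log in; CHAP could be added here too.
        initiator-address 192.168.10.11
    </target>
    # Apply with "tgt-admin --update ALL" (or restart the tgtd service).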
Second question: this does not need to be a clustered file system, as only one server will need access to it at a time. However, if server A failed, could you bring up a new server, present the storage to it (server B), and have the new server access the existing files, or would it show up as an unpartitioned drive?
On Mon, Feb 7, 2011 at 3:41 PM, Jason Brown jason.brown@millbrookprinting.com wrote:
On 02/07/2011 03:26 PM, Ross Walker wrote:
On Mon, Feb 7, 2011 at 1:56 PM, Jason Brown jason.brown@millbrookprinting.com wrote:
<...>
Well, my first question would be: do you really need to partition the disks on the target, or can you just RAID them together (i.e. sdb/sdc and not sdb1/sdc1)? Then create your md0 based off of the two whole drives. Once that is done, export md0 in /etc/tgt/targets.conf to present to the clients.
You don't need to partition the disks on the target if you want to export the whole disks. I just don't recommend it, because exporting whole disks isn't the most economical use of the disks. The whole idea of iSCSI is that you can create one huge RAID array on the target and all the initiators can then benefit from it.
If you have 13 disks, say they're 500GB SATA disks: if you create a RAID50 out of two 6-disk RAID5s (stripe the LVs in LVM for management ease instead of nesting mdraid devices), you would get 10x the read IOPS of your mirror, the same or better write IOPS than the mirror, double the write IOPS of a single RAID5, and a tad better read IOPS than your single RAID5. Not to mention a lot more storage potential for the two servers, or a third server, or a fourth server...
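A rough sketch of that layout, with placeholder device names and sizes (two 6-disk RAID5 md sets, LVs striped across them with LVM; a 13th disk could serve as a hot spare):

    # Two 6-disk RAID5 sets.
    mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[b-g]
    mdadm --create /dev/md1 --level=5 --raid-devices=6 /dev/sd[h-m]

    # Both arrays become PVs in one VG; each LV is striped across the two
    # PVs (-i 2, 64KB stripes), giving RAID50-like behaviour without nesting md.
    pvcreate /dev/md0 /dev/md1
    vgcreate vg_iscsi /dev/md0 /dev/md1
    lvcreate -i 2 -I 64 -L 500G -n lv_serverA vg_iscsi
    lvcreate -i 2 -I 64 -L 500G -n lv_serverB vg_iscsi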
Second question: this does not need to be a clustered file system, as only one server will need access to it at a time. However, if server A failed, could you bring up a new server, present the storage to it (server B), and have the new server access the existing files, or would it show up as an unpartitioned drive?
You can definitely allow two different hosts access to the target, just not simultaneously (unless it's a clustered file system). The second host can log in to the target and have the disk ready to mount once the first host is offline or "fenced"; just don't mount it while it's still mounted on the first host, or zap! There goes your file system!
-Ross
On 02/07/2011 03:57 PM, Ross Walker wrote:
On Mon, Feb 7, 2011 at 3:41 PM, Jason Brown jason.brown@millbrookprinting.com wrote:
On 02/07/2011 03:26 PM, Ross Walker wrote:
On Mon, Feb 7, 2011 at 1:56 PM, Jason Brown jason.brown@millbrookprinting.com wrote:
<...>
I never thought about it that way; very interesting!
Thank you both for your suggestions; they helped out tremendously.

Jason
on 15:26 Mon 07 Feb, Ross Walker (rswwalker@gmail.com) wrote:
On Mon, Feb 7, 2011 at 1:56 PM, Jason Brown jason.brown@millbrookprinting.com wrote:
I am currently going through the process of installing/configuring an iSCSI target and cannot find a good write-up on how to prepare the disks on the server.
<...>
Whatever you export, the whole disk, partition or logical volume, the initiator will see as a whole disk.
So if you mirror sdaX and sdbX and export md0 the initiator will see a disk the size and contents of sdaX/sdbX.
Just create the filesystem on the disk on the initiator and use it there.
REMEMBER: iSCSI isn't a way for multiple initiators to share the same disk (though they can using specialized clustering file systems), it is a way for multiple initiators to share the same disk subsystem.
*OR* as a special case, if access is *only* read-only (or read-only to all but one initiator).
You can't access the file system from both the target-side and initiator-side at once or it will corrupt the file system. If that's what you want then you want NFS or Samba and not iSCSI.
Right, or some other network-aware filesystem (Andrew (AFS), Coda, Gluster), none of which are particularly widely used.
http://en.wikipedia.org/wiki/List_of_file_systems#Distributed_file_systems
On Tue, 8 Feb 2011, Dr. Ed Morbius wrote:
*OR* as a special case, if access is *only* read-only (or read-only to all but one initiator).
I get the all read-only case, but wouldn't the read-only clients end up caching filesystem data that has since been changed by the read-write client? I'd have thought the read-only initiators would get pretty quickly confused.
jh
On 02/08/11 1:34 AM, John Hodrien wrote:
On Tue, 8 Feb 2011, Dr. Ed Morbius wrote:
*OR* as a special case, if access is *only* read-only (or read-only to all but one initiator).
I get the all read-only case, but wouldn't the read-only clients end up caching filesystem data that has since been changed by the read-write client? I'd have thought the read-only initiators would get pretty quickly confused.
Indeed, and when the writer IS updating directory structures and such, there's no guarantee the writes will be in an order that makes sense to someone else who slips in and reads said data.
on 09:34 Tue 08 Feb, John Hodrien (J.H.Hodrien@leeds.ac.uk) wrote:
On Tue, 8 Feb 2011, Dr. Ed Morbius wrote:
*OR* as a special case, if access is *only* read-only (or read-only to all but one initiator).
I get the all read-only case, but wouldn't the read-only clients end up caching filesystem data that has since been changed by the read-write client? I'd have thought the read-only initiators would get pretty quickly confused.
Good point. If the data were highly volatile, this would seem to be likely. I'm not sure what the consequences of that confusion might be. This could be an interesting little side-research project.
Infrequent writes, a journaled filesystem, and minimized caching, while not entirely kosher, might work "well enough" in many cases. Probably not what you'd want in a production environment though, and NFS read-only shares would seem a more appropriate solution.
Cache coherence is very, very sticky stuff, and it's what burns a whole lot of computing operations.