Hi Team,
Has anybody got experience setting up a virtualized environment in which the VMs can access Fibre Channel SAN storage connected to the host? The host accesses the SAN through its own HBA, but the HBA is not recognized inside the virtual machines. Please let me know the steps to work through this.
Regards
On 03/07/13 15:22, denis bahati wrote:
Has anybody got experience setting up a virtualized environment in which the VMs can access Fibre Channel SAN storage connected to the host? The host accesses the SAN through its own HBA, but the HBA is not recognized inside the virtual machines. Please let me know the steps to work through this.
How you use this storage depends on whether you plan to migrate VMs from one server to another.
If you're not going to be migrating then you can just allocate the FC LUN to an LVM volume group and carve off logical volumes for the KVM VMs to use. These can then have meaningful LVM names of the form /dev/vg_(VG)/lv_(LV) that can be allocated to the VMs. See system-config-lvm.
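For anyone following along, that carving is only a handful of commands. A minimal sketch, assuming the FC LUN shows up as /dev/sdb (the device and volume names here are invented; substitute your own):

    # turn the LUN into a physical volume and build a volume group on it
    pvcreate /dev/sdb
    vgcreate vg_guests /dev/sdb
    # carve off a 20G logical volume for one guest
    lvcreate -L 20G -n lv_dbvm01 vg_guests
    # the guest can then be given /dev/vg_guests/lv_dbvm01 as its disk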
If you're planning to migrate between machines then the LVM solution is not going to work. In that case you might need to create volumes on your FC controller that will be seen as individual devices/LUNs on the host servers. There is a consistent device name, appearing under /dev/disk/by-id, that will be identical on any host server that can see that volume. This can be allocated to the VM and will remain consistent across a migration. Using this method requires careful management and meticulous documentation of which LUNs have been allocated to which VM; the LUN IDs are not very user-friendly.
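Roughly like this, if it helps (the ID string below is invented; yours will be derived from the volume's WWID):

    # the stable names live here and are identical on every host
    # that can see the LUN
    ls -l /dev/disk/by-id/
    # hand the stable path to a guest, e.g.:
    virsh attach-disk dbvm01 /dev/disk/by-id/scsi-36001405abcdef123 vdb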
We've also had good results with DRBD for times when you want to be able to migrate between machines but do not have a SAN. You have to allocate all the storage on each server, but you gain a sort of backup in the process.
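In case it's useful, the shape of a two-node DRBD resource is roughly this (hostnames, backing disk and addresses all invented):

    # /etc/drbd.d/r0.res, identical on both nodes
    resource r0 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        meta-disk internal;
        on kvmhost1 { address 192.168.10.1:7789; }
        on kvmhost2 { address 192.168.10.2:7789; }
    }

    # then, on both nodes:
    drbdadm create-md r0
    drbdadm up r0
    # and on whichever node should start out primary:
    drbdadm primary --force r0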
Finally, I can recommend ConVirt as a good system management interface.
Regards Brett
Hi Brett,
My plan is as follows:
I have two machines (servers) that will each host two VMs: one for a database and one for an application. The two machines will provide load balancing and high availability. My intention is that all application files and data files for the database should reside on the SAN storage for easy access and update.
Therefore the storage should be accessible to both VMs by mounting the SAN storage inside the VMs. The connection between the SAN storage and the servers is through Fibre Channel.
I have seen DM-Multipath mentioned somewhere, but I don't know whether it can help, or whether VT-d can. I would also appreciate links that give me some insight into how to do this.
If you need more information, please let me know.
Regards
On Thu, Jul 4, 2013 at 12:44 AM, denis bahati djbahati@yahoo.co.uk wrote:
Hi Brett,
My plan is as follows:
I have two machines (servers) that will each host two VMs: one for a database and one for an application. The two machines will provide load balancing and high availability. My intention is that all application files and data files for the database should reside on the SAN storage for easy access and update.
Don't... do this. Two database clients writing to the same database filesystem back end, simultaneously, is an enormous source of excited-sounding flow charts and proposals which simply do not work and are very, very likely to corrupt your database beyond recovery. These problems have been examined for *decades*, with shared home directories and saved email, and with high-performance or clustered databases that need to avoid "split brain" skew: It Does Not Work.
Set up a proper database *cluster* with distinct back ends.
Therefore the storage should be accessible to both VMs by mounting the SAN storage inside the VMs. The connection between the SAN storage and the servers is through Fibre Channel.
Survey says *bzzzt*. See above for databases. For shared storage, you should really be using some sort of network-based access to a filesystem back end. NetApp and EMC spend *billions* in research building high-availability shared storage, and even they don't pull stunts like this, the last I looked. I can vaguely imagine one of the hosts doing write access and the other having read-only access. But really, most databases today support good clustering configurations that avoid precisely these issues.
I have seen DM-Multipath mentioned somewhere, but I don't know whether it can help, or whether VT-d can. I would also appreciate links that give me some insight into how to do this.
Multipath does not mean "multiple clients of the same hardware storage"; it means multiple paths from a single host to the same storage. What you're describing is effectively letting two kernels write to the same actual disk at the same time, and that's quite dangerous.
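To show what multipath *is* for: it collapses several physical paths from one host into a single device, so a cable or switch failure doesn't lose the disk. On CentOS 6 that's roughly (a sketch, not a complete configuration):

    yum install device-mapper-multipath
    mpathconf --enable --with_multipathd y
    # each LUN should now appear once as /dev/mapper/mpathN,
    # with its underlying sdX paths listed beneath it:
    multipath -ll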
Now, if you want each client to access its own Fibre Channel disk resource, that should be workable. Even if you have to mount the Fibre Channel resources on the KVM host and make disk images for the KVM guests, that should at least get you a testable resource. But the normal approach is to have a Fibre Channel storage server that makes disk images available via NFS, so that the guest VMs can be migrated from one server to another with the shared storage more safely.
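That is, something along these lines on each KVM host (server name and export path invented):

    # define an NFS-backed storage pool that every host mounts identically
    virsh pool-define-as images netfs \
        --source-host storage1.example.com \
        --source-path /exports/vmimages \
        --target /var/lib/libvirt/images
    virsh pool-start images
    virsh pool-autostart images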
Hi Team,
Thanks for the good explanation.
If that is not workable for the database, can anyone recommend a setup for the database clients and data files that achieves HA and load balancing? How should I set up my VMs and machines (two machines with two VMs each)? I would appreciate a workable, practical approach for HA/load balancing.
Regards
On Thu, Jul 4, 2013 at 10:25 PM, denis bahati djbahati@yahoo.co.uk wrote:
Hi Team,
Thanks for the good explanation.
If that is not workable for the database, can anyone recommend a setup for the database clients and data files that achieves HA and load balancing? How should I set up my VMs and machines (two machines with two VMs each)? I would appreciate a workable, practical approach for HA/load balancing.
It depends on what database product you're using. If it's Oracle Database Server, it's designed to work with shared devices/file systems, and you shouldn't have a problem running it in an active/active (load-balanced) configuration. If it's MySQL/MariaDB, the best you can hope for, as far as I know, is an active/passive configuration with replication.
I'm guessing you're using MySQL. Make your database highly available in an active/passive configuration with replication, and use some sort of failover (Heartbeat, CARP, etc.) or a network load balancer. Depending on your application, you may still be able to run the application tier in an active/active (load-balanced) configuration.
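The replication part of that is fairly mechanical. Roughly, for MySQL of this vintage (hosts, credentials and log coordinates invented, and you would seed the slave from a dump of the master first):

    # on the master: create a replication account and note the
    # binlog coordinates (File and Position)
    mysql -e "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.0.0.%' IDENTIFIED BY 'secret';"
    mysql -e "SHOW MASTER STATUS;"

    # on the slave: point it at the master using those coordinates
    mysql -e "CHANGE MASTER TO MASTER_HOST='10.0.0.1', MASTER_USER='repl', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107; START SLAVE;"

Then put a floating IP or load balancer in front, pointed at whichever node is currently the master.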
On Fri, Jul 5, 2013 at 10:56 AM, Gene gh5046@gmail.com wrote:
It depends on what database product you're using. If it's Oracle Database Server, it's designed to work with shared devices/file systems, and you shouldn't have a problem running it in an active/active (load-balanced) configuration. If it's MySQL/MariaDB, the best you can hope for, as far as I know, is an active/passive configuration with replication.
Oracle is hideously expensive in this mode. It basically uses a customized operating system, and it's *still* prone to the basic problem of locking transactions to avoid conflicts. They just expend a *lot* of system and software resources to manage it, which is why such a clustered Oracle database takes so many resources.
There is "Multiple-Master MySQL", which basically provides built-in election of the master node and interesting load factors to split the load, and uses a separate IP address for the "master" node. It works pretty well and is available in the "mysql-mmm" package from EPEL.
I'm guessing you're using MySQL. Make your database highly available in an active/passive configuration with replication, and use some sort of failover (Heartbeat, CARP, etc.) or a network load balancer. Depending on your application, you may still be able to run the application tier in an active/active (load-balanced) configuration.
Been there, done that, had *way* too many places just wave their hands at the failover and never actually configure it. mysql-mmm takes a lot of the guesswork out.
Hi,
While I am using mysql-mmm myself, it does have its quirks and tends to get the odd node out of sync, especially if you run additional slaves connected to the master-master setup. You might have a look at Galera Cluster, which is available standalone or as part of a special version of MariaDB. I have had good experience with it, although it's InnoDB-only for now.
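For reference, the wsrep settings that turn each node into a cluster member are roughly these (node addresses invented; the provider library path may differ on your build):

    # in my.cnf on each node
    [mysqld]
    binlog_format=ROW
    default_storage_engine=InnoDB
    wsrep_provider=/usr/lib64/galera/libgalera_smm.so
    wsrep_cluster_address="gcomm://10.0.0.1,10.0.0.2,10.0.0.3"
    wsrep_cluster_name="mycluster"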
There is "Multiple-Master MySQL", which basically provides built-in election of the master node and interesting load factors to split the load, and uses a separate IP address for the "master" node. It works pretty well and is available in the "mysql-mmm" package from EPEL.
Regards, Thomas
On Sat, Jul 6, 2013 at 8:30 AM, Thomas Göttgens tgoettgens@gmail.com wrote:
Hi,
While I am using mysql-mmm myself, it does have its quirks and tends to get the odd node out of sync, especially if you run additional slaves connected to the master-master setup. You might have a look at Galera Cluster, which is available standalone or as part of a special version of MariaDB. I have had good experience with it, although it's InnoDB-only for now.
Heh. For good reason. MyISAM is being deprecated, by a lot of developers, for a lot of reasons. Keeping the transactions atomic is apparently a *big* MyISAM problem, and one exacerbated by clustering software.
I am curious about the multiple-slave problem you mention. If this is a reasonable group to detail it, do tell!