Hi all,
I was wondering if anyone might be able to speak about using IBM's GPFS filesystem as a means of storing virtual guests in a clustered environment with CentOS as the nodes and KVM as the hypervisor?
I'm looking at using IBM's TSM software for archiving data from disk to tape. This requires buying a license for GPFS, which is used in conjunction with TSM but can also serve as a clustered filesystem in its own right. As I understand it, GPFS can work with CentOS as long as you're using the right kernel.
Is anyone out there using CentOS+GPFS for their virtualization environment?
many thanks in advance,
...adam
I like and use ZFS, but for some reason, a VM guest's disk file (raw, qcow2, etc.) stored on ZFS won't run.
Adam Wead <amsterdamos@...> writes:
<snip>
Hi Adam, I use GPFS as the filesystem for my CentOS-Xen virtual environment.
The virtual servers are converted compute nodes, running CentOS 5.4 with Xen 3.4.2, and have InfiniBand connectivity to the NSD servers. The VMs all live on the GPFS filesystem. This has worked pretty well; the disk performance of the VMs has been good when using the GPL paravirt drivers (my VMs are Windows Server 2003).
I'm currently in the process of trying to re-set up the infrastructure using stateless CentOS+KVM virtual servers instead, but it's too early to tell if it's working or not.
Good luck,
Evan.
Hi Evan,
Thanks for the response. Just out of curiosity, do you have to pay the extra licensing costs for each CentOS node?
best,
...adam
On Thu, Dec 2, 2010 at 11:04 AM, Evan Fraser evan.fraser@rms.com wrote:
<snip>
Adam Wead wrote:
<snip>
Why would so many people use a cluster filesystem for virtualization? Just use LVM and logical volumes for your guests. No filesystem overhead and better performance.
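A minimal sketch of what this looks like in practice (the volume group `vg0`, the guest name `guest01`, the bridge `br0`, and the ISO path are all hypothetical):

```shell
# Carve a 20 GB logical volume out of a hypothetical VG "vg0"
# to serve as the guest's disk:
lvcreate -L 20G -n guest01-disk vg0

# Install the KVM guest directly onto the raw block device;
# no filesystem or image-file layer sits between the guest
# and the storage:
virt-install \
  --name guest01 \
  --ram 2048 \
  --disk path=/dev/vg0/guest01-disk,device=disk,bus=virtio \
  --network bridge=br0 \
  --cdrom /var/isos/CentOS-5.5-x86_64-bin-DVD.iso
```

The guest sees `/dev/vg0/guest01-disk` as a plain virtio disk; reads and writes go straight to the LV with no intermediate filesystem.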
2010/12/2 Fabian Arrotin fabian.arrotin@arrfab.net:
Why would so many people use a cluster filesystem for virtualization? Just use LVM and logical volumes for your guests. No filesystem overhead and better performance.
...live migration...?
[>]...live migration...?
Interesting. Does live migration not work on ext3 or ext4?
On 12/02/2010 12:58 PM, compdoc wrote:
[>]...live migration...?
Interesting. Does live migration not work on ext3 or ext4?
No. You need a shared filesystem, which pretty much leaves you with either NFS or a clustered filesystem. RH has an example of using NFS, with a strong statement attached that you shouldn't do it that way in real life because the performance is poor.
Trying to mount ext3 or ext4 simultaneously from two machines (over, say, iSCSI) would just result in filesystem corruption.
On 12/02/2010 10:53 PM, Benjamin Franz wrote:
<snip>
No. You need a shared filesystem.
That's not true. You can do live KVM guest migration using RHEL/CentOS+RHCS+KVM with LVM volumes to allocate/install the KVM guests, for example. In that case you don't need a shared filesystem.
To accomplish a live KVM (or Xen) migration you need shared storage, not a clustered filesystem or a shared filesystem.
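A sketch of what such a migration looks like with libvirt, assuming hypothetical hosts `node1` and `node2` that both see the same shared LUN, with the guest's disk on a block device visible under the same path on both:

```shell
# On node1, where the guest "guest01" is currently running,
# push it live to node2 over an SSH-tunnelled libvirt connection:
virsh migrate --live guest01 qemu+ssh://node2/system

# Verify the guest is now running on node2:
virsh --connect qemu+ssh://node2/system list
```

The only storage requirement is that both hosts can open the same block device; no filesystem is ever mounted on two hosts at once.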
Which pretty much leaves you with either NFS or a clustered filesystem. RH has an example of using NFS, with a strong statement attached that you shouldn't do it that way in real life because the performance is poor.
I have two Solaris ZFS/NFS fileservers sharing storage to 5 ESXi servers and performance is very, very good. And it is a production system ...
Trying to mount ext3 or ext4 simultaneously from two machines (over, say, iSCSI) would just result in filesystem corruption.
That case shouldn't arise, because you can't safely mount an ext3 or ext4 filesystem at the same time on two or more hosts ...
Benjamin Franz wrote:
On 12/02/2010 12:58 PM, compdoc wrote:
[>]...live migration...?
<snip>
No. You need a shared filesystem. Which pretty much leaves you on either NFS or a clustered filesystem.
Totally wrong! If you have never tested it, try it (and try to understand clvmd) before saying that it doesn't work! If you've never tried it, that means you've never played with the RHCS stack, because even if you want to put GFS/GFS2 on top, you still need clvmd to have consistent logical volume management across all the nodes in the hypervisor cluster ... It seems to me that most people wanting a cluster filesystem (GFS/GFS2/OCFS2/whatever) on top of shared storage want it just because they are used to what VMware did for shared storage: VMFS on top of the shared storage and a file-based container (.vmdk) for the virtual machines. I've installed several solutions based purely on LVM.
Please compare all the solutions and you'll easily find that, at the performance/IO level, you'll always be faster if you don't put an extra layer between the VM storage and the shared storage.
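For context, a rough sketch of the clvmd setup being described, on a RHEL/CentOS 5 RHCS cluster (this assumes cman is already configured in /etc/cluster/cluster.conf on every node; the device and VG names are hypothetical):

```shell
# On every node: switch LVM to cluster-wide locking and start
# the cluster stack plus the clustered LVM daemon.
lvmconf --enable-cluster      # sets locking_type = 3 in lvm.conf
service cman start
service clvmd start

# On ONE node only: create a clustered volume group on the shared
# LUN; clvmd propagates the metadata to all other nodes.
pvcreate /dev/mapper/shared-lun
vgcreate -c y vg_guests /dev/mapper/shared-lun

# LVs created here become visible cluster-wide through clvmd:
lvcreate -L 20G -n guest01-disk vg_guests
```

With this in place, every hypervisor node sees a consistent view of `/dev/vg_guests/*` without any cluster filesystem on top.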
Kenni Lund wrote:
2010/12/2 Fabian Arrotin fabian.arrotin@arrfab.net:
Why would so many people use a cluster filesystem for virtualization? Just use LVM and logical volumes for your guests. No filesystem overhead and better performance.
...live migration...?
Without any issue
Greetings,
On Fri, Dec 3, 2010 at 12:37 AM, Fabian Arrotin fabian.arrotin@arrfab.net wrote:
<snip>
Why would so many people use a cluster filesystem for virtualization? Just use LVM and logical volumes for your guests.
Did you mean CLVM? Where does snapshot stand?
bitty outta touch with tech these days...
Regards,
Rajagopal
On Fri, Dec 03, 2010 at 12:45:22PM +0530, Rajagopal Swaminathan wrote:
<snip>
Did you mean CLVM? Where does snapshot stand?
bitty outta touch with tech these days...
You can also use normal LVM over a shared iSCSI LUN, but you need to be (very) careful with running LVM management commands and getting all the nodes (dom0s) in sync :)
(Citrix XenServer does this, but there the management toolstack takes care of the LVM command execution + state synchronization).
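A sketch of the kind of manual synchronization this implies, assuming a hypothetical non-clustered VG `vg_shared` on an iSCSI LUN visible to two dom0s (with clvmd you would not need any of this):

```shell
# On node1 -- the ONLY node allowed to run LVM management
# commands at this moment -- create a new guest disk:
lvcreate -L 10G -n guest02-disk vg_shared

# On every OTHER node, re-read the on-disk LVM metadata so the
# new LV becomes visible there as well:
vgscan

# Activate the LV only on the node that will actually run the
# guest (activating everywhere invites accidental double use):
lvchange -ay /dev/vg_shared/guest02-disk
```

Nothing enforces the "one node at a time" rule here; that discipline is exactly what a toolstack like XenServer's (or clvmd) automates for you.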
-- Pasi
Pasi Kärkkäinen wrote:
On Fri, Dec 03, 2010 at 12:45:22PM +0530, Rajagopal Swaminathan wrote:
<snip>
You can also use normal LVM over shared iSCSI LUN, but you need to be (very) careful with running LVM management commands and getting all the nodes (dom0s) to be in sync :)
(Citrix XenServer does this, but there the management toolstack takes care of the LVM command execution + state synchronization).
Yes, Citrix XenServer also uses LVM, but a different implementation (with a VHD format inside the LV itself). It's also true that the management toolstack takes care of the state synchronization and the active/inactive state of the LVs.
On 03/12/2010 08:55, Pasi Kärkkäinen wrote:
<snip>
You can also use normal LVM over shared iSCSI LUN, but you need to be (very) careful with running LVM management commands and getting all the nodes (dom0s) to be in sync :)
(Citrix XenServer does this, but there the management toolstack takes care of the LVM command execution + state synchronization).
Hi,
We have used this solution on SAN shared storage with Xen and CentOS 5 for 3 years now, and I can confirm that it works fine. We use live migration without problems. We "manage" the pool of Xen servers from a standalone server; this server ensures that only one instance of a VM runs on the pool.
We plan to migrate to KVM and CentOS 6. We have been thinking about the opportunity to move to a clustered/shared filesystem in order to take advantage of the qcow2 image file format (snapshots, diffs, etc.).
Googling a lot, it seems there are 2 solutions: * NFS fileserver * cluster filesystem on the SAN (FC, iSCSI, etc.)
Does anyone have advice/experience in production with these solutions?
We have good hardware here (FC SAN, multipathing, etc.) and a cluster filesystem seems to be the solution, but we are looking for the best/simplest solution (easy to manage), and RHCS seems to be complex.
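For reference, the qcow2 features being weighed here look roughly like this (file paths and snapshot names are hypothetical; a shared filesystem or NFS is assumed so the files are reachable from all hosts):

```shell
# Create a 20 GB qcow2 image for a guest:
qemu-img create -f qcow2 /vmstore/guest01.qcow2 20G

# Take and list internal snapshots (with the guest shut off;
# for a running guest, libvirt's snapshot support is the safer route):
qemu-img snapshot -c before-upgrade /vmstore/guest01.qcow2
qemu-img snapshot -l /vmstore/guest01.qcow2

# A "diff" image: a new qcow2 backed by the original, which only
# stores blocks that diverge from the backing file:
qemu-img create -f qcow2 -b /vmstore/guest01.qcow2 /vmstore/guest01-test.qcow2
```

None of this works on a raw LV, which is the trade-off against the pure-LVM approach discussed earlier in the thread.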
Rajagopal Swaminathan wrote:
<snip>
Did you mean CLVM? Where does snapshot stand?
bitty outta touch with tech these days...
Yeah, CLVM and its associated daemon that runs on all the nodes, clvmd ;-)