I am busy setting up some Xen servers on a SAN for high availability and cloud computing, and thought it could be cool to set up virtualization on a CentOS 5.5 desktop, running on a Core i3 + 4 GB RAM, and use the SAN's storage to see if it could actually be worth my while to replicate a cloud computing setup in the office. And because I got a bit bored waiting for a few RAID sets to finish initializing.
So, I installed CentOS + KDE, chose the Virtualization package and used Virtual Machine Manager to set up another CentOS VM inside CentOS (I only have a CentOS ISO on this SAN, since we don't use Debian / Slackware / FC / Ubuntu / etc). The installation was probably about the same speed as it would be on raw hardware. But using the interface is painfully slow. I opened up Firefox and browsed the web a bit. The mouse cursor lagged a bit, and whenever I loaded a slow / large website, it seemed as if the whole VM lagged behind.
The virtual machine didn't use many resources; I allocated 1 CPU core & 512 MB RAM to it.
Yes, I know that I could have used KVM, VMware or VirtualBox, but I wanted to use what's included already. Because, let's face it, many people (even though they're technically advanced users) don't know virtualization.
And, granted, when we install virtual machines on a Xen server, we don't ever use X, since those servers run as web / email / database / file servers, so there's no need for X.
BUT, I want(ed) to see if this is a reality for the average desktop user, or not really (yet?), seeing as most modern PCs have far more CPU & RAM resources than most users actually need. I'm not talking about developers / graphic designers / etc. I'm talking about Bob, who uses his PC for email, internet, document writing, etc., and needs to boot into Windows if he feels like playing Warcraft III or StarCraft II, or using Pastel, etc.
Wouldn't it be nice to run Windows, or for that matter Solaris / FreeBSD / Mac (graphics designer) / another flavor of Linux / etc., inside your favorite Linux, and access it from the desktop without too much trouble?
Yes, I know that I could have used KVM, VMware or VirtualBox, but I wanted to use what's included already.
KVM is included; you just have to select it. There is a loyal following of Xen in the community, but I use KVM for my servers. I'm often called 'dumb' for even talking about KVM, but I like it. (And I'm not saying, nor have I ever said, that KVM is better than Xen.)
But using the interface is painfully slow. I opened up Firefox and browsed the web a bit. The mouse cursor lagged a bit, and whenever I loaded a slow / large website, it seemed as if the whole VM lagged behind. <...> BUT, I want(ed) to see if this is a reality for the average desktop user, or not really (yet?), seeing as most modern PCs have far more CPU & RAM resources than most users actually need.
I assume you're using VNC to connect? It can be painfully slow with some VNC clients, and workable for basic stuff with others.
Using MS Remote Desktop to connect to a VM running Windows works pretty well, but not when you're trying to view anything with graphics (like watching videos).
There's the SPICE protocol, which supposedly handles these problems, although I haven't tried it yet.
It would be nice if you could run your OS in a VM, then use some tablet with a huge screen to connect to the VM and not be able to notice a difference in speed. I think that's a ways off in the future, however.
On Wed, Mar 2, 2011 at 7:56 PM, compdoc compdoc@hotrodpc.com wrote:
Yes, I know that I could have used KVM, VMware or VirtualBox, but I wanted to use what's included already.
KVM is included; you just have to select it. There is a loyal following of Xen in the community, but I use KVM for my servers. I'm often called 'dumb' for even talking about KVM, but I like it. (And I'm not saying, nor have I ever said, that KVM is better than Xen.)
Yes, I know KVM is included, but at this stage Xen is the default, and when you use Virtual Machine Manager, it uses Xen.
But using the interface is painfully slow. I opened up Firefox and browsed the web a bit. The mouse cursor lagged a bit, and whenever I loaded a slow / large website, it seemed as if the whole VM lagged behind. <...> BUT, I want(ed) to see if this is a reality for the average desktop user, or not really (yet?), seeing as most modern PCs have far more CPU & RAM resources than most users actually need.
I assume you're using VNC to connect? It can be painfully slow with some VNC clients, and workable for basic stuff with others.
No, I'm not using VNC. My approach was from a single, non-networked PC point of view.
Someone who's never played with virtual PCs, and then opens up Virtual Machine Manager thinking it would be cool to use, wouldn't think of using VNC or something similar.
Using MS Remote Desktop to connect to a VM running Windows works pretty well, but not when you're trying to view anything with graphics (like watching videos).
I thought, just for the fun of it, let's install Windows 2008 Small Business Server. Interestingly, using the same Virtual Machine Manager, the installation wasn't as slow as with CentOS. It's almost as if it's more optimized for Windows? I used the exact same settings for the installation as with CentOS.
There's the SPICE protocol, which supposedly handles these problems, although I haven't tried it yet.
Is this something you install on the PC, or does it improve network access to the Virtual Machine?
It would be nice if you could run your OS in a VM, then use some tablet with a huge screen to connect to the VM and not be able to notice a difference in speed. I think that's a ways off in the future, however.
That's what I had in mind as well. But even just using a normal monitor, keyboard, mouse and speakers, connected as they would be to a normal PC, would be nice as well.
What I'm getting at:
Can, or will, virtualization replace dual-boot systems, or even give one the ability to use your desktop PC to its full advantage? For example, while I'm busy rendering a 3-hour 3D scene in Maya (running in Windows 7), I want to watch a movie in Linux, but have both run in real time. My PC is capable of it, with 2x Core i7 CPUs & 16 GB RAM. This is just an example.
Differently put, we already do this with servers. One big & fast quad Xeon can run many clients' virtual machines very easily. And many of those virtual machines host a few hundred websites, thus saving a lot on rack space, electricity, etc.
How difficult will it really be to do the same on a normal desktop PC, with what's available on CentOS ATM?
On Mar 2, 2011, at 11:35 AM, Rudi Ahlers wrote:
I thought, just for the fun of it, let's install Windows 2008 Small Business Server. Interestingly, using the same Virtual Machine Manager, the installation wasn't as slow as with CentOS. It's almost as if it's more optimized for Windows? I used the exact same settings for the installation as with CentOS.
Hi Rudi,
While I've had great luck using Xen (Gitco RPMs v3.4x) + CentOS 5.5, I am waiting for CentOS 6, as it will have the 2.6.32 pvops kernel, which solves a lot of USB/FireWire issues and, I'm hoping, VGA passthrough issues.
Also, disk I/O is supposed to be addressed, although I have never had any I/O issues.
There are several write-ups out there on how to Xen-ize your RHEL 6 dom0, as 6 still supports Xen domUs.
You may want to download a 30-day eval of RHEL 6, or even SL 6 for the hell of it, as prep for CentOS 6.
I wouldn't say Xen is out the door, and it's still a bit more mature than KVM.
However, I have my eye on KVM as it does have better passthrough features; that is, until pvops 2.6.32 is out :)
- aurf
On 3/2/2011 1:35 PM, Rudi Ahlers wrote:
Differently put, we already do this with servers. One big & fast quad Xeon can run many clients' virtual machines very easily. And many of those virtual machines host a few hundred websites, thus saving a lot on rack space, electricity, etc.
Servers are normally optimized with lots of disk spindles to spread multi-user use of the one remaining slow resource around.
How difficult will it really be to do the same on a normal desktop PC, with what's available on CentOS ATM?
Give the VM its own disk and it won't have much impact on the host. You'll probably still want to run video-intense things natively, though. And if you aren't a developer doing throwaway tests, what's the point of using a VM for resource-intensive things anyway?
On Wed, Mar 2, 2011 at 9:54 PM, Les Mikesell lesmikesell@gmail.com wrote:
On 3/2/2011 1:35 PM, Rudi Ahlers wrote:
Differently put, we already do this with servers. One big & fast quad Xeon can run many clients' virtual machines very easily. And many of those virtual machines host a few hundred websites, thus saving a lot on rack space, electricity, etc.
Servers are normally optimized with lots of disk spindles to spread multi-user use of the one remaining slow resource around.
True, but in a one-user-one-drive (or two drives in RAID 1) setup, disk I/O wouldn't be a problem, or the limiting factor.
How difficult will it really be to do the same on a normal desktop PC, with what's available on CentOS ATM?
Give the VM its own disk and it won't have much impact on the host. You'll probably still want to run video-intense things natively, though. And if you aren't a developer doing throwaway tests, what's the point of using a VM for resource-intensive things anyway?
There are many reasons why one would do this kind of thing. Just thinking of my normal day-to-day work: I often start up a new VM to test certain functionality of some software package without affecting anything on my PC. My laptop runs Windows 7 at this stage, purely for QuickBooks and a few other Windows-only applications. In this case it would be nice to have Windows running permanently on my PC, which would allow the accounts person to still access it remotely from her PC while I can still do stuff in QuickBooks as needed. But I would prefer real-time access.
I think the major problem here is that the tools at hand, i.e. Xen + Virtual Machine Manager (or, for that matter, VirtualBox / VMware / etc.), aren't yet optimized for this kind of usage.
I guess we need better VGA-passthrough drivers and/or a more optimized interface. Accessing the VMs via VNC / Remote Desktop / NX / etc. is probably also a possibility.
KVM is included; you just have to select it. There is a loyal following of Xen in the community, but I use KVM for my servers. I'm often called 'dumb' for even talking about KVM, but I like it. (And I'm not saying, nor have I ever said, that KVM is better than Xen.)
Yes, I know KVM is included, but at this stage Xen is the default, and when you use Virtual Machine Manager, it uses Xen.
Well, RHEL will no longer support a Xen host in RHEL 6...
KVM has matured and is being used pretty extensively...
Virt-manager doesn't only work with Xen - it just does that if it detects you are on a Xen kernel...
Oh, and you are running a Xen kernel, and Xen is not Linux; so if your desktop exhibits any bugs or odd behaviour, replicate it without Xen first, or else no one in the Red Hat community will help you.
James
on 21:35 Wed 02 Mar, Rudi Ahlers (Rudi@SoftDux.com) wrote:
On Wed, Mar 2, 2011 at 7:56 PM, compdoc compdoc@hotrodpc.com wrote:
Yes, I know that I could have used KVM, VMware or VirtualBox, but I wanted to use what's included already.
<...>
What I'm getting at:
Can, or will, virtualization replace dual-boot systems
It far and away already has. Dual-booting is a bastard compromise which forces you to select between alternative OSs, doesn't allow for simultaneous access to features (and storage) of both, and generally necessitates use of some low-standard transfer storage partition (e.g. vfat).
Virtualization allows you to have your pick of base host OS (Linux, Windows, Mac, or bare-iron virtualization with some technologies), while offering a reasonable facsimile of bare-iron performance, often allowing multiple guests to run simultaneously. For realtime-performant needs (mostly gaming, though some engineering tasks come to mind), you'll still want to avoid a virtualized environment, but for many, many other tasks this is more than adequate.
The primary limitation I've encountered is RAM utilization. However much the vendors provide, and however cheaply, it's never enough. And it's the truly mundane stuff (browser sessions, usually) that seems to suck the most RAM.
or even give one the ability to use your desktop PC to its full advantage? For example, while I'm busy rendering a 3-hour 3D scene in Maya (running in Windows 7), I want to watch a movie in Linux, but have both run in real time. My PC is capable of it, with 2x Core i7 CPUs & 16 GB RAM. This is just an example.
If you could reduce priority on the render, you'll likely be happier. Some resources (disk IO particularly) aren't fungible, though, and may have impacts on virtualized environments. This applies to swap as well.
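For what it's worth, if the render were in a Xen guest, the credit scheduler lets you deprioritize it from dom0. A minimal sketch, assuming a guest named win7-render (hypothetical):

    # halve the guest's CPU share under contention (default weight is 256)
    xm sched-credit -d win7-render -w 128
    # or hard-cap it to at most 150% of one core
    xm sched-credit -d win7-render -c 150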
On Wed, 2011-03-02 at 19:18 -0800, Dr. Ed Morbius wrote:
It far and away already has. Dual-booting is a bastard compromise which forces you to select between alternative OSs, doesn't allow for simultaneous access to features (and storage) of both, and generally necessitates use of some low-standard transfer storage partition (e.g. vfat).
My dual-booting, actually tri-booting, with Vista (ugh!), CentOS (brilliant) and Fedora 14 (not keen; a bit seriously buggy) allows me, in Linux, to access and change the file space content used by the other two operating systems. Surely that constitutes simultaneous access to storage?
With best regards,
Paul. England, EU.
On Mar 3, 2011, at 6:38 AM, Always Learning wrote:
On Wed, 2011-03-02 at 19:18 -0800, Dr. Ed Morbius wrote:
It far and away already has. Dual-booting is a bastard compromise which forces you to select between alternative OSs, doesn't allow for simultaneous access to features (and storage) of both, and generally necessitates use of some low-standard transfer storage partition (e.g. vfat).
My dual-booting, actually tri-booting, with Vista (ugh!), CentOS (brilliant) and Fedora 14 (not keen; a bit seriously buggy) allows me, in Linux, to access and change the file space content used by the other two operating systems. Surely that constitutes simultaneous access to storage?
If you are tri-booting, how are you accessing the file systems of the other OS's "at the same time"? Don't you have to reboot to change OS's?
On Thu, 2011-03-03 at 06:43 -0500, Kevin K wrote:
On Mar 3, 2011, at 6:38 AM, Always Learning wrote:
My dual-booting, actually tri-booting, with Vista (ugh!), CentOS (brilliant) and Fedora 14 (not keen; a bit seriously buggy) allows me, in Linux, to access and change the file space content used by the other two operating systems. Surely that constitutes simultaneous access to storage?
If you are tri-booting, how are you accessing the file systems of the other OS's "at the same time"? Don't you have to reboot to change OS's?
No rebooting is necessary when running CentOS 5.5. Besides, I am 'lazy' and hate rebooting because it is so time-wasting.
On one machine running CentOS 5.5, I have in /etc/fstab:
/dev/sda5 /nos.f14 ext4 auto 0 0
/nos.f14 is a pre-created, but empty, directory used as the mounting point for, in this instance, Fedora 14.
On another machine (the tri-boot machine) I also run CentOS 5.5, and in that /etc/fstab I have:
/dev/sda3 /z-vista/ ntfs-3g auto,umask=0000,defaults 0 0
/dev/sda7 /z-fedora/ ext4 defaults 1 2
The z-vista and z-fedora are empty root directories used as mounting points. Obviously you can use any name you prefer.
Being honest, I have to point out that I cannot remember what the 0 0 or the 1 2 actually mean.
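For reference, those are the standard last two fstab columns; a quick gloss against one of the lines above:

    # <device>  <mountpoint>  <fstype>  <options>  <dump>  <fsck order>
    # dump: 1 = include in dump(8) backups, 0 = skip
    # fsck order: 0 = never fsck, 1 = check first (the root fs), 2 = check after root
    /dev/sda7   /z-fedora/    ext4      defaults   1       2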
It works. I can access and change the Vista 'drive' contents and also the entire Fedora 'drive'. If I wanted to access, on that machine, Vista's two extra drives (System & Resources), then I would add to /etc/fstab something like:
/dev/sda1 /z-system/ ntfs-3g auto,umask=0000,defaults 0 0
/dev/sda2 /z-resources/ ntfs-3g auto,umask=0000,defaults 0 0
Hope that helps.
With best regards,
Paul. England, EU.
Kevin K wrote:
On Mar 3, 2011, at 6:38 AM, Always Learning wrote:
On Wed, 2011-03-02 at 19:18 -0800, Dr. Ed Morbius wrote:
It far and away already has. Dual-booting is a bastard compromise which forces you to select between alternative OSs, doesn't allow for simultaneous access to features (and storage) of both, and generally necessitates use of some low-standard transfer storage partition (e.g. vfat).
My dual-booting, actually tri-booting, with Vista (ugh!), CentOS (brilliant) and Fedora 14 (not keen; a bit seriously buggy) allows me, in Linux, to access and change the file space content used by the other two operating systems. Surely that constitutes simultaneous access to storage?
If you are tri-booting, how are you accessing the file systems of the other OS's "at the same time"? Don't you have to reboot to change OS's?
I think Paul's point was that ntfs-3g provides write access to NTFS, so you no longer have to use a vfat transfer partition to exchange files between Linux and MS Windows.
On 03/03/2011 06:43 AM, Kevin K wrote:
On Mar 3, 2011, at 6:38 AM, Always Learning wrote:
On Wed, 2011-03-02 at 19:18 -0800, Dr. Ed Morbius wrote:
My dual-booting, actually tri-booting, with Vista (ugh!), CentOS (brilliant) and Fedora 14 (not keen; a bit seriously buggy) allows me, in Linux, to access and change the file space content used by the other two operating systems. Surely that constitutes simultaneous access to storage?
If you are tri-booting, how are you accessing the file systems of the other OS's "at the same time"? Don't you have to reboot to change OS's?
Kevin,
When booting a system with multiple operating systems, it is true that only one operating system may be in use at a time. However, the other operating systems are installed in partitions on the disk, and those partitions may be mounted like any other filesystem, hence the ability to use them while a single instance of an operating system is running. It's all done via /etc/fstab and mount options.
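As a minimal sketch of the same idea done by hand (device names and mount points are placeholders):

    # one-off mount of another install's root partition
    mkdir -p /mnt/fedora
    mount -t ext4 /dev/sda7 /mnt/fedora
    # NTFS needs the ntfs-3g driver, which on CentOS 5 comes from a
    # third-party repository such as RPMforge
    mount -t ntfs-3g /dev/sda3 /mnt/vista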
Kind regards,
Phil
On Thu, Mar 03, 2011 at 07:10:26AM -0500, Phil Savoie wrote:
On 03/03/2011 06:43 AM, Kevin K wrote:
On Mar 3, 2011, at 6:38 AM, Always Learning wrote:
two operating systems. Surely that constitutes simultaneous access to storage?
If you are tri-booting, how are you accessing the file systems of the other OS's "at the same time"? Don't you have to reboot to change OS's?
When booting a system with multiple operating systems, it is true that only one operating system may be in use at a time. However, the other operating systems are installed in partitions on the disk, and those partitions may be mounted like any other filesystem, hence the ability to use them while a single instance of an operating system is running. It's all done via /etc/fstab and mount options.
I think people are misunderstanding the word "simultaneous"; in a multi-boot environment each OS has unique access to the filesystem. Sure, Linux can access the NTFS filesystem and make changes, but while Linux is accessing the device, Windows is not. There is no _simultaneous_ access; it's one OS or the other, not both at the same time.
On 03/03/11 4:10 AM, Phil Savoie wrote:
When booting a system with multiple operating systems, it is true that only one operating system may be in use at a time. However, the other operating systems are installed in partitions on the disk, and those partitions may be mounted like any other filesystem, hence the ability to use them while a single instance of an operating system is running. It's all done via /etc/fstab and mount options.
I am not a fan of multiple booting.
Multiple OSs can make a mess of filesystem permissions if you're not careful. For instance, if you have multiple Linux installs, you'll need to go to some trouble to ensure their /etc/passwd files stay in sync, or they'll make a mess of each other's ownership. And mapping Linux UIDs to Windows user IDs for NTFS is non-trivial.
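A quick way to spot that kind of drift, assuming the second install's root is mounted at /mnt/other (hypothetical path):

    # compare name:UID pairs between the two installs
    diff <(cut -d: -f1,3 /etc/passwd | sort) \
         <(cut -d: -f1,3 /mnt/other/etc/passwd | sort)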
Computer security overhead is multiplied by the number of OSs. If you haven't booted that Windows system for a few weeks, expect to spend a good hour with Windows Update, antivirus updates, web browser & plugin updates, Adobe, etc. before using it next time. A network configuration change would have to be made to all the different OSs. And so on.
And, when all is said and done, your system's bootstrap sequence becomes a rather fragile house of cards.
on 11:38 Thu 03 Mar, Always Learning (centos@g7.u22.net) wrote:
On Wed, 2011-03-02 at 19:18 -0800, Dr. Ed Morbius wrote:
It far and away already has. Dual-booting is a bastard compromise which forces you to select between alternative OSs, doesn't allow for simultaneous access to features (and storage) of both, and generally necessitates use of some low-standard transfer storage partition (e.g. vfat).
My dual-booting, actually tri-booting, with Vista (ugh!), CentOS (brilliant) and Fedora 14 (not keen; a bit seriously buggy) allows me, in Linux, to access and change the file space content used by the other two operating systems. Surely that constitutes simultaneous access to storage?
I should have hedged: there are means of accessing NTFS from Linux (the ntfs-3g driver) and Linux ext2/3 filesystems from Windows (Explore2fs and some ported drivers, IIRC). As I recall, writing via ntfs-3g still triggers a filesystem scan on the next Windows boot. The ext2/3 access, last I used it (years ago), worked, but wasn't particularly fluid.
Neither gives you proper multi-user semantics (/etc/passwd and wherever NT stores its user perms/IDs stuff aren't used).
If you've coordinated UIDs, yes, it's very possible to share Linux partitions between multi-booted systems, though I'd still argue that this is less than optimal. A chroot works pretty well (and keeps things like LD search paths sane). KVM is /very/ lightweight and allows for separate process space.
Compare against CIFS/Samba shares or NFS exports between booted host/guests. You get native filesystem support (under the host/guest as relevant), and mappings via CIFS/Samba and/or NFS/NIS+.
The win is still virtualization.
On Thursday, March 03, 2011 01:20:06 pm Dr. Ed Morbius wrote:
Compare against CIFS/Samba shares or NFS exports between booted host/guests. You get native filesystem support (under the host/guest as relevant), and mappings via CIFS/Samba and/or NFS/NIS+.
The win is still virtualization.
There are situations where dual-booting is a necessary thing to do; one of those is low-latency professional audio where accurate timekeeping is required; basically anything that needs the -rt preemptive kernel patches. I actually have need of this, from multiple OS's, and while I've tried the 'run it in VMware' thing with Windows and professional audio applications the results were not satisfactory.
There are commercially developed and supported drivers for cross-platform uses put out by Paragon Software; ext[234]fs on Windows and OS X, HFS+ on Linux and Windows, and full NTFS (with lots of utilities) on OS X and Linux.
HFS+ would be the preferred filesystem to interchange with Mac OS X, but the in-kernel Linux drivers for HFS have issues; read-only use is not a problem, but the in-kernel driver is unsafe under anything like a heavy load, with filesystem corruption possible, especially when deleting lots of small files.
On Mar 3, 2011, at 12:37 PM, Lamar Owen wrote:
On Thursday, March 03, 2011 01:20:06 pm Dr. Ed Morbius wrote:
Compare against CIFS/Samba shares or NFS exports between booted host/guests. You get native filesystem support (under the host/guest as relevant), and mappings via CIFS/Samba and/or NFS/NIS+.
The win is still virtualization.
There are situations where dual-booting is a necessary thing to do; one of those is low-latency professional audio where accurate timekeeping is required; basically anything that needs the -rt preemptive kernel patches. I actually have need of this, from multiple OS's, and while I've tried the 'run it in VMware' thing with Windows and professional audio applications the results were not satisfactory.
Agreed.
Even with high-end 3D where OpenGL is a must.
Even Photoshop CS5 takes advantage of graphics acceleration.
I look to virtualization for:
- isolation
- quick app access for a specific guest OS
- great fault tolerance
- testing theories and deployments
- lower overall hardware cost in aggregate
- aurf
On Thu, Mar 3, 2011 at 10:45 PM, aurfalien@gmail.com wrote:
On Mar 3, 2011, at 12:37 PM, Lamar Owen wrote:
On Thursday, March 03, 2011 01:20:06 pm Dr. Ed Morbius wrote:
Compare against CIFS/Samba shares or NFS exports between booted host/guests. You get native filesystem support (under the host/guest as relevant), and mappings via CIFS/Samba and/or NFS/NIS+.
The win is still virtualization.
There are situations where dual-booting is a necessary thing to do; one of those is low-latency professional audio where accurate timekeeping is required; basically anything that needs the -rt preemptive kernel patches. I actually have need of this, from multiple OS's, and while I've tried the 'run it in VMware' thing with Windows and professional audio applications the results were not satisfactory.
Agreed.
Even with high-end 3D where OpenGL is a must.
Even Photoshop CS5 takes advantage of graphics acceleration.
I look to virtualization for:
- isolation
- quick app access for a specific guest OS
- great fault tolerance
- testing theories and deployments
- lower overall hardware cost in aggregate
- aurf
That's exactly what I was getting at :)
Although it's not there yet, I'm sure we'll get there sooner than expected
On Thursday, March 03, 2011 04:04:42 pm Rudi Ahlers wrote:
Although it's not there yet, I'm sure we'll get there sooner than expected
To be fair to VMware Fusion on OS X, the graphics acceleration is fantastic, running Windows 7 in full Aero mode with no problems. But it still can't keep accurate time.
On Mar 3, 2011, at 1:18 PM, Lamar Owen wrote:
On Thursday, March 03, 2011 04:04:42 pm Rudi Ahlers wrote:
Although it's not there yet, I'm sure we'll get there sooner than expected
To be fair to VMware Fusion on OS X, the graphics acceleration is fantastic, running Windows 7 in full Aero mode with no problems. But it still can't keep accurate time.
In general, yes, it's fine; but for specific targeted needs, virtualization may not be a viable option, which I think was the point.
I think passthrough is very, very cool, which VMware Player/Fusion doesn't do for graphics.
Having multiple cards/resources in a host and doling them out / dedicating them to any VM would be very cool.
Hence PVOPS 2.6.32 and Xen 4.0.1.
Although KVM is also supposed to do this.
- aurf
On 3/3/2011 2:37 PM, Lamar Owen wrote:
Compare against CIFS/Samba shares or NFS exports between booted host/guests. You get native filesystem support (under the host/guest as relevant), and mappings via CIFS/Samba and/or NFS/NIS+.
The win is still virtualization.
There are situations where dual-booting is a necessary thing to do; one of those is low-latency professional audio where accurate timekeeping is required; basically anything that needs the -rt preemptive kernel patches. I actually have need of this, from multiple OS's, and while I've tried the 'run it in VMware' thing with Windows and professional audio applications the results were not satisfactory.
But you can usually run the one that is picky as the host OS and the other(s) virtualized. Or set up for dual boot, but give your virtual machine direct access to the partition (VMware can do this - not sure about the others). Then you only have to boot into the other OS when you need to run the specific app that doesn't work well in a VM.
There are commercially developed and supported drivers for cross-platform uses put out by Paragon Software; ext[234]fs on Windows and OS X, HFS+ on Linux and Windows, and full NTFS (with lots of utilities) on OS X and Linux.
HFS+ would be the preferred filesystem to interchange with Mac OS X, but the in-kernel Linux drivers for HFS have issues; read-only use is not a problem, but the in-kernel driver is unsafe under anything like a heavy load, with filesystem corruption possible, especially when deleting lots of small files.
As long as you have access to a network, just connect up a common nfs/samba share from some other machine.
On Thursday, March 03, 2011 03:55:48 pm Les Mikesell wrote:
But you can usually run the one that is picky as the host OS and the other(s) virtualized.
You really don't know what you're talking about in this case, Les. The specific machine that I'm talking about needs access to Harrison Mixbus on OS X with iZotope Alloy, Ozone, and Spectron as AudioUnits, and also access to Ardour (soon Mixbus, once I get some things squared) on Linux with certain specialized LV2 plugins for special tasks. Both environments are time critical. There is also clock sync to outboard processing gear; I'm talking realtime on both OS'es, and virtualization is not a workable option, at least as long as hard realtime under a VM isn't possible. If the iZotope plugins would work as VST's under Linux in a reliable manner I could remove at least part of my need for OS X; well, and once Melodyne for Windows can run under Crossover (haven't tried; don't know). But I still do analysis in Spectre, and that's OS X-only.
Or set up for dual boot, but give your virtual machine direct access to the partition (VMware can do this - not sure about the others). Then you only have to boot into the other OS when you need to run the specific app that doesn't work well in a VM.
Again, there are apps on both systems that are needed, and they need to share rather large audio files (multiple tracks of 32-bit floating point audio for many minutes means a number of GB per session). And due to outboard processing, clock sync is a must; in the future, SMPTE timecode will be part of that. And since the workflow between the two operating systems *is* serializable, dual boot is workflow-friendly in this environment, where you might be charging a client significant amounts per hour of time. And it wasn't too awfully hard to set up.
And OS X running in VMware Workstation under Linux is rather difficult to do using direct partition access. Linux/CentOS on VMware Fusion works great, but VMware's timekeeping doesn't.
As long as you have access to a network, just connect up a common nfs/samba share from some other machine.
No. That specific machine is not networked, to reduce IRQ load. Every IRQ that can be turned off is turned off.
On 3/3/2011 3:17 PM, Lamar Owen wrote:
On Thursday, March 03, 2011 03:55:48 pm Les Mikesell wrote:
But you can usually run the one that is picky as the host OS and the other(s) virtualized.
You really don't know what you're talking about in this case, Les. The specific machine that I'm talking about needs access to Harrison Mixbus on OS X with iZotope Alloy, Ozone, and Spectron as AudioUnits, and also access to Ardour (soon Mixbus, once I get some things squared) on Linux with certain specialized LV2 plugins for special tasks. Both environments are time critical. There is also clock sync to outboard processing gear; I'm talking realtime on both OS'es, and virtualization is not a workable option, at least as long as hard realtime under a VM isn't possible. If the iZotope plugins would work as VST's under Linux in a reliable manner I could remove at least part of my need for OS X; well, and once Melodyne for Windows can run under Crossover (haven't tried; don't know). But I still do analysis in Spectre, and that's OS X-only.
So there are actually apps that work in Linux that aren't available for OS X?
As long as you have access to a network, just connect up a common nfs/samba share from some other machine.
No. That specific machine is not networked, to reduce IRQ load. Every IRQ that can be turned off is turned off.
I'm kind of surprised that a local disk controller would be better in that respect than a network card.
On Thursday, March 03, 2011 04:44:58 pm Les Mikesell wrote:
So there are actually apps that work in Linux that aren't available for OS X?
Yep. For one example, there are the LinuxDSP plugins. There are others.
I'm kind of surprised that a local disk controller would be better in that respect than a network card.
Can be, depending upon the controller's chipset. Networking has somewhat non-deterministic characteristics, even for small networks. And, if you don't need networking to get the job done, why have it?
And don't believe what the IRQ-steering docs say; sharing IRQs with audio interfaces is not going to be reliable (been there, done that, got the ALSA xruns to prove it), at least not the last time I tried it. By cutting out devices that need IRQs completely, you can gain some control over which IRQ goes where in terms of the physical PCI slot; leaving interfaces enabled 'Just Because' will complicate that. In one specific example, by disabling the ethernet interface on the motherboard of one particular machine, along with some of the other devices like the onboard sound card and modem, I was able to get the video card (nVidia) off the IRQ that the audio interface's PCI slot (newer motherboard; only one regular PCI slot in a location conducive to the audio interface) had to have.
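The starting point for that kind of audit is simply (device names will vary by machine):

    # see which devices ended up sharing IRQ lines with the audio interface
    cat /proc/interrupts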
on 15:37 Thu 03 Mar, Lamar Owen (lowen@pari.edu) wrote:
On Thursday, March 03, 2011 01:20:06 pm Dr. Ed Morbius wrote:
Compare against CIFS/Samba shares or NFS exports between booted host/guests. You get native filesystem support (under the host/guest as relevant), and mappings via CIFS/Samba and/or NFS/NIS+.
The win is still virtualization.
There are situations where dual-booting is a necessary thing to do; one of those is low-latency professional audio where accurate
I think I addressed that reality. For some needs you need to be on bare metal, though whether that is best accomplished via multi-booting or via multiple systems is another question (if you're doing professional music editing, presumably you can justify a dedicated system for that task).
timekeeping is required; basically anything that needs the -rt preemptive kernel patches. I actually have need of this, from multiple OS's, and while I've tried the 'run it in VMware' thing with Windows and professional audio applications the results were not satisfactory.
What surprises me is that there aren't more systems available which provide separate bare-metal computing environments within a single enclosure, perhaps with some form of shared storage, perhaps just integrated networking, to serve this sort of need. We see this in server space (blade and multi-system enclosures) but rarely, if ever, in consumer space.
Otherwise, the solution would be to run the system with the low-latency requirements as the host.
On Thursday, March 03, 2011 04:24:14 pm Dr. Ed Morbius wrote:
I think I addressed that reality.
Part of it, yes.
For some needs you need to be on bare metal, though whether that is best accomplished via multi-booting or via multiple systems is another question (if you're doing professional music editing, presumably you can justify a dedicated system for that task).
It's not the computer portion of a separate dedicated system that would be expensive; it's the audio interfaces, patching, and control surfaces. Much, much easier to dual-boot in a workflow-friendly fashion. It would be decidedly nice to have virtualization running well enough to handle all the needs, but it requires a machine of twice the capacity to do it.
What surprises me is that there aren't more systems available which provide separate bare-metal computing environments within a single enclosure, perhaps with some form of shared storage, perhaps just integrated networking, to serve this sort of need. We see this in server space (blade and multi-system enclosures) but rarely, if ever, in consumer space.
I've thought a bit about options; a ClearCube-type setup might work, and used units aren't expensive. Don't know if blades are available with the expansion options needed, though. Need a PCI slot at minimum.
on 16:44 Thu 03 Mar, Lamar Owen (lowen@pari.edu) wrote:
On Thursday, March 03, 2011 04:24:14 pm Dr. Ed Morbius wrote:
I think I addressed that reality.
Part of it, yes.
For some needs you need to be on bare metal, though whether that is best accomplished via multi-booting or via multiple systems is another question (if you're doing professional music editing, presumably you can justify a dedicated system for that task).
It's not the computer portion of a separate dedicated system that would be expensive; it's the audio interfaces, patching, and control surfaces. Much, much easier to dual-boot in a workflow-friendly fashion. It would be decidedly nice to have virtualization running well enough to handle all the needs, but it requires a machine of twice the capacity to do it.
I thought a bit about that when posting earlier. I still disagree WRT dual-booting. And no, virtualization doesn't need twice the hardware by a long shot (aggregated load averaging, shared componentry, and a host of other savings).
Audio's pretty easy, as you could select between sources and output (or input) accordingly.
Ditto inputs (keyboard, mouse, etc.). Storage might be virtualized/aggregated somehow.
For video, you want high performance. I'm thinking an integrated KVM switch might work, or something like it. If done in hardware with digital inputs, it should be pretty good. How you'd split / select displays would be a design question.
On Thursday, March 03, 2011 06:55:56 pm Dr. Ed Morbius wrote:
I thought a bit about that when posting earlier. I still disagree WRT dual-booting. And no, virtualization doesn't need twice the hardware by a long shot (aggregated load averaging, shared componentry, and a host of other savings).
It needs twice the CPU and twice the RAM to work reliably for professional low-latency audio production. The DSP in Harrison Mixbus alone needs one whole CPU core pretty much dedicated to it; and that's just the DSP engine, not counting the Ardour-based user interface. Two cores is a minimum requirement to run Mixbus, as stated clearly on Harrison's website and as verified independently by myself and others. Otherwise you get xruns, and xruns kill your quality. Not to mention that the GTK GUI goes into erratic comas when you try to single-core it (even with a very fast core this is the case).
Don't get me wrong; I have tried this with virtualization, and it simply does not work at the latencies required when the track count gets higher. It just doesn't work; xruns will find their way into the audio. And that's on both the host and the guest; guest load can cause the host to xrun. They are, after all, still sharing the same bus or PCIe fabric, and high track counts at low latency already heavily stress the PCI bus and 1x PCIe lanes, for the audio interface and for the disks; do the bandwidth calculation yourself for 32 tracks at 96kHz sampling, at 24 bits from the audio interface and 32-bit floating point to the disk. And that's bidirectional.
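For reference, a back-of-the-envelope version of that calculation:

    32 tracks x 96,000 samples/s x 3 bytes (24-bit)  = ~9.2 MB/s from the interface
    32 tracks x 96,000 samples/s x 4 bytes (float32) = ~12.3 MB/s to or from disk

And both streams run in each direction at once when overdubbing.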
So if I'm running two instances of Mixbus, I need a minimum of four cores. And the memory balloon driver that's typically part of the guest's virtualization tools package can cause more problems than it's worth. (I'm fighting this now with CentOS 4 (32-bit) under VMware ESX 3.5U5 on a server: I'm getting the oom-killer hitting (typically it takes out clamscan, one of the antivirus engines I'm running on that server) after a couple of weeks of uptime, and after eight to twelve hours of the oom-killer hitting, the root filesystem goes read-only and a hard reboot of the guest is required to recover. Once I get some data on why, I'm going to file a bug report, since it started about two months ago after a long time of reliable uptime; perhaps a kernel or a glibc or a clamav (not in the CentOS repositories; third-party) update destabilized something, but I don't have enough data to be helpful yet.)
Audio's pretty easy, as you could select between sources and output (or input) accordingly.
Low-latency audio isn't easy on Linux even on bare metal; I'm talking low-latency audio where you're overdubbing material and need sub-50ms delay between inputs and outputs. I'm running a Tascam US-224 and a US-428 in the special raw USB mode and have achieved 11ms latencies, but that isn't easy. The preemptive kernel is required for this, and accurate timekeeping is required for this; you even have to turn off CPU frequency scaling to get it to work correctly as the latency goes down. And the audio latency has to be consistent, which is one reason pulseaudio is typically tossed out completely and JACK is the audio server of choice.
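For concreteness, the sort of JACK invocation involved; the device name and numbers here are illustrative only, not a recommendation:

    # realtime JACK at ~10.7 ms of buffer latency: 256 frames x 2 periods / 48000 Hz
    jackd -R -P70 -d alsa -d hw:0 -r 48000 -p 256 -n 2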
And I'm not talking about a small number of ins and outs; with RME Hammerfall equipment and outboard converters you could easily have 32 or more tracks in and that many out running concurrently. You could have Ardour/Mixbus running 40 tracks with 8 or 16 or more recording while the others are playing in an overdub session, and latency must be hard-realtime controlled (otherwise the performers doing the overdub are going to strangle the engineer....). Since the DSP plugins are running in real-time as well, you end up with quite a load, and it has to be hard realtime when you get to that many tracks.
CentOS is used quite heavily in these circumstances, incidentally, because of the history of reliability and solid version stability; the hard part becomes getting newer versions of software running.
The other application I thought about last night is NTP stratum 1 and 2 disciplined clocks, where the 1pps output from a GPS receiver is used along with the timecode coming down the serial line. I have yet to find any virtualization solution that keeps time well enough to be an NTP server at all, much less stratum 1 or 2.
On 3/2/2011 11:29 AM, Rudi Ahlers wrote:
So, I installed CentOS + KDE, chose the Virtualization package and used Virtual Machine Manager to set up another CentOS VM inside CentOS (I only have a CentOS ISO on this SAN, since we don't use Debian / Slackware / FC / Ubuntu / etc). The installation was probably about the same speed as it would be on raw hardware. But using the interface is painfully slow. I opened up Firefox and browsed the web a bit. The mouse cursor lagged a bit, and whenever I loaded a slow / large website, it seemed as if the whole VM lagged behind.
X without hardware acceleration is pretty ugly; you end up making the CPU do block moves even for simple things like screen scrolling. Not sure how the virtual interface works, but a better approach is either running X natively on your local hardware with the desktop/app remote (if you are on a low-latency LAN), or freenx on the server and the NX client locally (which works regardless of the connection speed).
And, granted, when we install virtual machines on a Xen server, we don't ever use X, since those servers run as web / email / database / file servers, so there's no need for X.
Xen seems to be on its way out.
BUT, I want(ed) to see if this is a reality for the average desktop user, or not really (yet?), seeing as most modern PCs have far more CPU & RAM resources than most users actually need. I'm not talking about developers / graphic designers / etc. I'm talking about Bob, who uses his PC for email, internet, document writing, etc., and needs to boot into Windows if he feels like playing Warcraft III or StarCraft II, or using Pastel, etc.
If you have paid for a Windows license and/or want to run games, why wouldn't you run Windows natively, with the NX client to access remote Linux desktops, or VMware Player to run them locally?
Wouldn't it be nice to run Windows, or for that matter Solaris / FreeBSD / Mac (graphics designer) / another flavor of Linux / etc., inside your favorite Linux, and access it from the desktop without too much trouble?
Yes, as a matter of fact, it is nice; but it doesn't really make much difference which is the host and which is the guest, or, for most things, whether you run locally or remotely. For most things, I find floating a running Linux desktop around among NX clients to be extremely handy. And if you want a local VM, it is possible to set up a dual-boot system so that you also have the choice of running the currently-inactive partition under VMware Player without rebooting.
On 02/03/11 19:07, Les Mikesell wrote:
On 3/2/2011 11:29 AM, Rudi Ahlers wrote:
So, I installed CentOS + KDE, chose the Virtualization package and used Virtual Machine Manager to set up another CentOS VM inside CentOS (I only have a CentOS ISO on this SAN, since we don't use Debian / Slackware / FC / Ubuntu / etc). The installation was probably about the same speed as it would be on raw hardware. But using the interface is painfully slow. I opened up Firefox and browsed the web a bit. The mouse cursor lagged a bit, and whenever I loaded a slow / large website, it seemed as if the whole VM lagged behind.
X without hardware acceleration is pretty ugly; you end up making the CPU do block moves even for simple things like screen scrolling. Not sure how the virtual interface works, but a better approach is either running X natively on your local hardware with the desktop/app remote (if you are on a low-latency LAN), or freenx on the server and the NX client locally (which works regardless of the connection speed).
What about running an X server on the host that accepts TCP connections, and having the VM's applications display on it, so you access the VM from your host using a "local" X display? A lot of bad things can be said about the X network protocol, but at least it works more smoothly than VNC. The X protocol requires bandwidth (compared to VNC), but working against a virtual network adapter doesn't necessarily kill the performance.
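A hedged sketch of that arrangement; the addresses assume the default libvirt virtual network and are placeholders:

    # on the host: allow the guest to draw on the local X server
    # (the host's X server must run without -nolisten tcp for this to work)
    xhost +192.168.122.10               # the guest's address
    # in the guest: point X clients at the host's display
    DISPLAY=192.168.122.1:0 firefox &
    # or skip the xhost/TCP setup entirely and tunnel X over ssh:
    ssh -X user@192.168.122.10 firefox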
Other than that, SPICE is probably the future [1] on Linux. That should slowly begin to be useful in RHEL5, RHEL6 and Fedora 14, if I'm not much mistaken. Not sure how much is implemented in RHEL5/CentOS5 though. However, for SPICE to work, you need to use KVM. And you need the qemu-kvm part to initialise the SPICE display properly as well.
kind regards,
David Sommerseth
[1] http://www.youtube.com/watch?v=S4DZwYqnyJM http://www.youtube.com/watch?v=uvfkj8V6ylM
On Wed, 2 Mar 2011, David Sommerseth wrote:
Other than that, SPICE is probably the future [1] on Linux. That should slowly begin to be useful in RHEL5, RHEL6 and Fedora 14, if I'm not much mistaken. Not sure how much is implemented in RHEL5/CentOS5 though. However, for SPICE to work, you need to use KVM. And you need the qemu-kvm part to initialise the SPICE display properly as well.
You need qemu-spice for using SPICE, which does not ship with RHEL5 or RHEL6. On top of that, SPICE is only supported by Red Hat for RHEV, not libvirt. That may change in the future, ... but when, nobody knows ;-)
You need qemu-spice for using SPICE, which does not ship with RHEL5 or RHEL6. On top of that, SPICE is only supported by Red Hat for RHEV, not libvirt. That may change in the future, ... but when, nobody knows ;-)
Well, that's certainly disappointing. Any alternatives to SPICE for CentOS? I know Microsoft is working on something for their own systems..
On 02/03/11 21:12, Dag Wieers wrote:
On Wed, 2 Mar 2011, David Sommerseth wrote:
Other than that, SPICE is probably the future [1] on Linux. That should slowly begin to be useful in RHEL5, RHEL6 and Fedora 14, if I'm not much mistaken. Not sure how much is implemented in RHEL5/CentOS5 though. However, for SPICE to work, you need to use KVM. And you need the qemu-kvm part to initialise the SPICE display properly as well.
You need qemu-spice for using SPICE, which does not ship with RHEL5 or RHEL6. On top of that, SPICE is only supported by Red Hat for RHEV, not libvirt. That may change in the future, ... but when, nobody knows ;-)
It used to be a separate qemu-spice. But I believe with Fedora 14 (and most probably RHEL6, I haven't checked) that should now be merged into qemu upstream.
http://fedoraproject.org/wiki/Features/Spice
So I presume SPICE will be more widely supported in RHEL, considering Fedora is the "maturing stage" for many RHEL features. Which means, CentOS should get it in the end as well.
I believe they've mostly spent time stabilising it, and slowly working towards open sourcing the SPICE code. IIRC, the SPICE technology was acquired when Red Hat bought Qumranet. So it's probably been quite a journey so far for these guys :)
kind regards,
David Sommerseth
You need qemu-spice for using SPICE, which does not ship with RHEL5 or RHEL6. On top of that, SPICE is only supported by Red Hat for RHEV, not libvirt. That may change in the future, ... but when, nobody knows ;-)
No you don't Dag.
qemu-kvm and libvirt in RHEL6 already support SPICE... the only thing that isn't included is support for it in virt-manager (that is coming down the road), but you can enable it with virsh edit easily enough, following the XML definition in the libvirt documentation.
I was playing with it last week - very impressive piece of technology.
James
On Wed, 2 Mar 2011, James Hogarth wrote:
You need qemu-spice for using SPICE, which does not ship with RHEL5 or RHEL6. On top of that, SPICE is only supported by Red Hat for RHEV, not libvirt. That may change in the future, ... but when, nobody knows ;-)
qemu-kvm and libvirt in RHEL6 already support SPICE... the only thing that isn't included is support for it in virt-manager (that is coming down the road), but you can enable it with virsh edit easily enough, following the XML definition in the libvirt documentation.
I was playing with it last week - very impressive piece of technology.
Interesting, could you shed some light on what exact XML is needed?
It used to be qemu-spice, though, in past Fedora releases; that's why I was expecting the same.
Interesting, could you shed some light on what exact XML is needed?
http://libvirt.org/formatdomain.html#elementsGraphics http://libvirt.org/formatdomain.html#elementsVideo
You need to set the video type to qxl and the graphics type to spice... then set the appropriate attributes on the element: port, tlsPort, etc.
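For the archives, a minimal sketch of the fragments in question; the port number and listen address here are placeholders:

    <graphics type='spice' port='5930' autoport='no' listen='127.0.0.1'/>
    <video>
      <model type='qxl' vram='65536' heads='1'/>
    </video>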
There should be more info on the Fedora sites as well.
James
On Wed, 2 Mar 2011, Rudi Ahlers wrote:
I am busy setting up some Xen servers on a SAN for high availability and cloud computing, and thought it could be cool to set up virtualization on a CentOS 5.5 desktop, running on a Core i3 + 4 GB RAM [...]
So, I installed CentOS + KDE, chose the Virtualization package and used Virtual Machine Manager to set up another CentOS VM inside CentOS (I only have a CentOS ISO on this SAN, since we don't use Debian / Slackware / FC / Ubuntu / etc). The installation was probably about the same speed as it would be on raw hardware. But using the interface is painfully slow. I opened up Firefox and browsed the web a bit. The mouse cursor lagged a bit, and whenever I loaded a slow / large website, it seemed as if the whole VM lagged behind.
The virtual machine didn't use many resources; I allocated 1 CPU core & 512 MB RAM to it.
I've never allocated less than 1 GB RAM to a VM with an active GUI, but I suspect that RAM crunch is part of the problem.
Install CentOS 5 on raw hardware with 512 MB RAM and try running Firefox...
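If anyone wants to retry the experiment with more breathing room, a hedged virt-install sketch (names and paths are placeholders; option spellings vary a bit between virtinst versions):

    # KVM guest with 1 GB RAM and one vcpu, installing from a local ISO
    virt-install --name centos-test --ram 1024 --vcpus 1 \
        --disk path=/var/lib/libvirt/images/centos-test.img,size=8 \
        --cdrom /srv/iso/CentOS-5.5-i386-bin-DVD.iso --vnc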
On Mar 2, 2011, at 12:29 PM, Rudi Ahlers Rudi@SoftDux.com wrote:
I am busy setting up some Xen servers on a SAN for high availability and cloud computing, and thought it could be cool to set up virtualization on a CentOS 5.5 desktop, running on a Core i3 + 4 GB RAM, and use the SAN's storage to see if it could actually be worth my while to replicate a cloud computing setup in the office. And because I got a bit bored waiting for a few RAID sets to finish initializing.
So, I installed CentOS + KDE, chose the Virtualization package and used Virtual Machine Manager to set up another CentOS VM inside CentOS (I only have a CentOS ISO on this SAN, since we don't use Debian / Slackware / FC / Ubuntu / etc). The installation was probably about the same speed as it would be on raw hardware. But using the interface is painfully slow. I opened up Firefox and browsed the web a bit. The mouse cursor lagged a bit, and whenever I loaded a slow / large website, it seemed as if the whole VM lagged behind.
The virtual machine didn't use many resources; I allocated 1 CPU core & 512 MB RAM to it.
Yes, I know that I could have used KVM, VMware or VirtualBox, but I wanted to use what's included already. Because, let's face it, many people (even though they're technically advanced users) don't know virtualization.
And, granted, when we install virtual machines on a Xen server, we don't ever use X, since those servers run as web / email / database / file servers, so there's no need for X.
BUT, I want(ed) to see if this is a reality for the average desktop user, or not really (yet?), seeing as most modern PCs have far more CPU & RAM resources than most users actually need. I'm not talking about developers / graphic designers / etc. I'm talking about Bob, who uses his PC for email, internet, document writing, etc., and needs to boot into Windows if he feels like playing Warcraft III or StarCraft II, or using Pastel, etc.
Wouldn't it be nice to run Windows, or for that matter Solaris / FreeBSD / Mac (graphics designer) / another flavor of Linux / etc., inside your favorite Linux, and access it from the desktop without too much trouble?
When I had Xen set up on my desktop with 4 GB, I set up dom0 with 1 GB, running an X display manager (xdm/kdm/gdm), and ran headless X in each domU, with 1 GB for each VM. I then had a selection dialog on my X session in dom0 for which host to log in to, and got a full X session for that distribution.
Sound is the only tricky part: you need a sound server in dom0 that accepts sound from all the VMs.
This works with Xen or KVM, though the management and compartmentalization of Xen helps.
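One hedged way to do the sound part with PulseAudio; the addresses assume the default libvirt virtual network:

    # in dom0: accept audio from guests on the virtual network
    pactl load-module module-native-protocol-tcp auth-ip-acl=192.168.122.0/24
    # in each domU: send PulseAudio clients to dom0
    export PULSE_SERVER=192.168.122.1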
Does CentOS support shared memory pages (memory dedup) in Xen? That would allow for a lot more Linux VMs.
-Ross
On 03/03/11 00:41, Ross Walker wrote: [...snip...]
This works with Xen or KVM, though the management and compartmentalization of Xen helps.
Does CentOS support shared memory pages (memory dedup) in Xen? That would allow for a lot more Linux VMs.
I don't think the KSM support has been backported to the RHEL5/CentOS5 kernels. I might be remembering wrong, though.
_If_ KSM is available on the 2.6.18 based kernels, it should definitely work for KVM on RHEL5/CentOS5. However, I doubt it has been backported to the Xen dom0 kernels.
If I've understood it correctly, the Xen hypervisor is its own microkernel, and dom0 is a kind of virtual guest with more privileges than the domUs, so that it can administer and control the guests. IIRC, this microkernel has its own scheduler and memory management too.
With KVM, by contrast, the host kernel (which loads the kvm.ko module) is the hypervisor, and all the virtual guests are qemu-kvm user-space processes. And KSM will merge identical pages across user-space processes, whether they are KVM guests or other applications.
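On a KSM-capable kernel (2.6.32+, as in RHEL6) the interface is just sysfs; qemu-kvm marks guest RAM as mergeable, so KVM guests benefit automatically:

    # start the KSM scanner, then watch how many pages get merged
    echo 1 > /sys/kernel/mm/ksm/run
    cat /sys/kernel/mm/ksm/pages_sharing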
kind regards,
David Sommerseth
On Wed, 2011-03-02 at 19:29 +0200, Rudi Ahlers wrote:
I am busy setting up some Xen servers on a SAN for high availability and cloud computing, and thought it could be cool to set up virtualization on a CentOS 5.5 desktop, running on a Core i3 + 4 GB RAM, and use the SAN's storage to see if it could actually be worth my while to replicate a cloud computing setup in the office. And because I got a bit bored waiting for a few RAID sets to finish initializing. So, I installed CentOS + KDE, chose the Virtualization package and used Virtual Machine Manager to set up another CentOS VM inside CentOS (I only have a CentOS ISO on this SAN, since we don't use Debian / Slackware / FC / Ubuntu / etc). The installation was probably about the same speed as it would be on raw hardware. But using the interface is painfully slow. I opened up Firefox and browsed the web a bit. The mouse cursor lagged a bit, and whenever I loaded a slow / large website, it seemed as if the whole VM lagged behind.
I have openSUSE 11.3 GNOME desktop instances in VMware ESX... works great and performance is good.
Wouldn't it be nice to run Windows, or for that matter Solaris / FreeBSD / Mac (graphics designer) / another flavor of Linux / etc., inside your favorite Linux, and access it from the desktop without too much trouble?
I do this every day from my openSUSE 11.3/GNOME laptop, accessing an openSUSE 11.3/GNOME instance on ESX as well as a Windows Vista instance in local VMware Workstation. Works great, performance is good.
I only have CentOS instances as servers (all in VMware ESX... and, of course, performance is very good).