Do most people today have /boot on a separate partition, or do they (you) have it on the / partition ?
At Tue, 23 Jun 2015 13:49:08 +0100 CentOS mailing list centos@centos.org wrote:
Do most people today have /boot on a separate partition, or do they (you) have it on the / partition ?
The default CentOS installer always puts /boot on a separate partition. This is mostly because the default CentOS installer uses LVM for the bulk of the disk and Grub is *generally* clueless WRT LVM (at least Grub V1, not sure how smart Grub V2 is). Also, there are lots of 'fun' options for what/where the root partition can be, not all of them compatible with what Grub (or other boot loaders) know how to deal with. Having /boot on its own (small) partition, using something 'simple' for a file system, makes things 'easy' for bootloaders. Once the kernel is fired up it can load all sorts of modules to allow it to mount the root file system, everything from exotic file systems to LVM and RAID, etc.
Another advantage of having /boot on its own partition is supporting multiple Linux flavors; that is, it is possible to 'share' /boot between CentOS, Fedora, Ubuntu, Debian, etc. if one wants to. It is really easier to pick one system for your 'host' and then install VMs for all of the others, but sometimes one needs to test things with different Linux flavors *on the bare metal* for various reasons.
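For concreteness, a default CentOS 7 install typically ends up looking something like this (device names and sizes here are hypothetical, just to illustrate the layout):

  $ lsblk
  NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  sda               8:0    0  100G  0 disk
  |-sda1            8:1    0  500M  0 part /boot          <- plain partition, simple fs
  `-sda2            8:2    0 99.5G  0 part                <- PV handed to LVM
    |-centos-root 253:0    0   50G  0 lvm  /
    |-centos-swap 253:1    0    4G  0 lvm  [SWAP]
    `-centos-home 253:2    0 45.5G  0 lvm  /home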
On Tue, Jun 23, 2015 at 9:27 AM, Robert Heller heller@deepsoft.com wrote:
At Tue, 23 Jun 2015 13:49:08 +0100 CentOS mailing list centos@centos.org wrote:
Do most people today have /boot on a separate partition, or do they (you) have it on the / partition ?
The default CentOS installer always puts /boot on a separate partition. This is mostly because the default CentOS installer uses LVM for the bulk of the disk and Grub is *generally* clueless WRT LVM (at least Grub V1, not sure how smart Grub V2 is). Also, there are lots of 'fun' options for what/where the
Accessing /boot within LVM was not possible with legacy GRUB (GRUB1); GRUB2 is much "smarter".
I've done a few Debian installs (VMs) with /boot as part of the root partition, which was an LV. Those were done as a proof-of-concept, but it also reduced "wasted" space which would otherwise only be usable within the file system for /boot, and gave the flexibility of having everything be part of LVM (which is great if the file systems in use support shrinking).
I'd expect GRUB2 in CentOS7 would allow for /boot within LVM, though I have not tried it.
root partition can be, not all of them compatible with what Grub (or other boot loaders) know how to deal with. Having /boot on its own (small) partition, using something 'simple' for a file system makes things 'easy' for bootloaders. Once the kernel is fired up it can load all sorts of modules to allow it to mount the root file system, everything from exotic file systems to LVM and RAID, etc.
Another advantage of having /boot on its own partition is supporting multiple Linux flavors; that is, it is possible to 'share' /boot between CentOS, Fedora, Ubuntu, Debian, etc. if one wants to. It is really easier to pick one system for your 'host' and then install VMs for all of the others, but sometimes one needs to test things with different Linux flavors *on the bare metal* for various reasons.
-- Robert Heller -- 978-544-6933 Deepwoods Software -- Custom Software Services http://www.deepsoft.com/ -- Linux Administration Services heller@deepsoft.com -- Webhosting Services
On Tue, 23 Jun 2015 at 09:27 -0000, Robert Heller wrote:
Another advantage of having /boot on its own partition is supporting multiple Linux flavors; that is, it is possible to 'share' /boot between CentOS, Fedora, Ubuntu, Debian, etc. if one wants to. It is really easier to pick one system for your 'host' and then install VMs for all of the others, but sometimes one needs to test things with different Linux flavors *on the bare metal* for various reasons.
This might work until the different OS installations start arguing about the version of grub/grub2 and the contents of the grub configuration file (including whose grub configuration files to use). They tend to do this during kernel updates and at other times.
Having grub take over your main boot choice is as silly as Windows assuming it is the only OS installed.
I have long preferred to use a single / partition for each OS installation and put the system-specific grub image in the individual partition. I use the one-sector (MBR) FreeBSD bootloader to boot the partition I want at boot time. Note that grub2 doesn't like this and spews an error message, but it does work (mostly); grub2 needs to be fixed to support this functionality and not assume it can become the master boot process.
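For reference, installing grub2 into a partition boot record (rather than the MBR) requires forcing it past exactly that complaint; a sketch, with the device name assumed:

  # grub2 refuses partition installs by default because it must fall back to blocklists
  grub2-install --force /dev/sda2
  # it warns that blocklist-based installs are unreliable, but proceeds anyway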
This does tend to limit you to 3 OS choices in the primary partitions (assuming a 4th extended partition for data partitions). That usually is enough for my purposes.
With EFI systems, this is changing and my experiences are not complete enough to have strong opinions. It does look like the EFI boot process conceptually handles this by letting the user choose which EFI application to load first (using efibootmgr). EFI disk-based booting does still need the FAT-formatted partition often mounted at /boot/efi.
For booting encrypted systems, I do put /boot in separate partitions (with the corresponding / being an extended partition). I still use a separate MBR boot loader to select which /boot I use.
Stuart
On Tue, Jun 23, 2015 at 01:49:08PM +0100, Timothy Murphy wrote:
Do most people today have /boot on a separate partition, or do they (you) have it on the / partition ?
Not only do I have a /boot partition, but I have a /boot/efi partition.
I probably could get away without using the /boot partition, but I've been experimenting with btrfs and snapshots and it makes sense to keep /boot as an ext4 filesystem. I suspect with the move to XFS it might be a good idea to keep an ext4 /boot too.
On Tue, 23 Jun 2015 10:42:35 -0400 m.roth@5-cent.us wrote:
Timothy Murphy wrote:
Do most people today have /boot on a separate partition, or do they (you) have it on the / partition ?
Separate partition, 100% of the time.
Inside / (which is almost always ext4), 100% of the time. :-)
That said, I prefer virtual machines over multiboot environments, and I absolutely despise LVM --- that cursed thing is never getting on my drives. Never again, that is...
HTH, :-) Marko
On 6/23/2015 10:33 AM, Marko Vojinovic wrote:
Inside / (which is almost always ext4), 100% of the time. :-)
That said, I prefer virtual machines over multiboot environments, and I absolutely despise LVM --- that cursed thing is never getting on my drives. Never again, that is...
I'm curious what has made some people hate LVM so much. I have been using it for years on thousands of production systems with no issues that could not be easily explained as myself or someone else doing something stupid. And even those issues were pretty few and far between.
/opens can of worms
HTH, :-) Marko
On Tue, Jun 23, 2015 at 12:15 PM, Jason Warr jason@warr.net wrote:
On 6/23/2015 10:33 AM, Marko Vojinovic wrote:
Inside / (which is almost always ext4), 100% of the time. :-)
That said, I prefer virtual machines over multiboot environments, and I absolutely despise LVM --- that cursed thing is never getting on my drives. Never again, that is...
I'm curious what has made some people hate LVM so much. I have been using it for years on thousands of
No clue.
My experiences with LVM have been positive as well. And in my opinion it doesn't add much complexity (if you know the LVM tools, you're fine). Flexibility is worth an ounce of complexity.
production systems with no issues that could not be easily explained as myself or someone else doing something stupid. And even those issues were pretty few and far between.
The worst "nail biter" I had was an instance where a former employee did not properly allocate LV space to /var and I had to reclaim space from rootfs and add it to /var. Even a small screw up and I'd have to go recover from my backups (not fun). Fortunately I spun up VMs and labbed everything a few times over and wrote detailed notes. Went without a hitch.
Prior Proper Planning Prevents Poor Performance
/opens can of worms
I'll bite, see above ;-)
HTH, :-)
Marko
On Tue, 23 Jun 2015 11:15:30 -0500 Jason Warr jason@warr.net wrote:
I'm curious what has made some people hate LVM so much. I have been using it for years on thousands of production systems with no issues that could not be easily explained as myself or someone else doing something stupid. And even those issues were pretty few and far between.
/opens can of worms
Well, I can only tell you my own story; I wouldn't know about other people. Basically, it boils down to the following:
(1) I have no valid use case for it. I don't remember the last time I needed to resize partitions (probably back when I was trying to install Windows 95). Disk space is very cheap, and if I really need to have *that* much data on a single partition, another drive and a few intelligently placed symlinks are usually enough. Cases where a symlink cannot do the job are indicative of a bad data structure design, and LVM is often not a solution, but a patch over a deeper problem elsewhere. Though I do admit there are some valid use cases for LVM.
(2) It is fragile. If you have data on top of LVM spread over an array of disks, and one disk dies, the data on the whole array goes away. I don't know why such a design of LVM was preferred over something more robust (I guess there are reasons), but it doesn't feel right. A bunch of flawless drives containing corrupt data is Just Wrong(tm). I know, one should always have backups, but still...
(3) It's being pushed as a default on ordinary everyday users, who have absolutely no need for it. I would understand it as an opt-in feature that some people might need in datacenters, drive farms, clouds, etc., but an ordinary user installing a single OS on their everyday laptop just doesn't need it. Having a small number of experts jump through hoops during installation to opt in to LVM would cost far less than having a large number of noobs jump through similar hoops to opt out of it.
Also, related to (3), there was that famous Fedora upgrade fiasco a few Fedora releases back. It went like this:
* A default installation included LVM for all partitions, except for /boot, since grub couldn't read inside LVM.
* Six months later, the upgrade process to the next release of Fedora happened to require a lot of space in /boot, more than the default settings.
* The /boot partition, being the only one outside LVM, was the only one that couldn't be resized on-the-fly.
* People who opted out of LVM usually didn't have a reason to create a separate /boot partition, but bundled it under /, circumventing the size issue in advance without even knowing it.
So the story ended with lots of people in upgrade grief purely because they couldn't resize the separate /boot partition, and it was separate because LVM was present, and LVM was present with the goal of making partition resizing easy! A textbook example of a catch-22, unbelievable!!
Of course, I know what you'll say --- it wasn't just LVM, but an unfortunate combination of LVM, limitations of grub, bad defaults and a lousy upgrade mechanism. And yes, you'd be right, I agree. But the bottom line was that people with LVM couldn't upgrade (without bending over backwards), while people without LVM didn't even notice that there was a problem. And since hatred is an irrational thing, you need not look any further than that. ;-)
Best, :-) Marko
Marko Vojinovic wrote:
On Tue, 23 Jun 2015 11:15:30 -0500 Jason Warr jason@warr.net wrote:
I'm curious what has made some people hate LVM so much. I have been using it for years on thousands of production systems with no issues that could not be easily explained as myself or someone else doing something stupid. And even those issues were pretty few and far between.
/opens can of worms
Well, I can only tell you my own story; I wouldn't know about other people. Basically, it boils down to the following:
(1) I have no valid use case for it. I don't remember the last time I needed to resize partitions (probably back when I was trying to install Windows 95). Disk space is very cheap, and if I really need to have *that* much data on a single partition, another drive and a few intelligently placed symlinks are usually enough. Cases where a symlink cannot do the job are indicative of a bad data structure design, and LVM is often not a solution, but a patch over a deeper problem elsewhere. Though I do admit there are some valid use cases for LVM.
(2) It is fragile. If you have data on top of LVM spread over an array of disks, and one disk dies, the data on the whole array goes away. I don't know why such a design of LVM was preferred over something more robust (I guess there are reasons), but it doesn't feel right. A bunch of flawless drives containing corrupt data is Just Wrong(tm). I know, one should always have backups, but still...
<snip> I thought it was interesting years ago, having seen and worked with it in Tru64. These days, if I needed more space, I'd go with plain RAID.
In general, the less complex the better, and the easier to recover when something fails.
mark
On Tue, Jun 23, 2015 at 1:54 PM, Marko Vojinovic vvmarko@gmail.com wrote:
On Tue, 23 Jun 2015 11:15:30 -0500 Jason Warr jason@warr.net wrote:
I'm curious what has made some people hate LVM so much. I have been using it for years on thousands of production systems with no issues that could not be easily explained as myself or someone else doing something stupid. And even those issues were pretty few and far between.
/opens can of worms
Well, I can only tell you my own story; I wouldn't know about other people. Basically, it boils down to the following:
(1) I have no valid use case for it. I don't remember the last time I needed to resize partitions (probably back when I was trying to install Windows 95). Disk space is very cheap, and if I really need to have *that* much data on a single partition, another drive and a few intelligently placed symlinks are usually enough. Cases where a symlink cannot do the job are indicative of a bad data structure design, and LVM is often not a solution, but a patch over a deeper problem elsewhere. Though I do admit there are some valid use cases for LVM.
AIX does use LVM a lot. The main difference is that its filesystem allows live shrinking. It's kinda nice to dynamically size a partition depending on needs, as opposed to the so-often-suggested approach of formatting the entire drive as one single partition. Symlinking is great until whatever the destination is does not mount. I myself use LVM volumes as disks for my VM guests, which XenServer does too (not my fault, I promise!). It is faster than an image file.
(2) It is fragile. If you have data on top of LVM spread over an array of disks, and one disk dies, the data on the whole array goes away. I don't know why such a design of LVM was preferred over something more robust (I guess there are reasons), but it doesn't feel right. A bunch of flawless drives containing corrupt data is Just Wrong(tm). I know, one should always have backups, but still...
Building a RAID0, which is what your example is, and hoping data will survive a drive failure is wishful thinking. You can build LVM on top of a proper RAID, or do RAID inside LVM nowadays... just like ZFS.
(3) It's being pushed as a default on ordinary everyday users, who have absolutely no need for it. I would understand it as an opt-in feature that some people might need in datacenters, drive farms, clouds, etc., but an ordinary user installing a single OS on their everyday laptop just doesn't need it. Having a small number of experts jump through hoops during installation to opt in to LVM would cost far less than having a large number of noobs jump through similar hoops to opt out of it.
That is not lvm's fault, but the distro's decision.
Also, related to (3), there was that famous Fedora upgrade fiasco a few Fedora releases back. It went like this:
- A default installation included LVM for all partitions, except for /boot, since grub couldn't read inside LVM.
- Six months later, the upgrade process to the next release of Fedora happened to require a lot of space in /boot, more than the default settings.
- The /boot partition, being the only one outside LVM, was the only one that couldn't be resized on-the-fly.
- People who opted out of LVM usually didn't have a reason to create a separate /boot partition, but bundled it under /, circumventing the size issue in advance without even knowing it.
Fedora != lvm unless I have been lied to all these years.
So the story ended with lots of people in upgrade grief purely because they couldn't resize the separate /boot partition, and it was separate because LVM was present, and LVM was present with the goal of making partition resizing easy! A textbook example of a catch-22, unbelievable!!
Of course, I know what you'll say --- it wasn't just LVM, but an unfortunate combination of LVM, limitations of grub, bad defaults and a lousy upgrade mechanism. And yes, you'd be right, I agree. But the bottom line was that people with LVM couldn't upgrade (without bending over backwards), while people without LVM didn't even notice that there was a problem. And since hatred is an irrational thing, you need not look any further than that. ;-)
Best, :-) Marko
On 6/23/2015 11:23 AM, Mauricio Tavares wrote:
AIX does use LVM a lot. The main difference is that its filesystem allows live shrinking. It's kinda nice to dynamically size a partition depending on needs, as opposed to the so-often-suggested approach of formatting the entire drive as one single partition. Symlinking is great until whatever the destination is does not mount. I myself use LVM volumes as disks for my VM guests, which XenServer does too (not my fault, I promise!). It is faster than an image file.
While it has the same concepts, physical volumes, volume groups, logical volumes, the LVM in AIX shares only the initials with Linux. I've heard that Linux's LVM was based on HP-UX's design.
In AIX, the LVM is tightly integrated with file system management, so you issue the command to grow a file system, and it automatically grows the underlying logical volume. The OS itself can automatically grow file systems when it's installing software. Also, in AIX, the volume manager is the RAID manager: you say 'copies = 2' as an attribute of an LV, and data is mirrored.
On Tue, 23 Jun 2015, John R Pierce wrote:
While it has the same concepts, physical volumes, volume groups, logical volumes, the LVM in AIX shares only the initials with Linux. I've heard that Linux's LVM was based on HP-UX's design.
Sure, and IRIX had a similar concept, although my experiences with that were slightly less good than with LVM on Linux.
In AIX, the LVM is tightly integrated with file system management, so you issue the command to grow a file system, and it automatically grows the underlying logical volume. The OS itself can automatically grow file systems when it's installing software. Also, in AIX, the volume manager is the RAID manager: you say 'copies = 2' as an attribute of an LV, and data is mirrored.
Without knowing the details, this is possibly just semantics. With lvresize, you can resize the LV and the filesystem in one go. With lvcreate --type raid1 you can specify that a given LV is RAID1 mirrored.
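For example (VG name and sizes assumed):

  lvresize --resizefs -L +10G vg00/home           # grow the LV and its filesystem in one go
  lvcreate --type raid1 -m 1 -L 50G -n data vg00  # a mirrored LV, no separate md layer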
jh
On Tue, 23 Jun 2015 14:23:52 -0400 Mauricio Tavares raubvogel@gmail.com wrote:
On Tue, Jun 23, 2015 at 1:54 PM, Marko Vojinovic vvmarko@gmail.com wrote:
On Tue, 23 Jun 2015 11:15:30 -0500 Jason Warr jason@warr.net wrote:
I'm curious what has made some people hate LVM so much.
(3) It's being pushed as default on everyday ordinary users, who have absolutely no need for it.
That is not lvm's fault, but the distro's decision.
Agreed, but remember that hatred is not a rational thing. When one sees LVM being pushed onto them by their favorite distro, they are not going to blame the distro (because it's their favorite distro, you know...), but rather the LVM itself. Psychology is a curious thing. ;-)
Also, related to (3), there was that famous Fedora upgrade fiasco a few Fedora releases back. It went like this:
Fedora != lvm unless I have been lied to all these years.
That Fedora stunt was just one real-world example of how things can get drastically wrong, and for a sizable number of people. I wasn't criticizing LVM, I was answering why some people hate it. :-)
As far as an ordinary noob user is concerned, this is how it goes. The things that participated in the problem were:
- upgrade software,
- boot partition,
- grub bootloader,
- LVM.
A typical noob user knows they need the first three components for day-to-day work, and that they don't need the fourth. Also, people who didn't have the fourth component didn't have the problem. Guess which of the four will catch the blame? Moreover, the fourth component failed to help with the problem, despite it being there precisely for partition resizing. There's nothing more to discuss, it's clear as day... :-D
Remember, I'm not justifying this "reasoning", just reporting what I've seen happen out in the wild, and why some people hate LVM. ;-)
Best, :-) Marko
Marko Vojinovic wrote:
On Tue, 23 Jun 2015 14:23:52 -0400 Mauricio Tavares raubvogel@gmail.com wrote:
On Tue, Jun 23, 2015 at 1:54 PM, Marko Vojinovic vvmarko@gmail.com wrote:
On Tue, 23 Jun 2015 11:15:30 -0500 Jason Warr jason@warr.net wrote:
I'm curious what has made some people hate LVM so much.
(3) It's being pushed as default on everyday ordinary users, who have absolutely no need for it.
That is not lvm's fault, but the distro's decision.
Agreed, but remember that hatred is not a rational thing. When one sees
<snip> Hold on thar, pardner. I don't "hate" LVM, but I don't care for it. And in most cases, or at least in my own and that of the person who is vehemently against it, it's based on personal experience. How is that "not a rational thing"?
For that matter, haven't you ever gotten gunshy when something that's billed as the LATESTGREATESTTHINGSINCESLICEDBREAD is buggy, and not ready for prime time? Certainly 10-12 years ago, that's how I felt about python, where literally every sub-release broke what was running. Is it irrational to be unappreciative of it? (We'll ignore my unhappiness at the whole concept of whitespace as a syntax element.)
Or then there's systemd....
mark
On 6/23/2015 3:31 PM, m.roth@5-cent.us wrote:
Marko Vojinovic wrote:
On Tue, 23 Jun 2015 14:23:52 -0400 Mauricio Tavares raubvogel@gmail.com wrote:
On Tue, Jun 23, 2015 at 1:54 PM, Marko Vojinovic vvmarko@gmail.com wrote:
On Tue, 23 Jun 2015 11:15:30 -0500 Jason Warr jason@warr.net wrote:
I'm curious what has made some people hate LVM so much.
(3) It's being pushed as default on everyday ordinary users, who have absolutely no need for it.
That is not lvm's fault, but the distro's decision.
Agreed, but remember that hatred is not a rational thing. When one sees
<snip> Hold on thar, pardner. I don't "hate" LVM, but I don't care for it. And in most cases, or at least in my own and that of the person who is vehemently against it, it's based on personal experience. How is that "not a rational thing"?
The only thing that could be irrational about it is if you were to say "It does not work for me now so how can it work for anyone, ever?"
I have not seen any of you guys taking that attitude but some do.
Recommending against using LVM and citing reasons based on your experience with it is certainly valid, and basically why I asked the question in the first place. I have not come across any serious blockers and was curious what made it a blocker for some of you.
For that matter, haven't you ever gotten gunshy when something that's billed as the LATESTGREATESTTHINGSINCESLICEDBREAD is buggy, and not ready for prime time? Certainly 10-12 years ago, that's how I felt about python, where literally every sub-release broke what was running. Is it irrational to be unappreciative of it? (We'll ignore my unhappiness at the whole concept of whitespace as a syntax element.)
Or then there's systemd....
mark
On 06/23/2015 10:54 AM, Marko Vojinovic wrote:
(1) I have no valid use case for it. I don't remember the last time I needed to resize partitions (probably back when I was trying to install Windows 95). Disk space is very cheap, and if I really need to have *that* much data on a single partition, another drive and a few intelligently placed symlinks are usually enough. Cases where a symlink cannot do the job are indicative of a bad data structure design, and LVM is often not a solution, but a patch over a deeper problem elsewhere. Though I do admit there are some valid use cases for LVM.
Such as:
1) LVM makes MBR and GPT systems more consistent with each other, reducing the probability of a bug that affects only one.
2) LVM also makes RAID and non-RAID systems more consistent with each other, reducing the probability of a bug that affects only one.
3) MBR has silly limits on the number of partitions that don't affect LVM. Sure, GPT is better, but so long as both are supported, the best solution is the one that works in both cases.
4) There are lots of situations where you might want to expand a disk/filesystem on a server or virtual machine. Desktops might do so less often, but there's no specific reason to put more engineering effort into making the two different. The best solution is the one that works in both cases.
5) Snapshots are the only practical way to get consistent backups, and you should be using them.
6) If you use virtualization, LV-backed VMs are dramatically faster than file-backed VMs.
LVM has virtually zero cost, so there's no practical benefit to not using it.
When btrfs comes along and supports flexible volumes, snapshots, and reliable storage, then it'll make sense to ditch LVM. Until then, LVM shouldn't even be a question; the answer is yes.
The point of view that LVM isn't needed when a symlink will do is no more valid than the opposite point of view: that there's no reason to play stupid games with symlinks when you have the ability to manage volumes.
(2) It is fragile. If you have data on top of LVM spread over an array of disks, and one disk dies, the data on the whole array goes away.
That's true of every filesystem that doesn't use RAID or something like it. It's hardly a valid criticism of LVM.
And since hatred is an irrational thing, you need not look any further than that. ;-)
Well, let's not forget that you are the one who said that you despise LVM. As long as you recognize that you aren't rational, I suppose we agree on at least one thing. :)
On Tue, 23 Jun 2015 19:08:24 -0700 Gordon Messmer gordon.messmer@gmail.com wrote:
Such as:
1) LVM makes MBR and GPT systems more consistent with each other, reducing the probability of a bug that affects only one. 2) LVM also makes RAID and non-RAID systems more consistent with each other, reducing the probability of a bug that affects only one.
OTOH, it increases the probability of a bug that affects LVM itself.
But really, these arguments sound like a strawman. It reduces the probability of a bug that affects one of the setups --- I have a hard time imagining a real-world use case where something like that could even be observable, let alone relevant.
3) MBR has silly limits on the number of partitions that don't affect LVM. Sure, GPT is better, but so long as both are supported, the best solution is the one that works in both cases.
That only makes sense if I need a lot of partitions on a system that doesn't support GPT. Sure, that can happen (ever more rarely on modern hardware), but I wouldn't know how common it is. I rarely needed many partitions in my setups.
4) There are lots of situations where you might want to expand a disk/filesystem on a server or virtual machine. Desktops might do so less often, but there's no specific reason to put more engineering effort into making the two different. The best solution is the one that works in both cases.
What do you mean by engineering effort? When I'm setting up a data storage farm, I'll use LVM. When I'm setting up my laptop, I won't. What effort is there? I just see it as an annoyance of having to customize my partition layout on the laptop, during the OS installation (customizing a storage farm setup is pretty mandatory either way, so it doesn't make a big difference).
5) Snapshots are the only practical way to get consistent backups, and you should be using them.
That depends on what kind of data you're backing up. If you're backing up the whole filesystem, then I agree. But if you are backing up only certain critical data, I'd say that a targeted rsync can be waaay more efficient.
6) If you use virtualization, LV-backed VMs are dramatically faster than file-backed VMs.
I asked for an explanation of this in the other e-mail. Let's keep it there.
LVM has virtually zero cost, so there's no practical benefit to not using it.
If you need it. If you don't need it, there is no practical benefit of having it, either. It's just another potential point of failure, waiting to happen.
The point of view that LVM isn't needed when a symlink will do is no more valid than the opposite point of view: that there's no reason to play stupid games with symlinks when you have the ability to manage volumes.
I would agree with this, up to the point of fragility/robustness (see below).
(2) It is fragile. If you have data on top of LVM spread over an array of disks, and one disk dies, the data on the whole array goes away.
That's true of every filesystem that doesn't use RAID or something like it. It's hardly a valid criticism of LVM.
If you have a sequence of plain ext4 hard drives with several symlinks, and one drive dies, you can still read the data sitting on the other drives. With LVM, you cannot. It's as simple as that.
In some cases it makes sense to maintain access to a reduced amount of data, despite the fact that a chunk went missing. A webserver, for example, can keep serving the data that's still there on the healthy drives, and survive the failure of the faulty drive without downtime. OTOH, with LVM, once a single drive fails, the server loses access to all data, which then necessitates some downtime while switching to the backup, etc. LVM isn't always an optimal solution.
And since hatred is an irrational thing, you need not look any further than that. ;-)
Well, let's not forget that you are the one who said that you despise LVM. As long as you recognize that you aren't rational, I suppose we agree on at least one thing. :)
Oh, of course! :-) The ability to be irrational is what makes us human. Otherwise life would be very boring. ;-)
Best, :-) Marko
On 06/23/2015 09:00 PM, Marko Vojinovic wrote:
On Tue, 23 Jun 2015 19:08:24 -0700 Gordon Messmer gordon.messmer@gmail.com wrote:
1) LVM makes MBR and GPT systems more consistent with each other, reducing the probability of a bug that affects only one. 2) LVM also makes RAID and non-RAID systems more consistent with each other, reducing the probability of a bug that affects only one.
OTOH, it increases the probability of a bug that affects LVM itself.
No, it doesn't. As Anaconda supports more types of disk and filesystem configuration, its complexity increases, which increases the probability that there are bugs. The number of users is not affected by complexity growth, but the permutations of possible configurations grow. Therefore, the number of users of some configurations is smaller, which means that there are fewer people testing the edge cases, and bugs that affect those edge cases are likely to last longer.
Consistency reduces the probability of bugs.
But really, these arguments sound like a strawman. It reduces the probability of a bug that affects one of the setups --- I have a hard time imagining a real-world use case where something like that could even be observable, let alone relevant.
Follow anaconda development if you need further proof.
3) MBR has silly limits on the number of partitions that don't affect LVM. Sure, GPT is better, but so long as both are supported, the best solution is the one that works in both cases.
That only makes sense if I need a lot of partitions on a system that doesn't support GPT.
You are looking at this from the perspective of you, one user. I am looking at this from the perspective of the developers who manage anaconda, and ultimately have to support all of the users.
That is, you are considering an anecdote, and missing the bigger picture.
LVM is an inexpensive abstraction from the specifics of disk partitions. It is more flexible than working without it. It is consistent across MBR, GPT, and RAID volumes underlying the volume group, which typically means fewer bugs.
4) There are lots of situations where you might want to expand a disk/filesystem on a server or virtual machine. Desktops might do so less often, but there's no specific reason to put more engineering effort into making the two different. The best solution is the one that works in both cases.
What do you mean by engineering effort? When I'm setting up a data storage farm, I'll use LVM. When I'm setting up my laptop, I won't. What effort is there?
The effort on the part of the anaconda and dracut developers who have to test and support various disk configurations. The more consistent systems are, the fewer bugs we hit.
I just see it as an annoyance of having to customize my partition layout on the laptop, during the OS installation (customizing a storage farm setup is pretty mandatory either way, so it doesn't make a big difference).
In my case, I set up all of my systems with kickstart and they all have the same disk configuration except for RAID. Every disk in every system has a 200MB partition, a 1G partition, and then a partition that fills the rest of the disk. On laptops, that's the EFI partition, /boot, and a PV for LVM. On a BIOS system, it's a bios_grub partition, /boot, and a PV for LVM. On a server, the second and third are RAID1 or RAID10 members for sets that are /boot and a PV for LVM. Because they all have exactly the same partition set, when I replace a disk in a server, a script sets up the partitions and adds them to the RAID sets. With less opportunity for human error, my system is more reliable, it can be managed by less experienced members of my team, and management takes less time.
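A sketch of what such a replacement script can look like; the device names, md device names, and the use of GPT/sgdisk are my assumptions, not necessarily what the above setup actually runs:

  #!/bin/bash
  set -e
  GOOD=/dev/sda   # surviving disk with the standard layout
  NEW=/dev/sdb    # freshly inserted replacement
  sgdisk --replicate="$NEW" "$GOOD"         # copy the 200M/1G/rest partition table
  sgdisk --randomize-guids "$NEW"           # give the clone its own GUIDs
  mdadm --manage /dev/md0 --add "${NEW}2"   # rejoin the /boot mirror
  mdadm --manage /dev/md1 --add "${NEW}3"   # rejoin the RAID set backing the LVM PV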
When you manage hundreds of systems, you start to see the value of consistency. And you can't get to the point of managing thousands without it.
5) Snapshots are the only practical way to get consistent backups, and you should be using them.
That depends on what kind of data you're backing up. If you're backing up the whole filesystem, then I agree. But if you are backing up only certain critical data, I'd say that a targeted rsync can be waaay more efficient.
You can use a targeted rsync from data that's been snapshotted, so that's not a valid criticism. And either way, if you aren't taking snapshots, you aren't guaranteed consistent data. If you rsync a file that's actively being written, the destination file may be corrupt. The only guarantee of consistent backups is to quiesce writes, take a snapshot, and back up from the snapshot volume.
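A minimal sketch of that pattern (VG/LV names, sizes, and paths are assumptions):

  lvcreate --snapshot --size 2G --name data-snap /dev/vg00/data  # point-in-time view
  mount -o ro /dev/vg00/data-snap /mnt/snap                      # add ,nouuid for XFS
  rsync -a /mnt/snap/critical/ backuphost:/backups/critical/     # targeted rsync, consistent data
  umount /mnt/snap
  lvremove -f /dev/vg00/data-snap                                # snapshots are temporary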
LVM has virtually zero cost, so there's no practical benefit to not using it.
If you need it. If you don't need it, there is no practical benefit of having it, either. It's just another potential point of failure, waiting to happen.
The *cost* is the same whether you need it or not. The value changes, but the cost is the same. Cost and value are different things. LVM has virtually zero cost, so even if you think you don't need it, you don't lose anything by having it.
Hypothetically, as it is a software component, there could be a bug that affects it. But that ignores the context in which it exists. LVM is the standard, default storage layer for Red Hat and derived systems. It is the most tested configuration. If you want something that's less likely to fail, it's the obvious choice.
(2) It is fragile. If you have data on top of LVM spread over an array of disks, and one disk dies, the data on the whole array goes away.
That's true of every filesystem that doesn't use RAID or something like it. It's hardly a valid criticism of LVM.
If you have a sequence of plain ext4 hard drives with several symlinks, and one drive dies, you can still read the data sitting on the other drives. With LVM, you cannot. It's as simple as that.
In some cases it makes sense to maintain access to a reduced amount of data, despite the fact that a chunk went missing. A webserver, for example, can keep serving the data that's still there on the healthy drives, and survive the failure of the faulty drive without downtime. OTOH, with LVM, once a single drive fails, the server loses access to all data, which then necessitates some downtime while switching to the backup, etc. LVM isn't always an optimal solution.
Unless it's the disk with your root filesystem that fails.
Your argument is bizarre. If you are concerned with reliability, use RAID until you decide that btrfs or zfs are ready.
On 06/23/2015 01:54 PM, Marko Vojinovic wrote:
So the story ended with lots of people in upgrade grief purely because they couldn't resize the separate /boot partition, and it was separate because LVM was present, and LVM was present with the goal of making partition resizing easy! A textbook example of a catch-22, unbelievable!!
The Fedora /boot upsize was something I handled relatively easily with the LVM tools and another drive. I actually used an eSATA drive for this, but an internal or a USB external (which would have impacted system performance) could have been used. Here's what I did to resize my Fedora /boot when the upgrade required it several years back:
1.) Added a second drive that was larger than the drive that /boot was on;
2.) Created a PV on that drive;
3.) Added that PV to the volume group corresponding to the PV on the drive with /boot;
4.) Did a pvmove from the PV on the drive with /boot to the second drive (which took quite a while);
5.) Removed the PV on the drive with /boot from the volume group;
6.) Deleted the partition that previously contained the PV;
7.) Resized the /boot partition and its filesystem (this is doable while online, whereas resizing / online can be loads of fun);
8.) Created a new PV on the drive containing /boot;
9.) Added that PV back to the volume group;
10.) Resized the filesystems on the logical volumes on the volume group to shrink it to fit the new PV's space and resized the LVs accordingly (may require single-user mode to shrink some filesystems);
11.) Did a pvmove from the secondary drive back to the drive with /boot;
12.) Removed the secondary drive's PV from the VG (and removed the drive from the system).
I was able to do this without a reboot step or going into single user mode since I had not allocated all of the space in the VG to LV's, so I was able to skip step 10. While the pvmoves were executing the system was fully up and running, but with degraded performance; no downtime was experienced until the maintenance window to do the version upgrade. Once step 12 completed, I was able to do the upgrade with no issues with /boot size and no loss of data on the volume group on the /boot drive.
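For reference, the LVM half of the steps above boils down to roughly this (device and VG names assumed; the repartitioning in steps 6-8 happens with an ordinary partitioning tool):

  pvcreate /dev/sdb1           # step 2: PV on the temporary drive
  vgextend vg00 /dev/sdb1      # step 3
  pvmove /dev/sda2 /dev/sdb1   # step 4: evacuate the PV next to /boot
  vgreduce vg00 /dev/sda2      # step 5
  pvremove /dev/sda2
  # steps 6-7: delete sda2, grow /boot (sda1) and its filesystem, recreate sda2
  pvcreate /dev/sda2           # step 8
  vgextend vg00 /dev/sda2      # step 9
  pvmove /dev/sdb1 /dev/sda2   # step 11: move everything back
  vgreduce vg00 /dev/sdb1      # step 12
  pvremove /dev/sdb1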
On 06/23/2015 09:15 AM, Jason Warr wrote:
That said, I prefer virtual machines over multiboot environments, and I absolutely despise LVM --- that cursed thing is never getting on my drives. Never again, that is...
I'm curious what has made some people hate LVM so much.
I wondered the same thing, especially in the context of someone who prefers virtual machines. LV-backed VMs have *dramatically* better disk performance than file-backed VMs.
On Tue, 23 Jun 2015 18:42:13 -0700 Gordon Messmer gordon.messmer@gmail.com wrote:
I wondered the same thing, especially in the context of someone who prefers virtual machines. LV-backed VMs have *dramatically* better disk performance than file-backed VMs.
Ok, you made me curious. Just how dramatic can it be? From where I'm sitting, a read/write to a disk takes the amount of time it takes, the hardware has a certain physical speed, regardless of the presence of LVM. What am I missing?
For concreteness, let's say I have a guest machine, with a dedicated physical partition for it, on a single drive. Or, I have the same thing, only the dedicated partition is inside LVM. Why is there a performance difference, and how dramatic is it?
If you convince me, I might just change my opinion about LVM. :-)
Oh, and just please don't tell me that the load can be spread across two or more hard drives, cutting file access times by a factor of two (or more). I can do that with RAID, no need for LVM. Stick to a single-drive scenario, please.
Best, :-) Marko
Once upon a time, Marko Vojinovic vvmarko@gmail.com said:
On Tue, 23 Jun 2015 18:42:13 -0700 Gordon Messmer gordon.messmer@gmail.com wrote:
I wondered the same thing, especially in the context of someone who prefers virtual machines. LV-backed VMs have *dramatically* better disk performance than file-backed VMs.
Ok, you made me curious. Just how dramatic can it be? From where I'm sitting, a read/write to a disk takes the amount of time it takes, the hardware has a certain physical speed, regardless of the presence of LVM. What am I missing?
File-backed images have to go through the filesystem layer. They are not allocated contiguously, so what appear to be sequential reads inside the VM can be widely scattered across the underlying disk.
There are plenty of people that have documented the performance differences, just Google it.
At Wed, 24 Jun 2015 04:10:35 +0100 CentOS mailing list centos@centos.org wrote:
On Tue, 23 Jun 2015 18:42:13 -0700 Gordon Messmer gordon.messmer@gmail.com wrote:
I wondered the same thing, especially in the context of someone who prefers virtual machines. LV-backed VMs have *dramatically* better disk performance than file-backed VMs.
Ok, you made me curious. Just how dramatic can it be? From where I'm sitting, a read/write to a disk takes the amount of time it takes, the hardware has a certain physical speed, regardless of the presence of LVM. What am I missing?
For concreteness, let's say I have a guest machine, with a dedicated physical partition for it, on a single drive. Or, I have the same thing, only the dedicated partition is inside LVM. Why is there a performance difference, and how dramatic is it?
If you convince me, I might just change my opinion about LVM. :-)
Well, if you are comparing direct partitions to LVM there is no real difference. OTOH, if you have more than a few VMs (eg more than the limits imposed by the partitioning system) and/or want to create [temporary] ones 'on-the-fly', using LVM makes that trivially possible. Otherwise, you have to repartition the disk and reboot the host. This puts you 'back' in the old-school reality of a multi-boot system. And partitioning a RAID array is tricky and cumbersome. Resizing physical partitions is also non-trivial. Basically, LVM gives you on-the-fly 'partitioning', without rebooting. It is just not possible (AFAIK) to always update partition tables of a running system (never if the disk is the system disk). Most partitioning tools are not really designed for dynamic re-sizing of partitions and it is a highly error-prone process. Most partitioning tools are designed for dealing with a 'virgin' disk (or a re-virgined disk) with the idea that the partitioning won't be revisited once the O/S has been installed. LVM is all about creating and managing *dynamic* 'partitions' (which is what Logical Volumes effectively are). And no, there is little advantage in using multiple PVs. To get performance gains (and/or redundancy, etc.), one uses real RAID (eg kernel software RAID -- md or hardware RAID), then layers LVM on top of that.
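E.g., carving out a disk for a new VM becomes a one-liner, with no repartitioning and no reboot (names and sizes assumed):

  lvcreate -L 20G -n guest1-disk vg00   # new 'partition', available immediately
  # hand /dev/vg00/guest1-disk to the hypervisor as the guest's block device
  lvremove vg00/guest1-disk             # a temporary VM's disk goes away just as easily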
The 'other' *alternative* is to use virtual container disks (eg image files as disks), which have horrible performance (compared to LVM or hard partitions) and are hard to back up.
The *additional* feature: with LVM you can take a snapshot of the VM's disk and back it up safely. Otherwise you *have* to shut down the VM and remount the VM's disk to back it up, OR you have to install backup software (eg amanda-client or the like) on the VM and back it up over the virtual network. In some cases (many cases!) it is not possible to either shut down the VM and/or install backup software on it (eg the VM is running a 'foreign' or otherwise incompatible O/S).
Oh, and just please don't tell me that the load can be spread across two or more hard drives, cutting file access times by a factor of two (or more). I can do that with RAID, no need for LVM. Stick to a single-drive scenario, please.
Best, :-) Marko
On 06/23/2015 08:10 PM, Marko Vojinovic wrote:
Ok, you made me curious. Just how dramatic can it be? From where I'm sitting, a read/write to a disk takes the amount of time it takes, the hardware has a certain physical speed, regardless of the presence of LVM. What am I missing?
Well, there's best and worst case scenarios. Best case for file-backed VMs is pre-allocated files. It takes up more space, and takes a while to set up initially, but it skips block allocation and probably some fragmentation performance hits later.
Worst case, though, is sparse files. In such a setup, when you write a new file in a guest, the kernel writes the metadata to the journal, then writes the file's data block, then flushes the journal to the filesystem. Every one of those writes goes through the host filesystem layer, often allocating new blocks, which goes through the host's filesystem journal. If each of those three writes hits blocks not previously used, then the host may do three writes for each of them. In that case, one write() in an application in a VM becomes nine disk writes in the VM host.
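The difference is easy to see on the host side (hypothetical paths):

  fallocate -l 20G /vm/prealloc.img   # preallocated: blocks reserved up front
  truncate  -s 20G /vm/sparse.img     # sparse: blocks allocated on first write
  du -h /vm/*.img                     # actual usage: 20G vs. 0
  du -h --apparent-size /vm/*.img     # both claim to be 20G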
The first time I benchmarked a sparse-file-backed guest vs an LVM backed guest, bonnie++ measured block write bandwidth at about 12.5% (1/8) native disk write performance.
Yesterday I moved a bunch of VMs from a file-backed virt server (set up by someone else) to one that used logical volumes. Block write speed on the old server, measured with bonnie++, was about 21.6MB/s in the guest and about 39MB/s on the host. So, less bad than a few years prior, but still bad. (And yes, all of those numbers are bad. It's a 3ware controller, what do you expect?)
LVM-backed guests measure very nearly the same as bare-metal performance. After migration, bonnie++ reports about 180MB/s block write speed.
For concreteness, let's say I have a guest machine, with a dedicated physical partition for it, on a single drive. Or, I have the same thing, only the dedicated partition is inside LVM. Why is there a performance difference, and how dramatic is it?
Well, I said that there's a big performance hit to file-backed guests, not partition-backed guests. You should see exactly the same disk performance on partition-backed guests as LV-backed guests.
However, partitions have other penalties relative to LVM.
1) If you have a system with a single disk, you have to reboot to add partitions for new guests. Linux won't refresh the partition table on the disk it boots from.
2) If you have two disks you can allocate new partitions on the second disk without a reboot. However, your partition has to be contiguous, which may be a problem, especially over time if you allocate VMs of different sizes.
3) If you want redundancy, partitions on top of RAID are more complex than LVM on top of RAID. As far as I know, partitions on top of RAID are subject to the same limitation as in #1.
4) As far as I know, Anaconda can't set up a logical volume that's a redundant type, so LVM on top of RAID is the only practical way to support redundant storage of your host filesystems.
If you use LVM, you don't have to remember any oddball rules. You don't have to reboot to set up new VMs when you have one disk. You don't have to manage partition fragmentation. Every system, whether it's one disk or a RAID set behaves the same way.
Gordon Messmer wrote:
On 06/23/2015 08:10 PM, Marko Vojinovic wrote:
Ok, you made me curious. Just how dramatic can it be? From where I'm sitting, a read/write to a disk takes the amount of time it takes, the hardware has a certain physical speed, regardless of the presence of LVM. What am I missing?
Well, there's best and worst case scenarios. Best case for file-backed VMs is pre-allocated files. It takes up more space, and takes a while to set up initially, but it skips block allocation and probably some fragmentation performance hits later.
Worst case, though, is sparse files. In such a setup, when you write a new file in a guest, the kernel writes the metadata to the journal, then
<MVNCH>
Here's a question: all of the arguments you're giving have to do with VMs. Do you have some for straight-on-the-server, non-VM cases?
mark
On 06/24/2015 11:06 AM, m.roth@5-cent.us wrote:
Here's a question: all of the arguments you're giving have to do with VMs. Do you have some for straight-on-the-server, non-VM cases?
Marko sent two messages and suggested that we keep the VM performance question as a reply to that one. My reply to his other message is not specific to VMs.
Once upon a time, m.roth@5-cent.us m.roth@5-cent.us said:
Here's a question: all of the arguments you're giving have to do with VMs. Do you have some for straight-on-the-server, non-VM cases?
I've used LVM on servers with hot-swap drives to migrate to new storage without downtime a number of times. Add new drives to the system, configure RAID (software or hardware), pvcreate, vgextend, pvmove, vgreduce, and pvremove (and maybe an lvextend and resize2fs/xfs_growfs). Never unmounted a filesystem, just some extra disk I/O.
Even in cases where I had to shutdown or reboot a server to get drives added, moving data could take a long downtime, but with LVM I can live-migrate from place to place.
LVM snapshots make it easy to get point-in-time consistent backups, including databases. For example, with MySQL, you can freeze and flush all the databases, snapshot the LV, and release the freeze. MySQL takes a brief pause (few seconds), and then you mount and back up the snapshot for a fully consistent database (only way to do that other than freezing all writes during a mysqldump, which can take a long time for larger DBs). That also avoids the access-time churn (for backup programs that don't know O_NOATIME, like any that use rsync).
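A sketch of that dance (VG/LV names and sizes assumed; the lock only holds while the client session stays open, hence snapshotting from *inside* the session via the mysql client's 'system' command -- test that this works in your client before trusting it):

  mysql -u root -p <<'EOF'
  FLUSH TABLES WITH READ LOCK;
  system lvcreate --snapshot --size 5G --name mysql-snap /dev/vg00/mysql
  UNLOCK TABLES;
  EOF
  mount -o ro /dev/vg00/mysql-snap /mnt/mysql-snap   # then back up the snapshot at leisure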
That's server stuff. On a desktop with a combination of SSD and "spinning rust" drives, LVM can give you transparent SSD caching of "hot" data (rather than you having to put some filesystems on SSD and some on hard drive).
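That would be the dm-cache integration in newer lvm2 (it landed around the RHEL/CentOS 7.1 timeframe); roughly, with device and VG names assumed:

  pvcreate /dev/sdb1 && vgextend vg00 /dev/sdb1            # sdb1 = the SSD
  lvcreate --type cache-pool -L 100G -n fast vg00 /dev/sdb1
  lvconvert --type cache --cachepool vg00/fast vg00/home   # 'hot' blocks migrate to the SSD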
Now, if btrfs ever gets all the kinks worked out (and has a stable "fsck" for the corner cases), it integrates volume management into the filesystem, which makes some of the management easier. I used AdvFS on DEC/Compaq/HP Tru64 Unix, which had some of that, and it made some of this easier/faster/smoother. Btrfs may eventually obsolete a lot of uses of LVM, but that's down the road.
On 6/24/2015 2:06 PM, Chris Adams wrote:
Once upon a time, m.roth@5-cent.us m.roth@5-cent.us said:
Here's a question: all of the arguments you're giving have to do with VMs. Do you have some for straight-on-the-server, non-VM cases?
I've used LVM on servers with hot-swap drives to migrate to new storage without downtime a number of times. Add new drives to the system, configure RAID (software or hardware), pvcreate, vgextend, pvmove, vgreduce, and pvremove (and maybe an lvextend and resize2fs/xfs_growfs). Never unmounted a filesystem, just some extra disk I/O.
Even in cases where I had to shutdown or reboot a server to get drives added, moving data could take a long downtime, but with LVM I can live-migrate from place to place.
This is one of my primary use cases, and a real big time saver. I do this a lot when migrating Oracle DB LUNs to new, larger allocations. It works great whether you are using ASM or any Linux filesystem. It is especially handy when migrating from one SAN frame to another. You can fully migrate with zero downtime if you do even a small amount of planning ahead.
There are just so many time-saving things you can do with it. Sure, if all groups in the chain plan ahead properly there can be very little change needed, but how often does that happen in real life? It is part of my job to plan well enough ahead to know that storage needs grow despite everyone's best intentions to get out of the gate properly. LVM makes growing much easier and more flexible.
On 06/24/2015 12:06 PM, Chris Adams wrote:
LVM snapshots make it easy to get point-in-time consistent backups, including databases. For example, with MySQL, you can freeze and flush all the databases, snapshot the LV, and release the freeze.
Exactly. And I mention this from time to time... I'm working on infrastructure to make that more common and more consistent: https://bitbucket.org/gordonmessmer/dragonsdawn-snapshot
If you're interested in testing or development (or even advocacy), I'd love to have more people contributing.
That also avoids the access-time churn (for backup programs that don't know O_NOATIME, like any that use rsync).
Yes, though rsync-based systems are usually always-incremental, so they won't access files that haven't been modified, and the impact on atime is minimal after the first backup.
That's server stuff. On a desktop with a combination of SSD and "spinning rust" drives, LVM can give you transparent SSD caching of "hot" data (rather than you having to put some filesystems on SSD and some on hard drive).
Interesting. I wasn't aware that LVM had that option. I've been looking at bcache and dm-cache. I'll have to look into that as well.
Now, if btrfs ever gets all the kinks worked out (and has a stable "fsck" for the corner cases), it integrates volume management into the filesystem, which makes some of the management easier.
btrfs and zfs are also more reliable than RAID. If a bit flips in a RAID set, all that can be determined is that the blocks are not consistent. There's no information about which blocks are correct, or how to repair the inconsistency. btrfs and zfs *do* have that information, so they can repair those errors correctly. As much as I like LVM today, I look forward to ditching RAID and LVM in favor of btrfs.
On 06/24/2015 12:35 PM, Gordon Messmer wrote:
Interesting. I wasn't aware that LVM had that option. I've been looking at bcache and dm-cache. I'll have to look into that as well.
heh. LVM cache *is* dm-cache. Don't I feel silly.
On Wed, 24 Jun 2015 14:06:19 -0500 Chris Adams linux@cmadams.net wrote:
Now, if btrfs ever gets all the kinks worked out (and has a stable "fsck" for the corner cases), it integrates volume management into the filesystem, which makes some of the management easier. I used AdvFS on DEC/Compaq/HP Tru64 Unix, which had some of that, and it made some of this easier/faster/smoother. Btrfs may eventually obsolete a lot of uses of LVM, but that's down the road.
https://en.wikipedia.org/wiki/AdvFS AdvFS uses a relatively advanced concept of a storage pool (called a file domain) and of logical file systems (called file sets). A file domain is composed of any number of block devices, which could be partitions, LVM or LSM devices.
I really miss this. BR, Bob
On 6/24/2015 1:06 PM, m.roth@5-cent.us wrote:
Gordon Messmer wrote:
On 06/23/2015 08:10 PM, Marko Vojinovic wrote:
Ok, you made me curious. Just how dramatic can it be? From where I'm sitting, a read/write to a disk takes the amount of time it takes, the hardware has a certain physical speed, regardless of the presence of LVM. What am I missing?
Well, there's best and worst case scenarios. Best case for file-backed VMs is pre-allocated files. It takes up more space, and takes a while to set up initially, but it skips block allocation and probably some fragmentation performance hits later.
Worst case, though, is sparse files. In such a setup, when you write a new file in a guest, the kernel writes the metadata to the journal, then
<MVNCH>
Here's a question: all of the arguments you're giving have to do with VMs. Do you have some for straight-on-the-server, non-VM cases?
mark
Is there an easy-to-follow "howto" for normal LVM administration tasks? I get tired of googling every time I have to do something I don't remember how to do regarding LVM, so I usually just don't bother with it at all.
I believe it has some benefit for my use cases, but I've been reluctant to use it, since the last time I had LVM problems I lost everything on the volume and had to restore from backups anyway. I suspect I shot myself in the foot, but I still don't know for sure.
thanks, -chuck
On 6/24/2015 3:11 PM, Chuck Campbell wrote:
Is there an easy-to-follow "howto" for normal LVM administration tasks? <snip>
Gentoo Wiki has a pretty good "cheat sheet" on it:
https://wiki.gentoo.org/wiki/LVM
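The handful of commands below covers most day-to-day work; the VG/LV names are just examples:

  pvcreate /dev/sdb1                  # label a disk/partition for LVM
  vgcreate vg0 /dev/sdb1              # create a volume group on it
  vgextend vg0 /dev/sdc1              # or add it to an existing VG
  lvcreate -L 20G -n data vg0         # carve out a logical volume
  mkfs.ext4 /dev/vg0/data
  lvextend -r -L +10G /dev/vg0/data   # grow the LV and its filesystem in one step
  pvs; vgs; lvs                       # see what you have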
On Thu, Jun 25, 2015 at 10:49:57AM -0500, Jason Warr wrote:
On 6/24/2015 3:11 PM, Chuck Campbell wrote:
Is there an easy-to-follow "howto" for normal LVM administration tasks? <snip>
Gentoo Wiki has a pretty good "cheat sheet" on it:
I have my own page, limited and out of date, but...
On Thu, June 25, 2015 11:59 am, Scott Robbins wrote:
I have my own page, limited and out of date, but...
AFAIK, your page has been up forever. It's how I first learned LVM. (Not that I use LVM much, but whenever I need to do something with LVM, I'm confident I can, using your webpage.)
Thanks a lot!!
Valeri
-- Scott Robbins PGP keyID EB3467D6 ( 1B48 077D 66F6 9DB0 FDC2 A409 FA54 EB34 67D6 ) gpg --keyserver pgp.mit.edu --recv-keys EB3467D6
++++++++++++++++++++++++++++++++++++++++
Valeri Galtsev
Sr System Administrator
Department of Astronomy and Astrophysics
Kavli Institute for Cosmological Physics
University of Chicago
Phone: 773-702-4247
++++++++++++++++++++++++++++++++++++++++
On Thu, Jun 25, 2015 at 12:05:13PM -0500, Valeri Galtsev wrote:
AFAIK, your page has been up forever. It's how I first learned LVM. (Not that I use LVM much, but whenever I need to do something with LVM, I'm confident I can, using your webpage.)
Thanks a lot!!
And thank you for the kind words. It's always good to hear that these things benefit someone.
At Wed, 24 Jun 2015 14:06:30 -0400 CentOS mailing list centos@centos.org wrote:
<snip>
Here's a question: all of the arguments you're giving have to do with VMs. Do you have some for straight-on-the-server, non-VM cases?
In the most *common* case these days, the straight-on-the-server, non-VM machine is the VM host itself. In the vast majority of deployments you have a host running a number of VMs; the VMs are the publicly visible servers and the host is pretty much invisible. The VMs themselves won't be using LVM, but the host server will.
Otherwise...
I recently upgraded to a newer laptop and put a 128G SSD in it. My previous laptop had a 60gig IDE disk. Since I don't have any need for more file space (at this time!), I set the laptop up with LVM. Because of how I do backups, and because of the kinds of things I have on my laptop, I have multiple logical volumes:
newgollum.deepsoft.com% df -h
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/vg_newgollum-lv_root  9.8G  5.7G  3.6G  62% /
tmpfs                             1.9G  8.2M  1.9G   1% /dev/shm
/dev/sda1                         477M   86M  367M  19% /boot
/dev/mapper/vg_newgollum-lv_home  4.8G  4.0G  602M  88% /home
/dev/mapper/vg_newgollum-scratch   30G   10G   18G  36% /scratch
/dev/mapper/vg_newgollum-mp3s     9.8G  5.1G  4.2G  55% /mp3s
I only have about 60gig presently allocated (there is about 60gig 'free'). And yes, this is a laptop with a single physical disk. Some day I might create additional LVs and/or grow the existing LVs. I *might* even install a VM or two on this laptop.
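Growing one later is a one-liner while the filesystem stays mounted (assuming the free extents are there); the -r flag resizes the filesystem along with the LV:

  lvextend -r -L +5G /dev/vg_newgollum/lv_home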
My desktop machine is also a host to a number of VMs (mostly used as build environments for different versions/flavors of Linux). Here LVM is pretty much a requirement, especially since its disks are RAIDed.
I also manage a server for the local public library. The host runs CentOS 6 on the bare metal and provides DHCP, DNS, firewall, and IP routing. The library's workstations (for staff and patrons) are diskless and boot using tftp, but they actually run Ubuntu 14.04 (since it is more 'user friendly'), so I have an Ubuntu 14.04 (server) VM providing tftp boot for Ubuntu's kernel and NFS for Ubuntu's root and /usr file systems. (The CentOS host provides the /home file system.) And just as an extra 'benefit' (?) I have a VM running a 32-bit version of MS-Windows 8 (needed to talk to the library's heating system). This is a basic server, but it uses virtualization for selected services.

Except for 'appliance' servers, I see it becoming more and more common that pure 'bare metal' servers are the exception rather than the rule. For all sorts of reasons (including security), servers will commonly use virtualization for many purposes. And LVM makes it really easy to deal with disk space for VMs.
On Wed, 24 Jun 2015 10:40:59 -0700 Gordon Messmer gordon.messmer@gmail.com wrote:
On 06/23/2015 08:10 PM, Marko Vojinovic wrote:
For concreteness, let's say I have a guest machine, with a dedicated physical partition for it, on a single drive. Or, I have the same thing, only the dedicated partition is inside LVM. Why is there a performance difference, and how dramatic is it?
Well, I said that there's a big performance hit for file-backed guests, not partition-backed guests. You should see exactly the same disk performance on partition-backed guests as on LV-backed guests.
Oh, I see, I missed the detail about the guest being file-backed when I read your previous reply. Of course, I'm fully familiar with the drawbacks of file-backed virtual drives as opposed to physical (or LVM) partitions. I was (mistakenly) under the impression that you were talking about the performance difference between a bare partition and an LVM partition that the guest lives on.
However, partitions have other penalties relative to LVM.
Ok, so basically what you're saying is that in the use case where one is spinning up VMs on a daily basis, LVM is more flexible than dedicating a hardware partition to each new VM. I can understand that. Although I'd guess that if one is spinning up VMs on a daily basis, their performance probably isn't an issue, so a file-backed VM would do the job... It depends on what you use them for, in the end.

It's true I never came across such a scenario. In my experience so far, spinning up a new VM is a rare event, one which includes planning, designing, estimating resource usage, etc. And then, once the VM is put in place, it is intended to work long-term (usually until its OS reaches EOL or the hardware breaks).

But I get your point: with LVM you have the additional flexibility to spin up test VMs basically every day if you need to, while keeping the performance of partition-backed virtual drives.

Ok, you have me convinced! :-) Next time I get my hands on a new hard drive, I'll put LVM on it and see if it helps me manage VMs more efficiently. Doing this on a single drive doesn't risk losing more than one drive's worth of data if it fails, so I'll play with it a little more in the context of VMs, and we'll see if it improves my workflow.
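The per-guest workflow looks short enough to script, something like the following (VG and guest names invented):

  lvcreate -L 20G -n guest1 vg0
  virt-install --name guest1 --ram 2048 --vcpus 2 \
      --disk path=/dev/vg0/guest1 \
      --location http://mirror.centos.org/centos/7/os/x86_64/
  # ...test...
  lvremove -f /dev/vg0/guest1     # and it's gone when the test is done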
Maybe I'll have a change of heart over LVM after all. ;-)
Best, :-) Marko
On Tue, 2015-06-23 at 11:15 -0500, Jason Warr wrote:
On 6/23/2015 10:33 AM, Marko Vojinovic wrote:
Inside / (which is mostly always ext4), 100% of the time. :-) That said, I prefer virtual machines over multiboot environments, and I absolutely despise LVM --- that cursed thing is never getting on my drives. Never again, that is...
I'm curious what has made some people hate LVM so much.
Having to read the documentation? That has always been my assumption - people want to do something without being bothered with understanding what they are doing.
I have been using it for years on ...
Yep. Use it on every server, no exceptions, never had issues I did not cause myself - and moving storage around, adding storage, all on running servers... never a problem.
FWIW, we don't use a separate /boot partition and haven't had any issues (we do set up / as its own partition, only 5GB large). We have over 150 systems that have been chugging along this way for years, across many versions of CentOS.
We only use ext3/4 filesystems, though. And no volume manager.
On Tue, Jun 23, 2015 at 10:42 AM, m.roth@5-cent.us wrote:
Timothy Murphy wrote:
Do most people today have /boot on a separate partition, or do they (you) have it on the / partition ?
Separate partition, 100% of the time.
Phelps, Matthew wrote:
FWIW, we don't use a separate /boot partition and haven't had any issues (we do set up / as its own partition, only 5GB large). We have over 150 systems that have been chugging along this way for years, across many versions of CentOS.
It was always recommended to have /boot as a separate partition. Note that on UEFI systems you MUST have the EFI system partition separate, and it has to be mounted at /boot/efi.... What a pain.
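For reference, the stock fstab entry for it looks something like this (UUID invented):

  UUID=1234-ABCD  /boot/efi  vfat  umask=0077,shortname=winnt  0 2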
We only use ext3/4 filesystems, though. And no volume manager.
We've mostly moved to ext4, and we're moving, at least for drives > 2TB, to upstream's default of xfs.
I use only / for VMs, which tend to be single-partition and swap-less.
My "real" servers and workstations/laptops will always have a /boot, in case I decide / needs to sit on top of LVM, LUKS, and so on, which requires booting the kernel and then doing the fancier stuff from the initrd.
HTH Lucian
-- Sent from the Delta quadrant using Borg technology!
Nux! www.nux.ro
----- Original Message -----
From: "Timothy Murphy" gayleard@eircom.net To: centos@centos.org Sent: Tuesday, 23 June, 2015 13:49:08 Subject: [CentOS] /boot on a separate partition?
Do most people today have /boot on a separate partition, or do they (you) have it on the / partition ?
-- Timothy Murphy gayleard /at/ eircom.net School of Mathematics, Trinity College, Dublin