Hi,
I was wondering why/when it is useful to use LVM, and when I should avoid it.
I think the big advantage of LVM is when you modify (resizing, ...) disk and filesystem layouts "a lot".
Are there any real pros or cons for the following situations, regarding e.g. management and speed?
e.g.:
I have a server with a system RAID for which the disk layout will not change; e.g. /var, /usr, /home will not change much in size.
OR
I have some file storage shares (iSCSI RAIDs), up to some TB each, on one big storage device.
Sometimes (e.g. after a server crash) it is useful to remount the storage to a different server.
Should I use LVM on the iSCSI storage volumes?
Any suggestions and comments are welcome. Regards, Götz
----- Original Message -----
| Are there any real pros or cons for following situations regarding
| e.g. management and speed?
The speed at which you can manage your disk environment through the use of LVM makes most of the tradeoffs worthwhile. Of course, YMMV, so you're best to test.
| e.g.:
|
| I do have a server system raid for which the disk layout will not
| change; e.g. /var /usr /home will not change much in size.
This isn't so much the issue. What if *any* partition requirements *do* change in the future? LVM can account for that by allowing you the flexibility to make a change should it be required. Standard partitioning is less flexible in this regard.
| OR
|
| I do have some file storage shares (iscsi raids) up to some TB each
| on one big storage device.
|
| Sometimes (e.g. after a server crash) it is useful to remount the
| storage to a different server.
Standard caveats apply. If the Volume Groups or the Logical Volumes are named the same, moving them to another system with similar VGs or LVs can be problematic. The same goes for file system labels, albeit both are relatively easy to fix in such a scenario.
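A minimal sketch of that fix, assuming an imported VG whose name clashes with a local one and an XFS filesystem on it (the VG names, UUID, and labels below are made up for illustration):

```shell
# Find the UUID of the clashing volume group (e.g. both named "vg0")
vgs -o vg_name,vg_uuid

# Rename the imported VG by its UUID so the name no longer collides
vgrename Zvlifi-Ep3t-e0Ng-U42h-o0ye-KHu1-nl7Ns4 vg0_imported

# Rewrite the XFS label to match (the filesystem must be unmounted)
xfs_admin -L csgrad_old /dev/vg0_imported/csgrad
```

For ext filesystems, e2label does the equivalent relabeling.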
| Should I use LVM on the iscsi storage volumes?
I would find it difficult to find a case where LVM shouldn't be used, because of its flexibility. I tend to use full-disk LVM (no partitions at all) and file system labels for mounting and the like (labels match LVs).
lvcreate -L 20G -n csgrad DATA
mkfs.xfs -L csgrad /dev/DATA/csgrad
/etc/fstab
----------
LABEL=csgrad /exports/csgrad xfs defaults 0 0
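For completeness, the DATA volume group used in the example above would first be created on a whole disk with something like the following (the device name /dev/sdb is an assumption):

```shell
# Use the whole disk as a PV -- no partition table at all
pvcreate /dev/sdb

# Create the volume group that the lvcreate example draws from
vgcreate DATA /dev/sdb
```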
LVM offers additional flexibility too, in that you can migrate PVs from one device to another online. So if you have one iSCSI server that is coming off support and you are replacing it with another, you can use pvmove to move the data from one target to another.
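A minimal sketch of such a migration, assuming the old iSCSI target shows up as /dev/sdb and the new one as /dev/sdc (both device names are hypothetical):

```shell
# Add the new target to the existing volume group
pvcreate /dev/sdc
vgextend DATA /dev/sdc

# Move all extents off the old target -- this runs online,
# with filesystems mounted and in use
pvmove /dev/sdb /dev/sdc

# Retire the old target once it is empty
vgreduce DATA /dev/sdb
pvremove /dev/sdb
```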
Oh! One last case in point: partition alignment. This is very important to the performance of a disk subsystem. With full-disk LVM it's not an issue at all.
On 09/26/2013 09:35 PM, James A. Peltier wrote:
| Oh! One last case in point. Partition Alignment. This is very
| important to the performance of a disk subsystem. With full disk LVM
| it's not an issue at all.
Not having much experience with LVM, I just wondered how this last comment applies. Surely the alignment of partitions has to do with the underlying hardware and how it seeks to and finds the beginning of where it wants to read - the sector. I am curious how LVM negates this hardware constraint.
----- "Rob Kampen" rkampen@kampensonline.com wrote:
From: "Rob Kampen" rkampen@kampensonline.com
To: "CentOS mailing list" centos@centos.org
Sent: Thursday, September 26, 2013 17:11:06 (GMT-0300)
Subject: Re: [CentOS] to lvm or not to lvm - why/when to use lvm
| I am curious how LVM negates this hardware constraint.
Well, I think this is one of the big examples of what we can do with LVM: http://www.greyoak.com/lvmdrive.html
On Thu, Sep 26, 2013 at 4:28 PM, Antonio da Silva Martins Junior asmartins@uem.br wrote:
| Well, I think this is one of the big examples of what we can do with
| LVM: http://www.greyoak.com/lvmdrive.html
This is one of the top reasons that I use LVM on my home builds. I generally build with an SSD as the OS disk and a large SATA drive as my /home. When I need a bigger disk, which happens occasionally, I can either add a disk or move up to a larger one. I tend to just move up to a larger disk, as I prefer a single disk to multiple disks for reliability, reduced noise, and reduced power usage.
On 09/27/2013 11:25 AM, Kwan Lowe wrote:
| This is one of the top reasons that I use LVM on my home builds.
So we can generally say that LVM offers no real drawbacks in terms of flexibility, but it seems like we are mostly talking about homebrew setups.
What about in a high-IOPS situation? Is there any evidence or testing out there showing whether LVM adds overhead that impacts total throughput?
----- Original Message -----
| What about in a high iops situation? Is there any evidence/testing
| out there that might show that there is some overhead of LVM that
| might impact total throughput?
We have a cluster that pounds away at the various hardware in our setups, and LVM shows a performance difference of about 1-5% for the vast majority of our workloads. Many times it's disk or network that is the limiting bottleneck, and for disk it's almost always the RAID card. Other problems have to do with file systems when dealing with hundreds of millions of files in metadata-heavy workloads; I have seen those problems with and without LVM for the same workloads.
Since LVM sits on top of the same block subsystem that drives your bare-metal disks, your MD RAID sets, etc., you would see similar performance there.
Snapshots, on the other hand, can drastically reduce performance, and I would strongly recommend you get rid of them as soon as you can. You don't want to keep them around for long.
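For reference, a classic (non-thin) snapshot is created and dropped along these lines; the snapshot name and size here are just for illustration:

```shell
# Create a 5G copy-on-write snapshot of the csgrad LV
lvcreate -s -L 5G -n csgrad_snap /dev/DATA/csgrad

# ... take a backup from the snapshot ...

# Remove it promptly: every write to the origin pays a CoW penalty
# while the snapshot exists, and a snapshot that fills up is invalidated
lvremove /dev/DATA/csgrad_snap
```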
A lot of work has gone into LVM, and you'll notice when searching that most of the performance complaints are a couple of years old. Be careful what you trust out there.
On 9/27/2013 9:39 AM, Phil Gardner wrote:
| What about in a high iops situation? Is there any evidence/testing
| out there that might show that there is some overhead of LVM that
| might impact total throughput?
I ran quite a lot of disk I/O benchmarks a while back on both JBOD and hardware RAID SAS configurations, with high-performance 15,000 RPM drives as well as 7,200 RPM nearline drives. I found no significant difference between using a direct file system and LVM, with either ext4 or XFS. What differences I saw were down in the statistical noise and not repeatable.
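If you want to repeat this kind of comparison yourself, something like fio, run once against the raw device and once against an LV carved from the same disk, gives comparable numbers (the device and LV paths are assumptions, and both must be scratch devices -- the writes destroy data):

```shell
# Random 4k mixed read/write against the bare scratch device, direct I/O
fio --name=raw --filename=/dev/sdb --direct=1 --ioengine=libaio \
    --rw=randrw --bs=4k --iodepth=32 --runtime=60 --time_based

# Same workload against an LV on the same disk
fio --name=lvm --filename=/dev/DATA/bench --direct=1 --ioengine=libaio \
    --rw=randrw --bs=4k --iodepth=32 --runtime=60 --time_based
```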
Antonio da Silva Martins Junior wrote the following on 9/26/2013 3:28 PM:
| Well, I think this is one of the big examples of what we can do with
| LVM: http://www.greyoak.com/lvmdrive.html
This seems like a great example of how LVM complicates the process of moving to a new disk. Without LVM, one can still simply use gparted to copy the data to the new drive and extend the existing partition and filesystem in a couple of clicks. With CLI tools, that's dd/cp + parted + resize2fs. That is half or fewer of the commands required with LVM.
The benefits of LVM seem to be the snapshot functionality and the (limited) spanning/mirroring RAID logic. If you're not taking advantage of these features, then I, personally, wouldn't recommend the additional obfuscation and complication that LVM adds.
--Blake
----- Original Message -----
| This seems like a great example of how LVM complicates the process of
| moving to a new disk. Without LVM, one can still simply use gparted
| to copy the data to the new drive and extend the existing partition
| and fs in a couple clicks.
So you can do this online without the users noticing, huh? Hmmm. I can certainly do this with LVM ;)
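With LVM, the online version of that grow is a couple of commands, with no unmount and no user-visible downtime (the size and names below are illustrative):

```shell
# Grow the LV by 100G and resize the filesystem in one step
# (-r / --resizefs calls fsadm, which handles ext4 and XFS online)
lvextend -r -L +100G /dev/DATA/csgrad
```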
----- Original Message -----
| Surely the alignment of partitions has got to do with the underlying
| hardware and how it seeks to and finds the beginning of where it
| wants to read - the sector. I am curious how LVM negates this
| hardware constraint.
There are no partitions, so partition alignment is moot. That doesn't mean you don't need to align the file system layout as well, but at least you don't have to account for both partition alignment *and* file system alignment.
BTW: XFS detects this automatically if it can talk directly to the hardware; otherwise you need to specify the su= (stripe unit) and sw= (stripe width) values accordingly.
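For example, on a hardware RAID set with a 64k stripe unit across 8 data disks, the geometry would be passed explicitly like this (the su/sw numbers must match your actual array, and the LV path reuses the earlier example):

```shell
# su = stripe unit per disk, sw = number of data disks in the stripe
mkfs.xfs -L csgrad -d su=64k,sw=8 /dev/DATA/csgrad
```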