[CentOS] RAID help

Wed Dec 15 14:49:12 UTC 2010
Ross Walker <rswwalker at gmail.com>

On Dec 14, 2010, at 11:14 PM, Les Mikesell <lesmikesell at gmail.com> wrote:

> On 12/14/10 9:41 PM, Ross Walker wrote:
>> On Dec 14, 2010, at 7:01 PM, Les Mikesell<lesmikesell at gmail.com>  wrote:
>> 
>>> On 12/14/2010 5:14 PM, Markus Falb wrote:
>>>> 
>>>>> 
>>>>> But this only helps if you don't know where you will need to grow.  If
>>>>> you know it is going to be under /var, just give it all the space you
>>>>> have in the first place and avoid the overhead of lvm.
>>>> 
>>>> To quote Jason, the OP: "what should my SWAP space be"?
>>>> How should I know? LVM to the rescue.
>>> 
>>> I've never seen a machine that had pushed 2 gigs into swap recover (i.e.
>>> whatever was consuming the memory did it faster than jobs could complete
>>> and release any).  Increasing performance might have saved them but not
>>> adding more swap.
>>> 
>>>> lvm also helps if you want to have additional partitions. Maybe one day
>>>> you recognise that a separate partition for /var/log/httpd would be a
>>>> good thing.
>>>> 
>>>> You are talking about the performance overhead? Not sure about that. I
>>>> think the flexibility you gain makes it at least worth thinking about.
>>>> That said, I would be interested in hearing about the disadvantages of LVM.
>>> 
>>> It really depends on the purpose of the machine.  If it has to be a high
>>> performance server, I wouldn't want any extra overhead and I certainly
>>> wouldn't want bits and pieces of a partition to be spread into chunks
>>> far apart on the disk.  It would be even better to put the busy content
>>> on separate drives to avoid seeks as much as possible.
>> 
>> LVM overhead is negligible. It is basically a kernel mapping of virtual memory space into 4MB+ extents across drives.
>> 
>> It basically has the same overhead as Linux's virtual memory subsystem.
> 
> Maybe, if memory access time were measured in many milliseconds to move from
> chunk to chunk...

The LVM layer that maps LBAs to LV offsets lives entirely in memory. When an LV is first allocated, its extents are contiguous; only after it is grown does it become fragmented, and even then the fragments are large (4GB here, 4GB there), which should minimize the seek time factor (especially on busy systems).
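If you want to see that layout for yourself, lvs can print the segment list per LV (vg0 here is just a placeholder VG name):

    # One row per contiguous segment: where it starts in the LV,
    # how many physical extents it spans, and which PV backs it
    lvs --segments -o lv_name,seg_start_pe,seg_size_pe,devices vg0

A freshly created LV shows a single segment; one that has been extended a few times shows a handful of big ones.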

For VGs containing multiple PVs, you can stripe LVs across them to get a multiple of the single-disk throughput.
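For example, assuming two spare disks (/dev/sdb and /dev/sdc are placeholders), a striped LV looks roughly like this:

    # Label the disks as PVs and put them in one VG
    pvcreate /dev/sdb /dev/sdc
    vgcreate vg0 /dev/sdb /dev/sdc

    # -i 2 stripes across both PVs, -I 64 uses a 64KB stripe size
    lvcreate -i 2 -I 64 -L 100G -n fastlv vg0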

The "overhead" that people talk about is the overhead of the memory lookup going from virtual memory LBA to physical disk(s) PBA, which is negligible.

Of course snapshots have overhead, but that is the cost of the snapshot copy-on-write mechanism, not of the LVM mapping by itself.
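For reference, a snapshot sketch (names again made up); the first write to any chunk of the origin then pays a copy-on-write penalty:

    # Snapshot of vg0/fastlv with 10G reserved for COW data
    lvcreate -s -L 10G -n fastlv_snap vg0/fastlv

    # The Data% column shows how full the COW area is getting
    lvs vg0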

-Ross