Hi All,
I have a new system with 2 Seagate 1TB SATA Enterprise level drives in it.
I want to RAID1 (mirror) these drives.
This machine will be a web server in my apartment hosting an HTML video fan site I am creating: Apache, MySQL, PHP, etc. The site will easily be 300+ GB with all the versions of each video. The MySQL database won't be huge, but it will grow as data for each video is added (e.g. location on the server, keyframe name, etc.).
I am a bit confused by: http://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-raid-config.html
So if I simplify, I must:
1. Create a software RAID partition on each drive
2. Create a RAID 1 out of that partition and use a mount point of /boot
3. Create other mount points I might want, i.e. swap, /home, etc.
4. Create RAID 1 out of these partitions
5. Rinse and repeat this for each mount point I want
A few questions:
1. This system supports 16 GB of RAM. I have 9 GB in it, but I will max it out over the next few months as I find great deals on RAM. What should my swap space be? I recall from a long while ago that swap should match physical RAM.
2. Any reason I can't just create a single mount point taking up the entire drive and RAID1 the entire thing? Can anyone recommend some ideal mount points and sizes?
3. What should I account for if my /var/www/html will be very large?
Best, -Jason
Hey, Jason,
Jason T. Slack-Moehrle wrote:
I have a new system with 2 Seagate 1TB SATA Enterprise level drives in it.
I want to RAID1 (mirror) these drives.
<snip>
So if I simplify, I must:
- Create a software raid partition on each drive
- Create a RAID 1 out of that partition and use a mount point of /boot
Only if you want to mirror the boot partition.
- Create other mount points I might want i.e swap, /home, etc
- Create RAID1 out of these partitions
Only if you want each directory RAIDed. DO NOT mirror swap. Bad idea. <snip>
A few questions:
- This system supports 16 GB of RAM. I have 9 GB in it, but I will max it out over the next few months as I find great deals on RAM. What should my swap space be? I recall from a long while ago that swap should match physical RAM.
Nope. Received Wisdom said 2-2.5 times RAM. However, in these days of insanely huge amounts of RAM, it's not really important. At work, I just make swap 2G for everything (and trust me, we've got servers that make your memory look piddly).
- Any reason I can't just create a single mount point taking up the
entire drive and RAID1 the entire thing? Can anyone recommend some ideal mount points and sizes?
Nope, no reason.
- What should I account for if my /var/www/html will be very large?
My manager here doesn't like LVM; but if it were me, I'd make that /var/www an LVM virtual partition. That way, you can always add another drive and throw more space into it.
mark
On Tue, Dec 14, 2010 at 1:49 PM, m.roth@5-cent.us wrote:
Jason T. Slack-Moehrle wrote:
Only if you want to mirror the boot partition.
- Create other mount points I might want i.e swap, /home, etc
- Create RAID1 out of these partitions
Only if you want each directory RAIDed. DO NOT mirror swap. Bad idea.
You want to mirror swap. If a drive fails your swap immediately goes offline. If an application had memory in swap it is now lost.
A few questions:
- This system supports 16 GB of RAM. I have 9 GB in it, but I will max it out over the next few months as I find great deals on RAM. What should my swap space be? I recall from a long while ago that swap should match physical RAM.
Nope. Received Wisdom said 2-2.5 times RAM. However, in these days of insanely huge amounts of RAM, it's not really important. At work, I just make swap 2G for everything (and trust me, we've got servers that make your memory look piddly).
I do the same, 1-2GB for swap. The servers hardly ever touch swap as they have enough memory.
- Any reason I can't just create a single mount point taking up the
entire drive and RAID1 the entire thing? Can anyone recommend some ideal mount points and sizes?
Nope, no reason.
- What should I account for if my /var/www/html will be very large?
My manager here doesn't like LVM; but if it were me, I'd make that /var/www an LVM virtual partition. That way, you can always add another drive and throw more space into it.
I only use LVM if I need the features it offers. Otherwise it is just extra overhead.
Ryan
Ryan Wagoner wrote:
On Tue, Dec 14, 2010 at 1:49 PM, m.roth@5-cent.us wrote:
Jason T. Slack-Moehrle wrote:
<snip>
- Create other mount points I might want i.e swap, /home, etc
- Create RAID1 out of these partitions
Only if you want each directory RAIDed. DO NOT mirror swap. Bad idea.
You want to mirror swap. If a drive fails your swap immediately goes offline. If an application had memory in swap it is now lost.
<snip> Mmmm... but if a drive goes down, then swap could quite easily be in an undefined state, part-way through the mirroring.
mark
On Tue, Dec 14, 2010 at 2:11 PM, m.roth@5-cent.us wrote:
Ryan Wagoner wrote:
On Tue, Dec 14, 2010 at 1:49 PM, m.roth@5-cent.us wrote:
Jason T. Slack-Moehrle wrote:
<snip>
3. Create other mount points I might want i.e swap, /home, etc
4. Create RAID1 out of these partitions
Only if you want each directory RAIDed. DO NOT mirror swap. Bad idea.
You want to mirror swap. If a drive fails your swap immediately goes offline. If an application had memory in swap it is now lost.
<snip>
Mmmm... but if a drive goes down, then swap could quite easily be in an undefined state, part-way through the mirroring.
How is it in an undefined state? The mirror is created with mdadm, then mkswap is run. At this point when data is written mdadm writes the data to both drives. Even if a drive failed when the mirror was initially syncing, both drives would have the same data for any writes that occurred. Sure the unused space could be out of sync, but that doesn't matter.
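i.e. something like this (partition names are just an example; adjust to your layout):

  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
  mkswap /dev/md2       # format the mirror as swap
  swapon /dev/md2       # enable it; add it to /etc/fstab so it comes back at boot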
Ryan
Hi Mark,
Thanks for the reply.
- Create a RAID 1 out of that partition and use a mount point of /boot
Only if you want to mirror the boot partition.
Doesn't one want to mirror that partition?
- Create other mount points I might want i.e swap, /home, etc
- Create RAID1 out of these partitions
Only if you want each directory RAIDed. DO NOT mirror swap. Bad idea.
<snip>
Right, I get that, but what is fuzzy is: if you, say, have a drive with a few partitions that you don't mirror and a few that you do, doesn't the drive you are mirroring to have unused space equal to the size of the partitions you are not mirroring?
A few questions:
- This system supports 16 GB of RAM. I have 9 GB in it, but I will max it out over the next few months as I find great deals on RAM. What should my swap space be? I recall from a long while ago that swap should match physical RAM.
Nope. Received Wisdom said 2-2.5 times RAM. However, in these days of insanely huge amounts of RAM, it's not really important. At work, I just make swap 2G for everything (and trust me, we've got servers that make your memory look piddly).
Thanks.
My manager here doesn't like LVM; but if it were me, I'd make that /var/www an LVM virtual partition. That way, you can always add another drive and throw more space into it.
I am not as familiar with LVM as I should be, do you have a link to info/tutorial?
-Jason
On Tue, Dec 14, 2010 at 2:08 PM, Jason T. Slack-Moehrle slackmoehrle@me.com wrote:
Hi Mark,
Thanks for the reply.
- Create a RAID 1 out of that partition and use a mount point of /boot
Only if you want to mirror the boot partition.
Doesn't one want to mirror that partition?
Yes, you want to mirror /boot. Otherwise, if one drive fails you won't be able to boot from the other drive. Also make sure you install grub on both drives.
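With the legacy GRUB in CentOS 5 that usually means something like the following from the grub shell (assuming /boot is the first partition on each disk):

  grub
  grub> device (hd0) /dev/sda
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> device (hd0) /dev/sdb
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> quit

Mapping each disk to (hd0) in turn means the MBR written to the second drive will still boot if it ends up as the only drive.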
- Create other mount points I might want i.e swap, /home, etc
- Create RAID1 out of these partitions
Only if you want each directory RAIDed. DO NOT mirror swap. Bad idea.
<snip>
Right, I get that, but what is fuzzy is: if you, say, have a drive with a few partitions that you don't mirror and a few that you do, doesn't the drive you are mirroring to have unused space equal to the size of the partitions you are not mirroring?
I think you are over complicating this. If you just want / in one partition or want to use LVM to split / then do the following.
Partition both drives like this:
sd[ab]1  100M for /boot
sd[ab]2  fill the remaining space
sd[ab]3  2GB for swap
Create three mdadm RAID 1 mirrors:
md0  sd[ab]1  /boot
md1  sd[ab]2  / or LVM PV
md2  sd[ab]3  swap
If you want your data separate you can use LVM to carve out the space. Or make one more mdadm mirror for your data.
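If you were creating those mirrors by hand rather than in the installer, it would look roughly like this (device names follow the layout above; run before formatting anything):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # /boot
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # / or LVM PV
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3   # swap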
I do know that RHEL 6 creates /boot larger than 100M by default, so you might want to increase the size to future-proof your setup.
Ryan
Ryan Wagoner wrote:
On Tue, Dec 14, 2010 at 2:08 PM, Jason T. Slack-Moehrle slackmoehrle@me.com wrote:
<snip>
I do know that RHEL 6 creates /boot larger than 100M by default, so you might want to increase the size to future-proof your setup.
*sigh* I assume that's because Fedora, at least as of 13, needs at *least* 250M, because it dumps something ridiculous there during its preupgrade, and then runs from it.
mark
My manager here doesn't like LVM; but if it were me, I'd make that /var/www an LVM virtual partition. That way, you can always add another drive and throw more space into it.
Ah I found this: http://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/
-Jason
On 14.12.2010 19:49, m.roth@5-cent.us wrote:
Hey, Jason,
Jason T. Slack-Moehrle wrote:
I have a new system with 2 Seagate 1TB SATA Enterprise level drives in it.
I want to RAID1 (mirror) these drives.
<snip>
So if I simplify, I must:
1. Create a software raid partition on each drive
2. Create a RAID 1 out of that partition and use a mount point of /boot
Only if you want to mirror the boot partition.
- Create other mount points I might want i.e swap, /home, etc
- Create RAID1 out of these partitions
Only if you want each directory RAIDed. DO NOT mirror swap. Bad idea.
<snip>
I am surprised. I have always done /boot partitions as RAID 1, and I have always done swap as RAID 1, so I would be interested in any arguments (well, better, facts) against doing so.
On 12/14/10 1:08 PM, Markus Falb wrote:
Only if you want each directory RAIDed. DO NOT mirror swap. Bad idea.
<snip>
I am surprised. I have always done /boot partitions as RAID 1, and I have always done swap as RAID 1, so I would be interested in any arguments (well, better, facts) against doing so.
If you don't mirror swap, then when any swap volume fails your system hard-crashes, negating the entire purpose of RAID.
On Tue, 14 Dec 2010, Jason T. Slack-Moehrle wrote:
Hi All,
I have a new system with 2 Seagate 1TB SATA Enterprise level drives in it.
I want to RAID1 (mirror) these drives. <snip/>
So if I simplify, I must:
Create a software raid partition on each drive
Create a RAID 1 out of that partition and use a mount point of /boot
Create other mount points I might want i.e swap, /home, etc
Create RAID1 out of these partitions
rinse and repeat this for each mount point I want
A few questions:
This system supports 16 GB of RAM. I have 9 GB in it, but I will max it out over the next few months as I find great deals on RAM. What should my swap space be? I recall from a long while ago that swap should match physical RAM.
Any reason I can't just create a single mount point taking up the entire drive and RAID1 the entire thing? Can anyone recommend some ideal mount points and sizes?
What should I account for if my /var/www/html will be very large?
If you have time to experiment a bit, I'd highly suggest encapsulating your RAID design in a kickstart file. You'll need to do some up-front work to get it ready, but once it's done you can re-do your arrangement easily (and repeat as necessary). Here's a sample (that requires two identical drives):
# disk work
bootloader --location=mbr
clearpart --all --initlabel
part raid.01 --size=300 --ondisk=hda --asprimary
part raid.02 --size=300 --ondisk=hdb --asprimary
part raid.11 --size=1024 --ondisk=hda --asprimary
part raid.12 --size=1024 --ondisk=hdb --asprimary
part raid.21 --size=20000 --ondisk=hda --asprimary
part raid.22 --size=20000 --ondisk=hdb --asprimary
part raid.31 --size=1 --ondisk=hda --asprimary --grow
part raid.32 --size=1 --ondisk=hdb --asprimary --grow

# mirrored mountpoints
raid /boot --fstype ext3 --level=RAID1 --device=md0 raid.01 raid.02
raid swap --fstype swap --level=RAID1 --device=md1 raid.11 raid.12
raid / --fstype ext3 --level=RAID1 --device=md2 raid.21 raid.22
raid /srv --fstype ext3 --level=RAID1 --device=md3 raid.31 raid.32
There are many, many ways to alter this setup (e.g., using LVM, using a different set of mount points, not relying on primary partitions).
The reason that /srv gets the lion's share of the disk is that I try to differentiate between files
* created/maintained by running processes (e.g., MySQL)
* installed by RPM (e.g., /var/www/error)
both of which belong in /var, and
* data created elsewhere and "fed" to a process (e.g., your video files or HTML pages)
which goes into /srv.
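In case it helps: a kickstart file like the sample above is normally handed to the installer at its boot prompt, something like the following (the server URL is only a placeholder):

  linux ks=http://192.168.1.10/ks.cfg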
On 14.12.2010 19:37, Jason T. Slack-Moehrle wrote:
Hi All,
I have a new system with 2 Seagate 1TB SATA Enterprise level drives in it.
I want to RAID1 (mirror) these drives.
...
- This system supports 16 GB of RAM. I have 9 GB in it, but I will max it out over the next few months as I find great deals on RAM. What should my swap space be? I recall from a long while ago that swap should match physical RAM.
lvm, see below.
Any reason I can't just create a single mount point taking up the entire drive and RAID1 the entire thing? Can anyone recommend some ideal mount points and sizes?
What should I account for if my /var/www/html will be very large?
If you don't know in advance how your storage is best allocated, use lvm. The space you don't need today stays in the pool, and whether it's /var/www/html or swap or whatever, you can assign it as needed in the future.
Note that it's maybe better not to put /boot into lvm.
I would suggest
/dev/md0 -> /boot
/dev/md1 -> lvm with all other partitions including swap
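Done by hand that might look roughly like this (volume group and LV names/sizes are only examples):

  pvcreate /dev/md1
  vgcreate vg0 /dev/md1
  lvcreate -L 10G  -n root vg0
  lvcreate -L 2G   -n swap vg0
  lvcreate -L 300G -n www  vg0
  mkswap /dev/vg0/swap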
Hi,
If you don't know in advance how your storage is best allocated, use lvm. The space you don't need today stays in the pool, and whether it's /var/www/html or swap or whatever, you can assign it as needed in the future.
Note that it's maybe better not to put /boot into lvm.
I would suggest
/dev/md0 -> /boot
/dev/md1 -> lvm with all other partitions including swap
OK, I have done this. I need to create mount points, and I am not sure how to size them initially.
How does everyone size /?
Since I know my /var/www/html will be large, say 300GB, I can create a mount point for at least that, but with LVM you are saying I can change the size later to increase it?
What other mount points should one have (besides swap)? No users will be storing data on this box.
-Jason
Jason T. Slack-Moehrle wrote:
If you don't know in advance how your storage is best allocated, use lvm. The space you don't need today stays in the pool, and whether it's /var/www/html or swap or whatever, you can assign it as needed in the future.
Note that it's maybe better not to put /boot into lvm.
I agree - I'd never put /boot into lvm. If there's some problem, and I've seen them, it may not have the lvm driver loaded, and then you're hosed. <snip>
OK, I have done this. I need to create mount points, and I am not sure how to size them initially.
How does everyone size /?
Take a clue from the default partition layout that install wants to use. Remember, in / will be *all* of your o/s, and third party software, and updates.... And /var, unless that's a separate partition, with all your logs, which can sometimes get *very* big. If you've got the space, give it 100G or 200G. <snip> I like /usr as a partition, and I used to like /var as one.
mark
On 14.12.2010 22:49, Jason T. Slack-Moehrle wrote:
Hi,
If you don't know in advance how your storage is best allocated, use lvm. The space you don't need today stays in the pool, and whether it's /var/www/html or swap or whatever, you can assign it as needed in the future.
Note that it's maybe better not to put /boot into lvm.
I would suggest
/dev/md0 -> /boot
/dev/md1 -> lvm with all other partitions including swap
OK, I have done this. I need to create mount points, and I am not sure how to size them initially.
My idea was to assign the minimum for now. It could go like this:
lvm volume group -> 1000GB
for the system:
lvm logical volume for /    -> 1GB
lvm logical volume for /var -> 1GB
lvm logical volume for /usr -> 1GB
lvm logical volume for /var/www/html -> 50GB
Now you have assigned 53GB out of the 1000 and the other 947GB remains dynamically assignable from the lvm volume group.
If you need more space in one of the partitions, just grow it out of the pool of 947GB. Logical volumes can be resized online, and many filesystems can be grown online (while mounted) too. If the initial 1GB for some partition proves to be too low, e.g. it has to be increased on every server you have, then adjust the initial size to 2GB or whatever is adequate for you. I am not after numbers at all. My point is: if you don't know how to partition, assign the minimum, allowing for future flexibility.
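Growing later is then just something like this (the LV name is made up; online ext3 growth needs a reasonably recent kernel and e2fsprogs):

  lvextend -L +20G /dev/vg0/www      # add 20GB from the pool to the LV
  resize2fs /dev/vg0/www             # grow the filesystem to fill it, while mounted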
Hi Markus,
My idea was to assign the minimum for now. It could go like this:
<snip />
If you need more space in one of the partitions, just grow it out of the pool of 947GB. Logical volumes can be resized online, and many filesystems can be grown online (while mounted) too. If the initial 1GB for some partition proves to be too low, e.g. it has to be increased on every server you have, then adjust the initial size to 2GB or whatever is adequate for you. I am not after numbers at all. My point is: if you don't know how to partition, assign the minimum, allowing for future flexibility.
Perfect, makes sense now what should be done.
I appreciate the explanation.
-Jason
Markus Falb wrote:
On 14.12.2010 22:49, Jason T. Slack-Moehrle wrote:
<snip>
OK, I have done this. I need to create mount points, and I am not sure how to size them initially.
My idea was to assign the minimum for now. It could go like this:
lvm volume group -> 1000GB
for the system:
lvm logical volume for /    -> 1GB
lvm logical volume for /var -> 1GB
lvm logical volume for /usr -> 1GB
Sorry, but I don't think you can install with that. 10 years ago, think it was, I was giving /, /usr and /var 4G. For most of the time since then, I went to 20G for /usr, then 40G. And I gave /opt 20G. Giving 1G for /var is *asking* for trouble - what happens when you have a hardware error, or an intrusion attempt, and the logs fill the partition?
Oh, and while you're at it, install and run something like fail2ban, and maybe clamav. <snip>
mark
On 12/14/2010 05:21 PM, m.roth@5-cent.us wrote:
Sorry, but I don't think you can install with that. 10 years ago, think it was, I was giving /, /usr and /var 4G. For most of the time since then, I went to 20G for /usr, then 40G. And I gave /opt 20G. Giving 1G for /var is *asking* for trouble - what happens when you have a hardware error, or an intrusion attempt, and the logs fill the partition?
I usually go one step further and split /var and /var/log on separate partitions for the exact reason Mark mentions with logging.
Regards, Max
On 14.12.2010 23:21, m.roth@5-cent.us wrote:
Markus Falb wrote:
On 14.12.2010 22:49, Jason T. Slack-Moehrle wrote:
<snip>
OK, I have done this. I need to create mount points, and I am not sure how to size them initially.
My idea was to assign the minimum for now. It could go like this:
lvm volume group -> 1000GB
for the system:
lvm logical volume for /    -> 1GB
lvm logical volume for /var -> 1GB
lvm logical volume for /usr -> 1GB
Sorry, but I don't think you can install with that. 10 years ago, think it was, I was giving /, /usr and /var 4G. For most of the time since then, I went to 20G for /usr, then 40G. And I gave /opt 20G. Giving 1G for /var is *asking* for trouble - what happens when you have a hardware error, or an intrusion attempt, and the logs fill the partition?
You mentioned logfiles. I find it good practice to give essential processes an explicit partition for logging and another one for data. This way I can get away with relatively small system partitions. And if you do syslog to a remote target, what else remains in the local logfiles?
Actually, when I said I was not after numbers, I meant I would like to avoid the discussion of whether 1 or 2 or 20 GB is adequate. Of course the perfect amount depends on how one is doing things. With the method I was describing, everyone can find out for themselves and adjust as needed.
On 12/14/2010 4:16 PM, Markus Falb wrote:
On 14.12.2010 22:49, Jason T. Slack-Moehrle wrote:
Hi,
If you dont know in advance how your storage is allocated the best way, use lvm. The space you dont need today is in the pool and be it /var/www/html or swap or whatever assign it as needed in the future.
Note that its maybe better to not put /boot into lvm.
I would suggest
/dev/md0 -> /boot /dev/md1 -> lvm with all other partitions including swap
OK, I have done this. I need to create mount points, and I am not sure how to size them initially.
My idea was to assign the minimum for now. It could go like this:
lvm volume group -> 1000GB
for the system:
lvm logical volume for /    -> 1GB
lvm logical volume for /var -> 1GB
lvm logical volume for /usr -> 1GB
lvm logical volume for /var/www/html -> 50GB
Now you have assigned 53GB out of the 1000 and the other 947GB remains dynamically assignable from the lvm volume group.
If you need more space in one of the partitions, just grow it out of the pool of 947GB. Logical volumes can be resized online, and many filesystems can be grown online (while mounted) too. If the initial 1GB for some partition proves to be too low, e.g. it has to be increased on every server you have, then adjust the initial size to 2GB or whatever is adequate for you. I am not after numbers at all. My point is: if you don't know how to partition, assign the minimum, allowing for future flexibility.
But this only helps if you don't know where you will need to grow. If you know it is going to be under /var, just give it all the space you have in the first place and avoid the overhead of lvm.
On 14.12.2010 23:27, Les Mikesell wrote:
On 12/14/2010 4:16 PM, Markus Falb wrote:
On 14.12.2010 22:49, Jason T. Slack-Moehrle wrote:
Hi,
If you don't know in advance how your storage is best allocated, use lvm. The space you don't need today stays in the pool, and whether it's /var/www/html or swap or whatever, you can assign it as needed in the future.
Note that it's maybe better not to put /boot into lvm.
I would suggest
/dev/md0 -> /boot
/dev/md1 -> lvm with all other partitions including swap
OK, I have done this. I need to create mount points, and I am not sure how to size them initially.
My idea was to assign the minimum for now. It could go like this:
lvm volume group -> 1000GB
for the system:
lvm logical volume for /    -> 1GB
lvm logical volume for /var -> 1GB
lvm logical volume for /usr -> 1GB
lvm logical volume for /var/www/html -> 50GB
Now you have assigned 53GB out of the 1000 and the other 947GB remains dynamically assignable from the lvm volume group.
If you need more space in one of the partitions, just grow it out of the pool of 947GB. Logical volumes can be resized online, and many filesystems can be grown online (while mounted) too. If the initial 1GB for some partition proves to be too low, e.g. it has to be increased on every server you have, then adjust the initial size to 2GB or whatever is adequate for you. I am not after numbers at all. My point is: if you don't know how to partition, assign the minimum, allowing for future flexibility.
But this only helps if you don't know where you will need to grow. If you know it is going to be under /var, just give it all the space you have in the first place and avoid the overhead of lvm.
To quote Jason, the OP: "what should my SWAP space be" ? How should I know ? lvm to the rescue.
lvm also helps if you want to have additional partitions. Maybe one day you recognise that a separate partition for /var/log/httpd would be a good thing.
You are talking about the performance overhead? I'm not sure about that. I think the flexibility you gain makes it at least worth thinking about. That said, I would be interested in hearing about the disadvantages of lvm.
On 12/14/2010 5:14 PM, Markus Falb wrote:
But this only helps if you don't know where you will need to grow. If you know it is going to be under /var, just give it all the space you have in the first place and avoid the overhead of lvm.
To quote Jason, the OP: "what should my SWAP space be" ? How should I know ? lvm to the rescue.
I've never seen a machine that had pushed 2 gigs into swap recover (i.e. whatever was consuming the memory did it faster than jobs could complete and release any). Increasing performance might have saved them but not adding more swap.
lvm also helps if you want to have additional partitions. Maybe one day you recognise that a separate partition for /var/log/httpd would be a good thing.
You are talking about the performance overhead ? Not sure about that. I think the flexibility you gain makes it at least worth thinking about it. Said that, I would be interested in hearing about disadvantages of lvm.
It really depends on the purpose of the machine. If it has to be a high performance server, I wouldn't want any extra overhead and I certainly wouldn't want bits and pieces of a partition to be spread into chunks far apart on the disk. It would be even better to put the busy content on separate drives to avoid seeks as much as possible.
On Dec 14, 2010, at 7:01 PM, Les Mikesell lesmikesell@gmail.com wrote:
On 12/14/2010 5:14 PM, Markus Falb wrote:
But this only helps if you don't know where you will need to grow. If you know it is going to be under /var, just give it all the space you have in the first place and avoid the overhead of lvm.
To quote Jason, the OP: "what should my SWAP space be" ? How should I know ? lvm to the rescue.
I've never seen a machine that had pushed 2 gigs into swap recover (i.e. whatever was consuming the memory did it faster than jobs could complete and release any). Increasing performance might have saved them but not adding more swap.
lvm also helps if you want to have additional partitions. Maybe one day you recognise that a separate partition for /var/log/httpd would be a good thing.
You are talking about the performance overhead ? Not sure about that. I think the flexibility you gain makes it at least worth thinking about it. Said that, I would be interested in hearing about disadvantages of lvm.
It really depends on the purpose of the machine. If it has to be a high performance server, I wouldn't want any extra overhead and I certainly wouldn't want bits and pieces of a partition to be spread into chunks far apart on the disk. It would be even better to put the busy content on separate drives to avoid seeks as much as possible.
LVM overhead is negligible. It is basically a kernel mapping of virtual memory space into 4MB+ extents across drives.
It basically has the same overhead as Linux's virtual memory subsystem.
The extent size is configurable and defaults to a sane 4MB (very large VGs use larger extents), and you can implement striping within LVM using configurable stripe sizes. If max sustained throughput is your goal, set the extent size to 16MB or stripe across multiple drives.
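For example (volume group, PV, and LV names are only illustrative):

  vgcreate -s 16M vg0 /dev/md1 /dev/md2      # 16MB physical extents
  lvcreate -i 2 -I 64 -L 200G -n data vg0    # stripe across the 2 PVs with a 64KB stripe size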
-Ross
On 12/14/10 9:41 PM, Ross Walker wrote:
On Dec 14, 2010, at 7:01 PM, Les Mikesell lesmikesell@gmail.com wrote:
On 12/14/2010 5:14 PM, Markus Falb wrote:
But this only helps if you don't know where you will need to grow. If you know it is going to be under /var, just give it all the space you have in the first place and avoid the overhead of lvm.
To quote Jason, the OP: "what should my SWAP space be" ? How should I know ? lvm to the rescue.
I've never seen a machine that had pushed 2 gigs into swap recover (i.e. whatever was consuming the memory did it faster than jobs could complete and release any). Increasing performance might have saved them but not adding more swap.
lvm also helps if you want to have additional partitions. Maybe one day you recognise that a separate partition for /var/log/httpd would be a good thing.
You are talking about the performance overhead ? Not sure about that. I think the flexibility you gain makes it at least worth thinking about it. Said that, I would be interested in hearing about disadvantages of lvm.
It really depends on the purpose of the machine. If it has to be a high performance server, I wouldn't want any extra overhead and I certainly wouldn't want bits and pieces of a partition to be spread into chunks far apart on the disk. It would be even better to put the busy content on separate drives to avoid seeks as much as possible.
LVM overhead is negligible. It is basically a kernel mapping of virtual memory space into 4MB+ extents across drives.
It basically has the same overhead as Linux's virtual memory subsystem.
Maybe, if memory access time was measured in many milliseconds to move chunk to chunk...
On Dec 14, 2010, at 11:14 PM, Les Mikesell lesmikesell@gmail.com wrote:
On 12/14/10 9:41 PM, Ross Walker wrote:
On Dec 14, 2010, at 7:01 PM, Les Mikesell lesmikesell@gmail.com wrote:
On 12/14/2010 5:14 PM, Markus Falb wrote:
But this only helps if you don't know where you will need to grow. If you know it is going to be under /var, just give it all the space you have in the first place and avoid the overhead of lvm.
To quote Jason, the OP: "what should my SWAP space be" ? How should I know ? lvm to the rescue.
I've never seen a machine that had pushed 2 gigs into swap recover (i.e. whatever was consuming the memory did it faster than jobs could complete and release any). Increasing performance might have saved them but not adding more swap.
lvm also helps if you want to have additional partitions. Maybe one day you recognise that a separate partition for /var/log/httpd would be a good thing.
You are talking about the performance overhead ? Not sure about that. I think the flexibility you gain makes it at least worth thinking about it. Said that, I would be interested in hearing about disadvantages of lvm.
It really depends on the purpose of the machine. If it has to be a high performance server, I wouldn't want any extra overhead and I certainly wouldn't want bits and pieces of a partition to be spread into chunks far apart on the disk. It would be even better to put the busy content on separate drives to avoid seeks as much as possible.
LVM overhead is negligible. It is basically a kernel mapping of virtual memory space into 4MB+ extents across drives.
It basically has the same overhead as Linux's virtual memory subsystem.
Maybe, if memory access time was measured in many milliseconds to move chunk to chunk...
The LVM portion that maps LBAs to LV offsets is completely in memory. When an LV is initially allocated its extents are contiguous; only after growing it does it become fragmented, and those fragments will be large, 4GB here, 4GB there, which should minimize the seek time factor (especially on busy systems).
For VGs containing multiple PVs you can stripe LVs across them to get multiple times the throughput.
The "overhead" that people talk about is the overhead of the memory lookup going from virtual memory LBA to physical disk(s) PBA, which is negligible.
Of course if you create snapshots, those have overhead, but not strictly LVM by itself.
-Ross
On 12/15/2010 8:49 AM, Ross Walker wrote:
LVM overhead is negligible. It is basically a kernel mapping of virtual memory space into 4MB+ extents across drives.
It basically has the same overhead as Linux's virtual memory subsystem.
Maybe, if memory access time was measured in many milliseconds to move chunk to chunk...
The LVM portion that maps LBAs to LV offsets is completely in memory. When an LV is initially allocated its extents are contiguous; only after growing it does it become fragmented, and those fragments will be large, 4GB here, 4GB there, which should minimize the seek time factor (especially on busy systems).
For VGs containing multiple PVs you can stripe LVs across them to get multiple times the throughput.
The "overhead" that people talk about is the overhead of the memory lookup going from virtual memory LBA to physical disk(s) PBA, which is negligible.
No, the bigger problem is that (a) the mapping table consumes RAM that would otherwise be available for filesystem buffers - but I suppose in a 64-bit machine you could add more to offset that, and (b) it screws up any notion that the filesystem has about optimizing head motion by keeping certain things nearby when the physical layout is remapped. And if you don't remap into non-contiguous chunks you didn't need it in the first place.
On Dec 15, 2010, at 10:37 AM, Les Mikesell lesmikesell@gmail.com wrote:
On 12/15/2010 8:49 AM, Ross Walker wrote:
LVM overhead is negligible. It is basically a kernel mapping of virtual memory space into 4MB+ extents across drives.
It basically has the same overhead as Linux's virtual memory subsystem.
Maybe, if memory access time was measured in many milliseconds to move chunk to chunk...
The LVM portion that maps LBAs to LV offsets is completely in memory. When an LV is initially allocated its extents are contiguous; only after growing it does it become fragmented, and those fragments will be large, 4GB here, 4GB there, which should minimize the seek time factor (especially on busy systems).
For VGs containing multiple PVs you can stripe LVs across them to get multiple times the throughput.
The "overhead" that people talk about is the overhead of the memory lookup going from virtual memory LBA to physical disk(s) PBA, which is negligible.
No, the bigger problem is that (a) the mapping table consumes RAM that would otherwise be available for filesystem buffers - but I suppose in a 64-bit machine you could add more to offset that, and (b) it screws up any notion that the filesystem has about optimizing head motion by keeping certain things nearby when the physical layout is remapped. And if you don't remap into non-contiguous chunks you didn't need it in the first place.
Like you mentioned, in a 64-bit OS the table is of small consequence, and on a 32-bit OS you're not likely to be handling extremely large VGs, so the table is of little consequence.
As far as the file system optimizing head motion: if the space is contiguous on a single drive or mirror, then that is not an issue; if the space is striped across multiple PVs, then you just give the striping hints to mkfs, like for a hardware RAID (in fact, if the LV is on a hardware RAID PV and not striped in LVM, then use the striping hints for the HW RAID).
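e.g., for a 2-way stripe with 64KB stripes and 4KB filesystem blocks, the hints would be roughly as follows (the LV name is made up, and stripe-width needs a reasonably recent e2fsprogs):

  mkfs.ext3 -E stride=16,stripe-width=32 /dev/vg0/data   # stride = 64/4, stripe-width = stride * 2 data disks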
LVM isn't about re-mapping into non-contiguous regions, it's about volume management, where one can create and expand volumes other than at install time, and without modifying the partition tables, which brings risk.
If you're managing a 3TB volume it's much easier to do so with LVM than with gparted.
Snapshotting is another handy feature for backups, or for creating a recovery point before an upgrade.
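e.g., a throwaway snapshot for a backup run (names and sizes are only illustrative):

  lvcreate -s -L 5G -n www_snap /dev/vg0/www   # snapshot of the www LV with 5GB of change space
  mount -o ro /dev/vg0/www_snap /mnt/snap
  # back up /mnt/snap with rsync/tar, then:
  umount /mnt/snap
  lvremove -f /dev/vg0/www_snap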
-Ross
Greetings,
On Wed, Dec 15, 2010 at 4:44 AM, Markus Falb markus.falb@fasel.at wrote:
On 14.12.2010 23:27, Les Mikesell wrote:
On 12/14/2010 4:16 PM, Markus Falb wrote:
On 14.12.2010 22:49, Jason T. Slack-Moehrle wrote:
Hi,
To quote Jason, the OP: "what should my SWAP space be" ? How should I know ? lvm to the rescue.
Doesn't lvm come late to the scene?
Haven't you seen the formula: recommended swap + (RAM * overcommit) = swap partition size?
From the
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Virtualization_for_Serv...
and
http://kbase.redhat.com/faq/docs/DOC-15252
That should provide some answers...
Regards,
Rajagopal
On Dec 15, 2010, at 4:31 AM, Rajagopal Swaminathan raju.rajsand@gmail.com wrote:
Doesn't lvm come late to the scene?
Not if you are putting swap and root on LVM.
2.6 kernels are capable of swapping to a partition, a file, or a logical volume with the same performance, plus or minus a small overhead (microseconds), but the performance hit for a system in swap really negates any of that. If your system starts to hit swap, make sure you add more memory or tune your applications with a memory ceiling.
-Ross
Greetings,
On Wed, Dec 15, 2010 at 8:35 PM, Ross Walker rswwalker@gmail.com wrote:
On Dec 15, 2010, at 4:31 AM, Rajagopal Swaminathan raju.rajsand@gmail.com wrote:
Not if you are putting swap and root on LVM.
/boot should remain ext3 AFAIK. Or has it changed in RHEL 6?
About to start experimenting with it from tomorrow onwards - ext4, the new storage management tool, and all that. And yes, I have a messed-up LVM auto-setup RHEL6 install on my hands to clean up.
So much to do so little time. sigh.
BTW, has anyone used eth0 and eth1 as bond0, and bond0 as a slave to bridge br0?
RHEL6 seems not to provide bridge networking -- brctl et al. Will CentOS have any idea about it?
I need it for the VM network.
CentOS 5 had it, to my knowledge, for kvm et al., IIRC.
Regards,
1) RAID 1 is good for reading, while writing is an overhead for the disk and may hit performance.
2) Don't create RAID for the swap and / (root) partitions (not advisable).
3) Swap size should be 2X the size of physical memory.
4) Always make separate partitions for paths with high read/write to the disk, e.g. /var/log, /var/www/html, /home, etc.
5) Use LVM for the partitions and swap where your video files reside; make them minimal as of now, learn the growth of each partition, and increase as per requirement.
6) My experience has always shown that if your apps have a memory leak, expect swapping to happen for sure, at which point a reboot of the apps or the system is needed.
Thanks Philix
On Wed, Dec 15, 2010 at 12:07 AM, Jason T. Slack-Moehrle < slackmoehrle@me.com> wrote:
Hi All,
I have a new system with 2 Seagate 1TB SATA Enterprise level drives in it.
I want to RAID1 (mirror) these drives.
This machine will be a web-server in my apartment hosting an HTML video fan site I am creating. Apache, MySQL, PHP etc. This site will easily be 300+ gigs with all the versions of each video, the MySQL won't be huge, but will grow as data for each video is added (i.e location on the server, keyframe name, etc)
I am a bit confused by: http://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-raid-config.html
So if I simplify, I must:
Create a software raid partition on each drive
Create a RAID 1 out of that partition and use a mount point of /boot
Create other mount points I might want i.e swap, /home, etc
Create RAID1 out of these partitions
rinse and repeat this for each mount point I want
A few questions:
- This system support 16gb of RAM. I have 9gb in it, but I will max it out
over the next few months as I find great deals on RAM, what should my SWAP space be? I recall a long while ago that SWAP should match physical RAM.
- Any reason I can't just create a single mount point taking up the entire
drive and RAID1 the entire thing? Can anyone recommend some ideal mount points and sizes?
- What should I account for if my /var/www/html will be very large?
Best, -Jason
On Thu, 16 Dec 2010, Philix T A wrote:
- RAID 1 is good for reading while writing is a overhead for the disk and
may hit the performance
Write overhead is minimal, since you're just writing out the same data twice to two equal performance drives (typically). I'd really not worry about the performance hit.
- Dont create RAID for swap and / root partition (Not Advisable)
No, that's wrong. Why on earth would you not want the root file system on RAID? And please explain the downsides of having swap on RAID1, given that the upside should be fairly obvious.
- Swap Size size should be 2X the size of the Physical memory
Historical advice. I've got a machine with 96Gbytes of RAM, are you really telling me I want 192Gbytes of swap? That's just plain bad advice.
- My experience had always shown if your apps had memory leak , expect
Swapping to happen for sure, where reboot of the apps or the system is needed
Then fix your memory leaks.
jh
On Thu, Dec 16, 2010 at 2:15 AM, Philix T A philixlinux@gmail.com wrote:
- Dont create RAID for swap and / root partition (Not Advisable)
Any rationale for this bad advice?
- Swap Size size should be 2X the size of the Physical memory
For a desktop, maybe.
- My experience had always shown if your apps had memory leak , expect
Swapping to happen for sure, where reboot of the apps or the system is needed
I hope that you only have such apps on your dev boxes!
On 12/16/2010 2:15 AM, Philix T A wrote:
- RAID 1 is good for reading while writing is a overhead for the disk
and may hit the performance
Unless you are doing something (such as video editing) that relies on ultra-fast hard drive access, you will probably never notice the difference. With hardware RAID, the performance hit will be even less.
- Dont create RAID for swap and / root partition (Not Advisable)
Why not? This defeats the purpose of the RAID. You need to mirror all filesystems to prevent data loss in the event of a hard drive crash. You need to mirror swap so that the system can continue running if one hard drive goes.
- Swap Size size should be 2X the size of the Physical memory
Not anymore. These days, I would not allocate more than 16GB for swap. You shouldn't really need any swap. Memory is cheap enough now that if your system is using swap, you should add more memory. Swap usage is a serious performance hit. Some people advocate running without any swap at all.
Philix T A [philixlinux@gmail.com] wrote
2) Dont create RAID for swap and / root partition (Not Advisable)
I would avoid putting / or swap on RAID0 partitions, but RAID1 should not only not be a problem, it should be encouraged as a method of recovering from a spindle failure.
3) Swap Size size should be 2X the size of the Physical memory
That was the other millennium, when it was certain that our programs used more core/RAM than you had in your box. This is now, when you need swap only when you can't schedule your process load to not over-demand RAM, and you can't put more RAM in your box. In that kind of case, a larger box is more cost-effective than swap. I make certain swap is never used. See man 1 vmstat and look at the 'si' and 'so' fields; they should stay zero.
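For example:

  vmstat 5    # watch the si/so columns; anything persistently non-zero means the box is swapping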
4) Always partition which uses high read/write to the disk eg /var/log /var/www/html and /home etc
I'm not certain I understood what you said. My advice on partitioning is to group read-only content (e.g. most of / & /usr) on partitions mounted -o ro, and writable portions (e.g. most of /var) on a separate partition, so the amount of fsck recovery is minimized. The OP asked about making a single partition for everything; this is (approximately) what a Red Hat default install will do. Making a RAID1 of everything is not a bad idea, it's called a backup ;)
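In /etc/fstab that grouping might look roughly like this (device names are placeholders):

  /dev/md2   /usr   ext3   defaults,ro   1 2
  /dev/md3   /var   ext3   defaults      1 2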
6) My experience had always shown if your apps had memory leak , expect Swapping to happen for sure, where reboot of the apps or the system is needed
Buggy software (memory leaks included) should simply be avoided. Where it can't be avoided, set up a crontab entry to kill & restart it every hour (or so). I'm not sure what the OP is asking.
It is my advice to post ONLY in plain text, not rich-text or html, to a mailing list. If it is impossible to post plain-text, some won't see your post and some who have a response won't send one since top-posting is "the best way" to reply to html posts, and top-posting is an abomination to many.