I installed CentOS 6.x 64-bit with the minimal ISO and used two disks in a RAID 1 array.
Filesystem      Size  Used Avail Use% Mounted on
/dev/md2         97G  918M   91G   1% /
tmpfs            16G     0   16G   0% /dev/shm
/dev/md1        485M   54M  407M  12% /boot
/dev/md3        3.4T  198M  3.2T   1% /vz
Personalities : [raid1]
md1 : active raid1 sda1[0] sdb1[1]
      511936 blocks super 1.0 [2/2] [UU]

md3 : active raid1 sda4[0] sdb4[1]
      3672901440 blocks super 1.1 [2/2] [UU]
      bitmap: 0/28 pages [0KB], 65536KB chunk

md2 : active raid1 sdb3[1] sda3[0]
      102334336 blocks super 1.1 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md0 : active raid1 sdb2[1] sda2[0]
      131006336 blocks super 1.1 [2/2] [UU]
My question is: if sda fails, will it still boot from sdb? Did the install process write the boot sector to both disks, or just sda? How do I check, and if it's not on sdb, how do I copy it there?
In article CAAOM8FXumoSAgbDe+PzryraRUHcsWOjWJQf-3Mc0TSn4ODRt9w@mail.gmail.com, Matt matt.mailinglists@gmail.com wrote:
I installed CentOS 6.x 64-bit with the minimal ISO and used two disks in a RAID 1 array.
[df and /proc/mdstat output snipped]
My question is: if sda fails, will it still boot from sdb? Did the install process write the boot sector to both disks, or just sda? How do I check, and if it's not on sdb, how do I copy it there?
Tests I did some years ago indicated that the install process does not write grub boot information onto sdb, only sda. This was on Fedora 3 or CentOS 4.
I don't know if it has changed since then, but I always put the following in the %post section of my kickstart files:
# install grub on the second disk too
grub --batch <<EOF
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
quit
EOF
Cheers Tony
On Fri, Jan 24, 2014 at 11:58 AM, Matt matt.mailinglists@gmail.com wrote:
I installed CentOS 6.x 64-bit with the minimal ISO and used two disks in a RAID 1 array.
[df and /proc/mdstat output snipped]
My question is: if sda fails, will it still boot from sdb? Did the install process write the boot sector to both disks, or just sda? How do I check, and if it's not on sdb, how do I copy it there?
I've found that the grub boot loader is only installed on the first disk. When I do use software RAID, I have made a habit of manually installing grub on the other disks (using grub-install). In most cases I dedicate a RAID1 array to the host OS and have a separate array for storage.
You can check to see that a boot loader is present with `file`.
~]# file -s /dev/sda
/dev/sda: x86 boot sector; partition 1: ID=0xfd, active, starthead 1, startsector 63, 224847 sectors; partition 2: ID=0xfd, starthead 0, startsector 224910, 4016250 sectors; partition 3: ID=0xfd, starthead 0, startsector 4241160, 66878595 sectors, code offset 0x48
There are other ways to verify the boot loader is present, but that's the one I remember off the top of my head.
Use grub-install to install grub to the MBR of the other disk.
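For example (a rough sketch; assumes /dev/sdb is the second RAID member, as in the layout above):

~]# file -s /dev/sdb      # compare the output against /dev/sda
~]# grub-install /dev/sdb # write grub to sdb's MBR if it isn't there

If grub-install complains about the device map, adding --recheck is the usual fallback.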
I installed CentOS 6.x 64-bit with the minimal ISO and used two disks in a RAID 1 array.
[df and /proc/mdstat output snipped]
My question is: if sda fails, will it still boot from sdb? Did the install process write the boot sector to both disks, or just sda? How do I check, and if it's not on sdb, how do I copy it there?
Based on input from everyone here I am thinking of an alternate setup. Single small inexpensive 64GB SSD used as /boot, / and swap. Putting /vz on software RAID1 array on the two 4TB drives. I can likely just zip tie the SSD in the 1u case somewhere since I have no more drive bays. Does this seem like a better layout?
On Mon, Jan 27, 2014 at 9:26 AM, Matt matt.mailinglists@gmail.com wrote:
Based on input from everyone here I am thinking of an alternate setup. Single small inexpensive 64GB SSD used as /boot, / and swap. Putting /vz on software RAID1 array on the two 4TB drives. I can likely just zip tie the SSD in the 1u case somewhere since I have no more drive bays. Does this seem like a better layout?
There's a whole different set of issues with drives over 2TB. Not even sure if everything said earlier is correct for them especially regarding autodetect: https://raid.wiki.kernel.org/index.php/RAID_superblock_formats
Booting from something smaller might have advantages.
Les Mikesell wrote:
On Mon, Jan 27, 2014 at 9:26 AM, Matt matt.mailinglists@gmail.com wrote:
Based on input from everyone here I am thinking of an alternate setup. Single small inexpensive 64GB SSD used as /boot, / and swap. Putting /vz on software RAID1 array on the two 4TB drives. I can likely just zip tie the SSD in the 1u case somewhere since I have no more drive bays. Does this seem like a better layout?
There's a whole different set of issues with drives over 2TB. Not even sure if everything said earlier is correct for them especially regarding autodetect: https://raid.wiki.kernel.org/index.php/RAID_superblock_formats
Booting from something smaller might have advantages.
Yeah. A question - I've missed this discussion - you *do* have the drives partitioned GPT, right? MBR is *not* compatible with > 2TB.
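(A quick way to check, for what it's worth:

~]# parted -s /dev/sda print | grep 'Partition Table'

which should report "gpt" rather than "msdos" on the big drives.)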
mark
On 01/27/2014 04:26 PM, Matt wrote:
I installed CentOS 6.x 64-bit with the minimal ISO and used two disks in a RAID 1 array.
[df and /proc/mdstat output snipped]
My question is: if sda fails, will it still boot from sdb? Did the install process write the boot sector to both disks, or just sda? How do I check, and if it's not on sdb, how do I copy it there?
Based on input from everyone here I am thinking of an alternate setup. Single small inexpensive 64GB SSD used as /boot, / and swap. Putting /vz on software RAID1 array on the two 4TB drives. I can likely just zip tie the SSD in the 1u case somewhere since I have no more drive bays. Does this seem like a better layout?
As long as you have a duplicate SSD as backup and regularly back up /boot to that other SSD, it should be OK. Losing the SSD and /boot with new kernels would be a major problem for you; you would need to recreate them.
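For example, something simple and cron-able like this would do (just a sketch; assumes the spare SSD's partition is mounted at /mnt/boot-backup - the path is made up):

~]# rsync -a --delete /boot/ /mnt/boot-backup/
~]# dd if=/dev/sda of=/mnt/boot-backup/mbr.bin bs=512 count=1   # also keep a copy of the boot sector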
Ljubomir Ljubojevic (Love is in the Air) PL Computers Serbia, Europe
Google is the Mother, Google is the Father, and traceroute is your trusty Spiderman... StarOS, Mikrotik and CentOS/RHEL/Linux consultant
Based on input from everyone here I am thinking of an alternate setup. Single small inexpensive 64GB SSD used as /boot, / and swap. Putting /vz on software RAID1 array on the two 4TB drives. I can likely just zip tie the SSD in the 1u case somewhere since I have no more drive bays. Does this seem like a better layout?
As long as you have a duplicate SSD as backup and regularly back up /boot to that other SSD, it should be OK. Losing the SSD and /boot with new kernels would be a major problem for you; you would need to recreate them.
I am thinking I will not have much I/O traffic on the SSD, hopefully extending its lifespan. If it dies I will just need to reinstall CentOS on a replacement SSD. My critical files, which I need backed up, will be in /vz, and I think RAID1 will give me double the read speed on that array. Trying to balance cost/performance/redundancy here.
From: Matt matt.mailinglists@gmail.com
I am thinking I will not have much I/O traffic on the SSD hopefully extending its lifespan. If it dies I will just need to reinstall Centos on a replacement SSD. My critical files will be in /vz that I need backup on and I think RAID1 will give me double read speed on that array. Trying to balance cost/performance/redundancy here.
Not sure if it is still the case but if my memory is correct, some time ago, RH was advising against using mdraid on ssds because of the mdraid surface scans or resyncs. I cannot find the source anymore...
JD
On 01/27/2014 05:45 PM, John Doe wrote:
From: Matt matt.mailinglists@gmail.com
I am thinking I will not have much I/O traffic on the SSD hopefully extending its lifespan. If it dies I will just need to reinstall Centos on a replacement SSD. My critical files will be in /vz that I need backup on and I think RAID1 will give me double read speed on that array. Trying to balance cost/performance/redundancy here.
Not sure if it is still the case but if my memory is correct, some time ago, RH was advising against using mdraid on ssds because of the mdraid surface scans or resyncs. I cannot find the source anymore...
The SSD will NOT have RAID on it; he wrote so. And that can take his server offline until /boot is reinstalled.
Based on input from everyone here I am thinking of an alternate setup. Single small inexpensive 64GB SSD used as /boot, / and swap. Putting /vz on software RAID1 array on the two 4TB drives. I can likely just zip tie the SSD in the 1u case somewhere since I have no more drive bays. Does this seem like a better layout?
As long as you have a duplicate SSD as backup and regularly back up /boot to that other SSD, it should be OK. Losing the SSD and /boot with new kernels would be a major problem for you; you would need to recreate them.
I am thinking I will not have much I/O traffic on the SSD hopefully extending its lifespan. If it dies I will just need to reinstall Centos on a replacement SSD. My critical files will be in /vz that I need backup on and I think RAID1 will give me double read speed on that array. Trying to balance cost/performance/redundancy here.
If I am putting both 4TB drives in a single RAID1 array for /vz would there be any advantage to using LVM on it?
On 01/29/2014 08:15 AM, Matt wrote:
If I am putting both 4TB drives in a single RAID1 array for /vz would there be any advantage to using LVM on it?
My (sometimes unpopular) advice is to set up the partitions on servers into two categories:
1) OS
2) Data
OS partitions don't really grow much. Most of our servers' OS partitions total less than 10 GB of used space after years of 24x7 use. I recommend keeping things *very* *simple* here, avoid LVM. I use simple software RAID1 with bare partitions.
Data partitions, by definition, would be much more flexible. As your service becomes more popular, you can get caught in a double bind that can be very hard to escape: On one hand, you need to add capacity without causing downtime because people are *using* your service extensively, but on the other hand you can't easily handle a day or so to transfer TBs of data because people are *relying* on your service extensively. To handle these cases you need something that gives you the ability to add capacity without (much) downtime.
LVM can be very useful here, because you can add/upgrade storage without taking the system offline, and although there *is* some downtime when you have to grow the filesystem (EG when using Ext* file systems) it's pretty minimal.
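For example, adding capacity to a hypothetical data volume might look like this (device, VG and LV names are made up):

~]# pvcreate /dev/sdd1
~]# vgextend vg_data /dev/sdd1
~]# lvextend -L +1T /dev/vg_data/lv_data
~]# resize2fs /dev/vg_data/lv_data    # then grow the filesystem (resize2fs for ext*, xfs_growfs for XFS)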
So I would strongly recommend using something to manage large amounts of data with minimal downtime if/when that becomes a likely scenario.
Comparing LVM+XFS to ZFS, ZFS wins IMHO. You get all the benefits of LVM and the file system, along with the almost magical properties that you can get when you combine them into a single, integrated whole. Some of ZFS' data integrity features (See RAIDZ) are in "you can do that?" territory. The main downsides are the slightly higher risk that ZFS on Linux' "non-native" status can cause problems, though in my case, that's no worry since we'll be testing any updates carefully prior to roll out.
In any event, realize that any solution like this (LVM + XFS/Ext, ZFS, or BTRFS) will have a significant learning curve. Give yourself *time* to understand exactly what you're working with, and use that time carefully.
On Wed, 2014-01-29 at 08:57 -0800, Lists wrote:
My (sometimes unpopular) advice is to set up the partitions on servers into two categories:
- OS
- Data
Absolutely. I have been doing this, without problems, for 5 years. Keeping the two distinct is best, in my opinion.
/data/...............
That's great advice.. I've *across the universe* also sectioned off /home directory and /opt Not to counter anything here, no sir eee, to add.. to the sane request from the previous mention...
It can make the difference sometimes with fast restores and there is a slight performance increase depending on the I/O activity and threads of the actual server and its role. Just saying... Don't flame.. I've been there; plus tax and went down there and brought back the souvenir.. really :)
At the end of the day, backups are just native commands, pick one: tar, cpio (yeah, still being used) etc. wrapped up in a script/program if you want to be a purist -
Here's something: I've done before-and-after performance testing with real-time data and user requests, with just the 'basic' file partitioning and then partitioning the partition -- really does wonders.. Of course your RAID solution comes into play here, too.... This is with CentOS (whatever Unix-type system). Apple slices up pretty good on their Mac OS - // think FreeBSD combined with NeXT and some other interesting concoction of lovelies... and....
Oh, there is no counter or 'ideal' way to do this.. because why? EVERY infrastructure, culture, 'way we do it around here..' dictators are very different -- as always, your mileage may /vary/ == SO this isn't a 'how to' but a nice, could do...
Been there, got the,,, oh, I already addressed that. Have fun.. Better than digging a ditch. TASTE GREAT; LESS FILLING
~ so,
/swap
/OS - whatever you want to call it, I don't call it OS in Unix/Linux, but that's fine
/opt
/usersHOMEdir
Pretty clean; simple.. Anyone says different, they're justifying their job. Nothing to justify here.
Good call though otherwise. I like it.
Wizard of Hass! Left Coast
On 1/29/2014 11:35 AM, Always Learning wrote:
On Wed, 2014-01-29 at 08:57 -0800, Lists wrote:
My (sometimes unpopular) advice is to set up the partitions on servers into two categories:
- OS
- Data
Absolutely. I have been doing this, without problems, for 5 years. Keeping the two distinct is best, in my opinion.
/data/...............
On Wed, Jan 29, 2014 at 1:49 PM, Jeffrey Hass xaccusa@gmail.com wrote:
Here's something: I've done before and /after performance testing with real time data and User requests with just the 'basic' file partioning and then Partioning the partition -- really does wonders..
How so, unless you are adding disk heads to the mix or localizing activity during your test?
On 01/29/2014 01:10 PM, Les Mikesell wrote:
How so, unless you are adding disk heads to the mix or localizing activity during your test?
Just ran into this: did a grep on what seemed to be a lightly loaded server, and the load average suddenly spiked unexpectedly. Turns out that it was performing over 130 write ops/second on a 7200 RPM drive! Partitioning the data into separate partitions would have had no effect in this case.
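(For what it's worth, something like

~]# iostat -dx 5

from the sysstat package, watching the w/s column, is a quick way to spot that kind of spike.)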
How not so...say something important next time. People are worl I no here.. On Jan 29, 2014 1:11 PM, "Les Mikesell" lesmikesell@gmail.com wrote:
On Wed, Jan 29, 2014 at 1:49 PM, Jeffrey Hass xaccusa@gmail.com wrote:
Here's something: I've done before and /after performance testing with real time data and User requests with just the 'basic' file partioning and then Partioning the partition -- really does wonders..
How so, unless you are adding disk heads to the mix or localizing activity during your test?
You seem to imply something magic is going happen to performance with partitioning. There's not really much magic in these boxes. You either move the disk head farther more frequently or you don't. So if your test stays mostly constrained to a small slice of disk that you've partitioned you might think your performance is improved. But, that's only true if the test exactly matches real-world use - that is, in normal operation, the same disk heads won't frequently be moving to other locations to, for example, write logs.
Paul,
I forgot to mention that with the 'unconventional' slicing of the partitions, it does become unpopular in terms of 'vendor' support (if it applies..) and also expectations on code installs, etc., where environments are set based on 'known knowns' with Linux/UNIX layouts .. and the likes..
Major chances of failures, etc. So, if insistent on slicing up /DISKS/ and such (I still believe in it, but look at install scripts before I do and 'tweak' accordingly -- and am willing to maintain that), it's a safe bet.. Most just click on ***.sh and blast away... Problems later..
It's funny how little folks actually prepare for an install/setup before going to Prod or the infamous 'golive' - but that's how I make money, to come in and fix it.. SO PLEASE don't read the manual ~ and use logic here. lol.
Wizard of Hass! Shazam ~
On 1/29/2014 11:35 AM, Always Learning wrote:
On Wed, 2014-01-29 at 08:57 -0800, Lists wrote:
My (sometimes unpopular) advice is to set up the partitions on servers into two categories:
- OS
- Data
Absolutely. I have been doing this, without problems, for 5 years. Keeping the two distinct is best, in my opinion.
/data/...............
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Always Learning Sent: den 29 januari 2014 20:36 To: CentOS mailing list Subject: Re: [CentOS] Booting Software RAID
My (sometimes unpopular) advice is to set up the partitions on servers into two categories:
- OS
- Data
Absolutely. I have been doing this, without problems, for 5 years. Keeping the two distinct is best, in my opinion.
Exactly. Why would this be an unpopular piece of advice?
It might even be better to keep the OS by itself on one disk (with /boot, / and swap) and have the data on a separate disk.
Please enlighten me!
-- //Sorin
On 01/30/2014 12:28 AM, Sorin Srbu wrote:
My (sometimes unpopular) advice is to set up the partitions on servers into two categories:
- OS
- Data
Absolutely. I have been doing this, without problems, for 5 years. Keeping the two distinct is best, in my opinion.
Exactly. Why would this be an unpopular piece of advice?
It might even be better to keep the OS by itself on one disk (with /boot, / and swap) and have the data on a separate disk.
Please enlighten me!
I think the somewhat unpopular part is to recommend *against* using LVM for the OS partitions, voting instead to KISS, and only use LVM / Btrfs / ZFS for the "data" part. Some people actually think LVM should be used everywhere.
And for clarity's sake, I'm not suggesting a literal partition /os and /data, simply that there are areas of the filesystem used to store operating system stuff (EG: /bin, /boot, /usr), and areas used to store data (EG: /home, /var, /tmp), etc.
Keep the OS stuff as simple as possible, RAID1 against bare partitions, etc. because when things go south, the last thing you want is *another* thing to worry about.
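(For example, nothing fancier than a plain mirror of two partitions with a filesystem directly on it - device names here are only illustrative:

~]# mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
~]# mkfs.ext4 /dev/md2

with no LVM layered on top.)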
Keep the data part such that you can grow as needed without (much) downtime. EG: LVM+XFS, ZFS, etc. (And please verify your backups regularly)
-Ben
On Thu, Jan 30, 2014 at 1:43 PM, Lists lists@benjamindsmith.com wrote:
And for clarity's sake, I'm not suggesting a literal partition /os and /data, simply that there are areas of the filesystem used to store operating system stuff (EG: /bin, /boot, /usr), and areas used to store data (EG: /home, /var, /tmp), etc.
The division is not at all clean, especially under /var. You've got stuff put there by a base OS install mingled with an unpredictable amount of logs and data.
On 1/30/2014 12:02 PM, Les Mikesell wrote:
On Thu, Jan 30, 2014 at 1:43 PM, Listslists@benjamindsmith.com wrote:
And for clarity's sake, I'm not suggesting a literal partition /os and /data, simply that there are areas of the filesystem used to store operating system stuff (EG: /bin, /boot, /usr), and areas used to store data (EG: /home, /var, /tmp), etc.
The division is not at all clean, especially under /var. You've got stuff put there by a base OS install mingled with an unpredictable amount of logs and data.
Indeed. And it's not unusual to discover a year after deployment that you need significantly more space in /usr or whatever. I generally use LVM for my boot disk.
On Thu, Jan 30, 2014 at 2:20 PM, John R Pierce pierce@hogranch.com wrote:
On 1/30/2014 12:02 PM, Les Mikesell wrote:
On Thu, Jan 30, 2014 at 1:43 PM, Listslists@benjamindsmith.com wrote:
And for clarity's sake, I'm not suggesting a literal partition /os and /data, simply that there are areas of the filesystem used to store operating system stuff (EG: /bin, /boot, /usr), and areas used to store data (EG: /home, /var, /tmp), etc.
The division is not at all clean, especially under /var. You've got stuff put there by a base OS install mingled with an unpredictable amount of logs and data.
Indeed. And it's not unusual to discover a year after deployment that you need significantly more space in /usr or whatever. I generally use LVM for my boot disk.
I'm getting more and more inclined to make the whole systems disposable/replaceable and using VMs for the smaller things instead of micro-managing volume slices. If something is running out of space it probably really needs a partition from a new disk mounted there anyway.
On Thu, 30 Jan 2014, Les Mikesell wrote:
I'm getting more and more inclined to make the whole systems disposable/replaceable and using VMs for the smaller things instead of micro-managing volume slices.
+1
Resource scarcity has changed dramatically over the past couple of decades. HD space, for OS-installed files anyway, is about the least of anyone's worries these days.
I only consider separate partitions for three directory trees:
1. /boot -- for compatibility with older BIOSes
2. /home -- for continuity across systems or upgrades
3. /srv -- mostly human-maintained site data
Paul Heinlein wrote:
On Thu, 30 Jan 2014, Les Mikesell wrote:
I'm getting more and more inclined to make the whole systems disposable/replaceable and using VMs for the smaller things instead of micro-managing volume slices.
+1
Resource scarcity has changed dramatically over the past couple of decades. HD space, for OS-installed files anyway, is about the least of anyone's worries these days.
I only consider separate partitions for three directory trees:
- /boot -- for compatibility with older BIOSes
- /home -- for continuity across systems or upgrades
- /srv -- mostly human-maintained site data
Eight years ago, I wrote an article for SysAdmin suggesting a straight partition for /boot and root, and LVM for /home, /var, and /usr. These days, I might say RAID 1 for /boot and /, RAID or not for swap, and another RAID partition for everything else: home, other data directories....
At work, we're going to not more than 500G for /, but I'm thinking a lot less: I just rebuilt my own system at home, and gave / 150G, I think, and I have /var there (though I'd put web stuff elsewhere than on /).
mark
On Thu, 30 Jan 2014, m.roth@5-cent.us wrote:
Eight years ago, I wrote an article for SysAdmin, suggesting a straight partition for /boot and root, and lvm for /home and /var, and /usr. These days, I might say RAID 1 for /boot and /, and RAID or not for swap, and another raid partition for everything else: home, other data directories....
That's pretty much in line with our practice for standalone machines:
* /boot -- RAID 1
* / -- RAID 1
* /srv -- RAID 1 or 5, and it may not even be broken out
* /home -- NAS (RAID 10, if it matters)
For VMs, there's just swap and /.
At work, we're going to not more than 500G for /, but I'm thinking a lot less: I just rebuilt my own system at home, and gave / 150G, I think, and I have /var there (though I'd put web stuff elsewhere than on /).
A RAID 1 of (relatively) inexpensive 80GB or 120GB SSDs is my default for swap and the root filesystem. Larger /srv filesystems, and the NAS holding /home, still require spinning platters on our budget.
Paul Heinlein wrote:
On Thu, 30 Jan 2014, m.roth@5-cent.us wrote:
Eight years ago, I wrote an article for SysAdmin, suggesting a straight partition for /boot and root, and lvm for /home and /var, and /usr. These days, I might say RAID 1 for /boot and /, and RAID or not for swap, and another raid partition for everything else: home, other data directories....
That's pretty much in line with our practice for standalone machines:
- /boot -- RAID 1
- / -- RAID 1
- /srv -- RAID 1 or 5, and it may not even be broken out
- /home -- NAS (RAID 10, if it matters)
For VMs, there's just swap and /.
At work, we're going to not more than 500G for /, but I'm thinking a lot less: I just rebuilt my own system at home, and gave / 150G, I think, and I have /var there (though I'd put web stuff elsewhere than on /).
A RAID 1 of (relatively) inexpensive 80GB or 120GB SSDs are my default for swap and the root filesystem. Larger /srv filesystems, and the NAS holding /home, still require spinning platters on our budget.
That's a *huge* amount of swap - we settled, years ago, on what I think upstream recommends: 2G. Now, around here, our servers have *significantly* more than 2G of RAM, and if we see anything in swap, we know something's wrong.
mark
On Thu, 2014-01-30 at 16:35 -0500, m.roth@5-cent.us wrote:
Paul Heinlein wrote:
A RAID 1 of (relatively) inexpensive 80GB or 120GB SSDs are my default for swap and the root filesystem. Larger /srv filesystems, and the NAS holding /home, still require spinning platters on our budget.
That's a *huge* amount of swap - we settled, years ago, and I think upstream recommends, 2G. Now, around here, our servers have *significantly* more than 2G, and if we see anything in swap, we know something's wrong.
We don't allocate swap. Instead we have large RAM and put tmp directories, etc., on a RAM disk.
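E.g., an /etc/fstab entry along these lines (the size is just an illustration):

tmpfs   /tmp   tmpfs   defaults,size=8G   0 0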
Oh, btw, one big reason I used to use LVM was that, when I wanted to upgrade to a new release, I'd make or wipe a partition or two and install in that partition... and then I could boot to new or old.
mark
On 01/30/2014 10:27 PM, Paul Heinlein wrote:
On Thu, 30 Jan 2014, m.roth@5-cent.us wrote:
Eight years ago, I wrote an article for SysAdmin, suggesting a straight partition for /boot and root, and lvm for /home and /var, and /usr. These days, I might say RAID 1 for /boot and /, and RAID or not for swap, and another raid partition for everything else: home, other data directories....
That's pretty much in line with our practice for standalone machines:
- /boot -- RAID 1
- / -- RAID 1
- /srv -- RAID 1 or 5, and it may not even be broken out
- /home -- NAS (RAID 10, if it matters)
For VMs, there's just swap and /.
At work, we're going to not more than 500G for /, but I'm thinking a lot less: I just rebuilt my own system at home, and gave / 150G, I think, and I have /var there (though I'd put web stuff elsewhere than on /).
A RAID 1 of (relatively) inexpensive 80GB or 120GB SSDs are my default for swap and the root filesystem. Larger /srv filesystems, and the NAS holding /home, still require spinning platters on our budget.
What type of network/filesystem connection do you create between the VMs and storage (NFS, native, separate partitions...)?
On Thu, 30 Jan 2014, Ljubomir Ljubojevic wrote:
A RAID 1 of (relatively) inexpensive 80GB or 120GB SSDs are my default for swap and the root filesystem. Larger /srv filesystems, and the NAS holding /home, still require spinning platters on our budget.
What type of network/filesystem connection do you create between the VMs and storage (NFS, native, separate partitions...)?
We use NFS over gigabit ethernet, but our VMs don't stress our i/o all that much. A few run database servers, but it's all pretty light duty.
On 01/24/2014 06:58 PM, Matt wrote:
I installed CentOS 6.x 64-bit with the minimal ISO and used two disks in a RAID 1 array.
[df output snipped]
For quite some time (since the 5.x era) I have used http://wiki.centos.org/HowTos/Install_On_Partitionable_RAID1 (with 6.x I don't even need the patch to mkinitrd).
The MBR, or whatever it is, is written to /dev/md_d0 .. and that's it. In the BIOS you set both HDDs to boot; if the first has a problem the second will boot, mail you that you have a degraded RAID, and start a resync after you replace the drive (and you can do it live).
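(If memory serves, the partitionable mirror itself is created with something along these lines - the exact flags may differ from the HowTo:

~]# mdadm --create /dev/md_d0 --auto=part --metadata=0.90 --level=1 --raid-devices=2 /dev/sda /dev/sdb

and then /dev/md_d0 is partitioned like an ordinary disk; see the wiki page above for the full procedure.)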
HTH, Adrian
On Mon, Jan 27, 2014 at 12:33 PM, Adrian Sevcenco Adrian.Sevcenco@cern.ch wrote:
for quite some time (since 5.x era) i use http://wiki.centos.org/HowTos/Install_On_Partitionable_RAID1 (with 6.x i don't even need the patch to mkinitrd)
the mbr or whatever it is is written in /dev/md_d0 .. and thats it in bios you put both hdd to boot and if the first have a problem the second will boot, mail you that you have a degraded raid and start resync after you replaced the drive. (and you can do it live)
Does that all work the same for drives > 2 TB?
On 01/27/2014 08:42 PM, Les Mikesell wrote:
On Mon, Jan 27, 2014 at 12:33 PM, Adrian Sevcenco Adrian.Sevcenco@cern.ch wrote:
for quite some time (since 5.x era) i use http://wiki.centos.org/HowTos/Install_On_Partitionable_RAID1 (with 6.x i don't even need the patch to mkinitrd)
the mbr or whatever it is is written in /dev/md_d0 .. and thats it in bios you put both hdd to boot and if the first have a problem the second will boot, mail you that you have a degraded raid and start resync after you replaced the drive. (and you can do it live)
Does that all work the same for drives > 2 TB?
I have no idea .. it should .. my use cases at work are the boot drives (all under 500 GB) and home (but I have no HDD > 2 TB).
Basically it is RAID over a block device, so it does/should not matter what you write into it...
HTH, Adrian
Adrian Sevcenco wrote:
On 01/27/2014 08:42 PM, Les Mikesell wrote:
On Mon, Jan 27, 2014 at 12:33 PM, Adrian Sevcenco Adrian.Sevcenco@cern.ch wrote:
for quite some time (since 5.x era) i use http://wiki.centos.org/HowTos/Install_On_Partitionable_RAID1 (with 6.x i don't even need the patch to mkinitrd)
the mbr or whatever it is is written in /dev/md_d0 .. and thats it in bios you put both hdd to boot and if the first have a problem the second will boot, mail you that you have a degraded raid and start resync after you replaced the drive. (and you can do it live)
Does that all work the same for drives > 2 TB?
i have no idea .. it should .. my use cases at work are the boot drives
(all under 500 GB)
and home (but i have no hdd > 2 TB)
basically it is a raid over a block device so it does/should not matter what you write into it...
As I noted in a previous post, it's got to be GPT, not MBR - the latter doesn't understand > 2TB, and won't.
On a related note, what we've started doing at work is partitioning our root drives four ways instead of three, as they're now mostly 2TB drives that we're putting in: /boot, swap, and /, at 1G, 2G, and 500G, with the rest of the drive separate. We like protecting /, while leaving more than enough space for logs that suddenly run away. At home, I'll probably do less for /, perhaps 100G.
mark
On Mon, Jan 27, 2014 at 2:50 PM, m.roth@5-cent.us wrote:
Adrian Sevcenco wrote:
On 01/27/2014 08:42 PM, Les Mikesell wrote:
On Mon, Jan 27, 2014 at 12:33 PM, Adrian Sevcenco Adrian.Sevcenco@cern.ch wrote:
for quite some time (since 5.x era) i use http://wiki.centos.org/HowTos/Install_On_Partitionable_RAID1 (with 6.x i don't even need the patch to mkinitrd)
the mbr or whatever it is is written in /dev/md_d0 .. and thats it in bios you put both hdd to boot and if the first have a problem the second will boot, mail you that you have a degraded raid and start resync after you replaced the drive. (and you can do it live)
Does that all work the same for drives > 2 TB?
i have no idea .. it should .. my use cases at work are the boot drives
(all under 500 GB)
and home (but i have no hdd > 2 TB)
basically it is a raid over a block device so it does/should not matter what you write into it...
As I noted in a previous post, it's got to be GPT, not MBR - the latter doesn't understand > 2TB, and won't.
On a related note, what we've started doing at work is partitioning our root drives four ways, as they're now mostly 2TB that we're putting in, instead of three: /boot, swap, and /, with that as 1G, 2G, and 500G, and the rest of the drive separate. We like protecting /, while leaving more than enough space for logs that suddenly run away. At home, I'll probably do less for /, perhaps 100G.
There are few reasons for /boot to be on a partition/array >= 2TB.
GRUB doesn't support software RAID levels other than 1 (sort of). It accesses one of the "mirrored" partitions and not the RAID array itself. So having large disks and only being able to use RAID1 isn't often optimal either.
While 100MB was fine in the CentOS 5.x days, it only takes 512MB or so with CentOS 6.x to store a few Linux kernels (and other items - initrd, initramfs, etc.).
Once we have GRUB2, it will be possible to boot from LVM. But partitioning /boot separately from the rootfs is not a big deal.
mark
Am 27.01.2014 um 20:50 schrieb m.roth@5-cent.us:
Adrian Sevcenco wrote:
On 01/27/2014 08:42 PM, Les Mikesell wrote:
On Mon, Jan 27, 2014 at 12:33 PM, Adrian Sevcenco Adrian.Sevcenco@cern.ch wrote:
Does that all work the same for drives > 2 TB?
i have no idea .. it should .. my use cases at work are the boot drives
(all under 500 GB)
and home (but i have no hdd > 2 TB)
basically it is a raid over a block device so it does/should not matter what you write into it...
As I noted in a previous post, it's got to be GPT, not MBR - the latter doesn't understand > 2TB, and won't.
IMHO this applies only to partitions (e.g. a 3TB HD with MBR split into a 1TB and a 2TB partition).
-- LF
Leon Fauster wrote:
Am 27.01.2014 um 20:50 schrieb m.roth@5-cent.us:
Adrian Sevcenco wrote:
On 01/27/2014 08:42 PM, Les Mikesell wrote:
On Mon, Jan 27, 2014 at 12:33 PM, Adrian Sevcenco Adrian.Sevcenco@cern.ch wrote:
Does that all work the same for drives > 2 TB?
i have no idea .. it should .. my use cases at work are the boot drives
(all under 500 GB) and home (but i have no hdd > 2 TB)
basically it is a raid over a block device so it does/should not matter what you write into it...
As I noted in a previous post, it's got to be GPT, not MBR - the latter doesn't understand > 2TB, and won't.
IMHO this applies only to partitions (eg. 3TB HD with MBR 1x1TB, 1x2TB Partition).
Perhaps... but fdisk can't deal with drives > 2TB, and if I'm forced to use parted, I might as well do GPT. I don't believe I've tried MBR, though; I'm not sure if I've used a 3TB drive for multiple partitions.
mark
On 1/28/2014 1:35 PM, Leon Fauster wrote:
As I noted in a previous post, it's got to be GPT, not MBR - the latter doesn't understand > 2TB, and won't.
IMHO this applies only to partitions (eg. 3TB HD with MBR 1x1TB, 1x2TB Partition).
It also applies to a 3TB drive with a 3TB partition, such as you'd typically use with LVM.
I don't like using raw disk devices in most cases, as they aren't labeled, so it's impossible to figure out what's on them at some later date.
Just use the GPT partitioning tools and you'll be fine. As long as it's not the boot device, the BIOS doesn't even need to know about GPT.
parted /dev/sdb "mklabel gpt"
parted -a none /dev/sdb "mkpart primary 512s -1s"
That creates a single /dev/sdb1 partition using the whole drive, starting at the 256kB boundary (which should be a nice round place on most RAID, SSD, etc... the defaults are awful; the start sector is at an odd location).
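(You can sanity-check where the partition actually starts with

~]# parted /dev/sdb unit s print

which should show partition 1 beginning at sector 512.)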
John R Pierce wrote:
On 1/28/2014 1:35 PM, Leon Fauster wrote:
As I noted in a previous post, it's got to be GPT, not MBR - the latter doesn't understand > 2TB, and won't.
IMHO this applies only to partitions (eg. 3TB HD with MBR 1x1TB, 1x2TB Partition).
it also applies to 3TB drive with 3TB partition, such as you typically use with LVM.
I don't like using raw disk devices in most cases as they aren't labeled so its impossible to figure out whats on them at some later date.
just use the gpt partitioning tools and you'll be fine. as long as its not the boot device, the BIOS doesn't even need to know about GPT.
parted /dev/sdb "mklabel gpt"
parted -a none /dev/sdb "mkpart primary 512s -1s"
that creates a single /dev/sdb1 partition using the whole drive starting at the 256kB boundary (which should be a nice round place on most raid, SSD, etc... the defaults are awful, the start sector is at an odd location)
Not a fan of that - a lot of the new drives actually use 4k blocks, *not* 512b, but serve them logically as 512, and you can see a real performance hit. My usual routine is

parted -a optimal
mklabel gpt
mkpart pri ext4 0.0GB 3001.0GB
q

and that aligns them for optimal speed. The 0.0GB will start at 1M - the old start at sector 63 will result in non-optimal alignment, not starting on a cylinder boundary, or something. Note also that parted is user-hostile, so you have to know the exact magical incantations, or you get "this is not aligned optimally" but no idea of what it thinks you should do. What I did, above, works.
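(Newer parted versions can also confirm the result, e.g.

parted /dev/sdX align-check optimal 1

reports whether partition 1 is optimally aligned - /dev/sdX being whatever drive you just set up.)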
mark