On Tue, Mar 6, 2012 at 9:29 AM, William Warren hescominsoon@emmanuelcomputerconsulting.com wrote:
why will Centos 6 not boot from an mdraid 10 partition?
It has to load code before you have the kernel that understands raid or how to detect it. That's why they call it booting.
well ubuntu allows me to boot from MD RAID10...so there's something they are doing that allows that to boot. I think RH needs to take a cue in that area....I'm not going to reconfigure my entire array to accommodate centos in this instance. if I don't need MDRAID 10 boot then this machine will come back to centos6..:) Centos 6 is great but it's not right for this particular machine..:(
On Tue, Mar 6, 2012 at 11:27 AM, Les Mikesell lesmikesell@gmail.com wrote:
On Tue, Mar 6, 2012 at 9:29 AM, William Warren hescominsoon@emmanuelcomputerconsulting.com wrote:
why will Centos 6 not boot from an mdraid 10 partition?
It has to load code before you have the kernel that understands raid or how to detect it. That's why they call it booting.
-- Les Mikesell lesmikesell@gmail.com
On Wed, Mar 7, 2012 at 9:49 AM, William Warren hescominsoon@emmanuelcomputerconsulting.com wrote:
well ubuntu allows me to boot from MD RAID10...so there's something they are doing that allows that to boot.
That ubuntu version has probably switched to grub2. Good luck debugging it when it breaks - it is very different.
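The difference is in the boot loader itself: grub2 ships md-assembly modules that grub legacy never had, which is presumably how Ubuntu manages to boot straight off RAID10. A minimal sketch of what a grub2 menu entry on an md root looks like (module names are from the grub 1.99-era packages; the kernel paths are placeholders, not taken from any particular install):

  insmod part_msdos
  insmod raid            # RAID level handling
  insmod mdraid1x        # md 1.x metadata; mdraid09 handles old 0.90 arrays
  insmod ext2
  set root=(md/0)        # grub2 assembles the array itself before loading anything
  linux  /boot/vmlinuz-<version> root=/dev/md0 ro
  initrd /boot/initrd.img-<version>

Grub legacy has nothing like this - the best it can do is read one member of a RAID1 as if it were a plain partition - which is why the CentOS 6 installer insists on keeping /boot off RAID10.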
On Mar 7, 2012, at 11:02 AM, Les Mikesell lesmikesell@gmail.com wrote:
On Wed, Mar 7, 2012 at 9:49 AM, William Warren hescominsoon@emmanuelcomputerconsulting.com wrote:
well ubuntu allows me to boot from MD RAID10...so there's something they are doing that allows that to boot.
That ubuntu version has probably switched to grub2. Good luck debugging it when it breaks - it is very different.
Plus it is very handy to have a /boot that is readable/mountable without LVM or MDRAID drivers loaded and configured.
/boot is only a 256-512MB partition that is read-only during boot and updated only when there is a new kernel, so it ain't no big thing. Even when RH goes to grub2 I think I'll keep this setup by default.
-Ross
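That readability is a real property of RAID1 with the old-style metadata: with a 0.90 (or 1.0) superblock the md metadata sits at the end of the partition, so each member starts with an ordinary filesystem and can be mounted on its own, no md driver needed. A rough sketch from a rescue shell, with the array not assembled (device names are only examples):

  # any single member of a RAID1 /boot with end-of-partition metadata
  # looks like a plain ext filesystem and mounts read-only as-is
  mount -o ro /dev/sda1 /mnt
  ls /mnt          # vmlinuz-*, initramfs-*, grub/
  umount /mnt

That is also exactly what grub legacy relies on when it boots from a RAID1 /boot.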
Ross Walker wrote:
On Mar 7, 2012, at 11:02 AM, Les Mikesell lesmikesell@gmail.com wrote:
On Wed, Mar 7, 2012 at 9:49 AM, William Warren hescominsoon@emmanuelcomputerconsulting.com wrote:
well ubuntu allows me to boot from MD RAID10...so there's something they are doing that allows that to boot.
That ubuntu version has probably switched to grub2. Good luck debugging it when it breaks - it is very different.
Plus it is very handy to have a /boot that is readable/mountable without LVM or MDRAID drivers loaded and configured.
/boot is only a 256-512MB partition that is read-only during boot and updated only when there is a new kernel, so it ain't no big thing. Even when RH goes to grub2 I think I'll keep this setup by default.
Don't make it less than 512M - we're debating between 512M and 1G here. Certainly, bleeding-edge fedora *needs* at least 256M *free* in /boot to do an upgrade in place, so I have to assume that's coming in the next few years for RHEL/CentOS.
mark
On 7.3.2012 19:08, m.roth@5-cent.us wrote:
Ross Walker wrote:
Plus it is very handy to have a /boot that is readable/mountable without LVM or MDRAID drivers loaded and configured.
/boot is only a 256-512MB partition that is read-only during boot and updated only when there is a new kernel, so it ain't no big thing. Even when RH goes to grub2 I think I'll keep this setup by default.
Don't make it less than 512M - we're debating between 512M and 1G here.
You may have noticed that redhat recommends 250M http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Installati...
Markus Falb wrote:
On 7.3.2012 19:08, m.roth@5-cent.us wrote:
Don't make it less than 512M - we're debating between 512M and 1G here.
You may have noticed that redhat recommends 250M http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Installati...
Yeah, well, some of us have many servers more than 4 yrs old; so I'm assuming that three, four years from now, with CentOS 8 or 9, it'll do the same, and want maybe 800M. Plan for the future, y'know.
mark
On Wed, Mar 7, 2012 at 2:01 PM, m.roth@5-cent.us wrote:
You may have noticed that redhat recommends 250M http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Installati...
Yeah, well, some of us have many servers more than 4 yrs old; so I'm assuming that three, four years from now, with CentOS 8 or 9, it'll do the same, and want maybe 800M. Plan for the future, y'know.
If the future continues anything like the past, you'll be able to buy something new with twice the speed and 10x the space by then and be better off starting over than allocating more than you need today.
Les Mikesell wrote:
On Wed, Mar 7, 2012 at 2:01 PM, m.roth@5-cent.us wrote:
You may have noticed that redhat recommends 250M http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Installati...
Yeah, well, some of us have many servers more than 4 yrs old; so I'm assuming that three, four years from now, with CentOS 8 or 9, it'll do the same, and want maybe 800M. Plan for the future, y'know.
If the future continues anything like the past, you'll be able to buy something new with twice the speed and 10x the space by then and be better off starting over than allocating more than you need today.
a) You think I, or a *lot* of other folks, are going to do that at home? (Please - I'm trying to get my fiancee to at *least* go from *shudder* Vista to Win7) b) I'm a federal contractor, on site. You think the Republicans in Congress are going to increase our budget, to replace all servers over three or four years old?
mark "clue: phat chance"
On Wed, Mar 7, 2012 at 2:23 PM, m.roth@5-cent.us wrote:
If the future continues anything like the past, you'll be able to buy something new with twice the speed and 10x the space by then and be better off starting over than allocating more than you need today.
a) You think I, or a *lot* of other folks, are going to do that at home? (Please - I'm trying to get my fiancee to at *least* go from *shudder* Vista to Win7)
If you leave them on, add up the power cost of running an old box for years.
b) I'm a federal contractor, on site. You think the Republicans in Congress are going to increase our budget, to replace all servers over three or four years old?
Yes, if someone does the report to show that it will save money in the big picture... But you don't necessarily throw out the 3 year old boxes, you can re-purpose them to something where people aren't waiting idly for their results, keeping some others for spare parts. In any case, someone should be accounting for the space/power/cooling that old hardware takes while accomplishing 10% as much as new.
Les Mikesell wrote:
On Wed, Mar 7, 2012 at 2:23 PM, m.roth@5-cent.us wrote:
If the future continues anything like the past, you'll be able to buy something new with twice the speed and 10x the space by then and be better off starting over than allocating more than you need today.
a) You think I, or a *lot* of other folks, are going to do that at home? (Please - I'm trying to get my fiancee to at *least* go from *shudder* Vista to Win7)
If you leave them on, add up the power cost of running an old box for years.
Sorry, that doesn't work either: *everything* new seems to need a lot more power than the older stuff. Certainly, last time I upgraded my own system, I had to buy one that was 150% the power of the old one.
b) I'm a federal contractor, on site. You think the Republicans in Congress are going to increase our budget, to replace all servers over three or four years old?
Yes, if someone does the report to show that it will save money in the big picture... But you don't necessarily throw out the 3 year old boxes, you can re-purpose them to something where people aren't waiting idly for their results, keeping some others for spare parts. In any case, someone should be accounting for the space/power/cooling that old hardware takes while accomplishing 10% as much as new.
We already repurpose. There's a limit to parts - when something gets surplussed, it has to have pretty much what we bought it with (a different h/d's no big deal, though). And the budget won't *allow* say, 20 or 50 or 70 new servers in one year. I can give you, with a 98% confidence level, that our budget (we're not DoD) will *NOT* be upped by 10%.
The sob's just "saved money" by a) taking away our water coolers (we can use the metal water fountains, y'know) and b) not refilling one of the three soap dispensers in the men's room on this floor. Mind-bogglingly petty? Damn straight. Cheap? Bet *their* offices still have the water....
mark "and our people are trying to do real science"
ObDisclaimer: I speak only for myself, not for my agency or my employer.
On Wed, Mar 7, 2012 at 3:23 PM, m.roth@5-cent.us wrote:
a) You think I, or a *lot* of other folks, are going to do that at home? (Please - I'm trying to get my fiancee to at *least* go from *shudder* Vista to Win7)
If you leave them on, add up the power cost of running an old box for years.
Sorry, that doesn't work either: *everything* new seems to need a lot more power than the older stuff. Certainly, last time I upgraded my own system, I had to buy one that was 150% the power of the old one.
That was probably before power became a big thing for servers - in most cases now power and cooling are the limiting factors for expansion in a data center. Most of the new servers use 2.5" drives and while they might still use as much power per 1u of space due to using more blades in a chassis or having more CPUs and RAM, we get much more performance from the same space and power consumption. It's not such a big deal for desktops, but you can get small low power systems if you look around - or just use a laptop that will sleep when you close the lid.
Les Mikesell wrote:
On Wed, Mar 7, 2012 at 3:23 PM, m.roth@5-cent.us wrote:
a) You think I, or a *lot* of other folks, are going to do that at home? (Please - I'm trying to get my fiancee to at *least* go from *shudder* Vista to Win7)
If you leave them on, add up the power cost of running an old box for years.
Sorry, that doesn't work either: *everything* new seems to need a lot more power than the older stuff. Certainly, last time I upgraded my own system, I had to buy one that was 150% the power of the old one.
That was probably before power became a big thing for servers - in most cases now power and cooling are the limiting factors for expansion in a data center. Most of the new servers use 2.5" drives and while they might still use as much power per 1u of space due to using more blades in a chassis or having more CPUs and RAM, we get much more performance from the same space and power consumption. It's not such a big deal for desktops, but you can get small low power systems if you look around - or just use a laptop that will sleep when you close the lid.
Heh. Many of the new servers we are getting are all on the order of 48 or 64 cores, and they eat and drink power. The same UPS that would handle six 4 or 8 core boxes can handle *three*, if we're lucky, when a clustering job's running....
mark
On 03/07/12 2:09 PM, m.roth@5-cent.us wrote:
Heh. Many of the new servers we are getting are all on the order of 48 or 64 cores, and they eat and drink power. The same UPS that would handle six 4 or 8 core boxes can handle *three*, if we're lucky, when a clustering job's running....
yes but that 48 core server can easily handle 6 or more of those 4-8 core servers virtualized.
John R Pierce wrote:
On 03/07/12 2:09 PM, m.roth@5-cent.us wrote:
Heh. Many of the new servers we are getting are all on the order of 48 or 64 cores, and they eat and drink power. The same UPS that would handle six 4 or 8 core boxes can handle *three*, if we're lucky, when a clustering job's running....
yes but that 48 core server can easily handle 6 or more of those 4-8 core servers virtualized.
VM's? Sorry, we're doing very serious scientific computing - the couple or so VMs we had are going away. I mean, when, for example, one guy I support gets on a 48 core box, and proceeds to fire up an R job, and uses *all* of them.... Plus, we're running out of UPSs to stick them on to, and sockets to reach....
mark "gotta build that last one of the cluster today"
On 03/08/12 4:39 AM, mark wrote:
VM's? Sorry, we're doing very serious scientific computing - the couple or so VMs we had are going away. I mean, when, for example, one guy I support gets on a 48 core box, and proceeds to fire up an R job, and uses *all* of them.... Plus, we're running out of UPSs to stick them on to, and sockets to reach....
ok, so 3 x 48/64 core servers use the same power as 6 x 4/8 core? that's still a major win.
John R Pierce wrote:
On 03/08/12 4:39 AM, mark wrote:
VM's? Sorry, we're doing very serious scientific computing - the couple or so VMs we had are going away. I mean, when, for example, one guy I support gets on a 48 core box, and proceeds to fire up an R job, and uses *all* of them.... Plus, we're running out of UPSs to stick them on to, and sockets to reach....
ok, so 3 x 48/64 core servers use the same power as 6 x 4/8 core? that's still a major win.
Um, no - that's what I'm saying is *not* the case. The new suckers drink power - on a UPS that I could hang, say, 6 Dell 1950s off of, I can put three of the new servers, *if* I'm lucky. And at that, if a big job's running (they vary quite a bit in how much power they draw, depending on usage), even with only three on, I've seen the LEDs run up to where they're blinking, indicating it's near overload, over 90% of capacity.
mark
On Thu, Mar 8, 2012 at 8:33 AM, m.roth@5-cent.us wrote:
VM's? Sorry, we're doing very serious scientific computing - the couple or so VMs we had are going away. I mean, when, for example, one guy I support gets on a 48 core box, and proceeds to fire up an R job, and uses *all* of them.... Plus, we're running out of UPSs to stick them on to, and sockets to reach....
ok, so 3 x 48/64 core servers use the same power as 6 x 4/8 core? that's still a major win.
Um, no - that's what I'm saying is *not* the case. The new suckers drink power - on a UPS that I could hang, say, 6 Dell 1950s off of, I can put three of the new servers, *if* I'm lucky. And at that, if a big job's running (they vary quite a bit in how much power they draw, depending on usage), even with only three on, I've seen the LEDs run up to where they're blinking, indicating it's near overload, over 90% of capacity.
Yes, part of the power savings are deceptive - they only kick in when the CPUs are idle and your users would be one of the rare cases that peg them for long intervals. I think this is getting better in the current generation but haven't followed the latest changes.
On Thursday, March 08, 2012 10:52:02 AM Les Mikesell wrote:
Yes, part of the power savings are deceptive - they only kick in when the CPUs are idle and your users would be one of the rare cases that peg them for long intervals. I think this is getting better in the current generation but haven't followed the latest changes.
In scientific computing, there is no such thing as 'enough cores' and if 3 48 core servers physically fit in the space of three older 6 or 8 core servers, then the users will want to fill that space and get 3 more 48 core servers, and so your power density has doubled. So the '150%' power increase is (if I'm reading Mark correctly) per *rack unit* not per core. And, again, in this space you don't get any savings in power, since this sort of computing eats cores for breakfast. And virtualization to save power will not address this type of user's need.
I live in the same sort of world, just on a smaller scale, and my biggest power consumer is storage, not compute, but I thoroughly understand Mark's points.
On Mar 8, 2012, at 11:06 AM, Lamar Owen lowen@pari.edu wrote:
On Thursday, March 08, 2012 10:52:02 AM Les Mikesell wrote:
Yes, part of the power savings are deceptive - they only kick in when the CPUs are idle and your users would be one of the rare cases that peg them for long intervals. I think this is getting better in the current generation but haven't followed the latest changes.
In scientific computing, there is no such thing as 'enough cores' and if 3 48 core servers physically fit in the space of three older 6 or 8 core servers, then the users will want to fill that space and get 3 more 48 core servers, and so your power density has doubled. So the '150%' power increase is (if I'm reading Mark correctly) per *rack unit* not per core. And, again, in this space you don't get any savings in power, since this sort of computing eats cores for breakfast. And virtualization to save power will not address this type of user's need.
I live in the same sort of world, just on a smaller scale, and my biggest power consumer is storage, not compute, but I thoroughly understand Mark's points.
So, get more power and UPS.
The specs are published, so power consumption shouldn't be a "surprise".
-Ross
On Thursday, March 08, 2012 12:37:30 PM Ross Walker wrote:
On Mar 8, 2012, at 11:06 AM, Lamar Owen lowen@pari.edu wrote:
I live in the same sort of world, just on a smaller scale, and my biggest power consumer is storage, not compute, but I thoroughly understand Mark's points.
So, get more power and UPS.
So, can I put you down as being willing to donate the $2.5 million necessary to increase our power capacity (I'm looking out the door at two of our four 1MVA 12.4KV to 480/277 transformers (that we, not the utility, own), and any upgrade will involve the incoming buried primary) and get a couple or three more Mitsubishi 500KVA units? No? It's a great tax writeoff, being that we are a 501(c)(3) public not-for-profit foundation.....we'll give you a nice tax receipt. :-) Oh, and the $1.2 million for an additional 100 tons of redundant HVAC while we're at it....
The specs are published, so power consumption shouldn't be a "surprise".
It's not a surprise, it's just more cost than just the servers themselves, and budgets are tight.
On Mar 8, 2012, at 1:12 PM, Lamar Owen lowen@pari.edu wrote:
On Thursday, March 08, 2012 12:37:30 PM Ross Walker wrote:
On Mar 8, 2012, at 11:06 AM, Lamar Owen lowen@pari.edu wrote:
I live in the same sort of world, just on a smaller scale, and my biggest power consumer is storage, not compute, but I thoroughly understand Mark's points.
So, get more power and UPS.
So, can I put you down as being willing to donate the $2.5 million necessary to increase our power capacity (I'm looking out the door at two of our four 1MVA 12.4KV to 480/277 transformers (that we, not the utility, own), and any upgrade will involve the incoming buried primary) and get a couple or three more Mitsubishi 500KVA units? No? It's a great tax writeoff, being that we are a 501(c)(3) public not-for-profit foundation.....we'll give you a nice tax receipt. :-) Oh, and the $1.2 million for an additional 100 tons of redundant HVAC while we're at it....
Maybe time to co-locate some stuff to a third party data center that can scale up with your demand?
The specs are published, so power consumption shouldn't be a "surprise".
It's not a surprise, it's just more cost than just the servers themselves, and budgets are tight.
I know, you know, but there are some who don't budget for the environmental changes that come about with big iron.
-Ross
On Thu, Mar 8, 2012 at 11:37 AM, Ross Walker rswwalker@gmail.com wrote:
I live in the same sort of world, just on a smaller scale, and my biggest power consumer is storage, not compute, but I thoroughly understand Mark's points.
So, get more power and UPS.
The specs are published, so power consumption shouldn't be a "surprise".
Usually your whole building is designed around a certain amount of heat load and data centers designed a few years back are probably already maxed out due to the earlier rounds of density increases. So you will need at least more A/C and probably real estate too.
On Thursday, March 08, 2012 01:15:59 PM Les Mikesell wrote:
Usually your whole building is designed around a certain amount of heat load and data centers designed a few years back are probably already maxed out due to the earlier rounds of density increases. So you will need at least more A/C and probably real estate too.
And don't forget the floor load. Our EMC Clariions are heavy enough that I can't use a tile under them with any holes of any kind in them (especially vents) or I have tile surface deflection that's out of spec. And our floor in the main data center is rated 1,500 lbs (avoirdupois) per square foot. And the subfloor loading has to be considered, as well as how much the underfloor will 'flow' in CFM......
On 03/08/12 6:33 AM, m.roth@5-cent.us wrote:
ok, so 3 x 48/64 core servers use the same power as 6 x 4/8 core? that's still a major win.
Um, no - that's what I'm saying is *not* the case. The new suckers drink power - on a UPS that I could hang, say, 6 Dell 1950s off of, I can put three of the new servers, *if* I'm lucky. And at that, if a big job's running (they vary quite a bit in how much power they draw, depending on usage), even with only three on, I've seen the LEDs run up to where they're blinking, indicating it's near overload, over 90% of capacity.
ok, how do you figure 3 48 core modern servers are not more powerful computationally than 6 8 core servers? the 1950s were "Clovertown", which were dual Core 2 Duo dies per socket, 2 sockets, at ~2-3GHz, for your 8 cores per 1U.
now, I dunno what your 48 core servers are, if that's really 2x12 cores with 'hyperthreading', then those extra threads are NOT good for intense numerical compute work as they share the FPU, but even those 24 cores should be faster than twice as many 8 core systems.
John R Pierce wrote:
On 03/08/12 6:33 AM, m.roth@5-cent.us wrote:
ok, so 3 x 48/64 core servers use the same power as 6 x 4/8 core? that's still a major win.
Um, no - that's what I'm saying is *not* the case. The new suckers drink power - on a UPS that I could hang, say, 6 Dell 1950s off of, I can put three of the new servers, *if* I'm lucky. And at that, if a big job's running (they vary quite a bit in how much power they draw, depending on usage), even with only three on, I've seen the LEDs run up to where they're blinking, indicating it's near overload, over 90% of capacity.
ok, how do you figure 3 48 core modern servers are not more powerful computationally than 6 8 core servers? the 1950s were "Clovertown", which were dual Core 2 Duo dies per socket, 2 sockets, at ~2-3GHz, for your 8 cores per 1U.
I'm sorry, but to me, the above is a non sequitur. I was talking about how much power the servers drink, and that the UPSs that I have can barely, barely handle half as many or less, and I'm running out of UPSs, and out of power outlets for them in such a small space (that is, a dozen or so in each rack), without trying to go halfway across the room.
now, I dunno what your 48 core servers are, if that's really 2x12 cores with 'hyperthreading', then those extra threads are NOT good for intense numerical compute work as they share the FPU, but even those 24 cores should be faster than twice as many 8 core systems.
http://www.newegg.com/Product/Product.aspx?Item=N82E16819105264
mark
On Thu, Mar 08, 2012 at 02:51:58PM -0500, m.roth@5-cent.us wrote:
John R Pierce wrote:
On 03/08/12 6:33 AM, m.roth@5-cent.us wrote:
ok, so 3 x 48/64 core servers use the same power as 6 x 4/8 core? that's still a major win.
Um, no - that's what I'm saying is *not* the case. The new suckers drink power - on a UPS that I could hang, say, 6 Dell 1950s off of, I can put three of the new servers, *if* I'm lucky. And at that, if a big job's running (they vary quite a bit in how much power they draw, depending on usage), even with only three on, I've seen the LEDs run up to where they're blinking, indicating it's near overload, over 90% of capacity.
ok, how do you figure 3 48 core modern servers are not more powerful computationally than 6 8 core servers? the 1950s were "Clovertown", which were dual Core 2 Duo dies per socket, 2 sockets, at ~2-3GHz, for your 8 cores per 1U.
I'm sorry, but to me, the above is a non sequitur. I was talking about how much power the servers drink, and that the UPSs that I have can barely, barely handle half as many or less, and I'm running out of UPSs, and out of power outlets for them in such a small space (that is, a dozen or so in each rack), without trying to go halfway across the room.
If you need lots of smaller servers, supermicro makes a very nice single socket amd G34 board: http://www.supermicro.com/Aplus/motherboard/Opteron6000/SR56x0/H8SGL-F.cfm
I have a bunch of those in production, and they work well. Most of mine only have 32GiB of ram; I bought them back when 8GiB modules were expensive, but if I bought one today they'd have 64GiB, as 8GiB reg. ecc ddr3 is cheap now. They draw more than half of what a dual G34 board with double the ram/cpu would, but not a lot more than half.
One of those single-socket G34 boards should use rather less power than a dual-socket 1950 with FBDIMMs and it should give you rather more compute power and ram. (ugh. as someone that uses a lot of ram and pays a lot for power, I hate FBDIMMs. I was almost entirely AMD socket F during that time period for the reg.ecc ddr2. all my new stuff is intel 56xx with reg. ecc ddr3.)
Also, they make lower power G34 CPUs... they cost a bit more, but when you are paying California prices for power, it's usually worth it, especially if you plan on keeping the thing for 5 years rather than just 3.
On 03/08/12 11:51 AM, m.roth@5-cent.us wrote:
I'm sorry, but to me, the above is a non sequitur. I was talking about how much power the servers drink, and that the UPSs that I have can barely, barely handle half as many or less, and I'm running out of UPSs, and out of power outlets for them in such a small space (that is, a dozen or so in each rack), without trying to go halfway across the room.
yes, you have to budget power and A/C as part of a server upgrade. ain't no such thing as a free lunch.
anyways, the CPU link you sent was a 12 core, so I guess you upgraded from a fairly low power 2 socket Intel (said Dell PE1950) to a rather HIGH power 4 socket AMD (those are 120W each CPU chips, never mind the rest of the infrastructure, like I bet you have way over 2X the RAM in those new boxes).
so yes, if you replaced N 2 socket servers with the same number N of 4 socket servers, and went from 8 to 48 cores each, of COURSE you're drawing more power. You have about 6 times the CPU capacity (assuming the cores are equivalent MIPS; often newer cores are more efficient at the same clock speed than older ones). Most of the Xeon 5300 series in those PE1950s were 80 watts each (only the 2.67 and 3.0GHz parts were 120W).
On Wednesday, March 07, 2012 05:06:13 PM Les Mikesell wrote:
It's not such a big deal for desktops, but you can get small low power systems if you look around - or just use a laptop that will sleep when you close the lid.
FWIW, Aleutia (www.aleutia.com) makes some nice really low power units. While they come by default from the factory preloaded with Ubuntu, they would be great CentOS machines.
the problem with that is when your boot drive dies you can't boot...with ubuntu at least if any drive dies I can still boot off of the other 3..:)
On Wed, Mar 7, 2012 at 12:40 PM, Ross Walker rswwalker@gmail.com wrote:
Plus it is very handy to have a /boot that is readable/mountable without LVM or MDRAID drivers loaded and configured.
/boot is only a 256-512MB partition that is read-only during boot and updated only when there is a new kernel, so it ain't no big thing. Even when RH goes to grub2 I think I'll keep this setup by default.
-Ross
the problem with that is when your boot drive dies you can't boot...with ubuntu at least if any drive dies I can still boot off of the other 3..:)
You don't need a boot drive, you only need a *boot partition*.
So, you create a small *boot partition* with RAID1 and then allocate the rest of your drives to a RAID10 array.
You will still have redundancy (RAID1) on your boot partition.
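Concretely, assuming four disks each carved into a small first partition and a large second one, the manual version of that layout is roughly (device names are illustrative; the 0.90 metadata on the /boot array is there so the CentOS 6 grub can read its members directly):

  # small RAID1 across all four first partitions -> /boot
  mdadm --create /dev/md0 --level=1 --raid-devices=4 --metadata=0.90 /dev/sd[abcd]1
  # everything else as RAID10
  mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[abcd]2
  mkfs.ext4 /dev/md0        # becomes /boot
  pvcreate /dev/md1         # or mkfs it directly if you don't want LVM
  vgcreate vg0 /dev/md1

The installer's partitioning screen can set up the same thing; the point is only that /boot lives on the RAID1, so any surviving disk can still bring the box up.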
On Wed, Mar 7, 2012 at 12:28 PM, William Warren hescominsoon@emmanuelcomputerconsulting.com wrote:
the problem with that is when your boot drive dies you can't boot...with ubuntu at least if any drive dies I can still boot off of the other 3..:)
You can make a RAID1 with 4 members if you want - and you might as well, since you'll want to make the partition layout match anyway. Does ubuntu actually install grub on all 4 MBRs? You'll have to do that manually, but it's not that hard.
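For the manual part, with grub legacy (CentOS 6) it is just a loop over the disks - roughly this, with the disk names as examples:

  # put stage1 in every member's MBR so any surviving disk can still boot
  for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
      grub-install "$d"
  done

  # or the traditional grub-shell way, mapping each disk to hd0 in turn
  # (repeat with sdc and sdd):
  printf 'device (hd0) /dev/sdb\nroot (hd0,0)\nsetup (hd0)\n' | grub --batch

grub-install sometimes guesses the BIOS device mapping wrong on multi-disk boxes, which is why the grub-shell form gets recommended so often.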