Hi all.
I work for a hosting provider in Poland. We are currently searching for a universal, extensible hardware platform to use in our server infrastructure. The platform should have:
- the ability to install up to 32GB of RAM, with at least 4 slots;
- at least 6 SATA ports;
- the ability to use SAS disks;
- at least one 1Gb/s network interface, with the option to add another;
- a size of 2U or 3U;
- hot-swap disk bays.
We are looking for a solution in which we could deploy a basic server (for example 2 SATA disks, 8GB of RAM, 1 NIC, 4 cores) and later use the same enclosure and motherboard to build a more heavy-duty server (for example 6 SATA disks, 24GB of RAM, 2 NICs, 8 cores). Which manufacturer can you recommend, and why? We are looking for something reasonably inexpensive but reliable, with good support. All servers will run CentOS 5/6 :)
Best regards, Rafal Radecki.
From: Rafał Radecki radecki.rafal@gmail.com
We are looking for something reasonably inexpensive but reliable, with good support.
Something reliable, good, and inexpensive... hmm... we all want that!
But quite a few people recommend Supermicro.
JD
From: Rafał Radecki radecki.rafal@gmail.com
We are looking for something reasonably inexpensive but reliable, with good support.
I'd add a few more things to the list...
- redundant power supplies
- dual gigabit NICs
- dual quad-core CPUs
- RAID support with battery-backed cache
- remote management
Pick up an HP DL380 G5 - it's two (er, three) generations back now.
There's a lot of that equipment around, and it can be had at a good price.
You could save enough to pick up a second one and keep it on site for R&D and as an equipment/parts backup.
On 06/26/12 6:40 AM, Rafał Radecki wrote:
- at least 6 SATA ports;
- the ability to use SAS disks;
If the system supports SAS disks, you don't need any SATA ports: you can plug SATA drives into SAS hot-swap bays with almost any SAS controller.
Hi, Rafal,
Rafał Radecki wrote:
I work for a hosting provider in Poland. We are currently searching for a universal, extensible hardware platform to use in our server infrastructure. The platform should have:
- the ability to install up to 32GB of RAM, with at least 4 slots;
- at least 6 SATA ports;
- the ability to use SAS disks;
- at least one 1Gb/s network interface, with the option to add another;
- a size of 2U or 3U;
- hot-swap disk bays.
We are looking for a solution in which we could deploy a basic server (for example 2 SATA disks, 8GB of RAM, 1 NIC, 4 cores) and later use the same enclosure and motherboard to build a more heavy-duty server (for example 6 SATA disks, 24GB of RAM, 2 NICs, 8 cores). Which manufacturer can you recommend, and why? We are looking for something reasonably inexpensive but reliable, with good support. All servers will run CentOS 5/6 :)
Are you looking for full servers, or to build from parts? Someone just recommended Supermicro; I'm not a big fan of them just now - we have a good number of servers from Penguin Computing that use Supermicro boards, and the 64-core systems seem to have a lot of problems. Their 48-core servers seem fine.
By the way, all of the above Penguins are 1U, have 3 hot-swap bays, and are all SATA. Their support has been quite decent.
More expensive: Dell. An R41x, R61x, or, for heavy-duty work, an R81x is seriously capable; the last two take, I think, eight of the small 2.5" SAS drives. Their service is outstanding.
*All* of the above, both Dell and Penguin, have two NICs. All will take well beyond 64G of memory (we have a Penguin Altus 1804 and a Dell R815, I think it is, with - excuse me, my mind SEGVs every time I think of this - 250G of memory).
Under no circumstances should you buy Sun/Oracle. Service... here in the Washington, DC area, about a year and a half ago, it took me a MONTH to get one server fixed with on-site support. They're there to make a profit - I mean, how do you *think* Larry Ellison pays for his fighter jet, yacht, and Hawaiian island? - *NOT* to sell you hardware and service that serves your purposes.
mark
m.roth@5-cent.us wrote:
[snip]
Sorry, need to follow myself up: *just* after I hit <send>, I forgot the obvious disclaimer: I have no idea what any of these folks are allowed to export to Poland.
mark
On 06/26/12 7:37 AM, m.roth@5-cent.us wrote:
Are you looking for full servers, or to build from parts? Someone just recommended Supermicro; I'm not a big fan of them just now - we have a good number of servers from Penguin Computing that use Supermicro boards, and the 64-core systems seem to have a lot of problems. Their 48-core servers seem fine.
The SuperMicro Intel stuff seems just fine. I'd be more leery of AMD.
John R Pierce wrote:
On 06/26/12 7:37 AM, m.roth@5-cent.us wrote:
Are you looking for full servers, or to build from parts? Someone just recommended Supermicro; I'm not a big fan of them just now - we have a good number of servers from Penguin Computing that use Supermicro boards, and the 64-core systems seem to have a lot of problems. Their 48-core servers seem fine.
The SuperMicro Intel stuff seems just fine. I'd be more leery of AMD.
We've had a number of servers fail, and it *seems* to be related to the motherboard. In fact, I just got the pass, and asked the secretary to call FedEx today to ship another one back to the vendor.
mark
Steve Thompson wrote:
On Tue, 26 Jun 2012, m.roth@5-cent.us wrote:
We've had a number of servers fail, and it *seems* to be related to the motherboard.
I too have had bad experiences with SuperMicro motherboards; I've never had one last more than three years.
These are with AMDs, but we have a good number of 48-core servers that have been pretty good (with a few exceptions); there are more problems with the 64-core servers.
mark
On 06/26/2012 12:03 PM, Steve Thompson wrote:
On Tue, 26 Jun 2012, m.roth@5-cent.us wrote:
We've had a number of servers fail, and it *seems* to be related to the motherboard.
I too have had bad experiences with SuperMicro motherboards; I've never had one last more than three years.
That runs counter to my experience; I've run several 24/7 for years and just retired one that was still running flawlessly after more than 12 years.
Apparently, YMMV
On Tue, Jun 26, 2012 at 03:03:23PM -0400, Steve Thompson wrote:
On Tue, 26 Jun 2012, m.roth@5-cent.us wrote:
We've had a number of servers fail, and it *seems* to be related to the motherboard.
I too have had bad experiences with SuperMicro motherboards; I've never had one last more than three years.
The problem with Supermicro is that the end user assembles them. If you use ESD protection, this is fine. If you don't? Go buy a Dell or something.
The big problem is that many of the smaller assembly houses also don't believe ESD is a big deal. If there is carpet on the workshop floor? Run. If you see techs working without a wrist strap? Walk.
I've assembled hundreds of Supermicro servers with and without ESD protection, and the behavior is fairly reproducible. Yeah, the problems don't always show up right away, but they come.
I remember when I first figured this out; we had been having about 1 in 3 of our Supermicro servers not pass burn-in. Then, in production, we'd lose things like RAID cards and Ethernet ports all the time. I'd spend days swapping out parts and RMAing stuff just to get one server built. I mean, I didn't really believe that the factory was sending me broken shit, and there was noticeable static in the office. (I always 'took the power supply pledge' before touching anything.) Anyhow, I read a study by Adaptec (we were using Adaptec hardware RAID in everything, and the cards were failing like crazy) saying that nearly all customer RMAs, upon inspection, turned out to be due to ESD damage.
Well, the boss ended up ordering something like 70 servers (rather than the three every two weeks he was ordering before). I talked him into letting me blow $200 on ESD protection, just to see if that was the problem, and instead of having 1 out of 3 die as before? All of them passed burn-in on the first try.
Properly assembled Supermicro kit (both AMD and Intel) is just as good as the Dell stuff. I have one server that's been chugging away for something like ten years now. (I need to get rid of it; dual Socket 604 Xeons. It's a space heater, and it doesn't get me much by way of compute power. I've got all the customers off of it, but my own personal VPS? I haven't had time.)
But yeah, you've got to get someone to assemble it who gives a shit. I mean, me? I know that it's my pager that's going off at 4am if something breaks. It's me that's going to have to fumble around with spares. I give a shit.
As it is, I'd rather assemble my own servers than trust my stuff to someone for whom down hardware is not that big of a deal.
Assembling a SuperServer, if you don't fuck it up, takes about five minutes. Burn-in is trivial when they pass... and when they don't pass, which is extremely rare, I know I screwed something up.
On the other hand... I have a very low opinion of Dell support (granted, I'm pretty hard to please in that department), but from what I've seen? All the big names ship okay stuff from the factory. They have proper ESD precautions in the factory. So yeah: if you aren't willing to go with the table mat, the wrist strap, and the monitor, well, order the server from Dell and don't open it.
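If it helps, the shape of a burn-in harness like the one I'm describing is roughly this - a minimal Python sketch. It assumes the stress(1) and smartctl(8) binaries are installed; the load knobs, duration, device list, and dmesg patterns are illustrative, not a recipe:

#!/usr/bin/env python
# Minimal burn-in sketch. Load the box hard, then look for trouble.
# Assumes the stress(1) and smartctl(8) binaries are installed; the
# load knobs, duration, and device list below are illustrative only.
import subprocess
import sys

DISKS = ["/dev/sda", "/dev/sdb"]  # hypothetical device list
BURN_SECONDS = 24 * 3600          # illustrative burn-in length

def run(cmd):
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT)
    out, _ = p.communicate()
    return p.returncode, out.decode("utf-8", "replace")

def main():
    # Hammer CPU, memory, and disk for the whole burn-in window.
    rc, _ = run(["stress", "--cpu", "8", "--vm", "4", "--io", "4",
                 "--hdd", "2", "--timeout", str(BURN_SECONDS)])
    if rc != 0:
        sys.exit("stress exited nonzero - box is likely flaky")

    # Did the kernel log any hardware complaints during the run?
    rc, out = run(["dmesg"])
    for pattern in ("Hardware Error", "I/O error"):
        if pattern in out:
            sys.exit("kernel log shows '%s' - fails burn-in" % pattern)

    # SMART overall health on each disk.
    for disk in DISKS:
        rc, out = run(["smartctl", "-H", disk])
        if "PASSED" not in out:
            sys.exit("%s fails SMART health check" % disk)

    print("burn-in passed")

if __name__ == "__main__":
    main()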
On 06/28/12 8:56 PM, Luke S. Crawford wrote:
The problem with Supermicro is that the end user assembles them. If you use ESD protection, this is fine. If you don't? Go buy a Dell or something.
Well, the SM kit I've bought was built and integrated by a major-name systems integrator. It was sold as a complete solution under this vendor's label, and supported by said vendor.
Really, I'd say it's all in the VAR and your service contract with them. Very few VARs do the level of systems testing that HP or IBM or Dell do... If you really want to be your own systems integrator, then do extensive burn-in on new systems, and stock spare parts.
On Thu, Jun 28, 2012 at 09:57:33PM -0700, John R Pierce wrote:
Well, the SM kit I've bought was built and integrated by a major-name systems integrator. It was sold as a complete solution under this vendor's label, and supported by said vendor.
Really, I'd say it's all in the VAR and your service contract with them. Very few VARs do the level of systems testing that HP or IBM or Dell do... If you really want to be your own systems integrator, then do extensive burn-in on new systems, and stock spare parts.
I agree. Except that you don't need to do all, or even most, of the work that a systems integrator does. For me, the hard part of being a systems integrator is the sales and negotiation bullshit. That's why I don't build systems for other people. On top of that, you have to deal with your customers opening them up without ESD protection and adding garbage, or with customers blaming OS bugs on you. If you only build for yourself, you don't have to worry about that sort of thing.
I mean, you still have to figure out whether it's the hardware or the OS, but at least you get to choose the OS.
But yes: stock spares. I try to make sure I always have one server (minus disks) ready to go. If I get a hardware problem (I can usually tell remotely), I put the spare in the van before I head down to the data center; if I can't figure things out quickly on-site, I take the hard drives out of the bad hardware, put them in the spare box, boot, and go. (Of course, I also have spares of other parts; but if something in production is down, you don't want to sit there farting around trying to figure out which DIMM is bad while the pager is exploding. Swap the whole thing and screw with it back at the shop after you have cleaned up the support queue.)
(If you use hardware RAID, this becomes... more complicated. Test your procedure first.)
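The "I can usually tell remotely" part is nothing fancy, by the way; roughly this kind of check, as a minimal Python sketch. It assumes Linux software RAID (md), and the dmesg patterns are illustrative only:

#!/usr/bin/env python
# Quick remote triage sketch: is this box's trouble likely hardware?
# Assumes Linux software RAID (md); the log patterns are illustrative.
import re
import subprocess

def degraded_md_arrays():
    # Member status strings in /proc/mdstat look like [UU]; an
    # underscore means a dropped disk, e.g. [U_].
    bad, current = [], None
    with open("/proc/mdstat") as f:
        for line in f:
            m = re.match(r"(md\d+)\s*:", line)
            if m:
                current = m.group(1)
            s = re.search(r"\[([U_]+)\]", line)
            if current and s and "_" in s.group(1):
                bad.append((current, s.group(1)))
    return bad

def kernel_hw_complaints():
    # Scan the kernel ring buffer for common hardware-ish messages.
    out = subprocess.check_output(["dmesg"]).decode("utf-8", "replace")
    patterns = ("I/O error", "Hardware Error", "failed command")
    return [line for line in out.splitlines()
            if any(p in line for p in patterns)]

if __name__ == "__main__":
    for array, status in degraded_md_arrays():
        print("DEGRADED: %s [%s]" % (array, status))
    for line in kernel_hw_complaints()[-20:]:
        print(line)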
From what I've seen? The difference between no negotiation and the best possible negotiation, when you buy whole servers, is often 50% of the total price. Sometimes more. When buying parts? It's 5%, if that. (We're talking quantities of one to five servers here. I'm sure things change if you are buying hundreds or thousands at once and you are savvy; I've never seen a savvy entity negotiate for hundreds or thousands of servers, or parts for same.)
That, and to negotiate well you need all of the knowledge you'd need to buy the parts and build your own server anyway. Either way, unless you are prepared to just pay full price, you need to keep up with hardware and its relative costs.
Heck, I'll do all the assembly and burn-in work, and keep spares around, just to avoid the negotiation bullshit. For me? It's far easier. And if you ask me? Dealing with broken hardware is downright relaxing compared with trying to convince some goddamn monkey that the reboot that happened last night really was a hardware issue, and yes, it came back up, but it still needs to get fixed. "But it works now, right?" (Sorry... I just remember some extremely frustrating experiences dealing with Dell's version of Mordac. And I was getting paid by the hour, so if corporations had feelings, the company hiring me would have felt even worse.)
But that has as much to do with who I am and what skills I have as anything else. If I were an extrovert, I'd probably find 'educating' tech support to be less of a hellish experience.
And, of course, on all but the super-expensive plans, if it's not acceptable to be down all weekend for a hardware failure on Friday night, well, you still need those spares.
(Of course, if I only had one or two servers, it'd probably make sense to just pay twice the price and be done with it. But nearly all of my net worth is tied up in server hardware, so I can't walk away from that 50%.)
But yeah, my point is just that if you build the hardware yourself, you only have to do a small subset of the 'systems integrator' work. Yes, it's a lot more technical work than just firing the money cannon at Dell or HP, but it's a lot less social work than trying to get a reasonable deal, or reasonable service, out of Dell or HP.
John R Pierce wrote:
J> The SuperMicro Intel stuff seems just fine. I'd be more leery of AMD.
On Tue, 26 Jun 2012 13:39:06 -0400, m.roth@5-cent.us said:
M> We've had a number of servers fail, and it *seems* to be related to the motherboard. In fact, I just got the pass, and asked the secretary to call FedEx today to ship another one back to the vendor.
Second that. We bought three servers, and two failed due to motherboard problems. The third has performed beautifully for around 5 years.
We had warranty support, but not directly from SuperMicro. The reseller didn't return our calls, and after a short time their phone was disconnected because they had gone bankrupt. No more warranty for us; too bad, so sad.
Another thing to consider is the BIOS; it was old when we bought the server, too old to handle drives > 1TB even 18 months after such drives had hit the market.
I'd also second the "don't buy Sun/Oracle" recommendation. Oracle isn't interested in anything but Fortune 50 business, and it shows.
On 06/26/12 7:59 PM, Karl Vogel wrote:
I'd also second the "don't buy Sun/Oracle" recommendation. Oracle isn't interested in anything but Fortune 50 business, and it shows.
It's a shame, really, as Sun had some nice kit prior to the buyout.
Re: Oracle's interest... they are only interested in getting your money; EVERYthing else is secondary, and that's true even if you're a global Fortune 50-size company. I work for one, and we get atrocious support.
2012/6/26 Rafał Radecki radecki.rafal@gmail.com:
Hi all.
I work for a hosting provider in Poland. We are currently searching for a universal, extensible hardware platform to use in our server infrastructure. The platform should have:
- the ability to install up to 32GB of RAM, with at least 4 slots;
- at least 6 SATA ports;
- the ability to use SAS disks;
- at least one 1Gb/s network interface, with the option to add another;
- a size of 2U or 3U;
- hot-swap disk bays.
We are looking for a solution in which we could deploy a basic server (for example 2 SATA disks, 8GB of RAM, 1 NIC, 4 cores) and later use the same enclosure and motherboard to build a more heavy-duty server (for example 6 SATA disks, 24GB of RAM, 2 NICs, 8 cores). Which manufacturer can you recommend, and why? We are looking for something reasonably inexpensive but reliable, with good support. All servers will run CentOS 5/6 :)
Dell?
Something like a Dell PowerEdge R710?
-- Eero
On 26.6.2012 15:40, Rafał Radecki wrote:
Hi all.
I work for a hosting provider in Poland. We are currently searching for a universal, extensible hardware platform to use in our server infrastructure. The platform should have:
- the ability to install up to 32GB of RAM, with at least 4 slots;
- at least 6 SATA ports;
- the ability to use SAS disks;
- at least one 1Gb/s network interface, with the option to add another;
- a size of 2U or 3U;
- hot-swap disk bays.
We are looking for a solution in which we could deploy a basic server (for example 2 SATA disks, 8GB of RAM, 1 NIC, 4 cores) and later use the same enclosure and motherboard to build a more heavy-duty server (for example 6 SATA disks, 24GB of RAM, 2 NICs, 8 cores). Which manufacturer can you recommend, and why? We are looking for something reasonably inexpensive but reliable, with good support. All servers will run CentOS 5/6 :)
Cisco UCS? :o) Kidding - too pricey for these requirements. Your requirements are quite low, and almost every desktop motherboard can meet them nowadays. If your infrastructure is built with failure in mind, you can go the Google way: no special HW, just an HA software solution. Regards, David Hrbáč
David Hrbáč wrote:
Cisco UCS? :o) Kidding - too pricey for these requirements. Your requirements are quite low, and almost every desktop motherboard can meet them nowadays. If your infrastructure is built with failure in mind, you can go the Google way: no special HW, just an HA software solution.
I don't think you want a desktop motherboard, not for a hosting provider's servers. Spend a bit more and get server-class machines. That's right, you *did* mention 2 or 3U servers. Really, for what you want, you can get that in a 1U box. The Penguins I mentioned were, I think (I didn't order them), under $10k or $12k USD, and the Dells were all under $20k each, maybe under $15k USD.
mark
On 06/26/12 1:15 PM, m.roth@5-cent.us wrote:
That's right, you *did* mention 2 or 3U servers. Really, for what you want, you can get that in a 1U box.
The decision of 1U vs. 2U is largely driven by disk and I/O card requirements.
A 1U Intel server can take up to four 3.5" or eight 2.5" hot-swap drives, 2 CPU sockets (4 to 24 cores total), and 18 or 24 DIMMs, and has 1-2 PCIe slots.
A 2U can take as many as twelve 3.5" or twenty-four 2.5" drives, the same CPU/memory, and typically 4 PCIe slots, although I've seen more.
3U/4U stuff is generally quad-socket and a lot more expensive.
John R Pierce wrote:
On 06/26/12 1:15 PM, m.roth@5-cent.us wrote:
That's right, you *did* mention 2 or 3U servers. Really, for what you want, you can get that in a 1U box.
The decision of 1U vs. 2U is largely driven by disk and I/O card requirements.
A 1U Intel server can take up to four 3.5" or eight 2.5" hot-swap drives, 2 CPU sockets (4 to 24 cores total), and 18 or 24 DIMMs, and has 1-2 PCIe slots.
A 2U can take as many as twelve 3.5" or twenty-four 2.5" drives, the same CPU/memory, and typically 4 PCIe slots, although I've seen more.
3U/4U stuff is generally quad-socket and a lot more expensive.
Heh, heh. We have a cluster of 22 servers. One has "only" eight cores; 11 more, I think, have 48 cores, and the latest 10 have 64 cores - I think that's 12-core dies ("chips"). Everything except the first is 1U.
mark
On 06/26/2012 08:40 AM, Rafał Radecki wrote:
Hi all.
I work for a hosting provider in Poland. We are currently searching for a universal, extensible hardware platform to use in our server infrastructure. The platform should have:
- the ability to install up to 32GB of RAM, with at least 4 slots;
- at least 6 SATA ports;
- the ability to use SAS disks;
- at least one 1Gb/s network interface, with the option to add another;
- a size of 2U or 3U;
- hot-swap disk bays.
We are looking for a solution in which we could deploy a basic server (for example 2 SATA disks, 8GB of RAM, 1 NIC, 4 cores) and later use the same enclosure and motherboard to build a more heavy-duty server (for example 6 SATA disks, 24GB of RAM, 2 NICs, 8 cores). Which manufacturer can you recommend, and why? We are looking for something reasonably inexpensive but reliable, with good support. All servers will run CentOS 5/6 :)
Best regards, Rafal Radecki.
Where I work, we have a few machines from Pogo Linux which work really well: http://www.pogolinux.com/
Patrick