Greets
Anyone on the list using newer HP G4 or G5 server hardware like DL36X or DL38X with CentOS? Or other HP hardware?
Are you running Intel, AMD, or both?
Are you using SAS or SATA or a mix?
Are you using CentOS 3, CentOS 4, or starting migration to CentOS 5?
Are things rock solid stable without any issues?
I know older Compaq and HP boxen have been rock solid for us for years, yet we wanted to test the waters before we consider unloading some bread.
Thanks in advance.
- rh
--
Abba Communications, Spokane, WA
www.abbacomm.net
> Anyone on the list using newer HP G4 or G5 server hardware like DL36X or DL38X with CentOS? Or other HP hardware?
> Are you running Intel, AMD, or both?
> Are you using SAS or SATA or a mix?
> Are you using CentOS 3, CentOS 4, or starting migration to CentOS 5?
> Are things rock solid stable without any issues?
> I know older Compaq and HP boxen have been rock solid for us for years, yet we wanted to test the waters before we consider unloading some bread.
We are using quite a lot of such boxes with CentOS and RHEL. Things are mostly rock solid except for the DL380 G4. Those were absolute crap for us. If I remember right, out of 6 such servers we had to replace 1 DIMM in 3 or 4 of them and 2 DIMMs in another. We also had ASR problems with one server. Our supplier brought a replacement motherboard which was even worse than ours, so they had to put our old motherboard back and order another one from HP.
But why worry about the G4? The G5 with Core2 Intels is much, much faster. We have several such servers and they have not had any problems. CentOS 5 runs out of the box on them, and recent CentOS 4 updates also run without problems.
We are using SAS drives; we used SCSI on older servers. The new SAS controllers are also much faster than the old SCSI ones. For sequential reads/writes (dd if=... of=...) they are even several times faster.
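For reference, here is the sort of crude check I mean (the mount point and sizes below are only placeholders; make the test file roughly twice RAM so the page cache does not flatter the numbers):

    # Sequential write, then read back; destroys /data/ddtest
    dd if=/dev/zero of=/data/ddtest bs=1M count=4096 && sync
    dd if=/data/ddtest of=/dev/null bs=1M
    rm -f /data/ddtest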
When HP released the DL385 server with Opterons we bought quite a few of them and had no problems with those either. But now Core2 is faster than Opteron, so we switched back to Intel. :)
Mindaugas
On Thu, May 24, 2007 at 10:48:18PM +0300, Mindaugas said:
> <snip>
> We are using quite a lot of such boxes with CentOS and RHEL. Things are mostly rock solid except for the DL380 G4. Those were absolute crap for us. If I remember right, out of 6 such servers we had to replace 1 DIMM in 3 or 4 of them and 2 DIMMs in another. We also had ASR problems with one server. Our supplier brought a replacement motherboard which was even worse than ours, so they had to put our old motherboard back and order another one from HP.
I've got about 20 DL380 G4's and have had very few problems. We've had bad memory in a few servers, but no more frequently than in any other model of HP we have. Any memory problem I've seen has shown up within the first month. It helps that we burn them in for a month before putting them into production. I run some drive / memory exercising utilities during this time that pound on the server pretty hard. Compiling the Linux kernel over and over again also seems to be a good test :-)
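For what it's worth, a rough sketch of that kind of loop (memtester, badblocks, and a kernel tree under /usr/src/linux are assumptions here, and /dev/sdb must be a scratch disk you can destroy):

    #!/bin/bash
    # Month-long burn-in: hammer RAM, disk, and CPU in rotation.
    for i in $(seq 1 30); do
        memtester 1024 1 > /var/log/burnin-mem.$i 2>&1          # exercise 1024 MB of RAM, one pass
        badblocks -sw /dev/sdb > /var/log/burnin-disk.$i 2>&1   # destructive disk write test
        ( cd /usr/src/linux && make clean && make -j4 bzImage ) \
            > /var/log/burnin-cc.$i 2>&1                        # repeated kernel compile: CPU/RAM/IO mix
    done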
Failures after the burn-in period are quite rare, with the exception of 500 GB SATA drives which we have in a few archival arrays; they seem to go bad frequently. The 15K RPM 142 GB SCSI drives also seem to fail more often than the norm. By comparison, I have never had an EMC drive (several hundred FC and SATA) go bad in the ~2 years they have been running, and they get pounded on a LOT harder.
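If you want to spot these failures early, HP's hpacucli from the ProLiant Support Pack will report array and drive state (the slot number below is an assumption; adjust to your controller):

    # Show every controller, array, and logical drive
    hpacucli ctrl all show config
    # Per-physical-drive status on the Smart Array in slot 0
    hpacucli ctrl slot=0 pd all show status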
> But why worry about the G4? The G5 with Core2 Intels is much, much faster. We have several such servers and they have not had any problems. CentOS 5 runs out of the box on them, and recent CentOS 4 updates also run without problems.
Don't forget the firmware updates. The one problem I've had, G4's falling off the network, was solved with firmware / driver updates.
> We are using SAS drives; we used SCSI on older servers. The new SAS controllers are also much faster than the old SCSI ones. For sequential reads/writes (dd if=... of=...) they are even several times faster.
Agreed - go SAS where possible. The only downside is that SAS drive capacity is a lot smaller with the small form factor HP uses.
> When HP released the DL385 server with Opterons we bought quite a few of them and had no problems with those either. But now Core2 is faster than Opteron, so we switched back to Intel. :)
Ditto. Have about 30 385's that have been pretty solid, but the G5's are faster. It matters when you are doing jobs that take months to run... :-)
Walt Reed spake the following on 5/24/2007 1:20 PM:
> On Thu, May 24, 2007 at 10:48:18PM +0300, Mindaugas said:
> <snip>
> I've got about 20 DL380 G4's and have had very few problems. We've had bad memory in a few servers, but no more frequently than in any other model of HP we have. Any memory problem I've seen has shown up within the first month. It helps that we burn them in for a month before putting them into production. I run some drive / memory exercising utilities during this time that pound on the server pretty hard. Compiling the Linux kernel over and over again also seems to be a good test :-)
> Failures after the burn-in period are quite rare, with the exception of 500 GB SATA drives which we have in a few archival arrays; they seem to go bad frequently. The 15K RPM 142 GB SCSI drives also seem to fail more often than the norm. By comparison, I have never had an EMC drive (several hundred FC and SATA) go bad in the ~2 years they have been running, and they get pounded on a LOT harder.
I also had a rash of 500 GB SATA drive failures; I lost 4 out of 12 in the first month, and 2 more since I went into production. I think the Maxtor drives they were using are no longer being sold, and the replacements have been rock solid for over a year. The Adaptec SATA RAID controllers have been nothing but junk, but that is probably also related to the Maxtor drives. I will never buy anything but 3ware for SATA unless they mess with EXT2/3 again.
Thanks for all the feedback
So the bottom line is that the HP rackmount servers are rock solid in G1 through G5...
Just a couple of issues here and there?
I have been following the eBay G1 - G5 market for some time and have been trying to make heads or tails of it... meaning, are people shedding flawed servers or junk, or is there truly just a large secondary market there?
We use G1 and G2 boxen on some lower to mid traffic functions and they are rock solid awesome.
We are just trying to figure out how many mortgages to take out on the home to get a couple of identical G5 units...
;-)
So, what else do we need to know before we jump into the newer boxen?
- rh
--
Abba Communications, Spokane, WA
www.abbacomm.net
AbbaComm.Net spake the following on 5/24/2007 2:50 PM:
> Thanks for all the feedback
> So the bottom line is that the HP rackmount servers are rock solid in G1 through G5...
> Just a couple of issues here and there?
> I have been following the eBay G1 - G5 market for some time and have been trying to make heads or tails of it... meaning, are people shedding flawed servers or junk, or is there truly just a large secondary market there?
> We use G1 and G2 boxen on some lower to mid traffic functions and they are rock solid awesome.
> We are just trying to figure out how many mortgages to take out on the home to get a couple of identical G5 units...
> ;-)
> So, what else do we need to know before we jump into the newer boxen?
eBay gets flooded with everything -- good and bad. They could be anything from closeouts, grey market stuff, or reconditioned units, to people who just need new, fast hardware because of a Windows server upgrade. I think if the server is on HP's Linux support list for RHEL, then the current CentOS will work also. There are just a few tricks to loading the HP tools on unsupported distros.
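The usual trick (unsupported, so at your own risk) is that the PSP installer keys off /etc/redhat-release, so you present it the matching RHEL string for the duration of the install. Something like the following, where the exact release string and installer name vary by PSP version and are assumptions here:

    # Temporarily masquerade as RHEL so the PSP installer runs
    cp /etc/redhat-release /etc/redhat-release.centos
    echo "Red Hat Enterprise Linux ES release 4 (Nahant Update 4)" > /etc/redhat-release
    ./installXXX.sh        # run the installer from the unpacked PSP directory (name varies)
    mv /etc/redhat-release.centos /etc/redhat-release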
On Thu, 24 May 2007, AbbaComm.Net wrote:
> Thanks for all the feedback
> So, what else do we need to know before we jump into the newer boxen?
Funny you should ask. There is a large difference in the 'small stuff' between the DL3[68]0 G4 and G5:
1) You most likely need CentOS 3u8 or 4u3 to get an installation to work at all, because of the SAS drives.
2) iLO2 != iLO: say bye-bye to remcons, and hello to vsp and to setting up the BIOS just to get at the console in text mode at all (a sketch of that below). If you use the iLO in graphical mode (via the browser) you will probably like it.
Give yourself a month or so to figure it out before you go whole hog.
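For the vsp part, a sketch of the console setup that has worked for me on similar boxes (ttyS1 / COM2 is an assumption; check which COM port your BIOS maps the iLO2 virtual serial port to):

    # /boot/grub/grub.conf: mirror the boot menu to the virtual serial port
    serial --unit=1 --speed=115200
    terminal --timeout=5 serial console
    # ...and on each kernel line, log to both the screen and the serial port:
    #   kernel /vmlinuz-<version> ro root=<rootdev> console=tty0 console=ttyS1,115200

    # /etc/inittab: a login prompt on the VSP once the box is up
    S1:2345:respawn:/sbin/agetty ttyS1 115200 vt100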
As is usually the case, once you get the installation knocked out and the out-of-band management issues taken care of (oh, and don't forget to lock down the IPMI-capable iLO2), they are rock solid and fast. Just a bit of a shock the first time you grab onto one.
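On the lockdown point, it is worth at least auditing what the BMC exposes from the OS side (the LAN channel number below is an assumption, and iLO2 settings are usually changed from the iLO web UI or RBSU rather than via ipmitool):

    # Load the IPMI drivers, then inspect LAN settings and user accounts
    modprobe ipmi_si
    modprobe ipmi_devintf
    ipmitool lan print 1
    ipmitool user list 1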
------------------------------------------------------------------------
Jim Wildman, CISSP, RHCE
jim@rossberry.com
http://www.rossberry.com
"Society in every state is a blessing, but Government, even in its best state, is a necessary evil; in its worst state, an intolerable one." Thomas Paine
Walt Reed wrote:
> <snip>
> Any memory problem I've seen has shown up within the first month. It helps that we burn them in for a month before putting them into production. I run some drive / memory exercising utilities during this time that pound on the server pretty hard. Compiling the Linux kernel over and over again also seems to be a good test :-)
> <snip>
> Ditto. Have about 30 385's that have been pretty solid, but the G5's are faster. It matters when you are doing jobs that take months to run... :-)
Walt - If you don't mind / are allowed to, can you tell us what your servers do? The reason I ask is this:
It seems to me that many users make decisions based on what they read on these lists, in mags, in (*cringe*) Gartner reports etc, but I think we often miss the fact that many of the 'data points' come from squeaky wheels or completely irrelevant demos and 'studies'.
I always try to weight current opinions based on the author's experience, zen-ness etc, *not* volume and clever nouns. I also weight heavily people with large karma, like (on this list) John, Mark, Karanbir, Jim etc - you *know* they would have looked at any issues like outdated firmware before they comment, so if they give something a bad review you can be fairly sure it is deserved.
So to get to the point - if you are doing tasks that take months, either you are doing it wrong, or it is a *real* job that requires a *real* OS on *great* hardware to get it done - I suspect the latter, which means your data points/comments are much more relevant than most.
So ... spill the beans :)
MrKiwi
On Fri, May 25, 2007 at 10:36:21AM +1200, MrKiwi said:
> Walt - If you don't mind / are allowed to, can you tell us what your servers do? The reason I ask is this:
> <snip>
> So ... spill the beans :)
Sure... The big beasty is GIS data. We pre-render the entire US at 15 zoom levels into fairly large "meta" tiles with multiple layers, hand-tweak the 100 largest cities, then combine layers and split into smaller tiles. The result ends up a lot like Google Maps, except with demographic data instead of point data. Every time Navteq releases new map data with updated roads / etc. info, we start the process over again.
Besides the pre-rendering, we have a large pile of servers that handle real-time rendering because there is no realistic way we could pre-render 1500+ indicators for the entire US at all zoom levels. Plus we allow people to upload their own datasets.
We tried blade servers too, but AT&T threw a hissy fit about the power / heat load and would only let us use 40% of our rack space, so we went back to traditional servers (plus I hated the design of the power distribution on the p-Class enclosures. Talk about stupid.)
An interesting thing about this project is that it is built almost entirely on open source technology, with the exception of pre-rendering, which uses ESRI because the quality of the maps is much better than with open source tools such as MapServer. We use MapServer for the real-time rendering.
Besides dataplace, we have a pile of other more traditional web sites that we host. By the time you add up all the development, staging, database, etc. servers, it's a lot of equipment (all HP servers, along with the EMC and Cisco gear), and a huge amount of data.
> Anyone on the list using newer HP G4 or G5 server hardware like DL36X or DL38X with CentOS?
Yes
> Or other HP hardware?
Yes
> Are you running Intel, AMD, or both?
Both
> Are you using SAS or SATA or a mix?
On the DL360, SFF SAS. Ultra3 SCSI on the DL585s.
> Are you using CentOS 3, CentOS 4, or starting migration to CentOS 5?
RHEL 3 on the DL585s (legacy software requirements), moving to 5 when we change some stuff. CentOS 5 on the DL360.
> Are things rock solid stable without any issues?
Yes
Craig Miskell
AgResearch Limited