Hi, folks,
I'm installing a RAID controller card for a large external RAID box in a Dell server. I've got two riser slots available. Here's the question: the controller card has some large chips on one side. If I put it in riser 1, those chips face downwards in the box, making them harder to cool; if I put it in riser 2, the chips face up... but sit right over a large chip on the m/b that's got a heat sink.
Opinions on which slot to use?
mark
On 9/16/2014 13:29, m.roth@5-cent.us wrote:
Opinions on which slot to use?
My opinion is that you should read "Hot Air Rises and Heat Sinks: Everything You Know About Cooling Electronics Is Wrong" by Tony Kordyban. It is quite readable, for all that it is a serious EE book.
Thermodynamics is not a matter of opinion. Either someone is giving you a guess, in which case you have to decide if they're likely to do better than chance, or they have done the experiment or the thermal modeling, in which case they aren't offering either a guess or an opinion.
You could just assume Dell has already worked through the thermal design and has provided enough cooling headroom that both slots can be used with hot cards. If you've found that they don't fight you on warranty replacements for cooked mobos or third-party cards, go ahead and take a guess.
Or, you can do the experiment yourself. You have the equipment, you have the environment, and you have lm_sensors. Try it both ways, and let us know what you find. When your boss asks you why you are faffing about with R, tell him it's For Science! You do research there, right? Here you are, doing research, and you didn't even have to get a grant to do it. There's your science in the public interest, Mr Bossman!
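To make that concrete, here's a rough sketch of the kind of logger I mean. It assumes only the kernel hwmon sysfs files, which is where lm_sensors gets its numbers; the filenames and intervals are arbitrary choices. Run it once per slot configuration under the same load and compare the CSVs:

#!/usr/bin/env python3
# Rough sketch: log every hwmon temperature sensor to a CSV so the two
# slot configurations can be compared under the same load.  Assumes only
# the kernel hwmon sysfs interface (/sys/class/hwmon), which is where
# lm_sensors reads from.
import csv
import glob
import time

def read_temps():
    """Return {sensor_label: degrees_C} for all hwmon temperature inputs."""
    temps = {}
    for path in glob.glob("/sys/class/hwmon/hwmon*/temp*_input"):
        chip_dir = path.rsplit("/", 1)[0]
        try:
            with open(chip_dir + "/name") as f:
                chip = f.read().strip()
        except OSError:
            chip = chip_dir.rsplit("/", 1)[1]   # older kernels may lack the name file
        try:
            with open(path) as f:
                millideg = int(f.read().strip())
        except (OSError, ValueError):
            continue                            # sensor vanished or returned junk
        temps["%s:%s" % (chip, path.rsplit("/", 1)[1])] = millideg / 1000.0
    return temps

def log(outfile, duration_s=3600, interval_s=10):
    """Sample for duration_s seconds (default: the suggested hour)."""
    fields = ["time"] + sorted(read_temps())
    with open(outfile, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields, extrasaction="ignore")
        writer.writeheader()
        end = time.time() + duration_s
        while time.time() < end:
            row = read_temps()
            row["time"] = time.strftime("%Y-%m-%d %H:%M:%S")
            writer.writerow(row)
            time.sleep(interval_s)

if __name__ == "__main__":
    # run once with the card in riser 1, once in riser 2, same load both times
    log("riser1_temps.csv")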
Warren Young wrote:
On 9/16/2014 13:29, m.roth@5-cent.us wrote:
Opinions on which slot to use?
My opinion is that you should read "Hot Air Rises and Heat Sinks: Everything You Know About Cooling Electronics Is Wrong" by Tony Kordyban. It is quite readable, for all that it is a serious EE book.
My degree's in CIS, not EE, so I never got into that. Should I assume that you are an EE? If so, you could give me an opinion.... <snip>
Or, you can do the experiment yourself. You have the equipment, you have the environment, and you have lm_sensors. Try it both ways, and
No, I cannot "do the experiment". I've got to get these racked and up and running, for my users to use. They're not my toys....
let us know what you find. When your boss asks you why you are faffing about with R, tell him it's For Science! You do research there, right? Here you are, doing research, and you didn't even have to get a grant to do it. There's your science in the public interest, Mr Bossman!
We are doing science in the public interest - this is a US gov't scientific agency, and it's *not* defence-related, and they need this disk space.
mark
On 9/16/2014 1:39 PM, m.roth@5-cent.us wrote:
No, I cannot "do the experiment". I've got to get these racked and up and running, for my users to use. They're not my toys....
then call the vendor(s). ask their advice...
now... if this is a rack mount Dell PowerEdge server, and a server-oriented RAID card, the airflow is NOT convective heat rising, it's massive forced air from the front of the chassis to the rear. so orientation and whatever is not very important; lots of air will be forced past the heatsinks.
John R Pierce wrote:
On 9/16/2014 1:39 PM, m.roth@5-cent.us wrote:
No, I cannot "do the experiment". I've got to get these racked and up and running, for my users to use. They're not my toys....
then call the vendor(s). ask their advice...
now... if this is a rack mount Dell PowerEdge server, and a server-oriented RAID card, the airflow is NOT convective heat rising, it's massive forced air from the front of the chassis to the rear. so orientation and whatever is not very important; lots of air will be forced past the heatsinks.
Thanks. I was thinking that, except for the fact that it's a 1U, and so the gap between the heat sink on the m/b and the bottom of the card is not exactly large.
mark
On 9/16/2014 1:48 PM, m.roth@5-cent.us wrote:
Thanks. I was thinking that, except for the fact that it's a 1U, and so the gap between the heat sink on the m/b and the bottom of the card is not exactly large.
those 1U's tend to have a bunch of very high flow rate 40mm fans all the way across the case just behind the drive bays. noisy buggers. if either the IO card has vents on its backing plate, or there are vents on the chassis on the same side as the components, you should be golden. ideally, the heatsink fins are oriented parallel to the backplane connector so they are in line with the airflow...
On 9/16/2014 14:39, m.roth@5-cent.us wrote:
Warren Young wrote:
On 9/16/2014 13:29, m.roth@5-cent.us wrote:
Opinions on which slot to use?
My opinion is that you should read "Hot Air Rises and Heat Sinks: Everything You Know About Cooling Electronics Is Wrong" by Tony Kordyban. It is quite readable, for all that it is a serious EE book.
My degree's in CIS, not EE, so I never got into that. Should I assume that you are an EE?
No, I just play one on the Internet.
If so, you could give me an opinion....
I'm trying to tell you that a true EE would not give you an "opinion." One might give you a *guess* based on having done similar experiments or thermal modeling work, or one might insist on doing the experiment.
An experienced EE giving you a guess should couch it in plenty of warnings, since so much about success here is contingent:
- What is the operating environment temperature now?
- What will the temp be on the day when the site power goes down, cutting the aircon, while the server room keeps running on UPS, and the electronic door locks stay locked because *that* UPS is separate and someone forgot to check the battery, so it fell over as soon as the wall outlet flatlined?
- How many fans do you have in the case?
- Is it better if you add another, or worse?
Yes, it could be worse. One of the war stories in the book tells of a problem device that wouldn't keep cool enough with a single case fan, so they added a second. That made airflow worse: fitting it required cutting some really big holes in the case right next to the first fan, so a lot of the air went in one hole and right out the other.
Another war story, from personal experience: a workstation from a big-name manufacturer which ran wonderfully with the case closed up, but when you let all that cool outside air in through the side panel, it went into thermal lock-up because you'd disrupted the carefully-designed airflow channels.
The correct answer isn't always intuitively correct.
Or, you can do the experiment yourself. You have the equipment, you have the environment, and you have lm_sensors. Try it both ways, and
No, I cannot "do the experiment". I've got to get these racked and up and running, for my users to use. They're not my toys....
You can't spend two hours to run them under load, one hour in each configuration? These servers have to be production ready two minutes after first power-on?
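And by "under load" I mean something that actually exercises the RAID controller for that hour, not an idle box. A throwaway sketch, with a made-up mount point and sizes you'd adjust for the real array:

#!/usr/bin/env python3
# Throwaway sketch to keep the RAID controller busy while the temperature
# logger runs.  The target path and sizes are made up; point it at a
# filesystem on the external array and size it to taste.
import os
import time

TARGET = "/mnt/raidbox/burnin.dat"   # placeholder path on the external array
CHUNK = 64 * 1024 * 1024             # write in 64 MiB pieces
TOTAL = 8 * 1024 ** 3                # 8 GiB per pass

def one_pass():
    buf = os.urandom(CHUNK)
    with open(TARGET, "wb") as f:            # sequential write pass
        for _ in range(TOTAL // CHUNK):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    with open(TARGET, "rb") as f:            # read-back pass (may be partly
        while f.read(CHUNK):                 # served from page cache on a
            pass                             # box with lots of RAM)

if __name__ == "__main__":
    end = time.time() + 3600                 # the suggested one hour per slot
    while time.time() < end:
        one_pass()
    os.remove(TARGET)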
Warren Young wrote:
On 9/16/2014 14:39, m.roth@5-cent.us wrote:
Warren Young wrote:
On 9/16/2014 13:29, m.roth@5-cent.us wrote:
Opinions on which slot to use?
My opinion is that you should read "Hot Air Rises and Heat Sinks: Everything You Know About Cooling Electronics Is Wrong" by Tony Kordyban. It is quite readable, for all that it is a serious EE book.
My degree's in CIS, not EE, so I never got into that. Should I assume that you are an EE?
No, I just play one on the Internet.
If so, you could give me an opinion....
I'm trying to tell you that a true EE would not give you an "opinion."
Fine, but the admins here might have some practical experience to offer an opinion.... <snip>
- What is the operating environment temperature now?
- What will the temp be on the day when the site power goes down, cutting the aircon, while the server room keeps running on UPS, and the electronic door locks stay locked because *that* UPS is separate and someone forgot to check the battery, so it fell over as soon as the wall outlet flatlined?
Wrong scenario: if the site power goes down, very shortly a lot of the servers in the room will shut down by themselves from the firmware protection against overheating.
And if it's not during regular business hours, my manager will be notified and coming in, and he's got this odd thing called a "key" to get into the room and shut things down.
Assuming that the giant UPS next door doesn't kick in.
- How many fans do you have in the case?
Don't remember what it comes with. It's a std. Dell server.
- Is it better if you add another, or worse?
It's a rack-mount server, not someone's tower workstation. There's no place for more fans. <snip>
Another war story, from personal experience: a workstation from a big-name manufacturer which ran wonderfully with the case closed up, but when you let all that cool outside air in through the side panel, it went into thermal lock-up because you'd disrupted the carefully-designed airflow channels.
Right. I had the ignorant vendor of the video cameras and card that I bought last year ask me if I could run the server with the case open.... They really had *no* idea of what an actual server was.
The correct answer isn't always intuitively correct.
Or, you can do the experiment yourself. You have the equipment, you have the environment, and you have lm_sensors. Try it both ways, and
No, I cannot "do the experiment". I've got to get these racked and up and running, for my users to use. They're not my toys....
You can't spend two hours to run them under load, one hour in each configuration? These servers have to be production ready two minutes after first power-on?
I could... if I wanted the extra work. As it is, I may get the experiment anyway: I realized that the right-hand riser is *only* for a short (low-profile) card. Now, there are two servers and two RAID boxes, and one of the RAID boxes came with an adapter bracket to fit the back of a short slot... and for the life of me, I can't find one for the other....
mark
On Tue, September 16, 2014 2:29 pm, m.roth@5-cent.us wrote:
Hi, folks,
I'm installing a RAID controller card for a large external RAID box in a Dell server. I've got two riser slots available. Here's the question: the controller card has some large chips on one side. If I put it in riser 1, those chips face downwards in the box, making them harder to cool; if I put it in riser 2, the chips face up... but sit right over a large chip on the m/b that's got a heat sink.
I would use riser 1. If you use riser 2, you will create more resistance to airflow in an area where there is already one big heater. I assume there are no baffles separating the airflow going from front to back (or from the middle, where the set of fans is usually located). My guess is that chips facing down have much less effect on cooling effectiveness, unless your configuration is such that the airflow path along the chips (i.e. underneath the board in riser 1) is totally blocked.
Valeri
++++++++++++++++++++++++++++++++++++++++
Valeri Galtsev
Sr System Administrator
Department of Astronomy and Astrophysics
Kavli Institute for Cosmological Physics
University of Chicago
Phone: 773-702-4247
++++++++++++++++++++++++++++++++++++++++
On 16/09/2014 22:35, Valeri Galtsev wrote:
I would use riser 1. If you use riser 2, you will create more resistance to airflow in an area where there is already one big heater. I assume there are no baffles separating the airflow going from front to back (or from the middle, where the set of fans is usually located). My guess is that chips facing down have much less effect on cooling effectiveness, unless your configuration is such that the airflow path along the chips (i.e. underneath the board in riser 1) is totally blocked.
I concur here; my guess is to use riser 1 as well.
If it were me, I would add the temperature sensors to my monitoring and change slots if I thought the temperature was running too high.
If you have two servers, put one card in riser 1 and the other in riser 2 (assuming the manufacturer hasn't got a recommendation or warning about using either). I would then add both to monitoring, and if there were any massive difference in temperature, or perhaps even performance (not all PCIe slots are the same), I would schedule a maintenance window to swap the card in one of the servers after deployment.
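For the monitoring piece, even a simple Nagios-style check is enough. A rough sketch, reading the hwmon sysfs files that lm_sensors uses; the thresholds here are placeholders, not limits from any datasheet:

#!/usr/bin/env python3
# Rough sketch of a Nagios-style temperature check for folding the hwmon
# sensors into existing monitoring.  The thresholds are placeholders, not
# limits from any datasheet.
import glob
import sys

WARN_C = 70.0   # assumed warning threshold, degrees C
CRIT_C = 85.0   # assumed critical threshold, degrees C

def max_temp():
    """Highest reading across all hwmon temperature inputs, in degrees C."""
    readings = []
    for path in glob.glob("/sys/class/hwmon/hwmon*/temp*_input"):
        try:
            with open(path) as f:
                readings.append(int(f.read().strip()) / 1000.0)
        except (OSError, ValueError):
            pass
    return max(readings) if readings else None

if __name__ == "__main__":
    t = max_temp()
    if t is None:
        print("UNKNOWN - no hwmon temperature sensors found")
        sys.exit(3)
    elif t >= CRIT_C:
        print("CRITICAL - hottest sensor is %.1f C" % t)
        sys.exit(2)
    elif t >= WARN_C:
        print("WARNING - hottest sensor is %.1f C" % t)
        sys.exit(1)
    else:
        print("OK - hottest sensor is %.1f C" % t)
        sys.exit(0)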