2011/3/21 <m.roth@5-cent.us>:
> Vladimir Budnev wrote:
>> Hello community.
>>
>> We are running CentOS 4.8 on a SuperMicro SYS-6026T-3RF with 2x Intel Xeon
>> E5630 and 8x Kingston KVR1333D3D4R9S/4G.
>>
>> For some time we have been seeing lots of MCEs in mcelog and we can't find
>> the reason.
>
> The only thing that shows up there (when it shows, since sometimes it
> doesn't seem to) is a hardware error. You *WILL* be replacing hardware,
> sometime soon, like yesterday.
>
> "Normal" this is not: *ANYTHING* here is Bad News. First, you've got DIMMs
> failing. CPU 53, assuming this system doesn't have 53+ physical CPUs,
> means you have x-core CPUs, so you need to divide by x; if it's a
> 12-core system with 6 physical chips, that would make it DIMM 8
> associated with that physical CPU.
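
(A side note for the archives: one quick way to see which physical package a
given logical CPU number belongs to, assuming the kernel exposes "physical id"
in /proc/cpuinfo, is something like:

  grep -E "^(processor|physical id)" /proc/cpuinfo | paste - -

Each output line then pairs a "processor" number with its "physical id",
i.e. the socket, so a CPU number from mcelog can be traced back to a socket
without guessing at the core count.)
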
> <snip>
>> One more interesting thing is the following output:
>> [root@zuno]# cat /var/log/mcelog | grep CPU | sort | awk '{print $2}' | uniq
>> 32
>> 33
>> 34
>> 35
>> 50
>> 51
>> 52
>> 53
>>
>> Those numbers are always the same.
>
> Bad news: you have *two* DIMMs failing, one associated with the physical
> CPU that has core 53, and another associated with the physical CPU that
> has cores 32-35.
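
(For what it's worth, a slightly longer version of the pipeline quoted above
also shows how many errors each of those CPU numbers has logged, which can
hint at whether one suspect module is much worse than the other. A rough
sketch, assuming the relevant mcelog lines start with "CPU":

  grep "^CPU" /var/log/mcelog | awk '{print $2}' | sort -n | uniq -c | sort -rn

The first column is the error count, the second the CPU number reported by
mcelog.)
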
>
> Talk to your OEM support to help identify which banks need replacing,
> and/or find a motherboard diagram.
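
(Also, on boards where the BIOS fills in the DMI tables properly, which is an
assumption rather than a given, and with a dmidecode new enough to accept -t,
the DIMM slot labels can be listed next to what is populated in them:

  dmidecode -t memory | grep -E "Locator|Size"

The "Locator" lines usually match the silk-screened slot names (A1, A2, B1,
...) on the board, which makes it easier to match an error report to a
physical stick even without the full motherboard diagram.)
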
>
> mark, who has to deal *again* with one machine with the same
> problem....

Thanks for the answer!

Last night we did some investigation to find out which RAM modules are bad.

To be clear, we have 8 modules of 4 GB each.

First we removed the modules from the a3 and b1 slots for each CPU, and there
was no change in behaviour: errors appeared right after boot.
Then we removed a1 and a2 (yes, I know that "for high performance" we should
populate modules starting from a1, but that was our mistake, and in any case
the server started) and... there were no errors for an hour. Usually we see
errors roughly every 5 minutes.
Then we put 2 modules back. At that step we had the a1, a3 and b1 slots
occupied for each CPU. No errors.

Finally we put the last 2 modules back... and still no errors. It should be
noted that at that point we had exactly the same module placement as before
the experiment.
It sounds strange, but at first glance it looks like something was wrong with
the module placement. What we can't explain is why the problem didn't show up
for the first days, even the first month, the server was running. No one
touched the server hardware, so I have no idea what it was.
Now we are just waiting to see whether the errors come back.
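
In the meantime, one simple way to keep an eye on it, rather than re-reading
the log by hand, is something along these lines (a crude sketch using the
same /var/log/mcelog path as above):

  watch -n 300 'grep -c "^CPU" /var/log/mcelog'

If that count stays flat for a few days, reseating the modules probably was
the fix; if it starts climbing again, it is back to swapping DIMMs.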