I'm just about to build a new server and I'm looking for recommendations on what hardware to use.
I'm happy with either a brand name or building my own, but I would like a hardware RAID controller to run a pair of disks as RAID1 that is actually compatible with, and manageable through, Linux.
Any recommendations would be appreciated.
Gary
Hello, what is the purpose of this server?
DO NOT buy the newer HPE DL20 gen9 or ML10 gen9 servers then (especially if using CentOS 6.x)
I don't use hardware raid (mdadm for the win!) so cannot speak to that.
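If software RAID turns out to be acceptable, a two-disk RAID1 under mdadm is about as simple as it gets. A rough sketch, assuming CentOS and two otherwise-unused disks (sda1/sdb1 and md0 are only example names, double-check yours with lsblk first):

    # create a two-disk mirror from two partitions
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

    # watch the initial sync, then check array health
    cat /proc/mdstat
    mdadm --detail /dev/md0

    # record the array so it assembles at boot
    mdadm --detail --scan >> /etc/mdadm.conf

And it's fully manageable from Linux, no vendor tools needed.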
DL20, bought it on a stock 'B' sale. Great price. Works well on Windows. HPE doesn't sell hard drive trays, etc. You pretty much have to buy their equipment.
You CAN get 3rd party parts (drive trays, etc.) but they will nickel and dime you. For example, try to get an HPE ODD-power-to-SATA-power adapter; I haven't been able to locate one. The one HPE sells doesn't work on a standard SSD drive. There is **NO** standard place inside the machine to mount an SSD drive either, and **NO** standard power connectors either. So trying to install a bootable SSD and then RAID your storage drives will be a task. One I gave up on.
THEIR website says the DL20 gen9 supports CentOS 6.x.... In reality, NO, unless you want the pain of downloading and compiling drivers, etc. If you don't use THEIR hard drives, they work, but you don't get "LED support" from the Smart Array controller, i.e. if a drive craps out, the Smart Array won't light up the dead drive's tray. You have a 50/50 shot at guessing which one. At least the Smart Array software (in Windows) will tell you what bay it's in.
Once you get past the crap of setting up the DL20 (again, you have to use the Smart Provisioning utility to install Server 2012 R2 on it; seriously, HPE?), it's up and running, and so far no more headaches.
My ML10 gen9 experience is a mix.
The newer ML10 gen9 experience was worse. The first several installs just never ran right: unexplained lockups and crashes, and the onboard NIC never ran right. Now it's using a transplanted install of CentOS 6.9 with Intel NICs installed, and this setup is running pretty well so far, no issues.
On the other hand, I've got a year (maybe two) older ML10 gen9 running CentOS 6.9. It hasn't given me a day of trouble from day one.
Hopefully some of this helps...
Regards,
Richard
I just put a call into AT&T Office 365 asking them to explain the spoof warning thing...
To answer your question....
At the moment, no, I can't. I like HPE stuff; we bought a DL380 gen9 about five months ago and are totally happy with it. In fairness, it's running Server 2012 R2 too, but I didn't run into the hardware gotchas I did on the other stuff. It just seems HPE skimped on their lower end stuff, and CentOS 6.x doesn't play well.
This whole incident with the DL20 JUST happened. It's (finally) been spinning Server 2012 R2 for about a week now. It was a long 5 week process just to get to this answer.
I haven't had the time to research what my next buys are going to be. I'm listening as well if someone has a suggestion.
Honestly, I'm leaning against Dell because their stuff just doesn't seem to be built to last. We have 1 T620 and 2 R620 servers. So far, just past the 5 year mark: 3 dead hard drives, 2 power supplies. That is with the machines mostly TURNED OFF. (Failed IT project after I was hired; aborted a move to a new ERP system.) With my personal Dell laptop, bought just 4 months ago, I periodically get the 6-beep power-on error. That tells me Dell quality / quality control might not be where it needs to be.
Then again, I get a constant flow of HPE advisories..... :(
I've been thinking of taking a look at Supermicro servers.
Bottom line is, they all have their quirks, problems, deficiencies....
WHY did Lenovo have to quit selling the RS140s? I *LOVE* those machines.... Fast, reliable, and they just work GREAT with CentOS 6.9!
Regards,
Richard
hw wrote:
Richard Zimmerman wrote:
DO NOT buy the newer HPE DL20 gen9 or ML10 gen9 servers then (especially if using CentOS 6.x)
What would you suggest as alternative, something from Dell?
Richard Zimmerman wrote: <snip>
Honestly, I'm leaning against Dell because their stuff just doesn't seem to be built to last. We have 1 T620 and 2 R620 servers. So far, just past the 5 year mark: 3 dead hard drives, 2 power supplies. That is with the machines mostly TURNED OFF. (Failed IT project after I was hired; aborted a move to a new ERP system.) With my personal Dell laptop, bought just 4 months ago, I periodically get the 6-beep power-on error. That tells me Dell quality / quality control might not be where it needs to be.
<snip> That's... odd. We have a bunch of Dells, R4[123]0's, R520's, R7[23]0's, an R815, etc, and they keep chugging. Some of them are 5, 6, 7 years old, and I rarely have any h/d issues, and PSU issues are equally rare.
If you're having this kind of issue, you should talk to Dell support, and escalate it, so that a manager takes ownership. I've had one do that, years ago, when we kept having issues with one machine, and they were *serious* about "taking ownership" (as opposed to Sun/Oracle, file them under "none of the above").
mark
Good to know about the HPE and Dell "gotchas", thanks to those who posted.
I can speak to SuperMicro (11 systems, mostly X9 and X10). Hardware seems to be fine, management utilities (IPMI - like iLO) are more basic. The real heartburn right now is that the browsers for Linux have pretty much dropped NPAPI which means remote console doesn't work since it needs Java. They have alternatives on their web site (look for IPMIView and IPMICFG). One of their solutions only works with Gnome (but I don't remember which one - too long ago). Differing versions of IPMI firmware have their own quirks. Bottom line: support is there but more basic and not as easy to use.
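One workaround that does not depend on Java at all is plain ipmitool from a Linux box; it will not give you the graphical console, but power control, sensors, and a text console over serial-over-LAN cover a lot of day-to-day cases. A sketch, assuming SOL is enabled on the BMC and console redirection is set up in the BIOS (the address, user, and password below are placeholders):

    # power state and sensor readings over the network
    ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret chassis power status
    ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret sensor list

    # text console via serial-over-LAN (exit with ~.)
    ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret sol activate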
I can help a little here... Yes, dropping NPAPI is a huge problem. Firefox ESR is available for Linux, 32-bit and 64-bit.
I solved my problems using a Lantronix Spider IP/KVM device, until Java updates and then refuses to run it yet again :(
https://www.mozilla.org/en-US/firefox/organizations/faq/
Hope this helps....
Richard
hw wrote:
Richard Zimmerman wrote:
DO NOT buy the newer HPE DL20 gen9 or ML10 gen9 servers then (especially if using CentOS 6.x)
What would you suggest as alternative, something from Dell?
Yep, Dell's are good. And I do *not* want to buy from HP, because their support is nothing like good. And once you run out the warranty, they don't want to even let you get things like firmware updates. Dell will.
Another company that's ok is ThinkMate, though their support ain't great, I think they're better than HP...oh, sorry, for a server, it'll be HPE (the company divided a year or two ago).
If you get a Dell, and one of their PERC cards, you're getting a rebranded LSI, sorry, Avago, um, who bought it last? Those are good and reliable, not super expensive.
Next question: you want RAID, how much storage do you need? Will 4 or 8 3.5" drives be enough? (DO NOT GET crappy 2.5" drives - they're *much* more expensive than the 3.5" drives, and have less disk space.) For the price of a 1TB 2.5", I can get at least a 4TB WD Red.
One more thing: if you go with a vendor like ThinkMate, know that most of them are reselling Supermicro systems. We used to buy a lot of Penguins, but their quality control.... At any rate, the direct names are cheaper: if you go for Dell, find a reseller, who may get you a better deal than Dell direct. If you go this route, email me offlist, and I'll recommend my preferred reseller for Dell - the discounts are good.
mark
m.roth@5-cent.us wrote:
Next question: you want RAID, how much storage do you need? Will 4 or 8 3.5" drives be enough? (DO NOT GET crappy 2.5" drives - they're *much* more expensive than the 3.5" drives, and have less disk space.) For the price of a 1TB 2.5", I can get at least a 4TB WD Red.
I will second Mark's comments here. Yes, 2.5" enterprise drives have been an issue. +1 for the WD Red drives; so far, with 3.5" 2TB and 4TB drives, ZERO issues. I've had good luck with HGST NAS drives too. Unfortunately, that will come to an end soon (with WD owning HGST).
Regards,
Richard
Richard Zimmerman wrote:
I will second Mark's comments here. Yes, 2.5" enterprise drives have been an issue. +1 for the WD Red drives; so far, with 3.5" 2TB and 4TB drives, ZERO issues. I've had good luck with HGST NAS drives too. Unfortunately, that will come to an end soon (with WD owning HGST).
Yeah - the WD Reds, and there are also NAS-rated Seagates, are about 1.3 or so times the price of consumer-grade, but work in servers (consumer grade WILL NOT*), whereas enterprise-grade are about 3 times the price of consumer grade, and the quality difference between NAS-rated and enterprise isn't especially noticeable.
* The big differences, in addition to quality, between NAS-rated or enterprise-rated and consumer/desktop grade are these: the desktop drives REALLY, REALLY want to spin down whenever they can, and, most significant, TLER (time-limited error recovery, I think). The consumer, desktop, or laptop drives will, on encountering a hardware error, keep trying for up to two *minutes*. The ones meant for servers give up and relocate the sector after seven *seconds*. Servers gag and actively dislike a drive when it takes too long....
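You can actually check, and on many drives set, that error-recovery timeout yourself with smartctl; a quick sketch, assuming the drive supports SCT ERC at all (a lot of desktop drives simply refuse, which is the whole point):

    # show current error recovery timeouts (read, write), in tenths of a second
    smartctl -l scterc /dev/sda

    # set both to 7 seconds, NAS/enterprise style; usually not persistent
    # across power cycles, so people re-apply it at boot
    smartctl -l scterc,70,70 /dev/sda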
Richard Zimmerman wrote:
I will second Mark's comments here. Yes, 2.5" enterprise drives have been an issue. +1 for the WD Red drives; so far, with 3.5" 2TB and 4TB drives, ZERO issues. I've had good luck with HGST NAS drives too. Unfortunately, that will come to an end soon (with WD owning HGST).
Most servers can fit only 2.5" disks these days. I keep wondering what everyone is doing about storage.
On 2 November 2017 at 12:21, hw hw@gc-24.de wrote:
Most servers can fit only 2.5" disks these days. I keep wondering what everyone is doing about storage.
The 2.5 inch drives have a pretty good lifetime these days and seem to be all you can get for various storage systems. It is like back when we all wanted and loved 5.25 inch drives and all you could get was the crappy 3.5 inch ones. And I expect that in 3-5 years the 2.5 inch ones will be replaced and you'll only be able to get the in-card, SIMM-like drives.
Stephen John Smoogen wrote:
The 2.5 inch drives have a pretty good lifetime these days and seem to be all you can get for various storage systems. It is like back when we all wanted and loved 5.25 inch drives and all you could get was the crappy 3.5 inch ones. And I expect that in 3-5 years the 2.5 inch ones will be replaced and you'll only be able to get the in-card, SIMM-like drives.
Look at the prices. Who can afford even 20TB storage in 2.5" disks.
On Thu, November 2, 2017 11:21 am, hw wrote:
Most servers can fit only 2.5" disks these days. I keep wondering what everyone is doing about storage.
Ignoring the existence of 2.5 inch, and getting rackmount machines with 3.5 inch drives. Space wise (meaning GB wise) per U of rack they are at the very least the same, only much cheaper per GB.
Just my $0.02
Valeri
Valeri Galtsev wrote:
Ignoring the existence of 2.5 inch, and getting rackmount machines with 3.5 inch drives. Space wise (meaning GB wise) per U of rack they are at the very least the same, only much cheaper per GB.
Y'know, I just had a thought: are there folks here who, when they say "server", are *not* thinking of rackmount servers?
mark
m.roth@5-cent.us wrote:
Y'know, I just had a thought: are there folks here who, when they say "server", are *not* thinking of rackmount servers?
Does it matter? 19" cases are very well thought out, easy to work on, and fit nicely into the racks. You can always use something else and enjoy the disadvantages, but why would you?
On 11/3/2017 1:19 AM, hw wrote:
Y'know, I just had a thought: are there folks here who, when they say "server", are *not* thinking of rackmount servers?
Does it matter? 19" cases are very well thought out, easy to work on, and fit nicely into the racks. You can always use something else and enjoy the disadvantages, but why would you?
Rack servers tend to be rather noisy; if they are being used in an SMB or SOHO environment, you're probably looking at a tower server.
Most servers can fit only 2.5" disks these days. I keep wondering what everyone is doing about storage.
The DL20 gen9 I bought was set up LFF (3.5").
The DL380 gen9 could be either SFF (2.5") or LFF. I had to buy SFF for our new server because I was told to spec / build it exactly to the vendor's recommendation.
To better answer this: agreed, I'm not a fanboy of 2.5" stuff in enterprise equipment. To me, a better but more costly answer would be to set up an LFF SAN server and go from there.
My employer is an SMB (60 people?) and our storage is exploding at times. SANs can give more economical storage and flexibility, especially since we're considering fail-over scenarios for not only our Windows ERP software but also our all-Linux-based :) file servers.
Regards,
Richard
Richard Zimmerman wrote:
My employer is an SMB (60 people?) and our storage is exploding at times. SANs can give more economical storage and flexibility, especially since we're considering fail-over scenarios for not only our Windows ERP software but also our all-Linux-based :) file servers.
Storage.... Ok, I'll give a plug to one of our favorite vendors: AC&NC, who manufacture JetStor. They're RAID appliances, with a very nice internal webserver to manage them, and yes, it can send emails. The prices are *very* reasonable, cheaper than Dell, or HP, or even NetApp. Lessee, we just bought a couple, one for a new system, one for a backup thereof. Each of them ran just over $12k, with 12 10TB SAS drives and a DAS card. And they are both reliable and *last*. We did just finally retire the one pair that had a true SCSI connection....
mark
hw wrote:
Most servers can fit only 2.5" disks these days. I keep wondering what everyone is doing about storage.
Sorry, that depends 100% on what you *order*. We tell our resellers that we want 3.5" drives, and that's what we get. All the vendors, including Dell and the smaller ones, let you configure what you want online to price it out, and they *all* offer 3.5" drives.
The only 2.5" drives that we're ok with getting are the two internal SSD's for RAID 1 for the o/s, and nothing else.
You may not be talking to the right sales folks.
m.roth@5-cent.us wrote:
Sorry, that depends 100% on what you *order*. We tell our resellers that we want 3.5" drives, and that's what we get. All the vendors, including Dell and the smaller ones, let you configure what you want online to price it out, and they *all* offer 3.5" drives.
The only 2.5" drives that we're ok with getting are the two internal SSD's for RAID 1 for the o/s, and nothing else.
You may not be talking to the right sales folks.
That only applies when you buy new. Look at what you can get used, and you'll see that there's basically nothing that fits 3.5" drives.
On 11/3/2017 1:25 AM, hw wrote:
That only applies when you buy new. Look at what you can get used, and you'll see that there's basically nothing that fits 3.5" drives.
I bought a used HP DL180g6 a couple years ago, 12 x 3.5" on the front panel and 2 more in back; it came with all 14 HP trays and dual X5650. It's a personal/charity server sitting at a coloc here in town. I have several of the same model server at work with 25 x 2.5" drives (and X5660 and more RAM); they are workhorses.
John R Pierce wrote:
I bought a used HP DL180g6 a couple years ago, 12 x 3.5" on the front panel and 2 more in back; it came with all 14 HP trays and dual X5650. It's a personal/charity server sitting at a coloc here in town. I have several of the same model server at work with 25 x 2.5" drives (and X5660 and more RAM); they are workhorses.
A couple years ago, yes. Now, not anymore.
On 11/4/2017 1:54 AM, hw wrote:
A couple years ago, yes. Now, not anymore.
https://www.ebay.com/i/253122917302?chn=ps&dispItem=1
On 11/2/2017 9:21 AM, hw wrote:
Most servers can fit only 2.5" disks these days. I keep wondering what everyone is doing about storage.
2.5" SAS drives spinning at 10k and 15k RPM are the performance solution for online storage, like databases and so forth. also make more sense for large arrays of SSDs, as they don't even come in 3.5". With 2.5" you can pack more disks per U (24-25 2.5" per 2U face, vs 12 3.5" max per 2U)... more disks == more IOPS.
3.5" SATA drives spinning at 5400 and 7200 rpm are the choice for large capacity bulk 'nearline' storage which is typically sequentially written once
John R Pierce wrote:
2.5" SAS drives spinning at 10k and 15k RPM are the performance solution for online storage, like databases and so forth. also make more sense for large arrays of SSDs, as they don't even come in 3.5". With 2.5" you can pack more disks per U (24-25 2.5" per 2U face, vs 12 3.5" max per 2U)... more disks == more IOPS.
3.5" SATA drives spinning at 5400 and 7200 rpm are the choice for large capacity bulk 'nearline' storage which is typically sequentially written once
We have a fair number of SAS 3.5" drives, and yes, 10k or 15k speeds.
mark
On 11/2/2017 2:18 PM, m.roth@5-cent.us wrote:
We have a fair number of SAS 3.5" drives, and yes, 10k or 15k speeds.
those are internally 2.5" disks in a 3.5" frame. you can't spin a 3.5" disk much faster than 7200 rpm without it coming apart.
John R Pierce wrote:
On 11/2/2017 2:18 PM, m.roth@5-cent.us wrote:
We have a fair number of SAS 3.5" drives, and yes, 10k or 15k speeds.
those are internally 2.5" disks in a 3.5" frame. you can't spin a 3.5" disk much faster than 7200 rpm without it coming apart.
Sorry, that's incorrect. I have, sitting here in front of me, a Dell-branded Seagate Cheetah, 600GB (it's a few years old) 15k 3.5" drive.
mark
On 11/2/2017 2:35 PM, m.roth@5-cent.us wrote:
Sorry, that's incorrect. I have, sitting here in front of me, a Dell-branded Seagate Cheetah, 600GB (it's a few years old) 15k 3.5" drive.
if you take one of those apart, you will find ~63mm (2.5") disk platters inside, and an identical mechanism to the 2.5" 600GB 15000 rpm drive of the same generation, just sitting in a larger frame.
John R Pierce wrote:
2.5" SAS drives spinning at 10k and 15k RPM are the performance solution for online storage, like databases and so forth. also make more sense for large arrays of SSDs, as they don't even come in 3.5". With 2.5" you can pack more disks per U (24-25 2.5" per 2U face, vs 12 3.5" max per 2U)... more disks == more IOPS.
That's not for storage, because it's so expensive that you can only use it for the limited amounts of data that actually benefit from, or require, the advantage in performance. For this application, it makes perfect sense.
3.5" SATA drives spinning at 5400 and 7200 rpm are the choice for large capacity bulk 'nearline' storage which is typically sequentially written once
Why would you write them only once? Where are you storing your data when you do that?
On 11/3/2017 1:31 AM, hw wrote:
That's not for storage, because it's so expensive that you can only use it for the limited amounts of data that actually benefit from, or require, the advantage in performance. For this application, it makes perfect sense.
online high performance storage, vs nearline/archival storage. the first needs high IOPS and high concurrency. the 2nd needs high capacity, fast sequential speeds but little or no random access.. two completely different requirements. both are 'storage'.
3.5" SATA drives spinning at 5400 and 7200 rpm are the choice for large capacity bulk 'nearline' storage which is typically sequentially written once
Why would you write them only once? Where are you storing your data when you do that?
I meant to say write occasionally. on a nearline bulk system, files tend to get written sequentially, and stored for a long time.
John R Pierce wrote:
online high performance storage, vs nearline/archival storage. the first needs high IOPS and high concurrency. the 2nd needs high capacity, fast sequential speeds but little or no random access.. two completely different requirements. both are 'storage'.
Not really. You don't put things into storage you're using frequently or all the time. Instead, you keep them around where you can get at them easily when you need them. Usually, that isn't a lot of things.
Storage is for things you don't use as much and usually provides a lot of room to put things.
I meant to say write occasionally. on a nearline bulk system, files tend to get written sequentially, and stored for a long time.
Data in a database is stored for a long time as well, and a lot of it isn't written to often, or basically only once. A spreadsheet may be written to more often, yet you might put it into storage because you have large amounts of data like that.
For large amounts of data, 2.5" disks aren't suitable. Since there are basically no servers at all that you can buy used which fit 3.5" disks, I'm wondering what everyone does about storage. The servers that fit 3.5" disks were all sold a couple of years ago, and even then it was difficult to find anything that had room for more than 6 disks.
m.roth@5-cent.us wrote:
Yep, Dell's are good.
That's good to hear.
And I do *not* want to buy from HP, because their support is nothing like good.
Indeed, I wouldn't buy HP new. They don't even give you a price for a new battery for a UPS, but tell you to open a ticket to get a price and expect you to pay for opening the ticket, and they have finally managed to completely mess up their web site so that you can't find anything anymore.
And once you run out the warranty, they don't want to even let you get things like firmware updates. Dell will.
Yes, this is a really big problem which makes me look out for other manufacturers. I really like their hardware, but the manufacturer not standing behind their product breaks the deal.
Another company that's ok is ThinkMate, though their support ain't great, I think they're better than HP...oh, sorry, for a server, it'll be HPE (the company divided a year or two ago).
I've never heard of ThinkMate.
If you get a Dell, and one of their PERC cards, you're getting a rebranded LSI, sorry, Avago, um, who bought it last? Those are good and reliable, not super expensive.
Those don't work at all. I had to return two of them because neither worked in any of the boards I tried them in, and the smart arrays I replaced them with work in the same boards. Dell always had a reputation for making incompatible hardware, and that experience proved it.
Maybe they work when you have Dell hardware, but I have none.
On Thu, November 2, 2017 11:18 am, hw wrote:
Maybe they work when you have Dell hardware, but I have none.
If you do not have Dell server hardware, my choice of [hardware] RAID cards would be:
Areca
LSI (or whoever owns that line these days - Intel was the last one, I recollect)
With LSI beware that they have really nasty command line client, and do not have raid watch daemon with web interface like late 3ware had (alas, 3ware after they were bought out several times by competitors were drawn down out of existence).
Good luck!
Valeri
On 2017-11-02, Valeri Galtsev galtsev@kicp.uchicago.edu wrote:
LSI (or whoever owns that line these days - Intel was the last one, I recollect)
With LSI beware that they have really nasty command line client, and do not have raid watch daemon with web interface like late 3ware had (alas, 3ware after they were bought out several times by competitors were drawn down out of existence).
I believe Broadcom now owns LSI. LSI killed the 3ware line soon after they bought it, so the MegaRAID line is it from them now.
Seconded on the horrific LSI command line tools. Actually they have two tools, MegaCli and storcli. They're both horrible, storcli slightly less so. OTOH once you get your arrays configured you can forget about storcli (at least until a drive fails).
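For what it's worth, the handful of read-only status checks is about all I ever touch once the arrays exist; roughly equivalent invocations in both tools (binary names and install paths vary by package, these are the usual RPM locations):

    # storcli: controller summary, virtual drives, physical drives
    /opt/MegaRAID/storcli/storcli64 /c0 show
    /opt/MegaRAID/storcli/storcli64 /c0/vall show
    /opt/MegaRAID/storcli/storcli64 /c0/eall/sall show

    # the older MegaCli equivalents
    /opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aAll
    /opt/MegaRAID/MegaCli/MegaCli64 -PDList -aAll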
There are Nagios plugins that can check the status of LSI controllers, arrays, and drives. The plugin is nice even if you don't use Nagios; it'd be pretty easy to write a short shell wrapper that sent email if the plugin status wasn't OK.
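The wrapper really does not need to be much more than this - a rough sketch, with the plugin path and mail address as placeholders; Nagios-style plugins exit 0 for OK and non-zero for warning/critical/unknown, which is all it checks:

    #!/bin/bash
    # mail the plugin output if the RAID check is anything other than OK
    PLUGIN=/usr/lib64/nagios/plugins/check_lsi_raid   # placeholder path
    MAILTO=root@example.com

    OUTPUT=$("$PLUGIN" 2>&1)
    STATUS=$?

    if [ "$STATUS" -ne 0 ]; then
        echo "$OUTPUT" | mail -s "RAID check on $(hostname): exit $STATUS" "$MAILTO"
    fi

Drop that into cron and you only hear about it when something is wrong.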
--keith
Thanks, Keith, you just solved one of my problems (and I do use nagios, so life is even better ;-)
Valeri
On 2017-11-02, Valeri Galtsev galtsev@kicp.uchicago.edu wrote:
Thanks, Keith, you just solved one of my problems (and I do use nagios, so life is even better ;-)
Fabulous! Glad I could help. :)
I think the one I'm using is by Thomas Krenn:
https://github.com/thomas-krenn/check_lsi_raid
It's pretty thorough. It's a little too sensitive sometimes; for example, it will alert critical for a drive that's rebuilding (e.g. if you replaced a failing drive recently). But it covers everything I know of, including physical devices, logical volumes, and BBUs.
--keith
Valeri Galtsev wrote:
Areca
Areca is forbiddingly expensive.
With LSI beware that they have really nasty command line client, and do not have raid watch daemon with web interface like late 3ware had (alas, 3ware after they were bought out several times by competitors were drawn down out of existence).
I like CLIs and don't like web interfaces ... 3ware used to make good cards, though also expensive.
On Fri, November 3, 2017 3:36 am, hw wrote:
Areca is forbiddingly expensive.
Yes, and it is worth every dollar it costs. All good RAID cards will be on the same price level. Those cheaper ones I will not let into our stables (don't get me started ranting about them...)
I like CLIs and don't like web interfaces ...
I _am_ a command line person myself. Yet, when dealing with RAID, I do prefer GUI interface, as it is much harder to screw up when you use 3ware web interface, compared to, say, 3ware command client interface, the last being much better and clearer than LSI command client... Again, it can be just me, or it can be the same for many people that our perception of things in GUI is less prone to grave mistakes.
Valeri
Valeri Galtsev wrote:
With LSI beware that they have really nasty command line client, and do not have raid watch daemon with web interface like late 3ware had (alas, 3ware after they were bought out several times by competitors were drawn down out of existence).
Um, er, if you think the MegaRAID command line tool is user-unfriendly, the 3-Com one was downright user-hostile. *shudder* I'm really glad the systems with those have been surplussed.....
As I said, anyone who wants an easy to use script that works with MegaRAID, drop me an email, and I'll send it to you. It's a script, so you can verify it's not going to hold your RAID for ransom. <g>
mark
Valeri Galtsev wrote:
Yes, and it is worth every dollar it costs. All good RAID cards will be on the same price level. Those cheaper ones I will not let into our stables (don't get me started ranting about them...)
How does spending between 300 and 800 for an Areca 8 port pay out when you can get a P410 for less than 100? Are they 3--8 times faster, 3--8 times easier to replace, 3--8 times more reliable, 3--8 times easier to use, 3--8 times more durable, 3--8 times more energy efficient? What is it that makes them worthwhile?
[...]
I like CLIs and don't like web interfaces ...
I _am_ a command line person myself. Yet, when dealing with RAID, I do prefer GUI interface, as it is much harder to screw up when you use 3ware web interface, compared to, say, 3ware command client interface, the last being much better and clearer than LSI command client... Again, it can be just me, or it can be the same for many people that our perception of things in GUI is less prone to grave mistakes.
It's much easier to click the wrong button in a GUI than it is to enter the wrong command in a cli. If the cli is poor, the gui may seem much better --- I have a switch like that and I'm pissed that the cli is so bad. Unfortunately, there aren't any decent 1GB switches with at least 24 ports that are fanless :(
And how do you automate a gui? That's way easier to do with a cli.
On Sat, November 4, 2017 4:32 am, hw wrote:
How does spending between 300 and 800 for an Areca 8 port pay out when you can get a P410 for less than 100? Are they 3--8 times faster, 3--8 times easier to replace, 3--8 times more reliable, 3--8 times easier to use, 3--8 times more durable, 3--8 times more energy efficient? What is it that makes them worthwhile?
HP P410 controller is by no means close and by no means comparable with any of Areca RAID controllers. If I'm reading the description correctly, P410 supports: RAID 0, RAID 1, RAID 10
All the machines I need hardware RAID on use RAID 6, anything not capable doing it is out of consideration. I can do trivial thing like mirror or striped concatenation (RAID-1) of two devices by any brainless controller, and speed where RAID-1 is concerned has to do with speed of devices themselves, not that trivial "chop and shove to different devices" controller.
Sorry, one can not compare slingshot with machine gun (my apologies about "politically incorrect" comparison).
[...]
It's much easier to click the wrong button in a GUI than it is to enter the wrong command in a cli.
I know, people are different; I leave it to everyone's own decision. The very first screwup may confirm one's beliefs or change them to the opposite.
And how do you automate a gui?
I do not. The web interface 3ware has is provided by the daemon, in which you can configure all the automated actions you need, and the daemon will do them according to your schedule (or rather, the controller itself does most of them, as configured through the web interface). Those who have used 3ware cards know it and use that nice feature.
That's way easier to do with a cli.
And yes, when with one controller the GUI daemon was failing to do a task I needed, I did use a UNIX cron job which executed what was necessary through the cli interface.
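For what it is worth, the cron job was nothing fancy, roughly along these lines (controller and unit numbers are examples, and the exact tw_cli verbs can differ between firmware revisions, so check its built-in help):

    # /etc/cron.d/raid-verify -- weekly verify of unit 0 on controller 0
    # min hour dom mon dow  user  command
    0 3 * * 6  root  /usr/sbin/tw_cli /c0/u0 start verify >> /var/log/raid-verify.log 2>&1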
This does not change my perception that _I_ with my mentality have less chance to screw up and obliterate RAID array when I need, say, to start rebuild if _I_ use GUI web interface, as opposed to command line interface (cli). Even if it is just me, I stay convinced to keep doing it this way which is safer for the data of my users that live on RAID I am dealing with.
And still after all that said, I am basically command line person ;-)
Valeri
On 2017-11-04, Valeri Galtsev galtsev@kicp.uchicago.edu wrote:
On Sat, November 4, 2017 4:32 am, hw wrote:
If the cli is poor, the gui may seem much better
Indeed. Before the storcli tool came out, the only CLI tool for the LSI cards was MegaCli, and it was atrocious. In that case I can imagine the GUI being preferable (even though the GUI isn't very good either).
Even the storcli tool isn't very good (as I've mentioned). I can completely understand someone preferring MSM (the daemon which provides the backend for the GUI tool) over storcli.
I do not. The web interface 3ware has is provided by the daemon, in which you can configure all the automated actions you need, and the daemon will do them according to your schedule (or rather, the controller itself does most of them, as configured through the web interface). Those who have used 3ware cards know it and use that nice feature.
I never used the 3dm2 web GUI. I thought it was stupid and greatly preferred tw_cli. You can set at least scheduled verifies through tw_cli. (I don't know if you could use the 3dm2 GUI to schedule other tasks.) I only use 3dm2 to send out email alerts. I tried using MSM to send out email alerts but I got way way too many alerts for trivial events, so I ended up disabling it.
This does not change my perception that _I_ with my mentality have less chance to screw up and obliterate RAID array when I need, say, to start rebuild if _I_ use GUI web interface, as opposed to command line interface (cli). Even if it is just me, I stay convinced to keep doing it this way which is safer for the data of my users that live on RAID I am dealing with.
This is probably the most important consideration. Keeping our data safe is more important than a CLI vs GUI religious war. :)
Recently I had to use the LSI BIOS' GUI to configure arrays. Let me tell you, that was really no fun at all. It was still point and click but the GUI was so clunky that it was very difficult to tell what I was doing. And the help was useless, so I had to go to my laptop to do research on some of the options that the controller was asking about.
--keith
On Sat, November 4, 2017 1:56 pm, Keith Keller wrote:
Recently I had to use the LSI BIOS' GUI to configure arrays. Let me tell you, that was really no fun at all. It was still point and click but the GUI was so clunky that it was very difficult to tell what I was doing. And the help was useless, so I had to go to my laptop to do research on some of the options that the controller was asking about.
Here I would agree 100%. I used the LSI BIOS "GUI" interface and didn't like it at all. It is more like Norton Commander or Midnight Commander, if anybody still remembers those DOS tools. Anyway, in that LSI BIOS "GUI" I ended up disregarding the mouse and navigating and choosing actions with the keyboard alone ("tab" and "enter" keys, sometimes "esc", IIRC). I hardly even consider that a GUI...
Valeri
--keith
-- kkeller@wombat.san-francisco.ca.us
CentOS mailing list CentOS@centos.org https://lists.centos.org/mailman/listinfo/centos
++++++++++++++++++++++++++++++++++++++++ Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247 ++++++++++++++++++++++++++++++++++++++++
On 11/4/2017 11:39 AM, Valeri Galtsev wrote:
How does spending between 300 and 800 on an 8-port Areca pay off when you can get a P410 for less than 100? Are they 3--8 times faster, 3--8 times easier to replace, 3--8 times more reliable, 3--8 times easier to use, 3--8 times more durable, 3--8 times more energy efficient? What is it that makes them worthwhile?
The HP P410 controller is by no means close to, or comparable with, any of the Areca RAID controllers. If I'm reading the description correctly, the P410 supports RAID 0, RAID 1, and RAID 10.
You need the optional cache module and the feature option for the P410 to support RAID 5/6. I always ordered my HPs with the larger cache and the 'flash-backed writeback cache' option rather than battery-backed (flash-backed uses supercaps, which last approximately forever, while RAID battery backup tends to fail in 3-4 years).
The P410 is already quite obsolete; the Gen8 servers I ordered a couple of years ago came with the P420, and I don't doubt that's been replaced in the Gen9 stuff.
My personal preference is to get rid of the RAID cards entirely and use plain SAS HBAs with OS-native RAID support, for everything but dedicated Windows servers.
hw wrote:
m.roth@5-cent.us wrote:
hw wrote:
Richard Zimmerman wrote:
DO NOT buy the newer HPE DL20 gen9 or ML10 gen9 servers then (especially if using CentOS 6.x)
<snip>
And I do *not* want to buy from HP, because their support is nothing like good.
Indeed, I wouldn't buy HP new. They don't even give you a price for a new battery for a UPS but tell you to open a ticket to get a price and expect you to pay for opening the ticket, and they have finally managed to completely mess up their web site so that you can't find anything anymore.
But wait, it's worse: the replacement *parts* have a different part number than the original. I had to replace a PSU on a blade enclosure, and had to get HP, or maybe a reseller, I forget, to tell me what the correct part number for the replacement part was, and, IIRC, they were both 6-digit numbers, or maybe 12.... <snip>
Another company that's ok is ThinkMate, though their support ain't great, I think they're better than HP...oh, sorry, for a server, it'll be HPE (the company divided a year or two ago).
I've never heard of ThinkMate.
No biggie. As I said, they're another reseller of Supermicro h/w. Good prices, so-so support.
If you get a Dell, and one of their PERC cards, you're getting a rebranded LSI, sorry, Avago, um, who bought it last? Those are good and reliable, not super expensive.
Those don't work at all. I had to return two of them because none of them worked in any of the boards I tried them in, and the smart arrays I replaced them with work in the same boards. Dell has always had a reputation for making incompatible hardware, and that experience proved it.
Maybe they work when you have Dell hardware, but I have none.
Oh, ok, I was assuming you did. No, if you're not buying Dell hardware, with their own PERC cards, get an LSI/AVAGO/whoever. They *do* work on anything, and MegaRAID software is not hard to find. Note: if you go that route, I have a script I found online that makes basic monitoring *much* easier than the hostile MegaRAID interface....
mark
m.roth@5-cent.us wrote:
hw wrote:
m.roth@5-cent.us wrote:
hw wrote:
Richard Zimmerman wrote:
DO NOT buy the newer HPE DL20 gen9 or ML10 gen9 servers then (especially if using CentOS 6.x)
<snip>
And I do *not* want to buy from HP, because their support is nothing like good.
Indeed, I wouldn't buy HP new. They don't even give you a price for a new battery for a UPS but tell you to open a ticket to get a price and expect you to pay for opening the ticket, and they have finally managed to completely mess up their web site so that you can't find anything anymore.
But wait, it's worse: the replacement *parts* have a different part number than the original. I had to replace a PSU on a blade enclosure, and had to get HP, or maybe a reseller, I forget, to tell me what the correct part number for the replacement part was, and, IIRC, they were both 6-digit numbers, or maybe 12....
<snip>
Well, I ended up buying a new UPS because a replacement battery was unavailable. Now I wouldn't buy anything but APC for a UPS anymore.
[...]
If you get a Dell, and one of their PERC cards, you're getting a rebranded LSI, sorry, Avago, um, who bought it last? Those are good and reliable, not super expensive.
Those don't work at all. I had to return two of them because none of them worked in any of the boards I tried them in, and the smart arrays I replaced them with work in the same boards. Dell has always had a reputation for making incompatible hardware, and that experience proved it.
Maybe they work when you have Dell hardware, but I have none.
Oh, ok, I was assuming you did. No, if you're not buying Dell hardware, with their own PERC cards, get an LSI/AVAGO/whoever. They *do* work on anything, and MegaRAID software is not hard to find. Note: if you go that route, I have a script I found online that makes basic monitoring *much* easier than the hostile MegaRAID interface....
I wouldn't say I don't buy Dell, only that I haven't yet. HP is much easier to get (perhaps everyone is throwing them out because they can't get firmware updates anymore).
There's also something to HP hardware --- for example, what's Dell's equivalent to an ML350? I couldn't find anything like it from Dell.
Gary Stainburn wrote:
I'm just about to build a new server and I'm looking for recommendations on what hardware to use.
I'm happy with either a brand name, or building my own, but would like a hardware RAID controller to run a pair of disks as RAID1 that is actually compatible with and manageable through Linux.
Any recommendations would be appreciated.
DL380 G7+ or the like, depending on how much data you want to store
On 11/2/2017 8:04 AM, Gary Stainburn wrote:
I'm just about to build a new server and I'm looking for recommendations on what hardware to use.
I'm happy with either a brand name, or building my own, but would like a hardware RAID controller to run a pair of disks as RAID1 that is actually compatible with and manageable through Linux.
Any recommendations would be appreciated.
If you want raid 5 or 6, then you should get a hardware controller. For raid 1, mdadm should work just fine. I would suggest trying it before buying a raid controller. If it works for you, you save a few hundred dollars and you have one less piece of hardware to worry about.
I haven't looked at them in quite a few years, but last time I was in the market for a raid controller, Areca controllers were the way to go.
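If you want to try the mdadm route before spending money on a controller, a minimal sketch of that first experiment follows. The device names (/dev/md0, /dev/sda1, /dev/sdb1) are placeholders for spare partitions holding nothing you care about, and the script just shells out to mdadm the same way the MegaCli scripts further down shell out to MegaCli64.

#!/usr/bin/python
# Hypothetical sketch: build a two-disk RAID1 with mdadm and show its state.
# /dev/sda1, /dev/sdb1 and /dev/md0 are placeholders -- use spare partitions.
import subprocess

subprocess.check_call(['mdadm', '--create', '/dev/md0',
                       '--level=1', '--raid-devices=2',
                       '/dev/sda1', '/dev/sdb1'])

# Watch the initial sync and overall health.
subprocess.check_call(['mdadm', '--detail', '/dev/md0'])
print(open('/proc/mdstat').read())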
On Thursday 02 November 2017 14:10:25 Bowie Bailey wrote:
If you want raid 5 or 6, then you should get a hardware controller. For raid 1, mdadm should work just fine. I would suggest trying it before buying a raid controller. If it works for you, you save a few hundred dollars and you have one less piece of hardware to worry about.
I haven't looked at them in quite a few years, but last time I was in the market for a raid controller, Areca controllers were the way to go.
I've used mdadm before on previous servers, but have found that this setup isn't hot-swappable. Ultimately, if I had to replace a drive it involved a lot of effort, especially the first drive.
By using H/W RAID, it's literally just a case of removing the dead drive and inserting the replacement. I've got a number of IBM and Dell boxes like this. It's just a pity they're not compatible with Linux, so I can't monitor or manage them while the servers are running. The only way I know I have problems is by watching the LEDs.
Once upon a time, Gary Stainburn gary@ringways.co.uk said:
I've used mdadm before on previous servers, but have found that this setup isn't hot-swappable. Ultimately, if I had to replace a drive it involved a lot of effort, especially the first drive.
I use mdadm RAID in a bunch of places; it isn't automated, but it isn't that hard to replace a drive. It would be nice if the storaged project had support for this (I think the various needed bits are there, just needs somebody to write a front-end to make the calls I guess).
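Replacing a member by hand usually comes down to a handful of mdadm calls. A minimal sketch, assuming a mirror /dev/md0 whose failed member is /dev/sdb1 (placeholder names, with the replacement partition already created; a boot array also needs the bootloader reinstalled, which this skips):

#!/usr/bin/python
# Hypothetical sketch of swapping a failed member out of an mdadm mirror.
# Device names are placeholders; partition the new disk first.
import subprocess

def mdadm(*args):
    subprocess.check_call(['mdadm'] + list(args))

mdadm('--manage', '/dev/md0', '--fail', '/dev/sdb1')    # mark the member failed
mdadm('--manage', '/dev/md0', '--remove', '/dev/sdb1')  # drop it from the array
# ...physically swap the drive and recreate the partition layout here...
mdadm('--manage', '/dev/md0', '--add', '/dev/sdb1')     # add the new member; resync starts
subprocess.check_call(['mdadm', '--detail', '/dev/md0'])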
By using H/W RAID, it's literally just a case of removing the dead drive and inserting the replacement. I've got a number of IBM and Dell boxes like this. It's just a pity they're not compatible with Linux, so I can't monitor or manage them while the servers are running. The only way I know I have problems is by watching the LEDs.
I also use Dell servers with various hardware RAID cards in a bunch of places. I install the Dell tools (they have yum repos for this), and then can use omreport and omconfig to monitor and manage RAID from within Linux, and their SNMP agent to monitor system health from my external monitoring system.
One nice thing with omreport/omconfig is that it doesn't matter what type of RAID card/chip/whatever they use (because they change things over generations), the commands are the same.
Newer Dell servers also have RAID management integrated into the DRAC directly, so you can monitor/manage/etc. through the out-of-band web and SSH UI.
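To give a flavour of the omreport side, a minimal check is sketched below. It assumes the OpenManage tools are installed and the RAID card is controller 0, and the parsing is only a guess at the usual 'ID'/'Status' field layout, which can differ between OMSA releases.

#!/usr/bin/python
# Hypothetical check against Dell OMSA: list virtual disks not reported as "Ok".
# Assumes OpenManage is installed and the RAID card is controller 0.
import os
import sys

vd, bad = None, []
for line in os.popen('omreport storage vdisk controller=0'):
    line = line.strip()
    if line.startswith('ID'):
        vd = line.split(':', 1)[1].strip()
    elif line.startswith('Status'):
        status = line.split(':', 1)[1].strip()
        if status != 'Ok':
            bad.append('vdisk %s (%s)' % (vd, status))

if bad:
    print('RAID ERROR: ' + ', '.join(bad))
    sys.exit(1)
print('RAID CLEAN')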
Gary Stainburn wrote:
On Thursday 02 November 2017 14:10:25 Bowie Bailey wrote:
<snip>
By using H/W RAID, it's literally just a case of removing the dead drive and inserting the replacement. I've got a number of IBM and Dell boxes like this. It's just a pity they're not compatible with Linux, so I can't monitor or manage them while the servers are running. The only way I know I have problems is by watching the LEDs.
I don't understand the above - what's not compatible with Linux? We've got a ton, and if it's a Dell PERC/LSI, MegaRAID works just fine to monitor the drives.
mark
On 11/2/2017 7:20 AM, Gary Stainburn wrote:
It's just a pity they're not compatible with Linux, so I can't monitor or manage them while the servers are running. The only way I know I have problems is by watching the LEDs.
I have a couple of Python scripts I've used for monitoring LSI/Avago "MegaRAID" controllers. These scripts run the 'megacli' command-line tool and parse its incredibly verbose output: one generates a concise summary, and the other returns a pass/fail result to be used in an alert email script run from a crontab entry. I lifted these scripts off the net and modified them a bit.
sample output, followed by the source for the two scripts.
# lsi-raidinfo | lsi-checkraid
RAID ERROR

# lsi-raidinfo
-- Controllers --
-- ID | Model
c0 | LSI MegaRAID SAS 9261-8i

-- Volumes --
-- ID | Type | Size | Status | InProgress
volume c0u0 | RAID10 1x2 | 2727G | Degraded | None
volume c0u1 | RAID60 1x8 | 16370G | Optimal | None
volume c0u2 | RAID60 1x8 | 16370G | Optimal | None

-- Disks --
-- Encl:Slot | vol-span-unit | Model | Status
disk 8:1 | 0-0-1 | Z291VTRPST33000650NS 0003 | Online, Spun Up
disk 8:2 | 1-0-0 | Z291VTKWST33000650NS 0003 | Online, Spun Up
disk 8:3 | 1-0-1 | Z291VT9YST33000650NS 0003 | Online, Spun Up
disk 8:4 | 1-0-2 | Z291VTT6ST33000650NS 0003 | Online, Spun Up
disk 8:5 | 1-0-3 | Z291VT6CST33000650NS 0003 | Online, Spun Up
disk 8:6 | 1-0-4 | Z291VTLAST33000650NS 0003 | Online, Spun Up
disk 8:7 | 1-0-5 | Z291VTK1ST33000650NS 0003 | Online, Spun Up
disk 8:8 | 1-0-6 | Z291VTNGST33000650NS 0003 | Online, Spun Up
disk 8:9 | 1-0-7 | Z291VTRAST33000650NS 0003 | Online, Spun Up
disk 8:10 | 2-0-0 | Z291VV05ST33000650NS 0003 | Online, Spun Up
disk 8:11 | 2-0-1 | Z291VTW1ST33000650NS 0003 | Online, Spun Up
disk 8:12 | 2-0-2 | Z291VTRLST33000650NS 0003 | Online, Spun Up
disk 8:13 | 2-0-3 | Z291VTRXST33000650NS 0003 | Online, Spun Up
disk 8:14 | 2-0-4 | Z291VSZGST33000650NS 0003 | Online, Spun Up
disk 8:15 | 2-0-5 | Z291VSW1ST33000650NS 0003 | Online, Spun Up
disk 8:16 | 2-0-6 | Z291VTB5ST33000650NS 0003 | Online, Spun Up
disk 8:17 | 2-0-7 | Z291VSX8ST33000650NS 0003 | Online, Spun Up
disk 8:18 | x-x-x | Z291VTS7ST33000650NS 0003 | Unconfigured(bad)
disk 8:19 | 0-0-0 | Z291VT3HST33000650NS 0003 | Failed
There is at least one disk/array in a NOT OPTIMAL state.
# cat bin/lsi-checkraid
#!/usr/bin/python
# created by johnpuskar@gmail.com on 08/14/11
# rev 01
import os
import re
import sys

if len(sys.argv) > 1:
    print 'Usage: accepts stdin from lsi-raidinfo'
    sys.exit(1)

blnBadDisk = False
infile = sys.stdin
for line in infile:
    # print 'DEBUG!! checking line:'+str(line)
    if re.match(r'disk .*$', line.strip()):
        if re.match(r'^((?!Online, Spun Up|Online, Spun down|Hotspare, Spun Up|Hotspare, Spun down|Unconfigured(good), Spun Up).)*$', line.strip()):
            blnBadDisk = True
            badLine = line
            # print 'DEBUG!! bad disk found!'
    if re.match(r'volume ', line.strip()):
        if re.match(r'^((?!Optimal).)*$', line.strip()):
            # print 'DEBUG!! bad vol found!'
            blnBadDisk = True
            badLine = line

if blnBadDisk == True:
    print 'RAID ERROR'
    # print badLine
else:
    print 'RAID CLEAN'
# cat bin/lsi-raidinfo
#!/usr/bin/python
# megaclisas-status 0.6
# renamed lsi-raidinfo
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Pulse 2; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
# MA 02110-1301, USA.
#
# Copyright (C) 2007-2009 Adam Cecile (Le_Vert)
## modified by johnpuskar@gmail.com 08/14/11
#  fixed for LSI 9285-8e on Openfiler
## modified by pierce@hogranch.com 2012-01-05
#  fixed for newer version of megacli output on RHEL6/CentOS6
#  output format extended to show raid span-unit and rebuild % complete

import os
import re
import sys

if len(sys.argv) > 2:
    print 'Usage: lsi-raidinfo [-d]'
    sys.exit(1)

# if argument -d, only print disk info
printarray = True
printcontroller = True
if len(sys.argv) > 1:
    if sys.argv[1] == '-d':
        printarray = False
        printcontroller = False
    else:
        print 'Usage: lsi-raidinfo [-d]'
        sys.exit(1)

# Get command output
def getOutput(cmd):
    output = os.popen(cmd)
    lines = []
    for line in output:
        if not re.match(r'^$',line.strip()):
            lines.append(line.strip())
    return lines

def returnControllerNumber(output):
    for line in output:
        if re.match(r'^Controller Count.*$',line.strip()):
            return int(line.split(':')[1].strip().strip('.'))

def returnControllerModel(output):
    for line in output:
        if re.match(r'^Product Name.*$',line.strip()):
            return line.split(':')[1].strip()

def returnArrayNumber(output):
    i = 0
    for line in output:
        if re.match(r'^Virtual (Drive|Disk).*$',line.strip()):
            i += 1
    return i

def returnArrayInfo(output,controllerid,arrayid):
    id = 'c'+str(controllerid)+'u'+str(arrayid)
    # print 'DEBUG: id = '+str(id)
    operationlinennumber = False
    linenumber = 0
    units = 1
    type = 'JBOD'
    span = 0
    size = 0
    for line in output:
        if re.match(r'^RAID Level.*$',line.strip()):
            type = line.strip().split(':')[1].strip()
            type = 'RAID' + type.split(',')[0].split('-')[1].strip()
            # print 'debug: type = '+str(type)
        if re.match(r'^Number.*$',line.strip()):
            units = line.strip().split(':')[1].strip()
        if re.match(r'^Span Depth.*$',line.strip()):
            span = line.strip().split(':')[1].strip()
        if re.match(r'^Size.*$',line.strip()):
            # Size reported in MB
            if re.match(r'^.*MB$',line.strip().split(':')[1]):
                size = line.strip().split(':')[1].strip('MB').strip()
                size = str(int(round((float(size) / 1000))))+'G'
            # Size reported in TB
            elif re.match(r'^.*TB$',line.strip().split(':')[1]):
                size = line.strip().split(':')[1].strip('TB').strip()
                size = str(int(round((float(size) * 1000))))+'G'
            # Size reported in GB (default)
            else:
                size = line.strip().split(':')[1].strip('GB').strip()
                size = str(int(round((float(size)))))+'G'
        if re.match(r'^State.*$',line.strip()):
            state = line.strip().split(':')[1].strip()
        if re.match(r'^Ongoing Progresses.*$',line.strip()):
            operationlinennumber = linenumber
        linenumber += 1
    if operationlinennumber:
        inprogress = output[operationlinennumber+1]
    else:
        inprogress = 'None'
    if span > 1:
        type = type+'0'
    type = type + ' ' + str(span) + 'x' + str(units)
    return [id,type,size,state,inprogress]

def returnDiskInfo(output,controllerid,currentarrayid):
    arrayid = False
    oldarrayid = False
    olddiskid = False
    table = []
    state = 'Offline'
    model = 'Unknown'
    enclnum = 'Unknown'
    slotnum = 'Unknown'
    enclsl = 'Unknown'
    firstDisk = True
    for line in output:
        if re.match(r'Firmware state: .*$',line.strip()):
            state = line.split(':')[1].strip()
            if re.match(r'Rebuild',state):
                cmd2 = '/opt/MegaRAID/MegaCli/MegaCli64 pdrbld showprog physdrv['+str(enclnum)+':'+str(slotnum)+'] a'+str(controllerid)+' nolog'
                ll = getOutput(cmd2)
                state += ' completed ' + re.sub(r'Rebuild Progress.*Completed', '', ll[0]).strip();
        if re.match(r'Slot Number: .*$',line.strip()):
            slotnum = line.split(':')[1].strip()
        if re.match(r'Inquiry Data: .*$',line.strip()):
            model = line.split(':')[1].strip()
            model = re.sub(' +', ' ', model)
            model = re.sub('Hotspare Information', '', model).strip() #remove bogus output from firmware 12.12
        if re.match(r"(Drive|Disk)'s postion: .*$",line.strip()):
            spans = line.split(',')
            span = re.sub(r"(Drive|Disk).*DiskGroup:", '', spans[0]).strip()+'-'
            span += spans[1].split(':')[1].strip()+'-'
            span += spans[2].split(':')[1].strip()
        if re.match(r'Enclosure Device ID: [0-9]+$',line.strip()):
            if firstDisk == True:
                firstDisk = False
            else:
                enclsl = str(enclnum)+':'+str(slotnum)
                table.append([str(enclsl), span, model, state])
            span = 'x-x-x'
            enclnum = line.split(':')[1].strip()
    # Last disk of last array
    enclsl = str(enclnum)+':'+str(slotnum)
    table.append([str(enclsl), span, model, state])
    arraytable = []
    for disk in table:
        arraytable.append(disk)
    return arraytable

cmd = '/opt/MegaRAID/MegaCli/MegaCli64 adpcount nolog'
output = getOutput(cmd)
controllernumber = returnControllerNumber(output)
bad = False

# List available controller
if printcontroller:
    print '-- Controllers --'
    print '-- ID | Model'
    controllerid = 0
    while controllerid < controllernumber:
        cmd = '/opt/MegaRAID/MegaCli/MegaCli64 adpallinfo a'+str(controllerid)+' nolog'
        output = getOutput(cmd)
        controllermodel = returnControllerModel(output)
        print 'c'+str(controllerid)+' | '+controllermodel
        controllerid += 1
    print ''

if printarray:
    controllerid = 0
    print '-- Volumes --'
    print '-- ID | Type | Size | Status | InProgress'
    # print 'controller number'+str(controllernumber)
    while controllerid < controllernumber:
        arrayid = 0
        cmd = '/opt/MegaRAID/MegaCli/MegaCli64 ldinfo lall a'+str(controllerid)+' nolog'
        output = getOutput(cmd)
        arraynumber = returnArrayNumber(output)
        # print 'array number'+str(arraynumber)
        while arrayid < arraynumber:
            cmd = '/opt/MegaRAID/MegaCli/MegaCli64 ldinfo l'+str(arrayid)+' a'+str(controllerid)+' nolog'
            # print 'DEBUG: running '+str(cmd)
            output = getOutput(cmd)
            # print 'DEBUG: output '+str(output)
            arrayinfo = returnArrayInfo(output,controllerid,arrayid)
            print 'volume '+arrayinfo[0]+' | '+arrayinfo[1]+' | '+arrayinfo[2]+' | '+arrayinfo[3]+' | '+arrayinfo[4]
            if not arrayinfo[3] == 'Optimal':
                bad = True
            arrayid += 1
        controllerid += 1
    print ''

print '-- Disks --'
print '-- Encl:Slot | vol-span-unit | Model | Status'
controllerid = 0
while controllerid < controllernumber:
    arrayid = 0
    cmd = '/opt/MegaRAID/MegaCli/MegaCli64 ldinfo lall a'+str(controllerid)+' nolog'
    output = getOutput(cmd)
    arraynumber = returnArrayNumber(output)
    while arrayid < arraynumber:
        # grab disk arrayId info
        cmd = '/opt/MegaRAID/MegaCli/MegaCli64 pdlist a'+str(controllerid)+' nolog'
        # print 'debug: running '+str(cmd)
        output = getOutput(cmd)
        arraydisk = returnDiskInfo(output,controllerid,arrayid)
        for array in arraydisk:
            print 'disk '+array[0]+' | '+array[1]+' | '+array[2]+' | '+array[3]
        arrayid += 1
    controllerid += 1

if bad:
    print '\nThere is at least one disk/array in a NOT OPTIMAL state.'
    sys.exit(1)
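The alert email wrapper mentioned above (the piece run from a crontab entry) isn't included. A minimal sketch of that last step follows, with placeholder addresses and script paths, assuming a local MTA on the box; it runs lsi-raidinfo twice for simplicity.

#!/usr/bin/python
# Hypothetical cron wrapper: mail the lsi-raidinfo report when lsi-checkraid
# flags a problem.  Addresses and paths are placeholders; assumes a local MTA.
import os
import smtplib
from email.mime.text import MIMEText

report = os.popen('/root/bin/lsi-raidinfo').read()
status = os.popen('/root/bin/lsi-raidinfo | /root/bin/lsi-checkraid').read()

if 'RAID ERROR' in status:
    msg = MIMEText(report)
    msg['Subject'] = 'RAID problem on ' + os.uname()[1]
    msg['From'] = 'root@example.com'
    msg['To'] = 'admin@example.com'
    smtp = smtplib.SMTP('localhost')
    smtp.sendmail(msg['From'], [msg['To']], msg.as_string())
    smtp.quit()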