[CentOS] ssacli start rebuild?

Thu Nov 12 07:13:16 UTC 2020
Simon Matter <simon.matter at invoca.ch>

>> On Nov 11, 2020, at 6:00 PM, John Pierce <jhn.pierce at gmail.com> wrote:
>> On Wed, Nov 11, 2020 at 3:38 PM Warren Young <warren at etr-usa.com> wrote:
>>> On Nov 11, 2020, at 2:01 PM, hw <hw at gc-24.de> wrote:
>>>> I have yet to see software RAID that doesn't kill the performance.
>>> When was the last time you tried it?
>>> Why would you expect that a modern 8-core Intel CPU would impede I/O in
>>> any measurable way as compared to the outdated single-core 32-bit RISC
>>> CPU typically found on hardware RAID cards?  These are the same CPUs,
>>> mind, that regularly crunch through TLS 1.3 on line-rate fiber Ethernet
>>> links, a much tougher task than mediating spinning disk I/O.
>> the only 'advantage' hardware raid has is write-back caching.
> Just for my information: how do you map a failed software RAID drive to a
> physical port of, say, a SAS-attached enclosure? I’d love to hot replace
> failed drives in software RAIDs; I have over a hundred physical drives
> attached to one machine. Do not criticize, this is a box installed by
> someone else that I have “inherited”. To replace a drive I have to query
> its serial number, power off the machine and pull drives one at a time to
> read the labels...

There are different methods depending on how the disks are attached. In
some cases you can use a tool to show the corresponding disk or slot.
Otherwise, once you have hot removed the drive from the RAID, you can
either dd to the broken drive or generate some traffic on the still
working RAID, and you'll spot the disk immediately when looking at the
per-disk busy statistics (or at the activity LEDs): depending on which
method you used, the failed drive is either the only busy one or the
only idle one.
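As a minimal sketch of the first step: on Linux you can usually recover which member failed straight from /proc/mdstat and then map that device name to a serial number or slot LED. The helper function below is illustrative; the ledctl (ledmon package) and smartctl commands shown in the comments are assumptions about installed tooling and enclosure support, and the device names are made up.

```shell
#!/bin/sh
# Sketch: list md member devices that the kernel has marked failed.
# /proc/mdstat flags failed members with "(F)", e.g.:
#   md0 : active raid1 sdb1[1](F) sda1[0]

failed_members() {
    # Extract names like "sdb1" from tokens like "sdb1[1](F)".
    grep -o '[a-z0-9]*\[[0-9]*\](F)' "${1:-/proc/mdstat}" | sed 's/\[.*//'
}

# Once you know the failed device, hardware-dependent commands like these
# (shown as comments only) help locate the physical drive:
#   smartctl -i /dev/sdb | grep -i serial   # serial to match the label
#   ledctl locate=/dev/sdb                  # blink the slot LED (ledmon)
```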
I've used Linux Software RAID for the last two decades and it has always
worked nicely, while I started to hate hardware RAID more and more. As for
U.2 NVMe SSDs, at least when we started using them, there were no hardware
RAID controllers available for them at all. And the performance of Linux
Software RAID1 on AMD EPYC boxes is amazing :-)