IMHO, hardware RAID primarily exists because of Microsoft Windows and VMware ESXi, neither of which has good native storage management.
Because of this, it's fairly hard to order a major-brand (HP, Dell, etc.) server without a RAID card.
RAID cards do have the performance boost of a nonvolatile write-back cache. Newer/better cards use supercap-backed flash for this, so battery life is no longer an issue.

The supercaps may be more stable than batteries, but they can also fail. I know, since I had to replace the supercap of an HP server. That's also why they are built as a module connected to the controller :-)

As for the write-back cache, good SSDs do the same with integrated cache and supercaps, so you really don't need the RAID controller for that anymore.
That said, make my Unix boxes ZFS or mdraid+xfs on JBOD, for all the reasons previously given.
Same here: after long years of all kinds of RAID hardware, I'm happy to run everything on mdraid+xfs. Software RAID on directly attached U.2 NVMe disks is all we use for new servers. It's fast, stable and, just as important, still KISS.
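For reference, a minimal sketch of that kind of setup (device names, RAID level and mount point are examples only, not what we necessarily run):

    # create a 4-disk RAID-10 array from directly attached NVMe devices
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
          /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
    # put XFS straight on the md device and mount it
    mkfs.xfs /dev/md0
    mount /dev/md0 /data

Plus persisting the array in mdadm.conf and the mount in fstab, of course.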
Regards, Simon