On Sun, October 22, 2017 3:35 pm, Joseph L. Casale wrote:
-----Original Message-----
From: CentOS [mailto:centos-bounces@centos.org] On Behalf Of Noam Bernstein
Sent: Sunday, October 22, 2017 8:54 AM
To: CentOS mailing list centos@centos.org
Subject: [CentOS] Areca RAID controller on latest CentOS 7 (1708 i.e. RHEL 7.4) kernel 3.10.0-693.2.2.el7.x86_64
Is anyone running any Areca RAID controllers with the latest CentOS 7 kernel, 3.10.0-693.2.2.el7.x86_64? We recently updated (from 3.10.0-514.26.2.el7.x86_64), and we've started having lots of problems. To
I run CentOS 7, fully updated (latest kernel, 3.10.0-693.2.2.el7.x86_64), on a machine that has a couple of Areca controllers: Model ARC-1680, F/W V1.47 2009-07-16, Driver Version 1.20.00.15 2010/08/05. These host three large RAID-6 arrays; the system lives on a mirror behind a similar controller. No problems whatsoever.
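(For anyone wanting to compare the same data points (kernel and arcmsr driver version) on their own box, a rough sketch; the Areca vendor CLI name in the comment is an assumption and may differ per install:)

```shell
# Running kernel version
uname -r

# In-kernel arcmsr driver version, if the module is installed
modinfo arcmsr 2>/dev/null | grep -i '^version' || true

# Controller model and firmware are usually queried through Areca's
# vendor CLI (often shipped as cli64 or areca_cli; invocation varies
# by version, so check its own help output), e.g.:
#   cli64 sys info
```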
Just a data point.
Valeri
add to the confusion, there's also a hardware problem (either with the controller or the backplane, most likely) that we're in the process of analyzing. Regardless,
we have an ARC1883i, and with the older kernel the system is stable, but with the new kernel it locks up within 1-12 hours of boot, with errors in /var/log/messages that start with things like

kernel: arcmsr0: abort device command of scsi id = 0 lun = 0

(that is indeed the RAID scsi device) and within a few minutes of those also things like

Oct 19 23:06:57 radon kernel: INFO: task xfsaild/dm-9:913 blocked for more than 120 seconds.
You mention you have hardware problems; what are they? A write is blocked for longer than the host is willing to wait. There are a few sysctl parameters that affect this, but I'd be more inclined to suggest it's related to your hardware problems.
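(For reference, the "blocked for more than 120 seconds" message quoted above comes from the kernel's hung-task watchdog, and the writeback pressure that leaves tasks stuck in D-state is governed by the vm.dirty_* knobs. A minimal sketch of inspecting them; the values in the comments are illustrative, not recommendations:)

```shell
# Hung-task watchdog timeout: the source of the
# "blocked for more than 120 seconds" warning (default is 120).
cat /proc/sys/kernel/hung_task_timeout_secs 2>/dev/null || true

# Dirty-page writeback thresholds (percent of RAM); with a stalling
# controller, lower values mean less dirty data queued behind the stall.
cat /proc/sys/vm/dirty_ratio
cat /proc/sys/vm/dirty_background_ratio

# To change them persistently, put e.g. (illustrative values):
#   vm.dirty_ratio = 10
#   vm.dirty_background_ratio = 5
# into /etc/sysctl.d/99-writeback.conf and apply with:
#   sysctl -p /etc/sysctl.d/99-writeback.conf
```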
jlc
CentOS mailing list CentOS@centos.org https://lists.centos.org/mailman/listinfo/centos
++++++++++++++++++++++++++++++++++++++++ Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247 ++++++++++++++++++++++++++++++++++++++++