[CentOS] RAID question

Thu Apr 27 17:55:29 UTC 2006
Miguel <infopnte at pnte.cfnavarra.es>

Hello.

I have several systems (CentOS 3 and CentOS 4) with software RAID, and 
I've observed a difference in the reported RAID state:

In the CentOS 3 systems we find that mdadm --detail /dev/md* shows:
..............
       Version : 00.90.00
 Creation Time : Fri Apr 16 14:59:43 2004
    Raid Level : raid1
    Array Size : 20289984 (19.35 GiB 20.78 GB)
   Device Size : 20289984 (19.35 GiB 20.78 GB)
  Raid Devices : 2
 Total Devices : 2
Preferred Minor : 5
   Persistence : Superblock is persistent

   Update Time : Thu Apr 20 15:52:50 2006
         State : dirty, no-errors   <--- RAID STATE
Active Devices : 2
Working Devices : 2
Failed Devices : 0
 Spare Devices : 0


   Number   Major   Minor   RaidDevice State
      0      33        7        0      active sync   /dev/hde7
      1      34        7        1      active sync   /dev/hdg7
          UUID : 232294aa:45bd9dea:c62face1:2fbf7a60
        Events : 0.38
..........................................

while in the CentOS 4 systems we find:
..................
       Version : 00.90.01
 Creation Time : Wed Feb 15 10:13:41 2006
    Raid Level : raid1
    Array Size : 76051584 (72.53 GiB 77.88 GB)
   Device Size : 76051584 (72.53 GiB 77.88 GB)
  Raid Devices : 2
 Total Devices : 2
Preferred Minor : 1
   Persistence : Superblock is persistent

   Update Time : Sun Apr 23 12:48:42 2006
         State : clean   <--- RAID STATE
Active Devices : 2
Working Devices : 2
Failed Devices : 0
 Spare Devices : 0


   Number   Major   Minor   RaidDevice State
      0       8        1        0      active sync   /dev/sda1
      1       8       17        1      active sync   /dev/sdb1
          UUID : 35c88d0d:0a9b5ba5:78a32da7:5d5cb1ec
        Events : 0.251478

..................
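For comparison, this is roughly the loop I run to pull just the state line 
out of every array (only a sketch; adjust the /dev/md* glob if your arrays 
are named differently):
..................
#!/bin/sh
# Print the reported state of every md array on this host.
for md in /dev/md*; do
    printf '%s: ' "$md"
    mdadm --detail "$md" | grep ' State :'
done
..................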

What is the difference between the "dirty" and "clean" states? Both 
systems work fine.
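
For reference, the kernel's own runtime view can also be read from 
/proc/mdstat; on a healthy two-disk RAID1 both members show up as [UU]:
..................
# Kernel's runtime view of all md arrays; each raid1 line ends with a
# status like [UU], where an underscore ([_U]) would mean a failed or
# missing member.
cat /proc/mdstat
..................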