Is my new 300GB RAID 1 array REALLY going to take 18936 minutes to sync!!???
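For what it's worth, a quick back-of-the-envelope check of where that estimate lands (a sketch; assumes decimal gigabytes and integer shell arithmetic):

```shell
# Implied resync rate for 300 GB in 18936 minutes.
kb_total=$((300 * 1000 * 1000))   # 300 GB expressed in KB
seconds=$((18936 * 60))           # estimated sync time in seconds
echo "$((kb_total / seconds)) KB/sec"
```

That works out to only a few hundred KB/sec, far below what the disks can sustain, which is why the md speed-limit tuning discussed later in the thread is the right lever.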
Is this hardware or software RAID? Also, what hardware do you have this running on?
Nigel Kendrick wrote:
Is my new 300GB RAID 1 array REALLY going to take 18936 minutes to sync!!???
CentOS mailing list CentOS@caosity.org http://lists.caosity.org/mailman/listinfo/centos .
William Warren wrote:
Is this hardware or software RAID? Also, what hardware do you have this running on?
Nigel Kendrick wrote:
Is my new 300GB RAID 1 array REALLY going to take 18936 minutes to sync!!???
And what about your hardware setup? Having 2 IDE disks on the same IDE cable and using softraid is not a good idea.
Michiel
There is nothing wrong with software RAID. Unless you are doing intense I/O, software RAID has just as much security and performance as hardware RAID.
<flamesuit on>
Michiel van Es wrote:
William Warren wrote:
Is this hardware or software RAID? Also, what hardware do you have this running on?
Nigel Kendrick wrote:
Is my new 300GB RAID 1 array REALLY going to take 18936 minutes to sync!!???
And what about your hardware setup? Having 2 IDE disks on the same IDE cable and using softraid is not a good idea.
Michiel
William Warren wrote:
There is nothing wrong with software RAID. Unless you are doing intense I/O, software RAID has just as much security and performance as hardware RAID.
<flamesuit on>
Ehmm.. dear William, where do I state that I am AGAINST software raid? I am just pointing out that if you use an IDE software raid setup with the 2 disks on the SAME cable, it will suck big time. I have been using software raid for years now.. I am PRO softraid :-)
Michiel
Michiel van Es wrote:
William Warren wrote:
Is this hardware or software RAID? Also, what hardware do you have this running on?
Nigel Kendrick wrote:
Is my new 300GB RAID 1 array REALLY going to take 18936 minutes to sync!!???
And what about your hardware setup? Having 2 IDE disks on the same IDE cable and using softraid is not a good idea.
Michiel
Some BIOSes detect the geometry of the drives differently if they are split between primary and secondary controllers.
Having paired drives on a single controller is fine; the interface is faster than the drives anyway, and the geometry problem hardly ever occurs...
Software RAID on Linux works extremely well, unlike software RAID on M$ Windoze... which hogs resources...
P.
William Warren wrote:
There is nothing wrong with software RAID. Unless you are doing intense I/O, software RAID has just as much security and performance as hardware RAID.
<flamesuit on>
Michiel van Es wrote:
William Warren wrote:
Is this hardware or software RAID? Also, what hardware do you have this running on?
Nigel Kendrick wrote:
Is my new 300GB RAID 1 array REALLY going to take 18936 minutes to sync!!???
And what about your hardware setup? Having 2 IDE disks on the same IDE cable and using softraid is not a good idea.
Michiel
On Thu, 3 Feb 2005, Nigel Kendrick wrote:
Is my new 300GB RAID 1 array REALLY going to take 18936 minutes to sync!!???
Hi Nigel,
If this is SW-RAID (i.e. raidtools / mdadm):
You can tune kernel parameters to increase max/min speed values here:
# cat /proc/sys/dev/raid/speed_limit_max
10000
# cat /proc/sys/dev/raid/speed_limit_min
100
Raise these by, say, a factor of 10 (or more):
# sysctl -w dev.raid.speed_limit_max=100000
# sysctl -w dev.raid.speed_limit_min=1000
Additionally, you can tweak it further with:
# renice -20 `pidof mdrecoveryd`
Do all this _before_ you glue up the mirror.
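Put together, the recipe above amounts to the following (a sketch; assumes root, the stock defaults shown above, and an older kernel where mdrecoveryd runs as a visible process):

```shell
# Raise the md resync speed limits 10x over the stock defaults,
# then boost the recovery thread's priority. Run before creating
# the mirror; mdrecoveryd is present on 2.4-era kernels.
sysctl -w dev.raid.speed_limit_max=100000
sysctl -w dev.raid.speed_limit_min=1000
renice -20 "$(pidof mdrecoveryd)"
```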
# dmesg
<...>
md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc.
md: using maximum available idle IO bandwith (but not more than 100000 KB/sec) for reconstruction.
<...>
# cat /proc/mdstat
<...>
md8 : active raid1 sdb5[2] sda5[0]
      2096384 blocks [2/1] [U_]
      [========>............]  recovery = 43.2% (907584/2096384) finish=0.3min speed=50421K/sec
<...>
I don't recommend this if your server is I/O intensive and in production use; you'll see massive performance degradation while reconstructing.
SW-RAID rules. :-)
Regards, Morten