[CentOS] DRBD very slow....
Ross Walker
rswwalker at gmail.com
Wed Jul 22 13:50:01 UTC 2009
On Jul 22, 2009, at 5:16 AM, Coert Waagmeester <lgroups at waagmeester.co.za> wrote:
> Hello all,
>
> we have a new setup with xen on centos5.3
>
> I run drbd from lvm volumes to mirror data between the two servers.
>
> Both servers are 1U NEC rack mounts with 8GB RAM and 2x mirrored 1TB
> Seagate SATA drives.
>
> One is a dual-core Xeon, and the other a quad-core Xeon.
>
> I have a gigabit crossover link between the two with an MTU of 9000 on
> each end.
>
> I currently have 6 drbds mirroring across that link.
>
> The highest speed I can get through that link with DRBD is 11 MB/s
> (megabytes).
>
> But if I copy a 1 GB file over that link I get 110 MB/s.
>
> Why is DRBD so slow?
>
> I am not using drbd encryption because of the back to back link.
> Here is a part of my drbd config:
>
> # cat /etc/drbd.conf
> global {
>     usage-count yes;
> }
> common {
>     protocol C;
>     syncer { rate 80M; }
>     net {
>         allow-two-primaries;
>     }
> }
> resource xenotrs {
>     device    /dev/drbd6;
>     disk      /dev/vg0/xenotrs;
>     meta-disk internal;
>
>     on baldur.somedomain.local {
>         address 10.99.99.1:7793;
>     }
>     on thor.somedomain.local {
>         address 10.99.99.2:7793;
>     }
> }
First, use iperf to measure the bandwidth and latency on those NICs
between the two hosts.
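A sketch of such a test, assuming the back-to-back addresses from the config above and the classic iperf available in the CentOS repos:

```shell
# On the receiving host (thor, 10.99.99.2 in the config above):
iperf -s

# On the sending host (baldur), run against the crossover address:
iperf -c 10.99.99.2 -t 30    # 30-second TCP throughput test

# A healthy back-to-back GbE link should report somewhere near
# 940 Mbit/s; much less points at the NICs, cabling, or an MTU mismatch.
```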
If iperf comes back clean, use dd with oflag=direct to test the
performance of the drives on both sides (create a test LV). Roughly,
you can multiply that result by the maximum number of outstanding
I/Os your application issues to get a realistic number; use 4 if you
don't know what that is.
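For example (a sketch; the LV name and size are placeholders, and the VG name is taken from the config above):

```shell
# Create a small throwaway LV for testing (names are examples):
lvcreate -L 1G -n drbdtest vg0

# Sequential write, bypassing the page cache so the disk itself
# is what gets measured:
dd if=/dev/zero of=/dev/vg0/drbdtest bs=1M count=512 oflag=direct

# Clean up afterwards:
lvremove -f /dev/vg0/drbdtest
```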
DRBD protocol C is completely synchronous and won't acknowledge a
write until it has been committed to disk on both sides.
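A quick back-of-envelope shows why that serialization hurts; the per-write latencies below are hypothetical illustrations, not measurements from this setup:

```shell
# With protocol C, each write waits for the network round trip plus
# the remote commit before the next can complete. Assuming a 4 KB
# write, ~0.1 ms RTT and ~0.5 ms remote commit (hypothetical figures):
awk 'BEGIN {
    bs  = 4096              # bytes per synchronous write
    lat = 0.0001 + 0.0005   # seconds per write: RTT + remote commit
    printf "%.1f MB/s\n", bs / lat / 1e6
}'
```

A single outstanding 4 KB write at ~0.6 ms per round trip tops out in the single-digit MB/s range, which is the neighborhood of the 11 MB/s observed, even though the raw link moves 110 MB/s.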
Having disk controllers with nvram cache can make all the difference
in the world for this setup.
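If the disks and the link both test clean, allowing more requests in flight between the peers sometimes helps too; the values below are a hedged starting point for DRBD 8.x, not tuned numbers for this hardware:

```
common {
    net {
        # Allow more requests to be in flight between the peers
        max-buffers     8000;
        max-epoch-size  8000;
        sndbuf-size     512k;
    }
    syncer {
        # More activity-log extents mean fewer metadata updates
        al-extents 1801;
    }
}
```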
-Ross