I'm running CentOS 4.4 and drbd 0.7 with ext3 filesystems on the drbd devices. I'm very interested in moving to drbd 0.8 and pulling all of my drbd devices into one LVM volume group for greater flexibility in filesystems. I've seen some notes on doing this but wonder if anyone on this list has a cookbook for doing it. I also noted that on the drbd site the only packages they have are set up for the 2.6.9-34.106 kernel. Any thoughts on when there might be a drbd 0.8 package for the current kernel?
Thanks for any wisdom on this, Chuck
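For readers searching the archives for that cookbook: the consolidation being asked about (using drbd devices as LVM physical volumes) looks roughly like the sketch below. The device names and the volume group name are illustrative, and the drbd resources must be Primary on the node where the commands are run:

```shell
# Sketch only: turn two existing drbd devices into LVM physical
# volumes, pool them in one volume group, and carve out a filesystem.
drbdadm primary r0                      # resources must be Primary here
drbdadm primary r1
pvcreate /dev/drbd0 /dev/drbd1          # label the drbd devices as PVs
vgcreate vg_drbd /dev/drbd0 /dev/drbd1  # one VG spanning both mirrors
lvcreate -L 10G -n lv_data vg_drbd
mkfs.ext3 /dev/vg_drbd/lv_data
mount /dev/vg_drbd/lv_data /mnt/data
```

You would also want to adjust the filter in /etc/lvm/lvm.conf so that LVM scans the /dev/drbd* devices rather than the backing partitions; otherwise LVM can see two copies of each PV signature.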
I am not quite sure that drbd-8 is totally ready yet for prime time. Not that I don't trust them (I use drbd in production and I love it), but I want to wait for an 8.0.1 or 8.0.2 level before I move the enterprise CentOS RPMS to that version.
I would be open to producing some 8.0.0 rpms for testing ... though that will probably need to wait until after CentOS 5 Beta is released.
Thanks, Johnny Hughes
Makes sense to me, I'll try to be patient ;-)
CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
On Tue, 2007-02-27 at 00:20 -0600, Les Mikesell wrote:
Could you get the same effect by running software RAID1 with one of the drives connected via iscsi?
Provides the same effect as DRBD? ... not really ... as DRBD provides a second machine in hot standby mode with a totally synced partition that is ready to take over on a failure of the first machine. If the first computer blows up (power supply, hard drive crash, etc.), the second one starts up and takes over with no down time (except the time it takes to mount the partition and start the services on the new machine).
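The hot-standby arrangement described above is driven by a drbd resource definition plus a heartbeat resource line. A minimal drbd 0.7-style sketch, with the hostnames, disks, and addresses all illustrative, would look something like:

```
resource r0 {
  protocol C;               # fully synchronous mirroring
  syncer { rate 10M; }
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sda5;
    address   192.168.1.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sda5;
    address   192.168.1.2:7788;
    meta-disk internal;
  }
}
```

Heartbeat then handles the takeover: a /etc/ha.d/haresources line such as `node1 192.168.1.10 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 httpd` (again illustrative) is what promotes the surviving node, mounts the partition, and starts the services.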
How is the mirror/sync different than RAID1, and how is DRBD's version different from what you would have if you exported the 2nd machine's partitions via iscsi and mirrored the live machine using md devices with one local and one iscsi member for each? If that is actually possible, I'd expect those general-purpose components to be much better tested and more reliable than little-used code like DRBD. Does DRBD have special handling for updating slightly out-of-sync copies, or does it also have to rebuild the whole thing if not taken down cleanly?
I have no idea how it works, other than it uses the md device and raid 1 kernel code to mirror the drive/partition to a second machine ... and does so in real time. It uses heartbeat to create a cluster and does real-time failover.
It does not require rebuilding the whole device if shut down uncleanly ... it syncs from the last updated point.
My point was that the 0.8 (actually renamed 8.0.0) code was just released. The 0.6 and 0.7 code has been out and stable for quite some time, and I have been using it for more than 2 years.
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Johnny Hughes Sent: Sunday, March 04, 2007 7:16 AM To: CentOS ML Subject: Re: [CentOS] CentOS 4.4 lvm and drbd 0.8?
If you were running a later kernel version of MD, it is conceivable that you could create a mirror with a remote storage drive over iscsi.
It would be up to you, though, to figure out how to fail over to it and to limit the bandwidth MD takes to that remote mirror, and to realize that it will always be fully synchronous, so performance may not be the best over a WAN.
You can also use a pair of Vise-Grip pliers to do the job of an adjustable wrench, but it will probably strip the bolt in the process.
-Ross
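Assuming the open-iscsi initiator and a bitmap-capable md (roughly kernel 2.6.14 and later), the mirror-with-a-remote-member idea could be sketched like this; the portal address, target IQN, and device paths are made up for illustration:

```shell
# Log in to the remote target (illustrative portal and IQN).
iscsiadm -m discovery -t sendtargets -p 192.168.1.2
iscsiadm -m node -T iqn.2007-03.example:store0 -p 192.168.1.2 --login

# Suppose the remote LUN appears as /dev/sdc. Build the mirror with
# an internal bitmap so a disconnect only needs a partial resync.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --bitmap=internal --write-behind=256 \
      /dev/sdb1 --write-mostly /dev/sdc1
```

--write-mostly marks the iSCSI member so reads favor the local disk, and the bitmap is what avoids a full rebuild after an unclean disconnect. Nothing here, though, supplies the cluster membership and automatic failover logic that drbd plus heartbeat provide.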
______________________________________________________________________ This e-mail, and any attachments thereto, is intended only for use by the addressee(s) named herein and may contain legally privileged and/or confidential information. If you are not the intended recipient of this e-mail, you are hereby notified that any dissemination, distribution or copying of this e-mail, and any attachments thereto, is strictly prohibited. If you have received this e-mail in error, please immediately notify the sender and permanently delete the original and any copy or printout thereof.
If you do plan on using MD over iscsi, why not try something interesting like a RAID level other than 1, say RAID 3, 4, 5 or 6, and get some increased performance over drbd and regular iscsi?
You need a later kernel that supports MD bitmaps to prevent a complete re-sync on disconnect, and the storage would have to all be local. But say you have a bunch of servers, all with direct attached storage, and you wish to consolidate storage while leveraging all your existing direct-attached disks. You can have each server export its storage via iSCSI, have a central server that mounts all this storage and creates a fault-tolerant MD RAID out of it, creates an LVM VG on top, then re-exports it via iSCSI to different platforms.
-Ross
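The consolidation scheme described above, sketched end to end on the central server. Every IP, IQN, and device letter is illustrative, and the re-export fragment shown uses iSCSI Enterprise Target's ietd.conf syntax, one of several possible target implementations:

```shell
# Log in to the storage exported by each of the member servers.
for ip in 192.168.1.11 192.168.1.12 192.168.1.13 192.168.1.14; do
    iscsiadm -m discovery -t sendtargets -p $ip
done
iscsiadm -m node -L all                    # log in to all targets

# One fault-tolerant array over the imported disks, with a bitmap
# so a dropped member only needs a partial resync when it returns.
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      --bitmap=internal /dev/sd[b-e]

# LVM on top, then carve volumes for re-export.
pvcreate /dev/md0
vgcreate vg_pool /dev/md0
lvcreate -L 50G -n lv_export0 vg_pool

# ietd.conf fragment to re-export the volume:
#   Target iqn.2007-03.example:pool0
#       Lun 0 Path=/dev/vg_pool/lv_export0,Type=blockio
```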
When I said the storage would have to all be local, I meant on the local area network.
-Ross
Unix has always been about combining tools that each do one job well. If we already have a tool (iscsi) that exports remote block devices well, following standards that would allow the actual storage to be on non-linux devices, and another tool (md raid) that mirrors block devices, why not combine them instead of inventing yet another special-purpose tool? I realize that drbd and nbd were developed before iscsi, but now that there is a standard cross-platform network block device, why shouldn't it be used? MD might need some new options to make it work as efficiently in this scenario, but that seems like a more useful place to add features - that is, there might be other situations where MD mirroring to an external iscsi partition would be useful, or even combining many iscsi exports into one raid volume.
While combining tools is indeed the Unix way, some tools are better suited to a task than others.
For example, in my second post I suggested a scenario where you have a bunch of direct attached storage, say located on 20 servers in a local area network. You export the storage from those 20 servers to a central server via iSCSI and use MD raid to create a large RAID-{3,4,5,6} array out of those iSCSI targets. Then you use LVM to split that into differing volumes for re-export via iSCSI. You can then use drbd on those LVM volumes with asynchronous replication to a storage array off-site for DR purposes, or to a local storage device as a volume snapshot server (where both copies need to be active at once), and re-export those volumes via iSCSI to different OS platforms to use as storage.
This could work well, but if you plan on using MD RAID1 to replicate data to off-site storage you will see very poor performance. MD RAID1 may work well for replicating storage synchronously between 2 local storage devices, but not to a remote one; drbd would work better in that case. Also, if you want both sides of a mirror to be active at once, drbd 8.X is the only way to go.
-Ross
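For reference, the two drbd capabilities mentioned here - asynchronous replication for the off-site leg and both sides of the mirror active at once - correspond to settings along these lines in a drbd 8.x resource. This is a sketch only: the resource name and rate are illustrative and the per-host sections are omitted:

```
resource r_dr {
  protocol A;              # asynchronous: writes complete locally and
                           # replicate in the background (WAN/DR leg)
  net {
    allow-two-primaries;   # drbd 8.x active/active; requires a
                           # cluster filesystem such as GFS or OCFS2
  }
  syncer { rate 5M; }
  # ... "on <host>" sections as in any other resource ...
}
```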