On 17.05.2014 19:00, Steve Thompson wrote:
On Sat, 17 May 2014, SilverTip257 wrote:
Sounds like you might be reinventing the wheel.
I think not; see below.
DRBD [0] does what it sounds like you're trying to accomplish [1], especially since you have two nodes A+B or C+D that are RAIDed over iSCSI. It's rather painless to set up two nodes with DRBD.
I am familiar with DRBD, having used it for a number of years. However, I don't think it does what I am describing. With a conventional two-node DRBD setup, the drbd block device appears on both storage nodes, one of which is primary. In that case, writes to the block device go from the client to the primary; the storage I/O is done locally on the primary and forwarded across the network by the primary to the secondary.
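For reference, a minimal two-node resource definition of the kind I mean looks roughly like this (hostnames, addresses and backing disks are made up for illustration):

    resource r0 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        meta-disk internal;
        on storeA {
            address 192.168.1.10:7789;
        }
        on storeB {
            address 192.168.1.11:7789;
        }
    }

The /dev/drbd0 device exists only on storeA and storeB; a client never sees it directly.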
What I am describing in my experiment is a setup in which the block device (/dev/mdXXX) appears on neither of the storage nodes, but on a third node. Writes to the block device are done from the client to the third node and are forwarded over the network to both storage servers. The whole setup can be done with only packages from the base repo.
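Concretely, on the third node the setup is nothing more exotic than logging in to the LUN exported by each storage server and assembling a mirror over them (addresses and device names below are just placeholders):

    # import the iSCSI LUN from each storage server
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    iscsiadm -m discovery -t sendtargets -p 192.168.1.11
    iscsiadm -m node --login

    # build a RAID-1 md device across the two imported disks
    mdadm --create /dev/md100 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

Every write to /dev/md100 then travels over the network to both storage servers, and the third node can export the result (NFS, etc.) however it likes.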
I don't see how this can be accomplished with DRBD, unless the DRBD two-node setup then iscsi-exports the block device to the third node. With provision for failover, this is surely a great deal more complex than the setup that I have described.
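To spell out what that would look like: the primary would have to re-export /dev/drbd0 with something like scsi-target-utils (target name and initiator address below are placeholders):

    <target iqn.2014-05.example:drbd0>
        backing-store /dev/drbd0
        initiator-address 192.168.1.12
    </target>

and on failover the iSCSI target, the initiator session on the third node and the DRBD primary role would all have to move together, which is exactly the complexity I want to avoid.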
If DRBD had the ability for the drbd block device to appear on a third node (one that *does not have any storage*), then it would perhaps be different.
Why specifically do you care about that? With both your solution and the DRBD one, the clients only see an NFS endpoint, so what does it matter that this endpoint sits on one of the storage systems? Also, while streaming performance with your solution may be OK, latency is going to be fairly terrible due to the round trips and synchronicity required, so this may be a nice setup for e.g. a backup storage system, but it is not really suited as a more general-purpose solution.
Regards, Dennis