Hi,
I would like to get some input from people who have used these options for mounting a remote server to a local server. Basically, I need to replicate / back up data from one server to another, but over the internet (i.e. insecure channels).
Currently we have been mounting an SMB share over SSH, but it's got its own set of problems, and I don't know if this is optimal, or if I could set up something better. We don't have much control over the remote server, so I couldn't set up a VPN, or iSCSI, or anything else. My options were FTP & SMB.
But I want to move the backups in-house, to save bandwidth and have more control over what we do.
So, with a new CentOS server & 2x1TB HDDs in a RAID1 configuration, I can do pretty much whatever I want. The backup server(s) will serve backups for multiple servers, in different data centers (possibly in different countries as well, I still need to think about this), so my biggest concern is security.
We mainly use cPanel & DotNetPanel (Windows servers), but also Webmin & Virtualmin, so I need to stick with their native backup procedures and don't really want to use an overly technical backup system.
The end users need access to the data 24/7, so having the remote share permanently mounted seems to be the best option; then our support staff don't need to SSH into the servers and download the backups. With the mount, I can also use rsync backups, so an end user could restore only a single file if need be.
NOW, the question is: Which protocol would be best for this? I can only think of SMB, NFS & iSCSI. The SMB mounts have worked well so far, but it's not as safe, and once the SMB share is mounted, I can't unmount it until the server reboots. This isn't necessarily a bad thing, but sometimes the backup script will mount the share again (I think this is a bug in cPanel) and we end up with 4 or 5 open connections to the remote server.
NFS - the last time I looked at it was on v3, which was IMO rather slow & insecure.
iSCSI - this doesn't allow for more than one connection to the same share. Sometimes a user might want to download a backup directly from the backup server via FTP / SSH / a web interface, which I don't think will work. We also sometimes need to restore a backup on a different server (if, for example, the HDD on the initial server is too full), so this wouldn't be possible.
The remote shares also need to be mounted inside XEN domU's, or directly on CentOS / Windows servers.
What would be my best option for this?
Greetings,
On Thu, Jan 28, 2010 at 4:58 PM, Rudi Ahlers Rudi@softdux.com wrote:
Hi,
NOW, the question is: Which protocol would be best for this? I can only think of SMB, NFS & iSCSI
Just an innocent and possibly OOB suggestion -- what do you think of sshfs?
Regards
Rajagopal
On Thu, Jan 28, 2010 at 2:05 PM, Rajagopal Swaminathan < raju.rajsand@gmail.com> wrote:
Just an innocent and possibly OOB suggestion -- what do you think of sshfs?
heh, I knew I should have mentioned it, but due to the extra kernel modules that it needs, it's a bit impractical for our XEN domU's.
BUT, I also don't know what kind of performance gain it would give me, if any. Any experience with it?
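For anyone weighing the suggestion, a minimal sshfs setup might look like the sketch below. The hostnames, paths, and options are illustrative only, not taken from the thread.

```shell
# Mount a remote directory over SSH using sshfs (needs the FUSE kernel
# module and the fuse-sshfs package); host and paths are made up
sshfs backupuser@backup01.example.com:/backups /mnt/backups \
    -o reconnect,ServerAliveInterval=15

# rsync to it as if it were a local directory
rsync -av /home/pete/ /mnt/backups/home/pete/

# Unlike a stuck CIFS mount, this can be detached at any time:
fusermount -u /mnt/backups
```

The transport is plain SSH, so nothing beyond sshd is needed on the remote side.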
Rudi Ahlers wrote:
what would be my best option for this?
Anytime someone mentions backups, I have a knee-jerk reaction to mention backuppc because it is simple and will likely do anything you need. Docs are here: http://backuppc.sourceforge.net/ It is packaged in EPEL. It can use rsync (with/without ssh), smb, or tar for the backup transport. Generally for anything remote, you'll want rsync, and you'll want it badly enough to set it up even on Windows targets - which is not all that difficult.
-- Les Mikesell lesmikesell@gmail.com
Thank you Les, but I'm not looking for a new backup program. We rely on the platform's native backup scripts. I'm looking for a recommendation for a fast, reliable & secure remote backup server platform.
On 1/28/2010 3:01 PM, Rudi Ahlers wrote:
Thank you Les, but I'm not looking for a new backup program. We rely on the platform's native backup scripts. I'm looking for recommendation for a fast, reliable & secure remote backup server platform
I don't understand what a 'remote backup server platform' is if it doesn't involve backup software. If you just want to present a file or device interface you can do that over a WAN with ordinary protocols but you won't like it. You could split the difference with a local (to the targets) file share where the native backups dump a copy, followed by remote rsync'ing of that copy to a central server where a longer history might be managed (or letting backuppc do that part for you).
Rudi Ahlers wrote:
Hi,
I would like to get some input from people who have used these options for mounting a remote server to a local server. Basically, I need to replicate / backup data from one server to another, but over the internet (i.e. insecure channels)
NFS and CIFS and iSCSI are all terrible for WAN backups (assuming you don't have a WAN optimization appliance); tons of overhead. Use rsync over SSH, or rsync over HPN-SSH. I transfer over a TB of data a day using rsync over HPN-SSH across several WANs.
nate
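As a concrete sketch of what nate describes, an rsync-over-SSH invocation could look like this. The host and paths are made up, and with HPN-SSH installed as the default ssh binary the same command picks up its larger TCP buffers.

```shell
# Mirror /home to the backup server over SSH.
#   -a preserves permissions, ownership, and timestamps
#   -z compresses on the wire (helps on slow WAN links)
#   --delete keeps the remote copy an exact mirror
#   --partial resumes interrupted large-file transfers
rsync -az --delete --partial \
    -e ssh \
    /home/ backupuser@backup01.example.com:/backups/web01/home/
```

Because rsync only sends changed blocks, repeat runs use far less bandwidth than re-copying over a mounted share.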
On Thu, Jan 28, 2010 at 4:04 PM, nate centos@linuxpowered.net wrote:
NFS and CIFS and iSCSI are all terrible for WAN backups (assuming you don't have a WAN optimization appliance); tons of overhead. Use rsync over SSH, or rsync over HPN-SSH. I transfer over a TB of data a day using rsync over HPN-SSH across several WANs.
nate
Hi Nate,
We used to do it like that - rsync over SSH, but the number of support calls we got with this solution was just too high.
So, instead we mounted the backup volumes on the servers, and the end users (most of them being developers & graphic designers) could have direct access to their backups.
Currently we mount the SMB share over SSH, then rsync to it:
ssh -f -N -L 139:usabackup01:139 softdux@usabackup01
mount -t cifs //localhost/backups /bck/ -o username=xxxxxx,password=xxxxxxx
rsync -avz /home/pete/* /bck/home/pete/
^ this is just a quick sample. The different control panels use rsync differently, and some users have their own rsync scripts as well.
But, I don't know if this is optimal, i.e. are there other protocols which would work better? I could only think of iSCSI & NFS, but I don't know if they're any better.
On 1/28/2010 3:13 PM, Rudi Ahlers wrote:
We used to do it like that - rsync over SSH, but the amount of support calls we got with this solution was just too much.
So, instead we mounted the backup volumes on the servers, and the end users (most of them being developers & graphic designers) could have direct access to their backups.
This is probably getting repetitive, but backuppc provides a web interface where server 'owners' can browse their own backups, select what they want, and click a button to restore or download to their desktop. It's not part of the distribution, but I think someone even has a fuse filesystem layer that gives normal-looking read access to the compressed/pooled storage. I don't know if you can wrap samba on top of that, though - or what kind of performance it has.
On Thu, Jan 28, 2010 at 11:34 PM, Les Mikesell lesmikesell@gmail.com wrote:
This is probably getting repetitive, but backuppc provides a web interface where server 'owners' can browse their own backups, select what they want, and click a button to restore or download to their desktop. It's not part of the distribution, but I think someone even has a fuse filesystem layer that gives normal-looking read access to the compressed/pooled storage. I don't know if you can wrap samba on top of that, though - or what kind of performance it has.
-- Les Mikesell lesmikesell@gmail.com
You're right, it is getting repetitive, but thank you for the advice, I'll look into backuppc
ok, forget about rsync. Forget about which backup script is better, and which isn't. Forget about how I get the data onto the other server. I don't care about backups, or rsync, or backuppc or bacula or amanda, or R1Soft.
let's keep the question simple. WHICH filesystem would be best for this type of operation? SMB, NFS, or iSCSI?
On 1/28/2010 4:30 PM, Rudi Ahlers wrote:
ok, forget about rsync. Forget about which backup script is better, and which isn't. Forget about how I get the data onto the other server. I don't care about backups, or rsync, or backuppc or bacula or amanda, or R1Soft.
let's keep the question simple. WHICH filesystem would be best for this type of operation? SMB, NFS, or iSCSI?
All are fine locally, horrible over network connections with high latency or limited bandwidth. iSCSI is probably harder to manage if you ever want to see the data from more than one connection.
Rudi Ahlers wrote:
let's keep the question simple. WHICH filesystem would be best for this type of operation? SMB, NFS, or iSCSI?
iSCSI is not a file system, it's purely a block device. It works best over fast, low-latency dedicated links.
I think NFS would be better for Unix-to-Unix than SMB. SMB/CIFS is better for MS Windows, but neither works very well over high-latency connections.
On Fri, Jan 29, 2010 at 1:05 AM, nate centos@linuxpowered.net wrote:
Rudi Ahlers wrote:
let's keep the question simple. WHICH filesystem would be best for this type of operation? SMB, NFS, or iSCSI?
none
nate
nate, why not? Is mounting one system on another over a WAN to be avoided at all costs? That's all I really want to do.
Rudi Ahlers wrote:
nate, why not? Is mounting one system on another over a WAN to be avoided at all costs? That's all I really want to do.
If what you have now works, stick with it. In general, network file systems are very latency-sensitive.
CIFS might work best *if* you're using a WAN optimization appliance; I'm not sure how much support NFS gets from those vendors.
iSCSI certainly is the worst; block devices are very intolerant of latency.
AFS may be another option, though it's quite a bit more complicated; as far as I know it's a layer on top of an existing file system that is used for things like replication.
I have no experience with it myself.
nate
On Fri, Jan 29, 2010 at 1:18 AM, nate centos@linuxpowered.net wrote:
Thanx nate, this is what I wanted to hear :)
So, is there any benefit in using NFS over SMB in this case? The CIFS mounts can't be unmounted without a reboot, so they build up a pool of mounts to the same server, which causes extra latency.
On 1/28/2010 5:23 PM, Rudi Ahlers wrote:
So, is there any benefit in using NFS over SMB in this case? The CIFS mounts can't be unmounted without a reboot, so they build up a pool of mounts to the same server, which causes extra latency.
I don't understand either of not being able to unmount a cifs mount or not being able to avoid remounting when it is already mounted. That's probably something you can fix. The only thing that should keep you from unmounting would be if some file is open or a running process has its working directory under the mount point.
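Les's point can be checked directly: fuser or lsof will show what is holding a mount busy. A sketch, with /bck standing in for the mount point used earlier in the thread:

```shell
# List processes with open files or working directories under the mount;
# these are what make umount fail with "device is busy"
fuser -vm /bck

# Alternative view of the same information
lsof /bck

# Once the offending processes exit (or are killed, or cd elsewhere),
# a normal unmount works:
umount /bck

# As a last resort, a lazy unmount detaches the filesystem immediately
# and cleans up once the remaining references close:
umount -l /bck
```

A backup script can guard against stacked mounts by checking `grep -q ' /bck ' /proc/mounts` before mounting again.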
On Jan 28, 2010, at 6:23 PM, Rudi Ahlers rudiahlers@gmail.com wrote:
Thanx nate, this is what I wanted to hear :)
So, is there any benefit in using NFS over SMB in this case? The CIFS mounts can't be unmounted without a reboot, so they build up a pool of mounts to the same server, which causes extra latency.
It's not easy backing up from behind the firewall.
What about using a service that will back up the mobile clients to an offsite repository that is also accessible from behind the firewall?
I was pitched something not too long ago about such a service, can't remember the name now unfortunately.
Otherwise you could look into some sort of WebDAV + Fuse setup or some specialized file system that is cached on the client but then syncs with the server in the background when available, then all your backups are local.
-Ross
On Fri, Jan 29, 2010 at 2:17 AM, Ross Walker rswwalker@gmail.com wrote:
It's not easy backing up from behind the firewall.
Hi Ross,
Backing up behind the firewall is made easy by using an SSH tunnel :)
We already have an offsite backup facility with a 3rd party, but I need more control over the backups, and want to set up an in-house backup server where all the clients' accounts (these are hosting accounts & VPSs) will be backed up to; this server will then rsync all the data to the offsite backup server.
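One hedged sketch of the SSH-tunnel approach Rudi mentions, using made-up hosts, users, and ports: a reverse tunnel lets the in-house server pull from a client behind a firewall without any inbound firewall rules on the client side.

```shell
# On the client behind the firewall: open a persistent reverse tunnel so
# the in-house backup server can reach the client's own sshd
ssh -f -N -R 2222:localhost:22 backupuser@backup01.example.com

# On the backup server: pull the client's data back through that tunnel
rsync -az -e "ssh -p 2222" clientuser@localhost:/home/ /backups/client01/home/
```

The backup server then only needs one outbound rsync job per client, and the final rsync to the offsite server runs from a single machine.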
On Jan 29, 2010, at 2:37 AM, Rudi Ahlers rudiahlers@gmail.com wrote:
Hi Ross,
Backing up behind the firewall is made easy by using an SSH tunnel :)
Communications is easy, but that's not what I'm talking about, I'm talking the management and coordination of backups from behind the firewall.
We already have an offsite backup facility with a 3rd party, but I need more control over the backups, and want to set up an in-house backup server where all the clients' accounts (these are hosting accounts & VPSs) will be backed up to; this server will then rsync all the data to the offsite backup server.
You might find this to be a Herculean task that, even if implemented, is impossible to support.
-Ross
On 1/29/2010 1:37 AM, Rudi Ahlers wrote:
Backing up behind the firewall is made easy by using an SSH tunnel :)
We already have an offsite backup facility with a 3rd party, but I need more control over the backups, and want to setup an inhouse backup server which where all the client's account (this is hosting accounts & VPS's) be backed up to, then this server will do an rsync with all the data to the offsite backup server.
Can't this be at the same location as the source of the data to eliminate the latency issue?
The CIFS mounts can't be unmounted without a reboot, so they build up a pool of mounts to the same server, which causes extra latency.
Is there an environmental restriction in your application or organization for this? Normally, CIFS mounts can be unmounted easily at runtime.
At any rate... if I were in your shoes and really restricted to the options you propose, I would go with CIFS mounts through IPSEC tunnels.
-geoff
--------------------------------- Geoff Galitz Blankenheim NRW, Germany http://www.galitz.org/ http://german-way.com/blog/
On Fri, Jan 29, 2010 at 12:12 PM, Geoff Galitz geoff@galitz.org wrote:
The CIFS mounts can't be unmounted without a reboot, so they build up a pool of mounts to the same server, which causes extra latency.
Is there an environmental restriction in your application or organization for this? Normally, CIFS mounts can be unmounted easily at runtime.
what do you mean by this?
At any rate... if I were in your shoes and really restricted to the options you propose, I would go with CIFS mounts through IPSEC tunnels.
Wouldn't IPSEC add more overhead than an SSH tunnel?
Rudi Ahlers wrote on Fri, 29 Jan 2010 12:27:49 +0200:
what do you mean by this?
exactly as he says. Any mounts can be undone (mount/umount). Maybe not thru your Cpanel, but in reality.
Kai
On 1/28/2010 5:13 PM, Rudi Ahlers wrote:
nate, why not? Is mounting one system on another over a WAN to be avoided at all costs? That's all I really want to do.
You are introducing unpredictable delays and possible retries/disconnects into kernel layers that aren't very well prepared to deal with them. It may mostly work, but I assume you wouldn't be asking if you were happy with it. There might be something you could do with mirroring over DRBD if you are willing to duplicate the disk space on both sides - but you could use rsync for that too.
On Thu, Jan 28, 2010 at 12:30 PM, Rudi Ahlers rudiahlers@gmail.com wrote:
let's keep the question simple. WHICH filesystem would be best for this type of operation? SMB, NFS, or iSCSI?
As someone said, these are all bad if your channel is insecure.
Actually I know nothing about iSCSI, maybe it is more robust.
Dave
On Friday, January 29, 2010 03:49 PM, Dave wrote:
On Thu, Jan 28, 2010 at 12:30 PM, Rudi Ahlers rudiahlers@gmail.com wrote:
let's keep the question simple. WHICH filesystem would be best for this type of operation? SMB, NFS, or iSCSI?
As someone said, these are all bad if your channel is insecure.
Actually I know nothing about iSCSI, maybe it is more robust.
I foresee loads of kernel timeout messages in /var/log/messages...
Dave wrote:
On Thu, Jan 28, 2010 at 12:30 PM, Rudi Ahlers rudiahlers@gmail.com wrote:
let's keep the question simple. WHICH filesystem would be best for this type of operation? SMB, NFS, or iSCSI?
As someone said, these are all bad if your channel is insecure.
Actually I know nothing about iSCSI, maybe it is more robust.
normally, iSCSI is done over dedicated SAN networks not even connected to your normal LAN.
I suppose it can be secured, but usually there's not even any authentication, just targets masked by the initiator's IP.
On Fri, Jan 29, 2010 at 9:49 AM, Dave tdbtdb+centos@gmail.com wrote:
On Thu, Jan 28, 2010 at 12:30 PM, Rudi Ahlers rudiahlers@gmail.com wrote:
let's keep the question simple. WHICH filesystem would be best for this type of operation? SMB, NFS, or iSCSI?
As someone said, these are all bad if your channel is insecure.
Actually I know nothing about iSCSI, maybe it is more robust.
Dave
Even through an SSH tunnel?
If this is the case, what other options are available?