- Regarding nightly incremental backups, we simply use the
rsync utility to pull the backups from a server onto the backup host, also running Linux.
You're actually not the first to mention rsync; I'll man rsync and see what I can get!
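For what it's worth, the nightly pull over ssh can be as simple as something along these lines (the host name and paths here are only placeholders):
# rsync -az --delete -e ssh backupuser@webserver:/var/www/ /backups/webserver/www/
Run that from cron on the backup host and rsync only transfers what has changed since the last run, which is what keeps the nightly incrementals cheap.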
I would highly advise simply paying the money for a RAID controller card to handle all of this for you. You mentioned that you do not have physical access to the server, so the RAID card can easily handle hard drive failures and send you an email letting you know that you need to replace the failed drive.
That's actually a very good point. The only thing is that the machine doesn't generate any revenue, so it'll heavily depend on the price of the RAID controller really... I know there is none at the moment, which is why I was looking into dd and tar. I've read that dd physically reads/writes every block of the disk, whereas tar handles `zero-files` differently... Thus my question!
It's never a good idea to run X when you don't need to. All administration can be done via config files on the command line, and any system-config GUI programs usually have a command-line replacement anyway. But if you really want X, check out 'yum grouplist' and 'yum groupinstall "X Window System"'.
Good point, and I'm fully aware that one of the reasons win2k3 requires so much RAM is the bloody UI! But I figured that with 2 GHz... I might give it a try! My concern tho' is really about configuring X... I've installed CentOS on a test server here, and the X config is based on the server's video/audio cards + monitor... What happens when there are none!!? :P
Thx for your time... You made me realise that even RAID1 could be useful (don't know why, I was under the impression that with only two disks RAID wasn't worth it...!?)
Cheers Brad!
Seb.
On 14/09/06, Sebastien Tremblay <sebastien.tremblay@au.cmpmedica.com> wrote:
I know there is none at the moment, which is why I was looking into dd and tar. I've read that dd physically reads/writes every block of the disk, whereas tar handles `zero-files` differently... Thus my question!
I suspect dd-ing from an active disk isn't necessarily the best of ideas. You can:
# dd if=/dev/sda of=/dev/sdb
... to copy one identical disk to another, or at least to another with the same geometry. If both disks are unmounted, you'll get a consistent snapshot of the system.
Doing similar with an active, mounted /dev/sda on a running system wouldn't guarantee the consistency of the "backup". I think you'd be better off partitioning up your disks similarly and tar-ing. Though you mention 'zero-files': do you mean sparse files? In which case special measures may be required for those.
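If it is sparse files that worry you, GNU tar has a -S/--sparse option that stores the holes efficiently instead of writing out all the zero blocks. A rough sketch, with the destination path just as an example:
# tar -czSf /backups/root-fs.tar.gz --one-file-system -C / .
The --one-file-system flag keeps it from descending into /proc, /sys or anything else mounted separately.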
Will.
If you don't want the expense of RAID controllers and you have two disks in your machine, then have a look at software RAID - it works well for us.
Ian
<snip>
I would highly advise simply paying the money for a RAID controller card to handle all of this for you. You mentioned that you do not have physical access to the server, so the RAID card can easily handle hard drive failures and send you an email letting you know that you need to replace the failed drive.
That's actually a very good point. The only thing is that the machine doesn't generate any revenue, so it'll heavily depend on the price of the RAID controller really...
Run software RAID without a hardware controller. I have production Linux servers with identical hard drives running on the kernel's software RAID support. Since you already have the two drives, it is a no-cost approach: simply configure the software RAID in Disk Druid during install and off you go. If you are concerned about monitoring, write a cron job to check the contents of /proc/mdstat periodically.
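A rough example of such a cron job (the schedule and mail subject are just placeholders; a failed member shows up as an underscore in the [UU] status line of /proc/mdstat):
*/15 * * * * grep -q '\[.*_.*\]' /proc/mdstat && mail -s "RAID degraded on `hostname`" root < /proc/mdstat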
The only issue I've had is that there is a known bug in the way grub installs its MBR (master boot record), so that if the primary drive fails, the secondary drive has no way to boot. The workaround is to run the normal install, then right at the end, when you are ready to reboot, use Alt-F? to switch to another session with a bash prompt (I think F3, but I always have to hunt for it). Once at the bash prompt, start grub and use the commands root (hd1,#) and setup (hd1), where # is the partition containing the Linux boot loader.
This may take a little research and testing, but once you have it down, it works well, even on low-end hardware.
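For reference, the grub shell session on the second drive looks roughly like this; (hd1,0) is only an example, use whichever partition actually holds /boot:
grub> root (hd1,0)
grub> setup (hd1)
grub> quit
The setup command writes grub's stage1 into the second drive's MBR, so the box can still boot if the first drive dies.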
Brett
Brett Serkez spake the following on 9/14/2006 5:04 AM:
<snip>
I would highly advise simply paying the money for a RAID controller card to handle all of this for you. You mentioned that you do not have physical access to the server, so the RAID card can easily handle hard drive failures and send you an email letting you know that you need to replace the failed drive.
That's actually a very good point. The only thing is that the machine doesn't generate any revenue, so it'll heavily depend on the price of the RAID controller really...
Run software RAID without a hardware controller. I have production Linux servers with identical hard drives running on the kernel's software RAID support. Since you already have the two drives, it is a no-cost approach: simply configure the software RAID in Disk Druid during install and off you go. If you are concerned about monitoring, write a cron job to check the contents of /proc/mdstat periodically.
Mdadm can be set to e-mail any failures to the sysadmin. http://www.linuxdevcenter.com/pub/a/linux/2002/12/05/RAID.html
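For example, a MAILADDR line in /etc/mdadm.conf plus mdadm running in monitor mode is usually all it takes (the address below is a placeholder, and if I remember right CentOS ships an mdmonitor init script that starts the daemon for you):
MAILADDR root@yourdomain.example
# mdadm --monitor --scan --daemonise
You can send yourself a test message with mdadm --monitor --scan --oneshot --test to make sure mail delivery works.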
On Thu, 2006-09-14 at 17:14 +1000, Sebastien Tremblay wrote:
- Regarding nightly incremental backups, we simply use the
rsync utility to pull the backups from a server onto the backup host, also running Linux.
You're actually not the first to mention rsync; I'll man rsync and see what I can get!
You might also like backuppc: http://backuppc.sourceforge.net/. It can use ssh and rsync (or tar or smb...) to back up remote machines, and it uses a scheme with compression and hard links of duplicates to greatly increase the amount of data it can keep online in a given amount of space.
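If you do go that route, the transfer method is set per host in BackupPC's Perl config; from memory it is something along these lines, though the exact option names may differ between versions:
$Conf{XferMethod} = 'rsync';
$Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host $rsyncPath $argList+';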
On 9/14/06, Les Mikesell lesmikesell@gmail.com wrote:
On Thu, 2006-09-14 at 17:14 +1000, Sebastien Tremblay wrote:
You might also like backuppc: http://backuppc.sourceforge.net/. It can use ssh and rsync (or tar or smb...) to back up remote machines, and it uses a scheme with compression and hard links of duplicates to greatly increase the amount of data it can keep online in a given amount of space.
hughesjr created CentOS RPMs for backuppc. They're in the testing repo currently.
Grant