Subject: Re: [CentOS] Best way to duplicate a live Centos 5 server?
To: CentOS mailing list centos@centos.org
On Mon, Jun 18, 2012 at 11:21 AM, Cal Sawyer cal-s@blue-bolt.com wrote:
ReaR has suddenly become very interesting to me, which probably explains why it utterly fails to work properly (for me). I'm using 1.13 to pull a USB-based recovery image, but there's an error in the backup/NETFS/default/50_make_backup.sh script that doesn't mount the USB device after the mkrecovery step, so the subsequent tar fails on write to the non-existent mountpoint. I fixed that, but on recovery it also fails to mount the necessary directories on the restore drive, so "rear recover" quickly bombs out. Is anyone having any success actually using ReaR on CentOS 6.x? - csawyer
It intentionally doesn't deal with drives the kernel has marked as removable. I had trouble with that on the main drives of a SAS hot-swap backplane in an earlier version, but I think that is fixed now.
I'd recommend asking how to override this on the ReaR mailing list. While the code seems usable (and yes, I have succeeded in using it on CentOS 6.x), the documentation either doesn't exist yet or is very out of date. But the authors are very responsive, and it would be good to let them know about any bugs or usability problems. The mailing list is still at SourceForge, although the code has moved to GitHub, and there is talk of moving the list elsewhere.
-- Les Mikesell lesmikesell@gmail.com
ReaR hasn't posed any insurmountable problems with USB removable media, but I can see SATA/SAS needing some massaging in the device-detection department.
Yes, I've gotten myself onto the ReaR mailing list now.
You're right - documentation is pretty dire. Guess I'm not alone in hating doing it.
USB backup is broken due to the order in which path variables get set - sure is a lot of fun trawling through the scripts to find out what gets set when. Hope the ReaR maintainers are interested in this and haven't gotten themselves mired in tape archive integration - I would have thought USB backup was the winner in terms of getting broad acceptance as a bare-metal DR solution.
However, when it works - wow. Just restored an HP dl360 w/HWRAID to a Presario desktop machine and it lives! No network, but that's small beans compared to the larger win.
- csawyer
On Tue, Jun 19, 2012 at 12:03 PM, Cal Sawyer cal-s@blue-bolt.com wrote:
You're right - documentation is pretty dire. Guess I'm not alone in hating doing it.
Yes, I really, really wish the stuff they are doing was documented, somewhere, anywhere. Not just how to use the program itself which is supposed to 'just work' unattended once you set it up, but the black magic of how they detect and reproduce all of the hardware, lvm, raid, filesystem, etc. across different distributions and versions.
USB backup is broken due to the order in which path variables get set - sure is a lot of fun trawling through the scripts to find out what gets set when. Hope the ReaR maintainers are interested in this and haven't gotten themselves mired in tape archive integration - I would have thought USB backup was the winner in terms of getting broad acceptance as a bare-metal DR solution.
USB is sort of 'hands-on' for something that should be unattended, and adds a lot of unpredictable messiness in drive detection, boot order, etc. All you really have to do is export some NFS space and point ReaR to it. At least that is the easy way to get started. If you have another Linux box, just plug your USB drive in there and access it over NFS... problem solved.
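For reference, a minimal /etc/rear/local.conf along those lines might look roughly like this; the server name and export path are made-up examples, and the variables should be checked against the ReaR documentation for your version:

    # /etc/rear/local.conf - minimal NFS-based sketch (hostname and path are examples)
    OUTPUT=ISO                                  # build the rescue system as a bootable ISO
    BACKUP=NETFS                                # use ReaR's built-in tar backup over the network
    BACKUP_URL=nfs://backupserver/export/rear   # NFS export that holds the backup and the ISO
    #
    # then:  rear -v mkbackup     (create rescue image plus tar backup)
    # later: boot the rescue ISO on the replacement box and run "rear recover"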
Clonezilla-live plays in this space too, but it doesn't do raid or multiple disks at once, and you have to shut down to take the image.
My 'ideal' system would be to have ReaR generate a directory of what will be on the boot iso leaving that somewhere on the host without actually making the image. Then use backuppc to back up the whole host and its normal duplicate-pooling mechanism will keep the extra copies of the tools from taking extra space. Then when/if you need a bare-metal restore, you would first grab the directory of the iso contents, burn a boot CD, let that reconstruct the filesystem, then tell backuppc to restore to it. That way would be completely automatic and always be up to date, with the advantage of backuppc's efficient storage and easy online access to individual files and directories. If you don't mind wasting a small amount of space for the isos, I think that approach would already work if you tell ReaR to just make the boot image and to wait for an external program to do the restore after the filesystem has been rebuilt.
However, when it works - wow. Just restored an HP dl360 w/HWRAID to a Presario desktop machine and it lives! No network, but that's small beans compared to the larger win.
Yes, I've even modified the filesystem layout file to go from a software RAID to a non-RAID, and to change partition sizes during the restores. If it was documented, that capability by itself would be fantastic.
The best and most traditional way, which has been around for decades, is an rsync followed by reinstallation of the boot loader. It always works if you know how it's done.
If you need detailed instructions, I can send them to you!
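For illustration only, a rough sketch of that general approach (not Micky's actual write-up); the device names and mount points are hypothetical:

    # copy the running system onto the new root filesystem mounted at /mnt/newroot;
    # -x stays on one filesystem, so /proc, /sys, /dev and other mounts are skipped
    rsync -aHx --numeric-ids --delete / /mnt/newroot/
    rsync -aHx --numeric-ids --delete /boot/ /mnt/newroot/boot/
    # reinstall the boot loader on the new disk from inside the copy
    mount --bind /dev  /mnt/newroot/dev
    mount --bind /proc /mnt/newroot/proc
    chroot /mnt/newroot grub-install /dev/sdb
    # finally, adjust /etc/fstab and grub.conf on the copy if device names or UUIDs changed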
On 07/08/2012 06:48 PM, Micky wrote:
The best and most traditional way, which has been around for decades, is an rsync followed by reinstallation of the boot loader. It always works if you know how it's done.
If you need detailed instructions, I can send them to you!
Yes, please! Could you either post here to the list, or to me personally?
Thank you,
Phil
On Jul 8, 2012, at 6:57 PM, Phil Savoie psavoie1783@rogers.com wrote:
On 07/08/2012 06:48 PM, Micky wrote:
The best and most traditional way, which has been around for decades, is an rsync followed by reinstallation of the boot loader. It always works if you know how it's done.
If you need detailed instructions, I can send them to you!
Yes, please! Could you either post here to the list, or to me personally?
Thank you,
Phil
What is running on the server? You might be able to get away with a dd, to build a duplicate disk. This disk can be directly attached or on another server tunneled through ssh.
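For example (disk and host names are hypothetical):

    # clone a whole disk to a directly attached spare
    dd if=/dev/sda of=/dev/sdb bs=4M
    # or stream it to a disk on another server over ssh
    dd if=/dev/sda bs=4M | ssh root@otherserver 'dd of=/dev/sdb bs=4M'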
On 07/08/12 7:14 PM, Joseph Spenner wrote:
What is running on the server? You might be able to get away with a dd, to build a duplicate disk. This disk can be directly attached or on another server tunneled through ssh.
or setup a drbd replica, wait for it to replicate, then stop the replication.
On 7/9/12, John R Pierce pierce@hogranch.com wrote:
On 07/08/12 7:14 PM, Joseph Spenner wrote:
What is running on the server? You might be able to get away with a dd, to build a duplicate disk. This disk can be directly attached or on another server tunneled through ssh.
or setup a drbd replica, wait for it to replicate, then stop the replication.
That requires drbd to be set up in advance, doesn't it? I was trying this approach, then ran into that wall. And given the amount of work required to get drbd working on a new setup, it seemed easier to use mdraid to do the same thing.
On 07/08/2012 10:14 PM, Joseph Spenner wrote:
On Jul 8, 2012, at 6:57 PM, Phil Savoie psavoie1783@rogers.com wrote:
On 07/08/2012 06:48 PM, Micky wrote:
The best and most traditional way, which has been around for decades, is an rsync followed by reinstallation of the boot loader. It always works if you know how it's done.
If you need detailed instructions, I can send them to you!
Yes, please! Could you either post here to the list, or to me personally?
Thank you,
Phil
What is running on the server? You might be able to get away with a dd, to build a duplicate disk. This disk can be directly attached or on another server tunneled through ssh.
Centos 5.8 and Centos 6.2 servers. A duplicate disk is not what I am after as I cannot always replace with exact drives, i.e., same make, model, size, etc.
But thank you anyway...
Phil
On 07/08/12 8:20 PM, Phil Savoie wrote:
Centos 5.8 and Centos 6.2 servers. A duplicate disk is not what I am after as I cannot always replace with exact drives, i.e., same make, model, size, etc.
Note that there are a lot of things for which file-by-file, or even sector-by-sector, duplicates are NOT valid if made while the system is online.
For instance, with relational databases such as PostgreSQL or Oracle, you can't just copy their files while the database server is running, as they rely on writes being made in a specific order.
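The usual workaround is to stop the database for the copy, or to take a proper dump instead; for example, with PostgreSQL (the database name and paths are just placeholders):

    # a consistent logical backup taken while the server is running
    pg_dump -U postgres mydb > /backup/mydb.sql
    # or dump every database in the cluster
    pg_dumpall -U postgres > /backup/all-databases.sql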
Phil Savoie wrote:
On 07/08/2012 06:48 PM, Micky wrote:
The best and most traditional way, which has been around for decades, is an rsync followed by reinstallation of the boot loader. It always works if you know how it's done.
If you need detailed instructions, I can send them to you!
Yes, please! Could you either post here to the list, or to me personally?
The list, if you please. (A link will do.)
Thx T.
On 7/9/12, Micky mickylmartin@gmail.com wrote:
The best and most traditional way, which has been around for decades, is an rsync followed by reinstallation of the boot loader. It always works if you know how it's done.
The problem I found with rsync is that it is very slow when there are a lot of small files. Any idea how this can be improved on or is that a fundamental limit?
On Mon, Jul 9, 2012 at 12:34 AM, Emmanuel Noobadmin centos.admin@gmail.com wrote:
On 7/9/12, Micky mickylmartin@gmail.com wrote:
The best and most traditional way, which has been around for decades, is an rsync followed by reinstallation of the boot loader. It always works if you know how it's done.
The problem I found with rsync is that it is very slow when there are a lot of small files. Any idea how this can be improved on or is that a fundamental limit?
One thing that helps is to break it up into separate runs, at least per-filesystem and perhaps some of the larger subdirectories. Depending on the circumstances, you might be able to do an initial run ahead of time when speed doesn't matter so much, then just before the cutover shut down the services that will be changing files and databases and do a final rsync which will go much faster.
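A sketch of that two-pass approach (the host name, staging path, and service names are examples):

    # pass 1: while the old server is still live; this does the bulk of the copying
    rsync -aHx --numeric-ids / root@newserver:/mnt/newroot/
    # at cutover: stop anything that writes to disk, then run a final pass that only moves the changes
    service mysqld stop
    service httpd stop
    rsync -aHx --numeric-ids --delete / root@newserver:/mnt/newroot/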
Also, have you looked at clonezilla and ReaR?
On 7/9/12, Les Mikesell lesmikesell@gmail.com wrote:
One thing that helps is to break it up into separate runs, at least per-filesystem and perhaps some of the larger subdirectories. Depending on the circumstances, you might be able to do an initial run ahead of time when speed doesn't matter so much, then just before the cutover shut down the services that will be changing files and databases and do a final rsync which will go much faster.
I did try this, but the time taken is pretty similar; the main delay is the part where rsync goes through all the files and spends a few hours trying to figure out what needs to be updated on the second run, after I shut down the services. In hindsight, I might have been able to speed things up considerably if I had generated a file list based on last-modified time and passed it to rsync via the exclude/include parameters.
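Something along those lines could be done with find and rsync's --files-from option (rather than include/exclude); an untested sketch, with an arbitrary age cutoff and example paths:

    # list files changed in the last 7 days on the root filesystem only
    cd / && find . -xdev -type f -mtime -7 > /tmp/recent-files.txt
    # feed that list to rsync so it doesn't have to scan everything again
    rsync -aH --files-from=/tmp/recent-files.txt / root@newserver:/mnt/newroot/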
Also, have you looked at clonezilla and ReaR?
Yes, but due to time constraints, I figured it was safer to go with something simpler that I didn't have to learn as I go, and that could be done live without needing extra hardware on site. Plus, it is something that works at any site I need it at, without extra software.
On Tue, Jul 10, 2012 at 1:36 AM, Emmanuel Noobadmin centos.admin@gmail.com wrote:
On 7/9/12, Les Mikesell lesmikesell@gmail.com wrote:
One thing that helps is to break it up into separate runs, at least per-filesystem and perhaps some of the larger subdirectories. Depending on the circumstances, you might be able to do an initial run ahead of time when speed doesn't matter so much, then just before the cutover shut down the services that will be changing files and databases and do a final rsync which will go much faster.
I did try this, but the time taken is pretty similar; the main delay is the part where rsync goes through all the files and spends a few hours trying to figure out what needs to be updated on the second run, after I shut down the services. In hindsight, I might have been able to speed things up considerably if I had generated a file list based on last-modified time and passed it to rsync via the exclude/include parameters.
Hours? This should happen in the time it takes to transfer a directory listing and read through it unless you used --ignore-times in the arguments. If you have many millions of files or not enough RAM to hold the list I suppose it could take hours.
Also, have you looked at clonezilla and ReaR?
Yes, but due to time constraints, I figured it was safer to go with something simpler that I didn't have to learn as I go, and that could be done live without needing extra hardware on site. Plus, it is something that works at any site I need it at, without extra software.
Rear 'might' be quick and easy. It is intended to be almost unattended and do everything for you. As for extra software - it is a 'yum install' from EPEL. The down side is that if it doesn't work, it isn't very well documented to help figure out how to fix it. I'd still recommend looking at it as a backup/restore solution with an option to clone. With a minimum amount of fiddling you can get it to generate a boot iso image that will re-create the source filesystem layout and bring up the network. Then, if you didn't want to let it handle the backup/restore part you could manually rsync to it from the live system.
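If I'm reading the ReaR options correctly, the "rescue image only, restore handled externally" variant Les describes would be configured roughly like this (hedged; verify the variable names against the ReaR documentation for your version):

    # /etc/rear/local.conf - rescue media only, files restored by something else
    OUTPUT=ISO
    OUTPUT_URL=nfs://backupserver/export/rear   # where to put the ISO (example location)
    BACKUP=REQUESTRESTORE                       # recreate the layout, then wait for an external restore
    #
    # build just the boot/rescue image with:  rear -v mkrescue
    # after "rear recover" has rebuilt partitions/LVM/RAID, rsync or BackupPC puts the files back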
Why, in 6.3, did they move from OpenOffice to LibreOffice?
--- Michel Donais
On 11 July 2012 11:17, Michel Donais donais@telupton.com wrote:
Why, in 6.3, did they move from OpenOffice to LibreOffice?
Michel Donais
Michel,
I believe this is the reason; I could be wrong:
LibreOffice replaced OpenOffice as the standard office productivity suite in Red Hat Enterprise Linux 6. The 6.3 upgrade offers a new set of LibreOffice packages to replace remaining OpenOffice packages. There will be complete compatibility of documents between the older packages and LibreOffice’s newer ones. *This offers faster bug fixes and improved MS Office compatibility*
On Tue, 10 Jul 2012 22:17:43 -0400 "Michel Donais" donais@telupton.com wrote:
Why, in 6.3, did they move from OpenOffice to LibreOffice?
Please don't reply to another thread on a mailing list and change the subject. It screws up the message threading.
On Jul 10, 2012, at 7:17 PM, "Michel Donais" donais@telupton.com wrote:
Why, in 6.3, did they move from OpenOffice to LibreOffice?
Michel Donais
Not sure. Probably same reason SciFi changed to SyFy; bored marketing people trying to 'add value'.
On 07/10/12 8:22 PM, Joseph Spenner wrote:
Not sure. Probably same reason SciFi changed to SyFy; bored marketing people trying to 'add value'.
LibreOffice was created when Oracle bought Sun: a bunch of the core developers quit and started their own project, as Oracle has a nasty history of twisting open source projects to suit their own needs. Oracle was invited to join the LibreOffice foundation, whereupon it would have become OpenOffice again, but instead Oracle told all OpenOffice board members that they could not be involved with both projects. Shortly thereafter, Oracle laid off all the people working on OpenOffice and 'gave' the project to Apache, where it's stagnating.
Meanwhile, Google, Red Hat, SuSE, the FSF, and others have each contributed one paid employee to the LibreOffice project, which started as a fork of the OpenOffice 3.3 beta and is currently up to 3.5.
On Wed, Jul 11, 2012 at 1:25 AM, John R Pierce pierce@hogranch.com wrote:
LibreOffice was created when Oracle bought Sun, a bunch of the core developers quit and started their own project,
BS if you ask me... Oracle bought Sun in APRIL 2009.
Sun programmers, on Oracle's payroll, kept developing OpenOffice.org, and release 3.3 was done under Oracle's management. Even the 3.4 alpha was there when LO forked.
Under Oracle, OOCon in Budapest took place. Oracle also renamed the commercial build of the product (formerly known as "StarOffice") to "Oracle Open Office" (without the .org in the name), and even released an update to StarOffice 9 that included plenty of commercial filters...
Of course, the LO "freedom fighters" have another version of events, but what I'm saying here was told to me by a member of the German team that stayed at Oracle until the end.
as Oracle has a nasty history of twisting open source projects to suit their own needs.
Oh really? The projects they are PAYING FOR in the first place? Do you mean they have no right to influence the direction of the FOSS products they're paying for?
I guess you will uninstall Btrfs from your Linux kernel, then (merged back in February), which was developed, gee, by an Oracle employee over several years, and which puts Linux on an equal footing with Microsoft's ReFS filesystem...
And OpenJDK 7, and MySQL Community Edition, and you will never use VirtualBox (which Oracle made totally GPL, eliminating the separate "OSS" edition), or NetBeans, or Glassfish, just to name a few of the flagship Sun FOSS projects that Oracle has not only kept investing in, but increased the pace of development on...
But hey, hating companies that put a lot of money into FOSS development, just because they have some non-free products that pay for it all, seems to be the latest vogue.
In the words of Shuttleworth (http://ho.io/libreoffice)
--- Shuttleworth has a fairly serious disagreement with how the OpenOffice.org/LibreOffice split came about. He said that Sun made a $100 million "gift" to the community when it opened up the OpenOffice code. But a "radical faction" made the lives of the OpenOffice developers "hell" by refusing to contribute code under the Sun agreement. That eventually led to the split, but furthermore led Oracle to finally decide to stop OpenOffice development and lay off 100 employees. He contends that the pace of development for LibreOffice is not keeping up with what OpenOffice was able to achieve and wonders if OpenOffice would have been better off if the "factionalists" hadn't won.
There is a "pathological lack of understanding" among some parts of the community about what companies bring to the table, he said. People fear and mistrust the companies on one hand, while asking "where can I get a job in free software?" on the other. Companies bring jobs, he said. There is a lot of "ideological claptrap" that permeates the community and, while it is reasonable to be cautious about the motives of companies, avoiding them entirely is not rational. ---
Just my $0.02 FC
On Wed, Jul 11, 2012 at 11:50 AM, Fernando Cassia fcassia@gmail.com wrote:
On Wed, Jul 11, 2012 at 1:25 AM, John R Pierce pierce@hogranch.com wrote:
LibreOffice was created when Oracle bought Sun, a bunch of the core developers quit and started their own project,
BS if you ask me... Oracle bought Sun in APRIL 2009.
[snip: Oracle/Sun contributions to OSS projects]
As far as I am concerned, any OSS project can be forked. This has happened here and TUV is just eating its own dogfood using LO instead of OO.org.
Nothing shocking, really. Most informed people know how much Oracle has contributed to OSS, but also how it has tried to 'monetize' other stuff (thinking of Java here, with the recent Android controversy). They routinely profit from other people's work (their Unbreakable Linux distribution is not truly theirs, is it?).
Sometimes it makes more sense to open source stuff, sometimes it doesn't. You win some, you lose some. Business as usual.
Mr Shuttleworth obviously has his own agenda in this discussion. He is the first one to fork stuff for no (apparently) good reason (Unity) instead of cooperating with upstream.
just my 2 cents.
On Wed, Jul 11, 2012 at 7:36 AM, Natxo Asenjo natxo.asenjo@gmail.com wrote:
Most informed people know how much Oracle has contributed to OSS, but also how it has tried to 'monetize' other stuff
Gee, someone could think that they are a for-profit corporation, like IBM (whose DB2 is NOT open source, nor is Lotus Notes), yet I don't see the level of IBM hatred that I routinely see wrt ORCL.
When any corporation puts money into the development of FOSS technologies (like IBM, Oracle, and Red Hat have done), I applaud them. Yet some people always find a need to bash them as having an evil agenda, namely *god forbid* the PROFIT word...
But like you say business as usual or "move along, nothing to see here". ;)
FC
On 07/11/2012 07:28 AM, Fernando Cassia wrote:
On Wed, Jul 11, 2012 at 7:36 AM, Natxo Asenjo natxo.asenjo@gmail.com wrote:
Most informed people know how much Oracle has contributed to OSS, but also how it has tried to 'monetize' other stuff
Gee, someone could think that they are a for-profit corporation, like IBM (whose DB2 is NOT open source, nor is Lotus Notes), yet I don't see the level of IBM hatred that I routinely see wrt ORCL.
When any corporation puts money into the development of FOSS technologies (like IBM, Oracle, and Red Hat have done), I applaud them. Yet some people always find a need to bash them as having an evil agenda, namely *god forbid* the PROFIT word...
I think it is more the fact that Oracle seems to be two-faced in their dealings with FOSS, as opposed to IBM.
But like you say business as usual or "move along, nothing to see here". ;)
FC
On 11 July 2012 13:06, Steve Clark sclark@netwolves.com wrote:
I think it is more the fact that Oracle seems to be two-faced in their dealings with FOSS, as opposed to IBM.
So correct. Way back in 2001, I was there in London when IBM clearly stated they were going to spend one billion dollars on Linux that year. They did, all of us benefited, and IBM got sued by SCO because of it (the lawsuit was for $1B in damages)... All I see from Oracle is talk, and then a bit of shafting and backstabbing. They do good work (OCFS2, some of the PHP stuff, and Btrfs come to mind) and then they do some very ugly stuff (undercutting TUV in the hope of... what exactly, I haven't figured out yet).
They make a great database product. I only wish they had stuck to doing just that.
Hakan Koseoglu hakan@koseoglu.org wrote:
On 11 July 2012 13:06, Steve Clark sclark@netwolves.com wrote:
I think it is more the fact that Oracle seems to be two-faced in their dealings with FOSS, as opposed to IBM.
So correct. Way back in 2001, I was there in London when IBM clearly stated they were going to spend one billion dollars on Linux that year. They
IBM is also two-faced in their OSS engagement.
They treat Linux differently from others.
Jörg
Please write your concern to support@redhat.com. No one here really cares, because it's off topic. Thank you for your cooperation.
Joerg Schilling wrote:
Hakan Koseoglu hakan@koseoglu.org wrote:
On 11 July 2012 13:06, Steve Clark sclark@netwolves.com wrote:
I think it is more the fact that Oracle seems to be two-faced in their dealings with FOSS, as opposed to IBM.
So correct. Way back in 2001, I was there in London when IBM clearly stated they were going to spend one billion dollars on Linux that year. They
IBM is also two-faced in their OSS engagement.
They treat Linux differently from others.
Well, but IBM *loves* Linux, and I saw that 10-12 years ago. Let me put it this way: you're one of the world's largest companies, and you make a wider range of computers than pretty much anyone, and you've been doing it longer than almost anyone.
Now, would you like to support S/38 (I'm sure some are still running), AS400, RISC6000, AIX, DOS/SP/VME (and I have no idea how many more acronyms have been added since I last worked on one in the mid-nineties), MVS, etc, etc... or run Linux on *everything*, and tell users, when they want to go to a larger system, "sure, same o/s, nobody needs to learn a new system, just recompile your in-house software...."
mark
On Tue, Jul 10, 2012 at 11:17 PM, Michel Donais donais@telupton.com wrote:
Why, in 6.3, did they move from OpenOffice to LibreOffice?
The jihadists against the Sun Contributor Agreement created the so-called "exodus" of programmers from OpenOffice.org to "Libre" Office, then spat in Oracle's eye and subsequently invited them to dinner (to join "the document foundation"). Actually, the so-called "exodus" left about 50-60 employees at ORCL, but still that wasn't enough to contribute development.
Then a series of articles came out claiming that OpenOffice.org was "dead", and that was about the same time many distros decided they would package the "new" LibreOffice. Of course the first OpenOffice forkers, Novell, celebrated the move (remember Novell's Go-OO fork, which supported MS-OOXML, which Sun refused).
But the reality was a bit different and not as certain as TDF painted it... Oracle decided to contribute the OpenOffice.org trademarks and source code to the Apache Foundation. Apache OpenOffice was thus born, and IBM later announced its intention to support the project and contribute the former Lotus Symphony source code to it, too.
That leaves us where we stand, with two free office suites forked from the same code.
For more read this from Ubuntu´s Shuttleworth: http://ho.io/libreoffice
Just my $0.02 FC
On Wed, Jul 11, 2012 at 1:14 AM, Fernando Cassia fcassia@gmail.com wrote:
but still that wasn't enough to contribute development.
Sorry, typo, I meant "to continue development" (of Sun/Oracle's proprietary product alongside OO.o under a dual license, namely StarOffice, which Oracle had renamed "Oracle Open Office", without the .org).
FC
On Wednesday 11 July 2012, Fernando Cassia fcassia@gmail.com wrote:
The jihadists against the Sun Contributor Agreement created the so-called "exodus" of programmers from OpenOffice.org to "Libre" Office, then spat in Oracle's eye and subsequently invited them to dinner (to join "the document foundation"). Actually, the so-called "exodus" left about 50-60 employees at ORCL, but still that wasn't enough to contribute development.
I think this is getting dangerously close to trolling.
On Wed, Jul 11, 2012 at 9:16 AM, Yves Bellefeuille yan@storm.ca wrote:
I think this is getting dangerously close to trolling.
Not really. I just wanted to express my POV, which is not the mainstream opinion. I didn't come here to argue against LO in CentOS; the OP did. For me it's fine, I'll just ignore it, as I don't use ANY office suite on servers... End of story as far as I'm concerned.
FC
On 7/11/12, Les Mikesell lesmikesell@gmail.com wrote:
Hours? This should happen in the time it takes to transfer a directory listing and read through it unless you used --ignore-times in the arguments. If you have many millions of files or not enough RAM to hold the list I suppose it could take hours.
Definitely not that many files; more in the range of tens of thousands. But definitely more than an hour or two, with small bursts of network traffic.
Rear 'might' be quick and easy. It is intended to be almost unattended and do everything for you. As for extra software - it is a 'yum install' from EPEL. The down side is that if it doesn't work, it isn't very well documented to help figure out how to fix it. I'd still recommend looking at it as a backup/restore solution with an option to clone. With a minimum amount of fiddling you can get it to generate a boot iso image that will re-create the source filesystem layout and bring up the network. Then, if you didn't want to let it handle the backup/restore part you could manually rsync to it from the live system.
I'll look into it when I need to do this again. It just isn't something I expect to do with any regularity and unfortunately server admin isn't what directly goes into my salary so it has to take a second priority.
On Tue, Jul 10, 2012 at 11:59 PM, Emmanuel Noobadmin centos.admin@gmail.com wrote:
On 7/11/12, Les Mikesell lesmikesell@gmail.com wrote:
Hours? This should happen in the time it takes to transfer a directory listing and read through it unless you used --ignore-times in the arguments. If you have many millions of files or not enough RAM to hold the list I suppose it could take hours.
Definitely not that many files; more in the range of tens of thousands. But definitely more than an hour or two, with small bursts of network traffic.
Perhaps you have some very large files with small changes then (mailboxes, logfiles, db's, etc.). In that case the receiving rsync spends a lot of time copying the previous version of the file in addition to merging the changed bits.
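If that is the case, rsync's --inplace option may help, since it updates the existing destination file rather than building a whole new copy first (with the caveat that an interrupted transfer leaves the file inconsistent); the path and host below are examples:

    # update big, slowly-changing files (mail spools, logs, database dumps) in place on the receiver
    rsync -aH --inplace /var/lib/mysql-dumps/ root@backuphost:/var/lib/mysql-dumps/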
Rear 'might' be quick and easy. It is intended to be almost unattended and do everything for you. As for extra software - it is a 'yum install' from EPEL. The down side is that if it doesn't work, it isn't very well documented to help figure out how to fix it. I'd still recommend looking at it as a backup/restore solution with an option to clone. With a minimum amount of fiddling you can get it to generate a boot iso image that will re-create the source filesystem layout and bring up the network. Then, if you didn't want to let it handle the backup/restore part you could manually rsync to it from the live system.
I'll look into it when I need to do this again. It just isn't something I expect to do with any regularity and unfortunately server admin isn't what directly goes into my salary so it has to take a second priority.
ReaR's (Relax-and-Recover) real purpose is to be a fully automatic restore to the existing hardware after replacing disks, etc., something that is relatively hard to do with complex filesystem layouts (LVM, RAID, etc.) and something armchair sysadmins are likely to need when they least expect it. It does that job pretty well with a couple of lines of config setup (point it to an NFS share to hold the backup) for anything where live tar backups are likely to work. The whole point of the tool is that you don't need to know what it is doing, and pretty much anyone could do the restore on bare metal. Using it to clone or to move to a modified layout is sort of an afterthought at this point, but it is still not unreasonable: it is just a bunch of shell scripts wrapping the system's native tools, though you have to figure out the content of the files where it stores the layout it will rebuild.
On Jul 8, 2012, at 10:34 PM, Emmanuel Noobadmin wrote:
On 7/9/12, Micky mickylmartin@gmail.com wrote:
The best and most traditional way, which has been around for decades, is an rsync followed by reinstallation of the boot loader. It always works if you know how it's done.
The problem I found with rsync is that it is very slow when there are a lot of small files. Any idea how this can be improved on or is that a fundamental limit?
I do dump/restores for this sort of thing.
- aurf
aurfalien wrote:
On Jul 8, 2012, at 10:34 PM, Emmanuel Noobadmin wrote:
On 7/9/12, Micky mickylmartin@gmail.com wrote:
The best and most traditional way, which has been around for decades, is an rsync followed by reinstallation of the boot loader. It always works if you know how it's done.
The problem I found with rsync is that it is very slow when there are a lot of small files. Any idea how this can be improved on or is that a fundamental limit?
I do dump/restores for this sort of thing.
Depends on whether you want everything. And of course, if there's a hardware difference, you need to chroot (assuming you rsync'd to special directories, like /new and /boot/new): do some mounts, chroot in, and rebuild the initrd.img.
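For what it's worth, that chroot/initrd step usually boils down to something like the following; the mount point and kernel version are examples, and on CentOS 6 dracut replaces mkinitrd:

    # make the copied system usable as a chroot
    mount --bind /dev  /mnt/newroot/dev
    mount --bind /proc /mnt/newroot/proc
    mount --bind /sys  /mnt/newroot/sys
    chroot /mnt/newroot /bin/bash
    # inside the chroot: rebuild the initrd for the installed kernel and reinstall grub
    mkinitrd -f /boot/initrd-2.6.18-308.el5.img 2.6.18-308.el5
    grub-install /dev/sda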
mark
On Mon, 2012-07-09 at 09:00 -0700, aurfalien wrote:
On Jul 8, 2012, at 10:34 PM, Emmanuel Noobadmin wrote:
On 7/9/12, Micky mickylmartin@gmail.com wrote:
The best and most traditional way, which has been around for decades, is an rsync followed by reinstallation of the boot loader. It always works if you know how it's done.
The problem I found with rsync is that it is very slow when there are a lot of small files. Any idea how this can be improved on or is that a fundamental limit?
I do dump/restores for this sort of thing.
+1
John
On 7/10/12, aurfalien aurfalien@gmail.com wrote:
I do dump/restores for this sort of thing.
Thanks for this, I didn't know there was such a command until now! But it looks like it should work for me, since the bulk of the data is usually in /home, which is usually a separate fs/mount. I can always resize the fs after the transfer, so I'll give this a try the next time I need to do a dup/migrate.
On 07/09/12 11:39 PM, Emmanuel Noobadmin wrote:
On 7/10/12, aurfalien aurfalien@gmail.com wrote:
I do dump/restores for this sort of thing.
Thanks for this, I didn't know there was such a command until now! But it looks like it should work for me, since the bulk of the data is usually in /home, which is usually a separate fs/mount. I can always resize the fs after the transfer, so I'll give this a try the next time I need to do a dup/migrate.
dump should not be used on mounted file systems, except / in single user.
restore can restore to any size filesystem of the same type (ext3, ext4) that's large enough to hold the files dumped.
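A typical invocation, with John's caveats in mind (device names and the mount point are examples):

    # create and mount the target filesystem, then pipe a level-0 dump of the source into restore
    mount /dev/sdb3 /mnt/newhome
    cd /mnt/newhome
    dump -0 -f - /dev/sda3 | restore -rf -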
On 7/10/12, John R Pierce pierce@hogranch.com wrote:
dump should not be used on mounted file systems, except / in single user.
Aha, thanks for the warning!
On 07/10/12 12:06 AM, Emmanuel Noobadmin wrote:
On 7/10/12, John R Pierce pierce@hogranch.com wrote:
dump should not be used on mounted file systems, except / in single user.
Aha, thanks for the warning!
If you're using LVM, however, you can take a filesystem snapshot and dump the snapshot, as that is a point-in-time replica of the filesystem.
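Roughly like this (the volume group, LV names, and snapshot size are invented):

    # create a snapshot, dump it, then drop it
    lvcreate --size 2G --snapshot --name homesnap /dev/vg0/home
    dump -0 -f /backup/home.dump /dev/vg0/homesnap
    lvremove -f /dev/vg0/homesnap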
On 7/10/12, John R Pierce pierce@hogranch.com wrote:
If you're using LVM, however, you can take a filesystem snapshot and dump the snapshot, as that is a point-in-time replica of the filesystem.
Unfortunately I wasn't.
It does seem that essentially all the better methods that minimize downtime require the system to be prepped when first installed, be it LVM/MD/DRBD.
So going forward, I'm basically making it a point to use an MD mirror on all new installs, including VMs that wouldn't otherwise run RAID 1 virtually, since the physical storage is already RAIDed.
The assumption is that I should be able to just add an iSCSI target as a member of the degraded RAID mirror, wait for it to sync, then shut down and start the new server within minutes, as opposed to waiting a couple of hours for rsync or any other form of imaging/dump to back up the current state.
The added benefit of this approach, it would seem, is that I could use the same approach to back up the entire fs.
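As a sketch of that idea (the portal address, IQN, and device names are invented):

    # log in to the iSCSI disk that will become the missing mirror member
    iscsiadm -m discovery -t sendtargets -p 192.168.1.50
    iscsiadm -m node -T iqn.2012-07.com.example:clone -p 192.168.1.50 --login
    # add it to the degraded md mirror; mdadm starts rebuilding onto it
    mdadm --manage /dev/md0 --add /dev/sdc1
    watch cat /proc/mdstat
    # once in sync, fail and remove it so the copy can be brought up on the new server
    mdadm --manage /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1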
On 9/7/2012 1:48 AM, Micky wrote:
The best and most traditional way, which has been around for decades, is an rsync followed by reinstallation of the boot loader.
We are using mondorescue (mondoarchive and mondorestore). Works fine and supports many ways of archiving/restoring, LVM etc.
I recommend it. Good both for backups and cloning.
Nick