[CentOS] s3 as mysql directory
jbilly2002 at gmail.com
Mon Oct 1 15:11:02 EDT 2012
700 GB is quite large. How much data have you backed up so far? It could
be that the catalog data is building up; have you looked at
While your setup should work in theory, have you tested restores?
Searches through the catalog might be slower than you expect at 700 GB.
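To see where the space is going, something along these lines will list the
largest catalog tables (assuming the Bacula database is named "bacula";
adjust the name and credentials to match your setup). The File table is
usually the one that balloons:

  mysql -u root -p -e "
    SELECT table_name,
           ROUND((data_length + index_length)/1024/1024/1024, 2) AS size_gb
      FROM information_schema.tables
     WHERE table_schema = 'bacula'
     ORDER BY (data_length + index_length) DESC;"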
On 10/1/12, Tim Dunphy <bluethundr at gmail.com> wrote:
> Hello list,
> I am soliciting opinions here, as opposed to technical help, on an idea I
> have. I've set up a Bacula backup system on an AWS volume. Bacula stores a
> LOT of information in its mysql database (in my setup; you can also use
> postgres or sqlite if you choose). Since I started doing this I've noticed
> that the mysql data directory has swelled to over 700GB! That's quite a
> lot, and it's eating up valuable disk space.
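> (Something like "du -sh /var/lib/mysql" is a quick way to see how big the
> data directory has gotten.)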
> So I had an idea. What about using the FUSE-based s3fs to mount an S3
> bucket on the local filesystem and using that as your mysql data dir? In
> other words, mount your S3 bucket on /var/lib/mysql.
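> Roughly the kind of mount I mean (the bucket name is just an example;
> s3fs reads the AWS keys from its password file):
>
>   echo 'ACCESS_KEY_ID:SECRET_ACCESS_KEY' > /etc/passwd-s3fs
>   chmod 600 /etc/passwd-s3fs
>   s3fs my-bacula-catalog /var/lib/mysql -o allow_other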
> I used this article to set up the s3fs file system.
> And everything went as planned. So my question to you, dear listers, is: if
> I do start using a locally mounted S3 bucket as my mysqld data dir, will
> the performance of the database be acceptable? If so, why? If not, are
> there any other reasons why it would NOT be a good idea to do this?
> The steps I have in mind are basically this:
> 1) mysqldump --all-databases > alldb.sql
> 2) stop mysql
> 3) rm -rf /var/lib/mysql/*
> 4) mount the s3 bucket on /var/lib/mysql
> 5) start mysql
> 6) restore the alldb.sql dump
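> In rough command form (the bucket name is again just an example; adjust
> paths and credentials to taste):
>
>   # keep the dump somewhere outside /var/lib/mysql
>   mysqldump --all-databases > alldb.sql
>   service mysqld stop
>   rm -rf /var/lib/mysql/*
>   s3fs my-bacula-catalog /var/lib/mysql -o allow_other
>   # mysqld needs its system tables re-initialized in the now-empty data
>   # dir (mysql_install_db) before or as part of the first start
>   service mysqld start
>   mysql < alldb.sql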
> Thanks for your opinions on this!
> GPG me!!
> gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
> CentOS mailing list
> CentOS at centos.org