[CentOS] s3 as mysql directory
leonfauster at googlemail.com
Mon Oct 1 16:04:22 EDT 2012
On 01.10.2012 at 20:24, Tim Dunphy wrote:
> Hello list,
> I am soliciting opinion here, as opposed to technical help, with an idea I
> have. I've set up a bacula backup system on an AWS volume. Bacula stores a
> LOT of information in its mysql database (in my setup; you can also use
> postgres or sqlite if you choose). Since I've started doing this I notice
> that the mysql data directory has swelled to over 700GB! That's quite a lot,
> and it's eating up valuable disk space.
> So I had an idea. What about using the FUSE-based s3fs to mount an S3
> bucket on the local filesystem and using that as your mysql data dir? In
> other words, mount your s3 bucket on /var/lib/mysql.
> I used this article to set up the s3fs file system
> And everything went as planned. So my question to you, dear listers, is: if I
> do start using a locally mounted s3 bucket as my mysqld data dir, will
> performance of the database be acceptable? If so, why? If not, are there any
> other reasons why it would NOT be a good idea to do this?
> The steps I have in mind are basically this:
> 1) mysqldump --all-databases > alldb.sql
> 2) stop mysql
> 3) rm -rf /var/lib/mysql/*
> 4) mount the s3 bucket on /var/lib/mysql
> 5) start mysql
> 6) restore the alldb.sql dump
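The six steps above could be sketched as a shell script like the one below. It defaults to a dry run that only prints each command, since step 3 is destructive; the bucket name "bacula-db", the dump path, and the s3fs mount options are placeholders (s3fs credentials are assumed to be configured already, e.g. in /etc/passwd-s3fs).

```shell
#!/bin/sh
# Dry-run sketch of the proposed migration; set DRY_RUN=0 to execute for real.
# Bucket name, dump path, and mount options are illustrative placeholders.
set -e
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "+ $*"        # dry run: show the command instead of running it
    else
        "$@"
    fi
}

run sh -c 'mysqldump --all-databases > /root/alldb.sql'   # 1) dump everything
run service mysqld stop                                   # 2) stop mysql
run sh -c 'rm -rf /var/lib/mysql/*'                       # 3) clear old datadir (careful!)
run s3fs bacula-db /var/lib/mysql -o allow_other          # 4) mount the bucket
run service mysqld start                                  # 5) start mysql
run sh -c 'mysql < /root/alldb.sql'                       # 6) restore the dump
```

Note that mysqld needs the datadir owned by the mysql user; with s3fs that typically means extra `-o uid=.../gid=...` mount options, which is one more place this can go wrong.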
Your motivation is to save the resources that are occupied by /var/lib/mysql??
Please check the size of your "mysqldump --all-databases > alldb.sql" output.
Is the dump also that big?
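A quick way to make that comparison (paths are the ones from the thread; adjust as needed):

```shell
# Compare the logical dump size with the on-disk datadir size.
show_size() {
    # print a human-readable size, or a note if the path is absent
    if [ -e "$1" ]; then
        du -sh "$1"
    else
        echo "missing: $1"
    fi
}

show_size /var/lib/mysql    # physical size: data + indexes + InnoDB tablespace overhead
show_size /root/alldb.sql   # logical size of the data alone
```

If the dump is far smaller than 700GB, the bloat is index/tablespace overhead rather than data, and shrinking it locally may beat moving the datadir to S3.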