MySQL data backup: going beyond mysqldump

A user on a Linux user group mailing list asked about this, and I was one of the people replying. Re-posting here as I reckon it’s of wider interest.

> […] tens of gigs of data in MySQL databases.
> Some in memory tables, some MyISAM, a fair bit InnoDB. According to my
> understanding, when one doesn’t have several hours to take a DB
> offline and do dbbackup, there was/is ibbackup from InnoBase.. but now
> that MySQL and InnoBase have both been ‘Oracle Enterprised’, said
> product is now restricted to MySQL Enterprise customers..
>
> Some quick searching has suggested Percona XtraBackup as a potential
> FOSS alternative.
> What backup techniques do people employ around these parts for backups
> of large mixed MySQL data sets where downtime *must* be minimised?
>
> Has your backup plan ever been put to the test?

You should put it to the test regularly, not just when it’s needed.
An untested backup is not really a backup, I think.

At Open Query we tend to use dual master setups with MMM, other replication slaves, mysqldump, and XtraBackup or LVM snapshots. It’s not just about having backups, but also about general resilience, maintenance options, and scalability. I’ll clarify:

  • XtraBackup and LVM give you physical backups. That’s nice if you want to recover or clone a complete instance as-is. But if anything is wrong, it’ll be all stuffed (that is, you can sometimes recover InnoDB tablespaces and there are tools for it, but time may not be on your side). Note that LVM cannot snapshot across multiple volumes consistently, so if you have your InnoDB ibdata/IBD files and iblog files on separate spindles, LVM is not suitable.
  • mysqldump for logical (SQL) backups; a sample invocation is sketched after this list. Most if not all setups should have this. Even if the file(s) were to be corrupted, they’re still readable since it’s plain SQL. You can do partial restores, which is handy in some cases. It’ll be slower to load, so having *only* an SQL dump of a larger dataset is not a good idea.
  • Some of the above backups can and should *also* be copied off-site. That’s for extra safety, but in terms of recovery speed it may not be optimal and should not be relied upon.
  • Having dual masters allows easier maintenance without scheduled outages, as well as resilience when, for instance, hardware breaks (and it does).
  • Slaves. You can even delay a slave (Maatkit has a tool for this), which gives you a live, correct image even in the case of a user error, provided you get to it in time. Also, you want enough slack in your infra to be able to initialise a new slave off an existing one. Scaling up at a time when high load is already occurring can become painful if your infra is not prepared for it.
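
As a rough illustration of the mysqldump item above, here is the kind of invocation we’d use for a logical backup. It is a minimal sketch only: credentials are assumed to come from the default client configuration, the output path is a placeholder, and --single-transaction gives a consistent snapshot for transactional tables such as InnoDB, not for MyISAM.

# Logical backup; --master-data=2 records the binlog position as a comment,
# which is handy when seeding a new slave from this dump.
mysqldump --single-transaction --master-data=2 --all-databases \
  | gzip > /backup/$(hostname)-$(date +%Y%m%d).sql.gz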

A key issue to consider is this… if the dataset is sufficiently large, and the online requirements high enough, you can’t afford to just have backups. Why? Because: how quickly can you deploy suitable new hardware, install the OS, restore, validate, and put it all back online?

In many cases one or more aspects of the above list simply take too long, so my summary would be “then you don’t really have a backup”. Clients tend to argue with me on that, but only fairly briefly, until they see the point: if a restore takes longer than you can afford, that backup mechanism is unsuitable.

So, we use a combination of tools and approaches depending on needs, but in general terms we aim for keeping the overall environment online (individual machines can and will fail! relying on a magic box or SAN to not fail *will* get you bitten) to vastly reduce the instances where an actual restore is required.
Into that picture also comes using separate test/staging servers to not have developers stuff around on live servers (human error is an important cause of hassles).

In our training modules, we’ve combined the backups, recovery and replication topics as it’s clearly all intertwined and overlapping. Discussing backup techniques separate from replication and dual master setups makes no sense to us. It needs to be put in place with an overall vision.

Note that a SAN is not a backup strategy. And neither is replication on its own.


Importing a file dumped from MySQL with mysqldump into drizzle

As big fans of new technology, we try to keep up to date with what’s happening in the industry. As such, I decided to start using drizzle on my development machine since they announced GA this week.
First exercise: import a file dumped from a MySQL server I don’t have access to into drizzle. Normally, you can use drizzledump on the MySQL server and make it dump a drizzle-compatible file. Not in this case, so I decided to sed my way through the various errors. Not pretty, and I hope that at some point we’ll have a tool that can convert a mysqldump into a drizzle-compatible file, but it works for now.
Here’s what I had to do. Note that this is by no means complete, nor does it come with any guarantees; it’s just a starting point.
# This file started by setting a SQL_MODE. That doesn't exist in 
# drizzle, so we comment it out
sed -i "s/^SET SQL_MODE/#SET SQL_MODE/g" mysqldump.sql 

# The create database statement set a default character set. 
# Everything in drizzle is UTF8, so let's lose it!
sed -i "s/DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci//g" mysqldump.sql 

# The table definitions mentioned a default character set. 
# Everything in drizzle is UTF8, so let's lose it!
sed -i 's/DEFAULT CHARSET=utf8//g' mysqldump.sql 

# No MyISAM except for temporary tables, so away with it.
sed -i 's/ENGINE=MyISAM//g' mysqldump.sql 

# Invalid timestamps are not accepted in drizzle, so this should be a null 
# value. Since some of the columns in this file are actually NOT NULL defined, 
# for now I just set those dates to 1970. UGLY, but works for me. Don't do this 
# on anything that will ever go anywhere near production though!
sed -i "s/'0000-00-00/'1970-01-01/g" mysqldump.sql 

# tinyint doesn't exist anymore, so just replace with integer. Note that you'll 
# have to do this for all data types that no longer exist in drizzle
sed -i "s/tinyint(.*)/integer/g" mysqldump.sql
Hope this helps others!

A day in the life of Datacenter Disasters

Open Query currently hosts a large part of our infrastructure at Linode. We are extremely happy with their performance, stability and support. Unfortunately, any chain is only as strong as its weakest link. This week, there was a major thunderstorm near the Hurricane Electric datacenter in Fremont (anyone else think that name is funny in combination with the event in question?) and, through a massive power surge, most of HE’s datacenter lost power. Among the Linodes affected in our infrastructure were all of the machines involved in our MMM setup.

The masters came back up before the monitor, which is around the time I was alerted. Logging in, I noticed replication was broken on one of the masters, but the other master seemed healthy. Since the monitor was not up and it seemed like it could potentially be hours before it came back, I decided it was time for manual action. Since our MMM setup doesn’t have slaves currently, I decided a good option would be to mimic MMM and move the virtual IP to the healthy server.

I executed the following manual commands to make the desired changes:

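# Attach the virtual IP to this host's interface, then advertise it via ARP
# so switches and neighbouring hosts learn the address has moved: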
$ ip addr add <virtip> dev eth0
$ /usr/sbin/arping -I eth0 -c 5 <virtip>

That brought all our applications back online, which was the desired effect. I manually fixed replication by repositioning the masters. A while later, the monitor came up and automatically took over, bringing everything back to normal.

Everything went well, but it wasn’t until the next morning that I realised there was a possible flaw in my logic (it didn’t affect us, but I wanted to blog about it to make others aware): when replication stopped, master A was active. My commands above made master B the active master. Now, in theory it is possible that writes were sent to master A after replication broke, and anything subsequently sent to master B would presume those writes were present there, which they were not, as replication never applied them. This is one of those niche occasions where data drift can occur without anyone noticing.

My recommendation is to not do what I did unless you are very certain your setup doesn’t suffer from this potential problem. If you do decide to use this trick, however, make sure to use the Maatkit tools mk-table-checksum and mk-table-sync when all is well again to check for (and correct!) data drift.


PBXT early impressions in production use

With Paul McCullagh’s PBXT storage engine getting integrated into MariaDB 5.1, it’s never been easier to try it out. So we have, on a slave off one of our own production systems, which gets lots of inserts from our Zabbix monitoring system.

That’s possibly an ideal usage profile, since PBXT is a log-based engine (simplistically stated, it indexes its transaction logs, rather than rewriting data from log into index and indexing that), so it should require less disk I/O than, say, InnoDB. That means it should be particularly suited to workloads such as logging, which have lots of inserts on a sustained basis. Note that for short insert bursts you may not see a difference compared with InnoDB because of caching, but sustain the load and the difference becomes noticeable.

Because PBXT has such a different/distinct architecture, there’s a lot of learning involved. Together with Paul, and with help from Roland Bouman, we also created a stored procedure that calculates the optimal average row size for PBXT, and even generates ALTER TABLE statements you can paste to convert tables. The AVG_ROW_LENGTH option is quite critical with PBXT: if set too big (or if you let PBXT guess and it gets it wrong) it’ll eat heaps more disk space as well as being much slower, and if set too small it’ll also be slower; thus, it needs to be in the right ballpark. For existing datasets it can be calculated, so that’s what we’ve worked on. The procs will be published shortly, and Paul will also include them with the rest of the PBXT files.
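
Those procs aren’t reproduced here, but as a rough back-of-envelope check you can approximate the current average row length of existing tables from information_schema. This is a sketch only, not the procedure mentioned above; the schema and table names are placeholders and the statistics are estimates for InnoDB tables.

# Approximate average row length per table in a given schema
mysql -e "SELECT table_name,
                 ROUND(data_length / table_rows) AS approx_avg_row_length
          FROM information_schema.tables
          WHERE table_schema = 'zabbix' AND table_rows > 0;"

# ...which could then feed something like this (table name and value purely illustrative)
mysql -e "ALTER TABLE zabbix.history ENGINE=PBXT AVG_ROW_LENGTH=90;"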

Another important aspect for PBXT is having sufficient cache memory allocated; otherwise operations can take much, much longer. While the exact “cause” is different, one would notice similar performance behaviour when using InnoDB on larger datasets with buffers that are too small for the purpose.

So, while using or converting some tables to PBXT takes a bit of consideration, effort and learning, it appears to be dealing with the real world very well so far – and that’s a testament to Paul’s experience. Paul is also very responsive to questions. As we gain more experience, it is our intent to try PBXT for some of our clients whose operational needs might make it a particularly good fit.

I should also mention that it is possible to have a consistent transaction between PBXT, InnoDB and the binary log, because of the 2-phase commit (XA) infrastructure. This means that you should even be able to do a mysqldump with --single-transaction if you have both PBXT and InnoDB tables, and acquire a consistent snapshot!

More experiences and details to come.


Quest for Resilience: Multi-DC Masters

This is a Request for Input. Dual MySQL masters with MMM in a single datacentre are in common use, and other setups like DRBD and of course VM/SAN-based failover solutions are conceptually straightforward also. Thus, achieving various forms of resilience within a single datacentre is doable and not costly.

Doing the same across multiple (let’s for simplicity’s sake limit it to two) datacentres is another matter. MySQL replication works well across longer links, and it can use MySQL’s in-built SSL or tools like stunnel. Of course it needs monitoring, as usual, but since it’s asynchronous the latency between the datacentres is not a big issue (apart from the fact that the second server gets up-to-date a little bit later).
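
For reference, a minimal sketch of pointing a slave at a remote master over MySQL’s built-in SSL. Hostname, user, password and certificate path are placeholders, and the binlog coordinates are omitted for brevity; they would come from your backup or the master’s status.

mysql -e "CHANGE MASTER TO
            MASTER_HOST='master1.dc2.example.com',
            MASTER_USER='repl',
            MASTER_PASSWORD='********',
            MASTER_SSL=1,
            MASTER_SSL_CA='/etc/mysql/ssl/ca-cert.pem';
          START SLAVE;"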

But as those who have tried will know, having a client (application server) connect to a MySQL instance in a remote datacentre is a whole other matter: latency becomes a big issue and is generally very noticeable on the front end. One solution for that is to have application servers only connect to their “local” MySQL server.

So the question to you is: do you now have (or have you had in the past) a setup with MySQL masters in different datacentres? What did that setup look like (which additional tools and infra did you use for it), and what were your experiences (good and bad, solutions to issues, etc.)? I’m trying to gather additional expertise that might already be out there, which can help us all. Please add your input! Thanks.


Open Query @ MySQL Conf & Expo 2010

Walter and I are giving a tutorial on Monday morning, MySQL (and MariaDB) Dual Master Setups with MMM. I believe there are still some seats available – tutorials cost a bit extra when you register for the conference, so you do need to sign up if you want to be there! It’s a hands-on tutorial/workshop: we’ll be setting up multiple clusters with dual masters and the whole rest of the MMM fun, using VMs on your laptops and a separate wired network. Nothing beats messing with something live, breaking it, and seeing what happens!

Then on Tuesday afternoon (5:15pm, Ballroom F), Antony and I will do a session on the OQGRAPH engine: hierarchies/graphs inside the database made easy. If you’ve been struggling with trees in SQL, would really like to effectively use social networking in your applications, need to work with RDF datasets, or have been exploring neo4j but otherwise have everything in MySQL or MariaDB, this session is for you.

We (and a few others from OQ) will be around for the entire conference, the community dinner (Monday evening) and other social events, and are happy to answer any questions you might have. You’ll be able to easily recognise us in the crowds by our distinctive, friendly Open Query olive green shirts (green stands out because most companies mainly use blue/grey and orange/red).

Naturally we would love to do business with you (proactive support services, OQGRAPH development), but we don’t push ourselves onto unsuitable scenarios. In fact, we’re known to refer and even actively introduce clients to other competent vendors where appropriate. In any case, it’s our pleasure and privilege to meet you!

See you all in Santa Clara in a few days.


MySQL University session Oct 22: Dual Master Setups With MMM

This Thursday (October 22nd, 13:00 UTC), Walter Heck (of Open Query) will present Dual Master Setups With MMM. MMM (Multi-Master Replication Manager for MySQL) is a set of flexible scripts to perform monitoring/failover and management of MySQL master-master replication configurations (with only one node writable at any time). Session slides (PDF).

The toolset also has the ability to read-balance standard master/slave configurations with any number of slaves, so you can use it to move virtual IP addresses around a group of servers depending on whether they are behind in replication. For more information, see mysql-mmm.org.

For MySQL University sessions you point your browser here. You need a browser with a working Flash plugin. You may register for a Dimdim account, but you don’t have to.


Tool of the Day: rsnapshot

rsnapshot is a filesystem snapshot utility for making backups of local and remote systems, based on rsync. Rather than just doing a complete copy every time, it uses hardlinks to create incrementals (which, from a local perspective, are also full backups). You can specify how long to keep old backups, and all the other usual jazz. You’d generally have it connect over SSH. You’ll want/need to run it on a filesystem that supports hardlinks, which precludes NTFS.

In the context of MySQL, you can’t just do a filesystem copy of your MySQL data/logs: that would be inconsistent and broken. (Amazingly, I still see people arguing about this – but heck, it’s your business/data to gamble with, right?)

Anyway, if you do a local mysqldump also, or for instance use XtraBackup to take a binary backup of your InnoDB tablespace/logs, then rsnapshot can be used to automate the transfer of those files to a different geographical location.
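
For illustration, the relevant bits of such a setup might look roughly like this. Host, paths and retention counts are placeholders; rsnapshot requires tabs between fields in its config file (shown here as spaces), and newer versions spell the interval keyword “retain”.

# Excerpt of /etc/rsnapshot.conf
snapshot_root   /srv/backups/snapshots/
interval        daily   7
interval        weekly  4
backup          backup@db1.example.com:/var/backups/mysql/     db1/

# Show what would be done without doing it, then run for real (typically via cron)
rsnapshot -t daily
rsnapshot daily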

Two extra things you need to do:

  • Regularly test your backups. They can fail, and that can be fatal. For XtraBackup, run the prepare command and essentially start up a MySQL instance on it to make sure it’s all happy (a sketch follows this list). Having this already done also saves time if you need to restore.
  • When estimating restore time, remember to include the time needed to transfer the files back to the target server.
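
A sketch of that XtraBackup verification, using the innobackupex wrapper. Directory names are placeholders and the exact invocation depends on your XtraBackup version; ideally run the throwaway instance against a copy, since starting mysqld can modify the files.

# Apply the transaction log so the backup is consistent
innobackupex --apply-log /srv/backups/xtrabackup/2010-02-01_full/

# Start a throwaway mysqld on the prepared files and check it comes up cleanly
mysqld --no-defaults --datadir=/srv/backups/xtrabackup/2010-02-01_full \
       --port=3307 --socket=/tmp/mysqld_verify.sock &
mysql --socket=/tmp/mysqld_verify.sock -e "SHOW DATABASES;"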

Tool of the Day: Firefox Tab Kit extension

We often need many tabs open in a browser, and horizontal tabs become unmanageable. Tab Kit allows you to have them vertically on the left, with various additional configuration choices.

I opted for the tree structure, so when I open a tab from another one it’ll show up as a child of the original. I can “lock” tabs so they cannot be closed by an accidental click or keypress. They get a “read” marker, so if you open a few tabs and leave them till later you can still tell which ones you’ve actually already looked at. And there’s colour coding also.

In short, a great help. Just click the Tools/Add-Ons menu in Firefox and find Tab Kit in the extensions. Install, configure, and enjoy!
