Migrating Your MySQL Database to Amazon RDS

October 1, 2015

If you manage your own database servers, at some point you’ve probably thought about moving to “the cloud”. Cloud databases provide some nice benefits when compared to traditional bare-metal servers:

  1. Ability to scale up (and down) easily
  2. Essentially unlimited disk space (no more “the hard disk is full” panic-attacks!)
  3. Automated backups and snapshots of your data
  4. Expertly-managed infrastructure, so you can focus on what matters (your data)

There are a variety of providers out there, but one option you should consider is Amazon’s cloud database service, named Amazon RDS.

Amazon RDS supports several databases: MySQL, PostgreSQL, Oracle, and SQL Server. This article will focus on MySQL, and specifically, how to get your data out of your current database, and into Amazon RDS for MySQL.

I recently followed these steps when exporting and importing an extremely large MySQL database that was a whopping 60+ GB! There are other ways to import your data, but I found this method to be the simplest. It doesn’t require setting up or configuring additional servers to middle-man the data transfer, and it doesn’t require special configuration changes to your existing MySQL server to make it work.

Let’s begin!

Step 1: Export Schema from your current MySQL Server

SSH into your current server and execute a command like the following, replacing the stuff in [BRACKETS] with your information:

$ mysqldump -u [MYSQLUSER] -p[MYSQLPASSWORD] --no-data [DATABASENAME] > schema_only.sql

This exports your schema (table definitions only, no row data) as a single .sql file on the local disk. If you use stored procedures or events, also pass --routines and --events; triggers are included by default.
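Before moving on, it's worth a quick sanity check that the dump actually captured your tables. A minimal sketch (count_tables is a hypothetical helper, not part of the MySQL tools):

```shell
# count_tables FILE: print how many CREATE TABLE statements a dump contains.
# A quick way to confirm the schema export captured all your tables.
count_tables() {
  grep -c 'CREATE TABLE' "$1"
}

# Example: count_tables schema_only.sql
```

If the count doesn't match what SHOW TABLES reports on the source server, re-run the export before proceeding.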

Step 2: Export Data from your current MySQL Server

Still inside your SSH session:

  1. Run mysqldump with the "--tab" parameter to dump individual tables (schema AND data) to separate files. (We're using a "data" directory in this example.)

    $ mysqldump -u [MYSQLUSER] -p[MYSQLPASSWORD] --single-transaction --compress --order-by-primary --tab=data [DATABASENAME]

    (If you get “permission denied” errors when exporting your data, see FOOTNOTE below)

  2. You should have a bunch of files in your data directory with .sql and .txt extensions.
  3. If you have a very large database, you might need to split any .txt files that are larger than 1 GB into files that are 1 GB each (Amazon recommendation). If none of your .txt files are larger than 1 GB, you can skip this step.

    The following command will produce split files with names: big_table.part_00, big_table.part_01, etc.

    $ cd data
    $ split -C 1024m -d big_table.txt big_table.part_

    If you're on Mac OS X, the command is a little different (its split does not support numeric "part" suffixes; it uses letters instead). So you'll get big_table.part_aa, big_table.part_ab, etc.

    $ split -b 1024m big_table.txt big_table.part_

    Beware that -b splits at exact byte boundaries, so a row can be cut in half across two part files; splitting by line count instead (split -l) avoids this if it's a concern.

    You should now delete the original big_table.txt (or rename it to something like big_table.txt.original), since its contents are now in the part files. This step is critical: otherwise you'll end up importing the table's data twice in Step 4 below!
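If several tables produced oversized dumps, the split-and-rename step above can be scripted. A sketch under the same assumptions (the split_large_dumps name is made up; the -d flag requires GNU split, so drop it on Mac OS X):

```shell
# split_large_dumps DIR THRESHOLD CHUNK:
# split every .txt dump in DIR larger than THRESHOLD bytes into CHUNK-sized
# parts, then rename the original so it no longer matches the *.txt glob
# (which would otherwise cause a double import in Step 4).
split_large_dumps() {
  dir=$1; threshold=$2; chunk=$3
  for f in "$dir"/*.txt; do
    [ -f "$f" ] || continue                 # glob matched nothing
    size=$(($(wc -c < "$f")))               # file size in bytes
    if [ "$size" -gt "$threshold" ]; then
      split -C "$chunk" -d "$f" "${f%.txt}.part_"
      mv "$f" "$f.original"                 # hide it from the *.txt glob
    fi
  done
}

# Amazon's 1 GB guidance would be:
# split_large_dumps data 1073741824 1024m
```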

Step 3: Load Schema into Amazon RDS

Still inside your SSH session, let’s apply our schema file to our remote RDS instance. Execute a command like the following, replacing the stuff in [BRACKETS] with your information:

$ mysql -u [RDS_USERNAME] -p[RDS_PASSWORD] --host=[RDS_HOSTNAME] [RDS_DATABASENAME] < schema_only.sql

This will create the (empty) tables on your RDS instance.

Step 4: Load Data into RDS

Still inside your SSH session, and from inside your data directory, let's upload our data files to our remote RDS instance.

NOTE: All tables must already exist (from Step 3), otherwise the data import will fail!

$ mysqlimport --local --compress -u [RDS_USERNAME] -p[RDS_PASSWORD] --host=[RDS_HOSTNAME] [RDS_DATABASENAME] *.txt

If you split any large tables in Step 2, import the part files as well; they don't match the *.txt pattern:

$ mysqlimport --local --compress -u [RDS_USERNAME] -p[RDS_PASSWORD] --host=[RDS_HOSTNAME] [RDS_DATABASENAME] big_table.part_*
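One detail that makes split part files work: mysqlimport derives the target table from each file name by stripping the extension, so big_table.txt, big_table.part_00, and big_table.part_01 all load into the big_table table. A tiny illustration of that naming rule (table_for is a hypothetical helper for demonstration, not a real MySQL tool):

```shell
# table_for FILE: print the table name mysqlimport would derive for FILE,
# i.e. the base file name with everything from the first period stripped.
table_for() {
  base=$(basename "$1")
  printf '%s\n' "${base%%.*}"
}

# table_for data/big_table.part_00   prints: big_table
```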

Enjoy your new MySQL RDS Database

That’s it! You now have a “cloud database” courtesy of Amazon. RDS has a lot of features and options, which I encourage you to explore further (like automated backups!).




FOOTNOTE: If you get "permission denied" errors when exporting your data, and you're on Ubuntu, try the following steps:

  1. Make sure "AppArmor" is not blocking the MySQL server's file access. (With --tab, it's the mysqld server process, not mysqldump, that writes the .txt files.)
    1. $ sudo nano /etc/apparmor.d/usr.sbin.mysqld
    2. Add your export/data directories to the bottom of the file like so:
      /home/myusername/data/ r,
      /home/myusername/data/* rw,
    3. Reload AppArmor: $ sudo /etc/init.d/apparmor reload
    4. Restart MySQL: $ sudo service mysql restart
  2. Create your data directory (pwd prints the full path you'll need in the AppArmor profile above):
    $ mkdir data
    $ pwd
  3. Give global read/write permissions to data (otherwise the server can't write to it):
    $ chmod a+rw data
