Keeping reliable database backups isn’t just a best practice; it’s critical. Whether you’re running a WordPress site, managing an internal app, or maintaining a client’s online store, losing your database could be catastrophic. Fortunately, you can automate MySQL backups to Amazon S3 and never have to think about it again.
AWS offers tools like AWS Backup for full backup and disaster recovery management, but in this tutorial, we’ll focus on a lightweight, server-side approach: automatically exporting MySQL databases and sending them to a secure Amazon S3 bucket each night using a simple Bash script and cron job.
🧭 What You’ll Build
You’ll create a lightweight backup system that:
- Dumps each of your MySQL databases nightly (one script per database)
- Compresses them into .sql.gz files
- Uploads them to a private S3 bucket
- Keeps a timestamp in each filename
- Organizes backups into year/month/day folders
- Automatically deletes local files older than 7 days
🔑 Step 1: Store Your MySQL Credentials Securely
Instead of placing your password directly inside your scripts or cron jobs, store it in /root/.my.cnf with proper permissions. This lets mysqldump authenticate from your scripts and cron jobs without exposing the password on the command line or in the crontab.
Create the file using your favorite editor:
sudo vim /root/.my.cnf
Add the following:
[client]
user=dbuser
password=dbpassword
host=mydatabase.cluster-123abc456.us-east-1.rds.amazonaws.com
Then lock down permissions:
sudo chmod 600 /root/.my.cnf
sudo chown root:root /root/.my.cnf
This ensures only the root user can read the file.
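To confirm the credentials work before building anything on top of them, you can point the MySQL client at the file explicitly as a quick sanity check:
sudo mysql --defaults-file=/root/.my.cnf -e "SELECT VERSION();"
If that prints the server version, mysqldump will be able to authenticate the same way.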
📦 Step 2: Create Your Backup Script
All backup scripts should live under /usr/local/bin/ and start with backup_ so they’re easy to identify.
Here’s a reusable script you can copy and adapt for each database you need to back up.
Example: /usr/local/bin/backup_my_app_db.sh
#!/usr/bin/env bash
set -euo pipefail
# Database name
DB_NAME="my_app_db"
# Local backup directory
BACKUP_DIR="/var/backups/databases/${DB_NAME}"
mkdir -p "${BACKUP_DIR}"
# Timestamp for filenames
STAMP="$(date +'%Y-%m-%d_%H:%M:%S')"
DUMP="${BACKUP_DIR}/${DB_NAME}_${STAMP}.sql"
ARCHIVE="${DUMP}.gz"
# S3 bucket configuration
S3_BUCKET="my-database-backups"
DATE_PREFIX="$(date +'%Y/%m/%d')"
S3_KEY="${DB_NAME}/${DATE_PREFIX}/${DB_NAME}_${STAMP}.sql.gz"
S3_URI="s3://${S3_BUCKET}/${S3_KEY}"
# Create MySQL dump (includes routines, triggers, and events)
mysqldump \
--single-transaction \
--routines --triggers --events \
"${DB_NAME}" > "${DUMP}"
# Compress dump file
gzip -9 "${DUMP}"
# Upload to S3 (encrypted at rest)
aws s3 cp "${ARCHIVE}" "${S3_URI}" --sse AES256
# Remove local backups older than 7 days
find "${BACKUP_DIR}" -type f -name "${DB_NAME}_*.sql.gz" -mtime +7 -delete
Save and make it executable:
sudo chmod +x /usr/local/bin/backup_my_app_db.sh
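Optionally, have bash parse the script for syntax errors without executing anything:
bash -n /usr/local/bin/backup_my_app_db.sh
No output means the syntax is fine.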
✅ What This Script Does
- Dumps your MySQL database using mysqldump
- Compresses it to save space
- Uploads the backup to Amazon S3
- Keeps your S3 files organized by date
- Automatically cleans up old local files
Example output S3 path structure:
s3://my-database-backups/my_app_db/2025/10/15/my_app_db_2025-10-15_23:00:04.sql.gz
🪣 Step 3: IAM Role and Permissions
If this runs on an EC2 instance, attach an IAM role whose policy allows access to your S3 bucket. The quickest way to get started is the AWS managed policy AmazonS3FullAccess, which grants the instance full access to every S3 bucket in the account; attach that to the instance for now. Once you’ve confirmed everything works, swap it for a custom policy scoped to just the bucket and actions you need (recommended), along the lines of the sketch below.
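For reference, a locked-down policy only needs permission to upload to and list the backup bucket. Here’s a minimal sketch, assuming the bucket from the script above (my-database-backups); add s3:GetObject if you also want to download backups for restores:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "UploadBackups",
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::my-database-backups/*"
    },
    {
      "Sid": "ListBackupBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-database-backups"
    }
  ]
}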
If you’re running this outside EC2 (for example, on-premises or a different cloud), install and configure the AWS CLI:
aws configure
You’ll be prompted once for your access key ID, secret access key, and default region.
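Either way (instance role or aws configure), a quick way to confirm the CLI can authenticate is:
aws sts get-caller-identity
If that returns your account ID and the role or user ARN, the script’s aws s3 cp call should work, permissions permitting.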
🕐 Step 4: Schedule Daily Cron Jobs
You can run multiple backup scripts on a schedule using crontab.
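The entries below append to log files under /var/log/backups/, and the shell redirection will fail if that directory doesn’t exist, so create it once up front:
sudo mkdir -p /var/log/backups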
Open the root crontab:
sudo crontab -e
Add your entries:
MAILTO=youremail@example.com
# Run nightly backups at 11 PM server time
0 23 * * * /usr/local/bin/backup_my_app_db.sh >> /var/log/backups/my_app_db.log
10 23 * * * /usr/local/bin/backup_wordpress_db.sh >> /var/log/backups/wp_backup.log
20 23 * * * /usr/local/bin/backup_sales_db.sh >> /var/log/backups/sales_backup.log
This setup:
- Staggers the jobs between 11:00 PM and 11:20 PM
- Logs normal output to /var/log/backups/
- Emails you via MAILTO if a job fails, since errors written to stderr aren’t redirected to the logs
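After saving, you can double-check that the entries were installed:
sudo crontab -l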
💡 Step 5: Verify and Test
Before you rely on automation, always run a manual test.
sudo /usr/local/bin/backup_my_app_db.sh
Then check your S3 bucket:
aws s3 ls s3://my-database-backups/my_app_db/2025/10/15/
You should see your .sql.gz backup with a timestamp.
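A backup you’ve never restored is only half tested, so it’s worth doing a one-off restore into a throwaway database. A sketch, reusing the example object above; the scratch database name my_app_db_restore_test is just a placeholder, and this assumes your MySQL user can create databases (plus s3:GetObject if you scoped the IAM policy down):
# Download one of the backups listed above
aws s3 cp s3://my-database-backups/my_app_db/2025/10/15/my_app_db_2025-10-15_23:00:04.sql.gz /tmp/restore_test.sql.gz
# Load it into a scratch database so production data is untouched
sudo mysql --defaults-file=/root/.my.cnf -e "CREATE DATABASE IF NOT EXISTS my_app_db_restore_test;"
gunzip -c /tmp/restore_test.sql.gz | sudo mysql --defaults-file=/root/.my.cnf my_app_db_restore_test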
🚀 Wrap-Up
You now have an automated system that:
- Backs up your MySQL databases daily
- Compresses and uploads them to S3 securely
- Keeps your folder structure neat and timestamped
- Notifies you automatically if anything goes wrong
It’s simple, efficient, and production-ready. If you’d like to go a step further, consider adding an S3 Lifecycle rule to the bucket so older backups transition to S3 Glacier for long-term retention.
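For instance, a rule like the following (a sketch; the rule ID and the 30-day threshold are placeholders you’d tune) transitions every object in the bucket to Glacier after 30 days. Save it as lifecycle.json and apply it with the aws s3api command shown underneath:
{
  "Rules": [
    {
      "ID": "archive-old-db-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 30, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
aws s3api put-bucket-lifecycle-configuration --bucket my-database-backups --lifecycle-configuration file://lifecycle.json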