If you run a website or blog, regular backups are essential. A common problem when a backup is really needed to restore a hacked site is that the backups were stored with the same hosting company as the site itself, meaning they were compromised along with it.
Here’s a way of storing your backups with Amazon AWS in an S3 bucket. This requires an Amazon AWS account and the AWS SDK for PHP. You also need a set of AWS access keys, an access key ID and a secret access key (see the Amazon documentation on how to create them).
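If Composer is available on your hosting account, installing the SDK is a single command, and it also creates the vendor/autoload.php file the script below relies on:

composer require aws/aws-sdk-php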
This code takes all files and subfolders in the directory /path/to/source/directory, creates a ZIP archive, and uploads it to your S3 bucket. The backup file name includes the date and time, making it easier to find the right version to restore if it’s ever needed.
<?php
require 'vendor/autoload.php'; // Autoloader created by Composer when installing the AWS SDK

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

// realpath() normalizes the directory so the relative paths computed below stay correct
$backupSourceDir = realpath('/path/to/source/directory'); // Change this to the directory you want to back up
$backupFileName = 'backup_' . date('Y-m-d_H-i-s') . '.zip'; // Date-stamped so every backup gets a unique name
$s3BucketName = 'your-s3-bucket-name';
$s3BackupKey = 'backups/' . $backupFileName;

// Initialize the S3 client
$s3Client = new S3Client([
    'version' => 'latest',
    'region' => 'your-aws-region',
    'credentials' => [
        'key' => 'your-aws-access-key',
        'secret' => 'your-aws-secret-key',
    ],
]);

// Create a zip archive of the directory
$zip = new ZipArchive();
if ($zip->open($backupFileName, ZipArchive::CREATE | ZipArchive::OVERWRITE) === true) {
    $files = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($backupSourceDir, FilesystemIterator::SKIP_DOTS),
        RecursiveIteratorIterator::LEAVES_ONLY
    );

    foreach ($files as $file) {
        if (!$file->isDir()) {
            $filePath = $file->getRealPath();
            // Store the file under its path relative to the source directory
            $relativePath = substr($filePath, strlen($backupSourceDir) + 1);
            $zip->addFile($filePath, $relativePath);
        }
    }
    $zip->close();

    try {
        // Upload the zip file to S3
        $s3Client->putObject([
            'Bucket' => $s3BucketName,
            'Key' => $s3BackupKey,
            'SourceFile' => $backupFileName,
        ]);
        echo "Backup uploaded to S3 successfully.\n";
    } catch (S3Exception $e) {
        echo "Error uploading backup to S3: " . $e->getMessage() . "\n";
    }

    // Delete the local zip file after the upload
    unlink($backupFileName);
} else {
    echo "Could not create backup zip archive.\n";
}
?>
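If you ever need to pull a backup back down, the same SDK can list and download the archives. Here is a minimal restore sketch using the same placeholder bucket, region, and credentials as above; the backup file name in the download call is just an example, so use one of the keys printed by the listing:

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3Client = new S3Client([
    'version' => 'latest',
    'region' => 'your-aws-region',
    'credentials' => [
        'key' => 'your-aws-access-key',
        'secret' => 'your-aws-secret-key',
    ],
]);

// List the stored backups so you can pick the date you want to restore
$result = $s3Client->listObjectsV2([
    'Bucket' => 'your-s3-bucket-name',
    'Prefix' => 'backups/',
]);
foreach (($result['Contents'] ?? []) as $object) {
    echo $object['Key'] . "\n";
}

// Download one backup to a local file (example key; pick one from the listing above)
$s3Client->getObject([
    'Bucket' => 'your-s3-bucket-name',
    'Key' => 'backups/backup_2024-01-15_02-00-00.zip',
    'SaveAs' => 'restored_backup.zip',
]);
?>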
Once you have the script in your hosting account, you should run it with cron on a regular schedule: preferably once a day, although once a week can be enough if you don’t update your site that often. To run the backup script daily, create an entry in your crontab file. Open your crontab configuration by running:
crontab -e
Then add the following line to run the backup script every day at a specific time (adjust the time to your preferred schedule):
0 2 * * * /usr/bin/php /path/to/daily_backup_script.php >> /path/to/log/daily_backup.log 2>&1
In this example, the script will run at 2:00 AM every day, and its output will be appended to the daily_backup.log file. Similarly, for a weekly backup you can add another entry to your crontab to run the script on a specific day and time (adjust as needed):
0 3 * * 0 /usr/bin/php /path/to/weekly_backup_script.php >> /path/to/log/weekly_backup.log 2>&1
In this example, the script will run at 3:00 AM every Sunday (the 0 in the last field represents Sunday). The output will be appended to the weekly_backup.log file.
Remember to replace /path/to/daily_backup_script.php, /path/to/weekly_backup_script.php, /path/to/log/daily_backup.log, and /path/to/log/weekly_backup.log with the actual paths to your scripts and log files.
Make sure the paths to PHP and to the script files are correct on your system. You can find the path to PHP by running which php in your terminal.
Also, note that because the cron entries above invoke the PHP interpreter explicitly, the script files only need to be readable by the cron user. Execute permissions (chmod +x script.php) are only required if you want to run a script directly, which also means adding a shebang line such as #!/usr/bin/php at the top of the file.
By setting up these cron jobs, your backup scripts will run automatically on a daily and weekly basis as per your schedule.
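Before trusting the schedule, it’s worth running the script once by hand (with your actual script path) and checking that a new date-stamped file appears under backups/ in your S3 bucket:

php /path/to/daily_backup_script.php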