Backup and Recovery
MPCIUM provides robust backup and recovery capabilities to ensure your distributed MPC cluster data is protected against loss. The backup system uses an incremental backup strategy with encryption to efficiently store and secure your database state while minimizing resource usage.

Important: Always test your backup and recovery procedures in a non-production environment before implementing them in production.
Backup Configuration
MPCIUM supports configurable backup settings that can be adjusted in your config.yaml file:
# Backup Configuration
backup_enabled: true # Enable/disable automatic backups (default: true)
backup_period_seconds: 300 # How often to perform backups in seconds (default: 300)
backup_dir: "./backups" # Directory where encrypted backups are stored
Configuration Parameters
backup_enabled (boolean)
Controls whether automatic backups are performed. Set to false to disable backups entirely.
backup_period_seconds (integer)
Defines the interval between automatic backups in seconds. The default of 300 seconds (5 minutes) provides a good balance between data protection and system performance.
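As a quick storage-sizing check, the period translates directly into a backup count per day (simple arithmetic, not an MPCIUM feature):

```shell
# At the default backup_period_seconds of 300, each node writes up to
# 86400 / 300 = 288 incremental backups per day; use this to size backup_dir.
per_day=$((86400 / 300))
echo "backups per day at default period: $per_day"
```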
backup_dir (string)
Specifies the directory where encrypted backup files are stored. Ensure this directory has appropriate permissions and sufficient disk space.
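For example, you might prepare the backup directory ahead of time with owner-only permissions (the 700 mode is a hardening suggestion, not an MPCIUM requirement):

```shell
# Create the backup directory up front with owner-only permissions so that
# encrypted backup files are not readable by other local users.
mkdir -p ./backups
chmod 700 ./backups
```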
Backup Strategy
MPCIUM implements an incremental backup strategy that stores only the changes since the last backup, reducing storage requirements and backup time. Because each backup holds a delta rather than a full copy of the database, this approach saves significant storage space and network bandwidth while still maintaining complete data protection.
Backup File Structure
Backup files are stored in the configured backup directory with the following naming convention:
node0/
└── backups/
├── backup-node0-2025-07-28_20-46-39-1.enc
├── backup-node0-2025-07-28_20-48-39-2.enc
├── backup-node0-2025-07-28_20-49-39-3.enc
└── latest.version
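Given this naming convention, the incremental chain can be inspected from the filenames alone. Below is a minimal sketch that extracts the highest sequence number; the touch-ed demo files exist only so the snippet runs standalone:

```shell
# Extract the highest incremental sequence number from backup filenames,
# which follow the pattern backup-<node>-<timestamp>-<seq>.enc shown above.
BACKUP_DIR="./backups"
mkdir -p "$BACKUP_DIR"
# demo files so the sketch is runnable on its own
touch "$BACKUP_DIR/backup-node0-2025-07-28_20-46-39-1.enc" \
      "$BACKUP_DIR/backup-node0-2025-07-28_20-48-39-2.enc"
latest_seq=$(ls "$BACKUP_DIR"/backup-*.enc 2>/dev/null \
  | sed -E 's/.*-([0-9]+)\.enc$/\1/' | sort -n | tail -n 1)
echo "latest incremental sequence: ${latest_seq:-none}"
```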
Backup File Format
Each backup file contains:
- Incremental database changes since the last backup (not full duplicates)
- Encrypted database state using AES-256-GCM encryption for security during transit and storage
- Metadata including creation timestamp, encryption parameters, and version information
- Chain of incremental updates that can be reconstructed to restore the complete database state
Backup Metadata Example
{
  "algo": "AES-256-GCM",
  "nonce_b64": "MgDMCSqKNujD+9na",
  "created_at": "2025-07-28T20:46:39+07:00",
  "since": 0,
  "next_since": 6,
  "encryption_key_id": "bd1937594bea2580"
}
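The since / next_since fields link each backup to the one before it in the incremental chain. As a rough sketch, assuming you can obtain each backup's metadata JSON (the inline records here are stand-ins for metadata from two consecutive backups), you can check that the chain is contiguous:

```shell
# Sketch: verify two consecutive backups form a contiguous chain, i.e. the
# second record's "since" equals the first record's "next_since".
# The inline JSON records are stand-ins for real extracted metadata.
meta1='{"since": 0, "next_since": 6}'
meta2='{"since": 6, "next_since": 11}'
prev_next=$(printf '%s' "$meta1" | sed -E 's/.*"next_since": ([0-9]+).*/\1/')
cur_since=$(printf '%s' "$meta2" | sed -E 's/.*"since": ([0-9]+).*/\1/')
if [ "$prev_next" = "$cur_since" ]; then
    echo "chain OK"
else
    echo "chain broken: expected since=$prev_next, got since=$cur_since"
fi
```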
Security: Backup files are encrypted using the same badger_password used for database encryption, ensuring your backup data remains secure during transit and storage. The incremental approach combined with encryption provides both resource efficiency and data protection.
Recovery Procedures
Using mpcium-cli recover
To recover your MPCIUM database from encrypted backup files, use the mpcium-cli recover command:
Recovery Command
mpcium-cli recover --help
NAME:
   mpcium recover - Recover database from encrypted backup files

USAGE:
   mpcium recover [command [command options]]

OPTIONS:
   --backup-dir string, -b string             Directory containing encrypted backup files
   --recovery-path string, -r string          Target path for database recovery
   --backup-encryption-key string, -k string  Encryption key for backup files (will prompt if not provided)
   --force, -f                                Force overwrite if recovery path already exists (default: false)
   --help, -h                                 show help
Recovery Process
1. Stop the MPCIUM node to ensure no active database operations
2. Run the recovery command with appropriate parameters
3. Update the configuration to point to the recovered database
4. Restart the node with the recovered data

Recovery Example
# Step 1: Stop the MPCIUM node
# Step 2: Recover the database
mpcium-cli recover \
  --backup-dir backups/ \
  --recovery-path restore_db/node0
# Step 3: Update config.yaml
# Change db_path to: db_path: "restore_db"
# Step 4: Restart the node
mpcium start -n node0
AWS S3 Integration
MPCIUM supports AWS S3 for cloud-based backup storage, providing additional redundancy and disaster recovery capabilities.
Setting Up S3 Backup
1. Create S3 Bucket
aws s3 mb s3://node0-backup --region ap-southeast-1
2. Configure S3 Permissions
Ensure your AWS credentials have appropriate permissions for S3 operations:
S3 Policy Example
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::node0-backup",
        "arn:aws:s3:::node0-backup/*"
      ]
    }
  ]
}
S3 Backup Operations
Upload Local Backups to S3
Upload to S3
# Sync local backup folder to S3
aws s3 sync ./backups s3://node0-backup
Download Backups from S3
Download from S3
# Sync from S3 to local folder
aws s3 sync s3://node0-backup ./backups
Verify S3 Backup
Verify S3 Contents
# List S3 backup contents
aws s3 ls s3://node0-backup --recursive
# Check backup file details
aws s3api head-object --bucket node0-backup --key backup-node0-2025-07-28_20-46-39-1.enc
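For a stronger integrity check than listing, you can compare a local backup's MD5 against the S3 object's ETag. Note this only holds for objects uploaded in a single part without SSE-KMS encryption, where the ETag is the hex MD5 of the object. The demo file below exists only so the sketch runs standalone; compare the printed digest against the ETag from the head-object call above:

```shell
# Sketch: compute a backup file's MD5 for comparison with the S3 ETag.
# For single-part, non-SSE-KMS uploads the ETag equals this MD5, e.g.:
#   aws s3api head-object --bucket node0-backup --key <file> --query ETag
FILE="./backups/backup-node0-2025-07-28_20-46-39-1.enc"
mkdir -p ./backups
printf 'demo' > "$FILE"   # demo file so this runs standalone
local_md5=$(md5sum "$FILE" | awk '{print $1}')
echo "local md5: $local_md5"
```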
Automated Backups
Cron Job Setup
Set up automated S3 backups using cron jobs to ensure regular cloud storage synchronization:
Cron Job Configuration
# Edit crontab
crontab -e
# Add backup job (runs every hour)
0 * * * * /usr/local/bin/backup-mpcium.sh
# Add daily backup verification
0 2 * * * /usr/local/bin/verify-backup.sh
Backup Script Example
Create a backup script to automate the S3 sync process:
Backup Script
#!/bin/bash
# /usr/local/bin/backup-mpcium.sh
# Configuration
NODE_NAME="node0"
S3_BUCKET="node0-backup"
BACKUP_DIR="/opt/mpcium/node0/backups"
LOG_FILE="/var/log/mpcium-backup.log"
# Log function
log() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> "$LOG_FILE"
}

# Start backup
log "Starting MPCIUM backup to S3"

# Sync local backups to S3
if aws s3 sync "$BACKUP_DIR" "s3://$S3_BUCKET" --delete; then
    log "Backup completed successfully"
else
    log "Backup failed"
    exit 1
fi

# Verify backup
if aws s3 ls "s3://$S3_BUCKET" | grep -q "backup-$NODE_NAME"; then
    log "Backup verification successful"
else
    log "Backup verification failed"
    exit 1
fi
Verification Script
Create a script to verify backup integrity:
Verification Script
#!/bin/bash
# /usr/local/bin/verify-backup.sh
# Configuration
S3_BUCKET="node0-backup"
LOG_FILE="/var/log/mpcium-backup-verify.log"
# Log function
log() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> "$LOG_FILE"
}

# Check S3 bucket accessibility
if aws s3 ls "s3://$S3_BUCKET" > /dev/null 2>&1; then
    log "S3 bucket accessible"
else
    log "S3 bucket not accessible"
    exit 1
fi

# Count backup files
BACKUP_COUNT=$(aws s3 ls "s3://$S3_BUCKET" --recursive | grep -c "\.enc$")
log "Found $BACKUP_COUNT backup files in S3"

# Check for recent backups (within last 24 hours)
RECENT_BACKUPS=$(aws s3 ls "s3://$S3_BUCKET" --recursive | grep "$(date '+%Y-%m-%d')" | wc -l)
log "Found $RECENT_BACKUPS backups from today"

if [ "$RECENT_BACKUPS" -eq 0 ]; then
    log "WARNING: No recent backups found"
    exit 1
fi
Monitoring and Alerts
Set up monitoring for your backup processes:
Monitoring Setup
# Run the backup script and alert on a non-zero exit status
if ! /usr/local/bin/backup-mpcium.sh; then
    # Send alert (example with curl to webhook)
    curl -X POST -H "Content-Type: application/json" \
        -d '{"text":"MPCIUM backup failed"}' \
        https://your-webhook-url.com/alert
fi
Best Practice: Test your backup and recovery procedures regularly in a testing environment to ensure they work correctly when needed.