A Docker container that runs Duplicacy CLI backups on a cron schedule
to two independent S3-compatible storages for cross-site redundancy.
- Features
- Architecture
- Quick Start
- Configuration Reference
- Scripts
- Backup Verification
- Notification Format
- Troubleshooting
- Guides
- Contributing
- Dual-storage backups -- every backup writes to two independent S3 endpoints (cross-site redundancy)
- Multi-repo from a single container -- one tiny daily wrapper per repository
- S3-compatible storage -- Garage, MinIO, AWS S3, Backblaze B2, and any S3-compatible provider
- AES-256-GCM encryption -- per-repo encryption passwords
- Parallel uploads -- configurable thread count via `DUPLICACY_THREADS`
- Staggered cron schedules -- avoid storage contention across servers
- Per-repo lock files -- automatic timeout kills stuck backups after `MAX_RUNTIME_HOURS`
- Retry with exponential backoff -- secondary storage retries with configurable attempts
- Secondary endpoint pre-flight -- verifies S3 reachability before attempting backup
- Weekly exhaustive prune -- reclaims actual storage space by scanning all chunks
- Monthly integrity check -- verifies all backup chunks and triggers Garage data scrubs
- Filter files -- exclude caches, thumbnails, and temporary data
- Telegram notifications -- via Shoutrrr (supports 70+ services)
- Multi-architecture Docker image -- amd64, arm64, armv7
- Alpine-based -- minimal image footprint
- UnRAID and Linux support -- back up shares, boot USB, `/etc`, `/home`, crontabs, Tailscale state
Each server backs up to two remote S3 endpoints (never to itself), ensuring data survives the loss of any single node:
```text
Server A --backup--> S3 on Server B (primary)
         --backup--> S3 on Server C (secondary)
Server B --backup--> S3 on Server A (primary)
         --backup--> S3 on Server C (secondary)
Server C --backup--> S3 on Server A (primary)
         --backup--> S3 on Server B (secondary)
```
The daily wrapper scripts are only a few lines each and source the shared dual-executor.sh, which handles locking, backup, prune, and notification logic.
Copy docker-compose.yml and fill in your values:
```yaml
services:
  duplicacy-cli-cron:
    image: drumsergio/duplicacy-cli-cron:3.2.5.2
    container_name: duplicacy-cli-cron
    restart: unless-stopped
    volumes:
      - /mnt/user/appdata/duplicacy/config:/config
      - /mnt/user/appdata/duplicacy/cron:/etc/periodic
      - /mnt/user:/local_shares
      - /boot:/boot_usb
    environment:
      CRON_DAILY: "0 2 * * *"
      CRON_WEEKLY: "0 4 * * 6"
      DUPLICACY_THREADS: "8"
      HOST: MyServer
      TZ: Europe/Madrid
      SHOUTRRR_URL: telegram://TOKEN@telegram?chats=CHAT_ID&notification=no&parseMode=markdown
      ENDPOINT_1: "192.168.1.100:9000"
      ENDPOINT_2: "192.168.1.200:9000"
      BUCKET: duplicacy
      REGION: garage
      MAX_RUNTIME_HOURS: "71"
      # Credentials for each storage (primary + secondary):
      DUPLICACY_APPDATA_S3_ID: YOUR_PRIMARY_KEY
      DUPLICACY_APPDATA_S3_SECRET: YOUR_PRIMARY_SECRET
      DUPLICACY_APPDATA_PASSWORD: YOUR_ENCRYPTION_PASS
      DUPLICACY_APPDATAC_S3_ID: YOUR_SECONDARY_KEY
      DUPLICACY_APPDATAC_S3_SECRET: YOUR_SECONDARY_SECRET
      DUPLICACY_APPDATAC_PASSWORD: YOUR_ENCRYPTION_PASS
```

See docker-compose.yml in this repo for the full example with comments.
Edit config/config-s3.sh with your storage name, snapshot ID, and repo path. Then run it inside the container:
```shell
docker exec duplicacy-cli-cron sh /config/config-s3.sh
```

Repeat for each folder you want to back up (e.g., appdata, Multimedia, system, boot).
Each backup location gets a tiny wrapper script placed in the daily cron directory. The wrapper sets per-repo constants and sources the shared dual-executor.sh:
```shell
#!/usr/bin/env sh
STORAGENAME="appdata"
SNAPSHOTID="appdata"
REPO_DIR="/local_shares/appdata"
THREADS_OVERRIDE="8"
. /config/dual-executor.sh
```

Place dual-executor.sh in the config volume, then create one wrapper per repo:
```shell
# Copy the executor to the config volume
cp scripts/dual-executor.sh /mnt/user/appdata/duplicacy/config/

# Create wrapper scripts in the cron directory
cat > /mnt/user/appdata/duplicacy/cron/daily/00-boot.sh << 'EOF'
#!/usr/bin/env sh
STORAGENAME="boot"
SNAPSHOTID="boot"
REPO_DIR="/boot_usb"
THREADS_OVERRIDE="8"
. /config/dual-executor.sh
EOF
chmod +x /mnt/user/appdata/duplicacy/cron/daily/00-boot.sh
```

Scripts are executed alphabetically by run-parts, so prefix with numbers to control order (e.g., 00-boot.sh, 01-Multimedia.sh, 02-appdata.sh).
Tip: Use `THREADS_OVERRIDE` per repo to tune performance. For HDD-backed repos with large files (media), lower thread counts (4-8) reduce disk seek contention. For SSD/NVMe or small-file repos, higher thread counts (8-16) improve throughput.
Copy scripts/exhaustive-prune.sh to the weekly cron directory:
```shell
cp scripts/exhaustive-prune.sh /mnt/user/appdata/duplicacy/cron/weekly/01-exhaustive-prune.sh
chmod +x /mnt/user/appdata/duplicacy/cron/weekly/01-exhaustive-prune.sh
```

The exhaustive prune auto-discovers all repos under /local_shares/*/ and prunes both primary and secondary storages. It also handles extra repos (/boot_usb for UnRAID, /local_* for Ubuntu/Debian) and respects daily backup lock files to avoid conflicts.
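The auto-discovery can be pictured as a small loop (an illustrative sketch; the actual logic lives in exhaustive-prune.sh, and `discover_repos` is a hypothetical helper name):

```shell
#!/usr/bin/env sh
# Hypothetical sketch: a directory counts as a Duplicacy repo when it
# contains a .duplicacy/ folder.
discover_repos() {
    base="$1"
    for repo in "$base"/*/; do
        if [ -d "${repo}.duplicacy" ]; then
            printf '%s\n' "${repo%/}"
        fi
    done
}

# Inside the container you would call: discover_repos /local_shares
```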
Copy scripts/monthly-integrity-check.sh to the monthly cron directory:
```shell
cp scripts/monthly-integrity-check.sh /mnt/user/appdata/duplicacy/cron/monthly/01-integrity-check.sh
chmod +x /mnt/user/appdata/duplicacy/cron/monthly/01-integrity-check.sh
```

This script verifies all backup chunks across every repo and, if you use Garage, triggers a data scrub on both target storage nodes.
| Variable | Default | Description |
|---|---|---|
| `CRON_DAILY` | `0 2 * * *` | When daily backup scripts run |
| `CRON_WEEKLY` | `0 4 * * 6` | When weekly exhaustive prune runs (Saturday by default) |
| `CRON_MONTHLY` | `0 5 1 * *` | When monthly integrity check runs (1st of month) |
| `DUPLICACY_THREADS` | `4` | Default parallel upload/download threads |
| `HOST` | `$(hostname)` | Machine name shown in notifications |
| `TZ` | `Etc/UTC` | Timezone |
| `SHOUTRRR_URL` | (empty) | Notification URL (Shoutrrr format) |
| `ENDPOINT_1` | (required) | S3 endpoint for primary storage |
| `ENDPOINT_2` | (required) | S3 endpoint for secondary storage |
| `BUCKET` | (required) | S3 bucket name |
| `REGION` | (required) | S3 region (use `garage` for Garage) |
| `MAX_RUNTIME_HOURS` | `71` | Kill stuck backups after this many hours |
| `SECONDARY_RETRIES` | `3` | Max retry attempts for secondary storage |
| `SECONDARY_PREFLIGHT_TIMEOUT` | `120` | Seconds to wait for secondary endpoint reachability |
| `GARAGE_ADMIN_TOKEN` | (empty) | Garage admin API token (for monthly scrub trigger) |
Duplicacy resolves credentials from environment variables by storage name. For dual-storage, you need two sets -- one for the primary and one for the secondary (C-suffix):
```text
# Primary storage
DUPLICACY_<STORAGENAME>_S3_ID     -> S3 access key ID
DUPLICACY_<STORAGENAME>_S3_SECRET -> S3 secret access key
DUPLICACY_<STORAGENAME>_PASSWORD  -> repository encryption password

# Secondary storage (same name + "C" suffix)
DUPLICACY_<STORAGENAME>C_S3_ID     -> S3 access key ID
DUPLICACY_<STORAGENAME>C_S3_SECRET -> S3 secret access key
DUPLICACY_<STORAGENAME>C_PASSWORD  -> repository encryption password
```
Example for a storage named appdata:
```yaml
DUPLICACY_APPDATA_S3_ID: GKabc123...
DUPLICACY_APPDATA_S3_SECRET: f42b4be...
DUPLICACY_APPDATA_PASSWORD: mySecretPassword
DUPLICACY_APPDATAC_S3_ID: GKdef456...
DUPLICACY_APPDATAC_S3_SECRET: a83c1d2...
DUPLICACY_APPDATAC_PASSWORD: mySecretPassword
```

When multiple servers share the same S3 backend, stagger `CRON_DAILY` to avoid contention:
| Server | CRON_DAILY | Description |
|---|---|---|
| Server A | `0 2 * * *` | Runs at 2:00 AM |
| Server B | `0 3 * * *` | Runs at 3:00 AM |
| Server C | `0 4 * * *` | Runs at 4:00 AM |
Create .duplicacy/filters inside a repo to exclude paths from backup. This reduces backup time and storage for regenerable data:
```text
# Exclude cache and generated content
-Cache/
-EncodedVideo/
-Thumbs/
-.DS_Store
-Thumbs.db
-*.tmp
```
See the Duplicacy wiki on filters for the full syntax.
Each daily wrapper creates a lock file at `/tmp/duplicacy-<SNAPSHOTID>.lock`. If a previous run is still active:

- Within `MAX_RUNTIME_HOURS`: the new run is skipped with a Telegram notification.
- Exceeds `MAX_RUNTIME_HOURS`: the stuck process is killed and a fresh backup starts.
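The lock handling can be sketched roughly as follows (illustrative only; the real logic lives in dual-executor.sh and its details may differ):

```shell
#!/usr/bin/env sh
# Rough sketch of the per-repo lock, not the actual script.
SNAPSHOTID="appdata"                        # set by the wrapper
MAX_RUNTIME_HOURS="${MAX_RUNTIME_HOURS:-71}"
LOCKFILE="/tmp/duplicacy-${SNAPSHOTID}.lock"

if [ -f "$LOCKFILE" ]; then
    old_pid=$(cat "$LOCKFILE")
    age_hours=$(( ($(date +%s) - $(stat -c %Y "$LOCKFILE")) / 3600 ))
    if [ "$age_hours" -lt "$MAX_RUNTIME_HOURS" ]; then
        # Still within the allowed window: skip this run.
        echo "Skipped -- previous run still in progress (PID: $old_pid)"
        exit 0
    fi
    # Past the timeout: kill the stuck process and start fresh.
    kill "$old_pid" 2>/dev/null || true
fi

echo $$ > "$LOCKFILE"
# ... run backup + prune here ...
rm -f "$LOCKFILE"
```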
Daily prune (skipped on Saturdays when the weekly exhaustive prune runs):
```text
-keep 0:180   # Delete all snapshots older than 180 days
-keep 30:90   # Keep one snapshot every 30 days if older than 90 days
-keep 7:30    # Keep one snapshot every 7 days if older than 30 days
-keep 1:7     # Keep one snapshot every day if older than 7 days
```
Weekly exhaustive prune runs with the -exhaustive flag to scan all chunks and reclaim actual storage space.
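Taken together, the rules above translate into a single `prune` invocation per storage, roughly like this (a sketch; the exact flags are assembled inside dual-executor.sh):

```shell
# Retention flags combined, longest interval first:
PRUNE_FLAGS="-keep 0:180 -keep 30:90 -keep 7:30 -keep 1:7"

# From inside the repo directory, both storages get the same policy:
#   duplicacy prune -storage appdata  $PRUNE_FLAGS
#   duplicacy prune -storage appdataC $PRUNE_FLAGS
```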
| Script | Schedule | Purpose |
|---|---|---|
| `scripts/dual-executor.sh` | Daily (sourced) | Shared backup + prune logic for dual-storage repos |
| `scripts/exhaustive-prune.sh` | Weekly | Full chunk scan across all repos to reclaim space |
| `scripts/monthly-integrity-check.sh` | Monthly | Chunk verification and Garage scrub trigger |
| `scripts/executor_unraid.sh` | (legacy) | NFS-based backup with copy to second destination |
| `scripts/executor_ubuntu.sh` | (legacy) | NFS-based backup for Ubuntu hosts |
```shell
#!/usr/bin/env sh
STORAGENAME="Multimedia"
SNAPSHOTID="Multimedia"
REPO_DIR="/local_shares/Multimedia"
THREADS_OVERRIDE="8"
. /config/dual-executor.sh
```

Verify that snapshots are being created on both storages:
```shell
docker exec duplicacy-cli-cron sh -c \
  'cd /local_shares/appdata && duplicacy list -storage appdata'
docker exec duplicacy-cli-cron sh -c \
  'cd /local_shares/appdata && duplicacy list -storage appdataC'
```

Run an on-demand integrity check for a specific repo:
```shell
docker exec duplicacy-cli-cron sh -c \
  'cd /local_shares/appdata && duplicacy check -storage appdata -threads 4'
```

To restore from a specific revision to a target path:
```shell
docker exec duplicacy-cli-cron sh -c \
  'cd /local_shares/appdata && duplicacy restore -r 42 -storage appdata -stats'
```

Add `-overwrite` to replace existing files, or use `-delete` to remove files not present in the snapshot. See the Duplicacy CLI restore docs for full options.
For Garage S3 storage, check bucket sizes to confirm both destinations are receiving data:
```shell
# Using the Garage admin API
curl -s -H "Authorization: Bearer ${GARAGE_ADMIN_TOKEN}" \
  "http://192.168.1.100:3903/v2/GetBucketInfo?id=YOUR_BUCKET_ID" | jq .bytes
```

Successful backup:
```text
[green] MyServer -- appdata
Primary: [ok]
Secondary: [ok]
[sync] Pruned
```
Skipped (previous run still in progress):
```text
[skip] MyServer -- Multimedia
Skipped -- previous run still in progress (PID: 325)
```
Failed backup:
```text
[red] MyServer -- appdata
Primary: [ok]
Secondary: [fail]
[sync] Pruned
```
Stuck job killed:
```text
[warn] MyServer -- appdata
Killed after 71h timeout (PID: 1234)
```
A stale lock file may be left behind if the container was restarted mid-backup. Remove it manually:
```shell
docker exec duplicacy-cli-cron rm -f /tmp/duplicacy-<SNAPSHOTID>.lock
```

If the problem recurs, your backup may genuinely need more time. Increase `MAX_RUNTIME_HOURS` or reduce the data volume being backed up.
- Verify endpoint reachability from inside the container:

  ```shell
  docker exec duplicacy-cli-cron wget -q -O /dev/null -T 5 http://192.168.1.200:9000/
  ```

  Exit code 0 or 8 (HTTP 403) means the endpoint is reachable.

- Check credentials: ensure the `DUPLICACY_<STORAGENAME>C_S3_ID` and `DUPLICACY_<STORAGENAME>C_S3_SECRET` variables match the secondary storage's access keys.

- Increase retry attempts: set `SECONDARY_RETRIES=5` for unreliable network links.
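The pre-flight itself can be pictured as a small polling loop (an illustrative sketch; `preflight` is a hypothetical helper name, and the real check lives in dual-executor.sh):

```shell
#!/usr/bin/env sh
# Hypothetical pre-flight helper: poll an S3 endpoint until it answers or
# the timeout elapses. Returns 0 when reachable, 1 on timeout.
preflight() {
    endpoint="$1"                          # host:port
    timeout="${2:-120}"                    # SECONDARY_PREFLIGHT_TIMEOUT
    deadline=$(( $(date +%s) + timeout ))
    while [ "$(date +%s)" -lt "$deadline" ]; do
        rc=0
        wget -q -O /dev/null -T 5 "http://${endpoint}/" || rc=$?
        # Exit 0 (2xx) or 8 (server replied with an HTTP error such as
        # 403) both prove the endpoint is up.
        if [ "$rc" -eq 0 ] || [ "$rc" -eq 8 ]; then
            return 0
        fi
        sleep 5
    done
    return 1
}

# Example: preflight "$ENDPOINT_2" "$SECONDARY_PREFLIGHT_TIMEOUT" || exit 1
```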
Each repo directory must be initialized with `duplicacy init` before backups can run. Verify the `.duplicacy` directory exists:

```shell
docker exec duplicacy-cli-cron ls -la /local_shares/appdata/.duplicacy/
```

If missing, re-run the initialization script:
```shell
docker exec duplicacy-cli-cron sh /config/config-s3.sh
```

Cron job output is redirected to PID 1 stdout so Docker can capture it. Check with:
```shell
docker logs --tail 100 duplicacy-cli-cron
```

If logs are empty, verify the cron scripts are executable:
```shell
docker exec duplicacy-cli-cron ls -la /etc/periodic/daily/
```

All wrapper scripts must have the execute bit set (`chmod +x`).
The weekly exhaustive prune scans all chunks across all repos. For large repositories, this is expected. If it overlaps with daily backups, it will wait up to 1 hour for locks to clear. Options:
- Stagger the weekly schedule earlier (e.g., `CRON_WEEKLY: "0 0 * * 6"`)
- Ensure daily backups finish well before the weekly prune starts
Duplicacy's memory usage scales with thread count. If the container is being OOM-killed:
- Lower `DUPLICACY_THREADS` or `THREADS_OVERRIDE`
- Add a memory limit in your `docker-compose.yml`: `mem_limit: 512m`
- Deploying Garage S3 (v2.x) and Hooking It Up to Duplicacy -- S3 approach (recommended)
- Backup Bliss: A Dockerized Duplicacy Setup for Your Home Servers -- NFS approach (legacy)
Contributions are welcome. Open an issue or submit a pull request.
This project follows the Contributor Covenant Code of Conduct.