Deploying Garage S3 (v2.x) and Hooking It Up to Duplicacy

A lightweight, three-node S3 cluster that doubles as a rock-solid backup target
Garage v2.x is almost here—Release Candidate 1 is already out of the oven—so it feels like the perfect moment to write up a clean, reproducible installation guide. After test-driving both SeaweedFS and Garage, I gravitated toward Garage for one simple reason: it packs a respectable S3 feature-set into an extremely small operational footprint. SeaweedFS still reigns supreme for very large or highly multi-tenant deployments, but for small-to-mid-sized homelab or SMB scenarios Garage shines.
Below you’ll find the full recipe I used to spin up a three-node Garage cluster (two UnRAID boxes and one Ubuntu server) and wire it to Duplicacy, my de-duplicating backup tool of choice. I decided to migrate to S3 to improve the reliability of my backups, as the NFS mounts I was using sometimes fail.
Copy-paste your way through or adapt as needed; you should end up with a highly available S3 target that Duplicacy can talk to out of the box.
Deploying Garage
In this section, we will first deploy Garage. Let the fun begin!
Topology & Inventory
We’ll run three data nodes:
- Node 1 – UnRAID 7.1.x
- Node 2 – UnRAID 7.1.x
- Node 3 – Ubuntu 24.04.x
A single bucket will host the backups created by those three machines, making the setup nicely self-contained.
Shared RPC Secret
Generate once, reuse everywhere:
openssl rand -hex 32 > rpc_secret.hex
cat rpc_secret.hex
Paste that exact value into the rpc_secret line of every garage.toml.
Folder layout
On the UnRAID systems:
sudo mkdir -p /mnt/user/appdata/garage # holds garage.toml
sudo mkdir -p /mnt/user/my_disks/garage # holds meta/ & data/
sudo chown -R 1000:1000 /mnt/user/appdata/garage /mnt/user/my_disks/garage
And for the Ubuntu system:
sudo mkdir -p /home/user/garage # holds garage.toml
sudo mkdir -p /media/my_disk/garage # holds meta/ & data/
sudo chown -R 1000:1000 /home/user/garage /media/my_disk/garage
Garage.toml
The full garage.toml is shown below. Change the values marked Change; the other commented lines can also be tuned to your liking.
metadata_dir = "/var/lib/garage/meta"
data_dir = "/var/lib/garage/data"
db_engine = "lmdb" ## consider whether to use sqlite
metadata_auto_snapshot_interval = "6h"
replication_factor = 3 ## 3-way copies, change depending on needs and the number of instances in the cluster
compression_level = 2
rpc_bind_addr = "0.0.0.0:3901" ## Consider localhost and a proxy
rpc_public_addr = "<NODE-IP>:3901" ## Change
rpc_secret = "<PASTE-FROM-RPC_SECRET.HEX, EARLIER STEP>" ## Change
[s3_api]
s3_region = "garage"
api_bind_addr = "0.0.0.0:9000" ## Consider localhost and a proxy
root_domain = "api.s3.your.domain"
[s3_web]
bind_addr = "0.0.0.0:3902" ## Consider localhost and a proxy
root_domain = "web.s3.your.domain"
index = "index.html"
[admin]
api_bind_addr = "0.0.0.0:3903" ## Consider localhost and a proxy
metrics_token = "my_metrics_token"
admin_token = "my_admin_token"
Copy the same file onto all three nodes, tweaking only rpc_public_addr to match each host’s IP.
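If you script the distribution, a small sed can patch the address on each node after copying the file. A minimal sketch, assuming the UnRAID config path from the folder layout step and a hypothetical node IP:
NODE_IP=192.168.1.11   # hypothetical; use this host's real address
sed -i "s|^rpc_public_addr = .*|rpc_public_addr = \"${NODE_IP}:3901\"|" /mnt/user/appdata/garage/garage.toml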
Docker Compose
I avoid network: host (which the default docs suggest) because it sometimes causes hiccups on UnRAID reboots. Here’s a bridged-network alternative:
services:
  garage:
    image: dxflrs/garage:v2.0.0-rc1 # The guide is geared towards v2.x
    container_name: garage
    restart: unless-stopped
    ports:
      - "9000:9000"
      - "3901:3901"
      - "3902:3902"
      - "3903:3903"
    environment:
      - GARAGE_CONFIG=/etc/garage.toml
    volumes:
      # For UnRAID:
      - /mnt/user/appdata/garage/garage.toml:/etc/garage.toml:ro
      - /mnt/user/my_disks/garage:/var/lib/garage # meta/ & data/
      # For Ubuntu:
      - /home/user/garage/garage.toml:/etc/garage.toml:ro
      - /media/my_disk/garage:/var/lib/garage # meta/ & data/
Keep only the pair of volume mounts that matches the host you are deploying on, and adjust the host paths as needed.
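To bring the node up and check that it loaded the config, something along these lines should work (standard Docker commands, run from the directory holding the compose file):
docker compose up -d garage
docker logs -f garage   # watch for the node starting and listening on the RPC and S3 ports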
Fetch node IDs
Get the node ID of each instance. Here I will use an alias, as it will be quite handy for the following steps too:
alias garage="docker exec -it garage /garage"
garage node id
Record each ID—we’ll need them to form the cluster.
Wire the cluster together
On every node except Node 1, connect to Node 1 using its full node ID from the last step:
# On Node 2
garage node connect <NODE_1_FULL_NODE_ID>
# On Node 3
garage node connect <NODE_1_FULL_NODE_ID>
Repeat the same dance among the remaining nodes (2 & 3), then verify:
garage status
All three nodes should now see each other.
Define the layout
At this stage every node can see its peers, but Garage still has no idea how much space each machine contributes or which fault-domain (zone) it belongs to. That mapping is stored in the layout.
Think of the layout as Garage’s internal topology file:
- -z <zone> – logical failure domain. Nodes in the same zone are assumed to fail together (same rack, same room, same power feed, etc.).
- -c <capacity> – how much raw storage the node is allowed to advertise to the cluster.
- -t <tag> – a human-friendly label you’ll see in status outputs and the WebUI.
Grab the node IDs you noted earlier (or run garage status again) and feed them into the following commands, one line per node:
garage layout assign <get_from_status> -z zone1 -c 8T -t node_1
garage layout assign <get_from_status> -z zone2 -c 8T -t node_2
garage layout assign <get_from_status> -z zone3 -c 8T -t node_3
garage layout show # To verify
garage layout apply # To commit
Voilà! You now have a three-zone cluster with 24 TB of raw space (and, because we set replication_factor = 3, 8 TB of net usable storage). Run garage status once more; the output should list all nodes, zones and capacities, confirming that your Garage cluster is fully operational.
(Optional) Fancy WebUI
The community-made UI is slick and easy to deploy. Follow the most up-to-date instructions here: https://github.com/khairul169/garage-webui
This is how I deployed it in one of the UnRAID systems:
...
  webui:
    image: khairul169/garage-webui:1.0.9
    container_name: garage-webui
    restart: unless-stopped
    volumes:
      - /mnt/user/appdata/garage/garage.toml:/etc/garage.toml:ro
    ports:
      - 3909:3909
    environment:
      API_BASE_URL: "http://garage:3903" # Garage admin API
      S3_ENDPOINT_URL: "http://garage:9000" # matches api_bind_addr in garage.toml
      AUTH_USER_PASS: "username:$2y$1DS..." # Strongly advised to configure auth; escape each $ as $$ in compose files
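The AUTH_USER_PASS value expects a bcrypt hash of the password. One way to generate it, assuming the htpasswd tool from apache2-utils is available on your workstation:
htpasswd -nbB username 'your-password'   # prints "username:$2y$..."; paste the whole string as the value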
Preparing Garage for Duplicacy
We will prepare access by creating the bucket and the access keys.
Bucket
Create the bucket on any node, and check it has been successfully created:
garage bucket create duplicacy
garage bucket list # to verify
Access Keys & Policies
Let’s now create the access key and grant it read/write permissions on the bucket:
garage key create duplicacy-cli
garage bucket allow duplicacy --key <KEY_FROM_LAST_COMMAND> --read --write
Do not forget to substitute the key ID returned by the previous command when setting the permissions.
Create another key for the Duplicacy Web UI. We will use it only when a recovery is needed, since that functionality is free:
garage key create duplicacy-web
garage bucket allow duplicacy --key <KEY_FROM_LAST_COMMAND> --read
garage bucket info duplicacy (or the Garage Web UI) will confirm the permissions.
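If you need the key ID again later, the key commands will show it (recent Garage releases hide the secret key unless you explicitly ask for it; check garage key info --help on your version):
garage key list                 # key IDs and names
garage key info duplicacy-cli   # details for a single key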
Hooking Duplicacy into Garage
At this point our Garage cluster is humming along, but without an actual backup engine it’s just empty disk space.
Enter Duplicacy—a cross-platform, deduplicating backup tool that can talk native S3. Below you’ll find:
- Two ready-to-run Docker services:
  • duplicacy-cli-cron – the headless scheduler that does the daily heavy lifting, written by me here.
  • duplicacy-web – the official GUI you’ll use for ad-hoc restores (or, for a fee, for point-and-click configuration of everything else).
- A walk-through of the environment variables that glue Duplicacy to Garage.
- Two tiny shell scripts that initialise the repository and perform backup → prune → notify.
Copy the snippets verbatim, then read the commentary to understand why each line exists.
Docker compose
Use this example:
services:
  duplicacy-cli-cron:
    image: drumsergio/duplicacy-cli-cron:3.2.5
    container_name: duplicacy-cli-cron
    restart: unless-stopped
    volumes:
      - /mnt/user/appdata/duplicacy/config:/config
      - /mnt/user/appdata/duplicacy/cron:/etc/periodic
      - /mnt/remotes:/smb_nfs_shares # SMB/NFS Shares
      - /mnt/disks:/unassigned_devices # Unassigned devices - Not in use
      - /mnt/user:/local_shares # Shares
      - /boot:/boot_usb # Boot USB
    environment:
      DUPLICACY_STORAGENAME_S3_ID: GKc7794...
      DUPLICACY_STORAGENAME_S3_SECRET: b23...0c3
      DUPLICACY_STORAGENAME_PASSWORD: ...
      TZ: Etc/UTC
      SHOUTRRR_URL: telegram://TG-ID:PWD@telegram?chats=CHAT-ID&notification=no&parseMode=markdown
      ENDPOINT: GARAGE-IP:9000
      BUCKET: duplicacy
      REGION: garage
      HOST: HOSTNAME
  # For Restores:
  duplicacy-web:
    image: saspus/duplicacy-web:v1.8.3
    container_name: duplicacy-web
    ports:
      - 3875:3875
    volumes:
      - /mnt/user/appdata/duplicacy/web/config:/config
      - /mnt/user/appdata/duplicacy/web/logs:/logs
      - /mnt/user/appdata/duplicacy/web/cache:/cache
      - /mnt/remotes:/smb_nfs_shares # SMB/NFS Shares
      - /mnt/disks:/unassigned_devices # Unassigned devices - Not in use
      - /mnt/user:/local_shares # Shares
      - /boot:/boot_usb # Boot USB
    environment:
      DUPLICACY_STORAGENAME_S3_ID: GKc7794...
      DUPLICACY_STORAGENAME_S3_SECRET: b23...0c3
      DUPLICACY_STORAGENAME_PASSWORD: ...
      TZ: Etc/UTC
    privileged: true
Volume mapping cheat-sheet
- /config – Duplicacy’s “brain” (configuration, preferences).
- /etc/periodic – Alpine’s built-in cron directory used by the duplicacy-cli-cron image; anything you drop in daily/, weekly/, etc. is executed automatically.
- Everything else (/local_shares, /smb_nfs_shares, …) is your backup source: mount whatever data you care about.
Environment Variables
Duplicacy can read credentials and options from env-vars that follow a strict pattern:
DUPLICACY_<STORAGE_NAME>_S3_ID
DUPLICACY_<STORAGE_NAME>_S3_SECRET
DUPLICACY_<STORAGE_NAME>_PASSWORD
<STORAGE_NAME> must match the name you give when you run duplicacy init.
The benefit: no secrets in plaintext config files and super-easy container deployments; just stick them in Compose or your orchestrator’s secret store.
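For illustration, with a hypothetical storage name of offsite, the variables would be named along these lines (Duplicacy matches the storage name in upper case):
DUPLICACY_OFFSITE_S3_ID=GK...
DUPLICACY_OFFSITE_S3_SECRET=...
DUPLICACY_OFFSITE_PASSWORD=...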
One-Time Repository Initialiser (config.sh)
Place this inside /config/config.sh; it is executed by the entrypoint of duplicacy-cli-cron.
#!/usr/bin/env sh
set -eu
STORAGENAME="STORAGENAME"
SNAPSHOTID="STORAGENAME"
REPO="/local_shares/STORAGENAME"
URL="minio://${REGION}@${ENDPOINT}/${BUCKET}/${HOST}/${STORAGENAME}"
cd "$REPO"
if [ ! -f .duplicacy/preferences ]; then
  duplicacy init -e -storage-name "$STORAGENAME" "$SNAPSHOTID" "$URL"
fi
duplicacy list -storage "$STORAGENAME"
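If you’d rather not wait for the container entrypoint, the script can also be run by hand (assuming the container name from the compose file above):
docker exec -it duplicacy-cli-cron sh /config/config.sh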
Key takeaways:
- minio:// is Duplicacy’s generic S3 driver; Garage is 100% S3-compatible, so it just works. s3:// only works if HTTPS is set up, which is not my case.
- The directory hierarchy ${BUCKET}/${HOST}/${STORAGENAME} keeps backups from multiple machines neatly separated.
- -e encrypts data before it leaves the box; Garage only sees ciphertext.
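For reference, with the placeholder values from the compose file above, the URL that config.sh builds expands to something like this (every component here is still a placeholder):
# ${REGION}@${ENDPOINT}/${BUCKET}/${HOST}/${STORAGENAME}
minio://garage@GARAGE-IP:9000/duplicacy/HOSTNAME/STORAGENAME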
Daily Backup & Prune Job
Drop the following script into /etc/periodic/daily/backup inside the container (or mount it there). Alpine’s cron will execute it once every 24 h. Replace STORAGENAME with the actual storage name you use:
#!/usr/bin/env sh
set -u
set -o pipefail # so the tee pipeline below reports duplicacy's exit code rather than tee's
# Deliberately no "set -e": prune and the notification should still run if the backup fails.
# ───────── constants ────────────────────────────────────────────────
REPO_DIR="/local_shares/STORAGENAME"
STORAGENAME="STORAGENAME"
SNAPSHOTID="STORAGENAME"
MACHINENAME="${HOST:-$(hostname)}"
SHOUTRRR_URL="${SHOUTRRR_URL:-}"
# ───────── helper ───────────────────────────────────────────────────
notify() {
  # Only send when a Shoutrrr URL is configured.
  if [ -n "$SHOUTRRR_URL" ]; then
    /usr/local/bin/shoutrrr send -u "$SHOUTRRR_URL" -m "$1"
  fi
}
run_and_capture() { # $1 = log-header, $2 = command…
header="$1"; shift
echo "--- ${header} ---"
tmp=$(mktemp)
# shellcheck disable=SC2086
/bin/sh -c "$*" 2>&1 | tee "$tmp"
code=$?
out=$(cat "$tmp"); rm "$tmp"
echo
return $code
}
# ───────── run backup ───────────────────────────────────────────────
cd "$REPO_DIR" || exit 1
run_and_capture "Backup Output" "duplicacy backup -storage $STORAGENAME -stats"
BACKUP_EXIT=$?; BACKUP_MSG=$( [ $BACKUP_EXIT -eq 0 ] && \
echo "✅ Backup completed successfully" || \
echo "🚨 Backup failed — check logs" )
# ───────── prune old revisions ──────────────────────────────────────
run_and_capture "Prune Output" \
"duplicacy prune -storage $STORAGENAME -keep 0:360 -keep 30:180 -keep 7:30 -keep 1:7"
PRUNE_EXIT=$?; PRUNE_MSG=$( [ $PRUNE_EXIT -eq 0 ] && \
echo "✅ Prune completed successfully" || \
echo "🚨 Prune failed — check logs" )
# ───────── notification ────────────────────────────────────────────
MSG=$(cat <<EOF
🖥️ *${MACHINENAME}* — _${SNAPSHOTID}_
---------------------------------------------
${BACKUP_MSG}
${PRUNE_MSG}
EOF
)
echo "--- Notification Sent ---"
printf "%s\n" "$MSG"
notify "$MSG"
What happens:
- duplicacy backup uploads any new or changed blocks, deduplicated and encrypted.
- duplicacy prune applies a Grandfather-Father-Son style retention matching the -keep flags above:
  • every snapshot is kept for the first 7 days,
  • one per day once snapshots are older than 7 days,
  • one per week once they are older than 30 days,
  • one per month once they are older than 180 days,
  • nothing is kept beyond 360 days.
- A Telegram message is pushed via Shoutrrr so you know whether the job succeeded (any Shoutrrr-enabled service can be configured instead).
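To test the job without waiting for cron, it can be kicked off manually (assuming the container name and script path from above):
docker exec -it duplicacy-cli-cron sh /etc/periodic/daily/backup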
Restoring Data
Spin up duplicacy-web, point your browser to http://<host>:3875, log in, and add the same storage credentials you used in the CLI.
Duplicacy’s web UI autodetects existing snapshots, lets you browse revisions, and performs point-and-click restores—handy when you don’t feel like reaching for the terminal.
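If you prefer the terminal, the CLI container can handle restores as well. A sketch, assuming the repository path and storage name placeholders from the scripts above; pick a revision number from the list output first:
docker exec -it duplicacy-cli-cron sh -c "cd /local_shares/STORAGENAME && duplicacy list -storage STORAGENAME"
docker exec -it duplicacy-cli-cron sh -c "cd /local_shares/STORAGENAME && duplicacy restore -storage STORAGENAME -r <revision>"   # restores into /local_shares/STORAGENAME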
Recap
- Garage now exposes an S3 endpoint with triple-replicated, zone-aware storage.
- Duplicacy pushes encrypted, deduplicated backups into that endpoint on a schedule.
- Telegram (or any Shoutrrr-supported service) keeps you in the loop.
- A separate Web UI (Garage’s or Duplicacy’s) gives you visual insights and tools whenever you need them.
With that, your homelab (or small-office) backup pipeline is fully automated, versioned, and redundant—yet still lightweight enough to run on three modest machines. Sleep tight!