The homelab currently consists of two devices: a Beelink S12 Pro which runs most of the services and a Raspberry Pi 4 that runs the Adguard DNS.

Todos

  • Investigate ArchiveBox for archiving the web
  • Fix Authelia to cache sessions in Redis
  • Secure access to translation (translate.c.devon.lol)
  • Add Linkding bookmark tagging n8n automation
  • Add translation service
  • Version docker compose stacks
  • Watch for docker container updates
  • Disable SSH password login
  • Capture links from more services (Mastodon, Lemmy, etc.) to Linkding
  • Add video call service
    • Couldn’t find a good service for this, so I’ve tabled it for now. I’ll re-evaluate later.

Services

Media Stack

  • Prowlarr and qBittorrent are connected to the gluetun (vpn) container to route their traffic through the VPN
  • In order for Prowlarr to be able to manage trackers in Radarr, Sonarr, and Lidarr, those also need to be added to the VPN container’s network
  • The VPN container must then expose the ports and configure the domains for all services connected to it
  • qBittorrent will report trackers unreachable and will remain "stalled" on all torrents until a port is forwarded in the VPN
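The wiring described above can be sketched in compose form (service names, images, and ports here are illustrative, not the actual stack):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    ports:
      # gluetun must publish the ports of every service riding its network
      - "8080:8080"   # qBittorrent web UI (port assumed)
      - "9696:9696"   # Prowlarr (port assumed)

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    # Join the VPN container's network stack; all traffic exits via the VPN
    network_mode: "service:gluetun"

  prowlarr:
    image: lscr.io/linuxserver/prowlarr
    network_mode: "service:gluetun"
```

With network_mode: "service:gluetun", the attached services share gluetun's network namespace, which is why the ports (and any extra hostnames) have to be declared on the gluetun service itself.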

Usenet Services

  • My primary usenet server is Eweka.
  • I have a 500GB block from UsenetExpress that is used occasionally as a fallback. The block is a one-time payment that lasts until it is used up. It was at 478GB remaining as of 2025-10-12.

Pihole

I have migrated DNS from Pihole to Adguard Home running on a separate Raspberry Pi 4. This section of documentation is no longer current. See the Adguard Home section below.

Access Pihole at https://192.168.1.162:38793/admin or https://pihole.c.devon.lol.

DNS should be routed through the homelab and will be handled by Pihole. This handles blocking but also routes domains internally when accessed from a system connected to the DNS.

If this container is edited or rebuilt, you must get a console to it (which can be done via Portainer) and run pihole -a -p to set a password for the web UI.

Adguard Home

Access Adguard Home at https://192.168.1.13 or https://dns.c.devon.lol (assuming DNS through Adguard Home is working).

Adguard Home runs on a separate device — a Raspberry Pi 4. I moved it so that if I need to perform maintenance on the home server, I will not lose DNS in the interim. I plan to run only this service on the Raspberry Pi.

DNS should be routed to the Raspberry Pi at 192.168.1.13. Like the Pihole service before it, this handles blocking and forwards to one of three DNS providers, unless the domain is a local cloud domain (*.c.devon.lol) in which case the requests are routed to the home server.

HTTP traffic on the domain name is handled by the Caddy reverse proxy on the home server and is proxied through to the Pi. Caddy does not allow any traffic outside the local network, meaning Adguard Home’s interface is inaccessible outside the network. The reverse proxy is configured in the Caddyfile instead of via docker-compose labels, since it is not on the same Docker instance. The Caddyfile is at /media/Media/apps/caddy/Caddyfile. Traffic on the IP goes directly to the Pi.
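The Caddyfile site block for this likely looks something like the following (a sketch only; the real block and snippets live in /media/Media/apps/caddy/Caddyfile, and the Adguard Home backend port is assumed):

```caddyfile
dns.c.devon.lol {
	# Assumed matcher: reject any request not from a private (LAN) address
	@external not remote_ip private_ranges
	abort @external

	# Proxy to Adguard Home on the Pi (IP from this doc; port assumed)
	reverse_proxy 192.168.1.13:80
}
```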

The stack and container are accessible through the main Portainer instance.

Adguard Home’s Homepage dashboard widget is currently configured not through the Docker integration but in /media/Media/apps/homepage/config/services.yaml.
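A sketch of what that services.yaml entry might look like (group name, credentials, and widget fields are assumptions; check the Homepage widget docs for the exact adguard options):

```yaml
- Network:
    - Adguard Home:
        icon: adguard-home.png
        href: https://dns.c.devon.lol
        description: DNS and ad blocking
        widget:
          type: adguard
          url: http://192.168.1.13     # assumed internal URL
          username: admin              # assumed; see password locker
          password: "{{HOMEPAGE_VAR_ADGUARD_PASSWORD}}"
```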

Portainer

Access Portainer at https://192.168.1.162:9443/ or https://portainer.c.devon.lol, if DNS is set up to go through Adguard Home on the Raspberry Pi (192.168.1.13). This is only accessible on the local network.

Portainer is running on the home server, but the Raspberry Pi’s Docker is also accessible by way of a Portainer agent running on that device.

Portainer is the only stack that cannot be managed by Portainer. To manage it, modify the docker-compose.yml at /home/raddevon/docker-compose-stacks/portainer, take the stack down with docker compose down, and bring it back up with docker compose up -d.

Caddy

Caddy is the reverse proxy to all the other services. It is a custom build to include caddy-docker-proxy, replace-response for replacing the canonical vertd URL with my vertd instance, and the Porkbun DNS module so that auto-HTTPS for wildcard domains will work.

ARG CADDY_VERSION=2.6.1
# Build stage: compile Caddy with the plugins listed above
FROM caddy:${CADDY_VERSION}-builder AS builder

RUN xcaddy build \
    --with github.com/lucaslorentz/caddy-docker-proxy/v2 \
    --with github.com/caddyserver/replace-response \
    --with github.com/caddy-dns/porkbun

FROM caddy:${CADDY_VERSION}-alpine

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

# Run in docker-proxy mode so Caddy generates its config from container labels
CMD ["caddy", "docker-proxy"]

Changing the stack

Most services can be configured from within Portainer, but this breaks if you try to change Portainer itself. Instead:

  1. ssh raddevon@192.168.1.162
  2. su - and provide root user password
  3. edit ~/docker-compose-stacks/portainer/docker-compose.yml and save changes
  4. run docker compose down && docker compose up -d to restart stack with new changes

Updating

To update only the version of Portainer, use these commands:

docker compose pull
docker compose up --force-recreate --build -d
docker image prune -f

Compose file

services:
  portainer:
    image: portainer/portainer-ce:2.21.4-alpine
    container_name: portainer
    command: -H unix:///var/run/docker.sock
    ports:
      - "9443:9443"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "/DATA/AppData/Portainer/data:/data"
    restart: unless-stopped
    networks:
      - caddy
    labels:
      caddy: portainer.c.devon.lol # Domain name
      caddy.1_import: authelia_auth
      caddy.2_import: deny_external
 
 
networks:
  caddy:
    external: true

Syncthing

Syncthing is used for syncing files, alongside a small custom service (freetube-sync-manager) that handles merging of conflict copies, specifically for FreeTube playlist, profile, and history sync. That project is at /Users/raddevon/Documents/projects/freetube-sync-manager. Build and deploy instructions are in that project’s readme.

Obsidian LiveSync (via CouchDB)

I found instructions for setting this up on Reddit, but I ended up using a prebuilt image that was already properly configured instead.

I was unable to use the generated setup URI to configure, but it’s easy to configure manually.

  • Server URI: https://notesync.c.devon.lol
  • Username: couch
  • Password: See password locker
  • Database: obsidian

I can access the web-based database administration tool at https://notesync.c.devon.lol/_utils/.

Baikal (CalDAV/CardDAV)

Baïkal is our self-hosted server for synchronizing calendars, contacts, and reminders across all devices. It uses the open CalDAV and CardDAV standards, making it compatible with a wide range of native applications on macOS, iOS, Android, and other platforms.

We use Baïkal as a more feature-rich alternative to Radicale, primarily for its ability to send email-based event invitations to attendees, a key feature for scheduling and collaboration.

Administrator Notes

This service is managed via Docker Compose and fronted by caddy-docker-proxy. The configuration contains several key details that were necessary to ensure client compatibility.

  • Service URL: https://dav.c.devon.lol
  • Stack: dav
  • Storage: Baïkal uses a single-file SQLite database, located in the data volume. Backups are as simple as copying the contents of the data directory.
    • Config Path: /media/Media/apps/baikal/config
    • Data Path: /media/Media/apps/baikal/data
  • Authentication: The service uses its own built-in Basic authentication and is not integrated with Authelia. This was a deliberate choice to resolve a fundamental protocol mismatch between Authelia’s redirect-based authentication and the direct challenge-response method expected by native clients (especially on macOS).
  • Proxy Configuration: The caddy-docker-proxy labels for this service are critical. They must include Host and X-Forwarded-Proto headers to allow Baïkal to correctly generate public-facing URLs. Without these, client validation fails with a DAAccountValidationDomain error.
  • Email Invitations: Invitation support is configured via the MSMTPRC environment variable in the docker-compose.yml file, not through the web UI.
  • User Management: All users must be created manually through the admin dashboard at the main service URL. There is no self-registration. When creating a user, the username should be the user’s email address.

Connecting Your Devices

To use this service, contact the administrator for an account (username and password). Once you have your credentials, you can connect your devices.

macOS (Calendar, Contacts, & Reminders)

The macOS setup process is very specific and must be followed exactly.

  1. Open System Settings → Internet Accounts.
  2. Click Add Account… → Add Other Account….
  3. You will add two separate accounts: one "CalDAV" (for Calendars/Reminders) and one "CardDAV" (for Contacts).
  4. For each, select Advanced as the Account Type.
  5. Enter the following details precisely:
    • User Name: Your Baïkal username
    • Password: Your Baïkal password
    • Server Address: dav.c.devon.lol
    • Server Path: /dav.php/principals/YOUR_USERNAME/ (replace YOUR_USERNAME with your actual username, and include the final slash)
    • Port: 443
    • Use SSL: Must be checked.

iOS / iPadOS

Setup on iOS is typically easier and can often discover the correct settings automatically.

  1. Go to Settings → Calendar (or Contacts) → Accounts → Add Account.
  2. Select Other.
  3. Choose Add CalDAV Account (or Add CardDAV Account).
  4. Enter the following:
    • Server: dav.c.devon.lol
    • User Name: Your Baïkal username
    • Password: Your Baïkal password
    • Description: A name for the account (e.g., "Home Contacts").
  5. iOS should verify the account and complete the setup. If it fails, you may need to enter the full server path (/dav.php/principals/YOUR_USERNAME/) in the ā€œAdvancedā€ settings.

Android

The recommended client is DAVx⁵.

  1. Install and open DAVx⁵.
  2. Add a new account (+) and select "Login with URL and user name".
  3. Enter the following:
    • Base URL: https://dav.c.devon.lol
    • User name: Your Baïkal username
    • Password: Your Baïkal password
  4. DAVx⁵ will connect and discover your address books and calendars, allowing you to choose which ones to sync.

Sharing a Calendar

Sharing may be done through a client or via the calendar's administration URL, which is https://dav.c.devon.lol/dav.php/calendars/<user-email>/<calendar-id>. Find the calendar ID in the admin interface by clicking the information button next to a calendar; the easiest approach is to copy the URI from the info popover and visit it. The username (email address) and password for the page should be those of the calendar's owner.

From that page, scroll to the bottom to find the sharing form. In the "Share with (uri)" field, enter a mailto URI (i.e., mailto:<email-address>). This email address should match the registered email address of another user in Baikal.

Mosquitto (MQTT)

This service is used for OwnTracks. It utilizes MQTT over WebSockets to leverage encryption from Caddy.
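The broker side of that setup is roughly the following mosquitto.conf (a sketch with assumed ports; the WebSockets listener is what Caddy proxies and wraps in TLS):

```conf
# Plain MQTT for the LAN (port assumed)
listener 1883
protocol mqtt

# WebSockets listener; Caddy terminates TLS in front of this (port assumed)
listener 9001
protocol websockets

# Require accounts from the passwd file created below
allow_anonymous false
password_file /mosquitto/config/passwd
```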

Creating a new user

  1. Get shell on the mosquitto container. (Should be /bin/sh.) You can do this using the "Console" link in Portainer.
  2. Run this command:
mosquitto_passwd -b /mosquitto/config/passwd <username> <password>

OwnTracks Configuration (Android)

  • Mode: MQTT
  • Host: mqtt.c.devon.lol
  • Port: 443
  • Client ID: Some unique value
  • Use WebSockets: On
  • Device ID: Anything (maybe device model)
  • Tracker ID: User initials
  • Username & Password: Whatever you created in the MQTT server's CLI
  • TLS: On
  • All other settings: Default values

Homepage

Homepage is a dashboard with service discovery via Docker labels. You can find examples of these in Adding a service.

I now run distinct instances of Homepage for Tiffany and me. Each has an individualized config, at /media/Media/apps/homepage/camptiff and /media/Media/apps/homepage/raddevon respectively. Configuration can be personalized in Docker labels by adding .instance.<instanceName> after the homepage segment of the label name. instanceName is set in each instance's settings.yaml file; they are currently set to camptiff and raddevon respectively.

For example, if you wanted to place a widget in a different group per instance, you would achieve it with these labels:

homepage.instance.camptiff.group: "Tools"
homepage.instance.raddevon.group: "Media"

Omitting the instance name will cause the configuration value to be applied to all instances. That seems to be true even if an instance-specific value is added.

To show a label-configured widget for only one user, you may include the instance marker segments in all of the homepage labels for that service.

Widgets may also be configured in the config files, but prefer Docker labels for consistency.
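For example, to show a service only on my (raddevon) instance, every homepage label for that service carries the instance segment (values here are illustrative):

```yaml
homepage.instance.raddevon.group: Media
homepage.instance.raddevon.name: Jellyfin
homepage.instance.raddevon.href: https://media.c.devon.lol
homepage.instance.raddevon.description: Media server
```

Because none of the labels use the bare homepage. prefix or the camptiff instance segment, the camptiff dashboard shows nothing for this service.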

FindMyDevice (FMD)

This is used as a self-hosted phone finder for our GrapheneOS Pixel 9 phones. The stack runs both this service and ntfy, which, alongside the ntfy app, provides notifications for the FMD service.

The FMD service is not required to use FMD on the phone — it can be used exclusively via SMS — but the service makes it convenient to see the location of the device on a map and to interface with it without having to send SMS commands.

Issues

Recently, our devices started getting a 403 from the server even though the official configuration instructions claim that enabling anonymous write access for up* topics is sufficient for UnifiedPush. I fixed this by granting explicit permissions to our user accounts on the server (using ntfy access <username> 'up*' read-write from a shell) and then adding those accounts to the devices. It's not clear what changed, because the previous method worked when it was initially set up.

Adding a service

  1. Add a new stack by copy/pasting a docker compose YAML into Portainer
  2. To expose the service to Caddy, add it to the caddy network by including
services:
  <name>:
    networks:
      - caddy

in the stack’s docker compose service definition, along with this top-level network definition:

networks:
  caddy:
    external: true
  3. Add caddy and homepage labels:
services:
  <name>:
    labels:
      # If you need to define multiple domains, add an underscore and a number to every caddy label (e.g., caddy_0, caddy_0.reverse_proxy, etc.) so that the correct directives are associated with the correct domain.
      caddy: <subdomain>.c.devon.lol # Domain name
      # Reverse proxy port number is the container's internal port, not the exposed port
      # Note there is no colon before the port. Space only.
      caddy.reverse_proxy: "{{upstreams <container-port>}}"
      # Use this next block only if the service backend uses self-signed HTTPS.
      #### Start self-signed HTTPS to backend
      # caddy.reverse_proxy: "https://{{upstreams 9443}}"
      # caddy.reverse_proxy.transport: http
      # caddy.reverse_proxy.transport.tls:
      # caddy.reverse_proxy.transport.tls_insecure_skip_verify:
      #### End self-signed HTTPS to backend
      # Import the authelia_auth snippet if the service needs to be behind authelia auth. Use only if the service doesn't have its own authentication or if its authentication can be (and is) disabled. Snippets defined at /media/Media/apps/caddy/Caddyfile
      caddy.import: authelia_auth
      # Use this snippet for services that should be available only via the LAN. Snippets defined at /media/Media/apps/caddy/Caddyfile
      caddy.import: deny_external
      # Use deny_internal for a service that should not be available via the LAN.
      caddy.import: deny_internal
      # These imports are alternatives; to apply more than one, number them (e.g., caddy.1_import, caddy.2_import, as in the Portainer compose file) so the label keys stay unique.
      # Only needed to debug problems
      caddy.log:
      caddy.log.format: console
      # When using a wildcard domain (like *.wake.c.devon.lol), you need DNS verification for auto SSL
      # This is how you configure that.
      caddy.tls.dns: porkbun
      caddy.tls.dns.api_key: "{env.PORKBUN_API_KEY}" # These vars are configured on the caddy stack
      caddy.tls.dns.api_secret_key: "{env.PORKBUN_API_SECRET_KEY}" # These vars are configured on the caddy stack
      # To add the service to homepage...
      homepage.instance.raddevon.group: Media
      homepage.instance.camptiff.group: Tools
      # šŸ‘† Add instance segments when users need individualized settings
      homepage.name: Jellyfin
      # You can use any icon in this repo by name: https://github.com/walkxcode/dashboard-icons/
      homepage.icon: jellyfin.png
      # You may alternatively use icons from Material Design icons, Simple Icons, or selfh.st/icons
      # Append MD or SI icons with `-#ffffff` to define the color
      # homepage.icon: mdi-flask-outline-#ff0000
      homepage.href: https://media.c.devon.lol # Link destination
      homepage.description: Media server
      # See widget documentation: https://gethomepage.dev/widgets/
      homepage.widget.type: jellyfin
      homepage.widget.url: https://media.c.devon.lol # Internal path to API or service
      # All widgets have a type and most have a `url`.
      # Other properties are defined in the widget documentation.
      homepage.widget.key: $JELLYFIN_API_KEY
      homepage.widget.enableUser: true
      homepage.widget.enableNowPlaying: true
      homepage.widget.enableBlocks: true
  4. If using authelia and not using 2FA for this service, SSH to the server (ssh raddevon@192.168.1.162) and add a new domain rule to the authelia config (at /DATA/AppData/authelia/config/configuration.yml):
access_control:
  default_policy: two_factor
  rules:
    - domain: <subdomain>.c.devon.lol
      policy: one_factor

two_factor is the default policy, so no additional configuration in Authelia is required if you're using that. Restart the authelia container after saving.

  5. Commit the change to the docker-compose repo by navigating to /DATA/AppData/Portainer/data/compose and then doing the regular Git thing to add and commit the new files.

To make the host accessible

Add the following to the docker-compose:

    extra_hosts:
      - host.docker.internal:host-gateway

You may then access the host at host.docker.internal.

Backups

Backups are handled by Backrest, a front-end for restic, running at https://backups.c.devon.lol. The backups consist of two repositories/plans:

  1. Docker volumes are backed up to a local repository named docker-volume-backups stored at /media/Media/backups. This backup runs daily at 3am. Before it runs, the included Docker containers are brought down via script (at /media/Media/apps/backrest/scripts) so their databases are captured in a consistent state; the containers are started again once the backup is complete. The same data is also captured by the off-site backup, but that backup may catch a database in a bad state since it does not stop the containers. If that happens, this local backup can be restored to bring the containers' databases back to a good state. The backup is encrypted with a password stored in 1Password as "Backrest/restic docker volume backups".
  2. All other system data, with the exception of non-original media (movies, TV, etc.), is backed up to a Backblaze B2 bucket called home-server-backup-restic in the EU region. (Note: I have two Backblaze accounts: one in the US West region and this one. This one is titled "Backblaze home server backup (EU region)" in 1Password.) The connected Backrest repository is home-server-backup. The backup runs at 4am each day and is encrypted with a password stored in 1Password as "Backrest/restic full homelab backup".

Both backup plans ping Uptime Kuma after a successful run. Uptime Kuma expects a ping on each plan's endpoint once per day to continue to mark the backups as "up."

Updating

You can see images that currently have updates available in Cup.

General

When a container needs an update, you can often update through Portainer by clicking into the container, clicking the "Recreate" button, and toggling the "Re-pull image" option.

However, this does not work on containers with some network_mode settings because Portainer appears to try to re-create the container with all existing options including a generated hostname that is not part of the config. network_mode and hostname are mutually exclusive settings, so the container rebuild fails.

I have created a script just for this scenario: dupdate. Run dupdate <service-name> and the script will discover the correct stack for the service, switch into the stack directory, pull the image, and rebuild the container with the new image. It will conditionally include a stack.env file if one exists in the stack’s directory. dupdate is at /home/raddevon/bin, but it is on the PATH, so it can be run from anywhere on the system.
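A minimal sketch of what a dupdate-style helper does (this is NOT the actual script at /home/raddevon/bin/dupdate; the stack search path and compose filename here are assumptions):

```shell
#!/bin/sh
# dupdate-style helper sketch: find the stack that defines a service,
# pull its image, and recreate the container with the new image.
STACKS_DIR="${STACKS_DIR:-/home/raddevon/docker-compose-stacks}"

dupdate() {
  svc="$1"
  [ -n "$svc" ] || { echo "usage: dupdate <service-name>" >&2; return 1; }

  # Locate the compose file whose services block defines this service
  compose_file=""
  for f in "$STACKS_DIR"/*/docker-compose.yml; do
    if grep -q "^  ${svc}:" "$f" 2>/dev/null; then
      compose_file="$f"
      break
    fi
  done
  [ -n "$compose_file" ] || { echo "no stack defines service '$svc'" >&2; return 1; }

  # Switch into the stack directory
  cd "$(dirname "$compose_file")" || return 1

  # Conditionally include stack.env, as the real script does
  set --
  if [ -f stack.env ]; then
    set -- --env-file stack.env
  fi

  # Pull the new image and rebuild the container with it
  docker compose "$@" pull "$svc" &&
    docker compose "$@" up -d --force-recreate "$svc"
}
```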

Note: Containers will only update as far as the tag allows, so if a container is not updating but you’re also not getting errors, check the stack’s config to see if it is pinned to a particular version.

Postgres

These instructions are generalized and will work for many applications. Before following them, consult the application's documentation to see if it has upgrade instructions that may include other necessary steps.

Backup

When updating a Postgres container, first ensure the application connected to it can support the target version. If it can, get a shell on the Postgres container and dump the data:

pg_dumpall -U <username> > backup.sql

You can discover the username via the stack’s docker-compose.yaml.

Move the backup.sql file to a volume so that it can be accessed from the host. On the host, move it to a location apart from the database data location (in case that's where you wrote it previously).

Move the data directory on the host and re-create the existing data directory. Postgres containers demand an empty data directory to initialize, so this gives it one while also creating a backup which can be restored if something goes wrong.

mv data/ data-backup/
mkdir data

Add a new bind mount to the container, mapping that file's location into the container. (I usually bind it directly at the container's root because it makes it easier to restore.)

services:
  immich-db:
    …
    volumes:
      …
      - /media/Media/apps/immich/backup.sql:/backup.sql

For redundancy, trigger a manual backup of the docker containers and the entire system at https://backups.c.devon.lol.

Update

Update the Postgres version tag on the container, then update the stack to relaunch the container with the new version and the new bind.

Restore

Once the container is up, shell into it and restore the backup:

psql -U <username> -f /backup.sql

Clean Up

After verifying functionality, delete /backup.sql and the backup data directory. Remove any temporary volume mounts from the docker compose.

Rebuilding

From scratch

If this needs to be rebuilt, for example, to upgrade to new hardware, this process makes it relatively easy.

Home server

  1. Install Debian on new server
  2. Install Docker and compose
  3. Copy the docker volumes and the contents of /DATA/AppData to the new system. This can be done from the old system using rsync, although rsync will need to be installed on the new system as well. When copying the docker volumes from /var/lib/docker/volumes, you will need to run rsync with sudo on both ends; to do so on the remote, add --rsync-path="sudo rsync".
  4. Add the Portainer compose file and bring it up with sudo docker compose up -d
  5. Export a backup from the existing Portainer instance (Settings > General, backup is the last option on the page)
  6. Access the new Portainer instance and initialize it with the exported backup
  7. Install ntfs-3g, mount the media drive at /media/Media, and add it to /etc/fstab so that it will automatically be mounted
  8. Create the caddy network with sudo docker network create caddy
  9. Create the vpn_bridge network with sudo docker network create --subnet=172.15.0.0/16 vpn_bridge. The subnet makes the static IP for the sabnzbd container work.
  10. Start bringing up docker-compose stacks in the new Portainer: caddy, authelia, followed by the others in any order, maybe saving media for last
  11. (Optional) Install Samba and configure it to share /media/Media
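Step 3's rsync invocations might look like this (the direction, user, and flag choices here are assumptions; adjust as needed):

```shell
# Run from the NEW server: pull app data and Docker volumes from the old
# server (192.168.1.162). --rsync-path="sudo rsync" makes the remote side
# run as root, which the Docker volumes require; the local side uses sudo too.
sudo rsync -aH --rsync-path="sudo rsync" \
  raddevon@192.168.1.162:/DATA/AppData/ /DATA/AppData/
sudo rsync -aH --rsync-path="sudo rsync" \
  raddevon@192.168.1.162:/var/lib/docker/volumes/ /var/lib/docker/volumes/
```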

Pi DNS

  1. Install Raspberry Pi OS
  2. Install Docker and compose
  3. In Portainer on the home server, add an environment to get instructions on adding the Portainer agent on the Pi and connecting it
  4. Bring up the Adguard Home container and configure
  5. Add a wildcard DNS rewrite (Filters > DNS rewrites) for *.c.devon.lol to the home server IP.
  6. Update DNS on the router to point to this system
  7. Enable TCP port 2375 for external connection to Docker. This allows for connection to Docker from services on the home server, like Uptime Kuma and Homepage.
  8. Back on the home server, add a site to the Caddyfile proxying dns.c.devon.lol to the Pi
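One way to handle step 7 (assuming Raspberry Pi OS's systemd-managed Docker) is a drop-in override:

```ini
# /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375
```

Then run sudo systemctl daemon-reload && sudo systemctl restart docker. Port 2375 is plain, unauthenticated HTTP, so this is only reasonable because the Pi is reachable solely from the trusted LAN.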

Troubleshooting

Reverse proxy

General

Add Caddy log directive labels for the problem container:

services:
  <name>:
    labels:
      ...
      caddy.log:
      caddy.log.format: console

After making these changes, restart the stack. Try again to load the service, and check the logs of the Caddy docker container to see the error log.

502 Errors

Try restarting the server or the container stack that is the source of the error.