
Observability in a Sovereign Ecosystem: Deploying Uptime Kuma and Umami on TrueNAS SCALE for Zero-Trust Website Monitoring

This tutorial guides you through the deployment of an internal observability stack within a self-hosted personal cloud environment. Leveraging TrueNAS SCALE, ZFS, and a Tailscale mesh VPN, we will set up Uptime Kuma for robust service monitoring and Umami for privacy-respecting analytics.

Published

Tue Nov 11 2025

Technologies Used

Uptime Kuma · Umami · TrueNAS SCALE · Tailscale · ZFS
Beginner · 14 minutes

The Blind Spot Dilemma: Why Sovereign Clouds Demand Telemetry

Repurposing legacy hardware—like a nine-year-old gaming PC—into a secure, self-hosted personal cloud is a triumph of data sovereignty. By deploying TrueNAS SCALE, ZFS, and a Tailscale mesh VPN, you’ve successfully wrested your data away from SaaS conglomerates. However, running a complex suite of isolated, zero-trust applications introduces a critical operational blind spot: Silent Failures.

When you intentionally hide your Nextcloud, Vaultwarden, and Immich instances from the public internet using a private mesh network, standard cloud monitoring tools can no longer reach them. If a container crashes, a ZFS dataset hits capacity, or a Tailscale node drops, you won’t know until you actively try (and fail) to access your passwords or files.

To solve this, we will deploy Uptime Kuma for robust, internal service monitoring, and Umami for privacy-respecting, lightweight analytics.

In this tutorial, we will bypass the basic TrueNAS GUI click-throughs to examine the underlying declarative configurations. We will provision persistent ZFS datasets, configure Zero-Trust webhook alerting, and architect an internal telemetry stack that efficiently polls your self-hosted ecosystem without exposing a single port to the public web.

Assembling the Instruments: Infrastructure Prerequisites

Before we deploy our observability stack, your environment must meet specific baseline requirements. This guide assumes you are not merely running consumer-level software, but are treating your home lab as production infrastructure.

Knowledge Base

  • ZFS Topology: You should understand how to provision and permission ZFS datasets, specifically dealing with POSIX ACLs for container storage.
  • Container Orchestration: Familiarity with Docker Compose syntax. Even if deploying via the TrueNAS SCALE Apps UI, understanding the underlying YAML mappings for volumes, networks, and environment variables is crucial for troubleshooting.
  • Overlay Networks: A conceptual grasp of Tailscale (WireGuard) routing, specifically how internal IP addresses map to the node running your containers.

Environment

  • Host OS: TrueNAS SCALE (Dragonfish 24.04 or Electric Eel 24.10+). Note that SCALE replaced its Kubernetes (k3s) app backend with native Docker in Electric Eel; either way, the logic in this guide maps directly to Docker primitives.
  • Storage: A configured ZFS pool (e.g., tank) with available space.
  • Applications:
    • Uptime Kuma (v1.23+)
    • Umami (v2.0+)
    • PostgreSQL (v15+) - Required as the backend datastore for Umami.
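Before proceeding, a quick preflight can confirm the tooling above is actually present on the host. This is an optional sketch, guarded with `command -v` so it degrades gracefully; the `zfs list` and `tailscale ip` invocations in the comments are what you would run on the real TrueNAS SCALE box:

```shell
# Optional preflight: confirm the required tools exist before deploying.
preflight() {
    for tool in zfs docker tailscale; do
        if command -v "$tool" >/dev/null 2>&1; then
            echo "found: $tool"
        else
            echo "missing: $tool (install or check PATH)"
        fi
    done
    # On a real TrueNAS SCALE host, these confirm the pool and tailnet IP:
    # zfs list tank
    # tailscale ip -4
    echo "preflight complete"
}

preflight
```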

The Nervous System: Architecting Zero-Trust Observability

Think of your TrueNAS server as a high-security bank. Vaultwarden and Nextcloud are the vaults inside. If you hire an external security guard (like an external ping service) to check the vaults, you have to leave the front door open for them. Instead, Uptime Kuma and Umami act as the bank’s internal closed-circuit camera system. They live inside the building, constantly checking the locks and logging foot traffic, reporting back to you via an encrypted walkie-talkie (Tailscale + Webhooks).

Here is the data flow of our internal telemetry stack:

graph TD
    subgraph TrueNAS SCALE Host
        subgraph ZFS Pool
            UK_Data[(Uptime Kuma Data)]
            UM_DB[(Umami Postgres DB)]
        end
        
        subgraph Container Network
            UK[Uptime Kuma]
            UM[Umami App]
            PG[PostgreSQL]
            
            VW[Vaultwarden]
            NX[Nextcloud]
        end
        
        UK -->|HTTP/TCP Poll| VW
        UK -->|HTTP/TCP Poll| NX
        UM -->|Read/Write| PG
    end
    
    UK_Data --- UK
    UM_DB --- PG
    
    UK -->|Alerts| Discord[Discord/Slack Webhook]
    Client[Client Device] -->|Tailscale VPN| UM
    Client -->|Tailscale VPN| UK

Wiring the Sensors: Containerized Deployment and Configuration

While TrueNAS SCALE provides a UI for community apps, under the hood, these are translated into container manifests. To truly understand how this is built, we will look at the underlying Docker Compose representation of our deployment.

The Naive Approach: Ephemeral Storage

A naive deployment might look like docker run -p 3001:3001 louislam/uptime-kuma:1. This is dangerous: the container stores its state internally, so the moment it is recreated (an image update, a docker rm, a rebuild after a crash), your monitoring history and alert configurations are permanently destroyed.

The Refined Solution: ZFS-Backed Persistence

First, we must provision our ZFS datasets via the TrueNAS CLI (or UI). We need strict isolation for our telemetry data.

# Create isolated datasets for our monitoring stack
zfs create -p tank/apps/uptimekuma
zfs create -p tank/apps/umami
zfs create tank/apps/umami/db

# Set ownership to the default 'apps' user/group (UID/GID 568 in TrueNAS)
chown -R 568:568 /mnt/tank/apps/uptimekuma
chown -R 568:568 /mnt/tank/apps/umami
# Note: the postgres image re-owns its data directory on first start,
# so 568 on tank/apps/umami/db is only a safe initial default.

Now, let’s look at the deployment logic for Uptime Kuma. We bind the container’s internal data directory directly to our ZFS dataset.

# Uptime Kuma Deployment Logic
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    restart: unless-stopped
    ports:
      # Binds to the host IP. Through Tailscale, this is accessed securely.
      - "3001:3001"
    volumes:
      # 💡 Pro-Tip: Mapping to the ZFS dataset ensures ARC caching 
      # and protects data from container lifecycles.
      - /mnt/tank/apps/uptimekuma:/app/data
    security_opt:
      - no-new-privileges:true

Next, we deploy Umami. Umami requires a relational database backend; we will use PostgreSQL. Notice how both containers share a dedicated bridge network so Umami can talk to Postgres, while Postgres publishes no ports and is therefore unreachable from outside the Docker network.

# Umami Analytics & Database Deployment Logic
services:
  umami-db:
    image: postgres:15-alpine
    container_name: umami_db
    restart: always
    environment:
      POSTGRES_DB: umami
      POSTGRES_USER: umami
      # 🔴 Danger: Never hardcode passwords in production. 
      # Use TrueNAS SCALE's secret management in the UI.
      POSTGRES_PASSWORD: your_secure_password
    volumes:
      # Persistent ZFS mapping for the database
      - /mnt/tank/apps/umami/db:/var/lib/postgresql/data
    networks:
      - umami-net

  umami:
    image: ghcr.io/umami-software/umami:postgresql-latest
    container_name: umami
    restart: always
    ports:
      - "3000:3000"
    environment:
      # Connection string referencing the isolated internal container
      DATABASE_URL: postgresql://umami:your_secure_password@umami-db:5432/umami
      # Hashing salt for anonymized analytics
      APP_SECRET: your_randomly_generated_secret
    depends_on:
      - umami-db
    networks:
      - umami-net

networks:
  umami-net:
    driver: bridge

Configuring the Heartbeat

Once Uptime Kuma is running, you configure monitors. Because Uptime Kuma runs on the TrueNAS host itself, it can poll your services over internal container IPs or the host's LAN IP (e.g., polling http://192.168.1.100:8080 for Vaultwarden), with no monitoring traffic ever leaving the box.
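Under the hood, an HTTP monitor is little more than a bounded GET request. Here is a minimal shell sketch of a single heartbeat; the Vaultwarden URL is the same hypothetical LAN address as above:

```shell
# Sketch of one Uptime Kuma HTTP heartbeat: a GET with a hard timeout.
check_http() {
    local url="$1"
    # -f: treat HTTP errors as failure; --max-time: bound the whole request.
    if curl -fsS -o /dev/null --max-time 10 "$url"; then
        echo "UP: ${url}"
    else
        echo "DOWN: ${url}"
    fi
}

check_http "http://192.168.1.100:8080"   # hypothetical Vaultwarden address
```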

💡 Pro-Tip - Upside Down Mode: A brilliant security use-case for Uptime Kuma is “Upside Down Mode”, which inverts the meaning of a check: “up” becomes the failure state. Point a TCP monitor at your public IP’s SSH port (22). If port 22 ever becomes accessible from the public web (a massive security risk), Uptime Kuma will trigger an outage alert, immediately warning you of a firewall misconfiguration.
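The check behind Upside Down Mode is just a raw TCP probe, inverted. A hedged sketch using bash’s /dev/tcp, with a placeholder TEST-NET address where your public IP would go; run it from an external vantage point (e.g., a cheap VPS), not from inside your LAN:

```shell
# Inverted port check: reachable = alarm, unreachable = all clear.
check_port() {
    local host="$1" port="$2"
    if timeout 5 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "ALERT: port ${port} on ${host} is reachable from the internet"
    else
        echo "OK: port ${port} on ${host} is closed (Upside Down Mode stays green)"
    fi
}

check_port "203.0.113.10" 22   # placeholder TEST-NET-3 IP; use your public IP
```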

Into the Engine Room: ZFS, SQLite, and Polling Physics

Why go through the trouble of mapping specific ZFS datasets for lightweight monitoring apps?

🔵 Deep Dive: The Physics of Polling and Caching

Uptime Kuma relies on an internal SQLite database, while Umami uses PostgreSQL. Both perform frequent but very small write operations: Uptime Kuma, checking 15 services every 60 seconds, commits to its SQLite database hundreds of times an hour.

If this were running on an SD card or a cheap SSD, the write amplification would steadily degrade the flash memory. Because we bound these volumes to ZFS datasets instead, we benefit from the ZIL (ZFS Intent Log) and the ARC (Adaptive Replacement Cache): the databases’ synchronous commits are acknowledged quickly via the intent log, while ZFS batches the data itself in RAM into transaction groups and flushes it to the legacy hardware’s spinning disks in large, contiguous writes. Hot rows and indexes are served straight from the ARC in memory. Together this drastically reduces disk I/O latency and largely neutralizes the performance penalty typically associated with high-frequency database writes on aging hardware.
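As an optional tuning pass on top of this, the following ZFS properties are commonly recommended starting points for database-backed datasets. Treat them as suggestions to benchmark, not gospel; the dataset paths assume the layout created earlier:

```shell
# Align ZFS record size with database page sizes to cut write amplification.
# PostgreSQL writes 8K pages; 16K is a common compromise for its datasets.
zfs set recordsize=16K tank/apps/umami/db

# SQLite's default page size is 4K; a small recordsize helps Uptime Kuma too.
zfs set recordsize=16K tank/apps/uptimekuma

# Skip access-time updates -- pure overhead for database files.
zfs set atime=off tank/apps/umami/db tank/apps/uptimekuma
```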

Furthermore, Uptime Kuma is written in Node.js and runs on a single-threaded asynchronous event loop. When it dispatches a batch of HTTP checks across your home lab, it doesn’t block a thread waiting for Vaultwarden to respond; it registers callbacks and yields, which is why a 9-year-old CPU can monitor dozens of endpoints with a memory footprint of only a few tens of megabytes.

Stress Testing the Wires: False Positives and Network Partitions

Monitoring infrastructure is useless if you stop trusting it.

The Retry Gauntlet (Mitigating False Positives)

By default, you might configure Uptime Kuma to ping a service every 60 seconds with 0 retries. 🔴 Danger: This is a recipe for alert fatigue. If Nextcloud performs a background garbage collection and drops a single HTTP packet, you will be paged at 2:00 AM.

The Fix: Configure the monitor’s “Retries” to 3 and the “Heartbeat Retry Interval” to 20 seconds. If a service drops, Kuma will retry every 20 seconds. It will only send a webhook to your Discord/Slack if the service remains unreachable for a full minute (3 consecutive failures).
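The resulting state machine is simple enough to sketch in shell: alert only once the failure count reaches the retry budget. The stand-in monitor_once function always fails here so the alert path is visible; in reality it would be the HTTP check itself, and the sleep between probes would be live:

```shell
# Sketch of the retry/alert state machine: N consecutive failures => alert.
RETRIES=3
RETRY_INTERVAL=20   # seconds between re-probes

monitor_once() {    # stand-in for the real HTTP check; always fails here
    return 1
}

failures=0
alerted=false
for attempt in $(seq 1 "$RETRIES"); do
    if monitor_once; then
        failures=0          # one success resets the counter
        break
    fi
    failures=$((failures + 1))
    if [ "$failures" -ge "$RETRIES" ]; then
        alerted=true
        echo "ALERT after ${failures} consecutive failures (~$((failures * RETRY_INTERVAL))s)"
    fi
    # sleep "$RETRY_INTERVAL"   # the real monitor waits between probes
done
```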

Managing the Tailscale Split-Brain

Because you are accessing these telemetry dashboards via Tailscale (Zero-Trust), what happens if the TrueNAS server loses connection to the Tailscale control plane, but your local ISP is still up?

Your containers (Vaultwarden, Nextcloud) might actually be healthy, but you cannot reach them from your laptop remotely. If Uptime Kuma is only sending alerts to a local service, you won’t know. The Mitigation: Configure Uptime Kuma’s webhook alerts (e.g., Discord or Telegram) to route out over the standard WAN connection. Even if Tailscale drops, TrueNAS can still reach the Discord API, allowing Kuma to send a message: "Tailscale interface down on TrueNAS!"
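You can verify this escape hatch manually before trusting it. Discord webhooks accept a plain JSON POST with a content field; the URL below is a placeholder for your own webhook (Server Settings → Integrations):

```shell
# Manual test of the out-of-band alert path over the WAN.
WEBHOOK_URL="https://discord.com/api/webhooks/<id>/<token>"   # placeholder
PAYLOAD='{"content": "Tailscale interface down on TrueNAS!"}'

# Discord's webhook endpoint accepts a JSON body with a "content" field.
curl -fsS -H "Content-Type: application/json" \
     -d "$PAYLOAD" "$WEBHOOK_URL" \
  || echo "webhook delivery failed -- check the URL and WAN connectivity"
```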

Securing Umami Analytics

Umami is designed to track user interactions without cookies. If you are tracking internal usage of your Nextcloud instance, the Umami tracking script must be injected into the pages you want measured. Ensure that the APP_SECRET in your Docker configuration is securely generated, and avoid rotating it: the secret salts Umami’s visitor hashing, so changing it breaks the deterministic hashing of your historical tracking data and fragments your analytics.
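A reasonable way to generate that secret once and keep it out of the compose file itself (the umami.env path is just a suggestion; reference it from the umami service via compose’s env_file key):

```shell
# Generate a strong APP_SECRET and stash it in an env file, not the YAML.
APP_SECRET=$(openssl rand -base64 32)
echo "APP_SECRET=${APP_SECRET}" >> ./umami.env
echo "generated a ${#APP_SECRET}-character secret"
```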

Mastering Observability in a Sovereign Ecosystem

You now understand how to orchestrate a production-grade observability stack within a sovereign personal cloud. By deploying Uptime Kuma and Umami on TrueNAS SCALE, and mapping their data engines directly to ZFS datasets, you have created a highly resilient, low-footprint nervous system for your infrastructure.

This fundamentally ties back to the ethos of the upcycled 9-year-old server project. You aren’t just blindly hosting files; you are applying enterprise site reliability engineering (SRE) principles to your personal data. You have decoupled your application logic from your telemetry, utilizing zero-trust networks to ensure that while your server silently and securely hums away in the dark, you always have a perfectly clear view of its vitals.


