
Breaking Free from Cloud Oligopolies: Architecting a Zero-Trust Media and Document Vault with Immich, Nextcloud, and Tailscale

This tutorial provides a comprehensive, technical deep dive into building a secure, self-hosted media and document vault using Immich and Nextcloud, all while implementing Tailscale for zero-trust remote access. It covers the architectural decisions behind optimizing ZFS datasets for performance, configuring Docker containers for seamless integration, and establishing a secure networking layer that eliminates the need for public-facing ports.

Published: Tue Nov 11 2025

Technologies Used: Tailscale, TrueNAS Scale, Docker, Immich, Nextcloud, ZFS

Advanced · 43 minutes

The conveniences of Google Photos and Google Drive come at the steep cost of your digital privacy and a perpetual subscription fee. As we demonstrated in our hardware repurposing overview, a nine-year-old PC running TrueNAS Scale can easily handle modern container orchestration. However, simply installing applications isn’t enough. When scaling personal cloud solutions, improper storage mapping can lead to catastrophic hardware wear—specifically, ZFS write amplification caused by redundant dataset writes.

In this deep dive, we are going to deploy a highly optimized, self-hosted media server (Immich) alongside a powerful file synchronization platform (Nextcloud). We will construct a declarative docker-compose pipeline that handles high-resolution vector search and gigabyte-scale file uploads, all while explicitly optimizing our ZFS file system to eliminate mechanical hard drive thrashing.

The Self-Host Engineer’s Armory: Preparing the Debian Foundation

To successfully implement this architecture, you must shift your mindset from a standard “desktop user” to an Infrastructure-as-Code (IaC) engineer.

Knowledge Base:

  • ZFS Dataset Hierarchy: An understanding of how ZFS allocates blocks and how volume mounts bridge the host file system to containerized environments.
  • Docker Orchestration: Familiarity with declarative deployments using Docker Compose (we will use Dockge as our UI abstraction layer, but the raw YAML logic remains identical).
  • Reverse Proxy/Mesh VPNs: A foundational understanding of how Tailscale or Cloudflare Tunnels route traffic without exposing raw ports to the public internet.

Environment Prerequisites:

  • Host OS: TrueNAS Scale (Debian-based Linux) or any robust Debian/Ubuntu server running Docker.
  • File System: ZFS pools configured (our examples will assume a pool named tank).
  • Deployment Engine: Docker Engine 24.x+ and Docker Compose v2.x+.
  • Hardware Acceleration (Optional): If your legacy hardware includes an Nvidia GPU, ensure the proprietary Nvidia drivers and Nvidia Container Toolkit are installed to leverage CUDA for Immich’s machine learning tasks.
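Before writing any YAML, it is worth verifying these prerequisites from a shell. A quick sanity-check sketch (the commands are standard, but version output formats and the CUDA image tag will vary with your environment):

```shell
#!/bin/sh
# Sanity-check the deployment prerequisites before touching any YAML.

# Docker Engine and the Compose v2 plugin must both be present.
docker --version
docker compose version

# ZFS tooling and the expected pool (adjust "tank" to your pool name).
zpool list tank

# Optional: confirm the Nvidia Container Toolkit can reach the GPU.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi || \
  echo "No GPU acceleration available; Immich ML will fall back to CPU."
```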

The Sovereign Cloud Topology: Decoupling Storage from Logic

Before writing a single line of YAML, we must visualize our data flow.

Think of this architecture as a high-security automated warehouse. The ZFS datasets are the indestructible, physical concrete vaults (persistent storage). The Docker containers are the specialized warehouse robotic workers (ephemeral logic). Finally, the Tailscale/Cloudflare network is the secure, unmarked loading dock—the outside world cannot even see the building, but authorized delivery vehicles can dock instantly.

graph TD
    subgraph "Zero-Trust Perimeter (Tailscale / Cloudflare)"
        Client[End User Device] -->|Encrypted Tunnel| Proxy[Reverse Proxy / Mesh IP]
    end

    subgraph "Docker Host (Ephemeral Logic)"
        Proxy -->|Port 2283| IS[Immich Server]
        Proxy -->|Port 8081| NC[Nextcloud Server]
        
        IS <--> ML[Immich Machine Learning]
        IS <--> Redis[(Redis Cache)]
        IS <--> PG[(PostgreSQL)]
    end

    subgraph "ZFS Pool: 'tank' (Persistent Storage Vaults)"
        IS -->|Volume Mount| UPL[Dataset: image/uploads]
        PG -->|Volume Mount| DB[Dataset: image/db]
        NC -->|Volume Mount| NCDATA[Dataset: nextcloud/data]
        NC -->|Volume Mount| NCCFG[Dataset: nextcloud/config]
    end

By decoupling the persistent data (ZFS) from the application logic (Docker), we ensure that if a container crashes or an image requires a rebuild, our personal data remains completely untouched and perfectly preserved.

Forging the Vault: Declarative Infrastructure and ZFS Optimization

Let’s begin the implementation. Our first challenge is configuring Immich.

Step 1: The ZFS Dataset Paradigm Shift

🔴 Danger: The naive approach to installing Immich uses up to seven unique storage paths (Library, Uploads, Thumbs, Profile, Video, Postgres Data, Postgres Backup). In early iterations of this project, this configuration caused severe write redundancy: every uploaded photo was written twice across disparate datasets to accomplish the same caching task, sharply shortening mechanical hard drive lifespans.

The Refined Solution consolidates this into just two highly targeted datasets: uploads (for our media) and db (for our PostgreSQL database).

Navigate to your TrueNAS CLI or GUI and create the following structure:

  • /mnt/tank/configs/image/uploads
  • /mnt/tank/configs/image/db
  • /mnt/tank/configs/nextcloud/config
  • /mnt/tank/configs/nextcloud/data
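From a root shell on the host, the same layout can be created with `zfs create` (a sketch assuming the pool is named tank and mounted under /mnt, as in our examples):

```shell
# -p creates any missing parent datasets, analogous to mkdir -p
zfs create -p tank/configs/image/uploads
zfs create -p tank/configs/image/db
zfs create -p tank/configs/nextcloud/config
zfs create -p tank/configs/nextcloud/data

# Confirm the mountpoints line up with the paths used in docker-compose.yml
zfs list -r tank/configs
```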

Step 2: Architecting the Immich Stack

We will write our docker-compose.yml to reflect this optimized approach. Let’s look at the first chunk handling the core API and Machine Learning blocks:

version: '3.8' # Optional: Compose v2 ignores the top-level version key
services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:release
    volumes:
      # We map our single, consolidated ZFS dataset for all media
      - /mnt/tank/configs/image/uploads:/usr/src/app/upload
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - .env
    ports:
      - '2283:2283'
    depends_on:
      - redis
      - database
    restart: unless-stopped

Here, we bind our host’s /mnt/tank/configs/image/uploads to the container’s internal /usr/src/app/upload directory. All raw images, transcoded videos, and thumbnails will now safely reside in a single, snapshot-friendly ZFS dataset.

Next, we establish the ephemeral cache and persistent database logic:

  redis:
    container_name: immich_redis
    image: docker.io/redis:6.2-alpine@sha256:d6c2911ac51b289db208767581a5d154544f2b2ee2e14a7236ea90bbe1db09db
    restart: unless-stopped

  database:
    container_name: immich_postgres
    image: docker.io/tensorchord/pgvecto-rs:pg14-v0.2.0@sha256:90724186f0a451e5286046e848de2202cece253b8433d3ab2049d5bd32f91547
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
    volumes:
      # Dedicated dataset strictly for ACID compliant database transactions
      - /mnt/tank/configs/image/db:/var/lib/postgresql/data
    restart: unless-stopped

We separate the PostgreSQL data into its own dataset (image/db). ZFS handles small, random database writes very differently from large sequential video writes; splitting the datasets lets you tune the block size (recordsize) of each one independently, matching it to the database engine's page size rather than to multi-megabyte media files.
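ZFS does not retune itself automatically; the split simply makes per-dataset tuning possible. A sketch of that tuning (16K roughly matches PostgreSQL's 8K pages, 1M suits large media files; the exact values are a judgment call for your workload):

```shell
# Media dataset: large, mostly-sequential files benefit from big records
zfs set recordsize=1M tank/configs/image/uploads

# Database dataset: small random I/O; 16K keeps read-modify-write cycles cheap
zfs set recordsize=16K tank/configs/image/db

# Note: recordsize only affects blocks written after the change
zfs get recordsize tank/configs/image/uploads tank/configs/image/db
```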

Step 3: Integrating the Nextcloud Document Hub

Nextcloud operates as our Google Drive replacement. Unlike Immich, Nextcloud from linuxserver.io bundles much of its stack, but it requires explicit configuration for internal routing.

  nextcloud:
    image: lscr.io/linuxserver/nextcloud:latest
    container_name: nextcloud
    environment:
      - PUID=1000 # Matches the UID of the TrueNAS 'apps' user
      - PGID=1000
      - TZ=America/New_York
    volumes:
      # Separating the config layer from the raw storage layer
      - /mnt/tank/configs/nextcloud/config:/config
      - /mnt/tank/configs/nextcloud/data:/data
    ports:
      - 8081:443 # Mapping the host port 8081 to Nextcloud's internal HTTPS
    restart: unless-stopped
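Once both stacks are up, a quick smoke test from the host confirms the port mappings (a sketch; the Immich ping path has moved between releases, so consult the API docs for your version):

```shell
# Immich: expect a small JSON pong from the API
curl -s http://localhost:2283/api/server-info/ping

# Nextcloud: the linuxserver.io image serves self-signed HTTPS internally,
# so -k skips certificate verification; expect HTTP 200 from status.php
curl -sk -o /dev/null -w '%{http_code}\n' https://localhost:8081/status.php
```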

Silicon Symbiosis: Unmasking ZFS Write Amplification

Why did the “Naive” 7-dataset Immich approach chew through hard drives, and why is our new approach better?

🔵 Deep Dive: Write Amplification and Copy-on-Write ZFS is a Copy-on-Write (CoW) file system. When you modify a file, ZFS doesn’t overwrite the existing data block. Instead, it writes the new data to a new block, updates the pointer, and then frees the old block.

When Immich was configured with seven different datasets (separating thumbnails, profile pictures, and library files), uploading a single photo triggered a cascade. The core server wrote the photo to the library. The ML container generated a thumbnail and wrote it to thumbs. Because these were separate datasets, ZFS had to manage metadata, checksums, and block allocations completely independently for every single micro-write across the mechanical disks, causing massive seek-time delays and “thrashing.”

By combining all media into /mnt/tank/configs/image/uploads, ZFS can batch these asynchronous writes into a single transaction group (TXG) in RAM and flush them to the hard drives in one smooth, contiguous mechanical sweep. (The ZIL only comes into play for synchronous writes; bulk media uploads are asynchronous.) This dramatically improves both performance and hardware lifespan.
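You can watch this batching directly. `zpool iostat` shows writes arriving in periodic bursts as each transaction group flushes (by default roughly every five seconds):

```shell
# Print per-vdev I/O statistics for the pool every 5 seconds; during an
# Immich bulk upload, look for periodic write bursts as each TXG commits
zpool iostat -v tank 5
```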

Defending the Fortress: Trust Boundaries and Persistence Pitfalls

A self-hosted application is only as secure as its boundary. Once your Nextcloud container spins up, if you attempt to access it via your Tailscale IP or your Cloudflare Tunnel domain (e.g., nextcloud.serversatho.me), you will be immediately blocked by an “Access through untrusted domain” error.

This is not a bug; it is a vital security mechanism.

Mitigating Host Header Poisoning

Nextcloud rejects any HTTP request whose Host header doesn't match an entry in its pre-approved whitelist. Because our reverse proxy forwards the request under the external domain, Nextcloud sees an unrecognized Host header and refuses to serve the page.

We must manually inject our domain into the container’s persistent volume. Open a shell on your host machine:

# Escalate to root
sudo su
# Navigate to the bound persistent configuration directory
cd /mnt/tank/configs/nextcloud/config/www/nextcloud/config/
# Edit the PHP configuration array
nano config.php

Inside this file, you will find the 'trusted_domains' array. You must map your specific reverse proxy domain or mesh IP into this array:

  'trusted_domains' => 
  array (
    0 => '10.99.0.191:8081',          // The Docker host's LAN IP and mapped port
    1 => 'nextcloud.serversatho.me',  // The secure Cloudflare Tunnel / Tailscale domain
  ),

💡 Pro-Tip: Entries in trusted_domains are plain strings compared exactly against the incoming Host header (a leading wildcard such as *.serversatho.me is also supported); no escaping of periods is required. Save the file and refresh your browser, and Nextcloud will now accept requests arriving through your proxy domain.
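If you prefer not to hand-edit PHP, the same change can be made through Nextcloud's occ tool. A sketch assuming the linuxserver.io image, which runs the app as user abc and keeps occ under /config/www/nextcloud:

```shell
# Append the proxy domain as index 1 of the trusted_domains array
docker exec -u abc nextcloud php /config/www/nextcloud/occ \
  config:system:set trusted_domains 1 --value='nextcloud.serversatho.me'

# Verify the resulting array
docker exec -u abc nextcloud php /config/www/nextcloud/occ \
  config:system:get trusted_domains
```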

The PostgreSQL Permission Trap

When connecting the image/db dataset to the Postgres container, you will face catastrophic crash loops if you forget Linux permission basics. The Docker container runs Postgres under a specific user ID (usually UID 999 or 70). However, TrueNAS defaults dataset ownership to the root user.

If you mapped the volume without fixing permissions, Postgres cannot initialize its write-ahead logs and the container dies instantly. When creating the dataset in TrueNAS, ensure you configure the ACL (Access Control List) to grant the “Apps” user (or the specific UID 999) full Read/Write/Execute permissions.
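From a root shell on the host, the fix is a one-time ownership change on the dataset. A sketch assuming the Debian-based pgvecto-rs image, where Postgres runs as UID 999 (verify with `docker exec immich_postgres id postgres` if unsure):

```shell
# Hand the database dataset to the container's postgres user (UID/GID 999
# on Debian-based images; Alpine-based images use UID 70 instead)
chown -R 999:999 /mnt/tank/configs/image/db

# Postgres refuses to initialize if the data directory is group/world readable
chmod 700 /mnt/tank/configs/image/db
```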
