On this page
- The Architect’s Workbench: TrueNAS Scale, Portainer, and ZFS Datasets
- The Harbor Master’s Design: Decoupling Storage from Compute
- Forging the Stack: Translating Compose Files for Persistent ZFS Storage
- The Naive Approach
- The Refined Solution: Chunk 1 - The Supporting Infrastructure
- The Refined Solution: Chunk 2 - The Milvus Engine
- Bridging the Gap: ZFS Memory Mapping and Container Lifecycle APIs
- Navigating the Minefield: OOM Kills, ACL Nightmares, and Network Isolation
- Mastering the Fleet: Total Sovereignty over Containerized Microservices
The high-level overview of our personal cloud infrastructure highlighted a critical architectural pivot: moving away from fragmented SaaS subscriptions toward a sovereign, self-hosted ecosystem powered by a nine-year-old gaming PC. While deploying lightweight applications like Nextcloud or Uptime Kuma on TrueNAS Scale is straightforward, the true test of this legacy hardware comes when we introduce enterprise-grade, multi-container distributed systems.
Today, we are bridging the gap between theory and execution. We are going to deploy Milvus, a highly performant vector database designed for AI workflows (like semantic search and RAG), using Portainer’s Stacks (Docker Compose).
TrueNAS Scale’s built-in application catalog is fantastic for beginners, but it abstracts away the granular control required for complex microservices. By leveraging Portainer, we bypass restrictive app catalogs, giving us absolute command over container lifecycle, memory limits, and ZFS volume bindings. You will learn how to parse a raw docker-compose.yml file, translate its storage parameters to respect TrueNAS’s ZFS dataset rules, and safely run a heavy database on aging silicon without starving your host operating system.
The Architect’s Workbench: TrueNAS Scale, Portainer, and ZFS Datasets
Before we spin up vector search capabilities on our home server, we must ensure our foundational environment is prepared to handle the load. Milvus is not a monolithic application; it requires supporting infrastructure (MinIO for object storage and etcd for metadata).
Knowledge Base:
- You must understand the difference between Docker Named Volumes and Host Bind Mounts.
- Familiarity with YAML syntax and the `docker-compose` specification.
- Basic comprehension of TrueNAS Scale dataset permissions (ACLs).
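Since the named-volume vs. bind-mount distinction decides who controls where your bytes live, here is a minimal compose fragment contrasting the two (a sketch; the paths follow this guide's example dataset):

```yaml
services:
  etcd:
    image: quay.io/coreos/etcd:v3.5.5
    volumes:
      # Named volume: Docker picks the location, buried under /var/lib/docker/volumes
      # - etcd_data:/etcd
      # Host bind mount: you pick the location -- an absolute path on a ZFS dataset
      - /mnt/tank/apps/milvus/etcd:/etcd
```

Throughout this guide we use only the second form, so TrueNAS, not Docker, owns the data.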
Environment Setup:
- Host OS: TrueNAS Scale (Debian-based).
- Orchestrator: Portainer Community Edition (CE) deployed via TrueNAS Apps (running with root/Docker socket access as demonstrated in the source setup).
- Storage: A configured ZFS Pool (e.g., `tank`).
- Target Application: Milvus Standalone (v2.6.x) [1].
You must pre-create a generic dataset in TrueNAS Scale dedicated to this stack. For this guide, assume you have created a dataset located at /mnt/tank/apps/milvus.
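With the dataset created, the per-service subdirectories that the compose file will bind-mount can be pre-created from the TrueNAS shell. A quick sketch (directory names are the ones this guide's volume mappings assume):

```shell
# Pre-create one subdirectory per service on the ZFS dataset so every
# bind mount in the stack lands on an existing, ZFS-backed path.
BASE="/mnt/tank/apps/milvus"
mkdir -p "$BASE/etcd" "$BASE/minio" "$BASE/data"
ls "$BASE"
```

Creating these up front also gives you a chance to set ACLs on them before any container writes root-owned files.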
The Harbor Master’s Design: Decoupling Storage from Compute
Think of Portainer as a master harbor pilot. Massive cargo ships (the containers: Milvus, MinIO, etcd) arrive carrying vast amounts of data. If the harbor pilot allows them to drop their cargo wherever they please, the harbor (your server) becomes an unmanageable mess. The pilot must strictly direct each ship to a specific, reinforced concrete dock (your ZFS datasets).
By default, Docker wants to manage storage internally. Our architecture forces Docker to surrender that control to TrueNAS’s ZFS file system. This ensures our vector data benefits from ZFS’s self-healing checksums, automated snapshots, and easy SMB access via Tailscale.
graph TD
A[TrueNAS Scale Host] -->|ZFS File System| B[/mnt/tank/apps/milvus]
B --> C[etcd_data/]
B --> D[minio_data/]
B --> E[milvus_data/]
F(Portainer CE) -.->|Orchestrates| G{Docker Engine}
G -->|Network: milvus-bridge| H[Container: milvus-etcd]
G -->|Network: milvus-bridge| I[Container: milvus-minio]
G -->|Network: milvus-bridge| J[Container: milvus-standalone]
H ==>|Bind Mount| C
I ==>|Bind Mount| D
J ==>|Bind Mount| E
In this architecture, compute is ephemeral. You could completely destroy the Portainer instance and the Milvus containers, but because of the specific host bind mounts, your vector data remains perfectly intact on the ZFS array.
Forging the Stack: Translating Compose Files for Persistent ZFS Storage
Portainer’s “Stacks” feature is essentially a web-based UI wrapper around docker-compose up -d. However, blindly pasting a developer’s quick-start compose file into Portainer is a recipe for disaster on a TrueNAS system.
The Naive Approach
If you look at the official Milvus standalone documentation, their provided docker-compose.yml uses relative path volume mapping [1]:
# 🔴 DANGER: DO NOT USE THIS IN PORTAINER
version: '3.5'
services:
etcd:
image: quay.io/coreos/etcd:v3.5.5
volumes:
- ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/etcd:/etcd
🔴 Danger: If you deploy this in Portainer, the . (current directory) resolves to Portainer’s internal, hidden data directory located deep within the TrueNAS app structure (e.g., /var/lib/docker/volumes/...). This data is nearly impossible to back up, is not accessible via SMB, and, because it bypasses your ZFS datasets, has no snapshot protection.
The Refined Solution: Chunk 1 - The Supporting Infrastructure
To fix this, we replace relative paths with Absolute Host Bind Mounts targeting our TrueNAS dataset. Open Portainer, navigate to Stacks, click Add Stack, name it milvus, and use our refined implementation.
First, we define the underlying network and the dependencies: etcd (for metadata) and minio (for raw object storage).
version: '3.5'
networks:
# Creates an isolated bridge network for the stack
milvus-net:
driver: bridge
services:
# etcd manages Milvus cluster metadata and configurations
etcd:
container_name: milvus-etcd
image: quay.io/coreos/etcd:v3.5.5
environment:
- ETCD_AUTO_COMPACTION_MODE=revision
- ETCD_AUTO_COMPACTION_RETENTION=1000
- ETCD_QUOTA_BACKEND_BYTES=4294967296
- ETCD_SNAPSHOT_COUNT=50000
volumes:
# Refined: Hardcoded absolute path to TrueNAS ZFS dataset
- /mnt/tank/apps/milvus/etcd:/etcd
networks:
- milvus-net
# MinIO acts as the persistence layer for Milvus log snapshots and index files
minio:
container_name: milvus-minio
image: minio/minio:RELEASE.2023-03-20T20-16-18Z
environment:
# Default credentials (change in production!)
MINIO_ACCESS_KEY: minioadmin
MINIO_SECRET_KEY: minioadmin
ports:
- "9001:9001"
- "9000:9000"
volumes:
# Refined: Hardcoded absolute path to TrueNAS ZFS dataset
- /mnt/tank/apps/milvus/minio:/minio_data
command: minio server /minio_data --console-address ":9001"
networks:
- milvus-net
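Optionally, you can harden startup ordering by adding healthchecks to these dependency services. A sketch modeled on the checks used in Milvus's official compose file (endpoints assumed; MinIO exposes a liveness probe on its API port):

```yaml
  minio:
    # ...as defined above, plus:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
```

With a healthcheck in place, the `depends_on` entries in the next chunk could be upgraded to the `condition: service_healthy` form, so Milvus waits for a live MinIO rather than merely a started container.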
The Refined Solution: Chunk 2 - The Milvus Engine
Now we append the main standalone container to our stack. This is the application layer that performs the actual vector math. Notice how we introduce resource limits—a critical step when running on 9-year-old hardware.
standalone:
container_name: milvus-standalone
image: milvusdb/milvus:v2.6.13
command: ["milvus", "run", "standalone"]
environment:
- ETCD_ENDPOINTS=etcd:2379
- MINIO_ADDRESS=minio:9000
# Enforces startup order
depends_on:
- "etcd"
- "minio"
ports:
- "19530:19530" # gRPC port for SDK connections
- "9091:9091" # WebUI port
volumes:
# Refined: Hardcoded absolute path to TrueNAS ZFS dataset
- /mnt/tank/apps/milvus/data:/var/lib/milvus
networks:
- milvus-net
# Pro-Tip: Prevent legacy hardware from crashing
deploy:
resources:
limits:
memory: 8G
💡 Pro-Tip: Once pasted into the Web Editor, hit Deploy the stack. Portainer communicates directly with the TrueNAS Docker daemon, pulls the images, creates the milvus-net bridge, and spins up the containers in the exact dependency order specified.
Bridging the Gap: ZFS Memory Mapping and Container Lifecycle APIs
Now that the code is running, let’s explore why we architected it this way, specifically focusing on how Docker interacts with TrueNAS under the hood.
When you specify /mnt/tank/apps/milvus/data as a bind mount, Portainer translates your Compose YAML into a Docker API POST request. The Docker daemon interacts with the TrueNAS Linux kernel to map the container’s virtual file system (/var/lib/milvus) directly to the ZFS file system inode.
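To make that translation concrete, here is an abbreviated sketch of the container-create payload Portainer ends up POSTing to the Docker Engine API for the standalone service (field names follow the Engine API's `HostConfig` schema; the values are this guide's examples, not a byte-for-byte capture):

```python
# Abbreviated body of POST /containers/create for milvus-standalone.
# "Binds" is what turns our compose volume line into a host bind mount;
# a named volume would appear here as "volname:/var/lib/milvus" instead.
create_payload = {
    "Image": "milvusdb/milvus:v2.6.13",
    "HostConfig": {
        # host path : container path -- absolute, so ZFS owns the data
        "Binds": ["/mnt/tank/apps/milvus/data:/var/lib/milvus"],
        # Compose's deploy.resources.limits.memory: 8G, expressed in bytes
        "Memory": 8 * 1024**3,
        "NetworkMode": "milvus_milvus-net",
    },
}

host_path, container_path = create_payload["HostConfig"]["Binds"][0].split(":")
print(host_path)        # /mnt/tank/apps/milvus/data
print(container_path)   # /var/lib/milvus
```

Because the host path is absolute, the daemon maps it straight onto the ZFS dataset instead of minting a Docker-managed volume.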
🔵 Deep Dive: The Memory Tug-of-War
Vector databases like Milvus rely heavily on HNSW (Hierarchical Navigable Small World) graph algorithms to perform fast similarity searches. To achieve low latency, Milvus builds these graphs in the Heap memory of the container.
Simultaneously, TrueNAS relies on ZFS ARC (Adaptive Replacement Cache), which uses unused system RAM to cache frequently accessed disk blocks.
On an aging 9-year-old gaming PC, RAM is likely limited (e.g., 16GB or 32GB). If Milvus runs unchecked and ingests millions of vectors, its heap size will balloon. This creates a resource war between the Docker container and the TrueNAS host OS. By implementing the deploy.resources.limits.memory: 8G flag in our Compose file, we set a hard cgroup limit on the Docker daemon. If Milvus tries to exceed 8GB, the Linux Kernel’s OOM (Out Of Memory) killer will terminate the container, saving your TrueNAS host from a kernel panic or complete system lockup.
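As a sanity check on what that `8G` string actually becomes, the tiny helper below (ours, not Docker's) mimics how Compose-style memory strings are converted into the byte value written to the cgroup limit:

```python
# Hypothetical helper mirroring Compose-style memory-string parsing.
UNITS = {"b": 1, "k": 1024, "m": 1024**2, "g": 1024**3}

def parse_mem(value: str) -> int:
    """Convert '8G', '512m', '4GiB', etc. to bytes."""
    s = value.strip().lower().rstrip("ib")   # drop an optional 'iB' suffix
    if s and s[-1] in UNITS:
        return int(s[:-1]) * UNITS[s[-1]]
    return int(s)                            # bare number means bytes

print(parse_mem("8G"))   # 8589934592 -- the hard cap handed to the cgroup
```

If Milvus's resident set crosses that byte count, the kernel OOM killer reaps the container while the host, and the ZFS ARC, stay untouched.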
Navigating the Minefield: OOM Kills, ACL Nightmares, and Network Isolation
Even with a refined deployment, running custom containers on TrueNAS Scale presents specific gauntlets you must navigate.
- The Root Permission Paradox (ACLs): By default, Docker containers run processes as `root`. When Milvus writes index files to `/mnt/tank/apps/milvus/data`, those files are owned by `root`. If you later try to access this dataset via a TrueNAS SMB share (authenticated as your standard personal user over Tailscale), you will get a “Permission Denied” error.
  - Mitigation: Before deploying the stack, ensure the ZFS dataset in TrueNAS is set to a “Generic” or “SMB” preset, and explicitly grant your TrueNAS user account `Read/Write` privileges using the TrueNAS ACL manager. Portainer runs as root, so it will write successfully, but standardizing the ACLs ensures you can still back up or migrate those files remotely.
- Startup Race Conditions: Our project relies on Tailscale for secure, zero-trust remote access. If the TrueNAS server reboots after a power failure, Docker might start the Milvus stack before the Tailscale daemon has initialized its secure mesh tunnel.
  - Mitigation: Avoid binding container ports to specific host IP addresses (e.g., `100.x.x.x:19530`). By binding to `"19530:19530"`, the container listens on `0.0.0.0` (all interfaces). Because the server sits behind a standard router firewall, it remains secure, while automatically becoming available on the Tailscale IP as soon as the Tailscale service boots up.
- Database Corruption on Hard Stops: Unlike lightweight stateless apps, Milvus and etcd are highly stateful. If you use Portainer to “Kill” the container rather than “Stop” it, you send a `SIGKILL` instead of a `SIGTERM`. This prevents etcd from writing its final memory buffer to ZFS, risking index corruption. Always use Portainer’s graceful Stop button, which allows the database to perform its shutdown routines.
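The SIGTERM/SIGKILL difference is easy to demonstrate outside Docker. The toy Python sketch below (a stand-in process with a hypothetical flush-on-shutdown handler, not Milvus itself) shows why a graceful Stop persists buffered state while a hard Kill loses it:

```python
import os
import signal
import subprocess
import sys
import tempfile
import textwrap
import time

# Toy stand-in for a stateful database: it "flushes" its buffer to disk
# only when it receives SIGTERM (Portainer's Stop). SIGKILL (Kill) gives
# the handler no chance to run, so the state file is never written.
CHILD_SRC = textwrap.dedent("""
    import signal, sys, time
    def flush(signum, frame):
        with open(sys.argv[1], "w") as f:
            f.write("flushed")
        sys.exit(0)
    signal.signal(signal.SIGTERM, flush)
    while True:
        time.sleep(0.1)
""")

def stop_with(sig: signal.Signals) -> bool:
    """Start the toy process, send it `sig`, report whether state survived."""
    with tempfile.TemporaryDirectory() as d:
        state = os.path.join(d, "state")
        proc = subprocess.Popen([sys.executable, "-c", CHILD_SRC, state])
        time.sleep(1.0)          # give the child time to install its handler
        proc.send_signal(sig)
        proc.wait()
        return os.path.exists(state)

print(stop_with(signal.SIGTERM))  # True  -- graceful Stop: buffer persisted
print(stop_with(signal.SIGKILL))  # False -- hard Kill: buffer lost
```

Docker's own `docker stop` follows the same pattern: it sends SIGTERM first and only escalates to SIGKILL after a grace period, which is exactly the window etcd needs to write its final buffer to ZFS.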
Mastering the Fleet: Total Sovereignty over Containerized Microservices
You now know how to deploy a complex, multi-container AI workload on a repurposed TrueNAS legacy server using Portainer Stacks. More importantly, you understand the vital translation step: converting standard Docker documentation into architecture that respects your host’s ZFS storage pools and memory limitations.
By successfully deploying Milvus, you’ve unlocked the backend infrastructure required to self-host LLM-powered business logic or personal AI projects (like an automated photo tagging service or local document RAG). You have achieved total sovereignty over both your code and your data, proving that aging silicon, when orchestrated with precision, is more than capable of enterprise-grade performance.