Homelab
A privacy-first, self-hosted infrastructure running 24 services across 3 servers. All data is processed locally, with no cloud dependencies for core functionality.
Architecture
Hardware
TerraMaster F2-424
16 GB RAM, ZFS (500 GB mirror + 8 TB RAIDZ)
20+ Docker containers
Windows PC
Ryzen 5 5600X, 32 GB RAM, RTX 4060
LM Studio, Whisper, Ollama
Home Assistant Green
Dedicated HA appliance
Home Assistant + voice satellites
MacBook Pro
Development, Obsidian sync
Obsidian REST API
Services (24)
Workflow automation platform with 80+ active workflows
Smart home control with voice satellites and contextual automations
Local LLM inference server (Qwen3 8B) — powers all AI workflows
GPU-accelerated speech-to-text transcription
Alternative LLM backend running alongside LM Studio
RAG search across all documents with vector embeddings
Vector database for semantic search across knowledge base
Document OCR, classification, and archival
Structured wiki — auto-updated daily journals and weekly summaries
Personal notes and daily journals with REST API for automation
RSS reader with AI-powered article scoring and digests
Self-hosted bookmark manager
Media server for movies, TV, and music
Self-hosted photo management with ML-powered organization
Self-hosted Bitwarden-compatible password manager
VPN for secure remote access to all services
Secure tunnels for external service access
Service health monitoring with alerting
Real-time system performance metrics
Push notification hub for all automations
Self-hosted PDF manipulation toolkit
File sync between devices — reMarkable, Obsidian vault
Self-hosted remote desktop access
Privacy-first budgeting and financial tracking
Automation Highlights
80+ n8n workflows handle everything from morning briefings to infrastructure self-healing. All AI processing runs locally via LM Studio on an RTX 4060.
Morning Briefing
AI-generated daily briefing with weather, tasks, calendar, and health data — delivered via email and voice
Omi Pipeline
Wearable captures conversations; recordings are transcribed locally via Whisper, classified by context, and action items are extracted to Todoist
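The classify-and-extract stage can be sketched with plain rules. This is illustrative only: the real pipeline runs in n8n with LLM-based classification, and the keyword sets and patterns below are assumptions, not the actual workflow's.

```python
import re

# Illustrative context keywords; the live pipeline classifies with the local LLM.
CONTEXTS = {
    "work": {"meeting", "deadline", "client"},
    "home": {"groceries", "dinner", "laundry"},
}

def classify(transcript: str) -> str:
    """Pick the context whose keywords overlap the transcript most."""
    words = set(re.findall(r"[a-z]+", transcript.lower()))
    scores = {ctx: len(words & kw) for ctx, kw in CONTEXTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "general"

def extract_actions(transcript: str) -> list[str]:
    """Naive action-item extraction: phrases after 'I need to' / 'remind me to'."""
    pattern = r"(?:i need to|remind me to)\s+([^.!?]+)"
    return [m.strip() for m in re.findall(pattern, transcript, re.IGNORECASE)]
```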
PKM Pipeline
Documents from Obsidian, reMarkable, and Paperless are processed, embedded, and routed to BookStack and vector search
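The embedding step starts with chunking each document. A minimal sketch of overlapping word-window chunks; the window and overlap sizes are assumptions, not the pipeline's actual settings.

```python
def chunk_text(text: str, max_words: int = 200, overlap: int = 20) -> list[str]:
    """Split a document into overlapping word windows for embedding.
    Overlap preserves context across chunk boundaries at retrieval time."""
    words = text.split()
    if len(words) <= max_words:
        return [" ".join(words)] if words else []
    chunks, step = [], max_words - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
    return chunks
```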
Self-Healing Infrastructure
AI-powered triage of service failures with auto-restart, health checks, and rollback
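The triage logic amounts to an escalation ladder. A sketch with made-up thresholds; the real decisions are LLM-assisted inside an n8n workflow.

```python
def triage(consecutive_failures: int, restarts_last_hour: int) -> str:
    """Escalating response: watch, restart, roll back, then page a human.
    Thresholds are illustrative, not the actual workflow's values."""
    if consecutive_failures == 0:
        return "healthy"
    if restarts_last_hour >= 3:
        return "alert"          # restart loop: stop flapping, notify instead
    if consecutive_failures >= 5:
        return "rollback"       # persistent failure: revert to last good image
    if consecutive_failures >= 2:
        return "restart"        # repeated failure: bounce the container
    return "watch"              # single blip: wait for the next health check
```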
Docker Update Pipeline
Weekly container update checks with safe rollout — pre-backup, health check, and auto-rollback on failure
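The rollout sequence can be sketched as a single function; the callbacks stand in for the actual backup, docker pull, and rollback commands the workflow runs.

```python
def safe_update(backup, pull, health_ok, rollback) -> str:
    """One container's update: back up, pull, verify, roll back on failure.
    Callbacks are stand-ins; the real flow is an n8n workflow driving docker."""
    backup()                    # snapshot before touching anything
    pull()                      # fetch and start the new image
    if health_ok():             # post-update health check
        return "updated"
    rollback()                  # health check failed: restore previous state
    return "rolled_back"
```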
RSS AI Digest
Daily AI-curated news digest — articles scored by relevance, top picks summarized and pushed via notification
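Scoring can be sketched with keyword weights; the live digest asks the local LLM for relevance instead, and the interest weights here are invented for illustration.

```python
def score(article: dict, interests: dict[str, float]) -> float:
    """Sum the weights of interest keywords found in title + summary."""
    text = (article["title"] + " " + article.get("summary", "")).lower()
    return sum(w for kw, w in interests.items() if kw in text)

def top_picks(articles: list[dict], interests: dict[str, float], n: int = 5) -> list[dict]:
    """Rank by score, keep the top n that matched at least one interest."""
    ranked = sorted(articles, key=lambda a: score(a, interests), reverse=True)
    return [a for a in ranked[:n] if score(a, interests) > 0]
```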
AI Stack
Local LLM Inference
LM Studio serves Qwen3 8B via an OpenAI-compatible API. Every AI-powered workflow — from document classification to morning briefings — runs against this single local endpoint. No data leaves the network.
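Because the endpoint is OpenAI-compatible, any HTTP client works. A minimal sketch using only the standard library; the host is a placeholder (LM Studio listens on port 1234 by default) and the model field is whatever model the server has loaded.

```python
import json
import urllib.request

# Placeholder host; LM Studio's OpenAI-compatible server defaults to port 1234.
LLM_URL = "http://lmstudio.local:1234/v1/chat/completions"

def build_request(prompt: str, system: str = "You are a concise assistant.") -> dict:
    """Standard OpenAI-style chat payload; model name matches the loaded model."""
    return {
        "model": "qwen3-8b",
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.7,
    }

def ask(prompt: str) -> str:
    """POST the payload and return the assistant's reply."""
    req = urllib.request.Request(
        LLM_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```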
RAG & Semantic Search
Documents from Obsidian, BookStack, and Paperless are embedded into Qdrant vector collections. AnythingLLM provides conversational search across all knowledge sources.
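Under the hood, semantic search is nearest-neighbor lookup by cosine similarity over embedding vectors. A dependency-free sketch of that core operation; Qdrant does the same thing at scale with approximate (HNSW) indexes.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def search(query_vec: list[float], collection, k: int = 3) -> list[str]:
    """collection: list of (doc_id, vector) pairs. Returns the k closest ids."""
    ranked = sorted(collection, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]
```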
Speech-to-Text
Faster-Whisper runs GPU-accelerated on the RTX 4060, transcribing audio from the Omi wearable and voice satellites in real time.
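A sketch of turning transcription segments into a timestamped transcript; the faster-whisper call itself is shown in a comment, since it needs the library and a GPU, while the formatter below runs standalone.

```python
def format_segments(segments) -> str:
    """Render (start_sec, end_sec, text) tuples as a timestamped transcript.
    faster-whisper's transcribe() yields segments with .start/.end/.text;
    plain tuples are used here so the sketch runs without the library."""
    def ts(sec: float) -> str:
        m, s = divmod(int(sec), 60)
        return f"{m:02d}:{s:02d}"
    return "\n".join(f"[{ts(a)}-{ts(b)}] {text.strip()}" for a, b, text in segments)

# Real usage (requires faster-whisper and a CUDA GPU), roughly:
#   from faster_whisper import WhisperModel
#   model = WhisperModel("small", device="cuda", compute_type="float16")
#   segments, info = model.transcribe("capture.wav")
#   print(format_segments((s.start, s.end, s.text) for s in segments))
```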