Building My Homelab: From Zero to 20+ Self-Hosted Services

After years of working in enterprise IT environments—from Kennedy Space Center to animation studios—I've learned that the best way to truly master infrastructure is to build and maintain your own. My homelab has evolved from a single server running a few Docker containers to a full-fledged multi-node cluster hosting over 20 production services.

Why Self-Host?

The decision to self-host isn't just about cost savings or privacy (though both are significant benefits). It's about:

  • Complete Control: You own your data, your infrastructure, and your destiny
  • Learning Opportunities: Every service deployment teaches you something new
  • Real-World Experience: Testing configurations in a safe environment before deploying at work
  • Privacy: Your data stays on your hardware, not in someone else's cloud
  • Reliability: When properly configured, self-hosted services can be more reliable than third-party alternatives

The Hardware Foundation

My current setup consists of:

Compute Cluster

  • 3 Proxmox Nodes (pve01–pve03): Physical hosts running Proxmox VE in a multi-node cluster, hosting LXC containers and VMs
  • Proxmox Backup Server (pbs01): Dedicated PBS instance for automated daily and weekly backups
  • Unraid NAS: 8-bay storage server providing NFS shares for media and long-term backup storage
  • GPU Passthrough: NVIDIA GTX 1060 6GB on pve01 for Plex hardware transcoding, GTX 1050 Ti 4GB on pve02 for Ollama LLM inference

Networking

  • UniFi UCG Ultra Cloud Gateway: Handling VLANs, routing, and network segmentation
  • UniFi USW-Lite-8-PoE: PoE switch for access points and wired devices
  • UniFi U7 Lite APs: Wi-Fi 7 mesh network with wired and wireless backhaul
  • Pi-hole (x2): Redundant DNS filtering with nebula-sync replication between instances
  • Cloudflare Tunnel: Secure external access without opening firewall ports

Docker Hosts

Three dedicated LXC containers (docker01–docker03) running all Docker services, managed through Komodo for deployment automation. All stacks live at /etc/komodo/stacks/ on each host.

Core Infrastructure Services

Traefik Reverse Proxy

The backbone of my infrastructure is Traefik, running as a native systemd service on a dedicated LXC container. It handles:

  • Automatic wildcard SSL certificates for *.pearsondw.com via Cloudflare DNS challenge
  • File-based routing configuration with dynamic watch mode
  • Centralized access logs for security monitoring

Every service in my homelab is accessed through Traefik, providing a consistent and secure entry point.
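To give a sense of what file-based routing looks like, here's an illustrative dynamic configuration file for a single service. The hostname matches my domain, but the backend address and file path are placeholders, not my actual setup:

```yaml
# /etc/traefik/dynamic/grafana.yml — illustrative router/service pair
http:
  routers:
    grafana:
      rule: "Host(`grafana.pearsondw.com`)"
      entryPoints:
        - websecure
      service: grafana
      tls:
        certResolver: cloudflare   # DNS-01 challenge issues the *.pearsondw.com wildcard
  services:
    grafana:
      loadBalancer:
        servers:
          - url: "http://192.168.1.50:3000"  # backend IP/port is a placeholder
```

With the file provider's watch mode enabled, dropping a file like this into the dynamic directory takes effect immediately, with no Traefik restart needed.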

Authentik SSO

Rather than managing separate credentials for 20+ services, I deployed Authentik on docker01 for single sign-on. This provides:

  • OAuth 2.0/OpenID Connect authentication
  • LDAP support for legacy applications
  • Multi-factor authentication (TOTP)
  • Centralized user management

Logging into one service grants access to all authorized applications—just like enterprise SSO systems I manage professionally.

Monitoring Stack: Zabbix + Grafana

You can't manage what you don't monitor. My monitoring infrastructure includes:

Zabbix 7 for comprehensive infrastructure monitoring:

  • 18 monitored hosts across hypervisors, Docker hosts, and network devices
  • PSK-encrypted Zabbix Agent 2 on all Linux hosts
  • SNMP and HTTP API monitoring for Proxmox nodes
  • Custom UserParameters for UniFi Gateway monitoring (clients, health, WAN latency)
  • Automated alerting via Telegram
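A custom UserParameter is just a key mapped to a shell command in the agent configuration. As a sketch of the idea (the endpoint path and auth handling here are illustrative placeholders, not my production config), a client-count check against a UniFi-style API could look like:

```
# /etc/zabbix/zabbix_agent2.d/unifi.conf — illustrative; endpoint and auth are placeholders
UserParameter=unifi.clients,curl -sk https://gateway.local/api/clients -H "Authorization: Bearer $UNIFI_TOKEN" | jq '.data | length'
```

Zabbix then polls `unifi.clients` like any built-in item, so thresholds and triggers work the same way as for CPU or disk metrics.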

Grafana for visualization:

  • Real-time dashboards with Zabbix data source integration
  • Zabbix Full Server Status dashboard for at-a-glance health checks
  • Historical trend analysis

Automation with n8n

n8n on docker03 is my workflow automation platform, handling:

  • Zabbix Alert Analyzer: Monitors active problems hourly and sends Telegram alerts with AI analysis powered by Ollama (Mistral 7B)
  • Health check notifications to Telegram
  • Integration between services
  • Custom business logic for personal automation
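Under the hood, the alert analyzer is just an HTTP request to Ollama's `/api/generate` endpoint. A minimal Python sketch of the request body the workflow builds (the Zabbix problem fields used here are illustrative, not my exact workflow):

```python
def build_ollama_payload(problem: dict) -> dict:
    """Build a request body for Ollama's /api/generate endpoint.

    `problem` is shaped like a Zabbix problem object; the exact keys
    used here are illustrative placeholders.
    """
    prompt = (
        "You are an infrastructure assistant. Summarize this Zabbix alert "
        "and suggest a likely cause.\n"
        f"Host: {problem['host']}\n"
        f"Problem: {problem['name']}\n"
        f"Severity: {problem['severity']}"
    )
    return {
        "model": "mistral",  # Mistral 7B, as served by Ollama
        "prompt": prompt,
        "stream": False,     # one complete JSON response instead of a token stream
    }

payload = build_ollama_payload(
    {"host": "pve01", "name": "High CPU utilization", "severity": "Warning"}
)
```

n8n's HTTP Request node POSTs this JSON to the Ollama host and forwards the model's response text to Telegram alongside the raw alert.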

Container Management: Komodo

Komodo on docker01 provides deployment automation and management for all Docker Compose stacks across the three Docker hosts. It's paired with Dozzle, which aggregates real-time logs from docker01, docker02, docker03, and the Unraid NAS via remote agents.

Network Documentation: NetBox

NetBox on docker02 serves as my source of truth for network documentation and IP address management, tracking all infrastructure inventory and connections.

Dashboard: Homepage

A Homepage dashboard on a dedicated LXC container provides quick access to all homelab services at a glance.

Ansible Automation: Semaphore

Semaphore provides a web UI for running Ansible playbooks, handling VM provisioning, system updates, and configuration management across the cluster.
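The playbooks themselves stay small. As a sketch of the kind of update playbook Semaphore runs on a schedule (the inventory group name is an assumption):

```yaml
# update.yml — illustrative patching playbook for Debian-based hosts
- name: Patch Debian-based hosts
  hosts: docker_hosts        # inventory group name is a placeholder
  become: true
  tasks:
    - name: Update apt cache and upgrade packages
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist

    - name: Check for pending reboot
      ansible.builtin.stat:
        path: /var/run/reboot-required
      register: reboot_required

    - name: Reboot if required
      ansible.builtin.reboot:
      when: reboot_required.stat.exists
```

Running this through Semaphore instead of cron gives you run history, logs, and a button to trigger it on demand.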

Media & AI Services

Plex Media Server

Plex runs in a privileged LXC container on pve01 with NVIDIA GTX 1060 6GB GPU passthrough for hardware transcoding. Media is served from the Unraid NAS via NFS mounts.
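GPU passthrough to an LXC container comes down to a few lines in the container's config on the Proxmox host. A sketch of the typical NVIDIA entries (device major numbers vary by host and driver version, so treat these as illustrative):

```
# /etc/pve/lxc/<ctid>.conf — illustrative NVIDIA passthrough entries
lxc.cgroup2.devices.allow: c 195:* rwm   # /dev/nvidia* character devices
lxc.cgroup2.devices.allow: c 509:* rwm   # nvidia-uvm (major number varies per boot)
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
```

The NVIDIA driver (without the kernel module) also needs to be installed inside the container so Plex can use NVENC/NVDEC.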

Ollama LLM Server

Ollama runs in a privileged LXC container on pve02 with NVIDIA GTX 1050 Ti 4GB GPU passthrough. Currently running Mistral 7B for AI-powered automation workflows through n8n, including intelligent Zabbix alert analysis.

Backup Strategy

I follow a structured backup approach with defined maintenance windows:

  • Daily Backups: PBS runs automated snapshots at 21:00 EST with 14-day retention
  • Weekly Backups: Unraid receives full backups every Sunday at 01:00 EST with zstd compression
  • Zabbix Maintenance Windows: Automated suppression of alerts during backup I/O spikes

Lessons Learned

1. Start Small, Scale Gradually

Don't try to deploy everything at once. Start with essential services (reverse proxy, monitoring) and build from there. Each service you add should solve a specific problem.

2. Backups Are Non-Negotiable

I learned this the hard way after a failed disk led to data loss. Now I follow the 3-2-1 rule:

  • 3 copies of data
  • 2 different storage types (local + NAS)
  • 1 offsite backup (cloud storage via encrypted uploads)

Proxmox Backup Server runs automated daily backups of all VMs and containers.

3. Documentation Is Critical

I maintain extensive documentation in Markdown files covering:

  • Service configurations
  • Network topology diagrams
  • Disaster recovery procedures
  • Lessons learned from outages

Six months from now, you won't remember why you configured something a certain way. Document it.

4. Security From Day One

  • All services behind Traefik with automatic HTTPS
  • Cloudflare Tunnel for external access (no open ports)
  • Authentik SSO with MFA enabled
  • Network segmentation via VLANs
  • PSK-encrypted Zabbix agents
  • Regular security updates via Ansible automation

5. Monitor Everything

If a service is important enough to run, it's important enough to monitor. Zabbix tracks:

  • CPU, memory, disk, and network usage
  • Service availability (HTTP checks)
  • Docker container health
  • Custom application metrics
  • UniFi Gateway health (WAN latency, client count, firmware)

When something fails at 2 AM, I get a Telegram notification before users notice.

The Future#

My homelab is never "done." Current projects include:

  • Expanded Automation: More Ansible playbooks for configuration management
  • Additional Monitoring: Deeper Zabbix templates for application-level metrics
  • MCP Server Development: Building Model Context Protocol servers for AI tool integration

Getting Started

If you're interested in building your own homelab, start simple:

  1. Hardware: An old laptop or mini PC is enough to start
  2. Hypervisor: Proxmox VE (free) or VMware ESXi
  3. First Services: Pi-hole for DNS, Traefik for reverse proxy
  4. Expand: Add services as you identify needs

The homelab community is incredibly supportive. Check out r/homelab and r/selfhosted on Reddit for inspiration and help.

Conclusion

Building a homelab has been one of the most rewarding technical projects I've undertaken. It's taught me more about infrastructure, networking, and system administration than any certification course could. More importantly, it's given me a platform to experiment, learn, and build without the constraints of production systems.

Whether you're a seasoned sysadmin or just starting your IT journey, I highly recommend building your own homelab. The skills you'll develop are directly applicable to professional environments, and the satisfaction of running your own infrastructure is unmatched.

Want to know how I exposed these services securely to the internet without opening firewall ports? Check out my post on Cloudflare Tunnel.
