# Automating Proxmox VM Lifecycle with Ansible and Semaphore
Manually SSH-ing into each of my 18 hosts to apply updates, create users, or change configurations doesn't scale — even in a homelab. Ansible handles the automation, and Semaphore provides a web UI that makes running playbooks as simple as clicking a button.
## Why Semaphore?
Ansible on its own is powerful but requires command-line access, inventory management, and remembering the right flags for each playbook. Semaphore adds:
- Web-based UI: Run playbooks from a browser — no terminal needed
- Task History: See what ran, when, and whether it succeeded
- Scheduled Runs: Trigger playbooks on a cron schedule
- Access Control: Grant team members playbook access without SSH keys
- Inventory Management: Visual inventory editor instead of flat files
For a homelab, it turns Ansible from a power tool into an appliance.
## Architecture
Semaphore runs on a dedicated host and stores its playbooks, inventory, and credentials in a local database. The playbooks themselves live in a Git repository, pulled automatically when a task runs.
Semaphore Web UI → Ansible Engine → SSH → Target Hosts
All Ansible commands execute from Semaphore's working directory at `/opt/semaphore/tmp/proxmox/`, which contains the playbooks, roles, and inventory files.
## The Playbook Library
### VM Creation: `pve_create_vm.yml`
Creating a new VM in Proxmox involves dozens of API calls — specifying CPU, memory, disk, network, cloud-init parameters, and more. This playbook automates the entire process:
- Provisions a VM on the specified Proxmox node
- Configures hardware resources (CPU, RAM, disk)
- Attaches to the correct VLAN/network
- Applies cloud-init for initial OS configuration
- Starts the VM and waits for SSH availability
What used to take 15 minutes of clicking through the Proxmox UI now takes 30 seconds.
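In outline, such a play can be built around the `community.general.proxmox_kvm` module. The sketch below is illustrative, not the actual playbook — the API host, token, node, bridge, VLAN tag, and variable names are placeholders:

```yaml
# Sketch of a pve_create_vm.yml-style play (all values are placeholders)
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Provision the VM on the target Proxmox node
      community.general.proxmox_kvm:
        api_host: pve1.example.lan          # placeholder Proxmox API host
        api_user: ansible@pam
        api_token_id: automation
        api_token_secret: "{{ pve_api_token }}"
        node: pve1
        name: "{{ vm_name }}"
        cores: 2
        memory: 4096
        net:
          net0: "virtio,bridge=vmbr0,tag=30"   # VLAN tag on the bridge
        state: present

    - name: Wait for SSH to become available
      ansible.builtin.wait_for:
        host: "{{ vm_ip }}"
        port: 22
        timeout: 300
```

Cloud-init parameters (user, SSH keys, IP configuration) can be passed through the same module, which is what makes the 30-second provisioning possible.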
### VM Onboarding: `pve_onboard.yml`
After a VM is created, it needs to be prepared for Ansible management. The onboarding playbook handles:
- Creates the `ansible` user with SSH key authentication
- Configures passwordless sudo for the ansible user
- Deploys SSH authorized keys from a central key store
- Sets the hostname to match the inventory name
- Configures basic networking (DNS, NTP)
This is the one playbook I run manually the first time (since the new VM doesn't have the ansible user yet). After onboarding, all subsequent automation uses the ansible user.
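The core of that onboarding sequence can be sketched in a few tasks — the key path, group name, and initial connection account here are placeholders, not the actual playbook's values:

```yaml
# Minimal onboarding sketch: run once with the image's initial account
- hosts: new_vms
  become: true
  tasks:
    - name: Create the ansible user
      ansible.builtin.user:
        name: ansible
        shell: /bin/bash
        state: present

    - name: Deploy the SSH public key from the central key store
      ansible.posix.authorized_key:
        user: ansible
        key: "{{ lookup('file', '~/.ssh/ansible_ed25519.pub') }}"  # placeholder path

    - name: Grant passwordless sudo
      ansible.builtin.copy:
        dest: /etc/sudoers.d/ansible
        content: "ansible ALL=(ALL) NOPASSWD:ALL\n"
        mode: "0440"
        validate: "visudo -cf %s"
```

The `validate` step is worth the extra line: a malformed sudoers fragment can lock you out of privilege escalation entirely.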
### Common Linux Configuration: `doug-common-linux-playbook.yml`
This is the workhorse playbook that applies my standard Linux baseline. It uses a custom role called doug-common-linux with modular task files:
Date/NTP Configuration:
- Sets timezone to `America/New_York`
- Configures NTP synchronization
- Ensures consistent timestamps across all hosts
Hostname Management:
- Sets hostname from inventory
- Updates `/etc/hosts` with the correct hostname mapping
User Management:
- Creates standard user accounts
- Configures SSH keys
- Sets password policies
- Manages sudo access
Package Installation:
- Installs a standard package set (htop, curl, wget, jq, etc.)
- Removes unnecessary default packages
- Configures unattended security updates
Mail Configuration:
- Configures outbound mail relay
- Sets up mail aliases for system notifications
System Configuration:
- Kernel parameter tuning
- Sysctl settings for network performance
- Log rotation configuration
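As one concrete example of these task files, the package tasks likely reduce to something like the following (a hypothetical excerpt — the package names come from the list above, the rest is a sketch):

```yaml
# Hypothetical excerpt from tasks/packages.yml
- name: Install the standard package set
  ansible.builtin.apt:
    name: [htop, curl, wget, jq]
    state: present
    update_cache: yes

- name: Enable unattended security updates
  ansible.builtin.apt:
    name: unattended-upgrades
    state: present
```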
### System Updates: `update_linux_servers.yml`
Rolling updates across all managed hosts:
```shell
ansible-playbook -i inventory update_linux_servers.yml
```

This playbook:
- Runs `apt update && apt upgrade` on Debian-based hosts
- Handles package holds and pinned versions
- Reports which packages were updated
- Optionally reboots hosts that require it (kernel updates)
- Processes hosts in serial to avoid taking down all services simultaneously
Running this through Semaphore means I can trigger updates from my phone, watch the progress in real-time, and review the results later — all without opening a terminal.
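The rolling-update pattern described above can be sketched like this (play and variable names are illustrative, not the actual playbook's):

```yaml
# Sketch of a serial rolling update for Debian-based hosts
- hosts: linux_servers
  become: true
  serial: 1                        # one host at a time
  tasks:
    - name: Upgrade all packages
      ansible.builtin.apt:
        update_cache: yes
        upgrade: dist

    - name: Check whether a reboot is required
      ansible.builtin.stat:
        path: /var/run/reboot-required
      register: reboot_flag

    - name: Reboot if the kernel (or similar) was updated
      ansible.builtin.reboot:
        reboot_timeout: 600
      when: reboot_flag.stat.exists
```

On Debian/Ubuntu, `/var/run/reboot-required` is the standard marker file for pending reboots, which is what makes the conditional reboot reliable.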
### Additional Playbooks
- `install_alloy.yml`: Deploys Grafana Alloy agent for metrics collection
- `install-nvim.yml`: Installs and configures Neovim with my preferred settings
- `ping.yml`: Simple connectivity test across all hosts (great for verifying after network changes)
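A connectivity-test playbook like ping.yml is likely little more than a single task:

```yaml
# Probable shape of ping.yml: verify Ansible can reach every host
- hosts: all
  gather_facts: false
  tasks:
    - name: Verify Ansible connectivity over SSH
      ansible.builtin.ping:
```

Note that `ansible.builtin.ping` is not ICMP ping — it verifies the full SSH-plus-Python path Ansible actually uses, which is exactly what you want to confirm after a network change.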
## The Enigma Secrets Manager
Ansible playbooks often need credentials — root passwords, API tokens, service accounts. Hardcoding these in playbooks or inventory files is a security risk.
I use a custom Ansible lookup plugin called Enigma that reads credentials from ~/.enigma.json:
```yaml
# In a playbook
common_users_root_password: "{{ lookup('enigma', '7604', 'password') }}"
```

Enigma stores credentials as a JSON key-value store with numeric IDs. Each entry can have arbitrary fields (username, password, bot_token, chat_id, etc.). The file lives outside the Git repository, so credentials never end up in version control.
This pattern keeps secrets management simple — no HashiCorp Vault server to maintain, no cloud secrets manager to pay for. Just a JSON file with restrictive permissions on the Semaphore host.
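The core of the lookup is just a keyed JSON read. Here is a standalone Python sketch of the equivalent behavior — the real plugin subclasses Ansible's `LookupBase`, and the file schema below is inferred from the lookup call shown above, not taken from the actual plugin:

```python
import json
import os


def enigma_lookup(entry_id: str, field: str,
                  path: str = "~/.enigma.json") -> str:
    """Return one field of one credential entry from the JSON store.

    Mirrors lookup('enigma', '7604', 'password'): entries are keyed by
    numeric ID (as a string), each holding arbitrary named fields.
    """
    with open(os.path.expanduser(path)) as f:
        store = json.load(f)
    return store[entry_id][field]
```

Because the file is read at lookup time, rotating a credential means editing one JSON file — no playbook or inventory changes required.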
## Custom Roles
### `doug-common-linux`
This is my primary Ansible role, organized into modular task files:
```
roles/doug-common-linux/
├── tasks/
│   ├── main.yml        # Imports all task files
│   ├── date-ntp.yml    # Timezone and NTP
│   ├── hostname.yml    # Hostname configuration
│   ├── users.yml       # User management
│   ├── system.yml      # System configuration
│   ├── packages.yml    # Package management
│   ├── mail.yml        # Mail relay setup
│   └── domain.yml      # Domain joining (optional)
├── defaults/
│   └── main.yml        # Default variables
├── handlers/
│   └── main.yml        # Service restart handlers
└── templates/
    └── ...             # Configuration file templates
```
Each task file is self-contained and can be skipped with tags. Need to update just the NTP configuration across all hosts? Run the playbook with --tags ntp and only that task file executes.
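The tag routing in `tasks/main.yml` likely looks something like this (the tag names are my assumption; the task file names come from the tree above):

```yaml
# roles/doug-common-linux/tasks/main.yml — sketch of tag-based imports
- ansible.builtin.import_tasks: date-ntp.yml
  tags: [ntp]

- ansible.builtin.import_tasks: users.yml
  tags: [users]

- ansible.builtin.import_tasks: packages.yml
  tags: [packages]
```

With `import_tasks` (static import), a tag on the import line applies to every task inside that file, so `--tags ntp` runs only the `date-ntp.yml` tasks.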
## Workflow Example: Adding a New Host
Here's the typical workflow when I spin up a new LXC container or VM:
1. Create the VM/Container — via the Proxmox UI or `pve_create_vm.yml`
2. Onboard — Run `pve_onboard.yml` to create the ansible user and configure SSH
3. Configure — Run `doug-common-linux-playbook.yml` to apply the standard baseline
4. Deploy Service — Install the specific service (Docker, Zabbix Agent, etc.)
5. Monitor — Add to Zabbix monitoring (usually automatic via discovery)
Steps 2–4 are all Semaphore button clicks. A new host goes from bare OS to fully configured in about 10 minutes, with consistent configuration every time.
## Lessons Learned
### 1. Idempotency Is Everything
Every task in every playbook must be idempotent — running it twice produces the same result as running it once. This means using state: present instead of shell commands, checking before modifying, and avoiding destructive operations.
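The classic example is user creation. These two tasks look equivalent, but only the second is idempotent:

```yaml
# Non-idempotent: useradd fails on the second run because the user exists
- name: Create user (fragile)
  ansible.builtin.shell: useradd deploy

# Idempotent: reports "ok" on repeat runs instead of failing
- name: Create user (idempotent)
  ansible.builtin.user:
    name: deploy
    state: present
```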
### 2. Serial Execution for Updates
Never run updates on all hosts simultaneously. If an update breaks something, you want to catch it on the first host before it propagates. Set serial: 1 or serial: 3 in update playbooks.
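In play terms that looks like the following — `max_fail_percentage` is an extra safeguard I'm showing for illustration, not necessarily present in the original playbooks:

```yaml
- hosts: linux_servers
  serial: 3                  # update three hosts per batch ("25%" also works)
  max_fail_percentage: 0     # abort remaining batches if any host fails
  tasks:
    - name: Upgrade packages
      ansible.builtin.apt:
        upgrade: dist
```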
### 3. Test in Dev, Deploy in Prod
I have template VMs specifically for testing playbook changes. New role modifications get tested on a throwaway VM before running against production hosts. Semaphore's task history makes it easy to compare test runs against production runs.
### 4. Keep Roles Modular
The doug-common-linux role started as a monolithic task file. Breaking it into separate files for users, packages, NTP, etc. made it dramatically easier to maintain and debug. Each file is small enough to understand at a glance.
### 5. Git-Tracked Playbooks
All playbooks and roles live in a Git repository. Semaphore pulls from Git on each run, so the latest changes are always applied. This also provides full audit history — who changed what, when, and why.
## What's Next
- Expanded Zabbix Agent deployment: Automate Zabbix Agent 2 installation and PSK configuration as part of the onboarding playbook
- Backup verification: Playbook to verify Proxmox Backup Server integrity and test restores
- Certificate rotation: Automate Let's Encrypt certificate renewal tracking
For more on the monitoring that works alongside this automation, see Monitoring Everything: Zabbix 7 in a Homelab. For the complete infrastructure overview, check out Building My Homelab.