Episode 0x04: Managing Proxmox with Terraform
NOTE: Many commands in this post make use of specific constants tied to my own setup. Make sure to tailor these to your own needs. These examples should serve as a guide, not as direct instructions to copy and paste.
NOTE: Check out the final code in the homelab repo on my GitHub account.
Introduction
With our network now set up, it’s time to create our Kubernetes nodes using Terraform. Subsequently, we’ll configure and deploy Kubernetes on these nodes using Kubespray and Ansible.
Terraform Setup
Provider
We’ll use the bpg Proxmox provider for this setup. Refer to its documentation if you encounter any issues with the steps outlined below. Ensure you have also set up a Terraform Provisioner Account as detailed in Episode 2. Here’s an example of how to declare the bpg provider:
terraform {
  required_providers {
    proxmox = {
      source  = "bpg/proxmox"
      version = "0.57.1"
    }
  }
}

provider "proxmox" {
  endpoint = "https://pve.geekembly.com:8006/"
  insecure = true

  ssh {
    agent = true
  }
}
NOTE: Make sure to replace pve.geekembly.com with your own domain name, and modify your /etc/hosts to point it to your Proxmox server.
NOTE: The bpg provider primarily uses the Proxmox user created earlier. However, to provision certain resources, SSH access is also required, which is why the SSH agent is enabled. Although there are various ways to supply credentials, using environment variables is one straightforward option:
export PROXMOX_VE_USERNAME="terraform-prov@pve"
export PROXMOX_VE_PASSWORD="<your-pass-here>"
export PROXMOX_VE_SSH_USERNAME="root"
export PROXMOX_VE_SSH_PASSWORD="<your-ssh-pass-here>"
Make sure to check out the bpg documentation.
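If you’d rather not store a password, the bpg provider also supports API token authentication. Here’s a minimal sketch, assuming you’ve created a token for the provisioner account (the token ID and secret below are placeholders); note that SSH credentials still have to be supplied separately, since they can’t be derived from a token:

# Hypothetical token-based setup; replace the token ID and UUID with your own.
provider "proxmox" {
  endpoint  = "https://pve.geekembly.com:8006/"
  insecure  = true
  api_token = "terraform-prov@pve!provider=00000000-0000-0000-0000-000000000000"

  ssh {
    agent    = true
    username = "root" # required here, as it can't be derived from the token
  }
}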
Define Required Resources
First, we’ll download the latest Debian image to our Proxmox server:
resource "proxmox_virtual_environment_download_file" "latest_debian_12_bookworm_qcow2_img" {
content_type = "iso"
datastore_id = "local"
file_name = "debian-12-generic-amd64.img"
node_name = var.proxmox_node_name
url = "https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2"
overwrite = true
overwrite_unmanaged = true
checksum_algorithm = "sha512"
checksum = "f7ac3fb9d45cdee99b25ce41c3a0322c0555d4f82d967b57b3167fce878bde09590515052c5193a1c6d69978c9fe1683338b4d93e070b5b3d04e99be00018f25"
}
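Since the "latest" image rotates upstream, the pinned checksum will eventually go stale. One option (a sketch of a possible refactor, not part of the original setup) is to lift the URL and checksum into a single variable so they can be updated together in one place:

# Hypothetical refactor: keep the image URL and its checksum side by side.
variable "debian_image" {
  description = "Debian cloud image URL and its matching SHA-512 checksum"
  type = object({
    url      = string
    checksum = string
  })
  default = {
    url      = "https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2"
    checksum = "f7ac3fb9d45cdee99b25ce41c3a0322c0555d4f82d967b57b3167fce878bde09590515052c5193a1c6d69978c9fe1683338b4d93e070b5b3d04e99be00018f25"
  }
}

The resource above would then reference var.debian_image.url and var.debian_image.checksum.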
Next, let’s read our SSH public key from a local file:
data "local_file" "ssh_public_key" {
filename = "./id_rsa.pub"
}
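If you’d rather not manage the key pair outside Terraform, an alternative (a sketch, untested in this setup) is to generate it with the hashicorp/tls provider; be aware the private key then lives unencrypted in your state file:

# Hypothetical alternative: generate the provisioning key pair in Terraform.
resource "tls_private_key" "provisioning" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

You could then substitute trimspace(tls_private_key.provisioning.public_key_openssh) for the data source reference in the cloud-init snippet below, and write the private key out with a local_sensitive_file resource for SSH access.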
We’ll use the SSH key in the cloud-init configuration. Note that the snippets content type must be enabled on the target datastore in Proxmox, or the upload below will fail:
resource "proxmox_virtual_environment_file" "cloud_config" {
content_type = "snippets"
datastore_id = "local"
node_name = var.proxmox_node_name
source_raw {
data = <<-EOF
#cloud-config
users:
- name: root
ssh_authorized_keys:
- ${trimspace(data.local_file.ssh_public_key.content)}
- name: ansible
groups:
- sudo
shell: /bin/bash
ssh_authorized_keys:
- ${trimspace(data.local_file.ssh_public_key.content)}
sudo: ALL=(ALL) NOPASSWD:ALL
runcmd:
- apt update
- apt install -y qemu-guest-agent net-tools nfs-common
- timedatectl set-timezone Europe/Berlin
- systemctl enable qemu-guest-agent
- systemctl start qemu-guest-agent
- sysctl -w fs.inotify.max_queued_events=2099999999
- sysctl -w fs.inotify.max_user_instances=2099999999
- sysctl -w fs.inotify.max_user_watches=2099999999
- echo "done" > /tmp/cloud-config.done
EOF
file_name = "cloud-config.yaml"
}
}
This configuration adds our public SSH key to both the root and ansible users. Having SSH access to the root user is handy for connecting to nodes, primarily for debugging.
Creating VMs
To create VMs, use the proxmox_virtual_environment_vm resource:
resource "proxmox_virtual_environment_vm" "k8s-controlplane" {
for_each = { for _, cp in var.k9s_control_planes :
cp.vm_id => cp
}
name = "k8s-cp-${format("%02d", each.key + 1)}"
description = "kubernetes control plane node ${each.key + 1}"
tags = ["terraform", "debian", "k8s-cp"]
node_name = var.proxmox_node_name
vm_id = each.value.vm_id
agent {
enabled = true
}
cpu {
cores = each.value.cpu.cores
sockets = each.value.cpu.sockets
type = "host"
}
memory {
dedicated = each.value.memory
}
disk {
datastore_id = each.value.disk.datastore_id
file_id = proxmox_virtual_environment_download_file.latest_debian_12_bookworm_qcow2_img.id
interface = "virtio0"
iothread = true
size = each.value.disk.size
}
network_device {
bridge = each.value.network.bridge
vlan_id = each.value.network.vlan_id
model = "virtio"
}
initialization {
ip_config {
ipv4 {
address = "dhcp"
}
}
user_data_file_id = proxmox_virtual_environment_file.cloud_config.id
}
}
This code references variables that we define as follows:
variable "k9s_control_planes" {
description = "Specification of the control plane nodes"
type = list(object({
vm_id = number
cpu = object({
sockets = number # Number of cpu sockets
cores = number # Number of cpu cores
})
memory = number # Size of memory in MB
disk = object({
datastore_id = string # ID of the data store in proxmox
size = number # Disk size in GB
})
network = object({
bridge = string # The name of the network bridge
vlan_id = number # The VLAN identifier.
})
}))
default = [
{
vm_id = 5000
cpu = { sockets = 1, cores = 2 }
memory = 4096
disk = { datastore_id = "local-lvm", size = 32 }
network = { bridge = "vmbr0", vlan_id = 100 }
},
{
vm_id = 5001
cpu = { sockets = 1, cores = 2 }
memory = 4096
disk = { datastore_id = "local-lvm", size = 32 }
network = { bridge = "vmbr0", vlan_id = 100 }
}
]
}
Worker nodes can be defined similarly. Structure your Terraform code however you like; I’ll leave the details to the reader, but a possible shape for the worker variable is sketched below.
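As a sketch (the vm_id range here is an assumption; the sizing matches the cluster configuration at the end of this post), the worker variable might look like:

variable "k9s_workers" {
  description = "Specification of the worker nodes"
  type = list(object({
    vm_id = number
    cpu = object({
      sockets = number
      cores   = number
    })
    memory = number
    disk = object({
      datastore_id = string
      size         = number
    })
    network = object({
      bridge  = string
      vlan_id = number
    })
  }))
  default = [
    {
      vm_id   = 5100 # hypothetical ID range for workers
      cpu     = { sockets = 2, cores = 4 }
      memory  = 8192
      disk    = { datastore_id = "local-lvm", size = 32 }
      network = { bridge = "vmbr0", vlan_id = 100 }
    }
    # ...more workers here
  ]
}

The corresponding proxmox_virtual_environment_vm "k8s-worker" resource mirrors the control plane resource above, with a k8s-wk name prefix and tag; both it and this variable are referenced by the outputs and inventory below.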
Outputs
Once VMs are created, output their IP addresses:
output "control_planes_IPv4" {
value = [
for vm in proxmox_virtual_environment_vm.k8s-controlplane : vm.ipv4_addresses[1][0]
]
}
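The [1][0] index is worth a note: the guest agent reports the loopback interface first, so index 1 selects the first real NIC and [0] its first address. A matching output for the workers (assuming the k8s-worker resource sketched earlier) looks the same:

output "workers_IPv4" {
  value = [
    for vm in proxmox_virtual_environment_vm.k8s-worker : vm.ipv4_addresses[1][0]
  ]
}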
We can generate an inventory file for Kubespray to set up Kubernetes. The inventory file is an .ini file such as this:
k8s-cp-01 ansible_host=10.100.0.23 ansible_become=true
k8s-wk-01 ansible_host=10.100.0.29 ansible_become=true
# other nodes here
[kube_control_plane]
k8s-cp-01
# other nodes here
[etcd]
k8s-cp-01
# other nodes here
[kube_node]
k8s-wk-01
# other nodes here
[k8s_cluster:children]
kube_node
kube_control_plane
To create the inventory file, we first set up a template inventory_template.tpl:
%{ for idx, ip in cp_ips }
k8s-cp-${format("%02d", idx + 1)} ansible_host=${ip} ansible_become=true
%{ endfor }
%{ for idx, ip in worker_ips }
k8s-wk-${format("%02d", idx + 1)} ansible_host=${ip} ansible_become=true
%{ endfor }
[kube_control_plane]
%{ for i in range(cp_count) }
k8s-cp-${format("%02d", i + 1)}
%{ endfor }
[etcd]
%{ for i in range(cp_count) }
k8s-cp-${format("%02d", i + 1)}
%{ endfor }
[kube_node]
%{ for i in range(worker_count) }
k8s-wk-${format("%02d", i + 1)}
%{ endfor }
[k8s_cluster:children]
kube_node
kube_control_plane
Then create the final inventory file using:
locals {
  cp_ips = [
    for vm in proxmox_virtual_environment_vm.k8s-controlplane : vm.ipv4_addresses[1][0]
  ]
  worker_ips = [
    for vm in proxmox_virtual_environment_vm.k8s-worker : vm.ipv4_addresses[1][0]
  ]
}

resource "local_file" "ansible_inventory" {
  filename = "../../ansible/inventory.ini"
  content = templatefile("${path.module}/inventory_template.tpl", {
    cp_count     = length(var.k9s_control_planes)
    worker_count = length(var.k9s_workers)
    cp_ips       = local.cp_ips
    worker_ips   = local.worker_ips
  })
}
We’ll use this inventory file in the next episode to install Kubernetes.
Cluster Configuration
My cluster consists of 2 control plane nodes and 5 worker nodes. The configurations are as follows:
For Control Plane Nodes:
| Device | Description |
| --- | --- |
| CPU | 1 socket, 2 cores (type host) |
| RAM | 4 GiB |
| Disk | 32 GiB |
For Worker Nodes:
| Device | Description |
| --- | --- |
| CPU | 2 sockets, 4 cores (type host) |
| RAM | 8 GiB |
| Disk | 32 GiB |
Don’t worry about overcommitting CPU cores or RAM; KVM time-slices vCPUs across the physical cores, so Proxmox handles resource sharing efficiently for a homelab workload.
With everything in place, run terraform plan and then terraform apply. If everything goes well, you should see your Kubernetes nodes up and running in a couple of minutes.
Stay tuned for the next steps in setting up Kubernetes! 🚀