Episode 0x07: Load Balancing with MetalLB
NOTE: Many commands in this post make use of specific constants tied to my own setup. Make sure to tailor these to your own needs. These examples should serve as a guide, not as direct instructions to copy and paste.
NOTE: Check out the final code in the homelab repo on my GitHub account.
Introduction
In our fifth episode, we discussed the limitations of bare metal Kubernetes clusters compared to those managed by cloud providers. Specifically, we noted that we can't expose services by setting their type to LoadBalancer. Instead, we have to repeatedly port-forward to reach our services, which is a significant limitation. To overcome this, we need MetalLB, a load balancer designed for bare metal Kubernetes clusters. I strongly recommend installing MetalLB manually on your Kubernetes cluster by following their installation guide at least once; doing it by hand gives you a deeper understanding of what goes into the process. Here, we'll take a different route and configure Kubespray to install it for us.
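If you do take the manual route, the core of it is applying the official manifest for the release you want. The command below is only a sketch, pinned to the same version we'll use with Kubespray; check the installation guide for the current release and any extra preparation steps it lists:
# Install MetalLB v0.13.9 from the official manifest-based installation.
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.9/config/manifests/metallb-native.yaml
You would still need to create an address pool and an L2 advertisement yourself; we'll see what those look like below, since Kubespray generates them for us.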
Installation
MetalLB
Remember from episode 5 that we stated we would revisit our cluster configuration by modifying cluster_variables.yaml? It's time to modify this file to include MetalLB in our Kubernetes installation process. Add the following lines to cluster_variables.yaml:
metallb_enabled: true
metallb_speaker_enabled: "{{ metallb_enabled }}"
metallb_namespace: "metallb-system"
metallb_version: v0.13.9
metallb_protocol: "layer2"
metallb_port: "7472"
metallb_memberlist_port: "7946"
metallb_config:
  speaker:
    nodeselector:
      kubernetes.io/os: "linux"
    tolerations:
      - key: "node-role.kubernetes.io/control-plane"
        operator: "Equal"
        value: ""
        effect: "NoSchedule"
  controller:
    nodeselector:
      kubernetes.io/os: "linux"
    tolerations:
      - key: "node-role.kubernetes.io/control-plane"
        operator: "Equal"
        value: ""
        effect: "NoSchedule"
  address_pools:
    primary:
      ip_range:
        - 10.100.0.128/25
      auto_assign: true
  layer2:
    - primary
This configuration enables MetalLB and sets it up using the Layer 2 protocol. We're dedicating the upper half of our VLAN, 10.100.0.128/25, to MetalLB, and specifying that the MetalLB components (speaker and controller) should run on Linux nodes while tolerating the NoSchedule taint on control-plane nodes. Adjust this configuration to your own subnet if necessary.
NOTE: As mentioned in episode 3 part 1, we reserved the higher range of our VLAN, 10.100.0.128/25, for MetalLB.
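For orientation, Kubespray should render this configuration into MetalLB custom resources roughly equivalent to the following sketch (the resource names are illustrative; you don't create these yourself):
# IPAddressPool: the range MetalLB may hand out to LoadBalancer services.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: primary
  namespace: metallb-system
spec:
  addresses:
    - 10.100.0.128/25
  autoAssign: true
---
# L2Advertisement: announce IPs from that pool via Layer 2 (ARP/NDP).
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: primary
  namespace: metallb-system
spec:
  ipAddressPools:
    - primary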
After adding these configurations, recreate your cluster. As per episode 5, you can do this by running:
ansible-playbook -i inventory.ini -e @cluster_variables.yaml --user=ansible playbook.yml
NOTE: The above command may be destructive. If the NFS subdir provisioner stops working after running it, follow the steps in episode 6 to reinstall it.
If everything goes well, MetalLB should be ready to use. Let’s test it.
Test MetalLB
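Before deploying a test workload, it's worth a quick sanity check that the MetalLB pods came up and that the address pool and L2 advertisement exist:
# The controller plus one speaker per node should be Running.
kubectl get pods -n metallb-system

# The resources Kubespray created from cluster_variables.yaml.
kubectl get ipaddresspools.metallb.io -n metallb-system
kubectl get l2advertisements.metallb.io -n metallb-system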
To test MetalLB, deploy an Nginx instance using the following YAML definition:
kubectl apply -f - << EOF
apiVersion: v1
kind: Namespace
metadata:
  name: test-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: test-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: test-nginx
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
EOF
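Optionally, wait for the deployment to report Available before checking the service:
# Block until the nginx Deployment is Available (up to two minutes).
kubectl wait --for=condition=Available deployment/nginx-deployment -n test-nginx --timeout=120s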
Now, run:
kubectl get svc -n test-nginx
MetalLB should allocate an external IP for your service from the VLAN subnet’s higher range.
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
nginx-service   LoadBalancer   10.233.33.183   10.100.0.129   80:30317/TCP   38h
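You can also hit the external IP from the command line; substitute the IP MetalLB assigned in your cluster:
# Expect an HTTP 200 and the default Nginx welcome page headers.
curl -I http://10.100.0.129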
Alternatively, navigate to that IP in your browser to make sure you can access Nginx. Finally, clean up the test deployment:
kubectl delete namespace test-nginx
With MetalLB set up, we now have a load balancer. Next, we can set up our ingress in a similar manner. 🚀