Episode 0x05: Installing Kubernetes with Kubespray
NOTE: Many commands in this post make use of specific constants tied to my own setup. Make sure to tailor these to your own needs. These examples should serve as a guide, not as direct instructions to copy and paste.
NOTE: Check out the final code in the homelab repo on my GitHub account.
Introduction
Setting up Kubernetes from scratch can be quite complex and may not yield significant benefits. Kubespray offers a more streamlined solution, allowing us to deploy a production-ready Kubernetes cluster with ease. Kubespray leverages Ansible to handle the installation on nodes that we’ve previously configured.
Installation
To begin, create a directory named ansible and add Kubespray as a submodule within it:
mkdir ansible
cd ansible
git submodule add https://github.com/kubernetes-sigs/kubespray
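Keep in mind that a fresh clone of the homelab repo will leave the submodule directory empty until you initialize it:
git submodule update --init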
Follow the Ansible installation guide provided by Kubespray to set up Ansible.
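That guide essentially boils down to creating a Python virtual environment and installing Kubespray's pinned dependencies. A minimal sketch, assuming a recent Python 3 and running from the ansible directory (check the guide for the supported Ansible versions):
python3 -m venv .venv
source .venv/bin/activate
pip install -U -r kubespray/requirements.txt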
Next, create a playbook.yaml file with the following content:
- hosts: all
  become: yes
  tasks:
    - name: Set hostname from inventory
      hostname:
        name: "{{ inventory_hostname }}"

# Import kubespray playbook to deploy k8s cluster
- import_playbook: ./kubespray/cluster.yml
You will also need an inventory.ini file based on the inventory created in the previous episode. Here’s mine as an example; yours will most likely have different IPs.
k8s-cp-01 ansible_host=10.100.0.23 ansible_become=true
k8s-cp-02 ansible_host=10.100.0.26 ansible_become=true
k8s-wk-01 ansible_host=10.100.0.29 ansible_become=true
k8s-wk-02 ansible_host=10.100.0.25 ansible_become=true
k8s-wk-03 ansible_host=10.100.0.27 ansible_become=true
k8s-wk-04 ansible_host=10.100.0.24 ansible_become=true
k8s-wk-05 ansible_host=10.100.0.30 ansible_become=true
[kube_control_plane]
k8s-cp-01
k8s-cp-02
[etcd]
k8s-cp-01
[kube_node]
k8s-wk-01
k8s-wk-02
k8s-wk-03
k8s-wk-04
k8s-wk-05
[k8s_cluster:children]
kube_node
kube_control_plane
NOTE: The template file we created in the previous episode includes all control plane nodes in the etcd section of the ini file. However, etcd requires an odd number of members so that a quorum can always be reached. If you have an even number of control plane nodes, remove one from the etcd section.
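Before going further, it's worth confirming that Ansible can actually reach every node. A quick check using Ansible's ping module (this assumes the ansible user set up via cloud-init earlier):
ansible -i inventory.ini all -m ping --user=ansible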
Next, configure your installation by creating a cluster_variables.yaml file with the following initial content:
cluster_name: homelab-k8s # Choose any name you like
kube_proxy_mode: iptables
helm_enabled: true
We’ll revisit this file in future episodes; the final version can be found in my GitHub repo.
For now, make sure you have SSHed into every node at least once, so that Ansible isn’t interrupted by host-key confirmation prompts.
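If you’d rather not log in to each node by hand, ssh-keyscan can pre-populate known_hosts in bulk; a sketch assuming the IPs from the inventory above:
for ip in 10.100.0.23 10.100.0.24 10.100.0.25 10.100.0.26 10.100.0.27 10.100.0.29 10.100.0.30; do
  ssh-keyscan -H "$ip" >> ~/.ssh/known_hosts
done
Time to deploy your Kubernetes cluster: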
ansible-playbook -i inventory.ini -e @cluster_variables.yaml --user=ansible playbook.yaml
Ensure that the user matches the one defined in your cloud-init file from the previous episode. This may take some time, so feel free to grab a ☕.
Accessing the cluster from your PC
Once the installation is complete, SSH into one of the control plane nodes, e.g., ssh root@10.100.0.23, and verify your setup with:
kubectl get nodes
You should see a list of ready nodes.
root@k8s-cp-01:~# kubectl get nodes
NAME        STATUS   ROLES           AGE   VERSION
k8s-cp-01   Ready    control-plane   19d   v1.29.5
k8s-cp-02   Ready    control-plane   19d   v1.29.5
k8s-wk-01   Ready    <none>          19d   v1.29.5
k8s-wk-02   Ready    <none>          19d   v1.29.5
k8s-wk-03   Ready    <none>          19d   v1.29.5
k8s-wk-04   Ready    <none>          19d   v1.29.5
k8s-wk-05   Ready    <none>          19d   v1.29.5
Next, copy the kubectl config at /etc/kubernetes/admin.conf from your control plane node to your work PC.
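One way to do this, assuming the control plane IP from above:
scp root@10.100.0.23:/etc/kubernetes/admin.conf .
The copied file looks something like this: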
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1C... #truncated
    server: https://127.0.0.1:6443
  name: homelab-k8s
contexts:
- context:
    cluster: homelab-k8s
    user: kubernetes-admin
  name: kubernetes-admin@homelab-k8s
current-context: kubernetes-admin@homelab-k8s
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1C... #truncated
    client-key-data: LS0tLS1C... #truncated
Add this to your ~/.kube/config file on your work machine. If this is your only cluster, you can replace the entire file. Change the server field from https://127.0.0.1:6443 to the external IP of your control plane node, e.g., https://10.100.0.23:6443.
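If you’d rather not edit the file by hand, kubectl can rewrite the server field for you (assuming the cluster name homelab-k8s from above):
kubectl config set-cluster homelab-k8s --server=https://10.100.0.23:6443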
Now you should be able to access your cluster from your PC. Test the connection:
kubectl get nodes
Testing the Cluster
Deploy an Nginx instance using the following YAML definition:
kubectl apply -f - << EOF
apiVersion: v1
kind: Namespace
metadata:
  name: test-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: test-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: test-nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
EOF
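Before forwarding any ports, you can confirm the pod is running in the test-nginx namespace created by the manifest above:
kubectl get pods -n test-nginx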
And forward the port to your local machine:
kubectl port-forward -n test-nginx svc/nginx-service 8080:80
Visit http://localhost:8080 to see Nginx running.
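Or, if you prefer the terminal:
curl -s http://localhost:8080 | head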
To clean up the test deployment:
kubectl delete namespace test-nginx
Conclusion
Our cluster is up and running, but we are not done yet. We have some additional tasks to address:
- Persistence: Unlike cloud providers that offer persistent volumes out of the box, we must figure out how to handle storage, as local node storage is limited and is currently the only persistence source available to our cluster.
- Service Exposure: We need a better way to expose services without manual port forwarding. Cloud providers natively support load balancers, allowing services to be accessed via an external IP when specified as LoadBalancer. That doesn’t work on our cluster for now, as we don’t have a load balancer yet (MetalLB to the rescue!).
- Domain Access: Accessing services by IP is impractical. Setting up an Ingress is required to use domain names instead.
- HTTPS Support: Secure access with HTTPS needs certificate generation and management for each application. For that we will leverage cert-manager.
We’ll cover these in upcoming episodes. Stay tuned! 🚀