---
layout: post
title: "ThinkCentre Kubernetes Home Server Step 3 (NFS, K3s, Kube-VIP, MinIO, Longhorn, Traefik, Cert-Manager, DNS, Adguard Home)"
date: 2026-01-02 20:26:00 -0400
categories:
highlight: true
---
No automation yet: we still run commands by hand on each node. Proper automation would need Ansible for the nodes and Terraform for AWS, which is too much work for now.
We need to prepare the operating system for Kubernetes and storage networking.
Initial NAS verification:
- has a static private IP in your router settings
- has NFS enabled
- has firewall enabled
- firewall has incoming rule (all traffic, or 111/tcp, 111/udp, 2049/tcp) from the 3 nodes
- has the backups folder shared via NFS
Phase 1: Operating System & Network Prep
Run on ALL 3 Nodes
- Install System Dependencies

sudo apt update && sudo apt install nfs-common open-iscsi curl -y

- Verify NAS Connectivity

/sbin/showmount -e 192.168.2.135
# Expected:
# Export list for 192.168.2.135
# /volume1/backups 192.168.2.250, 192.168.2.251, 192.168.2.252

- Configure Firewall (Trust LAN & VPN)

# Allow communication between nodes (LAN)
# if .250, run .251 and .252
# needed so servers can communicate with each other
sudo ufw allow from 192.168.2.250
sudo ufw allow from 192.168.2.251
sudo ufw allow from 192.168.2.252
# Allow Mac/VPN to talk to K3s API
sudo ufw allow in on wg0 to any port 6443 proto tcp

- System Config (Swap & IP Forwarding)

# 1. Disable Swap
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# 2. Enable IP Forwarding
echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
# 3. Mount NAS (Update path if needed)
sudo mkdir -p /mnt/nas
echo "192.168.2.135:/volume1/backups /mnt/nas nfs defaults 0 0" | sudo tee -a /etc/fstab
sudo mount -a

root@boga-server-1:~# ls -l /mnt/nas
total 0
root@boga-server-1:~# mount | grep /mnt/nas
192.168.2.135:/volume1/backups on /mnt/nas type nfs (rw,relatime,vers=3,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.2.135,mountvers=3,mountport=48867,mountproto=udp,local_lock=none,addr=192.168.2.135)
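Before moving on to Phase 2, a quick sanity check that swap is really off and forwarding is on (exact output varies slightly by Ubuntu release):

free -h | grep -i swap     # the Swap line should show 0B total / 0B used
sysctl net.ipv4.ip_forward # should print: net.ipv4.ip_forward = 1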
Phase 2: Initialize Cluster (Node 1)
Run on Node 1
- Generate Token (Run once, save this):

openssl rand -hex 10

- Install K3s Leader
Flag notes:
- --cluster-init: tells K3s this is the first node of the cluster.
- --node-ip: prevents a crash when Kube-VIP later adds a second IP to the interface.
- --flannel-iface enp0s31f6: forces pod traffic over Ethernet (crucial since you have VPN interfaces that might confuse it). Verify your interface name with ip a if unsure.
- --tls-san: pre-authorizes both the future Floating IP 192.168.2.240 (Kube-VIP) and the VPN IP, so the API certificate is valid on those addresses later.
- --disable servicelb / --disable traefik: we will install the "Pro" versions of these manually.
# REPLACE <YOUR_TOKEN> below
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server" sh -s - \
--cluster-init \
--token <YOUR_TOKEN> \
--flannel-iface enp0s31f6 \
--disable servicelb \
--disable traefik \
--node-ip 192.168.2.250 \
--tls-san 192.168.2.240 \
--tls-san 10.100.0.10

- Watch for Followers
kubectl get nodes -w
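If you lose the token from step 1, K3s also keeps a join token on the leader at its standard location, which works for joining the followers as well:

sudo cat /var/lib/rancher/k3s/server/node-token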
Phase 3: Join Followers (Node 2 & 3)
Run on Node 2 and Node 3
- Node 2: Replace node-ip with 192.168.2.251 and the VPN tls-san with 10.100.0.11.
- Node 3: Replace node-ip with 192.168.2.252 and the VPN tls-san with 10.100.0.12.
# Example for NODE 2
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server" sh -s - \
--server https://192.168.2.250:6443 \
--token <YOUR_TOKEN> \
--flannel-iface enp0s31f6 \
--disable servicelb \
--disable traefik \
--node-ip 192.168.2.251 \
--tls-san 192.168.2.240 \
--tls-san 10.100.0.11
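Back on Node 1 (or via the watch started in Phase 2), all three machines should eventually report Ready; with this setup every node joins as a control-plane/etcd member (exact role labels depend on the K3s version):

kubectl get nodes
# All 3 nodes Ready with roles control-plane,etcd,master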
Phase 4: Deploy Kube-VIP (Load Balancer)
Run on Node 1
- Apply RBAC Permissions

kubectl apply -f https://kube-vip.io/manifests/rbac.yaml

- Create Manifest (nano kubevip.yaml)

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-vip-ds
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: kube-vip-ds
  template:
    metadata:
      labels:
        app.kubernetes.io/name: kube-vip-ds
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node-role.kubernetes.io/master
                    operator: Exists
              - matchExpressions:
                  - key: node-role.kubernetes.io/control-plane
                    operator: Exists
      containers:
        - args:
            - manager
          env:
            - name: vip_arp
              value: 'true'
            - name: port
              value: '6443'
            - name: vip_interface
              value: 'enp0s31f6' # <--- the physical interface
            - name: vip_cidr
              value: '32'
            - name: cp_enable
              value: 'true'
            - name: cp_namespace
              value: 'kube-system'
            - name: vip_ddns
              value: 'false'
            - name: svc_enable
              value: 'true'
            - name: address
              value: '192.168.2.240' # <--- the floating ip
          image: ghcr.io/kube-vip/kube-vip:v0.6.4
          imagePullPolicy: Always
          name: kube-vip
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
                - NET_RAW
      hostNetwork: true
      serviceAccountName: kube-vip
      tolerations:
        - effect: NoSchedule
          operator: Exists
        - effect: NoExecute
          operator: Exists

- Apply

kubectl apply -f kubevip.yaml

- Verify IP

ip addr show enp0s31f6
# Look for secondary IP: 192.168.2.240/32

- Check the pods

kubectl get pods -n kube-system -l app.kubernetes.io/name=kube-vip-ds
# NAME                READY   STATUS    RESTARTS   AGE
# kube-vip-ds-g98zh   1/1     Running   0          14s
# kube-vip-ds-pxbjs   1/1     Running   0          14s
# kube-vip-ds-vq8sp   1/1     Running   0          14s
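Optionally, confirm the API also answers on the floating IP. Run this on Node 1; it reuses the local kubeconfig credentials and only overrides the server address (a minimal check, not a failover test):

kubectl --server https://192.168.2.240:6443 get nodes
# Should list the same 3 nodes, proving the VIP fronts the API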
Phase 5: Remote Access (Mac)
- Retrieve Config (On Node 1)

sudo cat /etc/rancher/k3s/k3s.yaml

- Configure Mac

nano ~/.kube/config-homelab

Update:
- Paste the config.
- Change: server: https://127.0.0.1:6443
- To: server: https://10.100.0.10:6443
- Note: We use the VPN IP (.10), NOT the VIP (.240). This avoids "Asymmetric Routing" packet drops while using WireGuard.

- Connect

export KUBECONFIG=~/.kube/config-homelab
kubectl get nodes
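To avoid re-exporting KUBECONFIG in every new terminal, you can persist it in your shell profile (assuming zsh, the macOS default):

echo 'export KUBECONFIG=~/.kube/config-homelab' >> ~/.zshrc
source ~/.zshrc
kubectl config current-context   # K3s names its context "default"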
6. Storage (Longhorn & NAS Backup)
We want distributed block storage for Pods plus off-site backups to the NAS. However, Longhorn requires an S3-compatible endpoint for backups, so we will deploy MinIO as a gateway that mounts the NAS via NFS and exposes it as S3.
A. Install Longhorn via Helm (On Mac)
We install Longhorn in the longhorn-system namespace.
- Add Repo:
helm repo add longhorn https://charts.longhorn.io
helm repo update
- Install:
helm install longhorn longhorn/longhorn \
--namespace longhorn-system \
--create-namespace \
--set defaultSettings.defaultDataPath="/var/lib/longhorn"
- Verify: Wait for all pods to be Running.
kubectl get pods -n longhorn-system -w
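Longhorn also registers itself as the default StorageClass, which later PVCs (like AdGuard's) rely on. A quick check (provisioner name as in recent Longhorn releases):

kubectl get storageclass
# Expected: longhorn (default)   driver.longhorn.io   ...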
B. Deploy MinIO Bridge (The S3 Gateway)
We will use a Kubernetes Secret to manage the credentials so we don't hardcode passwords in our YAML files.
- Create the Credentials Secret:
Run this command in your terminal. Replace <YOUR_PASSWORD> with a strong password.
kubectl create secret generic minio-secret \
--from-literal=rootUser=admin \
--from-literal=rootPassword=<YOUR_PASSWORD> \
-n longhorn-system
- Create the Manifest:
nano minio-bridge.yaml
Notice that env now uses valueFrom, pointing to the secret we just created.
apiVersion: v1
kind: Service
metadata:
name: minio
namespace: longhorn-system
spec:
selector:
app: minio
ports:
- name: api
port: 9000
targetPort: 9000
- name: console
port: 9001
targetPort: 9001
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: minio
namespace: longhorn-system
spec:
replicas: 1
selector:
matchLabels:
app: minio
template:
metadata:
labels:
app: minio
spec:
containers:
- name: minio
image: minio/minio:RELEASE.2023-09-30T07-02-29Z
args:
- server
- /data
- --console-address
- :9001
env:
# SENSITIVE: We pull these from the 'minio-secret' we created via CLI
- name: MINIO_ROOT_USER
valueFrom:
secretKeyRef:
name: minio-secret
key: rootUser
- name: MINIO_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: minio-secret
key: rootPassword
volumeMounts:
- name: nas-storage
mountPath: /data
ports:
- containerPort: 9000
- containerPort: 9001
volumes:
- name: nas-storage
hostPath:
path: /mnt/nas
type: Directory
- Apply:
kubectl apply -f minio-bridge.yaml
- Initialize Bucket:
- Port Forward Console:
kubectl port-forward -n longhorn-system deployment/minio 9001:9001
- Access: http://localhost:9001
- Login: Use the admin user and the password you defined in step 1.
- Action: Create a bucket named backups.
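If you prefer the CLI over the web console, the same bucket can be created with MinIO's mc client from the Mac. This is a sketch that assumes mc is installed (e.g., via Homebrew) and that the S3 API port is forwarded; the alias name home is arbitrary:

kubectl port-forward -n longhorn-system deployment/minio 9000:9000 &
mc alias set home http://localhost:9000 admin <YOUR_PASSWORD>
mc mb home/backups
mc ls home   # should list the new backups bucket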
C. Configure Longhorn Backup Target
Now we tell Longhorn to use the local MinIO service. We need to create a specific secret format that Longhorn expects for S3 targets.
- Create Backup Secret:
Note: Replace <YOUR_PASSWORD> with the EXACT same password you used in Step B.1.
kubectl create secret generic longhorn-backup-secret \
--from-literal=AWS_ACCESS_KEY_ID=admin \
--from-literal=AWS_SECRET_ACCESS_KEY=<YOUR_PASSWORD> \
--from-literal=AWS_ENDPOINTS=http://minio.longhorn-system:9000 \
-n longhorn-system
- Configure Settings (via UI):
- Port forward UI:
kubectl port-forward -n longhorn-system svc/longhorn-frontend 8080:80
- Access: http://localhost:8080
- Navigate: Settings -> Backup Targets.
- Update the default target fields:
  - Backup Target: s3://backups@home/
  - Backup Target Credential Secret: longhorn-backup-secret
- Save: Verify the Green Checkmark.
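You can also confirm the target from the CLI; the exact setting resource name and columns can differ between Longhorn versions, so treat this as a sketch:

kubectl -n longhorn-system get settings.longhorn.io backup-target
# VALUE should show s3://backups@home/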
7. Ingress Controller (Traefik v3)
Now that storage is settled, we need a way to expose services to the web properly, avoiding kubectl port-forward.
We will install Traefik v3 using Helm.
- Add Repo:

helm repo add traefik https://traefik.github.io/charts
helm repo update

- Create Config File: We need to customize Traefik to trust your forwarded headers (since you are behind a VPN/Proxy).

nano traefik-values.yaml (on the Mac)

hostNetwork: true
service:
  enabled: true
  type: LoadBalancer
  # This assigns the VIP to Traefik so you can access it via 192.168.2.240
  loadBalancerIP: '192.168.2.240'
  # We use an annotation to tell Kube-VIP which IP to assign
  annotations:
    kube-vip.io/loadbalancerIPs: '192.168.2.240'
ports:
  web:
    # New V3 Syntax for HTTP -> HTTPS redirect
    redirections:
      entryPoint:
        to: websecure
        scheme: https
        permanent: true
  websecure:
    tls:
      enabled: true
# Security: Trust headers from VPN and LAN so logs show real client IPs
additionalArguments:
  - '--entryPoints.web.forwardedHeaders.trustedIPs=10.100.0.0/24,192.168.2.0/24'
  - '--entryPoints.websecure.forwardedHeaders.trustedIPs=10.100.0.0/24,192.168.2.0/24'
- Install:

helm install traefik traefik/traefik \
--namespace kube-system \
--values traefik-values.yaml

- Verify:

kubectl get svc -n kube-system traefik

- You should see EXTERNAL-IP as 192.168.2.240.
Once this is running, we can create an Ingress to access the Longhorn Dashboard via a real URL (e.g., longhorn.home.lab), or test it directly with curl -v -k -H "Host: longhorn.home.lab" https://192.168.2.240.
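Even before any Ingress exists, you can check from Node 1 (local traffic, so UFW does not interfere yet) that Traefik holds the VIP and performs the HTTP -> HTTPS redirect; Traefik normally answers a permanent redirect with a 308:

curl -sI -H "Host: longhorn.home.lab" http://192.168.2.240
# HTTP/1.1 308 Permanent Redirect
# Location: https://longhorn.home.lab/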
8. Expose Dashboards (Ingress Objects)
Now we create a routing rule ("Ingress") that tells Traefik: "When someone asks for longhorn.home.lab, send them to
the Longhorn Dashboard."
- Create Ingress Manifest: nano longhorn-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-dashboard
  namespace: longhorn-system
  annotations:
    cert-manager.io/cluster-issuer: 'homelab-issuer'
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
  tls:
    - hosts:
        - longhorn.home.lab
      secretName: longhorn-tls-certs # Cert-manager will create this in longhorn-system
  rules:
    - host: longhorn.home.lab
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: longhorn-frontend
                port:
                  number: 80

nano minio-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minio-dashboard
  namespace: longhorn-system
  annotations:
    cert-manager.io/cluster-issuer: 'homelab-issuer'
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
  tls:
    - hosts:
        - minio.home.lab
      secretName: minio-tls-certs # Cert-manager will create this
  rules:
    - host: minio.home.lab
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: minio
                port:
                  number: 9001 # Dashboard Port
- Apply them (mac):

kubectl apply -f longhorn-ingress.yaml
kubectl apply -f minio-ingress.yaml
- Run this on Node 1, Node 2, and Node 3 to open the firewall:

# Allow HTTP/HTTPS from VPN clients
sudo ufw allow from 10.100.0.0/24 to any port 80 proto tcp
sudo ufw allow from 10.100.0.0/24 to any port 443 proto tcp
# Allow HTTP/HTTPS from Home LAN (just in case)
sudo ufw allow from 192.168.2.0/24 to any port 80 proto tcp
sudo ufw allow from 192.168.2.0/24 to any port 443 proto tcp

On the Mac, you should see ports 80 and 443 exposed by the Traefik service:

kubectl get svc -n kube-system traefik
# NAME      TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)                      AGE
# traefik   LoadBalancer   10.43.49.19   192.168.2.240   80:31504/TCP,443:32282/TCP   16m
- Configure DNS: Since you don't have a real DNS server running yet, you must tell your Mac where to find this domain. On your Mac:

sudo nano /etc/hosts

Add this line at the bottom:

192.168.2.240 longhorn.home.lab minio.home.lab
- Test: Open your browser and visit https://longhorn.home.lab and https://minio.home.lab, or test from the terminal with curl -v -k -H "Host: longhorn.home.lab" https://192.168.2.240.
- Note: You will see a "Not Secure" warning because Traefik is using a self-signed default certificate. This is normal. Click "Advanced" -> "Proceed". Next step, we will host our certificate manager.
Verification
Check that your Ingress is picking up the correct TLS certificates from Cert-Manager.
# Verify Longhorn Certificate
echo | openssl s_client -showcerts -servername longhorn.home.lab -connect 192.168.2.240:443 2>/dev/null | openssl x509 -noout -issuer -dates
# Expected Issuer: CN = homelab-ca
# Verify MinIO Certificate
echo | openssl s_client -showcerts -servername minio.home.lab -connect 192.168.2.240:443 2>/dev/null | openssl x509 -noout -issuer -dates
# Expected Issuer: CN = homelab-ca
# Check Kubernetes Secret
kubectl get certificate -n longhorn-system
# Expected: READY=True
9. DNS (AdGuard Home)
We will deploy AdGuard Home. It blocks ads network-wide and allows us to define "DNS Rewrites" so that *.home.lab automatically points to your Traefik LoadBalancer (.240).
A. Deploy AdGuard
We assign AdGuard a dedicated LoadBalancer IP: 192.168.2.241.
nano adguard-deployment.yaml
apiVersion: v1
kind: Namespace
metadata:
name: adguard-system
---
# 1. Storage for persistent configs (Longhorn)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: adguard-data
namespace: adguard-system
spec:
accessModes:
- ReadWriteOnce
storageClassName: longhorn
resources:
requests:
storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: adguard-conf
namespace: adguard-system
spec:
accessModes:
- ReadWriteOnce
storageClassName: longhorn
resources:
requests:
storage: 1Gi
---
# 2. The Application
apiVersion: apps/v1
kind: Deployment
metadata:
name: adguard
namespace: adguard-system
spec:
replicas: 1
selector:
matchLabels:
app: adguard
template:
metadata:
labels:
app: adguard
spec:
containers:
- name: adguard
image: adguard/adguardhome:v0.107.43
ports:
- containerPort: 53 # DNS UDP
protocol: UDP
- containerPort: 53 # DNS TCP
protocol: TCP
- containerPort: 3000 # Setup UI
- containerPort: 80 # Web UI
volumeMounts:
- name: data
mountPath: /opt/adguardhome/work
- name: conf
mountPath: /opt/adguardhome/conf
volumes:
- name: data
persistentVolumeClaim:
claimName: adguard-data
- name: conf
persistentVolumeClaim:
claimName: adguard-conf
---
# 3. The Service (Exposes IP .241)
apiVersion: v1
kind: Service
metadata:
name: adguard-dns
namespace: adguard-system
annotations:
kube-vip.io/loadbalancerIPs: '192.168.2.241' # <--- The DNS IP
spec:
selector:
app: adguard
type: LoadBalancer
ports:
- name: dns-udp
port: 53
targetPort: 53
protocol: UDP
- name: dns-tcp
port: 53
targetPort: 53
protocol: TCP
- name: web-setup
port: 3000
targetPort: 3000
protocol: TCP
- name: web-ui
port: 80
targetPort: 80
protocol: TCP
kubectl apply -f adguard-deployment.yaml
# Wait for EXTERNAL-IP to be 192.168.2.241
kubectl get svc -n adguard-system -w
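Before the setup wizard, it is worth confirming that Longhorn bound the two volumes and the pod started:

kubectl get pvc,pods -n adguard-system
# Both PVCs should be Bound and the adguard pod Running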
B. Initial Setup & Firewall
- Firewall: We must open Port 53 (DNS) on all nodes so the LAN can talk to the cluster. Run on all 3 nodes:
sudo ufw allow from 192.168.2.0/24 to any port 53 proto udp
sudo ufw allow from 192.168.2.0/24 to any port 53 proto tcp
sudo ufw allow from 10.100.0.0/24 to any port 53 proto udp
sudo ufw allow from 10.100.0.0/24 to any port 53 proto tcp
- Wizard:
- Go to: http://192.168.2.241:3000
- Admin Interface: All Interfaces, Port 80.
- DNS Server: All Interfaces, Port 53.
- Setup: Create Admin/Password.
C. Critical Configuration (DNS Rewrites & Rate Limits)
- Magic Rewrites:
- Go to Filters -> DNS Rewrites -> Add.
- Domain: *.home.lab
- Answer: 192.168.2.240 (This points to Traefik).
- Result: Any request for longhorn.home.lab or minio.home.lab stays on the LAN (see the dig check after this list).
- Disable Rate Limiting (Crucial!):
- Go to Settings -> DNS Settings.
- Set Rate limit to 0 (Unlimited).
- Why? If we point our Router to AdGuard later, all traffic looks like it comes from 1 IP. Default settings will ban the router and kill the internet.
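With the rewrite in place, you can confirm it from the Mac before touching any system DNS settings (dig ships with macOS):

dig @192.168.2.241 longhorn.home.lab +short
# 192.168.2.240
dig @192.168.2.241 example.com +short
# Any public IP, proving upstream resolution still works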
D. Expose AdGuard Dashboard
nano adguard-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: adguard-ui
namespace: adguard-system
annotations:
cert-manager.io/cluster-issuer: 'homelab-issuer'
traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
tls:
- hosts:
- adguard.home.lab
secretName: adguard-tls-certs
rules:
- host: adguard.home.lab
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: adguard-dns
port:
number: 80
kubectl apply -f adguard-ingress.yaml
E. Client Configuration
Scenario: The ISP Router blocks LAN IPs for DNS assignment (DNS Rebind Protection).
Instead of configuring the Router, we manually configure Critical Devices (MacBook, Phone) to use AdGuard.
- On Mac (Wi-Fi Settings):
  - DNS Servers: Remove existing entries. Add 192.168.2.241.
  - Search Domain: home (optional).
- Final Test:
- Access https://longhorn.home.lab
- Access https://minio.home.lab
- Access https://adguard.home.lab
Recap
Infrastructure Map (Physical & Virtual Nodes)
| Device Name | Role | VPN IP (wg0) | Physical IP (LAN) | Public IP |
|---|---|---|---|---|
| EC2 Proxy | VPN Hub | 10.100.0.1 | 10.0.x.x | 3.99.x.x (Static) |
| MacBook | Client | 10.100.0.2 | (Dynamic) | (Hidden) |
| Node 1 | K3s Server (Leader) | 10.100.0.10 | 192.168.2.250 | (Hidden) |
| Node 2 | K3s Server (Follower) | 10.100.0.11 | 192.168.2.251 | (Hidden) |
| Node 3 | K3s Server (Follower) | 10.100.0.12 | 192.168.2.252 | (Hidden) |
| NAS-Server | Backup Storage | N/A | 192.168.2.135 | (Hidden) |
| Home Router | Gateway / DHCP | N/A | 192.168.2.1 | (Dynamic ISP) |
Service & Domain Registry (Apps)
This table maps the Kubernetes services to their LoadBalancer IPs and Domain URLs.
| Application | Role | Hosting Type | VIP / Endpoint | External URL (Ingress) |
|---|---|---|---|---|
| Kube-VIP | App Load Balancer | DaemonSet | 192.168.2.240 | N/A |
| Traefik | Ingress Controller | Deployment | 192.168.2.240 | *.home.lab |
| AdGuard Home | DNS Load Balancer | Deployment | 192.168.2.241 | https://adguard.home.lab |
| Longhorn | Storage Dashboard | K8s Service | 192.168.2.240 | https://longhorn.home.lab |
| MinIO | S3 Gateway | K8s Service | 192.168.2.240 | https://minio.home.lab |