added part 3

@@ -1,17 +1,20 @@
---
layout: post
title: 'ThinkCentre Kubernetes Home Server Step 1 (BIOS Setup)'
date: 2025-12-31 22:00:00 -0400
categories:
- homelab
highlight: true
---

> _I recently picked up 3 refurbished Lenovo ThinkCentre M720q Tiny to migrate my Kubernetes cluster from AWS and cut
> my EKS costs. These machines are great for being silent and power-efficient, but out of the box, they're tuned as
> office desktops, not as high-availability server racks. Here's how I configured the hardware and OS settings to make
> this 3-node cluster production-ready. In the next steps, we will take advantage of Cloudflare and AWS for must-buy
> services like a domain name, a static IPv4 address, and a web application firewall. Our cloud budget is 10 CAD/month.
> The bottleneck will strictly be the Lenovo servers and our home Wi-Fi upload speed. The home server should work on any
> network, meaning it must not rely on the home router settings (current limitation: steps 1-3 are not automated). I
> already own a NAS server which we will use for daily k8s backups of the nodes._

Not the cleanest setup, no cable management, servers upside down, stickers still on, no room for ventilation, and they
have too much wiggle room in the rack, but the software is sound and that's 90% of the work done :D

@@ -1,6 +1,6 @@
---
layout: post
title: 'ThinkCentre Kubernetes Home Server Step 2 (Debian, SSH, Firewall, hub-and-spoke VPN, AWS)'
date: 2026-01-01 23:08:00 -0400
categories:
- homelab
@@ -22,13 +22,17 @@ DDNS services. So we implement a VPN, specifically a hub-and-spoke VPN using an

## 2. Network Map

| Device / Resource | Type          | Role                 | VPN IP (`wg0`) | Physical IP (LAN) | Public IP           |
| ----------------- | ------------- | -------------------- | -------------- | ----------------- | ------------------- |
| **EC2 Proxy**     | To be created | VPN Hub              | `10.100.0.1`   | `10.0.x.x`        | `3.99.x.x` (Static) |
| **MacBook**       | Existing      | Client               | `10.100.0.2`   | (Dynamic)         | (Hidden)            |
| **Node 1**        | Existing      | K3s Master           | `10.100.0.10`  | `192.168.2.250`   | (Hidden)            |
| **Node 2**        | Existing      | K3s Worker           | `10.100.0.11`  | `192.168.2.251`   | (Hidden)            |
| **Node 3**        | Existing      | K3s Worker           | `10.100.0.12`  | `192.168.2.252`   | (Hidden)            |
| **NAS-Server**    | Existing      | Backup Storage       | N/A            | `192.168.2.135`   | (Hidden)            |
| **Home Router**   | Existing      | Gateway / DHCP       | N/A            | `192.168.2.1`     | (Dynamic ISP)       |
| **Kube-VIP**      | To be created | **App** LoadBalancer | N/A            | `192.168.2.240`   | (Hidden)            |
| **AdGuard Home**  | To be created | **DNS** LoadBalancer | N/A            | `192.168.2.241`   | (Hidden)            |

## 3. Phase 1: EC2 Setup (The Hub)

@@ -79,10 +83,10 @@ PostDown = iptables -D FORWARD -i %i -j ACCEPT
PublicKey = <MAC_PUBLIC_KEY>
AllowedIPs = 10.100.0.2/32

# Peer: Node 1 (192.168.2.240/32 is the floating IP of the Kubernetes cluster; we will need it for step 3)
[Peer]
PublicKey = <NODE_1_PUBLIC_KEY>
AllowedIPs = 10.100.0.10/32, 192.168.2.240/32

# Peer: Node 2
[Peer]
@@ -115,7 +119,7 @@ apt update

Even though we will set a static IP on the Debian node itself, your router might still try to claim that IP for another
device via DHCP. Ensure you go into your router settings and reserve those IPs by marking them as "Static" in the
device list AND by shrinking the DHCP range to 192.168.2.2 -> 192.168.2.239. Reboot the nodes to apply the changes.

```bash
ip link show
```
@@ -240,6 +244,7 @@ ping 192.168.2.251
[Interface]
Address = 10.100.0.11/24 # <--- change per node (.10, .11, .12)
PrivateKey = <NODE_PRIVATE_KEY>
MTU = 1280

[Peer]
PublicKey = <EC2_PUBLIC_KEY>
@@ -261,13 +266,14 @@ ping 192.168.2.251
[Interface]
PrivateKey = <MAC_PRIVATE_KEY>
Address = 10.100.0.2/24
DNS = 192.168.2.241, 192.168.2.1
MTU = 1280

[Peer]
PublicKey = <EC2_PUBLIC_KEY>
Endpoint = <EC2_PUBLIC_IP>:51820
# Route VPN traffic through EC2
AllowedIPs = 10.100.0.0/24
PersistentKeepalive = 25
```

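Before moving on, you can confirm that 1280 actually fits the path (a quick sketch; `-M do -s` is Linux `ping` syntax, on macOS use `ping -D -s <size>` instead):

```bash
# Probe the tunnel with the don't-fragment bit set.
# 1280 (wg0 MTU) - 28 bytes of IP+ICMP headers = 1252 bytes of payload.
ping -M do -s 1252 -c 3 10.100.0.1   # should succeed
ping -M do -s 1400 -c 3 10.100.0.1   # should fail: payload exceeds the wg0 MTU
```
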
@@ -395,22 +401,6 @@ If you can ping EC2 (`10.100.0.1`) but not the Nodes (`10.100.0.11`):
- Ensure `iptables` is installed on Amazon Linux (`sudo dnf install iptables -y`) and that your `PostUp` rules in
  `wg0.conf` are active.

### SSH "Permission Denied" (Public Key)

If you see a handshake but SSH rejects you:
@@ -431,3 +421,16 @@ ssh-keygen -R 10.100.0.10
ssh-keygen -R 10.100.0.11
ssh-keygen -R 10.100.0.12
```

---

In the next steps, we will automate the infrastructure using Ansible and deploy Kubernetes, with all services running in
the cluster and HPA rules properly configured. The expected traffic flows will be as follows:

- Admin Traffic: Admin → EC2 Bastion (WireGuard) → Home Cluster (complete)
- Web Traffic: User → AWS CloudFront (WAF) → Cloudflare Tunnel → Home Cluster (next steps)

Our total cost is approximately 10\$ (CAD) per month to cover the EC2 instance (on-demand pricing ~4\$/mo), elastic IP
allocation (~5\$/mo), and the domain (~1\$/mo). Cloudflare will be used on the free tier.

[[2026-01-02-homelab-part3]]

_posts/homelab/2026-01-02-homelab-part3.md (new file, 858 lines)

@@ -0,0 +1,858 @@
---
layout: post
title: 'ThinkCentre Kubernetes Home Server Step 3 (NFS, K3s, Kube-VIP, MinIO, Longhorn, Traefik, Cert-Manager, DNS, Adguard Home)'
date: 2026-01-02 20:26:00 -0400
categories:
- homelab
highlight: true
---

[[2026-01-01-homelab-part2]]

No automation yet. We still run commands by hand on each node. Automation will require Ansible for the nodes and
Terraform for AWS. Too much work...

We need to prepare the operating system for Kubernetes and storage networking.

Initial NAS verification: check that the NAS

- has a static private IP in your router settings
- has NFS enabled
- has its firewall enabled
- has a firewall incoming rule (all traffic, or 111/tcp, 111/udp, 2049/tcp) from the 3 nodes
- has the `backups` folder exported via NFS

### Phase 1: Operating System & Network Prep

Run on ALL 3 Nodes

1. Install System Dependencies

```bash
sudo apt update && sudo apt install nfs-common open-iscsi curl -y
```

2. Verify NAS Connectivity

```bash
/sbin/showmount -e 192.168.2.135
# Expected:
# Export list for 192.168.2.135
# /volume1/backups 192.168.2.250, 192.168.2.251, 192.168.2.252
```

3. Configure Firewall (Trust LAN & VPN)

```bash
# Allow communication between nodes (LAN)
# on node .250, run the .251 and .252 rules (and likewise on each node)
# needed so servers can communicate with each other
sudo ufw allow from 192.168.2.250
sudo ufw allow from 192.168.2.251
sudo ufw allow from 192.168.2.252

# Allow Mac/VPN to talk to K3s API
sudo ufw allow in on wg0 to any port 6443 proto tcp
```

4. System Config (Swap & IP Forwarding)

```bash
# 1. Disable Swap
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# 2. Enable IP Forwarding
echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

# 3. Mount NAS (Update path if needed)
sudo mkdir -p /mnt/nas
echo "192.168.2.135:/volume1/backups /mnt/nas nfs defaults 0 0" | sudo tee -a /etc/fstab
sudo mount -a
```

```text
root@boga-server-1:~# ls -l /mnt/nas
total 0
root@boga-server-1:~# mount | grep /mnt/nas
192.168.2.135:/volume1/backups on /mnt/nas type nfs (rw,relatime,vers=3,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.2.135,mountvers=3,mountport=48867,mountproto=udp,local_lock=none,addr=192.168.2.135)
```

### Phase 2: Initialize Cluster (Node 1)

Run on Node 1

1. Generate Token (Run once, save this):

```bash
openssl rand -hex 10
```

2. Install K3s Leader

- `--cluster-init`: Tells K3s this is the first node of the cluster.
- `--node-ip`: Prevents a crash when Kube-VIP adds a second IP.
- `--flannel-iface enp0s31f6`: Forces pod traffic over Ethernet (crucial since you have VPN interfaces that might
  confuse it). Verify your interface name with `ip a` if unsure (see the sketch after this list).
- `--tls-san 192.168.2.240`: Pre-authorizes your future Floating IP (Kube-VIP) so SSL works later.
- `--tls-san 10.100.0.10`: Also authorizes the VPN IP for SSL.
- `--disable traefik/servicelb`: We will install the "Pro" versions of these manually.

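If you are unsure of the interface name, a brief listing helps (a sketch; the addresses shown are this guide's example values):

```bash
ip -br a
# lo          UNKNOWN  127.0.0.1/8 ::1/128
# enp0s31f6   UP       192.168.2.250/24 ...
# wg0         UNKNOWN  10.100.0.10/24
```
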
```bash
# REPLACE <YOUR_TOKEN> below
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server" sh -s - \
  --cluster-init \
  --token <YOUR_TOKEN> \
  --flannel-iface enp0s31f6 \
  --disable servicelb \
  --disable traefik \
  --node-ip 192.168.2.250 \
  --tls-san 192.168.2.240 \
  --tls-san 10.100.0.10
```

3. Watch for Followers

```bash
kubectl get nodes -w
```

### Phase 3: Join Followers (Node 2 & 3)

Run on Node 2 and Node 3

- Node 2: Replace `node-ip` with `192.168.2.251` and `tls-san` with `10.100.0.11`.
- Node 3: Replace `node-ip` with `192.168.2.252` and `tls-san` with `10.100.0.12`.

```bash
# Example for NODE 2
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server" sh -s - \
  --server https://192.168.2.250:6443 \
  --token <YOUR_TOKEN> \
  --flannel-iface enp0s31f6 \
  --disable servicelb \
  --disable traefik \
  --node-ip 192.168.2.251 \
  --tls-san 192.168.2.240 \
  --tls-san 10.100.0.11
```

### Phase 4: Deploy Kube-VIP (Load Balancer)

Run on Node 1

1. Apply RBAC Permissions

```bash
kubectl apply -f https://kube-vip.io/manifests/rbac.yaml
```

2. Create Manifest (`nano kubevip.yaml`)

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-vip-ds
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: kube-vip-ds
  template:
    metadata:
      labels:
        app.kubernetes.io/name: kube-vip-ds
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node-role.kubernetes.io/master
                    operator: Exists
              - matchExpressions:
                  - key: node-role.kubernetes.io/control-plane
                    operator: Exists
      containers:
        - args:
            - manager
          env:
            - name: vip_arp
              value: 'true'
            - name: port
              value: '6443'
            - name: vip_interface
              value: 'enp0s31f6' # <--- the physical interface
            - name: vip_cidr
              value: '32'
            - name: cp_enable
              value: 'true'
            - name: cp_namespace
              value: 'kube-system'
            - name: vip_ddns
              value: 'false'
            - name: svc_enable
              value: 'true'
            - name: address
              value: '192.168.2.240' # <--- the floating ip
          image: ghcr.io/kube-vip/kube-vip:v0.6.4
          imagePullPolicy: Always
          name: kube-vip
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
                - NET_RAW
      hostNetwork: true
      serviceAccountName: kube-vip
      tolerations:
        - effect: NoSchedule
          operator: Exists
        - effect: NoExecute
          operator: Exists
```

3. Apply

```bash
kubectl apply -f kubevip.yaml
```

4. Verify IP

```bash
ip addr show enp0s31f6
# Look for secondary IP: 192.168.2.240/32
```

5. Check the pods

```bash
kubectl get pods -n kube-system -l app.kubernetes.io/name=kube-vip-ds
#NAME                READY   STATUS    RESTARTS   AGE
#kube-vip-ds-g98zh   1/1     Running   0          14s
#kube-vip-ds-pxbjs   1/1     Running   0          14s
#kube-vip-ds-vq8sp   1/1     Running   0          14s
```

### Phase 5: Remote Access (Mac)

1. Retrieve Config (On Node 1)

```bash
sudo cat /etc/rancher/k3s/k3s.yaml
```

2. Configure Mac

```bash
nano ~/.kube/config-homelab
```

3. Update:

- Paste the config.
- Change: `server: https://127.0.0.1:6443`
- To: `server: https://10.100.0.10:6443`
- Note: We use the VPN IP (.10), NOT the VIP (.240). This avoids "Asymmetric Routing" packet drops while using
  WireGuard. (A one-liner for this edit follows this list.)

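The same edit as a one-liner, if you prefer (a sketch; the empty `-i ''` argument is required by the BSD `sed` that ships with macOS):

```bash
# Swap the loopback endpoint for Node 1's VPN IP in the copied kubeconfig
sed -i '' 's|https://127.0.0.1:6443|https://10.100.0.10:6443|' ~/.kube/config-homelab
```
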
4. Connect

```bash
export KUBECONFIG=~/.kube/config-homelab
kubectl get nodes
```

## 6. Storage (Longhorn & NAS Backup)

We want distributed block storage for Pods and off-site backups to the NAS. However, Longhorn requires an
S3-compatible endpoint for backups. We will deploy **MinIO** as a gateway that mounts the NAS via NFS and exposes it as
S3.

### A. Install Longhorn via Helm (On Mac)

We install Longhorn in the `longhorn-system` namespace.

1. Add Repo:

```bash
helm repo add longhorn https://charts.longhorn.io
helm repo update
```

2. Install:

```bash
helm install longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --create-namespace \
  --set defaultSettings.defaultDataPath="/var/lib/longhorn"
```

3. Verify:
   Wait for all pods to be `Running`.

```bash
kubectl get pods -n longhorn-system -w
```

### B. Deploy MinIO Bridge (The S3 Gateway)

We will use a **Kubernetes Secret** to manage the credentials so we don't hardcode passwords in our YAML files.

1. **Create the Credentials Secret:**
   Run this command in your terminal. Replace `<YOUR_PASSWORD>` with a strong password.

```bash
kubectl create secret generic minio-secret \
  --from-literal=rootUser=admin \
  --from-literal=rootPassword=<YOUR_PASSWORD> \
  -n longhorn-system
```

2. **Create the Manifest:** `nano minio-bridge.yaml`

_Notice that `env` now uses `valueFrom`, pointing to the secret we just created._

```yaml
apiVersion: v1
kind: Service
metadata:
  name: minio
  namespace: longhorn-system
spec:
  selector:
    app: minio
  ports:
    - name: api
      port: 9000
      targetPort: 9000
    - name: console
      port: 9001
      targetPort: 9001
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
  namespace: longhorn-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
        - name: minio
          image: minio/minio:RELEASE.2023-09-30T07-02-29Z
          args:
            - server
            - /data
            - --console-address
            - :9001
          env:
            # SENSITIVE: We pull these from the 'minio-secret' we created via CLI
            - name: MINIO_ROOT_USER
              valueFrom:
                secretKeyRef:
                  name: minio-secret
                  key: rootUser
            - name: MINIO_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: minio-secret
                  key: rootPassword
          volumeMounts:
            - name: nas-storage
              mountPath: /data
          ports:
            - containerPort: 9000
            - containerPort: 9001
      volumes:
        - name: nas-storage
          hostPath:
            path: /mnt/nas
            type: Directory
```

3. **Apply:**

```bash
kubectl apply -f minio-bridge.yaml
```

4. **Initialize Bucket:**

- Port Forward Console:

```bash
kubectl port-forward -n longhorn-system deployment/minio 9001:9001
```

- Access: http://localhost:9001
- Login: Use `admin` and the password you defined in step 1.
- Action: Create a bucket named `backups`.

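If you prefer the CLI over the console, the same bucket can be created with the MinIO client (a sketch; assumes `mc` is installed on the Mac, e.g. `brew install minio-mc`):

```bash
# In one terminal: forward the S3 API port (9000, not the console port)
kubectl port-forward -n longhorn-system deployment/minio 9000:9000

# In another terminal: register the endpoint, then create and list the bucket
mc alias set home http://localhost:9000 admin <YOUR_PASSWORD>
mc mb home/backups
mc ls home
```
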
### C. Configure Longhorn Backup Target

Now we tell Longhorn to use the local MinIO service. We need to create a _specific_ secret format that Longhorn expects
for S3 targets.

1. **Create Backup Secret:**

_Note: Replace `<YOUR_PASSWORD>` with the EXACT same password you used in Step B.1._

```bash
kubectl create secret generic longhorn-backup-secret \
  --from-literal=AWS_ACCESS_KEY_ID=admin \
  --from-literal=AWS_SECRET_ACCESS_KEY=<YOUR_PASSWORD> \
  --from-literal=AWS_ENDPOINTS=http://minio.longhorn-system:9000 \
  -n longhorn-system
```

2. **Configure Settings (via UI):**

- Port forward UI:

```bash
kubectl port-forward -n longhorn-system svc/longhorn-frontend 8080:80
```

- Access: http://localhost:8080
- Navigate: **Settings** -> **Backup Targets**.
- Update `default` target fields:
  - **Backup Target:** `s3://backups@home/`
  - **Backup Target Credential Secret:** `longhorn-backup-secret`
- Save: Verify the Green Checkmark.

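The same green-checkmark state can also be confirmed from the CLI (a sketch; recent Longhorn releases expose the target as a `backuptargets.longhorn.io` resource, and the exact columns vary by version):

```bash
kubectl -n longhorn-system get backuptargets.longhorn.io
# NAME      URL                  CREDENTIAL               AVAILABLE
# default   s3://backups@home/   longhorn-backup-secret   true
```
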
## 7. Ingress Controller (Traefik v3)

Now that storage is settled, we need a way to expose services to the web properly, avoiding `kubectl port-forward`.

We will install **Traefik v3** using Helm.

1. Add Repo:

```bash
helm repo add traefik https://traefik.github.io/charts
helm repo update
```

2. Create Config File:
   We need to customize Traefik to trust your forwarded headers (since you are behind a VPN/Proxy).
   `nano traefik-values.yaml` (on the Mac)

```yaml
hostNetwork: true

service:
  enabled: true
  type: LoadBalancer
  # This assigns the VIP to Traefik so you can access it via 192.168.2.240
  loadBalancerIP: '192.168.2.240'
  # We use an annotation to tell Kube-VIP which IP to assign
  annotations:
    kube-vip.io/loadbalancerIPs: '192.168.2.240'

ports:
  web:
    # New V3 Syntax for HTTP -> HTTPS redirect
    redirections:
      entryPoint:
        to: websecure
        scheme: https
        permanent: true
  websecure:
    tls:
      enabled: true

# Security: Trust headers from VPN and LAN so logs show real client IPs
additionalArguments:
  - '--entryPoints.web.forwardedHeaders.trustedIPs=10.100.0.0/24,192.168.2.0/24'
  - '--entryPoints.websecure.forwardedHeaders.trustedIPs=10.100.0.0/24,192.168.2.0/24'
```

3. Install:

```bash
helm install traefik traefik/traefik \
  --namespace kube-system \
  --values traefik-values.yaml
```

4. Verify:

```bash
kubectl get svc -n kube-system traefik
```

- You should see `EXTERNAL-IP` as `192.168.2.240`.

Once this is running, we can create an **Ingress** to access the Longhorn Dashboard via a real URL (e.g.,
`longhorn.home.lab`), or test with `curl -v -k -H "Host: longhorn.home.lab" https://192.168.2.240`.

## 8. Expose Dashboards (Ingress Objects)

Now we create a routing rule ("Ingress") that tells Traefik: "When someone asks for `longhorn.home.lab`, send them to
the Longhorn Dashboard."

1. **Create Ingress Manifest:**
   `nano longhorn-ingress.yaml`

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-dashboard
  namespace: longhorn-system
  annotations:
    cert-manager.io/cluster-issuer: 'homelab-issuer'
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
  tls:
    - hosts:
        - longhorn.home.lab
      secretName: longhorn-tls-certs # Cert-manager will create this in longhorn-system
  rules:
    - host: longhorn.home.lab
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: longhorn-frontend
                port:
                  number: 80
```

`nano minio-ingress.yaml`

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minio-dashboard
  namespace: longhorn-system
  annotations:
    cert-manager.io/cluster-issuer: 'homelab-issuer'
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
  tls:
    - hosts:
        - minio.home.lab
      secretName: minio-tls-certs # Cert-manager will create this
  rules:
    - host: minio.home.lab
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: minio
                port:
                  number: 9001 # Dashboard Port
```

2. Apply them (on the Mac):

```bash
kubectl apply -f longhorn-ingress.yaml
kubectl apply -f minio-ingress.yaml
```

3. Run this on Node 1, Node 2, and Node 3 to open the firewall:

```bash
# Allow HTTP/HTTPS from VPN clients
sudo ufw allow from 10.100.0.0/24 to any port 80 proto tcp
sudo ufw allow from 10.100.0.0/24 to any port 443 proto tcp

# Allow HTTP/HTTPS from Home LAN (just in case)
sudo ufw allow from 192.168.2.0/24 to any port 80 proto tcp
sudo ufw allow from 192.168.2.0/24 to any port 443 proto tcp
```

On the Mac, you should see ports 80 and 443:

```bash
kubectl get svc -n kube-system traefik
#NAME      TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)                      AGE
#traefik   LoadBalancer   10.43.49.19   192.168.2.240   80:31504/TCP,443:32282/TCP   16m
```

4. **Configure DNS:**
   Since you don't have a real DNS server running yet, you must tell your Mac where to find this domain.
   **On your Mac:**

```bash
sudo nano /etc/hosts
```

Add this line at the bottom:

```text
192.168.2.240 longhorn.home.lab minio.home.lab
```

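If the new names don't resolve right away, macOS may be caching DNS; flushing the cache is harmless (standard macOS commands):

```bash
sudo dscacheutil -flushcache
sudo killall -HUP mDNSResponder
```
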
5. Test:
   Open your browser and visit https://longhorn.home.lab and https://minio.home.lab, or test with
   `curl -v -k -H "Host: longhorn.home.lab" https://192.168.2.240`.

- Note: You will see a "Not Secure" warning because Traefik is using a self-signed default certificate. This is normal.
  Click "Advanced" -> "Proceed". In the next step, we will host our certificate manager.

### Verification

Once your certificate manager is in place, check that your Ingress is picking up the correct TLS certificates from
Cert-Manager.

```bash
# Verify Longhorn Certificate
echo | openssl s_client -showcerts -servername longhorn.home.lab -connect 192.168.2.240:443 2>/dev/null | openssl x509 -noout -issuer -dates
# Expected Issuer: CN = homelab-ca

# Verify MinIO Certificate
echo | openssl s_client -showcerts -servername minio.home.lab -connect 192.168.2.240:443 2>/dev/null | openssl x509 -noout -issuer -dates
# Expected Issuer: CN = homelab-ca

# Check Kubernetes Secret
kubectl get certificate -n longhorn-system
# Expected: READY=True
```

## 9. DNS (AdGuard Home)

We will deploy **AdGuard Home**. It blocks ads network-wide and allows us to define "DNS Rewrites" so that `*.home.lab`
automatically points to your Traefik LoadBalancer (`.240`).

### A. Deploy AdGuard

We assign AdGuard a dedicated LoadBalancer IP: **`192.168.2.241`**.

`nano adguard-deployment.yaml`

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: adguard-system
---
# 1. Storage for persistent configs (Longhorn)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: adguard-data
  namespace: adguard-system
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: adguard-conf
  namespace: adguard-system
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
---
# 2. The Application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: adguard
  namespace: adguard-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: adguard
  template:
    metadata:
      labels:
        app: adguard
    spec:
      containers:
        - name: adguard
          image: adguard/adguardhome:v0.107.43
          ports:
            - containerPort: 53 # DNS UDP
              protocol: UDP
            - containerPort: 53 # DNS TCP
              protocol: TCP
            - containerPort: 3000 # Setup UI
            - containerPort: 80 # Web UI
          volumeMounts:
            - name: data
              mountPath: /opt/adguardhome/work
            - name: conf
              mountPath: /opt/adguardhome/conf
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: adguard-data
        - name: conf
          persistentVolumeClaim:
            claimName: adguard-conf
---
# 3. The Service (Exposes IP .241)
apiVersion: v1
kind: Service
metadata:
  name: adguard-dns
  namespace: adguard-system
  annotations:
    kube-vip.io/loadbalancerIPs: '192.168.2.241' # <--- The DNS IP
spec:
  selector:
    app: adguard
  type: LoadBalancer
  ports:
    - name: dns-udp
      port: 53
      targetPort: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      targetPort: 53
      protocol: TCP
    - name: web-setup
      port: 3000
      targetPort: 3000
      protocol: TCP
    - name: web-ui
      port: 80
      targetPort: 80
      protocol: TCP
```

```bash
kubectl apply -f adguard-deployment.yaml
# Wait for EXTERNAL-IP to be 192.168.2.241
kubectl get svc -n adguard-system -w
```

### B. Initial Setup & Firewall

1. **Firewall:** We must open Port 53 (DNS) on all nodes so the LAN can talk to the cluster.
   _Run on all 3 nodes:_

```bash
sudo ufw allow from 192.168.2.0/24 to any port 53 proto udp
sudo ufw allow from 192.168.2.0/24 to any port 53 proto tcp
sudo ufw allow from 10.100.0.0/24 to any port 53 proto udp
sudo ufw allow from 10.100.0.0/24 to any port 53 proto tcp
```

2. **Wizard:**

- Go to: `http://192.168.2.241:3000`
- **Admin Interface:** All Interfaces, Port 80.
- **DNS Server:** All Interfaces, Port 53.
- **Setup:** Create Admin/Password.

### C. Critical Configuration (DNS Rewrites & Rate Limits)

1. **Magic Rewrites:**

- Go to **Filters** -> **DNS Rewrites** -> **Add**.
- **Domain:** `*.home.lab`
- **Answer:** `192.168.2.240` (This points to Traefik).
- _Result:_ Any request for `longhorn.home.lab` or `minio.home.lab` stays on the LAN.

2. **Disable Rate Limiting (Crucial!):**

- Go to **Settings** -> **DNS Settings**.
- Set **Rate limit** to `0` (Unlimited).
- _Why?_ If we point our Router to AdGuard later, all traffic looks like it comes from 1 IP. Default settings will ban
  the router and kill the internet.

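Before touching any client, you can confirm the wildcard rewrite resolves as expected by querying AdGuard directly (`dig` ships with macOS and most Linux distros):

```bash
dig @192.168.2.241 longhorn.home.lab +short
# Expected: 192.168.2.240
dig @192.168.2.241 adguard.home.lab +short
# Expected: 192.168.2.240
```
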
### D. Expose AdGuard Dashboard

`nano adguard-ingress.yaml`

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: adguard-ui
  namespace: adguard-system
  annotations:
    cert-manager.io/cluster-issuer: 'homelab-issuer'
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
  tls:
    - hosts:
        - adguard.home.lab
      secretName: adguard-tls-certs
  rules:
    - host: adguard.home.lab
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: adguard-dns
                port:
                  number: 80
```

```bash
kubectl apply -f adguard-ingress.yaml
```

### E. Client Configuration

_Scenario: The ISP Router blocks LAN IPs for DNS assignment (DNS Rebind Protection)._

Instead of configuring the Router, we manually configure **Critical Devices** (MacBook, Phone) to use AdGuard.

1. **On Mac (Wi-Fi Settings):**

- **DNS Servers:** Remove existing entries. Add `192.168.2.241`. (A command-line sketch appears at the end of this
  section.)
- **Search Domain:** `home` (optional).

2. **Final Test:**

- Access https://longhorn.home.lab
- Access https://minio.home.lab
- Access https://adguard.home.lab

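The Wi-Fi DNS change from step 1 can also be scripted (a sketch; `networksetup` is built into macOS, and "Wi-Fi" is the usual service name, check yours with `networksetup -listallnetworkservices`):

```bash
# Point the Mac's Wi-Fi DNS at AdGuard Home
sudo networksetup -setdnsservers Wi-Fi 192.168.2.241
# Verify
networksetup -getdnsservers Wi-Fi
```
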
## Recap

### Infrastructure Map (Physical & Virtual Nodes)

| Device Name     | Role           | VPN IP (`wg0`) | Physical IP (LAN) | Public IP           |
| --------------- | -------------- | -------------- | ----------------- | ------------------- |
| **EC2 Proxy**   | VPN Hub        | `10.100.0.1`   | `10.0.x.x`        | `3.99.x.x` (Static) |
| **MacBook**     | Client         | `10.100.0.2`   | (Dynamic)         | (Hidden)            |
| **Node 1**      | K3s Master     | `10.100.0.10`  | `192.168.2.250`   | (Hidden)            |
| **Node 2**      | K3s Worker     | `10.100.0.11`  | `192.168.2.251`   | (Hidden)            |
| **Node 3**      | K3s Worker     | `10.100.0.12`  | `192.168.2.252`   | (Hidden)            |
| **NAS-Server**  | Backup Storage | N/A            | `192.168.2.135`   | (Hidden)            |
| **Home Router** | Gateway / DHCP | N/A            | `192.168.2.1`     | (Dynamic ISP)       |

### Service & Domain Registry (Apps)

This table maps the Kubernetes services to their LoadBalancer IPs and Domain URLs.

| Application      | Role                  | Hosting Type | VIP / Endpoint  | External URL (Ingress)    |
| ---------------- | --------------------- | ------------ | --------------- | ------------------------- |
| **Kube-VIP**     | **App** Load Balancer | DaemonSet    | `192.168.2.240` | N/A                       |
| **Traefik**      | Ingress Controller    | Deployment   | `192.168.2.240` | `*.home.lab`              |
| **AdGuard Home** | **DNS** Load Balancer | Deployment   | `192.168.2.241` | https://adguard.home.lab  |
| **Longhorn**     | Storage Dashboard     | K8s Service  | `192.168.2.240` | https://longhorn.home.lab |
| **MinIO**        | S3 Gateway            | K8s Service  | `192.168.2.240` | https://minio.home.lab    |