Monitoring My Homelab with Grafana: From Proxmox to Kubernetes
A practical, beginner-friendly observability setup for a two-host Proxmox homelab: Prometheus + Grafana in Kubernetes, scraping Proxmox hosts and Pi-hole.
Part of: Homelab Observability

Overview
Once your homelab grows beyond a single VM, "it feels slow" isn't a troubleshooting strategy anymore.
My current setup looks like this:
- Two Proxmox servers
- Host A: runs my Kubernetes cluster
- Host B: runs "house services" (right now: a single Pi-hole VM)
- Goal: one place to understand health and performance across all of it.
This post is a practical walkthrough for deploying a simple monitoring stack:
- Prometheus to collect metrics
- Grafana to visualize them
- Node Exporter to monitor hosts and nodes
- Pi-hole exporter to monitor DNS performance and ad-block stats
Uptime isn't the goal; visibility is.
What I Monitor (and Why)
I keep it opinionated and small. If a metric doesn't help me make decisions, it doesn't get a dashboard. (A few example PromQL queries for these signals follow the lists below.)
Proxmox hosts
- CPU usage (and steal time, if applicable)
- Memory pressure
- Disk I/O latency and saturation
- Network throughput / errors
Kubernetes
- Node CPU/memory
- Pod restarts / crash loops
- Node readiness
Pi-hole
- Queries/sec and blocked %
- DNS latency (fast DNS feels like fast internet)
- Top clients (what device is being noisy?)
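To make that concrete, here are the kinds of PromQL queries I run for these signals. This is a sketch: the node_exporter and kube-state-metrics names are standard, but Pi-hole metric names depend on the exporter version, so check its /metrics output for the exact names.
# CPU busy fraction per host (node_exporter)
1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))
# Rough disk read latency: seconds spent reading / reads completed
rate(node_disk_read_time_seconds_total[5m]) / rate(node_disk_reads_completed_total[5m])
# Memory pressure: fraction of memory still available
node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes
# Pod restarts in the last hour (kube-state-metrics, included in kube-prometheus-stack)
increase(kube_pod_container_status_restarts_total[1h])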
High-Level Architecture
I'm deliberately splitting responsibilities:
- Monitoring stack runs inside Kubernetes
- Prometheus scrapes:
- Kubernetes node metrics
- Proxmox host metrics (via node_exporter)
- Pi-hole metrics (via exporter)
Proxmox hosts ──┐
Pi-hole VM    ──┼──> Prometheus ──> Grafana
Kubernetes    ──┘
Prerequisites
- A working Kubernetes cluster
- kubectl access
- Helm installed on your workstation
- A storage class for persistent volumes (Longhorn, etc.) if you want Grafana persistence
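A quick sanity check of those prerequisites before installing anything (plain kubectl/helm commands, nothing specific to this setup):
kubectl version --client      # kubectl is installed and on PATH
kubectl get nodes             # the cluster is reachable and nodes are Ready
helm version                  # Helm 3.x is installed
kubectl get storageclass      # a StorageClass exists if you want persistence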
Install the stack with Helm
I use kube-prometheus-stack because it gives you a solid baseline quickly:
- Prometheus
- Grafana
- Node exporter (as a DaemonSet)
- Lots of ready-to-use dashboards
We'll disable Alertmanager for now to keep things simple.
1) Add the Helm repo
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
2) Create a namespace
kubectl create namespace monitoring
3) Create a minimal values.yaml
Save as values.yaml:
grafana:
  enabled: true
  # Optional but recommended so dashboards/users persist across restarts
  persistence:
    enabled: true
    type: pvc
    size: 10Gi
  # Expose Grafana in a simple way for a homelab.
  # You can swap this to an Ingress later.
  service:
    type: ClusterIP

prometheus:
  prometheusSpec:
    retention: 7d
    storageSpec:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 20Gi

alertmanager:
  enabled: false

# A lot of homelabs don't need these at first.
kubeEtcd:
  enabled: false
kubeControllerManager:
  enabled: false
kubeScheduler:
  enabled: false
Note: the kubeEtcd / kubeControllerManager / kubeScheduler toggles depend on your distro and whether those endpoints are reachable. Disabling them avoids noisy "target down" alerts in many homelabs.
4) Install
helm upgrade --install monitoring prometheus-community/kube-prometheus-stack \
--namespace monitoring \
--values values.yaml
5) Access Grafana
Get the Grafana admin password:
kubectl -n monitoring get secret monitoring-grafana -o jsonpath="{.data.admin-password}" | base64 --decode; echo
Port-forward:
kubectl -n monitoring port-forward svc/monitoring-grafana 3000:80
Open: http://localhost:3000
Login:
- User: admin
- Password: (the command output above)
Monitor Proxmox hosts with node_exporter
This is the "80% win" for Proxmox monitoring. You can later add a Proxmox API exporter if you want deeper VM-level insights.
Option A (recommended): install node_exporter directly on each Proxmox host
On each Proxmox node:
apt-get update
apt-get install -y prometheus-node-exporter
systemctl enable --now prometheus-node-exporter
By default, node_exporter listens on :9100.
Validate from another machine (replace IP):
curl -s http://PROXMOX_IP:9100/metrics | head
Add Proxmox targets to Prometheus
In kube-prometheus-stack, the cleanest way is to add a small additional scrape config.
- Create a file additional-scrape-configs.yaml:
- job_name: "proxmox-nodes"
  static_configs:
    - targets:
        - "192.168.30.10:9100" # proxmox-1
        - "192.168.30.11:9100" # proxmox-2
- Create a secret from it:
kubectl -n monitoring create secret generic additional-scrape-configs \
--from-file=additional-scrape-configs.yaml \
--dry-run=client -o yaml | kubectl apply -f -
- Update values.yaml to reference it:
prometheus:
  prometheusSpec:
    additionalScrapeConfigsSecret:
      enabled: true
      name: additional-scrape-configs
      key: additional-scrape-configs.yaml
- Apply the Helm upgrade:
helm upgrade --install monitoring prometheus-community/kube-prometheus-stack \
--namespace monitoring \
--values values.yaml
Now Grafana/Prometheus should show your Proxmox targets.
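If you want to verify before opening Grafana, you can port-forward Prometheus and query the new job directly. The Service name below is what the chart created for my release name (monitoring); confirm yours with kubectl -n monitoring get svc.
kubectl -n monitoring port-forward svc/monitoring-kube-prometheus-prometheus 9090:9090
# In another terminal: both Proxmox targets should report up == 1
curl -sG 'http://localhost:9090/api/v1/query' --data-urlencode 'query=up{job="proxmox-nodes"}'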
Viewing Proxmox Host Metrics in Grafana
Once you have node-exporter running on your Proxmox hosts and Prometheus scraping their metrics, you can visualize host data in Grafana using the Node Exporter Full dashboard.
How to import the Node Exporter Full dashboard:
- In Grafana, go to the left sidebar and click on Dashboards > Import.
- In the "Import via grafana.com" field, enter the dashboard ID: 1860
- Click Load.
- Select your Prometheus data source.
- Click Import.
After importing, open the Node Exporter Full dashboard. In the Instance dropdown, you should see your Proxmox host(s) (e.g., 192.168.30.10:9100). Select your host to view its metrics (CPU, memory, disk, network, etc.).
If you don't see your host, double-check that (quick commands for these checks follow the list):
- node-exporter is running and accessible on port 9100 from your Prometheus server
- Prometheus is scraping the correct IP:9100
- The firewall allows traffic on port 9100
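Two quick checks for the first points: run the first command on the Proxmox host itself, the second from any other machine on the network (replace PROXMOX_IP).
ss -tlnp | grep 9100                            # node_exporter should be listening on :9100
curl -s http://PROXMOX_IP:9100/metrics | head   # metrics should print, not a connection error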
This dashboard provides a comprehensive view of your Proxmox host's health and performance alongside your Kubernetes cluster metrics.
Monitor Pi-hole
Pi-hole has a web UI and some internal stats, but exporters make it easy to graph in Grafana.
1) Create a Pi-hole App Password
Recent versions of Pi-hole use App Passwords for API access (not a traditional API token):
- Log in to the Pi-hole admin web UI (usually at http://<pihole-ip>/admin).
- Go to Settings > API / Web interface.
- In the App Passwords section, create a new app password (give it a name like "exporter").
- Enable the app password and copy the generated value; this is what you'll use as PIHOLE_PASSWORD in the exporter config.
If you do not see the App Passwords section, update Pi-hole to the latest version. You must be logged in as an admin user to create or manage app passwords.
2) Install a Pi-hole exporter in Kubernetes
Before you create the Deployment, first create a Kubernetes Secret to securely store your Pi-hole app password:
kubectl -n monitoring create secret generic pihole-exporter-secret \
--from-literal=PIHOLE_PASSWORD="YOUR_PIHOLE_APP_PASSWORD"
Replace YOUR_PIHOLE_APP_PASSWORD with the app password you created in the Pi-hole UI.
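You can confirm the Secret holds what you expect before deploying the exporter (this just decodes the value you stored):
kubectl -n monitoring get secret pihole-exporter-secret \
  -o jsonpath='{.data.PIHOLE_PASSWORD}' | base64 --decode; echo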
One commonly used exporter is eko/pihole-exporter. We'll deploy it as a simple Deployment + Service.
Create pihole-exporter.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pihole-exporter
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pihole-exporter
  template:
    metadata:
      labels:
        app: pihole-exporter
    spec:
      containers:
        - name: pihole-exporter
          image: ekofr/pihole-exporter:latest
          ports:
            - containerPort: 9617
          env:
            - name: PIHOLE_HOSTNAME
              value: "192.168.30.20" # <-- Pi-hole IP
            - name: PIHOLE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: pihole-exporter-secret
                  key: PIHOLE_PASSWORD
Apply it:
kubectl apply -f pihole-exporter.yaml
This way, your app password is not stored in source control or visible in the manifest.
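Before wiring it into Prometheus, it's worth checking that the exporter can reach Pi-hole and is serving metrics; a quick port-forward works for that:
kubectl -n monitoring port-forward deploy/pihole-exporter 9617:9617
# In another terminal: you should see pihole_* metrics, not authentication errors
curl -s http://localhost:9617/metrics | grep -i '^pihole' | head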
Expose the exporter with a Service
Create pihole-exporter-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: pihole-exporter
  namespace: monitoring
  labels:
    app: pihole-exporter
spec:
  selector:
    app: pihole-exporter
  ports:
    - name: metrics
      port: 9617
      targetPort: 9617
Apply it:
kubectl apply -f pihole-exporter-service.yaml
3) Tell Prometheus to scrape the exporter
Create pihole-servicemonitor.yaml:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: pihole-exporter
  namespace: monitoring
  labels:
    release: monitoring
spec:
  selector:
    matchLabels:
      app: pihole-exporter
  namespaceSelector:
    matchNames:
      - monitoring
  endpoints:
    - port: metrics
      interval: 30s
Apply it:
kubectl apply -f pihole-servicemonitor.yaml
If the ServiceMonitor doesn't get picked up, double-check the release: monitoring label matches your kube-prometheus-stack release name (monitoring in this post).
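If you're not sure which labels your Prometheus instance selects ServiceMonitors by, you can read it straight off the Prometheus custom resource (a quick check, not specific to any chart version):
kubectl -n monitoring get prometheus \
  -o jsonpath='{.items[*].spec.serviceMonitorSelector}'; echo
# Expect something like {"matchLabels":{"release":"monitoring"}}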
Setting Up Grafana Dashboards
Once your exporters and Prometheus are running, you can visualize your data in Grafana. Here's how to get started with the most useful dashboards for your homelab:
1) Node Exporter Full (Proxmox & Kubernetes nodes)
- In Grafana, go to Dashboards > Import.
- Enter dashboard ID 1860 ("Node Exporter Full") and click Load.
- Select your Prometheus data source and click Import.
- Use the Instance dropdown to select your Proxmox or Kubernetes node (e.g., 192.168.30.10:9100).
This dashboard shows CPU, memory, disk, network, and more for any host running node_exporter.
2) Pi-hole Exporter Dashboard
- Go to Dashboards > Import in Grafana.
- Use dashboard ID 10176 ("Pi-hole Exporter - Grafana Dashboard") or search for "pihole" on grafana.com/dashboards.
- Select your Prometheus data source and import.
- In the dashboard, use the instance dropdown to select your Pi-hole exporter (e.g., pihole-exporter:9617).
You'll see queries over time, blocked percentage, top clients, and more.
3) Kubernetes Cluster Dashboards
If you installed kube-prometheus-stack, you'll get several built-in dashboards for cluster health, pods, and nodes. Look for dashboards like Kubernetes / Compute Resources / Node and Kubernetes / Compute Resources / Pod.
Tips for Exploring Your Data
- Use the Explore tab in Grafana to run ad-hoc PromQL queries (e.g., pihole_domains_blocked or node_cpu_seconds_total); there's a small rate() example after this list.
- Filter by instance to focus on a specific host or exporter.
- Set dashboard time ranges to match your troubleshooting window.
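One detail that trips people up in Explore: most node_exporter metrics are cumulative counters, so wrap them in rate() to get a per-second value. For example (the instance label below reuses the proxmox-1 address from earlier):
rate(node_cpu_seconds_total{mode!="idle"}[5m])
rate(node_network_receive_bytes_total{instance="192.168.30.10:9100"}[5m])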
With these dashboards, you'll have a single pane of glass for your Proxmox hosts, Kubernetes nodes, and Pi-hole DNS stats.
Hardening and "Nice to Have" Improvements
Exposing Grafana with an Ingress
To access Grafana from outside your cluster, you can expose it with an Ingress (just like Argo CD). Here's a basic example using the Kubernetes Ingress API and a cert-manager Certificate.
How to find your ClusterIssuer name:
To list all cert-manager ClusterIssuers in your cluster, run:
kubectl get clusterissuer
Use the name from the output (e.g., letsencrypt-prod) in the Certificate's issuerRef below, or in a cert-manager.io/cluster-issuer Ingress annotation if you prefer annotation-driven certificates:
annotations:
  cert-manager.io/cluster-issuer: letsencrypt-prod
Step 1: Create a Certificate for Grafana
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: grafana-cert
  namespace: monitoring
spec:
  secretName: grafana-tls
  issuerRef:
    name: letsencrypt-prod # or your ClusterIssuer name
    kind: ClusterIssuer
  commonName: grafana.example.com
  dnsNames:
    - grafana.example.com
Apply it with:
kubectl apply -f grafana-cert.yaml
Wait for the certificate to be issued (check with kubectl describe certificate -n monitoring).
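A simple way to wait is to watch the Certificate's READY column (assuming the grafana-cert name from the manifest above):
kubectl -n monitoring get certificate grafana-cert -w   # wait until READY shows True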
Step 2: Create the Ingress for Grafana
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: monitoring
spec:
  rules:
    - host: grafana.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: monitoring-grafana # Use the actual Service name from kubectl get svc -n monitoring
                port:
                  number: 80 # Use the actual Service port
  tls:
    - hosts:
        - grafana.example.com
      secretName: grafana-tls
How to check your Service name and port:
Run:
kubectl get svc -n monitoring
Look for the Service that points to your Grafana pod (often monitoring-grafana) and note the port (usually 80 or 3000). Update your Ingress manifest to match.
Apply it with:
kubectl apply -f grafana-ingress.yaml
Key tips:
- Replace grafana.example.com with your domain.
- Make sure your DNS points to your ingress controller.
- The secretName (e.g., grafana-tls) must match the one in your Certificate.
- The service name/port should match your Grafana Service (often grafana on port 80 or 3000).
After a minute, you should be able to access Grafana securely at your chosen domain with HTTPS.
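A quick external check once DNS points at your ingress controller (replace the hostname with yours):
curl -I https://grafana.example.com
# A 200, or a 302 redirect to /login, means the Ingress and TLS certificate are working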
These are great follow-ups (or a Part 2):
- Add an Ingress for Grafana (and put it behind auth)
- Add Alertmanager (simple âdisk fullâ and ânode downâ alerts first)
- Add Loki for centralized logs
- Add long-term retention or remote storage (if you care)
Lessons Learned (So Far)
- Disk latency causes more "random issues" than CPU.
- Pi-hole performance affects the entire house.
- Kubernetes will happily restart things and you won't notice... unless you're watching.
Monitoring isn't about collecting metrics; it's about understanding your system.
Troubleshooting: Pi-hole Data Not Showing in Grafana
If your Pi-hole info isn't showing up in Grafana or isn't available as a host in the Pi-hole dashboard, check the following:
- Exporter Pod Status: Make sure the pihole-exporter pod is running and not in a CrashLoop or error state:
kubectl -n monitoring get pods
- Service & Endpoints: Confirm the pihole-exporter Service exists and has endpoints:
kubectl -n monitoring get svc,pods,endpoints | grep pihole
- Prometheus Target Discovery:
- In Grafana, go to Connections > Data sources, select your Prometheus data source, then click the Explore tab or look for a Targets or Status section (the exact location may vary by Grafana version).
- Alternatively, access the Prometheus UI directly (often at /prometheus on your cluster or via port-forward) and go to Status > Targets.
- Look for a pihole-exporter job/target. If it's missing or "down," check your ServiceMonitor and Service labels/selectors.
- ServiceMonitor Configuration: Ensure the ServiceMonitor is in the same namespace as the exporter and matches the Service labels.
- Exporter Logs: Check the exporter pod's logs for errors connecting to Pi-hole or authentication issues:
kubectl -n monitoring logs deploy/pihole-exporter
- App Password: Double-check the app password in your Kubernetes Secret matches what you created in Pi-hole.
- Network Access: The exporter pod must be able to reach your Pi-hole instance (check network policies, firewalls, and the PIHOLE_HOSTNAME value).
- Prometheus Scrape: Confirm Prometheus is scraping the exporter endpoint (should see metrics at /metrics).
If all the above are correct, the Pi-hole exporter should appear as an "instance" in the Grafana dashboard dropdown. If not, review the logs and configuration for typos or network issues.
This post is Part 1 of my Homelab Observability series:
- Part 1: Metrics with Grafana + Prometheus (this post)
- Part 2: Alerting with Alertmanager
- Part 3: Logs with Loki
- Part 4: Capacity planning with real data
