Pi-Hole is a network-wide ad blocker. It can protect an entire home from advertisements, and it can even help speed up the network. However, running Pi-Hole in Docker and Kubernetes poses a couple of interesting challenges.
This is one of several posts on containerization of common, useful software.
Before we get started: running an ad blocker like this can be done with most Linux operating systems. A Raspberry Pi is often the preferred approach (thus the name).
If you're new to Raspberry Pi, the popular CanaKits are a great place to start. I prefer to buy the Raspberry Pi 4, power adapter, micro SD cards, and heatsinks separately. Not only is this cheaper, but it reduces e-waste.
Pi-Hole Docker Considerations
The easiest way to install Pi-Hole is to run it in Docker. Docker makes it easy to update or uninstall Pi-Hole, and it keeps parts of the network isolated for security. Even the Raspberry Pi itself can be replaced in just a few minutes. Plus, the configuration can be saved to a networked drive.
Stability is very important when it comes to Pi-Hole. If you haven’t already done so, make sure your home server is highly available.
Devices on the network require a DNS server to connect to the internet. Setting Pi-Hole as the primary DNS server on the router means it must be constantly available. If it goes down, devices may go offline. Yet, using it as the primary DNS server is the only way to get the benefits of Pi-Hole for every device.
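Before pointing the router's DHCP settings at Pi-Hole, it's worth verifying that the resolver actually answers queries. This is a quick manual check, not part of Pi-Hole itself; the address 192.168.0.101 is the Pi-Hole static IP used later in this post:

```shell
# Ask the Pi-Hole resolver (192.168.0.101 in this post's setup) to
# resolve a known domain. A timeout or SERVFAIL here means DHCP
# clients pointed at this server would effectively lose DNS.
nslookup example.com 192.168.0.101
```

If this hangs or fails while the Pi is otherwise reachable, fix that before making Pi-Hole the primary DNS server for the whole network.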
Some users may decide that they prefer to run Pi-Hole directly on the host. They might argue that Docker provides additional points of potential failure. I prefer to run everything in Kubernetes because it gives me convenient observability tools. This means I can monitor everything remotely.
It may be tempting to run Pi-Hole in host networking mode. I prefer to avoid that whenever possible…
Kubernetes Helm Chart
As of this writing, there is no official Kubernetes Helm chart (automated installer) for Pi-Hole. However, I was able to get the Pi-Hole Helm chart from mojo2600 working well.
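Assuming Helm 3, installation looks roughly like the following. The repository URL and release/namespace names are from the chart's README and my own setup; verify them against the README before running:

```shell
# Add the mojo2600 chart repository (URL per the chart's README) and
# install Pi-Hole into its own namespace, using the values.yaml shown
# at the end of this post.
helm repo add mojo2600 https://mojo2600.github.io/pihole-kubernetes/
helm repo update
helm install pihole mojo2600/pihole \
  --namespace pihole --create-namespace \
  --values values.yaml
```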
In the README, there is a reference to MetalLB. This is a load balancer for bare-metal clusters. For running a home server, this is an excellent way to assign a static IP without resorting to hostNetwork or NodePorts.
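As a sketch of what that looks like: newer MetalLB releases (v0.13+) are configured with CRDs rather than the older ConfigMap. The address range below is made up for illustration; pick a range outside your router's DHCP pool:

```yaml
# Hypothetical MetalLB layer 2 configuration for a home LAN.
# The pool must not overlap the router's DHCP range.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: home-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.0.100-192.168.0.110
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: home-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - home-pool
```

With a pool like this in place, a Service of `type: LoadBalancer` can request a specific address via `loadBalancerIP`, which is how Pi-Hole gets its stable IP below.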
Custom DNS Names (DNSMasq)
When accessing the home server(s), it’s best if the traffic never leaves the local network. This is faster (no requests go out to the internet) and more secure (no data leaves the network).
A typical way of doing this is to access the server via its LAN IP address when connected to the home network, and via the public domain name otherwise. However, this complicates various other tasks. Software has to be written to use whichever URL is appropriate to the context, and you have to remember to type the right URL or IP address. In some cases this can even become a real problem, when server software is not written to support both simultaneously.
Instead, it’s possible to use Pi-Hole to run dnsmasq. This makes it so that queries from inside the network receive the local IP address of the server instead of the public IP address. See below for configuration files.
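Conceptually, the dnsmasq side of this is a single line mapping a domain to a LAN address. In the mojo2600 chart this is set through values.yaml (shown at the end of this post); a standalone Pi-Hole install accepts the same line as a drop-in file. The filename here is illustrative; the domain and address match the example that follows:

```
# /etc/dnsmasq.d/05-custom-dns.conf (illustrative filename)
# Answer queries for this domain with the LAN address
# instead of the public IP.
address=/house.snowy-cabin.com/192.168.0.100
```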
In the cabin, Home Assistant runs on a server on the local network at 192.168.0.100. This is accessible to the outside world as house.snowy-cabin.com. Normally, users typing this URL in the browser would receive back the public IP address. This is described in creating a server with a custom domain name. However, when inside the network, we can run `nslookup house.snowy-cabin.com` to see:

```
Server:   192.168.0.101
Address:  192.168.0.101#53

Name:     house.snowy-cabin.com
Address:  192.168.0.100
```
- The Server is the Pi-Hole instance, which has a static IP address (192.168.0.101)
- It tells us that the Name (house.snowy-cabin.com) lives at the local Address (192.168.0.100)
If we instead ask Google's public DNS about this server (`nslookup house.snowy-cabin.com 8.8.8.8`):

```
Server:   8.8.8.8
Address:  8.8.8.8#53

Non-authoritative answer:
Name:     house.snowy-cabin.com
Address:  <the public IP>
```
Queries coming from the outside world instead receive the public IP. The domain snowy-cabin.com is registered with AWS Route53, which plays the same role for the outside world that Pi-Hole plays inside the network: it provides the IP address in response to DNS queries. Because it answers queries from the outside world, it responds with the public IP address.
Using Switchboard, HTTPS and client IP addresses are supported natively. This means that I can type `https://house.snowy-cabin.com` in the browser at home and have a secure, encrypted connection to the local server. The server is also able to identify my LAN IP address as the source of the request. This enables features like Home Assistant’s trusted networks and trusted users: because Home Assistant can identify the request as coming from the LAN, it is able to bypass the login.
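For reference, the Home Assistant side of this is the trusted_networks auth provider. A minimal sketch in configuration.yaml, assuming the 192.168.0.0/24 LAN used throughout this post:

```yaml
# configuration.yaml sketch: allow requests from the LAN to bypass login.
homeassistant:
  auth_providers:
    - type: trusted_networks
      trusted_networks:
        - 192.168.0.0/24
      allow_bypass_login: true
```

This only works as intended if Home Assistant sees the real client IP, which is exactly what the split-horizon DNS setup above preserves.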
The Pi-Hole admin tool should be easy to access. I like using Home Assistant as a hub for the various services on my network, which requires embedding the admin interface in an iframe. This is tricky because Pi-Hole configures `lighttpd` to DENY cross-origin requests.
A workaround is to edit `/etc/lighttpd/lighttpd.conf` from within the Docker container:

```shell
sed -i "s/\"DENY\"/\"ALLOW\"/" /etc/lighttpd/lighttpd.conf \
  && service lighttpd restart
```
Unfortunately, this does not survive restarts. Attempting to override the value in `/etc/lighttpd/external.conf` has thus far been unsuccessful.
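One way to reapply the change automatically (a sketch, not something the chart provides out of the box) is a postStart lifecycle hook on the Pi-Hole container, assuming the chart or a patch lets you set one:

```yaml
# Sketch: re-apply the lighttpd edit each time the container starts,
# since the file is reset on restart.
lifecycle:
  postStart:
    exec:
      command:
        - sh
        - -c
        - sed -i 's/"DENY"/"ALLOW"/' /etc/lighttpd/lighttpd.conf && service lighttpd restart
```

Note that postStart runs in parallel with the container's entrypoint, so lighttpd may not be up yet; in practice a short sleep before the sed may be needed.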
My Deployment File(s)
Here is the `values.yaml` I use with the Helm chart:

```yaml
persistentVolumeClaim:
  enabled: true

admin:
  existingSecret: pi-hole-secrets

serviceTCP:
  loadBalancerIP: 192.168.0.101
  annotations:
    metallb.universe.tf/allow-shared-ip: pi-hole
  type: LoadBalancer

serviceUDP:
  loadBalancerIP: 192.168.0.101
  annotations:
    metallb.universe.tf/allow-shared-ip: pi-hole
  type: LoadBalancer

dnsmasq:
  customDnsEntries:
    - address=/house.snowy-cabin.com/192.168.0.100
```
The `persistentVolumeClaim` refers to an NFS volume, configured via the nfs-client-provisioner chart. The `admin.existingSecret` uses a pre-existing Kubernetes secret as the login password for the Pi-Hole web UI.
Other than that, the configuration more or less matches the example in the helm chart.