Running Home Assistant on Kubernetes Instead of the Usual Docker Path

I moved Home Assistant into Kubernetes because my home lab had already crossed the line from “one server running containers” into “a small platform.” The usual Home Assistant advice is good advice: use Home Assistant OS if you want the appliance experience, or use Home Assistant Container if you want to manage the operating system and run Home Assistant as one container. I did not move to Kubernetes because that is simpler. It is not.

I moved because I wanted Home Assistant to follow the same operational model as the rest of my lab: GitOps-managed manifests, reviewable changes, Kubernetes health checks, shared ingress, centralized metrics, and explicit ownership of the companion services that used to sit next to Home Assistant in Docker Compose.

The trick was admitting what Home Assistant really is in my house. It is not just a web app. It is a LAN appliance, a device discovery point, an MQTT hub, a Matter controller client, an ESPHome workstation, and the center of a pile of local callbacks. A normal Kubernetes ClusterIP service is not enough for that.

The Standard Home Assistant Choices

Home Assistant announced in 2025 that the Core and Supervised installation methods were deprecated for end users, leaving Home Assistant OS and Home Assistant Container as the supported paths going forward. The important part for this design is that Container is still a supported method, but it is not the same experience as Home Assistant OS.

Home Assistant OS is the appliance model. It gives Home Assistant control over the operating system, the Supervisor, backups, updates, and apps, formerly called add-ons. That is the experience most people should choose when they want Home Assistant to be the primary purpose of a device or VM.

Home Assistant Container is the “I manage the platform” model. It runs the Home Assistant Core application in an OCI container. It is commonly run with Docker or Docker Compose, but the container itself is not Docker-specific. The tradeoff is that there is no Supervisor managing add-ons for you. If you want MQTT, ESPHome, Matter Server, file sharing, or other adjacent services, you run and configure those containers yourself.

Kubernetes fits under that second model. I am still running Home Assistant as a container. I am just using Kubernetes as the container platform instead of a single-host Docker Compose project.

That distinction matters. I did not try to run Home Assistant OS inside Kubernetes. I did not try to run the Supervisor as a nested platform manager. Kubernetes became the supervisor for the workload shape, and Home Assistant remained Home Assistant Container.

Why I Did Not Just Keep Docker Compose

Docker Compose worked. That is worth saying plainly. It is still a very good shape for Home Assistant when one host owns the whole stack.

The problem was not that Compose failed. The problem was that everything else around it had changed. My ingress, certificates, observability, image automation, and service documentation had moved into Kubernetes and GitOps. Home Assistant was becoming the exception that still needed separate operational muscle memory.

Before the migration, the Compose stack had the expected pattern:

  • Home Assistant itself.
  • MQTT broker.
  • ESPHome dashboard.
  • Matter Server.
  • Device-specific bridge containers.
  • A few services that expected localhost-style access because they used to
    share host networking.
  • Persistent config directories on the host.

That is manageable, but it creates two sources of truth. The Kubernetes cluster knows about most of the lab. Compose knows about Home Assistant. The router, ingress, and monitoring know about both. I wanted one place where the desired state lived.

Kubernetes gave me:

  • Declarative manifests reviewed in Git.
  • Flux reconciliation instead of hand-run container updates.
  • Image policy automation for Home Assistant and sidecars.
  • Namespaced secrets and RBAC boundaries.
  • Central logs and metrics.
  • Consistent ingress patterns.
  • A rollback shape that is documented next to the workload.

It also gave me more ways to break things. That is the tradeoff. Home Assistant on Kubernetes is only worth it if the platform already exists and is already part of how you operate the lab.

The Shape I Ended Up With

The deployment is a single Kubernetes Deployment with one replica and a Recreate strategy. That is deliberate. Home Assistant owns stateful config, local device assumptions, and one LAN identity. I do not want two copies of the pod racing over the same config directory or advertising the same presence on the network.
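
The skeleton of that shape looks roughly like the sketch below; the namespace, names, and image tag are assumptions rather than my exact manifests.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: home-assistant
      namespace: home-automation
    spec:
      replicas: 1              # exactly one pod owns the config and the LAN identity
      strategy:
        type: Recreate         # stop the old pod before starting the new one
      selector:
        matchLabels:
          app: home-assistant
      template:
        metadata:
          labels:
            app: home-assistant
        spec:
          containers:
            - name: home-assistant
              image: ghcr.io/home-assistant/home-assistant:stable
              ports:
                - containerPort: 8123
          # companion containers, volumes, and the LAN attachment are covered below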

Inside that one pod, I run multiple containers:

  • Home Assistant.
  • Matter Server.
  • ESPHome.
  • MQTT.
  • Device bridge containers.
  • Small proxy sidecars for compatibility with old port expectations.
  • A path proxy for the ESPHome dashboard.

That looks unusual if you expect one container per pod. In this case, the multi-container pod is the compatibility boundary. The old Compose deployment had several services sharing the host network and assuming local connectivity. Putting them in one pod preserves the important part of that behavior: they share one network namespace and can still reach each other through localhost.

The first goal was not to redesign every integration at once. The first goal was to move the stack into Kubernetes without changing the behavior Home Assistant and the devices already depended on.
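
A sketch of what that preserves, assuming the usual Mosquitto and ESPHome images with their default ports (tags are illustrative); the point is that the containers are siblings in one network namespace.

    containers:
      - name: home-assistant
        image: ghcr.io/home-assistant/home-assistant:stable
        # the MQTT integration can keep pointing at 127.0.0.1:1883,
        # just as it did when everything shared host networking
      - name: mosquitto
        image: eclipse-mosquitto:2
        ports:
          - containerPort: 1883
      - name: esphome
        image: ghcr.io/esphome/esphome:stable
        ports:
          - containerPort: 6052   # dashboard, reachable as localhost:6052 inside the pod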

Why a Normal Kubernetes Service Was Not Enough

Home Assistant needs to participate in local discovery. The official Zeroconf integration documentation says Home Assistant scans the network for supported devices and can also make Home Assistant discoverable to other services. It chooses which network interfaces to use through the Network integration, and IPv6 becomes part of that behavior when the selected interfaces have IPv6 enabled.

That matters for a home automation controller. Some integrations are not “connect to this HTTPS URL and call it a day.” They rely on mDNS, SSDP, callbacks, local device reachability, or a stable source identity. Matter makes that even more sensitive because local IPv6 and multicast behavior need to be boring.

So I did not put Home Assistant only behind a ClusterIP, NodePort, or shared ingress VIP. Those are useful for web traffic, but they do not make the pod a first-class participant on the LAN.

Instead, I gave the pod a secondary LAN-facing interface with Multus and a bridge CNI attachment. Cilium remains the normal cluster CNI. Multus adds a second interface to the Home Assistant pod. That second interface is attached to the host’s LAN bridge and receives a reserved LAN address.

Conceptually, the shape is:

Kubernetes pod network
        |
   normal CNI interface
        |
Home Assistant pod
        |
   secondary LAN interface
        |
host LAN bridge
        |
home LAN

That gives Home Assistant its own LAN presence while still letting Kubernetes own lifecycle and configuration. I can route normal HTTP traffic through ingress, but Home Assistant itself is not hidden behind only an HTTP proxy.
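
A sketch of the attachment definition, assuming the bridge CNI plugin with static IPAM; the bridge name and addresses are stand-ins for my LAN, not values to copy.

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: lan-bridge
      namespace: home-automation
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "type": "bridge",
          "bridge": "br0",
          "ipam": {
            "type": "static",
            "addresses": [
              { "address": "192.168.1.50/24", "gateway": "192.168.1.1" }
            ]
          }
        }

The pod template then requests that attachment with a k8s.v1.cni.cncf.io/networks: lan-bridge annotation, which is how Multus knows to add the second interface alongside the normal Cilium one.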

The GitOps Boundary

Everything that describes the workload lives in Git: the namespace, deployment, network attachment, services, endpoint routing, ingress, image policies, and observability configuration.

Secrets do not live in Git. The manifests reference Kubernetes Secrets by name, but the secret values are supplied through the cluster’s secret-management path. That keeps the public shape reviewable without turning the repository into a credential store.

I also kept the first storage step conservative. Instead of redesigning Home Assistant storage during the migration, Kubernetes mounts the existing configuration and data directories from the node. That made rollback simple: stop the Kubernetes workload and restart the old container stack against the same data.

That host-path approach is not my ideal final storage model. It pins the workload to the node that has the data and the LAN bridge. But it made the migration honest. I changed one major variable at a time: the scheduler and deployment system changed, while the application data stayed where Home Assistant already expected it.
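
In pod-spec terms, the conservative step is a hostPath mount plus a node pin; the path and node name below are placeholders for wherever the old Compose data already lives.

    nodeSelector:
      kubernetes.io/hostname: node-with-the-data     # the pod must follow the data
    containers:
      - name: home-assistant
        volumeMounts:
          - name: hass-config
            mountPath: /config                       # where Home Assistant expects its config
    volumes:
      - name: hass-config
        hostPath:
          path: /srv/home-assistant/config           # same directory the Compose stack used
          type: Directory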

Replacing Add-ons With Sidecars

The biggest mental shift is add-ons.

In Home Assistant OS, add-ons are managed from inside Home Assistant through the Supervisor. Under the hood, those apps are container images, but the Supervisor owns the lifecycle and integration experience.

In Home Assistant Container, there is no Supervisor panel managing those apps. That is true whether the container runtime is Docker Compose or Kubernetes. In my Kubernetes setup, I replaced the add-on habit with explicit containers:

  • MQTT is a normal broker container in the same pod.
  • ESPHome is a normal ESPHome container in the same pod.
  • Matter Server is a normal container with its own data mount.
  • Device bridge services are normal containers with explicit secrets and
    resource limits.
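
As a sketch of what "explicit" looks like for the last two items in that list, with the bridge image and secret name clearly hypothetical:

    - name: mosquitto
      image: eclipse-mosquitto:2
      resources:
        requests: { cpu: 50m, memory: 64Mi }
        limits: { memory: 128Mi }
    - name: device-bridge
      image: registry.example.com/lab/device-bridge:1.4.2   # hypothetical vendor bridge
      envFrom:
        - secretRef:
            name: device-bridge-credentials                 # value supplied outside Git
      resources:
        requests: { cpu: 25m, memory: 32Mi }
        limits: { memory: 64Mi }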

The upside is that nothing is magical. I can read the Deployment and know what is running. I can pin images, set resource requests, inspect logs, and roll back through Git.

The downside is that Home Assistant is no longer the UI where these companion services are installed or updated. Kubernetes owns them. If an add-on has a configuration screen in Home Assistant OS, I need to understand the equivalent container image, environment variables, config files, ports, volumes, and update policy.

That is fine for me because it matches the rest of the lab. It is not a better default for someone who wants Home Assistant to be the platform.

How I Enabled HACS

When I say “HACS” here, I mean the Home Assistant Community Store, not Home Assistant add-ons. HACS installs custom integrations and frontend resources inside the Home Assistant configuration directory. It is not a replacement for the Supervisor and it is not an add-on manager for containers.

That distinction is useful in Kubernetes. HACS fits naturally because Home Assistant still has a persistent /config directory inside the container. The custom integrations HACS manages live under custom_components/ in that configuration directory.

My Kubernetes approach is:

  1. Keep the Home Assistant configuration directory persistent.
  2. Install HACS into that config directory, so the custom_components/hacs
    files survive pod restarts.
  3. Restart Home Assistant through the Kubernetes rollout path.
  4. Add HACS as an integration from the Home Assistant UI.
  5. Treat HACS-managed integrations as application-level state, not Kubernetes
    manifests.

That last point is important. I do not want Kubernetes to pretend it owns every file HACS downloads. HACS owns its custom integration content inside the Home Assistant config volume. Kubernetes owns the container, mounts, secrets, networking, and lifecycle.

If I wanted stricter reproducibility, I could vendor specific custom integrations into Git or mirror them as init-container artifacts. I have not needed that yet. For now, I keep HACS as Home Assistant state and make sure the configuration directory is backed up.
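
If I ever do want that stricter shape, the likely sketch is an init container that copies pinned integrations into the config volume before Home Assistant starts; everything here, including the artifact image, is hypothetical.

    initContainers:
      - name: vendor-custom-components
        image: registry.example.com/lab/hass-custom-components:2025.1   # hypothetical pinned artifact
        command:
          - sh
          - -c
          - mkdir -p /config/custom_components && cp -a /vendored/. /config/custom_components/
        volumeMounts:
          - name: hass-config
            mountPath: /config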

How ESPHome Fits

ESPHome was the part I did not want to bury behind a generic ingress rule and hope for the best.

The ESPHome documentation notes that the dashboard can run in Docker, and that host networking is needed for online status indicators in the normal Docker case. That is a clue: ESPHome cares about local network behavior, not only serving a web page.

In my Kubernetes deployment, ESPHome runs as a container in the same pod as Home Assistant. It has:

  • Its own persistent config mount.
  • Access to the device interfaces it needs for local flashing workflows.
  • The same LAN-facing network namespace as Home Assistant.
  • A stable dashboard port.
  • Credentials supplied as a Kubernetes Secret.

For browser access, I added a small path proxy sidecar. The ESPHome dashboard expects to live at its own root path, but I wanted it reachable under a path on the Home Assistant route. The proxy listens on an internal port, forwards to ESPHome on localhost, preserves websocket upgrade headers, and rewrites the basic asset paths so the dashboard works below a path prefix.

Conceptually:

Browser
  |
Ingress path for ESPHome
  |
path proxy sidecar
  |
localhost:ESPHome dashboard

That lets ESPHome stay operationally close to Home Assistant while still being clear in Kubernetes: it is a separate container, with separate resources, separate secrets, and separate logs.
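
The proxy itself is nothing exotic. Below is a sketch of it as an nginx config carried in a ConfigMap; the path prefix and internal port are choices made up for this example, 6052 is the usual ESPHome dashboard port, and the asset-path rewriting details are omitted.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: esphome-path-proxy
      namespace: home-automation
    data:
      default.conf: |
        server {
          listen 8081;                               # internal port the ingress path targets
          location /esphome/ {
            proxy_pass http://127.0.0.1:6052/;       # ESPHome dashboard in the same pod
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;  # keep the dashboard websockets alive
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
          }
        }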

Ingress Is for Humans, LAN Identity Is for Devices

One of the mistakes I wanted to avoid was treating ingress as the answer to every network problem.

Ingress is excellent for browser and API traffic. It gives me TLS, routing, consistent access names, and a common way to expose services internally or externally. Home Assistant’s web UI belongs there.
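
For the web UI, the ingress piece is ordinary. The host name, class, and TLS secret below are placeholders; 8123 is Home Assistant's default port, and 8081 is the path proxy sidecar from the ESPHome section.

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: home-assistant
      namespace: home-automation
    spec:
      ingressClassName: nginx
      tls:
        - hosts: ["hass.lab.example.com"]
          secretName: hass-tls
      rules:
        - host: hass.lab.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: home-assistant
                    port:
                      number: 8123
              - path: /esphome
                pathType: Prefix
                backend:
                  service:
                    name: home-assistant
                    port:
                      number: 8081    # ESPHome path proxy sidecar

Home Assistant also has to be told to trust the proxy in its own http configuration (use_x_forwarded_for and trusted_proxies), which is application config, not Kubernetes config.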

Device traffic is different. Some devices discover Home Assistant. Some need Home Assistant to discover them. Some integrations store callback URLs. Some local protocols care about the interface or address Home Assistant uses. Some companion services expect raw TCP ports.

So I split the responsibilities:

  • Ingress handles user-facing HTTP and HTTPS.
  • The pod’s LAN interface handles local discovery and device-facing behavior.
  • Kubernetes Services give cluster-internal callers stable names.
  • Small proxy sidecars preserve legacy ports when a companion container listens
    on a different internal port.

That split made the migration easier to reason about. If the Home Assistant UI loads but discovery fails, I know where to look. If local discovery works but a remote browser route fails, I know that is ingress or DNS, not the pod’s LAN identity.

Rollouts Are Different for a LAN Appliance

For a stateless web app, I normally want rolling updates. Start the new pod, wait for readiness, then stop the old pod.

For this Home Assistant shape, I use Recreate. There should only be one active pod with the LAN identity and the writable config. That means updates briefly stop the old pod before the new one starts.

To make that less painful, image pre-pulling matters. I do not want Home Assistant down while the node downloads several large images. The better sequence is:

  1. Resolve the new desired images from Git.
  2. Pull them onto the target node ahead of time.
  3. Let Flux apply the deployment update.
  4. Restart the singleton pod with cache hits instead of waiting on downloads.
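
One low-tech way to get those cache hits is a short-lived Job that references the new tag on the target node and exits immediately, so the kubelet pulls the layers ahead of the real rollout; the tag and node name here are illustrative.

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: prepull-home-assistant
      namespace: home-automation
    spec:
      template:
        spec:
          nodeSelector:
            kubernetes.io/hostname: node-with-the-data   # pull onto the node that runs the singleton
          restartPolicy: Never
          containers:
            - name: prepull
              image: ghcr.io/home-assistant/home-assistant:2025.1.0   # the new desired tag
              command: ["sh", "-c", "exit 0"]                         # do nothing, just force the pull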

This is one of the places Kubernetes is both more powerful and less forgiving than Compose. Kubernetes gives me the tools to build a clean rollout gate, but I have to design it. The default rolling-update instinct is wrong for a pod that owns one LAN address and shared state.

Observability Changed the Experience

Once Home Assistant ran in Kubernetes, it became part of the lab’s normal observability surface.

I can see pod restarts, container resource usage, logs by container, ingress traffic, and Home Assistant metrics in the same place as the rest of the cluster. The Home Assistant Prometheus endpoint is scraped through the metrics pipeline with a token mounted only where it is needed. Logs from Home Assistant, ESPHome, MQTT, Matter Server, and bridge containers are separated by container name instead of being a pile of Compose logs on one host.
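
If the metrics pipeline is prometheus-operator shaped, the scrape is roughly a ServiceMonitor pointed at Home Assistant's /api/prometheus endpoint with the long-lived access token supplied as a Secret; the label, port, and secret names are assumptions.

    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      name: home-assistant
      namespace: home-automation
    spec:
      selector:
        matchLabels:
          app: home-assistant
      endpoints:
        - port: http                      # named Service port for 8123
          path: /api/prometheus           # exposed by Home Assistant's prometheus integration
          authorization:
            credentials:
              name: hass-metrics-token    # long-lived access token, supplied outside Git
              key: token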

That did not make Home Assistant magically more reliable. It made failures easier to classify:

  • Home Assistant process problem.
  • Companion container problem.
  • LAN discovery problem.
  • Ingress problem.
  • Secret/config problem.
  • Node or storage problem.

That classification is the main operational win.

What I Would Tell Someone Else

I would not recommend Kubernetes as the default Home Assistant install.

If you want Home Assistant to manage the full appliance experience, use Home Assistant OS. If you want a simple server-managed container, use Home Assistant Container with Docker or Compose. Both are easier to support and closer to the standard path.

Kubernetes starts making sense when all of these are true:

  • You already run Kubernetes for other home-lab services.
  • You already have GitOps or a similar deployment workflow.
  • You are comfortable owning the add-on replacements yourself.
  • You understand your local discovery and callback requirements.
  • You can provide a real LAN identity when integrations need it.
  • You have a rollback path that does not depend on wishful thinking.

For my lab, the migration was worth it because Home Assistant stopped being the special case. It became another reviewed, observable, reconciled workload, while still keeping the LAN behavior that a home automation controller needs.

The important part was not “put Home Assistant in Kubernetes.” The important part was “do not pretend Home Assistant is only a web app.” Once I treated it as a LAN appliance with a Kubernetes control plane around it, the design became much clearer.
