Giving Plex on Kubernetes a Real LAN IP with br0, Multus, and GitOps
This is the pattern we used to migrate Plex from Docker Compose to Kubernetes
without taking away the LAN identity that clients already knew. Plex did not
simply move behind a normal Kubernetes Service. Instead, the Plex pod received a
real 192.168.1.x address on the home LAN through br0, Multus, and the CNI
bridge plugin.
The example implementation gives the pod 192.168.1.50/24 on a secondary
interface named lan0, keeps Cilium as the primary cluster network, and still
lets internal nginx and Cloudflare Tunnel ingress route to Plex. The same shape
can be reused for other LAN-appliance workloads, but it should be used
deliberately because it places a pod directly on the LAN.
Why Plex Wants More Than a ClusterIP
Most Kubernetes workloads are happy behind a ClusterIP, Ingress, or
LoadBalancer Service. Plex is awkward because it is both a web application and
a LAN appliance. Local clients expect to discover it, connect to the same
long-lived address, and sometimes use ports that do not map cleanly to a normal
HTTP ingress path.
In this migration pattern, Plex previously lived on the same home-lab host with
a LAN identity already understood by clients and the router: 192.168.1.50.
The goal was not just to run a container under Kubernetes. The goal was to
preserve that LAN identity while still letting GitOps own the workload.
Kubernetes offers several possible shapes for this:
- hostNetwork: true, which makes the pod share the node’s network namespace.
- A Cilium or MetalLB LoadBalancer VIP, which advertises a Service address on the LAN.
- A secondary pod interface with Multus, where the pod gets its own LAN address.
For this migration, the secondary interface is the clean fit. Plex keeps an actual LAN IP inside its own pod network namespace, while Cilium continues to own normal Kubernetes pod networking.
Architecture
The resulting shape is:
- Cilium remains the primary CNI for the pod’s default Kubernetes interface.
- Multus is installed as a CNI shim so pods can request extra network attachments.
- The bridge CNI attaches a second interface to the host bridge br0.
- The Plex pod annotation requests the plex-br0 network and names the resulting interface lan0.
- The plex-br0 NetworkAttachmentDefinition assigns 192.168.1.50/24 with static IPAM.
- Plex advertises http://192.168.1.50:32400/ through ADVERTISE_IP.
- Internal and public ingress paths can still target Plex by using a Service plus EndpointSlice that points at 192.168.1.50:32400.
The important distinction is that the LAN address is not a Kubernetes Service VIP. It is on the pod itself:
LAN 192.168.1.0/24
|
br0 on k8s-node-1
|
bridge CNI attachment
|
plex pod lan0 = 192.168.1.50/24
That makes Plex look like a first-class LAN host while still running as a Kubernetes Deployment.
Multus LAN IP vs. Cilium Service VIP
This migration uses both ideas, but for different jobs.
Plex’s address, 192.168.1.50, is a pod-owned LAN address. Multus asks the bridge
CNI plugin to add a second interface to the Plex pod and attach that interface
to br0. The address is inside the Plex network namespace, so Plex can bind to
it and advertise it to clients.
The internal ingress address, 192.168.1.20, is a Cilium L2-announced Service VIP.
Cilium answers for that VIP on the LAN and forwards traffic to the ingress
controller Service. That is a good shape for shared HTTP ingress, but it is not
the same as giving Plex itself a LAN identity.
That distinction matters for apps like Plex:
- A Service VIP is a virtual load-balancer address.
- A Multus LAN IP is an address assigned to the workload.
- Plex discovery and client behavior are easier to reason about when Plex advertises the address that is actually on its own interface.
- Ordinary web apps should usually use ClusterIP, Gateway, Ingress, or a LoadBalancer VIP instead of a direct LAN pod interface.
Prerequisites
Before applying this pattern, confirm the node and cluster have the following:
- A Linux bridge on the LAN. In this example, the bridge name is br0.
- A reserved, unused LAN address for Plex. In this example, that address is 192.168.1.50.
- A Kubernetes node selector or scheduling constraint that pins Plex to the node with br0 and the media paths. In this example, that node is k8s-node-1.
- Cilium configured so Multus can coexist with it.
- Multus installed on every node that might run the pod.
- The bridge CNI plugin installed on the node.
- The static Plex IP excluded from DHCP or reserved in the network source of truth. Static CNI IPAM will not ask your router whether the address is safe.
- A rollback path for the old Plex instance. Here, the preserved rollback command is:
docker compose -f /opt/plex/docker-compose.yml up -d
Also check that only one Plex process will write to the same config directory at
a time. This migration reuses /opt/plex/config as a host path, so the Compose
container must be stopped before the Kubernetes pod is scaled up.
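A quick pre-flight pass can confirm most of these requirements at once. This is a minimal sketch that assumes SSH access to k8s-node-1 and that nothing should currently be answering for 192.168.1.50:

# Bridge and bridge CNI plugin exist on the target node
ssh k8s-node-1 'ip -br link show br0 && test -x /opt/cni/bin/bridge && echo "bridge plugin ok"'

# The reserved Plex address should be silent before cutover
ping -c 1 -W 1 192.168.1.50 && echo "WARNING: 192.168.1.50 is already answering"

# The Multus CRD is present in the cluster
kubectl get crd network-attachment-definitions.k8s.cni.cncf.io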
Building br0 from Scratch
The Kubernetes manifests assume the node already has a bridge named br0 that
is connected to the physical LAN. On k8s-node-1, that bridge already existed
because LXC containers and host services were using it. If you are starting from
a plain Linux server, create the bridge first and make the host’s LAN address
live on the bridge rather than directly on the physical NIC.
The target shape is:
physical NIC, for example eno1
|
br0
|
host IP, for example 192.168.1.10/24
A minimal Netplan-style example looks like this:
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: false
      dhcp6: false
  bridges:
    br0:
      interfaces:
        - eno1
      addresses:
        - 192.168.1.10/24
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses:
          - 192.168.1.1
      parameters:
        stp: false
        forward-delay: 0
For a server that gets its host address from DHCP, the bridge can use dhcp4: true instead of static addresses and routes; the important point is still
that the host address belongs to br0.
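As a sketch, the DHCP variant of the same bridge looks roughly like this, with eno1 still standing in for whatever your physical NIC is called:

network:
  version: 2
  ethernets:
    eno1:
      dhcp4: false
      dhcp6: false
  bridges:
    br0:
      interfaces:
        - eno1
      dhcp4: true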
Apply this from an attended console or out-of-band management path. A bad bridge config can disconnect SSH. After applying it, verify:
ip -br link show br0
ip -br addr show br0
bridge link
ip route
Then confirm the node can still reach the LAN gateway, DNS, and the Kubernetes
API. Only after br0 is stable should you install Multus or schedule a
LAN-present workload.
Installing the Bridge CNI Plugin
Multus does not implement the bridge attachment itself. It dispatches the
request to the CNI plugin named in the NetworkAttachmentDefinition. For this
post, that plugin is bridge, and the binary must exist on the node where Plex
runs, normally under /opt/cni/bin/bridge.
Check the node before cutover:
ssh k8s-node-1 'test -x /opt/cni/bin/bridge && ls -l /opt/cni/bin/bridge'
ssh k8s-node-1 'ls -1 /opt/cni/bin | sed -n "1,40p"'
If the binary is missing, install the standard CNI plugins for your
distribution or node bootstrap path. Do not work around this by changing the
Plex pod to hostNetwork; that changes the security model and loses the
separate pod-owned LAN identity that this design is trying to preserve.
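If the plugins do need to be installed manually, the reference CNI plugins ship as a single tarball from the containernetworking/plugins releases. A minimal sketch, where the version and architecture are assumptions to verify against the upstream release page:

# Pick a release that matches your cluster's CNI expectations.
CNI_VERSION=v1.5.1
ARCH=amd64
curl -fsSL "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-${ARCH}-${CNI_VERSION}.tgz" \
  | sudo tar -xz -C /opt/cni/bin
ls -l /opt/cni/bin/bridge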
GitOps Layout
The relevant manifests are split into platform and app packages:
- config/kubernetes/platform/multus/multus.yaml
- config/kubernetes/platform/multus/kustomization.yaml
- config/kubernetes/platform/cilium/helmrelease.yaml
- config/kubernetes/apps/plex/network-attachment-definition.yaml
- config/kubernetes/apps/plex/deployment.yaml
- config/kubernetes/apps/plex/service.yaml
- config/kubernetes/apps/plex/endpointslice.yaml
- config/kubernetes/apps/plex/dns-records.yaml
- config/kubernetes/apps/plex/ingress.yaml
- config/kubernetes/apps/plex/ingress-public-cloudflare.yaml
- config/kubernetes/apps/plex/kustomization.yaml
The live app kustomization includes ../plex from
config/kubernetes/apps/live/kustomization.yaml. The live platform
kustomization includes Multus from
config/kubernetes/platform/live/kustomization.yaml.
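The app kustomization itself is not reproduced in this post. Under the assumptions above, a minimal sketch of config/kubernetes/apps/plex/kustomization.yaml would simply list the app manifests and pin the namespace used by the kubectl commands later on:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: plex
resources:
  - network-attachment-definition.yaml
  - deployment.yaml
  - service.yaml
  - endpointslice.yaml
  - dns-records.yaml
  - ingress.yaml
  - ingress-public-cloudflare.yaml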
Cilium Must Leave Room for Multus
Cilium is still the primary network. The key setting in
config/kubernetes/platform/cilium/helmrelease.yaml is:
values:
  cni:
    exclusive: false
That prevents Cilium from taking exclusive ownership of the host CNI config in a
way that would break the Multus shim. The same Helm values also pin Cilium’s
LAN-facing device to br0 for this cluster:
values:
  devices: br0
  nodePort:
    directRoutingDevice: br0
That global br0 pin is a single-node assumption. If your cluster has multiple
nodes and they do not all expose the LAN through the same bridge name, do not
copy this blindly. Either standardize the bridge name on every node or move to a
device selection policy that matches the actual fleet.
Installing Multus
The Multus platform package in this repo installs the
NetworkAttachmentDefinition CRD, RBAC, a multus ServiceAccount, and the
kube-multus-ds DaemonSet in kube-system.
The DaemonSet writes a Multus shim config to the host:
{
  "cniVersion": "0.3.1",
  "name": "multus-cni-network",
  "type": "multus-shim",
  "logLevel": "verbose",
  "logToStderr": true,
  "clusterNetwork": "/host/etc/cni/net.d/05-cilium.conflist"
}
That tells Multus to delegate the primary pod network to Cilium, while still
allowing pods to ask for additional interfaces through
NetworkAttachmentDefinition objects.
Render the package before applying it:
kubectl kustomize config/kubernetes/platform/multus
kubectl kustomize config/kubernetes/platform/live
After GitOps reconciles, check the DaemonSet:
kubectl -n kube-system get ds kube-multus-ds
kubectl -n kube-system get pods -l app.kubernetes.io/name=multus -o wide
kubectl get crd network-attachment-definitions.k8s.cni.cncf.io
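It can also be worth confirming that the shim config actually landed on the node. The 00-multus.conf filename below is a typical Multus default rather than something this repo guarantees, so treat it as an assumption and fall back to the directory listing:

ssh k8s-node-1 'ls -l /etc/cni/net.d/ && cat /etc/cni/net.d/00-multus.conf'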
Defining the Plex LAN Attachment
The app-specific network is
config/kubernetes/apps/plex/network-attachment-definition.yaml:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: plex-br0
  labels:
    app.kubernetes.io/name: plex
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "br0",
      "ipam": {
        "type": "static",
        "addresses": [
          {
            "address": "192.168.1.50/24"
          }
        ]
      }
    }
This is intentionally small. It says: when a pod asks for plex-br0, run the
bridge CNI, attach the new pod interface to host bridge br0, and assign
192.168.1.50/24.
There is no gateway in the current manifest. That keeps the attachment focused
on LAN presence and avoids changing the pod’s default route away from the
primary Cilium interface.
For this migration, that is the intended behavior: Cilium owns the pod’s default
route, while lan0 owns Plex’s local LAN identity.
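For contrast, a variant that did push a default route through the attachment would add a gateway and route to the static IPAM block, roughly as sketched below with 192.168.1.1 as an assumed LAN gateway. That is exactly what this migration avoids, because it would move the pod's default route onto the LAN interface:

"ipam": {
  "type": "static",
  "addresses": [
    { "address": "192.168.1.50/24", "gateway": "192.168.1.1" }
  ],
  "routes": [
    { "dst": "0.0.0.0/0", "gw": "192.168.1.1" }
  ]
}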
Requesting the Interface from the Deployment
The Plex Deployment requests the secondary network with a pod annotation:
template:
  metadata:
    annotations:
      k8s.v1.cni.cncf.io/networks: '[{"name":"plex-br0","interface":"lan0"}]'
The interface field gives the attached interface a predictable name. That
makes validation easier because the operator can look for lan0 instead of
guessing whether the extra interface became net1, eth1, or another name.
The same Deployment pins Plex to k8s-node-1:
spec:
  nodeSelector:
    kubernetes.io/hostname: k8s-node-1
That matters because this version of the migration reuses local host paths and
the node-local br0 bridge. It is not a portable multi-node Plex deployment.
Preserving Plex Behavior
The Deployment carries the Plex-specific details:
env:
  - name: ADVERTISE_IP
    value: http://192.168.1.50:32400/
It also exposes the known Plex LAN ports in the container spec:
- TCP 32400
- TCP 8324
- TCP 32469
- UDP 1900
- UDP 5353
- UDP 32410 through 32414
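In the container spec, those translate into port entries along these lines; only a few are shown, and the names are illustrative rather than taken from the repo:

ports:
  - name: pms
    containerPort: 32400
    protocol: TCP
  - name: dlna
    containerPort: 1900
    protocol: UDP
  - name: gdm-32410
    containerPort: 32410
    protocol: UDP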
The migration keeps the LinuxServer image on latest but pins the resolved
digest in Git:
image: lscr.io/linuxserver/plex:latest@sha256:b785bdd60e781662f16e0526a6b54c07856739df95ab558a674a3c084dbde423
It also reuses the original host data layout:
- /opt/plex/config mounted at /config
- /srv/media/Videos
- /srv/media/Music
- /srv/media/Pictures
- /srv/photos/family-a
- /srv/photos/family-b
- /dev/dvb
- /dev/shm backed by a memory emptyDir
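As a sketch, the config mount and the /dev/shm mount look like an ordinary hostPath volume plus a memory-backed emptyDir; the volume names here are illustrative:

volumes:
  - name: config
    hostPath:
      path: /opt/plex/config
      type: Directory
  - name: dev-shm
    emptyDir:
      medium: Memory
containers:
  - name: plex
    volumeMounts:
      - name: config
        mountPath: /config
      - name: dev-shm
        mountPath: /dev/shm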
Those choices minimize cutover risk because the Kubernetes pod sees the same
content as the old Compose container. The tradeoff is that the workload is tied
to k8s-node-1.
Connecting Ingress to a Pod LAN IP
Because Plex is now directly reachable at 192.168.1.50, internal and public
routes can point to that LAN IP instead of selecting the pod through a normal
Kubernetes Service selector.
The example uses a selectorless Service:
apiVersion: v1
kind: Service
metadata:
  name: plex-lan
spec:
  type: ClusterIP
  ports:
    - name: https
      protocol: TCP
      port: 32400
      targetPort: 32400
And a matching manual EndpointSlice:
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: plex-lan
  labels:
    kubernetes.io/service-name: plex-lan
addressType: IPv4
ports:
  - name: https
    protocol: TCP
    port: 32400
endpoints:
  - addresses:
      - 192.168.1.50
    conditions:
      ready: true
That lets Kubernetes ingress controllers route to Plex while Plex remains a LAN-resident endpoint.
The internal ingress in config/kubernetes/apps/plex/ingress.yaml uses
internal-nginx and proxies to plex-lan on port 32400. The public
Cloudflare ingress in
config/kubernetes/apps/plex/ingress-public-cloudflare.yaml also points to the
same service.
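The internal ingress is not reproduced in full, but a sketch consistent with this setup looks like the following; the internal-nginx class, the plex-lan backend, and the backend-protocol annotations are taken from this example environment, while the exact host and TLS wiring depend on your cluster:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: plex
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "off"
spec:
  ingressClassName: internal-nginx
  rules:
    - host: plex.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: plex-lan
                port:
                  number: 32400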
DNS
This deployment manages two internal DNS records with custom DNS record resources:
spec:
  name: plex.lan.example.com
  recordType: A
  address: 192.168.1.50
and:
spec:
  name: plex.example.com
  recordType: A
  address: 192.168.1.20
The intent is:
- plex.lan.example.com resolves directly to the Plex pod LAN IP.
- plex.example.com resolves internally to the internal ingress VIP.
- Public DNS remains outside this app manifest path and continues to use the Cloudflare route.
Public Cloudflare DNS is owned by the Cloudflare Tunnel ingress controller, not
by the internal DNS CRD. During cutover, the controller updated
plex.example.com from the old tunnel target to the Kubernetes-managed tunnel
target. The internal DNS records remain the split-horizon LAN path.
Cutover Sequence
A conservative cutover looks like this:
- Render the platform and app packages:
kubectl kustomize config/kubernetes/platform/multus
kubectl kustomize config/kubernetes/platform/live
kubectl kustomize config/kubernetes/apps/plex
kubectl kustomize config/kubernetes/apps/live
- Reconcile platform GitOps first and confirm Multus is ready:
flux -n flux-system reconcile source git your-gitops-repo
flux -n flux-system reconcile kustomization platform
kubectl -n kube-system rollout status ds/kube-multus-ds
kubectl get network-attachment-definitions.k8s.cni.cncf.io -A
- Stop the old Plex process:
docker compose -f /opt/plex/docker-compose.yml down
- Reconcile the Plex app:
flux -n flux-system reconcile kustomization apps
kubectl -n plex rollout status deploy/plex
- Validate the LAN IP and application behavior before changing client-facing
DNS or public routes.
Validation Commands
Check that the rendered app contains the network attachment and Deployment annotation:
kubectl kustomize config/kubernetes/apps/plex | less
Check that the pod has both the primary Kubernetes interface and lan0:
kubectl -n plex exec deploy/plex -- ip -br addr
Expected shape:
lo UNKNOWN 127.0.0.1/8
eth0 UP <pod-cidr-address>/...
lan0 UP 192.168.1.50/24
Check that Plex is listening on the LAN IP:
curl -fsSI http://192.168.1.50:32400/web
Check the Kubernetes route through the selectorless Service:
kubectl -n plex get svc plex-lan
kubectl -n plex get endpointslice plex-lan -o yaml
Check ingress status:
kubectl -n plex get ingress plex plex-public-cloudflare
curl -I https://plex.example.com
The migration plan records that curl -I https://plex.example.com returned
HTTP/2 401 with x-plex-protocol: 1.0, which proves the Cloudflare route
reached Plex at that point in validation.
Check DNS:
dig +short plex.lan.example.com
dig +short plex.example.com
Expected internal answers from the current manifests:
192.168.1.50
192.168.1.20
Check Plex media and device access from inside the pod:
kubectl -n plex exec deploy/plex -- ls -la /config
kubectl -n plex exec deploy/plex -- ls -la /srv/media/Videos
kubectl -n plex exec deploy/plex -- ls -la /dev/dvb
If GPU transcoding is part of the workload, also validate the node exposes the GPU resource and the container can use it:
kubectl describe node k8s-node-1 | rg -n "nvidia.com/gpu|Capacity|Allocatable"
kubectl -n plex exec deploy/plex -- nvidia-smi
If the Plex image you use does not include nvidia-smi, validate GPU
availability with a short CUDA/NVIDIA test pod and then confirm hardware
transcoding through Plex’s dashboard during real playback.
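A throwaway pod along these lines is one way to run that check; the CUDA image tag is an assumption and should match whatever driver generation the node actually runs:

apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  nodeSelector:
    kubernetes.io/hostname: k8s-node-1
  containers:
    - name: nvidia-smi
      image: nvidia/cuda:12.4.1-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1

Apply it, read the output with kubectl logs gpu-smoke-test, and delete the pod once the result looks sane.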
Common Failure Modes
The pod is stuck in ContainerCreating
Start with events:
kubectl -n plex describe pod -l app.kubernetes.io/name=plex
Likely causes:
- Multus is not installed or the kube-multus-ds pod is not ready on the node.
- The NetworkAttachmentDefinition name is wrong or in the wrong namespace.
- The bridge CNI binary is missing from /opt/cni/bin on the node.
- The host bridge br0 does not exist on the node where the pod landed.
The pod starts, but lan0 is missing
Check the pod annotation:
kubectl -n plex get deploy plex -o jsonpath='{.spec.template.metadata.annotations.k8s\.v1\.cni\.cncf\.io/networks}{"\n"}'
Then check the rendered NetworkAttachmentDefinition:
kubectl -n plex get network-attachment-definition plex-br0 -o yaml
If those are correct, inspect Multus logs:
kubectl -n kube-system logs -l app.kubernetes.io/name=multus --tail=200
The LAN IP conflicts with another device
Symptoms include intermittent connectivity, ARP flapping, Plex becoming
reachable from one client but not another, or your router/controller showing the
wrong MAC for 192.168.1.50.
Before cutover, check that the IP is reserved and quiet:
ping -c 1 -W 1 192.168.1.50
arp -n 192.168.1.50
If a conflict appears after cutover, scale the Kubernetes Deployment to zero first. Do not start Compose until the duplicate address is gone.
The pod cannot reach the internet
This pattern gives the pod a second LAN interface, but the pod’s default route should normally remain on Cilium’s primary interface. Confirm routes:
kubectl -n plex exec deploy/plex -- ip route
If the bridge attachment unexpectedly changed the default route, review the bridge CNI IPAM configuration and whether a gateway was added.
LAN clients cannot discover Plex
Confirm that Plex is listening and advertising the LAN address:
kubectl -n plex exec deploy/plex -- printenv ADVERTISE_IP
curl -fsSI http://192.168.1.50:32400/web
Also check whether the client discovery path depends on multicast traffic that
must be allowed by local firewall policy. This manifest exposes UDP 1900,
5353, and 32410-32414, but Kubernetes container ports are documentation and
metadata; firewall behavior still depends on the host, CNI, and network policy
stack.
In this environment, the final proof was not only curl: local browser, remote
browser, and a local Plex app direct to 192.168.1.50 were all confirmed working.
Ingress returns errors even though direct LAN access works
Check the selectorless service and EndpointSlice:
kubectl -n plex get svc plex-lan -o yaml
kubectl -n plex get endpointslice plex-lan -o yaml
Then check the ingress controller logs. The internal ingress uses HTTPS to the backend with certificate verification disabled:
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
nginx.ingress.kubernetes.io/proxy-ssl-verify: "off"
If Plex is serving plain HTTP only in a future revision, that backend protocol must change with it.
Security Notes
Putting a pod directly on the LAN changes the security model. The workload is no longer reachable only through Kubernetes Services and ingress controllers. It is a LAN endpoint.
Treat the Plex pod like a host on the home network:
- Reserve the IP in the network source of truth before assigning it in the NetworkAttachmentDefinition.
- Keep Plex pinned to the node where the bridge and host paths are expected.
- Do not reuse the same LAN IP in Compose, LXC, DHCP, or another Kubernetes workload.
- Avoid granting this pattern broadly. Use it only for workloads that need real LAN presence.
- Keep public exposure separate from LAN exposure. In this example, public access goes through Cloudflare Tunnel ingress instead of direct router port forwards.
- Be careful with host paths. The Plex pod can read and write the mounted media and config directories.
- Be careful with device mounts such as /dev/dvb and GPU access. They expand what the container can interact with on the host.
- Keep secrets out of Git. This app manifest does not contain Plex account tokens or Cloudflare credentials.
NetworkPolicy may still be useful for the primary Cilium interface, but it does not automatically describe all traffic entering through the bridge-attached LAN interface. Validate the actual enforcement behavior before relying on it for LAN-side isolation.
Rollback
The rollback path is deliberately simple:
- Scale Kubernetes Plex to zero through GitOps or directly during an incident:
kubectl -n plex scale deploy/plex --replicas=0
- Confirm 192.168.1.50 is no longer assigned to the pod:
kubectl -n plex get pods -o wide
ping -c 1 -W 1 192.168.1.50
- Start the preserved Compose service:
docker compose -f /opt/plex/docker-compose.yml up -d
- Validate Plex directly:
curl -fsSI http://192.168.1.50:32400/web
- Leave the Kubernetes manifests in Git until the failure is understood. If the
manifests themselves are destabilizing the platform, suspend the relevant
Flux Kustomization instead of deleting evidence during the incident.
The most important rollback rule is that Compose Plex and Kubernetes Plex must
not both run against /opt/plex/config and 192.168.1.50 at the same time.
Adapting the Pattern
For another LAN-present workload, change the following:
- The namespace and app labels.
- The NetworkAttachmentDefinition name.
- The static IP address.
- The pod annotation.
- The node selector.
- Any app-specific advertised URL or callback URL.
- The Service and EndpointSlice, if ingress needs to route to the LAN IP.
- DNS records and firewall policy.
Do not copy the Plex host paths, ports, or GPU settings unless the new workload actually needs them.
The general rule is: keep Cilium as the Kubernetes network, use Multus only for the extra LAN interface, and make the LAN IP assignment explicit in Git.
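As a sketch, a hypothetical second workload named homebridge on a reserved 192.168.1.51 would only need its own attachment and annotation; both the name and the address here are placeholders:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  # hypothetical example: name and address are placeholders
  name: homebridge-br0
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "br0",
      "ipam": {
        "type": "static",
        "addresses": [ { "address": "192.168.1.51/24" } ]
      }
    }

with the matching pod annotation:

k8s.v1.cni.cncf.io/networks: '[{"name":"homebridge-br0","interface":"lan0"}]'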
References
- Multus pod network annotation and NetworkAttachmentDefinition quickstart: https://k8snetworkplumbingwg.github.io/multus-cni/docs/quickstart.html
- CNI bridge plugin: https://www.cni.dev/plugins/current/main/bridge/
- CNI static IPAM plugin: https://www.cni.dev/plugins/current/ipam/static/
- Cilium Helm reference for cni.exclusive: https://docs.cilium.io/en/stable/helm-reference/
- Kubernetes Services and selectorless Services: https://kubernetes.io/docs/concepts/services-networking/service/