## Summary
On SEAPATH nodes running Docker alongside KVM virtual machines, the `br_netfilter` kernel module (loaded by Docker) causes all bridged traffic, including KVM VM traffic, to pass through iptables. Docker also sets the default policy of the iptables FORWARD chain to DROP. The result is that a KVM VM can be running, its network interface visible on the bridge, its MAC address learned via ARP, yet all IP traffic is silently dropped.
This was discovered during deployment of an ABB vSSC600 virtual protection relay on a SEAPATH Debian standalone node (kernel 6.1.0-41-rt-amd64, libvirt 9.0.0, QEMU 7.2.19).
## Relationship to #379
Closed issue #379 ("Unable to access VM when using a linux bridge on hypervisor") describes the same symptom: a VM connected to a Linux bridge is unreachable despite correct configuration. That issue was closed with a workaround of using an OVS bridge instead. The root cause (`br_netfilter` plus an iptables FORWARD policy of DROP) was not identified at the time. This issue documents the actual mechanism and proposes a platform-level fix.
## How to Reproduce
- Deploy a SEAPATH Debian standalone node with Docker installed
- Create a Linux bridge (e.g. `mgmt-bridge`) using systemd-networkd
- Deploy a KVM VM with a bridge interface on `mgmt-bridge`
- From an external machine on the same L2 segment, attempt to ping the VM's IP address
- Observe: ARP resolves (the VM's MAC is visible) but ping and all higher-layer traffic fail
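For reference, a minimal sketch of the two configuration pieces involved. The bridge name `mgmt-bridge` matches the steps above; the filename and the `virtio` NIC model are illustrative assumptions, not taken from the actual deployment:

```ini
# /etc/systemd/network/mgmt-bridge.netdev (filename illustrative)
[NetDev]
Name=mgmt-bridge
Kind=bridge
```

```xml
<!-- libvirt domain XML: attach the VM's NIC to the Linux bridge -->
<interface type='bridge'>
  <source bridge='mgmt-bridge'/>
  <model type='virtio'/>
</interface>
```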
## How to Confirm
```shell
lsmod | grep br_netfilter
cat /proc/sys/net/bridge/bridge-nf-call-iptables
# If br_netfilter is loaded and the value is 1, the node is affected
sudo iptables -L FORWARD -n -v
# Shows the DROP policy and an incrementing dropped-packet count
```
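The first two checks can be folded into a small script. A minimal sketch; the helper name `check_brnf` is ours, and a missing sysctl file (module not loaded) is treated as "not affected":

```shell
# Decide from a bridge-nf sysctl file whether bridged traffic is being
# diverted into iptables. An unreadable/missing file means br_netfilter
# is not loaded, so the node is not affected.
check_brnf() {
    f="$1"
    if [ -r "$f" ] && [ "$(cat "$f")" = "1" ]; then
        echo "affected"
    else
        echo "not affected"
    fi
}

check_brnf /proc/sys/net/bridge/bridge-nf-call-iptables
```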
## Fix
```shell
# Disable immediately
sudo sysctl -w net.bridge.bridge-nf-call-iptables=0
sudo sysctl -w net.bridge.bridge-nf-call-ip6tables=0
sudo sysctl -w net.bridge.bridge-nf-call-arptables=0

# Persist across reboots
echo 'net.bridge.bridge-nf-call-iptables = 0' | sudo tee -a /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables = 0' | sudo tee -a /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-arptables = 0' | sudo tee -a /etc/sysctl.conf
```
This fix only affects bridged traffic. Docker continues to function normally in typical setups: outbound container traffic is routed and NAT/masqueraded rather than bridged, so it does not go through the bridge-netfilter path. (One caveat: iptables rules that filter traffic between containers on the same Docker bridge also stop applying once these sysctls are 0.)
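Equivalently, the persistent settings can live in a dedicated sysctl drop-in instead of being appended to `/etc/sysctl.conf`, which is easier for configuration management to own. The filename below is an arbitrary choice:

```conf
# /etc/sysctl.d/99-bridge-nf.conf (filename illustrative)
# Keep bridged (KVM VM) traffic out of iptables on Docker+KVM nodes
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-arptables = 0
```

Apply with `sudo sysctl --system` or a reboot.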
## Proposed Platform Change
Add these three sysctl settings to the SEAPATH hardening playbook or node provisioning process as a standard configuration item on nodes that run Docker alongside KVM VMs. This would resolve the conflict transparently for all future deployments without manual intervention.
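In Ansible terms, the change could look like the following task. This is a sketch only: the task name and its placement are assumptions, not the actual SEAPATH playbook layout, and it requires the `ansible.posix` collection:

```yaml
# Hypothetical hardening-playbook task: keep bridged VM traffic out of
# iptables on nodes that run Docker next to KVM guests.
- name: Disable bridge-netfilter hooks for bridged traffic
  ansible.posix.sysctl:
    name: "{{ item }}"
    value: "0"
    sysctl_set: true
    state: present
    reload: true
  loop:
    - net.bridge.bridge-nf-call-iptables
    - net.bridge.bridge-nf-call-ip6tables
    - net.bridge.bridge-nf-call-arptables
```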
## Environment
- SEAPATH Debian standalone, kernel 6.1.0-41-rt-amd64
- KVM / libvirt 9.0.0 / QEMU 7.2.19
- Docker installed (loads br_netfilter)
- VM: ABB vSSC600 SW v1.5.1 with Linux bridge interface
Reported by @ni8towl