Clive S. — Cloud Infrastructure Engineer

On‑Prem Network Segmentation with OPNsense + VLANs

Context:

I run a virtualized on-prem environment on a two-node Proxmox cluster. It hosts internal ingress, DNS, monitoring, Git, dev stacks, automation, media services, and a rotating set of experiments. For a long time all of it lived on a flat consumer LAN. That was fast and it worked, but it also meant every service shared the same trust boundary.

Problem & Constraints:

The old network was functional but not defensible. A flat LAN makes lateral movement cheap, and that is the living-off-the-land problem: compromise one service, expose one admin UI, or land on one machine through a bad container default, and the next step is using what already exists. SSH. Web panels. Internal DNS. Saved credentials. Admin surfaces. This is not about being personally targeted. It is about reducing the number of paths that exist when something goes wrong.

Proxmox is the biggest example of why this matters. Leaving the Proxmox web GUI reachable from the general client network is sloppy and dangerous. It is the control plane for the entire environment. If an attacker gets to that UI, the blast radius is not one VM, it is the whole cluster. Even without a targeted attacker, it only takes one compromised client device or one mistake on the wrong network to make the hypervisor management surface reachable. That is not an acceptable posture.

The constraints were real. I was not going to rebuild the client network or break existing devices. The upstream router stays. Each Proxmox node has one physical NIC, so segmentation requires VLAN trunking. I also did not want to solve this with expensive hardware. The project was about using what I already had, then filling the one missing gap with a cheap managed switch.

Solution & Architecture:

I deployed OPNsense as a firewall and router VM, opn-fw01, on the stable Proxmox node. The WAN interface stays on the client network and pulls an address from the upstream router. The LAN side becomes an 802.1Q trunk. Proxmox is configured with a VLAN-aware bridge, so VMs join the correct network by tagging their virtual NICs.
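On the Proxmox side, the VLAN-aware bridge is a few lines of network config. A minimal sketch of what this can look like in /etc/network/interfaces, assuming a single physical NIC (the interface name enp3s0 is a placeholder, not the actual hardware name):

```shell
# /etc/network/interfaces sketch for a Proxmox node with one NIC.
# enp3s0 is a placeholder; substitute the node's real interface name.
auto enp3s0
iface enp3s0 inet manual

auto vmbr0
iface vmbr0 inet manual
    bridge-ports enp3s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes        # lets guest NICs carry their own VLAN tags
    bridge-vids 10 20 30 40 50 60  # trunk only the VLANs in use
```

With this in place, the single NIC carries all six tagged networks to the managed switch, and each VM picks its network by tag rather than by cabling.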

The environment is segmented into six purpose-built VLAN networks. INFRA is for management surfaces and control points. CORE is for shared services such as Traefik, AdGuard, Grafana, Prometheus, Uptime Kuma, and Gitea. DEV is for development and staging surfaces. COMPUTE is for worker boxes and ops workloads, including clawd. MEDIA is for Plex and Immich. LAB is an isolated sandbox for experiments and change validation; systems there are treated as disposable and are rebuilt from known-good baselines.

Policy is the point. Inter-VLAN traffic is denied by default, and required paths are re-enabled explicitly. DEV and COMPUTE can reach CORE for DNS and internal ingress. LAB can reach CORE only when necessary. CORE does not initiate connections into workload networks by default. INFRA is reserved for administration and is reachable through the VPN admin plane rather than the general LAN.
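Since OPNsense sits on FreeBSD's pf, the intent of the ruleset can be summarized in pf-style notation. This is an illustration only: the real rules live in the OPNsense GUI, and the interface names (vlan30, vlan40) and the $CORE_DNS / $CORE_INGRESS aliases are assumptions, not the actual config.

```shell
# Illustrative pf-style summary of the inter-VLAN policy (sketch, not the
# literal OPNsense ruleset). Aliases and interface names are hypothetical.
block in all                                                    # default deny

# DEV (VLAN 30) may reach CORE for DNS and internal ingress only
pass in on vlan30 proto { tcp udp } from vlan30:network to $CORE_DNS port 53
pass in on vlan30 proto tcp from vlan30:network to $CORE_INGRESS port { 80 443 }

# COMPUTE (VLAN 40) gets the same minimal paths
pass in on vlan40 proto { tcp udp } from vlan40:network to $CORE_DNS port 53
pass in on vlan40 proto tcp from vlan40:network to $CORE_INGRESS port { 80 443 }
```

Everything not listed stays blocked, so each new allowed path is a deliberate decision rather than a default.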

VLAN MAP

   VLAN  NAME      SUBNET          PURPOSE                          RUNS HERE
   10    INFRA     *.*.10.0/24     management and control plane     OPNsense UI, Proxmox mgmt later
   20    CORE      *.*.20.0/24     shared services layer            Traefik, AdGuard, Grafana, Prometheus, Kuma, Gitea
   30    DEV       *.*.30.0/24     dev and staging                  r9-dev, code-server, WordPress, React, n8n
   40    COMPUTE   *.*.40.0/24     worker workloads                 clawd host, ops and worker VMs
   50    MEDIA     *.*.50.0/24     media and personal data          Plex, Immich
   60    LAB       *.*.60.0/24     isolated test and validation     disposable test VMs
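With the VLAN-aware bridge in place, placing a VM in one of these networks is a one-line change per guest. A sketch using Proxmox's qm tool (the VM ID 204 is hypothetical; vmbr0 is the trunk bridge):

```shell
# Tag a VM's virtual NIC so it lands in COMPUTE (VLAN 40).
# 204 is a placeholder VM ID; vmbr0 is the VLAN-aware bridge on the node.
qm set 204 --net0 virtio,bridge=vmbr0,tag=40

# Moving the same VM to LAB for testing is just a retag, no recabling:
qm set 204 --net0 virtio,bridge=vmbr0,tag=60
```

The tag, not the physical port, decides which trust boundary the VM lives inside.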

Proof / Outcome:

The VPN admin plane is a core part of the design. Administration is done through Tailscale instead of exposing management ports or relying on the client network. That means no inbound port forwards on the router. No direct SSH exposure from the internet. No assumption that being on WiFi makes a device trusted. When I need to manage a VM or a service, I come in through VPN identity and then SSH to the host over the private overlay. Management stays private even when I am away from home.
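The admin path described above can be sketched in a few commands. Hostnames here are placeholders, and the MagicDNS name assumes Tailscale's default DNS features are enabled:

```shell
# Admin-plane sketch: management happens over the Tailscale overlay,
# never via router port forwards. Names below are placeholders.
tailscale up              # join the node to the tailnet (authenticates via browser)
tailscale status          # confirm peers are visible on the private overlay

# SSH to a host over the overlay using its MagicDNS name; nothing is
# exposed to the internet and WiFi presence grants no access.
ssh admin@opn-fw01.example-tailnet.ts.net
```

Because access is tied to VPN identity rather than network location, losing a laptop or sitting on hostile WiFi does not change the management exposure.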

Proof is operational: the VLAN interfaces exist with dedicated subnets, gateways respond on each VLAN, and VMs receive the correct network placement by VLAN tag. Traffic behaves according to policy, which is the real measurement. A compromised workload network should not be able to reach the hypervisor management plane. A test network should not be able to enumerate the whole environment, and media services should not be a pivot into the control plane. This is how segmentation reduces blast radius in practice and reduces the value of living-off-the-land techniques.
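Those policy claims can be spot-checked from inside a workload VM. A sketch run from a LAB guest, with placeholder addresses following the subnet plan above (8006 is the standard Proxmox web UI port):

```shell
# Spot-checks from a LAB (VLAN 60) VM. Substitute real addresses for the
# placeholders before running; the wildcard octets are intentionally elided.
GW_LAB="*.*.60.1"       # LAB's own gateway: should answer
INFRA_MGMT="*.*.10.2"   # hypothetical management address in INFRA: should not

ping -c 1 -W 2 "$GW_LAB"      # positive control: local gateway reachable

# Negative test: the Proxmox UI (port 8006) must be unreachable from LAB.
nc -z -w 2 "$INFRA_MGMT" 8006 \
  && echo "FAIL: hypervisor UI reachable from LAB" \
  || echo "OK: blocked by policy"
```

A passing negative test matters more than a passing positive one: it is the direct measurement of blast radius.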

The outcome is the same services with guardrails. Admin surfaces move into INFRA and off the general client network. Shared services are centralized in CORE. Experiments move into a bounded zone. The design is deliberately resourceful: open-source software plus a cheap managed switch buys real trust boundaries and a safer operating model without a datacenter budget.

Next Steps:

Move Proxmox management endpoints fully into INFRA and restrict access to VPN identity only. Convert internal routing targets from hard-coded IPs to DNS aliases so migrations stop being painful. Tighten allow rules per VLAN so each network has only the minimum paths it needs.