Our edge network architecture

2026-01-28 · 6 min read

Why latency matters for games

Web traffic can tolerate an extra 50ms of latency without anyone noticing. Game traffic can't. In competitive Minecraft PvP, even 10ms of additional latency can change the outcome of a fight. Any DDoS protection solution for game servers has to add near-zero overhead.

This constraint shaped every architectural decision we made. Our filtering runs at the kernel level. Our proxy processes use lock-free data structures. Our config updates propagate without restarting the proxy process.

The XDP fast path

The first layer of filtering happens in XDP (eXpress Data Path), a Linux kernel technology that lets us process packets before they even enter the network stack. Our eBPF programs run in the kernel and can drop known-bad traffic in single-digit microseconds.

XDP handles the high-volume, low-complexity decisions: known-bad IP addresses, rate limiting per source IP, and basic protocol validation. Everything that makes it past XDP goes to the userspace proxy for deep inspection.
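The per-source rate limiting that XDP applies can be illustrated as a token bucket keyed by source IP. This is a userspace Rust sketch of the decision logic only, not an actual eBPF program (real XDP code would keep the counters in a kernel eBPF map), and the limit and window values are illustrative:

```rust
use std::collections::HashMap;
use std::net::Ipv4Addr;

/// Illustrative fixed-window limiter: each source IP may send `limit`
/// packets per window. In XDP, the equivalent state lives in an eBPF
/// map keyed by source address.
struct RateLimiter {
    limit: u32,
    window: u64,                            // window length in (mock) ticks
    buckets: HashMap<Ipv4Addr, (u64, u32)>, // ip -> (window start, count)
}

impl RateLimiter {
    fn new(limit: u32, window: u64) -> Self {
        Self { limit, window, buckets: HashMap::new() }
    }

    /// Returns true if the packet should pass, false if it should be dropped.
    fn allow(&mut self, src: Ipv4Addr, now: u64) -> bool {
        let entry = self.buckets.entry(src).or_insert((now, 0));
        if now - entry.0 >= self.window {
            *entry = (now, 0); // new window: reset the counter
        }
        entry.1 += 1;
        entry.1 <= self.limit
    }
}

fn main() {
    let mut rl = RateLimiter::new(3, 100);
    let ip = Ipv4Addr::new(203, 0, 113, 7);
    let verdicts: Vec<bool> = (0..5).map(|t| rl.allow(ip, t)).collect();
    // First 3 packets pass, the rest drop until the window rolls over.
    println!("{:?}", verdicts); // [true, true, true, false, false]
}
```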

Userspace protocol parsing

The proxy process runs our full protocol parsers — Java handshake/login state machines, RakNet session tracking, and firewall rule evaluation. This is where we catch sophisticated attacks that require protocol understanding.
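To make "protocol understanding" concrete, here is a minimal sketch of the first step any Java handshake parser has to perform: decoding the VarInt that prefixes every packet. The field layout follows the publicly documented Java Edition protocol; the function name and Option-based error handling are illustrative, not our actual code:

```rust
/// Decode a Minecraft Java-edition VarInt: little-endian groups of
/// 7 bits, high bit as continuation flag, at most 5 bytes.
fn read_varint(buf: &[u8]) -> Option<(i32, usize)> {
    let mut value: i32 = 0;
    for (i, &byte) in buf.iter().enumerate().take(5) {
        value |= ((byte & 0x7f) as i32) << (7 * i);
        if byte & 0x80 == 0 {
            return Some((value, i + 1)); // (decoded value, bytes consumed)
        }
    }
    None // truncated or over-long VarInt: reject the packet
}

fn main() {
    // A handshake begins with its length as a VarInt, followed by
    // packet id 0x00 and the client's protocol version.
    assert_eq!(read_varint(&[0xff, 0x01]), Some((255, 2)));
    assert_eq!(read_varint(&[0x00]), Some((0, 1)));
    assert_eq!(read_varint(&[0x80]), None); // continuation bit with no next byte
}
```

A malformed VarInt here is exactly the kind of thing that distinguishes a real client from attack traffic, which is why this parsing happens before any bytes reach your backend.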

The proxy is built on Tokio (Rust's async runtime) and uses ArcSwap for configuration — when you change a firewall rule or add a backend in the dashboard, the new config is swapped in atomically. No connections are dropped, no proxy restart needed. The change takes effect within seconds through Redis pub/sub.
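The reader-snapshot pattern behind that atomic swap can be sketched with only the standard library. ArcSwap itself is lock-free; this dependency-free sketch substitutes a std RwLock around the pointer to show the same behavior, and the config field is illustrative, not our real schema:

```rust
use std::sync::{Arc, RwLock};

#[derive(Debug)]
struct Config {
    max_connections: u32, // illustrative field
}

/// Readers take a cheap Arc clone (a snapshot) and keep using it even
/// while a new config is swapped in underneath them.
struct ConfigHandle {
    current: RwLock<Arc<Config>>, // ArcSwap<Config> in the real proxy
}

impl ConfigHandle {
    fn load(&self) -> Arc<Config> {
        self.current.read().unwrap().clone()
    }
    fn store(&self, new: Config) {
        *self.current.write().unwrap() = Arc::new(new);
    }
}

fn main() {
    let handle = ConfigHandle {
        current: RwLock::new(Arc::new(Config { max_connections: 100 })),
    };
    let snapshot = handle.load();                   // in-flight connection's view
    handle.store(Config { max_connections: 500 });  // dashboard update lands
    assert_eq!(snapshot.max_connections, 100);      // old readers undisturbed
    assert_eq!(handle.load().max_connections, 500); // new readers see the update
}
```

The key property is that in-flight connections finish against the snapshot they loaded, so a config push never tears state out from under an active session.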

For zero-downtime binary upgrades, we use SO_REUSEPORT and file descriptor passing. The new proxy process starts and binds to the same ports alongside the old one, listening sockets are handed over via file descriptor passing, and the old process stops accepting and drains its existing connections. Players experience no interruption.

Edge distribution

Our edge nodes are distributed globally — as close to players as possible. When a player in Seoul connects to your server, they hit our Seoul edge node first. The filtering happens there, and only clean traffic is forwarded to your backend.

Each edge node runs independently. If one goes down, DNS automatically routes players to the next closest node. Health checks run every 20 seconds, and failover is automatic.
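The routing decision itself is simple: closest healthy node wins. A sketch of that selection, with made-up node names and latencies for illustration:

```rust
/// Sketch of the failover decision: route each player to the closest
/// node that passed its most recent health check.
#[derive(Debug)]
struct EdgeNode {
    name: &'static str,
    latency_ms: u32, // measured distance from the player
    healthy: bool,   // result of the most recent health check
}

fn pick_node<'a>(nodes: &'a [EdgeNode]) -> Option<&'a EdgeNode> {
    nodes
        .iter()
        .filter(|n| n.healthy)
        .min_by_key(|n| n.latency_ms)
}

fn main() {
    let nodes = [
        EdgeNode { name: "seoul", latency_ms: 8, healthy: false }, // just failed
        EdgeNode { name: "tokyo", latency_ms: 34, healthy: true },
        EdgeNode { name: "singapore", latency_ms: 71, healthy: true },
    ];
    // Seoul is closest but unhealthy, so the player lands on Tokyo.
    assert_eq!(pick_node(&nodes).unwrap().name, "tokyo");
}
```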

The edge nodes pull their configuration from a central Redis instance. When you update settings in the dashboard, the API publishes a config update to Redis, and every edge node picks it up within seconds. This means your firewall rules, backend list, and domain settings are always in sync across the entire network.
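The propagation pattern is publish-once, apply-everywhere. In this self-contained sketch a std mpsc channel stands in for the Redis pub/sub channel (Redis fans a message out to every subscriber, while mpsc has one consumer), and the update type and rule string are invented for illustration:

```rust
use std::sync::mpsc;
use std::thread;

/// Stand-in for a config message published over Redis pub/sub.
#[derive(Clone, Debug, PartialEq)]
enum ConfigUpdate {
    FirewallRule(String),
}

fn main() {
    let (publish, subscribe) = mpsc::channel::<ConfigUpdate>();

    // Edge node side: block on the subscription, apply updates on receipt.
    let edge = thread::spawn(move || {
        let update = subscribe.recv().unwrap();
        update // the real proxy would swap this into the live config here
    });

    // API side: a dashboard change becomes a published message.
    publish
        .send(ConfigUpdate::FirewallRule("block 198.51.100.0/24".into()))
        .unwrap();

    let applied = edge.join().unwrap();
    assert_eq!(
        applied,
        ConfigUpdate::FirewallRule("block 198.51.100.0/24".into())
    );
}
```

Because every node subscribes to the same channel, a single dashboard change converges across the fleet without any node polling for it.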