Simple Router: Building Network Infrastructure From Scratch in a Containerized Environment

A deep dive into building a production-grade network router using C, Mininet, OpenFlow, and Flask, all within a Docker container. Learn how to implement core networking protocols, software-defined networking principles, and a browser-based terminal interface for interactive learning.

Published

Sat Jan 31 2026

Technologies Used

C, Docker, Mininet, OpenFlow, Flask, Python
View on GitHub


Project Overview

This project is a production-grade network router implementation that combines low-level C networking protocols with Software-Defined Networking (SDN) principles, all wrapped in a modern web-accessible interface. It’s designed for anyone who wants to understand how packets traverse the internet—not through theory, but through hands-on interaction with a fully functional, containerized routing system. The target audience includes students learning network fundamentals, engineers preparing for infrastructure roles, and developers building network applications who need a testing sandbox.

Why Build Yet Another Router?

Network routing is one of those foundational concepts that’s often taught through diagrams and abstract explanations. You learn about ARP, ICMP, IP forwarding, and routing tables in isolation—never seeing how they orchestrate together in a living, breathing system. The pain point this project addresses is the gap between conceptual knowledge and practical implementation.

Traditional networking courses provide either oversimplified simulators or require expensive hardware labs. Cloud-based alternatives exist, but they’re often black boxes that hide the actual implementation. This creates a knowledge gap: engineers understand what packets should do, but not how to actually make it happen when you’re starting from raw Ethernet frames.

This project solves that by providing a complete, transparent routing environment that runs anywhere Docker exists. You can ping hosts, traceroute through router interfaces, download files via HTTP—all while seeing exactly how your C-based router handles ARP resolution, TTL decrements, ICMP error generation, and longest-prefix matching. It’s the difference between reading a recipe and actually cooking the dish.

What This Router Actually Does

The system provides four major capabilities that work together to create a realistic network environment:

  • Full-Stack IP Routing Engine: The router implements the complete data plane of a network router. It performs longest-prefix matching on incoming packets to determine next-hop destinations, decrements TTL values, recalculates checksums, and forwards packets to the correct egress interface. When a packet’s destination requires MAC address resolution, the router queues the packet and initiates ARP discovery, maintaining a cache with expiration timers.

  • ICMP Diagnostic Protocol Suite: Beyond simple packet forwarding, the router acts as a network citizen that communicates with hosts using ICMP. It responds to echo requests (ping), generates Time Exceeded messages when TTL reaches zero (enabling traceroute functionality), and sends Destination Unreachable messages for invalid routes or closed ports. This makes the router behave like real Cisco or Juniper hardware.

  • Software-Defined Networking Integration: Rather than hardwiring the router to physical ports, this implementation uses OpenFlow—a protocol that separates the control plane from the data plane. A POX controller manages the OpenFlow switch that connects all hosts, while custom handlers bridge between the OpenFlow world and the traditional VNS (Virtual Network System) protocol that the router speaks. This architecture demonstrates how modern SDN separates packet forwarding logic from network topology management.

  • Browser-Based Interactive Terminal: The entire Mininet network runs inside Docker, but instead of requiring SSH access or container shells, users interact through a web interface on port 8080. The Flask application streams a pseudo-terminal (PTY) over WebSockets using SocketIO, giving you a full-fidelity terminal experience where you can execute network commands, observe routing behavior, and run automated test suites—all from your browser without installing Mininet locally.

Tech Stack & Architecture

The Stack

The project is a polyglot system that leverages different languages for their strengths:

  • Frontend: HTML + JavaScript + Xterm.js (terminal emulation) + WebSockets (real-time bidirectional communication)
  • Backend Web Layer: Python 3 + Flask + Flask-SocketIO + Eventlet (async I/O for PTY streaming)
  • Network Control Plane: Python 2 (POX requirements) + POX OpenFlow framework + Twisted (async networking for VNS protocol)
  • Data Plane: C (router implementation) with raw socket programming, linked against standard networking libraries
  • Network Emulation: Mininet + Open vSwitch
  • Infrastructure: Docker + Docker Compose for single-command deployment

The Decision Matrix

Why C for the Router Core?

Network routers operate at the performance edge where microseconds matter. C provides direct memory manipulation for packet headers, zero-overhead abstraction, and the ability to work with raw Ethernet frames through system calls like sendto() and recvfrom(). When you’re processing potentially thousands of packets per second and need to parse headers, perform bitwise subnet calculations for longest-prefix matching, and manipulate MAC addresses byte-by-byte, high-level languages introduce unacceptable latency. Additionally, understanding network programming in C teaches you exactly what happens at the system call boundary—knowledge that transfers to debugging production network issues on Linux servers.

Why POX + OpenFlow Instead of Native Mininet?

While Mininet comes with basic switching capabilities, OpenFlow provides programmatic control over packet forwarding at a granular level. POX acts as a centralized brain that can dynamically install flow rules, intercept packets for custom processing, and expose network state through events. This project uses POX to intercept every packet entering the network, serialize it through the VNS protocol to the C router, and inject the router’s response back into the network. This architecture mirrors real-world SDN deployments where separate controllers manage forwarding behavior, making the project valuable for understanding modern data center networking beyond just learning routing algorithms.

Why Docker with Privileged Mode?

Mininet creates network namespaces to simulate isolated hosts—a capability that requires kernel-level permissions. Docker’s privileged mode grants the container CAP_NET_ADMIN and other capabilities necessary to create virtual interfaces and manipulate routing tables. The alternative would be running Mininet directly on the host OS, which pollutes the host’s network stack and creates reproducibility issues. Docker ensures that every deployment starts from an identical base image with known versions of Mininet, Open vSwitch, Python interpreters, and GCC. The tradeoff is accepting the security implications of privileged mode, which is acceptable in a development/educational context but would require hardening for production use.

Technical Challenges

The ARP Cache Coherence Problem

One of the most subtle challenges in router implementation is managing the ARP cache lifecycle while maintaining packet queuing semantics. Here’s the logical puzzle: when a packet arrives for a destination whose MAC address is unknown, the router must queue that packet, send an ARP request, and potentially handle multiple queued packets for the same destination. Meanwhile, ARP entries expire, ARP requests can timeout, and new packets keep arriving.

The architecture handles this through a threaded ARP cache manager that runs on a one-second timer. For each ARP request entry, it tracks the time of the last request sent and the number of retries attempted. If five seconds elapse without a response, the manager stops retrying and sends ICMP Host Unreachable messages to all queued packets’ senders. When an ARP reply arrives, the cache manager dequeues all waiting packets for that destination, rewrites their destination MAC addresses, and forwards them in order.

The tricky part is the race condition: what if a new packet arrives for the destination just as the ARP reply is being processed? The solution uses pthread mutexes to lock the cache during lookups and insertions. The packet handling flow checks the cache twice—once before queuing and once during the dequeue operation—to handle entries that become valid between the queue insertion and transmission.

Bridging Three Async Worlds: OpenFlow, VNS, and PTY Streaming

The project has three concurrent event loops that must coordinate: POX’s Twisted reactor handling OpenFlow messages, the VNS protocol server communicating with the C router, and Eventlet managing WebSocket connections for the browser terminal. Each uses different async I/O primitives that don’t naturally compose.

The integration strategy uses an event-driven publish-subscribe pattern. POX components raise custom events (SRPacketIn, RouterInfo, SRPacketOut) that cross thread boundaries through POX’s event system. The VNS server runs in its own Twisted reactor thread, listening on port 8888 for the router’s connection. When OpenFlow receives a packet, the ofhandler raises an SRPacketIn event that the srhandler catches, serializes into VNS protocol format, and transmits to the C router over TCP.

The router processes the packet, makes routing decisions, and sends it back through the VNS connection. The srhandler receives it, raises an SRPacketOut event, and the ofhandler converts it into an OpenFlow PACKET_OUT message that the switch forwards to the correct port.

Meanwhile, the web layer runs in a separate Python 3 process (since POX requires Python 2 but Flask benefits from Python 3’s async improvements). It spawns Mininet in a pseudo-terminal and uses select() to poll for output, which it forwards over WebSockets to the browser. Input from the browser writes directly to the PTY’s master file descriptor.

The startup sequence is fragile: Open vSwitch must start first, then POX to establish the controller, then Mininet to create the topology and connect to the controller, then the router to register with the VNS server, and finally the web interface. The entrypoint.sh orchestrates this with sleep delays and background job spawning, which is admittedly a code smell—production systems would use health checks and service dependencies.

Lessons Learned

Abstraction Layers Are Contracts, Not Shields: Working across C/Python 2/Python 3, OpenFlow/VNS/raw sockets, and Docker/Mininet/native Linux networking hammered home that abstractions leak. The router crashed mysteriously until I realized Python 2’s buffering was delaying log output, OpenFlow was silently dropping packets with invalid checksums, and Docker’s networking mode wasn’t exposing the privileged capabilities Mininet needed. Understanding your entire stack—from kernel capabilities to language runtimes to protocol specifications—is non-negotiable for infrastructure engineering.

Interactive Tooling Beats Documentation: The automated test suite validates correctness, but the web terminal is what makes the project usable. Being able to type client ping 192.168.2.2, see the ICMP echo request traverse the router in real-time logs, and watch the reply come back transforms abstract algorithms into tangible cause-and-effect. When building systems, invest heavily in observability and interactive debugging surfaces.

Concurrency Primitives Are Implementation Details; State Machines Are Design: I initially tried using locks everywhere in the C router to “be safe” and ended up with deadlocks. The breakthrough came from recognizing that packet processing is a state machine: each packet transitions through validation → routing table lookup → ARP resolution → forwarding. Each stage is independent and stateless. Locks only protect shared structures (the ARP cache, routing table). This mental model eliminated entire classes of bugs and made the code clearer.

Future Directions

The README’s roadmap hints at several logical extensions, but based on the codebase architecture, here are the highest-impact next steps:

Visual SVG Topology Diagram with Live Link Status: The web interface currently only shows the terminal. Adding a visual network topology using D3.js or similar would transform the user experience. The topology could show the three hosts, router interfaces, and switch, with links changing color based on interface status (parsed from the router’s status messages or OpenFlow port states). Users could click on links to bring them down, simulating network partitions.

CI/CD Pipeline with Automated Grading: The testcases.py script runs 11 comprehensive tests that validate ARP, ICMP, routing, traceroute, and HTTP functionality. Currently these run manually. Integrating GitHub Actions to build the Docker image, run all tests, and report scores as status checks would enable regression testing and provide confidence for refactoring. The test outputs already emit JSON results, making parsing straightforward.

Multi-Router Topologies with Dynamic Routing Protocols: The current single-router architecture is intentionally simple for educational purposes. The natural evolution is implementing RIP or OSPF to enable route discovery between multiple routers. This would require extending the router to send/receive routing updates, maintain a routing table that evolves over time, and handle route convergence scenarios. The POX infrastructure already supports arbitrary topologies, so the heavy lifting would be in the C routing protocol implementation.


Closing Thought: This project represents the philosophy that understanding systems requires building them. Network engineering isn’t just about knowing that routers forward packets—it’s about handling the ARP cache invalidation when a host changes its MAC, debugging why traceroute shows asymmetric paths, and recognizing the OpenFlow flow table miss that’s causing 10% packet loss. By building a router from raw sockets to web interface, you don’t just learn networking theory—you internalize the failure modes, race conditions, and design tradeoffs that define production infrastructure.

Try It Out

Check out the live demo or explore the source code on GitHub.

