Project Overview
I built a production-grade network router that implements the Routing Information Protocol (RIP) using low-level C, integrated it with Software-Defined Networking (SDN) through OpenFlow, and packaged everything into an interactive, browser-based simulation environment. This project demonstrates end-to-end system design—from packet-level protocol implementation to user-facing visualization—making complex routing behaviors visible, testable, and reproducible without requiring physical hardware.
The Purpose: Why This Software Matters
Network engineering education and protocol research face a critical challenge: testing routing behaviors is expensive, time-consuming, and risky. Physical routers cost thousands of dollars, lab setups require dedicated space, and configuration errors can disrupt production networks. Students learning networking concepts often rely on static diagrams and theoretical explanations without the hands-on experience that builds true understanding.
For researchers developing new routing algorithms or validating protocol modifications, the barriers are even higher. They need reproducible environments where they can inject failures, observe convergence patterns, and validate correctness across dozens of test scenarios—all without the overhead of managing physical infrastructure.
This project solves that pain point by providing a complete, virtualized network environment that behaves like real hardware but runs entirely in software. Researchers can spin up a three-router topology in seconds, inject link failures with a single click, and watch routing tables converge in real-time. Educators can demonstrate distance-vector routing dynamics visually rather than conceptually. The barrier to experimentation drops from “impossible without a lab” to “click a button in your browser.”
The Solution: A Multi-Layer Network Simulation Platform
The system delivers four core capabilities that work in concert to create a comprehensive learning and testing environment:
- Standards-Compliant Router Implementation: A from-scratch C implementation of RIP routing with full support for ARP resolution, ICMP messaging, IPv4 forwarding with TTL management, and dynamic routing table updates. The router handles real packet processing—validating checksums, performing longest-prefix matching, managing ARP caches with timeout mechanisms, and generating proper ICMP error messages when destinations are unreachable or TTL expires.
- Software-Defined Networking Integration: A POX-based OpenFlow controller manages virtual switches and coordinates packet flow between routers and hosts. This demonstrates the convergence of traditional routing protocols with modern SDN architectures, where control-plane logic (routing decisions) separates cleanly from data-plane operations (packet forwarding).
- Real-Time Interactive Dashboard: A Flask-powered web application provides instant visibility into network behavior through an SVG-based topology visualization that updates link states in real-time. Users can generate traffic (Ping, Traceroute, HTTP requests) between any nodes, inject link failures to test convergence, and interact directly with the Mininet environment through an integrated terminal built with xterm.js.
- Automated Validation Framework: Fifteen comprehensive test cases validate everything from basic connectivity to complex failure scenarios involving multiple simultaneous link failures and convergence timing requirements. Each test programmatically verifies that routing tables converge correctly and traffic follows expected paths.
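To make the longest-prefix matching mentioned above concrete, here is a minimal sketch in C. The struct fields and function names are illustrative, not taken from the actual project source; it assumes contiguous subnet masks, for which a numerically larger mask always means a longer prefix.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical routing-table entry; field names are illustrative. */
struct rt_entry {
    uint32_t dest;            /* network address, host byte order */
    uint32_t mask;            /* subnet mask, host byte order */
    uint32_t gw;              /* next-hop gateway */
    struct rt_entry *next;    /* singly linked table */
};

/* Longest-prefix match: keep the matching entry with the longest mask.
 * For contiguous masks, a longer prefix is a larger unsigned value. */
struct rt_entry *lpm_lookup(struct rt_entry *table, uint32_t dst_ip)
{
    struct rt_entry *best = NULL;
    for (struct rt_entry *e = table; e != NULL; e = e->next) {
        if ((dst_ip & e->mask) == e->dest) {
            if (best == NULL || e->mask > best->mask)
                best = e;
        }
    }
    return best;
}
```

A /16 route thus wins over a /8 route covering the same destination, which is exactly the behavior the test suite exercises when verifying forwarding paths.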
Tech Stack & Architecture
The Stack
Data Plane
- C (ANSI C) - Router core implementation for minimal overhead and precise control over packet structures
- Mininet - Network virtualization platform creating virtual hosts, switches, and links
- Open vSwitch 2.9+ - Production-grade virtual switch with OpenFlow support
Control Plane
- Python 2.7 (POX Framework) - OpenFlow 1.0 controller managing switch behavior and topology discovery
- VNS Protocol (Custom Bridge Module) - Coordinates communication between the C router and Python control plane
Application Layer
- Python 3.x (Flask + Socket.IO) - Real-time web backend with WebSocket communication
- JavaScript + xterm.js - Interactive browser-based terminal and SVG visualization
- Docker + Docker Compose - Containerized deployment ensuring reproducibility
The Decision Matrix
Why C for the Router Core?
When implementing packet-level network protocols, language choice directly impacts performance and expressiveness. I chose C for three critical reasons: First, router performance depends on tight control over memory layout—network packet headers are precisely defined byte structures (Ethernet frames, IP headers, UDP/TCP segments), and C’s struct packing and pointer arithmetic provide exact control without runtime overhead. Second, educational authenticity matters—production routers use C/C++ for their data planes, so this implementation provides realistic exposure to actual networking codebases. Third, debugging packet-level issues requires tools like Wireshark and tcpdump, which naturally integrate with C programs through standard packet capture libraries.
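The struct-packing point can be illustrated with a short sketch. These header layouts follow the standard Ethernet and IPv4 wire formats, but the struct and function names are my own for illustration, not the project's actual definitions:

```c
#include <stdint.h>

/* packed tells the compiler to add no padding, so the struct
 * overlays the raw bytes of a received frame exactly. */
struct eth_hdr {
    uint8_t  dst[6];         /* destination MAC */
    uint8_t  src[6];         /* source MAC */
    uint16_t ethertype;      /* network byte order on the wire */
} __attribute__((packed));

struct ip_hdr {
    uint8_t  ver_ihl;        /* version (4 bits) + header length (4 bits) */
    uint8_t  tos;
    uint16_t total_len;
    uint16_t id;
    uint16_t frag_off;
    uint8_t  ttl;
    uint8_t  protocol;
    uint16_t checksum;
    uint32_t src_ip;
    uint32_t dst_ip;
} __attribute__((packed));

/* Pointer arithmetic walks from one header to the next in the buffer. */
static inline struct ip_hdr *ip_of(uint8_t *frame)
{
    return (struct ip_hdr *)(frame + sizeof(struct eth_hdr));
}
```

Without `packed`, the compiler could insert padding between fields and the struct would no longer line up with the bytes on the wire; this kind of exact byte-level control is what makes C the natural choice for the data plane.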
Why the Python 2.7 / Python 3.x Split?
This dual-Python architecture reflects real-world legacy integration challenges. Mininet and POX both depend on Python 2.7, which reached end-of-life in 2020 but remains essential for many network simulation tools due to their deep dependency chains. Rather than attempting a risky migration or forking upstream projects, I architected the system to isolate the legacy components (control plane) from modern components (web dashboard). The Flask web server runs in Python 3.x, leveraging modern async capabilities through Socket.IO and eventlet for real-time updates, while the POX controller operates in its native Python 2.7 environment. This design demonstrates practical system integration skills—knowing when to bridge incompatible components rather than forcing unnecessary rewrites.
Why Docker with Privileged Mode?
Network virtualization with Mininet requires low-level kernel operations: creating network namespaces, manipulating virtual interfaces, and managing Open vSwitch kernel modules. These operations require Linux capabilities that standard Docker containers don’t provide. Running in privileged mode grants the container necessary permissions to perform network namespace manipulation and virtual switch management. While this trades some isolation for functionality, it’s the correct engineering decision for a development/education tool where reproducibility and ease-of-deployment outweigh the security considerations that would matter in multi-tenant production environments. A single command—docker run—gives users a complete, working network lab that would otherwise require hours of manual configuration across multiple dependencies.
Technical Challenges: Managing Asynchronous Protocol Convergence
One of the most architecturally complex aspects of this system is coordinating timing between three asynchronous subsystems: the C routers exchanging RIP updates, the OpenFlow controller managing switch state, and the web dashboard displaying real-time updates.
The Core Problem: Distance-vector routing protocols like RIP are inherently asynchronous and timing-dependent. When a link fails, routers don’t discover this instantly—they rely on periodic updates (every 30 seconds in standard RIP) and timeout mechanisms. During convergence, routing tables may temporarily contain inconsistent information as updates propagate hop-by-hop through the network. If the web dashboard queries routing state too early, it shows incomplete data. If test cases don’t wait long enough after injecting failures, they report spurious failures caused by incomplete convergence rather than by incorrect routing.
The Architectural Solution: The system implements a three-tier timing coordination strategy. At the lowest level, the C router uses a timeout thread that wakes every second to check for expired ARP cache entries and triggers RIP request/response cycles. The POX controller maintains OpenFlow connection state machines, responding immediately to switch events but coordinating with router startup through a ten-second warmup period that ensures all OpenFlow handshakes complete before routers attempt packet forwarding. At the application level, the automated test framework explicitly sleeps for 30 seconds after topology initialization and after each link state change, allowing sufficient time for RIP convergence before validation begins.
This tiered approach reflects a key insight about distributed systems: you cannot precisely synchronize independent processes running on different schedulers, so instead you design each layer to be eventually consistent and build in explicit synchronization points where correctness requires it. The web dashboard embraces this reality by showing real-time state regardless of convergence status, letting users observe the convergence process itself rather than hiding it behind synchronization barriers.
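The lowest tier of this strategy, the router's once-per-second housekeeping tick, might look like the following sketch. The structure and names are illustrative (the real router's cache layout differs, and locking is omitted here for brevity):

```c
#include <time.h>
#include <unistd.h>

#define ARP_CACHE_TIMEOUT 15   /* seconds; illustrative value */
#define ARP_CACHE_SIZE    32

struct arp_entry {
    time_t added;   /* when the entry was cached */
    int    valid;   /* 1 = live, 0 = free slot */
    /* MAC and IP fields omitted for brevity */
};

static struct arp_entry cache[ARP_CACHE_SIZE];

/* Evict entries older than the timeout; returns how many were dropped. */
int arp_sweep(struct arp_entry *entries, int n, time_t now)
{
    int evicted = 0;
    for (int i = 0; i < n; i++) {
        if (entries[i].valid && now - entries[i].added > ARP_CACHE_TIMEOUT) {
            entries[i].valid = 0;
            evicted++;
        }
    }
    return evicted;
}

/* Body of the timeout thread: wake once per second and sweep the cache.
 * Real code would guard the cache with a mutex; the same tick also
 * drives the periodic RIP request/response cycle. */
void *timeout_thread(void *arg)
{
    (void)arg;
    for (;;) {
        sleep(1);
        arp_sweep(cache, ARP_CACHE_SIZE, time(NULL));
    }
    return NULL;
}
```

Keeping the sweep logic in its own function makes the expiry policy testable without spinning up the thread itself.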
Packet Flow Orchestration: Another subtle challenge involves the interaction between ARP resolution and IP forwarding. When the router needs to forward a packet but doesn’t have the destination’s MAC address cached, it must queue the IP packet, send an ARP request, wait for the ARP reply, then forward the original packet. This requires careful state management—tracking which packets are waiting on which ARP resolutions, implementing timeouts for unanswered ARP requests, and generating appropriate ICMP “destination unreachable” messages when resolution fails. The implementation uses a linked-list queue structure attached to each ARP cache entry, demonstrating classical data structure application in systems programming.
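The per-request packet queue described above can be sketched as follows. The structures and names are hypothetical, chosen to illustrate the technique rather than mirror the project's source:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Each in-flight ARP request carries a linked list of IP packets
 * that are blocked waiting for its reply. */
struct queued_pkt {
    uint8_t           *buf;      /* copy of the raw frame */
    unsigned           len;
    struct queued_pkt *next;
};

struct arp_req {
    uint32_t           ip;         /* address being resolved */
    int                times_sent; /* for retry / give-up accounting */
    struct queued_pkt *pkts;       /* packets blocked on this request */
    struct arp_req    *next;       /* list of outstanding requests */
};

/* Queue a packet on the request for `ip`, creating the request if
 * none exists yet. Returns the request the packet was queued on. */
struct arp_req *arp_queue_packet(struct arp_req **reqs, uint32_t ip,
                                 const uint8_t *buf, unsigned len)
{
    struct arp_req *r;
    for (r = *reqs; r != NULL; r = r->next)
        if (r->ip == ip)
            break;
    if (r == NULL) {                      /* first packet for this IP */
        r = calloc(1, sizeof(*r));
        r->ip = ip;
        r->next = *reqs;
        *reqs = r;
    }
    struct queued_pkt *p = malloc(sizeof(*p));
    p->buf = malloc(len);
    memcpy(p->buf, buf, len);             /* own a copy of the frame */
    p->len = len;
    p->next = r->pkts;
    r->pkts = p;
    return r;
}
```

When the ARP reply arrives, the router walks `pkts` and forwards each queued frame; if `times_sent` exceeds the retry limit, it instead walks the same list generating ICMP "destination unreachable" messages.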
Lessons Learned
Polyglot System Design Requires Thoughtful Boundaries
Building a system that spans C, Python 2.7, Python 3.x, JavaScript, and shell scripting taught me that the hard part isn’t writing code in multiple languages—it’s designing clean interfaces between them. The VNS protocol serves as a binary message boundary between C and Python, using explicit length-prefixed packets with defined message types. The Flask Socket.IO layer provides a JSON-based boundary between Python and JavaScript, using event names as method contracts. Each boundary point is a potential failure mode, so I learned to invest heavily in logging and error handling at crossing points. The principle generalizes: in polyglot systems, spend as much time designing your inter-language contracts as you do writing the language-specific logic.
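Length-prefixed framing of the kind used at the C/Python boundary can be sketched like this. This is an illustrative format, not the actual VNS wire protocol; the point is that both sides agree on an explicit header (total length plus type code, both big-endian) so messages survive the language crossing intact:

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>   /* htonl / ntohl */

/* Illustrative message header: 4-byte total length (including this
 * header) and a 4-byte type code, both in network byte order. */
struct msg_hdr {
    uint32_t len;
    uint32_t type;
} __attribute__((packed));

/* Serialize one message into `out`; returns bytes written. */
unsigned msg_encode(uint8_t *out, uint32_t type,
                    const uint8_t *payload, uint32_t plen)
{
    struct msg_hdr h;
    h.len  = htonl(sizeof(h) + plen);
    h.type = htonl(type);
    memcpy(out, &h, sizeof(h));
    memcpy(out + sizeof(h), payload, plen);
    return sizeof(h) + plen;
}

/* Parse a received header; sets *type, returns the payload length. */
uint32_t msg_decode(const uint8_t *in, uint32_t *type)
{
    struct msg_hdr h;
    memcpy(&h, in, sizeof(h));   /* copy avoids unaligned access */
    *type = ntohl(h.type);
    return ntohl(h.len) - sizeof(h);
}
```

On the Python side, the mirror image is a `struct.unpack("!II", ...)` on the first eight bytes, which is what makes the contract language-neutral.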
Virtualization is Not Just Performance—It’s Reproducibility
Before Docker, running this project required installing specific versions of Mininet, compiling Open vSwitch with correct flags, managing Python 2 and 3 simultaneously, and hoping your kernel supported the necessary network namespaces. Every environment was slightly different, leading to subtle bugs that appeared on some machines but not others. Containerization eliminated an entire class of “works on my machine” problems. But the deeper lesson is that reproducibility is a feature—one that requires architectural decisions from day one. The Dockerfile isn’t just deployment automation; it’s executable documentation of the exact environment the software expects.
User Experience Matters Even for Technical Tools
Early versions of this project required users to open five terminal windows, start services in a specific order, and manually configure IP addresses. Despite implementing a technically correct router, the cognitive load made it nearly unusable for its intended audience. Adding the web dashboard transformed it from a “technically impressive” project into a genuinely useful teaching tool. The integrated terminal eliminated the need to SSH into containers. The visual topology made abstract routing concepts concrete. The one-click fault injection made testing convergence trivial instead of tedious. The lesson: even when building developer tools or educational software, investing in UX design multiplies the value of your technical implementation.
Future Directions
The roadmap reflects both incremental improvements and strategic expansions:
Performance Observability: While the current dashboard shows topology state, it lacks quantitative metrics. Adding real-time graphs for packet rate, routing table size, convergence time, and bandwidth utilization would transform this from a binary “working/not working” tool into a platform for performance analysis. Researchers could compare convergence times across different topologies or measure the overhead of different routing algorithms quantitatively.
Modern Protocol Support: The current implementation uses OpenFlow 1.0 (circa 2009) because of POX’s limitations. Migrating to OpenFlow 1.3+ would enable multi-table pipelines, more flexible matching, and meter-based QoS—features that reflect how modern SDN deployments actually work. This would make the project relevant for industrial SDN training, not just academic protocol study.
RESTful Testing API: The automated test framework currently runs as a monolithic Python script. Exposing network operations (create topology, inject failure, verify path) through a REST API would enable integration with CI/CD pipelines, property-based testing frameworks, and classroom autograders. Students could submit routing implementations that get automatically tested against a suite of failure scenarios, with results returned programmatically.
Cross-Protocol Comparison: Currently, the system only implements RIP. Adding OSPF (a link-state protocol) and potentially BGP (a path-vector protocol) would create a powerful comparative learning environment. Students could watch the same link failure propagate through RIP’s slow hop-by-hop convergence versus OSPF’s near-instant LSA flooding, building intuition about protocol trade-offs that textbooks can only describe abstractly.
Packet Capture Export: While the system can generate PCAP files, there’s no mechanism to export them from the web interface. Adding a download button that packages captured traffic from all interfaces into a single archive would enable offline analysis with Wireshark, creating a bridge between hands-on experimentation and deep packet-level investigation.
Closing Thoughts
This project sits at the intersection of low-level systems programming, distributed protocol design, web development, and network virtualization. It demonstrates that modern software engineering increasingly requires polyglot skills—not just knowing multiple languages, but understanding when and why to use each one, and how to compose them into coherent systems.
For recruiters: this project showcases end-to-end ownership of a complex system, from packet-level bit manipulation in C to user-facing interactive visualization in JavaScript, all deployed in a production-grade containerized environment.
For engineers: the challenges I solved—coordinating asynchronous protocol convergence, bridging incompatible runtime environments, and making technical complexity accessible to non-experts—are the same challenges we face when building production distributed systems at scale.
The code is a router. The real product is a platform that makes learning and experimentation accessible.
Try It Out
Check out the live demo or explore the source code on GitHub.