
Sentinel Fall Detector: Engineering Real-Time Safety With Sensor Fusion and Signal Processing

A deep dive into the architecture and engineering of Sentinel, a real-time fall detection system running on a Raspberry Pi 5 that uses an IMU, Kalman filter, and custom classification logic to deliver intelligent, orientation-aware fall detection with a Qt Quick dashboard for live feedback.

Published

Tue Apr 14 2026

Technologies Used

C++ · Raspberry Pi · IMU · Qt


Beyond the Buzzword: What Sentinel Actually Does

Sentinel Fall Detector is a real-time embedded safety system that runs on a Raspberry Pi 5 and uses an inertial measurement unit to detect when a person has fallen — and, more importantly, when they haven’t. It targets caregivers and healthcare settings where false alarms erode trust and delayed alerts cost lives. The system reads raw accelerometer and gyroscope data at 100 Hz, fuses it through a Kalman filter, runs it through a multi-stage classification pipeline, and renders the device’s live orientation on a Qt Quick dashboard — all within a 10-millisecond processing budget per cycle.

The Problem With “Just Set a Threshold”

Most fall detection approaches start and end with a single idea: if the accelerometer spikes above some G-force value, trigger an alarm. In practice, this is nearly useless. Sitting down in a chair produces a measurable impact. Bumping a table sends a sharp spike. A phone dropped onto a couch registers freefall followed by a sudden stop. Every one of these events looks like a fall to a naive threshold detector, and after a few false alarms, users disable the system entirely.

The real challenge is not detecting that something happened — it is classifying what happened. A genuine fall is a temporal sequence: a period of weightlessness, followed by a high-energy impact, accompanied by a significant change in body orientation. A person sitting down shares some of those characteristics but diverges in others. Solving this requires treating the problem as a time-series classification task, not a single-sample comparison. That framing shaped every architectural decision in Sentinel.

What Sentinel Delivers: Intelligent Detection, Not Just Alerts

  • Orientation-Aware Fall Classification — Rather than reacting to a single acceleration spike, Sentinel compares the user’s current posture against their state from 500 milliseconds ago. A fall that leaves someone horizontal triggers an emergency. An impact where posture remains upright is logged and dismissed. This eliminates the largest category of false positives in threshold-based systems.

  • Predictive Fall Risk Scoring — Sentinel doesn’t wait for a fall to happen. It continuously analyzes the frequency content of the user’s motion, looking for the irregular gait patterns and low-frequency tremors that clinical research associates with elevated fall risk. When instability crosses a threshold, the system issues a warning — giving caregivers time to intervene before an event occurs.

  • Stable Orientation Estimation Under Noise — Raw IMU data is noisy and drifts over time. Sentinel fuses accelerometer and gyroscope readings through a Kalman filter that estimates both the true angle and the sensor’s bias simultaneously, producing orientation data that is both responsive and stable — critical for making reliable classification decisions downstream.

  • Real-Time Visual Feedback — A Qt Quick dashboard renders a 3D model of the device that mirrors its physical orientation in real time, alongside a live G-force meter and risk status indicators. This gives caregivers immediate, intuitive situational awareness without requiring them to interpret raw numbers.

Under the Hood: Architecture and the Reasoning Behind It

The Stack

| Layer | Technology | Role |
| --- | --- | --- |
| Language | C++17 | Deterministic memory, zero-overhead abstractions, direct hardware access |
| UI Framework | Qt 6 Quick (QML) | Declarative UI with native 3D transforms and property binding |
| Signal Processing | Custom Kalman filter, Radix-2 FFT | Sensor fusion and frequency-domain gait analysis |
| Hardware Interface | Linux I2C (/dev/i2c-1) | Direct register-level communication with the LSM6DSO32 IMU |
| Build System | CMake 3.16+ | Cross-platform build configuration with Qt integration |
| Target Platform | Raspberry Pi 5 | Affordable, Linux-capable SBC with I2C and display output |

Why These Choices Matter

C++17 over Python or Java. This system processes sensor data at 100 Hz with a hard 10-millisecond deadline per cycle. That budget includes I2C communication, matrix operations for the Kalman filter, FFT computation, fall classification, and UI updates. Garbage collection pauses or interpreter overhead would introduce jitter that directly degrades classification accuracy. C++17 provides the deterministic timing this domain requires, while features like constexpr if enable compile-time specialization of the matrix library for the exact dimensions the Kalman filter uses — eliminating runtime branching in the hottest loop.

Qt 6 Quick over a web-based dashboard. The UI needs to render 3D rotations at 50 Hz driven by live sensor data on a Raspberry Pi’s display. A browser-based approach would introduce an additional serialization layer, WebSocket latency, and compositor overhead. Qt Quick’s property binding system allows the QML layer to subscribe directly to C++ object properties through the meta-object system, and its scene graph handles 3D transforms natively — no JavaScript animation loop required.

Custom math primitives over Eigen or a general-purpose library. The Kalman filter operates exclusively on 2x2 and 2x1 matrices. A general-purpose linear algebra library would bring thousands of unused headers, longer compile times, and runtime dispatch overhead to select between SIMD paths. A purpose-built template matrix class with compile-time dimension checks and unrolled 2x2 specializations is smaller, faster, and makes the Kalman filter’s mathematical structure explicit in the code.

Solving the Hardest Problem: Temporal Classification on a Tight Budget

The most complex engineering challenge in Sentinel is the fall detection pipeline itself — specifically, the problem of classifying an event that unfolds over time using only a fixed-size sliding window of historical data and no machine learning model.

The system maintains a circular buffer holding the last 500 milliseconds of sensor readings. When the detection logic observes a sustained period of near-weightlessness (acceleration dropping below 0.5 G for more than 50 milliseconds), it enters a “potential fall” state. This is the freefall phase — the moment when a body is accelerating toward the ground and the sensor briefly experiences reduced apparent gravity. A person sitting down in a chair never produces this signature; their descent is controlled and maintains roughly 1 G throughout.

If and only if a potential fall state is active, the system then watches for a high-energy impact — an acceleration spike exceeding 2.8 G. When that impact arrives, the system doesn’t immediately trigger an alarm. Instead, it reaches back into the circular buffer and retrieves the sensor snapshot from 500 milliseconds prior — the “steady state” before the event began — and computes two metrics.

The first is the relative posture change: how much the user’s orientation has shifted compared to where they were half a second ago, measured as the Euclidean distance across roll and pitch deltas. The second is the peak downward velocity, estimated by integrating the world-frame vertical acceleration component over time.

These two values feed into a decision matrix. A hard fall requires both a large posture change (greater than 45 degrees, indicating the person went from upright to horizontal) and high downward velocity. A controlled sit-down shows moderate velocity but minimal orientation shift. A stumble-and-recover shows a posture change but lower velocity. Each combination maps to a distinct classification, and only the genuine fall triggers the emergency alert.

This approach is elegant because it avoids the two failure modes of simpler systems simultaneously: it won’t fire on a sharp impact with no posture change (a bumped table), and it won’t fire on a large posture change with no preceding freefall (rolling over in bed). The temporal sequencing — freefall then impact then posture comparison — creates a conjunction of conditions that is narrow enough to be accurate but broad enough to catch the range of real-world fall dynamics.

A safety timeout resets the potential-fall state if no impact follows within one second, preventing the system from staying armed indefinitely after a momentary sensor glitch.

From Code to Conviction: What This Project Reinforced

Specificity defeats generality in embedded systems. The instinct to reach for a general-purpose library or a flexible architecture is strong, but in resource-constrained real-time systems, every abstraction has a cost measured in microseconds and cache misses. Building purpose-fit components — a matrix class that only handles the dimensions you need, a buffer sized exactly to your analysis window — produces a system where every byte of memory and every CPU cycle is accounted for. The discipline of building only what the problem demands results in software that is both faster and easier to reason about.

The best detection systems are defined by what they ignore. The bulk of the engineering effort in Sentinel went not into detecting falls, but into correctly dismissing non-falls. Product thinking in safety-critical domains means recognizing that a false positive is not a minor inconvenience — it is the mechanism by which users lose trust and disable the system. Designing the decision matrix around rejection criteria first, and detection criteria second, produced a fundamentally more reliable system than optimizing for sensitivity alone.

Sensor fusion is a design philosophy, not just a technique. The Kalman filter is a mathematical tool, but the principle behind it — that combining imperfect information sources yields better estimates than relying on any single source — extended beyond orientation tracking into the fall detection logic itself. The decision matrix fuses posture change, velocity, and temporal sequence rather than relying on any single indicator. Thinking in terms of “what independent signals can I combine?” became a recurring design heuristic throughout the project.

What Comes Next: Evolving From Prototype to Product

Wireless alert delivery. The current system displays alerts on a local dashboard, which requires someone to be watching the screen. The natural next step is pushing fall notifications over Bluetooth Low Energy or Wi-Fi to a caregiver’s phone or a nursing station, transforming Sentinel from a monitoring tool into an active alerting system.

Automated validation infrastructure. The testing scenarios documented in the project are thorough but manual. Building a test harness that replays recorded IMU data through the detection pipeline would enable regression testing across hundreds of scenarios, catch threshold drift during tuning, and provide the confidence needed before deploying updates to a device someone depends on for safety.

On-device learning for personalized baselines. The current thresholds (0.5 G for freefall, 2.8 G for impact, 45 degrees for posture change) are tuned for a general population. Capturing a user’s normal movement patterns during an initial calibration period and adjusting these thresholds to their specific gait and activity profile would reduce false positives further — particularly for users with mobility aids or atypical movement patterns.

Try It Out

Check out the source code on GitHub.
