No single sensor gives a complete, accurate picture of the robot’s state. Odometry drifts over time but is fast. Camera-based landmark detection is accurate but slow and intermittent. A compass gives heading but not position. Sensor fusion combines multiple imperfect sensors to produce an estimate better than any individual sensor could provide. This chapter covers the complementary filter (a simple, effective tool) and the Kalman filter (the principled probabilistic framework).
We want to estimate the robot's state $\mathbf{x}$ (e.g., pose $(x, y, \theta)$). We have two sources of information: a process model (e.g., odometry, which predicts how the state evolves with each control input) and measurements (e.g., landmark observations, which sense some function of the state).
The challenge: the process model accumulates error (odometry drift), and measurements are noisy. Fusion optimally combines both.
A complementary filter is the simplest fusion technique: high-pass filter one source, low-pass filter another, add them. It is most commonly used to fuse a gyroscope (accurate short-term heading changes, but drifts) with a compass (accurate long-term heading, but noisy):
$\theta_{fused} = \alpha \cdot (\theta_{prev} + \dot{\theta}_{gyro} \cdot \Delta t) + (1 - \alpha) \cdot \theta_{compass}$
Typical $\alpha = 0.98$ (mostly trust the gyro’s integral, slightly correct toward compass). The filter parameter $\alpha$ corresponds to a cutoff frequency: the gyro contribution dominates at high frequencies (fast rotations), the compass at low frequencies (drift correction).
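The correspondence between $\alpha$ and the time constant follows the standard first-order relation $\alpha = \tau / (\tau + \Delta t)$. A short sketch (the function name is ours, not a fixed API):

```python
def alpha_from_time_constant(tau, dt):
    """Complementary-filter gain for time constant tau (s) and loop period dt (s)."""
    return tau / (tau + dt)

# At a 100 Hz loop (dt = 0.01 s), alpha = 0.98 corresponds to tau = 0.49 s,
# i.e., a crossover frequency of roughly 1 / (2 * pi * tau) ~ 0.3 Hz.
alpha = alpha_from_time_constant(0.49, 0.01)
```

Below the crossover frequency the compass dominates; above it, the integrated gyro rate dominates.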
For our platform (which lacks a gyroscope), the complementary filter can fuse odometry heading with heading derived from visual landmarks:
```python
import math

alpha = 0.95  # mostly trust odometry; correct slowly toward the visual fix

def wrap_angle(a):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def update_heading(odom_theta, visual_theta_available, visual_theta, dt):
    if not visual_theta_available:
        return odom_theta  # no measurement available: trust odometry alone
    # Blend the angular *error* rather than the raw angles, so the
    # filter behaves correctly across the +/- pi wrap-around.
    error = wrap_angle(visual_theta - odom_theta)
    return wrap_angle(odom_theta + (1 - alpha) * error)
```
The Kalman filter is the optimal linear estimator when the process and measurement models are linear and the process and measurement noise are zero-mean, independent Gaussians.
State estimate $\hat{\mathbf{x}}$ and covariance $\mathbf{P}$ (uncertainty) are propagated through two steps:
Predict:

$\hat{\mathbf{x}}^- = \mathbf{F} \hat{\mathbf{x}} + \mathbf{B} \mathbf{u}$

$\mathbf{P}^- = \mathbf{F} \mathbf{P} \mathbf{F}^T + \mathbf{Q}$

Update (when a measurement $\mathbf{z}$ arrives):

$\mathbf{K} = \mathbf{P}^- \mathbf{H}^T (\mathbf{H} \mathbf{P}^- \mathbf{H}^T + \mathbf{R})^{-1}$

$\hat{\mathbf{x}} = \hat{\mathbf{x}}^- + \mathbf{K}(\mathbf{z} - \mathbf{H}\hat{\mathbf{x}}^-)$

$\mathbf{P} = (\mathbf{I} - \mathbf{K}\mathbf{H})\mathbf{P}^-$
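A minimal scalar example makes the two steps concrete. With $F = H = 1$ and $B = 1$, the matrices collapse to ordinary arithmetic (all numeric values below are illustrative):

```python
def kf_predict(x, P, u, Q):
    """Predict: x <- F x + B u and P <- F P F^T + Q, with F = B = 1."""
    return x + u, P + Q

def kf_update(x, P, z, R):
    """Update: fold in measurement z with noise variance R (H = 1)."""
    K = P / (P + R)          # Kalman gain
    x = x + K * (z - x)      # correct by the weighted innovation
    P = (1 - K) * P          # uncertainty shrinks after a measurement
    return x, P

x, P = 0.0, 1.0                         # initial estimate and variance
x, P = kf_predict(x, P, u=0.1, Q=0.01)  # move ~0.1 m, add process noise
x, P = kf_update(x, P, z=0.2, R=0.05)   # fuse a noisy position fix
```

Because the initial variance is large relative to $R$, the gain is close to 1 and the estimate lands near the measurement; after a few updates $P$ shrinks and the prediction carries more weight.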
The Kalman gain automatically adjusts the blend based on the relative uncertainties: when $\mathbf{R}$ is large (noisy sensor), $\mathbf{K}$ is small (trust the prediction more). When $\mathbf{P}^-$ is large (prediction is uncertain), $\mathbf{K}$ is large (trust the measurement more).
Robot kinematics are nonlinear (the pose update involves $\cos\theta$, $\sin\theta$). The EKF linearises the nonlinear model around the current state estimate using the Jacobian:
$\mathbf{F} = \frac{\partial f}{\partial \mathbf{x}}\bigg|_{\hat{\mathbf{x}}}$
For differential-drive odometry with state $\mathbf{x} = (x, y, \theta)^T$ and controls $(\Delta d, \Delta\theta)$:
$\mathbf{F} = \begin{pmatrix} 1 & 0 & -\Delta d \sin(\theta + \Delta\theta/2) \\ 0 & 1 & \Delta d \cos(\theta + \Delta\theta/2) \\ 0 & 0 & 1 \end{pmatrix}$
The off-diagonal elements capture how heading uncertainty causes positional uncertainty.
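The predict step with this Jacobian can be sketched as follows (function and variable names are ours, not a fixed API):

```python
import numpy as np

def ekf_predict(x, P, delta_d, delta_theta, Q):
    """EKF predict for a differential-drive robot with state (x, y, theta)."""
    th = x[2] + delta_theta / 2.0          # midpoint heading
    # Nonlinear motion model f(x, u)
    x_new = x + np.array([delta_d * np.cos(th),
                          delta_d * np.sin(th),
                          delta_theta])
    # Jacobian F = df/dx evaluated at the current estimate
    F = np.array([[1.0, 0.0, -delta_d * np.sin(th)],
                  [0.0, 1.0,  delta_d * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    P_new = F @ P @ F.T + Q
    return x_new, P_new
```

Note how the third column of `F` leaks heading variance into the $x$ and $y$ entries of `P` on every step: a small heading error steadily inflates positional uncertainty.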
Setting noise covariances. Practical values for a small robot with 1200 CPR encoders:
```python
import numpy as np

# Process noise per step: ~1 cm in x and y, ~0.02 rad in heading
Q = np.diag([0.01**2, 0.01**2, 0.02**2])
# Measurement noise per update: ~3 cm in x and y, ~0.05 rad in heading
R = np.diag([0.03**2, 0.03**2, 0.05**2])
```
EKF SLAM augments the state vector to include both the robot pose and landmark positions:
$\mathbf{x} = (x_r, y_r, \theta_r, x_{L1}, y_{L1}, x_{L2}, y_{L2}, \ldots)^T$
The covariance matrix $\mathbf{P}$ captures cross-correlations between the robot pose and landmark estimates. When a new landmark is observed, the state vector grows; when a known landmark is re-observed, the EKF update corrects both robot and landmark estimates simultaneously.
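State augmentation can be sketched as below. This is a simplified illustration (names are ours): the landmark is initialised from the current pose and a range–bearing observation, and is given a large initial variance rather than propagating the full initialisation Jacobians that a complete EKF SLAM implementation would use.

```python
import numpy as np

def add_landmark(x, P, r, bearing):
    """Append a landmark seen at range r (m) and bearing (rad, robot frame)."""
    xr, yr, th = x[0], x[1], x[2]
    # Landmark position in the world frame, from the current pose estimate
    lx = xr + r * np.cos(th + bearing)
    ly = yr + r * np.sin(th + bearing)
    x_aug = np.concatenate([x, [lx, ly]])
    # Grow P by two rows/columns; existing entries are preserved
    n = P.shape[0]
    P_aug = np.zeros((n + 2, n + 2))
    P_aug[:n, :n] = P
    P_aug[n:, n:] = np.eye(2) * 1e3    # large: landmark not yet well localised
    return x_aug, P_aug
```

Re-observing the landmark later shrinks those large diagonal entries and, through the cross-correlations, tightens the robot pose estimate at the same time.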
For 10 landmarks, the state vector has $3 + 10\times 2 = 23$ elements, and $\mathbf{P}$ is $23 \times 23$ — very tractable. EKF SLAM scales as $O(N^2)$ with the number of landmarks; for environments with fewer than ~100 landmarks it is practical on the Raspberry Pi.
A practical fusion application: combine odometry (continuous, drifting) with periodic ultrasound wall measurements (accurate, intermittent) to improve corridor-following position estimates.
The EKF state is $(x, y, \theta)$. Predict with odometry. Update when the ultrasound measures a known wall: if the robot is in a corridor of known width $W$, and the right-side ultrasound reads $d_r$ m, the distance to the left wall is $W - d_r$. The expected measurement given state $(x, y, \theta)$ is $H\hat{\mathbf{x}} = $ distance from the robot to the right wall, computed from the map geometry.
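A scalar EKF update for this case can be sketched as follows, under an assumed geometry (ours, for illustration): the corridor runs along the $x$-axis with the right wall at $y = 0$, so $y$ is the lateral position and a right-side beam of length $d_r$ satisfies $h(\mathbf{x}) = y / \cos\theta$:

```python
import numpy as np

def ultrasound_update(x, P, d_r, R):
    """Fold one right-side ultrasound reading d_r (m) into the pose EKF."""
    y, th = x[1], x[2]
    h = y / np.cos(th)  # expected beam length to the right wall
    # Measurement Jacobian H = dh/d(x, y, theta); x does not affect h
    H = np.array([[0.0, 1.0 / np.cos(th), y * np.sin(th) / np.cos(th)**2]])
    S = H @ P @ H.T + R            # innovation covariance (1x1)
    K = (P @ H.T) / S              # Kalman gain (3x1)
    x_new = x + (K * (d_r - h)).ravel()
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new
```

Because `H` has a zero in the first column, each update corrects only the lateral position and heading, which is exactly the component that drifts in a corridor.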
This simple fusion significantly reduces lateral drift in corridors without any additional hardware.
With sensing, planning, and estimation in place, Chapter 15 assembles the complete autonomous navigation system.