This chapter assembles every component developed in the preceding fourteen chapters into a complete autonomous navigation system. The mission: deploy the robot in an unknown room, have it build a map, navigate to a specified goal pose, and stop. All without human intervention after the start command.
The navigation pipeline has four layers, running at different update rates:
- **Layer 4 — Mission Control (0.5 Hz):** Behaviour Tree — mission phases
- **Layer 3 — Planning (1–2 Hz):** A* / RRT on the occupancy grid
- **Layer 2 — Localisation (10 Hz):** EKF — odometry + sensor fusion
- **Layer 1 — Control (50 Hz):** PID — wheel speed + heading
Each layer reads from the layer below and sends commands down. Sensor data flows up. The architecture is concurrent: Python threads or asyncio tasks run each layer at its own rate.
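One way to run each layer at its own rate is a thread per layer with a fixed-period tick. The sketch below is illustrative, not the book's implementation: the two stub step functions stand in for the real control and planning layers.

```python
import threading
import time

def run_layer(step_fn, rate_hz, stop_event):
    """Tick step_fn at a fixed rate until stop_event is set."""
    period = 1.0 / rate_hz
    while not stop_event.is_set():
        start = time.monotonic()
        step_fn()
        elapsed = time.monotonic() - start
        # Sleep for the remainder of the period (skip the sleep if the
        # step overran), waking early if the stop event fires.
        stop_event.wait(max(0.0, period - elapsed))

ticks = {"control": 0, "planning": 0}

def control_step():   # stand-in for the 50 Hz PID loop
    ticks["control"] += 1

def planning_step():  # stand-in for the 2 Hz planner
    ticks["planning"] += 1

stop = threading.Event()
threads = [threading.Thread(target=run_layer, args=(control_step, 50.0, stop)),
           threading.Thread(target=run_layer, args=(planning_step, 2.0, stop))]
for t in threads:
    t.start()
time.sleep(0.5)       # let the layers run for half a second
stop.set()
for t in threads:
    t.join()
```

After half a second the 50 Hz layer has ticked far more often than the 2 Hz layer, which is exactly the decoupling the architecture relies on.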
| Component | Chapter | Role in pipeline |
|---|---|---|
| Motor + PID | 2, 4 | Low-level wheel speed control |
| Kinematics | 3 | Convert (v,ω) to wheel speeds |
| Odometry | 5 | Dead-reckoning pose estimate |
| Ultrasound scan | 7 | Map update + obstacle detection |
| Camera / ArUco | 8 | Landmark detection for EKF update |
| EKF | 14 | Fused pose estimate |
| Occupancy grid | 12 | Environment map, updated incrementally |
| A* planner | 10 | Global path in known/mapped areas |
| Potential field | 9 | Local obstacle avoidance |
| Behaviour Tree | 13 | Mission sequencing and mode switching |
```python
config = RobotConfig()
drive = DifferentialDrive(config)
odometry = OdometryTracker(config)
ekf = EKFLocaliser(config)
grid = OccupancyGrid(5.0, 5.0, resolution_m=0.05)
planner = AStarPlanner(grid)
camera = CameraPerception()
mission = MissionBT(drive, ekf, grid, planner, camera)

try:
    mission.run()
finally:
    drive.stop()  # always stop the motors, even if the mission raises
```
The top-level BT (Chapter 13 architecture instantiated):
```text
Sequence [MISSION]
├── Action: hardware_self_test
├── Action: initial_spin_scan          # rotate 360°, build first map estimate
├── Fallback [reach_goal]
│   ├── Sequence [planned_navigation]
│   │   ├── Condition: map_sufficient
│   │   ├── Action: plan_path_to_goal
│   │   └── Action: execute_path
│   └── Action: explore_and_map        # wander reactively, build map
└── Action: report_goal_reached
```
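The Sequence/Fallback semantics can be sketched as plain callables returning a status string — a simplified stand-in for the Chapter 13 node classes, with hypothetical leaf stubs in place of the real actions:

```python
SUCCESS, FAILURE = "success", "failure"

def sequence(children):
    """Tick children in order; fail on the first failure."""
    def tick():
        for child in children:
            if child() == FAILURE:
                return FAILURE
        return SUCCESS
    return tick

def fallback(children):
    """Tick children in order; succeed on the first success."""
    def tick():
        for child in children:
            if child() == SUCCESS:
                return SUCCESS
        return FAILURE
    return tick

log = []

def leaf(name, result=SUCCESS):
    """Stub leaf that records its tick and returns a fixed status."""
    def tick():
        log.append(name)
        return result
    return tick

mission = sequence([
    leaf("hardware_self_test"),
    leaf("initial_spin_scan"),
    fallback([
        sequence([leaf("map_sufficient", FAILURE)]),  # map not ready yet...
        leaf("explore_and_map"),                      # ...so fall back to exploring
    ]),
    leaf("report_goal_reached"),
])
mission()
```

With `map_sufficient` failing, the Fallback routes the tick into `explore_and_map`, and the outer Sequence still finishes with `report_goal_reached` — the same routing the tree above describes.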
map_sufficient checks whether the occupancy grid has enough free-space information around the robot and goal to support planning. If not, the robot falls back to explore_and_map — a Bug-1-style wall-following behaviour that simultaneously builds the occupancy grid.
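A minimal version of the `map_sufficient` check, assuming the grid is exposed as a NumPy array with an `UNKNOWN` sentinel; the cell encoding, window radius, and threshold here are illustrative choices, not the book's:

```python
import numpy as np

UNKNOWN, FREE, OCCUPIED = -1, 0, 1

def map_sufficient(grid, robot_cell, goal_cell, radius=5, min_known=0.6):
    """Return True if enough cells around both the robot and the goal
    have been observed (free or occupied) to make planning worthwhile.

    grid: 2-D array of cell states; radius in cells; min_known is the
    required fraction of observed (non-unknown) cells in each window.
    """
    def known_fraction(cell):
        r, c = cell
        window = grid[max(0, r - radius):r + radius + 1,
                      max(0, c - radius):c + radius + 1]
        return np.mean(window != UNKNOWN)

    return (known_fraction(robot_cell) >= min_known and
            known_fraction(goal_cell) >= min_known)

# Demo: a 5 m × 5 m grid at 5 cm resolution, with one mapped patch
grid = np.full((100, 100), UNKNOWN)
grid[40:60, 40:60] = FREE
ok_near = map_sufficient(grid, (50, 50), (55, 55))  # both inside the patch
ok_far = map_sufficient(grid, (50, 50), (5, 5))     # goal still unexplored
```

When the goal window is still mostly unknown, the check fails and the Fallback drops into `explore_and_map`.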
execute_path steps through the A* waypoints. At each step it checks the ultrasound for new obstacles. If an obstacle appears ahead, the robot stops, marks the obstructing cells as occupied in the grid, and replans from the current EKF pose.
If replanning fails (the goal is now blocked), escalate to the BT parent (Fallback), which falls back to explore_and_map to find an alternative route.
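The waypoint-stepping and replan-or-escalate logic can be sketched as follows; `is_blocked`, `replan`, and `follow_to` are hypothetical callbacks standing in for the ultrasound check, the A* call, and the motion controller:

```python
SUCCESS, FAILURE = "success", "failure"

def execute_path(path, is_blocked, replan, follow_to, max_replans=3):
    """Drive through waypoints; on a detected obstacle, replan from the
    current position, and report FAILURE (escalating to the BT parent)
    if no path remains or replanning keeps failing."""
    queue = list(path)
    replans = 0
    while queue:
        wp = queue[0]
        if is_blocked(wp):
            if replans >= max_replans:
                return FAILURE           # give up: let the Fallback take over
            new_path = replan()          # A* from current pose on updated grid
            if new_path is None:
                return FAILURE           # goal is blocked
            queue = list(new_path)
            replans += 1
            continue
        follow_to(wp)
        queue.pop(0)
    return SUCCESS

# Demo: the first waypoint is blocked, so the robot detours via (1, 1)
visited = []
blocked = {(1, 0)}
result = execute_path(
    [(1, 0), (2, 0)],
    is_blocked=lambda wp: wp in blocked,
    replan=lambda: [(1, 1), (2, 0)],     # detour around the blocked cell
    follow_to=visited.append,
)
```

Returning `FAILURE` is all the escalation needed: the enclosing Fallback sees the failed `planned_navigation` Sequence and ticks `explore_and_map` instead.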
The blended velocity command combines path-following and reactive avoidance:
v_path, omega_path = path_follower.command(ekf.pose, next_waypoint)
v_react, omega_react = potential_field(ekf.pose, scan, goal)
# Weight by obstacle proximity
alpha = obstacle_proximity_weight(scan) # 0 = far, 1 = close
v = (1 - alpha) * v_path + alpha * v_react
omega = (1 - alpha) * omega_path + alpha * omega_react
drive.move(v, omega)
This blending ensures smooth transitions: when obstacles are distant, path following dominates; when they are close, reactive avoidance takes over.
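One plausible shape for `obstacle_proximity_weight`, assuming the scan is a list of range readings in metres; the near/far thresholds are illustrative tuning values, not the book's:

```python
def obstacle_proximity_weight(scan_m, d_near=0.3, d_far=1.0):
    """Map the closest range reading to a blend weight in [0, 1]:
    0 beyond d_far (pure path following), 1 inside d_near (pure
    reactive avoidance), linear in between."""
    d = min(scan_m)
    if d >= d_far:
        return 0.0
    if d <= d_near:
        return 1.0
    return (d_far - d) / (d_far - d_near)

# Demo readings at the far, middle, and near ends of the ramp
w_far = obstacle_proximity_weight([2.0, 1.8])   # nothing close: weight 0
w_mid = obstacle_proximity_weight([0.65])       # halfway up the ramp
w_near = obstacle_proximity_weight([0.1, 1.5])  # obstacle close: weight 1
```

The linear ramp avoids a hard switch between controllers, which would otherwise cause velocity discontinuities at the threshold.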
The Python implementation uses two threads: a fast control/estimation thread (Layers 1–2: PID and EKF updates) and a slower perception/planning thread (Layers 3–4: grid updates, A*, and the behaviour tree).
A threading.Lock protects the shared EKF pose and occupancy grid from concurrent access.
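A sketch of the lock-guarded shared state, assuming the pose is stored as an immutable (x, y, θ) tuple so the getter can return it without copying:

```python
import threading

class SharedState:
    """Pose estimate shared between the control thread (writer) and the
    planning thread (reader), guarded by a single lock."""

    def __init__(self):
        self._lock = threading.Lock()
        self._pose = (0.0, 0.0, 0.0)  # x [m], y [m], theta [rad]

    def update_pose(self, pose):
        with self._lock:
            self._pose = pose

    def get_pose(self):
        with self._lock:
            return self._pose  # tuple is immutable: safe to hand out

state = SharedState()
state.update_pose((1.0, 2.0, 0.5))
```

The occupancy grid needs the same treatment, with the extra caveat that a mutable array must be copied (or processed while holding the lock) before the planner uses it.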
On a Raspberry Pi 4 (4 GB), with a 5 m × 4 m room:
Limitation: 2-D flat-world assumption. The entire system assumes a flat floor. Ramps, thresholds, and multi-floor environments require 3-D maps.
Limitation: single ultrasound. The HC-SR04 sees only one direction per measurement. A ring of sensors (or a lidar) would give much richer map updates.
Extension: dynamic obstacle tracking. Track moving obstacles (people walking by) as separate dynamic objects rather than incorporating them into the static map.
Extension: multi-goal missions. Replace the single-goal BT with a waypoint sequence. This is a simple BT modification.
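The multi-goal extension amounts to wrapping one reach-goal subtree per waypoint in a Sequence. A sketch in the same callable-node style; `reach_goal_factory` is a hypothetical constructor for a per-goal subtree:

```python
def make_waypoint_mission(waypoints, reach_goal_factory):
    """Build a Sequence over per-waypoint reach-goal subtrees:
    tick each in order, failing the mission on the first failure."""
    subtrees = [reach_goal_factory(wp) for wp in waypoints]
    def tick():
        for subtree in subtrees:
            if subtree() == "failure":
                return "failure"
        return "success"
    return tick

# Demo with a stub factory that just records which goals were pursued
reached = []
def reach_goal_factory(wp):
    def tick():
        reached.append(wp)
        return "success"
    return tick

mission = make_waypoint_mission([(1.0, 0.0), (2.0, 1.5)], reach_goal_factory)
result = mission()
```

Each subtree would be the full `reach_goal` Fallback from the mission tree, so every waypoint still gets planning with an exploration fallback.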
Extension: probabilistic lane following. Replace the line-following PID with a particle filter that tracks the line position, handling line loss and intersections probabilistically.
The robot now has everything it needs: closed-loop motor control, a fused pose estimate, an incrementally built map, global planning with reactive local avoidance, and a behaviour tree to sequence the mission.
Each layer is individually testable (the code/ directory contains unit tests for every module) and replaceable: swap the ultrasound for a lidar, add a GPS module, replace A* with RRT* — the interfaces remain the same.
Robotics is an iterative discipline. The first deployment rarely goes as planned. But with this architecture, failures are diagnosable: add a topic logger to each layer, replay the sensor data, and identify where the estimate, plan, or execution diverged from expectation. That is the engineering cycle that turns a prototype into a reliable system.
Good luck, and drive safely.