Chapter 13: Practical Projects

From Theory to Rooftops

You have spent twelve chapters absorbing theory. You understand mesh topologies, LoRa modulation, routing protocols, decentralized applications, security models, and hardware specifications. You can calculate a link budget, configure a BATMAN-adv mesh, pin content on IPFS, and size a solar panel for an off-grid node. Your head is full of knowledge. Your workbench is empty.

This chapter fixes that.

What follows are seven complete, build-it-this-weekend projects that take the concepts from every previous chapter and turn them into working systems you can touch, deploy, and show to your neighbors. Each project includes a bill of materials with realistic costs, step-by-step instructions, configuration details, Python code for automation and monitoring, testing procedures, and ideas for extending the project further. These are not toy demos — they are real deployments that solve real problems, and several of them can be combined into a comprehensive alternative network infrastructure for your community.

A few notes before we begin. Costs are approximate and reflect early 2026 pricing from common sources like Amazon, AliExpress, and Mouser. Prices fluctuate; shop around. Skill levels vary — Project 1 requires basic Linux command-line comfort, while Project 6 involves outdoor construction and electrical wiring. Read through the entire project before purchasing anything. And most importantly: start with one project, get it working, then expand. The temptation to buy hardware for all seven projects at once is strong. Resist it. A working two-node mesh teaches you more than a pile of unopened boxes.

Let us build something.

Project 1: Neighborhood Mesh Network

Overview

This is the flagship project of alternative networking: a multi-node mesh network that connects several houses in a neighborhood, provides local services, and optionally shares a single internet connection among all participants. When you are done, your neighbors will be able to share files, chat, read a community wiki, and browse the web — all through a network that none of you pay a monthly fee for (beyond the shared internet connection, if you choose to have one).

We will build a 3–5 node mesh using BATMAN-adv (the Layer 2 mesh protocol from Chapter 3) running on OpenWrt routers. Layer 2 is the right choice here because it makes the mesh look like one big LAN — devices get IP addresses from a single DHCP server, mDNS discovery works across all nodes, and applications that expect a flat local network work without modification. For a neighborhood of 3–5 nodes, BATMAN-adv’s scalability limitations are irrelevant.

Hardware List

Component | Quantity | Approx. Cost | Notes
GL.iNet GL-MT3000 (Beryl AX) | 3–5 | $70 each | Wi-Fi 6, OpenWrt-based, good range
Outdoor enclosure (IP65) | 1–2 | $15 each | For nodes mounted outside
Ethernet cables (Cat6, various lengths) | 5–10 | $5–15 total | For connecting devices to nodes
Raspberry Pi 4 (4 GB) | 1 | $55 | Gateway + local services server
MicroSD card (32 GB+) | 1 | $10 | For the Raspberry Pi
USB-C power supplies | 3–5 | $10 each | One per router node
Total (3-node setup) | | $300–$350 |
Total (5-node setup) | | $450–$520 |

If budget is tight, substitute the GL-MT3000 with the GL-MT300N-V2 (Mango) at $25 each — you lose 5 GHz and Wi-Fi 6, but the mesh works the same way. For outdoor links between houses that are more than 50 meters apart, consider adding a pair of TP-Link CPE210 or Ubiquiti LiteAP units as dedicated point-to-point backhaul.
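Before buying point-to-point gear, it helps to estimate whether a longer hop will close at all. A back-of-the-envelope link budget sketch using the standard free-space path loss formula (the gain and sensitivity numbers below are illustrative placeholders, not datasheet values):

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB for a clear line-of-sight link."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def link_margin_db(tx_dbm, tx_gain_dbi, rx_gain_dbi, sensitivity_dbm,
                   distance_km, freq_mhz):
    """Received margin above receiver sensitivity; positive means the link closes."""
    rx_dbm = tx_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db(distance_km, freq_mhz)
    return rx_dbm - sensitivity_dbm

# Example: a 200 m inter-house link at 5.8 GHz with modest CPE-class antennas
print(f"FSPL: {fspl_db(0.2, 5800):.1f} dB")
print(f"Margin: {link_margin_db(20, 9, 9, -80, 0.2, 5800):.1f} dB")
```

A margin above roughly 20 dB leaves room for rain, foliage, and multipath; a margin near zero means the link will drop whenever conditions degrade.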

Step-by-Step Build

Step 1: Flash OpenWrt on all routers.

GL.iNet routers ship with OpenWrt-based firmware, but the stock GL.iNet firmware includes proprietary additions. For maximum flexibility, flash pure OpenWrt. Download the latest stable OpenWrt image for your specific model from openwrt.org, access the router’s web interface at 192.168.8.1, navigate to the firmware upgrade page, and upload the OpenWrt sysupgrade image. Wait for the router to reboot. Repeat for every router.

Step 2: Install BATMAN-adv on each node.

SSH into each router (default: ssh root@192.168.1.1 after flashing OpenWrt) and install the required packages:

opkg update
opkg install batctl-full kmod-batman-adv

Step 3: Configure the mesh interface.

On each node, create the BATMAN-adv mesh interface. Edit /etc/config/network to add:

config interface 'bat0'
    option proto 'batadv'
    option routing_algo 'BATMAN_IV'
    option aggregated_ogms '1'
    option bridge_loop_avoidance '1'

config interface 'mesh'
    option proto 'batadv_hardif'
    option master 'bat0'
    option mtu '1536'

Then configure the wireless radio with an 802.11s mesh interface for mesh traffic in /etc/config/wireless (mesh_fwding is set to '0' so that BATMAN-adv, not 802.11s, handles forwarding):

config wifi-iface 'mesh_radio'
    option device 'radio0'
    option network 'mesh'
    option mode 'mesh'
    option mesh_id 'neighborhood-mesh'
    option encryption 'none'
    option mesh_fwding '0'

Apply with wifi reload and service network restart.

Step 4: Set up a bridge for client access.

You want clients (laptops, phones) to connect via Wi-Fi and be on the same Layer 2 network as the mesh. Create a bridge that includes bat0 and a client-facing Wi-Fi access point:

config interface 'lan'
    option type 'bridge'
    option proto 'static'
    # Unique per node: .11, .12, .13, etc. (.1 is reserved for the Pi)
    option ipaddr '10.10.0.X'
    option netmask '255.255.255.0'
    list ifname 'bat0'

Add a regular Wi-Fi access point (not mesh) for clients, using the same SSID on all nodes so devices roam seamlessly between them.

Step 5: Configure DHCP on one node only.

Designate one node (ideally the one with the Raspberry Pi attached) as the DHCP server. On that node, enable dnsmasq with a range like 10.10.0.100 to 10.10.0.250. On all other nodes, disable the DHCP server — since BATMAN-adv creates a Layer 2 mesh, DHCP broadcasts from the server node will reach all clients across the mesh.

Step 6: Set up the Raspberry Pi as a gateway and services server.

Connect the Pi to the designated gateway node via Ethernet. Give it a static IP (e.g., 10.10.0.1). If you want to share an internet connection, connect the Pi’s second network interface (or a USB Ethernet adapter) to your ISP’s router and configure NAT:

sudo iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
sudo iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
sudo iptables -A FORWARD -i eth1 -o eth0 -m state \
    --state RELATED,ESTABLISHED -j ACCEPT
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
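A forgotten ip_forward flag is the most common reason this gateway setup "almost" works, so a quick sanity check is worth scripting. A minimal sketch that reads the standard Linux procfs flag:

```python
def forwarding_enabled(path: str = "/proc/sys/net/ipv4/ip_forward") -> bool:
    """Return True if IPv4 forwarding is enabled in the kernel."""
    try:
        with open(path) as f:
            return f.read().strip() == "1"
    except OSError:
        return False

if __name__ == "__main__":
    state = "enabled" if forwarding_enabled() else "DISABLED (NAT will not work)"
    print(f"IPv4 forwarding: {state}")
```

Note that the echo command above does not survive a reboot; add net.ipv4.ip_forward=1 to /etc/sysctl.conf to make it persistent.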

Step 7: Deploy local services.

On the Raspberry Pi, install the community services exercised in the Testing section below: a file server, a chat service, and a community wiki.

Network Monitoring with Python

Deploy this script on the Raspberry Pi to continuously monitor mesh health:

import subprocess, json, time

def get_mesh_neighbors():
    """Query BATMAN-adv for mesh neighbor status (needs a batctl with JSON support)."""
    result = subprocess.run(
        ["batctl", "meshif", "bat0", "nj"],  # 'nj' = neighbors table as JSON
        capture_output=True, text=True
    )
    if result.returncode != 0:
        return []
    try:
        return json.loads(result.stdout)
    except json.JSONDecodeError:
        return []

def check_mesh_health():
    neighbors = get_mesh_neighbors()
    print(f"[{time.strftime('%H:%M:%S')}] Mesh neighbors: {len(neighbors)}")
    for n in neighbors:
        addr = n.get("hard_ifaddr", "unknown")
        last_seen = n.get("last_seen_msecs", -1)
        print(f"  {addr} — last seen {last_seen} ms ago")
    return len(neighbors)

while True:
    check_mesh_health()
    time.sleep(30)

Testing

Power on all nodes and verify the mesh is formed: run batctl meshif bat0 o on any node to see the originator table — a list of every node in the mesh and the best route to reach it. Every node should appear in every other node’s originator table. Connect a laptop to any node’s client Wi-Fi. Verify you get a DHCP address in the 10.10.0.x range. Ping the Raspberry Pi at 10.10.0.1. Open a browser and access the file server, chat, and wiki. Walk to a different node’s coverage area — your device should roam seamlessly.

Then test resilience: unplug one intermediate node. Verify that the mesh reroutes and connectivity is maintained (assuming an alternative path exists). Plug it back in and confirm it rejoins the mesh within 30–60 seconds.
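To put a number on that failover time, you can sample reachability once a second and summarize the gaps afterward. A sketch (the gateway address matches this project's addressing; the helper names are mine):

```python
import subprocess, time

def ping_once(host: str, timeout_s: int = 1) -> bool:
    """One ICMP echo; True if the host replied."""
    return subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), host],
        capture_output=True
    ).returncode == 0

def summarize_outages(samples):
    """Given [(timestamp, ok), ...], return (start, end) windows where pings failed."""
    outages, start = [], None
    for t, ok in samples:
        if not ok and start is None:
            start = t
        elif ok and start is not None:
            outages.append((start, t))
            start = None
    if start is not None:
        outages.append((start, samples[-1][0]))
    return outages

def watch(gateway: str = "10.10.0.1", seconds: int = 120):
    """Sample reachability once a second, then print each outage's duration."""
    log = []
    for _ in range(seconds):
        log.append((time.time(), ping_once(gateway)))
        time.sleep(1)
    for start, end in summarize_outages(log):
        print(f"Outage: {end - start:.0f}s")
```

Run watch() from a laptop on the mesh, pull the plug on an intermediate node mid-run, and the printed outage duration is your measured failover time.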

Extension Ideas

Project 2: Off-Grid LoRa Communication Network

Overview

When Wi-Fi mesh reaches its range limits — when your community spans kilometers, when terrain blocks line of sight, when power outlets do not exist — LoRa steps in. This project builds an off-grid text messaging and GPS tracking network using Meshtastic firmware on affordable LoRa radio modules. No internet connection. No cell towers. No monthly fees. Just radio waves, bouncing from node to node across a mesh that works in forests, across valleys, and on mountaintops.

Meshtastic turns cheap LoRa hardware into a mesh messaging network. Each node is both an endpoint (you type messages on it) and a relay (it forwards other nodes’ messages). Messages hop across the mesh until they reach their destination — or are broadcast to everyone. The range per hop is typically 1–5 km in obstructed environments and 10–30 km with line of sight. With three or four nodes strategically placed, you can cover an entire rural valley.
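The range-versus-throughput tradeoff behind those numbers is airtime: higher spreading factors reach further, but every message occupies the channel far longer. A sketch of packet airtime using the standard SX127x timing equations (the parameters here are generic LoRa defaults for illustration, not Meshtastic's exact channel presets):

```python
import math

def lora_airtime_ms(payload_bytes, sf=7, bw_hz=125_000,
                    cr=1, preamble=8, crc=True, explicit_header=True):
    """Packet airtime in ms per the Semtech SX127x timing equations."""
    t_sym = (2 ** sf) / bw_hz * 1000                   # symbol time in ms
    de = 1 if (sf >= 11 and bw_hz == 125_000) else 0   # low-data-rate optimize
    h = 0 if explicit_header else 1
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * h
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25 + n_payload) * t_sym

for sf in (7, 9, 12):
    print(f"SF{sf}: {lora_airtime_ms(20, sf=sf):.0f} ms for a 20-byte payload")
```

The same 20-byte message takes roughly 57 ms at SF7 but over 1.3 seconds at SF12, which is why a busy mesh on a long-range preset saturates quickly.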

Hardware List

Component | Quantity | Approx. Cost | Notes
LILYGO T-Beam v1.2 (with GPS) | 3–4 | $30–35 each | ESP32 + SX1262 LoRa + GPS
868/915 MHz LoRa antenna (SMA) | 3–4 | $5–8 each | Match your regional frequency
18650 Li-ion batteries (3500 mAh) | 3–4 | $5 each | One per T-Beam
USB-C cables | 3–4 | $3 each | For charging and flashing
6W solar panel (USB output) | 1–2 | $15 each | For outdoor relay nodes
Weatherproof junction box | 1–2 | $8 each | For outdoor solar nodes
Total (3-node setup) | | $150–$190 |

The LILYGO T-Beam is the standard Meshtastic device — it has the ESP32 microcontroller, an SX1262 LoRa radio, a GPS receiver, an OLED display, and an 18650 battery holder all on one board. Alternatives include the Heltec V3 ($18, no GPS) and the RAK WisBlock ($40, modular and more rugged).

Step-by-Step Build

Step 1: Flash Meshtastic firmware.

Download the Meshtastic firmware flasher from flasher.meshtastic.org (a web-based tool that works in Chrome/Edge). Connect each T-Beam via USB. Select your device type (T-Beam v1.2 SX1262). Click flash. The process takes about 60 seconds per device.

Step 2: Initial configuration via the Meshtastic app.

Install the Meshtastic app on your phone (available for iOS and Android). Power on a T-Beam. It will appear as a Bluetooth device. Pair with it from the app. Configure:

Step 3: Build the solar outdoor node.

Take one T-Beam, set it to ROUTER role, connect a fully charged 18650 battery, and place it in a weatherproof junction box. Run the antenna cable through a waterproof cable gland in the box. Mount the 6W solar panel on top of the box and connect its USB output to the T-Beam’s charging port. The T-Beam in router mode draws approximately 30–50 mA; the 3500 mAh battery alone provides 70+ hours of operation, and the solar panel keeps it running indefinitely in any climate with moderate sunlight.

Mount the outdoor node as high as possible — on a pole, a chimney, a tree, or a rooftop peak. Height is range. A node at 10 meters elevation will easily outperform one at ground level, even with the same antenna and transmit power.
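"Height is range" can be quantified with the radio horizon approximation d ≈ 3.57·√h (distance in km, antenna height in meters, assuming standard 4/3-earth refraction). A quick sketch:

```python
import math

def radio_horizon_km(height_m: float) -> float:
    """Approximate radio horizon using the 4/3-earth model: d = 3.57 * sqrt(h)."""
    return 3.57 * math.sqrt(height_m)

def combined_range_km(h1_m: float, h2_m: float) -> float:
    """Maximum line-of-sight distance between two elevated antennas."""
    return radio_horizon_km(h1_m) + radio_horizon_km(h2_m)

print(f"Node at 10 m: horizon {radio_horizon_km(10):.1f} km")
print(f"10 m relay to 2 m handheld: {combined_range_km(10, 2):.1f} km")
```

A relay at 10 meters can see a handheld at chest height roughly 16 km away over flat terrain, which is why pole height often buys more range than transmit power.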

Step 4: Test range and coverage.

With the outdoor relay node mounted and running, walk or drive with a handheld node. Send test messages every few hundred meters and note where you lose connectivity. Use the GPS tracking feature to create a coverage map — the Meshtastic app shows node positions on a map when GPS is active.

Python Integration for Automated Alerts

Meshtastic provides a Python API that lets you send and receive messages programmatically. This is incredibly powerful for automated monitoring, alerting, and integration with other systems.

import meshtastic
import meshtastic.serial_interface

def send_alert(message, destination="^all"):
    """Send a text message via Meshtastic over serial."""
    iface = meshtastic.serial_interface.SerialInterface()
    iface.sendText(message, destinationId=destination)
    iface.close()

# Example: weather station sends temperature alert
temperature = 38.5  # from a sensor
if temperature > 35:
    send_alert(f"⚠ HEAT ALERT: {temperature}°C at Station-1")

For a more sophisticated setup, you can run a listener that triggers actions based on incoming messages:

import time
from meshtastic.serial_interface import SerialInterface
from pubsub import pub

def log_emergency(sender, text):
    """Append SOS messages to a local log for follow-up."""
    with open("mesh-emergency.log", "a") as f:
        f.write(f"{time.strftime('%F %T')} {sender}: {text}\n")

def on_receive(packet, interface):
    """Handle incoming Meshtastic messages."""
    text = packet.get("decoded", {}).get("text", "")
    sender = packet.get("fromId", "unknown")
    if text:
        print(f"[{sender}]: {text}")
        if "SOS" in text.upper():
            log_emergency(sender, text)

pub.subscribe(on_receive, "meshtastic.receive")
iface = SerialInterface()  # reader runs in a background thread
while True:                # keep the script alive to receive messages
    time.sleep(1)

Testing

Verify that all nodes see each other in the Meshtastic app’s node list. Send a direct message from Node A to Node C — if they are not in direct radio range, the message should hop through Node B (the relay). Check the hop count displayed in the message delivery confirmation. Test the range boundary by moving a handheld node until messages stop being delivered, then back up until they resume. This is your coverage edge.

For GPS tracking, enable position broadcasting on all nodes and verify that each node’s position appears on the map in the Meshtastic app. The default broadcast interval is 15 minutes for router nodes; you can decrease it for testing.

Extension Ideas

Project 3: Emergency Communication Kit

Overview

When disaster strikes — earthquake, wildfire, hurricane, flood, prolonged power outage — commercial communication infrastructure is often the first casualty. Cell towers lose power or backhaul. Internet service drops. Landlines go silent. In exactly the moments when communication is most critical, the tools everyone depends on stop working.

This project builds a portable emergency communication kit that fits in a single Pelican-style case and can be deployed in under 15 minutes. It provides local Wi-Fi connectivity, mesh messaging via LoRa, offline maps, a medical reference database, local file sharing, and enough power to run for 24+ hours on battery alone or indefinitely with the included solar panel. This is the kit you grab when everything else has failed.

Hardware List

Component | Quantity | Approx. Cost | Notes
Pelican 1450 case (or equivalent) | 1 | $80–120 | Waterproof, crushproof
Raspberry Pi 4 (4 GB) | 1 | $55 | Local server
MicroSD card (64 GB) | 1 | $12 | Pre-loaded with offline content
LILYGO T-Beam v1.2 | 2 | $30–35 each | Meshtastic nodes
868/915 MHz antenna (foldable) | 2 | $8 each | Compact for storage
USB Wi-Fi adapter (external antenna) | 1 | $15 | Access point with range
18650 batteries (3500 mAh) | 2 | $5 each | For T-Beams
Portable power station (300 Wh) | 1 | $100–150 | Powers everything
20W folding solar panel | 1 | $30 | Recharges the power station
USB hub (powered) | 1 | $15 | Connects Pi to peripherals
Ethernet cable (short, 0.5 m) | 2 | $3 each | Internal connections
Velcro strips, cable ties, foam padding | — | $15 | Securing components in case
Total | | $430–$540 |

Step-by-Step Build

Step 1: Prepare the Raspberry Pi.

Install Raspberry Pi OS Lite (64-bit) on the SD card. Before first boot, enable SSH by placing an empty file named ssh in the boot partition. Configure Wi-Fi country settings. Boot the Pi and perform initial setup:

sudo apt update && sudo apt upgrade -y
sudo apt install -y hostapd dnsmasq nginx python3-pip

Step 2: Configure the Pi as a Wi-Fi access point.

Use the USB Wi-Fi adapter as an access point so people can connect with phones and laptops. Configure hostapd:

# /etc/hostapd/hostapd.conf
interface=wlan1
driver=nl80211
ssid=EMERGENCY-NET
hw_mode=g
channel=7
wmm_enabled=0
auth_algs=1
wpa=0

Configure dnsmasq to provide DHCP and a captive portal that redirects all web requests to the Pi’s local services page:

# /etc/dnsmasq.conf
interface=wlan1
dhcp-range=192.168.4.10,192.168.4.100,24h
address=/#/192.168.4.1

The address=/#/192.168.4.1 line is the captive portal trick — it resolves all DNS queries to the Pi’s IP, so any URL typed into a browser lands on your local services page.

Step 3: Pre-load offline content.

This is where the kit becomes genuinely useful in an emergency. Download and install:

Step 4: Build the captive portal landing page.

Create a simple HTML page that serves as the hub for all offline services:

<!-- /var/www/html/index.html -->
<h1>🚨 Emergency Network</h1>
<p>You are connected to a local emergency network.</p>
<ul>
  <li><a href="/maps">Offline Maps</a></li>
  <li><a href="/medical">Medical Reference</a></li>
  <li><a href="http://192.168.4.1:8080">File Sharing</a></li>
  <li><a href="/info">Emergency Info</a></li>
</ul>
<p>LoRa mesh messaging available via Meshtastic app.</p>

Step 5: Set up Meshtastic nodes.

Flash and configure two T-Beam nodes as described in Project 2. One stays in the kit as the base station; the other is a field unit that can be handed to a scout, placed on a hilltop, or given to a neighboring shelter. Both should be pre-configured on the same channel with a known pre-shared key.
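Generating the pre-shared key out-of-band lets you configure both nodes identically and keep a copy in the kit's documentation. A sketch (the helper name is mine; Meshtastic channels accept 128- or 256-bit AES keys shared as base64 — check the Meshtastic docs for your firmware version):

```python
import base64, secrets

def generate_channel_psk(bits: int = 256) -> str:
    """Generate a random pre-shared key, base64-encoded for channel settings."""
    if bits not in (128, 256):
        raise ValueError("use 128 or 256 bits")
    return base64.b64encode(secrets.token_bytes(bits // 8)).decode()

psk = generate_channel_psk()
print(f"Channel PSK (store offline, set on every node): {psk}")
```

Print the key on the laminated instruction card along with the channel name, so a replacement node can be provisioned in the field.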

Step 6: Pack the case.

Use foam padding and Velcro to secure all components inside the Pelican case. The Pi, power station, and USB hub should be mounted so the case can be opened and the system powered on without removing anything. The solar panel, antennas, and field Meshtastic node pack in the remaining space. Include a laminated instruction card with step-by-step deployment instructions — in an emergency, the person opening the kit might not be the person who built it.

Deployment Script

Automate the startup sequence with a single Python script that verifies all systems are operational:

import subprocess

def check_service(name, command):
    """Return True if the given shell check succeeds."""
    result = subprocess.run(command, shell=True, capture_output=True)
    status = "✓ UP  " if result.returncode == 0 else "✗ DOWN"
    print(f"  {status}  {name}")
    return result.returncode == 0

print("=" * 40)
print("EMERGENCY KIT SYSTEM CHECK")
print("=" * 40)
checks = [
    ("Wi-Fi AP (hostapd)", "systemctl is-active hostapd"),
    ("DHCP (dnsmasq)", "systemctl is-active dnsmasq"),
    ("Web server (nginx)", "systemctl is-active nginx"),
    ("File server (dufs)", "pgrep -x dufs"),
    ("Meshtastic serial", "ls /dev/ttyUSB0"),
]
passed = sum(check_service(n, c) for n, c in checks)
print(f"\n{passed}/{len(checks)} services operational.")

Testing

Test the full deployment procedure: close the case, hand it to someone who was not involved in building it, give them the laminated instruction card, and time how long it takes to get the system operational. Target: under 15 minutes from opening the case to a working Wi-Fi network with all services accessible.

Connect a phone to the EMERGENCY-NET Wi-Fi. Verify the captive portal redirects to the landing page. Check that offline maps load and are navigable. Open the medical reference and search for a condition. Upload a file through the file server. Send a message via Meshtastic.

Run the system on battery power and measure actual runtime. The 300 Wh power station should run the Pi (7 W) plus USB peripherals (3 W) for approximately 30 hours. With the solar panel deployed in daylight, runtime is indefinite.
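The runtime arithmetic generalizes to any battery and load combination; a small helper (the efficiency factor is my addition, since converters waste some capacity in practice):

```python
def runtime_hours(capacity_wh: float, load_w: float, efficiency: float = 1.0) -> float:
    """Hours of operation from a battery of capacity_wh at a constant load in watts."""
    return capacity_wh * efficiency / load_w

# 300 Wh power station, Pi (7 W) plus USB peripherals (3 W)
print(f"Ideal runtime: {runtime_hours(300, 10):.0f} h")
print(f"At 85% conversion efficiency: {runtime_hours(300, 10, 0.85):.1f} h")
```

Measure your kit's actual draw with a USB power meter and plug the real number in; advertised wattages are usually worst-case.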

Extension Ideas

Project 4: Decentralized File Server

Overview

Centralized file sharing dies when the server dies. Dropbox, Google Drive, OneDrive — they all require internet connectivity to their corporate data centers. On an alternative network, you need a file sharing solution that is content-addressed (files are identified by their content, not their location), distributed (any node with a copy can serve the file), and resilient (no single point of failure destroys access).

IPFS — the InterPlanetary File System we covered in Chapter 8 — is exactly this. This project deploys an IPFS node on a Raspberry Pi, gives it a user-friendly web interface, and adds Python automation for managing pinned content and backups. The result is a file server that your community can use to share documents, photos, videos, and software — with the guarantee that content, once pinned, persists as long as any node in the network has it.
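Content addressing in miniature: an address is derived from the file's bytes, so identical content yields an identical address no matter who adds it, and any change produces a new one. A toy illustration using a plain SHA-256 digest (real IPFS CIDs wrap the hash in multihash and multibase encoding, so they look different from this):

```python
import hashlib

def content_address(data: bytes) -> str:
    """Toy content address: hex SHA-256 digest of the bytes themselves."""
    return hashlib.sha256(data).hexdigest()

a = content_address(b"community meeting notes, week 12")
b = content_address(b"community meeting notes, week 12")
c = content_address(b"community meeting notes, week 13")
print(a == b)  # identical content, identical address
print(a == c)  # one changed byte, a completely different address
```

This is why IPFS needs no central index of file locations: the address itself is the integrity check, and any node holding matching bytes can serve the request.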

Hardware List

Component | Quantity | Approx. Cost | Notes
Raspberry Pi 4 (4 GB or 8 GB) | 1 | $55–$75 | 8 GB recommended for IPFS
MicroSD card (32 GB) | 1 | $10 | For the OS
USB 3.0 SSD (256 GB or larger) | 1 | $25–35 | IPFS datastore — SD is too slow
Ethernet cable | 1 | $5 | Wired connection to network
USB-C power supply (5V 3A) | 1 | $10 | Official Pi PSU recommended
Case with fan | 1 | $10 | IPFS is CPU-intensive
Total | | $115–$145 |

The USB SSD is not optional. IPFS performs constant reads and writes to its datastore, and SD cards will fail within weeks under this workload. Use a real SSD connected via USB 3.0.

Step-by-Step Build

Step 1: Install and configure the OS.

Flash Raspberry Pi OS Lite (64-bit) to the SD card. Boot, update, and mount the USB SSD:

# WARNING: this erases the drive — double-check the device name first
sudo mkfs.ext4 /dev/sda1
sudo mkdir /mnt/ipfs-storage
sudo mount /dev/sda1 /mnt/ipfs-storage
# Add to /etc/fstab for persistence
echo '/dev/sda1 /mnt/ipfs-storage ext4 defaults 0 2' | \
    sudo tee -a /etc/fstab

Step 2: Install IPFS.

Download and install the Kubo IPFS implementation (the reference Go implementation):

wget https://dist.ipfs.tech/kubo/v0.27.0/kubo_v0.27.0_linux-arm64.tar.gz
tar xzf kubo_v0.27.0_linux-arm64.tar.gz
cd kubo && sudo bash install.sh

Initialize the IPFS repository on the SSD:

export IPFS_PATH=/mnt/ipfs-storage/.ipfs
ipfs init --profile=lowpower

The lowpower profile is essential for Raspberry Pi — it reduces connection limits and resource usage. Configure IPFS for LAN operation by editing the config:

ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080
ipfs config Addresses.API /ip4/0.0.0.0/tcp/5001
ipfs config --json Discovery.MDNS.Enabled true

Step 3: Create a systemd service.

So IPFS starts automatically on boot:

# /etc/systemd/system/ipfs.service
[Unit]
Description=IPFS Daemon
After=network.target

[Service]
Environment=IPFS_PATH=/mnt/ipfs-storage/.ipfs
ExecStart=/usr/local/bin/ipfs daemon --enable-gc
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target

Enable and start: sudo systemctl enable --now ipfs.

Step 4: Deploy a web interface.

IPFS comes with a built-in web UI at http://<pi-ip>:5001/webui. This provides file browsing, upload, pinning management, and peer information. For a simpler, more user-friendly front end, build a minimal upload page served by nginx (for the form to work, configure nginx to proxy /api/v0 to the Kubo API on port 5001):

<!-- /var/www/html/upload.html -->
<h1>Community File Sharing (IPFS)</h1>
<form action="/api/v0/add" method="post" enctype="multipart/form-data">
  <input type="file" name="file" />
  <button type="submit">Upload to IPFS</button>
</form>
<p>After uploading, you'll receive a CID. Share this CID with
anyone on the network to let them access your file.</p>

Python Content Manager

This script manages pinned content — the files your node permanently stores and serves:

import subprocess, json, os

IPFS = "/usr/local/bin/ipfs"
ENV = {"IPFS_PATH": "/mnt/ipfs-storage/.ipfs"}

def pin_file(filepath):
    """Add and pin a file to IPFS, return its CID."""
    result = subprocess.run(
        [IPFS, "add", "-Q", "--pin=true", filepath],
        capture_output=True, text=True, env={**os.environ, **ENV}
    )
    cid = result.stdout.strip()
    print(f"Pinned: {filepath} → {cid}")
    return cid

def list_pins():
    """List all pinned content with sizes."""
    result = subprocess.run(
        [IPFS, "pin", "ls", "--type=recursive"],
        capture_output=True, text=True, env={**os.environ, **ENV}
    )
    for line in result.stdout.strip().split("\n"):
        if line:
            print(f"  📌 {line}")

For automated backups, schedule a cron job that pins an entire directory nightly:

import subprocess, os, datetime

BACKUP_DIR = "/srv/community-files"
LOG = "/var/log/ipfs-backup.log"

def backup_directory(directory):
    """Recursively add a directory to IPFS and pin it."""
    result = subprocess.run(
        ["/usr/local/bin/ipfs", "add", "-rQ", "--pin=true", directory],
        capture_output=True, text=True,
        env={**os.environ, "IPFS_PATH": "/mnt/ipfs-storage/.ipfs"}
    )
    cid = result.stdout.strip()
    timestamp = datetime.datetime.now().isoformat()
    with open(LOG, "a") as f:
        f.write(f"{timestamp} | {directory} | {cid}\n")
    return cid

Testing

After starting the IPFS daemon, verify it is running: ipfs id should return your node’s peer ID and addresses. Add a test file: echo "hello mesh" | ipfs add — note the CID. Retrieve it: ipfs cat <CID>. If you have a second IPFS node on the network, verify that adding a file on one node makes it retrievable from the other (IPFS uses mDNS to discover local peers automatically).

Check the web UI at http://<pi-ip>:5001/webui. Upload a file through the interface and verify it appears in the pinned content list. Monitor resource usage with htop — IPFS on a Pi 4 should use 200–500 MB of RAM and intermittent CPU.

Extension Ideas

Project 5: Community Chat Server

Overview

Chat is the killer app for any network. It is the first thing people want when a network exists, and the last thing they want to give up when resources are scarce. For an alternative network, you need a chat system that runs locally, requires no internet connection, supports multiple rooms and private messages, works from any web browser, and — critically — can federate with other servers when internet connectivity is available.

Matrix is the protocol, and Synapse is the reference server implementation. Matrix is an open, decentralized communication protocol designed for exactly the scenario we care about: it works on isolated local networks, it can federate with the global Matrix network when connectivity exists, and it preserves message history across server restarts and network partitions. The client is Element Web, a polished web application that runs in any modern browser.

Hardware List

Component | Quantity | Approx. Cost | Notes
Raspberry Pi 4 (4 GB+) | 1 | $55 | Synapse needs RAM
MicroSD card (32 GB+) | 1 | $10 | OS + database
USB SSD (128 GB+) | 1 | $20 | Database storage
Ethernet cable | 1 | $5 | Wired connection
USB-C power supply | 1 | $10 |
Total | | $100 |

If you built the mesh from Project 1, you can reuse the same Raspberry Pi for both the mesh gateway and the Matrix server. The Pi 4 has enough resources to handle both, provided you have adequate storage.

Step-by-Step Build

Step 1: Install Synapse.

The simplest method on Raspberry Pi OS (the matrix-synapse-py3 package comes from the matrix.org apt repository, which you add first per the Synapse installation docs):

sudo apt install -y matrix-synapse-py3

During installation, provide your server name. For a local-only deployment, use a simple name like chat.local or mesh.community. This name is permanent — it becomes part of every user ID (@alice:chat.local), so choose wisely.

Step 2: Configure Synapse for local network operation.

Edit /etc/matrix-synapse/homeserver.yaml:

server_name: "chat.local"
listeners:
  - port: 8008
    type: http
    resources:
      - names: [client, federation]
enable_registration: true
enable_registration_without_verification: true
database:
  name: sqlite3
  args:
    database: /mnt/ipfs-storage/synapse/homeserver.db

Key settings: enable_registration allows anyone on the network to create an account (appropriate for a trusted local network; disable on public-facing servers). Using SQLite is fine for small deployments (under 100 users); switch to PostgreSQL for larger communities.

Step 3: Deploy Element Web.

Element Web is the flagship Matrix client — a single-page web application that connects to your Synapse server. Download and serve it via nginx:

sudo mkdir -p /var/www/html/element
# Substitute the current release tag for v1.11.x
wget https://github.com/element-hq/element-web/releases/download/v1.11.x/element-v1.11.x.tar.gz
tar xzf element-v1.11.x.tar.gz --strip-components=1 -C /var/www/html/element

Create the Element configuration at /var/www/html/element/config.json:

{
    "default_server_config": {
        "m.homeserver": {
            "base_url": "http://chat.local:8008",
            "server_name": "chat.local"
        }
    }
}

Configure nginx to serve Element and proxy Synapse:

server {
    listen 80;
    server_name chat.local;
    root /var/www/html/element;
    location /_matrix { proxy_pass http://127.0.0.1:8008; }
}

Step 4: Create rooms and configure.

Start Synapse (sudo systemctl start matrix-synapse), open Element in a browser (http://chat.local), and register the first user — this will be the admin. Create community rooms:

Step 5: Set up IRC bridge (optional).

If your community also uses IRC, the matrix-appservice-irc bridge connects Matrix rooms to IRC channels, so messages posted in one appear in the other. This is useful for bridging to ham radio operators who use IRC over packet radio, or to communities that prefer lightweight IRC clients.

Backup and Maintenance Script

import sqlite3, datetime, os

SYNAPSE_DB = "/mnt/ipfs-storage/synapse/homeserver.db"
BACKUP_DIR = "/mnt/ipfs-storage/backups/synapse"
MAX_BACKUPS = 7

def backup_synapse():
    """Create a timestamped, consistent backup of the Synapse database."""
    os.makedirs(BACKUP_DIR, exist_ok=True)
    timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    backup_path = f"{BACKUP_DIR}/synapse_{timestamp}.db"
    # Use SQLite's online backup API so a live database copies consistently
    src = sqlite3.connect(SYNAPSE_DB)
    dst = sqlite3.connect(backup_path)
    with dst:
        src.backup(dst)
    src.close()
    dst.close()
    print(f"Backup created: {backup_path}")
    # Prune old backups, oldest first
    backups = sorted(os.listdir(BACKUP_DIR))
    while len(backups) > MAX_BACKUPS:
        old = os.path.join(BACKUP_DIR, backups.pop(0))
        os.remove(old)
        print(f"Removed old backup: {old}")

backup_synapse()

Testing

Register two user accounts from different devices. Send a message in #general from one account and verify it appears on the other. Test private (direct) messages. Send a file attachment — images, documents, and audio should all work through Element. Restart the Synapse service and verify that all message history is preserved.

If you have internet connectivity, test federation: configure DNS properly (or use .well-known delegation) and verify that users on your server can join rooms on matrix.org. This is optional for local-only deployments but valuable when connectivity is available.

Extension Ideas

Project 6: Solar-Powered Mesh Node

Overview

The ultimate alternative network node is one that requires no grid power, no shelter, and no regular maintenance. Mount it on a pole, aim it at the network, and walk away. It runs on sunlight, communicates via mesh, and reports its own health via LoRa telemetry. This project combines the electrical engineering of solar power systems with the networking knowledge from previous chapters to build a completely autonomous mesh node that can be deployed in remote locations.

This is the most technically demanding project in this chapter. It involves electrical wiring, weatherproofing, mechanical mounting, and careful power budgeting. But the result — a self-sustaining network node that extends your mesh into areas where running power cables is impossible — is worth the effort.

Hardware List

Component | Quantity | Approx. Cost | Notes
GL.iNet GL-MT3000 or similar OpenWrt router | 1 | $70 | Wi-Fi mesh node
LILYGO T-Beam v1.2 | 1 | $35 | LoRa telemetry
30W monocrystalline solar panel | 1 | $30–40 | Sized for ~7W continuous load
MPPT solar charge controller (e.g., Victron 75/10) | 1 | $60 | Efficient charging
LiFePO4 battery (12V 12Ah) | 1 | $45–65 | 144 Wh; see the sizing discussion below
12V to 5V USB buck converter (dual output) | 1 | $8 | Powers router and T-Beam
Weatherproof junction box (IP67, large) | 1 | $20–30 | Houses all electronics
Cable glands (PG7, PG9, PG11) | 6–8 | $8 pack | Weatherproof cable entry
Outdoor Ethernet cable (UV-rated) | 10 m | $10 | If wired backhaul needed
Mounting pole or bracket | 1 | $15–30 | Rooftop or standalone
LoRa antenna (fiberglass, outdoor) | 1 | $15 | High-gain for telemetry
Wi-Fi antenna (outdoor omnidirectional) | 1–2 | $12 each | Better range than internal
Stainless steel hardware, zip ties, sealant | — | $15 | Mounting and weatherproofing
Total | | $360–$420 |

Solar Power System Design

The power budget determines everything. Calculate it wrong, and your node dies on the third cloudy day.

Load calculation:

| Device | Power Draw | Daily Consumption |
| --- | --- | --- |
| GL-MT3000 router | 5–7 W | 144 Wh (at 6 W × 24 h) |
| T-Beam (router mode) | 0.2 W | 4.8 Wh |
| Buck converter losses (~10%) | 0.7 W | 16.8 Wh |
| **Total** | **~7 W** | **~166 Wh/day** |

Battery sizing: For 2 days of autonomy (surviving two consecutive fully overcast days), you need 332 Wh of usable capacity. LiFePO4 batteries can safely be discharged to about 20% state of charge (80% depth of discharge), so the total battery capacity needed is $332 \div 0.8 = 415$ Wh. A 12V 12Ah LiFePO4 battery provides 144 Wh — so you could use three of those, or one 12V 35Ah battery (420 Wh). For most climates, a single 12V 20Ah LiFePO4 (240 Wh, about 192 Wh usable) provides roughly 1.2 days of autonomy, which is acceptable if your area gets reasonable sun.

Solar panel sizing: You need to replace 166 Wh per day. With an average of 4 peak sun hours (conservative for temperate climates) and 80% system efficiency, the panel needs to produce $166 \div (4 \times 0.8) = 52$ watts. A 30W panel under those same conditions yields only about $30 \times 4 \times 0.8 = 96$ Wh/day, a daily deficit the battery can absorb for a while but not indefinitely; it breaks even only in sunnier months with roughly 7 peak sun hours per day. For truly reliable year-round autonomous operation, use a 50W panel.

def design_solar_system(load_watts, autonomy_days=2,
                        sun_hours=4, dod=0.8, eff=0.8):
    """Calculate solar power system requirements."""
    daily_wh = load_watts * 24
    battery_wh = (daily_wh * autonomy_days) / dod
    panel_watts = daily_wh / (sun_hours * eff)
    battery_ah_12v = battery_wh / 12

    print(f"Load: {load_watts}W continuous ({daily_wh} Wh/day)")
    print(f"Battery: {battery_wh:.0f} Wh ({battery_ah_12v:.0f} Ah @12V)")
    print(f"Solar panel: {panel_watts:.0f}W minimum")
    return {"battery_wh": battery_wh, "panel_watts": panel_watts}

# Design for our mesh node
design_solar_system(load_watts=7, sun_hours=4)

Step-by-Step Build

Step 1: Prepare the enclosure.

Drill holes for cable glands in the bottom of the junction box (never the top or sides — water finds every hole). Install cable glands for: solar panel cable (2 conductors), Wi-Fi antenna pigtail(s), LoRa antenna pigtail, and optionally Ethernet. Apply silicone sealant around each gland for extra protection.

Step 2: Wire the power system.

Connect the solar panel to the MPPT charge controller’s input terminals. Connect the LiFePO4 battery to the controller’s battery terminals. Connect the 12V-to-5V buck converter to the controller’s load terminals (or directly to the battery, depending on the controller model). The buck converter provides two USB outputs — one for the router, one for the T-Beam.

Critical safety note: LiFePO4 batteries require a charge controller with a LiFePO4 profile. Using a lead-acid profile will overcharge and potentially damage the battery. Verify the charge controller settings before connecting the battery.
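As a sanity check before connecting the battery, you can encode your controller's setpoints and compare them against typical LiFePO4 limits. This is a sketch: the voltage ranges below are generic assumptions for a 12V (4-cell) pack, and the `check_charge_profile` helper and its key names are inventions for illustration; always verify against your battery's datasheet.

```python
# Typical setpoint ranges for a 12V (4-cell) LiFePO4 pack.
# Generic assumptions; verify against your battery's datasheet.
LIFEPO4_LIMITS = {
    "absorption_v": (14.2, 14.6),          # bulk/absorption charge voltage
    "float_v": (13.4, 13.8),               # float/standby voltage
    "low_voltage_cutoff_v": (10.5, 12.0),  # load disconnect
}

def check_charge_profile(settings):
    """Return warnings for any setpoint outside typical LiFePO4 ranges."""
    warnings = []
    for key, (lo, hi) in LIFEPO4_LIMITS.items():
        v = settings.get(key)
        if v is None:
            warnings.append(f"{key} not set")
        elif not lo <= v <= hi:
            warnings.append(f"{key}={v}V outside {lo}-{hi}V")
    return warnings

# A lead-acid style absorption voltage trips the check:
print(check_charge_profile(
    {"absorption_v": 14.7, "float_v": 13.6, "low_voltage_cutoff_v": 11.0}))
```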

Step 3: Configure the router.

Pre-configure the GL-MT3000 with OpenWrt, BATMAN-adv, and the mesh settings from Project 1 before installing it in the enclosure. Connect external Wi-Fi antenna pigtails (using U.FL to SMA adapters if the router’s internal antennas are not externally accessible). Verify mesh connectivity with a temporary setup before final assembly.

Step 4: Configure LoRa telemetry.

Flash the T-Beam with Meshtastic and set it to ROUTER mode. This node serves dual duty: it relays Meshtastic messages across the LoRa mesh and it reports the solar node’s health (battery voltage, solar input, temperature) as telemetry. Meshtastic’s built-in device metrics (battery level, voltage, channel utilization) are broadcast automatically.

Step 5: Final assembly and weatherproofing.

Mount all components inside the enclosure using standoffs, Velcro, or DIN rail (if the box is large enough). Route cables neatly. Double-check all connections. Close the enclosure and test for water-tightness by spraying it with a garden hose for five minutes — if any water gets in, fix the seals before deploying.

Step 6: Mount and deploy.

Mount the enclosure on the pole or bracket. Mount the solar panel above it, angled toward the equator (south in the Northern Hemisphere, north in the Southern Hemisphere) at an angle roughly equal to your latitude. Aim Wi-Fi antennas toward the rest of the mesh network. Verify mesh connectivity and LoRa telemetry from your base location.
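The tilt rule of thumb above can be captured in a few lines. The seasonal ±15° offsets are a common convention for fixed mounts, not a precise optimum, and `panel_tilt` is a hypothetical helper:

```python
def panel_tilt(latitude_deg, season="year-round"):
    """Rule-of-thumb fixed-mount tilt in degrees from horizontal.

    year-round: tilt = |latitude|; winter: +15 deg (lower sun);
    summer: -15 deg (higher sun). Clamped to the 0-90 degree range.
    """
    offsets = {"year-round": 0, "winter": 15, "summer": -15}
    return max(0, min(90, abs(latitude_deg) + offsets[season]))

print(panel_tilt(47))            # temperate-latitude year-round tilt
print(panel_tilt(47, "winter"))  # steeper for low winter sun
```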

Remote Monitoring via LoRa

Since the node may be deployed where you cannot easily access it, use the T-Beam’s LoRa link for remote health monitoring:

import time

from meshtastic.serial_interface import SerialInterface
from pubsub import pub

def on_telemetry(packet, interface):
    """Monitor remote solar node health via LoRa telemetry."""
    telemetry = packet.get("decoded", {}).get("telemetry", {})
    if telemetry:
        metrics = telemetry.get("deviceMetrics", {})
        battery = metrics.get("batteryLevel", -1)
        voltage = metrics.get("voltage", 0)
        node_id = packet.get("fromId", "?")
        print(f"Node {node_id}: battery={battery}%, voltage={voltage}V")
        if battery < 20:
            print(f"  ⚠ LOW BATTERY WARNING for {node_id}!")

pub.subscribe(on_telemetry, "meshtastic.receive")
iface = SerialInterface()

# Keep the script alive; telemetry arrives via the pubsub callback.
try:
    while True:
        time.sleep(60)
except KeyboardInterrupt:
    iface.close()

Testing

Before mounting at the permanent location, run a 72-hour bench test. Place the complete system (panel, battery, controller, router, T-Beam) outdoors and monitor power consumption, battery state of charge through a full day/night cycle, and mesh connectivity. Verify that the battery charges fully during the day and the node survives the night without dropping below 40% state of charge.
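During the bench test you will mostly be reading battery voltage, not state of charge directly. A rough resting-voltage lookup helps translate between the two; the values below are approximate for a 12V LiFePO4 pack, and because the LiFePO4 discharge curve is very flat, treat the result as a broad band rather than a measurement:

```python
# Approximate resting-voltage -> state-of-charge for a 12V LiFePO4 pack.
# The curve is very flat between ~20% and ~90%, so these are rough bands.
SOC_TABLE = [  # (minimum resting voltage, approx. SoC %)
    (13.4, 100), (13.3, 90), (13.2, 70), (13.1, 40),
    (13.0, 30), (12.9, 20), (12.8, 10), (10.0, 0),
]

def estimate_soc(resting_voltage):
    """Return an approximate state of charge (%) for a resting pack."""
    for v, soc in SOC_TABLE:
        if resting_voltage >= v:
            return soc
    return 0

print(estimate_soc(13.25))  # mid-band reading
```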

After deployment, monitor for two weeks. Check LoRa telemetry daily. Watch for battery voltage trends — a steadily declining average voltage over several days indicates insufficient solar input and requires either a larger panel or reduced power consumption (e.g., scheduling the Wi-Fi radio to turn off during low-usage hours).
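The "steadily declining average voltage" check can be automated by fitting a least-squares slope to daily average voltages pulled from your telemetry log. A minimal sketch; the alert threshold is an assumption you will want to tune for your climate:

```python
def voltage_slope(daily_avg_voltages):
    """Least-squares slope, in volts per day, of daily average voltages."""
    n = len(daily_avg_voltages)
    mean_x = (n - 1) / 2
    mean_y = sum(daily_avg_voltages) / n
    num = sum((x - mean_x) * (y - mean_y)
              for x, y in enumerate(daily_avg_voltages))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

week = [13.25, 13.21, 13.18, 13.12, 13.09, 13.03, 12.98]
slope = voltage_slope(week)
if slope < -0.03:  # tunable: roughly 0.2V lost per week
    print(f"Declining trend: {slope:.3f} V/day; check panel and wiring")
```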

Extension Ideas

Project 7: Network Monitoring Dashboard

Overview

A network you cannot see is a network you cannot fix. As your alternative network grows beyond two or three nodes, you need a centralized monitoring dashboard that tells you, at a glance, which nodes are up, how they are performing, and when something goes wrong. This project builds a real-time monitoring dashboard using Python and FastAPI that collects metrics from your mesh nodes, visualizes them with charts, and sends alerts when nodes fail.

This dashboard is designed to monitor the mesh network from Project 1, the LoRa network from Project 2, and the solar node from Project 6 — but its architecture is flexible enough to monitor anything that can be pinged or queried.

Hardware

No new hardware required. This runs on any existing computer on your network — the Raspberry Pi from Project 1, a spare laptop, or any Linux machine. The only requirement is Python 3.9+ and network access to the nodes you want to monitor.

Architecture

The dashboard consists of three components:

  1. Collector: A background process that periodically pings nodes, queries BATMAN-adv stats, and records Meshtastic telemetry. Data is stored in a SQLite database.
  2. API: A FastAPI server that exposes the collected data as JSON endpoints.
  3. Frontend: A simple HTML/JavaScript page that fetches data from the API and renders charts using Chart.js.

Step-by-Step Build

Step 1: Set up the project.

mkdir -p mesh-monitor/{static,templates}
cd mesh-monitor
pip install fastapi uvicorn jinja2 aiohttp aiosqlite

Step 2: Build the data collector.

# collector.py — runs as a background service
import asyncio, aiosqlite, subprocess, time

DB_PATH = "mesh_monitor.db"
NODES = {
    "gateway": "10.10.0.1",
    "node-2": "10.10.0.2",
    "node-3": "10.10.0.3",
    "node-4": "10.10.0.4",
}
INTERVAL = 60  # seconds between checks

async def init_db():
    async with aiosqlite.connect(DB_PATH) as db:
        await db.execute("""CREATE TABLE IF NOT EXISTS metrics (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            timestamp REAL, node TEXT, 
            is_up INTEGER, latency_ms REAL
        )""")
        await db.commit()

The core monitoring function pings each node and records the results:

async def ping_node(host, count=3):
    """Ping a node and return (is_up, avg_latency_ms)."""
    proc = await asyncio.create_subprocess_exec(
        "ping", "-c", str(count), "-W", "2", host,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE
    )
    stdout, _ = await proc.communicate()
    if proc.returncode != 0:
        return False, 0.0
    # Parse average latency from ping output
    for line in stdout.decode().split("\n"):
        if "avg" in line:
            avg = float(line.split("/")[4])
            return True, avg
    return True, 0.0

The collection loop runs indefinitely, writing metrics to SQLite:

async def collect_loop():
    await init_db()
    while True:
        async with aiosqlite.connect(DB_PATH) as db:
            for name, ip in NODES.items():
                is_up, latency = await ping_node(ip)
                await db.execute(
                    "INSERT INTO metrics VALUES (NULL,?,?,?,?)",
                    (time.time(), name, int(is_up), latency)
                )
                status = f"{latency:.1f}ms" if is_up else "✗ DOWN"
                print(f"  {name} ({ip}): {status}")
            await db.commit()
        await asyncio.sleep(INTERVAL)

if __name__ == "__main__":
    asyncio.run(collect_loop())

Step 3: Build the FastAPI server.

# server.py — the API and web frontend
from fastapi import FastAPI, Request
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates
import aiosqlite, time

app = FastAPI(title="Mesh Network Monitor")
app.mount("/static", StaticFiles(directory="static"), name="static")
templates = Jinja2Templates(directory="templates")
DB_PATH = "mesh_monitor.db"

@app.get("/")
async def dashboard(request: Request):
    return templates.TemplateResponse("dashboard.html",
                                      {"request": request})

The API endpoints expose node status and historical metrics:

@app.get("/api/status")
async def node_status():
    """Get current status of all nodes."""
    async with aiosqlite.connect(DB_PATH) as db:
        db.row_factory = aiosqlite.Row
        rows = await db.execute_fetchall("""
            SELECT node, is_up, latency_ms, timestamp 
            FROM metrics WHERE id IN (
                SELECT MAX(id) FROM metrics GROUP BY node
            )
        """)
    return [dict(r) for r in rows]

@app.get("/api/history/{node}")
async def node_history(node: str, hours: int = 24):
    """Get historical metrics for a specific node."""
    since = time.time() - (hours * 3600)
    async with aiosqlite.connect(DB_PATH) as db:
        db.row_factory = aiosqlite.Row
        rows = await db.execute_fetchall(
            "SELECT * FROM metrics WHERE node=? AND timestamp>?",
            (node, since)
        )
    return [dict(r) for r in rows]
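A natural next endpoint is uptime percentage over a window. Keeping the aggregation in a plain function makes it easy to test before wiring it into FastAPI; `summarize_metrics` is a sketch, not part of the project as written, and its row shape mirrors the `metrics` table columns:

```python
def summarize_metrics(rows):
    """Summarize (timestamp, node, is_up, latency_ms) rows for one node."""
    rows = list(rows)
    if not rows:
        return {"uptime_pct": None, "avg_latency_ms": None}
    up = [r for r in rows if r[2]]
    uptime = round(100.0 * len(up) / len(rows), 1)
    avg = round(sum(r[3] for r in up) / len(up), 1) if up else None
    return {"uptime_pct": uptime, "avg_latency_ms": avg}

print(summarize_metrics([
    (0, "gateway", 1, 12.0),
    (60, "gateway", 1, 18.0),
    (120, "gateway", 0, 0.0),
]))  # {'uptime_pct': 66.7, 'avg_latency_ms': 15.0}
```

An `/api/uptime/{node}` route would then just fetch the window's rows and return this dict.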

Step 4: Build the dashboard frontend.

Create the HTML dashboard with Chart.js for visualization:

<!-- templates/dashboard.html -->
<!DOCTYPE html>
<html>
<head>
    <title>Mesh Network Monitor</title>
    <script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
    <style>
        body { font-family: monospace; background: #1a1a2e; color: #eee; }
        .node { display: inline-block; margin: 10px; padding: 15px;
                border-radius: 8px; min-width: 150px; }
        .up { background: #16213e; border: 2px solid #0f3; }
        .down { background: #16213e; border: 2px solid #f33; }
        h1 { text-align: center; color: #0f3; }
    </style>
</head>
<body>
    <h1>📡 Mesh Network Monitor</h1>
    <div id="nodes"></div>
    <canvas id="latencyChart" height="100"></canvas>
    <script>
        async function update() {
            const resp = await fetch('/api/status');
            const nodes = await resp.json();
            let html = '';
            for (const n of nodes) {
                const cls = n.is_up ? 'up' : 'down';
                const icon = n.is_up ? '🟢' : '🔴';
                html += `<div class="node ${cls}">
                    ${icon} <strong>${n.node}</strong><br>
                    ${n.is_up ? n.latency_ms.toFixed(1)+'ms' : 'OFFLINE'}
                </div>`;
            }
            document.getElementById('nodes').innerHTML = html;
        }
        // Minimal latency chart for the 'gateway' node (the name matches
        // the collector's NODES table; change it to chart another node).
        async function drawChart() {
            const resp = await fetch('/api/history/gateway?hours=4');
            const rows = await resp.json();
            new Chart(document.getElementById('latencyChart'), {
                type: 'line',
                data: {
                    labels: rows.map(r =>
                        new Date(r.timestamp * 1000).toLocaleTimeString()),
                    datasets: [{
                        label: 'gateway latency (ms)',
                        data: rows.map(r => r.latency_ms),
                        borderColor: '#0f3',
                        tension: 0.2
                    }]
                }
            });
        }
        update();
        drawChart();
        setInterval(update, 30000);
    </script>
</body>
</html>

Step 5: Add alerting.

When a node goes down, you want to know immediately. Add an alert system that can notify via Meshtastic, email, or a simple log:

# alerts.py — alert on node failures
import time

ALERT_COOLDOWN = 300  # seconds between repeated alerts
last_alert = {}

def check_and_alert(node, is_up, send_func):
    """Alert if a node is down, with cooldown to avoid spam."""
    now = time.time()
    if not is_up:
        last = last_alert.get(node, 0)
        if now - last > ALERT_COOLDOWN:
            msg = f"🔴 ALERT: Node '{node}' is DOWN at {time.ctime()}"
            send_func(msg)
            last_alert[node] = now
            return True
    else:
        if node in last_alert:
            msg = f"🟢 RECOVERED: Node '{node}' is back UP"
            send_func(msg)
            del last_alert[node]
    return False

Step 6: Run everything.

Start the collector and API server as separate processes:

# Terminal 1: start the collector
python collector.py &

# Terminal 2: start the web dashboard
uvicorn server:app --host 0.0.0.0 --port 8000

For production deployment, use systemd services for both processes so they start on boot and restart on failure.
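A minimal collector unit might look like the following; the paths and user are assumptions to adjust for your install, and an analogous unit wrapping the `uvicorn` command covers the server:

```ini
# /etc/systemd/system/mesh-collector.service  (hypothetical paths)
[Unit]
Description=Mesh monitor collector
Wants=network-online.target
After=network-online.target

[Service]
User=pi
WorkingDirectory=/opt/mesh-monitor
ExecStart=/usr/bin/python3 collector.py
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now mesh-collector`.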

Testing

Open http://<server-ip>:8000 in a browser. Verify all monitored nodes appear with green indicators. Deliberately power off one node and wait one monitoring cycle (60 seconds). The dashboard should show the node in red. Verify that an alert is generated. Power the node back on and confirm the recovery notification appears.

Test the historical charts: leave the system running for several hours, then query the API at /api/history/gateway?hours=4 and verify that data points are present for every monitoring interval. The Chart.js visualization should show a clean latency graph.

Extension Ideas

Combining Projects: The Complete Community Network

These seven projects are designed to work together. Here is how they combine into a comprehensive community alternative network:

The backbone is Project 1’s BATMAN-adv mesh, connecting homes across the neighborhood with seamless Wi-Fi roaming and shared internet access. Project 6’s solar nodes extend the mesh into areas where grid power is unavailable — hilltop relay points, community gardens, trailheads.

Communication is handled by Project 5’s Matrix server for daily chat and Project 2’s LoRa mesh for off-grid and long-range messaging. The LoRa network also provides telemetry from remote solar nodes, feeding data into Project 7’s monitoring dashboard.

Services include Project 4’s IPFS node for decentralized file sharing and Project 5’s community chat. A local wiki, map server, and file sharing portal run on the mesh gateway’s Raspberry Pi.

Resilience comes from Project 3’s emergency kit — a grab-and-go case that can bootstrap a minimal network anywhere, connect to the existing mesh via LoRa, and provide critical services even if the main network is damaged.

Visibility is provided by Project 7’s monitoring dashboard, which tracks the health of every Wi-Fi node, every LoRa device, and every solar power system from a single web interface.

┌─────────────────────────────────────────────────────────┐
│                  COMMUNITY NETWORK                       │
│                                                          │
│  ┌─────────┐    ┌─────────┐    ┌─────────┐              │
│  │ Node 1  │◄──►│ Node 2  │◄──►│ Node 3  │  Wi-Fi Mesh  │
│  │(Gateway)│    │ (Home)  │    │ (Solar) │  (BATMAN-adv)│
│  └────┬────┘    └─────────┘    └────┬────┘              │
│       │                             │                    │
│  ┌────┴────┐                   ┌────┴────┐              │
│  │ Rasp Pi │                   │ T-Beam  │  LoRa Mesh   │
│  │ Matrix  │                   │ Relay   │  (Meshtastic)│
│  │  IPFS   │                   └────┬────┘              │
│  │Dashboard│                        │                    │
│  └─────────┘                   ┌────┴────┐              │
│                                │ T-Beam  │              │
│  ┌──────────────┐              │ Handheld│              │
│  │ Emergency Kit│◄── LoRa ───►└─────────┘              │
│  │  (Pelican)   │                                        │
│  └──────────────┘                                        │
└─────────────────────────────────────────────────────────┘

The total cost for the complete system — all seven projects — is approximately $1,500–$2,000, depending on hardware choices and quantities. Split across a neighborhood of 5–10 households, that is $150–$400 per household for a resilient, community-owned communication infrastructure that works with or without the commercial internet. Compare that to a single year of ISP bills.

Final Notes and Best Practices

Start small. Build Project 1 (mesh) or Project 2 (LoRa) first. Get it working. Live with it for a few weeks. Then add projects incrementally based on what your community actually needs — not what sounds technically cool.

Document everything. Every configuration choice, every IP address assignment, every antenna orientation. When you are on a rooftop six months from now troubleshooting a failed node, you will thank yourself for keeping notes.

Train your neighbors. A network is only as resilient as the community that maintains it. Show people how to restart a node, how to check the monitoring dashboard, how to send a Meshtastic message. The person who built the network should not be a single point of failure.

Back up your configurations. Before upgrading firmware, changing settings, or adding nodes, back up every configuration file. An scp to another machine takes ten seconds and saves hours of reconstruction.

Monitor power systems obsessively. The number one cause of node failure in outdoor deployments is power — dead batteries, failed charge controllers, corroded connections, insufficient solar input. Project 7’s monitoring dashboard should be checking battery voltages constantly.

Plan for failure. Every node will fail eventually. Batteries degrade. SD cards corrupt. Antennas blow off in storms. Animals chew cables. Design your network so that no single node failure disconnects any part of the community. Redundancy is not a luxury — it is the entire point of mesh networking.

Keep it legal. Know your country’s regulations for Wi-Fi power limits, LoRa frequency allocations, and amateur radio licensing requirements. Most of these projects use license-free bands, but regulations vary by country and change over time. Ignorance is not a defense.

Have fun. Alternative networking is one of the most satisfying technical hobbies because the result is tangible, useful, and community-building. When your neighbor’s kid sends their first message across the mesh, when the monitoring dashboard shows all nodes green after a storm, when the emergency kit gets its first real-world test and works — those moments make every hour of configuration and every rooftop climb worth it.

Now go build something.

