Every chapter so far has been about building the pipes — the physical infrastructure, the mesh protocols, the routing algorithms that move bits from point A to point B across networks you own and control. But pipes without water are just hollow tubes. The reason you build a network is to do something with it: to send messages, share files, publish information, coordinate with your community. And this is where most alternative network projects hit a wall they did not see coming.
You build a beautiful mesh network. Every node is routing perfectly. Babel is converging in milliseconds. Your link budget calculations were spot-on. You can ping every node in the neighborhood. And then someone asks: “Great, so… how do I send a message?” And you realize that every messaging app they know — WhatsApp, Signal, Telegram, iMessage — requires a connection to the commercial internet, to reach centralized servers operated by corporations in data centers thousands of kilometers away. Your mesh is working flawlessly, but none of the software people actually use can function on it.
This is the fundamental problem of decentralized applications: the vast majority of software built in the last two decades assumes the existence of centralized infrastructure. Your web browser assumes DNS will resolve domain names. Your email assumes SMTP servers are reachable on the public internet. Your cloud storage assumes Amazon’s S3 or Google’s servers will always be there. Strip away that centralized infrastructure — which is exactly what an alternative network does — and the software collapses.
This chapter is about the software that does not collapse. Applications and protocols designed from the ground up to work without centralized servers, without corporate infrastructure, and without the assumption that the global internet is always available. Some of these are federated — they distribute authority across multiple servers rather than concentrating it in one. Some are fully peer-to-peer — they have no servers at all. Some are designed specifically for the kind of constrained, intermittent, local-first environments that alternative networks create.
By the end of this chapter, you will understand the major decentralized application protocols, know how to deploy them on your alternative network, and — perhaps most importantly — understand the trade-offs that each design makes. Because decentralization is not free. It comes with costs in complexity, performance, and user experience that you need to understand before you commit to a particular architecture.
Let us start with the most fundamental question: why centralization is a problem in the first place.
The modern internet runs on a simple architectural pattern: clients talk to servers. Your phone is a client. Google’s data center is a server. Every interaction — every search, every message, every file upload — flows through that server. The server is the authority. It stores the data. It enforces the rules. It decides who can participate and who cannot.
This client-server architecture is efficient, simple, and enormously scalable. It is also a catastrophic single point of failure at every level — technical, political, and economic.
Technical fragility. When a centralized service goes down, everyone goes down simultaneously. The October 2021 Facebook outage — caused by a misconfigured BGP update — took down Facebook, Instagram, WhatsApp, and Messenger for approximately six hours. For the billions of people worldwide who depend on WhatsApp as their primary communication tool, this was not an inconvenience. It was a blackout. Businesses could not process orders. Families could not coordinate. In some countries, emergency services that relied on WhatsApp groups were disrupted. A single configuration error at a single company in a single data center affected roughly three billion people.
Censorship and control. A centralized service can be compelled to censor, surveil, or shut down by any government with jurisdiction over its operators. Russia blocked Telegram. China blocks virtually every Western platform. India has ordered platforms to remove content and hand over user data on numerous occasions. Even in democratic countries, a single court order can force a platform to deplatform users, delete content, or provide encryption backdoors. The centralized architecture makes this trivially easy — there is exactly one throat to choke.
Data harvesting. Centralized services know everything about their users — not because surveillance is an unfortunate side effect, but because it is the business model. Google reads your email to target ads. Facebook tracks your activity across the web. Amazon knows what you buy, what you browse, what you almost bought but did not. This data is stored indefinitely, shared with advertisers, and — inevitably — leaked in breaches. The Equifax breach exposed 147 million people’s financial records. The Yahoo breach exposed 3 billion accounts. Centralization creates honeypots that attract attackers precisely because the data is so concentrated and so valuable.
Economic capture. Centralized platforms extract rents. App stores take 30% of developer revenue. Cloud providers lock you into proprietary APIs. Social media platforms manipulate algorithmic feeds to maximize engagement (and therefore ad revenue) at the expense of users’ well-being and accurate information. Once a platform achieves dominance, switching costs make it nearly impossible for users to leave, even when the platform’s interests diverge from their own.
For alternative networks, the problems are even more acute. A mesh network that depends on centralized internet services for its applications is not truly independent — it is a local transport layer for a remote dependency. When the internet connection that feeds your mesh goes down (which, if you are building an alternative network, is presumably a scenario you care about), all your centralized applications go with it.
Not all non-centralized architectures are the same, and the distinctions matter enormously for alternative networks.
Federation distributes authority across multiple independent servers that communicate using a shared protocol. Email is the oldest federated system: anyone can run an email server, and all email servers speak SMTP to each other. Modern federated systems include Matrix (messaging), Mastodon (social media), and XMPP (messaging). In a federated system, you choose a server (or run your own), and your server communicates with other servers on your behalf. There is no single point of failure for the network as a whole, but your individual experience depends on your server. If your server goes down, you are offline until it comes back.
Federation’s great strength is that it is architecturally familiar — the server still does the heavy lifting, which means the client can be lightweight, and the user experience can be close to centralized services. Its weakness is that it still requires servers: machines that are always on, always reachable, and capable of handling the load of their users. For alternative networks with limited infrastructure, running a reliable server is a non-trivial commitment.
Full decentralization eliminates the distinction between client and server entirely. Every participant in the network is a full peer, storing its own data and communicating directly with other peers. Scuttlebutt and IPFS are examples: there is no server to run, no authority to trust, no infrastructure beyond the peers themselves. The network exists because the participants exist.
Full decentralization’s great strength is resilience — there is literally nothing to shut down, no server to seize, no company to subpoena. Its weakness is that it pushes complexity onto every participant. Each peer must store data, make routing decisions, and handle the overhead of peer discovery. This can be demanding on mobile devices, and the user experience often suffers compared to the polished apps that centralized services can afford to build.
Peer-to-peer (P2P) is sometimes used interchangeably with full decentralization, but there is a subtle distinction. P2P typically refers to the communication pattern — peers talking directly to each other — while decentralization refers to the authority structure. A system can be P2P in communication but still have centralized elements: BitTorrent uses P2P for file transfer but relies on centralized (or at least well-known) trackers for peer discovery. A truly decentralized system is P2P in both communication and authority.
For alternative networks, the ideal architecture depends on your constraints:
| Factor | Federation | Full Decentralization |
|---|---|---|
| Server infrastructure needed | Yes (at least one) | No |
| Works without internet | If server is local | Yes |
| Client complexity | Low | High |
| Data availability | Server must be online | Depends on peers |
| User experience polish | Good | Variable |
| Censorship resistance | Moderate | High |
| Storage requirements per user | Low (server stores) | High (each peer stores) |
In practice, the most resilient alternative network deployments use a mix: federated services for everyday use (because the UX is better) and fully decentralized protocols as a fallback (because they work even when servers go down).
The traditional web is built on location addressing: you request a file by specifying where it lives — a URL like https://example.com/report.pdf. This means you need to know (and reach) the specific server hosting that file. If the server is down, moved, or censored, the file is gone, even if a thousand copies exist elsewhere on the internet.
IPFS (InterPlanetary File System) replaces location addressing with content addressing: instead of asking “where is this file?”, you ask “who has this file?” Every file added to IPFS is hashed using a cryptographic hash function, producing a unique Content Identifier (CID) — a fingerprint derived from the file’s contents. The CID for a file is always the same, regardless of who stores it or where. If you and I both add the same photo to IPFS, it gets the same CID. Anyone who has a copy can serve it. No specific server is required.
This seemingly simple change has profound implications:
Integrity is guaranteed. Since the CID is derived from the content, any modification to the file produces a different CID. If you request QmX4z...abc and receive data that does not hash to that value, you know it has been tampered with. There is no need to trust the server — the math verifies the content.
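The principle is easy to demonstrate. Here is a toy sketch (illustrative only; real CIDs are multihash- and multibase-encoded, not bare SHA-256 hex) showing how a content-derived identifier makes tampering detectable:

```python
import hashlib

def content_id(data: bytes) -> str:
    """Toy content identifier: a bare SHA-256 hex digest of the bytes."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected_id: str) -> bool:
    """Accept data only if it hashes to the identifier we asked for."""
    return content_id(data) == expected_id

cid = content_id(b"Hello from the mesh\n")
print(verify(b"Hello from the mesh\n", cid))   # True: content matches its ID
print(verify(b"Hello from the mesh!\n", cid))  # False: one changed byte, new ID
```

Note that verification needs no trusted party: any peer that hands you bytes matching the hash has, by definition, handed you the right bytes.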
Deduplication is automatic. If the same file is added by a thousand different users, IPFS stores only one copy (per node that pins it). The CID is the same for identical content, so the network naturally deduplicates.
Censorship is extremely difficult. To suppress a file on IPFS, you would need to identify and shut down every peer that has pinned it. There is no central server to issue a takedown notice to. As long as a single peer anywhere in the world has the file pinned and is reachable, the content is available.
Offline and local-first use is natural. IPFS nodes discover each other on the local network automatically using mDNS. Two IPFS nodes on the same mesh network can share files without any internet connectivity, using the same protocol and the same CIDs they would use on the global IPFS network.
Under the hood, IPFS organizes data using Merkle Directed Acyclic Graphs (Merkle DAGs) — a data structure that will be familiar if you have used Git (which uses Merkle trees for commits) or blockchains (which use Merkle trees for transaction verification).
When you add a large file to IPFS, it is split into chunks (typically 256 KB each). Each chunk is hashed to produce a CID. Then, a parent node is created that contains the list of chunk CIDs, and that parent node is itself hashed to produce the overall file CID. For very large files, this structure can be multiple levels deep — a tree of hashes where every leaf is a data chunk and every interior node is a list of child CIDs.
This structure has several elegant properties. You can verify any chunk independently. You can download chunks from different peers in parallel (since each chunk has its own CID and can be requested independently). And if two files share some identical chunks — say, two versions of a document that differ only in a few paragraphs — the shared chunks are stored only once.
Directories in IPFS are represented the same way: a directory is a node whose children are the CIDs of its files and subdirectories. This means an entire website — HTML, CSS, images, everything — can be represented as a single root CID.
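To make the chunk-and-hash structure concrete, here is a toy Merkle-DAG builder (illustrative only; real IPFS encodes nodes with UnixFS/dag-pb and multihash CIDs rather than JSON and raw SHA-256):

```python
import hashlib
import json

CHUNK_SIZE = 256 * 1024  # 256 KB chunks, as described above

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def add_file(data: bytes) -> str:
    """Split into chunks, hash each leaf, then hash the list of leaf IDs."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    leaves = [h(c) for c in chunks] or [h(b"")]
    if len(leaves) == 1:
        return leaves[0]  # small file: one chunk, its hash is the root ID
    parent = json.dumps({"links": leaves}).encode()
    return h(parent)      # root ID covers every chunk transitively

# Identical content always yields the same root identifier,
# no matter who adds it or when.
big = b"x" * (3 * CHUNK_SIZE)
assert add_file(big) == add_file(b"x" * (3 * CHUNK_SIZE))
```

Because the root is a hash of the leaf hashes, verifying one chunk never requires downloading the others, which is what makes parallel multi-peer downloads safe.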
A critical concept in IPFS is pinning. By default, IPFS nodes only store content they have explicitly added or pinned. Content that passes through a node (because it was requested and relayed) is cached temporarily but eventually removed by garbage collection. If you want content to persist on your node, you must pin it.
This has important implications for alternative networks: content on IPFS is available only as long as at least one online, reachable peer has it pinned. If you add a file to your IPFS node and your node goes offline, the file becomes unavailable (unless someone else has also pinned it). Pinning services like Pinata, Infura, and Web3.Storage provide always-on pinning for a fee — but they are centralized services, which partially defeats the purpose.
For an alternative network, the solution is community pinning: multiple nodes within the network pin important content, ensuring availability even if individual nodes go offline. IPFS Cluster is a tool designed for exactly this — it coordinates pinning across multiple IPFS nodes, automatically ensuring that a configurable number of replicas exist for each piece of content.
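In IPFS Cluster, the replication target lives in the service configuration. A fragment like the following (field names per the ipfs-cluster configuration reference; the values are illustrative) asks the cluster to keep between two and three copies of every pinned item:

```json
{
  "cluster": {
    "replication_factor_min": 2,
    "replication_factor_max": 3
  }
}
```

With this in place, if a node holding a replica drops off the mesh, the cluster re-pins the content elsewhere to stay above the minimum.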
Setting up an IPFS node is straightforward. The reference implementation, Kubo (formerly go-ipfs), runs on Linux, macOS, and Windows:
# Download and install Kubo (IPFS implementation)
wget https://dist.ipfs.tech/kubo/v0.24.0/kubo_v0.24.0_linux-amd64.tar.gz
tar xvfz kubo_v0.24.0_linux-amd64.tar.gz
cd kubo && sudo bash install.sh
# Initialize the IPFS repository
ipfs init
# Start the daemon (connects to the global IPFS network)
ipfs daemon &
# Add a file
echo "Hello from the mesh" > hello.txt
ipfs add hello.txt
# Output: added QmXgBz... hello.txt
# Retrieve it by CID
ipfs cat QmXgBz...
For an alternative network that operates without internet connectivity, you will want to configure IPFS to use only the local network and disable bootstrap connections to the global network:
# Remove all bootstrap peers (prevents connecting to global IPFS)
ipfs bootstrap rm --all
# IPFS will still discover local peers via mDNS
# Verify with:
ipfs swarm peers
IPFS can serve static websites directly. You add a directory containing your site’s files, and IPFS gives you a CID that represents the entire site:
# Create a simple website
mkdir mysite
echo "<html><body><h1>Welcome to our mesh</h1></body></html>" > mysite/index.html
# Add the entire directory to IPFS
ipfs add -r mysite/
# Output includes the root directory CID
# Access via local gateway
# http://localhost:8080/ipfs/QmRootCID/
The site is now accessible to anyone on your mesh network who can reach your IPFS node, via the built-in gateway that Kubo runs on port 8080. On the global internet, public gateways like https://ipfs.io/ipfs/CID provide web access to IPFS content without requiring users to run their own node.
For human-readable naming (because nobody wants to type a CID into a browser), IPFS supports IPNS (InterPlanetary Name System) — a mutable naming layer that maps a persistent name (based on your node’s cryptographic key) to a CID that you can update whenever you publish new content.
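Conceptually, an IPNS record is a small signed statement: "the name derived from my key currently points at this CID, and here is a sequence number so newer records win." The stdlib-only sketch below captures that idea (using HMAC as a stand-in for the real Ed25519 signature, and a truncated hash as a stand-in for the real key-derived name):

```python
import hashlib
import hmac

def ipns_name(public_key: bytes) -> str:
    """Toy IPNS name: derived from the node's key (here, a hash of it)."""
    return "k-" + hashlib.sha256(public_key).hexdigest()[:16]

def publish(secret_key: bytes, cid: str, seq: int) -> dict:
    """Sign a record pointing the name at the latest CID."""
    payload = f"{cid}:{seq}".encode()
    return {
        "value": cid,
        "sequence": seq,  # higher sequence wins, making the pointer mutable
        "sig": hmac.new(secret_key, payload, hashlib.sha256).hexdigest(),
    }

def valid(secret_key: bytes, record: dict) -> bool:
    """Reject any record whose signature does not cover value and sequence."""
    payload = f"{record['value']}:{record['sequence']}".encode()
    expected = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["sig"])

rec = publish(b"node-secret", "QmNewSiteRoot", seq=2)
print(valid(b"node-secret", rec))  # True
```

The immutable CIDs below the name never change; only the signed pointer does, which is why IPNS can offer stable names without sacrificing content integrity.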
The ipfshttpclient library provides a clean Python interface to a running IPFS node (note that its releases have trailed recent Kubo versions, so check that your daemon version is supported):
import ipfshttpclient
# Connect to local IPFS daemon
client = ipfshttpclient.connect("/ip4/127.0.0.1/tcp/5001")
# Add a file
result = client.add("hello.txt")
print(f"CID: {result['Hash']}")
# Retrieve content by CID
content = client.cat(result["Hash"])
print(content.decode())
# Pin content to ensure it persists
client.pin.add(result["Hash"])
For alternative networks, a useful pattern is a background service that automatically pins content from community members, ensuring local availability:
import ipfshttpclient
import time

def community_pinner(watch_peer_ids, poll_interval=60):
    """Auto-pin the latest content published (via IPNS) by trusted peers."""
    client = ipfshttpclient.connect()
    pinned = set(client.pin.ls(type="recursive")["Keys"])
    while True:
        for peer_id in watch_peer_ids:
            try:
                # Resolve the peer's IPNS name to its current root CID
                path = client.name.resolve(f"/ipns/{peer_id}")["Path"]
                cid = path.rsplit("/", 1)[-1]
                if cid not in pinned:
                    client.pin.add(cid)
                    pinned.add(cid)
            except ipfshttpclient.exceptions.ErrorResponse:
                continue  # peer has published nothing yet, or is unreachable
        time.sleep(poll_interval)
Matrix is an open standard for real-time, federated communication. Think of it as doing for instant messaging what SMTP did for email — a protocol that anyone can implement, anyone can run a server for, and all servers can talk to each other. But unlike email (which was designed in the 1980s and shows its age), Matrix was designed for the modern era: it supports end-to-end encryption by default, rich media, voice and video calls, and the kind of real-time synchronization that people expect from apps like Slack or WhatsApp.
The Matrix ecosystem consists of several components:
Homeservers are the federated servers that store user data and synchronize with each other. The reference homeserver implementation is Synapse (written in Python, and famously resource-hungry). A newer, more performant implementation called Dendrite (written in Go) is maturing rapidly. Your Matrix identity is tied to your homeserver — if you register on matrix.example.org, your user ID is @alice:matrix.example.org.
Clients are the applications people actually use. Element (available for web, desktop, iOS, and Android) is the flagship client, but dozens of others exist: FluffyChat, Nheko, Fractal, SchildiChat, and many more. Because Matrix is an open protocol, anyone can build a client.
Rooms are the fundamental unit of communication. Every conversation — whether a one-on-one chat, a group chat, or a public forum — is a room. Rooms have a unique identifier and can be federated across multiple homeservers. When two users on different homeservers join the same room, their homeservers synchronize the room’s history between them.
Events are the atomic units of data in Matrix. Every message, every state change (someone joins a room, changes the topic, kicks a user) is an event. Events are organized in a Directed Acyclic Graph (DAG) — yes, another one — which allows Matrix to handle the complex concurrent edits that inevitably arise when multiple servers are synchronizing in real time.
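A toy model shows how the DAG absorbs concurrent writes (field names are simplified; real Matrix events additionally carry room IDs, timestamps, hashes, and signatures, though `prev_events` is genuine):

```python
import hashlib
import json

def make_event(sender: str, content: str, prev_events: list) -> dict:
    """Each event references the latest event(s) its server knew about."""
    event = {"sender": sender, "content": content, "prev_events": prev_events}
    event["event_id"] = "$" + hashlib.sha256(
        json.dumps(event, sort_keys=True).encode()).hexdigest()[:12]
    return event

a = make_event("@alice:one.local", "hello", prev_events=[])
# Two homeservers append concurrently -- both point at the same parent:
b = make_event("@bob:two.local", "hi!", prev_events=[a["event_id"]])
c = make_event("@carol:one.local", "hey", prev_events=[a["event_id"]])
# The next event references both branch tips, healing the fork:
d = make_event("@alice:one.local", "merged", prev_events=[b["event_id"], c["event_id"]])
print(len(d["prev_events"]))  # 2
```

No server had to coordinate before writing; the DAG records the fork and the merge, and every homeserver converges on the same history.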
Matrix implements end-to-end encryption (E2EE) using a protocol called Megolm, designed specifically for encrypted group communication. The key components are:
Olm handles the initial key exchange between two devices using a Double Ratchet algorithm (similar to Signal’s protocol). When you send an encrypted message to a new device, Olm establishes a shared secret between your device and theirs.
Megolm handles the actual message encryption within rooms. Instead of encrypting each message separately for each recipient (which would be enormously expensive in a room with hundreds of members), Megolm uses a shared session key that is distributed to all authorized participants via Olm. Each message is encrypted once with the session key, and all participants can decrypt it. The session key “ratchets” forward regularly, so compromising a current key does not reveal past messages — a property called forward secrecy.
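The forward-ratcheting idea can be sketched with a one-way hash chain (a deliberate simplification; Megolm's actual ratchet has four parts and derives per-message keys):

```python
import hashlib

def ratchet(key: bytes) -> bytes:
    """Advance the session key one step. SHA-256 is one-way, so a later
    key cannot be stepped backwards to recover an earlier one."""
    return hashlib.sha256(b"ratchet" + key).digest()

k0 = hashlib.sha256(b"initial shared session secret").digest()
k1 = ratchet(k0)
k2 = ratchet(k1)
# A party handed k2 can derive k3, k4, ... to read future messages,
# but cannot recover k1 or k0 -- past messages stay confidential
# even if the current key leaks.
```

This is also how Megolm supports sharing history from a chosen point: give a new member the ratchet state at message N, and they can read everything from N onward but nothing before it.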
For alternative networks, Matrix’s E2EE is particularly important: since your mesh traffic may pass through nodes you do not control, encryption ensures that intermediate nodes cannot read message content.
Running a Matrix homeserver on your local network gives your community a messaging platform that works entirely without the internet. Here is a minimal Synapse deployment:
# Install Synapse via pip (or use Docker/packages)
pip install matrix-synapse
# Generate configuration
python -m synapse.app.homeserver \
--server-name mesh.local \
--config-path homeserver.yaml \
--generate-config \
--report-stats=no
# Start the homeserver
synctl start
The --server-name parameter is critical. For an alternative network, use a .local domain or any domain that your network’s DNS (or mDNS) can resolve. Users will register as @username:mesh.local and can communicate with anyone else on your homeserver — or, if your network has internet connectivity, with users on any other Matrix homeserver worldwide.
For resource-constrained deployments (like a Raspberry Pi serving a small community), consider Dendrite instead of Synapse — it uses significantly less RAM and CPU.
One of Matrix’s most powerful features is bridging — connecting Matrix rooms to other communication platforms. Official and community-maintained bridges exist for IRC, XMPP, Slack, Discord, Telegram, Signal, WhatsApp, and email, among others.
For alternative networks, bridging is valuable when your network does have intermittent internet access. Users on the mesh can communicate via Matrix at all times. When an internet connection is available, bridges forward messages to and from people on conventional platforms. When the internet goes down, local Matrix communication continues uninterrupted.
Automating announcements or alerts over your homeserver takes only a few lines. This sketch talks to the client-server API directly (the homeserver URL and access token are placeholders for values from your own deployment):

import time
import requests

HOMESERVER = "http://localhost:8008"
TOKEN = "your_access_token"

def send_matrix_message(room_id, message):
    """Send a text message to a Matrix room via the client-server API."""
    txn_id = str(int(time.time() * 1000))  # must be unique per request
    url = (f"{HOMESERVER}/_matrix/client/v3/rooms/{room_id}"
           f"/send/m.room.message/{txn_id}")
    headers = {"Authorization": f"Bearer {TOKEN}"}
    payload = {"msgtype": "m.text", "body": message}
    # The spec defines this as a PUT with a client-chosen transaction ID,
    # so retries of the same request are deduplicated by the server.
    resp = requests.put(url, json=payload, headers=headers)
    return resp.json()
Most decentralized protocols still assume that the internet exists — they just want to avoid centralizing within the internet. Secure Scuttlebutt (SSB) makes no such assumption. It was designed to work even if the internet does not exist at all — over local Wi-Fi, over Bluetooth, over a USB drive physically carried between computers. If two devices can exchange bytes in any way whatsoever, Scuttlebutt can synchronize between them.
This makes SSB uniquely suited to alternative networks, and it is worth understanding why its design is so different from everything else.
In Scuttlebutt, each user is identified by a cryptographic keypair. Your public key is your identity — there is no username registration, no server to authenticate against, no authority that grants or revokes identities. You generate a keypair, and you exist.
Every action you take — every post, every reply, every “like,” every follow — is appended to your personal append-only log. This log is a strictly ordered sequence of signed messages. Message 1, then message 2, then message 3, and so on, each one signed by your private key. Nothing is ever deleted or modified — the log only grows. This may seem wasteful, but it has a crucial property: replication is trivially simple. If I have messages 1 through 47 of your log, and you have messages 1 through 52, I only need messages 48 through 52 to catch up. There is no complex synchronization algorithm, no conflict resolution, no need to determine “which version is correct.” The log is the single source of truth, and it only grows.
Scuttlebutt propagates data through gossip replication — the same mechanism that epidemics use to spread through populations. When two SSB peers connect, they exchange information about which logs they have and how up-to-date each log is. Then they exchange the missing messages. That is it. No coordination. No routing tables. No global state.
The gossip model has a beautiful property: it works over any transport. Two peers can gossip over TCP on a local network. They can gossip over Wi-Fi Direct. They can gossip over Bluetooth. They can even gossip via sneakernet — physically carrying a storage device from one computer to another. A researcher in a remote village can write posts offline, travel to town, sync with a peer who has internet access, and their posts propagate outward to the entire Scuttlebutt network. Days or weeks later, replies arrive through the same (or different) physical path.
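The exchange is simple enough to sketch end-to-end. Each peer advertises its "frontier" (the highest sequence number it holds for each log) and receives only the entries it is missing. This is a toy model; real SSB frames the same idea in its own RPC protocol:

```python
def frontier(store: dict) -> dict:
    """Map each author to the highest sequence number we hold."""
    return {author: len(log) for author, log in store.items()}

def missing_entries(store: dict, their_frontier: dict) -> dict:
    """Everything we have that the other peer has not seen yet."""
    out = {}
    for author, log in store.items():
        have = their_frontier.get(author, 0)
        if len(log) > have:
            out[author] = log[have:]
    return out

def gossip(a: dict, b: dict) -> None:
    """One round of bidirectional sync; both stores converge."""
    for author, entries in missing_entries(a, frontier(b)).items():
        b.setdefault(author, []).extend(entries)
    for author, entries in missing_entries(b, frontier(a)).items():
        a.setdefault(author, []).extend(entries)

alice = {"@alice": ["post 1", "post 2", "post 3"]}
bob = {"@alice": ["post 1"], "@bob": ["hello"]}
gossip(alice, bob)
print(alice == bob)  # True: both peers now hold all four messages
```

Because logs are append-only and strictly ordered, "what you are missing" is always a contiguous suffix — which is exactly why no conflict resolution is ever needed.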
While Scuttlebutt works perfectly in small, locally connected groups, it has a bootstrap problem: how do you discover peers outside your local network? Two mechanisms address this:
Pubs are publicly accessible SSB peers that act as always-on relay points. When you connect to a pub, it gossips with you — sharing logs from everyone else who has connected to it, and accepting your logs for redistribution. Pubs are not servers in the traditional sense — they do not have authority, they do not control access, and your data exists independently of any pub. But they do provide a convenient rendezvous point for peers who cannot directly connect to each other.
Rooms (a newer concept) provide a similar function but with better privacy. A room server facilitates connections between peers without storing their data. Peers connect to the room, discover each other, and then gossip directly. The room server itself never sees the content of the messages.
For alternative networks, pubs and rooms are valuable but not essential. In a mesh network where all nodes can reach each other, SSB peers discover each other directly and gossip without any intermediary.
The two most popular Scuttlebutt clients are:
Patchwork is a desktop application that presents SSB as a social network — with posts, threads, likes, and a feed of updates from people you follow. It is beginner-friendly and gives a good sense of what SSB looks like as a user experience.
Manyverse is a mobile application (Android and iOS) that is specifically designed for off-grid and mesh use. It can sync over Wi-Fi, Bluetooth, and even LAN discovery without any internet connectivity. For alternative network deployments, Manyverse is the recommended client — it embodies the local-first, offline-first philosophy that makes SSB unique.
To make identities and log entries concrete, here is an illustrative sketch using PyNaCl (not wire-compatible with real SSB, which has its own feed format and message encoding):

import json
import time
from nacl.signing import SigningKey

def create_ssb_identity():
    """Generate a Scuttlebutt-style identity (an Ed25519 keypair)."""
    signing_key = SigningKey.generate()
    public_key = signing_key.verify_key
    identity = f"@{public_key.encode().hex()}.ed25519"
    return {"identity": identity, "signing_key": signing_key}

def create_log_entry(signing_key, sequence, previous, content):
    """Create a signed entry for an append-only log."""
    entry = {
        "sequence": sequence,
        "previous": previous,  # ID of the prior entry, or None for entry 1
        "content": content,
        "timestamp": time.time(),
    }
    raw = json.dumps(entry, sort_keys=True).encode()
    entry["signature"] = signing_key.sign(raw).signature.hex()
    return entry
Briar occupies a unique position in the landscape of decentralized messaging: it was designed specifically for activists, journalists, and human rights defenders operating in environments where the government is actively trying to surveil, disrupt, and punish communication. This is not a theoretical threat model — Briar was built in response to the real-world surveillance of journalists and activists in countries like Iran, Myanmar, China, and Belarus.
Briar’s design makes three assumptions that distinguish it from every other messaging platform:
The internet may be monitored or unavailable. Briar can communicate over Tor (when internet exists), over Wi-Fi (local network), and over Bluetooth (direct device-to-device). If the government shuts down the internet, Briar keeps working.
Servers will be seized or compromised. Briar has no servers. Zero. There is no server to seize, no database to subpoena, no admin account to compromise. All data is stored on users’ devices only.
Metadata is as dangerous as content. Knowing who talks to whom can be as dangerous as knowing what they say. Briar routes all internet traffic through Tor onion services, hiding both the content and the communication pattern from network observers.
Briar’s multi-transport architecture is its most distinctive feature:
Tor transport — When internet access is available, Briar routes messages through the Tor network. Each Briar user creates a Tor hidden service, and contacts communicate via these hidden services. This means that even if the internet is monitored, an observer cannot determine who is talking to whom — they see only Tor traffic.
Wi-Fi transport — When devices are on the same Wi-Fi network (including a mesh network!), Briar discovers peers and communicates directly. No internet required. This makes Briar an ideal messaging app for alternative networks.
Bluetooth transport — When even Wi-Fi is unavailable, Briar can communicate over Bluetooth between nearby devices. The range is limited (typically 10-30 meters), but in a protest or a building, this can be sufficient. Combined with mesh routing (which Briar’s Bluetooth transport supports), messages can hop between devices to reach recipients beyond Bluetooth range.
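The transport-selection logic amounts to a simple priority fallback, sketched below (the function and transport names are hypothetical; Briar's real transport plugins negotiate availability per contact):

```python
from typing import Optional

def pick_transport(available: dict) -> Optional[str]:
    """Prefer Tor when the internet is up, then local Wi-Fi,
    then Bluetooth as the last resort."""
    for name in ("tor", "wifi", "bluetooth"):
        if available.get(name):
            return name
    return None  # no transport: queue the message until one returns

print(pick_transport({"tor": False, "wifi": True, "bluetooth": True}))   # wifi
print(pick_transport({"tor": False, "wifi": False, "bluetooth": True}))  # bluetooth
print(pick_transport({}))  # None
```

The key design point is that a missing transport degrades delivery latency, never correctness: messages wait in a local queue and go out whenever any path reappears.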
Briar is perhaps the most natural fit for alternative networks of any application discussed in this chapter. Consider the scenarios above: on a mesh with no internet uplink, the Wi-Fi transport keeps messaging alive; during a full internet shutdown, messages hop device-to-device over Bluetooth; and when connectivity is intermittent, Tor protects whatever traffic does leave the network.
Briar’s limitation is scope: it currently supports only private messaging and group forums. There is no voice calling, no file sharing beyond images, and no integration with other platforms. It is a messaging tool optimized for security, not a general-purpose communication platform. For communities that need both, the recommendation is Briar for sensitive communication and Matrix for everything else.
The Fediverse — a portmanteau of “federation” and “universe” — is a collection of interconnected social networking platforms that communicate using the ActivityPub protocol. Unlike the corporate social media platforms (which are isolated silos — you cannot send a tweet from Instagram), Fediverse platforms can interoperate: a Mastodon user can follow a PeerTube channel, comment on a Pixelfed photo, and share a Lemmy post, all from their Mastodon account.
The major Fediverse platforms include:
Mastodon — A microblogging platform similar to Twitter/X. Users write short posts (“toots”), follow other users, and see a chronological feed (no algorithmic manipulation). Each Mastodon instance (server) is independently operated, with its own rules, moderation policies, and community culture. As of 2025, there are over 10,000 active Mastodon instances with millions of users combined.
PeerTube — A video hosting platform similar to YouTube, but federated. Videos are stored on the instance that hosts them, and can be discovered and watched by users on other instances. PeerTube also supports WebTorrent — using peer-to-peer video streaming to reduce the bandwidth burden on the hosting server.
Pixelfed — A photo sharing platform similar to Instagram. Federated, ad-free, and chronological.
Lemmy — A link aggregation and discussion platform similar to Reddit. Federated, so communities can exist on different instances but still interact.
Funkwhale — A music streaming and sharing platform similar to SoundCloud or Spotify, but federated.
ActivityPub defines two layers: the client-to-server (C2S) API (how users interact with their instance) and the server-to-server (S2S) API (how instances communicate with each other). The S2S protocol is what makes federation work.
When a user on instance A follows a user on instance B, instance A sends a Follow activity to instance B. When the user on instance B publishes a new post, instance B sends a Create activity (containing the post) to all instances that have followers subscribed to that user. Each instance stores a copy of the post and displays it to the relevant followers.
The data model is based on ActivityStreams 2.0, a W3C standard for representing social interactions as JSON objects. Activities include things like Create, Like, Announce (reblog/boost), Follow, Block, and Delete.
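For example, the Follow activity that instance A delivers to instance B's inbox is a small JSON document (the actor and object URLs here are hypothetical):

```python
import json

follow = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Follow",
    "actor": "https://a.example/users/alice",   # the follower
    "object": "https://b.example/users/bob",    # the account being followed
}
# Instance B acknowledges by POSTing an Accept activity that wraps this Follow.
print(json.dumps(follow, indent=2))
```

Every other interaction (Like, Announce, Delete) has the same shape: a typed activity, an actor, and an object.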
For alternative networks, running a Fediverse instance provides a local social platform that works without the internet. Mastodon is the most popular choice, but it is resource-heavy (Ruby on Rails, PostgreSQL, Redis, Elasticsearch). For constrained environments, lighter alternatives exist:
GoToSocial — A lightweight ActivityPub server written in Go. Uses SQLite (no separate database server needed), minimal RAM footprint, and supports the core Mastodon client API so that existing Mastodon apps work with it.
Akkoma — A fork of Pleroma, written in Elixir. More features than GoToSocial, but still much lighter than Mastodon.
Microblog.pub — A single-user ActivityPub server written in Python. Ideal if you want a personal instance with minimal resources.
```python
import requests

def send_activitypub_post(actor_url, inbox_url, content, private_key):
    """
    Send an ActivityPub Create activity (simplified).
    In production, you'd also need HTTP Signatures (signed with
    private_key) for authentication; it is unused in this sketch.
    """
    activity = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": actor_url,
        "object": {
            "type": "Note",
            "content": content,
            "attributedTo": actor_url,
        },
    }
    headers = {"Content-Type": "application/activity+json"}
    resp = requests.post(inbox_url, json=activity, headers=headers)
    return resp.status_code
```
For a mesh network with no internet, all federation stays local — users on the same instance communicate effortlessly, and if you run multiple instances on the network, they federate with each other. When internet access is intermittently available, the instances can federate with the wider Fediverse, pulling in posts from followed accounts and pushing out local content.
You can decentralize file storage with IPFS, messaging with Matrix, and social media with Mastodon. But there is one centralized service that almost everything depends on, and it is so fundamental that most people never think about it: the Domain Name System (DNS).
DNS translates human-readable names (example.com) into IP addresses (93.184.216.34). It is hierarchical and centralized: at the top sit 13 root server identities (each in practice a globally distributed anycast cluster), coordinated by ICANN (Internet Corporation for Assigned Names and Numbers), a single entity based in California. Below them sit the Top-Level Domain (TLD) servers (.com, .org, .net), each operated by a designated registry. Below those sit authoritative servers for individual domains, often operated by registrars or hosting providers.
This hierarchy creates multiple chokepoints: ICANN sets policy for the root zone; registries and registrars can suspend or seize domains; governments can compel any of these actors; and the recursive resolvers most users rely on (run by ISPs or large tech companies) can filter, redirect, or log queries.
For alternative networks, DNS is doubly problematic: it typically depends on reaching servers on the public internet, which may not be available, and it depends on centralized authorities, which contradicts the philosophy of decentralized infrastructure.
Several projects attempt to provide naming without centralized authority:
GNS is part of the GNUnet framework — a comprehensive peer-to-peer networking stack developed as part of the GNU Project. GNS provides a fully decentralized naming system where each user controls their own namespace, identified by their public key. You can assign names within your namespace (e.g., alice.gnu → an IP address or IPFS CID), and other users can reference your namespace by your public key or by a local petname they assign to you.
GNS uses the R5N DHT (Randomized Recursive Routing) for name resolution across the network. It provides query privacy — the DHT nodes that help resolve a name cannot determine what name is being looked up — which is a significant advantage over other decentralized naming systems.
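The petname model is easier to grasp with a concrete sketch. The toy code below (illustrative only — not the GNUnet API) models how delegated resolution works: each zone maps labels either to records or to another zone, so `www.alice` resolves by first following your local petname `alice` to Alice's zone, then looking up `www` there. Real GNS identifies zones by public keys and signs the records.

```python
class Zone:
    """Toy model of a GNS-style zone: labels map to records or to
    delegated zones. Real GNS identifies zones by public key and
    stores signed records in a DHT; this sketch keeps it in memory."""

    def __init__(self):
        self.records = {}      # label -> value (e.g., an IP or IPFS CID)
        self.delegations = {}  # label -> Zone

    def resolve(self, name):
        """Resolve 'a.b.c' right-to-left, following delegations."""
        labels = name.split(".")
        zone = self
        # Walk from the rightmost label toward the leftmost
        for label in reversed(labels[1:]):
            zone = zone.delegations[label]
        return zone.records.get(labels[0])

alice = Zone()
alice.records["www"] = "203.0.113.7"  # hypothetical address
me = Zone()
me.delegations["alice"] = alice       # my local petname for Alice's zone
print(me.resolve("www.alice"))        # -> 203.0.113.7
```

The key property: `alice` is *my* name for that zone; another user might call the same zone `alice-from-the-mesh`, and both resolve correctly because names are local, not global.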
Namecoin was the first fork of Bitcoin, created in 2011 specifically to provide decentralized domain name registration. Domain names are registered on the Namecoin blockchain under the .bit TLD. Registration and renewal require spending Namecoin (NMC), which prevents squatting (in theory — the low price of NMC has made squatting trivial in practice).
To resolve .bit domains, you need either a full Namecoin node or access to a Namecoin DNS resolver. This is the fundamental challenge with blockchain-based naming: the resolver infrastructure is not built into standard operating systems, so users need special software or configuration.
Handshake is a more recent blockchain-based naming protocol that aims higher than Namecoin: rather than creating an alternative TLD, Handshake attempts to decentralize the root zone itself. In Handshake, top-level domain names are auctioned on the blockchain, and the owners of those TLDs can operate them however they wish.
ENS provides human-readable names (like alice.eth) that resolve to Ethereum addresses, IPFS content hashes, or traditional IP addresses. ENS names are NFTs on the Ethereum blockchain, and resolution uses Ethereum smart contracts. ENS has achieved significant adoption in the Web3 ecosystem, but its dependence on the Ethereum blockchain makes it expensive (gas fees for registration and updates) and slow (resolution requires blockchain queries).
For most alternative networks, the practical approach to naming is simpler than any of these blockchain systems:
```python
import json

class MeshNameResolver:
    """Simple name resolution for local mesh networks using mDNS + a
    shared JSON registry replicated via IPFS or gossip."""

    def __init__(self, registry_path="mesh_names.json"):
        self.registry_path = registry_path
        self.names = self._load_registry()

    def _load_registry(self):
        try:
            with open(self.registry_path) as f:
                return json.load(f)
        except FileNotFoundError:
            return {}

    def register(self, name, address):
        self.names[name] = address
        with open(self.registry_path, "w") as f:
            json.dump(self.names, f, indent=2)

    def resolve(self, name):
        return self.names.get(name)
```
In practice, a mesh network can use mDNS (multicast DNS) for automatic discovery of services on the local network, combined with a shared registry (replicated via IPFS, Scuttlebutt, or a simple gossip protocol) for more persistent naming. This is not as elegant as a blockchain-based solution, but it works today, requires no external dependencies, and is perfectly suited to local alternative networks.
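One practical bridge to existing software (an optional pattern, not a requirement) is to render the shared registry into `/etc/hosts` format, so that standard tools — browsers, ssh, ping — can resolve mesh names with no special resolver at all. A minimal sketch:

```python
def registry_to_hosts(names):
    """Render a name -> address registry as /etc/hosts lines.
    'names' is a plain dict like the one a shared registry maintains.
    Marker comments make it easy to replace the block on updates."""
    lines = ["# BEGIN mesh names (auto-generated)"]
    for name, address in sorted(names.items()):
        lines.append(f"{address}\t{name}")
    lines.append("# END mesh names")
    return "\n".join(lines)

# Hypothetical registry entries for illustration
print(registry_to_hosts({"wiki.mesh": "10.42.0.5", "files.mesh": "10.42.0.9"}))
```

A small cron job or sync hook can regenerate this block whenever the replicated registry changes, giving every node consistent names without touching DNS.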
IPFS provides content addressing and peer-to-peer file sharing, but it does not guarantee persistence — if no one pins your file, it disappears. Several projects build on or complement IPFS to provide reliable, long-term decentralized storage.
Filecoin is the incentive layer built on top of IPFS by the same team (Protocol Labs). Storage providers commit hard drive space to the network and are paid in FIL tokens to store data. Clients pay to store files for a specified duration. The Filecoin blockchain verifies that storage providers actually store the data they claim to (using Proof-of-Replication and Proof-of-Spacetime cryptographic proofs).
Filecoin solves IPFS’s persistence problem through economic incentives — you pay someone to keep your data online. However, it requires blockchain participation, transaction fees, and a minimum deal size that makes it impractical for small files. For alternative networks, Filecoin is relevant only if your network has internet connectivity and you want to guarantee global availability of important data.
Arweave takes a radically different approach: pay once, store forever. Instead of paying for a time period, you make a single payment (in AR tokens) that is calculated to cover the cost of storage in perpetuity, based on economic models that account for declining storage costs over time. Data stored on Arweave is replicated across the network’s miners and is designed to be accessible indefinitely.
This is appealing for content that should never disappear — community archives, historical records, legal documents. The trade-off is cost (storing large amounts of data is expensive) and the philosophical question of whether any system can truly guarantee “forever.”
Storj is a decentralized cloud storage network that encrypts, shreds, and distributes files across a global network of storage nodes. Unlike IPFS (which is peer-to-peer) or Filecoin (which uses blockchain verification), Storj uses a more traditional architecture with satellite coordinator nodes that manage the network, track which nodes store which pieces, and handle payment. Files are erasure-coded, so data can be recovered even if a significant fraction of storage nodes go offline.
Storj provides an S3-compatible API, which means existing applications that use Amazon S3 can switch to Storj with minimal code changes. For alternative networks that want cloud storage without Amazon, this is a compelling option — though it still requires internet connectivity.
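Erasure coding is what lets Storj survive node loss without storing full copies everywhere. Production systems use Reed–Solomon codes over many pieces; the toy sketch below uses a single XOR parity piece to show the core idea — any one lost piece can be reconstructed from the survivors:

```python
def xor_bytes(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(pieces):
    """Add one XOR parity piece; all pieces must be equal length.
    Real erasure codes (Reed-Solomon) tolerate multiple losses."""
    parity = pieces[0]
    for p in pieces[1:]:
        parity = xor_bytes(parity, p)
    return pieces + [parity]

def recover(pieces_with_one_missing):
    """Reconstruct the single missing piece (marked as None):
    XOR of all surviving pieces equals the missing one."""
    acc = None
    for p in pieces_with_one_missing:
        if p is not None:
            acc = p if acc is None else xor_bytes(acc, p)
    return acc

data = [b"node", b"mesh", b"ipfs"]   # three data pieces of equal size
stored = encode(data)                # four pieces across four nodes
stored[1] = None                     # one storage node goes offline
print(recover(stored))               # -> b'mesh'
```

The same principle, generalized, is why Storj can promise durability even when a significant fraction of its storage nodes disappear.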
For alternative networks that want to keep data entirely local, self-hosted storage solutions are often the best choice:
MinIO is an open-source object storage server that is API-compatible with Amazon S3. It runs on a single machine or can be deployed as a distributed cluster across multiple nodes. For a mesh network, MinIO provides a familiar, well-documented storage API that applications can use without any internet dependency.
SeaweedFS is a distributed file system designed for high throughput. It separates metadata (managed by a master server) from file data (stored on volume servers), which allows it to scale to billions of files. For community networks with significant storage needs, SeaweedFS provides a robust, self-hosted foundation.
```python
from minio import Minio

def setup_mesh_storage():
    """Connect to a MinIO instance running on the local mesh."""
    client = Minio(
        "storage.mesh.local:9000",
        access_key="mesh-admin",
        secret_key="mesh-secret-key",
        secure=False,  # Use True with TLS in production
    )
    # Create a bucket for community files
    if not client.bucket_exists("community-files"):
        client.make_bucket("community-files")
    return client

def upload_file(client, filename, filepath):
    """Upload a file to mesh storage."""
    client.fput_object("community-files", filename, filepath)
    print(f"Uploaded {filename} to mesh storage")
```
| Solution | Internet Required | Cost | Persistence Guarantee | Best For |
|---|---|---|---|---|
| IPFS (raw) | No | Free | None (must pin) | Local file sharing |
| IPFS Cluster | No | Free | Community-managed | Important local content |
| Filecoin | Yes | FIL tokens | Economic incentive | Global archival |
| Arweave | Yes | AR tokens (one-time) | Permanent (designed) | Permanent records |
| Storj | Yes | USD/GB/month | SLA-backed | Cloud replacement |
| MinIO | No | Hardware only | Self-managed | Local S3-compatible storage |
| SeaweedFS | No | Hardware only | Self-managed | High-volume local storage |
Building a mesh network and installing decentralized software is only half the battle. Actually running services on alternative networks — reliably, accessibly, and sustainably — presents a set of challenges that the decentralized software community often glosses over.
In conventional networks, Network Address Translation (NAT) is a constant headache for peer-to-peer applications. When a device sits behind a NAT router, it can initiate outgoing connections but cannot easily receive incoming ones. Since most home networks use NAT, direct peer-to-peer connections between two home users require techniques like STUN (discovering your public IP), TURN (relaying through a server), or hole punching (tricking NAT routers into allowing incoming connections).
On alternative networks, NAT is less of a problem — mesh networks typically assign routable addresses to all nodes, eliminating the need for NAT. However, if your alternative network connects to the internet (and most do, at least intermittently), nodes that serve content to internet users will still face NAT issues.
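Hole punching is worth a concrete illustration. Both peers simultaneously send an outbound UDP datagram to the other's public endpoint (learned via STUN or a rendezvous node); on a real NAT, each outbound packet creates the mapping that lets the peer's packets in. The sketch below demonstrates the simultaneous-send pattern on localhost only — no actual NAT is involved, and the ports are whatever the OS assigns:

```python
import socket

def punch(sock, peer_addr, token=b"punch"):
    """Send an outbound datagram toward the peer's endpoint.
    On a real NAT, this outbound packet opens the mapping that
    allows the peer's inbound packets through; both sides must send."""
    sock.sendto(token, peer_addr)

# Demonstration on localhost -- no real NAT here, just the send pattern
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))
b.bind(("127.0.0.1", 0))
a.settimeout(2)

punch(a, b.getsockname())      # A punches toward B
punch(b, a.getsockname())      # B punches toward A
data, addr = a.recvfrom(1024)  # B's packet reaches A
print(data)                    # -> b'punch'
```

In the real-world version, the hard part is learning the peer's public address and port in the first place, which is exactly what STUN servers (or a mutually reachable mesh node) provide.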
Mesh nodes that come and go — phones joining the mesh, laptops sleeping and waking, portable nodes being moved — have constantly changing addresses. Services hosted on such nodes are unreachable as soon as the address changes. The solution is to either use stable nodes for hosting (a Raspberry Pi that stays on and stays connected) or use an overlay addressing scheme that provides stable identifiers.
Several overlay networks provide stable, reachable addresses that work regardless of the underlying network topology:
Tor onion services (formerly “hidden services”) allow you to host a service that is reachable via a .onion address. The service’s actual IP address is hidden — Tor handles routing through its anonymity network. Onion services work from behind NAT, require no port forwarding, and provide a stable address. For an alternative network with intermittent internet access, running services as onion services means they are reachable by anyone on the Tor network without any DNS or static IP requirements.
I2P eepsites are the I2P equivalent — websites and services hosted within the I2P anonymity network, reachable via .i2p addresses. I2P is designed for internal communication (within the I2P network) rather than for accessing the regular internet, which makes it a natural fit for alternative networks that want to keep services local and private.
Yggdrasil is an experimental encrypted mesh overlay network that assigns each node a stable IPv6 address derived from its cryptographic key. Yggdrasil nodes automatically discover each other (via multicast on the local network or via configured peers on the internet) and build an encrypted overlay. The key insight for alternative networks is that a Yggdrasil address never changes — it is derived from your cryptographic key, not from your network topology. A service hosted on a Yggdrasil address is reachable as long as the node is connected to any Yggdrasil peer, regardless of NAT, dynamic IPs, or network changes.
```bash
# Install Yggdrasil (provides stable IPv6 addresses on any network).
# Note: may require adding the Yggdrasil package repository first.
sudo apt install yggdrasil

# Generate a configuration with a persistent keypair
sudo yggdrasil -genconf | sudo tee /etc/yggdrasil/yggdrasil.conf

# Start Yggdrasil — your node gets a stable address in 200::/7
sudo systemctl start yggdrasil

# Check your Yggdrasil address
yggdrasilctl getSelf
# Output includes your permanent IPv6 address, e.g., 200:abcd:ef01:...
```
The most robust pattern for running services on an alternative network combines several layers:
```python
import subprocess
import json

def get_yggdrasil_address():
    """Get this node's stable Yggdrasil IPv6 address."""
    result = subprocess.run(
        ["yggdrasilctl", "-json", "getSelf"],
        capture_output=True, text=True,
    )
    data = json.loads(result.stdout)
    # The JSON key name has varied across Yggdrasil versions
    return data.get("address") or data.get("IPv6Address", "unknown")

def announce_service(service_name, port):
    """Announce a service on the mesh using Yggdrasil addressing."""
    addr = get_yggdrasil_address()
    print(f"Service '{service_name}' available at [{addr}]:{port}")
    return f"[{addr}]:{port}"
```
Everything we have discussed in this chapter — IPFS, Matrix, Scuttlebutt, Briar — shares a philosophical thread that the academic paper “Local-First Software” by Kleppmann et al. (2019) crystallized into a set of principles. Local-first software is software that:
Works offline. The application is fully functional without any network connection. Network access enhances it (by enabling synchronization) but is not required.
Stores data locally. Your data lives on your device, not on someone else’s server. You own it, you control it, and you can access it even if every server in the world goes offline.
Synchronizes when possible. When a network connection is available (whether to the internet, a mesh network, or even a direct Bluetooth link), the application synchronizes data with peers — pushing your changes and pulling theirs.
Handles conflicts gracefully. When two users modify the same data while disconnected, the application must resolve the conflict automatically, without losing either user’s work.
These principles map perfectly onto alternative networks, where connectivity is intermittent, bandwidth is limited, and centralized servers are either unavailable or deliberately avoided.
The key enabling technology for local-first software is the Conflict-free Replicated Data Type (CRDT) — a data structure that can be modified independently on multiple devices and then merged without conflicts. CRDTs achieve this through mathematical guarantees: the merge operation is commutative (order does not matter), associative (grouping does not matter), and idempotent (merging the same change twice has no additional effect).
There are two families of CRDTs:
State-based CRDTs (CvRDTs) — Each replica maintains the full state, and synchronization consists of sending the entire state and merging it. The merge function takes two states and produces a state that incorporates both. Example: a G-Counter (grow-only counter) where each node maintains its own count, and the merged value is the sum of all node counts.
Operation-based CRDTs (CmRDTs) — Replicas exchange operations (the individual changes made) rather than full state. This is more bandwidth-efficient, but it requires reliable, exactly-once delivery of operations, and many designs additionally assume causal ordering of delivery.
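The G-Counter mentioned above makes the mathematical guarantees concrete: merge is an elementwise maximum, which is commutative, associative, and idempotent by construction. A minimal sketch:

```python
class GCounter:
    """Grow-only counter CRDT: one slot per node, merge = elementwise max."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.counts = {}  # node_id -> that node's local count

    def increment(self, n=1):
        # Each node only ever increments its own slot
        self.counts[self.node_id] = self.counts.get(self.node_id, 0) + n

    def merge(self, other):
        # Elementwise max: commutative, associative, idempotent
        for node, count in other.counts.items():
            self.counts[node] = max(self.counts.get(node, 0), count)

    def value(self):
        return sum(self.counts.values())

a, b = GCounter("a"), GCounter("b")
a.increment(3)          # node a counts 3 events while offline
b.increment(2)          # node b counts 2 events while offline
a.merge(b)
b.merge(a)
a.merge(b)              # idempotent: merging again changes nothing
print(a.value(), b.value())  # -> 5 5
```

Because merge order never matters, the two nodes can sync in any order, over any transport, any number of times, and still converge on the same total.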
Common CRDT types include: G-Counters and PN-Counters (increment-only and increment/decrement counters), G-Sets and OR-Sets (add-only and add/remove sets), LWW-Registers (single values resolved by timestamp), and sequence CRDTs such as RGA for collaborative text.
Let us build a practical example — a simple collaborative document using CRDTs that can synchronize across peers on an alternative network. We will implement a Last-Writer-Wins Element Set (LWW-Element-Set) combined with a simple text CRDT for a shared key-value document.
```python
import time
import uuid

class LWWRegister:
    """Last-Writer-Wins Register — a single value with a timestamp."""

    def __init__(self):
        self.value = None
        self.timestamp = 0
        # Production CRDTs use the node ID to break timestamp ties
        self.node_id = str(uuid.uuid4())[:8]

    def set(self, value):
        self.timestamp = time.time()
        self.value = value

    def merge(self, other_value, other_timestamp):
        if other_timestamp > self.timestamp:
            self.value = other_value
            self.timestamp = other_timestamp
```
Now a shared document built on LWW registers — each field in the document is an independent register that can be edited and merged:
```python
class SharedDocument:
    """A CRDT-based shared document using LWW-Registers per field."""

    def __init__(self, doc_id):
        self.doc_id = doc_id
        self.fields = {}  # field_name -> LWWRegister

    def edit(self, field, value):
        if field not in self.fields:
            self.fields[field] = LWWRegister()
        self.fields[field].set(value)

    def get(self, field):
        reg = self.fields.get(field)
        return reg.value if reg else None

    def merge(self, remote_state):
        for field, (value, timestamp) in remote_state.items():
            if field not in self.fields:
                self.fields[field] = LWWRegister()
            self.fields[field].merge(value, timestamp)

    def get_state(self):
        return {
            f: (r.value, r.timestamp) for f, r in self.fields.items()
        }
```
And here is how two peers would use it on a mesh network:
```python
# Peer A creates a document and edits it
doc_a = SharedDocument("community-notes")
doc_a.edit("title", "Mesh Network Meeting Notes")
doc_a.edit("date", "2026-03-25")
doc_a.edit("attendees", "Alice, Bob, Carol")

# Peer B creates the same document and makes different edits
doc_b = SharedDocument("community-notes")
doc_b.edit("title", "Mesh Meeting Notes — March")
doc_b.edit("location", "Community Center")

# When they connect, they exchange and merge state
state_a = doc_a.get_state()
state_b = doc_b.get_state()
doc_a.merge(state_b)  # A gets B's changes
doc_b.merge(state_a)  # B gets A's changes

# Both now have the same merged document
for field in ["title", "date", "attendees", "location"]:
    print(f"{field}: {doc_a.get(field)}")
```
After merging, both peers have the complete document. The title field has a conflict (both edited it), but the LWW-Register resolves it deterministically — the edit with the later timestamp wins. Fields that only one peer edited (date, attendees, location) are simply added to the other peer’s copy.
For a more sophisticated collaborative editing experience (supporting concurrent edits within a single text field — like Google Docs), you would use an RGA or YATA CRDT, or use a library like Automerge or Yjs (both available for JavaScript, with Python bindings emerging). These are complex data structures, but the principle is the same: any change made on any peer can be merged with any other peer’s changes, in any order, producing the same result.
The beauty of CRDT-based applications on alternative networks is that synchronization is transport-agnostic. The state or operations can be exchanged over a TCP or HTTP connection across the mesh, a direct Bluetooth or Wi-Fi Direct link, a LoRa packet radio channel, an IPFS pin, or a file carried by hand:
```python
import json

def sync_via_file(doc, export_path="sync_state.json"):
    """Export document state for sneakernet synchronization."""
    state = doc.get_state()
    with open(export_path, "w") as f:
        json.dump(state, f)
    print(f"State exported to {export_path}")

def import_sync(doc, import_path="sync_state.json"):
    """Import and merge state from another peer."""
    with open(import_path) as f:
        remote_state = json.load(f)
    # JSON turns tuples into lists; convert back for merge
    remote_state = {k: tuple(v) for k, v in remote_state.items()}
    doc.merge(remote_state)
    print(f"Merged state from {import_path}")
```
This file-based synchronization works over literally any medium — email attachment, IPFS pin, USB drive, even printed QR codes if you are desperate enough. The CRDT guarantees that the merge will be correct regardless of how the state was transported.
No single tool solves every problem. A well-designed alternative network deploys a stack of decentralized services, each handling what it does best:
| Layer | Purpose | Recommended Tools |
|---|---|---|
| Naming | Human-readable service discovery | mDNS + shared registry via IPFS |
| File sharing | Static content distribution | IPFS with community pinning |
| Storage | Persistent structured data | MinIO (S3-compatible) or SeaweedFS |
| Messaging | Real-time chat and coordination | Matrix/Synapse (primary) + Briar (fallback) |
| Social | Community updates and discussion | GoToSocial or Akkoma (Fediverse) |
| Addressing | Stable service endpoints | Yggdrasil IPv6 overlay |
| Privacy | Anonymous access from internet | Tor onion services |
| Collaboration | Shared documents and data | CRDT-based local-first apps |
| Sync | Offline-capable data replication | Scuttlebutt gossip protocol |
When building or choosing applications for alternative networks, keep these principles in mind:
Offline-first, network-second. The application must be useful without any network connectivity. Synchronization is a bonus, not a requirement. Users should never see a spinner waiting for a server that may not be reachable.
Tolerate partitions. Your mesh network will partition — nodes will go offline, links will fail, groups of nodes will be temporarily isolated from each other. Applications must handle this gracefully, queuing changes for synchronization when connectivity returns.
Minimize bandwidth assumptions. If your application requires megabits per second, it will not work on a LoRa backhaul or a congested mesh. Design for the worst case: text is cheap, images are affordable, video is a luxury.
Embrace eventual consistency. In a centralized system, everyone sees the same data at the same time. In a decentralized system on an intermittent network, different peers will have different views of the data at different times. This is not a bug — it is a fundamental property of the architecture. Use CRDTs and design your user experience to accommodate temporary inconsistency.
Layer your security. Do not depend on a single security mechanism. Use end-to-end encryption (so intermediate nodes cannot read content), transport encryption (so passive observers cannot see traffic patterns), and application-level access control (so even peers who receive data cannot access content they are not authorized for).
Degrade gracefully. When bandwidth is limited, reduce functionality rather than failing entirely. When peers are unreachable, work locally. When storage is full, prioritize recent data over historical data. Every constraint should trigger a degradation strategy, not an error.
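A degradation strategy can be as simple as a policy function keyed on measured conditions. The sketch below (with illustrative, not benchmarked, thresholds) picks the richest content tier the current link and storage can support:

```python
def choose_content_tier(bandwidth_kbps, storage_free_mb):
    """Pick a content tier the current constraints can support.
    Thresholds are illustrative assumptions, not benchmarks."""
    if bandwidth_kbps < 10 or storage_free_mb < 1:
        return "text"          # LoRa-class links: text only
    if bandwidth_kbps < 500 or storage_free_mb < 100:
        return "text+images"   # constrained mesh links
    return "full"              # healthy links: video allowed

print(choose_content_tier(5, 500))     # -> text
print(choose_content_tier(200, 500))   # -> text+images
print(choose_content_tier(5000, 500))  # -> full
```

The point is not the specific numbers but the shape: every constraint maps to a reduced mode of operation, never to an error screen.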
The decentralized application ecosystem is maturing rapidly. Projects that were experimental curiosities five years ago — IPFS, Matrix, Scuttlebutt — now have millions of users, commercial backing, and production-grade implementations. The tooling for building local-first applications is improving: Automerge and Yjs make CRDTs accessible to application developers; libp2p provides modular networking for peer-to-peer applications; IPFS Helia brings content addressing to web browsers.
The next frontier is usability. The technical foundations for decentralized applications are solid, but the user experience often lags behind centralized alternatives. Making decentralized applications as easy to use as their centralized counterparts — without sacrificing the properties that make them valuable — is the defining challenge of this space.
For alternative network builders, the message is clear: the software exists. It works. It can be deployed today on your mesh network, your community network, your off-grid homestead. The challenge is not technical impossibility but engineering effort — choosing the right tools, configuring them for your constraints, and building the local expertise to maintain them.
Your network is not just pipes. It is a platform. And the applications you run on it determine whether your alternative network is a technical curiosity or a genuine alternative to the corporate internet.