Case Study — Distributed Systems & Full-Stack Web

Navajo
White

Python · Django · Django REST Framework · PostgreSQL · JSON · GitHub API · Git · Heroku / Render · CI/CD · HTMX · CSS · JavaScript · Figma

A federated social platform

NavajoWhite is a federated blogging and social platform built on an inbox-push model, inspired by Mastodon and the modern fediverse. Each team operated an independent codebase, and nodes communicated by pushing entries, likes, comments, and follow events directly to the inbox of the target author's node.

Role
Full-Stack Developer
Team
6 developers, 5 external nodes
Deliverables
University capstone; distributed systems course
For users:

data sovereignty means your content lives on the node you chose, not a platform you don't control. Censorship resistance follows naturally: a post removed on one node can still exist on others that already received it, and users can migrate nodes without losing their identity.

For moderators:

decentralization puts oversight closer to the community. Each node operator sets their own policies and federation rules, enabling more contextual, human moderation than any global platform algorithm can offer and better curation as a result.

For the system:

outages are isolated to individual nodes. Horizontal scalability means each node grows independently based on its own load, with no central bottleneck to bring everything down at once.

5 external systems integrated
4 content visibility tiers
REST cross-node protocol
30+ API endpoints

How it was built

Building a social network where no single person is in charge sounds simple, but it creates an immediate problem: how do you get completely separate systems, built by different teams, to talk to each other reliably? We solved this with a shared rulebook. Every action on the platform gets packaged into a small, labelled message and delivered directly to the right person's inbox on whichever server they live on. It works a bit like email: you don't need to be on the same provider as someone to send them a message, you just need to know their address.
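The "small, labelled message" idea can be sketched concretely. Here is a hypothetical `like` event with illustrative field names (this is an assumption for explanation, not the project's exact wire format):

```python
# Illustrative sketch of a typed inbox message; field names are
# assumptions, not the project's exact wire format.
like_message = {
    "type": "like",
    # The sender's permanent, network-wide URL ID:
    "author": "https://node-a.example.com/authors/alice",
    # The object being liked, hosted on a different node:
    "object": "https://node-b.example.com/authors/bob/posts/42",
}

def inbox_url(author_id: str) -> str:
    """Derive the inbox endpoint from an author's global URL ID."""
    return author_id.rstrip("/") + "/inbox"

# The sending node would then POST `like_message` to
# inbox_url("https://node-b.example.com/authors/bob").
```

Like the email analogy above, the recipient's "address" is enough to deliver the message: no shared database, just an agreed endpoint shape.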


The hardest part wasn't writing the code, it was working with five other teams who were building their own versions of the same system. When we sat down to connect our servers together, we found we'd all made slightly different assumptions about how things should be formatted. We had to negotiate a common standard mid-project and update our database structure without breaking everything that already worked. On the front end, the challenge was taking all those messages arriving from different servers, each formatted slightly differently, and turning them into a single, clean feed that just looks like a normal social media timeline.

System architecture diagram
Database Design

The database had to account for the fact that users live on different servers. Rather than copying everyone's profile data and keeping it up to date everywhere, we gave every person a permanent web address that works as their unique ID across the whole network: no matter which server you're on, that address always points back to the right person. The schema was designed around eight core models across two Django applications, AuthorApp and PostApp, ensuring a clean separation of concerns across the local/remote boundary.

The trickiest design problem was representing remote authors. Rather than duplicating profile data from other nodes, each remote author is identified by a composite URL built from their home node's identifier and username. This gave us a stable, dereferenceable key for cross-node references without the consistency problems that come from copying data you don't own.
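A minimal sketch of that composite key; the `authors/<username>` path scheme is an assumption here, not necessarily the project's exact URL layout:

```python
from urllib.parse import urljoin

def author_id(node_base: str, username: str) -> str:
    """Build the composite, dereferenceable URL that identifies an author
    network-wide (sketch; the real path scheme may differ)."""
    return urljoin(node_base.rstrip("/") + "/", f"authors/{username}")

# Any node referring to this remote author derives the same stable key:
# author_id("https://node-a.example.com", "alice")
#   -> "https://node-a.example.com/authors/alice"
```

Because the key is itself a URL, it doubles as the address to dereference when fresh profile data is needed, avoiding stale copies.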

Inbox-Push Model

Every action on the platform is represented as a typed JSON object and delivered by POST to the recipient's inbox endpoint over HTTP. The sending node is responsible for routing: public posts go to every follower's inbox, while friends-only posts go only to mutual follows. All nodes require administrators to initiate a handshake and verify each other via the API, which ensured only the necessary information was shared across nodes and gave users more control over who saw their account.
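That sender-side routing rule can be sketched as a pure function. The tier names follow the write-up; the function and parameter names are illustrative:

```python
def route_recipients(visibility: str, followers: set, friends: set) -> set:
    """Return the set of author IDs whose inboxes should receive a post.
    Sketch only; `friends` is assumed to mean mutual follows."""
    if visibility == "public":
        return set(followers)                  # every follower's inbox
    if visibility == "friends":
        return set(followers) & set(friends)   # mutual follows only
    # Unlisted and private posts are not pushed to remote inboxes;
    # unlisted content stays reachable by direct link.
    return set()
```

Keeping this decision on the sending node means the receiver never has to filter, which matches the passive-inbox design described below.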

The inbox endpoint is an append-only queue for each node. No logic runs on delivery: objects are stored as-is and processed on read, reducing overhead. This keeps the receiving node passive and makes the system easier to reason about: each node only needs to trust its own data, not the correctness of remote processing.
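A toy version of that passive inbox, assuming deliveries arrive as raw JSON strings (class and method names are illustrative, not the project's code):

```python
import json

class Inbox:
    """Append-only inbox: store deliveries as-is, interpret only on read."""

    def __init__(self):
        self._items = []

    def deliver(self, raw: str) -> None:
        # No processing on delivery; the receiving node stays passive.
        self._items.append(raw)

    def read(self) -> list:
        # Objects are parsed and interpreted only when the stream is built.
        return [json.loads(item) for item in self._items]
```

Deferring parsing to read time means a malformed delivery can be skipped or logged at display time instead of failing the sender's POST.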

Mid-Project Restructure

A significant schema restructure was necessary after cross-team API discussions surfaced incompatibilities in how nodes were modelling shared objects. Coordinating that migration across an active codebase without breaking existing endpoints was one of the more demanding parts of the project.

Front End

Early on I created clean and flexible Figma prototypes to work towards. This laid the groundwork for how we handled the user flow and Django views. To reduce redundancy I created HTMX templates, which decreased the number of HTML files to maintain and simplified our workflow.

The stream is a merged view, much like Instagram's: posts, comments, and likes are all pulled together into a single chronological feed. Rather than surfacing raw events, each object type renders as a notification-style feed item: a like appears as a post with its reaction context, a follow request as an actionable card. This keeps the experience readable without hiding activity and streamlines actionable items.

Overall, the aim was to create a user flow and interface that felt familiar to our users, while still being flexible enough to integrate posts from other systems.

Cross-Node Rendering

The main challenge was normalizing content from five external nodes, each with slightly different field shapes and image handling. Rendering had to be defensive: missing fields fell back gracefully, and images were validated before display rather than assumed present.
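A sketch of that defensive normalization; the field aliases and fallback values are assumptions for illustration, not the exact shapes the five nodes sent:

```python
def _is_probable_image_url(url) -> bool:
    # Validate before display rather than assuming presence (rough check).
    return isinstance(url, str) and url.startswith(("http://", "https://"))

def normalize_post(payload: dict) -> dict:
    """Coerce a remote post payload into the fields the templates expect,
    falling back gracefully when a node omits or renames a field."""
    author = payload.get("author") or {}
    return {
        "title": payload.get("title") or payload.get("name") or "(untitled)",
        "content": payload.get("content") or payload.get("body") or "",
        "author": author.get("displayName") or author.get("username") or "unknown",
        "image": payload.get("image") if _is_probable_image_url(payload.get("image")) else None,
    }
```

Templates then render one canonical shape, so a missing or oddly named field on a remote node degrades to a placeholder instead of a broken feed item.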

What it does

01
Federated Follow Graph

Authors on any node can follow authors on any other node. The follow request is sent to the target node's inbox; acceptance is recorded locally. Unfollow events propagate the same way.

02
Visibility Tiers

Posts can be public (broadcast to all followers), friends-only (mutual follows), unlisted (linkable but undiscoverable), or private. Each tier routes to the correct inboxes with no manual filtering required on the receiver side.

03
Inbox Object Model

Every action is a typed JSON object POSTed to an inbox endpoint. The inbox is a simple append queue; the stream view is a computed merge of inbox objects and locally-known public content.

04
Higher User & Admin Control

Users gain greater control over the nodes and communities they engage with, while administrators have enhanced oversight of their communities and increased authority to enforce custom rules.

05
GitHub Activity Import

Connecting a GitHub account streams commit and repository events into the author's local feed as native post objects. Followers receive these on their stream just like hand-authored entries.
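A rough sketch of such an import against GitHub's public events API; the post field names are assumptions, and real code would need auth, paging, and error handling:

```python
import json
from urllib.request import urlopen

def event_to_post(event: dict) -> dict:
    """Wrap one GitHub event as a native-looking post object
    (field names are illustrative)."""
    repo = event.get("repo", {}).get("name", "unknown repo")
    return {
        "type": "post",
        "title": f"{event.get('type', 'Event')} on {repo}",
        "visibility": "public",
    }

def import_github_activity(username: str) -> list:
    """Fetch recent public activity and convert each event to a post."""
    url = f"https://api.github.com/users/{username}/events/public"
    with urlopen(url) as resp:  # network call; handle failures in real code
        events = json.load(resp)
    return [event_to_post(e) for e in events]
```

Because the result is an ordinary post object, followers' streams need no special case for GitHub content.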

06
Image & Linked Media Entries

Authors can publish unlisted image entries and reference them from text posts, enabling rich media posts without hosting images inline. The image URL is stable and dereferenceable across nodes.

What I learned

The hardest engineering problem was schema alignment. Each of the five external teams evolved their API subtly differently: field names drifted, optional fields became required, date formats shifted. Robust cross-node integration required writing adaptive serializers that could handle multiple payload shapes for the same object type.
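One way such an adaptive deserializer can be sketched; the alias lists and date formats below are examples of the kind of drift described, not the full set encountered:

```python
from datetime import datetime

# Alternate names different nodes used for the same field (illustrative).
FIELD_ALIASES = {
    "content": ["content", "body", "text"],
    "published": ["published", "created_at", "pubDate"],
}

# Date formats tried in order; unparseable values degrade to None.
DATE_FORMATS = ["%Y-%m-%dT%H:%M:%S%z", "%Y-%m-%d %H:%M:%S", "%Y-%m-%d"]

def first_present(payload: dict, aliases: list, default=None):
    for name in aliases:
        if name in payload:
            return payload[name]
    return default

def parse_date(value):
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(value, fmt)
        except (TypeError, ValueError):
            continue
    return None  # degrade gracefully instead of raising

def adapt_post(payload: dict) -> dict:
    """Map any of several known payload shapes onto one internal shape."""
    return {
        "content": first_present(payload, FIELD_ALIASES["content"], ""),
        "published": parse_date(first_present(payload, FIELD_ALIASES["published"])),
    }
```

The alias table makes the accepted shapes explicit, so supporting a newly drifted field name becomes a one-line change rather than a scattered fix.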

Working against live external systems also meant debugging failures with no stack traces. I learned to approach distributed systems defensively: log everything on ingestion, validate payload shapes at the boundary, and degrade gracefully when a remote node is unreachable rather than propagating errors to the user.

Key Takeaway

Building a distributed system taught me that the hardest problems aren't algorithmic; they're coordination problems. Clear API contracts, consistent data shapes, and defensive ingestion logic matter far more than clever code.

On the product side, designing the stream experience surfaced interesting UX questions unique to federation. When content arrives from a node that is temporarily offline, how do you indicate origin? When a like is sent but delivery fails, should it retry silently or surface an error? These aren't hypothetical edge cases; they happened in every demo.

The project gave me hands-on experience with the architecture that underpins real federated platforms like Mastodon, and a much stronger intuition for REST API design, distributed data ownership, and the tradeoffs between consistency and availability in a system with no central coordinator.

Skills Built

REST API design & versioning, cross-team API negotiation, Django serializers, async delivery patterns, multi-node integration testing, full-stack ownership from models to UI.