01 — Overview
MindSea is a mobile app designed to help students track their emotions over time, understand how mood patterns affect academic performance, and access self-help resources from meditation techniques to local professional care.
How do our emotions affect our academic performance, and how do we make that visible to students in a way that's private, accessible, and easy enough to use every day?
02 — Process
Surveyed and interviewed students to identify emotional pain points tied to academic performance. Created personas to anchor design decisions in real user needs.
Mapped all core user journeys (logging, reviewing stats, reading articles, finding resources), then produced wireframes for all eight screens.
Ran Think Aloud sessions combined with semi-structured interviews. Prioritised open pathways over feature-specific evaluation to catch unexpected friction.
Applied a high-impact, low-effort change list. Reduced click counts, improved label clarity, enforced consistent icon language, and added a dark mode.
03 — User Research
Our target audience is students who want to better understand how their emotional state connects to their studies. We started by brainstorming the core features users would need, then grounded those decisions in three user personas representing different accessibility needs, cultural contexts, and relationships with technology.
Third-year undergraduate on a football scholarship. Familiar with technology but doesn't discuss emotions openly. Colour blind and prefers interfaces he can use without contacts. Currently struggling to progress with coursework despite being socially well-adjusted.
International student from China, two years in the US. Lacks confidence in spoken English, leading to anxiety in class discussions and presentations. Tech-savvy and organised, and relies heavily on her laptop. Experiences frustration that accumulates across the day.
Reduced finger dexterity from an accident makes standard mobile UI difficult to use. Requires an adaptable interface. Academically recovered after a difficult first year but gets highly frustrated when learning takes longer than expected. Has formal academic accommodations.
The personas converged on three design priorities: usability (minimal friction to log), repeatability (easy enough to return to daily), and privacy (data behind a username, never exposed). For emotion selection we adopted Plutchik's Wheel of Emotions as a base, then refined the options through direct student surveys.
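Plutchik's wheel defines eight primary emotions as the base set; a minimal sketch of how the survey-refined option list might be assembled. The sub-labels passed in are illustrative placeholders, not MindSea's actual survey results:

```python
# Plutchik's eight primary emotions, used as the base option set.
PLUTCHIK_PRIMARY = [
    "joy", "trust", "fear", "surprise",
    "sadness", "disgust", "anger", "anticipation",
]

def emotion_options(refined):
    """Flatten primary categories plus any survey-refined sub-labels.

    `refined` maps a primary emotion to extra labels students asked for,
    e.g. {"joy": ["content"]}. Placeholder structure for illustration.
    """
    options = []
    for primary in PLUTCHIK_PRIMARY:
        options.append(primary)
        options.extend(refined.get(primary, []))
    return options
```

Keeping the primaries as a fixed base while layering refinements on top lets the survey adjust wording without breaking the colour and stats mappings tied to the eight categories.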
04 — User Flows & Wireframes
We chose mobile as the platform to encourage in-the-moment logging. Studying adjacent health and fitness apps informed the structure: a nav split into Logging, Stats, Articles, and Resources. The key design tension was balancing depth (multiple emotions per log, intensity scale, category tags) against speed; every extra tap is a reason not to log.
Stats show emotional patterns across weeks and months, not individual log entries. The user sees the shape of their experience, not a ledger of bad days.
Common icons, graph conventions borrowed from fitness apps, and a checklist tag system mean users identify options rather than having to remember or type them.
Framing logs as diary entries grounds the interaction in a familiar mental model, reducing the learning curve for first-time users.
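The week- and month-level stats views boil down to grouping log entries by period and counting emotions. A hedged sketch of the weekly grouping, assuming logs arrive as simple (date, emotion) pairs (the function name and log shape are illustrative):

```python
from collections import Counter, defaultdict
from datetime import date

def weekly_emotion_counts(logs):
    """Group (date, emotion) pairs by ISO (year, week) and count emotions.

    Returns {(iso_year, iso_week): Counter({emotion: count})}, i.e. the
    shape of a week's bar in the stats graph rather than a per-day ledger.
    """
    weeks = defaultdict(Counter)
    for day, emotion in logs:
        iso = day.isocalendar()
        weeks[(iso[0], iso[1])][emotion] += 1
    return dict(weeks)
```

The monthly view would be the same reduction keyed on (year, month), which is what lets the stats screen show patterns rather than individual entries.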
Eight screens covering the full user journey from onboarding through to settings.
Name, gender, birthday, password setup. Must accept T&Cs to reach the landing page.
Welcome screen with a prompt to check in and create a new log.
Review emotions across different time periods — weekly and monthly views.
Number of logs per topic and most common associated emotion. Arrows to navigate months.
0–5 intensity slider, time selection, hashtag category, and optional notes.
Web articles and videos surfaced based on recent logs and patterns.
Local clinics and mental health providers for professional support.
Account info, categories, and language preferences.
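The logging screen above combines multiple emotions per entry, a 0–5 intensity slider, a time selection, category tags, and optional notes. One way to model such an entry; the field names are illustrative assumptions, not MindSea's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MoodLog:
    """One diary-style log entry; field names are illustrative."""
    emotions: list          # one or more emotions per log
    intensity: int          # 0-5 slider value
    logged_at: datetime     # user-selected time of the entry
    categories: list = field(default_factory=list)  # checklist tags
    notes: str = ""         # optional free-text notes

    def __post_init__(self):
        # Enforce the 0-5 slider range at the model boundary.
        if not 0 <= self.intensity <= 5:
            raise ValueError("intensity must be between 0 and 5")
```

Making only `emotions`, `intensity`, and `logged_at` required mirrors the depth-versus-speed trade-off: a log is valid after three quick inputs, with tags and notes as optional detail.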
05 — Testing
Prototyping gave us confidence in the concept. Testing gave us the corrections. We ran multiple user tests specifically to avoid designing in isolation and went in expecting to find problems.
We combined a Think Aloud study with a brief semi-structured interview. The goal wasn't to evaluate specific features; it was to understand the pathways users took naturally and surface unexpected friction. Keeping the prompts open meant participants led the session rather than following our assumptions.
Reactions were mixed. Many appreciated the clean, modern interface, but some found it confusing without colour. The greyscale prototype made graphs feel overwhelming rather than informative. Icon and emoji usage was inconsistent enough to break recognition. The "Check-In" label caused repeated confusion about the core feature.
A high-impact, low-effort change list guided the iteration. Every change addressed a specific friction point observed during testing.
The original label caused consistent confusion about what the action did. Renaming it resolved the ambiguity for the app's most-used feature.
Added the Log Mood button to the persistent top bar. Since it's the most frequent action, it needed to be reachable from every screen without navigating back to Home.
Applied consistent colour coding tied to emotion categories across the entire app. Pressing and holding a graph element reveals its label, thereby reducing cognitive load while keeping depth available.
Replaced the freeform hashtag system with a predefined checklist. Eliminated misspelling errors and made categorisation faster by enabling recognition over recall.
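The predefined checklist amounts to constraining tags to a known set rather than accepting free text. A minimal sketch of that constraint; the category names are placeholders, since the real list is user-editable in Settings:

```python
# Placeholder checklist; the real set is editable in Settings.
PREDEFINED_CATEGORIES = {"exams", "coursework", "social", "sleep", "health"}

def select_categories(chosen):
    """Accept only tags from the predefined checklist, deduplicated.

    Freeform input allowed "examz" and "Exams" to become separate
    categories; a closed set makes every log comparable in the stats.
    """
    unknown = set(chosen) - PREDEFINED_CATEGORIES
    if unknown:
        raise ValueError(f"not on the checklist: {sorted(unknown)}")
    return sorted(set(chosen))
```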
Emojis were inconsistent with the icon set used elsewhere in the app. Replaced with the same emotion-colour system from the stats screens for a unified visual language.
Added a logout button, in-app tutorials, and a dark mode: each low effort, with meaningful impact on first-use experience and long-term retention.
"Check-In" button on landing. No way to log from other screens. Greyscale made the hierarchy ambiguous.
"Log Mood" with matching icon in the top bar. Accessible from any screen. Clearer primary action hierarchy.
Freeform hashtag input for categories. Prone to misspelling and inconsistency across logs.
Predefined checklist. Faster to complete, eliminates errors, editable in Settings.
Greyscale graphs with emoji labels. Overwhelming without colour; emojis felt inconsistent with icon set.
Emotion-mapped colours applied consistently. Press-and-hold reveals labels. Tutorial added to top right.
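The unified emotion-colour system in the "after" state can be sketched as a single mapping shared by graphs, icons, and log entries. The hex values and category names below are placeholders, not the shipped palette:

```python
# Placeholder palette: one colour per emotion category, reused app-wide
# so graphs, icons, and log entries stay visually consistent.
EMOTION_COLOURS = {
    "joy": "#F5C518",
    "sadness": "#4A78C2",
    "anger": "#C0392B",
    "fear": "#7D3C98",
}
FALLBACK_COLOUR = "#9E9E9E"  # neutral grey for unmapped emotions

def colour_for(emotion):
    """Resolve an emotion to its shared colour, with a neutral fallback."""
    return EMOTION_COLOURS.get(emotion, FALLBACK_COLOUR)
```

Centralising the mapping is what makes the consistency enforceable: any screen that needs an emotion's colour asks one function instead of hard-coding its own.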
06 — Reflection
MindSea was my first end-to-end UX project, from research through to a tested, iterated prototype. The gap between what we designed and what users actually did during testing was the most instructive part.
User testing revealed issues that static reviews missed, proving that real behaviour matters more than assumptions. Designing for accessibility from the start led to better decisions around usability and interaction. Balancing depth with speed kept the core experience quick while allowing optional detail. Prioritising recognition over recall made the app more intuitive and easier to use daily.
Strengthened user-testing skills and the ability to turn insights into design improvements. Built a solid foundation in accessibility-first design for diverse user needs. Improved prioritisation by balancing simplicity with feature depth. Enhanced ability to design intuitive, low-friction interfaces using familiar patterns.