Case Study — UX Research & Product Design

MindSea

Figma · Lucidchart · Wireframing · Prototyping · User Testing · User Research · Personas

A mental health tracker for students

MindSea is a mobile app designed to help students track their emotions over time, understand how mood patterns affect academic performance, and access self-help resources from meditation techniques to local professional care.

Role
UX Research, Persona Development, Wireframing, Prototyping, User Testing
Team
Faiza Rahman, Favour Nwachukwu, Julie Pilz, Nilanjan Ghatak, Saba Gul
Deliverables
An interactive, user-tested prototype of a mental health app that tracks emotions and provides self-help resources for students
The Design Question

How do our emotions affect our academic performance, and how do we make that visible to students in a way that's private, accessible, and easy enough to use every day?

3 user personas researched
8 wireframe screens designed
4 design iterations from testing
10 live testers
Phase 01
Research
Understanding user needs, expectations, and outcomes

Surveyed and interviewed students to identify emotional pain points tied to academic performance. Created personas to anchor design decisions in real user needs.

Phase 02
Prototyping
Creating clear and minimal user flows for ease of use

Mapped the core user journeys (logging, reviewing stats, reading articles, finding resources), then produced wireframes for all 8 screens.

Phase 03
Testing
Using structured interviews to recognise weak points

Ran Think Aloud sessions combined with semi-structured interviews. Prioritised open pathways over feature-specific evaluation to catch unexpected friction.

Phase 04
Final Product
Polishing the interactive prototype based on learnings

Applied a high-impact, low-effort change list. Reduced click counts, improved label clarity, enforced consistent icon language, and added a dark mode.

Who are we designing for?

Our target audience is students who want to better understand how their emotional state connects to their studies. We started by brainstorming the core features users would need, then grounded those decisions in three user personas representing different accessibility needs, cultural contexts, and relationships with technology.

Personas
Michael
21 years old — Psychology, Howard University

Third-year undergraduate on a football scholarship. Familiar with technology but doesn't discuss emotions openly. Colour blind and prefers interfaces he can use without his contact lenses. Currently struggling to progress with coursework despite being socially well-adjusted.

Accessibility Colour contrast
Sarah
25 years old — Linguistics, Graduate student

International student from China, two years in the US. Struggles with spoken English confidence, leading to anxiety in class discussions and presentations. Tech-savvy and organised; relies heavily on her laptop. Experiences frustration that accumulates across the day.

Language support Multilingual UI
Quinn
23 years old — Neuroscience

Reduced finger dexterity from an accident makes standard mobile UI difficult to use. Requires an adaptable interface. Academically recovered after a difficult first year but gets highly frustrated when learning takes longer than expected. Has formal academic accommodations.

Motor accessibility Adaptable UI
Research Findings

The personas converged on three design priorities: usability (minimal friction to log), repeatability (easy enough to return to daily), and privacy (data behind a username, never exposed). For emotion selection we adopted Plutchik's Wheel of Emotions as a base, then refined the options through direct student surveys.

Mapping the experience

We chose mobile as the platform to encourage in-the-moment logging. Studying adjacent health and fitness apps informed the structure: a nav split into Logging, Stats, Articles, and Resources. The key design tension was balancing depth (multiple emotions per log, intensity scale, category tags) against speed; every extra tap is a reason not to log.

Principle 01
Proportions over moments

Stats show emotional patterns across weeks and months, not individual log entries. The user sees the shape of their experience, not a ledger of bad days.

Principle 02
Recognition over recall

Common icons, graph conventions borrowed from fitness apps, and a checklist tag system mean users identify options rather than having to remember or type them.

Principle 03
Metaphor as onboarding

Framing logs as diary entries grounds the interaction in a familiar mental model, reducing the learning curve for first-time users.

Wireframes — V1 Screens

Eight screens covering the full user journey from onboarding through to settings.

01
Login

Name, gender, birthday, password setup. Must accept T&Cs to reach the landing page.

02
Home

Welcome screen with a prompt to check in and create a new log.

03
Summary

Review emotions across different time periods — weekly and monthly views.

04
Monthly View

Number of logs per topic and most common associated emotion. Arrows to navigate months.

05
Log Mood

0–5 intensity slider, time selection, hashtag category, and optional notes.

06
Articles

Web articles and videos surfaced based on recent logs and patterns.

07
Resources

Local clinics and mental health providers for professional support.

08
Settings

Account info, categories, and language preferences.
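The Log Mood screen above bundles several structured fields (intensity slider, time selection, category checklist, optional notes). As a minimal sketch, that entry could be modelled like this; all names and category values here are illustrative assumptions, not the app's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

# Hypothetical category checklist — in the app these are predefined
# and editable in Settings, replacing the original freeform hashtags.
CATEGORIES = ["Coursework", "Exams", "Social", "Sleep", "Other"]

@dataclass
class MoodLog:
    emotion: str                       # e.g. a Plutchik-derived emotion label
    intensity: int                     # 0-5, from the slider
    logged_at: datetime                # time selection
    categories: list[str] = field(default_factory=list)  # checklist, not freeform tags
    notes: Optional[str] = None        # optional free text

    def __post_init__(self):
        # Validate the slider range and reject categories outside the checklist,
        # mirroring how a predefined list eliminates misspelling errors.
        if not 0 <= self.intensity <= 5:
            raise ValueError("intensity must be between 0 and 5")
        unknown = set(self.categories) - set(CATEGORIES)
        if unknown:
            raise ValueError(f"unknown categories: {unknown}")

# Example entry
entry = MoodLog("anxiety", 3, datetime(2024, 3, 1, 14, 30), ["Exams"])
```

A constrained model like this is what makes the recognition-over-recall checklist work: invalid or misspelled categories simply cannot be stored.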

What users actually did

Prototyping gave us confidence in the concept. Testing gave us the corrections. We ran multiple user tests specifically to avoid designing in isolation and went in expecting to find problems.

Technique

We combined a Think Aloud study with a brief semi-structured interview. The goal wasn't to evaluate specific features; it was to understand the pathways users took naturally and surface unexpected friction. Keeping the prompts open meant participants led the session rather than following our assumptions.

Findings

Reactions were mixed. Many appreciated the clean, modern interface, but some found it confusing without colour. The greyscale prototype made graphs feel overwhelming rather than informative. Icon and emoji usage was inconsistent enough to break recognition. The "Check-In" label caused repeated confusion about the core feature.

Improvements made

A high-impact, low-effort change list guided the iteration. Every change addressed a specific friction point observed during testing.

"Check-In" → "Log Mood"

The original label caused consistent confusion about what the action did. Renaming it resolved the ambiguity for the app's most-used feature.

Log from anywhere

Added the Log Mood button to the persistent top bar. Since it's the most frequent action, it needed to be reachable from every screen without navigating back to Home.

Emotion-mapped colours

Applied consistent colour coding tied to emotion categories across the entire app. Pressing and holding a graph element reveals its label, reducing cognitive load while keeping depth available.

Tags → Checklist

Replaced the freeform hashtag system with a predefined checklist. Eliminated misspelling errors and made categorisation faster by enabling recognition over recall.

Emoji removal

Emojis were inconsistent with the icon set used elsewhere in the app. Replaced with the same emotion-colour system from the stats screens for a unified visual language.

QoL additions

Logout button, in-app tutorials, and a dark mode, each low effort with meaningful impact on first-use experience and long-term retention.

Before → After: Key screens
Before — Home

"Check-In" button on landing. No way to log from other screens. Greyscale made the hierarchy ambiguous.

After — Home

"Log Mood" with matching icon in the top bar. Accessible from any screen. Clearer primary action hierarchy.

Before — Log

Freeform hashtag input for categories. Prone to misspelling and inconsistency across logs.

After — Log

Predefined checklist. Faster to complete, eliminates errors, editable in Settings.

Before — Stats

Greyscale graphs with emoji labels. Overwhelming without colour; emojis felt inconsistent with icon set.

After — Stats

Emotion-mapped colours applied consistently. Press-and-hold reveals labels. Tutorial added to top right.

What I learned

MindSea was my first end-to-end UX project, from research through to a tested, iterated prototype. The gap between what we designed and what users actually did during testing was the most instructive part.

Key Takeaway

User testing revealed issues that static reviews missed, proving that real behaviour matters more than assumptions. Designing for accessibility from the start led to better decisions around usability and interaction. Balancing depth with speed kept the core experience quick while allowing optional detail. Prioritising recognition over recall made the app more intuitive and easier to use daily.

Skills Built

Strengthened user testing skills and the ability to turn insights into design improvements. Built a solid foundation in accessibility-first design for diverse user needs. Improved prioritisation by balancing simplicity with feature depth. Enhanced ability to design intuitive, low-friction interfaces using familiar patterns.