Team
7 UX Designers
8 Engineers
Contribution
Product Thinking
Product Design
Generative + Evaluative Research
Duration
Sep 2021 - May 2022
Background
I was a UX Designer for CLAWS, a multi-disciplinary group of students at the University of Michigan competing in the NASA SUITS Challenge. Our goal was to design and build an AR interface that helps astronauts complete their missions on the lunar surface.
Impact
We were selected as finalists and invited to the Johnson Space Center for Test Week. Our product, HOSHI, was successfully tested and evaluated by NASA engineers, scientists, and astronauts in a simulated lunar environment.
Opportunity
Ahead of the 2025 Artemis Missions, NASA is interested in leveraging AR to help astronauts execute mission processes efficiently and safely.
Solution
Informed by extensive research with past astronauts and mission data, HOSHI enables its users to confidently complete tasks and respond to unexpected complications.
Enhances lunar navigation
Relays crucial information to help crew independently navigate and maintain situational awareness
Optimizes geological sampling
Supports crews' data collection and preserves physical energy through hands-free control
Supports autonomy
Health metrics and safe, flexible route plans help astronauts confidently respond to emergencies
Context
After submitting our proposal, my team was 1 of 11 student organizations selected to compete in the 2022 SUITS Challenge.
Process
Generative Research
Only a few of us had designed for AR/VR, but none of us had designed for the moon. Leveraging several methods helped us build knowledge and refine interview protocols to maximize our time with experts.
5 Mission Briefings
Identified lunar equipment design + solution needs
4 Interviews
Uncovered Mission Control + astronaut workflows
Literature Review
Analyzed 200+ pages on past missions + spacesuit design
Pain Points
Our desk research revealed that extra-vehicular activity (EVA) is highly regulated to ensure crew safety, but this creates extensive friction and hinders astronauts' autonomy.
Restricted actions and movement
Crew must wait for Mission Control to research, plan, and relay every route point
Ineffective cognitive support
Crew memorize protocols, use cuff checklists, or wait for audio guidance from experts
Disorienting environment
Poor depth perception and extreme lighting impair the crew's visual accuracy
Design Tenets
Our research revealed that AR interactions should be secondary to real-world mission tasks. With limited user access, we created these tenets from our research insights to stay aligned with this goal.
Balance safety + autonomy
Reduce cognitive load and empower astronauts with meaningful and actionable info
Preserve focus on the real world
Align interface with working conditions to support the crew's situational awareness
Trainability > learnability
Support crews' mental models, but assume they'll be extensively trained to use the product
Personas and Scenarios
Since we built the team from scratch, we wanted to foster a strong partnership and begin identifying technical constraints.
I led the Search + Rescue scenario ideation
After sharing research insights and Challenge requirements, I collaborated with 2 engineers to brainstorm and identify the key needs and interactions of this situation. I presented our scenario script to generate team-wide discussion and ensure we were aligned on anticipated tasks.
Ideation
We started with sketches to quickly explore a breadth of solutions. Many of these ideas weren't included in the final solution, but since we hadn't spoken to target users, this phase was key for revealing our early assumptions and questions.
Converging Ideation
During my VR internship, I learned how to test concepts using lo-fi physical prototypes. I helped my teammates layer interface elements to create low-cost but accurate testing tools. We ran hallway usability tests with the engineering team to gather feedback and determine viability.
Key Design Decision
We needed to determine how engineering would deploy concepts on our HoloLens. After weighing our options, we opted to use a pre-built design system so engineers could dedicate more time to learning the platform.
Option 1: Use the MRTK Design System
This provided us with pre-built Figma and Unity components for the HoloLens. Although we wanted flexibility with the UI, engineers could create rapid prototypes to help us assess spatial interactions (a minimal sketch of this workflow follows the options below).
Option 2: Design and develop UI from scratch
This would maximize our control over the design, ensuring components aligned with research insights. But design and engineering would have to dedicate more time to refining assets, and we agreed we wanted sufficient time to nail our delivery of features and interactions.
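To illustrate why Option 1 won out: with MRTK's pre-built components, feature logic attaches to a finished, tested control instead of a hand-rolled one. This is a minimal sketch assuming MRTK 2.x in Unity; the ToggleWaypoints script and waypointRoot object are hypothetical, not our actual code.

```csharp
// Sketch: attach feature logic to a pre-built MRTK Interactable (button).
// The Interactable already handles gaze, hand rays, and voice focus for us.
using Microsoft.MixedReality.Toolkit.UI;
using UnityEngine;

public class ToggleWaypoints : MonoBehaviour
{
    [SerializeField] private Interactable button;     // pre-built MRTK control
    [SerializeField] private GameObject waypointRoot; // holds route markers (hypothetical)

    private void OnEnable() => button.OnClick.AddListener(Toggle);

    private void OnDisable() => button.OnClick.RemoveListener(Toggle);

    // Show or hide the route markers each time the button is activated.
    private void Toggle() => waypointRoot.SetActive(!waypointRoot.activeSelf);
}
```

Because MRTK owns the interaction plumbing, a prototype like this could move from a Figma frame to an on-device test far faster than a from-scratch build.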
Mid-fi Ideation
Below is an example of how I analyzed concepts for navigational support. Ground-level trails and meshes could provide astronauts with environmental guidance, at the cost of limiting their real-world visibility. In contrast, eye-level solutions were less intrusive and distracting, but didn't provide context.
Mid-Fi Testing and Iteration
I ran remote usability and concept testing in Figma to give astronauts a flexible way to share feedback and validate decisions. But 2D interfaces couldn't replicate 3D elements in the real world, so our developers deployed weekly builds to fix bugs and usability issues.
Final Design
Our presentation and testing session received overwhelmingly positive reviews from judges, including the Manager of NASA EVAs and a Director Specialist from Microsoft.
Directional aids and landmark indicators
Markers move around the viewport periphery to help astronauts quickly gauge their position relative to landmarks, rather than relying on a static map (see the sketch below).
"I really like your aids that always point home. It's really easy to lose situational awareness... nice having a simple feature in case something goes wrong"
Prioritize critical information
Relaying crucial data helps crew confidently work and navigate, without increasing cognitive strain.
"The 'find your buddy' feature and navigation is great, and the way you talk that is how we would talk a crew rescue on board"
Minimize use of unsafe interaction methods
HOSHI's primary input is voice control, followed by eye gaze and physical hand gestures. Designing for mixed input methods ensures HOSHI remains practical in a strenuous, unpredictable environment (see the sketch below).
"I like the way you've thought about comm and that you have multiple comm paths... I also like the wake word (Hey VEGA)"
Reflection
Explore and research widely
Our curiosity enabled us to discover diverse sources of inspiration. While many of our initial concepts would fail, ideating with this mindset helped us stay optimistic and flexible in uncharted territory.
Invest in partnerships
Starting a team from scratch initially created friction, but embedding team workshops quickly fostered trust and productive collaboration.