
The Maze Unmoderated Research Lead

#maze #ux-research #usability-testing #unmoderated #figma #testing
Aha Moment

“What was the moment this product clicked?” — Launching a test on Monday and walking into Thursday's design review with 42 users' worth of task data, without scheduling a single session.

Identity

A UX researcher or product designer at a company where research is valued but researcher time is scarce. They use Maze to run tests they can't run fast enough with moderated sessions. They design the test, connect the Figma prototype, send the link, and come back to results in 24–72 hours. They know unmoderated testing misses the nuance of moderated sessions. They also know that running 8 moderated sessions takes 2 weeks of scheduling and 2 days of synthesis. Maze takes 2 hours to set up and 1 hour to analyze. They're using the right tool for the question.

Intention

What are they trying to do? — Get credible, directional usability evidence in front of a design decision before that decision is made, at a speed moderated sessions can't match.

Outcome

What do they produce? — Success rates, path analyses, and heatmap clips packaged into stakeholder-ready findings that change the design.

Goals
  • Get directional usability signal fast enough to influence a design decision before the decision is already made
  • Test with real users at a scale that moderated sessions can't reach on the same timeline
  • Produce results that are credible enough to change a stakeholder's mind
Frustrations
  • Testers who complete tasks incorrectly and skew the success rate in ways that aren't reflective of the real problem
  • Task design that seemed clear in the test setup and turned out to be ambiguous in the results
  • Figma prototype connections that break when the designer iterates on the file between test setup and data collection
  • Results that show a problem but don't explain why — the quantitative data without the qualitative context
Worldview
  • A decision made without user input is a guess with consequences
  • Unmoderated testing is fast research, not cheap research — it answers different questions
  • The question you test is more important than the tool you test with
Scenario

A new onboarding flow is going to a design review next Thursday. The researcher has a Figma prototype. They design a Maze test: 3 tasks, 1 open question, targeting users who match the product's persona. They launch it Monday. By Wednesday they have 42 responses. Task 1: 89% success rate. Task 2: 54% — something is wrong. Task 3: 83%. The path analysis on task 2 shows users going to the wrong screen first. They clip the heatmap and the path visualization for the Thursday review. The flow changes. This is the job.
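
The arithmetic behind the scenario is simple enough to sketch. A minimal Python example, assuming hypothetical per-task response records and an illustrative 70% flag threshold; the record shape is an assumption, not Maze's actual export format:

```python
from collections import Counter

# Hypothetical (task_id, succeeded) records from an unmoderated test.
# A real export would carry richer fields (timing, paths, clicks).
responses = [
    ("task-1", True), ("task-1", True), ("task-1", False),
    ("task-2", True), ("task-2", False), ("task-2", False),
    ("task-3", True), ("task-3", True), ("task-3", False),
]

totals, successes = Counter(), Counter()
for task_id, ok in responses:
    totals[task_id] += 1
    successes[task_id] += ok  # True counts as 1

for task_id in sorted(totals):
    rate = successes[task_id] / totals[task_id]
    flag = "  <-- check the path analysis" if rate < 0.70 else ""
    print(f"{task_id}: {rate:.0%} success ({totals[task_id]} responses){flag}")
```

The point of the sketch: the quantitative pass is mechanical. The researcher's judgment goes into task design and into reading the path analysis behind the one task that falls below the threshold.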

Context

Uses Maze for 2–6 studies per month. Tests Figma prototypes primarily. Uses Maze's panel for participant recruitment or sends links to their own user panel. Has a question library of tasks and follow-up questions they reuse across studies. Analyzes results in Maze's dashboard — success rates, path analysis, heatmaps, time on task. Exports results to Dovetail or a slide deck for stakeholder presentation. Uses Maze alongside moderated sessions — Maze for directional, moderated for depth. Has a template for common test types: navigation test, first-click test, concept test.
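
The template library lends itself to a plain-data sketch. A minimal example, assuming a hypothetical schema; field names like `success_screens` and the `new_study` helper are illustrative, not Maze's API:

```python
# Illustrative test-template structure; every field name here is an
# assumption for the sketch, not Maze's actual schema.
NAVIGATION_TEST = {
    "type": "navigation",
    "tasks": [
        {
            "prompt": "Find where you would change your notification settings.",
            "success_screens": ["settings/notifications"],
        },
    ],
    "follow_ups": [
        "What, if anything, was confusing about this task?",
    ],
}

def new_study(template: dict, prototype_url: str, name: str) -> dict:
    """Clone a reusable template into a study tied to one Figma prototype."""
    return {"name": name, "prototype": prototype_url, **template}

study = new_study(
    NAVIGATION_TEST,
    "https://www.figma.com/proto/...",  # placeholder URL
    "Onboarding nav test, March",
)
```

Treating templates as data is what makes 2–6 studies a month sustainable: the task wording is written once, reviewed once, and reused.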

Impact
  • Automatic flagging of testers who complete tasks suspiciously fast or in ways that suggest they're not engaging removes low-quality responses from the analysis (see the sketch after this list)
  • Live prototype sync that updates the test when the Figma file changes removes the broken-prototype problem without requiring a test rebuild
  • Open response analysis that groups themes across qualitative answers adds the "why" context that quantitative task data alone doesn't provide
  • Test template library with built-in best practices for task writing removes the "did I write this task clearly?" uncertainty that distorts results
Composability Notes

Pairs with `figma-primary-user` for the design-to-test-to-iterate research workflow. Contrast with `hotjar-primary-user` for the structured-usability-test vs. passive-session-observation research approach. Use with `dovetail-primary-user` for the research team that synthesizes Maze results alongside moderated session notes.