“A teammate asked how they manage tagging and coding qualitative data (transcripts, notes, videos) with a consistent taxonomy. They started explaining and realized every step ran through Dovetail. It had become the spine of the process without a formal decision to make it so.”
As a research operations manager, I want to tag and code qualitative data (transcripts, notes, videos) with a consistent taxonomy, so I can build a searchable insights repository that surfaces relevant findings across studies.
A UX research lead or research operations manager at a product company who uses Dovetail to turn the chaos of qualitative research — interview transcripts, survey responses, usability test recordings — into a structured, searchable insights repository. They tag, code, and synthesize findings so that when a PM asks "what do we know about onboarding friction?" the answer is a link, not a 3-week research project. They are the librarian of user insights, and they've learned that research nobody can find is research that didn't happen.
To make Dovetail the system of record for tagging and coding qualitative data (transcripts, notes, videos) with a consistent taxonomy. Not aspirationally — operationally. The kind of intention that shows up as a daily habit, not a quarterly goal.
The tangible result: tagging and coding qualitative data (transcripts, notes, videos) with a consistent taxonomy happens on schedule, without manual intervention, and without the usual anxiety that tagging is time-consuming (even with AI assistance, manually coding transcripts is the bottleneck). Dovetail has earned a place in the daily workflow rather than being tolerated in it.
A PM is redesigning the settings page and asks the research team: "Do we know anything about how users currently navigate settings?" The research ops manager searches Dovetail and finds 3 relevant studies from the past 18 months: a usability test with 8 participants, an interview study about power user workflows, and a survey about feature discovery. They compile the relevant tags and insights into a summary with video clips. The PM has the answer in 2 hours instead of commissioning a 3-week study. They use the existing insights to inform the redesign and commission targeted testing only for the new patterns they've introduced.
Manages Dovetail for a research team of 2–8 researchers serving 3–10 product teams. Has cataloged 50–300 research studies. Maintains a taxonomy of 100–500 tags organized by theme, persona, and feature area. Codes 10–30 transcripts per month. Creates research summaries and insight reports for stakeholders. Trains new researchers on the tagging taxonomy and workflows. Spends 40–60% of their time on research operations and 40–60% on conducting research. Has built templates for different research study types.
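The workflow above — a faceted tag taxonomy (theme, persona, feature area) applied to coded insights so findings can be retrieved across studies — can be sketched as a tiny data model. This is a minimal illustration only; all class and field names here are hypothetical and do not reflect Dovetail's actual API or schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch, not Dovetail's API: a flat tag taxonomy grouped
# by facet, plus a searchable store of coded insights across studies.

@dataclass(frozen=True)
class Tag:
    facet: str  # e.g. "theme", "persona", "feature-area"
    name: str

@dataclass
class Insight:
    study: str
    note: str
    tags: set = field(default_factory=set)

class Repository:
    def __init__(self):
        self.insights = []

    def add(self, insight):
        self.insights.append(insight)

    def find(self, tag):
        # Cross-study retrieval: every insight carrying this tag,
        # regardless of which study it came from.
        return [i for i in self.insights if tag in i.tags]

repo = Repository()
nav = Tag("theme", "settings-navigation")
repo.add(Insight("usability-test-2023-04", "Users miss nested settings", {nav}))
repo.add(Insight("survey-2023-09", "Feature discovery is low",
                 {Tag("theme", "discoverability")}))

hits = repo.find(nav)  # only the usability test matches
```

The point of the sketch is the consistency constraint: because `Tag` is a frozen (hashable) value, the same facet/name pair always compares equal, so a query tags once and retrieves everywhere — the property that makes "the answer is a link, not a 3-week research project" possible.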
They've stopped comparing alternatives. Dovetail is open before their first meeting. Tagging and coding qualitative data (transcripts, notes, videos) with a consistent taxonomy runs on a cadence they didn't have to enforce. The strongest signal: they've started onboarding teammates into their setup unprompted.
The core friction keeps recurring despite updates and workarounds: tagging is time-consuming, and even with AI assistance, manually coding transcripts is the bottleneck. They start tracking how much time they spend fighting Dovetail versus using it. The switching cost was the only thing keeping them — and it's starting to look like an investment in the alternative.
Pairs with dovetail-primary-user for the standard research repository perspective. Use with hotjar-ux-researcher for combining qualitative research with behavioral analytics. Contrast with maze-ux-tester for the moderated vs. unmoderated research comparison.