“It happened mid-workflow. They'd shipped a new onboarding flow behind a feature flag to 10% of users, and the first time they watched a session replay they saw exactly why users were confused. No guessing needed. That was the moment it stopped being a tool they were evaluating and became one they relied on.”
When I've shipped a new onboarding flow behind a feature flag to 10% of users, I want to understand how users are actually using the product, not how the team assumes they are, so I can roll the feature out with confidence based on usage data.
A product engineer or full-stack developer at a startup of 5–50 people who chose PostHog — or advocated for it — because they wanted product analytics that behave like engineering tools. They self-host or use PostHog Cloud. They instrument events themselves. They use feature flags as part of their development workflow. They are not a data analyst but they want to be able to answer product questions without filing a request to one.
Their goal is to reach the point where understanding how users actually use the product, not how the team assumes they do, happens through PostHog as a matter of routine, not heroic effort. Their deeper aim: shipping features behind flags and rolling them out with confidence based on usage data.
PostHog becomes invisible infrastructure. Understanding how users actually use the product works without intervention. The old problem, event schemas that drift because different engineers instrument things differently, is a memory, not a daily fight. What they still want: event schema governance tools that surface instrumentation gaps before they become missing data.
They've shipped a new onboarding flow behind a feature flag to 10% of users. They're two weeks in. They want to know if users who saw the new flow completed onboarding at a higher rate than those who saw the old flow. They're in PostHog building a funnel. They've realized that one of the events they thought they were tracking doesn't exist in the data — it was never instrumented. They're opening their IDE and PostHog side by side.
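A minimal sketch of what closing that gap might look like with posthog-js in TypeScript. The flag key `new-onboarding-flow` and both event names are hypothetical; the point is that each funnel step carries the flag variant as a property, so the funnel can compare completion rates across the two flows:

```typescript
import posthog from "posthog-js";

// Hypothetical flag key and event names, for illustration only.
const ONBOARDING_FLAG = "new-onboarding-flow";

posthog.init("<project-api-key>", { api_host: "https://us.i.posthog.com" });

// Flags load asynchronously; posthog-js returns undefined until they
// arrive, which this sketch treats as the old flow.
function onboardingVariant(): "new" | "old" {
  return posthog.isFeatureEnabled(ONBOARDING_FLAG) ? "new" : "old";
}

// Called when the flow is shown: the step that "was never instrumented".
export function trackOnboardingStarted(): void {
  posthog.capture("onboarding_started", { onboarding_variant: onboardingVariant() });
}

// Called when the user finishes, so completion rate per variant is one
// funnel breakdown away.
export function trackOnboardingCompleted(): void {
  posthog.capture("onboarding_completed", { onboarding_variant: onboardingVariant() });
}
```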
Uses PostHog Cloud or self-hosted. Instruments events via PostHog's JavaScript or server-side SDK. Uses feature flags for 3–8 active experiments or gradual rollouts. Builds dashboards for their own use and occasionally for the PM to look at. Has strong opinions about event naming conventions; those opinions are not followed consistently across the team. Reviews session recordings occasionally — more often when something unexpected shows up in a funnel. Has PostHog Slack notifications for key metric thresholds.
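Those naming-convention opinions don't have to stay opinions. A sketch of a thin typed wrapper, with a hypothetical three-event schema, that turns the convention into something the compiler enforces instead of code review:

```typescript
import posthog from "posthog-js";

// Hypothetical event schema: the type itself is the naming convention.
type AppEvents = {
  onboarding_started: { onboarding_variant: "new" | "old" };
  onboarding_completed: { onboarding_variant: "new" | "old" };
  project_created: { template: string };
};

// The only capture call site in the codebase. An event missing from the
// schema fails at compile time, not two weeks into a rollout.
function captureEvent<N extends keyof AppEvents & string>(
  name: N,
  props: AppEvents[N]
): void {
  posthog.capture(name, props);
}

captureEvent("project_created", { template: "blank" });
// captureEvent("projectCreated", { template: "blank" }); // does not compile
```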
The proof is behavioral: understanding how users actually use the product happens without reminders. They've customized PostHog beyond the defaults, especially session replay with the event timeline, and their usage is deepening, not plateauing. Feature flags are part of every deployment; nothing ships to 100% on day one.
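"Nothing ships to 100% on day one" implies every new code path sits behind a gate whose rollout percentage lives in PostHog, not in the code. A server-side sketch with posthog-node, reusing the hypothetical `new-onboarding-flow` flag:

```typescript
import { PostHog } from "posthog-node";

const client = new PostHog("<project-api-key>", {
  host: "https://us.i.posthog.com",
});

async function handleSignup(distinctId: string): Promise<void> {
  // The rollout percentage (10% today, 50% next week, then 100%) is
  // dialed up in the PostHog UI; this code never changes during rollout.
  const useNewFlow = await client.isFeatureEnabled("new-onboarding-flow", distinctId);

  if (useNewFlow) {
    // new onboarding path
  } else {
    // old onboarding path (also the fallback if flag evaluation fails)
  }
}
```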
It's not one thing; it's the accumulation: an interface that lacks polish compared to enterprise analytics platforms, friction they've reported, worked around, and accepted. Then a competitor demo shows the same workflow without the friction, and the sunk-cost argument collapses. Their worldview, that analytics should be a developer discipline and not a separate team's responsibility, makes them unwilling to compromise once a better option is visible.
Pairs with `mixpanel-primary-user` to map the product-engineer vs. PM analytics tool philosophy. Contrast with `data-engineer` for teams where product analytics feeds into a larger data infrastructure. Use with `linear-primary-user` for the full engineering workflow: issue → build → flag → measure.