Persona Library
posthog · technical · APP-062

The PostHog Product Engineer

#posthog#analytics#product-engineer#events#feature-flags#open-source
Aha Moment

It happened mid-workflow: they had shipped a new onboarding flow behind a feature flag to 10% of users. The first time they watched a session replay and saw exactly why users were confused, with no guessing needed, was the moment it stopped being a tool they were evaluating and became one they relied on.

Job Story (JTBD)

When I've shipped a new onboarding flow behind a feature flag to 10% of users, I want to understand how users are actually using the product, not how the team assumes they are, so I can roll the flag out with confidence based on usage data.

Identity

A product engineer or full-stack developer at a startup of 5–50 people who chose PostHog — or advocated for it — because they wanted product analytics that behave like engineering tools. They self-host or use PostHog Cloud. They instrument events themselves. They use feature flags as part of their development workflow. They are not a data analyst but they want to be able to answer product questions without filing a request to one.

Intention

To reach the point where understanding how users actually use the product, rather than how the team assumes they do, happens through PostHog as a matter of routine, not heroic effort. Their deeper aim: to ship features behind flags and roll them out with confidence based on usage data.

Outcome

PostHog becomes invisible infrastructure. Understanding how users actually use the product, rather than how the team assumes they do, works without intervention. The old problem, event schemas that drift because different engineers instrument things differently, is a memory, not a daily fight: schema governance surfaces instrumentation gaps before they become data quality problems.

Goals
  • Understand how users are actually using the product, not how the team assumes they are
  • Ship features behind flags and roll them out with confidence based on usage data
  • Own their analytics data without it being locked in a vendor's warehouse
Frustrations
  • Event schemas that drift because different engineers instrument things differently
  • Dashboards that answer the question they had three months ago but not the one they have now
  • Feature flags that are easy to create and hard to clean up
  • The gap between what the instrumentation captures and what the PM wants to know
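The schema-drift frustration above is usually tamed with a convention check. A minimal sketch, assuming a hypothetical team convention of lowercase snake_case `object_action` event names (this is a common pattern, not a PostHog requirement):

```python
import re

# Assumed convention: lowercase snake_case with at least one underscore,
# e.g. "onboarding_completed". Adjust the pattern to your team's rules.
EVENT_NAME = re.compile(r"^[a-z][a-z0-9]*(?:_[a-z0-9]+)+$")

def naming_violations(event_names):
    """Return the event names that break the snake_case convention."""
    return [name for name in event_names if not EVENT_NAME.match(name)]

# "SignUpClicked" (PascalCase) and "flag shown" (space) both fail the check.
print(naming_violations(["onboarding_completed", "SignUpClicked", "flag shown"]))
```

Run against the event names pulled from your instrumentation (or PostHog's event list), a check like this turns naming drift from a code-review argument into a CI failure.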
Worldview
  • Analytics should be a developer discipline, not a separate team's responsibility
  • Data you don't own is data you can't fully trust
  • A feature flag is a deployment tool as much as an experiment tool
Scenario

They've shipped a new onboarding flow behind a feature flag to 10% of users. They're two weeks in. They want to know whether users who saw the new flow completed onboarding at a higher rate than those who saw the old one. They're in PostHog building a funnel when they realize that one of the events they thought they were tracking doesn't exist in the data: it was never instrumented. They open their IDE and PostHog side by side.
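The comparison they're after can be sketched in a few lines. This is a toy model of the funnel, not PostHog's implementation; the event names, variant labels, and data are illustrative:

```python
from collections import defaultdict

# Toy event rows: (user_id, event, flag_variant). Illustrative data only.
events = [
    ("u1", "onboarding_started", "new"), ("u1", "onboarding_completed", "new"),
    ("u2", "onboarding_started", "new"),
    ("u3", "onboarding_started", "old"), ("u3", "onboarding_completed", "old"),
    ("u4", "onboarding_started", "old"), ("u5", "onboarding_started", "old"),
]

def conversion_by_variant(rows):
    """Share of users per variant who completed onboarding after starting it."""
    started, completed = defaultdict(set), defaultdict(set)
    for user, event, variant in rows:
        if event == "onboarding_started":
            started[variant].add(user)
        elif event == "onboarding_completed":
            completed[variant].add(user)
    return {v: len(completed[v] & started[v]) / len(started[v]) for v in started}

print(conversion_by_variant(events))  # new: 1/2 completed, old: 1/3 completed
```

The catch in the scenario is exactly what this sketch makes visible: if `onboarding_completed` was never instrumented, the completed set is empty and every variant reads as 0% conversion, which looks like a product problem but is an instrumentation one.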

Context

Uses PostHog Cloud or self-hosted. Instruments events via PostHog's JavaScript or server-side SDK. Uses feature flags for 3–8 active experiments or gradual rollouts. Builds dashboards for their own use and occasionally for the PM to look at. Has strong opinions about event naming conventions; those opinions are not followed consistently across the team. Reviews session recordings occasionally — more often when something unexpected shows up in a funnel. Has PostHog Slack notifications for key metric thresholds.

Success Signal

The proof is behavioral: understanding how users actually use the product happens without reminders. They've customized PostHog beyond the defaults, especially session replay with the event timeline, and their usage is deepening, not plateauing. Feature flags are part of every deployment; nothing ships to 100% on day one.

Churn Trigger

It's not one thing; it's the accumulation. An interface that lacks polish compared to enterprise analytics platforms, frictions they've reported, worked around, and accepted. Then a competitor demo shows the same workflow without the friction, and the sunk-cost argument collapses. Their worldview, that analytics should be a developer discipline rather than a separate team's responsibility, makes them unwilling to compromise once a better option is visible.

Impact
  • Event schema governance tools that surface instrumentation gaps before they become data quality problems remove the "we thought we were tracking that" discovery
  • Feature flag lifecycle management that surfaces stale flags and prompts cleanup removes the technical debt that accumulates in flag-heavy codebases
  • Experiment results that account for novelty effect and reach statistical significance before surfacing conclusions prevent premature decisions based on early data
  • Session recordings linked directly from funnel drop-off points accelerate the time from "something's wrong here" to "I can see what's happening"
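The flag-cleanup point above can be sketched as a simple sweep: compare the flag keys still referenced in source against the flags still active. `isFeatureEnabled` is the posthog-js call; the call pattern, helper name, and flag keys here are assumptions for illustration:

```python
import re

# Hypothetical sweep: flag keys the code still checks but that are no
# longer in the active flag list are candidates for cleanup.
FLAG_CALL = re.compile(r"isFeatureEnabled\(['\"]([a-z0-9-]+)['\"]\)")

def stale_flag_refs(source, active_flags):
    """Return flag keys referenced in source but absent from active_flags."""
    return set(FLAG_CALL.findall(source)) - set(active_flags)

source = """
if (posthog.isFeatureEnabled('new-onboarding-flow')) { showNew(); }
if (posthog.isFeatureEnabled('legacy-banner')) { showBanner(); }
"""
print(stale_flag_refs(source, {"new-onboarding-flow"}))  # legacy-banner is stale
```

A regex scan like this is deliberately crude; it only catches one call shape, but even that is enough to turn "hard to clean up" into a weekly report of candidates.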
Composability Notes

Pairs with `mixpanel-primary-user` to map the product-engineer vs. PM analytics tool philosophy. Contrast with `data-engineer` for teams where product analytics feeds into a larger data infrastructure. Use with `linear-primary-user` for the full engineering workflow: issue → build → flag → measure.