Persona Library
posthog · analytics · APP-134

The PostHog Growth Engineer

#posthog #analytics #growth #experimentation #product-analytics
Aha Moment

It happened mid-workflow, while running an A/B test on the onboarding flow: the first time they watched a session replay and saw exactly why users were confused, no guessing needed. That was the moment PostHog stopped being a tool they were evaluating and became one they relied on.

Job Story (JTBD)

When I'm running an A/B test on the onboarding flow, I want to run the experiment without needing a separate experimentation platform, so I can use feature flags to gradually roll out and roll back features safely.

Identity

A growth engineer, product engineer, or technical PM who uses PostHog as their all-in-one growth stack — analytics, feature flags, A/B tests, session replay. They chose PostHog because they didn't want to stitch together Amplitude, LaunchDarkly, and Hotjar. They think in funnels, retention curves, and statistical significance. They are technical enough to self-serve but product-minded enough to care about the "so what" behind the data.

Intention

To reach the point where running A/B tests without a separate experimentation platform happens through PostHog as a matter of routine, not heroic effort. Their deeper aim: use feature flags to gradually roll out and roll back features safely.

Outcome

PostHog becomes invisible infrastructure. Running A/B tests without a separate experimentation platform works without intervention. The old problem, feature flag evaluation adding latency to page loads when not configured carefully, is a memory, not a daily fight. Clearer experiment result summaries that translate statistical significance into plain-language recommendations reduce decision paralysis.

Goals
  • Run A/B tests without needing a separate experimentation platform
  • Use feature flags to gradually roll out and roll back features safely
  • Combine quantitative analytics with session replays to understand both what and why
  • Self-serve on data without waiting for analysts to build dashboards
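The "gradually roll out" goal above rests on deterministic percentage bucketing: each user is hashed into a stable bucket in [0, 1), and the flag is on for users whose bucket falls under the rollout percentage, so a user's experience never flickers between variants. PostHog's actual implementation uses its own hash and salt; this is only an illustrative sketch, and the flag key and user ID are made up.

```python
import hashlib

def rollout_bucket(flag_key: str, distinct_id: str) -> float:
    """Deterministically map (flag, user) to a value in [0, 1)."""
    digest = hashlib.sha1(f"{flag_key}.{distinct_id}".encode()).hexdigest()
    return int(digest, 16) / 16 ** 40  # sha1 hex digest is 40 chars

def flag_enabled(flag_key: str, distinct_id: str, rollout_pct: float) -> bool:
    """Flag is on when the user's bucket falls under the rollout percentage."""
    return rollout_bucket(flag_key, distinct_id) < rollout_pct / 100.0
```

Because the bucket depends only on the flag key and user ID, raising the rollout from 10% to 50% keeps every already-enrolled user enabled and only adds new ones; rolling back does the reverse.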
Frustrations
  • Feature flag evaluation can add latency to page loads if not configured carefully
  • The query language for complex funnels has a learning curve that the docs don't fully address
  • Session replay storage costs scale faster than expected on high-traffic apps
  • Statistical significance calculations sometimes feel like a black box
Worldview
  • If you're not measuring it, you're guessing — and guessing doesn't compound
  • Feature flags aren't just a deployment tool — they're a product development philosophy
  • The fastest growth teams are the ones that can run experiments without filing tickets
Scenario

The growth engineer is running an A/B test on the onboarding flow. Variant B shows a 12% improvement in activation rate, but the sample size is still small and the confidence interval is wide. The PM wants to call it and ship Variant B. The engineer wants to wait for statistical significance. They spend 30 minutes in PostHog trying to figure out if the current results are trustworthy or if they need another week of data. The tool shows a p-value but doesn't clearly communicate what that means in practical terms.
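The engineer-versus-PM standoff in this scenario comes down to a standard two-proportion z-test. The numbers below are hypothetical (400 users per variant, 25% vs 28% activation, i.e. a 12% relative lift), and the helper is a plain-stdlib sketch of the textbook test, not PostHog's own significance engine.

```python
from math import erf, sqrt

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))  # standard normal CDF at |z|
    return 2 * (1 - phi)

# Hypothetical scenario: 100/400 activated in control, 112/400 in Variant B.
p = two_proportion_p_value(100, 400, 112, 400)  # well above 0.05: not significant yet
```

With this sample size the 12% relative lift is not distinguishable from noise; the same lift at ten times the traffic would be, which is the plain-language answer the engineer was digging for.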

Context

Works at a Series A–C startup with 10K–500K monthly active users. Has PostHog self-hosted or on cloud. Manages 15–50 feature flags across the product. Runs 3–8 experiments per quarter. Uses the PostHog API to programmatically create cohorts and query data. Checks PostHog dashboards daily. Has integrated PostHog with their CI/CD pipeline for automated flag management.
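Programmatic cohort creation like the Context describes goes through PostHog's REST API with a personal API key. The endpoint path and payload shape below are assumptions to verify against PostHog's API reference; the project ID, key, and property filter are placeholders.

```python
import json
from urllib.request import Request

POSTHOG_HOST = "https://app.posthog.com"  # or your self-hosted instance URL

def build_cohort_request(project_id: int, personal_api_key: str,
                         name: str, properties: list) -> Request:
    """Assemble a POST request to create a cohort (shape assumed from PostHog's API docs)."""
    url = f"{POSTHOG_HOST}/api/projects/{project_id}/cohorts/"
    payload = {"name": name, "groups": [{"properties": properties}]}
    return Request(
        url,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {personal_api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Usage sketch (not executed): urllib.request.urlopen(build_cohort_request(...))
```

Building the request separately from sending it keeps the payload easy to test in CI, which fits the persona's automated flag-management pipeline.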

Success Signal

The proof is behavioral: running A/B tests without a separate experimentation platform happens without reminders. They've customized PostHog beyond the defaults, especially product analytics with funnels and retention, and their usage is deepening, not plateauing. Feature flags are part of every deployment; nothing ships to 100% on day one.

Churn Trigger

The interface lacks polish compared to enterprise analytics platforms. The latency that feature flag evaluation adds to page loads keeps recurring despite updates and workarounds. Self-hosting maintenance became a burden that distracted from product work. The switching cost was the only thing keeping them, and it's starting to look like an investment in the alternative.

Impact
  • Clearer experiment result summaries that translate statistical significance into plain-language recommendations reduce decision paralysis
  • Feature flag performance monitoring that shows evaluation latency impact prevents hidden performance regressions
  • Better cost visibility for session replay storage helps teams budget before they get surprised
  • Experiment templates for common patterns (onboarding, pricing, activation) accelerate the time-to-first-experiment
Composability Notes

Pairs with posthog-primary-user for the standard product analytics perspective. Contrast with amplitude-primary-user for the analytics-first approach without built-in experimentation. Use with mixpanel-product-analyst for the analyst-side view of the same data.