Persona Library
flyiotechnicalAPP-030

The Fly.io Container Developer

#flyio #deployment #containers #edge #developer #infrastructure
Aha Moment

It happened mid-workflow: they were deploying a Phoenix application — Elixir, with WebSockets and a persistent database connection — when Fly.io handled something they'd been doing manually, and it just worked. That was the moment it stopped being a tool they were evaluating and became one they relied on.

Job Story (JTBD)

When I'm deploying a Phoenix application — Elixir, with WebSockets and a persistent database connection — I want to deploy containerized applications to multiple regions with a single command, so I can run stateful workloads — databases, queues, persistent services — without a managed service tier.

Identity

A backend or full-stack developer who needs to run server-side applications — not just static sites and serverless functions — and wants them deployed globally without managing Kubernetes or paying for managed Kubernetes overhead. They found Fly.io: a platform that takes a Dockerfile and runs it near users. They `fly deploy`. It works. They have opinions about Fly.io that include real affection and specific frustrations, which is the relationship one has with a platform they actually depend on.

Intention

To deploy containerized applications to multiple regions with a single command — reliably, without workarounds, and without becoming the team's single point of failure for Fly.io.

Outcome

A backend or full-stack developer who trusts their setup. Deploying containerized applications to multiple regions with a single command is reliable enough that they've stopped checking. Incident transparency and postmortem quality match their reliability expectations. They've moved from configuring Fly.io to using it.

Goals
  • Deploy containerized applications to multiple regions with a single command
  • Run stateful workloads — databases, queues, persistent services — without a managed service tier
  • Keep infrastructure costs proportional to actual usage rather than provisioned capacity
Frustrations
  • Fly.io incident history that has been significant enough to affect their uptime and their confidence in the platform for production workloads
  • Documentation that's technically accurate but assumes more infrastructure knowledge than the developer-experience positioning implies
  • Machine management that requires CLI fluency that casual deployments shouldn't need
  • Cold starts on Fly Machines that have been scaled to zero — the latency is felt by users
Worldview
  • Deployment should be a command, not a project
  • Geographic distribution is a user experience decision, not an infrastructure luxury
  • The right level of infrastructure complexity is "enough to give you control, not so much that control is all you do"
Scenario

They're deploying a Phoenix application — Elixir, with WebSockets and a persistent database connection requirement. Vercel doesn't work for this. Heroku would work but the pricing doesn't. They write a Dockerfile. They run `fly launch`. They answer 4 prompts. The app is deployed in 3 regions in 8 minutes. The WebSocket connections route to the nearest region. The database is a Fly Postgres instance in the primary region with a replica in the secondary. They didn't open a cloud provider console once.
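The scenario above hinges on a small amount of declarative configuration. A minimal `fly.toml` for this kind of deployment might look like the following sketch — the app name and region are illustrative placeholders, and the key names follow Fly's `fly.toml` reference:

```toml
# Illustrative fly.toml for a Phoenix app; "my-phoenix-app" and "iad"
# are hypothetical placeholders, not values from this persona's setup.
app = "my-phoenix-app"
primary_region = "iad"

[http_service]
  internal_port = 8080         # port the Phoenix endpoint listens on inside the container
  force_https = true
  auto_stop_machines = false   # keep machines running so WebSocket connections persist
  auto_start_machines = true
  min_machines_running = 1     # avoid cold starts on the critical path
```

Keeping `min_machines_running` above zero is the standard trade-off here: it costs a small amount of always-on capacity in exchange for eliminating the user-facing cold-start latency this persona complains about.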

Context

Uses Fly.io for 2–6 applications. Deploys backend services, APIs, and full-stack applications that require persistent server processes. Uses Fly Postgres for at least one project. Uses `flyctl` as their primary interface — occasionally the Fly dashboard for status checks. Has configured autoscaling with minimum machines to avoid cold starts on critical services. Has set up private networking between Fly apps. Has been affected by at least one Fly.io incident and has a plan for how they'd respond to another one. Has recommended Fly to developers deploying workloads that don't fit Vercel's model.
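The private networking mentioned above works over Fly's internal network, where each app in an organization gets an `<app>.internal` hostname resolvable only from other apps in that organization. A hedged sketch of wiring one app to another — the app name and port are hypothetical:

```toml
# Illustrative fragment of fly.toml: reaching a sibling Fly app over the
# private network. "my-api" is a hypothetical app name; .internal
# hostnames do not resolve from the public internet.
[env]
  INTERNAL_API_URL = "http://my-api.internal:8080"
```

Credentials such as a `DATABASE_URL` would typically be set as secrets rather than in `[env]`, since `fly.toml` is usually committed to version control.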

Success Signal

Two things you'd notice: they reference Fly.io in conversation without being asked, and they've built workflows on top of it that weren't in the original plan. Multi-region deployment with a single command is consistent and expanding. They're now focused on running stateful workloads — databases, queues, persistent services — without a managed service tier, a sign the basics are solved.

Churn Trigger

The trigger is specific: a Fly.io incident significant enough to affect their uptime and their confidence in the platform for production workloads, combined with a high-stakes deadline. Fly.io fails them at exactly the wrong moment. That evening, they're reading comparison posts. What makes it irreversible: they fundamentally believe deployment should be a command, not a project, and Fly.io just proved it doesn't share that belief.

Impact
  • Incident transparency and postmortem quality that matches the reliability expectations of production workloads builds the trust that Fly's developer experience earns
  • Machine state visibility in the dashboard that matches `flyctl status` output removes the CLI dependency for routine status checks
  • Cold start mitigation for zero-scaled machines that's configurable per service enables cost efficiency for low-traffic services without user-facing latency
  • Documentation pathways organized by use case (stateless API, stateful service, database-backed app) rather than by infrastructure concept reduce the reading time to first successful deploy
Composability Notes

Pairs with `supabase-primary-user` for developers who use Supabase for Postgres and Fly for application hosting. Contrast with `vercel-primary-user` for the serverless/static vs. containerized/stateful deployment philosophy. Use with `datadog-primary-user` for production observability on Fly-deployed applications.