Persona Library
flyio · technical · APP-030

The Fly.io Container Developer

#flyio #deployment #containers #edge #developer #infrastructure
Aha Moment

“What was the moment this product clicked?” — Running `fly launch` against an existing Dockerfile and watching the app come up in multiple regions a few minutes later, without opening a cloud provider console once.

Identity

A backend or full-stack developer who needs to run server-side applications — not just static sites and serverless functions — and wants them deployed globally without running Kubernetes themselves or paying for a managed Kubernetes tier. They found Fly.io, a platform that takes a Dockerfile and runs it near users. They `fly deploy`. It works. They have opinions about Fly.io that include real affection and specific frustrations, which is the relationship one has with a platform they actually depend on.

Intention

What are they trying to do? — Deploy containerized, stateful backend applications globally, close to users, without operating Kubernetes or paying for a managed Kubernetes tier.

Outcome

What do they produce? — Production backend services, APIs, and full-stack applications running as Fly machines in multiple regions, typically backed by Fly Postgres.

Goals
  • Deploy containerized applications to multiple regions with a single command
  • Run stateful workloads — databases, queues, persistent services — without a managed service tier
  • Keep infrastructure costs proportional to actual usage rather than provisioned capacity
Frustrations
  • Fly.io incident history that has been significant enough to affect their uptime and their confidence in the platform for production workloads
  • Documentation that's technically accurate but assumes more infrastructure knowledge than the developer-experience positioning implies
  • Machine management that requires CLI fluency that casual deployments shouldn't need
  • Cold starts on Fly machines that have been scaled to zero — the latency is felt by users
Worldview
  • Deployment should be a command, not a project
  • Geographic distribution is a user experience decision, not an infrastructure luxury
  • The right level of infrastructure complexity is "enough to give you control, not so much that control is all you do"
Scenario

They're deploying a Phoenix application — Elixir, with WebSockets and a persistent database connection requirement. Vercel doesn't work for this. Heroku would work but the pricing doesn't. They write a Dockerfile. They run `fly launch`. They answer 4 prompts. The app is deployed in 3 regions in 8 minutes. The WebSocket connections route to the nearest region. The database is a Fly Postgres instance in the primary region with a replica in the secondary. They didn't open a cloud provider console once.
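The sequence below is a minimal sketch of that workflow rather than a transcript of it: app names, regions, and machine counts are illustrative, and `flyctl` prompts and flags vary somewhat by version.

```sh
# Scaffold from the existing Dockerfile: fly launch detects it, writes a
# fly.toml, and prompts for an app name, a primary region, and (optionally)
# a Fly Postgres database.
fly launch

# Build the image and start machines in the primary region.
fly deploy

# Add machines in two more regions (region codes are illustrative) so
# WebSocket connections terminate near users.
fly scale count 2 --region ams
fly scale count 2 --region sin

# Confirm where machines are running and what state they're in.
fly status
```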

Context

Uses Fly.io for 2–6 applications. Deploys backend services, APIs, and full-stack applications that require persistent server processes. Uses Fly Postgres for at least one project. Uses `flyctl` as their primary interface — occasionally the Fly dashboard for status checks. Has configured autoscaling with minimum machines to avoid cold starts on critical services. Has set up private networking between Fly apps. Has been affected by at least one Fly.io incident and has a plan for how they'd respond to another one. Has recommended Fly to developers deploying workloads that don't fit Vercel's model.
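The fragment below is a hedged sketch of the kind of `fly.toml` service settings that setup implies; the app name and port are illustrative, and key names should be checked against the Fly.io docs for the `flyctl` version in use.

```toml
app = "example-api"            # illustrative app name
primary_region = "iad"

[http_service]
  internal_port = 8080
  force_https = true
  auto_stop_machines = true    # let idle machines stop so costs track usage
  auto_start_machines = true   # start them again when requests arrive
  min_machines_running = 1     # keep one machine warm so critical services avoid cold starts
```

Private networking between apps needs no extra configuration here: apps in the same organization can reach each other over Fly's internal network at `<app-name>.internal`.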

Impact
  • Incident transparency and postmortem quality that match the reliability expectations of production workloads build the trust that Fly's developer experience earns
  • Machine state visibility in the dashboard that matches `flyctl status` output removes the CLI dependency for routine status checks
  • Cold start mitigation for zero-scaled machines that's configurable per service enables cost efficiency for low-traffic services without user-facing latency
  • Documentation pathways organized by use case (stateless API, stateful service, database-backed app) rather than by infrastructure concept reduce the reading time to first successful deploy
Composability Notes

Pairs with `supabase-primary-user` for developers who use Supabase for Postgres and Fly for application hosting. Contrast with `vercel-primary-user` for the serverless/static vs. containerized/stateful deployment philosophy. Use with `datadog-primary-user` for production observability on Fly-deployed applications.