Persona Library
midjourneycreativeAPP-049

The Midjourney Creative Director

#midjourney #ai-image #creative-direction #design #visual #generative
Aha Moment

Realizing the concept phase had changed: visual directions that once required a stock license, a photographer, or a two-week illustration commission now appear in minutes.

Identity

A creative director, art director, or senior designer who adopted Midjourney after realizing it was changing their concept phase. They use it to generate reference material, explore visual directions, and produce images that would previously have required a stock license, a photographer, or a two-week illustration commission. They have strong prompt craft. They know what they're doing. They also know the tool's failure modes and work around them. They do not use it to replace their judgment — they use it to accelerate the point at which judgment can be applied.

Intention

Explore visual directions quickly and cheaply, using Midjourney to accelerate the point at which creative judgment can be applied.

Outcome

Concept images and client presentation assets, and sometimes final production assets.

Goals
  • Generate visual options fast enough to explore 10 directions in the time it used to take to explore 2
  • Produce images that serve as production assets or convincing client concepts
  • Build and refine a visual style that's consistent across a project without starting from scratch each time
Frustrations
  • Inconsistency — the model can produce something perfect once and never replicate it
  • Faces, hands, and text that require extensive post-processing even on good generations
  • No memory of previous prompts means every session starts without context
  • Needing expert-level prompt craft just to achieve basic creative control
Worldview
  • The prompt is a brief, and the model is a collaborator that interprets the brief imperfectly
  • Visual exploration used to be expensive — it shouldn't be
  • AI changes what's possible for a solo creative, but it doesn't change what good looks like
Scenario

A campaign needs 12 concept images for a client presentation in two days. Previously this would be mood boards from stock and rough sketches. Now they're generating. They've run 40 images across 8 prompt variations. They have 6 strong ones and 4 that need Photoshop work. They're using --sref to maintain visual consistency across the set. The client will see 12. They won't hide how the images were made; the client already knows.

Context

Uses Midjourney via Discord or the web interface. Has a prompt library saved in Notion or a text file — 30–60 prompts that have worked well before. Uses --sref, --style, --ar, and --cref regularly. Runs 50–200 generations per week depending on project phase. Does post-processing in Photoshop or Lightroom. Uses Midjourney primarily for concepts and client presentations; sometimes for final production assets. Has strong opinions about v5 vs. v6 for specific use cases. Is watching the other tools (Flux, Ideogram, Firefly) and knows when to use which.
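A prompt from such a library might look like the following sketch. The subject, style descriptors, and style-reference URL are illustrative placeholders, not from the source; --ar (aspect ratio), --style raw (less opinionated default styling), --sref (style reference), and --v (model version) are real Midjourney parameters.

```text
/imagine prompt: editorial product photograph, soft window light,
muted earth tones, shallow depth of field
--ar 3:2 --style raw --sref <style-reference-url> --v 6
```

In a workflow like the one described above, the --sref URL is what carries visual consistency across the 8 prompt variations, while the descriptive text and --ar change per image.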

Impact
  • Style reference locking that persists across a project session removes the prompt repetition required to maintain visual consistency
  • Inpainting and outpainting that matches the generation quality of the base image reduces the Photoshop post-processing burden on otherwise strong generations
  • Hands, faces, and text generation that reaches a quality threshold for production use expands the percentage of generations that are usable without manual correction
  • Prompt history and variation tracking inside the tool removes the "what was the exact prompt for that one?" archaeology
Composability Notes

Pairs with `figma-primary-user` for the AI-concept to production design workflow. Contrast with `stock-photo-user` for the licensed-image vs. generated-image creative decision framework. Use with `canva-primary-user` to map the creative sophistication spectrum: Canva templates → Midjourney concepts → production design.