“The shift was quiet. They'd been using Cursor for weeks, mostly out of obligation. Then multi-file editing with AI awareness solved a problem they'd been routing around, and suddenly the friction of context-window limits, where the AI loses track of the full codebase on large projects, felt absurd. They couldn't go back.”
When I'm refactoring a payment processing module from callbacks to async/await, I want to generate boilerplate, tests, and repetitive code patterns without writing them manually. I also want to use AI to understand unfamiliar codebases faster than I could by reading the code line by line.
A developer who has made Cursor their primary IDE and restructured their workflow around AI-assisted coding. They don't use AI as autocomplete — they use it as a pair programmer, architect, and refactoring partner. They've learned which prompts work, which context windows matter, and when to trust the AI vs. when to verify manually. They are faster than they were in VS Code, but they've also developed new anxieties about code they didn't fully write.
To generate boilerplate, tests, and repetitive code patterns without writing them manually: reliably, without workarounds, and without becoming the team's single point of failure for Cursor, leveraging AI-powered code completion with codebase context.
A developer who trusts their setup. Generating boilerplate, tests, and repetitive code patterns without writing them manually is reliable enough that they've stopped checking. Larger context windows that span multiple files eliminate the "AI forgot about the other module" problem. They've moved from configuring Cursor to using it.
The developer is refactoring a payment processing module from callbacks to async/await. They select the file, describe the transformation in natural language, and Cursor generates the refactored version. It looks clean — proper error handling, consistent patterns. But when they run the tests, two edge cases fail because the AI didn't understand a business rule embedded in a comment three files away. The developer fixes the edge cases manually and updates the prompt for next time. The whole refactor took 20 minutes instead of 2 hours, but those 20 minutes required more careful review than writing code from scratch would have.
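The transformation in this scenario can be sketched as follows. This is a minimal illustration, not Cursor's output; the payment API names (`chargeCardCb`, `chargeCard`, `processPayment`) are hypothetical stand-ins for the module being refactored:

```typescript
// Hypothetical callback-style payment API (names are illustrative).
type Callback<T> = (err: Error | null, result?: T) => void;

function chargeCardCb(amount: number, cb: Callback<string>): void {
  // A business rule like the one "embedded in a comment three files away"
  // is exactly what an AI refactor can miss without full-codebase context.
  if (amount <= 0) {
    cb(new Error("amount must be positive"));
    return;
  }
  cb(null, `charged:${amount}`);
}

// Refactor step 1: wrap the callback API in a Promise.
function chargeCard(amount: number): Promise<string> {
  return new Promise((resolve, reject) => {
    chargeCardCb(amount, (err, result) => {
      if (err) reject(err);
      else resolve(result as string);
    });
  });
}

// Refactor step 2: call sites move from nested callbacks to async/await.
async function processPayment(amount: number): Promise<string> {
  const receipt = await chargeCard(amount);
  return receipt;
}
```

The mechanical wrapping is what the AI handles in minutes; the careful review the persona describes is about edge cases (like the `amount <= 0` rule) whose error paths must survive the transformation intact.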
Uses Cursor as their daily driver IDE. Has a paid subscription and uses Claude or GPT-4 models depending on the task. Works on codebases ranging from 10K to 500K lines. Generates 30–50% of their code through AI assistance. Has developed personal prompt libraries for common patterns. Reviews AI-generated code more carefully than human-written code. Tracks their velocity improvement and estimates they're 2–3x faster on implementation tasks.
Two things you'd notice: they reference Cursor in conversation without being asked, and they've built workflows on top of it that weren't in the original plan. Cmd+K inline editing with natural language has become part of their muscle memory. They're now focused on using AI to understand unfamiliar codebases faster than reading the code line by line, a sign the basics are solved.
Not a feature gap, a trust failure. Context-window limits that make the AI lose track of the full codebase on large projects hit at the worst possible moment, and Cursor offers no path to resolution. The AI's confident-but-wrong completions slowed them down more than manual coding. Their belief that "AI doesn't replace developers; it replaces the parts of development that weren't the hard part anyway" has been violated one too many times.
Pairs with cursor-primary-user for the standard AI-assisted development perspective. Contrast with vscode-primary-user for the traditional IDE workflow. Use with github-open-source-maintainer for the code review perspective on AI-generated contributions.