We began using Cursor about a year ago, and it has changed how we build software in ways we didn't expect. Not because it writes perfect code, but because it shifts where you spend your time and attention: more on decisions that matter, less on boilerplate and mechanical work. This post covers in detail how we use it day to day, what the setup looks like, and what we would do differently if we were starting from scratch.
Why Agent Mode Changes the Game
Most AI coding tools are autocomplete. They watch what you type and suggest the next line or block. That's helpful, but the ceiling is low because you're still doing all the driving.
Cursor's agent mode is different in kind, not just in degree. The agent can search your whole codebase using both keyword and semantic search. It can also read and edit multiple files at once, run terminal commands, check types, run linters, and keep going until it meets a condition you set. The unit of work changes from "write this line" to "add this feature."
That change has real effects on how you organize your work, which is what most of this post is about.
Step One: Write a Specification Before You Write Code
The single change that made the biggest difference had nothing to do with how Cursor was configured. It was adding a deliberate specification step before any code generation starts.
When a non-trivial feature comes up, we start with a focused conversation, using a reasoning model to sharpen the idea. The goal isn't code. The goal is a specification that answers the basic questions: what are we building, what are the edge cases, how does it fit into the current architecture, and what does the data model look like? We save this as spec.md in the repo.
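The structure varies by feature, but a trimmed-down spec.md tends to look something like this (the invoice example here is illustrative, not a real spec):

# Invoice creation spec

## What we're building
A form for creating invoices from the client detail page.

## Data model
Invoice: client, amount, due date, line items (description, quantity, unit price), status.

## Edge cases
- Client with no billing address
- Zero or negative line item amounts
- Due date in the past

## How it fits the current architecture
New feature folder under `src/features/invoices/`, persisted through the existing `InvoiceService`.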
After that, we feed the spec to a planning prompt that breaks the work into small, sequential steps. Each step should be small enough to finish in one focused agent conversation, and each should build on the previous one instead of leaving code that isn't connected to anything. The output goes into prompt_plan.md and looks like this:
## Step 1
Create the `CreateInvoiceForm` component in `src/features/invoices/` following
the pattern in `src/features/users/CreateUserForm.tsx`. Include fields for
client, amount, due date, and line items. No validation yet.
## Step 2
Add validation using `FormValidator` from `src/utils/FormValidator.ts`.
Email fields use `isValidEmail`, required fields use `isNotEmpty`,
amount uses `isWholeNumber`. Follow the pattern in `src/features/auth/LoginForm.tsx`.
## Step 3
Wire the form to `InvoiceService.create()` in `src/services/InvoiceService.ts`.
Handle server-side errors using `methods.setError()` on the relevant fields.
Show a success toast on completion.
We also turn the plan into a checklist, todo.md, and check off steps as we finish them. The checklist tells you exactly where you left off when you have to stop and pick the work up again the next day, without re-reading anything.
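For the plan above, todo.md is nothing more than the steps as checkboxes:

- [ ] Step 1: `CreateInvoiceForm` component, no validation
- [ ] Step 2: Validation via `FormValidator`
- [ ] Step 3: Wire to `InvoiceService.create()`, error handling, success toast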
For most features, this whole planning pass takes about fifteen minutes. It consistently saves more than that in rework avoided.
Step Two: Letting the Agent Know About Your Project
The agent does a much better job when it knows the rules of your project from the start of every conversation. In Cursor, you do this with the .cursor/rules/ directory: markdown files the agent reads before each session.
Don't think of these files as instructions for people; think of them as instructions that the agent will always follow. Ours usually include three things: the commands we need to build, check, and test the project; the coding standards we follow; and links to the codebase's best examples.
# Commands
- `yarn dev`: Start the development server
- `yarn build`: Build the project
- `yarn lint`: Run ESLint and Stylelint with autofix
- `yarn lint:base`: Run Prettier and ESLint only
- `yarn lint:styles`: Run Stylelint on SCSS files
# Code conventions
- TypeScript everywhere, no `any` types
- Components go in `src/components/`, each in its own folder with an index file
- API calls go through the Axios instance defined in `src/lib/api.ts`
- Use named exports for components, default exports for pages
- See `src/components/Button/Button.tsx` as the canonical component example
# Workflow
- Run `yarn lint` after every series of changes
- New features go in `src/features/` following the existing structure
- API routes go in `app/api/` following existing patterns
These files are committed to git, so every developer on the team automatically gets the same agent context. The agent doesn't need to be reminded of your conventions in every prompt, and it stops making the same structural mistakes over and over. We add new rules sparingly, usually only after we've seen the agent make the same mistake twice on separate occasions.
The way the project's files are organized matters here too. In a well-organized codebase, the agent can quickly find the right files and put new code in the right place. In a disorganized one, it makes placement decisions you will have to fix. This isn't a new idea, but the agent makes the cost of bad organization more obvious and immediate.
Step Three: Habits That Matter for Execution
Once you have the plan and the project set up, execution comes down to a set of habits that took us a while to develop consistently.
Keep conversations short and to the point
Start a new conversation for each step in your plan. Long conversations degrade because the context window grows with every response. After many turns the agent starts to lose focus: it edits files it shouldn't, or retries approaches that already failed. Starting fresh for each bounded task keeps the context clean and the output reliable.
Write specific prompts
This is the habit with the highest payoff. A vague prompt produces output that takes several rounds of fixes. A specific prompt produces working output on the first attempt. The difference in practice:
Weak prompt:
Add a date picker to the invoice form.
Strong prompt:
In `src/features/invoices/CreateInvoiceForm.tsx`, add a due date field using
MUI's DatePicker. Wire it through FormInput's `renderInput` prop following the
pattern in `src/features/projects/CreateProjectForm.tsx`. Validate that the
date is not in the past using a custom validator in `src/utils/FormValidator.ts`
called `isFutureDate`.
The second prompt is longer, but it doesn't need follow-up. You develop a feel for the right level of detail over time; a good benchmark is how much you would need to explain to a developer who knows the codebase but hasn't seen this feature before.
Tag files only when you know they are useful
The agent can find relevant files by searching the codebase, and it does this reasonably well. But if you already know which files a task touches, tagging them with @ removes ambiguity and saves time. Tagging files that aren't needed has the opposite effect, so include only the ones that matter.
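Concretely, here is a compressed version of the strong prompt from earlier with the known files tagged (same paths as in that example):

In @src/features/invoices/CreateInvoiceForm.tsx, add a due date field using MUI's
DatePicker, following the pattern in @src/features/projects/CreateProjectForm.tsx.
Add an `isFutureDate` validator to @src/utils/FormValidator.ts and use it to reject
past dates.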
Commit after every working step
This one is easy to skip, and skipping it always hurts. An agent can change a lot of files quickly, and without frequent commits it becomes hard or impossible to get back to a known good state. After each step in the plan is done, we commit with a message that says what that step did. If you ask the agent, it will write these commit messages from the diff, and it usually does a better job than we would in a hurry.
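The messages don't need to be elaborate; one commit per plan step, named after the step, is enough. For the plan above that might be `git commit -m "Step 2: add FormValidator validation to CreateInvoiceForm"`.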
Step Four: Review Without Shortcuts
AI-generated code has a property that makes it harder to review than code written by people: it is syntactically correct, consistently formatted, and stylistically uniform. The mistakes are logical and semantic, and a quick skim misses them.
Strict TypeScript is the best defense we've found. We configure it carefully, and a rule in .cursor/rules/ tells the agent to run yarn lint after making changes. The agent sees the linter output and fixes its own style and type errors before presenting the result. Without this, those mistakes accumulate quietly and surface at inconvenient times.
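We won't reproduce our full tsconfig here, but a strict baseline along these lines catches most of what the agent gets wrong (the exact flags you need will depend on your project):

```jsonc
{
  "compilerOptions": {
    // "strict" turns on noImplicitAny, strictNullChecks, and friends
    "strict": true,
    // Indexed access (arrays, records) is typed as possibly undefined,
    // which catches a lot of optimistic agent-written lookups
    "noUncheckedIndexedAccess": true,
    // Flag dead code the agent tends to leave behind
    "noUnusedLocals": true,
    "noUnusedParameters": true,
    "noFallthroughCasesInSwitch": true
  }
}
```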
Beyond TypeScript, we keep an eye on the agent when the task isn't simple. Cursor shows a live diff as changes are made. If the agent heads in a clearly wrong direction, stopping it immediately and redirecting is faster than waiting and rolling back. Once the agent is done, we use the built-in review pass to go over the proposed changes before accepting them.
For bigger features, we also ask the agent to write a short summary of what it changed and why. That summary becomes the start of the PR description and helps the reviewer understand the intent without reading every line in detail.
Automating Repetitive Git Work
Cursor has also saved us a lot of time by automating the mechanical parts of git workflows. A markdown file in .cursor/commands/ defines the /pr command:
Create a pull request for the current changes.
1. Run `git diff` to review staged and unstaged changes
2. Write a clear, specific commit message based on what changed
3. Commit and push to the current branch
4. Use `gh pr create` to open a PR with a descriptive title and body
5. Return the PR URL
The agent reads the actual diff and writes a commit message and PR description that accurately describe the changes. The command file is committed with the project, so every developer on the team can use it. We have similar commands for /review, which runs the linters and lists potential problems, and /fix-issue, which fetches a GitHub issue by number and fixes it.
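The /fix-issue command is the same idea. Roughly (the wording below is a sketch; the structure is what matters):

Fix the GitHub issue the user names.

1. Run `gh issue view <number>` to read the issue title, body, and comments
2. Find the relevant code in the codebase
3. Implement a fix following the conventions in `.cursor/rules/`
4. Run `yarn lint` and fix anything it reports
5. Summarize the change and reference the issue number in the commit message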
Once you notice a workflow you repeat several times a day, it's worth turning it into a command like this. The investment is small and the savings compound.
Debugging with Precision
If something is broken and you don't know why right away, the success of the debugging session depends almost entirely on how well you can describe the problem. A vague description leads to a vague investigation. If the agent knows exactly how to reproduce the issue, what inputs are involved, and what was expected versus what actually happened, it can instrument the code, gather runtime data, and work from evidence.
For UI bugs, a screenshot is almost always better than a written description. Paste the image directly into the chat and describe what the behavior should be. The agent is good at interpreting visual context and can spot layout or rendering problems that would take several sentences to describe accurately.
For genuinely stubborn bugs, treat the session like a real debugging session instead of a quick fix: describe the steps to reproduce, ask the agent to add targeted logging, reproduce the bug, and share the output. That takes more setup than asking "why is this broken?", but it gets you a real answer instead of a guess.
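"Targeted logging" is less fancy than it sounds. For a hypothetical bug where invoice totals come out wrong, the agent might add something like this around the suspect calculation (all names here are illustrative), and you paste the output back into the chat:

```ts
// Illustrative instrumentation around a suspected calculation.
// The point is evidence: what goes in, what comes out, and at which step it diverges.
interface LineItem {
  quantity: number;
  unitPrice: number;
}

const TAX_RATE = 0.21; // placeholder rate for the example

function applyTax(subtotal: number): number {
  return subtotal * (1 + TAX_RATE);
}

function calculateTotal(lineItems: LineItem[]): number {
  console.log("[invoice-debug] line items:", JSON.stringify(lineItems));

  const subtotal = lineItems.reduce(
    (sum, item) => sum + item.quantity * item.unitPrice,
    0,
  );
  console.log("[invoice-debug] subtotal:", subtotal);

  const total = applyTax(subtotal);
  console.log("[invoice-debug] total after tax:", total);

  return total;
}
```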
What Stays the Same and What Changes
It's worth being clear about what integrating Cursor actually changes in a development workflow, because the answer is more nuanced than "everything gets faster."
What changes is the cost of clearly defined work: writing a component that follows an established pattern, adding validation to a form, wiring a new endpoint to an existing service, writing tests for existing code, updating dependencies one at a time with the tests run after each bump. All of that gets faster.
The cost of unclear thinking stays the same. If you can't write a prompt that clearly explains what you want to build, the agent can't help you. If the architecture is poorly structured, the agent will work within it and make it worse, faster. The planning step described earlier exists precisely for this reason: the thinking has to happen before the agent can do the execution.
The developers on our team who get the most out of Cursor are the ones who arrive at the execution phase with a clear plan and specific prompts. The ones who use it to avoid thinking get the least out of it, and they end up with code that has to be thrown away.
Where to Begin
The order in which you introduce this workflow to a team matters.
Start with the planning habit. Before any code, take the time to produce a written spec and a sequential prompt plan. This improves output quality no matter which tool executes it, and it makes every step after it more predictable.
Then create a .cursor/rules/ directory with the commands to build and check your project, your core code conventions, and pointers to two or three canonical examples in the codebase. Commit it to git right away. Add to it only when you see the agent make the same mistake twice, not because you think it might.
After that, work on commit discipline: every finished step gets a commit with a specific message. It sounds mechanical, but it's the difference between undoing a bad agent run in thirty seconds and spending twenty minutes picking through changes across a dozen files.
When these three things are in place, everything else—custom commands, parallel agents, more detailed rules—becomes an improvement on a stable foundation instead of adding to a shaky one.








