November 15, 2025
Agents as patient pair programmers
Most of my best weeks lately have had an extra “engineer” on the team: an AI agent sitting in the editor with me.
Not a replacement, not a magic architect—just a relentlessly available pair programmer that never gets tired of typing boilerplate or sketching alternatives.
Tools like Codex, Augment Code Copilot, and Claude Sonnet 4.5 are changing how I work in the same way good linters and test runners once did. They remove friction, not responsibility. The trick is to treat them as helpers that amplify your intent instead of oracles that decide what to ship.
This site is a small, very real example. Most of the layout, content wiring, and deployment pipeline started as conversations with Codex. When the project hit a wall—static exports, nginx edge cases, a tricky sitemap—I swapped to Augment Code with Claude Sonnet 4.5 to unstick things and then went right back to editing by hand. The end result feels like my work, just delivered with a lot less thrash.
Here’s how I’m using agents day to day to improve productivity without handing them the steering wheel.
Agents tighten the feedback loop
The biggest win isn’t that an agent can write code “for” you. It’s that it can make your feedback loop brutally short.
Instead of:
- Think about how something might look.
- Tab over to docs and examples.
- Sketch a version yourself.
- Refine it through a few rounds of trial and error.
You can move to:
- Describe the outcome you want.
- Get a concrete draft or three.
- Critique, trim, and adapt.
That shift—from blank page to editable draft—is where most of the productivity gain lives.
Codex is especially good at this: you describe a section (“Contact card with three links on the left and a simple form on the right, styled like the rest of the site”), and it gets you to something coherent and on-brand in a couple of minutes. You still decide what to keep, but you’re not burning your morning on flexbox alignment and Tailwind class names.
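To make that concrete, the draft that comes back looks roughly like the sketch below. It is an illustration rather than this site's actual code; the component name, links, and Tailwind classes are all placeholders.

```tsx
// Rough shape of the kind of draft that prompt produces. Names, links, and
// classes are placeholders, not this site's actual contact card.
export function ContactCard() {
  return (
    <section className="grid gap-8 md:grid-cols-2">
      {/* Left column: three links */}
      <div className="flex flex-col gap-3">
        <a href="mailto:hello@example.com" className="underline">Email</a>
        <a href="https://github.com/example" className="underline">GitHub</a>
        <a href="https://www.linkedin.com/in/example" className="underline">LinkedIn</a>
      </div>
      {/* Right column: a simple form */}
      <form action="/api/contact" method="post" className="flex flex-col gap-3">
        <input name="subject" placeholder="Subject" className="rounded border p-2" />
        <input name="contact" placeholder="How can I reach you?" className="rounded border p-2" />
        <textarea name="message" placeholder="Message" className="rounded border p-2" />
        <button type="submit" className="rounded bg-black px-4 py-2 text-white">Send</button>
      </form>
    </section>
  );
}
```

The value isn't that this is shippable; it's that the structure and the naming are already on screen to argue with.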
Augment Code Copilot plays a similar role, but closer to the code you already have. It reads the surrounding context, suggests inline changes, and lets you say “yes/no” without breaking your editing flow. When you pair it with a reasoning model like Claude Sonnet 4.5, you get both the low-friction inline edits and a more deliberate voice to talk through architecture or tricky edge cases.
Concrete patterns that actually help
A few patterns have consistently paid off for me:
1. Use agents to scaffold, not to finish
I rarely ask an agent to “build the whole feature”. Instead, I ask for scaffolding:
- “Give me a tiny `ContactForm` component with subject, contact, and message fields that posts to this URL.”
- “Sketch a GitHub Actions workflow: build the Next static export, build a Docker image, push to GHCR, and bump the Helm values.”
- “Outline a `sitemap.tsx` for a localized blog with yearly archives.”
Those responses are allowed to be 80% right. The remaining 20% is where judgment and context live: how our environments are wired, how the Helm chart is structured, which metrics matter for this particular project.
The important part is that the agent gives me a working starting point that compiles. I can then slice back the complexity, rename things, and make it feel like the rest of the codebase.
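For the sitemap prompt above, the scaffold that comes back looks something like this. It's a sketch under assumptions: the base URL, locales, and years are stand-ins, and the real file on this site is organized differently.

```ts
// app/sitemap.ts: roughly the kind of scaffold the sitemap prompt produces.
// Base URL, locales, and years are placeholders, not this site's real values.
import type { MetadataRoute } from "next";

// Needed when the sitemap route is combined with `output: "export"`.
export const dynamic = "force-static";

const baseUrl = "https://example.com";
const locales = ["", "/da"]; // English at the root, Danish under /da
const years = [2023, 2024, 2025];

export default function sitemap(): MetadataRoute.Sitemap {
  const entries: MetadataRoute.Sitemap = [];
  for (const locale of locales) {
    entries.push({ url: `${baseUrl}${locale}/blog`, changeFrequency: "weekly" });
    for (const year of years) {
      entries.push({ url: `${baseUrl}${locale}/blog/${year}`, changeFrequency: "yearly" });
    }
  }
  return entries;
}
```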
2. Treat agents as a second pair of eyes
Agents are good at catching the kinds of mistakes you’d spot in a teammate’s PR review:
- Inconsistent imports and alias use.
- Missing `baseUrl` or path mappings for TypeScript.
- Runtime errors, like trying to access a streamed `params` object synchronously in a route.
- Edge cases like Next’s `output: "export"` needing an explicit `dynamic = "force-static"` on sitemap routes.
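The `params` one is worth a concrete example. As a hedged sketch, assuming a Next.js 15 app-router page where `params` arrives as a Promise (the route and field names here are made up), the fix looks like this:

```tsx
// Hypothetical app/blog/[slug]/page.tsx; the route and prop shape are assumptions.
type Props = { params: Promise<{ slug: string }> };

// Broken: reading params synchronously, as if it were a plain object.
// export default function Post({ params }: Props) {
//   const { slug } = params; // error: params is a Promise here
// }

// Fixed: await params before destructuring it.
export default async function Post({ params }: Props) {
  const { slug } = await params;
  return <article>{slug}</article>;
}
```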
For this site, Codex caught a few of those early, but it also got tangled once the constraints stacked up: static export, nginx in a container, GitHub Actions, Helm, localized routes, and a handwritten blog engine.
That’s where I reached for Augment Code plus Claude Sonnet 4.5. Claude was better at holding all of those constraints in working memory and reasoning about them as a system:
- “If you move the form into its own client component, what breaks?”
- “What happens to `/blog` and `/da/blog` in a static export?”
- “Why is nginx returning 403 for everything except `/`?”
It wasn’t that Claude was “smarter”; it was that I had a second mental stack keeping track of all the moving parts while I focused on the shape of the experience.
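One outcome of that back and forth, hedged and simplified here, was enumerating every localized blog route up front so the export contains plain files for both languages. The `[locale]` segment below is a simplification, since the real site serves English at the root rather than under a prefix.

```tsx
// Hypothetical app/[locale]/blog/page.tsx: pre-render a page per locale so the
// static export contains both variants as plain files.
export const dynamic = "force-static";

export function generateStaticParams() {
  // The locale list is an assumption for illustration.
  return [{ locale: "en" }, { locale: "da" }];
}

export default async function BlogIndex({
  params,
}: {
  params: Promise<{ locale: string }>;
}) {
  const { locale } = await params;
  return <h1>{locale === "da" ? "Blog (dansk)" : "Blog"}</h1>;
}
```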
3. Use agents to narrate and document your intent
I’ve found that asking an agent to restate what I’m trying to do is one of the simplest sanity checks:
- “Summarize what this GitHub Actions workflow is doing.”
- “Explain how this contact section works to someone who hasn’t seen the code.”
- “Describe the deployment pipeline from `git push` to traffic hitting nginx.”
If the summary is wrong, I know my code is either unclear or misaligned with the idea in my head. That’s a cheap way to catch design drift before it shows up in bugs or confused teammates.
On this site, that looked like:
- Having Codex describe how the `ContactSection` and `ContactForm` components cooperate.
- Letting Claude Sonnet 4.5 explain the sitemap strategy across locales and years.
- Asking Augment Code Copilot for a one-paragraph README addition when we added Docker + nginx serving the exported `out/` folder.
The payoff is that future-me (or a future teammate) gets a codebase that reads more like a story and less like a pile of clever one-offs.
This site as a working example
A lot of portfolio sites hand-wave about “built with AI”. Here’s what actually happened with this one.
- Initial layout and components: I leaned heavily on Codex to sketch the Home page layout, the skill and services sections, and the overall visual rhythm. The instructions were mostly plain English: “hero with three CTAs”, “blog teaser section with three cards”, “localized copy for Danish and English”.
- Content system and blog: The markdown-based blog, yearly archives, and post snippet logic started as Codex drafts. It wired up `src/content/posts`, file-based routing, and the `getPostSnippet` helper that now strips out image embeds for cleaner previews (a rough sketch of that helper follows after this list).
- Contact flow: The contact section went through a few iterations:
  - A simple static form for the demo.
  - An extracted `ContactForm` component with client-side submission.
  - Integration with a Google Apps Script endpoint and a post-submit “thank you” state that hides the form and updates the card title.
  Codex got most of the way there; Claude Sonnet 4.5 helped iron out the UX and error handling.
- Static export + nginx + Docker: Building a static export, serving it from nginx in Docker, and keeping routes like `/blog` working turned out to be the thorniest part.
  - Codex helped scaffold the Dockerfile and nginx config.
  - Augment Code Copilot suggested small inline fixes as files moved around.
  - Claude Sonnet 4.5 walked through the 403s and 404s from nginx, explained the interaction between exported files and `try_files`, and helped land on a config that doesn’t break client-side routing.
- CI/CD and Helm: The GitHub Actions workflow that builds the static site, builds and pushes the image to GHCR, and bumps the Helm values came from a mix of:
  - Me providing an existing pipeline snippet.
  - Codex adapting it for this repo.
  - Claude Sonnet 4.5 tightening the edges around tags, `APP_VERSION`, and how the chart picks up the new image.
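As promised above, here is roughly what a `getPostSnippet` helper like that can look like. It's a sketch, not the exact implementation; the regex and the snippet length are assumptions.

```ts
// Sketch of a snippet helper that strips image embeds before truncating.
// The markdown flavor and the 200-character cutoff are assumptions.
export function getPostSnippet(markdown: string, maxLength = 200): string {
  const withoutImages = markdown
    // Drop markdown image embeds like ![alt](/images/foo.png)
    .replace(/!\[[^\]]*\]\([^)]*\)/g, "")
    // Collapse leftover whitespace into single spaces
    .replace(/\s+/g, " ")
    .trim();

  return withoutImages.length > maxLength
    ? `${withoutImages.slice(0, maxLength).trimEnd()}…`
    : withoutImages;
}
```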
At every step, the agents produced drafts. I made the calls: which patterns to adopt, when to simplify, when to throw away a suggestion and write something more boring but more maintainable.
Agents are tools, not teammates
It’s tempting to anthropomorphize these systems (“Claude understands my codebase better than I do”), but that’s a dangerous mental model.
Good agents:
- Have no skin in the game.
- Don’t sit on call.
- Don’t talk to your users.
- Don’t wake up at 3am when nginx falls over.
You do.
So I try to keep a few guardrails in place:
- Never ship a change you don’t understand. If the suggestion feels magical, break it into smaller, inspectable pieces before merging.
- Keep diffs small. Ask agents for focused edits instead of sweeping rewrites. It’s easier to review and easier to roll back.
- Prefer boring solutions. If the agent suggests something clever and something obvious, choose the obvious one unless you can articulate a strong reason not to.
- Write your own tests. Agents can help draft tests, but they shouldn’t be the ones deciding what “correct” looks like.
When I stick to those rules, the agents feel like power tools. When I don’t, they feel like someone else is holding the keyboard and I’m just rubber-stamping changes I barely read.
Where to start if you’re curious
If you haven’t woven agents into your day-to-day yet, a few easy entry points:
- Use Codex (or a similar code-focused agent) to refactor a small component you already understand. Compare before/after and keep only what improves clarity.
- Ask Augment Code Copilot to suggest inline changes in a file you’re already editing instead of jumping into a big new feature with it.
- Bring in a reasoning-oriented model like Claude Sonnet 4.5 when you’re designing a system, not just when you’re stuck in syntax.
And—if you want a concrete example—poke around the history of this site. Almost every file has fingerprints from one of those tools, but the overall experience is still mine. The agents just helped me get there without burning a week on yak shaving.
That’s the sweet spot: agents as patient pair programmers that keep your hands moving and your head clear, while you stay fully responsible for the work that ships.