Dennis Vinterfjärd


Why I Switched from Claude Code to Codex

ai, blog

I didn't switch because Claude Code was bad. I switched because my day-to-day workflow changed, and Codex fits it better right now.

I've genuinely been very happy with Claude Code for a long time. If you look at how much AI agents have improved over the last six months, it's honestly wild. Once you establish good standards in a project, the first pass from newer models is almost always close to right.

I recently read Peter Steinberger's post, Shipping at Inference Speed, and I agreed with a lot of it. I've also noticed that I read less and less code line by line now. I still review for anti-patterns and obvious garbage, but I don't inspect every single line the way I used to.

That doesn't mean no discipline. I usually work in repos with many contributors, so I don't commit directly to main.

Why PR Reviews Matter More Than Ever in the Age of AI-Generated Code

ai, culture

There is still debate about whether AI coding assistants truly improve developer productivity. In my experience they do, and by a lot. Work that used to take days can now be generated in minutes. The problem is that this speed introduces a new failure mode: complacency.

I keep seeing developers become less critical of code when they generated it with AI, and even less critical when reviewing AI-generated code from teammates. Bugs, performance issues, and avoidable technical debt pass through because everyone is moving too fast. The answer is not to reject AI. The answer is to raise the bar on pull request reviews.

The most common trap is "does it work" thinking. A developer prompts an agent, gets a lot of code quickly, runs it, sees green output, and moves on. But functional code is not the same thing as production-safe code. I have seen AI produce solutions that returned correct data while introducing multiple full table scans in a single API path. It worked in test conditions and would have hurt real users at scale. A reviewer caught it.

AI also makes very large diffs normal. A 2,000-line PR used to be rare. Now it shows up constantly. Large diffs push everyone toward skimming, and skimming destroys review quality. That is why responsibility sits with the PR author, not the model: break work down, explain intent clearly, and make the change reviewable.

A strong review culture also depends on psychological safety. Reviewers need to be able to say "no" without social fallout. That gets harder with seniority imbalance, where junior reviewers may see problems but still hesitate to block a senior engineer's PR.

Spec Driven Development Is Quietly Changing How We Use AI Editors

ai, blog

The AI editor space has been moving so fast that it is hard to keep up. Every few weeks there is a new editor, model, or pricing change that forces you to reevaluate your workflow. Some tools push the quality bar up, some tools create new friction, and all of them are changing how we build software.

My own workflow has gone through that same cycle. VS Code improved by adding agents directly in the editor, but using it often felt like death by approval dialog. Cursor used to feel like the power-user default, but the latest pricing and rate-limit behavior made it harder to trust for sustained heavy use. Windsurf turned into an ongoing acquisition saga where the future felt unclear, and if you want that timeline, Theo covered it well in this video, this one, and this one.

Claude Code stayed strong through all of this. Plan Mode is still one of the best workflows I have used because it forces a clean outline before implementation, and as a Vim user I like having the agent in a separate CLI instead of bolted into my editor setup.

Then I started using Kiro. At the time, it was free, shipped with Anthropic models, and pushed a workflow that felt much closer to how I actually want to work: start from intent, lock requirements, shape design, then execute with traceable tasks. Before that, I was doing a manual dance with markdown docs and chat tabs, copying context around and hoping I did not lose state. Kiro made that flow more native.

What sold me was not just document generation, it was continuity. I could describe a change, get requirements, refine them, get design output, and walk through a task list while still being able to redirect the architecture in the middle without restarting from zero. That was a major workflow upgrade.

Rust in the AI Era: The Backend Language of the Future?

ai, future

I have been thinking about this for a while, and after a lot of discussions at work, I keep landing on the same conclusion: Rust might be the best backend language for the AI era.

That is a big claim, but my reasoning is practical. Rust is strict, sometimes painfully strict, and it forces discipline in memory handling, typing, and error paths. The upside is that when the code compiles, the baseline reliability is very high.

That matters even more now that more backend code is being written with AI assistance. AI-generated code can look convincing while still hiding edge cases or weak failure handling. Rust changes that dynamic because the compiler rejects a lot of fragile code before it ever runs.
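A minimal sketch of what that dynamic looks like in practice. This is my own illustrative example, not code from any specific project: in Rust, a fallible operation returns a `Result`, and the compiler warns if the caller silently discards it and refuses to let the value be used without handling the error path.

```rust
use std::num::ParseIntError;

// A fallible parse: the failure path is visible in the signature.
// A caller cannot use the port value without first unwrapping or
// matching the Result, so "forgot to handle the error" does not compile.
fn parse_port(raw: &str) -> Result<u16, ParseIntError> {
    raw.trim().parse::<u16>()
}

fn main() {
    match parse_port("8080") {
        Ok(port) => println!("listening on {port}"),
        Err(e) => eprintln!("bad port: {e}"),
    }
    // parse_port("oops"); // left unhandled, rustc warns: unused `Result`
}
```

Note that `parse::<u16>()` also rejects out-of-range values like `70000`, which is exactly the kind of edge case generated code tends to gloss over in looser languages.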

This does not mean you can trust AI blindly. You still need design judgment, code review, and tests. But Rust gives you a tougher safety net, and that safety net is exactly what I want when the pace of generated code keeps increasing.

The biggest limitation right now is ecosystem maturity in the AI context. Rust adoption is still lower than many mainstream languages, which likely means less high-quality training data for models. But even with that limitation, the language-level guarantees are strong enough that I still prefer it.

Catching Bugs and Planning the Future: AI Tooling in Practice

ai, architecture, future

A few months ago I started experimenting with Gemini as a PR reviewer in our main frontend repository. It cannot approve PRs on its own, but it can leave comments and suggestions for developers to act on. The early feedback from the team was strong, so after reviewing enough examples myself, I enabled Gemini by default across all pull requests in all repositories.

That is where things got interesting.

During our yearly external penetration test, one finding pointed out that our OneTimePassword generator used C# Random, which is not cryptographically secure. The fix looked straightforward: switch to a secure random generator. But Gemini then caught a subtle follow-up issue that could have easily slipped through human review.

(Screenshot: Gemini catching the RandomNumberGenerator bug)

A single excluded digit had shrunk the space of possible OTP codes by roughly 46%. It was an easy mistake to make and a serious security issue if it had reached production.
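The post doesn't show the buggy code, but the arithmetic behind a figure like that is easy to sketch. Assuming a 6-digit numeric OTP (my assumption; the original length isn't stated), excluding one digit from the alphabet shrinks the keyspace by about 47%, in line with the ~46% figure above:

```rust
// Hypothetical illustration, not the original codebase:
// how much of the keyspace disappears when one digit is
// accidentally excluded from a 6-digit numeric OTP.
fn keyspace(alphabet_size: u32, code_length: u32) -> f64 {
    (alphabet_size as f64).powi(code_length as i32)
}

fn main() {
    let full = keyspace(10, 6);    // digits 0-9: 1,000,000 possible codes
    let buggy = keyspace(9, 6);    // one digit excluded: 531,441 codes
    let loss = 1.0 - buggy / full; // fraction of the keyspace lost
    println!("keyspace reduced by {:.0}%", loss * 100.0);
}
```

The loss compounds per position, which is why dropping one symbol out of ten costs nearly half the keyspace rather than a tenth of it.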

How AI Has Rewritten My Approach to Software Development

ai, blog

The last few months have completely changed how I think about building software. I started with small experiments in Visual Studio Code using agent mode. At first it felt like a fun way to automate boring tasks, but once I got better at prompting and started writing better playbooks, the output quality jumped fast.

That led me to Cursor. Compared to what I had in VS Code, the AI integration felt smoother and more natural, and my productivity went up. Not by a little, but enough to feel it every day. Then I got hit by the downside: a rogue background agent ignored spending limits and burned over $300 in one night. After that I kept looking.

Next I tried Claude Code. I was skeptical at first because a CLI workflow sounded like a step backward, but the productivity gains were real. Moving from Cursor to Claude felt like a big jump, not a small upgrade.

At work, we had an old ETL pipeline everyone hated: overengineered, hard to maintain, and full of friction. I rewrote it from scratch one evening and had around 12,000 lines of clean Python with close to full test coverage in about three hours. The same weekend I picked up a startup idea from a friend and built a working SaaS prototype in about ten hours with around 15,000 lines of React and TypeScript. The speed was one thing, but the quality was what surprised me most.

I went from AI skeptic, to cautious user, to fully convinced. We are in a different era now. In many cases it is faster and cheaper to rebuild than to keep patching legacy code forever. That changes how we should write software: less code written to impress other developers, more code written so both humans and AI can understand and extend it safely.