Software Isn’t Just Written, It’s Curated
This is the first in a short series exploring how AI is changing the way we design, develop, and deliver software. Before jumping into the next parts, I’d love to hear your thoughts, whether you agree, disagree, or see things differently. Share them here in the comments, or reach out to me directly. The discussion is as important as the writing.
One quick note before we dive in — this isn’t about vibe coding. You know the style: prompt the AI, get some code, hope it runs, and move on without fully understanding it. That might work for hacking together a game or side project — if it works, great; if it doesn’t, no big deal.
But that doesn’t fly inside a Fortune 500 company, where ethical, legal, and financial obligations are real, the stakes are high, and the impact reaches thousands (or millions) of people. Vibe coding might get a laugh in a startup demo, but it won’t cut it for the CEO. Maybe one day. But not yet. Not today.
This article is about real software engineering, where speed and experimentation are welcome, but quality, security, context, and correctness matter deeply.
…
I’ve been in software engineering for over 20 years. I’ve seen trends come and go, languages rise and fall, and tools shift from magic to mundane. By now, we can all agree that this AI moment is different. It’s not just another tool or trend; it’s changing the shape of the work itself.
So I’ve been asking myself: Where does my career, as a principal software engineering consultant, go from here? How do I stay relevant? How do I keep doing high-quality, meaningful work? And how can I help my clients do the same, without falling into the many traps that come with over-trusting AI?
What’s emerging for me is something I’m starting to call Fastloop, a new approach to software development in which engineers shift from coders to curators. In this model, AI accelerates the work, but the engineer ensures quality, intent, and ethics remain intact. It’s about fast thinking, fast feedback, and thoughtful execution. Much as Agile reshaped delivery, I believe Fastloop (or Curated Engineering, or whatever we end up calling it) is starting to reshape how we build software in the age of AI. This series is my way of exploring those questions out loud.
We’ve been at an inflection point for a while now. When ChatGPT first came out, there was a wave of fear and hype; some thought it would take over the world overnight. But now, most of us are shifting from fear to curiosity, adopting these tools (ChatGPT, Cursor, NotebookLM, etc.) and exploring how they might actually improve our work and effectiveness.
It’s tempting to think of this moment, with AI reshaping software development, as something brand new. But the truth is, we’ve been building toward it for years. What’s different now is the speed of change. The pace of idea-to-implementation has compressed dramatically. And that’s not just because we’re working smarter; it’s because we’re working with machines that “think” fast. “Think” is in quotes for a reason — machines don’t actually think. They compute, predict, and generate. But they can make us think faster by shortening the feedback loop and giving us responses, scaffolding, and even full prototypes in seconds. I’ll call it “thinking” in this article and the ones that follow.
So… A Code Curator?
What should we name this new role? Or is it even a new role at all? For now, let’s call it a code curator, as opposed to a traditional software engineer, until we settle on something better. A code curator is not just someone who can prompt ChatGPT, write specs for Cursor, or wire up a language model. It’s a hybrid: part software engineer, part data scientist, part product thinker. I see it not as a new job title but as a merging of mindsets.
From Fast Thinking to Fast Coding
Computers have always helped us think faster. But with AI, we’ve crossed into a new domain: fast response. Fast code generation. The time between “what if we built…” and “here’s a working prototype” is collapsing. That changes everything, from how we design to how we validate to how we collaborate.
But let’s be real: just because AI can write code doesn’t mean it understands context well. It doesn’t grasp nuance, business trade-offs, or edge cases. It can mimic logic, but it needs a guide.
Prototypes vs. Production: A Crucial Line
One of the biggest misconceptions right now is that AI-written code is “production-ready” by default. It’s not.
Right now, AI is excellent at producing quick prototypes. It’s fast, confident, and surprisingly capable at stitching together working demos, POCs (proofs of concept), and initial scaffolds. But enterprise-ready, production-grade code is a different beast. It needs to be secure, maintainable, observable, resilient under load, and considerate of edge cases and integration boundaries.
AI can be part of that journey, but it needs help and lots of guidance. It needs experienced engineers to review its output, spot the blind spots, stress test the assumptions, and apply human judgment. This doesn’t mean AI isn’t useful in production; it just means it needs a partner. And that’s where engineers come in.
For example, a senior engineer might use AI to generate two or three different versions of a component or approach, and then evaluate which one is the most robust, secure, and scalable. The AI speeds up exploration, but the decision-making and quality assurance still rely on human expertise. That extra layer of critical thinking, that’s not overhead. That’s the work.
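To make that concrete, here’s a minimal sketch of what a “generate, then curate” loop could look like. It assumes the OpenAI Python SDK; the task prompt, model name, and candidate count are illustrative placeholders rather than a prescription, and the evaluation step deliberately stays with the engineer.

```python
# A minimal sketch of the "generate, then curate" loop described above.
# Assumes the OpenAI Python SDK; the task, model, and count are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TASK = (
    "Write a Python function that retries an HTTP GET request "
    "with exponential backoff."
)

def generate_candidates(task: str, n: int = 3) -> list[str]:
    """Ask the model for n independent takes on the same task."""
    candidates = []
    for _ in range(n):
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model choice
            messages=[{"role": "user", "content": task}],
            temperature=0.8,  # higher temperature encourages varied approaches
        )
        candidates.append(response.choices[0].message.content)
    return candidates

if __name__ == "__main__":
    for i, code in enumerate(generate_candidates(TASK), start=1):
        print(f"--- Candidate {i} (for human review) ---\n{code}\n")
    # The curation step stays human: weigh each candidate for security,
    # error handling, and fit with the existing codebase before adopting one.
```

The tooling here is interchangeable; the point is that the loop ends with a human decision, not an automatic merge.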
Have you run into this scenario in your own work? What did you learn from comparing AI-generated options to your own judgment? Share an example if you can; those stories help us all.
Redefining What “Takes a Long Time”
When someone says, “this will take a while,” we have to ask: which part? Writing the code? Not anymore. What actually takes time now is the deep stuff: aligning with stakeholders, designing meaningful experiences, building scalable and secure architectures, and ensuring the AI-generated pieces fit cleanly into the whole.
This is where experienced engineers remain irreplaceable. The craft has changed, but the need for craft hasn’t.
Where are you spending most of your time these days? Has AI freed you up for deeper work, or added more complexity?
Engineers Are Still the Medium
Marshall McLuhan’s phrase, “the medium is the message,” has never been more relevant. If code is the artifact and AI is the tool, then the engineer is the real medium, directing the interaction between intent and implementation. Engineers shape the message through curation, guidance, and judgment. Engineers are not just building software; they’re shaping how software gets built. That shift requires more than new tools — it requires a mindset change.
Mindset Shift: From Coders to Curators
AI is moving us from being builders of every brick to curators of blueprints. We’re less likely to start from scratch; instead, we’re steering, shaping, and validating. The value engineers bring is in knowing what to ask, what to accept, and what to push back on.
That shift calls for new skills, yes, but also a new posture. Less about typing, more about thinking. Less about output, more about outcomes.
What does your version of this mindset shift look like? Are you curating, prompting, designing? Or are you still trying to find the right role in all of this?
Fastloop? A Name for the Shift
So what should we call this emerging way of working — this shift toward super-fast feedback, thoughtful prototyping, and collaborative human-AI problem-solving?
I’m calling it Fastloop for now. It’s not a job title or a strict methodology; it’s a mindset. One where engineers aren’t replaced by AI but re-centered as curators of quality, guides of intent, and guardians of context. Where speed doesn’t sacrifice substance, and tools don’t replace thought.
Does Fastloop resonate with you? Or is there another name that better captures this new mode of working? What questions are you wrestling with in this AI-powered shift? I’d love to hear your take; your experience might be exactly what someone else needs to hear.
Next up in Part 2:
Do you want to dig into any of the topics above a little deeper, or should we tackle the accountability puzzle: What happens when AI writes the code? Who owns it? Who’s liable when things go wrong? And how do we help our clients navigate the legal and ethical terrain without losing speed or trust?
Poll Options:
- Who’s accountable when AI writes code?
- What’s the right way to structure AI-enabled teams?
- What does “good engineering” look like in the AI era?
- Something else — comment below!