From Writing Code to Setting Direction

AI has changed the way we build software. Not by replacing developers and architects, but by changing what they spend their time on.

From our perspective at Immeo, this is one of the most fundamental shifts we have seen in software development in many years.

It is also a shift that is easy to misunderstand. Because when AI can write code faster, it can look like a story about automation. In practice, it is at least as much a story about governance. The value does not lie in producing more code. The value lies in the right people being able to spend more time on the decisions that shape the quality, longevity, and business value of the solution.

A shift in role, not in relevance

For decades, the traditional role of the software developer has been to translate requirements into code: understand the problem, design the solution, write the implementation, test, and deliver. This remains a demanding discipline. But the division of tasks is changing.

Many of the tasks that previously took up a large part of a developer's day – routine implementation and debugging of familiar patterns – can today be handled by AI if given the right framework. This frees up time for what still requires experience, context, and judgement: architecture, prioritisation, review, trade-offs, and an understanding of the business the solution is meant to support.

This does not mean the developer becomes less important. It means the role becomes sharper. When AI takes on more of the implementation, the quality of the direction becomes more critical.

Our approach: AI with direction, not vibe coding

Vibe coding is an approach where you ask AI to build software by describing what you want – and then accept the output without systematically understanding, reviewing, or governing what the AI produces. This can yield quick results in the prototype phase but typically leads to code that is difficult to maintain, test, and build upon. The term is used in the industry to describe AI-assisted development that prioritises speed over expert control.

There is a significant difference between using AI as an advanced autocomplete tool and using it as a development agent that works independently on concrete tasks. The former makes the individual developer faster. The latter changes the working process itself.

At Immeo, we use AI agents as a regular part of our development practice. In each project, we typically stick to one primary coding agent to ensure consistency in our working method, but we do not choose tools dogmatically. What matters is not which agent we choose. What matters is the framework it operates within.

The agent works directly in the systems the team already uses – project board, repository, and documentation. It reads tasks, implements, tests, and reports progress within the team's existing workflows, whether that is Azure DevOps, Jira, or a GitHub-based setup. Our developers and architects steer the direction and review the deliverables but spend less time writing code line by line.
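
To make the loop concrete, here is a minimal sketch of one such cycle. It is an illustration, not our actual tooling: Board, Task, and the implement callback are invented stand-ins for a real project board and coding agent.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Task:
    id: str
    title: str
    acceptance_criteria: list[str]
    status: str = "Ready"

@dataclass
class Board:
    """Stand-in for a project board such as Azure DevOps or Jira."""
    tasks: list[Task] = field(default_factory=list)

    def next_ready_task(self) -> Task | None:
        return next((t for t in self.tasks if t.status == "Ready"), None)

    def move(self, task: Task, to: str) -> None:
        task.status = to
        print(f"[board] {task.id} -> {to}")  # progress stays visible to the team

def process_next_task(board: Board, implement: Callable[[Task], bool]) -> None:
    """One agent cycle: read a task, implement and test, report back."""
    task = board.next_ready_task()
    if task is None:
        return
    board.move(task, to="In progress")
    tests_passed = implement(task)           # the coding agent works here
    board.move(task, to="In review" if tests_passed else "Blocked")

board = Board([Task("T-101", "Add retry to payment client", ["Retries 3 times"])])
process_next_task(board, implement=lambda task: True)  # placeholder agent
```

The point is not the code but the shape: the agent's reads and writes happen in the same systems the team already follows.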

This shifts the effort from mechanical production to expert-led oversight. We spend more time on architectural direction, integration patterns, quality assurance, and the decisions that have consequences long after the first release.

At the same time, it creates a more transparent way of working. When the agent operates in the same systems as the rest of the team, progress and decisions become more visible. It is not a black box. It is a working process that can be followed, assessed, and governed.

The most important thing we have learned about AI in software development is this: AI without a compass produces code, but not necessarily the right code, in the right way, with the right quality. AI is only as good as the framework it is given.

This has a practical implication. Without explicit frameworks, the AI makes its own architectural decisions along the way. Each response may seem reasonable in isolation, but over time architectural drift occurs: a codebase that slowly fragments because local choices do not add up to a coherent whole.

That is why we deliberately work with skills: structured accelerators that make our best practices, architectural principles, and coding standards explicit in a form the agent can work directly from. This is not process for process's sake. It is a way of putting experience into a system, so that the AI operates from the same technical foundation as the team.

Skills typically cover four areas: architectural principles, coding standards, integration patterns, and quality assurance.

This makes a real difference. Output becomes more consistent. New tasks do not start from scratch. And implicit knowledge is made explicit and reusable across projects, developers, and tools.
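
To show the shape of the idea, a skill could be encoded along these lines. The example is purely illustrative – the name and every rule below are invented – but it captures the mechanism: standards recorded as data the agent loads as context before it starts a task.

```python
# Hypothetical skill: team standards in a form an agent can load as context.
skill = {
    "name": "example-dotnet-service",  # illustrative name, not a real skill
    "architecture": [
        "Keep domain logic free of infrastructure dependencies",
        "New integrations go through the existing gateway layer",
    ],
    "coding_standards": [
        "Public APIs require documentation comments",
        "No silent catch blocks; log and rethrow or handle explicitly",
    ],
    "review_checklist": [
        "Edge cases from the acceptance criteria are covered by tests",
    ],
}

def as_agent_context(skill: dict) -> str:
    """Flatten the skill into instructions prepended to every agent task."""
    lines = [f"Skill: {skill['name']}"]
    for area, rules in skill.items():
        if isinstance(rules, list):
            lines.append(f"\n{area.replace('_', ' ').title()}:")
            lines += [f"- {rule}" for rule in rules]
    return "\n".join(lines)

print(as_agent_context(skill))
```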

We deliberately align with established standards and reuse existing skills and integration patterns where it makes sense. We only build our own when there is a genuine Immeo-specific reason to do so. This discipline matters, because AI-assisted development quickly becomes difficult to govern if each project invents its own method along the way.

Review is different, not easier

Reviewing AI-generated code is not the same as reviewing code written by a colleague. The volume is greater, the pace higher, and the failure patterns are different.

AI-generated code rarely fails syntactically. It more often fails on semantic correctness, architectural consistency, and edge cases that are not described clearly in the task. It can deliver something that looks plausible, and that may even work locally, without fitting properly into the solution as a whole.

For this reason, we work in practice with two layers of review. The functional review is about whether the solution actually does what it is supposed to, and whether tests and edge cases are covered. The architectural review is about whether the implementation follows the patterns and principles we have chosen, and whether it supports the long-term structure rather than undermining it.
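
The functional layer leans heavily on automated tests. Parts of the architectural layer can also be checked mechanically, as a pre-filter before the human review. The sketch below illustrates one such check against an invented rule – domain code must not import from the infrastructure or API layers – assuming a simple folder-per-layer layout:

```python
import ast
from pathlib import Path

# Invented example rule: modules under domain/ must not import from the
# infrastructure or api layers. A pre-filter for review, not a replacement
# for architectural judgement.
FORBIDDEN_FOR_DOMAIN = {"infrastructure", "api"}

def layer_violations(src_root: Path) -> list[str]:
    violations = []
    for path in src_root.glob("domain/**/*.py"):
        tree = ast.parse(path.read_text())
        for node in ast.walk(tree):
            if isinstance(node, ast.ImportFrom) and node.module:
                top = node.module.split(".")[0]
            elif isinstance(node, ast.Import):
                top = node.names[0].name.split(".")[0]
            else:
                continue
            if top in FORBIDDEN_FOR_DOMAIN:
                violations.append(f"{path}: imports {top}")
    return violations

print(layer_violations(Path("src")))  # empty list means no violations found
```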

The second layer cannot be delegated to AI. It requires technical judgement and architectural sensibility. This is precisely why seniority and professional discipline become more important – not less – in an AI-assisted development process.

This does not exclude AI from assisting in the review process itself. AI can be used to highlight deviations from established patterns, point to unhandled error scenarios, or compare an implementation against existing conventions in the codebase. But it is still the developer or architect who reads, assesses, and makes the call. AI can support the eye – it cannot replace it.
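
What that support can look like is deliberately simple. In the sketch below, complete stands in for whichever model call a team actually uses; the structure of the prompt – conventions, diff, and a request for observations rather than a verdict – is the point:

```python
from typing import Callable

def review_hints(diff: str, conventions: str,
                 complete: Callable[[str], str]) -> str:
    """Ask a model for review *input*; the human still makes the call."""
    prompt = (
        "You are assisting a human code review. Do not approve or reject.\n\n"
        f"Team conventions:\n{conventions}\n\n"
        f"Diff under review:\n{diff}\n\n"
        "List (1) deviations from the conventions, "
        "(2) unhandled error scenarios, and "
        "(3) edge cases that appear untested."
    )
    return complete(prompt)
```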

The right agent for the right task

Not all AI agents are built for the same purpose. Our primary coding agents – such as Claude Code, Codex, Cursor, and GitHub Copilot – are well suited to working broadly and continuously in a codebase. For more narrowly defined tasks, however, a specialised agent can produce better results. For rapid UI generation, v0 from Vercel and Lovable.ai are examples of agents built precisely for that purpose, which can supplement the primary agent in the phases of a project where it makes sense.

The important thing is not that everything must go through one tool. The important thing is that the choice is deliberate. The agent we use for a task must fit the purpose and be able to operate within the framework we have established. Otherwise, you get speed without the ability to stay in control.

What this requires of the organisation

An AI-assisted development process does not run itself. In several respects, it actually places higher demands on the team than a traditional approach.

First, it requires stronger architectural judgement. Good decisions about structure, patterns, and dependencies scale quickly when AI implements within them. So do poor decisions.

Second, it requires sharper task definition. AI does not pick up the phone to ask when a task is unclear; it fills the gaps with its own assumptions. This means requirements, task breakdown, and acceptance criteria need to be more precise than before.
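
An invented example of the difference in precision. The first version leaves the agent to guess; the second states what can be implemented and verified:

```python
# Both tasks are invented for illustration.
vague_task = "Add export to the report page"

precise_task = {
    "title": "Export the monthly report as CSV",
    "scope": "Report page only; the underlying report model is unchanged",
    "acceptance_criteria": [
        "An Export button downloads a CSV with the columns currently shown",
        "Dates are formatted as ISO 8601 (YYYY-MM-DD)",
        "An empty report yields a CSV with headers only, not an error",
    ],
    "out_of_scope": ["PDF export", "scheduled delivery"],
}
```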

Third, it requires continuous maintenance of the frameworks in use. Skills are not something you set up once. Technologies, standards, and teams evolve. If the framework does not keep pace, the AI begins to operate on outdated assumptions.

Fourth, it requires explicit knowledge management. Best practices, principles, and decisions must be made clear and shareable, so that both people and agents can apply them consistently. This is an investment, but it is also where much of the scaling effect comes from.

What this means for our clients

For clients, the central question is not whether we write more or less code. The central question is whether we deliver more value and more robustness for the same investment.

The short answer is yes. You get more for the same investment, because we shift our effort from routine implementation to architecture, integrations, quality assurance, and technical decisions with long-term impact. This gives you more of the deep expertise that actually reduces risk and increases the durability of the solution.

You also get greater transparency. Close integration with project boards and repositories has long been an important part of our way of working. When the agent operates in the same systems that the team and client already follow, progress, tasks, and deliverables become visible as a natural part of the work – without requiring separate status meetings to surface them.

And you get the opportunity to test ideas earlier in the process, while it is still inexpensive to adjust direction. This makes it easier to learn quickly without compromising quality.

Want to hear more?

Contact
Rasmus Corlin, Director

+45 2222 0261

rco@immeo.dk
