by Bruno Campos
The Vibe Coding Dream
Imagine this: "Write me an app that allows me to book squash court venues for me and my friends."
Simple request, right? The complexity packed into that single sentence is absolutely gargantuan. We're talking about communication with venues, APIs for booking (if they exist), a platform for friends to see availability and locations, venue confirmation systems, payment processing, authentication, database selection, real-time updates (if venue APIs support it), and more. And here's the kicker: most squash venues I've encountered have either a half-maintained Google Sheet (at best), or something even more archaic: pen and paper [shock].
The "viber" here expects a single LLM to figure all this out and return a fully-functioning, hosted, tested, investor-ready app. And you know what? That's a fantastic benchmark for testing LLM capabilities, and I'll continue testing new multi-billion dollar models with similar prompts and be just as disappointed as everyone else when they inevitably fall short, because after all… they're multi-billion dollar models.
But here's the uncomfortable truth: whilst I believe we'll eventually reach a state where anyone can create, ship, and maintain production-level code with no coding experience, we're either quite far from this reality despite what tech companies tell investors, or, more likely given every previous development in tech history, we'll still need a version of "developers" to do the critical thinking behind products that make it to production and become successful.
Enter Intent Driven Development
This is where Intent Driven Development comes in. Unlike vibe coding, which attempts to minimise the level of thinking and input from the user (note I didn't say developer, because a true viber can and should be anyone), Intent Driven Development embraces a different reality: you and your favourite LLM buddy work through different steps of a workflow side by side, validating each other's work and churning out incredible outputs in a fraction of the time you normally would.
You'll still spend the same time thinking about systems, architecture, and business context. But you'll spend dramatically less time worrying about semicolons, terminal setup, and Cthulhu-spawned error messages that make you question your life choices.
The key difference? Intent Driven Development assumes you're the architect. You make the important decisions, the ones that require domain knowledge, business context, and strategic thinking. The LLM handles the translation of your intent into working code, following the best practices of your specific repo, running tests, and debugging those cryptic error messages that would normally send you down a 3-hour ~~Stack Overflow~~ GPT rage-induced rabbit hole of debugging your code.
Think of it this way: you could theoretically work entirely using pen and paper, drawing architecture diagrams, sketching UI designs, mapping out user flows, and then hand it all to your LLM assistant to bring to life. But this is only possible if you create the right platform for your LLM first.
Building the LLM Platform
Here's the crucial insight: this workflow is only possible if you have confidence that your LLM buddy knows how to think like you, can get the right information from documentation, has context on the project, and, most importantly, has the capability to run tools and test things itself without much supervision.
The evolution of LLM tooling has shifted from "LLMs creating from knowledge" to "LLMs using tools to access truth". This is the paradigm shift that makes Intent Driven Development possible.
Model Context Protocol (MCP)
What it is: MCP is an open protocol that standardises how AI assistants connect to external data sources and tools. Think of it as USB-C for AI: a universal way for LLMs to plug into your systems.
Why it's useful: Instead of your LLM hallucinating how to use an API or guessing at your company's best practices, MCP servers provide exact, up-to-date instructions and data. Your LLM can query your internal documentation, access your databases with proper permissions, or interact with your company's specific tooling, all through a standardised interface.
Technical explanation: MCP servers expose tools, resources, and prompts to LLM clients through a JSON-RPC protocol. When your LLM needs information, it makes a structured request to an MCP server, which returns accurate, real-time data rather than relying on stale training data.
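To make that structured request concrete, here is a minimal sketch of what an MCP tool call looks like on the wire. The `tools/call` method name follows the MCP specification's JSON-RPC conventions; the tool name and its arguments are hypothetical examples, not any real server's API.

```python
import json

# Sketch of the JSON-RPC 2.0 message an MCP client sends to invoke a tool
# on a server. "tools/call" is the spec's method for tool invocation; the
# tool "query_docs" and its arguments are made up for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_docs",  # hypothetical tool exposed by a server
        "arguments": {"query": "incremental model conventions"},
    },
}

# On the wire this is just serialised JSON; the server replies with a
# response carrying the same id and real, current data instead of
# whatever the model half-remembers from training.
wire_message = json.dumps(request)
print(wire_message)
```

The important property is that the payload is structured and validated, so the server can return ground truth rather than the model guessing.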
Analogy: Imagine teaching someone to cook by having them memorise every recipe book ever written, versus giving them access to a kitchen, fresh ingredients, and a digital assistant that can look up any recipe on demand. MCP is the latter: providing fresh, accurate information exactly when needed.
The impact: With MCP, you're no longer playing slots with a large language model, hoping it gets your company's coding standards right this time. You're leveraging an intelligent (though admittedly unwise) artificial brain that can access the ground truth of how things work in your environment.
Agentic Tools: Claude Code, Skills, Commands, and Agents
What they are: These are systems that allow LLMs to execute multi-step workflows autonomously. Claude Code (my favourite of these tools at the moment), for example, can read files, run commands, search codebases, execute tests, and iterate on failures, all without constant human intervention.
Why they're useful: They close the feedback loop. Instead of the traditional flow where you write code → LLM suggests changes → you copy-paste → you run tests → you report errors → LLM suggests fixes (repeat ad nauseam), agentic tools allow: you describe intent → LLM executes → LLM validates → LLM self-corrects → LLM presents the solution.
Technical explanation: These tools combine LLM reasoning with the ability to execute commands, read outputs, and make decisions based on results. Skills are reusable workflows, Commands are custom project-specific instructions, and Agents are autonomous executors that can chain multiple actions together.
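As a rough mental model (not any specific tool's real API), those three concepts can be sketched in a few lines of Python: skills as reusable functions, commands as project-specific orderings of skills, and an agent as the loop that chains them together. All names here are illustrative.

```python
# Skills: reusable workflows. Each takes the working context and
# returns an updated one. Real skills would run linters, tests, etc.
skills = {
    "run_tests": lambda ctx: ctx + ["tests passed"],
    "lint":      lambda ctx: ctx + ["lint clean"],
}

# Commands: project-specific instructions, here just an ordering of skills.
commands = {
    "/ship": ["lint", "run_tests"],
}

# Agent: an autonomous executor that chains the actions a command names.
def agent(command, ctx=None):
    ctx = ctx or []
    for skill_name in commands[command]:
        ctx = skills[skill_name](ctx)
    return ctx

print(agent("/ship"))  # → ['lint clean', 'tests passed']
```

The real tools add LLM reasoning between steps, but the shape, reusable units composed by project conventions and executed without hand-holding, is the same.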
Analogy: Traditional LLM coding assistance is like having a brilliant consultant who gives you advice but never touches the codebase. Agentic tools are like having a junior developer who can actually implement, test, and debug their own work; they just need your architectural guidance.
The impact: You spend your mental energy on the problems only you can solve: the architectural decisions, the business logic, the user experience trade-offs. The LLM handles the mechanical work of implementation, testing, and debugging.
The difference is profound: in traditional copilot mode, you're the middle person shuttling information between the AI and your systems. In Intent Driven Development, the AI closes the loop itself, only coming back to you when it's done or when it needs architectural guidance.
Other Essential Tools
Language Server Protocol (LSP): Allows LLMs to understand your code structure, navigate definitions, and provide context-aware suggestions just like your IDE does.
Testing Frameworks Integration: LLMs can write, run, and interpret test results, iterating until tests pass, all without your intervention.
Documentation Scrapers: Tools that keep LLMs updated with the latest framework documentation, ensuring they use current APIs and patterns rather than deprecated ones from their training data.
CI/CD Integration: LLMs can trigger builds, interpret CI failures, and even suggest fixes for pipeline issues.
The common thread? All these tools shift LLMs from "guessing based on training data" to "accessing ground truth and validating results".
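The testing-integration idea, for instance, can be sketched as a tiny wrapper that runs a suite and returns a machine-readable verdict the LLM can act on. The inline `python -c` command below is a stand-in for a real test runner invocation such as `pytest -q` in your repo.

```python
import subprocess
import sys

def run_and_interpret(command):
    """Run a test command and return a verdict an LLM can act on."""
    result = subprocess.run(command, capture_output=True, text=True)
    return {
        "passed": result.returncode == 0,          # ground truth, not a guess
        "output": result.stdout + result.stderr,   # raw evidence for debugging
    }

# Stand-in for a real suite: a single inline assertion.
verdict = run_and_interpret([sys.executable, "-c", "assert 1 + 1 == 2"])
print(verdict["passed"])  # → True
```

The `output` field matters as much as the boolean: feeding the real failure text back to the model is what lets it iterate until tests pass.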
Case Study: dbt and the Power of Closed Loops
Let me illustrate this with a concrete example from the data engineering world: dbt (Data Build Tool).
What is dbt?
dbt has become the industry standard for analytics engineering. It's a transformation tool that allows data teams to write modular SQL, test data quality, document models, and version control everything. If you work with data warehouses, you're probably using dbt already, or will work with it or a similar tool at some point in your career.
The Traditional dbt Workflow
Here's what building a new data pipeline typically looks like:
The Pain Points
Notice how much time is spent in that debugging loop? The bit before touching code can take ages (understanding requirements, mapping architecture), and the bit touching code should be quick but often takes even longer as you go back and forth with dbt, debugging strange, unexpected errors you've never seen, managing virtual environments, wrestling with package versions, and questioning your career choices.
Here's what happens in practice:
- The Discovery Phase (Days to Weeks): You spend time understanding what stakeholders actually need, which data sources are available, how to model the transformations, and what the downstream dependencies are.
- The Implementation Phase (Hours to Days): You write the SQL, create tests, run `dbt run`, encounter a cryptic error, Google it, try a fix, run again, encounter another error, manage package conflicts, realise your virtual environment is working as well as a bee with no wings, fix that, run again…
- The Validation Phase (Hours): Tests fail in unexpected ways, CI complains about standards you forgot about, you iterate until everything passes.
- The Mental Lock-in: By this point, you're so deep in the weeds that even if you realise your initial architecture wasn't optimal, you're reluctant to pivot. "Well, I've already done all this work, I don't want to just throw it away."
The Intent Driven Approach with dbt MCP
Now imagine a different workflow powered by dbt's MCP integration:
You: "I need to build an incremental model that tracks user activity events, joining with the user dimension table, and implementing the `extract_data_for_incremental_run` macro pattern we use for performance."
Here you spend your time writing out the intent for your project clearly, using the recommended workflow patterns to get the most out of your LLM buddy: you draw out the architecture, brainstorm exactly what you believe is the best approach for your use case, and hand all of this context to your LLM.
Your LLM assistant:
- Accesses your company's dbt project through MCP
- Understands your existing macro patterns and conventions
- Generates the model SQL following your standards
- Writes appropriate tests
- Runs `dbt run` itself and interprets any errors
- Debugs and re-runs until successful
- Runs `dbt test` and fixes any test failures
- Validates against CI standards
- Comes back to you: "Model created and tested successfully. Ready for review."
What did you do? You spent your time thinking about the big problems: whether this incremental strategy is right, if the grain of the model makes sense for downstream use cases, if the business logic correctly captures user activity. You didn't waste mental energy on syntax errors, package conflicts, or deciphering error messages.
What did the LLM do? It handled the mechanical work of implementation, testing, and debugging. It accessed your company's exact patterns through MCP, ran commands through agent capabilities, and iterated until everything worked.
Closing the Loop
This is what I call "closing the loop":
The LLM has a complete feedback loop. It can generate, execute, observe results, and correct itself. You're only brought back into the conversation when:
- The work is complete and ready for your review
- A genuine architectural decision is needed
- Something unexpected happens that requires human judgement
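A minimal sketch of this loop, with the real `dbt` CLI calls and the LLM repair step replaced by simulated stand-ins so it runs anywhere; in practice `execute` would shell out to `dbt run` / `dbt test` and `repair` would be an LLM call fed the actual error output.

```python
def close_the_loop(steps, execute, repair, max_retries=3):
    """Run each step, feeding failures back to a repair step, and only
    surface to the human when all steps pass or retries are exhausted."""
    for step in steps:
        for _ in range(max_retries):
            ok, output = execute(step)
            if ok:
                break
            repair(step, output)  # hand the real error text back to the LLM
        else:
            return f"needs human judgement: {step}"
    return "Model created and tested successfully. Ready for review."

# Simulated run: `dbt test` fails once, the "LLM" fixes it, the loop closes.
state = {"dbt test": 1}  # number of pending failures per step

def execute(step):
    if state.get(step, 0) > 0:
        return False, f"{step}: 1 test failed"
    return True, "pass"

def repair(step, output):
    state[step] -= 1  # pretend the fix landed

print(close_the_loop(["dbt run", "dbt test"], execute, repair))
```

Note where the human sits: outside the retry loop entirely, consulted only on completion or when the bounded retries run out.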
The result? You spend your time entirely on the big problems that only you can solve. You have the mental clarity to pivot if needed, because you're not bound by sunk cost. You're not so deep in the weeds of getting the damn thing to run that you lose sight of whether you're building the right thing.
And there's a bonus: parallelism. With the LLM handling the mechanical work, you can be working on multiple features or projects simultaneously, each with its own LLM assistant churning away in the background. Once you unlock this, the productivity gains are astronomical.
The Platform-First Philosophy
Here's the key insight that makes all of this possible: you must build the platform first for your company or project.
Intent Driven Development isn't about finding the perfect LLM or waiting for the next breakthrough in AI capabilities. It's about deliberately constructing an environment where LLMs can:
- Access accurate, up-to-date information about your systems
- Execute commands and observe real results
- Validate their own work against your standards
- Self-correct based on actual feedback, not hallucinated assumptions
This means:
- Implementing MCP servers for your critical tools and documentation
- Creating skills and commands that encode your team's workflows
- Setting up agentic execution environments where LLMs can safely test their work
- Establishing feedback loops where LLMs can learn from actual results
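As an illustration of the first of these, here is a stdlib-only sketch of the request handling inside an MCP-style documentation server. A real implementation would use an MCP SDK and a proper transport (stdio or HTTP); the tool name, topic, and docs content below are hypothetical.

```python
import json

# Hypothetical internal docs the server exposes as ground truth.
DOCS = {"incremental-models": "Use the extract_data_for_incremental_run macro."}

def handle(message: str) -> str:
    """Answer one JSON-RPC-shaped tool call with real data, not guesses."""
    req = json.loads(message)
    if req["method"] == "tools/call" and req["params"]["name"] == "query_docs":
        topic = req["params"]["arguments"]["topic"]
        result = DOCS.get(topic, "not found")
    else:
        result = "unknown method or tool"
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# A client asking about your team's incremental-model conventions:
response = handle(json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "query_docs", "arguments": {"topic": "incremental-models"}},
}))
print(response)
```

Even this toy version shows the point: the answer comes from your documentation at request time, not from stale training data.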
Once you've built this platform, something magical happens: you stop playing slots with AI suggestions and start having a genuinely productive partnership with an intelligent (if unwise) assistant that you can multiply at very low cost.
The Future Is Intent
We're at an inflection point in software development. The future isn't "AI replaces developers" or "developers ignore AI and keep doing things the old way" (which I see more and more people doing as they become exhausted by repetitive, formulaic responses, code, and badly thought-out outputs). The future is developers who operate at a higher level of abstraction, thinking in terms of intent, architecture, and user value whilst AI handles the mechanical translation to working code.
This future arrives faster for teams that build the platforms to enable it. Every MCP server you implement, every agentic workflow you establish, every feedback loop you close, these are investments in a world where your entire team can work at the speed of thought rather than the speed of typing.
The vibers will keep prompting for their fully-formed apps, and that's fine; it's a great north star and I will definitely be part of this group! But those who want to get the most out of these tools now and unlock coding superpowers, building real systems for real users, can achieve something remarkable TODAY: spending all our mental energy on problems that actually require human insight, whilst AI handles the repeatable mechanical stuff.
Build your platforms. Define your intents. Make AI actually useful.
Welcome to Intent Driven Development.
Peace and Love.
Bruno Campos' Blog