Effective Vibe Coding Runs on MCPs
Introduction
During a recent episode of the Dwarkesh Podcast, Trenton Bricken had a quote that’s become a bit of a mantra for me:
My question is always, are you giving the model enough context? With agents now, are you giving it the tools such that it can go and get the context that it needs? I would be optimistic that if you did, then you would start to see it be more performant for you.
Since we’re all now coding with AI agents, one of the most important problems we face is context. Fortunately, we have a solution: the Model Context Protocol (MCP). MCPs actually solve a couple of problems:
- They make it possible for AI agents to use tools and APIs
- They connect the agent to many new resources, all of which can be ingested as context
Tool use is a super interesting topic, but it won’t be the focus of this post. Instead, I’ll be writing about setting up AI agents, particularly GitHub Copilot in VS Code, to use MCPs. I’ll do this with a high-level example of a problem I recently solved at work, along with some general tips for using MCPs in your own work.
MCPs at a glance
Generally speaking, MCPs are a way to connect a model in a local client, like the Claude app or Copilot in VS Code, to a server that can provide context. The protocol is designed to be simple and extensible, allowing for a wide range of applications.
A basic version of the architecture, as explained on the MCP website, looks like this. The key takeaway from the diagram below is the flow: your local machine (like VS Code) hosts a client that communicates via MCP with various “MCP servers.” These servers can then connect to local data sources or reach out to the internet for information, feeding it all back to your AI agent as context.
As the diagram indicates, the protocol handles communication between a host client and a server. The earliest MCP servers were designed to run on a local machine, but the protocol has been extended to allow for remote servers as well. This is particularly useful for accessing large datasets or APIs that are not available locally.
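Under the hood, the messages that flow between client and server are JSON-RPC 2.0. As a rough sketch (the method name, tool name, and argument shape below are based on my reading of the MCP spec and the fetch reference server, so treat them as illustrative rather than authoritative), a client asking a fetch server to retrieve a page might send something like:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "fetch",
    "arguments": { "url": "https://example.com" }
  }
}

and the server responds with the fetched content packaged so the agent can read it as context:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      { "type": "text", "text": "...the fetched page, converted to text..." }
    ]
  }
}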
VS Code and GitHub Copilot have built-in support for MCP, which makes servers easy to set up and use. Anthropic also maintains some reference servers that are surprisingly useful despite the “reference” designation. These include:
- Filesystem – A way for a model to access a local file system.
- Fetch – Get the content of a URL.
- Memory – Persistent storage of notes, previous conversations, etc.
- Sequential Thinking – Reasoning for any model, not just those trained with that capability.
Setting up MCPs in VS Code
Setting up MCPs in VS Code is straightforward, especially if you’re familiar with manually editing the settings.json file. Following the documentation, you need to:
- Open VS Code Settings: Use the command palette (Ctrl+Shift+P or Cmd+Shift+P) and type Preferences: Open User Settings (JSON) to open your settings.json file directly.
- Locate or Create the mcp block: Find the "mcp": {} object in your settings file. If it doesn’t exist, you can add it.
- Configure Your Servers: Add your desired MCP servers to the servers object following the MCP format. Each server needs a unique name and a command to run it.
- Add Inputs: If your MCP server requires inputs, you can define them in the inputs array. This is useful for things like API keys or other configuration parameters.
I’ve been using the fetch and perplexity servers most heavily. Here’s what the relevant section of my settings.json file looks like:
{
  "mcp": {
    "inputs": [
      {
        "type": "promptString",
        "id": "perplexity-key",
        "description": "Perplexity API Key",
        "password": true
      }
    ],
    "servers": {
      "fetch": {
        "command": "uvx",
        "args": ["mcp-server-fetch"]
      },
      "perplexity-mcp": {
        "command": "uvx",
        "type": "stdio",
        "args": ["perplexity-mcp"],
        "env": {
          "PERPLEXITY_API_KEY": "${input:perplexity-key}"
        }
      }
    }
  }
}
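Adding more of the reference servers mentioned earlier follows the same pattern: each entry under servers gets a unique name plus the command that launches it. As a sketch, a filesystem entry might sit alongside fetch and perplexity-mcp like this (the npx package name is the one published in the reference servers repo, and /path/to/project is a placeholder for whichever directories you want the model to be able to read; double-check both against the repo before relying on them):

      "filesystem": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
      }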
An Agent-centered Software Development Lifecycle
At Google, we had a software development lifecycle that, I was surprised to discover, is very amenable to agent-driven workflows. While not explicitly documented in the SWE Book, we can produce a rough outline of the process. I’ll follow the version from theproductmanager.com:
Image courtesy of Claude.
The stages are:
- Planning and Analysis: Defining the project’s purpose, scope, and goals.
- Defining Requirements: Translating the planning insights into detailed, actionable software requirements.
- Design: Creating the architectural blueprint of the software, including user flows, wireframes, and technical specifications.
- Development: The actual coding and implementation of the software based on the design.
- Testing: Rigorously checking the software for functionality, performance, security, and usability.
- Deployment: Releasing the finished software in a controlled and managed environment.
- Maintenance: Ongoing activities to fix bugs, address issues, and make enhancements to improve the user experience.
Agents fit well into this lifecycle, particularly the first several stages. For example, in the Planning and Analysis stage, you can use an agent to gather information about the problem domain, identify user needs, and analyze existing solutions. In the Defining Requirements stage, agents can help translate high-level goals into specific, actionable requirements. Working with a reasoning agent makes the Design stage especially easy. And obviously, agents now play a key role in the Development and Testing stages, where they can assist with coding, debugging, and writing tests.
This isn’t all that different from the process discussed by @hrishioa on X:
Another way to make Claude Code a 10x engineer for a complex change:
1. Make a plan for the change (if you need it) with Gemini.
2. Open a new branch.
3. Ask Claude to implement the change and maintain a https://t.co/S1FkGAvYtQ that is an APPEND-ONLY log with gotchas, judgement… https://t.co/IIqdsQ0hs7
— Hrishi (@hrishioa) June 21, 2025
Example: MCPs solving a problem at work
At Delphos Labs, I recently worked on a project that required me to develop an agentic tool. I had iterated on it a few times, but I was unhappy with the results. Going back to the drawing board, I followed something much closer to the process above.
- I had a large collection of prompts for the models in the task, but it was with Gemini that I refined them into a particular plan
- Once the plan was set, I could use it to produce a first draft of the agents for my problem
Gemini recommended using CrewAI. For whatever reason, the first draft focused on a class-based approach, which is a little outdated. There are still versions of this pattern in the CrewAI Examples GitHub repo, so it’s not that surprising a pattern for a model to follow.
Nonetheless, I wanted to get it fixed. How do I get the right context to the agent? I used the fetch MCP server to pull the context from a URL. With a version of my code already in VS Code, I started with prompts like this:
Follow the examples in the documentation here:
https://docs.crewai.com/guides/crews/first-crew
Update the classes to use the yaml-based approach instead of the class-based approach.
Produce all necessary new files as needed.
And it was able to do this almost entirely on its own. I had some queries that I developed in a Colab notebook. Using the # shortcut in Copilot, the agent could read the notebook, execute its code as needed, and then translate those queries into my project.
As I was prompting the agent to develop unit tests, I noticed that the pipeline could use refactoring. A query to the agent was able to guide this in the right direction for the project: search for how to break this particular data dependency that I'm seeing.
While these examples are admittedly high level and light on details, I think the point is clear. The models are quite capable, but the process most developers follow often requires referencing docs, looking at other assets, and searching for information about problems. MCPs bring all of this to the agent, making the experience much closer to working with another developer. It’s also quite clear that this is the long-term future of the craft: the developer is a guide and a source of feedback, while the agent continues to build out capabilities that can be directed towards the problem at hand.
Tools used
- Research happened in Copilot, with the Perplexity MCP server
- Copilot also helped draft the post and provided several rounds of feedback
- Gemini was the editor
- The diagram was created using Claude, based on a summary from Gemini.