Context Serialization


Apr 29, 2025 - 13:22

In a recent edition of The Sequence Engineering newsletter, “Why Did MCP Win?,” the authors point to context serialization and exchange as a reason—perhaps the most important reason—why everyone’s talking about the Model Context Protocol. I was puzzled by this—I’ve read a lot of technical and semitechnical posts about MCP and haven’t seen context serialization mentioned. There are tutorials, lists of available MCP servers, and much more, but nothing that mentions context serialization itself. I was even more puzzled after reading through the MCP specification, in which the terms “context serialization” and “context exchange” don’t appear.

What’s going on? The authors of the Sequence Engineering piece found the bigger picture, something more substantial than just using MCP to let Claude control Ableton. (Though that’s fun. Suno, beware!) It’s not just about letting language models drive traditional applications through a standard API. There isn’t a separate section on context serialization because all of MCP is about context serialization. That’s why it’s called the Model Context Protocol. Yes, it provides ways for applications to tell models about their capabilities so that agents can use those capabilities to complete a task. But it also gives models the means to share the current context with other applications that can make use of it. For traditional applications like GitHub, sharing context is meaningless. For the latest generation of applications that use networks of models, sharing context opens up new possibilities.

Here’s a relatively simple example. You may be using AI to write a program. You add a new feature, test it, and it works. What happens next? From within your IDE, you can call traditional applications like Git to commit the changes—not a big deal, and some AI tools like Aider can already do that. But you also want to send a message to your manager and team members describing the project’s current state. Your AI-enhanced IDE might be able to generate an email. But Gmail has its own integrations with Gemini for writing email, and you’d prefer to use that. So your IDE can package everything relevant about your context and send it to Gemini, with instructions to decide what’s important, generate the message, and send the message via Gmail after it has been created. That’s different: Instead of an AI using a traditional application, now we have two AIs collaborating to complete a task. There can even be a conversation between the AIs about what to say in the message. (And you need to confirm that the result meets your expectations—vibe emailing to a boss seems like an antipattern.)
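The handoff described above is, at bottom, just serialization: the IDE packages what it knows into a model-agnostic structure that another model can deserialize and act on. Here is a minimal sketch of that idea in Python; the field names and values are hypothetical illustrations, not part of the MCP specification.

```python
import json

# Hypothetical context an AI-enhanced IDE might package up after a
# successful feature commit. Every field here is illustrative.
context = {
    "task": "notify manager and team about completed feature",
    "repository": "example/project",              # hypothetical repo name
    "recent_commits": ["Add CSV export feature"],
    "test_results": {"passed": 42, "failed": 0},
    "instructions": (
        "Decide what is important, draft a status email, "
        "and send it via the user's mail integration."
    ),
}

# Serialization is nothing more than a well-defined encoding of that
# context -- plain JSON in this sketch.
payload = json.dumps(context, indent=2)

# The receiving model (Gemini, in the example above) deserializes the
# same structure and works from it rather than from screen contents.
received = json.loads(payload)
```

The point is not the particular fields but that both sides agree on an encoding, so the second model starts with real context instead of a pasted blob of text.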

Now we can start talking about networks of AIs working together. Here’s an example that’s only somewhat more complex. Imagine an AI application that helps farmers plan what they will plant. That application might want to use:

  • An economics service to forecast crop prices
  • A service to forecast seed prices
  • A service to forecast fertilizer prices
  • A service to forecast fuel prices
  • A weather service
  • An agronomy model that predicts what crops will grow well at the farm’s location

The application would probably require several more services that I can’t imagine. Is there an entomology model that can forecast insect infestations? (Yes, there is.) AI can already do a good job of predicting weather, and the financial industry is using AI to do economic modeling. One could imagine doing all of this on a giant, “know everything” LLM (maybe GPT-6 or 7). But one thing we’re learning is that smaller, specialized models often outperform large generalist models in their areas of specialization. An AI that models crop prices should have access to a lot of important data that isn’t public. So should models that forecast seed prices, fertilizer prices, and fuel prices. All of these models are probably subscription-based services. It’s likely that a large farming business or cooperative would develop proprietary in-house models.
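Each of these specialist services could be exposed to the planning application through MCP’s standard message types. MCP is built on JSON-RPC 2.0, and a tool invocation travels as a `tools/call` request. The sketch below shows what one such message might look like on the wire; the tool name, arguments, and field-location values are hypothetical, while the envelope and method name follow the protocol.

```python
import json

# A JSON-RPC 2.0 request as MCP defines it for invoking a tool.
# "forecast_weather" and its arguments are invented for illustration;
# "tools/call" and the envelope shape come from the MCP specification.
call_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "forecast_weather",                   # hypothetical tool
        "arguments": {
            "location": {"lat": 41.9, "lon": -93.6},  # hypothetical field
            "horizon_days": 180,
        },
    },
}

# What actually crosses the transport is just this serialized string.
wire = json.dumps(call_request)
decoded = json.loads(wire)
```

Because every service speaks the same envelope, the farmer’s application doesn’t need bespoke glue code for the weather model versus the price models; it discovers each one’s tools and calls them the same way.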

The farmer’s AI needs to gather information from these specialized models by sending context to them: what the farmer wants to know, of course, but also the location of the fields, weather patterns over the past year, the farm’s production over the past few years, the farm’s technological capabilities, the availability of resources like water, and more. Furthermore, it’s not just a matter of asking each of these models a question, getting the answers, and generating a result; a conversation needs to happen between the specialist AIs because each answer will influence the others. It may be possible to predict the weather without knowing about economics, but you can’t do agricultural economics if you don’t understand the weather.

This is where MCP’s value really lies. Building an application that asks models questions? That’s definitely useful, but any high school student can build an app that sends a prompt to ChatGPT and screen-scrapes the results. Anthropic’s Computer Use API goes a step further by automating the clicking and screen-scraping. The real value is in connecting models to each other so they can have conversations—so that a model that predicts the price of corn can discover weather forecasts for the coming year. We can build networks of AI models and agents. That’s what MCP supports. We couldn’t imagine this application just a few years ago. Now we can’t just imagine it, we can start building it. As Blaise Agüera y Arcas argues, intelligence is collective and social. MCP gives us the tools to build artificial social intelligence.
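That back-and-forth can be sketched as a loop in which each specialist reads a shared serialized context, adds its answer, and the round repeats until no answer changes. The two stand-in models and the convergence rule below are toys, assumed purely for illustration.

```python
# Toy specialists standing in for real MCP-connected services.
def weather_model(context):
    # A weather specialist can answer without knowing any economics.
    return {"growing_season": "wet"}

def crop_price_model(context):
    # A price specialist conditions on the weather estimate: a wet year
    # suggests a large harvest, which pushes prices down.
    if context.get("growing_season") == "wet":
        return {"corn_price": "low"}
    return {"corn_price": "high"}

def converse(specialists, context, max_rounds=5):
    """Let specialists repeatedly update a shared context until it
    stabilizes -- a crude model of a conversation between AIs."""
    for _ in range(max_rounds):
        previous = dict(context)
        for model in specialists:
            context.update(model(context))
        if context == previous:   # no answer changed this round
            break
    return context

result = converse([weather_model, crop_price_model],
                  {"location": "central Iowa"})  # hypothetical farm
```

The asymmetry in the post shows up directly in the code: the weather model ignores the rest of the context, while the price model cannot answer sensibly until the weather answer is in it.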

The industry has been talking about agents for some time now—decades, really. The most recent burst of agentic discussion started just over a year ago. For the past year we’ve had models that were good enough, but we were missing an important piece of the puzzle: the ability to send context from one model to another. MCP provides some of the missing pieces. Google’s new A2A protocol provides more of them. That’s what context serialization is all about, and that’s what it enables: networks of collaborating AIs, each acting as a specialist. Now, the only question is: What will we build?