Exploring the MCP Ecosystem: Looking Under the Hood

In my previous article, I introduced the Model Context Protocol (MCP) as the USB-C of AI integrations: a standardized way to connect LLMs with external tools and data sources. Today, we're strapping on our digital spelunking gear and descending deeper into the mechanics of MCP.
Fair warning: we're about to get technical. But don't worry – even if you're not a hardcore developer, I've sprinkled in enough analogies and plain English explanations that you'll walk away with a much better understanding of how MCP actually works. So, grab your favorite caffeinated beverage and let's dive in!
Function Calling: The Prerequisite for MCP
Before we can understand MCP, we need to address a fundamental question: Can any LLM use MCP, or is there a prerequisite?
The simple answer is that MCP depends entirely on a model's ability to use function calling (sometimes called "tool use"). If you're not familiar with function calling, it's a capability that allows LLMs to:
- Understand available functions/tools described in JSON schema format
- Decide when to use these functions based on user queries
- Invoke these functions with the correct parameters
- Process the results returned from these functions
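To make those four steps concrete, here's a minimal sketch in Python of the pattern most function-calling APIs follow. The tool name, schema, and `dispatch` helper are illustrative inventions for this example, not part of any specific vendor's SDK; the model's side of the exchange is simulated with a hard-coded tool call.

```python
import json

# Step 1: a tool described in JSON Schema, the format function-calling
# models are given so they know what's available (name/fields illustrative).
GET_WEATHER_TOOL = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}

def get_weather(city: str) -> dict:
    # Stand-in implementation; a real tool would call a weather API.
    return {"city": city, "temp_c": 21, "conditions": "sunny"}

TOOLS = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> str:
    """Steps 3 and 4: invoke the function the model chose, using the
    arguments the model supplied, and serialize the result so it can be
    handed back to the model for processing."""
    func = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return json.dumps(func(**args))

# Step 2 happens inside the model: given the schema above and a user query
# like "What's the weather in Paris?", a function-calling model emits a
# structured request such as this (simulated here):
model_tool_call = {"name": "get_weather", "arguments": '{"city": "Paris"}'}
print(dispatch(model_tool_call))
```

The key point is that the model never executes anything itself; it only emits a structured request, and your application (or, with MCP, the client and server) performs the actual call and feeds the result back.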
Think of it like knowing how to use a phone book. It's not enough to be intelligent – you need to understand what a phone book is, when to use it, how to look up entries, and what to do with the phone numbers you find.
Not all models offer this capability, and those that do vary in sophistication. Want to see which models can handle function calling? Check out the Berkeley Function Calling Leaderboard - it's an excellent resource that ranks models based on their function calling abilities.