Checking in on MCP: From Vibes to Enterprise
Tim Nichols
CEO/Founder

Anthropic’s Model Context Protocol has quickly become the standard for connecting AI with external tools. It also has a ton of hype. Now that we’ve shipped a few AI integrations, here are our thoughts on the red-hot MCP.
Anthropic’s Model Context Protocol
Anthropic announced their Model Context Protocol in November 2024 as an open standard for connecting AI tooling with external systems. MCP uses a client-server architecture to let a host application (ex: an LLM) access specific data or functionality on an MCP server (ex: Google Drive). In short, any internet company can safely expose a subset of its data and capabilities to an LLM by building an MCP server.
In 2025 MCP has become the buzzword of the AI community and has quickly eclipsed alternative standards. So what’s the fuss? If this sounds like “GraphQL for LLMs,” Anthropic won’t argue with you.
MCP has exploded in popularity because (1) it’s a lightweight solution for exchanging context, and (2) an emerging ‘AI Engineering’ / ‘Vibe Coding’ community has the tooling and audience to create buzzy PoCs like:
- Generating code implementation of a Figma design
- Creating a 3D Image from a Reference Image
- Letting an LLM query Kubernetes resources
How MCP Works
MCP allows a probabilistic model to consume a deterministic API.
If you need a tl;dr: MCP servers expose information to MCP clients (LLMs) through three primitives:
- Tools expose actions from your server to LLMs
- Resources expose data from your server to LLMs
- Prompts are reusable templates that are processed by an LLM
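Under the hood, these primitives are just JSON exchanged over JSON-RPC 2.0. A minimal sketch of what a server might advertise in response to a `tools/list` request — the `get_deploy_status` tool and its schema are hypothetical, purely for illustration:

```python
import json

# Hypothetical tools/list response an MCP server might return.
# The "get_deploy_status" tool and its schema are illustrative, not a real API.
tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_deploy_status",
                "description": "Look up the status of a deployment",
                "inputSchema": {  # JSON Schema describing the tool's inputs
                    "type": "object",
                    "properties": {"deploy_id": {"type": "string"}},
                    "required": ["deploy_id"],
                },
            }
        ]
    },
}

print(json.dumps(tools_list_response, indent=2))
```

The `inputSchema` is the interesting part: it’s an ordinary JSON Schema, which is how a probabilistic model learns the exact arguments a deterministic API expects.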
The big idea is that you’ve got building blocks to specify input parameters for LLMs. Today most MCP servers are tool-centric demos, but the capability is clearly there for more advanced agent-to-agent communication. Now read the docs and then find a YouTube explainer from your favorite DevRel influencer.
MCP vs Copilot Extensions
A core principle of Flightcrew is to place SRE, Infra and Compliance insights where engineers need them. Naturally we build a lot of integrations, and last month we announced our partnership with GitHub Copilot.
We shared learnings from building a Copilot Extension, which offers a very different path than MCP.
| | Copilot Extensions | MCP |
|---|---|---|
| Authorization | GitHub App Installation | Draft spec |
| Statefulness | Connection stays open until the prompt is answered | Connection stays open until the client disconnects |
| LLM integration | Supported GitHub models or BYO | BYO |
| Multiplayer tool interaction | Through chat context | Through inputs/outputs |
| Catalog / tool discovery | Curated | Fragmented |
Copilot uses GitHub for authentication, so tool authors don’t have to validate user information or permissions. The majority of development work is in managing prompts, interacting with LLMs through GitHub APIs, and managing a webhook.
Developing an MCP server means bringing your own LLM and building a few endpoints. This is easy enough with MCP SDKs and today’s AI SDKs, but it’s a slightly different skillset.
MCP is based on one-way communication: the client makes a request to the server and gets a response back that it processes and displays to the user. There is no back-and-forth between the MCP client and server, and servers have no access to the client or to primitives from other MCP servers unless those are provided through arguments.
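That one-shot exchange is a single JSON-RPC call: the client sends a `tools/call` request with arguments, the server answers once, and the host model folds the result into its context. A sketch of the wire format, with a hypothetical tool name and payload:

```python
import json

# Client -> server: invoke a tool by name with arguments ("get_deploy_status"
# and its payload are hypothetical, for illustration only).
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "get_deploy_status",
        "arguments": {"deploy_id": "deploy-123"},
    },
}

# Server -> client: one response, then the exchange is over. The server never
# calls back into the client or into other MCP servers' primitives.
response = {
    "jsonrpc": "2.0",
    "id": 7,  # matches the request id
    "result": {
        "content": [{"type": "text", "text": "deploy-123: healthy"}]
    },
}

assert response["id"] == request["id"]
print(json.dumps(response["result"], indent=2))
```

Everything the server knows about the outside world has to arrive in `params.arguments`; that constraint is what keeps the protocol simple, and also what limits cross-tool coordination.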
Compare this to GitHub’s Copilot Extensions, which give authors full access to Copilot’s LLM and user prompts. The disadvantage is that you are limited to one tool per prompt, and tools cannot communicate except through shared chat context. This can lead to clumsy but audit-friendly workflows where you tag in different tools across sequential prompts.
What should you build first - A Copilot Extension or an MCP Server? Well, like everything in AI - context is everything ;)
At Flightcrew we’ve prototyped an MCP Server but are holding off on release until MCP is ready for Enterprise deployments.
MCP’s Enterprise Gaps
Lack of Authentication/Authorization
MCP focuses on self-hosted tools that run locally, or those that already expose tokens for user access (ex: Supabase). This is great for PoCs and personal use, but it won’t cut it for enterprise engagements where security and compliance are table stakes.
Anthropic has drafted a WIP spec for OAuth2 and is facilitating an active discussion. Once it’s adopted by MCP clients, we’re off to the races.
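If the draft lands roughly as proposed, a remote MCP client would attach a standard OAuth bearer token to its HTTP requests. A sketch with placeholder endpoint and token (both made up; nothing here is from the spec’s final form):

```python
import json
import urllib.request

# Hypothetical remote MCP endpoint and access token -- placeholders only.
MCP_ENDPOINT = "https://mcp.example.com/rpc"
ACCESS_TOKEN = "example-oauth-access-token"

body = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"}).encode()
req = urllib.request.Request(
    MCP_ENDPOINT,
    data=body,
    headers={
        "Content-Type": "application/json",
        # Standard OAuth2 bearer credential, as the draft auth spec proposes
        "Authorization": f"Bearer {ACCESS_TOKEN}",
    },
    method="POST",
)
# Not actually sent here; a real client would call urllib.request.urlopen(req).
print(req.get_header("Authorization"))
```

The point is that the proposal reuses boring, well-understood OAuth plumbing rather than inventing a new credential scheme, which is exactly what enterprise security teams want to see.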
Stateless Operation
MCP was launched as a stateful protocol to allow for complex, agentic interaction. However, this limits integration with many apps and platforms. After some discussion, Anthropic is steering MCP towards “Streamable HTTP,” which should drastically expand the MCP server universe.
MCP Server Catalog & Discovery
GitHub’s ecosystem provides many, many benefits to startups. Benefit #221 is that once you’ve been approved for GitHub’s Marketplace, your users can easily find, evaluate, and install your App or Extension.
MCP, by contrast, brings the corners and quirks of a vibrant open source community. There are already multiple community-driven MCP catalogs like Glama, mcp-get, PulseMCP, Cursor Directory and MCP.so. Some of these catalogs perform only basic curation, while others rate servers on security, licensing and quality. In short, it’s all new, so don’t be surprised if a customer’s security team has a lot of follow-up questions.
The good news is that Auth and Stateless support appear imminent and the Anthropic team is best-in-class at building in public.
MVP MCP
It's still early for MCP, and we're excited to watch how the capabilities and community evolve. If you're interested in working with Flightcrew through MCP, please reach out.
Tim Nichols
CEO/Founder
Tim was a Product Manager on Google Kubernetes Engine and led Machine Learning teams at Spotify before starting Flightcrew. He graduated from Stanford University and lives in New York City. Follow on Bluesky