

Model Context Protocol
You’ve probably heard of MCP by now. It stands for Model Context Protocol, and everyone is talking about it.
The easiest way to think about MCP in 2025 is that it “helps you build agents and complex workflows on top of LLMs.” [Ref]
“MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.”
Model Context Protocol, introduced by Anthropic in November 2024, started as “a standard for connecting AI assistants to the systems where data lives.”
As they put it, its aim is to help frontier models produce better, more relevant responses.
MCP arose, in part, from the need to tame the complexity of prompt engineering as LLMs began to be used in failure-critical domains across enterprises, government, and beyond. It also responded to the growing scale at which these applications are being deployed.
In 2025 we’ve seen:
Community adoption by open-source libraries and AI platform companies.
Regulatory and compliance attention, with industry watchdogs and government agencies focusing on standards for context-sharing protocols
Extensions to the protocol, including specialized blocks for data provenance, advanced error handling, and additional languages
Third-party tools that aim to help with MCP validation, versioning, and analytics
The latest protocol spec, published in November 2024, describes MCP as follows:
Model Context Protocol (MCP) is an open protocol that enables seamless integration between LLM applications and external data sources and tools. Whether you’re building an AI-powered IDE, enhancing a chat interface, or creating custom AI workflows, MCP provides a standardized way to connect LLMs with the context they need.
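To make that “standardized way to connect” concrete, here is a minimal sketch of an MCP server, assuming the official Python SDK’s FastMCP interface (pip install mcp). The server name, tool, and resource below are hypothetical placeholders, not part of the spec itself.

```python
# Minimal MCP server sketch, assuming the official Python SDK's FastMCP interface.
# The server name, tool, and resource are illustrative placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("docs-helper")  # hypothetical server name

@mcp.tool()
def search_docs(query: str) -> str:
    """Search internal documentation and return the best match (stubbed)."""
    # A real server would query your data source here.
    return f"Top result for '{query}' (stub)"

@mcp.resource("docs://readme")
def readme() -> str:
    """Expose a static document as a resource the client can read."""
    return "Project README contents go here."

if __name__ == "__main__":
    # stdio transport lets an MCP host (IDE, chat app, agent runtime) spawn
    # this process and exchange JSON-RPC messages over stdin/stdout.
    mcp.run(transport="stdio")
```

Any MCP-compatible host can now discover and call these capabilities without custom glue code, which is the “USB-C port” idea in practice.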
The problems that MCP solves (see the client sketch after this list) are:
Inconsistent model behavior
Lack of interoperability
Context misalignment
Scalability and maintenance
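As a sketch of what interoperability buys you, the client below connects to the server above over stdio, lists its tools, and calls one. It again assumes the official MCP Python SDK (ClientSession, stdio_client); the command, script name, and tool name are placeholders.

```python
# Client-side sketch, assuming the official MCP Python SDK.
# The command, script name, and tool name are hypothetical placeholders.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Spawn the server process; any MCP server works here, not just ours.
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()          # protocol handshake
            tools = await session.list_tools()  # discover capabilities
            print([tool.name for tool in tools.tools])
            result = await session.call_tool("search_docs", {"query": "MCP"})
            print(result)

if __name__ == "__main__":
    asyncio.run(main())
```

Because both sides speak the same JSON-RPC-based protocol, swapping in a different server (or a different host) requires no changes to this glue code.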
Join us live to break down the entire MCP spec, talk about the latest industry adoption and open-source progress highlights, and (of course!) to build, ship, and share a complex agentic application with MCP!
📚 You’ll learn:
What MCP is, from the details of the spec to how it is put into practice today
About the origin and rapid adoption of MCP from November 2024 to today
How to build, ship, and share with MCP and what real alternatives (if any!) exist today
🤓 Who should attend the event:
Aspiring AI Engineers who want to deploy agents and LLM apps into the enterprise
AI Engineering leaders who need to understand the emerging standards for LLM applications
Speaker Bios:
“Dr. Greg” Loughnane is the Co-Founder & CEO of AI Makerspace, where he is an instructor for their AI Engineering Bootcamp. Since 2021, he has built and led industry-leading Machine Learning education programs. Previously, he worked as an AI product manager, a university professor teaching AI, an AI consultant and startup advisor, and an ML researcher. He loves trail running and is based in Dayton, Ohio.
Chris “The Wiz” Alexiuk is the Co-Founder & CTO at AI Makerspace, where he is an instructor for their AI Engineering Bootcamp. During the day, he is also a Developer Advocate at NVIDIA. Previously, he was a Founding Machine Learning Engineer, Data Scientist, and ML curriculum developer and instructor. He’s a YouTube content creator whose motto is “Build, build, build!” He loves Dungeons & Dragons and is based in Toronto, Canada.
Follow AI Makerspace on LinkedIn and YouTube to stay updated about workshops, new courses, and corporate training opportunities.