Mixture of Agents: Multi-Agent meets MoE?
We’ve all been building, shipping, and sharing agentic applications, and many tools now exist to help us easily build, ship, and share multi-agent applications.
At the same time, we’ve seen innovation within the transformer architecture used to create GPT-style LLMs that can help us achieve even higher performance across benchmarks. For example, the standard decoder block of multi-head self-attention used in base and instruct-tuned versions of the latest LLM chat models can be enhanced by swapping its feed-forward network for a switching layer that routes each token to one of several expert feed-forward networks. This is called the Mixture of Experts (MoE) approach.
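To make the routing idea concrete, here is a minimal sketch of top-1 ("switch") MoE routing using NumPy in place of a real transformer's tensors. All names and dimensions are illustrative assumptions, not taken from any specific model.

```python
# Minimal top-1 ("switch") Mixture-of-Experts routing sketch.
# Dimensions and weights are illustrative, not from a real model.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff, n_experts, n_tokens = 8, 16, 4, 5

# Each expert is an independent two-layer feed-forward network.
experts = [
    (rng.standard_normal((d_model, d_ff)), rng.standard_normal((d_ff, d_model)))
    for _ in range(n_experts)
]
router = rng.standard_normal((d_model, n_experts))  # routing weights

def switch_ffn(x):
    """Route each token to its single highest-scoring expert."""
    logits = x @ router                          # (n_tokens, n_experts)
    probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
    choice = probs.argmax(axis=-1)               # top-1 expert per token
    out = np.empty_like(x)
    for i, e in enumerate(choice):
        w1, w2 = experts[e]
        h = np.maximum(x[i] @ w1, 0.0)           # ReLU hidden layer
        out[i] = probs[i, e] * (h @ w2)          # gate-scaled expert output
    return out, choice

tokens = rng.standard_normal((n_tokens, d_model))
y, chosen = switch_ffn(tokens)
print(y.shape, chosen)
```

Because each token activates only one expert, the model can grow its parameter count without a proportional increase in per-token compute.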
Recently, the fusion of these two core ideas, multi-agent systems and the Mixture of Experts approach, has given rise to a new technique called Mixture of Agents (MoA). This approach allows us to “harness the collective strengths of multiple LLMs to improve state-of-the-art quality.”
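The layered aggregate-and-synthesize pattern at the heart of MoA can be sketched in a few lines. In the sketch below, `fake_llm` is a stand-in for real chat-completion calls (e.g. to an API), and the model names and function signatures are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the Mixture-of-Agents (MoA) pattern: each layer of
# "agents" answers with the previous layer's responses as context, and
# a final aggregator synthesizes the last layer's answers.

def fake_llm(name, prompt):
    # Stand-in for a chat-completion call to model `name` (illustrative).
    return f"[{name}] answer to: {prompt[:40]}"

def moa(question, layers, aggregator):
    responses = []
    for layer in layers:
        # Agents in each layer see the question plus all responses
        # from the previous layer (aggregate-and-synthesize).
        context = "\n".join(responses)
        prompt = f"{question}\nPrevious answers:\n{context}" if context else question
        responses = [fake_llm(agent, prompt) for agent in layer]
    # The aggregator produces a single final answer.
    final_prompt = f"{question}\nSynthesize these answers:\n" + "\n".join(responses)
    return fake_llm(aggregator, final_prompt)

answer = moa(
    "What is Mixture of Agents?",
    layers=[["model-a", "model-b"], ["model-c", "model-d"]],
    aggregator="model-e",
)
print(answer)
```

Swapping `fake_llm` for real API calls to different open-source models is all that's needed to turn this skeleton into a working MoA pipeline.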
In this event, we’ll learn about the MoA approach and try to understand where it should fit into our toolbox alongside other techniques like MoE and multi-agent systems. We’ll review the original paper, including some of its foundational underlying assumptions, the MoA structure, and the level of performance improvement currently reported.
As always, following an overview of the concepts, we’ll provide a detailed code walkthrough, based primarily on this work from Together AI.
📚 You’ll learn:
How the idea of MoA follows logically from other recent advances in LLM applications
How to build and test the performance of an MoA system, in code
Where MoA fits into the production LLM app toolbox today, and where it (likely) will tomorrow!
🤓 Who should attend the event:
Aspiring AI Engineers who want to understand agents built with open-source LLMs
AI Engineering leaders interested in the latest research directions for agentic applications
Speakers:
“Dr. Greg” Loughnane is the Co-Founder & CEO of AI Makerspace, where he is an instructor for their AI Engineering Bootcamp. Since 2021 he has built and led industry-leading Machine Learning education programs. Previously, he worked as an AI product manager, a university professor teaching AI, an AI consultant and startup advisor, and an ML researcher. He loves trail running and is based in Dayton, Ohio.
Chris “The Wiz” Alexiuk is the Co-Founder & CTO at AI Makerspace, where he is an instructor for their AI Engineering Bootcamp. During the day, he is also a Developer Advocate at NVIDIA. Previously, he was a Founding Machine Learning Engineer, Data Scientist, and ML curriculum developer and instructor. He’s a YouTube content creator whose motto is “Build, build, build!” He loves Dungeons & Dragons and is based in Toronto, Canada.
Follow AI Makerspace on LinkedIn and YouTube to stay updated about workshops, new courses, and corporate training opportunities.