

Building AI Agents with Gemma 3
AI agents are everywhere—but what’s actually happening under the hood? In this workshop, we’ll go beyond theory and build a real agent from the ground up using Gemma 3 and LLM function calling.
Through hands-on coding, we’ll walk through:
1️⃣ LLMs as Agents – What does it mean for an LLM to act as an agent?
2️⃣ Function Calling in Action – How models generate structured outputs that trigger function execution.
3️⃣ Building an AI Agent – Creating a Gradio-powered app that takes user input, generates function calls, and returns structured responses.
4️⃣ Logging & Observability – Implementing tracing, logging, and debugging tools to inspect every step of the agent's reasoning process.
5️⃣ Iteration & Improvement – Debugging failure cases, optimizing prompts, and making function calling more reliable.
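To preview the core idea in step 2️⃣, here is a minimal sketch of LLM function calling: the model is prompted to reply with structured JSON naming a function and its arguments, and the app parses that output and executes the matching function. The tool registry, JSON shape, and `get_weather` function are illustrative assumptions, not the workshop's actual code or a Gemma-specific API.

```python
import json

# A hypothetical registry of tools the agent is allowed to call.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def dispatch(model_output: str) -> str:
    """Parse a structured function call emitted by the model and execute it.

    Assumes the model was prompted to answer with JSON of the form
    {"name": ..., "arguments": {...}} — a common convention, not a
    requirement of any particular model.
    """
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]          # look up the requested tool
    return fn(**call["arguments"])    # run it with the model's arguments

# Simulated model output (in the workshop, this would come from Gemma 3):
reply = '{"name": "get_weather", "arguments": {"city": "Paris"}}'
print(dispatch(reply))  # → Sunny in Paris
```

The agent loop then feeds the function's return value back to the model so it can compose a final, human-readable answer.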
By the end, you’ll understand the full pipeline, from user input to execution and back to a structured response—with full observability into what’s happening at every stage.
What You’ll Build
✅ An AI-powered function-calling agent using Google’s Gemma 3.
✅ A Gradio-based UI that shows both the end-user experience and the underlying agent mechanics.
✅ A complete logging and observability system that captures every step of the function-calling process.
✅ A debugging workflow to test, refine, and improve agent reliability.
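As a taste of the observability piece, step-level tracing can be as simple as a decorator that logs each agent step's inputs and outputs. The decorator name and the example tool below are illustrative, not the workshop's actual logging system.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(name)s %(message)s")
log = logging.getLogger("agent")

def traced(step: str):
    """Decorator that logs the inputs and outputs of one agent step."""
    def wrap(fn):
        def inner(*args, **kwargs):
            log.info("%s: input=%r %r", step, args, kwargs)
            result = fn(*args, **kwargs)
            log.info("%s: output=%r", step, result)
            return result
        return inner
    return wrap

@traced("function_call")
def get_weather(city: str) -> str:
    # Stand-in for a real tool the agent might invoke.
    return f"Sunny in {city}"

get_weather("Paris")
```

Wrapping every stage this way (prompt construction, model call, function execution, response assembly) gives a step-by-step trace you can replay when debugging failure cases.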
Who Should Join?
🔹 Developers who want to build real AI-powered apps, not just demos.
🔹 Engineers interested in function calling, agent reasoning, and debugging workflows.
🔹 AI/ML practitioners looking to understand when to use agents—and when not to.
🎟 Free to attend—sign up to get access to the livestream, Discord Q&A, and post-event recording!