

Deep Research: The API
OpenAI has officially released Deep Research through the API! The layers of abstraction continue to increase for builders: now we have to answer not only which model to use, but which level of abstraction to build on in our applications.
For example, should you use a traditional model (e.g., GPT-4.1), a reasoning model (e.g., o3), or a deep research model (e.g., o3-deep-research) in your application?
We’re pumped about this release and think it’s worth covering for AI Engineers, because it is quite literally the next level of abstraction for builders.
Putting OpenAI’s Deep Research Models to the Test
In this session, we will probe how o3-deep-research and o4-mini-deep-research tackle complex questions. We’ll watch the models break a high-level prompt into sub-tasks, collect sources from the web (and optional MCP data stores), run code for analysis, and assemble a citation-rich draft. Along the way, we’ll note where the workflow shines, where it struggles, and what kinds of queries appear to stretch its limits.
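As a rough sketch of the kind of request we'll be issuing (field names follow OpenAI's published Responses API examples, but treat them as assumptions and check the current API reference before use), a Deep Research call might be configured like this:

```python
import json

# Hypothetical request payload for a Deep Research call via the Responses API.
# The field names below are assumptions based on OpenAI's docs, not a
# guaranteed contract.
payload = {
    "model": "o3-deep-research",  # or "o4-mini-deep-research" for speed/cost
    "input": (
        "Produce a competitive landscape of vector database vendors, "
        "with citations for every claim."
    ),
    "background": True,  # long-running jobs: poll for completion instead of blocking
    "tools": [
        {"type": "web_search_preview"},  # lets the model collect web sources
        {"type": "code_interpreter", "container": {"type": "auto"}},  # run analysis code
    ],
}

# With the official `openai` SDK you would pass these fields to
# `client.responses.create(**payload)`; here we just inspect the payload.
print(json.dumps(payload, indent=2))
```

Note the `background` flag: deep research runs can take many minutes, so a fire-and-poll pattern is usually friendlier than a blocking request.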
🧩 Why Explore This?
Traditional LLM calls often stop at single-response answers. Deep Research claims to deliver structured, multi-step analyses—closer to what a human analyst would produce for tasks like competitive landscapes, regulatory scan-throughs, or literature reviews. We’ll examine whether that claim survives real-world use, weighing factors such as transparency, latency, cost, and the reliability of automated citations.
🔍 What You’ll Learn
API Fundamentals – authentication, request structure, and how the `reasoning` parameter and the `web_search_preview` and `code_interpreter` tools fit together.
Model Selection – when to reach for the exhaustive power of o3-deep-research versus the faster, cost-efficient o4-mini-deep-research.
Prompt Engineering for Agents – framing high-level queries, adding domain context, and avoiding unnecessary clarifying loops.
Citations & Transparency – turning inline annotations into clickable bibliographies your stakeholders can trust.
Integrating Private Data via MCP – securely blending web results with your own PDFs, earnings decks, or lab notebooks.
Cost & Latency Tuning – batching, background mode, and graceful fallbacks when rate limits hit.
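To preview the citations bullet above: in responses we've seen, each output message carries an `annotations` list with `url_citation` entries (the field names here are assumptions based on OpenAI's examples). Turning those into a de-duplicated bibliography might look like:

```python
def build_bibliography(message: dict) -> str:
    """Collect url_citation annotations from a response message into a
    numbered, de-duplicated markdown bibliography (field names assumed)."""
    seen, lines = {}, []
    for ann in message.get("annotations", []):
        if ann.get("type") != "url_citation":
            continue
        url = ann["url"]
        if url not in seen:  # keep only the first occurrence of each URL
            seen[url] = len(seen) + 1
            lines.append(f"{seen[url]}. [{ann.get('title', url)}]({url})")
    return "\n".join(lines)

# Mock message mimicking the assumed annotation shape:
mock = {
    "annotations": [
        {"type": "url_citation", "url": "https://example.com/a", "title": "Source A"},
        {"type": "url_citation", "url": "https://example.com/a", "title": "Source A"},
        {"type": "url_citation", "url": "https://example.com/b", "title": "Source B"},
    ]
}
print(build_bibliography(mock))
```

De-duplicating by URL matters in practice, because deep research drafts tend to cite the same source many times inline.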
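And to preview the cost-and-latency bullet: one graceful-fallback pattern is to retry the exhaustive model with backoff, then fall through to the cheaper mini model. A minimal sketch, where `call(model)` is a stand-in for whatever function issues your API request (not an SDK method):

```python
import time

def run_with_fallback(call, models=("o3-deep-research", "o4-mini-deep-research"),
                      retries=2, base_delay=1.0):
    """Try each model in order; on a rate-limit error, back off exponentially,
    then fall through to the next (cheaper) model."""
    last_err = None
    for model in models:
        for attempt in range(retries):
            try:
                return call(model)
            except RuntimeError as err:  # substitute your SDK's RateLimitError here
                last_err = err
                time.sleep(base_delay * (2 ** attempt))
    raise last_err

# Demo with a stand-in call that rate-limits the big model:
def fake_call(model):
    if model == "o3-deep-research":
        raise RuntimeError("rate limited")
    return f"report from {model}"

print(run_with_fallback(fake_call, base_delay=0.0))
# -> report from o4-mini-deep-research
```

In production you'd catch your SDK's actual rate-limit exception rather than `RuntimeError`, and likely log which model ultimately served the request.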
👩‍💻 Who Should Attend
AI Engineers who want to build applications using the latest modeling tools
AI Engineering leaders interested in building complex production LLM applications
Speakers:
“Dr. Greg” Loughnane is the Co-Founder & CEO of AI Makerspace, where he is an instructor for their AI Engineering Bootcamp. Since 2021, he has built and led industry-leading Machine Learning education programs. Previously, he worked as an AI product manager, a university professor teaching AI, an AI consultant and startup advisor, and an ML researcher. He loves trail running and is based in Dayton, Ohio.
Chris “The Wiz” Alexiuk is the Co-Founder & CTO at AI Makerspace, where he is an instructor for their AI Engineering Bootcamp. During the day, he is also a Developer Advocate at NVIDIA. Previously, he was a Founding Machine Learning Engineer, Data Scientist, and ML curriculum developer and instructor. He’s a YouTube content creator whose motto is “Build, build, build!” He loves Dungeons & Dragons and is based in Toronto, Canada.
Follow AI Makerspace on LinkedIn and YouTube to stay updated about workshops, new courses, and corporate training opportunities.