Cover Image for Cerebras SUPERNOVA @ RAISE
RAISE Summit is the premier event for professionals seeking to disrupt, build, and connect in the AI landscape.
About Event

A supernova is a massive stellar explosion—symbolizing rapid transformation. As the AI industry evolves, Cerebras is breaking through traditional limits. We are bringing our flagship event to Paris, at RAISE Summit.

Whether you're building apps, deploying models, or designing infrastructure for the agentic era, Supernova is where AI goes from promise to production.

Join us as we unveil the next generation of models and cloud infrastructure powering real-time reasoning and intelligent agents. From DeepSeek R1 to Llama 4 to Qwen, Cerebras is the fastest AI inference provider for cutting-edge, open-source models—built for those deploying at scale.

Learn how:

  • Meta, Mistral, and Perplexity accelerated their core apps by 20x and are building real-time reasoning systems

  • GSK and Mayo Clinic are training and deploying LLMs for life sciences and hospitals

  • The latest AI startups are building agents for customer service, personal tutoring, and knowledge work

  • You can run your models 20x faster than GPUs with no added cost or complexity


Supernova @ RAISE Agenda

Day 1: 

Part 1 – Meta Llama API and the Future of AI Reasoning, Powered by Cerebras

10:30-10:45

Why AI Needs Faster Compute: Unveiling the World’s Fastest Inference Engine

Jessica Liu, VP Product Management, Cerebras Software

Every millisecond of AI latency steals intelligence from your model and delight from your user experience. In this high-energy session, Cerebras' VP of Product Management pulls back the curtain on our wafer-scale inference platform, which turns that millisecond tax into pure headroom for smarter model reasoning and higher user engagement. We'll trace the industry's pivot from "bigger pre-training" to "smarter run-time," show why data scarcity and spiraling training budgets make inference the new competitive front, and reveal how models like Qwen3 and DeepSeek gain IQ-like leaps by "thinking for longer." Expect live numbers, behind-the-scenes engineering stories, and a first look at the architectural tricks that let us stream 10× more tokens while cutting power in half. If you care about building agents that think in real time, not coffee-break time, this is your roadmap to the fastest inference on Earth—and the dawn of AI's next era.

10:45-11:05

NinjaTech AI, Powered by Cerebras: Going Beyond Assistance to the Truly Autonomous AI Agent

Babak Pahlavan, Founder, CEO & CPO of NinjaTech AI

The age of specialized, limited AI agents is over. The era of an all-in-one General AI Agent that actually executes has begun.

Meet Super Ninja - the General AI agent that doesn't just handle one piece of your project; it completes the entire workflow from start to finish. While other agents get stuck with token limits or require constant hand-holding, Super Ninja runs extensive data analysis, codes and validates full applications, does comprehensive research, builds websites, and delivers high-quality results in your preferred format. Powered by our proprietary models on Cerebras' wafer-scale architecture, it operates like having a team of experts working 24/7.

This isn't just another AI tool—it's the foundation for the next generation of autonomous AI Agents and digital robots for personal & business productivity. 

11:05-11:20

Building with Llama: The Future of AI Development

Pierre Roux, Meta Director Partnerships and Jessica Liu, VP Product Cerebras

In this fireside chat, Meta will share the latest Llama developments, including the launch of Llama 4 and the Llama API preview. Meta will discuss how Llama enables developers to build and deploy applications easily, and how the Llama API, in partnership with Cerebras, is delivering faster and more efficient inference for the world.

Part 2 – Voice, Vision, and Next-Gen Consumer AI, Powered by Cerebras

13:00-13:15

From Prototypes to Products: Applying Ultra-Fast Inference to Your Use Cases

Angela Yeung, VP Product Management, Cerebras Hardware

Picking up where the previous deep-dive into Cerebras' wafer-scale architecture left off, this session zooms in on four concrete, high-impact use cases unlocked by sub-millisecond inference: Voice, Digital Twins, Code Generation, and Agents. Each use case comes with the key performance stats—latency envelopes, token budgets, and cost profiles—that turn ambitious prototypes into reliable products. Attendees will leave with practical blueprints for bringing Cerebras speed into their own applications and a clear view of where ultra-fast inference is headed next.

13:15-13:30

Replacing Clicks with Conversation: How NLX and Cerebras Let You Talk to Your UI

Andrei Papancea, CEO, NLX

What if your customers could navigate your website or fill out forms on your mobile app simply by talking? Join Andrei Papancea, CEO of NLX, for a look at a future without clicks and scrolls. He will unveil Voice+, the company's patented multimodal technology that transforms any digital property into a truly interactive conversational interface.

Discover how Voice+ allows users to not only talk with an AI but to talk to the user interface itself—driving navigation, completing forms, and taking action using only their voice. Powered by Cerebras's blazing-fast AI inference, these interactions are delivered with near-zero latency, making them as seamless and intuitive as a human conversation. Andrei will dive into real-world applications, showcasing how this revolutionary approach is creating truly hands-free digital experiences and defining the next era of human-computer interaction.

13:30-13:50 

From Docs to Agents: How Notion and Cerebras Are Building AI for 100M Users
Tian Jin, ML Engineer, Notion
Angela Yeung, VP of Product, Cerebras

What does it take to bring cutting-edge AI to over 100 million users - and actually make it useful? In this fireside chat, Notion’s Tian Jin and Cerebras’ Angela Yeung go behind the scenes on building practical, production-ready AI that feels magical but works at scale.

They’ll explore how Notion approaches critical product decisions like when to fine-tune a model, how to evaluate quality across different models and agentic pipelines, and what “vibe working” looks like for the next generation of productivity tools. You'll get a behind-the-scenes look at Notion’s AI stack - from how model routing frameworks help match the right model to the task, to where fast inference from Cerebras enables instant, more responsive user experiences.

This conversation is for builders who care about delivering great AI experiences: real systems, real trade-offs, and building AI with an intense focus on delivering the best possible user experience at scale.

Break – Demos in the Supernova Lounge

14:00-15:00

Part 3 – Making Cerebras Available to All through IBM, DataRobot, and Docker 

15:00 – 15:15

One Gateway, Any Model, Record Speed: Deploying AI with IBM and Cerebras

Vincent Perrin, IBM Technical Leader, AI and Quantum Computing

Enterprises today face a critical challenge: how to easily deploy and manage a diverse set of AI models without sacrificing performance. This talk introduces the powerful combination of IBM Model Gateway, which provides a single, secure interface to access, govern, and deploy foundation models, and Cerebras, the world's fastest inference provider. In our live demo, you'll see firsthand how to simplify your AI lifecycle, maintain enterprise-grade control, and serve models at breathtaking speed—all through one unified platform.

15:15 – 15:30

Docker and Cerebras: Fast Inference Meets Fast Deployment

Philippe Charrière, Docker Principal Solutions Architect

Cerebras gives you blazing-fast inference. Docker Compose now gives you the simplest way to run agentic apps — with just a compose.yaml.

Define your open models, agents, and Cerebras API endpoints, then spin up your entire agentic stack. From local testing to full-scale deployment, your Cerebras-powered agents are wired and running in seconds. No rewires. No config gymnastics. Just fast.
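To make this concrete, here is a minimal sketch of what such a compose.yaml might look like. The service names, image, ports, and environment variables are illustrative assumptions, not an official Docker or Cerebras template; consult the Docker Compose and Cerebras documentation for the actual setup.

```yaml
# Hypothetical compose.yaml for a Cerebras-backed agentic app.
# Service names, images, and variables are illustrative only.
services:
  agent:
    build: ./agent            # your agent code, built from a local Dockerfile
    environment:
      - CEREBRAS_API_KEY=${CEREBRAS_API_KEY}   # passed in from your shell
      - CEREBRAS_BASE_URL=https://api.cerebras.ai/v1
    depends_on:
      - vector-db
  vector-db:
    image: qdrant/qdrant:latest   # any vector store your agent uses
    ports:
      - "6333:6333"
```

With a file like this, `docker compose up` starts the whole stack; swapping models or endpoints is an environment-variable change rather than a code change.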

15:30-15:45

Pareto-Optimal Agentic Pipelines with DataRobot Syftr on Cerebras 

Matthew Hausknecht, Principal AI Researcher, DataRobot

Agentic pipelines are becoming increasingly sophisticated, integrating components such as RAG modules (e.g., vector databases, embedding models, retrievers), as well as verifiers, rewriters, and rerankers. Each module introduces complex configuration and hyperparameter choices, resulting in a vast design space with different trade-offs between latency, accuracy, and cost. Syftr is a novel framework that automates the exploration of this space, using multi-objective Bayesian optimization to surface Pareto-optimal agentic pipelines. In this presentation, we explore how pipelines running on the Cerebras Wafer-Scale Engine perform when optimized by Syftr for low-latency use cases.
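To make "Pareto-optimal" concrete, here is a minimal sketch (not Syftr's actual API) of filtering candidate pipeline configurations down to the Pareto front over two objectives: latency (lower is better) and accuracy (higher is better). The pipeline names and numbers are made up for illustration.

```python
def pareto_front(candidates):
    """Return the configs not dominated by any other config.

    Each candidate is (name, latency_ms, accuracy). One config dominates
    another if it is at least as good on both objectives (lower latency,
    higher accuracy) and strictly better on at least one.
    """
    front = []
    for name, lat, acc in candidates:
        dominated = any(
            (l2 <= lat and a2 >= acc) and (l2 < lat or a2 > acc)
            for _, l2, a2 in candidates
        )
        if not dominated:
            front.append((name, lat, acc))
    return front

# Hypothetical pipeline measurements (name, latency in ms, accuracy):
pipelines = [
    ("rag-small", 120, 0.78),
    ("rag-large", 480, 0.91),
    ("rag-rerank", 300, 0.88),
    ("rag-slow", 500, 0.85),   # dominated by rag-large: slower AND less accurate
]
front = pareto_front(pipelines)
```

A multi-objective optimizer like the one the talk describes searches the design space and reports only such non-dominated configurations, leaving the latency/accuracy/cost trade-off to the user.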

Part 4 – Boosting Longevity with Health and Life Science AI Breakthroughs 

16:00-16:30

GSK Fireside Chat - Molecules at Wafer-Scale: GSK × Cerebras on Accelerating Life Sciences 

Danielle Belgrave, VP AI and ML, GSK and Natalia Vassilieva, Field CTO, Cerebras 

Join AI leaders from GSK and Cerebras for an inside look at how wafer-scale compute is reshaping life-sciences R&D. In this fireside chat they trace the journey from the first training runs—where Cerebras’ single-chip architecture slashed model-training cycles from weeks to days—to today’s blistering inference that powers new breakthroughs in seconds. Attendees will leave with a clear picture of how the two teams co-engineered hardware, software, and scientific models to turn breakthroughs in compute into breakthroughs across the life-sciences spectrum.

16:30-17:00

Advancing Personalized Medicine: The Future of AI and Genomics in Healthcare

Shant Ayanian, MD, Mayo Clinic

Dr. Shant Ayanian's talk at the Cerebras Supernova event will highlight the Mayo Clinic's use of Cerebras' AI infrastructure to pioneer genomic-based personalized medicine. This collaboration will focus on creating individualized treatment models by integrating genomic and clinical data, moving beyond traditional "one-size-fits-all" approaches. The discussion will emphasize the transformative potential of AI in healthcare and the importance of partnerships in advancing medical treatments.

17:00-17:45

Longevity happy hour 

Let’s live long and prosper! Join us in the Supernova Lounge for the Longevity Happy Hour hosted by RAISE.

Day 2

10:15-10:45

Daniel Kim, Head of Growth, Cerebras

Daniel Kim, Head of Growth at Cerebras, will share how wafer-scale inference is unlocking remarkable creativity and diverse use cases for developers and startups. He will explain how the unique design of the Cerebras wafer enables breakthroughs in AI performance, power efficiency, and scalability, and explore real-world use cases that leverage these capabilities. Through this talk, he aims to demonstrate the vast potential of Cerebras to accelerate AI adoption and drive innovation in industries such as healthcare, finance, and more.


11:00 - 12:30 

Startup Competition at RAISE in the Supernova Zone


13:30-14:30 

Sarah Chieng, Growth, Cerebras

Russ d'Sa, CEO, LiveKit
'Build Your Own Sales Agent | Cerebras x LiveKit'

In this hands-on workshop, you’ll build a real-time voice-based AI sales agent using Cerebras for fast inference and LiveKit for conversational flow. The agent can speak, listen, and respond based on real sales context—no hallucinations. 

14:30-15:30

Sarah Chieng, Growth, Cerebras

Isaac Tai, Growth, Cerebras

'Build your own Perplexity Clone | Cerebras x Exa'

Learn how to build a Perplexity-style AI assistant that performs deep research in under 60 seconds. In this workshop, you’ll combine Exa's web search and Cerebras’s fast inference to scrape, analyze, and synthesize content from multiple sources into structured insights. You’ll walk away with a working AI research agent that can summarize events, extract key takeaways, and scale to any topic.
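The workshop's pipeline shape can be sketched offline in a few lines. In this minimal sketch the `search` and `summarize` functions are stand-ins for Exa's web search and a Cerebras-hosted model; in the real workshop you would call their respective SDKs with your API keys. All names and data below are illustrative assumptions.

```python
def search(query, num_results=3):
    # Stand-in for Exa web search: returns (title, snippet) pairs
    # from a tiny hard-coded corpus instead of the live web.
    corpus = {
        "wafer-scale inference": [
            ("Cerebras WSE", "A wafer-scale engine accelerates inference."),
            ("Latency study", "Lower latency improves agent responsiveness."),
        ],
    }
    return corpus.get(query, [])[:num_results]

def summarize(snippets):
    # Stand-in for a Cerebras-hosted LLM call: joins snippet text
    # where a real model would synthesize an answer.
    return " ".join(text for _, text in snippets)

def research_agent(topic):
    """Search for a topic, then synthesize the hits into one answer."""
    hits = search(topic)
    return {
        "topic": topic,
        "sources": [title for title, _ in hits],
        "answer": summarize(hits),
    }

result = research_agent("wafer-scale inference")
```

Swapping the stubs for real Exa search and Cerebras inference calls preserves this structure: retrieve, analyze, and synthesize into a structured result.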

17:00-18:00

Supernova Startup Happy Hour 

Follow us on LinkedIn and X to stay up to date.

July 8th & 9th, doors open at 10 AM

Supernova Guidelines

Location
Carrousel du Louvre
99 Rue de Rivoli, 75001 Paris, France
Supernova will be hosted at the heart of RAISE Summit, in one of the most iconic venues in the world — and in the European capital of AI: Paris.