

Why Data and AI Still Break at Scale (and What to Do About It)
It’s 2025, and most teams still struggle to get data and machine learning projects working across people, systems, and time.
Notebooks don’t hold up. Pipelines are brittle. Results drift. Sharing work often means breaking it.
Akshay Agrawal has dealt with this across environments: ML at Google Brain, optimization research at Netflix, academic work at Stanford, and now building marimo, an open-source Python notebook reimagined for modern workflows. marimo notebooks are versionable with Git, executable as ordinary Python files, and easy to share as apps.
In this episode, he joins Hugo Bowne-Anderson to talk about why data and ML work so often falls apart—and what’s required to make it hold together.
We cover:
⚠️ Why reproducibility and collaboration still fail on real teams
🧠 What gets lost moving between research and production
🔧 How small tooling choices create large-scale failure modes
🧭 What Akshay has learned building across ML, research, and open source
If you’ve ever tried to get your team’s work running somewhere else—and watched it fall apart—this episode will feel familiar.