Reasonable Scale LLMs: Revisiting Foundational Principles in the Age of Generative AI
In this Fireside Chat, Jacopo Tagliabue (Founder, Bauplan) joins Hugo Bowne-Anderson to revisit the concept of Reasonable Scale Machine Learning in a landscape now dominated by Generative AI and Large Language Models (LLMs). We'll explore how its foundational principles hold up today, focusing on how smaller, more manageable models can deliver significant value without massive infrastructure.
Key Topics of Discussion:
Reasonable Scale LLMs: How the concept of Reasonable Scale holds up in the age of generative AI and LLMs, and what it means for teams weighing smaller models against frontier-scale ones.
Data Over Modeling: Why high-quality data remains the primary driver of effective results, especially when fine-tuning LLMs and working with smaller datasets.
Operationalizing AI Today: What it takes to adopt AI at a reasonable scale without overinvesting in infrastructure, and how to handle monitoring while keeping costs under control.
Build vs. Buy for LLMs: When to leverage existing APIs and models versus building your own systems, and how organizations can make that decision strategically.
The Whitespace for Smaller LLMs: Opportunities in developing and deploying smaller models for faster iteration, adaptability, and cost-effective AI solutions.
This session is designed for AI practitioners, engineers, and leaders who are navigating AI and LLM adoption while balancing infrastructure complexity against operational cost.