
GOOD RIDDANCE TO SUPERVISED LEARNING with Alan Nichol

About Event

Supervised learning has, in many ways, failed us as a paradigm when it comes to practitioners actually building ML and AI apps. We're still tuning hyperparameters and massively overfitting to datasets, and we've forgotten to ask: does fitting a curve to this data actually solve my problem? Does this data actually represent the task I care about?

These days, the consequence is that practitioners in companies are successful at training neural nets and staggeringly unsuccessful at solving problems. People say that LLMs feel like incantations, but so does supervised learning: most practitioners throw some training data into a model and try to steer it by manually inserting things and hoping it works.

In this live podcast recording, Hugo and Alan Nichol (co-founder and CTO, Rasa) will talk about these issues in the supervised learning space and how LLMs and in-context learning offer a path out: instead of training specialist models for predictive tasks, in-context learning involves describing tasks in natural language and using general-purpose language models to "solve" them.

They'll also dive into how the community currently overvalues the generative side of LLMs and undervalues this kind of in-context learning, which offers a vast reduction in the complexity of building ML and AI-powered software.
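
As a rough illustration of that contrast, here is a minimal Python sketch of in-context learning applied to a predictive task (intent classification). The call_llm() helper and the label set are hypothetical placeholders for whatever general-purpose LLM client and task you actually use; the point is that the "model" is a natural-language prompt with a few examples, not a classifier trained on a labeled dataset.

# Minimal sketch of in-context learning for a predictive task.
# call_llm() is a placeholder for your LLM provider of choice.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to a general-purpose LLM and return its text reply."""
    raise NotImplementedError("wire up your LLM client here")

LABELS = ["billing", "cancellation", "technical_support", "other"]

def classify_intent(message: str) -> str:
    # The task is described in natural language plus a couple of examples,
    # instead of being learned from a task-specific training set.
    prompt = (
        "Classify the customer message into exactly one of these intents: "
        f"{', '.join(LABELS)}.\n\n"
        "Message: I was charged twice this month.\nIntent: billing\n"
        "Message: How do I close my account?\nIntent: cancellation\n"
        f"Message: {message}\nIntent:"
    )
    answer = call_llm(prompt).strip().lower()
    # Fall back to a safe default if the model replies with an unknown label.
    return answer if answer in LABELS else "other"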
