

Lossfunk Talks: Schema learning & rebinding for intelligence via analogies
Analogies and abstraction have been argued to be at the core of intelligent generalization. As a manifestation of this phenomenon, I will consider the seemingly miraculous in-context learning capabilities of LLMs, and explain a mechanism by which any sequence model can exhibit such powerful generalization. I will argue that LLMs might plausibly be following a similar mechanism under the hood, and end with a brief look at some recent advances in understanding in-context learning.
Speaker: Siva Swaminathan
Twitter: https://twitter.com/ergodicthought
LinkedIn: https://www.linkedin.com/in/siva-swaminathan-78885a105/
Current: Research Engineer at Google DeepMind
Previous: Robot vision + 3D computational geometry @ Vicarious AI, Theoretical physics @ UT Austin, Electrical Engineering @ IIT Madras