Large Language Models as Building Blocks - ft. Jay Alammar
The rise of large language models is inspiring a wide variety of application ideas and experiments. To become more adept at building with LLMs, it's important to understand how to use LLMs as components of advanced pipelines rather than as text-in, text-out monoliths. In this talk, Jay covers a number of LLM applications spanning generative use cases, as well as using language models for semantic search and data exploration.
Jay Alammar (Director and Engineering Fellow @ Cohere)
Jay is the co-author of Hands-On Large Language Models. Through his blog and YouTube channel, Jay has helped millions of people wrap their heads around complex machine learning topics. Jay harnesses a visual, highly intuitive presentation style to communicate concepts ranging from basic intros to data analysis and interactive intros to neural networks, to dissections of state-of-the-art neural network models like Transformers and Stable Diffusion.
Jay is Director and Engineering Fellow at Cohere, a leading provider of large language models for text generation, search, and retrieval-augmented generation for the enterprise.
WORKSHOP INFO
One day full of LLMs!
Fri Mar 1st, 9am - 5pm (times are in ET)