
One Day Workshop on Practical LLMs

Google Meet
About Event

NOTE: If you are not able to join the call, the room might be full; please come back toward the end of the hour for the next talk. We will send you the recording afterwards.

We will be bringing together some of the LLM experts for a day of learning and discussion. This will be in our usual format of a presentation (30 minutes) followed by a discussion (20 minutes).

WORKSHOP MATERIAL

SCHEDULE (times are in ET):

9:00 Amir Feizpour (CEO @ Aggregate Intellect); Era of KnowledgeOps

10:00 Denys Linkov (ML Lead @ Voiceflow); LLM Integration Best Practices

11:00 Noelle Russell (Global AI Solutions Lead @ Accenture); Responsible Generative AI at Scale

12:00 Gordon Gibson (ML Lead @ Ada); Self-improving LLMs

13:00 Josh Seltzer (CTO @ Nexxt Intelligence); LLMs for Agile Innovation

14:00 Mingkai Deng (PhD Student @ CMU); Reinforced Prompt Optimization

15:00 Suhas Pai (CTO @ Bedrock AI); Enhancing Language Models with External Tools - Cancelled due to personal emergency (recorded video will be provided in the material)

16:00 Rajiv Shah (MLE @ Hugging Face); Navigating Enterprise Analytics with LLMs

SPEAKERS

AMIR FEIZPOUR

Amir is the co-founder of Aggregate Intellect (https://ai.science/), a Smart Knowledge Navigator platform for R&D teams in emerging technologies like AI. Prior to this, Amir was an NLP Product Lead at the Royal Bank of Canada and held a postdoctoral position at the University of Oxford, conducting research on experimental quantum computing. Amir holds a PhD in Physics from the University of Toronto.

The Emergence of KnowledgeOps - DevOps and ModelOps have transformed software development over the past two decades, but now we are seeing a further shift left with the rise of KnowledgeOps. This new era leverages tools to augment our problem-solving, planning, and critical thinking abilities, enabling us to tackle highly complex knowledge work with greater efficiency and effectiveness. KnowledgeOps promises to enhance our ability to experiment with a wider range of ideas and select the most impactful ones, similar to the benefits seen in DevOps and related methodologies.


DENYS LINKOV

Denys is the ML lead at Voiceflow, focused on building the ML platform and data science offerings. His focus is on real-time NLP systems that help Voiceflow’s 60+ enterprise customers build better conversational assistants. Previously he worked at a large bank as a senior cloud architect.

Integrating LLMs into Your Product: Considerations and Best Practices - The proliferation of ChatGPT and other large language models has resulted in an explosion of LLM-based projects and startups. While these models can provide impressive initial demos, integrating them into a product requires careful consideration and planning. This talk will cover key considerations for creating, testing, and optimizing prompts for LLMs, as well as how to run analytics on key user metrics to ensure success.
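To make the idea of prompt testing concrete, here is a minimal sketch of a regression harness of the kind the talk describes: run a prompt template over a small labeled set and track the pass rate across prompt versions. The model call (`fake_llm`), the template, and the test cases are all illustrative stubs, not Voiceflow's actual tooling.

```python
def fake_llm(prompt):
    """Stub model: classifies by keyword so the example is runnable offline."""
    return "positive" if "love" in prompt else "negative"

# A candidate prompt template under test (illustrative).
PROMPT = "Classify the sentiment of this review as positive or negative: {text}"

# Small labeled set of (input, expected output) pairs.
TEST_CASES = [
    ("I love this product", "positive"),
    ("Terrible, broke in a day", "negative"),
]

def evaluate_prompt(template, cases):
    """Return the fraction of cases where the model matches the label."""
    passed = sum(
        fake_llm(template.format(text=text)) == expected
        for text, expected in cases
    )
    return passed / len(cases)

print(evaluate_prompt(PROMPT, TEST_CASES))  # pass rate to track per prompt version
```

In practice the stub would be replaced by a real model call, and the pass rate logged alongside user metrics so prompt changes can be compared like any other release.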


NOELLE RUSSELL

Noelle Silver Russell is a multi-award-winning technologist and entrepreneur who specializes in advising companies on emerging technology, generative AI and LLMs. She is the Global AI Solutions Lead as well as the Global Industry Lead for Generative AI at Accenture. She has led teams at NPR, Microsoft, IBM, and Amazon Alexa, and is a consistent champion for AI literacy and the ethical use of AI based on her work building some of the largest AI models in the world. She is the founder of AI Leadership Institute and she was recently awarded the Microsoft Most Valuable Professional award for Artificial Intelligence as well as VentureBeat’s Women in AI Responsibility and Ethics award.

Generative AI: Ethics and Accessibility - Generative AI has made impressive advances in creating music, art, and even virtual worlds that were once thought to be exclusively the domain of human creators. However, with such power comes great responsibility, and we must be mindful of the ethical implications of our creations. In this session, we will explore the intersection of generative AI, ethics, and accessibility. We will examine ethical considerations related to bias, transparency, and ownership, as well as the challenges of making generative AI accessible to individuals with disabilities and those from underrepresented communities.


GORDON GIBSON

Gordon is the Senior Engineering Manager of the Applied Machine Learning team at Ada where he's helped lead the creation of Ada's ML engine and features. Gordon's background is in Engineering Physics and Operations Research, and he's passionate about building useful ML products.

Leveraging Language Models for Training Data Generation and Tool Learning - An emerging aspect of large language models is their ability to generate datasets that allow them to self-improve. A fascinating recent example is Toolformer (Schick et al.) in which LLMs generate fine-tuning data that helps them learn how to use tools at run-time. In this talk, we’ll examine this trend by taking a close look at the Toolformer paper and other related research.
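The self-labeling loop at the heart of Toolformer can be sketched roughly as follows: the model proposes tool-call annotations for raw text, and only annotations that make the continuation easier to predict are kept as fine-tuning data. Every function here (`propose_tool_calls`, `loss_with`) is an illustrative stub standing in for the paper's actual components, not its implementation.

```python
def propose_tool_calls(sentence):
    """Stub for the LLM proposing candidate [Tool(args)] insertions."""
    if "population of Canada" in sentence:
        return ["[QA('What is the population of Canada?')]"]
    return []

def loss_with(sentence, annotation):
    """Stub for the language-modeling loss on the continuation.
    Here we pretend a useful tool result lowers the loss."""
    return 1.0 if annotation else 2.0

def build_finetuning_data(corpus, margin=0.5):
    """Keep only annotations that reduce loss by at least `margin`."""
    kept = []
    for sentence in corpus:
        base_loss = loss_with(sentence, None)
        for ann in propose_tool_calls(sentence):
            if base_loss - loss_with(sentence, ann) >= margin:
                kept.append((sentence, ann))
    return kept

data = build_finetuning_data(["The population of Canada is about 40 million."])
print(data)
```

The filtered pairs are then used to fine-tune the model, which is how it teaches itself when a tool call is worth making.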


JOSH SELTZER

Josh is the CTO at Nexxt Intelligence, where he leads R&D on LLMs and NLP to build innovative solutions for the market research industry. He also works in biodiversity and applications of AI for conservation.

Commercializing LLMs: Lessons and Ideas for Agile Innovation - In this talk, Josh, an ML expert with experience commercializing NLP-powered services, will discuss the potential for leveraging foundation models to drive agile innovation in both individual and organizational processes. He will share lessons learned from his work with a bootstrapped startup and provide insights on how LLMs can be commercialized effectively.


MINGKAI DENG

Mingkai Deng is a PhD student at Carnegie Mellon University working at the intersection of machine learning, computer vision, and natural language processing. Prior to that, he was a data scientist who led award-winning projects and built analytics products that serve multiple Fortune 500 clients.

Optimizing Large Language Models with Reinforcement Learning-Based Prompts - Large language models (LLMs) have the potential to perform a wide range of tasks by understanding human queries, but they are often sensitive to the wording of the prompts, which can greatly affect the output. This talk will introduce RLPrompt, an efficient algorithm that uses reinforcement learning to systematically search for the best prompts to improve LLM performance across diverse tasks.
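The search idea behind RLPrompt can be illustrated with a much simpler stand-in: treat each candidate prompt as an arm of a bandit, sample prompts, observe a task reward, and shift toward higher-reward prompts. The reward function below is a stub (in RLPrompt it would come from the frozen LLM's task performance, and the policy generates prompts token by token rather than choosing from a fixed list); all names are illustrative.

```python
import random

def reward(prompt):
    """Stub reward: pretend the LLM scores this prompt on a dev set."""
    scores = {"Classify:": 0.55, "Sentiment:": 0.70, "Absolutely": 0.85}
    return scores.get(prompt, 0.5)

def search_prompt(candidates, steps=200, eps=0.2, seed=0):
    """Epsilon-greedy search over discrete candidate prompts."""
    rng = random.Random(seed)
    counts = {p: 0 for p in candidates}
    values = {p: 0.0 for p in candidates}
    for _ in range(steps):
        if rng.random() < eps:            # explore a random prompt
            p = rng.choice(candidates)
        else:                             # exploit the current best estimate
            p = max(candidates, key=lambda c: values[c])
        r = reward(p)                     # query the (stubbed) task reward
        counts[p] += 1
        values[p] += (r - values[p]) / counts[p]  # running-mean update
    return max(candidates, key=lambda c: values[c])

best = search_prompt(["Classify:", "Sentiment:", "Absolutely"])
print(best)
```

Note the stub deliberately gives the highest reward to a prompt that is not grammatical instruction text, echoing the talk's point that the best-performing prompts found by systematic search can look nothing like what a human would write.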


SUHAS PAI

Suhas is the CTO & Co-founder of Bedrock AI, an NLP startup operating in the financial domain, where he conducts research on LLMs, domain adaptation, text ranking, and more. He was the co-chair of the Privacy WG at BigScience, the chair at TMLS 2022 and TMLS NLP 2022 conferences, and is currently writing a book on Large Language Models.

LLMOps: Expanding the Capabilities of Language Models with External Tools - This talk explores how language models can be integrated with external tools, such as Python interpreters, APIs, and data stores, to greatly expand their utility. We will examine the emerging field of 'LLMOps' and review some promising tools. Additionally, we will push the boundaries of what's possible by exploring how a language model could accurately answer complex questions like, "Who was the CFO at Apple when its stock price was at its lowest point in the last 10 years?"
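The tool-integration pattern behind a question like that can be sketched as a dispatch loop: the model emits a structured tool call, a dispatcher runs the tool, and the result is fed back so the model can compose an answer. Here the tool registry, the two-step plan, and the returned data are all hard-coded illustrative stubs; a real system would let the LLM produce the plan and call live market-data and lookup APIs.

```python
# Stubbed tools standing in for real market-data and company-records APIs.
TOOLS = {
    "stock_low_date": lambda ticker: "2013-04-19",            # fake lookup
    "cfo_on_date": lambda ticker, date: "Peter Oppenheimer",  # fake lookup
}

def run_tool_call(call):
    """Execute a {'tool': name, 'args': [...]} request from the model."""
    return TOOLS[call["tool"]](*call["args"])

def answer_with_tools(question):
    # A real system would let the LLM plan these calls from the question;
    # the plan for this example is hard-coded for illustration.
    date = run_tool_call({"tool": "stock_low_date", "args": ["AAPL"]})
    cfo = run_tool_call({"tool": "cfo_on_date", "args": ["AAPL", date]})
    return f"{cfo} was Apple's CFO on {date}."

print(answer_with_tools("Who was the CFO at Apple when its stock price "
                        "was at its lowest point in the last 10 years?"))
```

The point is the decomposition: neither fact needs to be memorized by the model, since each sub-question is answered by a tool and the model only chains the results.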


RAJIV SHAH

Rajiv is a machine learning engineer at Hugging Face, whose primary focus is on enabling enterprise teams to succeed with AI. He is a widely recognized speaker on enterprise AI and was part of data science teams at Snorkel AI, Caterpillar, and State Farm.

Incorporating Large Language Models into Enterprise Analytics: Navigating the Landscape of In-Context Learning and Agents - Large Language Models (LLMs) have dramatically changed our expectations for AI. While a few innovators are building proof-of-concept projects using APIs, most enterprise analytic teams still need to figure out how to incorporate LLMs into their analytical toolbox. Rajiv shows the necessity of understanding the growth of "in-context learning" and agents. With these insights, he explains how LLMs will shape enterprise analytics. Along the way, he covers many practical factors, such as the different providers of LLMs, resource costs, and ethical issues.