Presented by
Trajectory Labs

AI Safety Thursdays: Can we make LLMs forget? An Intro to Machine Unlearning

About Event

LLMs are pre-trained on a large fraction of the internet. As a result, they can regurgitate private, copyrighted, and potentially hazardous information, causing deployment and safety challenges.

Lev McKinney will guide us through machine unlearning in LLMs—how models retain facts, methods for identifying influential training data, and techniques for suppressing unwanted predictions. Finally, we'll assess how well current research addresses policy and safety concerns.
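To give a flavour of the topic: one simple family of unlearning methods runs gradient *ascent* on the data to be forgotten while preserving performance elsewhere. Below is a minimal toy sketch on a logistic-regression model — illustrative only, not the speaker's method; all names and hyperparameters are assumptions.

```python
import numpy as np

# Toy gradient-ascent unlearning sketch: train on all data, then push the
# loss UP on a designated "forget set" while a retain-set term keeps
# performance on the rest.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def grad(w, X, y):
    # Gradient of mean binary cross-entropy with respect to weights w.
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def loss(w, X, y):
    p = np.clip(sigmoid(X @ w), 1e-9, 1 - 1e-9)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Synthetic data: two Gaussian blobs, labels 0 and 1.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)),
               rng.normal(1.0, 1.0, (100, 2))])
y = np.array([0.0] * 100 + [1.0] * 100)

# Split off a small forget set; everything else is the retain set.
X_f, y_f = X[:20], y[:20]
X_r, y_r = X[20:], y[20:]

# 1) Standard training on the full dataset.
w = np.zeros(2)
for _ in range(500):
    w -= 0.5 * grad(w, X, y)
loss_forget_before = loss(w, X_f, y_f)

# 2) Unlearning: gradient ASCENT on the forget set, descent on the retain set.
for _ in range(50):
    w += 0.1 * grad(w, X_f, y_f)   # raise loss on data to be forgotten
    w -= 0.1 * grad(w, X_r, y_r)   # preserve behaviour on the rest

loss_forget_after = loss(w, X_f, y_f)
loss_retain_after = loss(w, X_r, y_r)
print(loss_forget_before, loss_forget_after, loss_retain_after)
```

After unlearning, the forget-set loss rises while the retain set stays fit — the basic trade-off the talk will examine at LLM scale.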

Timeline

6:00 to 6:30 - Food & Networking

6:30 to 7:30 - Main Presentation & Questions

7:30 to 8:00 - Discussion

If you can't make it in person, feel free to join the live stream at 6:30 pm via this link.

Location
30 Adelaide St E 12th floor
Toronto, ON M5C, Canada