Presented by AI LA Events

Responsible AI Reading Group: AI Alignment and Risk Trade-Offs

Google Meet
About Event

Join us for our next Responsible AI Reading Group session, where we’ll explore the fundamentals of AI alignment and the pressing concerns surrounding the development of increasingly advanced AI models, including Artificial General Intelligence (AGI).

Jenn Wu, a Lead UX Researcher at Wunderkind, will facilitate a discussion based on the article "Why AI Alignment Could Be Hard with Modern Deep Learning."

The session will examine why aligning AI with human values remains a significant challenge and invite participants to reflect on the risks we are willing to accept in pursuit of powerful AI systems.

Discussion Topics Include:

  • The complexities of aligning AI with human intentions

  • Potential risks and unintended consequences of advanced AI models

  • The trade-offs between AI capabilities and safety

This session is open to all, whether you're well-versed in AI ethics or just starting to engage with the topic. Come ready to share your thoughts and perspectives; we look forward to an insightful conversation!


Are you interested in sponsoring or hosting our next meetup? Contact: social@joinai.la

Disability Notice: Individuals with disabilities who need accommodations to attend this event should contact social@joinai.la with their name and contact information. We request that individuals requiring accommodations notify us at least 7 days prior to the event. Every reasonable effort will be made to provide accommodations in an effective and timely manner.
