
Bay Area AI Safety Meetup w Dept of Homeland Security and new paper from UC Berkeley, sponsored by Tola Capital

San Francisco, California
Registration Closed
About Event

November edition of the monthly academic salon discussing all aspects of the critical topic of AI safety.

This month's edition is sponsored by Tola Capital.


TL;DR: Bringing together academic research and enterprise work in AI, this is a forum to discuss and address the deepest technical and ethical questions around the safe use of AI in society. It is primarily a social event for discussing current issues, featuring lightning talks while leaving ample time for Q&A that can open the discussion to a broader range of solutions.

Not AGI-focused: this meetup is about scientifically grounded solutions to issues in AI from a development point of view.


Talks this month:

  • Micah Carroll, UC Berkeley (and potentially co-authors), presenting their recent paper on targeted manipulation with user feedback

  • John Whaley, Founder @ Inception Studio • 3x Cybersecurity Founder (Redcoat AI, UnifyID, Moka5) • Adjunct Lecturer in Compilers and GenAI at Stanford

  • Sean Harvey, Product Leader, AI Corps @ US Department of Homeland Security


Schedule:

  • 5:30pm - Doors / pizza

  • 6:30pm - Lightning Talks with Q&A (5-10 min each)

  • 7:00pm - Discussion

  • 8:00pm - Shutting it down
