
AI safety Hackathon - Interpretability (AJ#7)

Past Event
About Event

Join us for an exciting local hackathon on AI interpretability at EPFL! Hosted by Lausanne AI Alignment (LAIA), this 48-hour event is the perfect opportunity to explore the "brains" of AI and find new perspectives on modern AI neuroscience.

With starter templates provided, you can focus on producing interesting research instead of browsing Stack Overflow. This local event is also a great way to collaborate with like-minded people and make new connections in the field of AI.

Submissions will be evaluated on several criteria, including ML safety, interpretability, novelty, generality, and reproducibility. You will have access to many resources for inspiration, including idea lists, research papers, and online tools.

If you are passionate about AI and want to make a meaningful contribution to the field, this is the event for you. Join us for a weekend of intensive research, collaboration, and fun!

We also want to keep all participants well fed and hydrated throughout the event, so food and drinks will be provided to keep your energy levels up. Even if you only want to drop by for four hours to help, you are welcome.

Find more information at the following link: https://itch.io/jam/interpretability-hackathon