Presented by
AI-Plans
52 Going

AI & Liability Ideathon

Virtual
About Event

Overview

Join us for the AI & Liability Ideathon, a two-week event beginning December 7, 2024, at 3:00 PM BST.
Join lawyers, researchers, and developers to create solutions for AI liability. Propose, develop, and refine ideas with a team, culminating in a presentation evening where you can share the final version of your proposal.

All the final proposals will be published on AI-Plans, with the top 3 being selected by peer review after the presentation evening.

The presentation evening is open to everyone, including those who didn't take part in the Ideathon. 

The Ideathon, including the Presentation Evening, the Speakers, and the Kick-Off Call, will take place primarily in the AI-Plans Discord: https://discord.gg/X2bsw8FG3f

What is an Ideathon? 

An Ideathon is a brainstorming event designed to let participants combine their collective multidisciplinary knowledge, experience, and creativity to tackle specific topics. Participation is open to all interested individuals, including students, academics, civil society, non-profit organizations, lawyers, law professors, AI/ML engineers, developers, and product leaders. All are welcome, including those interested in AI safety and liability issues.

For this AI Liability Ideathon, team proposals may be technology-based, policy-based, a combination of both, or otherwise related to the topic.

Examples of Potential Ideas:

  • Autonomous Legal Continuum: Develop a framework for determining liability for different types of systems, with the level of autonomy as a critical component. For less autonomous systems, greater human liability may be appropriate, while more autonomous systems might have liability regimes similar to those of corporations.

  • Legal Entities: Explore the concept of granting AI systems legal personhood, similar to the current status of corporations, to clarify liability issues.

  • Use Cross-Coders to Identify Duty of Care: Employ cross-coders to identify differences between base models and fine-tuned models, potentially reducing liability if it can be demonstrated that a duty of care has been met.

  • AI Agents as Subcontractors: Identify relevant pre-existing laws and regulations concerning business agents and examine the implications of treating an AI as a business agent with the authority to act on behalf of a corporation, employee, or subcontractor.

  • Map AI Dev Pipeline to AI Liability: Map all decision points in the AI development and deployment pipeline, assessing the relative contribution of decisions at each stage to the downstream likelihood of harm.

Why Participate?

If you're a lawyer: 

  • Learn about the latest problems developers face, hear their perspective first-hand, and create solutions with researchers.

If you're an AI Researcher: 

  • Learn about existing liability regulations and the legal perspective, share your knowledge, and propose new ideas.

The central question of who holds what responsibility in the AI development pipeline is of ever-growing importance. This is a chance to dive deep into the specific details of how to split that responsibility fairly and to create solutions that might change the world.

Schedule: 

Now through December 7th: Registration & Team Formation

This is the period to register for the Ideathon and to start forming or joining a team if you have not already done so. Consider whom you might need on your team and try to recruit them. Think about the kinds of ideas you might want to develop during the Ideathon.

Mentors will be available to help you find a team or members if you need it. You can introduce yourself and share your proposal ideas in the Discord.

December 7th: Kick-Off Call and Q&A

We'll begin the Ideathon with a brief talk from Kabir, Sean, and Khullani, followed by a Q&A.
Attendance isn't mandatory; it's simply an introduction to the event and a chance to ask questions.

The call will take place at 10 PM UTC.

December 11th: Deadline for deciding your idea

By this point, teams should have decided which idea they'll focus on developing. This isn't a hard deadline, but it is a strong recommendation.

December 14th: Deadline for sharing 1st draft

Teams should share the first draft of their idea. It can be a couple of sentences or several pages, but it needs to clearly explain what the idea is and why the team chose it. If it's multiple pages, we recommend including a summary/abstract at the start.
The draft can be updated, and its updates shared in the share-your-work channel on the Discord, as the Ideathon goes on. You can also start considering how you want to present the idea.

December 20th: Deadline for final proposal

Now the final, refined version of your team's idea should be ready. It should be clearly written up, perhaps with an exploration of an implementation (though that isn't necessary). If you haven't already, start preparing to present the idea the next day.

December 21st, 4pm BST: Presentation & Voting

The Ideathon will culminate in an evening of teams presenting and sharing their ideas on a call. Organizers will reach out beforehand about scheduling; if this time doesn't work for a team, they're welcome to submit a pre-recorded video.
Everyone will have a chance to vote for their favourite ideas. The evening will be streamed online, with teams having the option of uploading their own video explaining their idea.

Speakers & Collaborators:

Speakers:

Gabriel Weil
Assistant Professor of Law, Touro University


Professor Weil’s research has addressed geoengineering governance, tools for overcoming the global commons problem, and the optimal role for subnational policy in tackling a global problem, among other topics.

Tzu Kit Chan
Operations at ML Alignment & Theory Scholars

Among many other things, Tzu does operations at MATS, co-founded Caltech AI Alignment, runs Stanford AI Alignment, and advises Berkeley's AI Safety as a board member.

Mentors:

Sean Thawe
Co-founder/Software Developer, AI-Plans

Sean does mechanistic interpretability research and software development at AI-Plans. He recently took part in an ideathon with his team at the Deep Learning Indaba, held in October and November. Sean also works on data science and software engineering at Mindbloom AI as a consultant/researcher.

Kabir Kumar

Co-founder/Organizer, AI-Plans

Kabir has run several successful events, such as the Critique-a-Thons and Law-a-Thons, and does mechanistic interpretability and evals research at AI-Plans.

If you are interested in supporting as a mentor or judge – please register your interest here:

https://forms.gle/iACDJb4CE725k9bk7 

Resources

Beckers, A., & Teubner, G. (2022). Three liability regimes for artificial intelligence. Goethe University Frankfurt. https://www.jura.uni-frankfurt.de/131542927/BeckersTeubnerThree_Liability_Regimes_for_Artificial_Intelligence2022.pdf

Madiega, T. (2023). Artificial intelligence liability directive (Briefing No. PE 739.342). European Parliamentary Research Service. https://www.europarl.europa.eu/RegData/etudes/BRIE/2023/739342/EPRS_BRI(2023)739342_EN.pdf

