Weekly AI Alignment Paper Reading Groups
AI Alignment Paper Dissection: 10-Week Event
(Source: https://beta.ai-plans.com)
Time: Fridays, 15:00 UTC (1.5-hour sessions)
Platform: Discord: https://discord.gg/dv9aNAZFSg
Each Reading Group is led by a different person and has its own research focus. Pick the ticket for the Research Area that interests you most.
If you're interested in leading a reading group, email kabir@ai-plans.com
AI Alignment Through Ethics
Focus Areas: Agent Foundations, Decision Theory, and Ethics (papers from the AI-Plans collection)
This is a 10-week recurring event where we'll critically analyze three papers from each of the above focus areas. At the end of the program, we'll compile key insights into a literature review paper using LLMs and structured editing.
How It Works:
After each Friday session, a poll with 4–5 paper choices will be posted.
The paper with the most votes will be finalized by Sunday as the reading for the next session.
Each participant must read the selected paper and prepare 15 thoughts; suggested distribution: 7 vulnerabilities, 4 strengths, 3 general thoughts, and 1 key insight.
During the session, we'll discuss these points, with open commenting and debate.
Final Review & Literature Paper
In the final (10th) session, we'll revisit all discussions with a broader perspective, distill the insights into a literature review paper, and explore its potential applications.
You can be a co-author on the resulting literature review. We'll identify the most pressing problems in the field and make them available to current and prospective AI Ethics researchers so they can work on them.
Commitment & Expectations
Expect to dedicate 4–5 hours per week to reading the paper thoroughly.
Sessions are interconnected, so consistent attendance is highly encouraged.
If you're serious about AI Alignment research and willing to engage deeply, this will be an enriching experience! Feel free to reach out if you have any questions.
MCP & Persistent Memory
Led by Graham dePenros