Berkeley Multi-Agent Security Hackathon
This hackathon focuses on concrete problems in multi-agent security in the age of autonomous and agentic systems. We are especially interested in projects that highlight ways to enhance the performance and credibility/trust guarantees of agentic AI within the next 1-2 years. We encourage the use of an interdisciplinary toolbox, including economics, mechanism design, game theory, cryptography, and auction theory.
Some general directions we'd like to explore include:
Decentralized commitment devices (or, in general, formal contracts) for AI security and cooperation.
Collusion among generative-model agents via cryptographic contracts.
Simulation of financial markets (e.g., high-frequency trading, lending, market making) using generative agents. We aim to determine whether contracts can help stabilize these markets. For instance, can individually selfish agents bound by contracts achieve better outcomes than innately pro-social agents (i.e., agents with modified reward functions)?
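As a toy illustration of the last question, the sketch below models a one-shot prisoner's dilemma in which a binding contract fines defection. The payoff numbers, the `fine` parameter, and all function names are illustrative assumptions, not part of any hackathon starter code; the point is only that a sufficiently large fine (here, larger than the temptation-minus-reward gap) turns mutual cooperation into an equilibrium for purely selfish agents, without modifying their underlying reward functions.

```python
# Minimal sketch (illustrative payoffs; the contract fine is a hypothetical
# commitment device, not a real protocol).
from itertools import product

# Standard prisoner's dilemma payoffs for (my_action, their_action):
# R = 3 (mutual cooperation), S = 0, T = 5 (temptation), P = 1.
PAYOFF = {
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def payoff(me, other, fine=0.0):
    """Selfish payoff, minus a contractual fine if I defect."""
    return PAYOFF[(me, other)] - (fine if me == "D" else 0.0)

def nash_equilibria(fine=0.0):
    """Pure-strategy Nash equilibria of the symmetric game under a given fine."""
    eqs = []
    for a, b in product("CD", repeat=2):
        a_best = payoff(a, b, fine) >= max(payoff(x, b, fine) for x in "CD")
        b_best = payoff(b, a, fine) >= max(payoff(x, a, fine) for x in "CD")
        if a_best and b_best:
            eqs.append((a, b))
    return eqs

# Without a contract, mutual defection is the only equilibrium.
# With a fine above T - R = 2, mutual cooperation becomes the equilibrium.
```

Under these assumed payoffs, `nash_equilibria(0.0)` yields only `("D", "D")`, while `nash_equilibria(3.0)` yields only `("C", "C")`: the contract, rather than an altered reward function, is what sustains cooperation.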