Using PDDL Planning to Ensure Safety in LLM-based Agents – Agustín Martinez Suñé

Zoom
About Event

Using PDDL Planning to Ensure Safety in LLM-based Agents

Agustín Martinez Suñé – Ph.D. in Computer Science | Postdoctoral Researcher (Starting Soon), OXCAV, University of Oxford

Large Language Model (LLM)-based agents have demonstrated impressive capabilities but still face significant safety challenges, with even the most advanced approaches often failing in critical scenarios. In this talk, I’ll explore how integrating PDDL (Planning Domain Definition Language) symbolic planning with LLM-based agents can help address these issues. By leveraging LLMs’ ability to translate natural-language instructions into formal PDDL specifications, we enable symbolic planning algorithms to enforce safety constraints throughout the agent’s execution. Our experimental results demonstrate how this approach ensures safety even under severe input perturbations and adversarial attacks, situations where traditional LLM-based planning falls short. This work suggests a potential pathway for deploying safer autonomous agents in real-world applications, and is a collaboration with Tan Zhi Xuan (MIT).
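
To give a rough sense of the pipeline described above, the Python sketch below illustrates the general idea rather than the speakers’ actual system: an LLM translates a natural-language instruction into a PDDL goal plus a safety constraint, and candidate plans are then checked symbolically against that constraint. The PDDL strings, translate_instruction, and violates_safety are hypothetical placeholders standing in for a real LLM call and a real planner or plan validator.

    # Illustrative sketch only: the PDDL text and helper functions are
    # hypothetical placeholders, not the system presented in the talk.

    # A natural-language instruction the agent receives.
    instruction = "Deliver the package to room B, but never enter room C."

    def translate_instruction(text):
        """Placeholder for an LLM call that maps text to PDDL fragments."""
        return {
            "goal": "(:goal (at package roomB))",
            # Safety requirement as a PDDL 3.0-style trajectory constraint.
            "constraint": "(:constraints (always (not (robot-in roomC))))",
        }

    def violates_safety(plan, forbidden_room="roomC"):
        """Toy stand-in for symbolic plan checking: flag any plan whose
        actions would enter the forbidden room."""
        return any(forbidden_room in action for action in plan)

    spec = translate_instruction(instruction)
    candidate_plans = [
        ["(move roomA roomC)", "(move roomC roomB)", "(drop package)"],
        ["(move roomA roomB)", "(drop package)"],
    ]

    # Keep only plans that respect the safety constraint, regardless of
    # how the instruction was phrased or perturbed.
    safe_plans = [p for p in candidate_plans if not violates_safety(p)]
    print(spec["goal"], spec["constraint"])
    print("safe plan:", safe_plans[0])

The point of the separation is that, however the instruction is phrased or perturbed, the safety constraint is enforced symbolically over the whole plan rather than left to the LLM’s judgment.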

GS AI seminars

The monthly seminar series on Guaranteed Safe AI brings together researchers to advance the field of building AI with high-assurance quantitative safety guarantees.
