Frontiers: Neuro-Symbolic Adaptation for Autonomous Agents
Welcome to Frontiers, a series where we bring top researchers, engineers, designers, and leaders working at the cutting edge of their fields to go deep on their work with the Manifold community.
For this talk, our speaker will be Helen Lu. Helen holds a bachelor's in psychology from UC Berkeley and a master's in computer science from Georgia Tech. Now a Ph.D. student at Tufts, she explores neuro-symbolic AI and human-in-the-loop machine learning. Her current work focuses on integrating generative AI into robots’ cognitive systems to enhance human-robot collaboration on creative tasks.
Abstract
In dynamic open-world environments, autonomous agents often encounter novelties that hinder their ability to find plans to achieve their goals. Specifically, traditional symbolic planners fail to generate plans when the robot's knowledge base lacks the operators that can enable it to interact appropriately with novel objects in the environment. We propose a neuro-symbolic architecture that integrates symbolic planning, reinforcement learning, and a large language model (LLM) to adapt to novel objects. In particular, we leverage the common sense reasoning capability of the LLM to identify missing operators, generate plans with the symbolic AI planner, and guide the reinforcement learning agent in learning the new operators.
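To make the proposed pipeline concrete, here is a minimal, hypothetical Python sketch of the adaptation loop the abstract describes: when symbolic planning fails on a novel object, an LLM proposes the missing operator, a reinforcement learning agent learns to execute it, and the planner replans with the expanded knowledge base. The class names, method signatures, and data layout (SymbolicPlanner-style `planner`, `llm`, `rl_learner`, and the `Operator` fields) are illustrative assumptions, not the speaker's actual implementation.

```python
# Hypothetical sketch of the neuro-symbolic adaptation loop from the abstract.
# All object interfaces below are illustrative placeholders.

from dataclasses import dataclass, field


@dataclass
class Operator:
    """A planning operator in the style of a PDDL action."""
    name: str
    preconditions: list[str] = field(default_factory=list)
    effects: list[str] = field(default_factory=list)
    policy: object = None  # low-level executor, learned via RL


def adapt_to_novelty(planner, llm, rl_learner, state, goal):
    """Plan toward the goal; when planning fails because of a novel object,
    ask the LLM for a candidate operator, learn it with RL, and replan."""
    plan = planner.plan(state, goal)
    while plan is None:
        # 1. Use the LLM's common-sense reasoning to propose the missing
        #    operator (name, preconditions, effects) for the novel object.
        proposal = llm.propose_operator(state=state, goal=goal,
                                        known_operators=planner.operators)
        candidate = Operator(name=proposal["name"],
                             preconditions=proposal["preconditions"],
                             effects=proposal["effects"])

        # 2. Ground the symbolic operator with a low-level policy learned by
        #    the RL agent, guided by the LLM-proposed preconditions/effects.
        candidate.policy = rl_learner.learn(candidate, state)

        # 3. Add the new operator to the knowledge base and replan.
        planner.add_operator(candidate)
        plan = planner.plan(state, goal)
    return plan
```

The key design idea, as described in the abstract, is that the LLM supplies symbolic structure (which operator is missing and what it should achieve), while reinforcement learning supplies the grounded behavior needed to carry that operator out.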
We’re growing our core team and pursuing new projects. If you’re interested in working together, see our website for active initiatives and open positions, join the conversation on Discord, and check out our GitHub.
If you want to see more of our updates as we work to explore and advance the field of Intelligent Systems, follow us on Twitter and LinkedIn!