Object-level Planning: Bridging Human Knowledge and Task and Motion Planning

David Paulius, Brown University

3:30 p.m., January 13, 2026   |   303 Cushing Hall of Engineering

Task and motion planning (TAMP) interleaves higher-level symbolic planning (task planning) with lower-level trajectory planning (motion planning) to enable robots to autonomously solve complex everyday tasks. Task-level representations are typically abstract but must nevertheless encode a substantial amount of information, mixing object-level requirements (e.g., a bottle must be open for pouring) with robot-specific constraints (e.g., a gripper must be empty before it can pick up an object). These levels of planning are poorly suited to exploiting useful domain-independent knowledge from sources like language models, recipe books, and internet videos, which hinders generalization across both robots and domains.

In this talk, I will introduce an additional level of planning that acts as a natural interface between domain-independent knowledge and TAMP, which I call object-level planning. Object-level planning exploits rich object-level knowledge to bootstrap task-level planning by generating informative plan sketches close to the natural level of human conversation. I will show how these plan sketches bootstrap TAMP, substantially improving planning performance even in the absence of additional knowledge. I will then highlight my recent work that exploits large language models (LLMs) to generate object-level plans, which TAMP can then refine into executable plans. Overall, object-level planning outperforms alternative LLM-based methods and promises a more efficient, intuitive, and interpretable way of generating goal-directed plans.

David Paulius (he/him) is a postdoctoral researcher in the Intelligent Robot Lab at Brown University, working jointly with Professors George Konidaris and Stefanie Tellex. Before joining Brown University, he was a postdoctoral researcher in the Human-centered Assistive Robotics (HCR) group at the Technical University of Munich, Germany.

David received his Ph.D. in Computer Science & Engineering from the University of South Florida, where he was advised by Professor Yu Sun and began his journey in robotics and AI. He received his B.Sc. in Computer Science from the University of the Virgin Islands, St. Thomas, USVI. He was recognized as a Robotics: Science and Systems (RSS) Pioneer in 2022. David is interested in enabling robots to solve complex goal-directed tasks in dynamic human environments by exploiting commonly available but unstructured world knowledge. His work explores the integration of classical planning and foundation models for long-term autonomy and generalization across robots, task settings, and domains.