The aim of the project is to advance technologies for, and to understand the principles of, cognition and control in complex systems. We will meet this challenge by advancing methods for object perception, representation, and manipulation so that a robot can robustly manipulate objects even when those objects are unfamiliar and even though its perception and action are unreliable. The proposal is founded on two assumptions. The first is that the representation of an object's shape in particular, and of its other properties in general, benefits from being compositional (loosely, hierarchical and part-based). The second is that manipulation planning and execution benefit from explicit reasoning about uncertainty in object pose, shape, and other properties, and about how that uncertainty changes under the robot's actions; the robot should therefore plan actions that not only achieve the task but also gather information to make task achievement more reliable. These two assumptions are mirrored in the structure of the proposed work, which comprises two main strands:
i) a multi-modal, compositional, probabilistic representation of object properties to support perception and manipulation, and ii) algorithms for reasoning with this representation that estimate object properties from visual and haptic data, and that plan how to actively gather information about shape and other object properties (e.g. friction coefficients, mass) while achieving a task. These two strands will be combined and tested on robots performing aspects of a dishwasher-loading task. The outcome will be robust manipulation (i.e. manipulation under unreliable perception and action) of unfamiliar objects that belong to familiar categories or have familiar parts.
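To make the active information-gathering idea in strand ii) concrete, the following is a minimal illustrative sketch, not the project's actual algorithm: a particle belief over one unknown object property (a friction coefficient, probed by test pushes) is maintained, and the robot picks the probe whose predicted outcomes most shrink the belief's spread. All function names and the toy push model are assumptions introduced purely for illustration.

```python
import random
import math

random.seed(0)

def push_outcome(friction, force, noise=0.05):
    """Toy observation model: does a push of the given force move the object?"""
    return force > friction + random.gauss(0.0, noise)

def update_belief(particles, force, observed_moved):
    """Keep particles whose predicted outcome matches the observed push."""
    kept = [f for f in particles if (force > f) == observed_moved]
    return kept if kept else particles  # guard against emptying the belief

def belief_spread(particles):
    """Standard deviation of the particle belief."""
    mean = sum(particles) / len(particles)
    return math.sqrt(sum((f - mean) ** 2 for f in particles) / len(particles))

def expected_spread_after(particles, force):
    """Expected posterior spread, averaged over the probe's predicted outcomes."""
    moved = [f for f in particles if force > f]
    stuck = [f for f in particles if force <= f]
    n = len(particles)
    total = 0.0
    for subset in (moved, stuck):
        if subset:
            total += (len(subset) / n) * belief_spread(subset)
    return total

# Prior belief: friction coefficient somewhere in [0.1, 0.9].
belief = [random.uniform(0.1, 0.9) for _ in range(1000)]
candidate_forces = [0.2, 0.4, 0.5, 0.6, 0.8]
true_friction = 0.55

for step in range(5):
    # Choose the probe expected to reduce uncertainty the most.
    probe = min(candidate_forces, key=lambda f: expected_spread_after(belief, f))
    moved = push_outcome(true_friction, probe)
    belief = update_belief(belief, probe, moved)

estimate = sum(belief) / len(belief)
print(f"estimated friction ~ {estimate:.2f}, spread {belief_spread(belief):.3f}")
```

The probe-selection criterion here (expected reduction in belief spread) stands in for whatever information measure the project adopts; the point is only that action selection trades off task progress against uncertainty reduction.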