
Details of Grant 

EPSRC Reference: EP/M015777/1
Title: ALOOF: Autonomous Learning of the Meaning of Objects
Principal Investigator: Parker, Professor D
Other Investigators:
Researcher Co-Investigators:
Project Partners:
Department: School of Computer Science
Organisation: University of Birmingham
Scheme: Standard Research - NR1
Starts: 31 December 2014
Ends: 29 June 2018
Value (£): 340,806
EPSRC Research Topic Classifications:
Artificial Intelligence
Image & Vision Computing
Information & Knowledge Mgmt
Robotics & Autonomy
EPSRC Industrial Sector Classifications:
No relevance to Underpinning Sectors
Related Grants:
Panel History:  
Summary on Grant Application Form
When working with and for humans, robots and autonomous systems must know about the objects involved in human activities, e.g. the parts and tools in manufacturing, the professional items used in service applications, and the objects of daily life in assisted living. While great progress has been made in object instance and class recognition, a robot is always limited to knowing about the objects it has been trained to recognize. The goal of ALOOF is to enable robots to exploit the vast amount of knowledge on the Web in order to learn about previously unseen objects and to use this knowledge when acting in the real world. We will develop techniques that allow robots to use the Web not just to learn the appearance of new objects, but also to learn their properties, including where they might be found in the robot's environment.

To achieve our goal, we will provide a mechanism for translating between the representations robots use in their real-world experience and those found on the Web. Our proposed translation mechanism is a meta-modal representation (i.e. a representation which contains and structures representations from other modalities), composed of meta-modal entities and relations between them. A single entity represents a single object type, and is composed of modal features extracted from robot sensors or the Web. The combined features are linked to the semantic properties associated with each entity. The robot's collection of meta-modal entities is organized into a structured ontology, supporting formal reasoning. This representation is complemented with methods for detecting gaps in the knowledge of the robot (i.e. unknown objects and properties), and for planning how to fill these gaps. As the robot's main source of new knowledge will be the Web, we will also contribute techniques for extracting relevant knowledge from Web resources using novel machine reading and computer vision algorithms.
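
To make the structure of this representation concrete, the following Python sketch is purely illustrative and not part of the project; all names (MetaModalEntity, Ontology, knowledge_gaps) are hypothetical. It shows one entity per object type bundling modal features with semantic properties, a parent-linked ontology supporting simple is-a reasoning, and a gap detector that flags entities missing required properties, which in the project would trigger Web-based learning or active sensing.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class MetaModalEntity:
    """One entity per object type, combining modal features
    (from robot sensors or the Web) with semantic properties."""
    name: str                                    # object type, e.g. "mug"
    parent: Optional[str] = None                 # ontology link, e.g. "container"
    modal_features: Dict[str, List[float]] = field(default_factory=dict)
    # e.g. {"vision": [...], "web_text": [...]}
    properties: Dict[str, str] = field(default_factory=dict)
    # e.g. {"typical_location": "kitchen"}

class Ontology:
    """Structured collection of meta-modal entities, supporting
    simple is-a reasoning and knowledge-gap detection."""

    def __init__(self) -> None:
        self.entities: Dict[str, MetaModalEntity] = {}

    def add(self, entity: MetaModalEntity) -> None:
        self.entities[entity.name] = entity

    def is_a(self, name: str, ancestor: str) -> bool:
        # Walk parent links upward to answer is-a queries.
        current = self.entities.get(name)
        while current is not None:
            if current.name == ancestor:
                return True
            current = self.entities.get(current.parent) if current.parent else None
        return False

    def knowledge_gaps(self, required: List[str]) -> Dict[str, List[str]]:
        # A gap is an entity missing a required property; each gap
        # would be filled via Web extraction or active sensing.
        return {
            name: [p for p in required if p not in e.properties]
            for name, e in self.entities.items()
            if any(p not in e.properties for p in required)
        }

onto = Ontology()
onto.add(MetaModalEntity("container"))
onto.add(MetaModalEntity("mug", parent="container",
                         properties={"typical_location": "kitchen"}))
onto.add(MetaModalEntity("whisk"))               # previously unseen object
print(onto.is_a("mug", "container"))             # True
print(onto.knowledge_gaps(["typical_location"])) # {'whisk': ['typical_location']}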

By linking meta-modal representations with the perception and action capabilities of robots, we will achieve an innovative and powerful mix of Web-supported and physically-grounded life-long learning. Our scenario consists of an open-ended domestic setting in which robots have to find objects. Our measure of progress will be how many knowledge gaps (i.e. situations where the robot has incomplete information about objects) can be resolved autonomously given specific prior knowledge. We will integrate the results on multiple mobile robots, including the MetraLabs SCITOS robot and the home service robot HOBBIT.

Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Impacts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.bham.ac.uk