
Details of Grant 

EPSRC Reference: EP/S032355/1
Title: Robot In-hand Dexterous manipulation by extracting data from human manipulation of objects to improve robotic autonomy and dexterity - InDex
Principal Investigator: Resende Faria, Dr D
Other Investigators:
Vogiatzis, Dr G
Researcher Co-Investigators:
Project Partners:
Department: College of Engineering and Physical Sciences
Organisation: Aston University
Scheme: Standard Research - NR1
Starts: 01 March 2019
Ends: 28 February 2022
Value (£): 358,731
EPSRC Research Topic Classifications:
Artificial Intelligence
Image & Vision Computing
Robotics & Autonomy
Vision & Senses - ICT appl.
EPSRC Industrial Sector Classifications:
Information Technologies
Related Grants:
Panel History:  
Summary on Grant Application Form
Humans excel at dealing with everyday objects and manipulation tasks, at learning new skills, and at adapting to different or complex environments. This is a basic skill for our survival as well as a key feature of our world of artefacts and human-made devices. Our expert ability to use our hands results from a lifetime of learning, both by observing other skilled humans and by discovering first-hand how to handle objects ourselves.

Unfortunately, today's robotic hands cannot yet match human dexterity, nor are robotic systems fully able to understand their own capabilities. For robots to truly operate in a human world and fulfil expectations as intelligent assistants, they must be able to manipulate a wide variety of unknown objects, mastering strength, finesse and subtlety. To achieve such dexterity with robotic hands, cognitive capacity is needed to deal with uncertainties in the real world and to generalise previously learned skills to new objects and tasks. Furthermore, we assert that the complexity of programming must be greatly reduced and robot autonomy must become much more natural.

The InDex project aims to understand how humans perform in-hand object manipulation and to replicate the observed skilled movements with dexterous artificial hands, merging deep reinforcement learning and transfer learning to generalise in-hand skills across multiple objects and tasks. In addition, an abstraction and representation of prior knowledge will be fundamental to reproducing learned skills on different hardware. Learning will use data across multiple modalities, which will be collected, annotated and assembled into a large dataset. The data and our methods will be shared with the wider research community to allow testing against benchmarks and reproduction of results.

More concretely, the core objectives are:
(i) to build a multi-modal artificial perception architecture that extracts data from human manipulation of objects;
(ii) to create a multimodal dataset of in-hand manipulation tasks such as regrasping, reorienting and finely repositioning;
(iii) to develop an advanced object modelling and recognition system, including the characterisation of object affordances and grasping properties, so as to encapsulate both explicit information and possible implicit object usages;
(iv) to autonomously learn and precisely imitate human strategies in handling tasks; and
(v) to build a bridge between observation and execution, allowing deployment that is independent of the robot architecture.
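The summary describes merging deep reinforcement learning with transfer learning so that a manipulation skill learned on familiar objects generalises to new ones. The sketch below is purely illustrative and does not come from the project: it uses a plain linear model and synthetic data as stand-ins for the project's deep networks and real grasp recordings, and every name and dimension in it is hypothetical. It shows the transfer idea in miniature: a model pretrained on plentiful source-object data is fine-tuned with only a few samples from a related target object, rather than trained from scratch.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_grasp_data(n, true_w, noise=0.05):
    """Synthetic stand-in for (object features -> grasp parameter) pairs."""
    X = rng.normal(size=(n, true_w.size))
    y = X @ true_w + noise * rng.normal(size=n)
    return X, y

def fit(X, y, w, lr=0.05, steps=500):
    """Plain gradient descent on mean-squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

dim = 8
w_source = rng.normal(size=dim)                    # "skill" for familiar objects
w_target = w_source + 0.2 * rng.normal(size=dim)   # related but new object

# Pretrain on plentiful source-object data.
Xs, ys = make_grasp_data(2000, w_source)
w_pre = fit(Xs, ys, np.zeros(dim))

# Transfer: fine-tune on a handful of target-object samples, starting
# from the pretrained weights instead of from scratch, with few updates.
Xt, yt = make_grasp_data(20, w_target)
w_transfer = fit(Xt, yt, w_pre.copy(), steps=50)
w_scratch = fit(Xt, yt, np.zeros(dim), steps=50)

def mse(w_, X, y):
    return float(np.mean((X @ w_ - y) ** 2))

# Noise-free evaluation set for the target object.
Xe, ye = make_grasp_data(500, w_target, noise=0.0)
print("fine-tuned from source skill:", mse(w_transfer, Xe, ye))
print("trained from scratch:       ", mse(w_scratch, Xe, ye))
```

Under these assumptions, starting from the pretrained weights begins much closer to the target skill, so with few samples and few updates the fine-tuned model typically ends with lower error than the one trained from scratch, which is the practical payoff the summary attributes to transfer learning.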
Key Findings, Potential use in non-academic contexts, Impacts, and Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR): http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.aston.ac.uk