
Details of Grant 

EPSRC Reference: EP/N035399/1
Title: DEVA - Autonomous Driving with Emergent Visual Attention
Principal Investigator: Pugeault, Dr N
Other Investigators:
Researcher Co-Investigators:
Project Partners:
Jaguar Land Rover Limited
TRL Ltd (Transport Research Laboratory)
Department: Engineering, Computer Science and Maths
Organisation: University of Exeter
Scheme: First Grant - Revised 2009
Starts: 01 December 2016
Ends: 31 May 2018
Value (£): 98,938
EPSRC Research Topic Classifications:
Image & Vision Computing
Vision & Senses - ICT appl.
EPSRC Industrial Sector Classifications:
No relevance to Underpinning Sectors
Related Grants:
Panel History:
Panel Date: 28 Apr 2016
Panel Name: EPSRC ICT Prioritisation Panel - Apr 2016
Outcome: Announced
Summary on Grant Application Form
How does a racing driver get around a track? Approaching a bend, a driver must monitor the road, steer around curves, manage speed and plan a trajectory that avoids collisions with other cars - all of it fast and accurately. For robots this remains a challenge: despite decades of progress in computer vision, artificial vision systems still fall far short of human vision in performance, robustness and speed. As a consequence, current prototypes of self-driving cars rely on a wide array of sensors to compensate for the limitations of their visual perception. One crucial aspect that distinguishes human from artificial vision is our capacity to focus and shift our attention. This project will propose a new model of visual attention for a robot driver and investigate how the focusing of attention can be learnt automatically from the robot's attempts to improve its driving.

How and where we focus our attention when solving a task such as driving is studied by psychologists, and the many models of attention fall into two categories: top-down models capture how world knowledge and expectations guide our attention when performing a specific task, while bottom-up models characterise how properties of the visual signal cause specific regions to capture our attention, a property often referred to as saliency. Yet, from a robotics perspective, a unified framework describing the interplay of bottom-up and top-down attention is still lacking, especially for a dynamic, time-critical task such as driving. In the racing scenario described above, the driver must act quickly and decisively to steer around bends and avoid obstacles - efficient use of attention is therefore critical.
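
To make the bottom-up notion concrete, a minimal centre-surround saliency map can be sketched in a few lines. The intensity-only channel, the choice of scales and the NumPy/SciPy implementation below are illustrative assumptions, not the model the project proposes:

import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(image):
    """Bottom-up saliency for a greyscale image (2-D float array)."""
    # Centre-surround contrast: fine-scale response minus coarse-scale
    # context, accumulated over (centre sigma, surround sigma) pairs.
    acc = np.zeros_like(image, dtype=float)
    for centre_sigma, surround_sigma in [(1, 4), (2, 8), (4, 16)]:
        centre = gaussian_filter(image, sigma=centre_sigma)
        surround = gaussian_filter(image, sigma=surround_sigma)
        acc += np.abs(centre - surround)      # conspicuity at this scale
    acc -= acc.min()
    return acc / (acc.max() + 1e-8)           # normalise to [0, 1]

Regions that differ strongly from their surroundings (edges, isolated objects) score highly regardless of the task, which is exactly what a purely bottom-up model captures and why it must be complemented by top-down, task-driven control.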

This project will investigate the hypothesis that our attention mechanisms are learnt on a task-specific basis, in such a way as to provide our visual system with optimal information for performing the task. We will investigate how state-of-the-art computer vision and machine learning approaches can be used to learn attention, perception and action jointly, allowing a robot driver to compete with humans on a racing simulator using visual perception alone.
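
As a sketch of what learning attention, perception and action jointly could look like, the following PyTorch module learns a soft spatial attention mask end-to-end from the driving objective alone, so where the network attends emerges from trying to drive better. The architecture, layer sizes and the two-output policy (steering, speed) are assumptions for illustration, not the project's specified design:

import torch
import torch.nn as nn

class AttentionDriver(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(            # shared visual encoder
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
        )
        self.attend = nn.Conv2d(64, 1, 1)         # 1x1 conv -> attention logits
        self.policy = nn.Linear(64, 2)            # outputs: steering, speed

    def forward(self, img):                       # img: (B, 3, H, W)
        f = self.features(img)                    # (B, 64, h, w)
        b, c, h, w = f.shape
        a = torch.softmax(self.attend(f).view(b, -1), dim=1).view(b, 1, h, w)
        pooled = (f * a).sum(dim=(2, 3))          # attention-weighted pooling
        return self.policy(pooled), a             # actions + attention map

Trained by regressing recorded (image, action) pairs from a racing simulator, no saliency supervision is ever given: the mask is shaped only by whatever weighting of the image helps the driving objective, which is the sense in which the attention is "emergent".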

A generic learning framework for task-specific attention will be developed that is applicable across a broad range of visual tasks, and has the potential to narrow the gap with human performance through a critical reduction in current processing times.
Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Impacts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.ex.ac.uk