
Details of Grant 

EPSRC Reference: EP/H035885/1
Title: Learning Unconstrained Human Pose Estimation from Low-cost Approximate Annotation
Principal Investigator: Everingham, Dr M
Other Investigators:
Researcher Co-Investigators:
Project Partners:
Department: Sch of Computing
Organisation: University of Leeds
Scheme: First Grant - Revised 2009
Starts: 01 November 2010
Ends: 29 February 2012
Value (£): 100,510
EPSRC Research Topic Classifications:
Image & Vision Computing
EPSRC Industrial Sector Classifications:
Information Technologies
Related Grants:
Panel History:
Panel Date | Panel Name | Outcome
02 Feb 2010 | ICT Prioritisation Panel (Feb 10) | Announced
Summary on Grant Application Form
This research is in the area of computer vision - making computers which can understand what is happening in photographs and video. As humans we are fascinated by other humans, and we capture endless images of their activities: photographs of our family on holiday, video of sports events, or CCTV footage of people in a town centre. A computer capable of understanding what people are doing in such images could do many jobs for us, for example finding photos of our child waving, fast-forwarding to a goal in a football game, or spotting when someone starts a fight in the street. A fundamental step towards such aims is getting the computer to understand a person's pose: how are they standing, is their arm raised, where are they pointing? This pose estimation problem is easy for humans but very difficult for computers, because people vary so much in their pose, their body shape and the clothing they wear.
Much work has tried to solve this problem. It works well in particular settings, for example where people wear a special suit with markers to help find the limbs, but it fails on real-world pictures because it relies on simple "stick man" models of humans. We will investigate better models of how humans look by teaching the computer with many example pictures. This approach of learning from pictures, instead of building models by hand, is showing great progress, but it needs example pictures where the pose has been marked, or annotated, by a human annotator. Because annotating pictures is slow and tiresome, current methods make do with a few hundred pictures, and this is not enough to learn all the ways a human can appear. We will overcome this problem by annotating pictures only roughly, in a way which is very fast, so that we can annotate many pictures at low cost.
We will then develop methods by which the computer can learn from this rough annotation, working out what the corresponding exact annotation would be by combining many pictures with information we already know, such as how the human body is put together. By having many images to learn from, and methods for making use of rough annotation, we will be able to build stronger models of how humans look as they change their pose. This will lead to pose estimation methods which work better in the real world, and will contribute to longer-term aims in understanding human activity from photographs and video.
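To make the idea of correcting rough annotation with prior knowledge concrete, here is a minimal illustrative sketch (not the project's actual method): a roughly clicked joint position is snapped onto the circle of known limb length around its parent joint, using the kind of skeleton constraint ("how the human body is put together") described above. The function name and the specific limb lengths are hypothetical, chosen only for illustration.

```python
import math

def refine_joint(parent, approx_child, limb_length):
    """Snap a rough child-joint annotation onto the circle of known
    limb length around its parent joint (a simple skeleton prior).

    parent, approx_child: (x, y) coordinates in the image.
    limb_length: the known distance between the two joints.
    """
    dx = approx_child[0] - parent[0]
    dy = approx_child[1] - parent[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        # Degenerate annotation: direction is unknown, pick one arbitrarily.
        return (parent[0] + limb_length, parent[1])
    scale = limb_length / dist
    # Keep the annotated direction, but enforce the correct limb length.
    return (parent[0] + dx * scale, parent[1] + dy * scale)

# Example: shoulder at the origin, elbow roughly clicked at (3, 4),
# but the upper arm is known to be 2.5 units long.
refined = refine_joint((0.0, 0.0), (3.0, 4.0), 2.5)
# refined is (1.5, 2.0): same direction, corrected distance.
```

A real system would combine such constraints across many images and many joints at once, rather than correcting one joint in isolation; this sketch only shows the basic geometric idea.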
Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Impacts
Description: This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Summary
Date Materialised
Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.leeds.ac.uk