
Details of Grant 

EPSRC Reference: EP/H027203/1
Title: Structured Sparsity Methods in Machine Learning and Convex Optimisation
Principal Investigator: Pontil, Professor M
Other Investigators:
Researcher Co-Investigators:
Project Partners:
Department: Computer Science
Organisation: UCL
Scheme: Standard Research
Starts: 01 September 2010
Ends: 31 August 2014
Value (£): 220,703
EPSRC Research Topic Classifications:
Artificial Intelligence; Mathematical Aspects of OR
EPSRC Industrial Sector Classifications:
No relevance to Underpinning Sectors
Related Grants:
EP/H02686X/1
Panel History:
Panel Date     Panel Name                          Outcome
15 Dec 2009    ICT Prioritisation Panel (Dec 09)   Announced
Summary on Grant Application Form
Over the past ten years, theoretical developments in machine learning (ML) have had a significant impact on statistics, applied mathematics and other fields of scientific research. In particular, fruitful interactions between ML and numerical optimisation have emerged that are expected to lead to theoretical and algorithmic breakthroughs, with the potential to render ML methodologies significantly more applicable to problems of practical importance. The proposed project aims to make significant UK contributions at a crucial juncture in this emerging interdisciplinary field, which has so far been dominated by the US and France.

Many ML techniques can be cast as problems of minimising an objective function over a large set of parameters. Examples include support vector machines as well as more recent techniques for semi-supervised learning and multi-task learning. Often the objective function is convex, so ideas from convex optimisation are becoming increasingly important in the design, implementation and analysis of learning algorithms. Up to now, however, ML has almost exclusively resorted to off-the-shelf methods for convex optimisation, without substantially exploiting the rich theory behind this field. A thesis of this proposal is that a deeper interplay between ML and numerical optimisation is needed. Ultimately, bridging the two communities will facilitate communication, allow the power of core optimisation to be brought to bear on ML, and lead to new frontiers in optimisation.

An area in which this interplay has a particularly important role to play is the use of sparsity-inducing optimisation problems. A rationale for sparsity-inducing models is the observation that when the number of model parameters is much larger than the number of observations, a sparse choice of parameters is strongly desirable for fast and accurate learning. Building on the success of such models, we believe the time is now right to develop a new line of algorithms for matrix learning problems under structured sparsity constraints. This means that many components of the parameter matrix, or of a decomposition thereof, are zero in locations related via some rule (e.g. the matrix may be constrained to have many zero rows, many zero eigenvalues, or sparse eigenvectors). Perhaps the best-known examples in which structured sparsity has proved beneficial are collaborative filtering, where the objective function is chosen to favour low-rank matrices, and multi-task learning, where the objective function is chosen to favour few common relevant variables across different regression equations; both are sketched schematically after this summary. These types of optimisation problems have only recently started to be addressed in ML and optimisation, and several fundamental problems remain open, most importantly the study of efficient algorithms which exploit the underlying sparsity assumptions and a statistical learning analysis of the methods.

Our proposal is multidisciplinary and involves substantial exchange of ideas between Computer Science (Machine Learning) and Mathematics (Numerical Optimisation), with three main goals. Firstly, we aim to develop novel and efficient algorithms for learning large structured matrices; fast convergence of the algorithms should be guaranteed when applied to problem data that have a sparse solution.
Secondly, in the cases where the assumed sparsity structure leads to NP-hard problems and the first goal is unachievable (this is often the case under low-rank assumptions), we aim to identify tractable convex relaxations and understand their impact on sparsity. Thirdly, we aim for models and algorithms that have a more natural interpretation than generic solvers (e.g., a minimax statistical justification), which should make it more likely that practitioners will embrace the new methodology.
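Schematically, the matrix learning problems referred to above can be written as regularised convex programs. The following formulation is an editor's illustrative sketch, not notation taken from the proposal. A parameter matrix $W$ is learned by solving

\[ \min_{W \in \mathbb{R}^{d \times T}} \; L(W) + \lambda\,\Omega(W), \]

where $L$ is a convex empirical loss, $\lambda > 0$, and the penalty $\Omega$ encodes the structure. Taking $\Omega(W) = \sum_{i=1}^{d} \lVert w^i \rVert_2$ (the $\ell_{2,1}$ norm over the rows $w^i$) favours many zero rows, i.e. few relevant variables shared across the $T$ tasks, as in multi-task learning; taking $\Omega(W) = \lVert W \rVert_* = \sum_j \sigma_j(W)$ (the trace norm, the sum of the singular values) favours low rank, as in collaborative filtering, and is the standard tractable convex relaxation of the NP-hard rank penalty referred to in the second goal.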
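To make the first goal concrete, below is a minimal proximal-gradient (ISTA-style) sketch for the $\ell_{2,1}$-penalised least-squares problem above. It is an editor's illustration: the function names (prox_l21, ista_multitask) and the assumption of a single design matrix shared across tasks are choices made here, not the algorithms developed in the project.

    import numpy as np

    def prox_l21(W, t):
        # Proximal operator of t * sum_i ||row_i(W)||_2:
        # row-wise (block) soft-thresholding.
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
        return scale * W

    def ista_multitask(X, Y, lam, n_iter=500):
        # Proximal gradient (ISTA) for
        #   min_W 0.5 * ||X W - Y||_F^2 + lam * sum_i ||row_i(W)||_2
        # X: (n, d) shared design matrix; Y: (n, T) responses, one column per task.
        d, T = X.shape[1], Y.shape[1]
        W = np.zeros((d, T))
        step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
        for _ in range(n_iter):
            grad = X.T @ (X @ W - Y)            # gradient of the smooth loss
            W = prox_l21(W - step * grad, step * lam)
        return W

    # Toy usage: 5 of 50 variables are relevant, shared across 5 tasks.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 50))
    W_true = np.zeros((50, 5))
    W_true[:5] = rng.standard_normal((5, 5))
    Y = X @ W_true + 0.1 * rng.standard_normal((100, 5))
    W_hat = ista_multitask(X, Y, lam=5.0)
    print((np.linalg.norm(W_hat, axis=1) > 1e-8).sum(), "nonzero rows")

Each iteration costs one gradient step plus a closed-form row-wise shrinkage, which is what makes proximal schemes of this kind a natural starting point for the large structured matrix problems the proposal targets.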
Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Impacts
Description: This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Summary:
Date Materialised:
Sectors submitted by the Researcher: This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: