EPSRC Reference: |
EP/M000702/1 |
Title: |
Predicting musical choices using computational models of cognitive and neural processing |
Principal Investigator: |
Pearce, Dr MT |
Other Investigators: |
|
Researcher Co-Investigators: |
|
Project Partners: |
|
Department: |
Sch of Electronic Eng & Computer Science |
Organisation: |
Queen Mary University of London |
Scheme: |
First Grant - Revised 2009 |
Starts: |
06 April 2015 |
Ends: |
05 December 2016 |
Value (£): |
100,224 |
EPSRC Research Topic Classifications: |
Artificial Intelligence |
Cognitive Psychology |
Cognitive Science Appl. in ICT |
Music & Acoustic Technology |
|
EPSRC Industrial Sector Classifications: |
|
Related Grants: |
|
Panel History: |
Panel Date | Panel Name | Outcome |
17 Jul 2014 | EPSRC ICT Prioritisation Panel - July 2014 | Announced |
|
Summary on Grant Application Form |
Music consumption has shifted dramatically in recent years towards streaming from vast music libraries, overwhelming the user with an enormous range of possible musical choices. This new landscape makes it imperative to develop intelligent tools that help listeners choose what to listen to. Research in music technology has traditionally followed a pure engineering approach, which has taken the field some way. However, progress is being hindered by the lack of a robust, scientifically grounded model of the listener that could inform digital music players (e.g., iTunes, Spotify, Last.fm) about users' preferences when selecting music.
The proposed research addresses this gap by developing the scientific knowledge needed to create computational models that can predict listeners' musical choices from features of the music and from electrical brain responses recorded using electroencephalography (EEG). The principal idea is to develop a scientific understanding of the psychological and neural processes involved when a listener chooses music to listen to. The hypothesis is that accurate predictions of a listener's musical choices can be made using a combination of psychological principles, musical features and electrical brain responses recorded from the scalp. The research has two strands: first, to conduct listener studies that identify those psychological principles, musical features and brain responses; and second, to use that knowledge to build a computational model that predicts a listener's choice of music.
The modelling approach includes three components to capture features of music that have an impact on musical choices. The first component uses acoustic features such as dissonance and temporal regularity extracted from the audio using signal processing methods. The second component takes a higher-level cognitive approach, extracting measures of complexity using information-theoretic models based on note-level representations of music. The third component extracts the emotional intentions of the music from affective textual analysis of the lyrical content. Understanding the exact nature and weighting of these features and how they impact on musical choice requires the detailed examination of the choices that listeners make when listening to music.
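To make the second, information-theoretic component concrete, the following is a minimal illustrative sketch in Python rather than the project's actual model. It computes the mean information content, in bits per note, of a pitch sequence under a simple first-order Markov model with Laplace smoothing, assuming MIDI pitch numbers as the note-level representation; higher values indicate music that is, on average, less predictable and hence more complex.

# Illustrative sketch only: mean information content (surprisal) of a note
# sequence under a bigram model with Laplace smoothing, trained on the
# sequence itself. Pitches are assumed to be MIDI note numbers.
import math
from collections import defaultdict

def mean_information_content(pitches, alphabet_size=128, alpha=1.0):
    """Average surprisal (bits per note); higher means more complex."""
    bigram = defaultdict(lambda: defaultdict(int))
    unigram = defaultdict(int)
    for prev, curr in zip(pitches, pitches[1:]):
        bigram[prev][curr] += 1
        unigram[prev] += 1
    total_bits = 0.0
    for prev, curr in zip(pitches, pitches[1:]):
        p = (bigram[prev][curr] + alpha) / (unigram[prev] + alpha * alphabet_size)
        total_bits += -math.log2(p)
    return total_bits / max(len(pitches) - 1, 1)

# A repetitive scale fragment scores lower than a more irregular line.
print(mean_information_content([60, 62, 64, 62, 60, 62, 64, 62, 60]))
print(mean_information_content([60, 67, 58, 71, 63, 55, 70, 61, 66]))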
Therefore, investigating the behaviour of actual listeners is central to this research, and two studies will be performed. The first focuses on collecting EEG data while participants listen to musical excerpts and select them for future listening. Machine learning methods will be used to predict the listeners' decisions from features of the time-varying neural response recorded before the choice is made. The purpose of the second study is to collect data for predictive modelling of choices from features of the music itself and attributes of the listener. This study will involve a larger range of musical excerpts and a wider range of listeners than is practical for the EEG study. The objective is to understand the psychological processes involved in musical choice and to use this knowledge to refine, parameterise and optimise the computational models of musical choice.
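As an illustration of the machine-learning step, the sketch below is hypothetical: the epoch dimensions, frequency bands and classifier are assumptions, not the project's pipeline. It trains a logistic-regression classifier on log band-power features extracted from pre-choice EEG epochs and reports cross-validated accuracy on synthetic placeholder data.

# Hypothetical sketch: predict a binary listening choice from band-power
# features of pre-choice EEG epochs using scikit-learn.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(epochs, sfreq=256.0):
    """epochs: (n_trials, n_channels, n_samples) -> (n_trials, n_channels * n_bands)."""
    freqs, psd = welch(epochs, fs=sfreq, nperseg=min(256, epochs.shape[-1]), axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[..., mask].mean(axis=-1))   # mean power per channel and band
    return np.log(np.concatenate(feats, axis=-1))     # log band power per trial

# Synthetic placeholder data standing in for real recordings and choices.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((120, 14, 512))          # 120 trials, 14 channels, 2 s at 256 Hz
choices = rng.integers(0, 2, size=120)                # 1 = selected for future listening

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, band_power_features(epochs), choices, cv=5)
print("cross-validated accuracy:", scores.mean())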
The final stage of the research will develop an integrated predictive model of musical choice by combining models that predict from the musical signal with those that predict from the neural signal. The development of such an integrated model is highly innovative. The advent within the last two years of affordable, multi-channel, wireless EEG headsets for the consumer market makes using these devices to control media players and other interactive systems a real possibility. The time is therefore ripe to combine research in neuroscience, music cognition and machine learning, reflecting the unique interdisciplinary expertise of the PI, to understand the mapping between neural signals, musical structure and song selections.
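One simple way such an integration could work, shown purely as a hypothetical sketch, is late fusion: each model outputs a probability that an excerpt will be selected, and the two probabilities are combined as a weighted average, with the weight tuned on held-out data. The names and values below are illustrative only.

# Hypothetical sketch of late fusion of the two predictors.
import numpy as np

def fuse_predictions(p_music, p_eeg, weight=0.5):
    """Combine per-excerpt selection probabilities from the two models."""
    p_music, p_eeg = np.asarray(p_music), np.asarray(p_eeg)
    return weight * p_music + (1.0 - weight) * p_eeg

p_music = np.array([0.72, 0.31, 0.55])   # from acoustic, complexity and lyric features
p_eeg = np.array([0.60, 0.45, 0.20])     # from pre-choice neural responses
print(fuse_predictions(p_music, p_eeg, weight=0.6) > 0.5)   # predicted selections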
|
Key Findings |
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
|
Potential use in non-academic contexts |
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
|
Impacts |
Description |
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk |
Summary |
|
Date Materialised |
|
|
Sectors submitted by the Researcher |
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
|
Project URL: |
|
Further Information: |
|
Organisation Website: |
|