
Details of Grant 

EPSRC Reference: EP/X017915/1
Title: Exploring the multiple loci of learning and computation in simple artificial neural networks
Principal Investigator: Bowers, Professor J
Other Investigators:
Evans, Dr BD
Researcher Co-Investigators:
Project Partners:
Department: Experimental Psychology
Organisation: University of Bristol
Scheme: Standard Research - NR1
Starts: 01 November 2022
Ends: 30 April 2024
Value (£): 199,342
EPSRC Research Topic Classifications:
Artificial Intelligence
EPSRC Industrial Sector Classifications:
No relevance to Underpinning Sectors
Related Grants:
Panel History:
Panel Date | Panel Name | Outcome
21 Jun 2022 | New Horizons 2021 Full Proposal Panel | Announced
22 Jun 2022 | New Horizons AI and Data Science Panel June 2022 | Announced
Summary on Grant Application Form
Two basic observations about neurons are: a) when activated, they emit rapid sequences of action potentials known as spike-trains, and b) they vary dramatically in their morphology (e.g., shape and size) in ways that impact how they function. For example, neurons vary dramatically in how quickly they pass information along their axons (due to variation in axon diameter and myelination, for example) and in how long they integrate incoming signals (due to membrane time constants). By contrast, most deep neural networks (DNNs) developed in computer science do not include spiking units (artificial neurons), and all units are identical apart from their connection weights. Nevertheless, DNNs are frequently described as the "best" models of human vision and are claimed to provide important insights into how the brain functions more generally. The assumption has been that DNNs can still be functionally equivalent to brains despite ignoring these features of neurons.

However, there is growing evidence that DNNs often function in qualitatively different ways from brains, and this raises the important question of how to make DNNs better models of the brain. And even if DNNs can be made functionally equivalent to brains, it is still important to understand how neural spiking and neural diversity are used for brain computation. In this project we harness evolutionary algorithms and state-of-the-art learning methods to train spiking neural networks that vary not only in the connection weights between units (as standard), but also in the time it takes units to pass on information (i.e., the conduction time of a neuron), the time over which units can integrate signals (i.e., the time constant of a neuron), and their intrinsic excitability (the ease with which they fire). By allowing units to adapt/learn in all these different ways, we will be able to explore the advantages of learning outside the synapse under different training conditions, including tasks that involve identifying spatial patterns (e.g., written numbers) and temporal patterns (e.g., spoken words). It may be that adapting time-based attributes of neurons is particularly important when identifying temporal patterns. We hope the results will provide insight into the morphological variation of neurons, start to bridge the gap between artificial and biological neural networks, and identify computational advantages of learning and computing outside the synapse.
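The three neuron-level properties named above (membrane time constant, intrinsic excitability, and conduction time) can be illustrated with a minimal leaky integrate-and-fire sketch. This is not the project's actual model; it is a hypothetical toy in pure Python showing why a per-unit time constant matters for temporal patterns: a unit with a long time constant accumulates a brief input burst and fires, while a leaky (short time constant) unit does not, and a conduction delay simply shifts outgoing spike times.

```python
import math

def lif_spikes(inputs, tau, threshold, dt=1.0):
    """Simulate one leaky integrate-and-fire unit.

    inputs:    input current at each time step
    tau:       membrane time constant (larger -> longer integration window)
    threshold: firing threshold (lower -> more intrinsically excitable)
    Returns the list of time steps at which the unit spikes.
    """
    v = 0.0
    decay = math.exp(-dt / tau)  # per-step exponential leak
    spikes = []
    for t, i_in in enumerate(inputs):
        v = v * decay + i_in     # leak, then integrate the input
        if v >= threshold:
            spikes.append(t)
            v = 0.0              # reset membrane potential after a spike
    return spikes

def delayed(spike_times, delay):
    """Apply an axonal conduction delay to outgoing spikes."""
    return [t + delay for t in spike_times]

# A brief sub-threshold burst: no single input reaches threshold, so
# only a slowly-leaking unit can integrate across steps and fire.
burst = [0.4] * 5
slow = lif_spikes(burst, tau=5.0, threshold=1.0)  # integrates and fires
fast = lif_spikes(burst, tau=0.5, threshold=1.0)  # leaks too fast; silent
```

In this toy, making `tau`, `threshold`, and `delay` per-unit learnable parameters (alongside the usual weights) is the kind of extra adaptive capacity the project proposes to explore.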

Key Findings, Potential Use in Non-Academic Contexts, Impacts, and Sectors Submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.bris.ac.uk