
Details of Grant 

EPSRC Reference: GR/J50699/01
Principal Investigator: Clarkson, Professor T
Other Investigators:
Taylor, Professor J
Researcher Co-Investigators:
Project Partners:
Department: Electronic Engineering
Organisation: Kings College London
Scheme: Standard Research (Pre-FEC)
Starts: 18 February 1994 Ends: 17 February 1997 Value (£): 101,778
EPSRC Research Topic Classifications:
EPSRC Industrial Sector Classifications:
Related Grants:
Panel History:  
Summary on Grant Application Form
The objectives are: to improve our knowledge of the reinforcement learning mechanisms and stored knowledge representation in pRAM neural networks through the use of visualisation techniques; to develop optimal training methods which ensure that patterns learned by a net are retained at maximum Hamming distances; to investigate the generalisation and classification performance of a pRAM network under various levels of training noise; and to find means of optimising the noise level.
Progress:
Learning transformed prototypes (LTP) is a statistical pattern recognition method for pRAM neural networks. Developed from the pRAM reinforcement learning rule, the LTP algorithm trains a neural net to perform a stochastic mapping of input sets to self-organised output codebook vectors, or prototypes, in the binary domain. Unlike algorithms in conventional neural networks such as LVQ (learning vector quantisation) and SOFM (self-organising feature maps) (e.g. Kohonen 1982, 1989 and 1992), where the network structure is limited to a single layer, LTP allows multi-layer networks. The favoured responses of the network (or "winners") in LTP are not selected as winning neuron nodes, as they are in SOFM, but as winning output vectors (prototypes) of the network, drawn from an output hyperspace of 2^n vectors created from only n output neurons. The means of selecting winners in LTP also differs from methods that directly compare input vectors with winner-associated weights, and the dimension of the output prototypes in LTP may differ from that of the input sets. Pattern classification tasks have been performed with LTP, and initial results show that training is dramatically faster than with previous pRAM training algorithms while maintaining the sub-optimal classification that maximises rewards. Research is ongoing to optimise the LTP algorithm.
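As a rough illustration of the reward/penalty reinforcement mechanism that LTP builds on, the sketch below models a single pRAM neuron in Python: the binary input vector addresses one of 2^n memory locations, each holding a firing probability, and a reinforcement signal nudges the addressed probability towards the emitted output (on reward) or towards its complement (on penalty). The class name, the learning-rate parameters, and the exact form of the update are illustrative assumptions, not the project's actual implementation.

```python
import random

class PRAMNeuron:
    """Sketch of a probabilistic RAM (pRAM) neuron: the binary input
    addresses one of 2**n memory locations, each storing a firing
    probability. Parameter names and values are assumptions."""

    def __init__(self, n_inputs, rho=0.1, lam=0.05):
        self.n = n_inputs
        self.alpha = [0.5] * (2 ** n_inputs)  # start with unbiased firing probabilities
        self.rho = rho    # reward learning rate (assumed value)
        self.lam = lam    # penalty scaling relative to reward (assumed value)

    def address(self, x):
        # interpret the binary input vector as a memory address
        return sum(bit << i for i, bit in enumerate(x))

    def fire(self, x):
        # stochastic binary output: fire with the stored probability
        u = self.address(x)
        return 1 if random.random() < self.alpha[u] else 0

    def reinforce(self, x, a, reward, penalty):
        # reward pulls the addressed probability towards the emitted
        # output a; penalty pulls it towards the complement (1 - a)
        u = self.address(x)
        self.alpha[u] += self.rho * (reward * (a - self.alpha[u])
                                     + self.lam * penalty * ((1 - a) - self.alpha[u]))
        self.alpha[u] = min(1.0, max(0.0, self.alpha[u]))  # keep it a probability

# Example: reinforce output 1 for input [1, 0]; the stored probability
# for that address drifts upwards whichever output is emitted.
neuron = PRAMNeuron(2)
for _ in range(500):
    a = neuron.fire([1, 0])
    neuron.reinforce([1, 0], a,
                     reward=1 if a == 1 else 0,
                     penalty=0 if a == 1 else 1)
```

In LTP, as the summary notes, the "winner" is not a single such neuron but a whole output vector taken from the 2^n possibilities generated by n output neurons; the neuron-level update above is only the underlying building block.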
Visual C++ software has been installed on an IBM PC. Demonstration software is being considered for the purpose of visualising, graphically and dynamically, the distribution and reallocation of the weight space during training with LTP. The following papers have been published since the start of the programme and are directly related to it:
1. "Noisy Reinforcement Training for pRAM Nets", Y. Guan, T.G. Clarkson, J.G. Taylor, D. Gorse, Neural Networks, Vol. 7, 523-538, 1994.
2. "Extended Functionality for Probabilistic RAM Neurons", D. Gorse, J.G. Taylor and T.G. Clarkson, Proc. ICANN'94, 705-708, May 1994.
3. "Temporal Difference Learning using a Pulse-Based Reinforcement Training Algorithm", D. Gorse, J.G. Taylor and T.G. Clarkson, Proc. ICONIP'94, Seoul, 293-298, October 1994.
Paper [1] describes the techniques used for investigating the internal representations of stored patterns in a network. Papers [2] and [3] relate to an extension of the binary training rule. Three conference papers and a book chapter have also been published on further aspects of pRAM networks.
Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Description
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Date Materialised
Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: