
Details of Grant 

EPSRC Reference: GR/J15032/01
Title: LOGICAL NEURAL SYSTEMS FOR LANGUAGE, IMAGE AND ACTION ASSOCIATION.
Principal Investigator: Stonham, Professor T
Other Investigators:
Wilson, Dr M
Researcher Co-Investigators:
Project Partners:
Department: Electronic & Computer Engineering
Organisation: Brunel University London
Scheme: Standard Research (Pre-FEC)
Starts: 01 January 1994 Ends: 31 December 1995 Value (£): 92,520
EPSRC Research Topic Classifications:
EPSRC Industrial Sector Classifications:
Related Grants:
Panel History:  
Summary on Grant Application Form
To develop and study the feasibility of a general logical neural system operating in a near-real-world environment, which can relate language-like symbol strings to internal iconic representations and respond to environmental input with appropriate language-like output. To develop a demonstrator that detects spatial and temporal relations in scenes. A minimal specification for this demonstrator is the finding of objects from verbal instructions in a 2-D projection of a 3-D scene.

Progress:
A world model called Kitchen World has been defined, consisting of a 2-D projection of a 3-D scene containing cups and saucers, observed by a camera. The action of the camera is emulated by a central retinal window acting on the visual scene. Functions performing panning, scrolling and zooming of the retinal window have been implemented. A mechanism has been proposed to enable objects in the world image to be moved around, added, or removed.

A fast and efficient zooming mechanism has been designed. The algorithm is efficient in speed, as it requires only integer operations and is based on averaging neighbouring pixels of the input image. It is also efficient in memory: there is no need to allocate storage for the elements of the transformation matrix, as these are used as soon as they are calculated. Because of the integer nature of the processing and the low memory requirement, the algorithm lends itself to a cost-effective hardware implementation. It produces very little spatial degradation, and the zoomed images retain a high degree of integrity.

A demonstrator is being developed which simulates a MAGNUS neural network system able to detect spatial and temporal relations in 2-D scenes. The minimal specification for this demonstrator, as defined in the project objectives, has already been demonstrated. Current work focuses on the optimisation of system parameters and training strategies.
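The integer-only, neighbour-averaging zoom described above can be illustrated with a minimal sketch. This is an assumed reconstruction of the general technique (block averaging with integer division), not the project's actual algorithm; the function name `zoom_out` and the fixed integer zoom factor are assumptions.

```python
def zoom_out(image, factor):
    """Shrink a 2-D grey-level image by an integer factor by averaging
    each factor x factor block of neighbouring pixels.

    Illustrative sketch only: uses integer arithmetic throughout and
    consumes each block sum as soon as it is computed, so no
    intermediate transformation matrix is stored.
    """
    rows = len(image) // factor
    cols = len(image[0]) // factor
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            total = 0
            # sum the factor x factor neighbourhood of input pixels
            for i in range(factor):
                for j in range(factor):
                    total += image[r * factor + i][c * factor + j]
            # integer average: no floating-point operations needed
            row.append(total // (factor * factor))
        out.append(row)
    return out
```

Because only integer additions and one integer division per output pixel are needed, a scheme of this kind maps naturally onto simple hardware, which is consistent with the cost-effectiveness claim above.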
The MAGNUS system is characterised by a multi-field structure. Five such fields can be distinguished, corresponding to the parts of the network coding for the linguistic and visual internal representations and for the three motor outputs of the network. There are therefore five field-size parameters that need optimisation. Since each neuron in each field can receive inputs from each field output, the connectivity parameters also need to be evaluated. This part of the project additionally addresses the evaluation of alternative structures, generalisation rules and training strategies. A mechanism that would eliminate the need for thresholding the grey-level input to MAGNUS has been proposed. It involves storing the grey-level n-tuple patterns directly, thereby eliminating, as a result of the V-RAM implementation of the GRAMs, the binary-input requirement of weightless systems. This is a joint project with Imperial College, London, and cross-reference should be made to their entry under this project title.
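As background to the weightless (RAM-based) approach mentioned above, the basic n-tuple mechanism can be sketched as follows. This is a generic illustration of an n-tuple lookup node, not the MAGNUS, GRAM, or V-RAM implementation; the class name `NTupleNode` and its interface are hypothetical.

```python
class NTupleNode:
    """Minimal sketch of a weightless n-tuple node: a fixed subset of
    input positions addresses a memory, and training simply records
    the addresses seen. No numeric weights are adjusted."""

    def __init__(self, indices):
        self.indices = indices   # fixed input connections (the n-tuple)
        self.memory = set()      # addresses stored during training

    def address(self, pattern):
        # Sample the connected input positions to form a lookup address.
        return tuple(pattern[i] for i in self.indices)

    def train(self, pattern):
        # Record the address produced by a training pattern.
        self.memory.add(self.address(pattern))

    def respond(self, pattern):
        # Fire (1) if this address was seen in training, else 0.
        return 1 if self.address(pattern) in self.memory else 0
```

In a binary weightless system the sampled values are single bits; the grey-level proposal described above amounts to allowing the stored n-tuple patterns to take grey-level values instead, removing the need to threshold the input first.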
Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Impacts
Description: This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Summary
Date Materialised
Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.brunel.ac.uk