
Details of Grant 

EPSRC Reference: EP/R013993/1
Title: CONVER-SE: Conversational Programming for Smart Environments
Principal Investigator: Howland, Dr KL
Other Investigators:
Researcher Co-Investigators:
Project Partners:
Centrica Plc, RICA, Victoria and Albert Museum
Department: School of Engineering and Informatics
Organisation: University of Sussex
Scheme: First Grant - Revised 2009
Starts: 01 March 2018
Ends: 28 February 2020
Value (£): 100,801
EPSRC Research Topic Classifications:
Human-Computer Interactions
EPSRC Industrial Sector Classifications:
Information Technologies
Related Grants:
Panel History:
Panel Date: 05 Sep 2017
Panel Name: EPSRC ICT Prioritisation Panel Sept 2017
Outcome: Announced
Summary on Grant Application Form
Smart environments are designed to react intelligently to the needs of those who visit, live and work in them. For example, the lights can come on when it gets dark in a living room, or a video exhibit can play in the correct language when a museum visitor approaches it. However, we lack intuitive ways for users without technical backgrounds to understand and reconfigure the behaviours of such environments, and there is considerable public mistrust of automated environments. Whilst there are tools that let users view and change the rules defining smart environment behaviours without programming knowledge, they have not seen wide uptake beyond technology enthusiasts. One drawback of existing tools is that they pull attention away from the environment in question, requiring users to translate from real-world objects to abstract screen-based representations of them. New programming tools that allow users to harness their understanding of, and references to, objects in the real world could greatly increase trust in and uptake of smart environments.

This research will investigate how users understand and describe smart environment behaviours whilst in situ, and will use the findings to develop more intuitive programming tools. For example, a tool could let someone simply say that they want a lamp to come on when it gets dark, pointing at the lamp to identify it. Speech interfaces are now widely used in intelligent personal assistants, but their functionality is largely limited to issuing immediate commands or setting simple reminders. In practice, there are many challenges in using speech interfaces for programming tasks, and idealised interactions such as the lamp example are not at all simple. Much of the research used to design programming interfaces for everyday users is carried out in research labs rather than in real home or workplace settings, and the people invited to take part in design and evaluation studies are often university students or staff, or people with an existing interest or background in technology. As a result, these interfaces often fall down once they are taken out of the small set of toy usage scenarios in which they were designed and tested and given to everyday users.
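
To make the lamp example concrete, the target of such an interaction can be thought of as a trigger-action rule, with the pointing gesture supplying the device reference that speech leaves ambiguous. The sketch below is purely illustrative; the utterance, device name and keyword-spotting logic are assumptions for exposition, not anything built in the project:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    trigger: str  # condition sensed in the environment
    device: str   # target device, resolved here from the pointing gesture
    action: str   # behaviour to perform when the trigger holds

def rule_from_utterance(utterance: str, pointed_at: str) -> Rule:
    """Toy keyword-spotting version of the lamp example; a real tool
    would need far more robust language understanding than this."""
    trigger = "ambient_light_low" if "dark" in utterance else "unknown"
    action = "turn_on" if "come on" in utterance else "unknown"
    return Rule(trigger=trigger, device=pointed_at, action=action)

# 'living_room_lamp' stands in for whatever device the gesture resolves to
print(rule_from_utterance("I want it to come on when it gets dark",
                          pointed_at="living_room_lamp"))
```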

This research investigates the challenges of using speech for programming, and evaluates ways to mitigate them, including conversational prompts, the use of gesture and proximity data to avoid ambiguity, and the provision of default behaviours that can be customised. In this project we focus primarily on smart home scenarios, and we will carry out our studies in real domestic settings. Speech interfaces are increasingly used in these scenarios, but there is no support for querying, debugging and altering the behaviours through speech.
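
As one illustration of how proximity data might resolve an ambiguous reference such as "the light", the hypothetical sketch below accepts a device only when the speaker is clearly nearest to a single candidate, and otherwise signals that the interface should fall back on a conversational prompt; the names and the one-metre threshold are invented for illustration:

```python
from typing import Dict, Optional

def resolve_device(candidates: Dict[str, float]) -> Optional[str]:
    """Hypothetical disambiguation step. 'candidates' maps each device
    matching the spoken mention to the speaker's distance from it in
    metres, as estimated from gesture/proximity sensing. Returns a
    device only when one candidate is clearly nearest; otherwise
    returns None, signalling that the interface should ask a
    clarifying question ('Which light do you mean?')."""
    if not candidates:
        return None
    ranked = sorted(candidates.items(), key=lambda kv: kv[1])
    if len(ranked) == 1 or ranked[1][1] - ranked[0][1] > 1.0:
        return ranked[0][0]
    return None  # too close to call: fall back on a conversational prompt

print(resolve_device({"hall_light": 4.2, "desk_lamp": 0.8}))  # -> desk_lamp
```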

We will recruit participants with no programming background, including older and disabled users, who are often highlighted as people who could benefit from smart home technology but are rarely included in studies of this sort. We will carry out interviews in people's homes to understand how they naturally describe rules for smart environments, taking into account speech, gesture and location. We will look for errors or unclear elements in the rules they describe, and investigate how far prompts from researchers can help them express the rules clearly. We will also explore how far participants can customise default behaviours presented to them. We will use this data to create a conversational interface that harnesses the approaches that worked with human prompts, and test it in real-world settings. Some elements of the system will be controlled by a human researcher, but the system will simulate the experience of interacting with an intelligent conversational interface. This will allow us to identify fruitful areas to pursue in developing fully functional conversational programming tools, which may also be useful in museums, education, agriculture and robotics.
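
The setup described here, in which a hidden researcher stands in for system components that do not yet exist, is commonly known as a Wizard-of-Oz study. Purely as an illustration of that protocol (the function names and canned reply below are assumptions, not project code), one conversational turn might be dispatched like this:

```python
def automated_part(utterance: str):
    """Placeholder for whatever the prototype can already handle;
    anything it cannot parse is escalated to the hidden researcher."""
    if "when it gets dark" in utterance:
        return "OK. Should that happen every evening, or just today?"
    return None

def handle_turn(utterance: str) -> str:
    """One turn in a Wizard-of-Oz session: the participant experiences
    a single 'intelligent' interface, but replies may come from either
    the software or the unseen human wizard."""
    reply = automated_part(utterance)
    if reply is None:
        reply = input(f"[wizard] participant said {utterance!r} - reply: ")
    return reply

print(handle_turn("I want the lamp to come on when it gets dark"))
```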
Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Impacts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.sussex.ac.uk