The aim of the project is to make so-called distributed autonomous systems "scrutable" by humans. Autonomous systems can perform tasks without continuous human guidance. This project considers autonomous systems consisting of many components (called agents) which have to make joint plans on how to act together. It is important that humans can understand why the system behaves in particular ways, for example, why the agents have decided upon a certain plan. To address this, the project will develop and evaluate computational techniques for gathering information about the planning process, including how the various agents interacted, and presenting this information to humans in an understandable way.
To allow information to be gathered easily, the agents will use so-called argumentation techniques: agents engage in dialogues in which they justify their decision-making as they agree on joint plans, and each decision is based on a collection of arguments for and/or against it. This approach has the additional potential benefit of producing plans faster, because it gives agents insight into each other's motives. However, it will generate a large amount of complex information. To allow humans to inspect this information easily, the project will use a combination of Natural Language and diagrams to present it in an understandable way, simplifying, summarising and aggregating information as needed.
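As a rough illustration of what such a record might look like, the sketch below models a decision as a proposal plus the arguments raised for and against it during the dialogue. All names, fields and the simple resolution rule are hypothetical; they stand in for whatever argumentation semantics the project ultimately adopts.

```python
from dataclasses import dataclass, field

@dataclass
class Argument:
    """One argument raised in the dialogue (hypothetical structure)."""
    agent: str       # agent that put the argument forward
    reason: str      # justification offered in the dialogue
    supports: bool   # True = in favour of the proposal, False = against

@dataclass
class Decision:
    """A decision point in the joint plan, with its recorded arguments."""
    proposal: str
    arguments: list = field(default_factory=list)

    def accepted(self) -> bool:
        # A deliberately simple resolution rule for this sketch:
        # accept when arguments in favour outnumber arguments against.
        pro = sum(1 for a in self.arguments if a.supports)
        return pro > len(self.arguments) - pro

# Recording a fragment of a dialogue like the example that follows:
d = Decision("Do action A first")
d.arguments.append(Argument("P1", "required by standard operating procedures", True))
d.arguments.append(Argument("P2", "agrees with P1", True))
print(d.accepted())  # → True
```

Because every decision carries its own arguments, a presentation layer can later summarise or expand this record on demand, rather than reconstructing the agents' reasoning after the fact.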
For example, our system might explain:
"To achieve goal G, a plan comprising the sequence of actions A, B, C has been agreed upon. Agents P1, P2 and P3 were involved in the preparation of the plan. P1 argued that A had to be done first, because of its standard operating procedures; P2 and P3 agreed to this. P1 suggested that A should be followed by action D, because it could do A and D at low cost; however, P2 disagreed, pointing out that action D did not address safety concerns, and suggested that A should be followed by B. No one objected to this. Finally, P1 suggested that B should be followed by C, and that it could perform the action with the help of P3."
If the user requests further details about the safety concerns, she would be presented with the following:
"Current guidelines establish that actions B and D both achieve the same goal, but B does so addressing safety concerns X, Y and Z; D does not address any safety concerns."
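A drill-down answer of this kind could, for instance, be generated from metadata attached to each action. The sketch below is a minimal illustration under that assumption; the function name and the guideline table format are invented for the example.

```python
def explain_safety(action_a: str, action_b: str, guidelines: dict) -> str:
    """Compare two actions that achieve the same goal by the safety
    concerns each addresses, per a (hypothetical) guideline table."""
    covered_a = guidelines.get(action_a, set())
    covered_b = guidelines.get(action_b, set())
    if covered_a and not covered_b:
        return (f"Actions {action_a} and {action_b} both achieve the same goal, "
                f"but {action_a} addresses safety concerns "
                f"{', '.join(sorted(covered_a))}; {action_b} addresses none.")
    return (f"Actions {action_a} and {action_b} address "
            f"comparable safety concerns.")

# Guideline table for the running example:
guidelines = {"B": {"X", "Y", "Z"}, "D": set()}
print(explain_safety("B", "D", guidelines))
```

The point of the sketch is that the explanation text is derived from the same structured record the agents used when planning, so the answer shown to the user stays consistent with the reasoning that actually took place.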
The design of a scrutable argumentation-based planning system faces substantial technological challenges. One challenge is to endow distributed planning with the ability to judiciously record its process, whilst creating useful plans quickly. The other is to present the record of the distributed planning process optimally to users. Optimality, in this context, means a combination of clarity and information-richness. We will carry out extensive experiments to find out what aspects of a plan (and the motivation behind it) to emphasise, and where details are better left unsaid. Business partners will help us by taking part in these experiments, and by engaging in discussions about the types of insight users demand.
The project will seek solutions that apply across a large range of applications, ranging from ones that involve highly complex software agents and robots to ones that involve a plurality of simple, sensor-based agents. We hypothesise that our argumentation-based approach is suitable for modelling disparate scenarios, regardless of their complexity, and that Information Presentation techniques, coupled with state-of-the-art requirements gathering and user-based evaluation, have much to offer across this range.