Research

The goal of ResponsibleAI is to address some of the key technical and practical challenges faced in developing intelligent systems that act responsibly. Answering the scientific questions targeted by this work will lay the foundations for Responsible AI, with a focus on intelligent agents and machine learning systems.

Background

To date, AI research has typically adopted a bottom-up approach, focusing on solutions to specific computational tasks such as search, classification, optimisation, and prediction. The last few years have seen a significant rise in machine learning approaches based on neural networks and Bayesian statistics, with great success in specific areas such as game playing and time-series prediction for traffic monitoring, epidemiology, and disaster response applications.

The rapid improvements in the field over the last decade have outpaced those who use, operate, and regulate systems that employ AI-based solutions. For example, the CAA (Civil Aviation Authority) in the UK has struggled to design rules for flights involving purely autonomous UAVs (unmanned aerial vehicles), let alone fleets of UAVs. Similarly, in the energy sector, asset managers struggle to understand the risks of deploying intelligent systems that typically learn from historical data and act on forecasts of the weather or of home occupants’ energy consumption patterns. Such systems are therefore liable to major failures that may negatively impact the organisations running them and, more importantly, their end-users.

Research Areas

Against the current background of AI research, this research aims to establish some of the underpinning methodologies, algorithms, and partnerships that will contribute to the development of the field of Responsible AI.

Five key focal points for the ongoing research are:

  • Design for end users – What design principles can ensure that human interaction with autonomous agents and machine learning systems produces outcomes that end-users understand, and that such interactions are efficient and effective? This includes ensuring that outcomes are fair to users and sustainable in the long run.
  • Accounting for risk – How can decision-making algorithms for intelligent agents be designed to account for the risks to which they expose their end-users and the other agents they interact with? (One standard way to make such risk explicit is sketched after this list.)
  • Privacy – How can we model human intent and activities with minimal sensing of people’s environments, relying as little as possible on users’ self-expressed preferences, in order to preserve privacy?
  • Provenance Tracking – How can we capture the provenance of decisions and data within systems involving both human and artificial actors? Specifically, a key challenge is to develop algorithms that can sift through vast amounts of provenance data to weed out malicious AI-based or human behaviour in order to protect systems from attacks.
  • Responsibility – How can the notion of “responsibility” be developed within the reasoning of autonomous agents and the engineering of machine learning systems?
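
To make the “accounting for risk” question above concrete, the following is a minimal sketch of one standard approach, risk-sensitive action selection using Conditional Value at Risk (CVaR): instead of picking the action with the best average outcome, the agent picks the one with the least bad worst-case tail. The scenario, function names, and numbers here are illustrative assumptions, not the programme’s methods.

```python
# A minimal sketch of risk-sensitive action selection via CVaR.
# The actions, cost distributions, and risk level are hypothetical.
import numpy as np

def cvar(costs: np.ndarray, alpha: float = 0.9) -> float:
    """Conditional Value at Risk: mean cost of the worst (1 - alpha) tail."""
    threshold = np.quantile(costs, alpha)
    return float(costs[costs >= threshold].mean())

def choose_action(outcome_samples: dict[str, np.ndarray], alpha: float = 0.9) -> str:
    """Pick the action with the smallest tail cost, not the best average."""
    return min(outcome_samples, key=lambda a: cvar(outcome_samples[a], alpha))

rng = np.random.default_rng(0)
samples = {
    "aggressive_route": rng.normal(10.0, 8.0, 10_000),  # cheap on average, volatile
    "cautious_route": rng.normal(12.0, 1.0, 10_000),    # dearer on average, stable
}
print(choose_action(samples))  # CVaR favours the cautious route
```

Under the mean criterion the aggressive route wins; under CVaR the cautious route does – the kind of trade-off a risk-aware agent should surface to its end-users.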

Approach

The questions addressed as part of this research will involve not only developing theory but also taking that theory into practice, including empirical evaluation – a proven approach for impactful research. While other projects are mainly concerned with the design of AI for individual agents or for interactions between one agent and one human, this programme will consider multi-agent applications. For example, we will consider large autonomous multi-UAV teams controlled by a few operators, raising issues around the design of flexible autonomy and how the cognitive load of operators can be minimised when dealing with large numbers of assets (a naive load-capping policy is sketched below). Additionally, this programme will consider social care and energy systems applications.
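
To ground the flexible-autonomy question, here is a minimal sketch of one naive policy, assuming a hypothetical setting where UAVs raise requests for human input: each request goes to the least-loaded operator, and falls back to on-board autonomy once every operator is saturated. The cap and the deferral rule are assumptions for illustration, not a proposed design.

```python
# A minimal sketch of bounding operator cognitive load in a multi-UAV team.
# MAX_CONCURRENT is an assumed per-operator attention budget.
MAX_CONCURRENT = 4

def assign_requests(requests: list[str], operators: list[str]) -> dict[str, str]:
    load = {op: 0 for op in operators}
    assignment = {}
    for req in requests:
        op = min(load, key=load.get)          # least-loaded operator
        if load[op] < MAX_CONCURRENT:
            assignment[req] = op
            load[op] += 1
        else:
            assignment[req] = "autonomy"      # all saturated: UAV decides alone
    return assignment
```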

Given the application areas considered, a deep understanding of the problem domain and interactions with other disciplines is required. This programme of work takes a highly interdisciplinary approach, bringing together experts in HCI, AI, and Law, as well as authorities concerned with regulation and policy design. We will seek to establish a network of researchers working across disciplines to develop the field of Responsible AI, with a view to setting up an internationally recognised workshop/journal/conference series supported by one or more learned societies.

The programme will adopt the following approach, which diverges from the traditional, purely theoretical and algorithmic approaches used in the AI field:

  • Survey the methods and techniques used in different areas of computer science, systems science, engineering, law, and philosophy with a view to establishing a summarised view of the landscape.
  • Determine key interactional and computational requirements for specific problems faced in the chosen application areas (multi-UAV systems and IoT for energy and social care). Such requirements will be elicited through participatory design workshops involving end-users and researchers from other disciplines.
  • Develop algorithms, mechanisms, and user interfaces for the operation and management of autonomous systems.
  • Develop provenance tracking mechanisms to record human and machine decision making with a view to establishing accountability (a sketch of such a record follows this list).
  • Elicit metrics of “responsibility” through empirical and theoretical methods. This may include the use of lab studies, field trials, and purely theoretical validation (e.g. using game theory and complexity theory) of the performance of systems in edge cases.
  • Develop a methodology for the design of AI that is responsible and demonstrate the application of this methodology within the chosen application domains.
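
As an illustration of the provenance tracking step above, the following minimal sketch records each human or machine decision together with the inputs it was derived from, so that an audit trail can be reconstructed afterwards. The record fields and identifiers are hypothetical; a real system would more likely build on a standard vocabulary such as W3C PROV.

```python
# A minimal sketch of a provenance record for mixed human/machine decisions,
# loosely inspired by W3C PROV terms. The schema is an illustrative assumption.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    decision_id: str
    agent: str            # e.g. "operator:alice" or "planner:uav-7" (hypothetical ids)
    action: str           # what was decided
    inputs: list[str]     # ids of the data/decisions this one was derived from
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def audit_trail(records: list[DecisionRecord], decision_id: str) -> list[DecisionRecord]:
    """Walk the 'derived from' links backwards to reconstruct how a decision arose."""
    by_id = {r.decision_id: r for r in records}
    trail, frontier = [], [decision_id]
    while frontier:
        r = by_id.get(frontier.pop())
        if r is not None and r not in trail:
            trail.append(r)
            frontier.extend(r.inputs)
    return trail
```

Sifting such trails for malicious AI-based or human behaviour, as envisaged in the Research Areas above, would then amount to querying this derivation graph for anomalous patterns.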