The programme will contribute new results to the AI community, helping to establish new research directions. Given the multidisciplinary nature of the programme, the work will also reach beyond computer science to policy makers and community leaders, raising awareness of the risks that AI-driven systems pose to less technologically advanced parts of society.
Societal Impact
By laying the foundations for Responsible AI, the programme will contribute to the wider AI community, pushing the boundaries of intelligent systems. Specifically, we aim to identify new challenges in the design of reasoning mechanisms for agents operating in multi-agent contexts and to establish new methodologies for evaluating interactional mechanisms for human-agent interaction. We also aim to pose new challenges to the databases and information retrieval communities with respect to provenance tracking and analysis.
Going beyond the computational sciences, the programme will contribute new insights to the legal and insurance communities by establishing the range of risks to which AI systems expose organisations and end-users. Specifically, in the calculation of insurance premiums, this project will inform processes that evaluate the physical risks posed by autonomous assets (e.g., autonomous vehicles or autonomous software agents) and will propose mitigation measures, either through interaction mechanisms or through restrictions on the autonomy of AI assets. The work will also help identify cyber-security risks that may arise from AI systems' access to data and from the communication protocols such systems use. The project also aligns closely with existing AXA-funded projects.
The programme will also generate new insights for policy makers and community leaders, leading to better-informed laws and ensuring that parts of society that are less technologically advanced also understand the risks brought about by AI-driven systems. This will be achieved by involving them in our workshops and by contributing to consultation programmes on the use of technology in society initiated by various Government departments.
The programme will include a work package focused specifically on outreach, comprising:
- The organisation of international workshops at major AI and non-AI venues, including sponsorship for students to attend such events.
- The publication of scientific articles in both AI and non-AI venues, as well as blog posts and articles in news outlets to inform the public.
- The organisation of workshops involving different disciplines (HCI, Law, Philosophy, Economics).
- The creation of professional videos to explain AI and notions of responsibility in the design of AI.
- Interactions with existing multi-disciplinary institutes such as the Oxford Martin School.
- A white paper to be written towards the end of the programme to present key insights and the vision for the future of responsible AI, targeted at policy makers, and the legal and insurance communities.
Applications
ResponsibleAI will focus its research on two key application areas:
- Disaster response - UAVs can team up with human emergency responders to gather information and rescue victims. Coordinating multiple UAVs simultaneously is an active research area, with ongoing investigations into bringing human operators and teams into the mix. The dynamic environments in which this technology will be deployed require intelligent systems that can account for risk and continue to act responsibly without intervention from a human operator.
- IoT systems for smart grids and smart homes - The rise of IoT devices enables the automation of energy management and assistance for people living in residential care. The overlap between energy management and (precision) healthcare is particularly relevant given ageing populations across the West and other developed countries, and the need for IoT systems that can serve multiple purposes.