HUMAN-AI TEAMING
Lab

Welcome to the Human-AI Teaming (HAT) lab, in the Department of Informatics at King’s College London!

The vision of the HAT lab is that humans and AI systems should work together as a team.

To achieve this, we focus on new research challenges around Safe, Trusted and Explainable AI that will allow humans to trust what AI systems are about to do and to interact with them to co-create solutions. We believe that the best solutions are found by combining human expertise and domain knowledge with the power of cutting-edge research in AI.

We focus on AI Planning and Machine/Reinforcement Learning, and our main application area is the control and optimisation of robotics and autonomous systems: AI is used to control (teams of) mobile robots and to give robots more autonomy, for example in manufacturing, space robotics, logistics, satellites, and robots operating in extreme environments.

In this context, it is estimated that, on average, five humans are needed to supervise each robot working autonomously, a 5:1 ratio. It is now time to reverse that ratio, so that one human supervises (at least) five robots.

And this is the goal of the HAT lab!


We have a rich portfolio of funded projects and collaborations with many academic and non-academic partners, and we are always looking for new partnerships for impactful research.

We are also very pleased to offer the community the open-source software developed by our team, such as ROSPlan.

Latest news

  • New papers on Explainable AI Planning!
    Read them now in the publications section.
  • PhD studentship available!
    Get in touch if you are interested in joining the HAT lab for a PhD!
  • New PostDoc position available!
    We are seeking a full-time PostDoc in AI and Robotics. The post is for up to 36 months.
  • New Workshops in Explainable AI
    Workshop on Explainable AI at IJCAI-19.
    Workshop on Explainable Planning at ICAPS-19.
  • New CDT in Safe and Trusted AI!
    Apply for a PhD in the new Centre for Doctoral Training in Safe and Trusted AI!
  • New AFOSR grant!
    New project funded by AFOSR on Explaining the Space of Plans.
  • New version of ROSPlan
    A new version of ROSPlan has been released, together with new tutorials.
  • New InnovateUK grant!
    New project funded by InnovateUK on an Intelligent Situational Awareness Platform.

The team

All the work of the lab is made possible by the teamwork of our brilliant Postdocs and PhD Students. Many thanks to them!

Daniele Magazzeni
Reader

Gerard Canal
Postdoc

Senka Krivić
Postdoc

Dorian Buksz
PhD Student

Anna Collins
PhD Student

Sophia Kalanovska
PhD Student

Benjamin Krarup
PhD Student

Lin Li
PhD Student

Parisa Zehtabi
PhD Student

Associates

Michael Cashmore
Chancellor’s Fellow
University of Strathclyde

Oscar Lima
Researcher at DFKI
German Research Center for AI

We are looking for new students to join the team. Email Dan if interested.

Projects

A list of projects we are currently involved with.

Trust in Human-Machine Partnership (ThUMP)

EPSRC/UKRI (£1.2m / 2019-2022)

Explaining the Space of Plans

AFOSR ($1m / 2018-2023)

Intelligent Situational Awareness Platform

InnovateUK (£130k / 2018-2019)

A Realtime Autonomous Robot Navigation Framework

Korea Evaluation Institute of Industrial Technology (2018-2021)

King’s/NASA Collaboration on Planning Technologies

King’s Impact Acceleration Grant

Publications

Our latest publications; a few illustrative code sketches follow the list. A complete list can be found here.

  • Towards Explainable Planning as a Service. Michael Cashmore, Anna Collins, Benjamin Krarup, Senka Krivic, Daniele Magazzeni and David Smith; ICAPS-19 Workshop on Explainable Planning, 2019.

    Abstract:
    Explainable AI is an important area of research within which Explainable Planning is an emerging topic. In this paper, we argue that Explainable Planning can be designed as a service – that is, as a wrapper around an existing planning system that utilises the existing planner to assist in answering contrastive questions. We introduce a framework to facilitate this, along with some examples of how a planner can be used to address certain types of contrastive questions. We discuss the main advantages and limitations of such an approach and we identify open questions for Explainable Planning as a service that identify several possible research directions.
    @inproceedings{Cashmore_icapsxai2019,
      author    = "Cashmore, Michael and Collins, Anna and Krarup, Benjamin and Krivic, Senka and Magazzeni, Daniele and Smith, David",
      title     = "{Towards Explainable Planning as a Service}",
      booktitle = "ICAPS-19 Workshop on Explainable Planning",
      year      = "2019"
    }
  • Model-Based Contrastive Explanations for Explainable Planning. Benjamin Krarup, Michael Cashmore, Daniele Magazzeni and Tim Miller; ICAPS-19 Workshop on Explainable Planning, 2019.

    Abstract:
    An important type of question that arises in Explainable Planning is a contrastive question, of the form “Why action A instead of action B?”. These kinds of questions can be answered with a contrastive explanation that compares properties of the original plan containing A against the contrastive plan containing B. An effective explanation of this type serves to highlight the differences between the decisions that have been made by the planner and what the user would expect, as well as to provide further insight into the model and the planning process. Producing this kind of explanation requires the generation of the contrastive plan. This paper introduces domain-independent compilations of user questions into constraints. These constraints are added to the planning model, so that a solution to the new model represents the contrastive plan. We introduce a formal description of the compilation from user question to constraints in a temporal and numeric PDDL2.1 planning setting.
    @inproceedings{Krarup_icaps2019,
      author    = "Krarup, Benjamin and Cashmore, Michael and Magazzeni, Daniele and Miller, Tim",
      title     = "{Model-Based Contrastive Explanations for Explainable Planning}",
      booktitle = "ICAPS-19 Workshop on Explainable Planning",
      year      = "2019"
    }
  • Towards an Argumentation-based Approach to Explainable Planning. Anna Collins, Daniele Magazzeni and Simon Parsons; ICAPS-19 Workshop on Explainable Planning, 2019.

    Abstract:
    Providing transparency of AI planning systems is crucial for their success in practical applications. In order to create a transparent system, a user must be able to query it for explanations about its outputs. We argue that a key underlying principle for this is the use of causality within a planning model, and that argumentation frameworks provide an intuitive representation of such causality. In this paper, we discuss how argumentation can aid in extracting causalities in plans and models, and how they can create explanations from them.
    @inproceedings{Collins_icaps2019,
      author    = "Collins, Anna and Magazzeni, Daniele and Parsons, Simon",
      title     = "{Towards an Argumentation-based Approach to Explainable Planning}",
      booktitle = "ICAPS-19 Workshop on Explainable Planning",
      year      = "2019"
    }
  • Explaining the Space of Plans through Plan-Property Dependencies. Rebecca Eifler, Michael Cashmore, Jörg Hoffmann, Daniele Magazzeni and Marcel Steinmetz; ICAPS-19 Workshop on Explainable Planning, 2019.

    Abstract:
    A key problem in explainable AI planning is to elucidate decision rationales. User questions in this context are often contrastive, taking the form “Why do A rather than B?”. Answering such a question requires a statement about the space of possible plans. We propose to do so through plan-property dependencies, where plan properties are Boolean properties of plans the user is interested in, and dependencies are entailment relations in plan space. The answer to the above question then consists of those properties C entailed by B. We introduce a formal framework for such dependency analysis. We instantiate and operationalize that framework for the case of dependencies between goals in oversubscription planning. More powerful plan properties can be compiled into that special case. We show experimentally that, in a variety of benchmarks, the suggested analyses can be feasible and produce compact answers for human inspection.
    @inproceedings{Eifler_icaps2019,
      author    = {Eifler, Rebecca and Cashmore, Michael and Hoffmann, J\"org and Magazzeni, Daniele and Steinmetz, Marcel},
      title     = "{Explaining the Space of Plans through Plan-Property Dependencies}",
      booktitle = "ICAPS-19 Workshop on Explainable Planning",
      year      = "2019"
    }
  • Probabilistic Planning for Robotics with ROSPlan. Gerard Canal, Michael Cashmore, Senka Krivić, Guillem Alenyà, Daniele Magazzeni and Carme Torras; Towards Autonomous Robotic Systems, pp. 236-250, 2019.

    Abstract:
    Probabilistic planning is very useful for handling uncertainty in planning tasks to be carried out by robots. ROSPlan is a framework for task planning in the Robot Operating System (ROS), but until now it has not been possible to use probabilistic planners within the framework. This systems paper presents a standardized integration of probabilistic planners into ROSPlan that allows for reasoning with non-deterministic effects and is agnostic to the probabilistic planner used. We instantiate the framework in a system for the case of a mobile robot performing tasks indoors, where probabilistic plans are generated and executed by the PROST planner. We evaluate the effectiveness of the proposed approach in a real-world robotic scenario.
    @inproceedings{Canal_taros2019,
      author    = "Canal, Gerard and Cashmore, Michael and Krivi\'{c}, Senka and Alenyà, Guillem and Magazzeni, Daniele and Torras, Carme",
      title     = "{Probabilistic Planning for Robotics with ROSPlan}",
      booktitle = "Towards Autonomous Robotic Systems",
      year      = "2019",
      publisher = "Springer International Publishing",
      address   = "Cham",
      pages     = "236--250",
      isbn      = "978-3-030-23807-0",
      doi       = "10.1007/978-3-030-23807-0\_20"
    }
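
Code sketches

To make some of the ideas above concrete, here are a few illustrative Python sketches. They are our own hedged approximations, not code from the papers; any planner names, service names, helper functions and data appearing in them are assumptions.

The first sketch illustrates "Towards Explainable Planning as a Service": a thin wrapper around an unmodified external planner that answers a contrastive question by re-planning on a constrained copy of the problem and returning both plans for comparison. The planner command ("popf"), the file handling and the compile_question callback are all placeholders.

    import subprocess
    import tempfile

    def run_planner(domain_file, problem_file, planner_cmd="popf"):
        # Call an external PDDL planner binary and return its raw output.
        # "popf" is a placeholder; substitute whatever planner you use.
        result = subprocess.run([planner_cmd, domain_file, problem_file],
                                capture_output=True, text=True, timeout=60)
        return result.stdout

    class ExplanationService:
        # Hypothetical wrapper: the underlying planner is untouched; the
        # service only rewrites problems and compares the resulting plans.
        def __init__(self, domain_file, problem_file):
            self.domain_file = domain_file
            self.problem_file = problem_file
            self.original_plan = run_planner(domain_file, problem_file)

        def why_not(self, compile_question):
            # compile_question: callback that rewrites the problem text so
            # that any solution satisfies the user's alternative (the
            # compilation step; see the contrastive-explanations paper).
            with open(self.problem_file) as f:
                constrained = compile_question(f.read())
            with tempfile.NamedTemporaryFile("w", suffix=".pddl",
                                             delete=False) as f:
                f.write(constrained)
            contrastive_plan = run_planner(self.domain_file, f.name)
            return self.original_plan, contrastive_plan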
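"Model-Based Contrastive Explanations for Explainable Planning" compiles user questions into constraints on the model. A crude, purely textual version of one compilation ("why does the plan not use action A?") adds a fresh marker predicate that only A achieves and requires it in the goal, so every solution must apply A. The paper's compilations are model-based and cover temporal and numeric PDDL2.1; this toy string rewriting only works on domains matching the assumptions in the comments.

    def force_action(domain_text, problem_text, action_name):
        # Toy compilation of "why does the plan not use <action_name>?":
        # declare a marker predicate, make the action add it, and require
        # it in the goal. Assumes the domain has a (:predicates ...) block
        # and that the action's effect and the goal are (and ...) forms.
        marker = "(used_%s)" % action_name
        domain_text = domain_text.replace("(:predicates",
                                          "(:predicates " + marker, 1)
        act = domain_text.index("(:action " + action_name)
        eff = domain_text.index(":effect", act)
        cut = domain_text.index("(and", eff) + len("(and")
        domain_text = domain_text[:cut] + " " + marker + domain_text[cut:]
        cut = problem_text.index("(and", problem_text.index("(:goal")) \
              + len("(and")
        problem_text = problem_text[:cut] + " " + marker + problem_text[cut:]
        return domain_text, problem_text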
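"Towards an Argumentation-based Approach to Explainable Planning" builds on argumentation frameworks. A standard building block, generic Dung-style semantics rather than the paper's specific construction, is the grounded extension, computed below by iterating the characteristic function from the empty set.

    def grounded_extension(arguments, attacks):
        # arguments: a set; attacks: a set of (attacker, target) pairs.
        # An argument is defended by S if every one of its attackers is
        # attacked by some member of S. The grounded extension is the
        # least fixed point of this defence operator, starting from the
        # unattacked arguments.
        def defended(a, s):
            return all(any((d, b) in attacks for d in s)
                       for (b, t) in attacks if t == a)
        s = set()
        while True:
            nxt = {a for a in arguments if defended(a, s)}
            if nxt == s:
                return s
            s = nxt

    # Tiny example: A attacks B, B attacks C, so A reinstates C.
    print(sorted(grounded_extension({"A", "B", "C"},
                                    {("A", "B"), ("B", "C")})))
    # -> ['A', 'C']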
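The plan-property dependencies of "Explaining the Space of Plans through Plan-Property Dependencies" can be made concrete over a finite set of plans: property p entails property q if every plan satisfying p also satisfies q. The paper analyses the implicitly represented space of plans via oversubscription planning; the explicit enumeration and the toy plans below are only for illustration.

    def entailments(plans, properties):
        # properties: name -> Boolean predicate over a plan.
        # p entails q iff every plan satisfying p also satisfies q
        # (restricted here to an explicit, finite set of plans).
        found = []
        for p_name, p in properties.items():
            sat_p = [plan for plan in plans if p(plan)]
            for q_name, q in properties.items():
                if p_name != q_name and sat_p and all(q(pl) for pl in sat_p):
                    found.append((p_name, q_name))
        return found

    # Hypothetical logistics-style plans, written as tuples of action names.
    plans = [("load", "fly", "unload"),
             ("load", "drive", "unload"),
             ("fly",)]
    properties = {
        "uses_fly":   lambda plan: "fly" in plan,
        "uses_drive": lambda plan: "drive" in plan,
        "delivers":   lambda plan: "unload" in plan,
    }
    print(entailments(plans, properties))
    # -> [('uses_drive', 'delivers')]: on this plan set, driving entails delivering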
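Finally, "Probabilistic Planning for Robotics with ROSPlan" extends the ROSPlan pipeline, which is driven through ROS services, to planner-agnostic probabilistic planning (demonstrated with PROST). The rospy client below sketches only the standard plumbing; the service and topic names follow the ROSPlan tutorials and may differ in your launch configuration.

    #!/usr/bin/env python
    import rospy
    from std_srvs.srv import Empty
    from std_msgs.msg import String

    def plan_once():
        rospy.init_node("rosplan_client_example")
        # Service names as in the ROSPlan tutorials; adjust to your setup.
        gen_srv = "/rosplan_problem_interface/problem_generation_server"
        plan_srv = "/rosplan_planner_interface/planning_server"
        rospy.wait_for_service(gen_srv)
        rospy.wait_for_service(plan_srv)
        rospy.ServiceProxy(gen_srv, Empty)()   # build problem from the knowledge base
        rospy.ServiceProxy(plan_srv, Empty)()  # run the configured planner
        # The planner interface publishes the raw plan as a string.
        out = rospy.wait_for_message(
            "/rosplan_planner_interface/planner_output", String)
        print(out.data)

    if __name__ == "__main__":
        plan_once()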