Cars that Explain: Building Trust in Autonomous Vehicles through Causal Explanations and Conversations

Author: Balint Gyevnar

Date: 2022-06-27


In most discussions about autonomous vehicles (AVs), I seem to keep encountering two differing opinions. On the one hand, there are the AV advocates. They argue that autonomous vehicles would improve traffic efficiency and transport safety, thereby reducing road fatalities, possibly by as much as 90% [1]. Through this increase in efficiency, we could see a decrease in pollution, while the driverless nature of these vehicles would help make car transport more accessible for passengers with disabilities.

On the other hand, we hear the AV sceptics. Their argument is based on the observation that the complex, highly integrated, and opaque systems of AVs are not at all understood by most humans. This lack of understanding often manifests as reluctance to accept the technology due to fears that the vehicle might fail in unexpected situations [2], which in effect has been fostering distrust and scepticism among the public. For example, a recent survey in the US found that only about 20% of the people interviewed were willing to adopt some level of autonomy in cars [3].

Yet, I believe we can build trust and understanding for AVs through the adoption of human-centric explainable AI (XAI).

There already seems to be a consensus among scientists that explanations are a great way to improve trust in AVs [2, 4, 5], but most current XAI methods sorely lack consideration of the human factor. I propose that we build human-centric XAI systems with three key properties in mind:

  1. Transparency: explanations should be grounded in methods that are themselves interpretable, rather than in black boxes that map input to output without any apparent clue about causality.
  2. Causality: explanations should describe the AV's decisions in terms of cause-effect relationships, ideally with a contrastive structure.
  3. A social approach: explanations should be delivered as part of a conversation between passenger and vehicle that keeps the passengers' requirements in view.

(Of course, explanations must be entirely sound, that is, faithful to the processes that they explain. However, this is not a specific criterion for human-centric XAI, but rather a general requirement for all XAI.)

My essay arguing for a move towards human-centric XAI for AVs won third prize in the “Shape the Future of ITS” Competition by the IEEE Intelligent Transportation Systems Society. This blog post is an amended version of that essay, which you can find on my webpage.

I call this marriage of human-centric XAI and AV systems eXplainable Autonomous Vehicle Intelligence (XAVI [(k)savi]). As a first step towards achieving XAVI, we presented a paper [18] at the IJCAI 2022 Workshop on Artificial Intelligence for Autonomous Driving, where I also gave a talk on the problem motivation and method.

“Why did you turn instead of stopping‽”

To illustrate why XAVI could be powerful and useful for building trust, imagine someone who uses a XAVI-powered mobility-on-demand service to commute to their workplace every day. During one morning trip, three scenarios may take place that prompt interaction from the commuter.

At one point, the vehicle stops abruptly even though the passenger cannot see anything in particular. After the passenger asks for clarification, XAVI may show a recording of a small child chasing a ball right into the path of the car. The vehicle may also project an arrow towards the location of the averted collision to signal why braking was necessary. This could reassure the commuter that the vehicle is aware of social expectations rather than being single-mindedly goal-focused (e.g., XAVI didn't run someone over simply because that would have been faster).

A little later, XAVI may decide to take a route unfamiliar to the passenger, who then asks for an explanation. XAVI could reply, based on traffic information, that it determined the chosen route to be the fastest. The commuter may interject, saying they would have known a better route. In turn, the vehicle could explain that it considered that route too, but traffic diversions had been reported along it just an hour ago and had already begun causing huge delays.

Top-down schematic view of three vehicles arriving at a junction with arrows indicating the direction of turns.

Figure 1. Visual explanation shown by XAVI (in blue). The oncoming Vehicle 2 is stopping to wait for Vehicle 1 to pass and is predicted to turn right. XAVI can turn left earlier using this time gap.

Finally, two other vehicles arrive at a junction at the same time as XAVI, as shown in Figure 1. XAVI chooses to take a left turn even though another vehicle is approaching on a priority lane from the right. The passenger could ask why XAVI thought this was a safe manoeuvre. XAVI may explain that the oncoming vehicle was likely trying to turn right and was giving way to the vehicle going straight, since otherwise stopping on the road would be irrational. This gives XAVI enough time to turn left, and it may also display the figure above for clarification.
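To make the structure of such a contrastive explanation concrete, here is a minimal Python sketch of how a "why this rather than that?" query and its causal answer might be represented. All class and field names are hypothetical illustrations for this post, not part of IGP2 or any deployed system.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ContrastiveQuery:
    """A 'why did you do this rather than that?' question from the passenger."""
    fact: str   # the action the vehicle actually took
    foil: str   # the alternative the passenger expected


@dataclass
class CausalExplanation:
    """Cause-effect chain supporting the chosen action over the foil."""
    effect: str
    causes: List[str] = field(default_factory=list)

    def render(self) -> str:
        # Join the causal chain into a single natural-language answer.
        return f"{self.effect} because " + " and ".join(self.causes)


# The junction scenario of Figure 1 expressed as a contrastive query.
query = ContrastiveQuery(fact="turn left now", foil="wait for Vehicle 2 to pass")
answer = CausalExplanation(
    effect="I turned left now instead of waiting",
    causes=[
        "Vehicle 2 is predicted to turn right and is giving way to Vehicle 1",
        "the resulting time gap is long enough for a safe left turn",
    ],
)
print(answer.render())
```

The fact-foil pairing is what makes the explanation contrastive: the answer is selected to discriminate between the action taken and the alternative the passenger had in mind.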

Why bother with XAVI?

The example scenarios above show explanations generated by XAVI in which the passenger is reassured that the system is safe and fully in control of the situation, able to react quickly and accurately to incidents, thereby contributing to the formation of trust [8]. The explanations directly appeal to cause-effect relationships and are delivered as part of an interaction loop, which could ensure that the commuter's doubts are freely expressed and readily addressed, as in the second scenario.

By being explainable, we allow our systems to be not only transparent but also more accountable. This could mean that responsibility in an incident can be attributed more easily. Such attribution is not only an essential part of understanding accidents, but it also opens up our systems to scrutiny regarding their internal biases or unfairness [11]. In addition, transparency not only means explaining the decisions of XAVI to people; it also enables people to provide more meaningful feedback to XAVI. The conversational approach not only helps users express their doubts or curiosity but can also provide a feedback loop that we could use to optimise XAVI.

XAVI is conversational, but explanations need not be given in words. Various modes of explanation, such as audio cues (e.g., changing pitch for speed limits, alerts, etc.) or visual imagery, can enable a higher degree of fidelity and accessibility for everyone. Besides being more accessible, this fusion of media can help ensure that the optimal level of user understanding is reached during interactions.

Integrating explainable AI into AVs could also resolve a range of transparency issues around black-box models. For example, if we used an interpretable model such as IGP2 [9], system debugging would become easier, while performance evaluation, model comparison, and hyper-parameter search could become more straightforward. The verifiability of such models means that rigorous proofs could be given for a decision, as in the GRIT system [10], while their white-box nature means we can also reason about the extent of knowledge transfer to unseen driving scenarios.

How can we build XAVI?

Building XAVI involves solving and integrating a wide range of tasks from a variety of distinct fields, such as motion planning and prediction, cognitive modelling, and natural language processing. To better understand and structure how such a system could look, I suggest that XAVI use three distinct modules for processing, as depicted in Figure 2.

Schematic diagram of XAVI with coloured rectangles showing system sub-components connected by arrows.

Figure 2. Proposed XAVI system structure. Dashed lines indicate perceptual inputs. The abbreviation "upd/ret" stands for update and retrieve.

The AV module is responsible for the actual operation of the car. The global planner combines relevant map, traffic, and weather data to generate a route to a given destination according to user-specified criteria (e.g., shortest distance, lowest emissions, etc.). This task can be solved by a range of route-finding algorithms, which may also be extended with a recent memory of traffic experiences to optimise route search. Using plan-explanation methods, we could also generate justifications for the selected routes [12]. Following the global route, the local planner generates shorter-term action plans that dynamically optimise the car's behaviour based on the immediate perceived and predicted state of the environment. Interpretability of these systems is a key factor, as they must form the intelligible structures our explanations are based on. For example, the recent prediction and planning system IGP2 [9] predicts both the optimal and counterfactual manoeuvre sequences, which could enable the creation of contrastive and more efficient explanations. Finally, the low-level controllers of the car execute the commanded manoeuvres while collecting feedback signals that can be further incorporated into our explanations.
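As a rough illustration of how these three layers might fit together, here is a minimal Python sketch of the AV module's data flow, from global route to manoeuvre plan to control feedback. The interfaces and names are hypothetical simplifications for this post, not the actual IGP2 API.

```python
from dataclasses import dataclass
from typing import Any, Dict, List, Protocol, Tuple


@dataclass
class Route:
    waypoints: List[Tuple[float, float]]  # positions along the global route
    criterion: str                        # e.g. "shortest_distance" or "lowest_emissions"


@dataclass
class ManoeuvrePlan:
    manoeuvres: List[str]             # e.g. ["follow-lane", "exit-left"]
    counterfactuals: List[List[str]]  # alternative sequences for contrastive explanations


class GlobalPlanner(Protocol):
    def plan_route(self, origin: Any, destination: Any, criterion: str) -> Route: ...


class LocalPlanner(Protocol):
    def plan(self, route: Route, perceived_state: Any) -> ManoeuvrePlan: ...


class Controller(Protocol):
    def execute(self, plan: ManoeuvrePlan) -> Dict[str, float]: ...  # returns feedback signals


def drive_step(gp: GlobalPlanner, lp: LocalPlanner, ctrl: Controller,
               origin: Any, destination: Any, perceived_state: Any) -> Dict[str, Any]:
    """One pass through the AV module: route -> manoeuvre plan -> control feedback."""
    route = gp.plan_route(origin, destination, criterion="shortest_distance")
    plan = lp.plan(route, perceived_state)
    feedback = ctrl.execute(plan)
    # The plan (with its counterfactual alternatives) and the control feedback
    # are handed to the explanation engine as raw material for explanations.
    return {"route": route, "plan": plan, "feedback": feedback}
```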

The primary module of XAVI is the explanation engine. Its main task is to synthesise information from the other modules of XAVI and generate relevant explanations for the passengers. Following Dazeley et al. [13], its core is formed by an interaction loop between a cognitive process, which selects relevant causal information, and a social process, which determines the kind of explanation required while also managing the incoming and outgoing communication with the passenger. To provide context-relevant and useful explanations, these processes update and retrieve information from a shared memory space. In addition, they may maintain and revise various cognitive models of the passenger, for example based on criteria of explanation-seeking curiosity [14], which allow explanation selections that are more engaging and more in line with the expectations of passengers.
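The sketch below illustrates, in hypothetical Python, one way this interaction loop between the cognitive process, the social process, and their shared memory could be organised; it is only a structural outline of the idea from Dazeley et al. [13], not an implementation of it, and every name is illustrative.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class SharedMemory:
    """Shared memory space updated and retrieved by both processes ("upd/ret")."""
    episodes: List[dict] = field(default_factory=list)               # recent driving context
    passenger_model: Dict[str, float] = field(default_factory=dict)  # e.g. curiosity estimate

    def update(self, item: dict) -> None:
        self.episodes.append(item)


class CognitiveProcess:
    """Selects causal information relevant to the current query."""

    def select_causes(self, query: str, memory: SharedMemory) -> List[str]:
        # Placeholder: a real system would trace the planner's decision back
        # to the factors that caused it rather than match topic keywords.
        return [e.get("cause", "") for e in memory.episodes
                if e.get("topic", "") and e["topic"] in query.lower()]


class SocialProcess:
    """Decides what kind of explanation the passenger needs and phrases it."""

    def compose(self, causes: List[str], memory: SharedMemory) -> str:
        detail = "detailed" if memory.passenger_model.get("curiosity", 0.5) > 0.7 else "brief"
        joined = "; ".join(causes) if causes else "no specific cause was recorded"
        return f"({detail}) I acted this way because {joined}."


def explanation_loop(query: str, memory: SharedMemory) -> str:
    """One turn of the loop: the cognitive process selects causes, the social process phrases them."""
    causes = CognitiveProcess().select_causes(query, memory)
    return SocialProcess().compose(causes, memory)


# Example usage with a single remembered braking episode.
memory = SharedMemory()
memory.update({"topic": "braking", "cause": "a child ran towards the road"})
print(explanation_loop("Why the sudden braking?", memory))
```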

The direct communication with users is handled by the dialogue engine. This module parses the incoming queries of the passengers and generates explanations based on the commands of the social process. Natural language interactions may be managed through semantics-oriented dialogue modelling [15] where semantic information is extracted by the explanation engine from causal information. This information may then be displayed on screen or converted to voice for audio cues. Additional visual outputs could combine a range of data, such as simple displays of perceptual recordings or dynamically generated images like Figure 1.
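Finally, a toy sketch of the dialogue engine's role: parse the passenger's utterance, delegate to the explanation engine, and render the result. The keyword-based intent parser below is a deliberately crude stand-in for the semantics-oriented dialogue modelling mentioned above, and all names are assumptions made for illustration.

```python
from typing import Callable, Dict

# Hypothetical intent keywords mapped to explanation topics; a deployed system
# would use a proper semantic parser instead of keyword matching.
INTENT_KEYWORDS: Dict[str, str] = {
    "stop": "braking",
    "route": "routing",
    "turn": "manoeuvre",
}


def parse_query(utterance: str) -> str:
    """Very rough intent extraction from a passenger utterance."""
    for keyword, topic in INTENT_KEYWORDS.items():
        if keyword in utterance.lower():
            return topic
    return "general"


def dialogue_turn(utterance: str,
                  explain: Callable[[str], str],
                  render: Callable[[str], None]) -> None:
    """Parse the passenger's query, fetch an explanation, and present it."""
    topic = parse_query(utterance)
    explanation = explain(topic)   # delegated to the explanation engine
    render(explanation)            # on-screen text, synthesised speech, or imagery


# Example usage with trivial stand-ins for the other modules.
dialogue_turn(
    "Why did you stop so suddenly?",
    explain=lambda topic: f"I braked because a child ran towards the road ({topic}).",
    render=print,
)
```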

Regulations and ethics for XAVI

XAVI fits into a broader trend of information-management regulations and movements, such as the EU's GDPR or the Algorithmic Accountability Act of 2019 proposed in the US. Furthermore, full transparency of XAVI means not just causally justified explanations on the spot, but also complete oversight of the data the vehicle collects and processes. This data may then be integrated into an event data recorder infrastructure, such as the EU's eCall system, to provide timely help to passengers or relevant explanations to the authorities in case of an incident. Overall, I envision XAI, and by extension XAVI, forming one piece of the puzzle when building truly trustworthy AI [16] that respects human rights, prevents harm, ensures fairness, and guarantees explicability.

Of course, an important consideration around XAVI is how we handle failure cases. While such cases may negatively affect trust, a combination of explanations and an expression of regret has been shown to sufficiently recover trust [17]. It is, however, crucial that our explanations are sound, that is, that they do not distort the true causality of the underlying systems. Otherwise, our explanations could be misleading or just plain incorrect.

Ultimately, XAVI must also rely on some form of collected data, such as voice recordings or perceptual inputs, to make the best predictions possible. Besides the ethical considerations that pertain to data collection, a prevailing issue with data is its inherent bias. This could cause distrust in the applicability of AVs to rare or novel scenarios and would make the deployment of XAVI in various parts of the world more challenging, for example where language data is not widely available. However, an advantage of the design of XAVI is its potential ability to explain decisions in terms of social expectations, which could immediately shed light on systematic biases and prompt designers to correct them.

Wrapping up

Explainable Autonomous Vehicle Intelligence, or XAVI, is what I envision explainability for autonomous vehicles could become. Three criteria are key to creating XAVI: transparency, causality, and a social approach.

Transparency, because XAVI should explain underlying methods that are themselves inherently interpretable, while avoiding black boxes that magically map input to output without any apparent clue about causality.

Causality, not simply because the underlying methods should be causally justifiable, but also because the generated explanations themselves should describe the AV's workings in terms of cause-effect relationships, ideally leveraging a contrastive structure.

And finally a social approach, because explaining is a conversational process that involves interaction between passengers and vehicle, and we should always keep the passengers' requirements under consideration.

I hope that by building interpretable and intelligible systems such as XAVI, we can boost people's trust in AVs and thereby achieve wide acceptance for them. However, we need to embrace the human-centric mindset; otherwise, a world where cars are allowed to drive on their own may never come.

References

  1. Hong Wang, Amir Khajepour, Dongpu Cao, and Teng Liu. Ethical Decision Making in Autonomous Vehicles: Challenges and Research Progress. IEEE Intelligent Transportation Systems Magazine, 14(1):6-17, 2022.
  2. Rasheed Hussain and Sherali Zeadally. Autonomous Cars: Research Results, Issues, and Future Challenges. IEEE Communications Surveys & Tutorials, 21(2):1275-1313, 2019.
  3. Woon Kim and Tara Kelley-Baker. Users' Trust in and Concerns about Automated Driving Systems. Technical report, AAA Foundation for Traffic Safety, 2021.
  4. Daniel Omeiza, Helena Webb, Marina Jirotka, and Lars Kunze. Explanations in Autonomous Driving: A Survey. IEEE Transactions on Intelligent Transportation Systems, 2021.
  5. Shahin Atakishiyev, Mohammad Salameh, Hengshuai Yao, and Randy Goebel. Explainable Artificial Intelligence for Autonomous Driving: A Comprehensive Overview and Field Guide for Future Research Directions. arXiv:2112.11561, 2021.
  6. Tim Miller. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267:1-38, 2019.
  7. David R. Large, Leigh Clark, Annie Quandt, Gary Burnett, and Lee Skrypchuk. Steering the conversation: A linguistic exploration of natural language interactions with a digital assistant during simulated driving. Applied Ergonomics, 63:53-61, 2017.
  8. Petri Launonen, Arto O. Salonen, and Heikki Liimatainen. Icy Roads and Urban Environments: Passenger Experiences in Autonomous Vehicles in Finland. Transportation Research Part F: Traffic Psychology and Behaviour, 80:34-48, 2021.
  9. Stefano V. Albrecht, Cillian Brewitt, John Wilhelm, Balint Gyevnar, Francisco Eiras, Mihai Dobre, and Subramanian Ramamoorthy. Interpretable Goal-based Prediction and Planning for Autonomous Driving. IEEE International Conference on Robotics and Automation (ICRA), 2021.
  10. Cillian Brewitt, Balint Gyevnar, Samuel Garcin, and Stefano V. Albrecht. GRIT: Fast, Interpretable, and Verifiable Goal Recognition with Learned Decision Trees for Autonomous Driving. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021.
  11. David Danks and Alex John London. Algorithmic Bias in Autonomous Systems. International Joint Conference on Artificial Intelligence (IJCAI), 4691-4697, 2017.
  12. Maria Fox, Derek Long, and Daniele Magazzeni. Explainable Planning. arXiv:1709.10256, 2017.
  13. Richard Dazeley, Peter Vamplew, Cameron Foale, Charlotte Young, Sunil Aryal, and Francisco Cruz. Levels of explainable artificial intelligence for human-aligned conversational explanations. Artificial Intelligence, 299:103525, 2021.
  14. Emily G. Liquin and Tania Lombrozo. A functional approach to explanation-seeking curiosity. Cognitive Psychology, 119:101276, 2020.
  15. Günther Wirsching, Markus Huber, Christian Kölbl, Robert Lorenz, and Ronald Römer. Semantic Dialogue Modeling. Cognitive Behavioural Systems, 104-113, 2012.
  16. AI HLEG. Ethics Guidelines for Trustworthy AI. Publications Office of the European Union, Directorate-General for Communications Networks, Content and Technology, 2019.
  17. Esther Siegling Kox, José L. Kerstholt, T. F. Hueting, and Peter W. de Vries. Trust Development in Military and Civilian Human-Agent Teams: The Effect of Social-Cognitive Recovery Strategies. International Journal of Social Robotics, 2022.
  18. Balint Gyevnar, Massimiliano Tamborski, Cheng Wang, Christopher G. Lucas, Shay B. Cohen, and Stefano V. Albrecht. A Human-Centric Method for Generating Causal Explanations in Natural Language for Autonomous Vehicle Motion Planning. IJCAI Workshop on Artificial Intelligence for Autonomous Driving, 2022.