Cars that Explain: Building Trust in Autonomous Vehicles through Causal Explanations and Conversations
In most discussions about autonomous vehicles (AVs), I keep encountering two opposing opinions. On the one hand, there are the AV advocates. They argue that autonomous vehicles would improve traffic efficiency and transport safety, reducing road fatalities possibly by as much as 90%. Through this increase in efficiency, we could also see a decrease in pollution, while the driverless nature of these vehicles would make car transport more accessible for passengers with disabilities.
On the other hand, we hear the AV sceptics. Their argument rests on the observation that the complex, highly integrated, and opaque systems of AVs are not well understood by most people. This lack of understanding often manifests as a reluctance to accept the technology, driven by fears that the vehicle might fail in unexpected situations. This has in effect fostered distrust and scepticism in the public eye. For example, a recent survey in the US found that only about 20% of those interviewed were willing to adopt some level of autonomy in cars.
Yet, I believe we can build trust and understanding for AVs through the adoption of human-centric explainable AI (XAI).
There already seems to be a consensus among scientists that explanations are a great way to improve trust in AVs [2, 4, 5], but most current XAI methods sorely lack consideration of the human factor. I propose that we build human-centric XAI systems with three key properties in mind:
- Transparent: As the cliché goes, people don't trust what they don't understand. Explanations should be built on systems whose purposes, capabilities, and methods are transparent. We should avoid inscrutable black boxes and surrogate model-based explanations.
- Causal: Explanations should appeal to causality and be selective to avoid overly complex explanations. They should also be contrastive, highlighting differences between the true outcome and some counterfactual ones.
- Social: Explanations should be intelligible and delivered as part of an interaction that considers the listener's beliefs about the observed environment. Interactivity can also help place the explainer (e.g., an AV) on equal social footing with the listener.
(Of course, explanations must be entirely sound, that is, faithful to the processes that they explain. However, this is not a specific criterion for human-centric XAI, but rather a general requirement for all XAI.)
My essay arguing for this move towards human-centric XAI for AVs won third prize in the “Shape the Future of ITS” Competition of the IEEE Intelligent Transportation Systems Society. This blog post is an amended version of that essay, which you can find on my webpage.
I call this marriage of human-centric XAI and AV systems eXplainable Autonomous Vehicle Intelligence (XAVI [(k)savi]). As a first step towards achieving XAVI, we presented a paper at the IJCAI 2022 Workshop on Artificial Intelligence for Autonomous Driving, where I also gave a talk on the problem motivation and method.
“Why did you turn instead of stopping‽”
To illustrate why XAVI could be a powerful way to build trust, imagine someone who uses a XAVI-powered mobility-on-demand service to commute to their workplace every day. During one morning trip, three scenarios may take place that prompt the commuter to interact with the vehicle.
At one point, the vehicle stops abruptly even though the passenger cannot see anything unusual. After being asked for clarification, XAVI may show a recording of a small child chasing a ball right into the path of the car. The vehicle may also project an arrow extending towards the location of the averted collision to signal why braking was necessary. This could reassure the commuter that the vehicle is aware of social expectations rather than being single-mindedly goal-focused (e.g., XAVI didn't run someone over simply because that would have been faster).
A little later, XAVI may decide to take a route unfamiliar to the passenger, who then asks for an explanation. XAVI could reply that, based on traffic information, it determined the chosen route to be the fastest. The commuter may interject, saying they know a better route. In turn, the vehicle could explain that it considered that route too, but traffic diversions had been reported along the way just an hour ago and were already causing huge delays.
Finally, at a junction, two other vehicles arrive at the same time as XAVI, as shown in Figure 1. XAVI chooses to take a left turn even though another vehicle is approaching on a priority lane from the right. The passenger could ask why XAVI thought this was a safe manoeuvre. XAVI may explain that the oncoming vehicle was likely trying to turn right and was giving way to the vehicle going straight; otherwise, stopping on the road would have been irrational. This gave XAVI enough time to turn left, and it may also display Figure 1 for clarification.
Why bother with XAVI?
The example scenarios above show explanations generated by XAVI that reassure the passenger that the system is safe and fully in control of the situation, able to react quickly and accurately to incidents, thereby contributing to the formation of trust. The explanations directly appeal to cause-effect relationships and are delivered as part of an interaction loop, which ensures that the commuter's doubts are freely expressed and readily addressed, as in the second scenario.
By being explainable, we allow our systems to be not only transparent but also more accountable. This means the attribution of responsibility in an incident can be determined more easily. Such attribution is not only an essential part of understanding accidents; it also opens up our systems to scrutiny regarding internal biases or unfairness. In addition, transparency does not just mean explaining the decisions of XAVI to people; it also enables people to provide more meaningful feedback to XAVI. The conversational approach not only helps users express their doubts or curiosity but can also provide a feedback loop which we could use to optimise XAVI.
XAVI is conversational, but explanations need not be in words. Various modes of explanation, such as audio cues (e.g., changing pitch for speed limits, alerts, etc.) or visual imagery, can enable a higher degree of fidelity and accessibility for everyone. Besides being more accessible, this fusion of media can help ensure that the optimal level of user understanding is reached during interactions.
Integration of explainable AI into AVs could resolve a range of transparency issues around black-box models as well. For example, if we used an interpretable model such as IGP2, we could make system debugging easier, while performance evaluation, model comparison, and hyper-parameter search could become more straightforward. The verifiability of these models means that rigorous proofs could be given for a given decision, as in the GRIT system, while their white-box nature means we can also reason about the extent of knowledge transfer to various unseen driving scenarios.
How can we build XAVI?
Building XAVI involves solving and integrating a wide range of tasks from a variety of distinct fields, such as motion planning and prediction, cognitive modelling, and natural language processing. To better understand and structure what such a system could look like, I suggest XAVI use three distinct modules for processing, as depicted in Figure 2.
The AV module is responsible for the actual operation of the car. The global planner combines relevant map, traffic, and weather data to generate a route to a given destination according to some user-specified criteria (e.g., shortest distance, lowest emissions, etc.). This task can be solved by a range of route-finding algorithms, which may also be extended with a recent memory of traffic experiences to optimise route search. Using plan-explanation methods, we could also generate justifications for the selected routes. Following the global route, the local planner generates shorter-term action plans that dynamically optimise the car's behaviour based on the immediate perceived and predicted state of the environment. Interpretability of these systems is a key factor, as they must form intelligible structures our explanations can be based on. For example, the recent prediction and planning system IGP2 predicts both the optimal and counterfactual manoeuvre sequences, which could enable the creation of contrastive and more efficient explanations. Finally, low-level controllers of the car execute the commands of manoeuvres while collecting feedback signals that could be further incorporated into our explanations.
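To make the idea of criteria-weighted route search concrete, here is a minimal sketch of a Dijkstra-style global planner whose edge costs blend user-specified criteria such as distance and emissions. The graph, function name, and weights are all hypothetical illustrations, not part of any real XAVI implementation:

```python
import heapq

def best_route(graph, start, goal, w_dist=1.0, w_emis=0.0):
    """Dijkstra search where each edge carries (distance, emissions).

    The weights w_dist and w_emis encode the user-specified criteria:
    pure shortest distance, pure lowest emissions, or any blend.
    """
    frontier = [(0.0, start, [start])]  # (accumulated cost, node, path so far)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, (dist, emis) in graph.get(node, {}).items():
            if nxt not in visited:
                step = w_dist * dist + w_emis * emis
                heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None  # no route exists

# Toy road network: edges map neighbour -> (distance_km, emissions_g)
roads = {
    "home":     {"highway": (2.0, 300), "backroad": (1.0, 100)},
    "highway":  {"work": (5.0, 400)},
    "backroad": {"work": (9.0, 500)},
}
```

With `w_dist=1.0, w_emis=0.0` the planner picks the shorter highway route; with `w_dist=0.0, w_emis=1.0` it prefers the lower-emission backroad, showing how the same search serves different user criteria.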
The primary module of XAVI is the explanation engine. Its main task is to synthesise information from the other modules of XAVI and generate relevant explanations for the passengers. Following Dazeley et al., its core is formed by an interaction loop between a cognitive process, which selects relevant causal information, and a social process, which determines the kind of explanation required while also managing the incoming and outgoing communication with the passenger. To provide context-relevant and useful explanations, these processes update and retrieve information from a shared memory space. In addition, they may maintain and revise various cognitive models of the passenger, for example based on criteria of explanation-seeking curiosity, which allow the selection of explanations that are more engaging and more in line with the passengers' expectations.
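As a rough illustration of this interaction loop, the cognitive and social processes could share a memory space along the following lines. All names here are my own invented stand-ins rather than Dazeley et al.'s terminology, and the keyword matching is a deliberate oversimplification of causal-information selection:

```python
from dataclasses import dataclass, field

@dataclass
class SharedMemory:
    """Shared memory space read and written by both processes."""
    causal_facts: dict = field(default_factory=dict)   # action -> cause
    dialogue_history: list = field(default_factory=list)

def cognitive_process(query, memory):
    """Select the causal facts relevant to the passenger's query."""
    return {k: v for k, v in memory.causal_facts.items() if k in query.lower()}

def social_process(query, facts, memory):
    """Decide the form of the explanation and phrase the reply."""
    memory.dialogue_history.append(query)  # keep context for follow-ups
    if not facts:
        return "I have no recorded cause for that; could you rephrase?"
    cause, effect = next(iter(facts.items()))
    return f"I chose to {cause} because {effect}."

memory = SharedMemory(causal_facts={
    "stop": "a child was running toward the road",
    "turn": "the oncoming vehicle was giving way",
})
query = "Why did you stop?"
reply = social_process(query, cognitive_process(query, memory), memory)
```

The point of the sketch is the division of labour: the cognitive process filters the causal record, the social process owns the conversation, and the shared memory lets later turns build on earlier ones.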
Direct communication with users is handled by the dialogue engine. This module parses the incoming queries of the passengers and generates explanations based on the commands of the social process. Natural language interactions may be managed through semantics-oriented dialogue modelling, where semantic information is extracted by the explanation engine from causal information. This information may then be displayed on screen or converted to voice for audio cues. Additional visual outputs could combine a range of data, from simple displays of perceptual recordings to dynamically generated images like Figure 1.
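A toy sketch of the parsing step might map a passenger's question to a small semantic frame that the explanation engine can act on. The regular expression and frame fields are assumptions for illustration only; a real dialogue engine would use far more robust language understanding:

```python
import re

def parse_query(text):
    """Map a passenger question to a minimal semantic frame.

    Recognises contrastive "why" questions of the form
    "Why did you X (instead of Y)?"; anything else is unknown.
    """
    text = text.lower().strip("?!. ")
    m = re.match(r"why did you (\w+)(?: instead of (\w+))?", text)
    if m:
        return {"intent": "why", "action": m.group(1), "foil": m.group(2)}
    return {"intent": "unknown", "raw": text}
```

A contrastive query such as "Why did you turn instead of stopping?" yields both the action and the foil, which is exactly the structure a contrastive explanation needs: the true outcome and the counterfactual the passenger expected.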
Regulations and ethics for XAVI
XAVI fits into a broader trend of information management regulations and movements, such as the EU's GDPR or the Algorithmic Accountability Act of 2019 in the US. Furthermore, full transparency of XAVI means not just causally justified explanations on the spot, but also complete oversight of the data collected and processed by the vehicle. This data may then be integrated into an event data recorder infrastructure, such as the EU's eCall system, to provide timely help to passengers or relevant explanations to the authorities in case of an incident. Overall, I envision XAI, and by extension XAVI, forming one piece of the puzzle when building truly trustworthy AI that respects human rights, prevents harm, ensures fairness, and guarantees explicability.
Of course, an important consideration around XAVI is how we handle failure cases. While these cases may negatively affect trust, a combination of explanations and an expression of regret has been shown to sufficiently recover trust. It is, however, crucial that our explanations are sound, that is, they do not distort the true causality of the underlying systems. Otherwise, our explanations could be misleading or just plain incorrect.
Ultimately, XAVI must also rely on some form of collected data, such as voice recordings or perceptual inputs, to make the best predictions possible. Besides the ethical considerations that pertain to data collection, a prevailing issue with data is its inherent bias. This could cause distrust in the applicability of AVs in rare or novel scenarios and would make the deployment of XAVI in various parts of the world a bigger challenge, for example where language data is not widely available. However, an advantage of the design of XAVI is its potential ability to explain decisions in terms of social expectations, which could immediately shed light on systematic biases and prompt designers to make corrections.
Explainable Autonomous Vehicle Intelligence or XAVI is what I envision explainability for autonomous vehicles could become. Three criteria are key to create XAVI: transparency, causality, and a social approach.
Transparency, because XAVI should explain underlying methods that are themselves inherently interpretable while avoiding black-boxes that magically map input to output without any apparent clue about causality.
Causality, not simply because the underlying methods should be causally justifiable, but also because the generated explanations themselves should explain the AV's working in terms of cause-effect relationships, ideally leveraging a contrastive structure.
And finally a social approach, because explaining is a conversational process that involves interaction between passengers and vehicle, and we should always keep the passengers' requirements under consideration.
I hope that by building interpretable and intelligible systems such as XAVI, we can boost people's trust in AVs and thereby achieve wide acceptance for them. However, we need to embrace the human-centric mindset; otherwise, a world where cars are allowed to drive on their own may never come.
- Hong Wang, Amir Khajepour, Dongpu Cao, and Teng Liu. Ethical Decision Making in Autonomous Vehicles: Challenges and Research Progress. IEEE Intelligent Transportation Systems Magazine, 14(1):6-17, 2022.
- Rasheed Hussain and Sherali Zeadally. Autonomous Cars: Research Results, Issues, and Future Challenges. IEEE Communications Surveys Tutorials, 21(2):1275-1313, 2019.
- Woon Kim and Tara Kelley-Baker. Users' Trust in and Concerns about Automated Driving Systems. Technical report, AAA Foundation for Traffic Safety, 2021.
- Daniel Omeiza, Helena Webb, Marina Jirotka, and Lars Kunze. Explanations in Autonomous Driving: A Survey. IEEE Transactions on Intelligent Transportation Systems, 2021.
- Shahin Atakishiyev, Mohammad Salameh, Hengshuai Yao, and Randy Goebel. Explainable Artificial Intelligence for Autonomous Driving: A Comprehensive Overview and Field Guide for Future Research Directions. arXiv:2112.11561, 2021.
- Tim Miller. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267:1-38, 2019.
- David R. Large, Leigh Clark, Annie Quandt, Gary Burnett, and Lee Skrypchuk. Steering the conversation: A linguistic exploration of natural language interactions with a digital assistant during simulated driving. Applied Ergonomics, 63:53-61, 2017.
- Petri Launonen, Arto O. Salonen, and Heikki Liimatainen. Icy Roads and Urban Environments: Passenger Experiences in Autonomous Vehicles in Finland. Transportation Research Part F: Traffic Psychology and Behaviour, 80:34-48, 2021.
- Stefano V. Albrecht, Cillian Brewitt, John Wilhelm, Balint Gyevnar, Francisco Eiras, Mihai Dobre, and Subramanian Ramamoorthy. Interpretable Goal-based Prediction and Planning for Autonomous Driving. IEEE International Conference on Robotics and Automation (ICRA), 2021.
- Cillian Brewitt, Balint Gyevnar, Samuel Garcin, and Stefano V. Albrecht. GRIT: fast, interpretable, and verifiable goal recognition with learned decision trees for autonomous driving. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021.
- David Danks and Alex John London. Algorithmic Bias in Autonomous Systems. International Joint Conference on Artificial Intelligence (IJCAI), 4691-4697, 2017.
- Maria Fox, Derek Long, and Daniele Magazzeni. Explainable Planning. arXiv:1709.10256, 2017.
- Richard Dazeley, Peter Vamplew, Cameron Foale, Charlotte Young, Sunil Aryal, and Francisco Cruz. Levels of explainable artificial intelligence for human-aligned conversational explanations. Artificial Intelligence, 299:103525, 2021.
- Emily G. Liquin and Tania Lombrozo. A functional approach to explanation-seeking curiosity. Cognitive Psychology, 119:101276, 2020.
- Günther Wirsching, Markus Huber, Christian Kölbl, Robert Lorenz, and Ronald Römer. Semantic Dialogue Modeling. Cognitive Behavioural Systems, 104-113, 2012.
- AI HLEG. Ethics Guidelines for Trustworthy AI. Publications Office of the European Union, Directorate-General for Communications Networks, 2019.
- Esther Siegling Kox, José L. Kerstholt, T. F. Hueting, and Peter W. de Vries. Trust Development in Military and Civilian Human-Agent Teams: The Effect of Social-Cognitive Recovery Strategies. International Journal of Social Robotics, 2022.
- Balint Gyevnar, Massimiliano Tamborski, Cheng Wang, Christopher G. Lucas, Shay B. Cohen, and Stefano V. Albrecht. A Human-Centric Method for Generating Causal Explanations in Natural Language for Autonomous Vehicle Motion Planning. IJCAI Workshop on Artificial Intelligence for Autonomous Driving, 2022.