How to make robots that we can trust
Can we trust a robot that makes decisions with real-world consequences?

Michael Winikoff, Professor in Information Science, University of Otago

Self-driving cars, personal assistants, cleaning robots, smart homes – these are just some examples of autonomous systems. With many such systems already in use or under development, a key question concerns trust.

My central argument is that having trustworthy, well-working systems is not enough. To enable trust, the design of autonomous systems also needs to consider other requirements, including a capacity to explain decisions and options for recourse when things go wrong.

When doing a good job is not enough

The past few years have seen dramatic advances in the deployment of autonomous systems. These are essentially software systems that make decisions and act on them, with real-world consequences. Examples include physical systems, such as self-driving cars and robots, and software-only applications, such as personal assistants.

However, it is not enough to engineer autonomous systems that function well. We also need to consider what additional features people need before they will trust such systems.

For example, consider a personal assistant. Suppose it functions well. Would you trust it if it could not explain its decisions?

To make a system trustable, we need to identify the key prerequisites to trust, and then ensure that the system is designed to incorporate them.

A trustworthy robot may need to be able to explain its decisions.

What makes us trust?

Ideally, we would answer this question using experiments. We could ask people whether they would be willing to trust an autonomous system, and explore how this depends on various factors. For instance, is providing guarantees about the system's behaviour important? Is providing explanations important? Suppose the system makes decisions that are critical to get right, for example, […]