The level of automation in consumer-oriented products is increasing, yet the mere presence of automated functionality does not mean that consumers will accept and use it. Instead, the acceptance and use of automation may vary significantly between different users of the very same product. This suggests that the management of technological trust plays a central role not only in product development and marketing but also in the ethically appropriate use of automation.
In our recent research, we set out to explore trust in car automation and how drivers respond to situations where the foundations of their trust are challenged. We decided to focus on Tesla Autopilot, a package of driving-assistance functionalities, as it provides a prime example of consumer-oriented automation. With a suite of sensors, the car senses traffic and road conditions and is expected to assist the driver through the partial automation of some driving activities. Tesla develops Autopilot constantly and delivers the latest versions to cars through frequent software updates. In this view, Autopilot cannot be considered a complete product, but rather something that is constantly changing and that requires learning on the driver’s part. There is therefore a risk that expectations of the level of automation are not aligned with actual technological capabilities, potentially leading to situations that put the driver, passengers and other road users in danger.
Using empirical evidence gathered from selected discussion threads on an online forum, we analysed how drivers discussed their expectations concerning the level of automation, and the experiences that conflicted with those expectations. The focus of our research was to better understand the misalignment between expectations and experiences, as well as the different levels of trust and the associated coping strategies deployed by users of Tesla Autopilot.
Misalignment of trust
Right from the beginning, we observed that several drivers had been in situations where their experiences differed from their initial expectations. We analysed these situations to identify the perceived causes of the misalignment of trust: some blamed the driver, others the technology.
The human driver. Here, the human driver was seen as the root cause of all issues. It was reasoned that it is the driver’s responsibility to pay attention to the system and the environment and to be able to control the car in all circumstances. Some argued that the current limitations of the technology are documented well enough in the Tesla user manual and/or flagged in warnings when drivers turn on Autopilot. Others added that the driver needs to be aware of “known” bugs and issues, of the system and its design, and of the fact that the system is still in “beta” and does not work perfectly in all situations. The conclusion was that drivers should be aware of these matters in order to prevent accidents.
The technology. The other camp criticised the technology and the marketing message, reasoning that the technology and the promises made about its capabilities should also be held accountable. Some drivers criticised the functioning and reliability of particular driving-assistance features, whereas others brought the Tesla brand into the conversation and argued that Tesla paints too optimistic a picture of the car’s performance through its sales and marketing. Some drivers also highlighted that because the system performs well most of the time, people learn to trust it too much; subsequently, they do not pay enough attention to be prepared to take control when the system suddenly does not behave as expected. Training and more careful messaging were presented as ways to align drivers’ expectations with the capabilities of automation.
Levels of trust and coping with unexpected situations
As drivers shared their experiences of narrow escapes and other unexpected situations, we were able to distinguish three groups of users based on their level of trust and how they responded to new experiences.
Group 1: Trust. The drivers in this group agreed that Autopilot is imperfect and does not always work flawlessly, yet they felt confident in their ability to use the automation. Through experience and engagement, they had learned how Autopilot may behave in different situations and had prepared their own responses for those situations, growing more confident that they were in control and could cope while using the automation. By aligning their expectations and controlling the meaning of their experiences, this group of drivers was able to reduce stress and maintain or even increase their trust in automation.
Group 2: Low trust. The drivers in this group agreed that the technology is imperfect and has its faults, and they had experienced issues similar to those of the first group. They trusted Autopilot less and felt they needed to be more cautious. These users were significantly less confident in their understanding of when and how the technology could behave unexpectedly. However, they continued to use Autopilot while trying to manage the associated stress and uncertainty. They had accepted Autopilot’s unexpected behaviour as inevitable and coped with the resulting stress by emphasising the need for alertness on the road, yet they had not been able to reduce that stress by gaining a feeling of control.
Group 3: No trust. The drivers in this group reported incidents similar to those of the other two groups but described the lowest level of trust. They expressed disappointment with the technology as they learned from their own negative experiences and from information provided by others. Some had stopped using autonomous features in particular situations; others no longer used them at all and regretted purchasing them. Users in this group did not feel in control in the way the trust group did, nor could they manage the stress as the low-trust group did. To cope, they resorted to switching off the automation to eliminate or modify the problematic conditions, leading to the disuse of automation.
Align technological capabilities and expectations
Autonomous car technology is still in the making, and users frequently encounter unexpected situations with it. Drivers are therefore constantly learning, adjusting their expectations of and trust in automation, and refining their ways of managing and coping with challenging situations. Even with imperfect systems, there are users who are willing to experiment and to accept the systems’ limitations, at least in the early phases of innovation. They are able to overcome initial distrust through proactive processes of trust calibration and coping. These people formed the trust group in our data. However, this group is not representative of all users.
To increase trust and help users in the low-trust and no-trust groups use automation more confidently, within the boundaries of its capabilities and limitations, we recommend that firms seeking to commercialise consumer-oriented automation not only focus on the technology but also pay closer attention to customers and their readiness to operate and cope with automated systems. In our view, the following steps would pave the way towards the responsible and safer commercialisation of autonomous technologies.
- Provide informative communication about the capabilities and limitations of automation
- Help users to prepare for and control different situations with better information and training
- Involve users in the development and continuously improve the technology
- Communicate any functional updates and their implications to the users clearly
- This blog post is based on the authors’ paper Trust and risky technologies: Aligning and coping with Tesla Autopilot, in Proceedings of the 52nd Hawaii International Conference on System Sciences.
- The post gives the views of the authors, not the position of LSE Business Review or the London School of Economics.
- Featured image by Oregon Department of Transportation, under a CC-BY-2.0 licence
Kari Koskinen is a fellow in the department of management at LSE, where he obtained his PhD in the Information Systems and Innovation Group. His research interests revolve around digital innovation and platforms.
Antti Lyyra is a fellow at LSE’s department of management. His research focuses on the industry dynamics of digital innovation in the context of artificial intelligence and robotics.
Niina Mallat is a postdoc researcher in Aalto University’s department of information and service management. She studies the users of autonomous vehicles and digital technologies to understand trust formation and acceptance of emerging technologies.
Virpi Kristiina Tuunainen is a professor of information systems science at the department of information and service management of Aalto University School of Business. Her current research focuses on ICT-enabled or enhanced services and digital innovation.