Trust is an essential component of our daily lives. From the certainty that the train driver will get us safely to our destination, to the hope that the fish we buy is fresh, trust underpins our daily interactions. Living without it would be unsustainable.

Several Nobel laureates in economics, such as Joseph Stiglitz and Angus Deaton, have stressed the importance of trust in economic institutions and governments. This trust is crucial for well-functioning markets and for avoiding rising inequality. However, that same trust also makes us vulnerable, particularly when it comes to technology.

The emergence of artificial intelligence (AI) poses a new challenge: trust or concern? According to a recent KPMG report, while 85% of people recognise its benefits, 61% express mistrust. These mixed feelings stem from ethical, legal, and social concerns. This article explores the relationship between trust and AI in our digital age, and what this duality implies for the future.


The relationship between trust and AI

To better understand this relationship between trust and AI, researchers Ella Glikson of Bar-Ilan University and Anita Woolley of Carnegie Mellon University reviewed around 150 articles published over the past 20 years in fields such as computer science, robotics and business management. Their review allowed them to draw some conclusions about how people in organisational settings relate to this technology.

Unlike traditional automation, which follows pre-programmed rules without learning capabilities, artificial intelligence is an automated process that learns and adjusts based on experience and feedback, making it dynamic and “alive”. Trust is crucial in determining its use, non-use (rejection), misuse (over-dependence) or even abuse (harmful use).

However, this raises an interesting question that society may never fully answer: can we trust systems that we don’t fully understand, and that sometimes not even their own programmers fully understand?

 

The relationship between trust and AI according to this study, and some risks

To understand the relationship between trust and AI, the authors of the review distinguished between three types of artificial intelligence:

Physical robots with AI:

This is the artificial intelligence embedded in physical machines: service robots in catering, shops or company receptions, industrial robots used for assembly, semi-autonomous vehicles and intelligent drones.

According to the authors’ review, trust in AI robots follows a similar pattern to human interactions: it starts low, but increases with time and experience. For example, users of semi-autonomous cars tend to trust these vehicles more than non-users.

However, this increased confidence is accompanied by significant ethical, legal and social concerns. For example, robots used in elderly care may dehumanise that care.

In addition, questions of legal liability arise: who’s liable if an artificially intelligent robot makes a mistake? Finally, the emergence of these robots may aggravate inequality by displacing many occupations (e.g. in the transport sector). These are all questions that institutions, companies and society will gradually have to confront and resolve.


Virtual AI:

This is the AI found in virtual assistants, which interacts with users through digital interfaces. Examples include voice assistants such as Alexa or Siri, chatbots such as ChatGPT or Bing AI, and AI systems used in healthcare. In this case, trust tends to be high at first, although it may decrease over time.

However, there are factors that can increase trust, such as tangibility (having an avatar or image), transparency (understanding how decisions are made) and reliability (functioning correctly). That said, the use of these assistants also raises ethical, legal and social dilemmas.

These include over-dependence (e.g. in education or medicine, which could dehumanise teacher-student or doctor-patient interactions), unclear legal liability if a chatbot provides incorrect information (e.g. to a medical team making decisions based on its output), and digital inequality, arising from the gap between those who have access to this technology and those who don’t.

Embedded AI:

Artificial intelligence that’s embedded in digital devices and often not recognised by users as AI. Examples include the recommendation algorithms on music and film platforms, personalised advertising on social networks, and traffic predictions in navigation apps. Unlike virtual or physical AI, we live with it without being fully aware of its presence.

In this case, trust levels tend to be high initially, as with virtual AI, and may decrease over time, although there’s little research explaining this phenomenon in detail.

This type also raises important concerns, such as disinformation or manipulation (selectively showing or withholding content from certain audiences), lack of privacy, and unequal access and use (even if invisible).

 

Transparency and control as a solution

As the KPMG study indicated, 85% of participants recognise AI’s potential to improve aspects of our lives, even though most of us remain wary of this technology.

To improve the trust relationship with AI, the different types of artificial intelligence (physical, virtual and embedded) would need to be transparent about how they make decisions, giving users more control and decision-making power over their use. This transparency could come hand in hand with a regulatory framework that makes the need for transparency explicit, enabling safer and more autonomous use.

As with all technological breakthroughs, trust has never been total at the outset. The same applies to AI, all the more so given the complexity of these systems and how little we know about how they work. Perhaps it’s important not to turn our backs on a technology that’s here to stay and that can surely improve millions of lives, while at the same time maintaining a critical eye and a cautious attitude. In short, to opt for critical trust.

 

Sources:

  • Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627-660.