Virtual AI:
This is the AI present in virtual assistants, which interacts with users through digital interfaces. Examples include voice assistants such as Alexa or Siri, chatbots such as ChatGPT or Bing AI, and AI systems used in healthcare. In this case, trust tends to be high at first, although it may decrease over time.
However, certain factors can increase trust, such as tangibility (including an avatar or image), transparency (understanding how decisions are made) and reliability (functioning correctly). That said, the use of these assistants also raises ethical, legal and social dilemmas.
These include over-dependence (e.g. in academia or in medicine, where it could dehumanise teacher-student or doctor-patient interactions), a lack of legal liability when a chatbot provides incorrect information (e.g. to a medical team making decisions based on its output), and digital inequality, arising from the gap between those who have access to this technology and those who don't.
Embedded AI:
Artificial intelligence that's embedded in digital devices and is often not recognised by users as AI. Examples include the recommendation algorithms on music and film platforms, personalised advertising on social networks, and traffic predictions in navigation apps. We live with it without being fully aware of its presence, in contrast to virtual or physical artificial intelligence.
In this case, as with virtual AI, trust levels tend to be high initially and appear to decrease over time, although there's little research available to understand this phenomenon in detail.
This type also raises important concerns, such as disinformation or manipulation (content may be shown to, or withheld from, certain audiences), lack of privacy, and unequal access and use (even if invisible).
Transparency and control as a solution
As the KPMG study indicated, 85% of participants are aware of AI's potential to improve aspects of our lives, even though most of us remain wary of the technology.
To improve the trust relationship with AI, it would be necessary to make transparent how the different types of artificial intelligence (physical, virtual and embedded) make decisions, giving users more control and decision-making power over their use. This transparency could go hand in hand with a regulatory framework that makes the need for transparency explicit, enabling safer and more autonomous use.
As with every technological breakthrough, trust has never been total at the outset. The same applies to AI, and even more so given the complexity of these systems and how little is known about how they work. We should perhaps not turn our backs on a technology that's here to stay and that can surely improve millions of lives, but at the same time we need to keep a critical eye and a cautious attitude. In short, we should opt for critical trust.
Sources:
- Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627-660.