The social status of an AI influences its acceptance


How do people evaluate the performance of an AI system? New research results from a group of sociologists at the University of Lucerne show that even non-human actors are subject to social prejudices.

Imagine two doctors with identical skills and impeccable performance: would your judgment of their actions change if one of them worked in a world-class hospital, or depending on whether they were male or female? And what if one of these doctors wasn't a person at all, but an artificial intelligence (AI)?

An article recently published in the journal "Sociological Science" sheds light on how the social status of an AI influences our perception of it. The authors argue that AI, like humans or institutions, can have a "social status": the higher an AI's perceived status, the less questionable people find its use.

Belonging counts - even if you are an AI

The authors of the article, Dr. Patrick Schenk, Vanessa A. Müller and Luca Keiser from the Department of Sociology, found that people consider the use of AI more morally acceptable when it is associated with high-ranking organizations. At the same time, human actors fare better than AI in this assessment, even when the AI performs certain tasks just as well or sometimes even better.

As part of the study, almost 600 participants evaluated the acceptance of different actors (humans, AI systems and simple computer programs) in three scenarios: cancer diagnosis in a hospital, fact-checking in a newspaper editorial office, and hiring decisions in a recruitment agency. The results show that AI systems are perceived similarly to simple computer programs, but their use is considered more morally questionable than that of a human actor. This preference for humans persists regardless of how effectively the respective actors perform the tasks examined.

Furthermore, factors such as gender or giving the AI a human name have no significant influence on the assessment, but the reputation of the organization does: if an AI is associated with a renowned institution, the findings show, people rate it more positively.

Creating awareness to overcome prejudices

The study illustrates how social biases, such as associating prestige with value, shape our view of new technologies. The results suggest that people may be more willing to trust and accept AI if it is backed by reputable institutions, regardless of its actual capabilities. According to the authors of the study, recognizing this "status bias" could help promote fairer and more objective assessments of AI systems in society.

The study, published under the title "Social Status and the Moral Acceptance of Artificial Intelligence", is part of the Swiss National Science Foundation project "Artificial Intelligence and Moral Decision-Making in Contemporary Societies: An Empirical Sociological Investigation", led by Gabriel Abend, Professor of Sociology, and Patrick Schenk.