Artificial intelligence (AI) is becoming more powerful and is being used for increasingly complex tasks. This raises ethical questions, for instance when AI is used to make decisions about people or to judge them. A team of Empa researchers was involved in a new TA-SWISS study that examined the opportunities and risks of AI for society. The study results were made public on 15 April 2020.
Artificial intelligence is a powerful tool for solving complex problems and processing vast amounts of data. It makes it possible to translate languages far better than ever before and to defeat human opponents in strategy games of various kinds. AI is constantly being improved and applied to more and more activities that previously only humans could perform, such as identifying tax fraud or diagnosing diseases.
But rapidly growing technical capabilities demand a watchful eye for the risks that accompany them. Could AI cause mass job losses? How will our consumer behavior change if more and more people follow the shopping recommendations of a smart search engine? What happens to our media if AI contributes to the production of fake news, or if it does not dissolve ideological filter bubbles but rather expands and reinforces them? What could happen if the government uses AI, for example, to carry out predictive policing, to issue regulations or to reduce the workload of the courts? How should research and education respond to the opportunities and risks of AI, and which competencies are particularly relevant for today's researchers and future decision makers in order to make the best possible use of AI for society?
These and similar questions were addressed in the TA-SWISS study by an interdisciplinary project team led by Markus Christen (Digital Society Initiative, University of Zurich), the Empa researchers Clemens Mader, Claudia Som and Lorenz Hilty, and Johann Cas (Institute for Technology Assessment, Austrian Academy of Sciences). The researchers arrived at their findings through methods such as targeted literature reviews, workshops and interviews with more than 300 international experts.
This work resulted in nine recommendations for the sectors examined: work, education and research, consumption, media, and administration. In the education sector, for example, it is important not only to train experts to develop and implement AI systems, but also to build competencies for judging the legal, ethical and social effects of AI. In sectors where the level of risk is unknown, the experts call for more research to identify such risks; for this purpose, funding from universities or third-party sources would be desirable.
The experts also address the lack of transparency of AI systems and their potential for discrimination. Possible control mechanisms for these systems are discussed, as are legal aspects arising from the use of AI, such as liability and data protection.