
Research: what do people think about artificial intelligence?

2023-07-06 10:39:00

A team of researchers examining the social impact of artificial intelligence presented their findings at the Research Week.

“Will artificial intelligence steal our jobs?” This is a question that more and more people around the world are asking with concern. Although the development of AI-based systems seems to be primarily a technological issue, it is in fact just as important to consider its social implications. The direction of development depends on humans as much as on technology, because the most important decisions are made by humans, so the question should rather be: how can this development be made socially sustainable in the long term?

Two OTKA research projects addressing this issue were presented at the Corvinus Research Week. The Visions on Artificial Intelligence and Society OTKA research project (2019-2024) and the Sociology of AI expectations OTKA research project (2022-2026) are led by Lilla Vicsek, Associate Professor at the Department of Sociology, Corvinus University. 

The researchers involved in the projects include: Alexandra Köves (Corvinus), Róbert Pintér (Corvinus), Dennis Nguyen (Utrecht University), Tamás Bokor (Corvinus), Katalin Fehér (NKE), Tamás Tóth (NKE), Ágnes Horváth (Corvinus), Boglárka Herke (Centre for Social Sciences), Máté Fischer (Corvinus) and Zsófia Bauer. 

Not everyone expected rapid change, but optimism is shared by several stakeholder groups 

The researchers set out to investigate what people think about the social impacts of AI from several perspectives. What makes this a particularly difficult issue is that the field is constantly changing, with new developments emerging on a daily basis. In one study, conducted before the pandemic, students in Hungary were asked how dangerous they thought artificial intelligence was to their own jobs, and their opinions were very different from what they would most likely be today.

The students interviewed thought at that time that the spread of AI would mainly affect blue-collar workers, i.e. that they themselves would be protected by their university degrees, and that this problem would appear later in Hungary than in the West. This idealisation may also have been due to the fact that the students were not necessarily aware of the pace of development at the time or of its technological background.

According to another study, techno-optimism also characterises the views of Hungarian programmers involved in AI development. They tend to view breakthroughs as their own career successes and show less solidarity with other members of society who may be negatively affected by these achievements.

Researchers say that this lower level of social sensitivity is due, among other things, to the fact that the results of work done at various levels are not immediately visible, that programmers are located further away from those who suffer the negative effects of development, and that they tend to work in teams, so they think of themselves as a small element in the system. Although media coverage of ethical issues in AI is increasing, for the programmers interviewed, social and ethical issues have been pushed into the background.

In the latest project, copywriters and journalists were also asked what they thought about the role of AI in their work. Only very preliminary results are available here.  

Cognitive dissonance emerged among the interviewees, which was mostly resolved by saying that AI might affect others negatively but not them, that it might take away jobs from beginners but senior copywriters would still be needed, and that AI was not yet reliable enough to take on jobs on its own anyway. As for their future plans, learning and adaptation were the key points: they thought they would be able to adapt to change easily. At the same time, they also admitted that they could not foresee what would happen over the next five years.

The question is not what it will be like, but what we want it to be like 

Views that differ over time and across social groups have led to divided expectations, but these are mainly descriptive rather than prescriptive in nature. It is therefore important to think about normative future scenarios, too. The key is to set goals in advance and to determine how we wish to see our own future and where we think AI could play a role in it.

One of the studies used the method of backcasting to find out what professionals familiar with AI, coming from academia, business and civil society, would consider to be a preferred future, what role AI could play in promoting and sustaining it, and how this could be achieved. According to the research, the main goal with AI is to promote well-being, with environmental sustainability, social justice, social well-being and social responsibility as the main factors. AI developments have to reflect a certain level of decentralisation and increase human autonomy through AI-human symbiosis, and for this to happen, building trust is essential.

Policy responses must also adapt to the expected societal changes. If jobs are replaced by AI, the workforce released also has to be provided with some form of value-creating employment. The possibility of a basic income, for example, could be considered as a solution to this technological unemployment. 

The investigation of the social aspects of artificial intelligence is therefore a priority area in two respects. On the one hand, it helps us understand expectations and prepare for change; on the other hand, we can influence change itself. It is not just a question of what life will be like in a world where artificial intelligence is everywhere (if we are not there already), but of what we humans want the world to be like.

 

The report was written by Máté Kovács.

 

The following publications about the research have been published or are in the process of publication:

Vicsek, L. (2021): Artificial intelligence and the future of work – lessons from the sociology of expectations. International Journal of Sociology and Social Policy, 41(7-8), pp. 842-861.

Vicsek, L.; Bokor, T.; Pataki, G. (2022): Younger generations' expectations regarding artificial intelligence in the job market: Mapping accounts about the future relationship of automation and work. Journal of Sociology, 4407833221089365, Early Access.

Herke, B.; Vicsek, L. (2022): The attitudes of young citizens in higher education towards universal basic income in the context of automation – A qualitative study. International Journal of Social Welfare, 3, pp. 310-322.

Vicsek, L.; Tóth, T. (2023): Visions of Human-Centered Artificial Intelligence – Relations with Ethics and Power. In: Michael Filimowicz (ed.): Algorithmic Ethics. Routledge, Chapter 1.

Horváth, Á.; Vicsek, L. (2023): Visions of Hungarian Artificial Intelligence Specialists about the Future of Work and their Roles. Science, Technology and Society, forthcoming.
