
Hungarian developers show little interest in what artificial intelligence is used for, or how

2023-09-25 15:34:00

Techno-optimism, the deflection of social and ethical responsibility, and a lack of solidarity characterise Hungarian AI professionals, according to Corvinus research.

How do Hungarian AI specialists see the future of work and their own role in it, and to what extent are ethical and social aspects present in their thinking? Ágnes Horváth and Lilla Vicsek explored these questions through 20 semi-structured interviews with Hungarian specialists, conducted in 2021, before the generative AI “craze” broke out in 2023. The Corvinus study on the results was published in September in the journal Science, Technology and Society.

The interviewees’ accounts were characterised by techno-optimism: they believe that in the future many people will be able to work in new kinds of creative, self-fulfilling jobs without major social tensions. The interviews revealed a widespread perception among them that technology is a value-neutral tool with mainly positive social impacts. Two examples of optimistic visions from the responses: “We will be able to do a lot more work that really interests us, that we can accomplish ourselves, that inspires us, with which we can do something good for the world that matters.” “We’re going to do less administrative work that’s repetitive, that doesn’t require a lot of empathy, that’s not inspiring, and I think we’re going to get tremendous help from technology to do that work.”

Self-interested thinking can lead to increased inequalities 

At the same time, the developers shirked responsibility and failed to engage with the social and ethical implications of their work. In their view, an individual developer has no real ability to shape the direction of innovation or to prevent the misuse of technology.

“Now I’m sorry, the Swiss knife maker doesn’t care what his knife is used for, even whether it’s used for killing, to some degree. Engineers who develop such facial recognition algorithms are not responsible for where they will be used and if they will discriminate. I think a lot of engineers see this [type of work] as a challenge – to see if they can build it, and if they can, [the job is] done,” said one of the respondents.

Although ethics is increasingly important in the media, it seems to have had little impact on the thinking of the developers and researchers in the sample. Social consequences and ethical issues were not at the forefront of their thinking, and ethical considerations hardly featured in their responses. Their activities were typically motivated primarily by self-interest: career development or the idea of creating something new.  

“Respondents did not express solidarity with the potential losers of technological change. This is important because they are designing the technology of the future. Their work could even be guided by an increased sense of responsibility, which could include finding solutions for disadvantaged groups and increasing equity,” said Lilla Vicsek, Associate Professor of Sociology at Corvinus, one of the authors of the paper.

“The fact that social and ethical considerations have become so marginalised in developers’ thinking carries risks, as AI can also lead to negative social consequences, including increased inequalities. It is important to take social and ethical considerations into account at the design stage: different designs lead to different technological solutions, which contribute to different outcomes,” said Vicsek. 

Causes of deflecting responsibility and evading blame 

Techno-optimism and a value-neutral approach to technology as a tool may have contributed to respondents’ self-legitimation of their own actions and may explain why they did not think about the social problems and ethical dilemmas related to AI. However, if they focus only on the positive, there is a greater risk that society will not be able to cope with the worst-case scenarios.    

The research also analysed how different factors, including the Hungarian context, might have influenced the interviewees’ responses. The Hungarian AI developers interviewed may have felt that, being far from where the main AI development is happening, what they do did not matter that much. This, too, could lead to a deflection of responsibility and blame. The lack of solidarity with the losers of technological change may be linked to wider solidarity problems within Hungary and a lack of tolerance towards disadvantaged groups. In addition, although ethical issues related to AI are discussed in the Hungarian media, they are not a hot topic there, unlike in some Western countries. However, since the professionals interviewed speak English, they can read and follow international AI news in English if they wish.

The paper was published as part of the Visions of Artificial Intelligence and Society OTKA research project led by Lilla Vicsek. Lilla Vicsek will give an informative lecture on the impact of AI on employment on Friday 29 September at the Researchers’ Night programme at Corvinus University.
