
Corvinus research: AI is not interested in the poor of developing countries

2023-11-30 10:44:00

Many global problems are ignored by the so-called human-centred approach to AI, which is supposed to be ethical.

Corvinus University of Budapest

ChatGPT was made available just over a year ago, on 30 November 2022. The anniversary also raises the question of whether AI is ethical. According to Corvinus research, many fundamental global problems – poverty, hunger, water scarcity, wars and inequalities – are not sufficiently addressed even by the so-called human-centred approach to AI, which is supposed to be ethical. Instead, this approach mainly seeks answers to societal challenges within national borders, as a recent study by the Corvinus University of Budapest suggests, based on a content analysis of relevant texts from five Western universities.

A recurring criticism from social scientists is that ethics plays too small a role in the development of AI. In response, the concept of human-centred AI (HAI) was introduced and is now used by many projects, research centres, university courses and companies, though each interprets it slightly differently. The Department of Sociology at the Corvinus University of Budapest set out to map how the concept is interpreted by five prestigious institutions of the Global North, based on texts published on their websites. The study by Lilla Vicsek and Tamás Tóth was published in late May by Routledge in the volume Algorithmic Ethics, edited by Michael Filimowicz. It covered the University of Bologna, Northwestern University, the University of Maryland, Stanford University and Utrecht University.

 

Poor regions and the needs of the vulnerable are barely reflected 

The study concludes that each institution uses the concept of human-centred AI in its own way; the only common understanding is that the concept is associated with the needs of people.

“Academia has recognised that the best way to demonstrate its humanist aspirations is to provide content framed with supportive attitudes rather than focusing on technical solutions. This is an important result, because previous research on ethical guidelines in market sectors has shown the dominance of technical solutions,” said Lilla Vicsek, sociologist and researcher at Corvinus University and first author of the study.

According to the researchers, however, the academic community under scrutiny fails to address several fundamental issues. First, the analysed texts often treat humanity as a homogeneous group and assume that every society faces the same problems, even though different segments of society may have different interests and needs, and the problems themselves often differ across regions. Second, the writings on human-centred AI are essentially tailored to the needs of the affluent Global North. Although the analysed institutions often take into account the challenges of marginalised communities, their focus remains local and national: they tend to ignore the unacceptable living conditions in the Global South and the problems of poor regions – such as water scarcity, hunger, hazardous jobs, disease and war – that affect large parts of Africa and much of Asia and South America.

If the Global South were also addressed and AI were used, for example, to improve water supply – including a detailed assessment of poor infrastructure – it could make a real difference to the quality of life, and indeed the survival chances, of many vulnerable groups.

“The perspective of white men still dominates the texts analysed. Support for more diverse research teams is essential if we are serious about human-centred AI remedying some inequalities and racism,” says Lilla Vicsek of the research. She added: “However, during the analysed period the universities in question did not formulate any concrete actions to reduce the bias of the AI industry.”

 

There is a lack of concrete proposals to solve the identified problems 

Power issues and decision-making relating to AI have important ethical implications. The power that large tech companies derive from extensive data collection is not criticised at all on the websites of two of the institutions, the University of Bologna and Northwestern University, while two others, Stanford University and Utrecht University, focus on accountability and data protection.

Of the five websites surveyed, Stanford University published the most content, including content addressing inequalities, though only to a small extent. The two European universities mention social responsibility and sustainability in relation to AI more often than the American ones do, but in far too general and superficial terms, without concrete proposals. The researchers say these institutions should make much more effort to reduce inequalities.

“Currently, industry players are concerned with ethical and humanitarian impacts only to the extent that the public and investors minimally expect of them. It is important to research visions for human-centred AI, because these expectations can guide and coordinate the activities of the actors responsible for AI. To achieve this, it is essential that cooperation is transparent and that the details are clear and understandable; without this there is no chance for change. Indeed, the companies and institutions of the Global North have the resources to find equitable, democratic solutions to global challenges, and this needs to be recognised,” concludes Lilla Vicsek, a sociologist at Corvinus University.
