
Robot apocalypse or imagined threat? Corvinus held a conference on the future of AI law

2023-12-01 11:48:00

Can complainants’ anxiety automatically move them to the front of the queue? A ruling has been handed down in the first, and so far only, Hungarian AI case. The lessons of the case were also discussed at the Corvinus debate.

Corvinus University of Budapest

When and how should legislation on artificial intelligence (AI) work? Do we need to address the issue now, or should we leave it for later, allowing time for the professionals involved to gain a better understanding of the nature of AI? This was one of the main questions of the conference on The Future of Artificial Intelligence Law, organised by the Department of Business Law of the Institute of Accounting and Law of Corvinus University on 24 November. 

“AI is a much discussed topic today, and we live in a world driven by data, which is why we have organised this conference with the best experts in the field,” said Péter Metzinger, Associate Professor at Corvinus and organiser of the conference, as he greeted the participants at the end of November.  

 

Details of the one and only AI trial in Hungary so far 

Among the speakers from Corvinus, Gábor Dudás, lawyer and assistant professor at the University’s Department of Business Law, said that European data protection has gone too far: during the COVID-19 outbreak, because of the GDPR, employers were not entitled to know who was or was not infected at their company.

He told the audience that there had been only one AI-related lawsuit in Hungary so far. A private service provider wanted to manage customer complaints with the help of AI, using software that analysed the complainants’ tone of voice and level of anxiety; the complainants who were the most nervous and complained the most about what had happened to them were moved to the front of the queue. This is of course wrong, but, as it turned out, the company feared losing a large number of customers, so it resorted to this method.

Dudás also raised another interesting point: under current law, a bank naturally cannot ask what religion a customer applying for a loan is. If AI were used in credit assessment, however, it could “find out” that the applicant was, for example, a Muslim, and if the bank then rejected the loan application, the customer could claim it was because of his religion, which could lead to social tensions. According to Dudás, nobody really knows what is happening now, as the transparency of AI is not ensured. The risk is high because, while the use of AI may not cause significant harm to any one individual, if a small grievance affects millions of people, those individual grievances add up. It is therefore essential, Dudás argued, to carry out an impact assessment before using AI in practice. 

 

Lawyer Zoltán Nemessányi, associate professor at Corvinus, used a thought experiment to illustrate how little we currently know about the autonomy of artificial intelligence, which in his opinion is the most important dilemma. Because we cannot imagine what damage AI could cause, it is difficult to regulate it legally. As he said, the law can only react after real situations have developed. 

The EU is clearly very concerned about this issue, as well as about who should regulate AI: the EU or the Member States? As was noted, AI knows no borders, and the EU has “only” 27 Member States, while in the future AI will be deployed worldwide simultaneously. There is also the question of who should be held liable for any damage caused by AI, the owner or the operator, and whether that liability should be fault-based or so-called objective (strict) liability. 

 

Is the legislation overly cautious? 

Zsolt Ződi, Senior Research Fellow at the National University of Public Service, said in his presentation that he feels Europe, the first in the world to legislate on this issue, is driven more by fear. According to him, if the precautionary principle had always been applied, there would have been no railways, no cars, no aeroplanes, no X-rays. The speaker referred to Cass R. Sunstein, a law professor at Harvard University who has written a book on precautionary principles in AI, and who headed the White House Office of Information and Regulatory Affairs from 2009 to 2012, where methods were developed for making good decisions on issues of uncertain risk. Ződi believes that paternalistic policies are characterised by an equally strong response to all risks. “Just because a storm can bring down a giant tree, it doesn’t mean you have to cut down all the trees,” he said.  

Professor Márta Törőcsikné Görög (University of Szeged, Faculty of Law and Political Sciences) discussed whether law has anything to fear in the matter of AI. In her opinion, we live in a panic-driven society where risk is at the centre, and we see AI as dangerous. “We don’t want to control the outcome of the development, but rather we want to control the development itself,” she said. She brought up the example of autonomous cars, where manufacturers are driven by consumer acceptance, including the safety of self-driving vehicles, and where traditional civil law is sufficient. She asked: what do we really want to regulate, the risks of AI or the market? 

Sándor Udvary, professor at the Károli Gáspár Reformed University, said that AI, as an innovation, is about enhancing human well-being; lawyers should not interfere with this innovation, and dystopia should not guide our thinking. He believes the problems and damage resulting from AI can be addressed by fine-tuning existing legislation. 

Another question was why Europe thinks this issue should be regulated. According to Zsolt Ződi, Europe still remembers the tragedy when many people were killed on the basis of personal data. He added that under the European conception of law, it is the duty of the state to protect the individual, “because we saw in the Second World War that the individual cannot protect himself. We’ll see what happens with AI. The robot apocalypse is an imagined threat, but the risks are sector-specific,” he said. The view was also expressed that the slowness of EU legislation is not a problem, because it gives enough time to get to know AI better. 

 

Katalin Török 
