What skills and innovations can be created by the coexistence of the digital environment, artificial intelligence and humans?

2023-10-05 09:45:00

This year, Corvinus University hosted the International Conference on Cognitive Infocommunications, sponsored by the IEEE, the world's largest technical professional organisation, and chaired by Professor Péter Baranyi.

One of the main themes of the 14th IEEE International Conference on Cognitive Infocommunications, held on 22 and 23 September, was artificial intelligence: the new social, business and corporate capabilities and IT innovations emerging from the “coexistence” of the new digital environment and humans, and the scientific research results leading to them.

Digitalisation and digital transformation are key issues for companies today. The rise of artificial intelligence, digital tools, the transformation of competences and the use of infrastructure require new approaches to the study of organisations. Human capabilities are now being augmented by ICT solutions, and this is transforming the way we think and, therefore, the way we collaborate. The discipline of cognitive infocommunications takes this approach to explore the interaction between technology and people.

The scientific findings presented in the lecture series on both days of the conference have a direct impact on our everyday lives at the societal, corporate and individual levels. From an ethical, legal and IT perspective, the presentations highlighted how our new digital environment can sustainably enhance human and corporate performance and how its mathematical and IT tools can be further developed.    

The conference was organised in joint scientific collaboration between Corvinus University of Budapest and the Shenzhen Research Centre of the University of Hong Kong, and in scientific partnership with the Hungarian Academy of Engineering, the Hungarian Research Network, the Budapest University of Technology and Economics, the University of Pannonia, and Óbuda University.

How do we judge machines’ choices? 

The opening lectures of the conference were given by world-renowned professors. One of them was Professor Cesar A. Hidalgo, leader of the Center for Collective Learning at the Artificial and Natural Intelligence Institute (ANITI) of the University of Toulouse and at the Corvinus Institute for Advanced Studies of Corvinus University of Budapest. His presentation, entitled “Why do people judge humans differently from machines? The role of agency and experience”, examined “trust” between humans and intelligent robots.

“Our psychology highly determines the way we see AI,” he claimed right at the beginning of his presentation. Psychology suggests that people may hold different mind-perception models for humans and machines, and will therefore treat them differently. It turns out that, when a mistake is made, we either blame or excuse humans: we tend to judge other people by the perceived level of their intention to cause harm. By contrast, we are not willing to perceive as much agency in machines. “When people are judging machines, the outcome matters the most,” the professor explained. Imagine, for example, a catastrophe in which people's lives are at risk and a rescue mission is launched to save them. In the case of failure, people tend to judge machines more harshly than the humans responsible for the mission.

Thus the question arises: what happens if we perceive a machine as human-like, with a high level of agency? In randomized experiments, Professor Hidalgo and his colleagues manipulated people's perception of machines to explore whether more human-like machines are judged more similarly to the way humans are judged.

The researchers developed different descriptions for four types of machines, and it turned out that the perception of agency and experience does change our judgement. In other words, they wanted to find out what happens if we perceive machines as being able to experience feelings, think about problems and act independently. “Machines that are perceived as closer to humans tend to be judged more similarly to humans,” the professor said of their main finding. He concluded that these experiments raise many further questions, because our perception of machines can easily be manipulated.
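
To make the experimental logic concrete, here is a minimal, purely illustrative Python sketch of how such a randomized comparison could be analysed. The data, condition labels and effect sizes are invented for illustration; this is not the study's code, and the numbers are not its results.

```python
# Illustrative only: simulate a randomized experiment in which each
# participant reads either a low-agency or a high-agency description
# of a machine, then rates how harshly they judge its mistake (1-7).
import random
import statistics

random.seed(42)

def judgment_rating(high_agency: bool) -> int:
    """Hypothetical rating; we assume, purely for illustration, that
    machines framed with more agency are judged more like humans."""
    base = 3.8 if high_agency else 4.5
    return min(7, max(1, round(random.gauss(base, 1.2))))

low = [judgment_rating(False) for _ in range(200)]
high = [judgment_rating(True) for _ in range(200)]

# Compare mean harshness between the two randomized conditions.
print(f"low-agency mean:  {statistics.mean(low):.2f}")
print(f"high-agency mean: {statistics.mean(high):.2f}")
print(f"difference:       {statistics.mean(low) - statistics.mean(high):.2f}")
```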

Mathematical tools behind artificial intelligence 

Professor Péter Baranyi gave a lecture entitled “Crisp vs. Soft Computing behind AI Transformation”, in which he explored the “crisp” and “soft” mathematical tools behind artificial intelligence and their conceptual framework.

The presentation highlighted that the sequence of breakthrough mathematical methods behind AI goes back to the famous problems Hilbert posed in Paris in 1900. The lecture used the term “Crisp Computing” for mathematical tools based on precise algebraic formulas, whose results are exact and sharply distinguishable as “yes” or “no”. In contrast, “Soft Computing” denotes a mathematical trend in which the result is an association between patterns, an inference, rather than a precise, “sharp” numerical value. This includes artificial neural networks, inspired by the neural network of the human brain, which consist of a large number of elementary components, i.e. neurons and the connections between them. It also includes fuzzy logic, whose elementary components are linguistic rules and “if-then” causal reasoning over fuzzy (non-sharp boundary) sets. Genetic algorithms likewise build on a large number of elementary components: the propagation and selection, based on goodness of fit, of individuals whose parameter lists mimic gene strands. Building on Hilbert's conjecture, Kolmogorov proved in the 1950s that every such model can be decomposed into elementary components. This confirms that essentially all models can be described by soft computing, and if a model can be so described, it can also be learned, for example by artificial neural networks.
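
The decomposition result referred to here is Kolmogorov's superposition theorem. A standard formulation, quoted for reference rather than from the lecture slides, is:

```latex
% Kolmogorov superposition theorem (1957): every continuous function of
% n variables on the unit cube can be written using only continuous
% one-variable functions and addition.
f(x_1,\dots,x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left(\sum_{p=1}^{n} \varphi_{q,p}(x_p)\right)
```

Here the Φ_q and φ_{q,p} are continuous univariate functions, the “elementary components” of the decomposition; results of this type underlie the universal-approximation arguments made for artificial neural networks.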

In his presentation, Péter Baranyi explained that, for example, when managing a nuclear power plant, flying an airplane or performing a medical operation, it is not sufficient for artificial intelligence to make a soft decision, i.e. one that is not a sharp yes/no, and to control the process only approximately. It is important to know that a nuclear power plant will definitely not explode; it is not enough that the AI largely believes it will not. In such “sharp” (Crisp) cases, it is necessary to combine state-of-the-art Crisp mathematical tools, capable of precise, stable control, with Soft tools that capture the essence of the control task, and the AI itself must also be kept provably stable. The difficulty, however, is that Crisp and Soft Computing tools differ significantly in philosophy, concept and formalism. The presentation therefore proposed a mathematical framework in which the tools of both directions can be formalized in the same way, and in which operations and relationships between them can be interpreted, so that we can begin to build artificial intelligence capable of both meaningful decisions and precise control.
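
As a rough illustration of this division of labour (my own sketch, not the mathematical framework proposed in the lecture), a “soft” component can suggest a control action while a “crisp” safety layer enforces a hard, provable bound before the action is applied:

```python
# Minimal sketch of the crisp/soft division of labour: a soft
# component suggests a control action, and a crisp safety layer
# guarantees hard bounds before the action is applied.

def soft_controller(temperature: float, setpoint: float) -> float:
    """Soft component: a rough heuristic in the spirit of fuzzy
    'if-then' reasoning: the hotter above the setpoint, the harder
    we cool. Its output is an inference, not a verified quantity."""
    error = temperature - setpoint
    return -0.8 * error  # hypothetical gain, no stability proof here

def crisp_safety_layer(action: float, max_rate: float = 5.0) -> float:
    """Crisp component: a sharp, provable guarantee. The applied
    action can never leave the interval [-max_rate, +max_rate]."""
    return max(-max_rate, min(max_rate, action))

# The soft suggestion always passes through the crisp guarantee.
suggested = soft_controller(temperature=412.0, setpoint=350.0)
applied = crisp_safety_layer(suggested)
print(f"soft suggestion: {suggested:.1f}, crisply bounded: {applied:.1f}")
```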

A new framework for understanding corporate reality 

Professor Andrea Kő, head of the Institute of Data Analytics and Information Systems at Corvinus University of Budapest, presented a new framework for understanding cognitive and digital corporate reality in the plenary lecture on the second day of the conference. She presented the scientific concept of Digital & Cognitive Corporate Reality (DCR), developed and introduced by Corvinus University scientists, which offers a groundbreaking new approach to the digital evolution of the enterprise. (Several presentations at the conference had already addressed this topic.)

Professor Kő pointed out that the role of management and business is changing in this rapidly evolving context. “We have to deal with a mix of new artificial and human cognitive capabilities present in companies, which fundamentally changes the concept and system of, for example, memory and learning,” she explained in her lecture. She therefore proposed a new framework with her colleagues Péter Baranyi, Ildikó Borbásné Szabó, Ádám Csapó, Tibor Kovács and László Lőrincz. “This is the first time that the digital and cognitive concepts have been handled within the framework of corporate management. These disciplinary components were already known to researchers, but combined in this new system, new issues and challenges of a higher, more comprehensive nature can be effectively addressed.”

The new scientific concept opens the way to theoretical frameworks and practical, application-oriented solutions. For instance, use cases around emerging corporate roles, such as prompt engineering and AI quality control and development, could be explored with this approach. “The DCR provides new and innovative ways of collaboration,” Professor Kő stated, highlighting the loosening of temporal, spatial and other contextual constraints in companies. However, it is important to note that “human capabilities also have an intricate impact on the modern ICT networks”. The reflective relationship between humans and technology should also be exploited from a management point of view.

Limits to the ethics of artificial intelligence 

The second plenary lecture of the closing day was given by Professor Viktor Dörfler on “The great fallacy of AI ethics”. In his talk, he gave an in-depth analysis of the possibilities, limitations and problems of artificial intelligence (AI) ethics.

The first part of the presentation gave a brief overview of ethical models in Western philosophy. For roughly two and a half thousand years, Western thought was characterised by normative ethics, i.e. moral philosophers tried to prescribe how to be moral. From Aristotle's virtue ethics through Kant's rule-based ethics (deontology) to the many variants of consequentialist ethics, no single model has been able to get philosophers to agree. The big change, according to the professor, was brought about by Elizabeth Anscombe, who suggested talking to psychologists, who observe how people's minds actually work. Hence the new descriptive ethics is also called moral psychology. There is a fairly broad, if not complete, consensus that people make moral decisions intuitively, following their moral compasses, and only compare their decisions with normative models in hindsight.

On this philosophical basis, the professor argued that at least three things essential to moral decisions cannot be produced by AI. Our knowledge is sensory-based, and it is crucial that we “feel” the situation of our moral decisions; AI lacks this felt sense. We also need to understand these decision situations, and AI cannot do this, even if it sometimes seems to the user that it can. The third, doubt, encompasses both: anyone who tries to make a moral decision doubts, since we cannot know the outcome of our decisions, and normative models do not give a clear answer as to what is right.

Therefore, the professor argues, we need to approach the subject of AI ethics in a completely different way: let us look at how AI can help one's moral choices, and vice versa. AI can process potentially relevant data at extraordinary speed and may extract useful patterns for the decision-maker in time; the decision-maker can then use as much of this as he or she can and wants to. AI can also help in the doubting process by pre-processing different ethical models. And we can help the AI by understanding that it has no sense of when to call in a human being to make an ethical decision, and therefore by giving it “outside” indicators by which to do so. In his concluding remarks, Professor Dörfler returned to his opening thought and stated that we do not actually have an AI ethics problem. We do have ethical problems, and AI amplifies them.
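
A minimal sketch of what such “outside” indicators could look like in practice (the indicator names and threshold below are my own assumptions, not from the lecture): explicit external rules, rather than the AI's own judgement, decide when a case must be handed to a human.

```python
# Illustrative only: external indicators, not the model's own sense of
# morality, decide when a decision is escalated to a human.
from dataclasses import dataclass

@dataclass
class Case:
    confidence: float      # model's self-reported confidence, 0..1
    affects_person: bool   # decision directly affects a human being
    novel_situation: bool  # unlike anything seen in the training data

def route(case: Case) -> str:
    """Return 'human' when any external indicator fires; the AI is
    never asked to recognise for itself that a case is morally loaded."""
    if case.affects_person:
        return "human"
    if case.novel_situation or case.confidence < 0.9:
        return "human"
    return "automated"

print(route(Case(confidence=0.97, affects_person=False, novel_situation=False)))  # automated
print(route(Case(confidence=0.97, affects_person=True, novel_situation=False)))   # human
```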
