Artificial intelligence (AI) is arguably the most remarkable boom of recent times. ChatGPT, OpenAI’s AI-powered chatbot, can now be used by anyone for almost anything, from writing an essay to giving relationship advice. For now the chatbot seems like harmless fun, with little effect on the everyday life of the average person, but ominous voices can already be heard: the rise of AI will have an impact on many areas, from education to the somewhat dystopian question of the extent to which AI will become conscious.
The Danube Institute’s Free Market Road Show brought together experts to discuss the possible implications of AI’s emergence and development. John Chisholm, CEO of John Chisholm Ventures, talked with Neven Vidaković, Ph.D., an economist and assistant professor at the Faculty of Economics and Business at the University of Maribor, about the threats to the future of humanity posed by the rise of AI.
Firstly, John Chisholm mentioned the open letter initiated at the end of March by the non-profit research group Center for AI and Digital Policy (CAIDP), also signed by many senior executives and experts, asking the Federal Trade Commission (FTC) to investigate OpenAI, the developer of ChatGPT. The complaint alleges that the GPT-4 language model “poses risks to privacy and public safety.” Moreover, the open letter requests a pause in the further development of AI systems for at least six months. Its signatories even include Elon Musk and Steve Wozniak.
Chisholm thinks that the very fact that many respected scientists signed the open letter is ominous: the question arises why they are so concerned. Presumably their worries stem from the scenario that AI will eventually become intelligent enough to act autonomously, even performing acts destructive to humanity. It is also a question of what is developing faster: artificial intelligence, or the techniques intended to control it. Chisholm added that many highly qualified IT specialists do not consider the existing technologies for controlling AI sufficient, and drew an analogy with humanity’s previous discoveries: why was there no need for a six-month pause in the development of fire, computers, the internet, or smartphones, and why does this now seem essential for AI? Artificial intelligence is completely different from any previous innovation, and that is not necessarily a positive thing.
Chisholm also brought up the issue of liability: who can be held liable if AI causes harm, for example if it hurts or even kills people? Is the AI itself liable, or those who developed it? Furthermore, what can be done about countries, like China, that are unwilling to halt AI development for the next six months? Chisholm believes humanity will need a framework of rules to guide us in any AI-related situation that may arise. He also noted that in the early stages of a technology’s development the risks are usually more visible than the potential benefits, so it is not surprising that the AI revolution is a frightening vision for many people at the moment. Still, its potential benefits should not be overlooked, as AI may greatly help entrepreneurs, for example.
Neven Vidaković, Ph.D., explained what he believes is the real problem surrounding artificial intelligence, one that cannot be ignored: AI’s capacity for value judgment. Previous innovations, such as fire or smartphones, did not have this ability, but AI can now decide whether something like Captain America is good or evil. This characteristic raises the question of whether it will be able to make value judgments about society and decide which groups of people it finds “good” or “bad,” which is the basis of totalitarian systems. An artificial intelligence with a value system and the ability to make value judgments paints a more ominous picture: in that case AI would no longer operate merely as a tool and could become very dangerous.
Graphics by Réka Pisla