AI does not threaten the world, but humans do
The potential threat of AI does not come from machines becoming autonomous and learning to think: machines are extensions of human cognition. Claiming that machines are conscious and autonomous is not only misleading but also obscures human action, says our postdoctoral researcher Dominik Schlienger.

The rapid development of AI has raised concerns in public discussion that it could become a threat to all of humanity. Some researchers have also endorsed this threat scenario. Dominik Schlienger from the University of the Arts Helsinki argues in his article “The fallacy of autonomous AI” that computers cannot and will not become autonomous thinkers.
Schlienger justifies his claim by stating that computers operate using algorithms. “Through language, machines are extensions of the cognitive practices that constitute the language they run on. The computer is to the brain what the hammer is to the hand,” he explains.
If AI is not an autonomous thinker but, like any other technical tool, an extension of our thinking, then its dangers lie in how we use it.
A machine is a human-created system
Software demonstrates the socio-technical nature of machines better than anything else: every line of code has been written by a human, and often not by one person but by a community of programmers, developers, inventors, hardware designers, and users. Even in a highly automated machine, the ability to “think” is human-created; it is a social practice.
According to Schlienger, through computer code and algorithms, AI is a linguistic device with grammar, syntax, and semantics. Like natural languages, computer languages are systems of human meaning-making.
“The claim that a computer could think independently is based on the assumption that in computer languages, meaning could be hard-coded into the signifier. However, this is not the case. The entire machine, including its operating system, hardware, and software, forms the code, and is therefore part of the signifier.”
Only by confusing the signifier with the signified can one claim that the meaning-making of computers is separate from human meaning-making, Schlienger states. Furthermore, machines, like language and code, require execution to create meaning; this underscores that we create AI and that AI cannot operate independently. Creating meaning always requires a preceding and a subsequent event, activated by the code.
The linguistic approach is particularly evident in chatbots based on large language models.
The danger stems from human thought
Schlienger does not deny the potential of these technologies, nor their possible dangers, but he opposes attributing the danger to the technology itself.
“The harms and dangers of AI are the direct result of (sloppy) human thinking, not something we can blame AI for in lieu. To attribute agency to an external deus ex machina may be convenient for the hegemony that profits from doing so, but it is a cop-out from collective responsibility.”
According to Schlienger, we need to rethink how we develop AI.
“The fact that AI’s agency is not autonomous, but an extension of human cognition, provides a Trojan horse for whoever is controlling the machine.”
Read Dominik Schlienger’s article “The fallacy of autonomous AI”.