Focus AI: Artificial intelligence and the new doomsday machine

First the hype, then the panic: within six months, ChatGPT managed to gain over 100 million users and to provoke countless prophets of doom. Even Elon Musk, otherwise not exactly squeamish, is calling for a pause in further development.

Helmut Spudich

“What nukes are to the physical world, AI is to everything else.” The scaremongering surrounding the AI (artificial intelligence) bot ChatGPT cannot be summed up more drastically than in this opening line of a conference held by the Center for Humane Technology in New York in March 2023. According to the organizers, half of all AI researchers believe there is at least a 10 percent chance that AI will lead to the downfall of humankind. That, of course, would at least spare humanity from falling victim to the climate catastrophe, of which not just ten but nearly 99 percent of all climate scientists are convinced.

Apocalyptic fears have also gripped the thousands of techies, researchers, and celebrities who signed an open letter a few weeks ago calling for a “moratorium” on further AI development. Signatory Elon Musk made himself available as a mouthpiece for the concerned. Perhaps only because he was one of the key investors in OpenAI, the developer of ChatGPT, back in 2015 but has since left the field to Microsoft, Google, and others, and now fears that the AI train will leave the station without him. A “moratorium” would suit him nicely as a chance to catch up with the AI express.

But let’s leave the sarcasm behind: many serious people are concerned with AI’s possible negative consequences, and that’s a good thing. Technological progress since the invention of the steam engine has, in recent decades, been accompanied by social progress in assessing the impact of new technologies. It took humanity a hundred years and more to grasp the dramatic effects of the industrialization that began in the 19th century, as the very late fight against climate change shows. With newer technologies, the feedback loop is shorter: undesirable consequences are identified more quickly, which increases the chance of limiting or avoiding them altogether.

That is why a constructive debate about the consequences of AI systems is urgently needed. ChatGPT provoked this reaction very quickly, although it remains unclear how we will deal with AI’s negative sides. Some other applications are already constrained by existing control systems. Take drug development with AI: active ingredients discovered this way are, like substances found through traditional research, subject to extensive clinical trials and approval by medical authorities. Or translation and proofreading with AI software: to succeed commercially on a broad basis, these tools have to hold their own in a highly professional environment of trained translators and proofreaders.

Control is more difficult in areas such as facial recognition: apart from the fact that it still shows a high degree of “prejudice” (i.e., errors) in identifying people, there is a significant risk of abuse by states, companies, and private individuals. On the other hand, facial recognition can be very useful in medicine: the Johns Hopkins School of Medicine is developing methods to identify stroke patients, and at MIT facial recognition is used to assess the progression of ALS, a degenerative muscle disease.

With generative AI, systems such as ChatGPT that simulate human intelligence, or Midjourney, which creates realistic photos of an arrest of Donald Trump that never happened, there are no guardrails to prevent dramatic undesirable developments. The EU has the world’s first draft of an “Artificial Intelligence Act.” However, it will not become law before 2025, and it will take another few years after that before it applies in all EU countries. Until then, AI systems will make tremendous advances that we cannot even begin to imagine.

The responsibility for developing AI that serves people lies primarily with its developers, and with the media and watchdogs keeping an eye on them. It is in the commercial interest of large providers to ensure that no harmful products reach the market; legal product liability can lead to huge penalties even without special AI regulation. Our social ecosystem, on the other hand, is at risk: the potential for AI-driven disinformation is almost unlimited. This would be a worthy field of activity for the open-source community: developing AI that can uncover the misuse of AI.

The “AI dilemma,” as the problem has already been dubbed with a Netflix-worthy title, will remain a permanent feature of AI development. But as long as human intelligence is still the world champion at wreaking havoc on people and the planet, artificial intelligence should not cause excessive panic.

Published on: April 26, 2023
