
The AI dilemma: will the rise of AI bring us prosperity or end in tragedy?


You’ve probably heard all about the arrival of GPT-4, OpenAI’s latest large language model (LLM) release. GPT-4 surpasses its predecessor in terms of reliability, creativity, and ability to process intricate instructions. It can handle more nuanced prompts compared to previous releases, and is multimodal, meaning it was trained on both images and text. Although we don’t yet fully understand its capabilities, the technology has already been deployed to the public.

AI has the potential to help us achieve big things. This doesn’t only apply to the business realm, but to major societal and global challenges such as achieving medical breakthroughs or battling climate change. At the same time, half of AI researchers believe there’s a 10% or greater chance that humans will go extinct from their inability to control AI. Welcome to the AI dilemma.

3 rules of technology

To highlight the nature of the AI dilemma, it’s wise to first take a closer look at the so-called ‘3 rules of technology’:

  1. When you invent a new technology, you uncover a new class of responsibilities. These responsibilities are not always glaringly obvious. To give you an example: we didn’t need to write the right to be forgotten into law until computers could remember us forever.
  2. If the new technology confers power, it will start a race.
  3. If you do not coordinate, the race will end in tragedy. There is not one single player that can stop the race and prevent it from ending miserably.

Humanity’s ‘first contact’ moment with AI: the social dilemma and why we lost

We all remember the sci-fi movies in which humanity meets alien civilizations or new self-aware technology for the first time. Sometimes these moments of ‘first contact’ are benign (‘Close Encounters of the Third Kind’, ‘E.T.’), but more often than not they result in tragedy and threaten to extinguish human civilization (‘War of the Worlds’, ‘Independence Day’, ‘The Terminator’).

In the real world, social media platforms were our ‘first contact moment’ with (very basic) AI technology. When you open Facebook or TikTok and scroll your finger, you activate a supercomputer that points AI at your brain to calculate and predict with ever-increasing accuracy the perfect thing that will keep you scrolling.

This fairly simple technology was enough to plague humanity with an arsenal of unintended, negative side effects as impressive as it is disturbing: information overload, addiction, doomscrolling, an often toxic influencer culture, the sexualization of kids, QAnon, shortened attention spans, polarization, fake news, deepfakes, and the breakdown of democracy.

AI built to maximize engagement did deliver real benefits (giving more people a voice, connecting friends, joining like-minded communities, letting small businesses reach customers more easily), but it also culminated in a commercial arms race to the bottom of the brain stem. We still haven’t been able to fix the misalignment caused by broken business models that reward maximum engagement.

Humanity’s ‘second contact’ with AI: GPT-3, GPT-4 and creative AI

First-generation AI tools were merely reactive. The newest AI products, such as GPT-3 and GPT-4, have creative abilities and integrate previously distinct AI disciplines (robotics, code, speech recognition, computer vision, image recognition) into one singular, all-encompassing language model. The promise is that they will make us more efficient, help us write and code faster, and have the potential to solve scientific challenges and climate change. And of course creative AI will also allow us to make a lot of money…

Creative AI: the dark side

Just like the constant improvements and breakthroughs in IT have created an interesting and enriched playing field for scammers and cybercriminals, creative AI has a dark side too. A couple of examples:

  • Replicating voice patterns and using them to impersonate real people. A 3-second voice snippet can suffice to pull this off.
  • Requesting GPT-4 to find and exploit vulnerabilities in IT systems.
  • Producing deepfakes, fake narratives, and even fake religions through fake news and the wholesale manipulation of generated content and identities. This can lead to the breakdown of democracies, societal values, and the rule of law.
  • New, unprogrammed and unforeseen AI capabilities can suddenly arise. Examples are theory of mind and spontaneous new language capabilities. Just like a child, current-generation AI has the ability to build on existing knowledge and teach itself new things. We don’t have the technology to understand or predict the direction and scale of the self-learning capabilities of these generative ‘Gollem’ AIs. Eventually, AI builds better and stronger AI, and it does so at an exponential pace (and exponential growth is a major human blind spot).


Because everything we do and create (friendships, nation states, societal institutions, laws) runs on top of language, creative AI’s ability to create persuasive narratives without human assistance could become a zero-day vulnerability for the operating system of humanity. According to the renowned writer Yuval Harari, AI is to the virtual world what nukes are to the physical world.

The deployment of creative AI technology

You would expect companies to be extremely careful about deploying technology with such unprecedented capabilities into the world. Unfortunately, the opposite is true. Because companies are in a race to that intimate spot in your life, they want to deploy novel AI technology as soon as possible, to as many people as possible. AI chatbots have been added to platforms children use, such as Snapchat (if we don’t do it, the competition will beat us to the punch). Safety researchers are in short supply, and most of the research that’s happening is driven by for-profit interests instead of academia.

Additionally, the media hasn’t been covering AI advances in a way that allows you to truly see what’s at stake. Corporations are caught in an arms race to deploy their new technologies and get market dominance as fast as possible. In turn, the narratives they present are shaped to be more about innovation and less about potential threats.


The emergence of creative AI with a theory of mind is a new rite of passage for human civilization. Because the technology is still not ingrained into every fiber of the global village (although it is evolving mightily fast), we can still choose the future we want and decide to closely regulate the advancement and deployment of sophisticated AI. After all, we have (so far) been able to ward off the existential threat of full-scale nuclear war. However, taming the beast of uncontrolled AI expansion requires a healthy democratic dialogue that effectively slows down public deployment of the newest generation of AI-driven large language models.

Video recommendation

This blog is inspired by ‘The A.I. Dilemma’, which can be found on YouTube.