
AI will save the world: the truth or wishful thinking?

The era of artificial intelligence (AI) has definitely arrived. In the seven decades of its existence (the first modest steps toward the creation of AI were taken in the 1950s), the abilities of AI have come a long way. This is especially true if you look at image recognition, image generation, language processing, and language production (ChatGPT, anyone?).

In a previous blog we already saw that several tech followers and AI experts see advanced and general-purpose AI as a potential existential threat to the future of humankind. If AI were to surpass humanity in terms of general intelligence, it could become extremely difficult (and maybe even impossible) to control. Marc Andreessen, entrepreneur, software engineer, and co-founder of the venture capital firm Andreessen Horowitz, has a different outlook on the future of AI and sees the technology as a saving grace for mankind and the planet. Instead of killer software and robots that will spring to life and decide to murder the human race or ruin everything, he views advanced AI as a potent tool to make the world a better place.

In this article, we will take a closer look at Andreessen’s arguments. Why does he firmly believe that AI is and will be a change for the good instead of an existential threat? And do his arguments hold up under critical analysis?

Why and how can AI make the world a better place?

The main premise of Andreessen’s argument is his positive view of human intelligence. According to Andreessen, our cognitive abilities have improved a very broad range of life outcomes, on a personal as well as on a professional level. Without the application of intelligence, we wouldn’t have been able to realise important breakthroughs in the fields of science, technology, chemistry, medicine, energy, construction, transportation, communication, art, ethics, philosophy, and morality. “We would pretty much all still be living in mud huts, scratching out a meager existence of subsistence farming”, concludes Andreessen.

Andreessen argues that AI is able to profoundly augment human intelligence to take it to an even higher level and make all the outcomes of intelligence better. He even sees AI as one of the best things human civilization has ever created, a breakthrough that’s on par with or even goes beyond the invention of electricity and microchips.

So why all the doom and gloom?

Why do so many people, including many computer scientists and AI developers, fear the rapid advancement of state-of-the-art AI technology? Andreessen points out that historically every technology that matters (electric lighting, automobiles, the internet) has sparked skepticism and irrational moral panic, in which legitimate concerns are inflated to a level of hysteria.

He identifies two major types of actors when it comes to reform movements and the introduction of new and disruptive technologies.

  • The “baptists” are people who truly (on a deep and emotional level) believe that new restrictions, regulations, and laws are required to prevent societal disaster.
  • “Bootleggers” are mainly self-interested opportunists who seek to profit financially from the imposition of new restrictions, regulations, and laws that insulate them from competitors. Andreessen sees government-blessed AI vendors protected from new startup and open-source competition as an example of bootleggers.

Baptists and bootleggers: their arguments and their merits

Andreessen identifies five major arguments and claims that the baptists and bootleggers use to stress the danger of advanced AI and the widespread adoption of this revolutionary technology. Let us take a look at these arguments and how Andreessen refutes them.

1. AI is going to kill us all

The first and most ominous AI doomer risk that skeptics bring forward is that AI will someday decide to literally kill humankind. Andreessen points out that the fear that technology of our own creation will rise up and destroy us is deeply embedded in our culture (just look at the ancient Greek myth of Prometheus, or read Mary Shelley’s novel Frankenstein; or, The Modern Prometheus).

Andreessen states that the idea that AI will decide to kill humanity is unscientific and a profound category error. AI is not a living being with motivations, goals and the potential to develop a mind of its own. According to Andreessen, AI is nothing more than a combination of math, code and computers created, built, owned and controlled by people. He sees the most rabid proponents of the ‘AI killer theory’ as modern cults, thriving on fear and irrationality.

2. AI will ruin our society

The second big fear is that AI will not so much kill us, but will generate outputs (misinformation, fake news, hateful content) that could ruin society and political systems. According to the baptists and bootleggers, AI alignment is the only way to prevent this. Andreessen isn’t convinced and believes that strict frameworks of restriction lead to the creation of an elitist, government-corporate-academic thought police that suppresses many of the opportunities that AI offers.

3. AI will take our jobs

The fear of job loss due to technological breakthroughs (mechanization, automation, computerization, AI) has been a recurring theme in the last couple of hundred years. Although we’ve been through such technology-driven unemployment panic cycles in our recent past, most people still have a job and have profited from significant wage growth throughout the decades. Andreessen thinks that AI is not going to create mass unemployment, but will rather fuel productivity growth and lead to the most dramatic and sustained economic boom ever witnessed in human history.

4. AI will cause crippling inequality

Andreessen admits that inequality is an issue in our society, but doesn’t believe that it is driven by technology. In fact, he holds the sectors of the economy that are the most resistant to new technology responsible for this phenomenon. He argues that the free market will drive the companies that build AI solutions to make their products (and the benefits that they bring) available to as many people as possible.

5. AI will lead to bad people doing bad things

This is the only one of the proposed risks regarding AI that Andreessen considers to be real and legitimate. Technology is a tool. The nature of every tool is that it can be used for good and bad things, depending on the intentions and personality of the user. However, since the AI cat is already out of the bag and the technology is easy to come by, stopping the advancement of AI is impossible without levels of totalitarian oppression that would be utterly draconian.

Andreessen holds the opinion that current legal frameworks allow us to curb the risk of bad people doing bad things with AI. First, we already have laws on the books that criminalize most of the bad things that anyone is going to do with AI. Hacking governmental computer systems? Stealing money from a bank? With or without AI, such acts will result in criminal prosecution. Secondly, Andreessen proposes using AI as a defensive tool for good, for example by “putting AI to work in cyberdefense, biological defense, hunting terrorists, and in everything else that we do to keep ourselves, our communities, and our nations safe.”

What we need to do

Drawing parallels to the Cold War era, Andreessen suggests that Western countries should drive AI into our economy and society as fast and hard as we possibly can, in order to maximize its gains for economic productivity and human potential. He contrasts this view with the Chinese take on AI, a more cynical outlook that sees the technology as a mechanism for authoritarian population control.

In the closing paragraphs of the article, Andreessen comes up with some best practices on how to deal with AI:

  • Big AI companies should be allowed to build AI as fast and aggressively as they can, but shouldn’t be allowed to achieve regulatory capture or establish a government-protected cartel that is insulated from market competition due to incorrect claims of AI risk.
  • Startup AI companies should be allowed to build AI as fast and aggressively as they can.
  • Open-source AI should be allowed to freely (meaning without significant regulatory barriers) proliferate and compete with both big AI companies and startups.
  • To offset the risk of bad people doing bad things with AI, governments working in partnership with the private sector should vigorously engage in each area of potential risk to use AI to maximize society’s defensive capabilities.

How realistic is this assessment?

But how realistic and accurate is Andreessen’s view of the future of AI, especially since several of his claims clash with the views of other experts in the field? He is spot on when he explains that AI, like most other technologies, is essentially neutral and can be used for both good and bad things, and that we as human societies have an important say in where the technology is heading and how AI is going to be used in the near and more distant future.

But he tends to underestimate or downplay the dangers and unforeseen consequences of the unregulated rise of advanced AI technology. For example, Andreessen claims that AI doesn’t have the ability to actively pursue goals. Many experts in the field state that we simply don’t know this and that there are subtle signals (AI performing unexpected tasks or gaining knowledge that goes beyond its programming) that seem to suggest the opposite.

Additionally, there are ample examples of products of human technology and the human mind that have produced negative side effects with far-reaching consequences. The misalignment of social media comes to mind. Social media algorithms designed to maximize engagement and increase clicks culminated in a commercial arms race that modified people’s behavior to become more predictable. It also created a lot of unintended problems such as information overload, addiction, shortened attention spans, doom scrolling, an often toxic influencer culture, the sexualization of kids, QAnon, polarization, fake news, and deep fakes.

Other examples of beneficial technology with terrible side effects are fossil fuels and plastic. Both have fueled economic progress, but have also led to the near collapse of many vital ecosystems, widespread pollution, and the dangerous alteration of the global climate. Although Andreessen raises some valid points and useful insights on the future and our dealings with AI, his piece is somewhat compromised and muddled by an overly optimistic view of human intelligence, the free market economy, and the relationship between humans and technology.

Conclusion: AI is neither inherently bad nor the bringer of all that is good

In the end, AI is neither inherently good nor bad. It is highly unlikely that AI will kill off the human race and end the world as we know it. On the other hand, AI is going to have a significant impact on our lives and society. And it would be naive to expect this impact to be uniformly positive without some form of regulation and responsibility management.

Ultimately, AI is a tool, and its effects depend largely on how we choose to use it. It’s up to us to ensure that the technology is used in a way that benefits society as a whole and helps to create a better future for everyone.
