The existential threat of AI

And how Big Tech PR peddles it

ganpy
Jun 6, 2023


About a week ago, the merchants of the technology world, specifically those behind the AI technology we are already using and the tech we are being lured with as must-haves in the near future, signed a statement. I will repeat: these are the merchants who develop and deploy that very AI technology.

Many researchers and academics, AI executives, and other prominent tech business figures, including Bill Gates and OpenAI CEO Sam Altman himself, are on that list of signatories. They signed a simple, single-sentence statement issued by the Center for AI Safety.

It’s a very short, 22-word statement.

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Wonder why these people would flag this technology as a huge risk to humanity — the very technology their teams are building and funding?

My reasoning is simple.

First, it shifts the general public’s attention and talking points to highly unlikely, far-fetched scenarios, scenarios that do not require them to make any changes to their current operations or business models. If, on the other hand, they were to draw attention to the immediate impacts of AI on society, such as privacy violations, copyright infringement, environmental impacts, and most importantly, labor or workforce displacement, they know addressing those would turn out to be very costly for them. That is why the signed statement above, a call to protect against AI somehow “waking up” one fine day, is a master PR stroke: it is not going to be expensive for them at all.

Secondly, from a PR standpoint, by constantly talking to us about the dangers of AI, they are constantly reminding us that the AI technology they are building is immensely powerful. Like, so powerful that it could push humanity to extinction. Like J. Robert Oppenheimer powerful. I am a technocrat myself and a heavy tech consumer, so I am not underestimating the advances the tech industry has made in machine learning and artificial intelligence. But to put it in simple terms, this technology is not atom splitting, not even close. What they have built so far runs entirely on human training data, all to generate a string of words or pixels or sounds.

If AI is a real threat to humanity, then it is by:

  • Accelerating existing trends of wealth and income inequality
  • Eroding the integrity of information and violating privacy
  • Exploiting natural resources

I am pretty sure there are quite a few distinguished people on the list above who signed the statement in good faith and are sincere about sounding the alarm. But come on!! Is it worth investing our collective time, attention, and resources in a hypothetical scenario, when they could otherwise be invested in addressing the privacy violations, societal bias, and impacts on labor and the environment that are already occurring?

Then there are those who bring up a recent news item about a “hypothetical scenario” of a drone killing its operator “in a simulation” as evidence for concern.

Firstly, it was not even a simulation. It was a thought experiment that got misquoted and mangled in the news cycle. Secondly, if a hypothetical scenario disturbs us so much, how about we take a step back and think about the real drone that killed 37 innocent people celebrating at a wedding party because of faulty human intelligence? What have we done about that?

It’s already clear that the same technology merchants who warn us about the existential threat posed by AI, quite unsurprisingly, do not want AI to be regulated in the US. Sam Altman, CEO of OpenAI, has already threatened to pull OpenAI’s services out of the European Union if AI were “overregulated” there. But the EU is moving forward with its AI Act, which could become the world’s first comprehensive regulatory framework for AI systems. It’s too early to celebrate that as a regulatory achievement, but it is a welcome start.

Also, the whole debate about how regulation could slow down innovation in the AI space can take a back seat, even if it is partially true. The threat to our society comes from within, not from China. After all, even China is ahead of us when it comes to regulating AI.

The point of this post is not to say AI is harmless. Or to dismiss the fear of a potential cataclysm caused by Skynet-like technology at some point in the distant future.

The point of this post is to highlight that our debate right now, while discussing the impact of AI, should be about prioritizing which harm to focus our resources on: the future apocalypse that only the architects of the technology are warning about and claim they are uniquely positioned to avert, or the immediate harm that all of us, governments, researchers, artists, and the public alike, are already in the middle of.

Notwithstanding the sophisticated Big Tech PR machine and all the distractions it tries to peddle in the form of existential threats, I am very clear about which side of the debate I am on.

What about you?
