
Why I’m scared of AI

Is it only me who takes Elon Musk’s hair-raising predictions about artificial intelligence seriously?

[Image caption: Human beings will then be as influential to the direction of the future as the beasts that live in the forests are now. Image: Shutterstock]

I don’t understand why no one I meet seems as terrified as I am about the coming AI revolution. Increasingly, I feel like Cassandra telling anyone who’ll listen we’re doomed – but all they do is shrug. What am I missing?

Perhaps you haven’t watched Elon Musk’s latest appearance, recorded in November, on The Joe Rogan Experience. I now find myself near-obsessively seeking out and watching every pronouncement the great South African makes on AI, because I believe he’s both the planet’s cleverest individual and our greatest expert on this new technology.

When asked by Rogan about the effect AI is going to have on the global workforce, this is what he had to say: “I call it the supersonic tsunami… Any job that is physically moving atoms, like cooking food or farming, anything that’s physical, those jobs will exist for a much longer time. But anything that is digital, which is just someone at a computer doing something, AI is going to take over those jobs like lightning.”

Oh good. So that’s – according to Grok, Musk’s own AI – about 1.25 billion white-collar jobs that will be swept away “like lightning”. Presumably, the remaining 2.4 billion members of the global workforce who currently hold down blue-collar jobs will then find themselves reporting to robots, as opposed to humans. This is how it starts, no? The droid takeover that until very recently would have seemed unimaginable outside the realms of science fiction.

Does this seem overblown? If so, I apologise, but please – please – tell me what I am getting wrong, because it keeps me awake at night. What are we all meant to do, all nine billion of us, when there are no jobs left? More to the point, what are our kids meant to do? Should we play golf and computer games for the entirety of our existence? Take up crafting? These questions, I fear, are going to become more and more urgent. “In the next ten to twenty years, which is long-term for me, my prediction is that work will become optional,” Musk recently said. How could it not? If every conceivable human skill or aptitude will very soon be performed infinitely better and more quickly by an AI-enabled machine, then what will be the point of doing anything?

It’s a question even Musk says he doesn’t have the answer to. “The challenge will be fulfilment. How do you derive fulfilment in life?” he mused to Senator Ted Cruz in March during a televised conversation about the AI-defined future. And this is the best-case scenario. This is if the robots allow us to go on living. In the same conversation, Cruz asked Musk how likely it is that AI simply wipes humanity out. “Twenty per cent likely, maybe ten per cent… in the next five to ten years,” he said. Fun!

We’re only at the start of it, of course. The beginning of the beginning. ChatGPT only became something we all knew about in November 2022. Last year at FII, Musk dispassionately told the assembled corporate overlords: “AI is getting ten times better per year. Four years from now, that would mean 10,000 times better. Maybe 100,000 times better… I think by 2040, probably there are more humanoid robots than there are people… there’ll be at least ten billion.”

In other words, mankind is right now unleashing an intelligence so seismic – deepening at an exponentially rapid rate – as to be beyond our comprehension. An intelligence that imminently will walk among us. What a staggering notion to attempt to process.

Last time he was on Rogan, in March, Musk said: “I always thought AI was going to be way smarter than humans and an existential risk, and that’s turning out to be true… I think we are trending toward having something that is smarter than any human – smarter than the smartest human – by next year, or a couple of years. There is a level beyond that, which is smarter than all humans combined, which is frankly around 2029 or 2030.”

The fundamental basis for my fear, I think, is that I don’t understand how anyone can imagine that AI will work for us, and not the other way around. There is no precedent for an entity that is cleverer and stronger working for an entity that is stupider and weaker – not in the living world, at least. Why do we think this time it will be different? Is it because as a species, we humans have been around for 300,000 years, and have been planetary top dog – intelligence-wise – for the last 50,000? If so, I worry we’re about to get a hell of a shock.

And how can Musk, or anyone else for that matter, claim to be able to apportion levels of probability to how the AI revolution will play out for mankind? If one AI – by the end of the decade – will be cleverer than all humanity combined, then obviously none of us, not even Musk, will be able to second-guess it. Human beings then will be as influential to the direction of the future as the beasts that live in the forests are now.

“They call it the singularity for a reason, because we don’t know what’s going to happen,” Musk told delegates at the AI Startup School in June. “In the not that far future, the percentage of intelligence that is human will be quite small. At some point the collective sum of human intelligence will be less than one per cent of all intelligence… We’re the biological boot-loader for digital superintelligence.”

Perhaps it will all turn out fine. Perhaps being unemployed will feel like a lifelong holiday and we’ll love it. Certainly, previous revolutions in the way humans work have consistently turned out for the better and not for the worse, despite the fretting of pessimists like me. But surely, this time we can accept it’s different – that this time we’re not so much changing how we work as where we stand relative to the other things with which we share the planet.

Last summer, Tucker Carlson said: “I don’t know why we aren’t having any of these conversations…. If we agree that the outcome is bad, specifically that it is bad for people… then we should strangle AI in its crib right now and blow up the data centres…. If it’s going to become a threat to people and humanity and life, then we have a moral obligation to murder it immediately.”

Clearly, we’re now well past that point. But if there are readers out there who can ease my concerns, I would genuinely like to hear from you. For me, the supersonic tsunami is a source of great fear.


Damian Reilly


Damian Reilly is Editor-in-Chief of Arabian Business.
