This is evidenced by Russia's invasion of Ukraine and the cold war between the US and China, where tensions could escalate into a full-fledged war at any moment.
AI research has undoubtedly made significant progress in the last decade, following the AI winter of the late 1980s that was brought on by resounding commercial failures.
This progress has been especially noticeable in machine learning and in the development of promising tools that can carry out high-stakes, complicated operations with human oversight and intervention or, more dangerously, without them.
The storm that ChatGPT has caused across the world over the last five months helped revive the technological arms race and could provoke rapid shifts in AI.
Addressing these changes requires not only a deep understanding of what we're dealing with, but also the ability to stay ahead of AI while instilling an ethical dimension in the process.
More dangerous than nuclear weapons
Musk is certainly not alone in fearing the risks on the horizon. Others have voiced even starker alarm about the ill-considered progress of AI technologies.
The harshest stance might be the one taken by Bill Gates, co-founder of Microsoft, who said a few years back that AI "could be more dangerous than nuclear weapons."
Yet here he is, just a few years later, refusing to join Musk's call to slow the pace of AI progress.
In 2016, the veteran statesman Henry Kissinger held a secret meeting with leading AI experts at The Brook, a private club in Manhattan, to discuss how smart robots could "cause a rupture in history and unravel the way civilisation works," Vanity Fair reported.
As for entrepreneur Peter Thiel, one of the original donors to OpenAI, he believes that if we had full, strong AI, "it would be like aliens landing on this planet."
Thiel doubts that anyone has a manual on the safe development of AI, as "we don't even understand what AI is, let alone how to control it."
Back in 2015, Musk co-founded OpenAI, the company behind ChatGPT (initially a nonprofit, later a private company), to protect the world from "malicious AI."
He often defended the company and what he called its democratic goals, arguing that it develops AI applications that cannot be inherently monopolised, since the real danger lies in confining the technology to a small group of individuals who could exploit AI's supreme capabilities for destructive purposes.
But after Musk left OpenAI's board in 2018, he became critical of the company and its activities, particularly its relationship with Microsoft.
Machine versus man
Musk explains that humans will become the biological bootloaders of AI. By definition, a bootloader is a small program that runs when a computer is switched on and loads its operating system.
"Matter can't organise itself into a chip," Musk says. "But it can organise itself into a biological entity that gets increasingly sophisticated and ultimately can create the chip."
Musk has warned against humanity's excessively rapid development of AI without regard for the potentially harmful behaviours that this autonomous technology could acquire.
For the Tesla CEO, the issue goes beyond the motives of a handful of Silicon Valley executives, including those at Google. Whether they have a business ethic does not matter: the machines, after all, will not reflect the personalities of the humans who make them. Rather, they will develop into entities of their own.
We cannot predict where the uncontrolled machine will head, but it is possible that its stance will be hostile to man.