The hysteria over AI throws up a broad range of risks

Somewhere between Elon Musk’s apocalyptic fears and Bill Gates’s calm, a frenzy is unfolding about how big a threat this cutting-edge tech could be

Nash Weerasekera
Fears over the dangers of fast-developing artificial intelligence run up to an existential threat to humanity. Those funding AI should think about the dangers, not just chase profit at all costs.

Not even two weeks after its launch, OpenAI's GPT-4 is already under fire. The latest and most sophisticated version of the artificial intelligence chatbot was met with uproar, and even with a petition to halt AI experiments.

The open letter, entitled "Pause Giant AI Experiments," sparked controversy and fear worldwide. Signed by more than 1,300 tech and business figures, it calls for the training of powerful AI systems to be halted for at least six months because "AI systems with human-competitive intelligence can pose profound risks to society and humanity."

The letter derives its importance from its high-profile signatories, among them Elon Musk, CEO of Twitter and Tesla; Steve Wozniak, co-founder of Apple; Stuart Russell, professor of computer science at Berkeley; Max Tegmark, professor of physics at MIT; Evan Sharp, co-founder of Pinterest; and a swathe of others from the tech industry.

They said the pause should be used to jointly develop rules and safety protocols for AI tools, so as to allay fears and regulate the sector. Otherwise, governments must "step in," the signatories believe.

Musk: Control a must

Elon Musk believes some form of regulation and control is a must. "We have regulatory bodies that oversee the public safety of cars and planes and medicine," he says. So why not introduce regulations for AI as well?

Musk is not worried about AI tools like postal or photography drones, or about social media apps and algorithms, which grow ever more sophisticated while our true social connections dwindle further.

He is worried about much more powerful AI capabilities, and rightfully so, provided that his intentions are genuine and not rooted in a covert business agenda.

'The biggest existential threat to humanity'

This is not the first time Musk has warned against arbitrary or irresponsible AI development, especially in machine learning.

In a speech at MIT in 2014, he said AI could become the "biggest existential threat to humanity," a stance he reiterated at the World Government Summit in Dubai last February.

In 2017, he also led a campaign calling on world governments to ban the development of AI weapons, fearing an arms race to develop, deploy, and use "killer robots" or autonomous weapons that could, in and of themselves, ignite a third world war, essentially turning science fiction into reality.

At the time, Musk's campaign instantly gathered support from about a thousand scientists and leading AI experts, including physicist Stephen Hawking, DeepMind CEO Demis Hassabis, and Apple's Wozniak. Hundreds of AI experts from Canada and Australia would later join the campaign.

Divide over regulation

To this day, states are divided over the authorisation and regulation of such weapons. Several proposals to regulate AI technologies have recently been put forward in the United States, the United Kingdom, and the European Union. Earlier, the EU, along with China and Singapore, had put forward preliminary frameworks for governing the technology.

But collective state action may not bear fruit or keep pace with AI advances; it may amount to little more than flagging down a rocket after lift-off. Instead, each state must fulfill its own oversight role.

Both the 2017 and 2023 letters underline fears of AI taking control while the world is still unprepared for potential threats and incapable of addressing cybersecurity risks, extremist behaviours, and algorithms controlling the human mind.

Such harm could come from giving humans wrong or dangerous information about, say, making a biological weapon; stealing their texts, art, and poetry; replacing hundreds of millions of them in their jobs; and, worst of all, destroying humanity itself.

Fears of an AI apocalypse

Are these fears reasonable? The short answer is yes.

Humankind's perspective on security is focused on geostrategic and economic interests, which tend to take precedence over any other consideration.

This is evidenced by Russia's invasion of Ukraine and the cold war between the United States and China, either of which could escalate into full-fledged war at any moment.

AI research has undoubtedly made significant progress in the last decade, following the AI winter of the late 1980s, which was brought on by resounding commercial failure.

This progress was especially noticeable in machine learning and in the development of promising tools that can carry out high-stakes, complicated operations with human oversight or intervention, or, more dangerously, without it.

The storm that ChatGPT has caused across the world in the last five months has helped revive the technological arms race, possibly provoking rapid shifts in AI.

Addressing these changes requires not only a deep understanding of what we're dealing with, but also the ability to outpace AI while instilling an ethical dimension in the process.

More dangerous than nuclear weapons

Musk is surely not alone in fearing the risks on the horizon. Others have voiced even starker alarm over the ill-considered progress of AI technologies.

The harshest stance might be the one taken by Bill Gates, co-founder of Microsoft, who said a few years back that AI "could be more dangerous than nuclear weapons."

Yet here he is, just a few years later, refusing to join Musk's call to slow the pace of AI progress.

In 2016, veteran statesman Henry Kissinger held a secret meeting with leading AI experts at The Brook, a private club in Manhattan, to discuss how smart robots could "cause a rupture in history and unravel the way civilisation works," Vanity Fair reported.

Entrepreneur Peter Thiel, one of OpenAI's original donors, believes that if we had full, strong AI, "it would be like aliens landing on this planet."

Thiel doubts that anyone has a manual on the safe development of AI, as "we don't even understand what AI is, let alone how to control it."

Back in 2015, Musk co-founded OpenAI, the maker of ChatGPT (initially a nonprofit, later a private company), to protect the world from "malicious AI."

He often defended the company and what he called its democratic goals, justifying its development of AI applications that cannot be inherently monopolised; the danger, he argued, lies in technology being confined to a small group of individuals who could exploit AI's supreme capabilities for destructive purposes.

But after leaving OpenAI's board in 2018, Musk became critical of the company and its activities, particularly its relationship with Microsoft.

Machine versus man

Musk explains that humans will become the biological bootloader of AI. A bootloader is the small program that loads a computer's operating system each time the machine starts up.

"Matter can't organise itself into a chip," Musk says. "But it can organise itself into a biological entity that gets increasingly sophisticated and ultimately can create the chip."

Musk warned against humanity's excessively rapid development of AI without regard for the harmful behaviours that this autonomous technology could acquire.

For the Tesla CEO, the issue goes beyond the motives of a handful of Silicon Valley executives, Google's among them. Whether they have a business ethic does not matter. The machines, after all, will not reflect the personalities of the humans who make them; rather, they will develop into entities of their own.

We cannot predict where the uncontrolled machine will head, but it is possible that its stance will be hostile to man.

Race for dominance

Remarkably, Alphabet CEO Sundar Pichai and Microsoft CEO Satya Nadella did not sign the open letter, as the suggested pause does not serve their business interests in the frantic race that ChatGPT has set off.

The "existential" battle has already begun between the two companies, both of which have a long history of AI investments.

More worrisome is the fact that the two companies, along with Meta and Amazon, are downsizing the ethics teams that assess moral issues surrounding the deployment of AI, according to the Financial Times.

This race for market dominance should not be the sole driver behind the rapid deployment of technologies with such unprecedented impact on humanity.

Just as pharmaceutical companies cannot introduce a new product until it undergoes rigorous safety tests that could take years, AI technologies as powerful as or more powerful than GPT-4 should not be launched before assessing the ability of cultures, peoples, and countries to safely use them.

Perhaps Musk was right to describe Bill Gates' understanding of AI as "limited." Last March, Gates published an essay entitled "The Age of AI has Begun," in which he defended AI as complementing or supporting the role of human beings rather than nullifying it, especially in raising labour productivity, improving access to education and healthcare, and combating climate change.

While touching on the dangers of this technology, Gates called for balancing fears about the negative aspects of AI, which are realistic and understandable, against its ability to improve people's lives.

"Just as the world needs its brightest people focused on its biggest problems, we will need to focus the world's best AIs on its biggest problems," Gates said in his essay, published on his own Gates Notes website.

The world is at a critical juncture

The most prominent neutral opinion was that of British computer scientist Geoffrey Hinton. In an interview with CNBC, the "Godfather of AI" said that the rapid advancement of artificial general intelligence (AGI) has put the world at a critical juncture.

Hinton said he now expects the technology to conquer the world in 20 years or less, a far cry from the half-century timeline he had suggested not long ago.

Hinton called for careful consideration of AI's possible consequences, which may include attempts to wipe out humanity; he deemed this risk secondary to the fact that power-hungry governments and companies are seeking to monopolise AI and will not stop developing these technologies unless their rivals do.

Musk seems to have lost the first round of the battle. Questions hover over the sincerity of his call, with some describing it as part of a "bigger sales strategy."

It is still unclear whether he will also lose the second round amid growing awareness of, and demand for, AI development. AI is being introduced across every field and use, start-ups are proliferating, and competition is at its peak among investors hungry for everything new and promising.

Getty Images
Futuristic server room with quantum computers.

Business pays no mind to the fate of humanity

Musk's open letter will likely fall on deaf ears. At best, it may spark a serious debate that outlines the possible dangers and drives companies to uphold "digital governance" at a time of unforeseen obstacles.

The first serious response came from Italy, which banned ChatGPT over the absence of any legal basis for collecting and storing personal data to "train ChatGPT's algorithms," in contravention of EU privacy laws. Similar moves are expected soon in other EU countries.

Open letters will not stop AI companies from developing, deploying, and profiting from their products; nor will banking on moral and ethical motives alone.

In the world of business and money, there is often no place for considering the fate of humanity. Trading profit for ethics will never be part of the corporate agenda, which has rarely shown a willingness to forgo quick profit even to prevent the destruction of the planet.
