When artificial intelligence becomes a nightmare

Geoffrey Hinton's resignation stirs up panic and mobilises the White House

Hinton's greatest concern is that governments and power-hungry companies will seek to monopolise AI technology, or that it will fall into the wrong hands.
Nash Weerasekera

Geoffrey Hinton, Google's leading artificial intelligence (AI) researcher, who announced his resignation from the company on 1 May, joined the panicked chorus that rose shortly after the launch of ChatGPT in November.

At the time, people were stunned by the rapid development of the generative AI that underpins the application. In it, Hinton saw a sign of a dangerous future in which humans lose control as machines grow smarter.

The resignation sharpens panic

What Hinton — nicknamed the godfather of artificial intelligence — foresaw is not new. He was preceded by many prominent figures in the world of technology and business, including:

  • Elon Musk, owner of Twitter (renamed X Corp last month) and CEO of Tesla
  • Steve Wozniak, co-founder of Apple
  • Stuart Russell, professor of computer science at the University of California, Berkeley
  • Max Tegmark, professor of physics at the Massachusetts Institute of Technology
  • Evan Sharp, co-founder of Pinterest

At the end of March, these names, along with 1,300 other personalities, signed an open letter calling for a freeze on the development of the most powerful AI bots for a period of time, citing the risks these systems pose if they continue to develop without checks and balances.

Notably, one of the signatories at the time was Yoshua Bengio, the second godfather of artificial intelligence, who, along with Hinton and Yann LeCun, won the 2018 Turing Award for their research on deep learning.

This was preceded by other open concerns about the development of artificial intelligence systems for various uses.

Prominent voices included Bill Gates, the founder of Microsoft, which is currently competing for a huge share of the sector, and Peter Thiel, one of OpenAI's most prominent financiers, who has denied that any evidence exists on how to develop safe artificial intelligence.

An empty "Oval Office" invitation?

On the heels of Hinton's resignation, the White House summoned the CEOs of the four leading American companies in artificial intelligence and technical innovation – Microsoft, Google, OpenAI and Anthropic – whose rivalry has intensified sharply in recent months.

The summons may be the first reaction to Hinton's resignation and his troubling statements.

The stated aim of the meeting is to discuss concerns about the risks associated with artificial intelligence, including privacy violations, bias and the possible spread of fraud and misinformation, as well as the CEOs' duty to ensure responsible conduct in managing their companies and developing their products.

President Joe Biden was clear in asserting these companies' duty to verify that their products are safe before putting them in people's hands, in order to protect users and safeguard their rights.

The question remains how effective this debate can be in the absence of legal deterrence and regulation. In August, seven of the largest artificial intelligence companies responded to the White House's overtures by agreeing to make their models available for public scrutiny, but only in a limited manner, in line with "the principles of responsible disclosure."

Meanwhile, OpenAI refused to make public any of the basic technical information associated with GPT-4, which it recently launched, according to The Financial Times.

When it is too late

What is interesting about Hinton's recent statements is not the statements themselves but, first, the fact that they come from a leading and trusted researcher in artificial intelligence, which reaffirmed all previous opinions and warnings beyond any doubt.

Second, his resignation from Google comes at a time when the company is seeking to secure a place at the forefront of this technology, fighting a fierce battle with Microsoft, whose search engine Bing has begun integrating AI tools, especially ChatGPT, into its services.

Does Hinton's resignation really reflect a desire to speak out about the "dangers" of the technology he helped develop without damaging Google's reputation?

Reuters
Artificial intelligence pioneer Geoffrey Hinton speaks at the Thomson Reuters Financial and Risk Summit in Toronto, December 4, 2017. 

Or does it point to the company's reckless speed in this field of research for purely commercial purposes, much like its rivals Microsoft and OpenAI, without anticipating the consequences of this blind drift for all humanity, despite his claim to the BBC that Google has followed a "responsible approach" in its research?

Is it an "awakening of conscience" by Hinton and his desire not to be part of what Google might do tomorrow?

He expressed to The New York Times his regret for his contribution to artificial intelligence research, but consoled himself by saying, "If I hadn't done that, somebody else would have."

Ironically, Sundar Pichai, Google's CEO, publicly stated last month in an interview with 60 Minutes on CBS News that society is still not ready for what is coming, despite its adaptability, adding that no one in the sector has moved to curb the development of unrestrained artificial intelligence.

When intelligence becomes a nightmare

Hinton's research on "neural networks," systems loosely modelled on the human brain that learn and process information through experience, culminating in the self-directed "deep learning" of artificial intelligence, paved the way for current systems such as ChatGPT, which could exceed the capacity of human thinking in the foreseeable future, on top of their existing superiority in the amount of general knowledge and big data they have been trained on.

The speed at which these systems learn has surprised even their creators, although they have already raised serious reservations because of the misinformation they can store and reproduce.

This is especially worrying if the technology falls into the hands of "bad guys" who might use it for "bad targets," Hinton said, calling this a "nightmare," as (emotionless) digital intelligence is completely different from human biological intelligence: while each system learns on its own, all systems can share what they learn immediately.

In fact, this is not the first time that the 75-year-old British-Canadian academic, who holds a PhD in artificial intelligence, has expressed concerns of this magnitude.

In an interview with CBS News in March, Hinton said the world is at a "pivotal moment" given the rapid advance of artificial general intelligence (AGI), predicting an invasion of the world by this technology in 20 years or less, not in 50 years as he had previously thought.

His greatest concern at the time was that governments and power-hungry companies would seek to monopolise AI technology and that, in the absence of global regulation, governments and companies would not stop developing it until their peers did.

He recently said that tech giants are engaged in a competition that may be impossible to stop.

Hinton and Google

In 2013, Google spent $44mn to acquire DNN Research, a company founded by Hinton and his students that developed advanced "machine learning" and "deep learning" technologies, including the technical underpinnings of new chatbots such as ChatGPT and Google Bard.

Hinton then worked part-time at Google for a decade, alongside his research at the University of Toronto.

Hinton's best-known achievement was his 2012 success in developing a pioneering neural network for image recognition; he is also known for his work on digital neural networks and his research into "unsupervised learning procedures for neural networks with rich sensory inputs."

Hinton was the founding director of the Gatsby Computational Neuroscience Unit at University College London; he is currently the Canada Research Chair in Machine Learning and Director of the Neural Computation and Adaptive Perception Program, which is funded by the Canadian Institute for Advanced Research.

He is also a fellow of the Royal Society, the Royal Society of Canada and the Association for the Advancement of Artificial Intelligence, a foreign honorary member of the American Academy of Arts and Sciences, and a former president of the Cognitive Science Society.

Hinton is not the first person at Google to sound the alarm about the dangers expected from artificial intelligence; recall the company's dismissal last July of engineer Blake Lemoine, who described Google's LaMDA bot as so lifelike that he believed it to be sentient.

So we face a new danger: the exclusion of any opinion that could stand in the way of companies doubling their profits and outperforming their competitors at all costs, even if the price is the destruction of humanity by an invasion of super-intelligent and powerful machines.

The current burst of artificial intelligence and the constant warnings of chaos remind us of the endless calls, for decades, to be alert to the dangers of climate change and the irreversible ravages that it can bring.

However, we are already beginning to see the consequences of environmental neglect: capitalist and industrial ecosystems have failed to commit to alternative solutions and have continued to deplete resources until the most fertile parts of the Earth stand on the verge of drought.

Will the course be corrected by controlling or slowing the development of artificial intelligence, or will we witness a similar fate, lamenting the past when it is too late?
