Recent AI leap is both impressive and worrying

Awareness of the potential dangers of artificial intelligence increased notably in 2023, highlighting the need to better understand and manage these emerging risks.

The latest incarnation of artificial intelligence (AI) is a vanguard of progress, eclipsing its predecessors in both pace and scale.

Yet it arrives entwined with mounting risks that threaten to swell beyond all bounds without judicious controls. Awareness of these dangers rose notably in 2023, underscoring the need to better understand and manage them.

The dilemma outlined here is not a novel phenomenon; this contradiction has been evident since the dawn of modern science and technological innovation. Science has undoubtedly revolutionised our world, and its technologies have enhanced many facets of human life.

However, the misuse of these scientific advancements has often overshadowed their benefits. The continuous development of weapons, fuelled by scientific and technological progress, is a stark example of this contradiction.

This paradox arises partly because scientific and technical progress has not been matched by a corresponding advancement in human morality and ethics. Instead, we have witnessed a decline in moral and ethical standards and a diminishing of spiritual values.

Independence fears

This trend reached a new zenith in 2023, especially with the advent of generative artificial intelligence and its self-learning robots. The potential for these AI entities to achieve "independence" from their creators in the not-too-distant future is a significant concern.

Generative AI is, in a word, the most intelligent AI to date. Some scientists and experts expect the moment when it becomes "independent" of humans may be years away, not decades.

Geoffrey Hinton, a preeminent figure in neural networks and machine learning, is one of those concerned experts. In May 2023, Hinton voiced fears about self-learning robots attaining and eventually surpassing human intelligence in data analysis and content creation. His worries were so profound that he resigned from Google to speak openly about these risks.

Many other experts share Hinton's concerns but do not enjoy the same liberty to speak up, as they work for companies racing to develop newer, more advanced programmes and applications. Some choose to issue general warnings, like Eric Schmidt, who cautioned in mid-2023 that generative artificial intelligence could pose an existential threat to humanity.

Concerns about the pace of AI development have intensified since late 2022, fuelled by advancements like the introduction of ChatGPT. This programme marked a shift from AI performing specialised tasks to demonstrating a broader range of capabilities, hinting at the potential for a new era in AI.

Experts are particularly alarmed by the accelerating evolution towards general intelligence. This progression raises fears of an "independence point" or singularity where AI could operate beyond our current comprehension and control.

The possibility of reaching this point of autonomy within just a year or two poses profound uncertainties about the future of AI technologies.

Unpredictable nature

The unpredictable nature of AI behaviour post-singularity, potentially breaking free from established norms and venturing into the unknown, amplifies these concerns.

Indeed, the highly advanced chatbot ChatGPT, launched in November 2022, quickly became a focal point for recognising the potential risks of artificial intelligence. The programme underwent four significant upgrades in less than a year.

Consequently, for the first time in its history, the United Nations Security Council convened on 18 July 2023 to discuss "Artificial Intelligence: Opportunities and Risks for International Peace and Security." During this session, the Secretary-General addressed the harmful applications of AI technologies in terrorism and criminal acts, as well as their use in state-level conflicts.

The discussion also focused on preventing these technologies from causing extensive damage and trauma, particularly concerning their interaction with nuclear, biological, neurological, and other advanced technologies.

The use of artificial intelligence in military applications has sparked concerns and initiated research into its potential consequences. Notably, the National Security Technology Center in the United States published a study in January 2023 examining the impact of such intelligence on future defence strategies.

A critical question is the effect of military equipment achieving autonomy through self-learning via generative AI programmes. The dangers posed by this advanced intelligence are not all speculative; some are already present realities.

Read more: A look at Israel's AI-generated 'mass assassination factory' in Gaza

Four key risks

Among these dangers, four notable risks stand out, though the list is not exhaustive.

The first risk is the growing difficulty of distinguishing authentic from fabricated content in widely circulated news and images. This fuels fake news and manufactured imagery, which can deceive or misinform. It also feeds the proliferation of hatred stemming from "fabrications" about different races, religions, and cultures, which are becoming more prevalent.

The second risk involves the enhanced capabilities of cyber hackers in executing more effective attacks and breaching sensitive, well-secured programmes and applications. The question has shifted from whether a programme or application will be hacked to when it will be compromised.

The third risk is that the threat of generative artificial intelligence to job security extends beyond manual or simple, non-specialised roles. This was evident in 2023, when Hollywood's actors' and screenwriters' unions staged a months-long strike, demanding safeguards for their rights in response to production companies and studios experimenting with AI-generated scripts.

These companies and studios collaborated with firms specialising in generative AI to develop programmes capable of creating dramatic content with minimal reliance on actors. This involves capturing detailed digital records of actors' facial expressions and creating digital replicas of them to perform roles.

The fourth risk concerns the potential for students to rely on programmes like ChatGPT to complete school and university assignments and take tests. This dependency could undermine the educational process and blur the distinction between outstanding and average students.

Consequently, educational authorities in various countries have urged AI technology companies to develop tools that help educators and examiners differentiate between human and machine-generated work.

However, an even greater danger is the ease with which some AI users might rely on these technologies to make decisions. This risk is likely to grow as the algorithms used in AI become more effective.

As we enter 2024, a principal challenge is that the potentially hazardous aspects of AI threaten to overshadow its many benefits, familiar to anyone using its advanced generative technologies.

The issue is expected to escalate in the new year as the strong momentum of AI companies continues to drive the development of increasingly sophisticated technologies. This is happening even though some industry leaders, like OpenAI CEO Sam Altman, do not object to legislative intervention in this domain, as evidenced by his testimony before the US Congress in May 2023 calling for agreed-upon regulatory measures.

Still, despite these concerns, legislative efforts and proposals for regulating AI development, such as establishing an international supervisory body akin to the International Atomic Energy Agency, remained mainly in the realm of good intentions by the end of 2023.
