AI odyssey in 2024 may mark the start of a new era

Generative AI has rapidly permeated our lives and spread like wildfire worldwide. It has ushered in new capabilities, expectations and fears.

Will it bring about a golden age or new dark times for mankind? A year of AI’s rapid rise feels like a millennium. Are we ready?

London: At the end of 2022, the ChatGPT programme was released into the world, and within two months, a record 100 million users had signed up. Was that really only a year ago?

Such is the pace of change when it comes to Artificial Intelligence (AI) or, more specifically, Generative Artificial Intelligence (GAI). It seems more like a lifetime ago.

The speed of this technology's reach and the depth of its impact have shattered previous notions and upended established norms. GAI has rapidly permeated our lives and spread like wildfire across the world.

It has ushered in a new language, new capabilities, expectations, fears, and a new future. It has touched every generation, drawing everything from enthusiastic embrace and qualified support to worried acceptance and critical opposition.

Businesses have welcomed it. Universities have grappled with its implications. Lawsuits have been filed against it (by writers, thinkers, and artists). And everyone, from employees to experts to enterprises and nations, has been disrupted by it.

ChatGPT was not the first groundbreaking discovery or product of AI. The technology has been around for decades, used in various fields since the first humanoid robots emerged in the 1950s. It is estimated that 77% of electronic devices today use AI in one way or another.

What ChatGPT introduced for the first time was an incredible ability to respond to — and interact with — human language, learning from it at a speed many thought frightening.

This has alarmed prominent scientific figures, including AI pioneers like Geoffrey Hinton, and unnerved major tech companies, governments, legal experts, and militaries, all of whom warn of the existential dangers associated with powerful GAI models.

The major worry centres on machine learning and the fateful consequences if it is not properly managed through laws and controls that protect humans from the technology's competitive and domineering tendencies.

Breaking from the past

Managing the rollout of AI cannot rely on past models because it is unlike any traditional technology. Those technologies typically took years to develop, embed, and stabilise, giving humanity the time it needed to catch up, adapt, and transition.

Tech advances of the past have usually required a large complementary physical infrastructure — the laying of power lines, the development of new types of engines, the manufacturing of devices, the building of factories, the training of operatives and so on.

By contrast, much of the infrastructure required to operate GAI systems – cloud computing, software, app stores – already exists. This drastically cuts the time, effort, expertise, and expense needed to launch new AI information systems.

From a commercial point of view, AI tech companies operate outside established business norms. Early entrants are not necessarily the winners; newcomers start where others leave off, which propels a rapid transition through successive stages of AI development and fuels intense competition.

Consider the stellar uptake and impact of ChatGPT as an example. Its success drew attention to Google's research on large and generative language models, on which ChatGPT itself relies.

Similar models from Google, Microsoft, Meta, and others, all with competitive features, quickly emerged. This prompted OpenAI, the company behind ChatGPT, to launch a newer, more flexible model designed to meet the needs of individual businesses.

This adaptation could be seen as "democratising innovation" – a concept formulated by Eric von Hippel of the Massachusetts Institute of Technology (MIT) to describe the ability of users to develop the products and services they need on their own rather than relying on companies to do so.

Factors controlling AI development

Against this backdrop, it is challenging to predict the future of AI or the nature of our future with it, but four factors could help decide what this looks like.

The first is the technology itself. Will the tech companies open it up and make it fully accessible to all, or will it remain their exclusive purview, on the argument that they alone can act as gatekeepers against the perils and dangers it may give rise to?

This is especially relevant given our lack of preparedness to manage all manner of risks, from cyber threats, biased behaviours, and privacy violations to algorithms giving misleading or dangerous information, such as instructions for creating a biological weapon.

Beyond this, there are further risks, including the theft, by trained algorithms, of people's intellectual property, such as written work and visual art, not to mention the millions of jobs and livelihoods that the technology could endanger.

This brings us to the second factor: employment.

The same worry arises at every technological crossroads: when machines become more efficient and productive than humans, the question of replacing workers soon follows.

Will we see, for example, five million Indian programmers out of work within two years, as claimed by the chief executive of London-based Stability AI, now that programmes can write themselves using AI models?

The third and perhaps most important factor concerns the rules and regulations that will govern the work of companies operating in the field of AI. Regulation should be effective and fair, based on standards that apply to everything from oversight to accountability.

The fourth issue relates to humans themselves: how they interact with AI, adapt to its changes, and benefit from it. How will it change the management of their affairs and improve human life?

Dark Age or Golden Age?

In the summer of 2022, very few could have predicted that GAI models such as ChatGPT would have such a transformative effect on 2023, so it is virtually impossible to guess what 2024 will bring us in this area.

The readiness of companies to adopt GAI is already highly advanced, with most having spent 2023 experimenting with it. The money tells the story: investors injected over $36bn into GAI this year, more than double their 2022 outlay. What will they spend in 2024?

Yet not everyone is rushing in. There is, perhaps predictably, a deep divide between those who warn of its dangers and those who relish its possibilities. This has resulted in two distinct AI 'camps' when it comes to tech development.

The first camp is concerned about the uncontrolled and rapid rise of AI. They advocate keeping the source code of the models they release confidential and argue against opening it up to all.

The second camp sees AI as an opportunity to advance rapidly in various fields. They oppose impeding its progress and share their models' source code with others who can take advantage of it. For them, the potential benefits to humanity outweigh the risks.

With that in mind, OpenAI recently launched a more advanced model, GPT-4, which allows users, especially companies, to build their own software and chat applications. Meta followed suit by releasing its GAI model Llama, while Google introduced its Gemini model.

Open-source models can mimic the performance of GPT-4 when trained skilfully and selectively. This encourages competition, prompting new and innovative models.

Yann LeCun, chief AI scientist at Meta, says open-source models have stimulated competition and empowered a greater number of companies to build and use AI systems. Still, critics fear that placing powerful GAI models in the hands of irresponsible actors could increase the risk of deception, cyber warfare, and bioterrorism.

LeCun reminds us that people had the same worries at the beginning of the internet era, adding that the internet flourished precisely because it remained an open platform and maintained its decentralisation.

Still, the concerns prompted Meta and IBM to launch the AI Alliance in early December 2023 to "promote open, safe, and responsible artificial intelligence".

It is an international consortium comprising more than 50 organisations involved in research and development, including universities, scientific agencies like NASA, and leading tech companies like Advanced Micro Devices, Dell, and Intel.

The alliance's primary goal is to strengthen those who advocate and follow the open-source approach, as opposed to the major producers of closed-source artificial intelligence models.

LeCun, Geoffrey Hinton and Yoshua Bengio shared the 2018 Turing Award for their contributions to deep learning. Hinton, long one of Google's leading AI scientists, resigned in May 2023, concerned about "recklessness" in the heated AI race.

Meeting the future head-on

LeCun maintains that intelligence has nothing to do with the desire for dominance, saying: "If it were true that the most intelligent people want to control others, Albert Einstein and other scientists would have been rich and powerful, but they were not."

As a highly regarded and trusted scientist in the AI field, he thinks the superiority of machine intelligence can help humanity tackle some of its most significant challenges, including climate change and the treatment of diseases. He is excited by the prospect because he sees it as being under human control.

There are other challenges facing the development of AI, however, not least the huge resources required to produce these models, everything from data inputs to supercomputing power, electricity, and brains.

For instance, training GPT-3 cost an estimated $4.5mn and consumed 1.3 gigawatt-hours of electricity, enough to power 121 American homes for a year. Training GPT-4, a much larger model than its predecessor, cost far more on both counts.
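
As a rough check on that household comparison (assuming, as the article does not state, that an average American home uses roughly 10.6 MWh of electricity a year), the arithmetic holds up:

$$\frac{1.3\ \text{GWh}}{121\ \text{homes}} = \frac{1{,}300\ \text{MWh}}{121} \approx 10.7\ \text{MWh per home per year}$$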

Moreover, computational power requirements are increasing faster than the available input data, meaning that training newer models risks becoming prohibitively expensive faster than the improvements they offer can justify.

Two recent developments stand out in this context. The first is Google's Gemini, which makes GAI available on mobile phones. This cuts reliance on cloud servers managed by major tech groups and could likewise cut operational costs.

The second is Microsoft's ongoing Project Silica. This aims to develop the world's first sustainable and long-term storage technology based on storing data in quartz crystal.

A durable, low-cost material resistant to electromagnetic fields, quartz crystal could last hundreds of thousands of years and accommodate a data density exceeding seven terabytes in a square crystal plate, with data written and read by laser beams.

To date, we have not yet reached a turning point that justifies a full-scale investment in this storage technology, but that moment may well come in 2024.

Further to these two developments, quantum computing technology continues to develop. This promises an exceptional capability that can significantly accelerate heavy processing tasks and outperform traditional algorithms for solving problems. This would cut costs further while enhancing the performance of AI models.

GAI improves itself with data, and as The Economist reported recently, a research paper published in October 2022 concluded that "the stockpile of high-quality linguistic data is likely to run out soon, probably before 2026".

Undoubtedly, many more new texts, images, and videos are being produced all the time. Still, they may increasingly be stored within corporate databases or on personal devices and, therefore, inaccessible.

Whether the future is dark or bright remains to be seen, but lessons can always be drawn from the past, and technological advances elsewhere have often depended on the interests they serve – political, military, economic, or security.

Will AI replace us?

The term "technological unemployment" was first coined by the renowned economist John Maynard Keynes in the 1930s, but the fear of job losses from technology goes back much further.

In 1412, for instance, the city council of Cologne banned the production of spinning wheels by local craftsmen because of the fear of unemployment among textile workers who still used a hand spindle.

In the 19th century, tailors worried about modern sewing machines, while port workers worried about grain elevators. In the early 20th century, lamplighters went on strike for fear of losing their jobs to electricity.

Each time, despite the fears, the change brought about by a technological advance has not led to the mass unemployment some feared.

True, jobs do disappear, but they are replaced by new jobs in new industries requiring new skills, sometimes in new places. Agricultural workers whose jobs were affected by machines migrated to cities and found work in factories.

Still, regarding its impact on humanity, there is no doubt that the current wave of technological disruption is very different from anything that has come before.

The rapid advances in computing power and AI/GAI capabilities will substantially increase the number and type of tasks in which machines can outperform humans.

Moreover, these tasks will not be limited to physical capabilities as was often the case in the past but will also cover human cognitive abilities and jobs — like writing this article.

A widely cited University of Oxford study predicts that nearly half of the jobs in the United States are at risk from AI and automation over the next 20 years. In other advanced economies, this figure may be lower. Some say that as AI spreads, emerging jobs will outnumber those that disappear.

Analysis from investment bank Goldman Sachs suggests that AI will enhance productivity, potentially raising the global value of goods and services by up to 7%.

Policy planners worldwide are grappling with how to transition workers displaced by algorithms. Those excluded by the new technology must be able to move into new fields of work and adapt by acquiring the necessary new skills.

According to a study by PredictLeads, demand for AI specialists is strong, with almost two-thirds of companies in the S&P 500 advertising related job openings. Recruiting programmers and engineers has recently become much easier.

Yet algorithms are not the only threat to jobs. Not too long ago, physical humanoid robots were a figure of fun, more often seen in movies than in industry.

That is changing fast.

Already, robots equipped with AI are taking jobs once thought best performed by humans: jobs in customer service or financial advice, jobs that require the human touch, creativity, morals, and empathy.

Elon Musk, a tech billionaire who has warned of the dangers of uncontrolled AI progress, foresees the triumph of robots in commerce, replacing human labour.

At his carmaker, Tesla, he concedes that humans are still more efficient than robots at assembling certain cars. Still, he anticipates that Tesla's humanoid robot could eventually be worth more than the entire car business, currently valued at around $650bn.

The race to regulate

A year on from the launch of ChatGPT, minds have focused on the self-learning capabilities of AI, the nature of intelligence, and the point at which algorithms surpass human intelligence and venture beyond our control. So, what control is there?

To date, tech companies have voluntarily complied with ethical principles, integrating these commitments into their governance systems. However, these are optional and unregulated, so there is still no way of ensuring the safety, security, and effectiveness of the AI developed at these firms.

Many commit to investing in research that can help guide the regulation of this technology, such as techniques to evaluate capabilities that may pose potential risks in AI models. Still, analysts point to the temptation of vast profits and rapid developments.

Last May, a group of 350 scientists and executives from AI companies – including OpenAI, Anthropic, and Google – warned of the "danger of extinction" posed by AI, likening it to that posed by nuclear war and pandemics. Yet despite the warnings, none of their companies have stopped working on yet more powerful AI models.

History shows that voluntary self-regulation alone does not work, so strict government intervention is needed. Balances, standards, controls, and checks should run alongside the significant investments to fund the next breakthrough.

For a cautionary tale about the importance of regulation, look no further than FTX, a cryptocurrency trading platform that went spectacularly bust a year ago. It was founded in 2019 and was valued at $32bn by January 2022.

The founder – Sam Bankman-Fried – had also set up a trading firm, which turned out to have been using FTX customer funds to cover its own losses. By November 2022, it all came crashing down. Investors lost billions.

Given the pace of change in AI, the world cannot wait a few years before legislation catches up, takes shape, and moves towards enforcement.

Thankfully, the European Union made significant progress in June 2023 with its AI Act. This establishes different rules for different risk levels. It defines "unacceptable risk AI systems" as "systems that are considered a threat to people".

These will be banned.

The UK does not appear to be ready to regulate AI in the short term, despite Prime Minister Rishi Sunak's wish that the UK become a global centre of excellence in this field by establishing the AI Safety Institute – a kind of monitoring agency.

The UK has proposed a bill that is likely to come into force in 2024, which aims to protect against the adverse effects of AI, especially regarding jobs and privacy, and enable innovation and trade.

The debate continues over laws and red lines even though the European Commission first proposed a common regulatory and legal framework for AI in 2021.

The rules were intended to cover personal and biometric data, including facial recognition and fingerprint scanning. Still, Germany, France, and Italy – home to several AI companies – felt that this would slow development, so they argued for voluntary regulation.

The trio also worried about falling behind China and the United States. In Washington, criticism has been levelled at the Biden administration for being light-touch on this technology out of fear that the US would fall behind Beijing in the AI arms race.

Analysts ask: if countries and governments can be this indifferent to regulation, what motivates companies to constrain themselves voluntarily with oversight and effective governance systems? Why shouldn't they prioritise their own interests in the fierce competition within the sector?

AI and mankind

Jack Ma, the co-founder of the Alibaba Group, once said: "Machines may be stronger, smarter, and faster than humans, but they can never be as wise as humans because we humans have faith, and we have religion, and we have a heart."

Ma linked human readiness for the future to moving away from traditional methods of acquiring knowledge and relying instead on creativity, innovation, and a constructive approach to everything humans do.

His statement may reflect the hatred, jealousy, or fear that many people feel towards robots, which they see as threatening their essence and existence. It recalls the vandalism that delivery robots faced on the streets of New York last year.

Even as AI 'became more human' – acting as a personal companion and interacting with users – many assumed that it was still under human control and that no matter how clever or independent it got, humans still had a handle on it. But do we?

As we have grown hooked on what AI offers, the technology has continued to learn and develop its own capabilities, accumulating and exploiting data and imposing ideas, tendencies, choices, and behaviours that will alter human conduct in the long run.

Quite without realising it, people are already losing the power of decision-making. Yet there may be more at risk, including identity, efforts, rights, achievements, and goals. People fear the prospect of losing what they worked hard to achieve.

In summary, while much of the noise around AI in 2023 may have been exaggerated, progress is likely to continue despite the costs, dangers, and complexities, driven by greed, polarisation, and competition.

Given the absence of constraint or enforcement and the fact that humans seem obsessed with threats, division, and war, AI may one day emerge as the stronger party.
