AI in the Alps: Davos airs concerns over 'artificial content' age

In troubled geopolitical times, the World Economic Forum points to the dangers of fake news at a time of war and conflict, not least for global finance


The main theme of this year's World Economic Forum in Davos was captured in the event’s subheading, "Rebuilding Trust". The need to do that was a point of agreement among the attendees from the so-called global elite.

Conflicts and wars – whether cold or hot, from Ukraine to Gaza, from the Red Sea to Taiwan – have left their imprint on everyone, including the politicians, economists, and scientists gathered for the WEF’s annual convention in the Swiss Alps.

The worries caused by the rise in hostilities have touched even the countries ranked highest for contentment, including Sweden and its famously happy Scandinavian neighbours.

And they ring out in the most impoverished corners of Africa, where globalisation took root only for established and rising world powers to chase their own economic and geopolitical interests.

This pursuit persists and is now heavily armed, with warships and fleets deployed. It has created conflicting interests among a range of groups, which sometimes converge and at other times diverge, resulting in factionalisation within the countries concerned and disputes that run so deep that even individual households are split down the middle.

Trust tanks as conflict flourishes

As this relentless competition continues, unity is rare, whether over a strategic commodity here or a giant corporation there. The process is defined by a clash of interests in the policies that follow, which come amid internal and external economic crises and promote the well-being of some at the expense of others.

It all adds up to a drop in trust in government. This was on stark display in Davos in the Edelman Trust Barometer for 2024, and the erosion is not fleeting.


The latest barometer describes governments as less ethical and less efficient than companies. At the same time, innovators are reluctant to act because they do not trust governments to legislate effectively.

The WEF's annual Global Risks Report, meanwhile, draws on the views of over 1,400 risk experts, policymakers, and business leaders. This year it identified "misinformation and disinformation" as the primary threat facing the global economy in the short term.

It caught the eye of the United Nations Secretary-General António Guterres, who said: "When global norms collapse, so does trust. I am personally shocked by the systematic undermining of principles and standards we used to take for granted… And I am certain that unless we take action, we can expect much worse."

His remarks were interpreted as a reference to the war in Gaza.

"Artificial content" makes trust harder still

Alongside the political and military issues that preoccupied delegates, another topic dominated the agenda: artificial intelligence (AI).

In a sign of the troubled geopolitical times, it did not receive the wider attention and media coverage it deserved, despite the participation of star-name CEOs, including Microsoft's Satya Nadella and OpenAI's Sam Altman. 

Nonetheless, almost every official discourse emphasised AI's ongoing and rapid growth, complete with warnings about the dark fate it could hold for humanity without unified standards and controls governing this potentially lethal technology.

Despite these repeated warnings, there was a sense in the discussions of a lack of new ideas about how to respond. That left a feeling of control slipping away from decision-makers, legislators, and regulators, while generative artificial intelligence continues to grow and expand toward dominance without proper restraint or accountability.

The Davos debate focused on regulation and legislation. But there is a question about the effectiveness of both in a global race over the cutting-edge tech that shows no sign of slowing down. 

Laws may succeed in slowing companies from developing AI that could prove destructive, or from going too far in allowing machines to teach themselves and, in the end, even control their own destinies.

But AI can – and probably will – end up in the hands of those beyond the reach of the rules. Powerful forces exist beyond the law, from gangs to militias and from individual criminals to rogue states.


AI in their hands would be akin to biological weapons falling into the wrong hands, albeit with a key difference: AI can wreak havoc more easily, and even its smaller problems can snowball into bigger difficulties.

Misinformation, after all, is all it takes to push the world closer to war. It is not hard to leak fake news that can sow discord or make China, the United States, Russia, or European nations feel under threat, or simply at risk of lower living standards.

The primary concern is less dramatic but could lead to serious consequences. It is known as "artificial content" – creating and distributing fabricated facts and data through AI.

Numerous warnings have been issued regarding the misleading and false information, graphics, and images that applications like ChatGPT and similar tools can produce. The WEF's Global Risks Report 2024, issued in Davos, identified these as the greatest short-term threat.

Financial threat

Many people may assume that addressing this concern merely involves raising awareness and verifying information, which can only be done through research.

But what if the false information is linked to investors' choices and decisions, via fabricated financial and economic news or attacks on the reputation of banks and companies, regardless of the underlying truth?

Should a misinformation campaign work, could it go as far as to crash stock prices or even the wider valuations on global markets, leading to significant economic repercussions?

At the moment, generative artificial intelligence is significantly helping banks to manage credit risk and detect fraud. But it is not inconceivable that this "autonomous intelligence" could one day prove damaging.

It may impersonate real individuals, taking on their identities and manipulating personal banking data to steal wealth. The "black box" nature of AI applications makes tracing this kind of fraud difficult.

On a wider scale, markets have shown before how easily they can be moved. False information about the US Securities and Exchange Commission's position on Bitcoin caused the cryptocurrency's price to surge when it was falsely reported that the asset would be allowed into regulated exchange-traded funds.

Fast-moving concern on social media over the financial position of Credit Suisse last year hastened the collapse of one of the biggest names in global finance into the arms of its biggest competitor.

"Artificial content" – the creation and distribution of fabricated facts and data through AI is a chief concern. Misinformation is all it takes to push the world closer to war.

Chatbots and market mayhem

Now, imagine the havoc that human error could cause with artificial intelligence at a company like Deloitte, which has started using an innovative AI-based chat program for its 75,000 employees in Europe and the Middle East, aiming to boost productivity.

From a purely technical perspective, the risks are substantial. These chatbots may misjudge market sentiment or use unreliable sources. Such mistakes could then be transmitted into the market by clients or investors acting on the information.

A feedback loop created in this way could quickly become a systemic problem. And even if clients do not act on information they doubt, the doubt itself adds to a lack of trust.

The challenges this creates are not simple, especially considering that banks were expected to spend around $274bn in 2023 on data protection alone.

In keeping with the flavour of the debate at Davos this year, the dangers posed by "artificial content" are security and political issues alike. There are tremendous risks to stability in both military and financial terms.

Any such "irresponsible content" is capable of wreaking havoc on the planet. The worst fear is that the only way to stop the machine might be the machine itself, making rebuilding trust all the more difficult.
