Europe now leads the world in AI regulation

The first comprehensive regulation of AI by a major regulator anywhere assigns applications of AI to four risk categories. By offering protection while allowing for innovation, it provides a useful template.

JUSTIN TALLIS / AFP
French Minister for Economy, Finance, Industry and Digital Security Bruno Le Maire speaks at a press conference during the UK Artificial Intelligence (AI) Safety Summit at Bletchley Park, in central England, on November 2, 2023.

In a groundbreaking move, the European Parliament has approved an artificial intelligence (AI) law aimed at mitigating the technology's risks and limiting its potential to harm humanity.

It is the first legislation of its kind in the world and a stark departure from the many tentative and non-binding regulatory measures seen elsewhere.

These include an executive order issued by US President Joe Biden in October 2023 and the G7 AI Principles and Code of Conduct, both intended to boost AI safety and integrity.

The EU’s law is seen as a significant step towards the establishment of a much-needed globally recognised set of rules and standards.

At the AI Safety Summit in the UK in November 2023, representatives of 28 governments and dozens of companies underscored the growing concern over the potential dangers of AI.

The new EU law is a potential game-changer. It is binding on all EU member states and even extends to companies based outside the 27-member bloc whose products and services reach EU citizens.

The law received significant support in the European Parliament and passed easily (523 votes to 46, with 49 abstentions).

Rights over profit

The EU Artificial Intelligence Act aims to safeguard the fundamental rights of individuals, uphold democracy and the rule of law, and foster a positive and transparent environment for the technology.

Those developing AI must now disclose essential information about their products and respect individuals' personal data, particularly where technologies such as facial recognition software are used in public settings.

Furthermore, the law lets EU citizens lodge complaints that companies must promptly address, in another boost to consumer protection.

High-risk AI systems, such as those that may compromise fundamental rights or make discriminatory decisions, are now subject to stringent conditions and restrictions.

There is some concern that this could hamper productivity and innovation in AI, or undermine competitiveness, owing to the new and significant responsibilities placed on developers and creators.

The law becomes fully effective in two years, but elements will be implemented earlier. It can be subdivided into three main areas: risk categorisation, uses that benefit humanity, and transparency.

Risk categorisation

The law categorises AI-based applications into four groups based on the level of risk associated with their use: unacceptable risk, high risk, limited risk, and minimal risk.

Unacceptable risks encompass AI systems capable of manipulating and coercing human behaviour to alter decisions or influence public opinion.

Using biometric information or physical characteristics (such as height, gait, eye colour, facial features, and voice patterns) in public spaces for the purpose of suppressing specific groups is also now deemed unacceptable.

The law also identifies government-run social assessment systems (or 'social scoring') as an unacceptable risk.

These are used by some governments to compile data on an individual's risk levels by monitoring their financial, social, and criminal behaviour.

High-risk applications include AI used in self-driving cars, in medicine, and in CV-scanning tools that rank job applicants.

The legislation lists any AI system used as a safety component in the management of critical digital infrastructure, such as road traffic, water, gas, and electricity, as high-risk.  

AFP
A road sign reads "Extreme Heat, Plan your journey, Carry water", warning motorists about the heatwave forecast for July 18 and 19, on the M11 motorway north of London on July 17, 2022.

It also deems AI systems to be high-risk if they help assess and determine someone's employment, health interventions, creditworthiness, insurance, or benefits.

Limited and minimal risks are associated with entertainment systems such as games, or with systems that generate text or videos.

The law provides a grace period for companies to align with its provisions. Applications posing unacceptable risks must comply within six months of the law's enactment.

Additionally, lawmakers are granted nine months to identify best practices and establish a robust model for companies and AI developers.

Benefitting humanity

EU legislators want AI to be harnessed for the good of humanity, to serve and enhance human welfare while upholding fundamental rights and minimising risk.

In this sense, Europe differs from the US, where priorities tend towards material gains and the proliferation of technology to boost productivity.

While the EU's law generally prohibits the use of biometric identification systems, it does allow for certain exceptions under strict conditions, provided that judicial and administrative safeguards are in place.

Such technology may be permissible within specified parameters and timelines, for instance, to locate missing people or to pre-empt threats to public security.

These provisions set boundaries and establish the European Union as a global advocate for human rights in technology. Firms that disregard the bloc's ethical standards can now expect to be penalised.

Enshrining honesty

The EU law places a significant emphasis on ensuring transparency in AI applications. For instance, content generated by AI must now be disclosed as such. This should help stop the spread of 'deepfakes' and false reports.

It underscores the importance of categorising applications based on their associated risks and of obliging developers to furnish essential information. AI systems must also comply with EU intellectual property rights.

The Computer and Communications Industry Association (CCIA) Europe, a lobby group, wants to monitor the law's impact on AI productivity and warns of possible ramifications for competitiveness.

It cautioned against burdening innovative developers with disproportionate compliance costs and additional administrative requirements, such as those governing personal data.

CCIA Europe's competition policy manager told Euronews that a competitive AI market would be more advantageous for EU consumers than prematurely imposing additional regulations, which could stifle innovation and cooperation.

The EU law stands as a model and reference point for numerous countries grappling with the intricate task of regulating AI and mitigating its risks. It is also a vehicle through which the European Union can export the values and standards of its member states in areas like human rights and equality.

Acting as a template

Other countries are seeking to do something similar. In September 2021, the lower house of Brazil's Congress passed a bill that creates a legal framework for AI (it still needs to pass the Senate).

The EU legislation will set a precedent, with similar measures expected in the US, which continues to trail the EU in regulating the AI sector.

Nearly a year after their publication, the voluntary pledges of AI companies remain mere statements of intent. A year earlier, in 2022, the White House drafted an AI Bill of Rights. It may have been well-meaning, but it was not binding.

Brendan SMIALOWSKI / AFP
US Vice President Kamala Harris applauds as US President Joe Biden signs an executive order on advancing the safe, secure, and trustworthy development and use of AI at the White House in Washington, DC, on October 30, 2023.

Late last year, President Biden issued an executive order promoting "the safe, secure, and trustworthy development and use of AI." It established new AI safety and security standards, prioritised privacy protection, and advocated for innovation and competition.

In terms of sector regulation, it helped establish the likely US direction, but it was still not binding, so unlike the new EU law, it could not compel action and adherence.

With no AI laws passed by Congress, Biden's administration has been limited to what essentially amounts to guidelines, leaving big US tech companies to formulate their own policies and police their own products.

Alternative pathways

In March 2024, the UN General Assembly passed a landmark resolution aimed at promoting the safe use of AI, urging countries to avoid AI applications that contravene international law or threaten human rights.

Although not legally binding, its international adoption could catalyse action and encourage countries to enact their own legislation governing the use of AI.

Critics say contentious issues like the use of AI in military applications were not covered, nor was the cosy relationship between big tech firms and governments.

The US supported the resolution, which sparked debate. According to Forbes, a business magazine, Washington's backing was intended to counterbalance China's growing influence in the field of AI.

The country that establishes the first workable legal framework for AI is likely to shape the conditions and values underpinning its use, which Washington recognises. 

Consequently, there is the opportunity to promote Western values such as freedom and democracy in AI applications, in contrast to the authoritarianism, surveillance, and censorship now so well-established in China.

In areas such as legal frameworks, the development of AI is becoming increasingly subject to traditional geopolitical rivalries between the US and China.

But the technology itself is advancing far faster than the efforts of governments and international organisations to regulate it. As soon as a law is drafted, it is out of date.

Elon Musk, the entrepreneur who owns electric carmaker Tesla and social media site X, was an early backer of OpenAI, which created the popular generative AI chatbot ChatGPT.

He has joined other experts in issuing warnings about the alarming evolution of the technology. They caution that its speed and scope of development mean it could soon surpass human intelligence, adding to the complexity of its regulation.

Furthermore, the nature of the technology means that it is not delineated by geographical boundaries, unlike other regulated industries. AI is an area that shows the limits of sovereignty.

The need for legal safeguarding is now both urgent and important, because authorities and companies abusing such powerful technology could cause untold problems. 

In the race between innovation and regulation, the EU has fired the first serious shot for the latter.
