In a groundbreaking move, the European Parliament has approved an artificial intelligence (AI) law aimed at mitigating the technology’s risks to people and society.
It is the first legislation of its kind in the world and a stark departure from the many tentative and non-binding regulatory measures seen elsewhere.
These include an executive order signed by US President Joe Biden in October 2023 and the G7 AI Principles and Code of Conduct, intended to boost AI safety and integrity.
The EU’s law is seen as a significant step towards the establishment of a much-needed globally recognised set of rules and standards.
At an AI Safety Summit in the UK in November 2023, representatives of 28 governments and dozens of companies underscored the growing concern over the potential dangers of AI.
The new EU law is a potential game-changer. It is binding on all EU member states and even extends to foreign companies operating outside the 27-member-state group that nevertheless reach EU citizens with their products and services.
The law received significant support in the European Parliament and passed easily (523 votes to 46, with 49 abstentions).
Rights over profit
The EU Artificial Intelligence Act aims to safeguard the fundamental rights of individuals, uphold democracy and the rule of law, and foster a trustworthy and transparent environment for AI development.
Those working in AI development must now disclose essential information about their products and respect individuals’ personal data, particularly where technologies such as facial recognition software are deployed in public settings.
Furthermore, the law lets EU citizens lodge complaints that companies must promptly address, in another boost to consumer protection.
High-risk AI systems, such as those that may compromise fundamental rights or make discriminatory decisions, are now subject to stringent conditions and restrictions.
There is some concern that this could hamper productivity and innovation in AI, or undermine competitiveness, owing to the new and significant responsibilities placed on developers and creators.
The law becomes fully effective in two years, though some elements will take effect earlier. It can be divided into three main areas: categorising risks, promoting uses that benefit humanity, and ensuring transparency.