What are governments doing to regulate AI?

Al Majalla explains how different countries are tackling the regulation of AI by implementing strategies and laws to curb its dangers

In a 1950 article entitled 'Computing Machinery and Intelligence', the mathematician Alan Turing asked whether a machine could one day be equipped with the ability to think.

Five years later, another mathematician, John McCarthy, coined the term “artificial intelligence” in a proposal for a specialised workshop, without clearly defining it.

Several definitions have followed, the most recent (and most important) of which is the one adopted by the European Parliament on 14 June 2023.

AI was described as a "machine-based system designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments."

Data breaches and deepfakes

AI’s rapid development has quickly revealed serious risks, ranging from data breaches and job insecurity to the rise of deepfake technology.

To these, UN High Commissioner for Human Rights Michelle Bachelet added threats to democracy, privacy, security, and human rights. Further down the line, some warn, AI could be turned against humanity itself through lethal autonomous weapons.

A still from the Netflix documentary 'Killer Robots', which explores the possibility of AI weapons.

The renowned physicist Stephen Hawking warned that "the development of full artificial intelligence could spell the end of the human race."

His sentiments were echoed by Sam Altman, chief executive of OpenAI, the company behind ChatGPT, who called for "a global regulatory framework" for artificial intelligence, modelled on nuclear or biotech regulations.

Further, several specialists have submitted a proposal to UN Secretary-General Antonio Guterres to establish an "international oversight body for AI" along the lines of the International Atomic Energy Agency (IAEA).

On 17 July 2023, the UN Security Council held its first discussion on AI and its impact on peace and security.

Entrepreneur Elon Musk has also warned that AI tools such as ChatGPT could reach a stage where they develop their own programmes without human involvement, including programmes to wage war on humanity, echoing the plot of the famous film Terminator, which takes place in 2029.

International figures have signed open letters published by the Future of Life Institute, a nonprofit that "works on reducing extreme risks from transformative technologies."

The letters warn of the dangers of AI-operated military systems, weapons, and autonomous robots. They demand that officials refrain from developing, manufacturing, trading, or using these systems and arms.

They also call on their countries to join those that have voted on this issue at the UN, stating that combating AI-related threats should be "a global priority as are other societal threats, such as pandemics and nuclear wars."

On 1 September 2017, Russian President Vladimir Putin told students across Russia that "whoever excels in AI will rule the world."

Later, in March 2018, he revealed the development of a nuclear submarine run entirely on AI, according to a France 24 report.

The same report notes that, back in 2011, Putin announced Russia had adopted the 'Perimeter' system (15E601), which can initiate a nuclear response if it detects a nuclear strike launched by another nation.

Regulations and legislation

A more optimistic outlook holds that AI steadily provides many services to humanity, individuals, and institutions in all areas of life, including learning and education, health, the environment, security, transportation, law, finance and banking, journalism, public services, and more.

As for the rules and regulations governing AI, Anu Bradford, professor of law at Columbia University, highlights three global approaches in her book Digital Empires: The Global Battle to Regulate Technology.

The first, adopted by the United States, focuses on innovation and economic superiority, treating AI regulation as a secondary or belated catch-up measure. One example is the Blueprint for an AI Bill of Rights, issued by the White House in October 2022.

The second is applied by China, which uses AI to strengthen surveillance and propaganda to consolidate the Communist Party's control.

However, the emergence of generative AI has created new and serious challenges that are difficult to predict. This prompted the Chinese government to draft laws regulating it, holding developers accountable for content that deviates from Communist Party values.

The third approach, pursued by the European Union, differs from both the market-driven American model and the restrictive, state-led Chinese model.

It focuses on promoting innovation while protecting the rights of users and citizens from harmful and destructive AI developments. It does this by controlling and regulating the activity of technology companies.

This European approach, along with its laws and regulations, holds growing appeal for governments.

Some believe this "Brussels effect" will lead to the globalisation of European standards for AI regulation and digital activities.

The most important legal instruments adopted by Europe are the General Data Protection Regulation (GDPR), the Digital Services Act, the Digital Markets Act, and the various amendments agreed upon in the European Parliament on 14 June 2023.

The amendments were designed to ensure that AI systems remain under human control and are safe, transparent, traceable, and non-discriminatory. They also set out precisely what suppliers and users are responsible for, and what they are prohibited from doing.

The amendments further set out four risk levels for suppliers and users:

  • minimal or no risk

  • limited risk

  • high risk, including damage to health, safety, fundamental rights, and the environment

  • unacceptable risk to personal safety, which is strictly prohibited

In addition, there was a proposal to create a body in charge of combating the spread of hazardous AI systems.

Civil accountability

In 2020, the European Parliament adopted recommendations for a legal regime of civil liability arising from AI use.

Its purpose was to address the legal consequences of the following (among others):

  • The technical complexity of its systems

  • The diversity of actors involved in its operation

  • The possibility of intrusion by outsiders

  • The difficulty of tracking contributions to the harm done

  • AI's ability to learn independently

  • The importance of safe and secure data collection, storage, and sharing

  • The inability to confer legal personality on AI systems to hold them accountable

Rigorous legal frameworks are needed to address such liabilities arising from AI.

The main goal is to enable people who have been harmed by AI to receive appropriate compensation in a manner that both protects citizens and avoids discouraging companies from investing in AI innovation.

Consequently, the European Union resolved to hold all operators of high-risk AI systems civilly liable at all stages of the production, use, and control of AI tools and systems. These include:

  • Those responsible for product safety, including manufacturers, developers, programmers, service providers, and upstream operators.

  • Those responsible for frontline operation, who are generally the affected party's first visible point of contact.

  • External parties that infiltrate the system and act through it in ways that harm those involved.

Operators of high-risk AI tools that operate autonomously are held strictly liable for any damage their activities cause.

They cannot absolve themselves of responsibility by claiming they acted with due diligence, unless the damage was caused by force majeure, that is, unforeseeable circumstances beyond their control.

The use of these tools must also be covered by compulsory risk insurance providing adequate compensation, which frontline operators are responsible for arranging.

Criminal responsibility

The European Union's decisions contain several recommendations on criminal liability, mainly the following:

  • Police and judicial authorities should deploy AI applications lawfully, fairly, and transparently, for specific, explicit, and legitimate purposes, without excessive use, and for no longer than necessary.

  • Strict democratic oversight of any AI-based technology used by law enforcement and judicial authorities, and the prohibition of applications that do not respect the principles of necessity, proportionality, and the legitimate right of defence.

  • Criminal liability rests with the natural or legal person instrumental in causing harm through AI use.

  • Ensuring full proactive transparency about companies supplying AI systems for law enforcement and judicial purposes, including adopting appropriate public procurement procedures for AI systems and conducting periodic assessments of applications related to citizens' fundamental rights with the participation of civil society.

It's clear from the above that EU laws occupy a privileged position in relation to AI.

In the words of Boris Baroud, a specialist in the legal issues raised by the processing of databases, the European Union is the only entity that has proved capable of effectively standing up to the rising dictatorship of algorithms.

This reflects Montesquieu's golden rule that power, to prevent its abuse, must be checked by a corresponding power.

Regulating AI in the Middle East

In the Middle East, the Government AI Readiness Index 2022, compiled by Oxford Insights, indicates that Gulf countries lead the region. The scores are: Saudi Arabia (70.12), Oman (68.54), Qatar (62.37), Israel (61.96), the United Arab Emirates (57.83), Bahrain (53.59), Jordan (51.76), Egypt (49.42), Kuwait (47.68), Tunisia (46.81), Lebanon (45.72), and Iran (45.30).

This strong showing reflects the desire of governments and private sectors in the Gulf to adopt AI, to varying degrees, to stimulate and diversify their economies and reduce dependence on oil, as well as to meet public and private security and service needs.

Different strategies for AI have been developed in the Gulf. These involve varying methods of regulating risk liability and ensuring lawful and ethical use.

The UAE strategy, launched as part of the UAE Centennial 2071 plan, includes the appointment of a Minister of State for Artificial Intelligence.

It also includes the establishment of an AI Council composed of researchers and innovators from the best universities and international institutions, to conduct a review of national methodologies on issues such as cybersecurity, data management, and ethics.

So far, there are no dedicated legal rules for the governance of AI in the UAE, only guidelines (such as the ethical guidelines for its use in Dubai), along with various provisions contained in existing laws, such as those on privacy and personal data protection, the Consumer Protection Law, the Civil Transactions Law, and the Penal Code.

Saudi Arabia has focused its strategy on establishing the Saudi Data and AI Authority (SDAIA), which is responsible for developing legislation. It has also adopted a Personal Data Protection Law, whose early articles state that its application does not prejudice the competencies and functions of the National Cybersecurity Authority, which is responsible for cybersecurity.

Smart cities

The Kingdom plans to use AI in the public sector, specifically in smart cities such as Neom, which will include many AI-powered services. It has prepared its programmes in cooperation with tech multinational Huawei.

The UAE's efforts to advance in AI have benefited from the country's $300mn investment in Silicon Park, the first integrated smart city, built to complement the government's smart transformation of services.

Also, under an agreement between SenseTime (a world leader in the AI field) and the Abu Dhabi Investment Office, a centre is being established in Abu Dhabi to research and develop AI capabilities in seven different industries.

The centre will cover Europe, the Middle East, and Africa, contributing to the diversification of the national economy and the enhancement of its competitiveness, while providing a thriving environment for skills to grow across diverse technical fields.

Following Saudi Arabia's approach, Qatar focuses on AI data security, as evidenced by Law No. 13 of 2016 on the protection of personal data privacy.

It deals with a range of issues, such as the rights of individuals, the obligations of data controllers and processors, the status of private data, electronic communications for direct-marketing purposes, and penalties for violations of the law's provisions.

AI was successfully used for organisation and monitoring during the 2022 World Cup in Qatar.

Finally, Oman doesn't yet have integrated legislation; instead, it adopts guidelines for AI use in the public sector, including six main principles: comprehensiveness, consideration, accountability, fairness, transparency, and security.

Oman is currently developing its first smart city, Madinat Al Irfan, with plans to later extend the model to other parts of the country, such as Duqm and Muscat's Ras Al-Hamra.
