
AI and Regulation

What prospects?

by Emanuela Rossi

  1. Introduction

Artificial intelligence (AI) has gradually become an area of strategic importance and a key driver of economic development that can bring solutions to many social challenges.

However, the risks connected with its use and spread have highlighted the need for a regulatory framework that would determine the legal requirements applicable to the relevant actors and main applications (such as weapons, health and disease, financial services, autonomous vehicles, robotics, energy production and distribution, and employment).

  2. Status of the international regulations and principles on artificial intelligence

In May 2019, the member countries of the Organisation for Economic Co-operation and Development (OECD) adopted the Recommendation on Artificial Intelligence[1] and developed Principles on Artificial Intelligence in order to develop and promote AI applications that respect human rights and democratic values[2].

Following on from the OECD, in June 2019 the G20 developed human-centred AI Principles that draw on the OECD ones[3].

In February 2020, the European Union published its White Paper on Artificial Intelligence.
This focuses on the strategy for promoting and regulating AI, including safety, data protection and privacy, and non-discrimination and fairness[4].
The White Paper is complemented by a Report on the Safety and Liability Aspects of AI[5].

In June 2020, the Global Partnership on Artificial Intelligence (GPAI) was launched.
The idea was developed in 2018 within the G7, under the Canadian and French Presidencies. One of the aims of the GPAI is to support and guide the responsible development and use of AI applications respectful of human rights.[6]

In December 2020, the White House issued its final guidance on the regulation of AI[7].
The document also considers the implications of AI for fairness and non-discrimination.

Finally, the European Commission is due to present a regulatory framework for high-risk AI in March this year[8]. Among the Commission's ambitions is the introduction of a conformity assessment of high-risk AI applications carried out by certified test centres. Such measures would help ensure that EU consumer rights are complied with[9].

  3. Key applications and recent developments
  • AI and healthcare

The use of AI in healthcare is gaining increasing interest, as AI has the potential to improve health outcomes and offer cost savings.

However, patient data are often used to build and test AI systems, thereby raising data protection and privacy issues.

AI developers often obtain patient data through data-sharing agreements with the holders of regional or national data sets, rather than directly from the data subjects.

Use of patient data is subject to several laws and codes of practice, the most important one being the EU General Data Protection Regulation (GDPR).

The Information Commissioner’s Office (ICO), the UK’s independent data protection regulator, recently contributed to the UK Parliamentary Office of Science and Technology’s (POST) research briefing on AI and healthcare (known as a POSTnote), highlighting the data protection and privacy implications of the use of AI in healthcare.[10]

  • AI and work

Among the applications of AI in the work environment is its use to select workers, whether through the analysis of CVs in the recruiting process or within platforms such as the well-known Deliveroo.

The Court of Bologna has recently held that the algorithm governing access to work sessions on Deliveroo is discriminatory when it prejudices workers exercising their trade union rights.

In fact, Deliveroo’s riders are coordinated by an AI platform that allocates work slots based on each rider’s reputational ranking. The ranking is made up of the rider’s reliability and participation during peak times, and peak times are exactly when riders may take part in trade union strikes[11].

  • AI and product liability

In July 2020, the JURI committee of the European Parliament published the study it had commissioned on Artificial Intelligence and Civil Liability.

The study concludes that the lack of subjectivity and legal capacity of AI systems does not prevent them from being subject to the European product safety regulation and the Product Liability Directive, as it is always possible to identify a natural person or a corporate body behind them.
The former imposes essential safety requirements for products to be placed on the market, while the latter aims at compensating victims for the harm suffered from the use of defective goods.
However, these rules could be further improved to facilitate their application to some of the most recent emerging technologies, such as AI[12] [13].

It follows that in some cases the existing laws and regulations can be applied to artificial intelligence with little or no adaptation. This is the case for human rights (especially the right to respect for private and family life and the prohibition of discrimination[14]), data protection, consumer rights, product safety and product liability.
In other cases, the application of AI gives rise to new challenges that call for improved or tailored regulation. This may be the case for the use of AI in the workplace.

  4. Conclusion

The strategic importance of AI and the risks connected with its use and development have highlighted the need to regulate AI.

The need for a regulatory framework is also reinforced by the deep connection that AI has with data protection and human rights (especially non-discrimination).

In today’s hyper-connected world, there is a need to develop a shared strategy for AI’s regulatory framework, detailing the key requirements and setting the minimum standards that AI developers and actors should comply with, to be then implemented at national level.

These minimum standards should cover:

– safety;

– privacy and data protection;

– non-discrimination;

– liability, including product liability and liability for errors.

Normally, regulations are set up after problems arise: a negative event happens and there is some public outcry[15].

Some AI developers and academics have also argued that AI is still in its infancy and that it is too early to regulate the technology.

Considering the impact that artificial intelligence has on people’s rights as well as on innovation and economic growth, the aim is to develop regulations that are practical and flexible enough to accompany the development of a rapidly evolving field, and that respect the principle of proportionality so as not to choke AI development.

This set of shared international regulations should go beyond recommendations and principles and be binding, in order to drive the development of AI technologies and applications compliant with human rights, safety, consumer rights and liability rules.








[7] Engler A., on, 8 December 2020.

[8] Morelli C., Legislazione europea e intelligenza artificiale, on, 09 November 2020.

[9] Stolton S., EU Artificial Intelligence regulation at risk in WTO e-commerce deal, study says, on, 27 January 2021.


[11] Bianchi D., L’intelligenza artificiale discrimina i riders e il Giudice apre alla contrattazione collettiva dell’algoritmo, on Diritto e Giustizia, 5 January 2021.


[13] Olivi G., E’ tempo di stabilire le responsabilità per gli errori dell’intelligenza artificiale, on, 20 January 2021.


[15] For a critical read on recent approaches to regulation, see MacCarthy M., AI needs more regulation, not less, on, 9 March 2020.