AI and the European Defective Product Directive

by Dario Panza

With the advent of products embedding Artificial Intelligence (“AI”), an assessment of the adequacy of the legislation in place becomes necessary. In particular, this article focuses on consumer protection and aims to evaluate whether the European Defective Product Directive (“Directive”) [1] effectively addresses the issues that arise when new technologies are brought to market.

Following thorough scrutiny by leading scholars in the field of “Technology Regulation” [2], the Directive has been deemed obsolete and ill-suited to address the challenges posed by new technologies, for the following reasons:

  • the definition of “product” provided in the Directive is ambiguous and fails to capture AI;
  • the burden of proof that the consumer must discharge in order to be awarded damages arising from a defective product is extremely cumbersome;
  • a safe harbor provision included in the Directive shields producers from liability for damages caused by defective products and shifts the economic consequences of such damages onto consumers [3].

In accordance with Art. 2 of the Directive, products are “all movables […] even though incorporated into another movable or into an immovable […] ‘Product’ includes electricity”. This definition is rather broad, but at the same time it makes no reference to the defining features of AI; hence a substantial strand of the literature argues that AI should not fall within the scope of the Directive, exposing a legal loophole.

However, even where the applicability of the Directive to defective AI embedded products is granted, consumers bear a heavy burden of proof that may deter them from bringing actions. Indeed, in accordance with Art. 4 of the Directive, a consumer claiming damages must demonstrate the specific flaw in the product that prevented it from performing the tasks it was programmed for.

Last, but not least, Art. 1 of the Directive sets forth so-called strict liability for the producer: where a product is defective, the producer is liable for damages regardless of any imputable negligence. However, Art. 7, by laying out the so-called development risk defence (DRD), surreptitiously reintroduces a negligence standard, relieving the producer of liability to the extent that, at the time the product was put into circulation, the state of scientific knowledge was such that defects which emerged only later could not have been detected. Therefore, as long as the producer has complied with the duty of care provided by law, it will never be held liable for damages caused by the products it manufactures, ultimately depriving victims of any means of compensation. The fact that any economic consequence arising from defective products must be borne by consumers acts as a deterrent to purchasing AI embedded products and poses a real threat to the development of a market for such products in Europe.

Therefore, a set of rules that appropriately disciplines the AI phenomenon (from a consumer protection perspective) should rest on the following tenets:

  • a set of definitions tailored to the concept of AI;
  • a rebuttable presumption that the allegations brought forward by consumers seeking damages are well-founded (a tech firm normally holds the expertise and the resources needed to demonstrate whether the allegedly defective product was in fact perfectly functional);
  • the establishment of strict liability of the producer (the producer is liable simply because the product is flawed, regardless of compliance with any standard of care) once it is ascertained that a product is defective. Any objection that this would steeply increase the cost of doing tech business (and at the same time deter firms from engaging in businesses of this sort) can be debunked: tech companies could easily resort to the insurance market to manage away liability risk, and the implications of this are two-fold. First, tech firms, relying on economies of scale, would obtain a better insurance deal than any individual consumer possibly could (were she to bear the economic consequences of a defective product herself). Secondly, the insurance premium would be passed on to consumers by being embedded in the product price [4].

[1] Directive 85/374/EEC.

[2] DIRPOLIS Institute, Scuola Superiore Sant’Anna.

[3] Bertolini A (2013) Robots as products: the case for a realistic analysis of robotic technologies and the law. Law Innov Technol 5(2):147–171.

[4] Michael Decker, “Technology Assessment of Service Robotics. Preliminary Thoughts Guided by Case Studies” in Michael Decker and Mathias Gutmann (eds), Robo- and Informationethics: Some Fundamentals (Lit Verlag 2012) 64; Directorate General Health and Consumer Protection (DG SANCO), “Guidance document on the relationship between the General Product Safety Directive (GPSD) and certain sector directives with provisions on product safety”, second chapter, November 2015.

