Article

Products liability legislation needs to catch up with Artificial Intelligence

Much-needed directive to bring liability rules in line with the realities of modern products

The existing products liability legislation in the European Union is now almost 40 years old. The ongoing tech boom and the increasing presence of innovations in everyday life, in particular artificial intelligence used in consumer products and services, make it necessary to adapt the rules to the new realities.

As a result, the EU legislator has decided to address the issue of products liability and artificial intelligence separately: independently of the Artificial Intelligence Act (AI Act) announced in April 2021, the Commission published a new proposal for a directive on liability for artificial intelligence (the AI Liability Directive) at the end of September 2022.

In this article, you will learn:

• what the objectives and principles of the proposed regulations on liability for AI are,
• how the so-called ‘black box’ effect is to be resolved when claiming damages for AI errors, and
• what to consider in the risk management strategy implemented during the life cycle of a high-risk AI system.

A package on artificial intelligence

“The submitted AI Liability Directive complements the AI Act. The primary aim of the proposed regulations is to protect consumers and businesses and to adapt the rules to the dynamic innovation market and the digital age. Thanks to the envisaged regulations, it should be clear who is liable for defective products – from smart technologies such as IoT devices to pharmaceutical products – and when and on what terms.”


Zuzanna Nowak-Wróbel, Associate, Warsaw IP & TMT team

The published package on the one hand amends the existing Product Liability Directive, adjusting it to the technological transformation, and on the other hand proposes new regulations dedicated to artificial intelligence – the draft AI Liability Directive. The draft Directive clearly states that it does not apply to criminal liability – the proposed regulations only address non-contractual liability based on the principle of fault, i.e. for damage caused intentionally or by a negligent act or omission. This includes breaches of privacy or damage caused by errors in AI algorithms and defective AI-enabled products.

According to the existing product liability rules, an AI product is considered defective if it does not provide the safety which a user is reasonably entitled to expect, taking into account, for example, the presentation of the product, its expected use and the moment it was put on the market. The new rules should make it easier to obtain damages if products such as robots, drones or smart home systems become unsafe due to a software update and cause damage to the user.

“The revised rules will give companies legal certainty, allowing them to invest in new and innovative products, and consumers will be able to obtain fair compensation when defective products cause damage. In general, injured parties will be able to enjoy the same standards of protection in the event of damage caused by artificial intelligence products or services as they would in any other circumstances.”


Jakub Pietrasik, Counsel and Head of IP & TMT practice in Warsaw

Presumption of a causal link

The proposed AI Liability Directive introduces a so-called presumption of a causal link: where the triggering event, the damage itself and fault have been established, and a causal link to the operation of the AI system appears likely, a causal link sufficient to claim damages for the actions or failures of the AI system is presumed to exist. However, this presumption is to apply only if certain conditions are met.

“By adopting the presumption of a causal link, one will no longer need to prove and explain how the damage was caused, which often requires considerable technical knowledge and an understanding of complex AI systems. This addresses the so-called ‘black box’ effect, i.e. the difficulty of understanding the algorithms on which AI models are based. We know what information goes into the system and what comes out of it. What we don’t know is how the inference itself works. This is difficult not only for consumers but also for specialists, which is why eliminating this problem at the level of disputes and claims for damages for AI errors is a good step”, adds Zuzanna Nowak-Wróbel.

Facilitating access to evidence

The proposal treats high-risk AI systems separately. Under the current version of the AI Act, high-risk AI systems include any system that requires a conformity assessment involving a notified body, as well as, for example, biometric identification and categorisation systems, autonomous vehicles, scoring systems and systems that manage critical infrastructure (the supply of water, gas, heating and electricity).

The Directive introduces an additional tool that injured persons can use when seeking redress from providers of such systems – the right to request the disclosure of evidence. A provider or user of a high-risk AI system may thus be obliged to disclose the system’s technical documentation, the results of validation tests of its algorithms or the documentation of its quality management system. Here too the legislator introduces a presumption: if the provider does not comply with the disclosure order, the provider is presumed to have failed to exercise due diligence in connection with the operation of the AI system.

Interestingly, in the case of high-risk AI systems, the presumption of a causal link can only arise if the claimant proves that the provider has breached its obligations under the AI Act. Such breaches may include, for example, creating a system using training, validation and testing data that does not meet the quality criteria, designing a system in breach of the rules on transparency of operation and interpretability of results, or breaching the rules on human oversight. In contrast, when pursuing a claim against a user of a high-risk AI system, it must be shown that the user acted contrary to the instructions provided or that the system was exposed to input data not relevant to its intended purpose.

“The risk management strategy implemented during the life cycle of a high-risk AI system can be a useful element in assessing the system’s compliance with the mandatory requirements imposed on providers of such systems. All steps taken by providers within the risk management system, as well as the results themselves, i.e. decisions to adopt or not to adopt certain measures, should therefore be monitored and documented on an ongoing basis, as their correctness and completeness will directly affect not only compliance but also liability for damages”, explains Jakub Pietrasik.

Harmonisation of artificial intelligence legislation

It is apparent that the EU legislator is actively working on AI legislation and aims to harmonise it across all member states. The result is a package of complementary EU regulations relating to AI systems, which currently covers three areas:

• basic horizontal legislation on artificial intelligence systems (AI Act);
• updates and adaptation of sectoral and horizontal product safety rules (amendments to the provisions of the General Product Safety Directive);
• new rules addressing liability issues related to AI systems (draft AI Liability Directive).

“Unlike the AI Act, the AI Liability Directive will not apply directly in EU countries, so the new regulations will have to be implemented, requiring a number of changes to national laws. For the time being, we must remain patient, as there is still a long way to go before the final regulations are drafted and adopted by the member states. Nevertheless, the proposed legislation on liability for AI systems is a very good step towards greater legal certainty in the regulation of new technologies. Manufacturers, insurers and users should gain certainty as to how liability rules will be applied to damage caused by AI systems, and thus in assessing and insuring the risks associated with their AI-based activities”, concludes Jakub Pietrasik.
