Liability and Emerging Digital Technologies: An EU and US Perspective

Investigator:

Maria Lillà Montagnani

Abstract:

Emerging Digital Technologies (EDTs) – e.g. the Internet of Things and of Services (IoT/IoS), Artificial Intelligence (AI), advanced robotics, and Autonomous Vehicles (AVs) – are currently at the centre of the debate on the relationship between humans and machines. On the one hand, EDTs are expected to lead to fundamental discoveries, opening up new possibilities and bringing major benefits to our society and economy. On the other hand, EDTs are feared to spell the end of humankind as we know it, given their potential to replace human beings. Between these polarized positions stands the fact that smart machines carry certain risks – or amplify existing ones – not only for those developing EDTs, but in particular for the consumers who use them, which, in turn, generates consumer distrust towards the technologies themselves. These risks are closely related to the technological features of EDTs (e.g. autonomy, data-drivenness, openness, vulnerability, human-likeness) which, compared to those of previous technologies (i.e. Digital Technologies), are either novel or profoundly amplified. From a legal standpoint, the concretisation of these risks generates an almost infinite list of possible “EDT-related damages” that ought to be addressed through the current liability regimes. Against this background, the project consists of an analysis of the liability rules applicable to EDTs – hard- and soft-law rules, as well as best practices where already in place and traceable – as implemented in the EU, and a comparison with the corresponding rules in the US, in order to identify the normative foundations on which a liability regime for new technologies may be built. While it is often maintained that the objective of the liability system is to compensate victims, this cannot be regulators' only goal; it should go hand in hand with promoting innovation by providing incentives to those actors who are best situated to take precautions against harm.
To this end, it is crucial to understand whether the existing rules present gaps in addressing the possible damages arising from the use of IoT, AI, advanced robotics and autonomous systems, and to identify solutions that would build trust in these technologies.