What happens if a car on autopilot causes an accident? One of the most discussed legal issues related to the use of Artificial Intelligence is that of civil liability for damage caused by intelligent systems. What exactly are we referring to when we talk about civil liability? We are referring to liability for damage resulting from the breach of a private law obligation, i.e. one concerning relations between private citizens. Put very simply: whoever causes damage to others is obliged to compensate for it. Normally, attributing civil liability to a person, and therefore an obligation to compensate damages, requires the presence of a subjective element in the form of wilful misconduct (i.e. the consciousness and will to produce the damage) or negligence, imprudence or inexperience (i.e. without the consciousness and will to produce the damage). However, the legislation provides for some exceptional cases of strict liability, which requires only the existence of a causal link between a harmful event and the conduct of the party causing the damage, regardless of the presence of the psychological element of malice or fault.

The issue is far from irrelevant. As technology progresses and spreads, intelligent systems, like humans, will undoubtedly make mistakes, take wrong decisions and cause damage. The most classic example is self-driving cars. Indeed, there have already been harmful events in this area attributable to the conduct of autonomous systems. The first fatal accident involving a self-driving car dates back to 7 May 2016, when the driver, Joshua Brown, decided to rely on the Autopilot of his Tesla Model S while travelling on a highway in Florida. While he was watching a film, the car crashed into a lorry that was turning left from the opposite lane. The Californian car manufacturer explained that it was the colour of the truck that fooled the system: its white side, perpendicular to the car, could not be distinguished from the sky, which was particularly bright at the time. This was certainly a rare occurrence, but one that prompts several considerations from a legal point of view.

To understand the relevance of the topic, we can give further examples, which cover almost all areas of application of AI technologies. In the health and medical-surgical sectors, AI is increasingly being used for diagnostic purposes, just as, in the care of people, its use is also envisaged for emergency purposes. In the banking sector, predictive systems for managing the risk of granting credit to customers are becoming increasingly popular. In the field of human resources, intelligent systems supporting the recruitment and selection of personnel are becoming widespread. In the insurance sector, image recognition systems based on neural networks are beginning to be used on a massive scale to speed up claims settlement and make it more reliable. And so on.

If we think of these examples, we realise that the possibility of an error by the intelligent system is a real risk: when asked a particular question, the system could give an objectively incorrect answer on the basis of incomplete or partial parameters. Likewise, the damage that could result from an AI choice or decision is concrete and real. And identifying the party liable to compensate for such damage is not a matter for jurists only, nor one destined to be discussed only in court.
On the contrary, the choices that the legislator will make in the future regarding civil liability linked to the production and use of AI can act as either a brake on, or an incentive to, development and innovation. They can steer companies' "make or buy" choices and, ultimately, decide who should bear the costs of technological progress.

The issue of allocating liability for damage (a problem which, in fact, arises with any technology) is further complicated, in the case of AI, by the fact that intelligent systems are often based on self-learning algorithms, whereby the system learns and determines its own behaviour on the basis of its own experience and its interaction with the external environment. This means that whoever designs, programs or builds the system may not be able to predict or know in advance how it will react to its surroundings. The question therefore arises as to who is to be held liable, and thus must compensate the resulting damage, for conduct that may be unpredictable or unavoidable.

As we will see in more detail in the next lessons, the answer to this question is not always easy and straightforward under current regulations. For this very reason, the issue has been studied by the European institutions for some years now. In particular, the European Parliament, with its Resolution of 20 October 2020, which we will examine in detail in one of the next lessons, proposed the introduction of a specific liability regime for artificial intelligence. In other words, the European Parliament would like to see new European legislation, common to all Member States, laying down uniform rules on compensation for damage caused by AI.