The Responsibility Gap. Responsibility is one of the most important and notoriously difficult concepts in ethics, and it is a term that can take on a variety of meanings. We can distinguish between two specific notions of responsibility: backward-looking or passive responsibility, and forward-looking or active responsibility. In the case of the former, backward-looking responsibility, we hold a person responsible retrospectively, that is, after something has happened. And by something, we often mean some negative event. Under backward-looking responsibility, a responsible party might be required to provide an account of why something has happened, or to pay for damages inflicted upon others. Forward-looking responsibility, on the other hand, refers to responsibility taken to ensure a certain outcome or state of affairs before something has happened. Under forward-looking responsibility, for example, a control room engineer in a power plant may be responsible for ensuring that temperatures within the system do not exceed particular limits. The limits and the responsibility have both been established in advance of the outcome.

So far, responsibility seems pretty straightforward. However, as soon as you start adding complexity by adding actors, whether human or non-human, it quickly becomes difficult to identify who is responsible for a specific outcome or a specific task. We call this difficulty the responsibility gap. In this lecture, we will look at two potential occurrences of a responsibility gap resulting from new technological developments. We will see that in both examples, technological developments introduced the need for human actors to take on new responsibilities in order to avoid these responsibility gaps.

Our first example comes to us from the popular taxi app, Uber. By now, most of us will be familiar with Uber and know that it is an electronic platform for connecting private drivers with passengers who need a ride. From its inception, Uber has had its share of supporters and detractors. There are those who enthusiastically welcomed Uber from the outset as an innovative and more efficient way of organizing transportation. And while Uber may in fact provide some flexibility, critics have pointed out some weak points. For example, Uber's pricing structure has drawn attention for its lack of transparency. Prices are established using algorithms that are nearly impossible for outsiders to understand; exactly how the price of a ride has been established is anyone's guess. Despite this, we are able to observe certain things, like significant spikes in the price of a ride during extreme weather or other emergencies. During a snowstorm in New York, for example, the cost of an Uber ride was eight times more expensive than the same ride under normal weather conditions. A similar price increase was seen during a hostage crisis in Sydney in 2014.
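To give a feel for how such spikes can arise, here is a minimal, purely hypothetical sketch of demand-based surge pricing. Uber's actual algorithm is proprietary and, as noted above, effectively opaque; the function names, the price cap, and the linear demand/supply rule below are all illustrative assumptions, not a description of Uber's method.

```python
# Hypothetical sketch of demand-based surge pricing. This only
# illustrates how a simple supply/demand multiplier could produce the
# kind of price spike described above (e.g., roughly 8x in a snowstorm).

def surge_multiplier(ride_requests: int, available_drivers: int,
                     cap: float = 8.0) -> float:
    """Return a price multiplier based on the demand/supply ratio."""
    if available_drivers == 0:
        return cap
    ratio = ride_requests / available_drivers
    # No surge while supply covers demand; otherwise scale with the ratio,
    # up to an assumed cap.
    return min(max(1.0, ratio), cap)

def ride_price(base_fare: float, requests: int, drivers: int) -> float:
    return base_fare * surge_multiplier(requests, drivers)

# Normal conditions: supply roughly matches demand.
print(ride_price(10.0, requests=100, drivers=120))  # 10.0
# Snowstorm: demand spikes while supply collapses.
print(ride_price(10.0, requests=800, drivers=100))  # 80.0, an 8x fare
```

Even a toy model like this makes the ethical point visible: the rider sees only the final price, while the rule that produced it, however simple, remains hidden inside the platform.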
Another criticism that has been leveled against Uber is that it serves some neighborhoods exceedingly well while ignoring others entirely, an inconsistency that may create or exacerbate existing inequalities between those living in well-served areas and those living beyond their boundaries. Although taxi services are commonly understood to be private services, the public's ability to access mobility services is also seen as a public value, which explains at least in part why governments heavily regulate this sector in terms of price competition, passenger safety, and driver training and licensing.

Such regulations are intended to secure not only public values like safety, market fairness, and pricing transparency, but also equal access to mobility services. However, the Uber taxi app is not formally considered a taxi service. Remember, it is, after all, only an app designed to bring together two parties in search of one another. Since Uber is not officially considered a taxi service, official taxi service regulation does not apply. All of this prompts the question: just how do we go about securing public values for new platform services like Uber?

Let's focus for a moment on the different prices Uber used in different neighborhoods and the resulting disparities in provision of service between those neighborhoods. If we agree that it is undesirable that some neighborhoods are not as well served as others, who is responsible for resolving such disparities? Are the developers of the Uber taxi app responsible? Should responsibility perhaps fall on consumers? Or do we think the situation is not actually negative, but simply the result of market forces, with prices merely reflecting an equilibrium between supply and demand?

So here we see technological platforms creating new opportunities on the one hand, as Uber has done in the case of transportation and Airbnb in the case of temporary accommodation, while creating responsibility gaps on the other, because the existing framework of oversight and regulation does not apply. In such cases, it is not obvious who should take the responsibility to fill the gap. Most policymakers lack the technological knowledge to fully understand how these platforms work. Given the mediating role these technological platforms play, engineers with knowledge of them may have an important role to play. This is all the more important when these platforms affect public values, as is the case with Uber. This example shows that technological applications may also introduce new responsibilities for humans, even when these technologies take over human tasks, like aligning supply and demand.

A second example of how new technological developments create new responsibilities concerns artificial intelligence, by which we mean intelligence exhibited by machines. Colloquially, the term artificial intelligence is applied when a machine mimics cognitive functions typically associated with humans, such as learning and problem solving. Some people now fear that we will create autonomous systems that will become even smarter than humans and will consequently come to have control over us. They see a future in which we are taken over by machines, and the smarter these machines become, the more difficult it will be for humans to intervene. The implicit assumption in this view is that responsibility is a zero-sum game: more responsibility for machines automatically means less responsibility for human beings. Indeed, this would be an undesirable state of affairs. So how can we prevent it?

To show how artificial intelligence can be constrained, let's look at it in the context of the military, where artificial intelligence is being used to develop autonomous weapons that can select and fire on human targets without any human intervention. If more responsibility for machines indeed means less responsibility for humans, this would lead to a responsibility gap: if an autonomous weapon were to erroneously kill an innocent person, no person could be held accountable, since machines cannot be punished for immoral behavior.
However, we could also try to develop mechanisms for keeping machines under human control. In other words, when more power goes to machines, in the sense of machines that are able to do more radical things, like autonomously choosing targets, this should be balanced not by less responsibility for human beings but by more. In the context of autonomous weapons, the notion of meaningful human control has already been introduced as a criterion for balancing technological advances in artificial intelligence with supervisory responsibility by human beings. In other words, every technical advance should be counterbalanced by human control. This way, no matter how smart autonomous systems become, they will never have final responsibility; humans will always have the possibility of intervening. For some engineers, the military context might feel like a rather far-flung domain, but similar considerations apply in other contexts, like autonomous vehicles or autopilots in airplanes. If you reflect for a moment, you will be able to think of an example in which technology has gained power, and to develop a strategy for avoiding the resulting responsibility gaps.
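To make the idea of meaningful human control concrete, here is a minimal sketch of one possible mechanism: a human-in-the-loop gate, under the assumption that the system may only propose actions and that a named operator must explicitly approve each one before it is executed. The class and function names are hypothetical illustrations, not a real control or weapons API.

```python
# Minimal sketch of "meaningful human control": the machine may propose
# actions, but a named human operator must explicitly approve each one
# before execution, so the final decision stays with a person.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    confidence: float  # the system's own estimate, shown to the operator

def execute(action: ProposedAction) -> None:
    print(f"Executing: {action.description}")

def human_in_the_loop(action: ProposedAction, operator: str) -> None:
    """Gate every machine-proposed action behind explicit human approval."""
    answer = input(
        f"[{operator}] System proposes '{action.description}' "
        f"(confidence {action.confidence:.0%}). Approve? [y/N] "
    )
    if answer.strip().lower() == "y":
        # The operator, not the machine, authorized this outcome.
        execute(action)
    else:
        print("Action vetoed; nothing executed.")

human_in_the_loop(ProposedAction("engage target Alpha", 0.93),
                  operator="Lt. Example")
```

The design choice is deliberate: the machine supplies information and proposals, but the approval step keeps the final decision, and therefore accountability, with a named human being, however capable the system becomes.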