So how can institutions and employers really take equity seriously? That's the focus of this video. In your reading you learned about responsible innovation, and I think that's a useful framework for thinking about the different categories you want to consider when you're trying to incorporate equity more explicitly. Now, the reading you did for today discusses responsible innovation along multiple dimensions, but I find it especially useful for focusing on equity and justice. I'm going to go through these categories and hopefully give you a sense of why I think that's the case and what they really mean.

The first is anticipation. Invariably, when you're talking to somebody about technology, they will say something like, well, you can't predict the consequences of technology, right? These are unanticipated consequences, and so we just need to deal with them once they emerge, and sometimes they're negative. But technology is overwhelmingly good, and, they might add for good measure, technology is also objective, right? It's purely technical. What I hope these readings show you is that, of course, it's not really objective and purely technical. All technologies have values deeply embedded in them. But the readings also show that you can, in fact, anticipate the benefits, costs, and implications of technologies in advance. And what that means for technology developers as well as policymakers is that once you know, you can do something about it. That's the idea behind the anticipation part of responsible innovation.

The second category, reflexivity, is focused very much on the institution, and it requires a really difficult kind of reflection. It requires you to think about what your values are, what your assumptions are, and what your privileges are. And of course, if you're going to do that, you definitely have to leave behind the idea that technology sits entirely outside of society and doesn't have any embedded values. This is where the short reading you did on the Flint water crisis is useful. What you see there, which I think is so fantastically interesting, is that not only did the city, the state, and environmental regulators essentially ignore the cries of residents and their concerns about the quality of the water, but even when an engineer came in to address the situation, he was so focused on the benefits he was providing to the community and the expertise he was bringing that he wasn't able to see his own blinders, his own values and assumptions. In the end, that meant that while the lead crisis in Flint was addressed, the Legionnaires' disease crisis took many, many months for anyone to even acknowledge, because the innovator, the engineer in this case, took up so much space that it was very hard for community members to make that case at the same time.

The third category of responsible innovation is inclusion: including community needs, perspectives, and knowledge. Here you have to be really careful about what that means. You have to work against your own biases, perhaps the assumption that communities may not have the knowledge or the expertise, or that technical knowledge is more important.
This is where reflexivity comes in again: it might be complicated to include all of the communities that need to be represented. But what I hope you've seen in the readings is that there are various mechanisms available to bring different kinds of communities into the conversation and to allow them to speak on their own terms.

And then finally, there's the category of responsiveness. Here the question is: as you're developing or deploying a technology, how do you actually incorporate that knowledge into governance? How do you do something about it, whether through the policies or the priorities that a tech developer or a policymaker is considering? There are a variety of strategies that scholars have suggested and that governments have developed to do this. One, for example, is engaging with scientists and engineers throughout the research process, an approach called midstream modulation, which some of the readings touched on. The aim is to help engineers be more reflexive about the values at stake, to talk with them about the kinds of choices they're making so that they might make different choices as a result. So you can get a sense that all of these dimensions really work together in the process of developing more responsible innovation.

You read about the SPICE project, which took place a few years ago in the UK and is a really interesting example of an attempt to use responsible innovation in practice. More generally, it demonstrates how innovators and researchers can deploy a responsible innovation framework: they can do it through research funding, for example, or in project development, as they did in this case. The SPICE project focused on a technology related to geoengineering. Geoengineering is basically a suite of very large-scale technologies designed, in different ways, to mitigate climate change. The hope is that these technologies might work, but we don't really know, because they are so large in scale. The technology at issue in the SPICE project, what's called stratospheric aerosol injection, is fairly straightforward: you inject aerosols into the stratosphere in the hope that they will reflect sunlight. You can imagine that redirecting sunlight, which is essentially what the SPICE project was exploring, might bring with it some risks. There hasn't been much deployment, but what the UK research councils required in this case was a stage-gating process. The idea was that there would be different stages of the project, with deliberation at each stage and a concrete decision about what we know about the risks and benefits and whether we should move forward. The process was interdisciplinary; it involved social scientists as well as scientists, which allowed for the kind of push and pull, the reflexivity, that I mentioned before. It was still driven by scientific priorities and notions of autonomy, but it made those scientists a bit more sensitive and a bit more reflexive. And interestingly, as you read, they didn't actually get very far. So one might say, okay, this didn't really work very well.
But in fact the process created a sensitivity that led the researchers to abandon the project when they discovered that the project's leader had an intellectual property interest in the technology. They were so concerned about the implications of that, and about how the public might perceive it, that they proactively addressed those issues. So there was anticipation there as well. That gives you a sense of how the framework might work, even though this was a case in which you could argue that it really didn't. The point is that you can be responsive, you can make choices along the way, and this is especially important when you're talking about a case like geoengineering that is incredibly fraught.

Another example I want to give you is from my own work. Over the last few years at the University of Michigan, I have been directing something I call the Technology Assessment Project, and fundamentally it is based on this idea of anticipation. We started by analyzing a technology you may have heard of, facial recognition technology, focusing on its use in K-12 schools, which was just starting around that time, in 2019 and 2020. We thought to ourselves: this is an important emerging technology, and it raises some potentially problematic issues. The idea behind the project is specifically to use a method I call the analogical case study method. In the analogical case study method, my research team and I use similar previous technologies to help us anticipate the implications of emerging technologies. When I say similar previous technologies, I mean technologies that are similar in terms of their function, or similar in terms of their potential or projected implications. So in the case of facial recognition technology, for example, we looked at previous surveillance technologies: metal detectors, closed-circuit television, and the incorporation of school resource officers. We looked at dozens of them, but this is just a handful. What we found was that not only are surveillance technologies disproportionately used in historically disadvantaged communities, usually communities of color, but their use has really serious psychological implications, especially on young people. It doesn't just normalize surveillance; it punishes nonconforming behavior and actually narrows what constitutes nonconforming behavior. Because you feel like you're being watched, you become sort of smaller and smaller, which is not necessarily a great thing when you're a child and you're learning who you are, testing boundaries, and forming your identity. The technologies themselves essentially end up performing this kind of disciplining function.

In that report, and in subsequent reports, we offered different ways for tech developers and policymakers to think along the way: not quite the stage-gating approach used in the SPICE project, but something similar, issues to look for and things to consider, such as the data sets the facial recognition technology is built on, how the technology is deployed, and the people who are brought in to deploy the system. Those are all pieces that could be addressed in different ways to produce different kinds of implications, and that's where responsiveness comes into play. So those are just a couple of examples. There are many, many more, but they give you a sense of how you might actually operationalize these principles of responsible innovation.