To give you one real example of how risk management has been incorporated into the requirements lifecycle, we're going to look at NASA's Defect Detection and Prevention approach, also known as DDP. The approach aims to systematize the identify, assess, and control cycle so that risk management is integrated into the requirements engineering process. It was developed at NASA and published in 2003, and it includes a quantitative reasoning tool with visualization facilities as well. The technique handles multiple risks in parallel and explicitly considers risk consequences. The whole technique could be simplified, but in general it consists of three basic steps. In the first step, you elaborate a risk impact matrix. Then, you elaborate a countermeasure effectiveness matrix. Lastly, you determine an optimal balance of risk reduction versus countermeasure cost. Then you rinse and repeat as you go through.

In the first part of DDP, we build a risk consequence table with domain experts. This table captures the estimated severity of the consequences of each risk. For each objective and associated risk, the table specifies the estimated proportion of attainment of the objective that is lost if that risk occurs. If there is no loss, you put a zero in that cell. A one indicates total loss, and everything else scales somewhere between zero and one.

In step two, for each pair of countermeasure and weighted risk, you specify an estimate of the fractional reduction of the risk if the countermeasure is applied. If there is no reduction, you put a zero. If the countermeasure totally eliminates the risk, you put a one. By looking at each row, you can then calculate the overall effect of applying a set of countermeasures to the corresponding risks.

Each countermeasure has a benefit in terms of risk reduction, but countermeasures also have costs associated with them, and selecting one may add to the overall cost or, in some cases, reduce it. So in step three, we need to estimate those costs with domain experts. The DDP tool can then visualize the effectiveness of each countermeasure together with its cost. There's a small numeric sketch of this bookkeeping coming up in a moment.

The Defect Detection and Prevention process can now be used to visualize quite a few things. A risk-based balance chart can show the residual impact of each risk on all objectives if some particular combination of countermeasures is selected. From this chart, we can then explore optimal combinations of countermeasures, where we're trying to achieve risk balance with respect to the cost constraints. Finding the optimal combination equates to a 0/1 knapsack problem, where you're trying to balance your risk reduction and your cost.

If you aren't familiar with the 0/1 knapsack problem, let's say that you're going to school or to work and you need to fill your bag with everything that you need. Take a second and think: what would you put in your bag? In what order would you put it in? And why? Now, many of us would start by putting in a laptop, then a notebook, maybe a laptop or phone charger, and so on. Given that I live in Colorado where the air is very dry, next I would put in a bottle of water. Oh, and a sandwich. I almost forgot my sandwich. I don't want to go through the work day without my sandwich, or pizza, we'll see. Anyway, what would you put in? Each item that you put into your knapsack takes up space and has some value to you, and all of those values are different.
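Before we carry the bag analogy back to countermeasure selection, here is a minimal sketch, in Python, of the bookkeeping behind steps one through three. The objectives, risks, countermeasures, and numbers are all made up, and combining countermeasure effects multiplicatively is just one plausible reading of the calculation, not necessarily the exact formula the DDP tool uses.

```python
# Minimal sketch of DDP-style bookkeeping; all names and numbers are illustrative.

# Step 1: risk impact matrix -- fraction of each objective lost if a risk occurs.
impact = {                        # objective -> {risk: loss fraction in [0, 1]}
    "on-time delivery": {"sensor failure": 0.6, "requirements churn": 0.3},
    "data quality":     {"sensor failure": 0.9, "requirements churn": 0.1},
}

# Step 2: countermeasure effectiveness matrix -- fractional risk reduction in [0, 1].
effectiveness = {                 # countermeasure -> {risk: reduction fraction}
    "redundant sensor":    {"sensor failure": 0.8},
    "early prototyping":   {"requirements churn": 0.5},
    "extra design review": {"sensor failure": 0.3, "requirements churn": 0.4},
}

# Step 3: estimated cost of each countermeasure (arbitrary units).
cost = {"redundant sensor": 40, "early prototyping": 25, "extra design review": 10}

def residual_impact(selected):
    """Residual loss per objective if the selected countermeasures are applied.

    Assumes reductions combine multiplicatively: a risk reduced by 0.8 and then
    by 0.3 keeps (1 - 0.8) * (1 - 0.3) of its original impact.
    """
    residual = {}
    for objective, risks in impact.items():
        total = 0.0
        for risk, loss in risks.items():
            remaining = loss
            for cm in selected:
                remaining *= 1.0 - effectiveness.get(cm, {}).get(risk, 0.0)
            total += remaining
        residual[objective] = total
    return residual

selected = {"redundant sensor", "early prototyping"}
print("residual impact per objective:", residual_impact(selected))
print("total countermeasure cost:", sum(cost[cm] for cm in selected))
```

Once you can compute the residual impact and total cost for any subset of countermeasures like this, picking the best subset is exactly the packing question from the bag analogy.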
Here, we're trying to create our product, our knapsack, with the amount of risk and cost in mind; the risk and the cost have to fit within our constraints. We want something as functional as possible while still fitting those constraints. By the way, if you aren't really familiar with the knapsack problem, know that there are also fractional, or partial, knapsack problems. In those cases you're allowed to take part of an item, which means you could cut your laptop in half and put half of it in. Obviously, that makes much more sense in terms of adding and deleting functionality of a program, but know that those algorithms are out there.

The 0/1 knapsack problem is NP-complete, and the knapsack problem is interesting from the perspective of computer science for many, many reasons. There are really two different problems here: the decision problem and the optimization problem. The decision form of the knapsack problem asks: can a value of at least V be achieved without exceeding some particular weight? That problem is NP-complete, which means there is no known algorithm that is both correct and fast in all cases, and by fast we mean running in polynomial time. While the decision problem is NP-complete, the optimization problem is NP-hard: solving it is at least as difficult as the decision problem, and there is no known polynomial algorithm that can tell, given some proposed solution, whether or not it is optimal.

Since we do not have perfect solutions here, we instead use approximate or heuristic methods, and these are appropriate for approaching any kind of 0/1 knapsack-style problem. The main ways of doing this are dynamic programming, which gives exact answers in pseudo-polynomial time, and heuristic or learning-based search techniques. I encourage you to read up more on this if you've never heard of any of it before.

In DDP, we use a simulated annealing optimization procedure to find near-optimal solutions. For example, we try to maximize satisfaction of objectives under some particular cost threshold, or we could minimize cost above a satisfaction threshold. The optimality criterion can be set by the requirements engineer or by the developer, but it's something that you really should be discussing with your project experts as well. A small sketch of what such a search can look like follows below.

For more information about DDP, check out the reading by Feather and Cornford from 2003. It's called "Quantitative Risk-Based Requirements Reasoning" and is included in one of our readings.
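To make the simulated annealing idea a bit more concrete, here is a minimal, self-contained sketch in Python. The benefits, costs, cost cap, and annealing schedule below are all invented for illustration; the actual DDP tool's scoring function and search parameters are more elaborate.

```python
import math
import random

# Toy simulated-annealing search for the kind of 0/1 selection problem DDP faces:
# pick a subset of countermeasures that maximizes risk-reduction benefit without
# exceeding a cost cap.  All numbers here are invented.
benefit = [30.0, 18.0, 12.0, 25.0, 9.0]   # risk reduction contributed by each countermeasure
cost    = [40.0, 25.0, 10.0, 35.0, 8.0]   # estimated cost of each countermeasure
cost_cap = 60.0

def score(selection):
    """Total benefit of a selection, or -inf if it busts the cost cap."""
    total_cost = sum(c for c, s in zip(cost, selection) if s)
    if total_cost > cost_cap:
        return float("-inf")
    return sum(b for b, s in zip(benefit, selection) if s)

def anneal(steps=20000, start_temp=10.0):
    random.seed(0)
    current = [False] * len(benefit)              # start with nothing selected
    best = current[:]
    for step in range(steps):
        temp = start_temp * (1.0 - step / steps) + 1e-9
        candidate = current[:]
        flip = random.randrange(len(candidate))   # toggle one countermeasure
        candidate[flip] = not candidate[flip]
        delta = score(candidate) - score(current)
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature drops.
        if delta >= 0 or random.random() < math.exp(delta / temp):
            current = candidate
            if score(current) > score(best):
                best = current[:]
    return best

best = anneal()
print("selected countermeasures:", [i for i, s in enumerate(best) if s])
print("benefit:", score(best), "cost:", sum(c for c, s in zip(cost, best) if s))
```

The key design choice is the acceptance rule: the search always keeps improvements, but early on, while the temperature is high, it will sometimes accept worse selections too, which helps it escape locally good but globally poor combinations of countermeasures.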