[MUSIC] Welcome to Introduction to User Experience Design. Today we will cover evaluation. In the previous module, you learned about the importance of prototyping. In this module, we consider the role of evaluation in the user interface design cycle. We originally defined design as the development of a novel creation to meet some user need. The goal of the novel design is to provide an improved user experience over the previous design. But how do we know we've accomplished this goal? Evaluation is the answer. It allows us to ascertain that we are improving the user experience.

Evaluation requires that you collect data. This may be quantitative data based on objective measures of performance or subjective measures of preference, or it may be qualitative data based on interviews. Now might be a good time for you to review the module on qualitative and quantitative data.

Evaluation generally falls into two categories. Formative evaluation is conducted early in the design process with low-fidelity prototypes, while summative evaluation is conducted with high-fidelity prototypes or a near-final interface. The type of data we collect is related to the type of prototype we are using. Low-fidelity prototypes require that the designer collect the data, for example, by timing the task or counting the number of clicks during the task. High-fidelity prototypes may produce data that the designer can access and analyze, which tells us how the system was used. For example, there might be timestamps of when the user started and ended a session, and log data of how the user interacted with the system. The prototype we have will also affect where we can conduct our evaluation. Low-fidelity prototypes require a controlled environment, for example, a laboratory setting or an office. On the other hand, high-fidelity prototypes may be deployed in the wild. We might, for example, put our new app on the user's phone, or we might place a kiosk in an area where the general public can interact with it.

Here we come full circle, in that it is through the evaluation phase that we show that we are providing an improved user experience, that is, a design that is both useful and usable. By useful, I mean that it allows a user to complete a task. By usable, I mean that the user can accomplish the task via the interface in an effective, efficient, and satisfying manner. Ascertaining whether the task can be completed with our new design is a pretty low bar. A thorough evaluation requires that we consider whether the design is usable. This means that we measure to what degree the goals of the task are met. This can be accomplished by collecting quantitative data in the form of questionnaires or log data of the path the user traversed while completing the task, or qualitative data in the form of user interviews.

We can ascertain whether the design is efficient by evaluating various task completion measures. These include the time to complete the task, the number of clicks, or the number of errors made while performing the task. Notice that we can infer learnability and memorability from some of the same measures I just mentioned. Learnability refers to how easily a new user can complete a task successfully. We can get an objective measure of this by looking at the number of clicks or the amount of time needed to complete a task, and then comparing these to expert performance. Memorability refers to how easy it is to remember how to use a product, or more specifically, how to perform a given task on an interface after repeated trials. We can measure the amount of time or number of clicks to complete a task over repeated trials to get a measure of memorability.
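To make these measures concrete, here is a minimal sketch, in Python, of how logged trial data might be summarized into efficiency, learnability, and memorability indicators. It is an illustration rather than part of the course materials: the trial records, field names, and expert baseline values are all hypothetical.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Trial:
    """One logged attempt at a task: duration in seconds, clicks, and errors."""
    seconds: float
    clicks: int
    errors: int

# Hypothetical log data: the same user repeating one task across sessions.
trials = [
    Trial(seconds=95.0, clicks=24, errors=3),   # first attempt
    Trial(seconds=61.0, clicks=17, errors=1),   # second attempt
    Trial(seconds=42.0, clicks=12, errors=0),   # third attempt
]

# Hypothetical expert baseline (e.g., a member of the design team).
expert = Trial(seconds=35.0, clicks=10, errors=0)

# Efficiency: average time, clicks, and errors across trials.
print("mean time (s):", mean(t.seconds for t in trials))
print("mean clicks:  ", mean(t.clicks for t in trials))
print("mean errors:  ", mean(t.errors for t in trials))

# Learnability: how close the first attempt is to expert performance.
print("first trial vs expert (time ratio):", trials[0].seconds / expert.seconds)

# Memorability: does performance hold up or improve over repeated trials?
print("time per trial:", [t.seconds for t in trials])
```

In practice, these values would come from the session timestamps and interaction logs described above, and, as discussed next, they are only meaningful when compared against a baseline such as the status quo interface or expert performance.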
We also need indicators of the user's subjective satisfaction while executing the task. These can cover both cognitive and emotional aspects of task completion. We will refer to cognitive measures as those that relate to the mental effort required to complete the task. For example, were the steps required to complete the task intuitive? For the emotional component, we want to have a sense of the feelings that the user experienced as she completed the task. These two might be correlated: a task that was unintuitive might leave the user feeling frustrated.

Here's a sample of the kind of data matrix you might collect after a usability session. This is not exhaustive; it's just an example. It's important to remember that the usability measures we just discussed must be considered in relation to either the values obtained using the status quo interface, that is, the user's current practices, or, if we are designing a completely new interaction, some other objective measure of success, for example, the values obtained when the design team, whom you might consider experts, uses the novel design.

Conducting an evaluation necessarily overlaps with material we covered in other modules. With this in mind, I urge you to review that material. Since this is an introductory course, I limited the evaluation material to usability testing. Thus, I did not cover more advanced topics, for example, analytic evaluation, in which experts are used to simulate or predict typical user performance. Some common examples are heuristic evaluation and the cognitive walkthrough. I encourage you to seek out advanced courses that cover these and other topics.

Once the evaluation data is collected and analyzed, the designer is in a position to iterate on the design. This may lead to another round of alternative designs. It might lead to prototype building and more evaluation. When do you stop? Well, one rule of thumb is that you stop when you have met your design objectives. And this translates to an evaluation cycle that shows that the user can interact with your design in an effortless and enjoyable manner.

We're now at the end of the course. We used the four-step design cycle to explore user experience. I hope you enjoyed this introductory class, and if you did, I urge you to take other related Coursera courses to advance your knowledge of this fascinating field. [SOUND]