In this lecture, I'm going to talk to you a little bit about how you might plan to test your website in an effective way. The interesting thing about the web is that it has really changed how software development culture works in terms of releases. On the web, it is not unusual for a site to release a new version every day, and there may be substantial changes from day to day within a particular website. Think about an application like YouTube: you've got an incredible amount of functionality there. You've got terabytes and terabytes, even petabytes, of data representing videos. You have search capability. You have the ability to play back videos in different ways and in different formats, and to change the speed at which you play them back. In fact, some of you may be using that technology right now to watch me, if you think I talk too slowly. So there's an incredible amount of functionality in these websites, but because the user doesn't have to download anything to a client, you have the ability to rapidly release new versions of that content.

Let me give you a couple of examples. There's a new version of Chrome about every six weeks, and that's about as fast as you'll find updates to client-side pieces of code. What that means is that every six weeks, without you really doing anything, your browser updates itself. Sometimes the changes it makes are noticeable: the tabs might look different, or the search functionality works slightly differently. A lot of the time those changes are hidden under the hood, things like security fixes and speedups to JavaScript. But compare that scale to what we see on the web: Amazon is deploying some new feature, some new aspect of its website, to production every second, 50 million updates per year. That's astonishing. If you look at Netflix, you might figure you're just watching videos, so what changes? Well, they're deploying code 1,000 times per day. Even smaller websites move fast: something like Etsy deploys 50 times per day. Going back to a client-side application, Facebook updates its mobile app bi-weekly. So there's another instance where the pace is different from what you see on the web, but it's still amazingly fast.

As a tester, you might look at this and say, "What do I do?" If we're deploying to production every second, obviously I can't be testing those releases as they come out, at least not in a manual fashion. So we have to have a lot of automation, and we have to decide which things we want to test and how thoroughly we want to test them.

So, what to test? There's a nice split when you're thinking about websites, and in fact it holds for other kinds of domains as well. First, we want to test the physical things: the user interface and the user experience. A website is constructed of pages. How long does it take to load a page? Does it look correct? Within the page itself, we can look at content locations: is each piece of content physically placed in the right spot on the page? Then we have a set of UI elements, things like text boxes and buttons. We should try each one of them out to make sure that when I click this button, it actually does something, or that when I check this checkbox and submit a form, the form reflects the change I made. Then we also want to look at user interactions. Oftentimes the user types things into a webpage and we validate on the client side. Say a value must be between 0 and 10,000; if the user types 100,000, we should get a client-side error. So we want to make sure that validation works correctly, and that we can submit things back to the web server correctly.
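To make this concrete, here is a minimal sketch of what checks like these might look like using Playwright with TypeScript. Everything specific in it, the URL, the element IDs, the error message, and the two-second load budget, is a hypothetical stand-in for whatever your own pages actually define.

```ts
import { test, expect } from '@playwright/test';

test('checkout page loads quickly and UI elements respond', async ({ page }) => {
  // Hypothetical page; the 2-second budget is an illustrative threshold, not a standard.
  const start = Date.now();
  await page.goto('https://example-store.test/checkout');
  expect(Date.now() - start).toBeLessThan(2000);

  // Clicking a button should visibly do something.
  await page.click('#apply-coupon');
  await expect(page.locator('#coupon-status')).toBeVisible();

  // Checking a box and submitting the form should reflect the change we made.
  await page.check('#gift-wrap');
  await page.click('#submit-order');
  await expect(page.locator('#order-summary')).toContainText('Gift wrap');
});

test('client-side validation rejects out-of-range input', async ({ page }) => {
  await page.goto('https://example-store.test/checkout');
  // The quantity field accepts values between 0 and 10,000, so 100,000 should
  // trigger a client-side error before anything is sent to the server.
  await page.fill('#quantity', '100000');
  await page.click('#submit-order');
  await expect(page.locator('#quantity-error')).toContainText('between 0 and 10,000');
});
```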
Then we take a step back and look at the functionality of the website. Websites often have a set of common features. Say you're building an electronic store: you're going to have account management, search over a product catalog, orders, pricing and sales. These kinds of things are the features of your application. So we want to test the features that we have, and we use the feature concept to organize our user stories and use cases. What we may then do is build up a checklist: when we're testing websites, here are the things we should always test, independent of whatever domain we're in, and here are the functional things specific to my domain that I also want to make sure work correctly.

This split works well for websites, but it also works well for other things, like embedded domains. If you're building a drone, the physical things involve the propellers, the battery, the camera (making sure the camera works correctly), and the game pad controller you use to fly the drone, while the functional things are what the drone does: it should be able to hover, it should be able to use its cameras to track particular objects on the ground, it should be able to fly to waypoints. So this is, again, a way of splitting up the testing obligations that you have.

Once you've done this, you can come up with testing coverage goals. What do we want to cover in each of the physical and functional spaces? Do we want to make sure that we press every button on the website? What if we're doing regression testing and have already tested a lot of that code before; what do we want to retest when we come back to it? Then we can also think in terms of the diversity of browsers, the different ways a user is going to experience the website. You may want, say, 100% coverage on Chrome, 80% coverage on Edge, and maybe 90% coverage on Safari, so that you know where the browsers are likely to differ and make sure you cover those points within your application. The same goes for mobile platforms: how much do I want to cover on Android versus iOS? I'll show a small sketch in a moment of how goals like these can be encoded in a test runner's configuration.

Have we prioritized our goals by risk and criticality? If we're adding a new feature, clearly that should get the lion's share of the testing we do, so make sure when you plan that you understand how much of your testing budget is going to go toward new things, toward the things those may affect, and toward simply retesting the rest of your application. Then, finally, we have to determine what success means. A lot of these websites aren't safety-critical, and some bugs are okay, so we have to decide what we consider success. How fast does a page have to load? How well structured does the page content have to be? Of course, what you're willing to tolerate is one thing if you're Amazon with a million customers, and another thing if you're just getting started.
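Here is that configuration sketch: a minimal example of expressing per-browser coverage tiers in a Playwright config. The project names and file globs are hypothetical, and the tiers only roughly approximate targets like the 100%/90%/80% split above.

```ts
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  projects: [
    {
      // Full coverage: the entire suite runs on Chromium.
      name: 'chromium-full',
      use: { ...devices['Desktop Chrome'] },
    },
    {
      // Roughly 90%: skip specs that exercise Chrome-only behavior.
      name: 'safari-core',
      use: { ...devices['Desktop Safari'] },
      testIgnore: ['**/chrome-only/**'],
    },
    {
      // Roughly 80%: run only the areas where engines tend to differ.
      name: 'edge-differences',
      use: { ...devices['Desktop Edge'] },
      testMatch: ['**/rendering/**', '**/forms/**'],
    },
    {
      // The same idea extends to mobile platforms.
      name: 'android-core',
      use: { ...devices['Pixel 5'] },
      testMatch: ['**/mobile/**'],
    },
  ],
});
```

Each project then shows up separately in test reports, which makes it easy to see whether you're actually hitting the coverage goal you set for each browser.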
So, just to recap: the pace of these web releases is relentless. It's on a whole different scale than anything we've had in software development before. Because of that, it's probably not possible to test everything. What we need is a test plan that brings in a lot of automation, and we also have to think about test triage: determine which testing goals are most important, determine which of those goals we can automate so that they're just part of the deployment process itself, and then look at which remaining things we have to test at a user level. The approach we come up with should be systematic and cost-benefit driven.
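One common way to make that triage concrete is to tag tests by priority and let the deployment pipeline run only the highest-priority slice on every release. A minimal sketch, again against the hypothetical store used earlier:

```ts
import { test, expect } from '@playwright/test';

// Highest-priority check: tagged so it runs on every single deployment.
test('search returns results @smoke', async ({ page }) => {
  await page.goto('https://example-store.test');
  await page.fill('#search', 'widget');
  await page.press('#search', 'Enter');
  await expect(page.locator('.search-result')).not.toHaveCount(0);
});

// Lower-priority regression check: runs on a nightly schedule instead.
test('saved addresses survive an account update @regression', async ({ page }) => {
  // ... full account-management scenario elided ...
});
```

A CI pipeline could then run `npx playwright test --grep @smoke` as a gate on every deployment and the full suite, @regression tests included, overnight, which is exactly the kind of cost-benefit split the plan should make explicit.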