Now that you've learned a bit about how to build a linear regression model for a real problem, we're going to talk about some specifics of machine learning models in general. In this video, we'll talk about model accuracy and model interpretability and the trade-offs that come with those two things. Generally, machine learning models fall somewhere on a spectrum from very restrictive, or interpretable, to very flexible, or accurate. A model will either let us understand its predictions and the features it relies on, or give us more accuracy, and there's usually a trade-off where you have to prioritize one or the other.

Generally, the more complex, more flexible models, like the one you see on the right, will perform better and have higher accuracy, but they're difficult to interpret. That means it's hard to see what the model is actually doing: which features are important, or how the features relate to the target, or outcome. On the other hand, on the far left, more restrictive models are more interpretable, because you can see exactly what the coefficient for a feature is, for example, and you can understand how the model ranks the different features by importance for predicting the outcome. But these more restrictive models generally might not be as accurate or perform as well as the more flexible models.

Choosing which type you want to use involves both understanding the trade-off and knowing what the goal of the model is, so what your actual data science objective is for your project. If your goal is to determine which feature or features matter most for an outcome, you would probably use a more restrictive model like a linear regression, because it gives you more interpretability. On the other hand, if accuracy is your main goal, and you don't care that much about understanding feature importances or other details, you just want the best predictions possible, then you would try a more flexible model that gives you that better accuracy but less interpretability.

Linear regression is a very restricted and highly interpretable model, so it falls on the left side of the spectrum we just saw. Linear regression lets us look at the model's predictions and the coefficients for the features in real terms. So if we're looking at sale prices and square footage, we can actually see how square footage was related to sale price, or if we have more features than just square footage, we can see how the model ranks those different features. It's easily understandable, and we can read it off in actual square footage terms or dollar amounts. The drawback to a model like this, though, is that its predictive power is somewhat limited. The accuracy might not be as good as you could get with a different type of model, so if you're more concerned about how the model will perform on new, unseen data, you might choose a different model.

One common approach is to start with a linear regression model, which gives you a good idea of your features and their coefficients. It might not give you the accuracy you're looking for, but you can then try a different type of model on the same data, and that way you get both the understanding and, eventually, the higher accuracy from the more flexible model.
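To make that concrete, here is a minimal sketch of the two ends of the spectrum, assuming Python with scikit-learn and a small synthetic housing dataset; the column names and numbers are made up for illustration and don't come from the course data.

# A minimal sketch of both ends of the spectrum, assuming scikit-learn and a
# small synthetic housing dataset (columns and values are illustrative only).
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Toy data: sale price driven by square footage and bedroom count plus noise.
rng = np.random.default_rng(0)
sqft = rng.uniform(500, 3500, size=200)
bedrooms = rng.integers(1, 6, size=200)
price = 150 * sqft + 10_000 * bedrooms + rng.normal(0, 25_000, size=200)
X = pd.DataFrame({"sqft": sqft, "bedrooms": bedrooms})

X_train, X_test, y_train, y_test = train_test_split(X, price, random_state=0)

# Restrictive but interpretable: each coefficient is in real units
# (dollars per square foot, dollars per additional bedroom).
linreg = LinearRegression().fit(X_train, y_train)
print(dict(zip(X.columns, linreg.coef_)))   # roughly {'sqft': 150, 'bedrooms': 10000}
print("linear regression R^2:", linreg.score(X_test, y_test))

# Flexible but harder to interpret: no per-feature coefficients to read off,
# though on more complex problems it may predict unseen data more accurately.
forest = RandomForestRegressor(random_state=0).fit(X_train, y_train)
print("random forest R^2:", forest.score(X_test, y_test))

On this simple, nearly linear toy problem the two models score similarly; the point is what you can and can't read out of each one.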
There are other machine learning methods that are a lot more flexible but harder to interpret. A deep learning neural network is the extreme example of this: the model is very flexible and often very accurate and powerful, making good predictions and handling new, unseen data, but it's very complex and very difficult, if not impossible, to interpret how the model performs its calculations. What we're looking at on the slide is a basic illustration of a neural network, where we have an input layer on the left, which is our features, and an output layer on the right, which is the target. But in between is a jumble of connections: numerous hidden layers sit between the input and the output, and this is where the model becomes what's known as a black box model, because we can't really examine what each of these hidden layers is doing or how it's making its decisions. So: really low interpretability but high accuracy. If you understand that trade-off and decide it fits your objective, then this might be a model you would try for your problem. Deep learning and neural networks are beyond the scope of this course, but they're a common example of the opposite end of the accuracy and interpretability trade-off.

Now that you have an understanding of the interpretability versus accuracy, or restrictive versus flexible, trade-off in machine learning models, we'll go on to the next video, where we'll learn about different regression metrics, how to evaluate regression models, and the different types of metrics you can use to evaluate your model.
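For contrast with the earlier sketch, here is a minimal, illustrative example of that black-box end of the spectrum, again assuming scikit-learn; its small MLPRegressor stands in for the much larger deep learning models described above, the hidden-layer sizes are arbitrary, and it reuses the X_train, X_test, y_train, y_test variables from the earlier sketch.

# A small multi-layer neural network: several hidden layers sit between the
# input features and the output target. Assumes the toy data and train/test
# split from the previous sketch are already defined.
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

mlp = make_pipeline(
    StandardScaler(),  # neural networks train much better on scaled inputs
    MLPRegressor(hidden_layer_sizes=(64, 64, 64), max_iter=2000, random_state=0),
)
mlp.fit(X_train, y_train)
print("neural network R^2:", mlp.score(X_test, y_test))

# Unlike LinearRegression, there is no single coefficient per feature to read:
# the learned weights are spread across many hidden layers, which is why this
# kind of model is described as a black box.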