What Is Cross-Validation in Machine Learning?

Written by Coursera Staff

Explore the process of cross-validation in machine learning while discovering the different types of cross-validation methods and the best practices for implementation.


Cross-validation is a predictive assessment technique used in machine learning to estimate how well a model will perform. If you work in machine learning, you can use cross-validation as a statistical method for comparing and selecting models for a specific application. The technique helps you address problems such as overfitting, which can lead to suboptimal performance in real-world scenarios. Cross-validation guards against this issue, and others such as underfitting, by evaluating the model’s performance across multiple validation data sets during training. To accomplish this, it divides, or partitions, the data set into subgroups for training and testing. 

The goal of cross-validation is to give you a more accurate estimate of your model’s ability to perform on new or unseen data by providing an unbiased estimate of the generalization error (a measure of how well an algorithm predicts future observations on previously unseen data). Because it reveals the generalization error, cross-validation yields information about the model’s generalization capabilities and is commonly used to tune or estimate a model’s hyperparameters, helping you find the configuration that delivers the best generalization performance. 

Read further to learn more about cross-validation in machine learning while discovering the different types of cross-validation methods and the best ways for you to implement them.

Understanding cross-validation 

Cross-validation estimates the accuracy of your machine learning model by repeatedly partitioning the data into a training set and a testing set. The data is randomly separated into a certain number of groups, or subsets, called folds, each containing about the same amount of data. The number of folds you use depends on factors like the size of the data set, the type of data, and the model. For example, if you separate your data into 10 folds, you would use nine as the training group and one as the testing group. You repeat the process as many times as you have folds; in this case, you would perform the training and testing a total of 10 times, each time holding out a different fold for testing. After the 10 repetitions (each referred to as an iteration), you aggregate the results to create a single estimate of your model’s performance on new, unseen data. 

These are the steps you would take to implement cross-validation, using k-fold cross-validation with k = 10 as an example (a minimal code sketch of the full workflow follows the steps): 

1. Partition the data 

Divide the data set into 10 subsets, also referred to as folds. Each fold contains an approximately equal proportion of the data. Common choices for k include five or 10, but you can adjust the value based on the size and overall requirements of the data set.

2. Train and test model 

Train and test the model 10 times, using a different fold as the test set in each iteration. In the first iteration, the first fold is the test set, and you train the model on the remaining nine (k-1) folds. In the second iteration, the second fold is the test set, and the process continues in this way until each of the 10 folds has served as the test set once. 

3. Calculate performance metrics 

After each iteration, calculate your model's performance metrics based on the model’s predictions on the test set. These metrics assess or estimate how well your model generalizes to new, unseen data.

4. Aggregate results 

Aggregate the performance metrics gathered in each iteration, typically by averaging them, to generate an overall assessment of the model's performance. 
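
As an illustration, here is a minimal sketch of those four steps in Python using scikit-learn's KFold splitter. The data set (breast cancer), the model (logistic regression with feature scaling), and the accuracy metric are assumptions chosen for the example, not requirements of the technique.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # example data set (an assumption for this sketch)

# Step 1: partition the data into k = 10 folds
kfold = KFold(n_splits=10, shuffle=True, random_state=42)

scores = []
for train_idx, test_idx in kfold.split(X):
    # Step 2: train on the nine training folds, then predict on the held-out fold
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X[train_idx], y[train_idx])
    predictions = model.predict(X[test_idx])

    # Step 3: calculate a performance metric for this iteration
    scores.append(accuracy_score(y[test_idx], predictions))

# Step 4: aggregate the per-fold results into a single estimate
print(f"Mean accuracy: {np.mean(scores):.3f} (+/- {np.std(scores):.3f})")
```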

What is cross-validation in machine learning used for?

Using cross-validation, practitioners can build models that generalize well to new, unseen data, strengthening the algorithms' reliability, performance, and capabilities. The versatility of cross-validation allows you to choose from various methods based on the specific characteristics of the data set and the problem at hand. Common uses in machine learning include:

Mitigating overfitting 

Cross-validation helps you identify whether your model is overfitting by revealing whether its performance depends on one specific partitioning of the data. For example, if your model performs well on the training data but poorly on different test data sets, this indicates overfitting. With cross-validation, you can break the training data into subsets and make further adjustments to your algorithm before applying it once again to the test data.
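
One practical way to spot this is to compare training scores with validation scores across folds, for example with scikit-learn's cross_validate. The decision tree model and data set below are assumptions chosen to make the gap easy to see.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_validate
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)  # example data set (an assumption)

# An unconstrained decision tree tends to memorize its training data
results = cross_validate(DecisionTreeClassifier(random_state=0), X, y,
                         cv=5, return_train_score=True)

# A large gap between training and validation scores signals overfitting
print("Mean training accuracy:  ", results["train_score"].mean().round(3))
print("Mean validation accuracy:", results["test_score"].mean().round(3))
```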

Ensuring generalization 

Cross-validation assesses how well your model generalizes to new, unseen data. Cross-validation in this application aims to evaluate your model’s performance across multiple subsets of data to ensure a comprehensive understanding of the model’s ability to generalize. Ensuring generalization is critical for real-world applications where the model encounters diverse data inputs. 

Performance evaluation  

Performance evaluation in cross-validation refers to assessing a model’s predictive performance on different subsets of the training data. To run a performance evaluation, you train and test the model on multiple subsets of that data, ensuring the evaluation isn’t too dependent on one specific data split. You then compute the performance metrics for each fold and aggregate the results to provide an overall assessment of the model’s performance, which helps you estimate how well the model generalizes to new, unseen data. 
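
A brief sketch of this idea, assuming scikit-learn, a breast cancer example data set, and accuracy and F1 as the chosen metrics:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # example data set (an assumption)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Score the model on two metrics across 5 folds, then aggregate the results
results = cross_validate(model, X, y, cv=5, scoring=["accuracy", "f1"])
print("Accuracy per fold:", results["test_accuracy"].round(3))
print("Mean accuracy:", results["test_accuracy"].mean().round(3))
print("Mean F1 score:", results["test_f1"].mean().round(3))
```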

Hyperparameter tuning 

Hyperparameter tuning in cross-validation determines the optimal configuration of your model's hyperparameters by assessing the model's performance with different hyperparameter values across multiple cross-validation folds. Hyperparameters are external configuration variables that control how your model trains, such as the learning rate of a neural network or the depth of a decision tree. Hyperparameter tuning is the process of choosing candidate sets of hyperparameter values and running each through your model's training and evaluation cycle. 

Through multiple rounds of testing, the tuning process aids you in selecting hyperparameters that lead to the best generalization performance, which enhances your model's overall effectiveness.
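
For example, scikit-learn's GridSearchCV wraps this loop: it evaluates each candidate hyperparameter combination with cross-validation and reports the best one. The support vector machine and the candidate values below are illustrative assumptions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # example data set (an assumption)

# Evaluate every combination of these candidate values with 5-fold cross-validation
param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)

print("Best hyperparameters:", search.best_params_)
print("Best cross-validated accuracy:", round(search.best_score_, 3))
```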

Comparing models 

Using cross-validation, practitioners can compare the performance of different models to check for efficiency and performance. Cross-validation provides unbiased, fair comparisons between your models by evaluating each model under the same conditions across multiple data subsets. The process ensures a reliable basis for selecting a suitable model for a specific task.
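
A minimal sketch of such a comparison, assuming scikit-learn and two illustrative candidate models evaluated on the same folds:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # example data set (an assumption)

candidates = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "random forest": RandomForestClassifier(random_state=0),
}

# Evaluating every candidate on the same folds keeps the comparison fair
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```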

Solution for testing limited data 

Cross-validation is an excellent tool if you have only a small data set with which to work because it allows you to still train and evaluate the model on different splits of that same data set to assess the fit and utility of the model on unseen data. Because cross-validation splits the data set into test and training sets, practitioners can train and evaluate models on different portions of the data, even when limited. 

Identifying data variability 

By testing the model on different subsets, cross-validation helps you identify how robust the model is to variations in input patterns, making it easier to understand how well the model copes with the variability it will encounter in real-world data.

Common types of cross-validation 

Several methods of cross-validation exist, each with its own characteristics and applications. Your choice of cross-validation method depends on factors such as the data set size, the type of problem, and the computational resources available. Some commonly used types of cross-validation include:

K-fold cross-validation 

In k-fold cross-validation, the data set is split into k folds, the model is trained and evaluated k times, and the performance metrics are averaged over the k iterations. Common values for k include three, five, and 10, with k = 5 and k = 10 used most often. In each iteration, one fold is used for testing, and the remaining k-1 folds are used for training. After testing, you calculate the average of the results. K-fold cross-validation is commonly used and highly adaptable to a variety of data sets. 
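
In scikit-learn, a compact version of this procedure might look like the following sketch; the choice of k = 5, the shuffling, and the model are illustrative assumptions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # example data set (an assumption)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# k = 5: train on four folds, test on the remaining fold, repeat five times
scores = cross_val_score(model, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))
print("Score per fold:", scores.round(3))
print("Average score:", scores.mean().round(3))
```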

Stratified k-fold cross-validation 

Stratified k-fold cross-validation is similar to k-fold cross-validation except that it ensures each fold accurately represents the class distribution of the complete data set. It is particularly useful when the data set is imbalanced, that is, when there’s a class imbalance in the target variable, because it preserves each class’s proportion within every fold rather than letting rare classes cluster in a few folds.  
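
A short sketch, assuming scikit-learn and a synthetic imbalanced data set generated purely for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic imbalanced data set: roughly 90% of samples belong to one class (an assumption)
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

# StratifiedKFold keeps the 90/10 class ratio in every fold
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv, scoring="f1")
print("F1 score per fold:", scores.round(3))
```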

Holdout method 

The simplest form of cross-validation, called the holdout method, involves splitting the data set into two parts: a training set and a test set. The model is trained on the training set and then evaluated on the test, or holdout, set, the portion of the data that is intentionally held out and left unseen during training. 

The holdout method is best suited to large data sets, where holding out a portion for testing still provides a representative sample; because the evaluation relies on a single split, it can have high variance and depends on how the data points happen to be arranged between the two sets. 
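
A minimal holdout sketch, assuming scikit-learn's train_test_split and an illustrative 80/20 split:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # example data set (an assumption)

# Hold out 20% of the data as a test set the model never sees during training
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("Holdout accuracy:", round(model.score(X_test, y_test), 3))
```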

Time series cross-validation 

Specifically designed for time series data, where the temporal order of observations is important, time series cross-validation is a more advanced method of training and testing data sets that ensures your model is evaluated under realistic conditions as it encounters new data points in the order that they appear. 

The procedure uses a series of test sets arranged in chronological order, often consisting of a single observation or a short window of observations. In each iteration, the training set includes data up to a certain point in time, and the test set includes data after that point. Time series cross-validation is typically used to evaluate forecasting models and to identify patterns and trends over time. 

Practitioners have several methods to build a time series cross-validation strategy: sliding window validation, expanding window validation, decomposing the time series, or transforming it. 
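
For example, scikit-learn's TimeSeriesSplit produces expanding-window splits in which training data always precedes test data; the tiny synthetic series below is only for illustration.

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# Twelve observations in chronological order (a synthetic stand-in for a real series)
data = np.arange(12)

# Expanding-window splits: the training set always precedes the test set in time
tscv = TimeSeriesSplit(n_splits=4, test_size=2)
for train_idx, test_idx in tscv.split(data):
    print("Train:", data[train_idx], "-> Test:", data[test_idx])
```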

Leave-one-out cross-validation  

The leave-one-out cross-validation (LOOCV) approach splits the data into a training set and a validation set, with the validation set containing a single observation and the training set containing the remaining n-1 observations. LOOCV is a special case of k-fold cross-validation in which k is set to the number of examples in the data set. 

Each iteration tests a single data point while the model trains on the remaining n-1 data points, where n is the total number of samples; you repeat the process n times. LOOCV is well suited to smaller data sets because it uses nearly all of the data for training in each iteration, but it can be computationally expensive for larger ones. 
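
A brief sketch using scikit-learn's LeaveOneOut splitter; the Iris data set and logistic regression model are assumptions chosen for the example.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

X, y = load_iris(return_X_y=True)  # small example data set (an assumption)

# One observation is held out per iteration, so there are as many iterations as samples
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=LeaveOneOut())
print("Number of iterations:", len(scores))
print("LOOCV accuracy:", scores.mean().round(3))
```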

Learn more about cross-validation and machine learning on Coursera

If you’re interested in exploring careers in machine learning and want to learn about fundamental principles in this field, such as cross-validation, consider enrolling in an online course. Discover more about this topic on Coursera. The course Applied Machine Learning in Python, offered on Coursera, is an intermediate-level course focusing on machine learning fundamentals, specifically using the Scikit-learn library. You do not need previous experience to enroll. Get started today. 



This content has been made available for informational purposes only. Learners are advised to conduct additional research to ensure that courses and other credentials pursued meet their personal, professional, and financial goals.