Congratulations, you've completed Exploring and Preparing Your Data with BigQuery, the first course in the Data to Insights with Google Cloud core series. Let's recap all that we've covered so far in this course series.

In Module 1, data on Google Cloud was introduced. You looked at the query, infrastructure, and storage challenges faced by data analysts: queries taking too long to run, no easy way to combine and query collected data, on-premises clusters that don't scale with analyses, the cost of storing data, and the lack of a central data analytics warehouse or set of tools. You compared traditional big data on-premises platforms, which require a significant investment in infrastructure, with big data on Google Cloud. With Google Cloud there is no need to focus on infrastructure, so you can focus on insights instead. Google Cloud enables efficient resource allocation by separating storage and computing power, and BigQuery specifically scales automatically so you pay only for what you use. You were introduced to two real-world use cases, Ocado and Spotify, both of which turned to BigQuery as a solution for managing and leveraging their big data. Lastly, in Module 1, you were briefly introduced to three key components of the Google Cloud Console dashboard: projects, resources, and billing.

Module 2 provided an overview of big data tools in Google Cloud. You identified five key task categories that apply to data analysts, namely ingest, transform, store, analyze, and visualize, along with the challenges associated with each. You then identified the different scalable big data tools Google Cloud offers to overcome these data challenges. You learned about BigQuery, Google Cloud's petabyte-scale data analytics warehouse, and nine fundamental features that let you focus on finding insights instead of managing infrastructure.
You then compared the roles of data analysts, data scientists, and data engineers in terms of what they do, their backgrounds, and their need for different Google Cloud tools. Lastly, in Module 2, you completed a lab where you explored a BigQuery public dataset.

In Module 3, exploring data with SQL was introduced. You started with three fundamental steps for exploring data through SQL: first, ask good questions; second, know your data; and third, write good SQL. You then learned a number of best practices for writing high-quality standard SQL, including the use of clauses and functions. You ended the module with a lab where you troubleshot common SQL errors with BigQuery.

Module 4 discussed BigQuery pricing. You learned that the unit of work in BigQuery is called a job and that there are four different types of jobs, namely query, load data, extract, and copy. You also learned that only query jobs incur a processing cost. You then learned how to determine and control the cost of both storage and analysis. Lastly, you were introduced to a few cost optimization principles to apply when writing your queries.

In Module 5, data cleaning and transformation were discussed. You were reminded of the saying "garbage in, garbage out" and learned about the five strict integrity rules that high-quality datasets conform to: validity, accuracy, completeness, consistency, and uniformity. You then learned about dataset shape, where ideally you have the right number of columns and records to make judgments and inferences from your data and insights. You also learned about dataset skew, which describes the distribution of values in a dataset. You revisited the five principles of dataset integrity and how they relate to cleaning and transforming data with SQL. You were then introduced to Dataprep, a tool that lets you apply data-cleaning best practices through a drag-and-drop interface.
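The integrity rules mentioned above can be spot-checked directly with SQL by counting the rows that violate each rule. Here is a minimal sketch of that pattern, using Python's built-in sqlite3 module and a made-up `orders` table purely for illustration (the course itself uses BigQuery standard SQL, but the counting pattern carries over):

```python
import sqlite3

# In-memory table standing in for a dataset (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        order_id INTEGER,
        quantity INTEGER,   -- validity: must be positive
        country  TEXT       -- completeness: must not be NULL
    )
""")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 2, "US"), (2, -1, "DE"), (3, 5, None), (3, 5, None)],
)

# Validity: rows whose quantity falls outside the allowed range.
invalid = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE quantity <= 0"
).fetchone()[0]

# Completeness: rows missing a required field.
incomplete = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE country IS NULL"
).fetchone()[0]

# Consistency: duplicate records that should appear only once.
duplicates = conn.execute(
    "SELECT COUNT(*) - COUNT(DISTINCT order_id) FROM orders"
).fetchone()[0]

print(invalid, incomplete, duplicates)  # 1 invalid, 2 incomplete, 1 duplicate
```

Each check is just a `SELECT COUNT(*)` with a `WHERE` clause expressing one rule, so the same queries can run unchanged as part of a scheduled cleaning pipeline.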
You also had the opportunity to explore the Dataprep UI by building an e-commerce transformation pipeline that runs at a scheduled interval and outputs its results back into BigQuery. Lastly, in Module 5, you were introduced to Data Fusion. After exploring the components of the Data Fusion UI, learning how to build a pipeline, and exploring data using Wrangler, you completed a lab where you had the opportunity to build transformations and prepare data with Wrangler in Data Fusion.

We look forward to welcoming you to the next course in the Data to Insights with Google Cloud core series, Creating New BigQuery Datasets and Visualizing Insights. In the next course, you will learn how to bring your own datasets into BigQuery and how to set up your own data analytics warehouse by joining multiple datasets together and visualizing the results in dashboards. See you there.