One of the most important topics is safeguarding your data from unintended access. In this module, we'll cover a range of methods, from securing individual columns through hashing to limiting access to specific rows through authorized views. We'll then cover the default access control roles that are available within BigQuery and how those map to your overall Google Cloud Platform account-level roles.

The first topic we're going to cover in data access is applying a bit of one-way hashing to some of the columns you might have. Here you see the FARM_FINGERPRINT function which, much like other functions you've seen in the past, is wrapping this literal string, 'secure'. This is a one-way hash function that you can apply to your field values so the originals can't be read back. Now, this is not meant for passwords. This is meant for hiding fields from different access groups, depending upon what level of information you want them to see in your data. As an example, say you had an analytics group and a marketing group. The analytics group could see the telephone numbers of all the users in your account, but you wanted to set up a view where the marketers can see every field that's available except the phone numbers, which have been hashed so they can't be read. You could set up a view that applies this FARM_FINGERPRINT function and then authorize that view to access your underlying source data. We'll look at how to create those authorized views in just a few slides.

Just as we introduced, here's the concept of authorized views, which can help you limit row-level and even column-level access to your underlying source data. BigQuery views, as a review, are logical, meaning that the data underneath is not stored or materialized. What that means is that your query is going to be rerun every single time against your underlying source data.
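As a minimal sketch of that masking pattern, assuming a hypothetical `project.source_dataset.users` table with a `phone_number` column, the marketing-facing view might look something like this:

```sql
-- Hypothetical view that masks the phone number column for the
-- marketing group. FARM_FINGERPRINT returns an INT64 one-way hash,
-- so the original phone number is not recoverable from the view.
CREATE OR REPLACE VIEW `project.marketing_views.users_masked` AS
SELECT
  user_id,
  email,
  FARM_FINGERPRINT(phone_number) AS phone_number_hashed
FROM
  `project.source_dataset.users`;
```

The view on its own isn't enough; it still has to be authorized against the source dataset, which is covered in the next few slides.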
One of the really cool things you can do is invoke BigQuery reserved functions like SESSION_USER(), which returns the email address of the person who's logged in. Assuming you have your data structured in such a way that you have a column called allowed_viewer or allowed_group that you can match that email address against, you can set up some pretty neat row-level filtering on your underlying data.

One of the core principles about datasets in BigQuery is that permissions are granted at the dataset level, not at the individual table level. The two key areas to really focus on for access permissioning are the project level, which again includes BigQuery, your Cloud Storage buckets, and everything else, and then individually the BigQuery dataset level. You can access and share datasets by clicking on the triangle beneath each of your datasets, which pulls up a window that looks exactly like this. Then what you can do is add people, but not just people. You can also add authorized views from other datasets to look into your source data and get access to it. There are a lot of flexible ways, depending upon your access control use cases, to limit and restrict access to your underlying data.

These are some of the predefined, or pre-canned, roles that are available as part of BigQuery. You have your dataViewer, your dataEditor, your dataOwner, the user, the jobUser, and then the overall admin. Here's a list of things that those particular users can do. You can get as granular as you would like, whether you want users just to be able to view your data, or to actually own your data and be able to create tables but not run queries, or whether you want to give full administrative access to other members.
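Here's a sketch of that row-level filter, assuming a hypothetical `orders` table that carries an `allowed_viewer` column holding an email address:

```sql
-- Hypothetical authorized view that only returns the rows whose
-- allowed_viewer column matches the email of the logged-in user.
-- SESSION_USER() is evaluated per query, so each viewer sees only
-- their own rows.
CREATE OR REPLACE VIEW `project.shared_views.orders_filtered` AS
SELECT
  order_id,
  order_total,
  allowed_viewer
FROM
  `project.source_dataset.orders`
WHERE
  allowed_viewer = SESSION_USER();
```

Because the view is logical, the filter is re-applied on every query against the underlying source data.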
You want to pick the most granular, most limited role possible when it comes to permissions for each user that you're adding, to make sure that nobody has more permissions than they should.

Now, another one of the key concepts we want to cover is permissions inherited at the BigQuery dataset level from your overall Google Cloud project permissions and roles. As you see all the way on the left are the viewer, the editor, and then the owner at the Google Cloud project level. If you add new users to your overall Google Cloud project, then based on the role you grant them on the project, they will automatically inherit particular roles down in BigQuery. If you added another co-owner to your Google Cloud project as a whole, they will also be an owner, or an editor as you see in that third row, for your BigQuery datasets. The same goes for viewers and editors. Now, if you didn't want your overall project viewers, editors, or owners to have access to your underlying datasets in BigQuery, you can go into BigQuery and change the inheritance of those permission levels from their defaults to something a little more granular.

Now, data access is not something that you set up once and just forget about. As you see in the last bullet point there, this is something that you need to monitor and audit periodically, to make sure that users who joined your project initially still need the same level of access they did at the beginning, and aren't, say, no longer with your project or organization. Make sure you have an automatic or regular way of phasing out those users so access doesn't fall into the wrong hands. Now, for the second bullet point, dataset storage is cheap within BigQuery, and a lot of organizations take advantage of that.
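As a sketch of granting one of those predefined roles at the dataset level rather than the project level, BigQuery also supports GRANT and REVOKE statements in SQL (the project, dataset, and email below are hypothetical):

```sql
-- Grant the read-only dataViewer role on just this one dataset,
-- instead of relying on an inherited project-level role.
GRANT `roles/bigquery.dataViewer`
ON SCHEMA `project.source_dataset`
TO "user:analyst@example.com";

-- Revoke it again when the user no longer needs access,
-- as part of a regular access audit.
REVOKE `roles/bigquery.dataViewer`
ON SCHEMA `project.source_dataset`
FROM "user:analyst@example.com";
```

Pairing a GRANT like this with a scheduled audit is one way to implement the "phase out stale users" practice described above.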
You'll have many different datasets mirrored across development, QA, and production environments, and then a different matrix of permissions associated with each of those. Your analytics and development teams can have full access to all three environments, but maybe your marketing team or other teams only have access to certain tables within production, or vice versa. As we covered a little earlier, your dataset users should have the minimum required permissions to do their jobs.

It's time for a quick recap. Setting up proper access controls to your data is one of the most important topics in data analysis. As you've seen, Google Cloud Platform provides you with a variety of roles at the project level that can then be inherited down to the BigQuery dataset level. It's ultimately up to you to determine which individuals and which groups should have access to which parts of your data. Be sure to set up regular access control audits and look at those Stackdriver logs to spot any strange usage patterns. Lastly, as I mentioned before, consider using authorized views in tandem with a WHERE clause filter on SESSION_USER() to limit row-level access to a particular table based on who's logged in.