Now, one of the most important topics is safeguarding your data from unintended access. In this next module, we'll cover a range of methods, from securing individual columns through hashing to limiting access to specific rows through authorized views. We'll then cover the default access control roles that are available within BigQuery, and how those map to your overall Google Cloud Platform account-level roles. So the first topic that we're going to cover in data access is actually doing a little bit of obfuscation on some of the columns you might have. So here you see the FARM_FINGERPRINT function, much like other functions that you've seen in the past, wrapping this literal string called 'secure'. And this is just a one-way hash function that you can apply to your field values so that they can't be read back. Now, this is not meant for passwords. This is meant for hiding fields from different access groups, depending upon what level of information you want them to see in your data. As an example, say you have an analytics group and a marketing group, and the analytics group can see the telephone numbers of all the users in your account. But say you wanted to set up a view where the marketers can see every field that's available, but the phone numbers have been hashed away so that they can't be read by the marketers. You could set up a view that applies this FARM_FINGERPRINT function, and then authorize the view to access your underlying source data; we'll look at how to create those authorized views in just a few slides. And just as we introduced, here's the concept of authorized views, which will let you limit that row-level access, and even column-level access, to your underlying source data. So BigQuery views, again, to review, are logical, meaning that the data underneath is not stored or materialized. What that means is that your query is going to be re-run every single time against your underlying source data.
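To make that concrete, here's a minimal sketch of such a view; the project, dataset, table, and column names are all hypothetical:

```sql
-- FARM_FINGERPRINT returns an INT64 fingerprint of its STRING or BYTES
-- input, which can't be reversed back into the original phone number.
-- (All names below are hypothetical.)
CREATE VIEW `my_project.marketing.users_masked` AS
SELECT
  user_id,
  email,
  FARM_FINGERPRINT(CAST(phone_number AS STRING)) AS phone_number_hashed
FROM
  `my_project.source_data.users`;
```

Marketers query the view as usual; identical phone numbers still produce identical fingerprints, so counts, GROUP BYs, and joins on the hashed column keep working even though the raw values are hidden.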
And one of the really cool things that you can do is invoke some BigQuery reserved functions like SESSION_USER(), which returns the email address of the user that's logged in. And assuming that you have your data structured in such a way that you have a column called allowed_viewer or allowed_group that you can match that email address against, you can set up some pretty neat row-level filtering on your underlying data. And one of the core principles about datasets in BigQuery is that permissions are granted at the dataset level, not at the individual table level. So two key areas to really focus on for access permissioning are the project level, which, again, includes BigQuery and your Cloud Storage buckets and everything else, and then, individually, the BigQuery dataset level. And you can access and share datasets by clicking on that triangle beneath each of your different datasets, and it'll pull up a window that looks exactly like this. And then what you can do is add people, but not just people: you can add authorized views from other datasets to look into your source data and get access to it. So you have a lot of different flexible ways, depending upon what your access control use case is, to limit and restrict access to your underlying data. These are some of the predefined or pre-canned roles that are available as part of BigQuery. You have your data viewer, your data editor, your data owner, the user, the job user, and then the overall admin. And here's a list of things that those particular users can do. So you can get as granular as you would like: if you want users just to be able to view your data, or users to actually own your data and be able to create tables but not actually run queries, or if you wanted to give full administrative access to other members.
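As a quick sketch, assuming a hypothetical allowed_viewer column that stores each row's permitted email address, that row-level filter looks like this:

```sql
-- SESSION_USER() returns the email address of the logged-in user,
-- so each user only sees rows tagged with their own address.
-- (Table and column names are hypothetical.)
SELECT *
FROM `my_project.source_data.sensitive_records`
WHERE allowed_viewer = SESSION_USER();
```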
So you want to pick the most granular role possible, or the most limited role possible, when it comes to permissions for each user that you're adding, to make sure that nobody has more permissions than they should. Now, another one of the key concepts that we want to cover is inherited permissions at the BigQuery dataset level from your overall Google Cloud project's permissions and roles. So as you see, all the way on the left are the viewer, the editor, and then the owner at the Google Cloud project level. If you add new users to your overall Google Cloud project, then based on the role that you grant them on the overall project, they will automatically inherit particular roles down in BigQuery. So if you added another co-owner to your Google Cloud project as a whole, they will also be an owner, or an editor as you see in that third row, for your BigQuery datasets. The same goes for viewer and editor. Now, if you didn't want some of your overall project viewers, editors, or owners to have access to your underlying datasets that are in BigQuery, you can then go into BigQuery and change those permission levels from their inherited defaults to something a little bit more granular. Now, data access is not something that you set up once and just forget about. As you see in the last bullet point there, this is something that you need to monitor and audit periodically, to make sure that users that joined your project initially still need the same level of access that they did at the beginning. Or, if they're no longer with your project or organization, you have a regular way of phasing out those users to make sure access doesn't fall into the wrong hands. Now, for the second bullet point: datasets and storage are cheap within BigQuery.
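If you prefer doing this in SQL rather than in the sharing UI, BigQuery also supports GRANT and REVOKE DDL statements on datasets; here's a sketch with hypothetical dataset and user names:

```sql
-- Grant the most limited role that still lets the analyst do their job
-- (dataset and principal names below are hypothetical).
GRANT `roles/bigquery.dataViewer`
ON SCHEMA `my_project.marketing`
TO "user:analyst@example.com";

-- And revoke it again during your periodic access audit
-- when that person leaves the project.
REVOKE `roles/bigquery.dataViewer`
ON SCHEMA `my_project.marketing`
FROM "user:analyst@example.com";
```

Keeping these statements in version control gives you a reviewable record of who was granted what, which helps with the periodic audits mentioned above.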
So what a lot of organizations do is have many different datasets mirrored across development, QA, and production environments, and then have a different matrix of permissions associated with each of those. So your analytics and development teams can have full access to all three environments, but maybe your marketing team or some other teams only have access to certain tables within production, or vice versa. As we covered a little bit earlier, your dataset users should have the minimum required permissions to do their jobs. It's time for a quick recap. Setting up proper access controls to your data is probably one of the most important topics in data analysis. As you've seen, Google Cloud Platform provides you with a variety of roles at the project level that can then be inherited down to the BigQuery dataset level. And it's ultimately up to you to determine which individuals and which groups should have access to which parts of your data. And be sure to set up regular access control audits, and look at those Stackdriver logs to spot any strange usage patterns. Lastly, as we mentioned before, consider using authorized views, combined with a WHERE clause filter on SESSION_USER(), to limit row-level access to a particular table based on who's logged in.
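Putting that last point together, here's a sketch of such an authorized view; names are hypothetical, and after creating it you'd share the view's dataset with your users and then add the view as an authorized view on the source dataset via the dataset's sharing dialog:

```sql
-- The view lives in a dataset your users can read, while the
-- source dataset stays private; the authorized view bridges the two.
-- (All names are hypothetical; allowed_viewer holds an email address.)
CREATE VIEW `my_project.shared_views.my_records` AS
SELECT *
FROM `my_project.source_data.sensitive_records`
WHERE allowed_viewer = SESSION_USER();
```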