One important aspect of VR interaction is user input. In other words, how does the VR system understand what we want to do, and how does it support this? For example, when I browse the web on my laptop, I use the user interface provided by the laptop, which is the keyboard and mouse. When talking about VR interaction, the first thing we need to understand is what is supported by the VR system. So in this video, we'll be taking a look at user input from standard HMDs, and how different user inputs support different types of interaction in VR.

So what exactly does existing virtual reality hardware support? A lot of VR hardware supports traditional 2D user interfaces, the things we're very familiar with when interacting with the digital world, for instance buttons, triggers, joysticks, and touchpads. If you often use a computer, or you play games on a 2D display, chances are you use them all the time. Because they're based in a 2D space, we refer to them as 2D user interfaces, or 2D UI. In VR, however, it is important to realize that we also have access to a different kind of user interface, one based in 3D space and often referred to as 3D UI. On one hand, these are the interfaces most of us are less familiar with when it comes to interacting with the digital world. On the other hand, we are actually all experts in this type of user interface, because that's exactly how we interact with the real world in 3D. What we are less familiar with is how we capture the way we interact with the real world and translate it into a language the computer understands, so that the computer can generate user output in VR in line with what we naturally expect.

The basic 3D UI in VR utilizes tracking information from the user's head, including both head rotation (HR) and head position (HP). If the VR system supports controllers, we can also use information from the user's hands: controller rotation (CR) and controller position (CP).

Now that we have defined these abbreviations, let's take a look at how different VR systems support these different types of user input. The simplest type of VR system is a very basic cardboard VR: just a cardboard box with two lenses into which you can slot your mobile phone. The simplest of these doesn't come with any buttons or controllers, so the only type of user input it supports is head rotation (HR). Slightly more sophisticated mobile VR systems come with a button you can click, or other types of 2D UI, such as a touchpad. More recently, we've seen mobile VR systems with a dedicated controller, often a small handheld device with a built-in tracking system, an accelerometer or gyroscope, which tracks the controller's rotation (CR). Most high-end VR displays driven by desktop or laptop computers also come with external tracking devices which track the head position (HP). And most complete high-end VR systems come with controllers which are also position tracked. So for a full set of user input, we have head rotation, 2D UI, controller rotation, head position, and controller position. Most current consumer VR systems on the market support either all five of these types of user input, or a subset of them, as sketched below.
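To make the idea of input subsets concrete, here is a minimal TypeScript sketch of a hypothetical capability table. All of the names here (the system labels, the `supports` helper) are illustrative assumptions for this video, not identifiers from any real VR SDK.

```typescript
// The five user-input types discussed above.
type UserInput = "HR" | "HP" | "CR" | "CP" | "2D_UI";

// Hypothetical capability table: which inputs each class of system supports.
const systemInputs: Record<string, UserInput[]> = {
  basicCardboard: ["HR"],                            // head rotation only
  mobileWithButton: ["HR", "2D_UI"],                 // adds a button or touchpad
  mobileWithController: ["HR", "2D_UI", "CR"],       // adds a rotation-tracked controller
  highEndDesktop: ["HR", "2D_UI", "CR", "HP", "CP"], // the full set, position tracked
};

// A program can check capabilities before choosing an interaction style.
function supports(system: string, input: UserInput): boolean {
  return systemInputs[system]?.includes(input) ?? false;
}

console.log(supports("basicCardboard", "HP")); // false: no head position tracking
console.log(supports("highEndDesktop", "CP")); // true: controllers are position tracked
```

A check like this is what lets the same application fall back from hand-based interaction to gaze-based interaction on simpler hardware.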
So how do different user inputs from different VR systems support VR interaction? Before getting into this topic, let's think first about how we interact with the real world using our head and hands.

First, when we're in a new environment, we look around to observe it. In this case, we rotate our head, and possibly also shift our body left and right to see objects that were occluded from our view. Secondly, in order to explore the environment further, we move around. Most of the time we look towards the direction of travel, so we don't bump into things; we use our legs to move forwards and backwards, and we may swing our arms back and forth. We might also wish to interact with objects in the environment, and in most cases we do this with our hands: you can grab an object, play with it, and put it somewhere else. Head rotation often plays a big role here too, as in most cases we look at an object when we grab it. Finally, we would like to be able to interact with other people. This is a very important aspect of VR interaction, but the technical considerations are very different from the first three, so we will discuss it in another course, where we talk about social interaction in VR and virtual characters.

So how do we observe an environment, move around, and interact with objects with the different user inputs available to us?

When we only have head rotation, we can observe the environment by looking around. This forms the basis of VR: the user's dynamic control of the viewpoint. Here we have three degrees of freedom, because the graphics update when we look around, but not when we move our head sideways, as the position of the head is not tracked.

As most of the time we look forward when moving around, we can use the head rotation to define the direction of travel. But we also need to be able to define the speed of travel, or at least when to start and stop. We can decouple the head rotation into separate dimensions: the left-right angle (yaw) can define the direction of travel, since we normally move at a more or less fixed height above the ground, that is, the height of our body, and the up-down angle (pitch) can then define the speed of travel. But this would be less natural, as normally we only use our head for direction, so ideally we would use some other means to take care of speed.

We can also look at an object to indicate interest, which is quite natural. But again, without any other form of user input, it is hard to indicate whether we'd actually like to grab or drop that object. We can define in our program that when I look at a certain object for more than three seconds, the object is selected; I can then move it with my head rotation to another location, and stay still for another three seconds to deselect it and drop the object. But to make this more natural, we need another form of user input to work together with HR.

And that's why a lot of mobile VR devices have been trying to enable some form of simple 2D user interaction, be it a button or a touchpad. When moving around towards the direction of our head rotation in the virtual world, the 2D UI enables us to press a button to indicate start and stop, or to slide back and forth on a touchpad to indicate how fast we'd like to travel. And when it comes to interacting with objects, we can use a button to indicate selection and deselection of the object we're looking at. So in order to move an object from one place to another, I can look at the object, press a button to grab it, move it with my head rotation to where I want it to be, and then press the button again to drop it there, as sketched below.
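Here is a minimal TypeScript sketch of the two head-rotation techniques just described: steering travel from head yaw, and three-second dwell selection. The `Vec3` type, the `DwellSelector` class, the button handler, and the thresholds are all illustrative assumptions for this video, not part of any real VR API; a fuller version would, for instance, also require the head to stay still before dropping an object.

```typescript
interface Vec3 { x: number; y: number; z: number; }

// Only the left-right component (yaw) of head rotation steers travel;
// the height stays fixed, matching how we normally walk.
function travelDirection(yawRadians: number): Vec3 {
  return { x: Math.sin(yawRadians), y: 0, z: -Math.cos(yawRadians) };
}

const DWELL_SECONDS = 3; // the three-second threshold from the example above

class DwellSelector {
  private gazeTime = 0;
  selected = false;

  // Call once per frame: dtSeconds is the frame time, gazingAtTarget says
  // whether the gaze ray currently hits the object. A sustained gaze
  // toggles selection: select the object, or drop it if already held.
  update(dtSeconds: number, gazingAtTarget: boolean): void {
    if (!gazingAtTarget) { this.gazeTime = 0; return; }
    this.gazeTime += dtSeconds;
    if (this.gazeTime >= DWELL_SECONDS) {
      this.selected = !this.selected;
      this.gazeTime = 0;
    }
  }
}

// With even a single button available, the dwell timer becomes unnecessary
// for locomotion: a press simply toggles movement on and off.
let moving = false;
function onButtonPressed(): void { moving = !moving; }
```

The design point is the one made in the video: the dwell timer works with head rotation alone, but adding one bit of 2D UI input makes both travel and grabbing far more direct.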
So with head rotation and some simple 2D UI, we can move around fairly naturally. We can also interact with objects, but it's not as natural, because normally we would use our hands rather than our head to move objects around. And that's why some mobile VR systems now come with a controller which is rotation tracked and carries some 2D UI, like a button or touchpad. With these devices, when interacting with objects, we can use the controller to point at the object of interest, select it, move it to where we want it to be, and drop it. During this procedure we don't need to be looking at the object constantly, which makes it more natural.

When it comes to high-end desktop or laptop based VR, these systems offer head position tracking. First of all, this allows us to observe the environment with six degrees of freedom: we can not only look around, but also shift our head to see what's behind another object in front of us, or what's underneath a table. And thanks to position tracking of our head, we can also physically move around to explore a new environment as we do in real life, restricted only by the space available in the real physical world. As most of these high-end devices come with a pair of VR controllers that are also position tracked, we can interact with objects in the virtual world in a truly realistic way. We can move close to an object and select it with our hands, actually reaching out to where the object is in 3D space rather than just pointing at it; a sketch of this appears at the end of this video.

So in summary, we have presented five key types of user input in current standard VR systems, and explained how a more complete set of user inputs enables VR programmers to map VR interaction more closely to real-life interaction. This makes VR interactions much easier, or in other words cognitively less demanding, as we don't need to think about how we do these things, and thus the VR interaction is a better simulation of our real-world interaction.

Finally, I should make another point about the VR controllers that come with VR systems, which are both position and rotation tracked: we don't necessarily need to use them with our hands. It is a good idea to hold them in our hands, as we use our hands a lot when we interact with the world, but there is nothing stopping us from using them as trackers for other body parts, or for other objects in the physical world, to suit your VR application. There are already VR systems out there that come with extra trackers which you can use to track objects or other body parts. I think we'll see more VR systems offering additional tracking devices, and that the number of trackers per VR system will grow. This will enable us to be more creative and flexible in how we design and program our VR interactions.
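To make the reach-and-grab idea discussed above concrete, here is a minimal TypeScript sketch of grabbing as a proximity test on the tracked controller position (CP). The types, function names, and grab radius are illustrative assumptions, not taken from any real VR SDK.

```typescript
interface Vec3 { x: number; y: number; z: number; }

function distance(a: Vec3, b: Vec3): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

const GRAB_RADIUS = 0.1; // metres; an assumed tolerance, not a standard value

// With controller position available, grabbing becomes a proximity test:
// the hand must actually be where the object is in 3D space, rather than
// merely pointing a ray at it from a distance.
function canGrab(controllerPos: Vec3, objectPos: Vec3): boolean {
  return distance(controllerPos, objectPos) <= GRAB_RADIUS;
}

// While held, the object simply follows the tracked controller each frame.
function heldObjectPosition(controllerPos: Vec3): Vec3 {
  return { ...controllerPos };
}
```

Because the test only needs a tracked position, the same sketch applies whether the tracker is held in the hand or attached to another body part or object, as discussed in the closing remarks.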