Good, so, let's look at these. We said the principal rotation axis is the same in both frames. It's fixed in both frames. That means the B frame components and N frame components have to be the same. So the e hat vector, if you decompose it into either set of axes, gives you the same thing. So there's really no distinction whether this e hat is in the B frame or the N frame. If e hat is what takes you from N to B, they're the same. So, if you go back and look at the DCM, usually, let me just write this out again. If we have [BN] times something, right, a vector [COUGH], we get v in the B frame. These are two different things, normally. Here, with the same [BN], I have e hat in the N frame, and I end up with e hat in the B frame, all right? That's essentially what that other side was talking about. But with this theorem, Euler says you can really drop these frame letters and say, look, they're the same vector. And so we often just write that shorthand as [BN] e hat equals e hat. So Robert, if you take a three by one vector, multiply it by a matrix, and get back the same vector, what is that vector called? How does it relate to the matrix? >> [INAUDIBLE] Eigen. >> Eigen. What is "eigen", it's a German word? >> Self. >> Self, essentially. If you hear eigenvector, eigenvalue, it's due to a self mapping. This thing maps back onto itself. So that's why I had you guys solve this eigenvector, eigenvalue problem by hand, you know, so much fun. Fortunately, we don't have to solve an eigenvector, eigenvalue problem here. The beauty is I can show an analytic answer for how to go from a DCM and get this vector. But generally, if you have a matrix A times a general vector v, do you just get back the same vector? What was your name again? >> David. >> David, thank you. >> No, if the eigenvalue's one you would, but here it's a constant. >> Okay, so for every eigenvector there is an associated eigenvalue. Hopefully that's not news to any of you, because you have done homework one, so I'm not spending much time on eigenvectors and eigenvalues. So good, that scalar is always there, and you have to know that it is the eigenvalue. The eigenvalue tells you by how much you stretch or shrink that vector. And this also has to do with the question of whether the axis has to be unit length and so forth. When you're doing rotations, if this axis is not unit length and you suddenly take it and put it into the regular DCM math, you will definitely not get something that is orthonormal. You would have to re-normalize the rotation matrix, and that's a lot of extra work. Now, since you guys are all experts on eigenvalues and eigenvectors: if you have a three by three matrix, Maurice, how many eigenvalues would you have? >> Three. >> Three, are they unique or non-unique? >> They're not necessarily unique. >> Exactly, trick question, could be either, [LAUGH] right? So you always have an eigenvalue, eigenvector problem, and all the standard math assumes they're unique and shows you how to do it. If the eigenvalues are unique, do you have unique eigenvectors? Yes. So if you have unique eigenvalues, you do have unique eigenvectors. If you have repeated eigenvalues, you might have two plus ones, or maybe it's triple repeated, three plus ones. Like the identity matrix. The identity matrix is the zero rotation, right? What are the eigenvalues, Evan, of the identity matrix? >> Three ones? >> Three ones, it's triple repeated, all right.
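As a quick numerical check of that self-mapping idea, here is a minimal NumPy sketch (my own illustrative example, not from the lecture; the single-axis DCM and the choice of axis are assumptions) showing that the rotation axis comes back with the same components when mapped through [BN], i.e. it is an eigenvector with eigenvalue +1:

```python
import numpy as np

phi = np.radians(30.0)
# single-axis rotation about the third axis, written as a DCM [BN]
BN = np.array([[ np.cos(phi), np.sin(phi), 0.0],
               [-np.sin(phi), np.cos(phi), 0.0],
               [ 0.0,         0.0,         1.0]])

e_hat = np.array([0.0, 0.0, 1.0])   # the rotation axis
print(BN @ e_hat)                   # [0. 0. 1.] -- same components in the B and N frames
```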
So if an eigenvalue is repeated, are the eigenvectors now unique? Mendor. >> Yup. >> If lambda one is equal to lambda two, are v one and v two unique or non-unique? >> Non-unique. >> Non-unique, actually. Now, the plane in which v one and v two lie is going to be unique, but within that plane you can define them with an infinity of combinations of vectors, right? So just remember that. So now back to the eigen stuff: if we think this through, okay, the zero rotation is the identity matrix, and the identity matrix has a triple repeated root. It has three plus ones, so basically the eigenvector is anything, any axis, right? And that's what we described earlier, those ambiguities that we're going to get into. So it directly relates to eigenvectors. So let's say we do have unique eigenvalues, and I'm going to go back to plus one times v equals A times v. Is v now really unique? Let's define uniqueness. I'm sorry, back of the room, what was your name? >> Kyle. >> Kyle, thank you. Is this eigenvector the only answer that you can put into this for a given A matrix? >> Really not sure, actually. I would want to say no. >> It's 50/50. Now my next question [LAUGH]. >> I don't really know why it'd be no. >> Let's just think it through, all right? I'm not expecting you to be a complete expert in linear algebra and that stuff, but this really matters. If this property is mathematically correct: I pick a vector 1, 2, 3, and the three by three matrix times it gives me back 1, 2, 3 times plus one. If I see this math, is there a different vector that you can envision that would also satisfy this equation? What can you do to this equation and still have the same form? Yes, Cody. >> You can multiply by a scalar. >> Exactly, if one, two, three is an eigenvector, so is two, four, six. So you can multiply by a scalar, and in particular, you can flip the sign. If one, two, three is an eigenvector, so must be minus one, minus two, minus three. And this all has an impact on our attitude. Yes? >> I want to clear up something you said earlier. I honestly can't remember, but I thought you can have repeated eigenvalues and still have unique eigenvectors. >> If you have repeated eigenvalues, you don't have unique eigenvectors. There is this eigenspace, it's kind of a subspace of the mapping. That plane is unique. And if it's a double repeated eigenvalue, that plane is unique, but within it you can use any set of combinations. >> Isn't that what the geometric multiplicity is? So you could have repeated eigenvalues with unique vectors, like the identity matrix. >> The identity matrix does not have unique eigenvectors. >> But there are cases where that eigenspace only spans part of it. You could have a- >> It's a two dimensional space. If I have a double repeated eigenvalue, I have a two dimensional subspace within which those two eigenvectors must lie. >> But there are some examples where there's a deficiency, where it's actually one dimensional. >> Not that I'm aware of with real numbers, maybe with complex or some other stuff, but with regular real numbers, no. You end up with a subspace that is then defined through them. We'll take this offline to see what we are really talking about there, but typically no. If you have a double repeated eigenvalue, then when you do the reduction to get the eigenvectors, you come up with an ambiguity.
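To make the scaling and sign ambiguity concrete, here is a small NumPy sketch (an illustrative matrix of my own choosing, not one from the lecture) with 3 as a double-repeated eigenvalue; any vector in the corresponding plane, scaled or sign-flipped, still satisfies A v = lambda v:

```python
import numpy as np

A = np.diag([2.0, 3.0, 3.0])        # 3 is a double-repeated eigenvalue
v = np.array([0.0, 1.0, 2.0])       # any vector in the y-z plane works for lambda = 3

for c in (1.0, 2.0, -1.0):          # scaling or flipping the sign keeps it an eigenvector
    print(np.allclose(A @ (c * v), 3.0 * (c * v)))   # True, True, True
```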
And it's always the case that if it is double repeated, it is a two dimensional subspace of the full mapping. And the full mapping is three dimensional, so it is a plane within that three dimensional space. So the key is to remember that eigenvalues and eigenvectors are not unique, not even what MATLAB gives you. People ask me, can I use MATLAB to find my answer? I go, go for it, at least to double check your answer you'd have to do this process. But it's going to give you a vector, and it uses some algorithm that says, I'm going to pick one, two, three. Why not minus one, two, three? That's its choice. But the choice of the vector impacts your attitude description. And when we do estimation, we'll find similar eigenvalue, eigenvector kinds of ambiguities. So this discussion is going to pay off multiple times; there are some issues there that we really have to consider when we do this. The other thing is that MATLAB, Mathematica, and most algorithms tend to give you the unit vector. Instead of giving you one, two, three, it's going to give you that, normalized back to unit length. Which for us is perfect, because in our application we're actually looking for a unit vector. But eigenvectors by themselves do not have to be unit vectors; you could have a vector of length 500 being an eigenvector. So if you think of identity, really the identity operator times any vector gives back itself. Any vector is an eigenvector because it's a triple repeated root; the eigenvectors reside in the whole three dimensional space, which is the whole thing. So, okay. This e hat must be an eigenvector, and actually it must be the eigenvector corresponding to a plus one eigenvalue. If you have a general rotation, you will tend to get one eigenvalue that is plus one, and the other eigenvalues will tend to be a complex conjugate pair. So you have to solve that cubic characteristic polynomial: one root is plus one, the other two are complex conjugates, okay, cool. So you're looking for the plus one. In MATLAB, you put in a DCM and get some numbers: two complex numbers and one real one, plus one hopefully. They all have unit magnitude, actually, because it's an orthonormal matrix, and now you know which eigenvector to get. It won't always be the first, it won't always be the second, it won't always be the third; MATLAB has no idea what you're trying to solve. Just look for which eigenvalue is plus one, and that's the corresponding eigenvector you have to pick. So that's one way you can find this. Now, let's talk about uniqueness with these sets, e hat and phi. If I have one rotation, I'll make it very simple. This is my zero rotation, facing you guys, and my new orientation B is a 45 degree rotation to my left, your right. So I'm doing, for me, a plus 45 degree rotation. So I've got a phi of 45 degrees and my e hat is up. Are there other ways I could have rotated from here to here about a single axis by a single angle? Lewis? >> If you do the negative of the axis, or the long way around? >> Right, and that's basically a different combination. So let's use the up axis; I went to the left. Okay, but we know attitudes reside in the SO(3) group, which means it's a closed set. My attitude can't go off to infinity. The worst I can be off is 180 degrees. So instead of 45 this way, you could've gone basically all the way around, and then, what is it, 360 minus 45, is that 315? So you go minus 315 about the up axis, and that gets you there as well. That's one.
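As an aside on that eig-based approach just described, here is a minimal NumPy sketch (my own example; the 45 degree test rotation is assumed) of picking out the +1 eigenvalue from whatever order the solver returns them in, and taking the real part of the matching column as e hat:

```python
import numpy as np

phi = np.radians(45.0)
C = np.array([[ np.cos(phi), np.sin(phi), 0.0],   # a 45 deg rotation about the third axis
              [-np.sin(phi), np.cos(phi), 0.0],
              [ 0.0,         0.0,         1.0]])

vals, vecs = np.linalg.eig(C)        # eigenvalues come back in no particular order
k = np.argmin(np.abs(vals - 1.0))    # find the one that is (numerically) +1
e_hat = np.real(vecs[:, k])          # its column is the principal axis; the sign is the solver's choice
print(vals)                          # one real +1, two unit-magnitude complex conjugates
print(e_hat)                         # ~[0, 0, 1] (or its negative)
```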
And then you mentioned, well, you could go about the negative axis. And again, being an eigenvector, I can flip the sign; there's nothing that says you have to go about this one. I could've gone about the down axis, but I would now have to go minus 45, because this direction would be positive, or I go plus 315 to get there. So we have four combinations that get you there. That's important, because then when you extract these parameters from the DCM, I should expect four possible answers. There can't just be one answer. So the attitude sets we go to are not unique, and we'll find ways to really take advantage of that non-uniqueness in elegant ways. But the math has to be able to give you multiple answers, so that's it. So the notation I'm going to use here: this is my phi prime, which is basically my long way, and phi is my short way. It's a shorthand to differentiate them. There are two short descriptions, one with a positive angle, one with a negative, and then two long ones, one with a positive, one with a negative, that get you there. Which ones do you prefer, Trevor, short angles or long angles? >> Probably short. >> Yeah, most people prefer short, especially if you're doing a control application, where you'd rather control the spacecraft and say, look, your error is 45 degrees, rather than saying, no, the error is minus 315, and you're doing this. In the literature this is called the unwinding problem. And you want to have attitude sets and control schemes where you prefer to go, look, it's just to your right, just glance that way and off you go, all right? So we tend to pick the shorter one, but it's not required. If I'm looking at these angles, say in estimation, I want to know how different these two frames are. I don't want to say, well, guess what, your attitude estimate was off by 359 degrees, that wasn't very good. Better to say, no, it was off by one degree, that's brilliant. It sounds much better to have small angles. So that's where we tend to go, but we don't have to. So here are the four possible combinations we had: 45 degrees about up, minus 45 about down, or the long way, minus 315 about up, or plus 315 about down. Different combinations of these two things, and you get there. Now, here in the book I derive this, I'm not doing it in class, but it's some basic algebra staring at the DCM. If you add this, this, and this, you get the trace of the matrix. The trace of a matrix is the sum of the diagonal elements, here c11, c22, c33. If you do this math, you will have a bunch of stuff involving the cosine. Each diagonal has an e squared term: you'll get e1 squared, e2 squared, e3 squared times Sigma, and the sum of those squares has to be one, right, because this is a unit vector. That simplifies it, you bring in the definition, and in the end you can solve for the cosine of that angle, which is nice. But now look at the cosine curve and what the calculator gives you. So here's my cosine: I go down, I come up, I go to here. So here is two pi, here is 180 degrees, pi. Your calculator actually gives you an answer that is in this shaded region, that's it. It doesn't know the problem you're trying to solve. Any calculator inverse cosine just gives you an answer between zero and 180 degrees. That was perfect for inclination angles and extracting those, but here there are actually other answers.
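The trace relation the book derives reduces to cos(Phi) = (c11 + c22 + c33 - 1) / 2. Here is a short NumPy sketch (my own helper function, assuming that standard result) that returns both the short answer the inverse cosine gives and the long-way angle defined as Phi minus 360 degrees:

```python
import numpy as np

def principal_angles(C):
    """Short and long principal rotation angles (radians) from the trace of a DCM."""
    phi = np.arccos(0.5 * (np.trace(C) - 1.0))   # arccos only ever returns 0..180 deg (the short way)
    phi_prime = phi - 2.0 * np.pi                # the long way around, phi - 360 deg
    return phi, phi_prime

phi_true = np.radians(45.0)
C = np.array([[ np.cos(phi_true), np.sin(phi_true), 0.0],
              [-np.sin(phi_true), np.cos(phi_true), 0.0],
              [ 0.0,              0.0,              1.0]])
print(np.degrees(principal_angles(C)))           # ~[  45. -315.]
```

Together with flipping the sign of e hat, those two angles give the four equivalent angle-axis sets discussed above.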
So if you look at the math and ask where the second answer is coming from: if this is your cosine value, that gives the short answer, and then basically 360 minus the angle you got from the calculator is the long way around, right? So you have to do your own math to get the other angle. The calculator won't give it to you, and MATLAB will only ever give you one angle, it doesn't give you multiple ones. So good. Then you could subtract 360, or here I'm taking the angle minus 360, that's how I'm defining the long way around, and that's what makes this math work. Once you have that angle, then if you stare at this, you can see that if you take c23 minus c32, so the 2,3 element minus the 3,2 element, the symmetric parts cancel and it leaves you with terms involving the sine of the angle. The angle is now a known quantity, so I can do that math and extract out e1. So this is nice, because this will now implicitly give me the correct eigenvector. So we have the short angle, the long angle, and then the right eigenvector. So how does it decide if it should be up or down? That is actually handled right here through the sine of that angle. If you pick the right angle, the short or the long, this will all work out. [COUGH] And then you use the other combinations that we showed. Instead of 45 here, you can do the negatives; the other ones come from the complementary sets that you could generate. But they are all equivalent, and so you end up getting four. Practically, we always pick the positive angle. The difference between this frame and this frame is one degree, not minus one. It's truly minus one, but we would practically state the positive angle and then pick the axis that gets you there. So, any questions? This is just showing you how we go from the DCM. With Euler angles we had to go through inverse sines and cosines and inverse tangents. Here you take the trace to get the cosine of the angle, and then take differences of the off-diagonal matrix components to find the e hat vector. We don't have to solve an eigenvalue, eigenvector problem; we can find it explicitly from this derivation. And how to get from these principal rotation elements to the DCM and back, the book actually gives you a detailed derivation of that. I'm not going through that in class; if you're curious, you can follow along there, okay. Now, these angles are very handy.
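Putting the whole extraction together, here is a minimal NumPy sketch (my own code, not from the course materials) that builds a DCM from a principal rotation axis and angle and then recovers the short-angle set back from it; note it divides by sin(Phi), so it is singular for zero and 180 degree rotations:

```python
import numpy as np

def prv_to_dcm(e_hat, phi):
    """DCM [BN] from a principal rotation axis/angle."""
    e = np.asarray(e_hat, dtype=float)
    e = e / np.linalg.norm(e)                     # the axis must be unit length
    e_tilde = np.array([[ 0.0, -e[2],  e[1]],     # cross-product (tilde) matrix
                        [ e[2],  0.0, -e[0]],
                        [-e[1],  e[0],  0.0]])
    return (np.cos(phi) * np.eye(3)
            + (1.0 - np.cos(phi)) * np.outer(e, e)
            - np.sin(phi) * e_tilde)

def dcm_to_prv(C):
    """Short-angle principal rotation axis/angle extracted from a DCM."""
    phi = np.arccos(0.5 * (np.trace(C) - 1.0))    # trace gives the cosine of the angle
    e = np.array([C[1, 2] - C[2, 1],              # off-diagonal differences give e hat
                  C[2, 0] - C[0, 2],
                  C[0, 1] - C[1, 0]]) / (2.0 * np.sin(phi))
    return e, phi

C = prv_to_dcm([0.0, 0.0, 1.0], np.radians(45.0))
e, phi = dcm_to_prv(C)
print(e, np.degrees(phi))                         # ~[0. 0. 1.] 45.0
```

The other three equivalent descriptions follow by flipping the sign of e hat and/or using phi minus 360 degrees, as discussed above.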