Lesson: Introduction to technical PM interview questions.

Topic: why learn and practice technical PM questions? In these lessons, you will learn: one, why practice technical questions; two, foundational technical concepts; three, how to answer basic technical questions. You might wonder why PMs get asked technical questions during job interviews. Ken Norton, a partner at Google Ventures and former Google Group Product Manager, says, "Product managers with technical backgrounds will have more success conveying product requirements to engineers and relaying complicated details to non-technical colleagues and customers." If you've ever worked with a manager who didn't understand the business, you know how frustrating that experience can be. What constitutes a technical question varies by industry. For example, if you're interviewing to be a manager at a construction company, you might be asked when you'd lay a foundation using quarter minus. By building these technical foundations, you will be equipped for both technical and non-technical industries.

Topic: technical questions versus coding questions. In general, the experience you claim on your resume determines how much you will be asked to code or explain code. For example, the only reason a candidate at Microsoft was asked "What is the difference between C++ and Java?" was that he listed proficiency in both C++ and Java on his resume. A technical question differs from a coding question in that the interviewer is trying to assess your ability to communicate technically, not to write code. This is why real-life technical questions often ask you to explain a concept to a seven-year-old or to your grandparents. The difficulty with preparing, and the problem with prep guides on the market today, is that unless you have access to a technical interviewer, it's hard to know whether what you're saying is correct or has room for improvement. We address this by: one, changing many of the questions in this module from multiple choice to short answer, which gives you an opportunity to put your thoughts together; two, linking to LeetCode's free, easy questions. LeetCode lets you work through data structures, SQL, and other problem sets so you can build your intuition. This requires basic programming skills and is optional. The following lessons cover fundamentals that all product managers should understand in order to work successfully with their engineering counterparts.

Lesson: traversing basic data structures and algorithms.

Topic: real interview questions. In this lesson, you will learn how to answer real PM interview questions like these. From Microsoft: write a binary search; design a method that removes every other node from a linked list; implement breadth-first search and explain it to someone who is not familiar with the algorithm. From Google: when designing a simple load balancer for Google.com, explain which data structures you'd use. To give a feel for what an answer might look like, a sketch of the linked-list question follows below.
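This is a minimal Python sketch, not an official solution; the Node class and the function name are my own illustration of one way the question could be answered.

```python
class Node:
    """A singly linked list node."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def remove_every_other(head):
    """Remove every second node, keeping the 1st, 3rd, 5th, and so on."""
    current = head
    while current is not None and current.next is not None:
        current.next = current.next.next  # unlink the node after current
        current = current.next            # advance to the next surviving node
    return head

# 1 -> 2 -> 3 -> 4 -> 5 becomes 1 -> 3 -> 5
head = Node(1, Node(2, Node(3, Node(4, Node(5)))))
remove_every_other(head)
```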
Topic: introduction to complexity analysis. For a given requirement, say sorting a list of numbers, there can be many different ways to implement it. How do you compare one implementation to another, and how do you evaluate which is better? To do so, we need a way to characterize the performance of our implementations. The first thing we characterize is compute, or runtime: how many steps does the computer have to take to finish the implementation of the requirement at hand? The second is storage: how many things does the computer need to remember to finish the requirement? Runtime and storage are usually described in terms of input size. Let's say you're trying to sort a list of 100 numbers. As a function of those 100 numbers, how many steps does the computer have to take to produce a fully sorted list? How many things does it have to keep track of? The number of input elements is usually denoted by the variable n. The way you compare algorithmic performance is with big O notation. Big O stands for "on the order of," where n, as we just described, represents the number of elements that are input into the algorithm. In interviews, it's not important to be precise. For example, whether you need to examine all 100 numbers once or ten times each, we still consider that O(n); the exact constant is not important. We'll see why as we explore our graphs next.

Next, I want to look at the most common runtimes you'll come across. We'll understand, one, what types of algorithms require those runtimes, and two, why we compare the overall order of growth, that is, why we distinguish n from n squared but not n from 2n.

First, I want to bring your attention to constant time, written O(1). As we see here, constant time is a flat line as a function of the number of inputs. Let's motivate a natural example. Assume you have a line of 100 people, and the requirement is to get the name of the first person in line. Regardless of how long the line is, you only have to do one thing: go up to that person and ask their name. An algorithm for that would be O(1); no matter how large the input is, the computer only has to do one thing to get the name of the first person in line.

Next, I want to bring your attention to O(n). O(n) refers to algorithms that need to examine the input set at least once per element. As an example, take a dictionary, with A at the front and Z at the end, and assume I want to find which page the word "yogurt" shows up on. One way to do this is to start at the first page and look for the word yogurt. If it's not there, I go to the next page, and the next, and so on. If I exhaust page after page, I'll finally reach a page that contains yogurt, assuming the dictionary has the word in it. If it doesn't, I'll go through all the pages, reach the end, and report that yogurt doesn't exist. Either way, in the worst case I have to look at every single page. That's what we mean by algorithms that run in O(n) time: they need to look at every element, and in this case our elements are the pages of the dictionary.
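Here is a minimal Python sketch of both runtimes. The data shapes are my own assumptions: the line is a list of names, and the dictionary is a list of pages where each page is a set of words.

```python
def first_person_name(line):
    """O(1): one step, no matter how many people are in line."""
    return line[0]

def find_page(pages, word):
    """O(n): linear scan; in the worst case, we check every page."""
    for page_number, page_words in enumerate(pages, start=1):
        if word in page_words:
            return page_number
    return None  # checked every page; the word isn't in the dictionary
```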
Next, I want to bring your attention to O(log n). Now this is cool. The idea of logarithmic algorithms is that they exploit an underlying structure of the problem to do things faster than looking at every single element. What structure do we have in any dictionary? The words are alphabetically sorted. If I want to find which page the word yogurt is on, the first thing I can do is open to the middle of the dictionary. In the middle, I might find that the words start with M. M comes after A but before Y, which is what we're looking for, so we know yogurt can't be anywhere between the front and the middle. With that one check, we've eliminated half the dictionary. Now we look between M and Z: flip to a page halfway between, and we might find words starting with T. Again, T comes before Y, so we halve again and look in the back quarter. You continue this way until you land on the page containing yogurt and return that page number.

The reason logs are so cool is that by exploiting structure, like sorted words, you can cut the problem in half every single time, and that has amazing results when you scale up. Let's assume this dictionary had a million pages instead of 500. Without structure to exploit, your computer might have to scan every one of those million pages for a certain word. If you can halve the problem every time, how many halvings does it take to get from a million pages down to one? Only 20, because 2 to the 20th power is about a million. So in 20 steps, you can determine which page a certain word is on. Now let's say you have a billion pages. Scanning every page would take a billion operations. Halving instead takes only 30 steps, because 2 to the 30th power is about a billion. So in 30 steps of jumping around the dictionary, your computer can find which page yogurt is on. This halving strategy, in code, is exactly the binary search that Microsoft asks candidates to write.
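Here is a minimal sketch of that binary search, assuming the dictionary has been flattened into a sorted Python list of words and that returning the word's index stands in for returning a page number.

```python
def binary_search(sorted_words, target):
    """O(log n): halve the search range on every step."""
    low, high = 0, len(sorted_words) - 1
    while low <= high:
        middle = (low + high) // 2
        if sorted_words[middle] == target:
            return middle
        elif sorted_words[middle] < target:
            low = middle + 1   # "m..." < "yogurt": discard the front half
        else:
            high = middle - 1  # discard the back half
    return None  # target is not in the list

binary_search(["apple", "mango", "tango", "yogurt"], "yogurt")  # returns 3
```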
Next, I want to bring your attention to O(n log n). This runtime is usually associated with sorting algorithms. While slower than O(n), n log n is still a far preferable way of sorting compared to the next runtime we'll study, O(n squared). I want to compare and contrast n log n and n squared, because the same requirement, sorting a list of numbers, can be implemented in both n log n time and n squared time, and the difference is dramatic. Let's assume you want to sort one billion numbers. In this case, n is a billion, so n squared is a billion squared, which is 10 to the 18th, a 1 followed by 18 zeros (a quintillion). Compare that to a sorting algorithm that runs in n log n time: n is a billion, and log of n, which we computed earlier, is 30, so n log n is 30 billion. Now let's compare 30 billion to 10 to the 18th in terms of compute steps and how long those steps could actually take. A modern processor is capable of going well beyond a gigahertz, but let's use one gigahertz as our base reference: one billion operations every second. Assume the processor is fully utilized and the only thing it's doing is the steps required for the sorting operation. In the n squared case, a billion times a billion steps divided by one billion steps per second means it will take one billion seconds to sort one billion numbers. A billion seconds just so happens to be about 31 years. Now look at the n log n implementation: 30 billion steps divided by one billion steps per second is 30 seconds. So we're comparing 30 seconds for an algorithm implemented in n log n time versus 31 years for an algorithm implemented in n squared time: quite a difference.
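If you want to verify that arithmetic, here is a quick Python check using the lesson's own assumptions (one billion operations per second, and log taken base 2):

```python
import math

n = 10**9                 # one billion numbers to sort
ops_per_sec = 10**9       # a 1 GHz processor doing one step per cycle

n_squared_secs = n**2 / ops_per_sec            # 1e9 seconds
n_log_n_secs = n * math.log2(n) / ops_per_sec  # ~30 seconds

print(n_squared_secs / (60 * 60 * 24 * 365))   # ~31.7, about 31 years
print(n_log_n_secs)                            # ~29.9, about 30 seconds
```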