Big O
--
Welcome back readers! It’s Wednesday, April 14 and you’re getting the blog a little earlier than planned since I had some unexpected free time today. And as much as I wish this was going to be about bagels (I grew up on Long Island; true love never dies) it is not.
When I first graduated from Flatiron School and failed my mock interview (like, crashed and burned), I learned a new term: Big O. If this was mentioned in the Flatiron curriculum at all, it was glossed over quickly, because I have no notes about it and was completely thrown when the interviewer asked. So I quickly added it to my pile of “other programming fundamentals to learn” and got to work.
During that interview, the interviewer was throwing around terms like ‘quadratic’ and ‘factorial’ and ‘exponential,’ and I had some serious flashbacks to 10th grade math class. I also started to ask myself if this was really the right career path for me, because this concept, which the interviewer knew like second nature, seemed REALLY complicated at first. It’s one of several things I got out of that interview that set me down my current path: learning some of the fundamentals that I missed by doing a bootcamp instead of majoring in computer science like my mother told me to (I really should have listened).
Quick digression: when I was a student in bootcamp, Flatiron’s curriculum seemed very strange sometimes. It would plunge us right into the deep end with very complicated tasks or projects that literally kept me up at night. Then they’d end a section with something like “oh by the way, this is how you declare a function” or some other simple fundamental that would have been really useful a few lessons ago. At first this frustrated me, so I spoke to an instructor who told me that it was intentional and part of how they teach us to learn. It meant that once we suffered through the hard stuff, we’d see the basics and say “wow, that was easy” and rack up a quick win. In retrospect it really was a good confidence booster; for any student learning code, getting to a section of your curriculum that you feel you’re already comfortable with is huge. And going through this introductory computer science stuff feels a lot like those quick wins, just because of the solid foundation I built at Flatiron. Another ++ to you, FIS.
I read a lot of the conceptual stuff about Big O and it still seemed really complicated, but actually applying it with JavaScript made a world of difference, and I found it was a pretty easy concept to grasp. Once I got into the details, it all made a lot of sense, and before I knew it I was able to tell a constant runtime from a quadratic or a factorial one just by looking at the code. Nice!
So here’s what I learned.
Big O is one of several methods of asymptotic notation. Asymptotic notation describes a program’s efficiency by how its runtime grows as the input gets bigger. Runtime should be measured this way rather than by just timing something, because all computers are different and using time as your measurement will never give you consistent results (for example, my 10 year old HP sitting in the closet takes a lot longer to run anything than this beast that I just bought myself as a graduation present). Instead, this method describes runtime by counting how many instructions the computer has to execute based on the size of the input. Aside from big O, there are a few other notations: big Theta (Θ) and big Omega (Ω). They all do the same kind of thing, but for different situations. Big Theta is used when the runtime is the same in every case, so it’s bounded tightly from above and below. Big Omega (Ω) is used when you are trying to convey the lower bound (the best case scenario) of a runtime, and big O is used when conveying the upper bound (the worst case scenario). Because people usually want to know what the worst case scenario is, big O is the most commonly used.
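To make that concrete, here’s a tiny JavaScript sketch of the “count the instructions” idea (this is just my own toy example): the amount of work a function does depends on the size of the input, not on which computer runs it.

// Counts the "steps" taken for an input of size N.
// Each pass through the loop is one step, so an array of length N
// costs roughly N steps whether it runs on my old HP or my new machine.
const sumWithStepCount = (arr) => {
  let steps = 0
  let total = 0
  for (let i = 0; i < arr.length; i++) {
    total += arr[i]
    steps++
  }
  console.log(`${arr.length} elements took ${steps} steps`)
  return total
}

sumWithStepCount([1, 2, 3])               // 3 elements took 3 steps
sumWithStepCount(new Array(1000).fill(1)) // 1000 elements took 1000 steps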
Common runtimes include (I’ve sketched a few of these out in JavaScript right after the list):
- O(1): CONSTANT runtime. The most efficient. The program takes the same number of steps no matter how big the input is, like grabbing the first element of an array.
- O(log N): LOGARITHMIC runtime. The runtime grows in proportion to the log of the input size, like binary search cutting the remaining input in half on every pass.
- O(N): LINEAR runtime. The runtime grows in direct proportion to the input, like a simple ‘for’ loop iterating through an entire array.
- O(N log N): LOG-LINEAR runtime. Common for efficient sorting algorithms like merge sort.
- O(N²): QUADRATIC runtime. This is common when working with two dimensions, like a nested loop iterating over the same array twice.
- O(2^N): EXPONENTIAL runtime. This shows up in recursive functions that make more than one recursive call each time (more on that below).
- O(N!): FACTORIAL runtime. The least efficient. It shows up when the program has to check every possible ordering (permutation) of something.
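And here are the JavaScript sketches I promised. These are my own toy examples, but they show what a few of those shapes look like in practice.

// O(1): one step, no matter how long the array is.
const getFirst = (arr) => arr[0]

// O(log N): binary search on a sorted array.
// Every pass through the loop throws away half of what's left.
const binarySearch = (sortedArr, target) => {
  let low = 0
  let high = sortedArr.length - 1
  while (low <= high) {
    const mid = Math.floor((low + high) / 2)
    if (sortedArr[mid] === target) return mid
    if (sortedArr[mid] < target) low = mid + 1
    else high = mid - 1
  }
  return -1
}

// O(N²): a loop nested inside a loop over the same array,
// so N elements cost roughly N * N comparisons.
const hasDuplicate = (arr) => {
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) return true
    }
  }
  return false
}

getFirst takes the same single step whether the array has three elements or three million, while hasDuplicate gets painful fast as the array grows.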
So, keeping all that in mind, let’s look at the function I wrote in last week’s blog post. What kind of runtime are we looking at here?
const myFunction = (arr) => {
  let res = arr.pop()
  if (arr.length > 0) {
    myFunction(arr)
  } else {
    console.log("Oh dear. The array is empty.")
  }
}
Well, myFunction is recursive, and my first instinct was to say O(2^N), exponential. But recursion by itself doesn’t make something exponential. Each call pops one element off the array and makes exactly one recursive call, so the function runs once per element: that’s O(N), linear runtime.
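For contrast, the classic textbook example of recursion that really is exponential is naive Fibonacci (a standard example, not something from my earlier posts). Every call branches into two more calls, so the number of calls roughly doubles as N grows:

// O(2^N): each call spawns two more recursive calls,
// so the call tree explodes as n gets bigger.
const fib = (n) => {
  if (n < 2) return n
  return fib(n - 1) + fib(n - 2)
}

console.log(fib(10)) // 55, but it takes 177 calls to get there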