🦁 Conquering the Big-O Beast

Now that we have a basic grasp of Big-O notation from the previous post (which I invite you to review), we can move on to slightly more expensive algorithms and see how their cost is evaluated more broadly.

Imagine you're organizing a large party and you have a guest list! Each guest has a specific age, and you want to sort the list by age, from youngest to oldest. You could opt for a brute-force strategy: comparing each guest's age with everyone else's, one by one.

This method in programming is known as "Bubble Sort" 📋, and it's famous for being pretty slow... well, yes, horrifyingly slow in terms of performance. Let's see its implementation:

1.  function bubbleSort(peopleInvited) {
2.    for (let i = 0; i < peopleInvited.length; i++) {
3.      // Notice that j < (length - i - 1)
4.      for (let j = 0; j < peopleInvited.length - i - 1; j++) {
5.        // Compare the adjacent positions
6.        if (peopleInvited[j] > peopleInvited[j + 1]) {
7.          // Swap the ages
8.          const tmp = peopleInvited[j];
9.          peopleInvited[j] = peopleInvited[j + 1];
10.         peopleInvited[j + 1] = tmp;
11.       }
12.     }
13.   }
14.
15.   return peopleInvited;
16. }

In the code example from the previous post, we saw the implementation of the find(array, value) method, which had a single for loop, so its Big-O notation was quite simple. In this example, however, we have two for loops, and they're nested inside each other.

What does this mean for our performance?

🎥 Let's break it down a bit:

  • Line 2: This is like counting all the party guests, which we'll call n (the number of guests). This is O(n).
  • Line 4: Here we count again, but each pass skips the guests that have already bubbled to the end; on the first pass this runs n-1 times, i.e., O(n-1).
  • Lines 6, 8, 9, 10, and 15: We might think these run for every item on the guest list (which, in the worst case, is true), but we evaluate each line's runtime in isolation: taken on its own, a single execution of the if statement does a fixed amount of work. This is what we've called 'constant time', written O(1). As the size of the data (n) increases, the work done by one execution of this snippet does not change.

😕 I know the phrase "in isolation" may cause some confusion, so here's a brief explanation before we continue:

In an isolated analysis, the if statement and the lines inside its block (lines 8, 9, and 10) have a constant runtime, i.e., O(1). This means that the number of operations a single execution of these lines performs does not depend on the size of the input (in this case, the length of peopleInvited).
However, this does not mean that the if statement and its block run only once throughout the entire algorithm. In fact, they may run many times, depending on how the elements in the peopleInvited array are arranged.

✍️ To be more specific:

  • The inner loop on line 4 runs n-1 times on the first iteration, then n-2 times, then n-3 times, and so on.
  • The if statement on line 6 is evaluated on each iteration of the inner loop. That is, it is evaluated (n-1) + (n-2) + (n-3) + ... + 1 times in total, which equals n(n-1)/2 times.
  • Lines 8, 9, and 10 (which perform the swapping of elements) only run when the condition on line 6 is true, which can vary.
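To see that n(n-1)/2 total in action, here's a sketch: a hypothetical instrumented variant of bubbleSort (the counter and the name bubbleSortCount are my additions, not part of the original listing) that counts how many times the comparison on line 6 is evaluated.

```javascript
// Instrumented bubbleSort: counts evaluations of the comparison (line 6).
function bubbleSortCount(peopleInvited) {
  let comparisons = 0;
  for (let i = 0; i < peopleInvited.length; i++) {
    for (let j = 0; j < peopleInvited.length - i - 1; j++) {
      comparisons++; // one evaluation of the if condition
      if (peopleInvited[j] > peopleInvited[j + 1]) {
        const tmp = peopleInvited[j];
        peopleInvited[j] = peopleInvited[j + 1];
        peopleInvited[j + 1] = tmp;
      }
    }
  }
  return comparisons;
}

console.log(bubbleSortCount([35, 22, 45, 30, 26])); // 10
```

For 5 guests the passes do 4 + 3 + 2 + 1 = 10 comparisons, which matches n(n-1)/2 = 5·4/2.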

🎯 When talking about Big-O notation, we are interested in how the runtime grows as the size of the input increases.

Now, the fun part is putting it all together. But before doing that, we need to remember a crucial rule of Big-O notation:

  • Simplify, simplify, and simplify. We care about the overall long-term trend, not the algorithm's minuscule details.

🤝 So, when simplifying, we follow these rules:

  1. O(5) is still constant time and is written as O(1).
  2. O(2n) is still linear time and is written as O(n).
  3. O(n - 3) => O(n)
  4. O(n^3 + n^2) => O(n^3)
  5. O(150x + 650y) => O(x + y). Here "x" and "y" are independent inputs, so we drop the constants but can't combine or discard either term.
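As a quick numeric illustration of rule 4 (a toy calculation of my own, not part of the rules themselves): as n grows, the n^2 term becomes a vanishing fraction of n^3 + n^2, which is why we can drop it.

```javascript
// How much of n^3 + n^2 does the smaller term account for as n grows?
for (const n of [10, 100, 1000]) {
  const total = n ** 3 + n ** 2;
  const share = (n ** 2) / total;
  console.log(`n=${n}: n^2 is ${(share * 100).toFixed(2)}% of the total`);
}
```

By n = 1000 the n^2 term contributes less than 0.1% of the work, so the long-term trend is O(n^3).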

So, back to our party. Line 4's O(n-1) simplifies to O(n) because we're dropping the less significant term. Lines 6, 8, 9, 10, and 15 are constant-time terms O(1), and all are less significant, so we can also discard them. We're left with the O(n) from the outer loop and the O(n) from the inner loop.

Now, what do we do with these two n values? Do we add them or multiply them? Well, that depends on whether the loops are "nested" or "adjacent".

  • If they're nested, like in our sorting algorithm, we multiply them.
  • If they're adjacent, we'd add them together.
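A minimal sketch of that difference (the step counters are my own additions, just for illustration):

```javascript
const guests = [35, 22, 45, 30, 26]; // n = 5

// Adjacent loops: one runs after the other → O(n) + O(n) = O(n)
let adjacentSteps = 0;
for (let i = 0; i < guests.length; i++) adjacentSteps++;
for (let j = 0; j < guests.length; j++) adjacentSteps++;
console.log(adjacentSteps); // 10, i.e., 2n → simplifies to O(n)

// Nested loops: the inner loop runs in full on every outer pass → O(n * n)
let nestedSteps = 0;
for (let i = 0; i < guests.length; i++) {
  for (let j = 0; j < guests.length; j++) nestedSteps++;
}
console.log(nestedSteps); // 25, i.e., n^2 → O(n^2)
```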

Therefore, following all these rules, the complexity of the algorithm for arranging the guests by age, i.e., our Bubble Sort or brute force algorithm, is O(n*n), which we can also write as O(n^2).

🧠 This means that if you had 100 guests at your party, you would make on the order of 10,000 comparisons in the worst-case scenario (100 guests * 100 comparisons per guest = 10,000; the exact count, n(n-1)/2, is 4,950, but in Big-O terms the order of magnitude is what matters). And with 1,000 guests, you would be looking at around 1,000,000 comparisons! As you can see, Bubble Sort's performance deteriorates very quickly as the number of guests increases.

🗞️ Additionally, I provide you with an example of the algorithm so that you can better visualize how it works:

  • At each "step" of the algorithm, the ages of two consecutive guests are compared. If the first guest is older than the second, you swap their positions in the list. In this way, the oldest guest "bubbles" up to the end of the list as each step is executed. This process is repeated from the beginning of the list until no more swaps are needed, which indicates that the list is completely sorted.

Here is the step-by-step process for a list of 5 guests with ages (35, 22, 45, 30, 26):

1️⃣ First Step

(35, 22, 45, 30, 26) → (22, 35, 45, 30, 26) // 35 and 22 are swapped because 35 > 22

(22, 35, 45, 30, 26) → (22, 35, 45, 30, 26) // 35 and 45 are not swapped because 35 < 45

(22, 35, 45, 30, 26) → (22, 35, 30, 45, 26) // 45 and 30 are swapped because 45 > 30

(22, 35, 30, 45, 26) → (22, 35, 30, 26, 45) // 45 and 26 are swapped because 45 > 26

2️⃣ Second Step

(22, 35, 30, 26, 45) → (22, 35, 30, 26, 45) // 22 and 35 are not swapped because 22 < 35

(22, 35, 30, 26, 45) → (22, 30, 35, 26, 45) // 35 and 30 are swapped because 35 > 30

(22, 30, 35, 26, 45) → (22, 30, 26, 35, 45) // 35 and 26 are swapped because 35 > 26

With the optimized inner loop (j < length - i - 1), the 45 at the end is already in place, so the second pass stops here.

3️⃣ Third Step

(22, 30, 26, 35, 45) → (22, 30, 26, 35, 45) // 22 and 30 are not swapped because 22 < 30

(22, 30, 26, 35, 45) → (22, 26, 30, 35, 45) // 30 and 26 are swapped because 30 > 26

(22, 26, 30, 35, 45)
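Note that the walkthrough above says the process repeats "until no more swaps are needed", while the implementation shown earlier always runs every pass. A common variant adds that early exit; here's a sketch (the swapped flag and the name bubbleSortEarlyExit are my own, not from the original listing):

```javascript
// Bubble Sort with early exit: stop as soon as a full pass makes no swaps,
// which means the list is already sorted.
function bubbleSortEarlyExit(peopleInvited) {
  for (let i = 0; i < peopleInvited.length; i++) {
    let swapped = false;
    for (let j = 0; j < peopleInvited.length - i - 1; j++) {
      if (peopleInvited[j] > peopleInvited[j + 1]) {
        const tmp = peopleInvited[j];
        peopleInvited[j] = peopleInvited[j + 1];
        peopleInvited[j + 1] = tmp;
        swapped = true;
      }
    }
    if (!swapped) break; // no swaps this pass → already sorted
  }
  return peopleInvited;
}

console.log(bubbleSortEarlyExit([35, 22, 45, 30, 26])); // [22, 26, 30, 35, 45]
```

On an already-sorted list this variant makes a single pass, giving a best case of O(n) instead of always O(n^2).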

This is why, even though Bubble Sort is an algorithm that is easy to understand and implement, it is rarely used in practice for large lists. There are other more efficient sorting algorithms, such as Quicksort, Mergesort, and Heapsort, which have better time complexity in the worst-case scenario.
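In JavaScript specifically, you would normally just reach for the built-in Array.prototype.sort with a numeric comparator, which engines typically implement with an O(n log n) algorithm:

```javascript
// Sorting the guest list with the built-in sort and a numeric comparator.
const guests = [35, 22, 45, 30, 26];
guests.sort((a, b) => a - b); // youngest to oldest
console.log(guests); // [22, 26, 30, 35, 45]
```

The comparator is needed because without one, sort compares elements as strings.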

And there you have it! Now you can show off at the next programmers' party by talking about Big-O notation and its application in Bubble Sort. And if someone asks you how you learned it, you can tell them it was at a Party with guests 🥳.

I hope to see you soon in the next edition where we will look at the different types of time complexities in more detail. 👋