⏳ Algorithmic Complexity

📖 Introduction

Designing and building algorithms is an integral part of software development. While the primary goal of every algorithm is to solve a particular problem, it is essential to consider how an algorithm performs its work and how many resources it needs to do it, as this ultimately translates into costs for our clients. These resources include computation time, referring to how long it takes to run an algorithm, and memory space, referring to how much storage the algorithm requires. This need to analyze the efficiency of algorithms gives rise to the study of algorithmic complexity.

Algorithmic complexity is a metric that allows us to quantify an algorithm's performance based on the size of its input. This is crucial for understanding and predicting how our algorithm will behave with different volumes of data. For instance, an algorithm that performs well with small amounts of data may become inefficient when given a much larger dataset, and the input doesn't have to be on the order of millions of items before the algorithm stops performing as well as we expect.

Additionally, understanding algorithmic complexity is of vital importance during the technical interview process. Interviewers often ask about the complexity of algorithms to evaluate a candidate's ability to write efficient code and assess their capacity to make informed decisions about trade-offs between time and space.

In this post, we will introduce Big-O notation, which is used to describe algorithmic complexity, and we will also cover the Big-Theta and Big-Omega notations. I hope this post helps you improve your understanding of these fundamental concepts in computer science. Let's get to it!

⏳ Algorithmic Complexity

Every algorithm or set of instructions requires time and space to execute, so when we design algorithms to solve our problems, it is important to consider their temporal and spatial complexity.

We typically only hear about Big-O notation when we talk about complexity and algorithmic performance, and... yes, there's a reason for this! But it's also important to understand that Big-θ (Big-Theta) and Big-Ω (Big-Omega) exist.

Big-O notation describes an upper bound on an algorithm's performance or complexity. It tells us how much time or space an algorithm can need in the worst-case scenario.

On the other hand, Big-Ω (Big-Omega) is used to describe the best-case time and space complexity of an algorithm. This asymptotic lower bound refers to the minimum time or space the algorithm requires. As for Big-θ (Big-Theta), it describes the average time and space complexity of an algorithm, meaning how much time or space it requires on average to run.

So as the size of our data increases, how does it affect the performance of the algorithm or its space requirements?

Let's illustrate this with a simple example.

const find = (array, value) => {
  // Check each element in order until we find a match.
  for (const element of array) {
    if (element === value) {
      return true;
    }
  }

  // We reached the end of the array without finding the value.
  return false;
};

In this function, we search for a specific value. The first argument is the collection of values to search through, and the second argument is the value we're looking for.

We're using a for...of loop to iterate through each value in the collection and check whether it equals the target value. If the value is in the array, the function returns true; otherwise, it returns false, as shown in the following code snippets.

const array = [1, 2, 3, 4, 5];
const value = 3;

const result = find(array, value);

console.log(result); // logs true in the console

const array = [1, 2, 3];
const value = 4;

const result = find(array, value);

console.log(result); // logs false in the console

⌚ How long will it take for this algorithm to run? Well, we have to ask ourselves: what's our worst-case scenario?

Big-O 1️⃣

In the worst case (Big-O), the value we are looking for is not found in the array, and we have to go through all the elements of the array. This leads us to a time complexity of O(n). As the length of the array increases, the time required to run this algorithm increases proportionally.

If we have an array with 10 elements, the maximum number of times we will iterate through our array searching for this element is precisely 10. Similarly, if we have an array with 100 elements, the maximum number of times we will iterate through our array looking for this element is 100.

This type of algorithm is known as a linear time algorithm and is represented as O(n), where the execution time of the algorithm grows proportionally to the input size.
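
To make this more concrete, here is a minimal sketch: a variant of find that counts its comparisons (the findWithCount name and the sample inputs are just illustrative, not part of the original example). When the value is missing, the comparison count equals the array length, which is exactly the linear worst case described above.

const findWithCount = (array, value) => {
  let comparisons = 0;

  for (const element of array) {
    comparisons++;
    if (element === value) {
      return { found: true, comparisons };
    }
  }

  return { found: false, comparisons };
};

// Worst case: the value is not in the array, so every element gets checked.
console.log(findWithCount([1, 2, 3, 4, 5], 99)); // { found: false, comparisons: 5 }

const hundredElements = Array.from({ length: 100 }, (_, i) => i);
console.log(findWithCount(hundredElements, -1)); // { found: false, comparisons: 100 }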

Big-Ω 2️⃣

In the best case (Big-Ω), the value we are looking for is found in the first element of the array. This would give us a time complexity of O(1).
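
As a quick sanity check, here is what that best case looks like with the same find function (the sample array is purely illustrative): the target sits at index 0, so the loop stops after a single comparison no matter how long the array is.

const bigArray = [7, 1, 2, 3, 4, 5]; // imagine thousands more elements here

// Best case: the value is the very first element, so only one comparison happens.
console.log(find(bigArray, 7)); // logs true after checking a single element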

Big-θ 3️⃣

For the average case (Big-θ), we assume that on average we have to search through half of the array before finding our value. That works out to roughly n/2 comparisons, but constants are ignored in asymptotic notation, so the average case is still linear: O(n).
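
One way to convince yourself of that n/2 figure is to average the comparison count over every possible position of the target. A small sketch, reusing the illustrative findWithCount helper from above, could look like this:

const numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];

// Search for each element once and average the number of comparisons made.
const totalComparisons = numbers.reduce(
  (sum, value) => sum + findWithCount(numbers, value).comparisons,
  0
);

console.log(totalComparisons / numbers.length); // 5.5, roughly n / 2 for n = 10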

While it is more common to talk about Big-O in the industry and in technical interviews, understanding Big-Ω and Big-θ will give you a complete picture of how algorithms perform in different situations.

You shouldn't need to know Big-θ and Big-Ω for a technical interview, but it's good to be familiar with them.

In future posts, we will explore more complex examples in detail and look at how to identify their algorithmic complexity. I hope you join the series!

Until next time! 👋