Understanding Big O Notation: Simplifying Time and Space Complexity in Algorithms
May 16, 2023 / 24 min read
In the world of computer science and programming, efficiency is key. As developers, we strive to create algorithms that can solve problems quickly and optimize the use of system resources. This is where Big O notation, time complexity, and space complexity come into play. These concepts provide us with a standardized way to measure and analyze the performance of algorithms, helping us understand how they scale with increasing input sizes.
In this article, we will delve into the fundamentals of Big O notation, demystify time and space complexity, and explore their practical implications in designing efficient algorithms. Whether you are a beginner exploring the world of algorithms or an experienced programmer looking to solidify your understanding, this article will serve as a comprehensive guide to unraveling the mysteries of Big O notation and its significance in the realm of computer science.
Let's dive in and demystify the complexities behind measuring algorithmic efficiency.
Complexity
When we talk about sorting cards, we mean putting them in a specific order, like arranging them from the smallest to the largest number or sorting them by their suits (hearts, diamonds, clubs, spades).
In almost every computation, a variety of arrangements for the processes is possible. It is essential to choose that arrangement which shall tend to minimize the time necessary for the calculation. - ADA LOVELACE
Now, imagine you have a deck of 26 shuffled cards and you want to sort them. The time it takes to sort them depends on the method or algorithm you use. Let's say you use a simple method called "Bubble Sort".
Bubble Sort works by comparing two adjacent cards and swapping them if they're not in the correct order. Then, it moves to the next pair and continues this process until all the cards are sorted.
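Here is a minimal sketch of that idea (the bubble_sort function and the sample numbers are just for illustration):

```python
def bubble_sort(cards):
    # Repeatedly sweep through the pile, swapping adjacent cards
    # that are out of order, until a full sweep needs no swaps.
    n = len(cards)
    for sweep in range(n - 1):
        swapped = False
        for i in range(n - 1 - sweep):
            if cards[i] > cards[i + 1]:
                cards[i], cards[i + 1] = cards[i + 1], cards[i]
                swapped = True
        if not swapped:  # already sorted, stop early
            break
    return cards

print(bubble_sort([7, 3, 9, 1, 5]))  # [1, 3, 5, 7, 9]
```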
Sorting 26 cards using Bubble Sort might take you a certain amount of time, let's say 5 minutes.
Now, if you had a deck of 52 cards instead of 26, would it take twice as long? Not exactly. Bubble Sort's effort grows faster than the number of cards, roughly with its square, so sorting 52 cards might take closer to 20 minutes: about four times as long rather than double.
What if you had a thousand decks of cards? Sorting them all with Bubble Sort would take far longer, perhaps hours or even days, depending on how fast you are. As the number of cards grows, the time it takes to sort them grows dramatically.
When we talk about algorithms, we want them to be fast and efficient. In simpler terms, we want to find algorithms that are fast and don't need too much computing power. We keep an eye on the number of operations in our algorithms because some require more operations as the problem gets bigger, and we prefer ones that require fewer operations to solve the problem efficiently.
To avoid bad surprises when our problem size grows, we find the algorithm's time complexity. In order to do that, we need to learn how to count and interpret complexities and express their growth with fancy Big-Os.
Hope for the best and prepare for the worst
When we sort a pile of cards, it can be faster or slower, depending on how the cards are arranged at the beginning.
Imagine you have a pile of cards, and you want to put them in order. If the cards are already mostly in the right order, it's faster to finish sorting them because we don't have to make too many changes.
But if the cards are all mixed up, it takes more time and effort to sort them because we have to move many cards around to get them in the right order.
Now, sorting algorithms can behave differently based on how the cards are arranged. We look at three different situations:
- Best Case: This is when the cards are already sorted or almost sorted. In this case, the sorting algorithm can finish quickly because there isn't much work to do. It's like having a head start on the sorting process.
- Worst Case: This happens when the cards are in reverse order, like having the biggest card first, then the second-biggest, and so on. In this case, the sorting algorithm needs to make a lot of changes and moves to sort the cards. It takes longer because we have to rearrange them completely.
- Average Case: This is what we usually encounter in real life, where the cards are shuffled randomly. The average case represents a typical scenario. Sorting algorithms are designed to work efficiently on average, so they can handle most random piles of cards without taking too much time.
What is Big-O notation?
Let's talk about the Big O notation. Big O notation is a way of measuring how quickly an algorithm will run. It's based on the idea that the time it takes to run an algorithm will increase as the amount of data it has to process increases. Let's delve into the classic definition:
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. It is often used to describe how an algorithm's running time or space requirements grow relative to its input size. - Wikipedia
Big O notation helps us determine whether a given algorithm is scalable, that is, whether it will keep performing well as the input grows really large. Certain operations can be more or less costly depending on the data structure we are using.
For instance, accessing an array element by its index is super fast, but arrays have a fixed length, and if you want to constantly add or remove items, they have to get resized, which becomes costly as the input grows very large. If that's what we need to do, we can use another data structure called a linked list. Linked lists can grow or shrink very quickly, but accessing a linked list element by its index is slow. Reasoning about trade-offs like this is exactly why you need to learn Big O notation.
What is Time and Space Complexity?
Time complexity and space complexity are two measures of how well an algorithm performs. Time complexity measures how long it takes an algorithm to run, while space complexity measures how much memory it uses. They are expressed using the big O notation.
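To make the difference concrete, here is a rough sketch (both function names are made up for this example): the two functions below take the same O(n) time, but the first needs only O(1) extra memory, while the second builds a whole new list and therefore uses O(n) extra memory.

```python
def total(numbers):
    # O(n) time, O(1) extra space: only a single running sum is stored
    result = 0
    for x in numbers:
        result += x
    return result

def doubled(numbers):
    # O(n) time, O(n) extra space: a new list as large as the input is built
    return [2 * x for x in numbers]

print(total([1, 2, 3, 4]))    # 10
print(doubled([1, 2, 3, 4]))  # [2, 4, 6, 8]
```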
Some algorithms always run for a constant duration regardless of input size: they're O(1). For example, when checking if a number is odd or even, we just look at its last digit, and boom, problem solved. No matter how big the number is. We'll see more O(1) algorithms. They're amazing, but first let's see which algorithms are not amazing.
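As a minimal sketch of that idea (the is_even helper is illustrative, not from the article, and we treat the modulo as a single operation):

```python
def is_even(number):
    # One modulo operation, however large the number is
    return number % 2 == 0

print(is_even(7))                     # False
print(is_even(12345678901234567890))  # True
```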
Counting Time
The time complexity of an algorithm can be determined by analyzing the number of fundamental operations it performs for a hypothetical input of size n. Let's illustrate this concept using Selection Sort, a sorting algorithm characterized by its nested loop structure. In Selection Sort, an outer for loop keeps track of the current position being sorted, while an inner for loop identifies the item that should occupy that position.
```python
def selection_sort(lst):
    for current in range(len(lst) - 1):
        smallest = current
        for i in range(current + 1, len(lst)):
            if lst[i] < lst[smallest]:
                smallest = i
        lst[current], lst[smallest] = lst[smallest], lst[current]
```
For each iteration of the outer loop, the inner loop scans through the remaining unsorted elements to find the minimum or maximum value, depending on the sorting order. This process involves comparing the current element with the other elements and potentially swapping their positions.
The number of comparisons and swaps performed by Selection Sort depends on the size of the input. Let's denote the input size as n. In the worst-case scenario, the outer loop makes (n-1) iterations since the last element doesn't require sorting. For each iteration of the outer loop, the inner loop executes (n-1), (n-2), (n-3), and so on, comparisons, respectively. Consequently, the total number of comparisons can be expressed as the sum of (n-1), (n-2), (n-3), ..., 2, 1, which simplifies to (n * (n-1)) / 2.
Similarly, we can count the swaps. In this implementation, the inner loop only records the index of the smallest remaining element; the actual swap happens once at the end of each outer-loop pass, so there are at most (n-1) swaps. The comparisons, not the swaps, dominate the running time.
Hence, the time complexity of Selection Sort can be approximated as O(n²), since the number of basic operations grows quadratically with the input size. It's worth noting that Selection Sort performs the same comparisons no matter how the input is initially ordered, so its best, average, and worst cases are all quadratic.
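If you want to check the formula empirically, here is a small sketch that runs the same loops as selection_sort above but counts the comparisons, then prints n(n-1)/2 next to the count (the counting function is just for illustration):

```python
def count_selection_sort_comparisons(lst):
    lst = list(lst)
    comparisons = 0
    for current in range(len(lst) - 1):
        smallest = current
        for i in range(current + 1, len(lst)):
            comparisons += 1  # one comparison per inner-loop step
            if lst[i] < lst[smallest]:
                smallest = i
        lst[current], lst[smallest] = lst[smallest], lst[current]
    return comparisons

for n in (5, 10, 100):
    print(n, count_selection_sort_comparisons(range(n, 0, -1)), n * (n - 1) // 2)
# Both columns match: 10, 45, and 4950 comparisons
```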
By analyzing the time complexity of algorithms like Selection Sort, we can gain insights into their efficiency and scalability. This understanding enables us to make informed decisions about algorithm selection, particularly when dealing with larger input sizes, where performance becomes a critical factor.
Understanding the growth
Imagine we have an algorithm with a significantly large input size, and we decide to increase the size even further. In such cases, we don't necessarily need to know the exact details of how the execution time, denoted as T(n), changes with respect to the input size. Instead, we can focus on identifying the fastest-growing term within T(n), which is known as the dominant term.
We've seen that Selection Sort performs n(n-1)/2 comparisons plus at most (n-1) swaps, so its operation count is roughly T(n) = (n² + n - 2)/2. The fastest-growing term is n² (the constant factor 1/2 doesn't change how it grows), therefore T(n) grows like n². Now suppose the input becomes ten times larger:
T(10n) / T(n) ≈ (10n)² / n² = 100.
Whatever the sort takes now, a ten-times-larger input will take about 100 times as long: if it takes 2 hours today, it will take approximately 100 × (2 hours) = 200 hours! What if we had used a different sorting method? For example, Bubble Sort's cost is roughly T(n) = 0.5n² + 0.5n. The fastest-growing term then gives T(n) ≈ 0.5n², so it will still grow like n².
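A quick numerical sketch shows the dominant term taking over: using the operation count derived above for Selection Sort, the ratio T(10n)/T(n) climbs toward 100 as n grows.

```python
def T(n):
    # Comparisons plus swaps for Selection Sort, as derived above
    return (n * n + n - 2) / 2

for n in (10, 100, 1000, 10000):
    print(n, round(T(10 * n) / T(n), 2))
# The ratio approaches 100: about 93.5, 99.1, 99.9, ...
```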
Six major types of complexities
Best Constant Time: O(1)
Imagine you have a magic box with a secret compartment. No matter how many items are inside the box, you can always open the secret compartment in the same amount of time. Whether there are 10 toys or 100 toys, the time it takes to open the secret compartment is constant: it never changes.
Constant time, or O(1), means that an algorithm will always take the same amount of time to run, no matter how big the input is. For example, if you have an array of numbers and you want to find the first number, it will always take the same amount of time, no matter how many numbers are in the array.
Here's an example using an array in Python to illustrate constant time complexity O(1):
Constant Time Complexity.py
```python
my_array = [3, 5, 7, 9, 11]  # An array with 5 elements

print(my_array[2])  # Accessing the element at index 2
```
We have an array called my_array with 5 elements. Each element in the array has a specific index starting from 0. To access an element in the array, we use its index inside square brackets, like my_array[2]. In this case, we are accessing the element at index 2, which is the number 7.
No matter how large the array is, accessing a specific element using its index takes the same amount of time. It's a constant time operation because the time it takes to access the element is not affected by the size of the array.
Good Logarithmic Time: O(log n)
Imagine you have a phonebook with thousands of names, and you want to find a specific name. Instead of looking through every name one by one, you decide to divide the phonebook in half and check if the name you're looking for is in the first or second half. If it's in the first half, you repeat the process by dividing that half in half again. You keep doing this until you find the name you're looking for.
An algorithm with logarithmic time complexity takes an amount of time proportional to the logarithm of its input size. This means that the number of operations the algorithm performs grows very slowly as the input size grows.
A simple example of a logarithmic time algorithm is binary search. Binary search is a way of finding a specific value in a sorted list. To do this, the algorithm starts in the middle of the list and then compares the value it is looking for to the value in the middle. If the value is smaller than the value in the middle, the algorithm searches the left half of the list. If the value is larger than the value in the middle, the algorithm searches the right half of the list. The algorithm continues this process, dividing the list in half each time, until it finds the value it is looking for or until it has searched the entire list.
The time complexity of binary search is O(log n), where n is the number of elements in the list. This is because the algorithm needs at most about log₂ n comparisons to find the value it is looking for, even for very large lists.
Here is an example of a Python code that implements binary search:
Binary Search.py
```python
def binary_search(list, value):
    low = 0
    high = len(list) - 1
    while low <= high:
        mid = (low + high) // 2
        if list[mid] == value:
            return mid
        elif list[mid] < value:
            low = mid + 1
        else:
            high = mid - 1
    return -1

numbers = [2, 5, 7, 9, 12, 15, 18, 21, 25, 30]  # A sorted list of numbers

target_number = 18

result = binary_search(numbers, target_number)

if result != -1:
    print("The target number is found at index:", result)
else:
    print("The target number is not found.")
```
This code works by first setting the low and high pointers to the beginning and end of the list, respectively. Then, the code repeatedly does the following:
- Calculates the middle index of the list.
- Compares the value at the middle index to the value being searched for.
- If the value at the middle index is equal to the value being searched for, the code returns the middle index.
- If the value at the middle index is less than the value being searched for, the code sets the low pointer to the middle index + 1.
- If the value at the middle index is greater than the value being searched for, the code sets the high pointer to the middle index - 1.
- The code continues this process until the value being searched for is found or until the low pointer is greater than the high pointer. If the value being searched for is not found, the code returns -1.
As you can see, the number of comparisons the code performs is at most about log₂ n, where n is the number of elements in the list. This is why binary search is a logarithmic time algorithm.
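To get a feel for how small that number stays, here is a quick sketch printing a rough upper bound on the comparisons binary search needs for different list sizes:

```python
import math

# Rough upper bound on binary search comparisons for different list sizes
for n in (10, 1000, 1_000_000, 1_000_000_000):
    print(n, math.ceil(math.log2(n)) + 1)
# 10 -> 5, 1000 -> 11, 1_000_000 -> 21, 1_000_000_000 -> 31
```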
Fair Linear Time: O(n)
Imagine you have a set of toys, and you want to count how many toys you have. To count them, you need to look at each toy one by one. The time it takes to count the toys increases as the number of toys increases. If you have 10 toys, it takes less time compared to having 100 toys.
Linear time O(n) is a measure of how long an algorithm takes to run. It is called linear because the time it takes to run increases linearly with the size of the input. For example, if an algorithm takes 1 second to run on an input of size 10, it will take 2 seconds to run on an input of size 20, and so on.
Linear Time Complexity.py
```python
def count_toys(toys):
    count = 0
    for toy in toys:  # Go through each toy one by one
        count += 1    # Increment the count by 1 for each toy

    return count

toys_list = ['car', 'ball', 'robot', 'puzzle']

number_of_toys = count_toys(toys_list)

print("The number of toys is:", number_of_toys)
```
In this code, we have a function called count_toys() that takes a list of toys as input. It goes through each toy in the list using a loop and increments a counter by 1 for each toy. Finally, it returns the total count of toys.
The time it takes to count the toys increases linearly with the number of toys. If you have 4 toys, it will go through the loop 4 times. If you have 10 toys, it will go through the loop 10 times. So, the time it takes to count the toys is directly proportional to the number of toys, and we call it linear time complexity O(n).
Bad Loglinear Time: O(n log n)
Imagine you have a stack of index cards with names written on them, and you want to sort them in alphabetical order. Instead of comparing every pair of cards, which would take a long time, you decide to divide the stack into smaller stacks, sort each smaller stack individually, and then merge them back together in the correct order.
A loglinear algorithm takes time proportional to n log n: the size of its input multiplied by the logarithm of that size. For example, if such an algorithm takes about 100 milliseconds to sort a list of 1,000 items, it would take roughly 7 milliseconds to sort a list of 100 items and well under a millisecond for 10 items: the time grows a bit faster than the input size, but nowhere near as fast as it would for a quadratic algorithm.
Here is a simple Python code that sorts a list of numbers in loglinear time:
Loglinear Time Complexity.py
```python
def merge_sort(arr):
    if len(arr) <= 1:
        return arr

    mid = len(arr) // 2
    left_half = arr[:mid]
    right_half = arr[mid:]

    left_half = merge_sort(left_half)
    right_half = merge_sort(right_half)

    return merge(left_half, right_half)

def merge(left, right):
    result = []
    i = 0
    j = 0

    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1

    result.extend(left[i:])
    result.extend(right[j:])

    return result

cards = ["Charlie", "Alice", "Bob", "Eve", "David"]

sorted_cards = merge_sort(cards)

print("The sorted cards are:", sorted_cards)
```
In this code, we have two functions: merge_sort() and merge(). The merge_sort() function recursively divides the input list into smaller halves until we reach individual elements. Then, it uses the merge() function to combine and sort the divided halves back together. The merge sort algorithm leverages the divide-and-conquer technique to efficiently sort the list.
The time it takes to perform merge sort grows a little faster than the size of the list, but far slower than quadratically: its complexity is O(n log n). For example, with 8 cards, merge sort keeps halving the list, giving log₂ 8 = 3 levels of splitting, and every level has to merge all 8 elements back together, for roughly 8 × 3 = 24 basic operations.
This algorithm works by recursively splitting the list in half, sorting each half, and then merging the sorted halves back together. The time it takes to sort the list is proportional to the number of comparisons it makes, and that number is on the order of n log₂ n.
For example, if the list has 1,000 items, the algorithm makes roughly 10,000 comparisons. If the list has 100 items, it makes roughly 700 comparisons. And if the list has 10 items, it makes roughly 30 comparisons.
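Those figures come from evaluating n × log₂(n) directly; a quick sketch:

```python
import math

# Approximate comparison counts for merge sort at different list sizes
for n in (10, 100, 1000):
    print(n, round(n * math.log2(n)))
# 10 -> 33, 100 -> 664, 1000 -> 9966
```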
Loglinear time is very efficient for sorting lists. It is much faster than quadratic time algorithms such as bubble sort and selection sort, and for comparison-based sorting it is close to the best achievable; other popular sorts such as quicksort also reach O(n log n) on average.
It's a good choice when you need to sort a list of items quickly, especially when the list is very large.
Worst Quadratic Time: O(n²)
Imagine you have a set of books, and you want to compare each book with every other book to see if there are any duplicate titles. To do this, you pick one book and compare it with all the other books, then move on to the next book and repeat the process. The time it takes to compare all possible pairs of books increases quadratically as the number of books increases.
Quadratic time is a measure of how long an algorithm takes to run, as a function of the size of its input. An algorithm with quadratic time complexity will take an amount of time that is proportional to the square of the size of its input. For example, an algorithm that takes 1 second to run on an input of size 1, will take 4 seconds to run on an input of size 2, and 9 seconds to run on an input of size 3.
Here's a simple example in Python to demonstrate quadratic time complexity:
Quadratic Time Complexity.py
```python
def find_duplicates(books):
    duplicates = []

    for i in range(len(books)):
        for j in range(i + 1, len(books)):
            if books[i] == books[j]:
                duplicates.append(books[i])

    return duplicates

book_titles = ["Harry Potter", "The Hobbit", "Harry Potter", "To Kill a Mockingbird", "The Lord of the Rings"]

duplicate_books = find_duplicates(book_titles)  # Calling the find_duplicates function

print("The duplicate books are:", duplicate_books)
```
In this code, we have a function called find_duplicates() that takes a list of book titles as input. It uses a nested loop to compare each book title with all other titles. If a duplicate is found, it adds it to the duplicates list. Finally, it returns the list of duplicate books.
The time it takes to find duplicates increases quadratically as the number of books increases. For example, if we have 5 books, we need to perform 10 comparisons. If we have 10 books, we need to perform 45 comparisons. The number of comparisons grows quickly as the square of the number of books. This is why we call it quadratic time complexity O(n²).
Here are some tips for choosing algorithms with good time complexity:
- Avoid nested loops that compare every element with every other one when a single pass will do (see the sketch below).
- Use algorithms that are designed for large inputs.
- Use algorithms that have been optimized for speed.
By following these tips, you can choose algorithms that will run quickly, even on large inputs.
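For instance, the duplicate-book search above can be rewritten without the nested loop. The sketch below (the find_duplicates_fast name is just illustrative) remembers every title it has already seen in a set, so each book is examined only once and the whole scan runs in roughly O(n) time, assuming set lookups are effectively constant time.

```python
def find_duplicates_fast(books):
    seen = set()
    duplicates = []
    for title in books:
        if title in seen:        # average O(1) membership test
            duplicates.append(title)
        else:
            seen.add(title)
    return duplicates

book_titles = ["Harry Potter", "The Hobbit", "Harry Potter",
               "To Kill a Mockingbird", "The Lord of the Rings"]

print("The duplicate books are:", find_duplicates_fast(book_titles))
# The duplicate books are: ['Harry Potter']
```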
Worst Exponential Time: O(2ⁿ)
Imagine you have a magical device that can generate every possible combination of a set of letters. For example, if you have the letters "a", "b", and "c", the device can generate "a", "b", "c", "ab", "ac", "bc", and "abc". Every letter you add roughly doubles the number of combinations, so the count grows exponentially as you increase the number of letters.
Exponential time complexity is a measure of how long an algorithm takes to run, as a function of the size of its input. An algorithm with exponential time complexity takes exponentially longer to run as the input size increases.
Here's a simple example in Python to demonstrate exponential time complexity:
Exponential Time Complexity.py
```python
def generate_combinations(chars):
    combinations = []

    # Base case: if there's only one character, it is the only combination
    if len(chars) == 1:
        combinations.append(chars[0])
        return combinations

    # Recursive step: generate combinations for the remaining characters
    sub_combinations = generate_combinations(chars[1:])

    # The current character on its own is also a combination
    combinations.append(chars[0])

    # Add the current character to each sub-combination
    for sub_combination in sub_combinations:
        combinations.append(chars[0] + sub_combination)

    # Add the sub-combinations without the current character
    combinations.extend(sub_combinations)

    return combinations

letters = ['a', 'b', 'c']  # A list of letters

combinations = generate_combinations(letters)  # Calling the generate_combinations function

print("The combinations are:", combinations)
```
In this code, we have a function called generate_combinations() that takes a list of letters as input. It uses recursion to generate all possible combinations of the letters. It starts with a base case where, if there's only one character, it returns it as a combination. Otherwise, it keeps the current character on its own, generates sub-combinations for the remaining characters, and combines the current character with each of them.
The number of combinations grows exponentially as the number of letters increases. With 1 letter there is 1 combination, with 2 letters there are 3, with 3 letters there are 7, and with 4 letters there are 15: each extra letter roughly doubles the count, giving 2ⁿ - 1 combinations for n letters. This is why we call it exponential time complexity O(2ⁿ).
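You can watch that growth directly by counting what the generate_combinations() function above returns as the input gets longer:

```python
for letters in (['a'], ['a', 'b'], ['a', 'b', 'c'], ['a', 'b', 'c', 'd']):
    print(len(letters), len(generate_combinations(letters)))
# 1 1
# 2 3
# 3 7
# 4 15  -> roughly doubling with every extra letter, i.e. 2**n - 1
```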
Another simple example of an algorithm with exponential time complexity is the following Python code, which finds the nth Fibonacci number:
Fibonacci.py
```python
def fibonacci(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibonacci(n - 1) + fibonacci(n - 2)
```
This algorithm works by recursively calling itself to find the (n - 1)th and (n - 2)th Fibonacci numbers, and then adding them together. The number of recursive calls increases exponentially as n increases, so the time it takes to run the algorithm also increases exponentially.
For example, to find the 10th Fibonacci number, the algorithm makes 177 recursive calls. To find the 20th Fibonacci number, it makes about 22,000 recursive calls. And to find the 30th Fibonacci number, it makes about 2.7 million recursive calls!
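These counts can be checked with a small wrapper that counts every recursive call (the count_calls helper below is just for illustration; counting the calls for the 30th number really does take a couple of million calls, so it runs for a few seconds):

```python
def count_calls(n):
    calls = 0

    def fib(k):
        nonlocal calls
        calls += 1  # count every invocation, including the base cases
        if k == 0:
            return 0
        elif k == 1:
            return 1
        return fib(k - 1) + fib(k - 2)

    fib(n)
    return calls

for n in (10, 20, 30):
    print(n, count_calls(n))
# 10 -> 177, 20 -> 21,891, 30 -> 2,692,537
```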
As you can see, exponential time complexity can make algorithms very slow, even for relatively small input sizes. This is why exponential time complexity is often considered to be a bad thing.
Here are some other examples of algorithms with exponential time complexity:
- Trying all possible combinations of passwords to crack a password
- Generating every possible subset of a collection of items (as in the example above)
- Solving the traveling salesman problem by brute force
Conclusion
In conclusion, understanding Big O notation, time complexity, and space complexity is essential in analyzing and designing efficient algorithms. By evaluating the time and space complexity of different algorithms, we can choose the most suitable one for solving a problem efficiently. It's important to keep in mind that different algorithms have different trade-offs, and there is often a balance between time and space complexity. Ultimately, the goal is to design algorithms that can handle large inputs effectively, optimize resource utilization, and provide faster solutions to real-world problems.