Time & Space Complexity in Functions – Big O Notation

Knowing how fast a function will run and how much memory it will require are important considerations when writing algorithms. This may not matter much in small, simple programs, but imagine a large project riddled with unoptimized, slow-performing functions: the impact on overall system performance can be drastic. Luckily, there is Big O notation, which lets us quickly reason about how fast a function will perform or how much memory it will require.

Introducing Big O Notation

Big O notation (also known as asymptotic notation) is a mathematical notation we can use to classify the performance of a function. There is little actual math involved in this approach; rather, you only need to look at the code in order to classify it. The main idea is to determine how fast the runtime grows in relation to the size of the given input.

For example, some functions have a relatively quick runtime with a small input, but as the input grows larger, the runtime can increase dramatically. This will become clear after looking at a few examples.

Time Complexity

Constant time, O(1)

Let’s take a look at a very simple function.
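A minimal sketch of such a function in Python (the name print_input is illustrative) might look like this:

```python
def print_input(s):
    # A single print, regardless of how large s is.
    print(s)
```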

No matter the size of the input, this function will always take the same amount of time, because there is only ever one “step”. Our string can be one or one thousand characters long – counting the print as a single operation, it takes the same amount of time for both inputs. In this case we say that this function runs in O(1) time, or constant time.

Linear time, O(n)
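A minimal sketch of such a function (print_all is an illustrative name) might be:

```python
def print_all(items):
    # One print per element: work grows linearly with len(items).
    for item in items:
        print(item)
```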
This function will run in O(n) time, or what is known as linear time. This is because the runtime increases in a linear fashion with the number of items in our list. If the list contains 1 item, it only needs to print once; for 100,000 items, 100,000 times, and so on.

Quadratic time, O(n²)
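A sketch of such a nested-loop function (print_pairs is an illustrative name) might look like:

```python
def print_pairs(items):
    # The inner loop runs once for every element of the outer loop,
    # so n elements produce n * n prints.
    for a in items:
        for b in items:
            print(a, b)
```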
This function will run in O(n²), or what is known as quadratic time. Because we have a nested loop, the runtime grows with the square of the input size. For example, if we have a list with 4 elements, we’d need to print 4 items 4 different times – which equates to 4².

Logarithmic time, O(log n)
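A minimal sketch of such a function (halve_until_zero is an illustrative name) might look like:

```python
def halve_until_zero(x):
    # x is halved on every iteration, so the loop runs
    # roughly log2(x) times.
    while x > 0:
        print(x)
        x = x // 2
```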
This is a very basic example of what is known as O(log n) time, or logarithmic time. This function repeatedly divides x by 2 until it reaches 0, so the number of iterations grows with the logarithm of x rather than with x itself. This is much more efficient than running in linear time – just imagine how much longer a linear approach would take for extremely large numbers. Algorithms that run in logarithmic time often take a divide and conquer approach.

Linear logarithmic time, O(n log n)
The final most common type of time complexity is written as O(n log n), or linear logarithmic time. This time complexity is a combination of both logarithmic and linear time: all n elements must be processed, and the work done for each grows logarithmically. So, overall runtime increases a little faster than linearly with the number of elements in our input. Many sorting algorithms run in O(n log n) – merge sort and heap sort in every case, and quicksort on average.
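As a rough illustration of where the n log n comes from, a minimal merge sort sketch might look like this: the list is halved about log n times, and each level of recursion merges all n elements.

```python
def merge_sort(items):
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(items) <= 1:
        return items

    # Split the list in half; this halving happens about log n times.
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    # Merge the two sorted halves; every level of recursion
    # touches all n elements, giving O(n log n) overall.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```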

Space Complexity

Space complexity is used to classify how much memory a given function will require during runtime. It is often easier to determine than time complexity: we only need to look at how much memory the function allocates relative to the size of its input.
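A minimal sketch of a constant-space function (increment_all is an illustrative name) might be:

```python
def increment_all(items):
    # Updates elements in place; no new storage grows with the input.
    for i in range(len(items)):
        items[i] += 1
```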

This function’s space complexity is O(1). No matter what we do during the function, it allocates no extra memory that depends on the input – the size of our list never changes.
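In contrast, a sketch of a function whose allocation grows with its input (build_list is an illustrative name) might look like:

```python
def build_list(n):
    # The list's length grows with n, so memory use is O(n).
    items = []
    for i in range(n):
        items.append(i)
    return items
```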

In comparison to the previous example, the size of the list we create is entirely dependent on our input. For this reason, we can say the space complexity of this function is O(n).
