What is the time and space complexity of an algorithm?
Time and space complexity give us an estimate of how much time and memory a program will take during its execution. Space complexity measures how much primary memory the program occupies while running, and time complexity measures the time needed for the program to run to successful completion. In this article, we will discuss the time and space complexity of an algorithm with some simple examples, and finally we will discuss asymptotic notation.
Space complexity:
Space complexity is the process of determining a formula for how much memory space an algorithm will require for its successful execution. The memory space we consider is primary memory.
We know that whenever a program executes, it must first be loaded into primary memory. How much space it occupies during execution is what space complexity describes. Note that it is the number of instructions residing in the program, together with the data it stores, that decides the space complexity, not the number of instructions that get executed.
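As a hypothetical illustration (the function names are mine, not from the article), here are two ways to check a list for duplicates: both give the same answer, but they occupy different amounts of extra memory, which is exactly what space complexity distinguishes.

```python
def has_duplicates_constant_space(data):
    # Compares every pair in place: O(1) extra space, O(n^2) time.
    for i in range(len(data)):
        for j in range(i + 1, len(data)):
            if data[i] == data[j]:
                return True
    return False


def has_duplicates_linear_space(data):
    # Keeps a set of items already seen: O(n) extra space, O(n) time.
    seen = set()
    for item in data:
        if item in seen:
            return True
        seen.add(item)
    return False
```

The second version trades extra memory for speed, a common pattern: a lower space complexity does not automatically mean a lower time complexity, as the next section shows.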
Time complexity:
Time complexity is the process of determining a formula for the total time required to execute an algorithm. This calculation is independent of implementation details and the programming language.
So time complexity describes the total time required during the execution of an algorithm. Because we analyse the algorithm itself, the result is independent of the programming language chosen for its implementation. Note that when someone asks simply for "the complexity" of a program, time complexity is usually what is meant.
Do not assume that a program occupying very little primary memory, i.e. one with minimal space complexity, will also have minimal time complexity. That is wrong. A program might contain, say, 10 instructions, of which 8 lie inside a loop that executes 100 times.
Then during execution the program actually performs (8 × 100) + 2 = 802 instructions. The memory occupied corresponds to only 10 instructions, but roughly 802 instructions are executed, so the time complexity of the program is high.
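The arithmetic above can be sketched as a small counter (the function and parameter names are illustrative, not from the article): the source is short, but the executed instruction count grows with the number of loop iterations.

```python
def count_executed_steps(iterations=100, loop_body_size=8, setup_size=2):
    # Start with the instructions outside the loop, then add the loop
    # body once per iteration: (8 * 100) + 2 = 802 for the defaults.
    executed = setup_size
    for _ in range(iterations):
        executed += loop_body_size
    return executed


print(count_executed_steps())  # prints 802
```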
Asymptotic notation:
Asymptotic notations are used to make meaningful statements about the efficiency of an algorithm. Asymptotic notation helps us make approximate but meaningful statements about time and space complexity.
There are mainly 3 asymptotic notations:
Big O notation (O)
Also known as the upper bound: the maximum time the program can take is denoted by Big O notation.
Big Omega notation (Ω)
Also known as the lower bound: the minimum time the program will take is denoted by Big Omega notation.
Big Theta notation (Θ)
Also known as the tight bound: the running time is bounded both above and below, i.e. it lies between the upper and lower bounds.
Example of asymptotic notation:
Suppose we have n items of data and we are searching for one item among them. What is the minimum number of comparisons required? One: if the item is at the very first location, the search succeeds immediately. What is the maximum? The item may be at the last (nth) position, or, after checking all n items, we may find it is not there at all. In either case we perform n comparisons. So the upper bound is n, and the lower bound is 1.
And for the tight bound, if the item is found in the middle, about (n+1)/2, i.e. roughly n/2, comparisons are required.
So for linear search, the worst-case complexity (upper bound) is n, the best-case complexity (lower bound) is 1, and the average-case complexity (tight bound) is n/2.
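A minimal linear-search sketch makes these bounds concrete; returning the 1-based position also tells us how many comparisons were made before success.

```python
def linear_search(data, target):
    """Return the 1-based position of target in data, or -1 if absent."""
    for index, value in enumerate(data):
        if value == target:
            return index + 1  # also the number of comparisons made
    return -1


# Best case: target is the first element -> 1 comparison (lower bound).
# Worst case: target is last or absent -> n comparisons (upper bound).
```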
Now let us consider finding the maximum of n items of data. To find the maximum, you are bound to traverse, compare, and check all n items; only then can you tell which one is the maximum. It might be that the very first item is the maximum, but until that item has been compared with all the rest, you cannot declare it to be the maximum. So in this case the best-case complexity, the average-case complexity, and the worst-case complexity are all equal to n. Here the time complexity counts how many times the key operation of the algorithm is carried out, and that count gives the upper bound, the lower bound, and the tight bound.
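The find-maximum argument can be sketched directly (the comparison counter is my addition, to make the n-proportional cost visible): the loop always runs n − 1 times regardless of where the maximum sits, so best, average, and worst case coincide.

```python
def find_maximum(data):
    # Every remaining element must be compared against the current
    # maximum exactly once, so the loop body runs n - 1 times for
    # any input of n items: best, average, and worst case are all
    # proportional to n.
    maximum = data[0]
    comparisons = 0
    for value in data[1:]:
        comparisons += 1
        if value > maximum:
            maximum = value
    return maximum, comparisons
```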