The goal of a Sorting Algorithm is to rearrange the elements of an array into a specified order. The sorting can be done either in-place, reordering the elements within the data structure itself, or not-in-place, by copying them to a new data structure. In-place algorithms have low space complexity. The choice of an algorithm for an application is mainly driven by the application itself.
This is the 2nd part of a series of 3 short posts on the topic of Sorting Algorithms:
Part II: Merge Sort
Merge Sort consists in splitting an array as far as possible:
It uses the principle of Divide and Conquer: breaking up an array, sorting all of its parts, and building it back up again.
Here is a demonstration:
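The split-then-merge process described above can be sketched in a few lines of Python. This is a minimal illustration, not the post's own code, and the function names (`merge_sort`, `merge`) are my choices:

```python
def merge(left, right):
    """Merge two already-sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # One half is exhausted; append whatever remains of the other.
    result.extend(left[i:])
    result.extend(right[j:])
    return result

def merge_sort(arr):
    """Recursively split the array in half, then merge the sorted halves."""
    if len(arr) <= 1:
        return arr  # an array of 0 or 1 elements is already sorted
    mid = len(arr) // 2
    return merge(merge_sort(arr[:mid]), merge_sort(arr[mid:]))

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # → [3, 9, 10, 27, 38, 43, 82]
```

Note that this version returns a new list rather than sorting in place, which is the not-in-place trade-off mentioned in the introduction.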
Let’s record how many steps there are per array size.
If $n$ is the array size, then it looks like we are doing about $n \log_2 n$ comparisons.
The number of comparisons increases when the array size doubles.
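We can check this growth empirically by instrumenting a merge sort to count element comparisons. This is a sketch I am adding for illustration, with a hypothetical function name `merge_sort_counting`; the exact counts depend on the input, but they stay close to $n \log_2 n$:

```python
import math
import random

def merge_sort_counting(arr):
    """Merge sort that also returns the number of element comparisons made."""
    if len(arr) <= 1:
        return arr, 0
    mid = len(arr) // 2
    left, c_left = merge_sort_counting(arr[:mid])
    right, c_right = merge_sort_counting(arr[mid:])
    merged, i, j, comparisons = [], 0, 0, 0
    while i < len(left) and j < len(right):
        comparisons += 1  # one comparison per element placed from a merge step
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, c_left + c_right + comparisons

random.seed(0)
for n in [8, 16, 32, 64]:
    _, count = merge_sort_counting([random.random() for _ in range(n)])
    print(n, count, round(n * math.log2(n)))
```

Doubling `n` adds one more level of merging, so the count grows slightly faster than linearly, matching the $n \log_2 n$ estimate.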
The time efficiency is: O( (number of comparisons per step) * (number of steps) )
- Number of comparisons per step: we could approximate and say that the number of comparisons at every step is about the size of the array, $n$.
- Number of steps: the array is halved at every split, so there are about $\log_2 n$ steps.
So Merge Sort has an efficiency of $O(n \log n)$.
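To get a feel for how big the gap between $n \log_2 n$ and $n^2$ becomes, here is a quick back-of-the-envelope computation (my own illustration, for $n$ = one million):

```python
import math

n = 1_000_000
print(f"n log2 n ≈ {n * math.log2(n):,.0f}")  # about 20 million operations
print(f"n^2      = {n * n:,}")                # a trillion operations
```

At this size, the $O(n \log n)$ estimate is roughly 50,000 times smaller than the $O(n^2)$ one.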
It is better than Bubble Sort ( $O(n^2)$ ).
The space efficiency of Merge Sort is worse than Bubble Sort's, as we need to create extra arrays. The auxiliary space is $O(n)$: the extra space is linear, assuming we free the temporary arrays after using them.
Here is an example: if you were to perform a merge sort on the following array, which is a possibility for what the 2nd iteration might look like?