Merge sort is one of the most powerful sorting algorithms, widely used in various applications. Its biggest strength is that it sorts data in O(n log n) time, as opposed to the O(n²) time of bubble sort and selection sort. Moreover, merge sort is an excellent case study for one of the most widely used techniques in computer science: divide and conquer.

## Steps to Sort an Array Using Merge Sort

- Divide the array into two parts of length roughly n/2 each (if n is odd, one part gets the extra element). Let us call these the left half and the right half.
- Recursively sort the left half array and the right half array.
- Merge the left half array and right half array to get the full array sorted.

### Example

Given Array: [6, 4, 5, 1, 2, 7, 3]

**Step 1:** Divide the array into two parts:

- Left half: [6, 4, 5, 1]
- Right half: [2, 7, 3]

**Step 2:** Recursively sort the left and right halves:

- Sorted left half: [1, 4, 5, 6]
- Sorted right half: [2, 3, 7]

**Step 3:** Merge the two sorted halves to create the final sorted array: [1, 2, 3, 4, 5, 6, 7]

The left and the right halves can always be sorted recursively using the same algorithm. The magic happens in creating the final merged and sorted array. Let us understand this well using the above example.

### Merging Two Sorted Arrays

Given two arrays [1, 4, 5, 6] and [2, 3, 7], we are to merge these arrays into a single sorted array. Let us place a pointer at the head of each array. We will depict the pointer by underlining the corresponding element where the pointer points to.

Final merged array = []

Left array: [1, 4, 5, 6]

Right array: [2, 3, 7]

As seen, the pointer of the left array is at 1 and the pointer of the right array is at 2. We pick the smaller element, append it to the final merged array, and advance the corresponding pointer. After doing this, we have the following state:

Final merged array = [1]

Left array: [4, 5, 6]

Right array: [2, 3, 7]

Repeat the process:

Final merged array = [1, 2]

Left array: [4, 5, 6]

Right array: [3, 7]

Final merged array = [1, 2, 3]

Left array: [4, 5, 6]

Right array: [7]

Final merged array = [1, 2, 3, 4, 5, 6, 7]

Left array: []

Right array: []

### Recursive Sorting Process

To understand the recursive process, consider the array [2, 7, 3]. We break it into two sub-arrays: [2, 7] and [3]. Both these sub-arrays are already sorted, so we can simply merge them using the technique explained above to get the sorted array [2, 3, 7].

Here are the detailed steps involved in performing a merge sort on the array [6, 4, 5, 1, 2, 7, 3]:

- [6, 4, 5, 1, 2, 7, 3] is divided into [6, 4, 5, 1] and [2, 7, 3]
- [6, 4, 5, 1] is divided into [6, 4] and [5, 1]
- [6, 4] is divided into [6] and [4]
- [6] and [4] are merged into [4, 6]
- [5, 1] is divided into [5] and [1]
- [5] and [1] are merged into [1, 5]
- [4, 6] and [1, 5] are merged into [1, 4, 5, 6]
- [2, 7, 3] is divided into [2, 7] and [3]
- [2, 7] is divided into [2] and [7]
- [2] and [7] are merged into [2, 7]
- [2, 7] and [3] are merged into [2, 3, 7]
- [1, 4, 5, 6] and [2, 3, 7] are merged into [1, 2, 3, 4, 5, 6, 7]
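
The trace above can be reproduced with a short instrumented sketch in Python (the function name `merge_sort_traced` is illustrative; the split gives the left half the extra element when the length is odd, matching the steps listed):

```
def merge_sort_traced(a):
    """Sort a list with merge sort, printing each divide and merge step."""
    if len(a) <= 1:
        return list(a)  # a single element is already sorted
    mid = (len(a) + 1) // 2  # left half gets the extra element
    left, right = a[:mid], a[mid:]
    print(f"{a} is divided into {left} and {right}")
    left, right = merge_sort_traced(left), merge_sort_traced(right)
    # Merge the two sorted halves with the two-pointer technique
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    print(f"{left} and {right} are merged into {merged}")
    return merged

print(merge_sort_traced([6, 4, 5, 1, 2, 7, 3]))  # [1, 2, 3, 4, 5, 6, 7]
```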

Note: We need a separate array to store the data of the final merged array, meaning merge sort requires additional space.

## Merge Sort Pseudocode

```
function merge_sort(i, j, a, aux) {
    if j <= i, return                         // base case: one element is already sorted
    mid = (i + j) / 2
    merge_sort(i, mid, a, aux)                // sort the left half
    merge_sort(mid + 1, j, a, aux)            // sort the right half
    pointer_left = i, pointer_right = mid + 1
    for k in [i ... j] {
        if pointer_right is exhausted, or pointer_left points to the smaller element,
            aux[k] = a[pointer_left] and increment pointer_left by 1
        else
            aux[k] = a[pointer_right] and increment pointer_right by 1
    }
    copy the contents of aux[i .. j] to a[i .. j]
}
```

## Merge Sort Program in C

```
void merge_sort(int i, int j, int a[], int aux[]) {
    if (j <= i) {
        return;
    }
    int mid = (i + j) / 2;
    merge_sort(i, mid, a, aux);
    merge_sort(mid + 1, j, a, aux);
    int pointer_left = i;
    int pointer_right = mid + 1;
    int k;
    // Merge the two halves
    for (k = i; k <= j; k++) {
        if (pointer_left > mid) {
            aux[k] = a[pointer_right++];
        } else if (pointer_right > j) {
            aux[k] = a[pointer_left++];
        } else if (a[pointer_left] <= a[pointer_right]) {
            aux[k] = a[pointer_left++];
        } else {
            aux[k] = a[pointer_right++];
        }
    }
    // Copy the sorted elements back to the original array
    for (k = i; k <= j; k++) {
        a[k] = aux[k];
    }
}
```

To sort a full array, allocate the auxiliary buffer once and call `merge_sort`:

```
#include <stdlib.h>  /* for malloc and free */

void sort_array(int a[], int size) {
    int* aux = malloc(size * sizeof(int));
    if (aux == NULL) {
        return;  /* memory allocation failed */
    }
    merge_sort(0, size - 1, a, aux);
    free(aux);
}
```

## Merge Sort in Python

Of course, you can write a similar merge sort function in Python. Here's how that would look:

```
def merge(left, right):
    result = []
    i = j = 0
    # Compare elements from both subarrays and merge them
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # Append any remaining elements from both subarrays
    result.extend(left[i:])
    result.extend(right[j:])
    return result

def merge_sort(arr):
    # Base case: a list of zero or one elements is already sorted
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    # Recursively sort each half, then merge the sorted halves
    return merge(merge_sort(arr[:mid]), merge_sort(arr[mid:]))
```


## When to Use `public static void`

The keywords `public`, `static`, and `void` are used in Java to define methods. Each keyword has a specific role and is used in different contexts. Here’s when and why you would include `public static void` in a method declaration:

### 1. Public

The **public** keyword is an access modifier that specifies the visibility of the method. When a method is declared as public, it can be accessed from any other class or package. This is particularly useful when you need the method to be accessible throughout your application.

**Example:** A public method in a utility class that provides common functionality to different parts of the application.

### 2. Static

The **static** keyword indicates that the method belongs to the class itself rather than to instances of the class. A static method can be called without creating an instance of the class. This is commonly used for utility methods or for methods that are meant to be called at the class level.

**Example:** The `main` method in a Java application is static, allowing the Java Virtual Machine (JVM) to invoke it without creating an instance of the class.

### 3. Void

The **void** keyword specifies that the method does not return any value. If a method’s purpose is to perform an action without providing any result, it is declared as void. This is useful for methods that modify data or perform operations but do not need to return a value.

**Example:** A method that prints a message to the console or modifies the state of an object without returning a result.

### Example Usage

A common example of using `public static void` is the `main` method in a Java application, which serves as the entry point for program execution:

```
public class Main {
    public static void main(String[] args) {
        // Code to be executed
        System.out.println("Hello, World!");
    }
}
```

In this example, `public` allows the JVM to call the `main` method from outside the class, `static` enables the JVM to invoke it without creating an instance of `Main`, and `void` indicates that the method does not return any value.

## Merge Sort Complexity

Complexity gives a rough idea of the time taken to execute the algorithm as a function of the size of the input. Let T(n) be the time taken to perform merge sort on an array of size n.

T(n) comprises:

- Time taken to divide the array into two halves = O(1)
- Time taken to sort the two halves recursively = 2T(n/2)
- Time taken to merge the two sorted halves = O(n)

Therefore, the total time taken T(n) can be expressed as:

T(n) = 2T(n/2) + O(n)

The above recurrence relation can be solved using the Master Theorem for divide-and-conquer recurrences:

T(n) = O(n log n)

Hence, the time complexity of merge sort is O(n log n).
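
As a quick sanity check, the simplified cost model T(1) = 0, T(n) = 2T(n/2) + n solves exactly to n log₂ n when n is a power of two:

```
import math

def T(n):
    """Simplified merge sort cost model: T(1) = 0, T(n) = 2*T(n/2) + n."""
    if n == 1:
        return 0
    return 2 * T(n // 2) + n

# For powers of two, the recurrence matches n * log2(n) exactly
for k in range(1, 11):
    n = 2 ** k
    assert T(n) == n * math.log2(n)
print("T(n) = n log2 n verified for n = 2 .. 1024")
```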

Merge sort is also an example of a stable sorting algorithm, meaning it maintains the relative order of equal elements in the sorted array.

## Advantages of the Merge Sort Function

Merge sort is a popular and efficient sorting algorithm that offers several advantages over other sort functions (especially for large datasets). Here are some of the key benefits:

### 1. Stable Sort

Merge sort is a stable sort, meaning that it preserves the relative order of equal elements in the sorted output. This is important in situations where the stability of the sort is required, such as in certain database applications.
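
Stability comes from the `<=` comparison in the merge step: when two elements compare equal, the one from the left half (i.e., the earlier one in the input) is taken first. A minimal sketch, with an illustrative `merge_sort_by_key` helper:

```
def merge_sort_by_key(items, key):
    """Stable merge sort: equal keys keep their original relative order."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort_by_key(items[:mid], key)
    right = merge_sort_by_key(items[mid:], key)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        # '<=' (not '<') is what makes the sort stable
        if key(left[i]) <= key(right[j]):
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

records = [("apple", 3), ("pear", 1), ("plum", 3), ("fig", 2)]
print(merge_sort_by_key(records, key=lambda r: r[1]))
# ("apple", 3) still precedes ("plum", 3) in the output
```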

### 2. Time Complexity

Merge sort has a predictable and consistent time complexity of O(n log n) for all cases (best, average, and worst). This makes it more reliable than algorithms with poor worst-case behavior, such as quicksort, which can degrade to O(n²).

### 3. Suitable for Linked Lists

Merge sort can be easily adapted for sorting linked lists. Since it only requires sequential access rather than random access, it performs well on linked lists without the need for additional space for array-based operations.
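
A sketch of merge sort on a singly linked list (the `Node` class here is a minimal stand-in): the list is split with the slow/fast pointer technique, and merging relinks nodes instead of copying them, so no auxiliary array is needed.

```
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def merge_sort_list(head):
    """Sort a singly linked list in place; returns the new head."""
    if head is None or head.next is None:
        return head
    # Find the middle with slow/fast pointers
    slow, fast = head, head.next
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    mid = slow.next
    slow.next = None  # cut the list into two halves
    left = merge_sort_list(head)
    right = merge_sort_list(mid)
    # Merge by relinking nodes -- no extra array required
    dummy = tail = Node(None)
    while left and right:
        if left.value <= right.value:
            tail.next, left = left, left.next
        else:
            tail.next, right = right, right.next
        tail = tail.next
    tail.next = left or right
    return dummy.next

# Build 6 -> 4 -> 5 -> 1 and sort it
head = merge_sort_list(Node(6, Node(4, Node(5, Node(1)))))
out = []
while head:
    out.append(head.value)
    head = head.next
print(out)  # [1, 4, 5, 6]
```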

### 4. External Sorting

Merge sort is particularly useful for external sorting, where the data to be sorted is too large to fit into memory. By dividing the data into manageable chunks, sorting them, and then merging, merge sort can handle large datasets efficiently.

### 5. Divide and Conquer

Merge sort uses the divide and conquer paradigm, which simplifies the problem of sorting by breaking it down into smaller subproblems. This approach not only makes the algorithm more efficient but also easier to understand and implement.

### 6. Parallelism

Merge sort is well-suited for parallel processing. The divide and conquer nature allows different segments of the array to be sorted concurrently on multiple processors, which can significantly reduce the overall sorting time.

## Disadvantages of Merge Sort

While merge sort is an efficient sorting algorithm with a time complexity of O(n log n), it has certain disadvantages:

- **Space Complexity:** Merge sort requires additional memory to store the auxiliary array used during the merge process. This can be a significant drawback for large datasets, as the space complexity is O(n).
- **Not In-Place:** Merge sort is not an in-place sorting algorithm, meaning it requires extra space proportional to the size of the input array. This contrasts with algorithms like quicksort or heap sort, which sort the data in place.
- **Recursive Overhead:** The recursive nature of merge sort can lead to overhead from function calls, especially for very large arrays. This can affect performance and stack memory usage.
- **Slower for Small Arrays:** For smaller arrays, the constant factors and overhead of merge sort can make it slower than simpler algorithms like insertion sort or bubble sort.

## Alternatives to Merge Sort

There are several alternative sorting algorithms that can be used depending on the specific requirements and constraints of the problem:

### Quick Sort

Quick sort is another divide-and-conquer algorithm with an average-case time complexity of O(n log n). It is often faster in practice than merge sort due to better cache performance and in-place sorting. However, its worst-case time complexity is O(n²), although this can be mitigated with good pivot selection strategies.

### Heap Sort

Heap sort is an in-place sorting algorithm with a time complexity of O(n log n) for all cases. It uses a binary heap data structure to sort elements. While not as fast as quick sort in practice, it has the advantage of guaranteed O(n log n) performance.

### Insertion Sort

Insertion sort is a simple, in-place sorting algorithm with a time complexity of O(n²) for average and worst cases, but O(n) for the best case (when the array is already sorted). It is particularly efficient for small arrays or nearly sorted arrays.

### Bubble Sort

Bubble sort is a straightforward sorting algorithm with a time complexity of O(n²) for average and worst cases. Although it is not efficient for large arrays, it can be useful for educational purposes and for small arrays where simplicity is preferred.

### Selection Sort

Selection sort is another simple, in-place sorting algorithm with a time complexity of O(n²). It is easy to implement but generally outperformed by more advanced algorithms like quick sort and merge sort.

### Timsort

Timsort is a hybrid sorting algorithm derived from merge sort and insertion sort. It is the default sorting algorithm in Python's built-in sort() function and Java's Arrays.sort(). Timsort is highly efficient for real-world data and has a time complexity of O(n log n) in the worst case.
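
In everyday Python code you rarely need a hand-written merge sort; the built-in `sorted()` and `list.sort()` use Timsort and are stable:

```
data = [6, 4, 5, 1, 2, 7, 3]
print(sorted(data))      # [1, 2, 3, 4, 5, 6, 7] -- original list is untouched
data.sort(reverse=True)  # sorts in place, descending
print(data)              # [7, 6, 5, 4, 3, 2, 1]

# Stability: elements with equal keys keep their input order
words = ["pear", "fig", "plum", "kiwi"]
print(sorted(words, key=len))  # ['fig', 'pear', 'plum', 'kiwi']
```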

Each of these algorithms has its own strengths and weaknesses, making them suitable for different types of data and use cases. Understanding the trade-offs between these algorithms can help in selecting the most appropriate one for a given problem.

## Time Complexity of Merge Sort

The time complexity of an algorithm provides an estimate of the time required to run the algorithm as a function of the size of the input data. For merge sort, we analyze its time complexity by breaking down the steps involved in the algorithm:

### Divide Step

In the divide step, the array is split into two halves. This step takes constant time, O(1), because the split is done by simply calculating the middle index of the array.

### Recursively Sorting the Halves

The merge sort algorithm recursively sorts the two halves of the array. If the size of the array is n, this step takes 2T(n/2) time, where T(n) is the time complexity for sorting an array of size n.

### Merge Step

In the merge step, the two sorted halves are merged to form a single sorted array. This step involves comparing elements from both halves and arranging them in sorted order. The merge step takes linear time, O(n), as each element in the array is processed exactly once.

### Total Time Complexity

Combining the time complexities of the divide step, the recursive sorting step, and the merge step, we get the following recurrence relation:

T(n) = 2T(n/2) + O(n)

This recurrence relation can be solved using the Master Theorem for divide-and-conquer recurrences. According to the Master Theorem, the solution to this recurrence relation is:

T(n) = O(n log n)

Therefore, the time complexity of the merge sort algorithm is O(n log n) for all cases (best, average, and worst).

### Space Complexity

In addition to the time complexity, it's important to consider the space complexity of merge sort. Merge sort requires additional memory space to store the auxiliary array used for merging. The space complexity of merge sort is O(n) because it needs extra space proportional to the size of the input array.

Overall, merge sort is a highly efficient algorithm with a predictable time complexity of O(n log n) and a space complexity of O(n), making it a valuable tool in the field of computer science.

## Sorted Subarrays

In the context of sorting algorithms like merge sort, a **sorted subarray** refers to a portion of an array that has been arranged in a specific order (typically ascending). Understanding sorted subarrays is crucial for efficiently applying sorting algorithms and optimizing their performance.

### 1. Definition and Importance

A **sorted subarray** is a segment of an array where the elements are in a non-decreasing order. For example, in the array [1, 2, 5, 7, 3, 8], the subarrays [1, 2, 5, 7] and [3, 8] are sorted. The concept is essential because many sorting algorithms, including merge sort, operate by dividing the array into smaller sorted subarrays and then merging them.

**Importance:** Sorted subarrays enable algorithms to efficiently merge or combine these segments into a fully sorted array. By leveraging the property of sorted subarrays, algorithms can minimize the number of comparisons and swaps required, thus enhancing performance.

### 2. Sorted Subarrays in Merge Sort

In merge sort, the array is recursively divided into smaller subarrays until each subarray contains a single element (which is trivially sorted). The algorithm then merges these sorted subarrays to form larger sorted subarrays, eventually resulting in a completely sorted array.

**Example:** Consider the array [4, 3, 7, 1, 8, 2]. Merge sort will process it as follows:

- Divide the array into subarrays: [4, 3, 7] and [1, 8, 2]
- Further divide into: [4], [3, 7] and [1], [8, 2]
- Sort and merge: [4] and [3, 7] merge into [3, 4, 7]; [8, 2] is sorted into [2, 8], then [1] and [2, 8] merge into [1, 2, 8]
- Finally, merge [3, 4, 7] and [1, 2, 8] into [1, 2, 3, 4, 7, 8]

### 3. Benefits of Sorting Subarrays

Sorting subarrays offers several benefits:

- **Efficiency:** Sorting smaller subarrays first can reduce the complexity of the overall sorting process, especially when combined with efficient merging techniques.
- **Reduced Complexity:** By handling smaller, already sorted segments, algorithms can achieve better performance than sorting the entire array in one pass.
- **Parallel Processing:** Sorted subarrays can be processed in parallel, further improving performance on multi-core systems.

### 4. Applications

Understanding and utilizing sorted subarrays is not limited to merge sort. Many other algorithms, like quicksort and binary search, also benefit from the concept of sorted subarrays for their operations.

**Example:** Binary search requires a sorted array (or subarray) to efficiently find the position of an element. When an array is partially sorted, binary search can be applied to sorted subarrays to quickly locate values.
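
Python's standard `bisect` module accepts `lo` and `hi` bounds, so a binary search can be restricted to a known sorted subarray of a partially sorted list (the array below is from the example above):

```
import bisect

# Only the slice a[0:4] is sorted; search within that subarray
a = [1, 2, 5, 7, 3, 8]
pos = bisect.bisect_left(a, 5, 0, 4)  # binary search for 5 in a[0:4]
print(pos)  # 2
assert a[pos] == 5
```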

## n-Sorted Arrays

An n-sorted array refers to an array that has been divided into 'n' segments, with each segment sorted individually. This concept is useful in various contexts, such as parallel processing or when dealing with large datasets that can be sorted in chunks for efficiency.

### Definition

An n-sorted array is an array that is divided into 'n' contiguous segments, and each segment is sorted in ascending or descending order. However, the entire array as a whole is not necessarily sorted.

### Use Cases

- **Parallel Processing:** When sorting large datasets, the array can be divided into smaller chunks that are sorted independently in parallel. After sorting each chunk, a merging process combines them into a fully sorted array.
- **Memory Constraints:** When memory is limited, sorting smaller chunks and then merging them can be more efficient than sorting the entire dataset at once.
- **Incremental Sorting:** An array can be sorted incrementally in parts, making it easier to handle dynamically growing data.

### Merging n-Sorted Arrays

- **Two-Way Merge:** If the array is divided into two sorted segments, a two-way merge (as in merge sort) can combine them into a single sorted array.
- **K-Way Merge:** For more than two segments, a k-way merge algorithm can be employed. This is often implemented using a min-heap (or max-heap for descending order) to efficiently merge k sorted lists.

### Algorithm Efficiency

The efficiency of merging n-sorted arrays depends on the number of segments (n) and the size of each segment. Merging 'k' sorted segments, each of length 'm', has a time complexity of O(km log k) using a heap-based k-way merge.
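
In Python, `heapq.merge` implements exactly this heap-based k-way merge, consuming its sorted inputs lazily:

```
import heapq

# Three sorted segments of an n-sorted array
segments = [[1, 4, 9], [2, 3, 8], [5, 6, 7]]

# heapq.merge keeps a heap of one head element per segment,
# so merging k segments of total length N costs O(N log k)
merged = list(heapq.merge(*segments))
print(merged)  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```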

## Conclusion

Merge sort is a highly efficient sorting algorithm that uses the divide-and-conquer technique to sort data. Its predictable time complexity of O(n log n) makes it a reliable choice for various sorting needs. However, its requirement for additional memory can be a disadvantage in memory-constrained environments.

Understanding merge sort and its implementation can provide a strong foundation for learning more advanced algorithms and techniques in computer science.