

On this page, sorting algorithms are used to put a list of numbers into ascending order. The table below shows all the sorting algorithms that will be analyzed, along with their expected best-, average-, and worst-case time complexities. You can click a specific algorithm to see more details.

Name | Best | Average | Worst |
---|---|---|---|
Bubble Sort | `O(n)` | `O(n²)` | `O(n²)` |
Counting Sort | `O(n+k)` | `O(n+k)` | `O(n+k)` |
Heap Sort | `O(n log n)` | `O(n log n)` | `O(n log n)` |
Insertion Sort | `O(n)` | `O(n²)` | `O(n²)` |
Merge Sort | `O(n log n)` | `O(n log n)` | `O(n log n)` |
Quick Sort | `O(n log n)` | `O(n log n)` | `O(n²)` |
Selection Sort | `O(n²)` | `O(n²)` | `O(n²)` |
Tim Sort | `O(n)` | `O(n log n)` | `O(n log n)` |

### Sorting a list of random numbers

Below is a visualization of the time complexity of the sorting algorithms given a randomized list of numbers.

As expected, all algorithms conform to their average-case runtimes. Counting Sort is the fastest algorithm in this experiment because it happens to perform well on this particular list: the randomizer limits the values of the numbers to the size of the list.
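Counting Sort's advantage here can be sketched in a few lines of Python (a minimal illustration, not the implementation measured on this page): because the values never exceed the list size, the value range `k` is about as large as `n`, and the `O(n+k)` runtime collapses to `O(n)`.

```python
import random

def counting_sort(values):
    """Sort non-negative integers in O(n + k) time, where k is the value range."""
    if not values:
        return []
    k = max(values) + 1                    # size of the value range
    counts = [0] * k                       # one bucket per possible value: O(k) space
    for v in values:                       # tally occurrences: O(n)
        counts[v] += 1
    result = []
    for value, count in enumerate(counts): # walk the whole range: O(k)
        result.extend([value] * count)
    return result

# As on this page, values are limited to the size of the list, so k ≈ n.
n = 1000
data = [random.randrange(n) for _ in range(n)]
assert counting_sort(data) == sorted(data)
```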

### Sorting an already sorted list (no-op)

In the next experiment, the same algorithms are analyzed, but this time using a list of numbers that is already sorted.

For most algorithms, sorting an already sorted list of numbers results in that algorithm's best-case performance. Thus, for algorithms whose best case differs from their average case, a drop in operations is observed compared to the previous experiment. Quick Sort is the only exception: it suddenly performs *much worse*. This is to be expected, as (counterintuitively) a sorted list is the worst-case scenario for early versions of Quick Sort. Later versions select their pivots more intelligently, but it remains possible to construct a data set that triggers worst-case performance.

### Counting Sort

In this experiment, the Counting Sort algorithm is used to sort various lists of numbers, each with its own characteristics.

For most lists of numbers, Counting Sort delivers a runtime that is close to linear, which is extremely good for a sorting algorithm. Interestingly, Counting Sort performs differently for lists where the values of the numbers exceed the size of the list. This is the case for the "thousands" and "millions" series: they contain N numbers, where each number is a random value between zero and a thousand (or a million). Even for small lists containing just 10 elements, Counting Sort requires thousands (or even millions!) of operations to sort the list. This is the defining property of Counting Sort: it beats other algorithms on lists with low variance and performs abysmally on lists with high variance.
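The n-versus-k trade-off is easy to make concrete. The sketch below counts the two passes Counting Sort makes, one over the n input values and one over the k possible values; the counter is an illustration of the "code iterations" idea, not this page's exact metric.

```python
import random

def counting_sort_ops(values, k):
    """Counting Sort over the value range [0, k), returning (result, operation count)."""
    counts = [0] * k
    ops = 0
    for v in values:                       # O(n) tally pass
        counts[v] += 1
        ops += 1
    result = []
    for value, count in enumerate(counts): # O(k) range pass
        result.extend([value] * count)
        ops += 1
    return result, ops

# 10 numbers drawn from [0, 10): k ≈ n, so only ~20 operations...
_, small_range_ops = counting_sort_ops([random.randrange(10) for _ in range(10)], 10)

# ...but 10 numbers drawn from [0, 1_000_000): the k term dominates completely.
_, huge_range_ops = counting_sort_ops([random.randrange(1_000_000) for _ in range(10)], 1_000_000)

assert small_range_ops == 20           # n + k = 10 + 10
assert huge_range_ops == 1_000_010     # n + k = 10 + 1,000,000
```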

### Race of the linearithmics

In this final experiment, we let the three linearithmic algorithms Merge Sort, Heap Sort and Tim Sort compete for Big O supremacy!

All three algorithms show a solid linearithmic runtime for all sizes of N. Merge Sort shows this pattern most clearly, while Heap Sort and Tim Sort sometimes lean towards a linear runtime. And while the visualization suggests that Tim Sort and Heap Sort perform similarly, in practice Tim Sort would be the winner. The visualization doesn't reflect this difference because it solely measures code iterations, whereas real-life performance is determined by *many* other factors.

To conclude: Tim Sort is a highly sophisticated algorithm that performs well on data sets that are (partially) sorted. In fact, it performs so well that it has become the default sorting implementation in popular programming languages like Java and Python. Tim Sort also powers JavaScript's `Array#sort()` in Chromium's V8 JavaScript engine.
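Tim Sort's adaptivity is easy to observe from Python itself, since Python's built-in `sorted()` is a Timsort. Wrapping the values in a small comparison-counting class (a sketch for illustration) shows that an already sorted list needs only n − 1 comparisons, while random input needs roughly n log₂ n.

```python
import random

class Counting:
    """Wrap a value and count how often Python's built-in Timsort compares it."""
    count = 0
    def __init__(self, value):
        self.value = value
    def __lt__(self, other):           # CPython's sort uses only `<`
        Counting.count += 1
        return self.value < other.value

n = 1000

Counting.count = 0
sorted([Counting(v) for v in range(n)])            # already sorted input
ops_sorted = Counting.count                        # exactly n - 1: one run, no merges

Counting.count = 0
sorted([Counting(random.random()) for _ in range(n)])
ops_random = Counting.count                        # roughly n log2 n

assert ops_sorted == n - 1
assert ops_random > ops_sorted
```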

You've finished this section.
