**Java-implementation: http://pastebin.com/JvS1xADZ**

Sorting a list is very useful and very interesting to learn, and it is an incredibly intuitive way to start thinking about algorithmic complexity. The most straightforward way to sort a list is to take the first element and add it to a new list. For the second element, check if it's larger than the first element of the new list: if it is, append it to the new list; if not, prepend it. For the third element, walk through the new list until you encounter an element which is larger, and insert it just before that larger element. Continue in this fashion and you'll end up with a sorted list. This can also be visualized:
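The procedure above can be sketched in Java – a minimal sketch of my own, not the linked pastebin implementation; the class and method names are just illustrative:

```java
import java.util.ArrayList;
import java.util.List;

public class InsertionSort {
    // Build up a new list, placing each element just before the first
    // element of the new list that is larger than it.
    public static List<Integer> sort(List<Integer> input) {
        List<Integer> sorted = new ArrayList<>();
        for (int value : input) {
            int pos = 0;
            // Walk forward until we find an element larger than `value`.
            while (pos < sorted.size() && sorted.get(pos) <= value) {
                pos++;
            }
            sorted.add(pos, value); // insert before the larger element
        }
        return sorted;
    }

    public static void main(String[] args) {
        System.out.println(sort(List.of(3, 1, 4, 1, 5)));
    }
}
```

Each of the *n* insertions may scan up to *n* elements of the new list, which is why this simple approach is quadratic rather than linear.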

The complexity of an algorithm is the number of operations you need to perform, proportional to the input size *n*. So if you need to perform a single operation for every input, we say the algorithm has linear complexity. When talking about algorithmic complexity, we don't care about constants, so even if we need to perform a hundred operations for every input, we still say the algorithm has linear complexity. For a more general approach, we can think of the complexity of an algorithm as a function. Let's consider the following algorithmic complexity function:

*f(x) = 2x + 2*

In this algorithm, we start out by performing two operations – regardless of input size – and then perform two more operations for every input. To show whether or not an algorithm is contained within a complexity class *g(x)* (also a function), we can look at the *Big-O* of *f(x)*. A function *f(x)* is *Big-O* of *g(x)* if – and only if – the following equality holds:

*lim(x→∞) f(x)/g(x) = C*

Where *C* is a finite constant. In fact, this is not the formal definition of *Big-O*, but I find it a more intuitive way of understanding it. If you look at the following limit:

*lim(x→∞) (2x + 2)/x = 2*

We can easily see that this limit holds, because as *x* grows large the contribution of the added two gets diminished, and the limit converges towards the constant 2. Showing that a function is *Big-O* of another function shows that the two functions exhibit similar growth. Computer scientists have grouped functions of similar growth into classes – for example *linear time complexity* (*O(n)*) or *polynomial time complexity* (*O(n^k)* for some constant *k*) – and many others. For more precise measures of growth, you can also use the *Big-Theta* or *Big-Omega* of a function, but I won't deal with those now.

When talking about sorting algorithms, it's been proven that the fastest possible algorithm to sort a list using only *compare* and *swap* operations has complexity *O(n log n)*. But what if we're not interested in the fastest algorithm, but the *slowest* algorithm?

Introducing *bogosort*. It basically checks if the list is sorted – if it is, return it; if it's not, shuffle it and start over. When visualized in a flow chart, we get:
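The flow chart boils down to a handful of lines of Java – again a sketch of my own, not the linked implementation:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class BogoSort {
    static boolean isSorted(List<Integer> list) {
        for (int i = 1; i < list.size(); i++) {
            if (list.get(i - 1) > list.get(i)) return false;
        }
        return true;
    }

    // Shuffle until sorted. There is no upper bound on how long this runs.
    public static List<Integer> bogoSort(List<Integer> list) {
        List<Integer> copy = new ArrayList<>(list);
        while (!isSorted(copy)) {
            Collections.shuffle(copy);
        }
        return copy;
    }

    public static void main(String[] args) {
        System.out.println(bogoSort(List.of(3, 2, 1)));
    }
}
```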

There is obviously no bound for how long this algorithm can run. If you're very unlucky, it'll never sort the list, but according to Wikipedia, the average case for this algorithm is complexity *O((n+1)!)*. But we can do better…

Introducing *rafflesort*. This is basically a recursive bogosort algorithm. It checks if the list is already sorted – if it is, return it. If it isn't, assign an index to every entry in the list and sort the indexes using the rafflesort algorithm itself. Then construct a new list where every entry is placed according to the sorted list of indexes – if this new list is sorted, return it; otherwise start over. When visualized:
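A sketch of the procedure in Java, assuming the indexes ("raffle tickets") are drawn at random – my own reading of the algorithm, not the linked pastebin. Fair warning, in line with the discussion further down: it only ever returns when the input is already sorted, and otherwise recurses until the stack overflows:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class RaffleSort {
    static boolean isSorted(List<Integer> list) {
        for (int i = 1; i < list.size(); i++) {
            if (list.get(i - 1) > list.get(i)) return false;
        }
        return true;
    }

    public static List<Integer> raffleSort(List<Integer> list) {
        if (isSorted(list)) return list; // already sorted: done

        // Draw a random index ("raffle ticket") for every entry.
        List<Integer> tickets = new ArrayList<>();
        for (int i = 0; i < list.size(); i++) tickets.add(i);
        Collections.shuffle(tickets);

        // Sort the tickets using rafflesort itself (the recursive step).
        List<Integer> sortedTickets = raffleSort(new ArrayList<>(tickets));

        // Place each entry at its ticket's position in the sorted order.
        List<Integer> reordered = new ArrayList<>(list);
        for (int i = 0; i < list.size(); i++) {
            reordered.set(sortedTickets.indexOf(tickets.get(i)), list.get(i));
        }
        if (isSorted(reordered)) return reordered;
        return raffleSort(list); // not sorted: start over
    }

    public static void main(String[] args) {
        // Only an already-sorted input terminates.
        System.out.println(raffleSort(List.of(1, 2, 3)));
    }
}
```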

I cannot wrap my mind around the slowness of this algorithm, and I can’t for the life of me determine whether or not this has a finite average-case complexity. I even tried to make a simulation in Java, but I kept getting stack overflow exceptions even for listsize = 2. I’m not sure whether it’s my implementation or because this algorithm is just inherently slow.

**Edit:** Reddit user /u/moor-GAYZ pointed out that this algorithm is the exact same as the following few lines of code:

```java
int[] raffleSort(int[] list) {
    if (isSorted(list)) return list;
    return raffleSort(list);
}
```

This is because the only way for the recursive call to terminate is if the list of indexes is, in fact, already sorted – if it isn't, it'll just continue with recursive calls. And if the list of indexes is already sorted, it is the identity permutation: it doesn't change the order of the elements of the original list. Since the original list wasn't sorted to begin with, the algorithm tries again – because the list isn't sorted now, either.

This means, functionally, that there are only two outcomes: (a) the list is already sorted, and the algorithm returns it, or (b) the list isn't sorted, and the algorithm continues making recursive calls indefinitely.