I’m still working on the raj restaurant app and will be posting details soon on the progress I’ve made thus far.
Briefly, I've made some neat changes on the frontend and some minor changes to the models, which is where I'm stuck at the moment.
If you've been following these posts, you may have found yourself in a similar situation: I'm working on a solution for migration errors that usually result from typos or mistakes on the programmer's part.
With that, let’s dive into today’s post.

When designing algorithms, it is often helpful to be able to prove to others that your algorithm is correct. Many programmers assume that an algorithm that appears to work is enough to show that it is correct. That is not necessarily the case, and it may not be enough.
You may design an algorithm that seems to work perfectly yet is wrong.
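
To make that concrete, here is a classic illustration (my own example, unrelated to the restaurant app): a binary search that passes every small test you throw at it, yet is wrong because the midpoint computation can overflow on very large arrays.

// A sketch of a "seemingly correct" algorithm: returns the index of
// target in a sorted array, or -1 if it is absent. It behaves perfectly
// on small inputs, but (low + high) can exceed INT_MAX once the array
// is large enough, so the algorithm is wrong.
int broken_binary_search( const int A[], int length, int target )
{
    int low = 0, high = length - 1;
    while( low <= high )
    {
        int mid = (low + high) / 2;   // overflows when low + high > INT_MAX
        if( A[mid] == target )     return mid;
        else if( A[mid] < target ) low = mid + 1;
        else                       high = mid - 1;
    }
    return -1;
}

The well-known fix is to compute the midpoint as low + (high - low) / 2; the point is that "it works on my tests" and "it is correct" are different claims.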

Correctness

Thinking about an algorithm’s correctness will deepen your understanding of how and why it works.
Once you understand why a certain algorithm is correct, filling out the minute details becomes much easier. It also means you don't have to memorize certain aspects of the algorithm.

What is a correct algorithm then? To answer that, you need to define a computational model under which programs operate.
This is also useful for algorithmic analysis.

Algorithmic analysis is important, even essential, because, unlike the natural numbers (for lack of a better analogy), computing resources are finite.
So accounting for CPU cycles, program runtimes and memory consumption becomes important.

How then do you prove that an algorithm is correct?
First, understand how the algorithm works. This makes the next step easier.

Second, choose your method of proof. This will usually become evident once you understand the ins and outs of your algorithm.

Third, leave no stone unturned. Use clear language, whether English or math, with emphasis on the crux of the proof.

An example

Let us look at an example in C++.

You may be familiar with the insertion sort algorithm. It's one of the few algorithms that come naturally to mind when you think of sorting, because of the way it works: like sorting a hand of cards.

When you're sorting cards you usually start from the left, making insertions from your left.
As you do so, you build a sorted hand whose size grows until there are no more cards left to sort; its size then equals the size of your original hand, assuming, of course, that you haven't added any new cards to your hand.

So what does an implementation of insertion sort look like? Here's one in C++.

#include <cstddef>

template< typename T >
void insertion_sort( T A[], size_t length )
{
    for( size_t i = 1; i < length; i++ ){
        T key = A[i];                        // the next "card" to insert
        // Walk left from position i, shifting every element >= key
        // one step to the right to make room for it.
        long j = static_cast<long>(i) - 1;
        while( j >= 0 && A[j] >= key )
        {
            A[j+1] = A[j];
            j = j - 1;
        }
        A[j+1] = key;                        // drop key into the gap
    }
}
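
To sanity-check the implementation, here's a minimal usage sketch (the array values are mine, chosen purely for illustration):

#include <iostream>

int main()
{
    int numbers[] = { 5, 2, 4, 6, 1, 3 };
    insertion_sort( numbers, 6 );
    for( int n : numbers )
        std::cout << n << ' ';   // prints: 1 2 3 4 5 6
    std::cout << '\n';
    return 0;
}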

Verifying correctness

The insertion_sort algorithm lends itself well to a proof by induction, and it's easy to see why: every time you select a new key, you end up with a sorted sub-array that has one more element than it had before the selection.

It's also easy to see that the sub-array is bounded by the size of the original array, because there are only A.length keys in the array. And if an insertion only happens upon the selection of a new key, what will you have when there are no more keys to consider? A sorted array.

Let's be a little clearer: we have an unsorted array of length L, and a sorted sub-array that starts at length 0, which makes sense because we have yet to consider any keys for sorting.

Assumptions

Our hypothesis is that the sub-array is already sorted. For that to be true, the following facts have to hold for any item we choose in that sub-array:

  1. All items to the right of that element/item are greater than or equal to it and
  2. All items to the left of that element/item are strictly less than it.

Both statements are true when we start out, because we haven't considered any keys yet and the sub-array has length 0.

They are also obviously true when we add the first item to the sub-array, but this is what we have to prove in general.
In fact, we have to prove that our sub-array of n elements (where n starts at 0) remains sorted after we insert the (n+1)th item.
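
One way to convince yourself of this inductive step empirically (a sketch of my own, not part of the implementation above) is to re-check the hypothesis with an assert after every insertion; if it ever breaks, the program aborts:

#include <cassert>
#include <cstddef>

template< typename T >
void insertion_sort_checked( T A[], size_t length )
{
    for( size_t i = 1; i < length; i++ ){
        T key = A[i];
        long j = static_cast<long>(i) - 1;
        while( j >= 0 && A[j] >= key )
        {
            A[j+1] = A[j];
            j = j - 1;
        }
        A[j+1] = key;

        // Inductive step: after inserting the (i+1)th key, A[0..i] must be sorted.
        for( size_t k = 1; k <= i; k++ )
            assert( A[k-1] <= A[k] );
    }
}

Of course, asserting the invariant on a few inputs is a test, not a proof; the proof is the argument that every iteration preserves the invariant.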

n + 1

While considering the (n+1)th element, when do we make an insertion?
We make an insertion into our sub-array by using the two important properties of our hypothesis:

  1. All items to the right of that element/item are greater than or equal to it and
  2. All items to the left of that element/item are strictly less than it.

So we shift every element that doesn't satisfy them, that is, every element of the sub-array greater than or equal to the new key, one step to the right, and make the insertion only in the gap created by this shifting operation. This is exactly what happens when we exit the while loop. And upon considering the next element, the two original facts about our now (n+1)-element sub-array will still be intact.
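
To make the shifting step concrete, here's a small standalone sketch of a single insertion (the array values are my own, picked only for illustration):

#include <iostream>

int main()
{
    // The sub-array A[0..2] = {2, 5, 7} is already sorted; the new key is A[3] = 4.
    int A[] = { 2, 5, 7, 4 };
    int key = A[3];
    int j = 2;
    // Shift every element >= key one step to the right:
    // 7 >= 4 -> {2, 5, 7, 7},  5 >= 4 -> {2, 5, 5, 7},  2 < 4 -> exit the loop.
    while( j >= 0 && A[j] >= key )
    {
        A[j+1] = A[j];
        j = j - 1;
    }
    A[j+1] = key;   // insert into the gap: A becomes {2, 4, 5, 7}
    for( int x : A )
        std::cout << x << ' ';
    std::cout << '\n';
    return 0;
}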

Finally, the outer loop terminates once all the remaining keys have been considered, at which point the sorted sub-array spans the whole array. And that is sufficient to show that the algorithm is correct.

Happy coding. :)