Induction
A brief introduction to induction
Mathematical induction, or simply “induction,” is a logical principle that allows us to reason about sequences of events by analyzing individual events. Induction plays a pervasive role in the analysis of algorithms in computer science. The central task of algorithm design is to devise an automated procedure that breaks each possible instance of a problem into a sequence of “elementary” operations.
To establish that an algorithm does indeed perform a prescribed task, we must argue that for every instance of the task, the algorithm produces the correct output. This may seem an impossible endeavor since, in principle, there could be infinitely many instances. The principle of induction allows us to reduce the problem of reasoning about entire executions of algorithms to reasoning about individual steps that an algorithm takes. Designing an algorithm is the process of breaking a task down into individual steps. Induction gives us a tool to argue that the individual steps fit together to solve the original problem.
In this note, we will give a formal statement of the principle of induction and describe several illustrative examples.
Logical Predicates
In order to formally state the principle of induction, we must first introduce the notion of a logical predicate. A logical predicate is any statement that can take on a value of true or false. For example, the following statements are predicates:
 It is raining today.
 57 is a prime number.

The following procedure returns the value \(n\) on every input \(n\):
Count(n):
    total ← 0
    for i = 1 up to n do
        total ← total + 1
    endfor
    return total
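Because the note’s procedures are written in pseudocode, any executable rendering involves some choices. Here is one possible Python transcription (the lowercase name `count` is ours):

```python
def count(n):
    """One possible Python rendering of the Count pseudocode."""
    total = 0
    for i in range(1, n + 1):  # i = 1 up to n
        total = total + 1      # one increment per loop iteration
    return total
```

Since the loop body runs exactly \(n\) times and each run adds 1 to a total that starts at 0, `count(n)` returns \(n\).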
The principle of induction concerns sequences of predicates. That is, rather than looking at a single true/false statement, we consider many predicates simultaneously. For example, we might have:
 For \(i\) between 1 and 30, \(P(i)\) is the predicate, “It rained on the \(i\)th day of the month.”
 For any positive integer \(n\), \(P(n)\) is the predicate, “\(n\) is a prime number.” Since the prime numbers are \(2, 3, 5, \ldots\), the first few values of \(P(n)\) for \(n = 1, 2, 3, 4, 5, \ldots\) are false, true, true, false, true,…
 Going back to the Count method above, for each positive integer \(i\), we could define the predicate \(P(i)\) to be, “After iteration \(i\) of the for loop, \(\total\) stores the value \(i\).”
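Predicates like the second example can be rendered as Boolean-valued functions. The following trial-division sketch is our own illustration, not part of the note:

```python
def is_prime(n):
    """Predicate P(n): True exactly when the positive integer n is prime."""
    if n < 2:
        return False  # 1 (and anything smaller) is not prime
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False  # found a nontrivial divisor
    return True
```

Evaluating it at \(n = 1, 2, 3, 4, 5\) reproduces the pattern false, true, true, false, true from the text, and `is_prime(57)` settles the earlier example predicate (57 = 3 × 19, so it is false).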
Statement of Induction
The purpose of induction is to establish that all predicates in a sequence of predicates are true.
Principle of Induction. Suppose \(P(1), P(2), P(3), \ldots\) is a sequence of predicates. Suppose we establish that
 \(P(1)\) is true (base case), and
 for all \(i\), if \(P(i)\) is true, then \(P(i+1)\) is also true (inductive step).
Then for every \(n\), \(P(n)\) is true.
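The two hypotheses and the conclusion can be compressed into a single implication; this symbolic rendering is ours, but it says exactly the same thing:

\[
\Bigl( P(1) \;\wedge\; \forall i\, \bigl( P(i) \Rightarrow P(i+1) \bigr) \Bigr) \;\Longrightarrow\; \forall n\, P(n).
\]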
The principle of induction formalizes the following line of reasoning. Suppose we wish to establish that \(P(1), P(2), P(3), \ldots\) are all true. If we argue the base case (that \(P(1)\) is true) and the inductive step (that whenever \(P(i)\) is true, then so is \(P(i+1)\)), then we can reason as follows:
 \(P(1)\) is true because this is the base case.
 Since \(P(1)\) is true, then so is \(P(2)\) by the inductive step with \(i = 1\).
 Since \(P(2)\) is true, then so is \(P(3)\) by the inductive step with \(i = 2\).
 Since \(P(3)\) is true, then so is \(P(4)\) by the inductive step with \(i = 3\).
 …
While this reasoning may be intuitive, the principle of induction asserts that our conclusion (that all of the \(P(n)\) are true) is logically sound.
In what follows, we use the principle of induction in order to justify our claims about the behavior of a few procedures.
Iterative Example
Consider the following method:
1   IterativeSum(n):
2       total ← 0
3       for i = 1 up to n do
4           total ← total + i
5       endfor
6       return total
Note that on input \(n\), this method returns the sum of the numbers \(1 + 2 + \cdots + n\). For large values of \(n\), this method is pretty inefficient. We would like to find a better method (i.e., a simple formula) for computing the method’s output without having to perform all \(n\) iterations of the loop. We will show that, in fact, such a simple formula exists.
Proposition. For every positive integer \(n\), \(\mathrm{IterativeSum}(n)\) returns the value \(\frac 1 2 n (n+1)\).
We can verify the proposition by hand for a few small values of \(n\). However, we cannot hope to establish the proposition by exhaustively checking inputs and outputs, since the proposition’s conclusion must hold for all (of the infinitely many!) positive integers. Thus, it is natural to try to argue by induction.
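A small test harness can still lend confidence before we attempt a proof. The Python below (our own sketch; the function names are ours) compares the method’s output against the conjectured formula on many small inputs:

```python
def iterative_sum(n):
    """Python rendering of IterativeSum: sums 1 + 2 + ... + n with a loop."""
    total = 0
    for i in range(1, n + 1):
        total = total + i
    return total

def closed_form(n):
    """The conjectured formula n(n+1)/2; n(n+1) is even, so // is exact."""
    return n * (n + 1) // 2

# Agreement on n = 1..1000 is evidence, not proof: only induction
# covers all (infinitely many) positive integers at once.
```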
Towards an argument by induction, we must decide precisely what claim we are making about the method \(\IterativeSum\). When analyzing an iterative method (i.e., a method containing a loop), it is often a good strategy to find a loop invariant, i.e., some property that the loop maintains before and after each iteration. We can perform a few iterations of the loop by hand to see what is going on with \(\total\):
 before the first iteration, \(\total = 0\)
 after iteration \(i = 1\), \(\total = 1\)
 after iteration \(i = 2\), \(\total = 1 + 2 = 3\)
 after iteration \(i = 3\), \(\total = 1 + 2 + 3 = 6\)
 after iteration \(i = 4\), \(\total = 1 + 2 + 3 + 4 = 10\)
 …
Observe that the pattern of values of \(\total\) is \(1, 3, 6, 10, \ldots\), which are precisely the values of \(\frac 1 2 n (n+1)\) for \(n = 1, 2, 3, 4, \ldots\). Thus, we are led to conjecture the following:
Claim (loop invariant). For every positive integer \(i\), after iteration \(i\) of the loop in \(\IterativeSum\), \(\total\) stores the value \(\frac 1 2 i (i+1)\).
We will use induction to prove the loop invariant. Before writing up the argument, however, we need to do a bit of scratch work. The base case of the argument (\(i = 1\)) is straightforward, but we need to see how to derive the inductive step. The key is the assignment \(\total \gets \total + i\) in line 4: in iteration \(i+1\), it performs the update \(\total \gets \total + (i + 1)\). Again, what we are required to show is that if the claim (loop invariant) holds after iteration \(i\), then it also holds after iteration \(i + 1\).
Supposing the claim holds after iteration \(i\), we have \(\total = \frac 1 2 i (i+1)\). Then in iteration \(i+1\) we update \(\total \gets \total + i + 1\). By the inductive hypothesis (that \(\total = \frac 1 2 i (i+1)\) before this operation) we compute:
\[\begin{align*} \total &= \frac 1 2 i (i + 1) + i + 1\\ &= \frac 1 2 i^2 + \frac 1 2 i + i + 1\\ &= \frac 1 2 i^2 + \frac 1 2 (3 i) + \frac 1 2 (2)\\ &= \frac 1 2 (i^2 + 3 i + 2)\\ &= \frac 1 2 (i + 1)(i + 2). \end{align*}\]Note that this final expression is precisely what our claim says the value should be after iteration \(i + 1\): \(\total = \frac 1 2 (i+1)(i+1+1)\). With this computation done, we can write our argument more formally.
Proof of claim. We argue by induction on \(i\).

Base case. Before the first iteration of the loop, we set \(\total \gets 0\) in line 2 of \(\IterativeSum\). In iteration \(i = 1\), line 4 updates the value of \(\total\) to \(\total + 1 = 0 + 1 = 1\). Therefore, \(\total = 1 = \frac 1 2 (1) (1 + 1)\) at the end of iteration \(i = 1\), so the claim holds for \(i = 1\).

Inductive step. Assume the inductive hypothesis holds, i.e., that after iteration \(i\), \(\total\) stores the value \(\frac 1 2 i (i + 1)\). During iteration \(i + 1\), line 4 updates the value of \(\total\) to \(\frac 1 2 i (i+1) + (i + 1) = \frac 1 2 (i + 1) (i + 2)\) (where the equality holds by the computation we did above). Therefore, the claim holds after iteration \(i + 1\) as well.
Since we have established that the base case and the inductive step hold, the claim holds by induction. \(\Box\)
Our main proposition now follows immediately from the claim. Specifically, consider the value of \(\total\) returned by \(\IterativeSum(n)\). The condition of the for loop in lines 3–5 implies that we break out of the loop after iteration \(n\). By the loop invariant claim, \(\total = \frac 1 2 n (n+1)\) after the \(n\)th iteration, so this is the value returned by \(\IterativeSum(n)\) in line 6.
Recursive Example
Induction is an especially valuable tool in reasoning about recursive methods. For many of us, recursion is an unintuitive way of thinking about computation. Often when one implements a recursive method to solve a problem, it seems to work by magic (if at all). Induction gives us a logical tool to reason about, understand, and justify this magic.
When defining a recursive procedure for a task, we typically design the method in two parts:
 the base case in which the procedure should return a value without making a recursive call, and
 the recursive step which invokes one or more recursive method calls before returning a value.
In the analysis of a recursively defined method, these two cases correspond to the base case of induction and the inductive step. In the latter case, we can intuitively justify the correctness of the procedure as follows: a method call succeeds because its recursive calls succeed. The recursive calls succeed because their recursive calls succeed, and so on, until a base case is reached. Note that this justification is just inductive reasoning applied in reverse. Once we establish that the base case succeeds and argue the inductive step, we will have established that all recursive method calls succeed. To summarize: recursion isn’t magic, it’s induction.
To give an explicit example, here is a recursive implementation of the \(\IterativeSum\) method defined above:
1   RecursiveSum(n):
2       if n <= 1 then
3           return 1
4       endif
5       return n + RecursiveSum(n-1)
Exercise. Compute the values returned by \(\RecursiveSum(n)\) for \(n = 1, 2, 3, 4\) by hand to verify that you get the same result as \(\IterativeSum(n)\).
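One way to carry out this check mechanically: the following Python transcription of \(\RecursiveSum\) (our own rendering) can be compared against the formula for small \(n\):

```python
def recursive_sum(n):
    """Python rendering of RecursiveSum: base case n <= 1, else recurse on n - 1."""
    if n <= 1:
        return 1
    return n + recursive_sum(n - 1)

# For n = 1, 2, 3, 4 this yields 1, 3, 6, 10, matching IterativeSum.
```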
We can again use induction to argue that \(\RecursiveSum(n)\) always returns the value \(\frac 1 2 n (n+1)\) for any \(n \geq 1\). In this case, the argument is actually a little simpler than the argument for \(\IterativeSum\) because we do not need a separate loop invariant claim. Again, the argument relies on the computation \(\frac 1 2 n (n+1) + (n + 1) = \frac 1 2 (n+1)(n+2)\) that we did before.
Proposition. For every integer \(n \geq 1\), \(\RecursiveSum(n)\) returns the value \(\frac 1 2 n (n+1)\).
Proof. We argue the proposition by induction on \(n\).

Base case. In the case \(n = 1\), the condition \(n \leq 1\) in line 2 is satisfied, so the value \(1\) is returned in line 3. Since \(1 = \frac 1 2 (1) (1 + 1)\), the proposition holds for \(n = 1\).

Inductive step. Suppose the inductive hypothesis holds, i.e., that \(\RecursiveSum(n)\) returns the value \(\frac 1 2 n (n+1)\) for some \(n \geq 1\). Since \(n+1 > 1\), the value returned by \(\RecursiveSum(n+1)\) in line 5 is
\[\begin{align*} (n + 1) + \RecursiveSum(n) &= (n + 1) + \frac 1 2 n (n+1) = \frac 1 2 (n+1) (n+2). \end{align*}\]The first equality is from applying the inductive hypothesis, and the second equality holds by the computation we did previously. Therefore, the proposition holds for \(n+1\) as well.
Since the base case and inductive step hold, the proposition follows by induction. \(\Box\)
BubbleSort: A Complete Analysis
As a final note, we provide a complete analysis of the \(\BubbleSort\) sorting procedure. \(\BubbleSort\) sorts an array \(a\) of comparable values by using two elementary operations to access and manipulate the array:

\(\compare(a, i, j)\) returns \(\true\) if \(a[i] > a[j]\) and \(\false\) otherwise.

\(\swap(a, i, j)\) swaps the values \(a[i]\) and \(a[j]\) in the array. That is, if before calling \(\swap\) we had \(a[i] = x\) and \(a[j] = y\), then after performing \(\swap(a, i, j)\), we would have \(a[i] = y\) and \(a[j] = x\), and the other values in \(a\) would be unaffected.
The high-level idea of the algorithm is to successively compare adjacent elements in the array and swap them if they are out of order. The order in which the compare/swap operations are performed ensures that the larger elements “bubble up” to higher indices in the array. More formally, here is the pseudocode for \(\BubbleSort\):
1   # input: a, an array of numerical values
2   BubbleSort(a):
3       n ← size(a)
4       for i = 1 up to n-1 do
5           for j = 1 up to n-i do
6               if compare(a, j, j+1) then
7                   swap(a, j, j+1)
8               endif
9           endfor
10      endfor
To understand how the procedure works, consider the inner for loop on lines 5–8. For each value of \(j\), lines 6–7 swap the values \(a[j]\) and \(a[j+1]\) if \(a[j] > a[j+1]\). We will show that iteration \(j\) of this loop guarantees that the largest element in \(a[1..j+1]\) is stored at \(a[j+1]\). Thus, after \(n - i\) iterations, the largest value in \(a[1..n-i+1]\) is stored at \(a[n-i+1]\). The outer loop iterates this process for \(i = 1, 2, \ldots, n-1\). After the first (outer) iteration, the largest value in the array is stored at \(a[n]\). After the second outer iteration, the second largest value is stored at \(a[n-1]\), and so on. After \(n - 1\) iterations, the array is sorted.
Exercise. Perform the \(\BubbleSort\) routine by hand on the array \(a = [4, 2, 5, 1, 3]\).
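For comparison with your hand trace, here is a direct Python transcription of the pseudocode (our own rendering; Python lists are 0-indexed, so the pseudocode’s \(a[k]\) corresponds to `a[k-1]` below):

```python
def compare(a, i, j):
    """Elementary operation: True if a[i] > a[j] (0-based indices)."""
    return a[i] > a[j]

def swap(a, i, j):
    """Elementary operation: exchange a[i] and a[j] in place."""
    a[i], a[j] = a[j], a[i]

def bubble_sort(a):
    """Transcription of the BubbleSort pseudocode; sorts the list a in place."""
    n = len(a)
    for i in range(1, n):          # i = 1 up to n-1
        for j in range(n - i):     # j = 1 up to n-i, shifted to 0-based
            if compare(a, j, j + 1):
                swap(a, j, j + 1)
    return a
```

On the exercise’s array, `bubble_sort([4, 2, 5, 1, 3])` returns `[1, 2, 3, 4, 5]`.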
Now we formalize our analysis of \(\BubbleSort\). Our analysis reduces to establishing two claims about the behavior of the method. The first claim analyzes the behavior of the inner for loop, while the second applies to the outer for loop.
Claim 1 (inner loop). After iteration \(j\) of the inner for loop of \(\BubbleSort\) (lines 5–8), \(a[j+1]\) is the largest value in \(a[1..j+1]\).
Claim 2 (outer loop). After iteration \(i\) of the outer for loop of \(\BubbleSort\) (lines 4–9), the following hold:
 \(a[n-i+1..n]\) is sorted, and
 no element in \(a[1..n-i]\) is larger than \(a[n-i+1]\).
The correctness of the \(\BubbleSort\) algorithm follows directly from Claim 2 applied with \(i = n-1\): after \(n - 1\) iterations of the outer loop, \(a[2..n]\) is sorted, and \(a[1]\) is no larger than \(a[2]\). Thus the entire array is sorted.
We now prove Claim 1. We will then use Claim 1 (and induction) to prove Claim 2.
Proof of Claim 1. We argue by induction on \(j\).

Base case. When \(j = 1\), consider the two cases: \(a[1] > a[2]\), and \(a[1] \leq a[2]\). In the former case, the condition in line 6 applies, so the two elements are swapped in line 7. Thus, after the swap, \(a[2]\) stores the larger of the two elements and the claim holds. In the case \(a[1] \leq a[2]\), no swap is performed, and the claim holds for \(j = 1\) as well.

Inductive step. Suppose the claim holds after iteration \(j\). That is, \(a[j+1]\) is the largest element in \(a[1..j+1]\). Now consider iteration \(j+1\). As in the base case, there are two cases to consider: \(a[j+1] > a[j+2]\) and \(a[j+1] \leq a[j+2]\). In the former case, \(a[j+1]\) is the largest element in \(a[1..j+2]\). In this case, the swap operation in line 7 ensures that this larger value is moved to \(a[j+2]\), so the claim holds.
On the other hand, if \(a[j+1] \leq a[j+2]\), then no swap is performed. By the inductive hypothesis, \(a[j+1]\) is the largest element in \(a[1..j+1]\). Since \(a[j+1] \leq a[j+2]\), \(a[j+2]\) is the largest element in \(a[1..j+2]\), so the claim still holds.
Since the base case and inductive step hold, Claim 1 follows from induction. \(\Box\)
With Claim 1 proven, we can now prove Claim 2, from which the correctness of \(\BubbleSort\) follows.
Proof of Claim 2. We argue by induction on \(i\).

Base case. Consider the first iteration \(i = 1\). By Claim 1 (applied to \(j = n - 1\)), after the first iteration of the outer loop, \(a[n]\) stores the largest value in the array. In particular, (1) \(a[n..n]\) is sorted, and (2) no element in \(a[1..n-1]\) is larger than \(a[n]\), so Claim 2 holds for \(i = 1\).

Inductive step. Suppose Claim 2 holds after iteration \(i\) for some \(i\). Consider the \((i+1)\)st iteration of the outer loop. By Claim 1 (applied to \(j = n - i - 1\)), when the \((i+1)\)st iteration terminates, \(a[n-i]\) stores the largest element in \(a[1..n-i]\). Moreover, by the inductive hypothesis, we have \(a[n-i] \leq a[n-i+1]\), so (1) \(a[n-i..n]\) is sorted, and (2) no element in \(a[1..n-i-1]\) is larger than \(a[n-i]\). Thus, Claim 2 also holds for \(i+1\).
Since the base case and inductive step hold, Claim 2 follows from induction. \(\Box\)
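Having proved both claims, we can also watch them hold during an actual run. The instrumented Python version below (our own harness, 0-indexed, with the elementary operations inlined) asserts Claim 1 after every inner iteration and Claim 2 after every outer iteration:

```python
def bubble_sort_checked(a):
    """Bubble sort with assertions encoding Claims 1 and 2.

    Python lists are 0-indexed, so the pseudocode's a[k] is a[k-1] here.
    """
    n = len(a)
    for i in range(1, n):
        for j in range(n - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
            # Claim 1: after inner iteration j, a[j+1] holds the largest
            # value among the first j+2 entries.
            assert a[j + 1] == max(a[: j + 2])
        # Claim 2, part (1): after outer iteration i, the length-i suffix
        # is sorted...
        suffix = a[n - i:]
        assert suffix == sorted(suffix)
        # ...and part (2): no earlier element exceeds the suffix's first entry.
        assert all(x <= a[n - i] for x in a[: n - i])
    return a
```

Passing assertions on sample inputs are not a substitute for the proofs above, but they make the invariants concrete.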
We have now produced a logically sound argument that the \(\BubbleSort\) algorithm successfully sorts every array (assuming the elementary operations are performed faithfully). While our analysis is quite formal, the conceptual content of Claims 1 and 2 is consistent with the original intuitive explanation of why \(\BubbleSort\) works. Thus, our proof can be viewed as a formal verification that our intuition about the algorithm is correct.