$ \def\compare{ {\mathrm{compare}} } \def\swap{ {\mathrm{swap}} } \def\sort{ {\mathrm{sort}} } \def\insert{ {\mathrm{insert}} } \def\true{ {\mathrm{true}} } \def\false{ {\mathrm{false}} } \def\BubbleSort{ {\mathrm{BubbleSort}} } \def\SelectionSort{ {\mathrm{SelectionSort}} } \def\Merge{ {\mathrm{Merge}} } \def\MergeSort{ {\mathrm{MergeSort}} } \def\QuickSort{ {\mathrm{QuickSort}} } \def\Split{ {\mathrm{Split}} } \def\Multiply{ {\mathrm{Multiply}} } \def\Add{ {\mathrm{Add}} } $
Binary Radix Sort
RadixSort(a, B):                      # B is number of bits
    RadixSort(a, 1, size(a)+1, B)

RadixSort(a, i, j, b):
    if j - i <= 1 or b < 1 then       # one element left, or no bits left to split on
        return
    endif
    m <- BitSplit(a, i, j, b)         # partition a[i..j) on bit b; m = start of the 1-bit group
    RadixSort(a, i, m, b-1)
    RadixSort(a, m, j, b-1)
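A runnable Python sketch of the same MSD binary radix sort, assuming a 0-indexed list; the notes do not specify BitSplit, so the `bit_split` helper below is one possible stand-in (an in-place two-pointer partition on bit $b$):

```python
def radix_sort(a, num_bits):
    """Sort a list of non-negative integers, each fitting in num_bits bits."""
    _radix_sort(a, 0, len(a), num_bits - 1)   # bits indexed num_bits-1 (MSB) down to 0

def _radix_sort(a, i, j, b):
    if j - i <= 1 or b < 0:                   # one element left, or no bits left
        return
    m = bit_split(a, i, j, b)                 # partition a[i:j] on bit b
    _radix_sort(a, i, m, b - 1)               # recurse on the 0-bit group
    _radix_sort(a, m, j, b - 1)               # recurse on the 1-bit group

def bit_split(a, i, j, b):
    """Rearrange a[i:j] so elements with bit b = 0 precede those with bit b = 1;
    return the index of the first element whose bit b is 1."""
    lo, hi = i, j - 1
    while lo <= hi:
        if (a[lo] >> b) & 1 == 0:
            lo += 1
        else:
            a[lo], a[hi] = a[hi], a[lo]
            hi -= 1
    return lo
```

For example, `radix_sort([5, 3, 7, 1, 6, 2], 3)` sorts the list in place to `[1, 2, 3, 5, 6, 7]`.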
Lower Bound. Any algorithm that sorts all permutations of size $n$ using only $\compare$ and $\swap$ operations requires $\Omega(n \log n)$ comparisons.
Caveat. If values are all represented with $B$ bits, then RadixSort sorts $n$ values using $O(B n)$ bit-wise comparisons.
Example. Compute $10110 * 1011$.
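Worked out with the shift-and-add scheme below (add a copy of $10110$ shifted left by each position where $1011$ has a 1 bit):

\[ 10110 \times 1011 = 10110 + 101100 + 10110000 = 11110010 \qquad (22 \times 11 = 242). \]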
Multiply(a, b):
    product <- 0
    shifted <- a                        # copy of a that we will shift
    for i = 1 to size(b) do             # b[1] is the least significant bit of b
        if b[i] = 1 then
            product <- Add(product, shifted)
        endif
        shifted <- shifted << 1
    endfor
    return product
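A Python sketch of the same shift-and-add loop, iterating over the bits of $b$ from least to most significant and using Python's built-in `+` in place of the bit-level Add:

```python
def multiply(a, b):
    """Multiply non-negative integers a and b by shift-and-add."""
    product = 0
    shifted = a                           # copy of a that gets shifted left each round
    while b > 0:
        if b & 1:                         # current bit of b is 1
            product = product + shifted   # stands in for Add(product, shifted)
        shifted <<= 1                     # next power-of-two multiple of a
        b >>= 1                           # move on to the next bit of b
    return product
```

With the example above, `multiply(0b10110, 0b1011)` returns `242`, i.e. `0b11110010`.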
If $a$ and $b$ are represented with $n$ bits, what is the running time of $\Multiply(a,b)$?
Why did we previously assume arithmetic takes $O(1)$ time?
Idea. Break each number into a high half and a low half of $B$ bits each:
$a b = (a_1 2^B + a_0)(b_1 2^B + b_0) = a_1 b_1 2^{2B} + (a_1 b_0 + a_0 b_1) 2^B + a_0 b_0$
Define:
\[\begin{align*} c_2 &= a_1 b_1 \\ c_1 &= a_1 b_0 + a_0 b_1 \\ c_0 &= a_0 b_0 \end{align*}\]
Now: $a b = c_2 2^{2B} + c_1 2^B + c_0$
Consider: $c^* = (a_1 + a_0)(b_1 + b_0) = a_1 b_1 + a_1 b_0 + a_0 b_1 + a_0 b_0$
Question. How do $c_0, c_1, c_2$ relate to $c^*$?
By using $c^*$ to compute $a b$: compute $c_2 = a_1 b_1$, $c_0 = a_0 b_0$, and $c^* = (a_1 + a_0)(b_1 + b_0)$, then recover $c_1 = c^* - c_2 - c_0$.
Computing $ab$ this way uses only three multiplications of half-size numbers, plus a constant number of additions, subtractions, and shifts.
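Expanding $c^*$ checks this identity:
\[ c^* - c_2 - c_0 = (a_1 b_1 + a_1 b_0 + a_0 b_1 + a_0 b_0) - a_1 b_1 - a_0 b_0 = a_1 b_0 + a_0 b_1 = c_1. \]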
KMult(a, b):
    n <- size(a) (= size(b))
    if n = 1 then return a*b
    a = a1 a0                        # split a into high half a1 and low half a0 (n/2 bits each)
    b = b1 b0                        # split b the same way
    c2 <- KMult(a1, b1)
    c0 <- KMult(a0, b0)
    c  <- KMult(a1 + a0, b1 + b0)    # c = c2 + c1 + c0
    return (c2 << n) + ((c - c2 - c0) << (n/2)) + c0
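A self-contained Python sketch of KMult; unlike the pseudocode above it does not assume equal sizes (it splits on half of the larger bit-length) and it bottoms out when either operand is a single bit:

```python
def kmult(a, b):
    """Karatsuba multiplication of non-negative integers."""
    if a < 2 or b < 2:                            # base case: 0, 1, or a single-bit operand
        return a * b
    n = max(a.bit_length(), b.bit_length())
    half = n // 2
    a1, a0 = a >> half, a & ((1 << half) - 1)     # a = a1 * 2^half + a0
    b1, b0 = b >> half, b & ((1 << half) - 1)     # b = b1 * 2^half + b0
    c2 = kmult(a1, b1)
    c0 = kmult(a0, b0)
    c = kmult(a1 + a0, b1 + b0)                   # c = c2 + c1 + c0
    return (c2 << (2 * half)) + ((c - c2 - c0) << half) + c0
```

For the running example, `kmult(0b10110, 0b1011)` returns `242`; each level performs three half-size multiplications instead of four.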
At depth $k$ of the recursion tree there are $3^k$ calls to KMult, each on numbers of $n/2^k$ bits, and each call does $O(n/2^k)$ work (additions, subtractions, shifts) outside of its recursive calls.
Total running time: $T(n) = \sum_{k=0}^{\log_2 n} 3^k \cdot O\!\left(\frac{n}{2^k}\right) = O(n) \sum_{k=0}^{\log_2 n} \left(\tfrac{3}{2}\right)^k$.
Can show: the geometric sum is dominated by its last term, so $\sum_{k=0}^{\log_2 n} (3/2)^k = O\!\left((3/2)^{\log_2 n}\right) = O\!\left(\frac{3^{\log_2 n}}{2^{\log_2 n}}\right) = O\!\left(\frac{n^{\log_2 3}}{n}\right)$.
Simplify: $T(n) = O(n) \cdot O\!\left(\frac{n^{\log_2 3}}{n}\right) = O(n^{\log_2 3})$.
Result. The running time of Karatsuba multiplication is $O(n^{\log_2 3}) \approx O(n^{1.58})$.
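Equivalently (a standard restatement of the analysis above, not spelled out in the notes), KMult satisfies the recurrence
\[ T(n) = 3\,T(n/2) + O(n), \]
whose solution is $T(n) = O(n^{\log_2 3})$.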