Data Structure & Algorithms
231-CCS-4
Chapter 6: Recursion
Textbook: Chapter 5, pages 169-213
Subject Coordinator: Mrs. Mariyam Aysha Bivi
Subject Teachers: Dr. Talal Qaid, Dr. Makki Akasha Babikier
Chapter 6: Recursion 1
Recursive Definitions
• A recursive definition consists of two parts:
1. In the first part, called the anchor or the ground case, the
basic elements that are the building blocks of all other
elements of the set are listed.
2. In the second part, rules are given that allow for the
construction of new objects out of basic elements or objects
that have already been constructed.
These rules are applied again and again to generate new
objects.
Chapter 6: Recursion 2
Example
To construct the set of natural numbers:
1) First recursive definition: one basic element, 0, is singled out, and the
operation of incrementing by 1 is given, so that:
    0 is in N; if n is in N, then n + 1 is in N.
2) Second recursive definition
Example
• Recursive definitions are frequently used to define functions and
sequences of numbers.
Java implementation
• The factorial function, n!:

int factorial (int n) {
    if (n == 0)
        return 1;
    else
        return n * factorial (n - 1);
}
• Finding a closed-form formula is preferable to recursion, since it reduces
the amount of computation, but finding one is generally a difficult problem.
• Example:
Can be converted into the simple formula
Method Calls and Recursion Implementation
What happens when a method is called?
• The formal parameters are created and initialized to the values passed as
actual parameters.
• The system has to know where to resume execution of the program after the
method has finished; storing the return address at a fixed location in main
memory works, but is not efficient.
• For a method call, more information has to be stored than just a return
address. Dynamic allocation using the run-time stack is a much better
solution.
What information should be preserved when a method is called?
• The state of each method, including main(), is characterized by:
    the contents of all automatic variables,
    the values of the method's parameters, and
    the return address indicating where to restart its caller.
Method Calls and Recursion Implementation(cont...)
The data area containing all this information is called an activation record or a stack
frame
• An activation record usually contains the following information:
Values for all parameters to the method
Local (automatic) variables; these can be stored elsewhere, in which case the
activation record contains only their descriptors and pointers to the
locations where they are stored.
Return address back to the routine's caller
A dynamic link, to the caller’s activation record.
The return value (if any)
• The activation record is allocated on the run-time stack.
• An activation record exists for as long as a method owning it is executing.
• Activation records usually have a short lifespan because they are dynamically
allocated at method entry and deallocated upon exiting. Only the activation record
of main() outlives every other activation record.
Method Calls and Recursion Implementation (cont…)
• If a method is called either by
main() or by another method, then
its activation record is created on
the run-time stack.
Contents of the run-time stack when
main() calls method f1(), f1() calls
f2(), and f2() calls f3()
Anatomy of a recursive Call
Java implementation
• Consider this function:

double power (double x, int n) {
    if (n == 0)
        return 1.0;
    return x * power(x, n - 1);
}
• Using this definition, the value of x^4 can be computed in the following way:
    x^4 = x · x^3 = x · (x · x^2) = x · (x · (x · x^1)) = x · (x · (x · (x · x^0)))
        = x · (x · (x · (x · 1)))
  where power(x, 0) is the anchor.
• The system keeps track of all calls on its run-time stack.
Anatomy of a recursive Call(Cont…)
• Changes to the run-time stack during execution of power(5.6, 2); in the
figure, 136 is the return address of the call.
Anatomy of a recursive Call(Cont…)
Snapshots of the run-time stack while computing factorial(3): main() calls
factorial(3), which pushes activation records for n = 3, n = 2, n = 1, and
finally n = 0. The call with n = 0 returns 1; as the stack unwinds, each frame
completes its pending multiplication: 1 * 1 = 1, then 2 * 1 = 2, and finally
3 * 2 = 6, which is returned to main() and assigned to f.
Anatomy of a recursive Call(Cont…)
The method power implemented differently, using a loop:
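The loop-based code from the slide is not reproduced here; a minimal sketch of an iterative power method (the class name is illustrative), which multiplies an accumulator by x a total of n times, could look like this:

```java
public class PowerIter {
    // Iterative version of power: multiply an accumulator by x, n times.
    static double power(double x, int n) {
        double result = 1.0;
        for (int i = 0; i < n; i++)
            result *= x;
        return result;
    }

    public static void main(String[] args) {
        System.out.println(power(2.0, 10));  // prints 1024.0
    }
}
```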
Do we gain anything by using recursion instead of a loop?
• The recursive version seems to be more intuitive because it is similar to the original
definition of the power function.
• The recursive version increases program readability, improves
self-documentation, and simplifies coding.
• For most recursive implementations, the code is shorter than it is in the
nonrecursive implementations.
Tail Recursion
• Tail recursion is characterized by the use of only one recursive call, at
the very end of a method implementation.
• When the call is made, there are no statements left to be executed by the
method; the recursive call is not only the last statement, but there are also
no earlier recursive calls, direct or indirect.
• Example:
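The slide's example is not reproduced here; a typical tail-recursive method prints its argument before recursing, so the recursive call is the method's final action (the name tail and the StringBuilder used for checking the output are illustrative):

```java
public class TailDemo {
    // Tail recursion: the recursive call is the very last action;
    // nothing remains to be done when it returns.
    static void tail(int i, StringBuilder out) {
        if (i > 0) {
            out.append(i).append(' ');
            tail(i - 1, out);  // last statement: a tail call
        }
    }

    public static void main(String[] args) {
        StringBuilder out = new StringBuilder();
        tail(3, out);
        System.out.println(out);  // prints "3 2 1 "
    }
}
```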
Nontail Recursion
• Example 1:

void nonTail (int i) {
    if (i > 0) {
        nonTail(i-1);
        System.out.print (i + "");
        nonTail(i-1);
    }
}

• Example 2: printing an input line in reverse order.

/* 200 */ void reverse() {
/* 201 */     char ch = getChar();
/* 202 */     if (ch != '\n') {
/* 203 */         reverse();
/* 204 */         System.out.print(ch);
              }
          }
• The method is called as many times as the number of characters contained in the input string,
including the end-of-line character.
• In our example, reverse() is called four times, and Figure (d) shows the
run-time stack during the last call.
Indirect Recursion
• Previously, only direct recursion was discussed, where a method f() calls
itself.
• Structure of Direct Recursion:
• Indirect Recursion: a function (say, fun) is indirectly recursive if it
calls another function (say, fun2), and fun2 then calls fun, directly or
indirectly.
Indirect recursion
• Example:
Write a program to print the numbers from 1 to 10 in such a way that when the
number is odd, add 1, and when the number is even, subtract 1.
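One possible sketch (the method names odd and even, the shared counter, and the StringBuilder used for checking are illustrative): two methods call each other, each printing the adjusted number and advancing the counter:

```java
public class OddEven {
    static int n = 1;                             // shared counter
    static StringBuilder out = new StringBuilder();

    // When n is odd, print n + 1, then let even() handle the next number.
    static void odd() {
        if (n <= 10) {
            out.append(n + 1).append(' ');
            n++;
            even();                               // indirect recursion
        }
    }

    // When n is even, print n - 1, then let odd() handle the next number.
    static void even() {
        if (n <= 10) {
            out.append(n - 1).append(' ');
            n++;
            odd();                                // indirect recursion
        }
    }

    public static void main(String[] args) {
        odd();                                    // n starts at 1, which is odd
        System.out.println(out);                  // prints "2 1 4 3 6 5 8 7 10 9 "
    }
}
```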
Nested Recursion
• A more complicated case of recursion is found in definitions in which a function is not only
defined in terms of itself, but also is used as one of the parameters.
• Example 1:
    h(n) = 0                 if n = 0
    h(n) = n                 if n > 4
    h(n) = h(2 + h(2n))      if 0 < n ≤ 4
Function h has a solution for all n ≥ 0. This fact is obvious for
all n > 4 and n = 0, but it has to be proven for n = 1, 2, 3, and 4.
Thus, h(2) = h(2 + h(4)) = h(2 + h(2 + h(8))) = 12.
(What are the values of h(n) for n = 1, 3, and 4?)
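Assuming the definition h(n) = 0 for n = 0, h(n) = n for n > 4, and h(n) = h(2 + h(2n)) otherwise (which reproduces the h(2) = 12 trace), a direct Java translation is:

```java
public class NestedH {
    // Nested recursion: h appears inside its own argument.
    // Assumed definition: h(0) = 0; h(n) = n for n > 4;
    // h(n) = h(2 + h(2n)) otherwise.
    static int h(int n) {
        if (n == 0) return 0;
        if (n > 4) return n;
        return h(2 + h(2 * n));
    }

    public static void main(String[] args) {
        System.out.println(h(2));  // prints 12, matching the trace in the text
    }
}
```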
• Example 2: the Ackermann function:
    A(m, n) = n + 1                     if m = 0
    A(m, n) = A(m - 1, 1)               if n = 0
    A(m, n) = A(m - 1, A(m, n - 1))     otherwise
• This function is interesting because of its remarkably rapid growth.
• The definition translates very nicely into Java, but the task of expressing
it in a nonrecursive form is truly troublesome.
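Taking the standard two-argument definition A(0, n) = n + 1; A(m, 0) = A(m - 1, 1); A(m, n) = A(m - 1, A(m, n - 1)) otherwise (the textbook may order the arguments differently), the Java translation is indeed short:

```java
public class Ackermann {
    // Standard Ackermann function; grows so fast that even A(4, 2)
    // is an astronomically large number.
    static long A(long m, long n) {
        if (m == 0) return n + 1;
        if (n == 0) return A(m - 1, 1);
        return A(m - 1, A(m, n - 1));
    }

    public static void main(String[] args) {
        System.out.println(A(2, 3));  // prints 9
        System.out.println(A(3, 3));  // prints 61
    }
}
```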
Excessive Recursion
• The price for using recursion is slower execution and storing more on the
run-time stack than a nonrecursive approach requires.
• If recursion is too deep (for example, computing 5.6^100,000), we can run
out of space on the stack, and the program terminates abnormally by raising
an unrecoverable StackOverflowError.
• If a recursive method repeats the computations for some parameters, the run
time can be prohibitively long even for very simple cases.
• Example: the Fibonacci numbers, defined by
    Fib(n) = n                          if n < 2
    Fib(n) = Fib(n - 2) + Fib(n - 1)    otherwise
• The straightforward Java implementation is easy but inefficient.
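A direct translation of the definition Fib(n) = n for n < 2 and Fib(n) = Fib(n - 2) + Fib(n - 1) otherwise (the class name is illustrative):

```java
public class Fib {
    // Naive recursive Fibonacci: easy to write, exponentially slow,
    // because the same values are recomputed over and over.
    static long fib(int n) {
        if (n < 2) return n;              // anchor: Fib(0) = 0, Fib(1) = 1
        return fib(n - 2) + fib(n - 1);
    }

    public static void main(String[] args) {
        System.out.println(fib(6));  // prints 8
    }
}
```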
Excessive Recursion(Cont…)
Why inefficient?
• The repetition of the same calculations
because the system forgets what has
already been calculated.
• For example, Fib() is called eight times with
parameter n = 1 to decide that 1 can be
returned.
• For each number of the sequence, the
method computes all its predecessors
without taking into account that it suffices
to do this only once.
• To find Fib(6) = 8, it computes Fib(5), Fib(4),
Fib(3), Fib(2), Fib(1), and Fib(0) first.
• To determine these values, Fib(4), . . . , Fib(0) have to be computed to
know the value of Fib(5).
• Independently of this, the chain of
computations Fib(3), . . . , Fib(0) is executed
to find Fib(4).
Excessive Recursion(Cont…)
• An iterative algorithm may be produced rather easily as follows:
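One such iterative sketch keeps only the last two values of the sequence (names are illustrative):

```java
public class IterativeFib {
    // Iterative Fibonacci: each value of the sequence is computed exactly once.
    static long iterativeFib(int n) {
        if (n < 2) return n;
        long prev = 0, curr = 1;          // Fib(0) and Fib(1)
        for (int i = 2; i <= n; i++) {
            long next = prev + curr;
            prev = curr;
            curr = next;
        }
        return curr;
    }

    public static void main(String[] args) {
        System.out.println(iterativeFib(6));  // prints 8
    }
}
```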
Searching and Sorting
using Recursion
Binary Search
Quick Sort
Merge Sort
Binary Search using recursion
• We implemented binary search earlier in an iterative version, using a loop.
• Iterative version:

boolean binarySearch(int a[], int low, int high, int x) {
    while (low <= high) {
        int mid = (low + high) / 2;
        if (a[mid] == x)
            return true;
        if (a[mid] > x)
            high = mid - 1;
        else
            low = mid + 1;
    }
    return false;
}
Binary Search using recursion
• Recursive version:

boolean binarySearch(int a[], int low, int high, int x) {
    if (low > high)                  // base case: the range is empty
        return false;
    int mid = (low + high) / 2;
    if (a[mid] == x)                 // base case: x found
        return true;
    if (a[mid] > x)
        return binarySearch(a, low, mid - 1, x);
    else
        return binarySearch(a, mid + 1, high, x);
}
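As a self-contained sketch with a quick check (the array and search keys are illustrative; note the array must already be sorted):

```java
public class BinarySearchDemo {
    // Recursive binary search over a sorted array.
    static boolean binarySearch(int[] a, int low, int high, int x) {
        if (low > high)                              // base case: empty range
            return false;
        int mid = (low + high) / 2;
        if (a[mid] == x)                             // base case: found
            return true;
        if (a[mid] > x)
            return binarySearch(a, low, mid - 1, x); // search left half
        return binarySearch(a, mid + 1, high, x);    // search right half
    }

    public static void main(String[] args) {
        int[] a = {1, 3, 4, 6, 8, 9, 11};
        System.out.println(binarySearch(a, 0, a.length - 1, 6));  // prints true
        System.out.println(binarySearch(a, 0, a.length - 1, 5));  // prints false
    }
}
```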
Quick Sort
• The original array is divided into two subarrays:
• The first of which contains elements less than or equal to a chosen key called
the bound or pivot.
• The second subarray includes elements equal to or greater than the bound.
• The partition process is repeated for both subarrays: two new bounds are
chosen, one for each subarray, so four subarrays are created, because each
subarray obtained in the first phase is now divided into two segments.
• This partitioning is carried on until only one-cell arrays remain, which do
not need to be sorted at all.
• By dividing the task of sorting a large array into two simpler tasks and then
dividing those tasks into even simpler tasks, it turns out that in the process of
getting prepared to sort, the data have already been sorted.
• Because the sorting work has been dissipated into this preparation process,
partitioning is the core of quicksort.
Quick Sort
Quick Sort algorithm
To partition an array, two operations have to be performed:
1. A bound has to be found
2. The array has to be scanned to place the elements in the proper subarrays.
Choosing a good bound is not a trivial task; ideally, the two subarrays should
be of approximately the same length.
Quick Sort
Implementation
of Quick Sort
In this code the bound
is chosen equal to the
element located in the
middle of the array.
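The slide's code is not reproduced here; the following is a sketch (not the textbook's exact code) that follows the description: the bound is the element in the middle of the current subarray, and the array is scanned from both ends, swapping misplaced elements into the proper subarrays:

```java
import java.util.Arrays;

public class QuickSortDemo {
    // Quicksort with the bound chosen as the middle element of the subarray.
    static void quicksort(int[] a, int first, int last) {
        if (first >= last) return;                   // one-cell (or empty) subarray
        int bound = a[(first + last) / 2];
        int lower = first, upper = last;
        while (lower <= upper) {
            while (a[lower] < bound) lower++;        // already in the left subarray
            while (a[upper] > bound) upper--;        // already in the right subarray
            if (lower <= upper) {                    // swap a misplaced pair
                int tmp = a[lower]; a[lower] = a[upper]; a[upper] = tmp;
                lower++; upper--;
            }
        }
        quicksort(a, first, upper);                  // elements <= bound
        quicksort(a, lower, last);                   // elements >= bound
    }

    public static void main(String[] args) {
        int[] a = {8, 5, 4, 7, 6, 1, 6, 3, 8, 12, 10};
        quicksort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a));  // prints [1, 3, 4, 5, 6, 6, 7, 8, 8, 10, 12]
    }
}
```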
Quick Sort
Partitioning the array
[8 5 4 7 6 1 6 3 8 12 10]
with quicksort().
Quick Sort
Sorting the array [8 5 4
7 6 1 6 3 8 12 10] with
quicksort().
Quick Sort
• Everything relies on which element of the file or array is chosen for
the bound. Ideally, it should be the median element of the array.
From time to time quicksort can be expected to be anything but
quick.
It is inappropriate to use quicksort for small arrays.
• For arrays with fewer than 30 items, insertion sort is more efficient
than quicksort (Cook and Kim, 1980).
• Best case: O(n lg n), when the bound divides the array into two subarrays
of approximately length n/2.
• Worst case: O(n²), if in each invocation of quicksort() the smallest (or
largest) element of the array is chosen for the bound.
Merge Sort
Merge Sort
• The problem with quicksort is that its worst-case complexity is O(n²),
because it is difficult to control the partitioning process.
• Different methods of choosing a bound attempt to make the
behavior of this process fairly regular; however, there is no
guarantee that partitioning results in arrays of approximately the
same size.
• Another strategy is to make partitioning as simple as possible and
concentrate on merging the two sorted arrays.
• This strategy is characteristic of Merge Sort.
• It was one of the first sorting algorithms used on a computer and
was developed by John von Neumann.
Merge Sort
1. The key process in mergesort is merging sorted halves of an array into one
sorted array.
2. These halves have to be sorted first, which is accomplished by
merging the already sorted halves of these halves.
3. This process of dividing arrays into two halves stops when the
array has fewer than two elements.
Merge Sort
Merge Sort algorithm
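The algorithm on the slide is not reproduced here; a sketch of a top-down mergesort following the steps just described (split, sort each half recursively, merge; names are illustrative):

```java
import java.util.Arrays;

public class MergeSortDemo {
    // Top-down mergesort: split, sort each half recursively, then merge.
    static void mergesort(int[] data, int first, int last) {
        if (first >= last) return;             // fewer than two elements
        int mid = (first + last) / 2;
        mergesort(data, first, mid);
        mergesort(data, mid + 1, last);
        merge(data, first, mid, last);
    }

    // Merge two adjacent sorted runs using a temporary array
    // (this extra storage is mergesort's main drawback).
    static void merge(int[] data, int first, int mid, int last) {
        int[] tmp = new int[last - first + 1];
        int i = first, j = mid + 1, k = 0;
        while (i <= mid && j <= last)
            tmp[k++] = (data[i] <= data[j]) ? data[i++] : data[j++];
        while (i <= mid)  tmp[k++] = data[i++];
        while (j <= last) tmp[k++] = data[j++];
        System.arraycopy(tmp, 0, data, first, tmp.length);
    }

    public static void main(String[] args) {
        int[] a = {1, 8, 6, 4, 10, 5, 3, 2, 22};
        mergesort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a));  // prints [1, 2, 3, 4, 5, 6, 8, 10, 22]
    }
}
```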
Merge Sort
Sorting the array [1 8 6 4 10 5 3 2 22] with mergesort().
Complexity is O(n lg n) (assuming that n is a power of 2).
mergesort has one serious drawback: the need for additional
storage for merging arrays, which for large amounts of data could
be an insurmountable obstacle.