
📚 The Ultimate Guide to Static, Dynamic & Recursion

A comprehensive, detailed guide with examples and explanations for understanding these fundamental programming
concepts.

🔷 PART 1: STATIC vs DYNAMIC


These terms appear in multiple contexts in programming. Let's explore each one thoroughly.

1️⃣ STATIC vs DYNAMIC TYPING


🔹 Static Typing
Definition: Variables have fixed types that are checked at compile time (before the program runs).

Key Characteristics:

✅ Must declare the type of a variable when you create it


✅ Type errors are caught before running the program
✅ Cannot change a variable's type after declaration
✅ Compiler knows exactly what type each variable is
✅ Generally faster execution because types are known in advance
✅ Better IDE support (autocomplete, refactoring)
✅ Self-documenting code
Languages: Java, C, C++, C#, TypeScript, Go, Rust, Swift

Example in Java:

java
// Static Typing Example
public class StaticTypingDemo {
    public static void main(String[] args) {
        // Must declare type
        int age = 25;            // age is an integer
        String name = "Alice";   // name is a string
        double price = 19.99;    // price is a double
        boolean isActive = true; // isActive is a boolean

        // These work fine
        age = 30;      // OK - still an integer
        price = 29.99; // OK - still a double

        // These cause COMPILE-TIME ERRORS
        // age = "thirty"; // ❌ ERROR! Cannot assign string to int
        // name = 100;     // ❌ ERROR! Cannot assign number to string
        // price = "free"; // ❌ ERROR! Cannot assign string to double

        // Type checking happens during compilation
        System.out.println("Age: " + age);
        System.out.println("Name: " + name);
    }
}

Example in TypeScript:

typescript
// TypeScript - Static Typing for JavaScript
let age: number = 25;
let name: string = "Alice";
let isStudent: boolean = true;
let scores: number[] = [90, 85, 92];

age = 30; // OK
// age = "thirty"; // ❌ ERROR: Type 'string' is not assignable to type 'number'

function greet(person: string): string {
  return `Hello, ${person}!`;
}

greet("Bob"); // OK
// greet(123); // ❌ ERROR: Argument of type 'number' is not assignable to parameter of type 'string'

Real-World Analogy: Think of static typing like labeled storage containers in a warehouse. Once you label a box
"ELECTRONICS," you can only put electronics in it. The warehouse manager (compiler) checks all the labels before items
are stored, preventing mistakes before they happen.

🔹 Dynamic Typing
Definition: Variables can hold any type and types are checked at runtime (while the program is running).

Key Characteristics:

✅ No need to declare variable types


✅ Variables can change types during execution
✅ Type errors only show up when the code runs
✅ More flexible and faster to write
✅ Less verbose code
❌ Potentially more error-prone
❌ Slightly slower because type checking happens at runtime
Languages: Python, JavaScript, Ruby, PHP, Perl

Example in Python:

python
# Dynamic Typing Example

# No type declarations needed


age = 25 # age is an integer
print(f"Age: {age}, Type: {type(age)}") # <class 'int'>

age = "twenty-five" # ✅ OK! Now age is a string


print(f"Age: {age}, Type: {type(age)}") # <class 'str'>

age = [25, 30, 35] # ✅ OK! Now age is a list


print(f"Age: {age}, Type: {type(age)}") # <class 'list'>

age = True # ✅ OK! Now age is a boolean


print(f"Age: {age}, Type: {type(age)}") # <class 'bool'>

age = {"value": 25} # ✅ OK! Now age is a dictionary


print(f"Age: {age}, Type: {type(age)}") # <class 'dict'>

# Type errors only appear at RUNTIME


def add_numbers(a, b):
    return a + b

result1 = add_numbers(5, 10) # Works: 15


result2 = add_numbers("Hello", " World") # Works: "Hello World"
# result3 = add_numbers(5, "ten") # ❌ RUNTIME ERROR: unsupported operand type(s)

Example in JavaScript:

javascript
// JavaScript - Dynamic Typing
let value = 42; // number
console.log(typeof value); // "number"

value = "Hello"; // string


console.log(typeof value); // "string"

value = true; // boolean


console.log(typeof value); // "boolean"

value = [1, 2, 3]; // array (object)


console.log(typeof value); // "object"

value = { name: "Alice" }; // object


console.log(typeof value); // "object"

value = function() {}; // function


console.log(typeof value); // "function"

// Functions can accept any type


function process(data) {
return data * 2;
}

console.log(process(5)); // 10
console.log(process("5")); // "55" (string concatenation!)
// console.log(process("hi")); // NaN (Not a Number)

Real-World Analogy: Dynamic typing is like a universal container or shapeshifter box. You can put shoes in it today,
books tomorrow, and toys next week. The system only checks what's inside when you actually use it.
🔹 Comparison Table
| Feature          | Static Typing              | Dynamic Typing             |
|------------------|----------------------------|----------------------------|
| Type declaration | Required                   | Not required               |
| Type checking    | Compile time               | Runtime                    |
| Type changes     | Not allowed                | Allowed                    |
| Error detection  | Early (before running)     | Late (during execution)    |
| Performance      | Faster (optimized)         | Slightly slower            |
| Flexibility      | Less flexible              | More flexible              |
| Verbosity        | More verbose               | More concise               |
| IDE support      | Excellent                  | Good                       |
| Learning curve   | Steeper                    | Gentler                    |
| Refactoring      | Easier                     | Harder                     |
| Best for         | Large applications, teams  | Rapid prototyping, scripts |

🔹 Gradual Typing (Best of Both Worlds)


Some modern languages support gradual typing - you can choose when to use type annotations.

Python with Type Hints:

python

from typing import List, Dict, Optional

# Optional type hints (not enforced at runtime, but tools can check)
def calculate_average(numbers: List[float]) -> float:
    return sum(numbers) / len(numbers)

def get_user(user_id: int) -> Optional[Dict[str, str]]:
    # May return a dictionary or None
    users = {1: {"name": "Alice", "email": "[email protected]"}}
    return users.get(user_id)

# Still dynamically typed at runtime


age = 25
age = "twenty-five" # Still works! Type hints are optional

2️⃣ STATIC vs DYNAMIC MEMORY ALLOCATION


🔹 Static Memory Allocation
Definition: Memory is allocated at compile time and has a fixed size throughout the program's execution.
Key Characteristics:

✅ Size must be known before the program runs


✅ Memory is allocated on the stack
✅ Automatic allocation and deallocation
✅ Very fast (just moving the stack pointer)
✅ Memory is automatically freed when out of scope
❌ Limited size (stack is typically 1-8 MB)
❌ Cannot be resized during execution
❌ Risk of stack overflow with large allocations
How the Stack Works:

Stack Memory (LIFO - Last In, First Out)


┌─────────────────────────────┐ ← High Memory Address
│ │
│ [Empty Space] │
│ │
├─────────────────────────────┤
│ Local Var C (int): 30 │ ← Stack Pointer (current top)
├─────────────────────────────┤
│ Local Var B (char): 'X' │
├─────────────────────────────┤
│ Local Var A (int): 100 │
├─────────────────────────────┤
│ Function Parameters │
├─────────────────────────────┤
│ Return Address │
└─────────────────────────────┘ ← Low Memory Address (Stack Base)

When function ends, stack pointer moves back up,


automatically "freeing" all local variables

Example in C:

c
#include <stdio.h>

void staticAllocationDemo() {
// Static allocation - size known at compile time
int numbers[10]; // Array of 10 integers (40 bytes)
char name[50]; // Array of 50 characters (50 bytes)
int matrix[5][5]; // 2D array (100 bytes)
double prices[3]; // Array of 3 doubles (24 bytes)

// All memory allocated on the STACK


// Size is FIXED and cannot change

// Initialize
for (int i = 0; i < 10; i++) {
numbers[i] = i * 10;
}

// These sizes CANNOT change during runtime


// numbers[100] = 999; // ❌ Out of bounds! Undefined behavior

printf("Numbers array: ");


for (int i = 0; i < 10; i++) {
printf("%d ", numbers[i]);
}
printf("\n");

// Memory automatically freed when function ends


}

int main() {
staticAllocationDemo();
// All local variables in staticAllocationDemo are now gone
return 0;
}

Example in Java:

java
public class StaticAllocation {
public static void main(String[] args) {
// Primitive types - static allocation on stack
int age = 25; // 4 bytes on stack
double salary = 50000; // 8 bytes on stack
boolean isActive = true; // 1 byte on stack
char grade = 'A'; // 2 bytes on stack

// Array reference on stack, but array object on heap


int[] numbers = new int[10]; // Reference on stack, array on heap

// These are automatically managed


processData(age, salary);
// age and salary from this scope are automatically cleaned up
}

public static void processData(int value1, double value2) {


// Parameters and local variables on stack
int result = value1 * 2;
// Automatically freed when method ends
}
}

Real-World Analogy: Static allocation is like booking a hotel room. You reserve a specific room with a specific size
before you arrive. The room size doesn't change. When you check out, the room is immediately available for the next guest.
No waiting, no paperwork.

🔹 Dynamic Memory Allocation


Definition: Memory is allocated at runtime from the heap and can have variable size determined during execution.

Key Characteristics:

✅ Size can be determined during program execution


✅ Memory is allocated on the heap
✅ Much larger available memory (gigabytes)
✅ Can be resized (in some cases)
✅ Persists until explicitly freed
❌ Must be manually managed (in C/C++) or garbage collected
❌ Slower allocation/deallocation
❌ Risk of memory leaks if not freed properly
❌ Risk of fragmentation
How the Heap Works:
Heap Memory (Dynamic, Unordered)
┌─────────────────────────────┐ ← High Memory Address
│ │
│ [Free Space] │
│ │
├─────────────────────────────┤
│ Block C (200 bytes) │ ← Allocated
│ [User Array Data] │
├─────────────────────────────┤
│ [Free Space] │ ← Fragmented
├─────────────────────────────┤
│ Block B (100 bytes) │ ← Allocated
│ [Struct Data] │
├─────────────────────────────┤
│ [Free Space] │
├─────────────────────────────┤
│ Block A (500 bytes) │ ← Allocated
│ [Dynamic Array] │
└─────────────────────────────┘ ← Low Memory Address

Blocks can be allocated and freed in any order


Memory manager tracks free and used blocks

Example in C (Manual Memory Management):

c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

void dynamicAllocationDemo() {
// Get size from user at RUNTIME
int size;
printf("How many numbers do you want to store? ");
scanf("%d", &size);

// Allocate memory dynamically from the HEAP


int* numbers = (int*)malloc(size * sizeof(int));

// Check if allocation succeeded


if (numbers == NULL) {
printf("Memory allocation failed!\n");
return;
}

// Use the dynamically allocated array


printf("Enter %d numbers:\n", size);
for (int i = 0; i < size; i++) {
scanf("%d", &numbers[i]);
}

// Display the numbers


printf("You entered: ");
for (int i = 0; i < size; i++) {
printf("%d ", numbers[i]);
}
printf("\n");

// MUST manually free the memory


free(numbers);
numbers = NULL; // Good practice to avoid dangling pointer
}

// Dynamic string allocation


char* createDynamicString(const char* input) {
// Allocate memory for the string
char* newString = (char*)malloc(strlen(input) + 1);
if (newString != NULL) {
strcpy(newString, input);
}

return newString; // Caller must free this memory!


}

// Dynamic 2D array
int** create2DArray(int rows, int cols) {
// Allocate array of pointers
int** array = (int**)malloc(rows * sizeof(int*));

// Allocate each row


for (int i = 0; i < rows; i++) {
array[i] = (int*)malloc(cols * sizeof(int));
}

return array;
}

void free2DArray(int** array, int rows) {


// Free each row
for (int i = 0; i < rows; i++) {
free(array[i]);
}
// Free array of pointers
free(array);
}

int main() {
dynamicAllocationDemo();

// String example
char* myString = createDynamicString("Hello, Dynamic Memory!");
printf("%s\n", myString);
free(myString); // Don't forget to free!

// 2D array example
int** matrix = create2DArray(3, 4);
// Use matrix...
free2DArray(matrix, 3);

return 0;
}

Example in C++ (Smart Pointers):

cpp
#include <iostream>
#include <memory>
#include <vector>

class Person {
public:
std::string name;
int age;

Person(std::string n, int a) : name(n), age(a) {


std::cout << "Person created: " << name << std::endl;
}

~Person() {
std::cout << "Person destroyed: " << name << std::endl;
}
};

int main() {
// Old C++ way (manual management)
Person* person1 = new Person("Alice", 30);
delete person1; // Must manually delete

// Modern C++ way (automatic management with smart pointers)


{
std::unique_ptr<Person> person2 = std::make_unique<Person>("Bob", 25);
// No delete needed - automatically freed when out of scope
} // person2 destroyed here

// Shared pointer (reference counted)


std::shared_ptr<Person> person3 = std::make_shared<Person>("Charlie", 35);
{
std::shared_ptr<Person> person4 = person3; // Share ownership
// Both person3 and person4 point to same object
} // person4 out of scope, but object still alive (person3 still owns it)
// Object destroyed when last shared_ptr goes out of scope

return 0;
}

Example in Python (Automatic Garbage Collection):


python
import sys

# Python automatically manages memory


def dynamic_allocation_demo():
    # Get size from user at runtime
    size = int(input("How many items? "))

    # Create list with dynamic size
    items = [0] * size  # List created at runtime
    print(f"Created list of size {size}")

    # Fill the list
    for i in range(size):
        items[i] = i * 10
    print(f"Items: {items}")

    # No need to manually free memory!
    # Python's garbage collector handles it automatically

# Large object allocation
def create_large_data():
    # Allocate 1 million integers
    large_list = list(range(1000000))

    # Check memory usage
    print(f"Size in memory: {sys.getsizeof(large_list)} bytes")

    return large_list
    # Memory persists because we return it

# When to free memory
def memory_lifecycle():
    data = create_large_data()  # Memory allocated
    # Use data...
    del data  # Hint to garbage collector (not immediate)
    # OR let it go out of scope

# Dynamic resizing
def dynamic_resizing():
    items = []  # Start empty

    # Dynamically grow
    for i in range(10):
        items.append(i)  # List grows automatically

    print(f"Items: {items}")
    print(f"Length: {len(items)}")
    print(f"Capacity: {sys.getsizeof(items)} bytes")

dynamic_allocation_demo()
memory_lifecycle()
dynamic_resizing()

Real-World Analogy: Dynamic allocation is like building a custom house. You decide the size, layout, and features as you
build, not beforehand. You can add rooms, expand, renovate. But you're responsible for maintaining it and eventually
demolishing it when no longer needed.

🔹 Stack vs Heap Comparison


| Feature            | Stack (Static)              | Heap (Dynamic)                 |
|--------------------|-----------------------------|--------------------------------|
| Allocation speed   | Very fast (pointer move)    | Slower (search for space)      |
| Deallocation speed | Automatic, instant          | Manual or GC (slower)          |
| Size               | Limited (1-8 MB typically)  | Large (GB available)           |
| Access speed       | Faster (cache-friendly)     | Slower (scattered in memory)   |
| Lifetime           | Function scope              | Until explicitly freed         |
| Management         | Automatic                   | Manual or GC                   |
| Fragmentation      | No fragmentation            | Can fragment                   |
| Flexibility        | Fixed size                  | Variable size                  |
| Typical use        | Local variables, parameters | Large data, runtime-sized data |

Memory Layout in a Process:


┌─────────────────────────────┐ ← High Address (0xFFFFFFFF)
│ Stack │ ← Grows downward
│ (Local variables) │
│ ↓↓↓ │
├─────────────────────────────┤
│ │
│ [Free Memory] │
│ │
├─────────────────────────────┤
│ ↑↑↑ │
│ Heap │ ← Grows upward
│ (Dynamic allocations) │
├─────────────────────────────┤
│ Uninitialized Data │
│ (BSS) │
├─────────────────────────────┤
│ Initialized Data │
│ (Global/Static vars) │
├─────────────────────────────┤
│ Program Code │
│ (Text segment) │
└─────────────────────────────┘ ← Low Address (0x00000000)

🔹 Memory Allocation Examples by Language


Java:

java
public class MemoryAllocation {
// Static variable - stored in special memory area
static int staticVar = 100;

// Instance variable - stored in heap (with object)


int instanceVar = 200;

public void demonstrateMemory() {


// Local primitive - stack
int localInt = 10;

// Object reference - stack (reference) + heap (object)


String localString = new String("Hello");

// Array - stack (reference) + heap (array)


int[] localArray = new int[1000];

// All local variables freed when method ends


// Objects on heap stay until garbage collected
}
}

JavaScript:

javascript
// JavaScript has automatic memory management

function memoryExample() {
// Primitives - stored on stack (or optimized)
let number = 42;
let string = "Hello";
let boolean = true;

// Objects - stored on heap


let object = { name: "Alice", age: 30 };
let array = [1, 2, 3, 4, 5];

// Large allocation
let largeArray = new Array(1000000);

// Garbage collector automatically frees unused objects


}

// Object becomes eligible for GC after function ends

3️⃣ STATIC vs DYNAMIC BINDING (Polymorphism)


🔹 Static Binding (Early Binding)
Definition: The method to be called is determined at compile time based on the reference type.

Key Characteristics:

✅ Resolved during compilation


✅ Faster execution (no runtime overhead)
✅ Used with private, final, and static methods
✅ Used with method overloading
❌ Less flexible
❌ Cannot achieve runtime polymorphism
Example in Java:

java
class Animal {
// Static method - static binding
static void makeSound() {
System.out.println("Some animal sound");
}

// Private method - static binding


private void eat() {
System.out.println("Animal eating");
}

// Final method - static binding


final void sleep() {
System.out.println("Animal sleeping");
}
}

class Dog extends Animal {


// This HIDES the parent method, doesn't override it
static void makeSound() {
System.out.println("Bark bark!");
}
}

public class StaticBindingDemo {


public static void main(String[] args) {
// Reference type is Animal, Object type is Dog
Animal myAnimal = new Dog();

// Static binding - uses REFERENCE type (Animal)


myAnimal.makeSound(); // Output: "Some animal sound"

// Reference type determines which method is called


Dog myDog = new Dog();
myDog.makeSound(); // Output: "Bark bark!"

// Method overloading - also static binding


Calculator calc = new Calculator();
calc.add(5, 10); // Calls add(int, int)
calc.add(5.5, 10.5); // Calls add(double, double)
}
}

class Calculator {
// Method overloading - resolved at compile time
int add(int a, int b) {
System.out.println("Adding integers");
return a + b;
}

double add(double a, double b) {


System.out.println("Adding doubles");
return a + b;
}

String add(String a, String b) {


System.out.println("Concatenating strings");
return a + b;
}
}

🔹 Dynamic Binding (Late Binding)


Definition: The method to be called is determined at runtime based on the actual object type.

Key Characteristics:

✅ Resolved during execution


✅ Enables runtime polymorphism
✅ Used with method overriding
✅ More flexible
❌ Slightly slower (runtime overhead)
✅ Used with non-static, non-private, non-final (overridable) methods
Example in Java:

java
class Animal {
// Instance method - dynamic binding possible
void makeSound() {
System.out.println("Some animal sound");
}

void move() {
System.out.println("Animal moving");
}
}

class Dog extends Animal {


// Override parent method
@Override
void makeSound() {
System.out.println("Bark bark!");
}

@Override
void move() {
System.out.println("Dog running");
}
}

class Cat extends Animal {


@Override
void makeSound() {
System.out.println("Meow meow!");
}

@Override
void move() {
System.out.println("Cat prowling");
}
}

class Bird extends Animal {


@Override
void makeSound() {
System.out.println("Tweet tweet!");
}
@Override
void move() {
System.out.println("Bird flying");
}
}

public class DynamicBindingDemo {


public static void main(String[] args) {
// Dynamic binding - method determined at runtime

// Same reference type, different object types


Animal animal1 = new Dog();
Animal animal2 = new Cat();
Animal animal3 = new Bird();

// Runtime determines which method to call


animal1.makeSound(); // Output: "Bark bark!" (Dog's version)
animal2.makeSound(); // Output: "Meow meow!" (Cat's version)
animal3.makeSound(); // Output: "Tweet tweet!" (Bird's version)

animal1.move(); // Output: "Dog running"


animal2.move(); // Output: "Cat prowling"
animal3.move(); // Output: "Bird flying"

// Polymorphism in action
Animal[] animals = {new Dog(), new Cat(), new Bird()};

System.out.println("\nAll animals making sounds:");


for (Animal animal : animals) {
animal.makeSound(); // Correct method called for each object
}
}
}

Example in Python:

python
class Animal:
    def make_sound(self):
        print("Some animal sound")

    def move(self):
        print("Animal moving")

class Dog(Animal):
    def make_sound(self):
        print("Bark bark!")

    def move(self):
        print("Dog running")

class Cat(Animal):
    def make_sound(self):
        print("Meow meow!")

    def move(self):
        print("Cat prowling")

class Bird(Animal):
    def make_sound(self):
        print("Tweet tweet!")

    def move(self):
        print("Bird flying")

# Python uses dynamic binding by default
def animal_concert(animals):
    """Make all animals perform"""
    for animal in animals:
        animal.make_sound()  # Runtime determines which method
        animal.move()
        print()

# Usage
animals = [Dog(), Cat(), Bird(), Animal()]
animal_concert(animals)

# Output:
# Bark bark!
# Dog running
#
# Meow meow!
# Cat prowling
#
# Tweet tweet!
# Bird flying
#
# Some animal sound
# Animal moving

Real-World Example - Payment System:

java
abstract class Payment {
abstract void processPayment(double amount);
abstract String getPaymentMethod();
}

class CreditCardPayment extends Payment {


@Override
void processPayment(double amount) {
System.out.println("Processing $" + amount + " via Credit Card");
System.out.println("Connecting to bank...");
System.out.println("Payment successful!");
}

@Override
String getPaymentMethod() {
return "Credit Card";
}
}

class PayPalPayment extends Payment {


@Override
void processPayment(double amount) {
System.out.println("Processing $" + amount + " via PayPal");
System.out.println("Redirecting to PayPal...");
System.out.println("Payment successful!");
}

@Override
String getPaymentMethod() {
return "PayPal";
}
}

class CryptoPayment extends Payment {


@Override
void processPayment(double amount) {
System.out.println("Processing $" + amount + " via Cryptocurrency");
System.out.println("Generating wallet address...");
System.out.println("Payment successful!");
}
@Override
String getPaymentMethod() {
return "Cryptocurrency";
}
}

public class PaymentSystem {


public static void checkout(Payment payment, double amount) {
// Dynamic binding - correct method called at runtime
System.out.println("Selected payment method: " + payment.getPaymentMethod());
payment.processPayment(amount);
System.out.println("---");
}

public static void main(String[] args) {


// User chooses payment method at runtime
Payment payment1 = new CreditCardPayment();
Payment payment2 = new PayPalPayment();
Payment payment3 = new CryptoPayment();

checkout(payment1, 99.99);
checkout(payment2, 149.99);
checkout(payment3, 299.99);
}
}

🔹 Binding Comparison
| Feature        | Static Binding                 | Dynamic Binding      |
|----------------|--------------------------------|----------------------|
| When resolved  | Compile time                   | Runtime              |
| Based on       | Reference type                 | Object type          |
| Used with      | private, final, static methods | Overridden methods   |
| Also used with | Method overloading             | Method overriding    |
| Performance    | Faster                         | Slightly slower      |
| Flexibility    | Less flexible                  | More flexible        |
| Polymorphism   | No                             | Yes                  |
| Example        | obj.staticMethod()             | obj.instanceMethod() |
4️⃣ STATIC vs DYNAMIC LINKING
🔹 Static Linking
Definition: Library code is copied directly into the executable at compile time.

Characteristics:

✅ Self-contained executable (no external dependencies)


✅ Faster program startup
✅ No "DLL hell" or missing library issues
❌ Larger executable file size
❌ Updates require recompilation
❌ Multiple programs duplicate library code in memory
Example:

bash

# Compile with static linking (C)


gcc -static main.c -o program
# Result: Large executable (~1 MB) with all libraries included

# File sizes:
# Dynamic executable: ~20 KB
# Static executable: ~900 KB (includes libc, etc.)

Visual Representation:
Static Linking:
┌─────────────────────────┐
│ Your Program Code │
├─────────────────────────┤
│ Library Code (copied) │
│ - Function A │
│ - Function B │
│ - Function C │
├─────────────────────────┤
│ Another Library │
│ - Function X │
│ - Function Y │
└─────────────────────────┘

Single Large Executable File

🔹 Dynamic Linking
Definition: Library code is linked at runtime when the program starts or when needed.

Characteristics:

✅ Smaller executable files


✅ Shared libraries save memory
✅ Easy library updates (no recompilation)
✅ Multiple programs share one library copy
❌ Dependency management required
❌ Slightly slower startup
❌ Risk of version conflicts
Example:

bash
# Compile with dynamic linking (C)
gcc main.c -o program
# Result: Small executable (~20 KB) that needs .so/.dll files

# Check dynamic dependencies


ldd program
# Output:
# linux-vdso.so.1
# libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6
# /lib64/ld-linux-x86-64.so.2

Visual Representation:

Dynamic Linking:
┌─────────────────────────┐
│ Your Program Code │
│ │
│ (References to libs) │
└─────────────────────────┘
↓ (at runtime)
┌──────┴──────┐
↓ ↓
┌─────────┐ ┌─────────┐
│ Library │ │ Library │
│ .so │ │ .dll │
│ (shared)│ │ (shared)│
└─────────┘ └─────────┘
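
Dynamic linking is also what lets a running program load a shared library on demand. A small sketch using Python's ctypes module; the library name "libm.so.6" is a typical Linux value (an assumption here, and it varies by platform):

python

import ctypes

# Load the C math library at runtime (dynamic loading via the system linker)
libm = ctypes.CDLL("libm.so.6")  # e.g. "libm.dylib" on macOS

# Declare the C signature: double cos(double)
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # 1.0 - calling code that was linked in at runtime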

🔄 PART 2: RECURSION - THE COMPLETE GUIDE


What is Recursion?
Definition: Recursion is a programming technique where a function calls itself to solve a problem by breaking it down into
smaller, similar subproblems.

Core Philosophy:

"To understand recursion, you must first understand recursion." 😄


Key Idea: Instead of solving a complex problem directly, recursion solves it by:

1. Breaking the problem into smaller versions of itself


2. Solving the smallest version (base case)
3. Combining solutions to build up to the original problem

The Anatomy of Recursion


Every recursive function MUST have these two components:

1️⃣ Base Case (Stopping Condition)


Purpose: The simplest form of the problem that can be solved directly without recursion.

Why Essential: Without a base case, the function calls itself infinitely, causing a stack overflow.

Examples of Base Cases:

Factorial: When n = 0 or n = 1, return 1


Fibonacci: When n = 0 or n = 1, return n
Array sum: When array is empty, return 0
Tree traversal: When node is null, return
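
In Python, that infinite descent surfaces as a RecursionError rather than a hard crash. A minimal demonstration:

python

def no_base_case(n):
    return no_base_case(n - 1)  # No base case: never stops

try:
    no_base_case(5)
except RecursionError as e:
    print(f"Caught: {e}")  # maximum recursion depth exceeded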

2️⃣ Recursive Case (The Breakdown)


Purpose: Breaks the problem into smaller subproblems and calls the function recursively.

Requirements:

Must make progress toward the base case


Must solve a smaller version of the same problem
Must combine results appropriately

Recursion Template

python
def recursive_function(problem):
    """
    General template for recursive functions
    """
    # BASE CASE - Check if problem is simple enough to solve directly
    if is_base_case(problem):
        return base_case_solution(problem)

    # RECURSIVE CASE - Break down the problem
    smaller_problem = break_down(problem)

    # Recursively solve smaller problem
    smaller_solution = recursive_function(smaller_problem)

    # Combine solution
    final_solution = combine(smaller_solution)

    return final_solution

How Recursion Works: The Call Stack


When a function calls itself, each call is placed on the call stack. Understanding the call stack is crucial to understanding
recursion.

The Call Stack Explained


Call Stack (LIFO - Last In, First Out)
┌─────────────────────────────┐
│ Most recent call │ ← Top (active)
├─────────────────────────────┤
│ Previous call (waiting) │
├─────────────────────────────┤
│ Earlier call (waiting) │
├─────────────────────────────┤
│ Original call (waiting) │
└─────────────────────────────┘ ← Bottom

Each stack frame contains:


- Function parameters
- Local variables
- Return address
- Return value (when computed)

Example: factorial(4)

python

def factorial(n):
    """Calculate n! = n × (n-1) × (n-2) × ... × 1"""
    # Base case
    if n <= 1:
        return 1
    # Recursive case
    return n * factorial(n - 1)

result = factorial(4)

Step-by-Step Execution:
PHASE 1: CALLING (Building the stack)
═══════════════════════════════════════

Step 1: Call factorial(4)


┌─────────────────────────────┐
│ factorial(4) │
│n=4 │
│ 4 <= 1? No │
│ Return: 4 * factorial(3) │ ← Waiting...
└─────────────────────────────┘

Step 2: Call factorial(3)


┌─────────────────────────────┐
│ factorial(4) │ ← Waiting for factorial(3)
├─────────────────────────────┤
│ factorial(3) │
│n=3 │
│ 3 <= 1? No │
│ Return: 3 * factorial(2) │ ← Waiting...
└─────────────────────────────┘

Step 3: Call factorial(2)


┌─────────────────────────────┐
│ factorial(4) │ ← Waiting
├─────────────────────────────┤
│ factorial(3) │ ← Waiting for factorial(2)
├─────────────────────────────┤
│ factorial(2) │
│n=2 │
│ 2 <= 1? No │
│ Return: 2 * factorial(1) │ ← Waiting...
└─────────────────────────────┘

Step 4: Call factorial(1)


┌─────────────────────────────┐
│ factorial(4) │ ← Waiting
├─────────────────────────────┤
│ factorial(3) │ ← Waiting
├─────────────────────────────┤
│ factorial(2) │ ← Waiting for factorial(1)
├─────────────────────────────┤
│ factorial(1) │
│n=1 │
│ 1 <= 1? YES! (BASE CASE) │
│ Return: 1 │ ✓ Returns immediately
└─────────────────────────────┘

PHASE 2: RETURNING (Unwinding the stack)


═══════════════════════════════════════

Step 5: factorial(1) returns 1


┌─────────────────────────────┐
│ factorial(4) │ ← Waiting
├─────────────────────────────┤
│ factorial(3) │ ← Waiting
├─────────────────────────────┤
│ factorial(2) │
│ Return: 2 * 1 = 2 │ ✓ Computed!
└─────────────────────────────┘

Step 6: factorial(2) returns 2


┌─────────────────────────────┐
│ factorial(4) │ ← Waiting
├─────────────────────────────┤
│ factorial(3) │
│ Return: 3 * 2 = 6 │ ✓ Computed!
└─────────────────────────────┘

Step 7: factorial(3) returns 6


┌─────────────────────────────┐
│ factorial(4) │
│ Return: 4 * 6 = 24 │ ✓ Computed!
└─────────────────────────────┘

Step 8: factorial(4) returns 24


Result = 24

Mathematical View:
factorial(4)
= 4 × factorial(3)
= 4 × (3 × factorial(2))
= 4 × (3 × (2 × factorial(1)))
= 4 × (3 × (2 × 1))
= 4 × (3 × 2)
= 4 × 6
= 24

Types of Recursion (DETAILED)


1️⃣ Linear Recursion
Definition: Function makes exactly one recursive call.

Characteristics:

Creates a linear chain of calls


Simple to understand and trace
Stack depth = input size

Example 1: Sum of Array

python
def sum_array(arr, index=0):
    """Sum all elements in array using recursion"""
    # Base case: reached end of array
    if index >= len(arr):
        return 0

    # Recursive case: current element + sum of rest
    return arr[index] + sum_array(arr, index + 1)

# Example usage
numbers = [1, 2, 3, 4, 5]
result = sum_array(numbers)
print(f"Sum: {result}")  # Output: Sum: 15

# Execution trace:
# sum_array([1,2,3,4,5], 0)
# = 1 + sum_array([1,2,3,4,5], 1)
# = 1 + (2 + sum_array([1,2,3,4,5], 2))
# = 1 + (2 + (3 + sum_array([1,2,3,4,5], 3)))
# = 1 + (2 + (3 + (4 + sum_array([1,2,3,4,5], 4))))
# = 1 + (2 + (3 + (4 + (5 + sum_array([1,2,3,4,5], 5)))))
# = 1 + (2 + (3 + (4 + (5 + 0))))
# = 15

Example 2: String Reversal

python
def reverse_string(s):
    """Reverse a string recursively"""
    # Base case: empty or single character
    if len(s) <= 1:
        return s

    # Recursive case: last char + reverse of rest
    return s[-1] + reverse_string(s[:-1])

print(reverse_string("hello"))  # Output: "olleh"

# Execution:
# reverse_string("hello")
# = "o" + reverse_string("hell")
# = "o" + ("l" + reverse_string("hel"))
# = "o" + ("l" + ("l" + reverse_string("he")))
# = "o" + ("l" + ("l" + ("e" + reverse_string("h"))))
# = "o" + ("l" + ("l" + ("e" + "h")))
# = "olleh"

Example 3: Countdown

python
def countdown(n):
    """Count down from n to 1"""
    # Base case
    if n <= 0:
        print("Blast off! 🚀")
        return

    # Recursive case
    print(n)
    countdown(n - 1)

countdown(5)
# Output:
# 5
# 4
# 3
# 2
# 1
# Blast off! 🚀

2️⃣ Binary/Tree Recursion


Definition: Function makes two or more recursive calls.

Characteristics:

Creates a tree structure of calls


Exponential number of calls possible
Can be very inefficient without optimization

Example 1: Fibonacci (Classic)

python
def fibonacci(n):
    """Calculate nth Fibonacci number"""
    # Base cases
    if n <= 0:
        return 0
    if n == 1:
        return 1

    # Recursive case: F(n) = F(n-1) + F(n-2)
    return fibonacci(n - 1) + fibonacci(n - 2)

# Calculate fibonacci(5)
print(fibonacci(5))  # Output: 5

# Call tree for fibonacci(5):
#
#                         fib(5)
#                       /        \
#                  fib(4)        fib(3)
#                 /      \       /     \
#            fib(3)   fib(2)  fib(2)  fib(1)
#            /    \    /  \    /  \
#       fib(2) fib(1) fib(1) fib(0) fib(1) fib(0)
#        /  \
#   fib(1) fib(0)

# Number of calls for each n:


# fib(0): 1 call
# fib(1): 1 call
# fib(2): 3 calls
# fib(3): 5 calls
# fib(4): 9 calls
# fib(5): 15 calls
# fib(10): 177 calls
# fib(20): 21,891 calls
# fib(40): 331,160,281 calls! 😱
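
Those call counts are easy to verify empirically. A small sketch (the module-level counter is just instrumentation, not part of the algorithm):

python

call_count = 0

def fib_counted(n):
    global call_count
    call_count += 1  # count every invocation
    if n <= 1:
        return n
    return fib_counted(n - 1) + fib_counted(n - 2)

fib_counted(10)
print(call_count)  # 177 - matches the table above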

Call Tree Visualization:

fibonacci(4) - Detailed Execution
════════════════════════════════════

                  fib(4) [#1]
                 /           \
        fib(3) [#2]          fib(2) [#7]
        /        \            /       \
  fib(2) [#3]  fib(1) [#6]  fib(1)   fib(0)
   /      \        |         [#8]     [#9]
fib(1)  fib(0)    [1]          |        |
 [#4]    [#5]                 [1]      [0]
   |       |
  [1]     [0]

Execution Order:
1. fib(4) called
2. fib(3) called
3. fib(2) called
4. fib(1) called → returns 1
5. fib(0) called → returns 0
6. fib(2) returns 1 (1+0)
7. fib(1) called → returns 1
8. fib(3) returns 2 (1+1)
9. fib(2) called
10. fib(1) called → returns 1
11. fib(0) called → returns 0
12. fib(2) returns 1 (1+0)
13. fib(4) returns 3 (2+1)

Total: 9 function calls to compute fib(4)!
Notice: fib(2) is calculated twice (redundant!)

Example 2: Binary Tree Traversal

python
class TreeNode:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def inorder_traversal(node):
    """Visit left subtree, node, right subtree"""
    if node is None:  # Base case
        return

    # Recursive case: two recursive calls
    inorder_traversal(node.left)   # Visit left
    print(node.value, end=" ")     # Visit node
    inorder_traversal(node.right)  # Visit right

# Build a tree:
#        1
#       / \
#      2   3
#     / \
#    4   5

root = TreeNode(1)
root.left = TreeNode(2)
root.right = TreeNode(3)
root.left.left = TreeNode(4)
root.left.right = TreeNode(5)

print("Inorder traversal:")
inorder_traversal(root) # Output: 4 2 5 1 3

Example 3: Merge Sort

python
def merge_sort(arr):
    """Sort array using merge sort (divide and conquer)"""
    # Base case: array with 0 or 1 element
    if len(arr) <= 1:
        return arr

    # Recursive case: divide into two halves
    mid = len(arr) // 2
    left_half = merge_sort(arr[:mid])   # First recursive call
    right_half = merge_sort(arr[mid:])  # Second recursive call

    # Merge the sorted halves
    return merge(left_half, right_half)

def merge(left, right):
    """Merge two sorted arrays"""
    result = []
    i = j = 0

    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1

    result.extend(left[i:])
    result.extend(right[j:])
    return result

# Example
arr = [38, 27, 43, 3, 9, 82, 10]
sorted_arr = merge_sort(arr)
print(f"Sorted: {sorted_arr}")

# Recursion tree (arrays split at mid = len(arr) // 2):
#
# merge_sort([38,27,43,3,9,82,10])
#   ├─ merge_sort([38,27,43])
#   │    ├─ merge_sort([38])       → [38]
#   │    └─ merge_sort([27,43])    → [27], [43]  → merge → [27,43]
#   │    merge → [27,38,43]
#   └─ merge_sort([3,9,82,10])
#        ├─ merge_sort([3,9])      → [3], [9]    → merge → [3,9]
#        └─ merge_sort([82,10])    → [82], [10]  → merge → [10,82]
#        merge → [3,9,10,82]
#   merge → [3,9,10,27,38,43,82]

3️⃣ Tail Recursion


Definition: The recursive call is the last operation in the function (no computation after returning).

Importance: Many compilers can optimize tail recursion into iteration, preventing stack overflow.

Non-Tail Recursive (Regular):

python

def factorial_regular(n):
    """Regular recursion - NOT tail recursive"""
    if n <= 1:
        return 1
    # ❌ Multiplication happens AFTER recursive call returns
    return n * factorial_regular(n - 1)
    #      ↑ Must wait for result, then multiply

# Stack frames must be preserved:
# factorial(5) waiting to compute: 5 * __
# factorial(4) waiting to compute: 4 * __
# factorial(3) waiting to compute: 3 * __
# factorial(2) waiting to compute: 2 * __
# factorial(1) returns: 1
# Then unwind: 2*1=2, 3*2=6, 4*6=24, 5*24=120

Tail Recursive:

python
def factorial_tail(n, accumulator=1):
    """Tail recursion - can be optimized"""
    if n <= 1:
        return accumulator
    # ✓ Recursive call is the LAST thing (nothing after it)
    return factorial_tail(n - 1, n * accumulator)
    #      ↑ All computation done before the call

# No waiting needed:
# factorial_tail(5, 1)   → factorial_tail(4, 5)
# factorial_tail(4, 5)   → factorial_tail(3, 20)
# factorial_tail(3, 20)  → factorial_tail(2, 60)
# factorial_tail(2, 60)  → factorial_tail(1, 120)
# factorial_tail(1, 120) → returns 120 (done!)

print(factorial_tail(5))  # Output: 120

Why Tail Recursion Matters:


Regular Recursion Stack:
┌─────────────────────────────┐
│ factorial(5) │
│ waiting to compute: 5 * __ │ ← Must keep in memory
├─────────────────────────────┤
│ factorial(4) │
│ waiting to compute: 4 * __ │ ← Must keep in memory
├─────────────────────────────┤
│ factorial(3) │
│ waiting to compute: 3 * __ │ ← Must keep in memory
├─────────────────────────────┤
│ factorial(2) │
│ waiting to compute: 2 * __ │ ← Must keep in memory
├─────────────────────────────┤
│ factorial(1) │
│ returns: 1 │
└─────────────────────────────┘
Stack grows with each call!

Tail Recursion (Optimized):


┌─────────────────────────────┐
│ factorial_tail(5, 1) │ ← Replaced by next call
└─────────────────────────────┘
↓ (reuse stack frame)
┌─────────────────────────────┐
│ factorial_tail(4, 5) │ ← Replaced by next call
└─────────────────────────────┘
↓ (reuse stack frame)
┌─────────────────────────────┐
│ factorial_tail(3, 20) │ ← Replaced by next call
└─────────────────────────────┘
↓ (reuse stack frame)
┌─────────────────────────────┐
│ factorial_tail(2, 60) │ ← Replaced by next call
└─────────────────────────────┘
↓ (reuse stack frame)
┌─────────────────────────────┐
│ factorial_tail(1, 120) │
│ returns: 120 │
└─────────────────────────────┘
Stack stays constant size!

More Tail Recursion Examples:

python

# Example 1: Sum of list (tail recursive)
def sum_list_tail(arr, index=0, accumulator=0):
    """Tail recursive sum"""
    if index >= len(arr):
        return accumulator
    return sum_list_tail(arr, index + 1, accumulator + arr[index])

print(sum_list_tail([1, 2, 3, 4, 5]))  # Output: 15

# Example 2: Reverse a list (tail recursive)
def reverse_list_tail(arr, accumulator=[]):
    """Tail recursive list reversal"""
    if not arr:
        return accumulator
    return reverse_list_tail(arr[1:], [arr[0]] + accumulator)

print(reverse_list_tail([1, 2, 3, 4, 5]))  # Output: [5, 4, 3, 2, 1]

# Example 3: Power function (tail recursive)
def power_tail(base, exponent, accumulator=1):
    """Calculate base^exponent using tail recursion"""
    if exponent == 0:
        return accumulator
    return power_tail(base, exponent - 1, accumulator * base)

print(power_tail(2, 10))  # Output: 1024
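
One caveat: CPython deliberately does not perform tail-call optimization, so even a tail-recursive Python function still grows the stack. When depth matters, the tail call can be rewritten mechanically as a loop that reuses the same variables, as in this sketch:

python

def factorial_loop(n, accumulator=1):
    # Same logic as factorial_tail, with the tail call turned into a loop
    while n > 1:
        n, accumulator = n - 1, n * accumulator
    return accumulator

print(factorial_loop(5))      # 120
print(factorial_loop(10000))  # fine; the recursive version would hit the recursion limit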

4️⃣ Head Recursion


Definition: The recursive call happens first, and all operations happen after the recursive call returns.

Characteristics:
Processing happens during the "unwinding" phase
Useful for printing in reverse order
Cannot be easily converted to tail recursion

Example 1: Print Numbers in Order

python

def print_ascending(n):
    """Print 1 to n using head recursion"""
    if n <= 0:
        return  # Base case

    # Recursive call FIRST
    print_ascending(n - 1)
    # Operation AFTER recursive call
    print(n, end=" ")

print_ascending(5)
# Output: 1 2 3 4 5

# Execution flow:
# print_ascending(5)
# → print_ascending(4)
#   → print_ascending(3)
#     → print_ascending(2)
#       → print_ascending(1)
#         → print_ascending(0) [returns]
#         ← print(1)
#       ← print(2)
#     ← print(3)
#   ← print(4)
# ← print(5)

Example 2: Print in Reverse

python
def print_descending(n):
    """Print n to 1 (this is actually tail recursion)"""
    if n <= 0:
        return
    print(n, end=" ")  # Operation BEFORE recursive call
    print_descending(n - 1)

print_descending(5)
# Output: 5 4 3 2 1

Example 3: Print Array in Reverse

python
def print_array_reverse(arr, index=0):
    """Print array elements in reverse order"""
    if index >= len(arr):
        return  # Base case

    # Recursive call FIRST
    print_array_reverse(arr, index + 1)
    # Print AFTER recursive call (on the way back)
    print(arr[index], end=" ")

numbers = [1, 2, 3, 4, 5]
print_array_reverse(numbers)
# Output: 5 4 3 2 1

# Stack trace:
# print_array_reverse([...], 0)
# → print_array_reverse([...], 1)
#   → print_array_reverse([...], 2)
#     → print_array_reverse([...], 3)
#       → print_array_reverse([...], 4)
#         → print_array_reverse([...], 5) [returns]
#         ← print(5)
#       ← print(4)
#     ← print(3)
#   ← print(2)
# ← print(1)

5️⃣ Indirect/Mutual Recursion


Definition: Two or more functions call each other in a cycle.

Characteristics:

Functions depend on each other


Can model complex relationships
Must have base cases in all functions

Example 1: Even/Odd Check

python
def is_even(n):
    """Check if number is even using mutual recursion"""
    if n == 0:
        return True  # Base case
    return is_odd(n - 1)

def is_odd(n):
    """Check if number is odd using mutual recursion"""
    if n == 0:
        return False  # Base case
    return is_even(n - 1)

# Test
print(is_even(4))  # True
print(is_even(7))  # False
print(is_odd(5))   # True
print(is_odd(8))   # False

# Trace is_even(4):
# is_even(4) → is_odd(3) → is_even(2) → is_odd(1) → is_even(0) → True

Example 2: Mutual Recursion with Different Logic

python
def function_A(n):
    """Decrements n, calls B"""
    if n <= 0:
        return "Done"
    print(f"A({n})")
    return function_B(n - 1)

def function_B(n):
    """Decrements n, calls A"""
    if n <= 0:
        return "Done"
    print(f"B({n})")
    return function_A(n - 1)

function_A(5)
# Output:
# A(5)
# B(4)
# A(3)
# B(2)
# A(1)

Example 3: Parsing (Real-World Use)

python
def parse_expression(tokens):
    """Parse mathematical expression"""
    result = parse_term(tokens)
    while tokens and tokens[0] in ['+', '-']:
        op = tokens.pop(0)
        right = parse_term(tokens)
        result = f"({result} {op} {right})"
    return result

def parse_term(tokens):
    """Parse term (mutual recursion with parse_expression)"""
    result = parse_factor(tokens)
    while tokens and tokens[0] in ['*', '/']:
        op = tokens.pop(0)
        right = parse_factor(tokens)
        result = f"({result} {op} {right})"
    return result

def parse_factor(tokens):
    """Parse factor"""
    if tokens and tokens[0] == '(':
        tokens.pop(0)  # Remove '('
        result = parse_expression(tokens)  # Mutual recursion!
        tokens.pop(0)  # Remove ')'
        return result
    return tokens.pop(0)

# Example: 2 + 3 * 4
tokens = ['2', '+', '3', '*', '4']
print(parse_expression(tokens.copy()))
# Output: (2 + (3 * 4))
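
The mutual recursion earns its keep once parentheses nest, because parse_factor re-enters parse_expression for the bracketed subexpression:

python

tokens = ['(', '2', '+', '3', ')', '*', '4']
print(parse_expression(tokens))
# Output: ((2 + 3) * 4)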

6️⃣ Nested Recursion


Definition: A recursive call uses another recursive call as its argument.

Characteristics:

Most complex type of recursion


Recursive call inside a recursive call
Can grow extremely fast
Famous example: Ackermann function
Example 1: Ackermann Function

python
def ackermann(m, n):
    """
    Ackermann function - grows extremely fast!
    One of the simplest examples of a function that is
    computable but not primitive recursive.
    """
    if m == 0:
        return n + 1
    elif n == 0:
        return ackermann(m - 1, 1)
    else:
        # Nested recursion: recursive call as argument
        return ackermann(m - 1, ackermann(m, n - 1))
        #                       ^^^^^^^^^^^^^^^^^^^
        #                       Inner call evaluated first!

# Small inputs
print(ackermann(0, 4))  # Output: 5
print(ackermann(1, 2))  # Output: 4
print(ackermann(2, 2))  # Output: 7
print(ackermann(3, 2))  # Output: 29
# print(ackermann(4, 2))  # Takes FOREVER! Don't try this! 🔥

# Trace ackermann(1, 2):
# ackermann(1, 2)
# = ackermann(0, ackermann(1, 1))
# = ackermann(0, ackermann(0, ackermann(1, 0)))
# = ackermann(0, ackermann(0, ackermann(0, 1)))
# = ackermann(0, ackermann(0, 2))
# = ackermann(0, 3)
# = 4

# Growth rate:
# A(0, n) = n + 1
# A(1, n) = n + 2
# A(2, n) = 2n + 3
# A(3, n) = 2^(n+3) - 3
# A(4, n) = 2^2^2^... (n+3 times) - 3 ← Astronomically large!
# A(4, 2) = 2^65536 - 3 (19,729 digits!)

Example 2: McCarthy 91 Function

python

def mccarthy_91(n):
    """
    McCarthy's 91 function - another classic nested recursion.
    Evaluates to 91 for every n <= 100, and to n - 10 for n > 100.
    """
    if n > 100:
        return n - 10  # Base case

    # Nested recursive call: the inner result becomes the outer argument
    return mccarthy_91(mccarthy_91(n + 11))

print(mccarthy_91(99))   # Output: 91
print(mccarthy_91(200))  # Output: 190

# Trace mccarthy_91(99):
# mccarthy_91(99)
# = mccarthy_91(mccarthy_91(110))
# = mccarthy_91(100)
# = mccarthy_91(mccarthy_91(111))
# = mccarthy_91(101)
# = 91

🎯 Recursion vs Iteration
Many recursive problems can be solved iteratively (with loops). Let's compare:

Example 1: Factorial
Recursive:

python
def factorial_recursive(n):
    """Recursive factorial"""
    if n <= 1:
        return 1
    return n * factorial_recursive(n - 1)

# Pros: Clean, elegant, mathematical
# Cons: Stack overhead, potential stack overflow

Iterative:

python

def factorial_iterative(n):
    """Iterative factorial"""
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

# Pros: Faster, no stack overhead, no stack overflow risk
# Cons: Less elegant for some problems

Example 2: Fibonacci
Recursive (Naive):

python

def fib_recursive(n):
    """Naive recursive Fibonacci - SLOW!"""
    if n <= 1:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

# Time complexity: O(2^n) - exponential!
# fib(40) takes several seconds

Iterative:

python

def fib_iterative(n):
    """Iterative Fibonacci - FAST!"""
    if n <= 1:
        return n

    a, b = 0, 1
    for _ in range(2, n + 1):
        a, b = b, a + b
    return b

# Time complexity: O(n) - linear!
# fib(40) is instant

Recursive with Memoization:

python

def fib_memo(n, memo={}):
    """Recursive Fibonacci with memoization"""
    if n in memo:
        return memo[n]
    if n <= 1:
        return n

    memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

# Time complexity: O(n) - as fast as iterative!
# Combines elegance of recursion with speed of iteration
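
Note that `memo={}` is a mutable default argument, which Python evaluates only once, so the cache deliberately persists across calls. The standard library packages the same idea as a decorator, functools.lru_cache:

python

from functools import lru_cache

@lru_cache(maxsize=None)  # cache the result for every distinct argument
def fib_cached(n):
    if n <= 1:
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)

print(fib_cached(100))  # 354224848179261915075, computed instantly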
Comparison Table

| Feature             | Recursion                           | Iteration                      |
|---------------------|-------------------------------------|--------------------------------|
| Readability         | Often clearer, more elegant         | Can be verbose                 |
| Memory              | Uses call stack                     | Minimal (no stack growth)      |
| Speed               | Slower (function call overhead)     | Faster                         |
| Stack overflow risk | Yes (deep recursion)                | No                             |
| Complexity          | Can be simpler for complex problems | Can be complex                 |
| Debugging           | Harder to trace                     | Easier to trace                |
| Best for            | Trees, graphs, divide-and-conquer   | Simple loops, large iterations |

🌟 Real-World Recursion Examples


Example 1: File System Traversal

python
import os

def list_all_files(directory, indent=0):
    """
    Recursively list all files and directories
    Real-world use: file managers, search tools
    """
    try:
        items = os.listdir(directory)

        for item in items:
            path = os.path.join(directory, item)

            # Print with indentation to show hierarchy
            print("  " * indent + "├── " + item)

            # If it's a directory, recurse into it
            if os.path.isdir(path):
                list_all_files(path, indent + 1)

    except PermissionError:
        print("  " * indent + "└── [Permission Denied]")

# Example usage:
# list_all_files("/home/user/documents")

# Output (indentation shows nesting):
# ├── work
#   ├── report.pdf
#   ├── data
#     ├── stats.csv
#     ├── analysis.xlsx
# ├── personal
#   ├── photo.jpg
#   ├── video.mp4

How It Works:

1. Base case: PermissionError or empty directory
2. For each item in directory:
   - If it's a file: print it
   - If it's a directory: recurse into it
3. Indentation shows depth in hierarchy
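
For production code, Python's standard library already wraps this recursion: os.walk yields one (directory, subdirectories, files) tuple per directory, so the same listing becomes a simple loop:

python

import os

for dirpath, dirnames, filenames in os.walk("."):
    depth = dirpath.count(os.sep)  # crude depth estimate for indentation
    for name in filenames:
        print("  " * depth + "├── " + name)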

Example 2: Binary Search

python
def binary_search(arr, target, left, right):
    """
    Search for target in sorted array
    Real-world use: databases, search engines
    Time complexity: O(log n)
    """
    # Base case: not found
    if left > right:
        return -1

    mid = (left + right) // 2

    # Base case: found
    if arr[mid] == target:
        return mid

    # Recursive cases: search left or right half
    if arr[mid] > target:
        return binary_search(arr, target, left, mid - 1)
    else:
        return binary_search(arr, target, mid + 1, right)

# Example
arr = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
target = 13

result = binary_search(arr, target, 0, len(arr) - 1)
print(f"Found {target} at index {result}")  # Output: Found 13 at index 6

# Trace for finding 13:
# Search with left=0, right=9
#   mid=4, arr[4]=9,  9 < 13,  search right half
# Search with left=5, right=9
#   mid=7, arr[7]=15, 15 > 13, search left half
# Search with left=5, right=6
#   mid=5, arr[5]=11, 11 < 13, search right half
# Search with left=6, right=6
#   mid=6, arr[6]=13, FOUND!
Example 3: Quick Sort

python
def quicksort(arr):
    """
    Sort array using quicksort algorithm
    Real-world use: most sorting libraries use variations of quicksort
    Average time complexity: O(n log n)
    """
    # Base case: empty or single element
    if len(arr) <= 1:
        return arr

    # Choose pivot (middle element)
    pivot = arr[len(arr) // 2]

    # Partition into three parts
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]

    # Recursively sort left and right, combine
    return quicksort(left) + middle + quicksort(right)

# Example
unsorted = [3, 6, 8, 10, 1, 2, 1, 7, 4, 5]
sorted_arr = quicksort(unsorted)
print(f"Sorted: {sorted_arr}")
# Output: Sorted: [1, 1, 2, 3, 4, 5, 6, 7, 8, 10]

# Recursion trace (pivot = middle element) for [3, 6, 8, 10, 1, 2, 1]:
#
# quicksort([3,6,8,10,1,2,1])  pivot=10 → left=[3,6,8,1,2,1], middle=[10], right=[]
#   quicksort([3,6,8,1,2,1])   pivot=1  → left=[], middle=[1,1], right=[3,6,8,2]
#     quicksort([3,6,8,2])     pivot=8  → left=[3,6,2], middle=[8], right=[]
#       quicksort([3,6,2])     pivot=6  → left=[3,2], middle=[6], right=[]
#         quicksort([3,2])     pivot=2  → left=[], middle=[2], right=[3]
#
# Result: [1,1,2,3,6,8,10]

Example 4: Tower of Hanoi

python
def tower_of_hanoi(n, source, destination, auxiliary):
    """
    Solve Tower of Hanoi puzzle
    Real-world use: understanding recursion, algorithm analysis

    Rules:
    1. Only one disk can be moved at a time
    2. A disk can only be placed on top of a larger disk
    3. All disks start on source rod

    Goal: Move all disks from source to destination
    """
    # Base case: only one disk
    if n == 1:
        print(f"Move disk 1 from {source} to {destination}")
        return

    # Recursive case:
    # Step 1: Move n-1 disks from source to auxiliary (using destination)
    tower_of_hanoi(n - 1, source, auxiliary, destination)

    # Step 2: Move the largest disk from source to destination
    print(f"Move disk {n} from {source} to {destination}")

    # Step 3: Move n-1 disks from auxiliary to destination (using source)
    tower_of_hanoi(n - 1, auxiliary, destination, source)

# Solve for 3 disks
print("Solution for Tower of Hanoi with 3 disks:")
tower_of_hanoi(3, 'A', 'C', 'B')

# Output:
# Move disk 1 from A to C
# Move disk 2 from A to B
# Move disk 1 from C to B
# Move disk 3 from A to C
# Move disk 1 from B to A
# Move disk 2 from B to C
# Move disk 1 from A to C

# Number of moves required: 2^n - 1
# For 3 disks: 2^3 - 1 = 7 moves
# For 64 disks: 2^64 - 1 = 18,446,744,073,709,551,615 moves!
# (Would take 585 billion years at 1 move per second!)

Example 5: Generate All Permutations

python
def permutations(arr):
    """
    Generate all permutations of an array
    Real-world use: testing, combinatorics, scheduling
    """
    # Base case: empty or single element
    if len(arr) <= 1:
        return [arr]

    result = []

    # For each element, make it the first element
    for i in range(len(arr)):
        current = arr[i]
        remaining = arr[:i] + arr[i+1:]

        # Generate permutations of remaining elements
        for perm in permutations(remaining):
            result.append([current] + perm)

    return result

# Example
print("Permutations of [1, 2, 3]:")
perms = permutations([1, 2, 3])
for perm in perms:
    print(perm)

# Output:
# [1, 2, 3]
# [1, 3, 2]
# [2, 1, 3]
# [2, 3, 1]
# [3, 1, 2]
# [3, 2, 1]

# Total permutations: n! (factorial)
# For 3 elements: 3! = 6
# For 5 elements: 5! = 120
# For 10 elements: 10! = 3,628,800
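
As with file traversal, the standard library has a ready-made version: itertools.permutations yields the same orderings lazily, which avoids materializing all n! lists at once:

python

from itertools import permutations as iter_permutations

for perm in iter_permutations([1, 2, 3]):
    print(list(perm))  # same six orderings as the recursive version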

Example 6: N-Queens Problem

python
def solve_n_queens(n):
    """
    Place n queens on n×n chessboard so no two queens attack each other
    Real-world use: constraint satisfaction, optimization
    """
    def is_safe(board, row, col):
        """Check if queen can be placed at board[row][col]"""
        # Check column
        for i in range(row):
            if board[i][col] == 1:
                return False

        # Check upper left diagonal
        i, j = row - 1, col - 1
        while i >= 0 and j >= 0:
            if board[i][j] == 1:
                return False
            i -= 1
            j -= 1

        # Check upper right diagonal
        i, j = row - 1, col + 1
        while i >= 0 and j < n:
            if board[i][j] == 1:
                return False
            i -= 1
            j += 1

        return True

    def solve(board, row):
        """Recursively place queens"""
        # Base case: all queens placed
        if row >= n:
            return True

        # Try placing queen in each column
        for col in range(n):
            if is_safe(board, row, col):
                # Place queen
                board[row][col] = 1
                # Recursively place rest of queens
                if solve(board, row + 1):
                    return True

                # Backtrack if it doesn't lead to a solution
                board[row][col] = 0

        return False

    # Initialize board
    board = [[0 for _ in range(n)] for _ in range(n)]

    if solve(board, 0):
        print(f"Solution for {n}-Queens:")
        for row in board:
            print(" ".join("Q" if cell == 1 else "." for cell in row))
    else:
        print(f"No solution exists for {n}-Queens")

# Solve 4-Queens
solve_n_queens(4)

# Output:
# Solution for 4-Queens:
# . Q . .
# . . . Q
# Q . . .
# . . Q .

Example 7: JSON Parser (Nested Structures)

python
def parse_json_value(data, indent=0):
    """
    Recursively parse and display JSON structure
    Real-world use: data parsing, API responses
    """
    prefix = "  " * indent

    if isinstance(data, dict):
        print(f"{prefix}{{")
        for key, value in data.items():
            print(f"{prefix}  {key}:", end=" ")
            parse_json_value(value, indent + 1)
        print(f"{prefix}}}")

    elif isinstance(data, list):
        print(f"{prefix}[")
        for item in data:
            parse_json_value(item, indent + 1)
        print(f"{prefix}]")

    else:
        print(f"{data}")

# Example
data = {
    "name": "Alice",
    "age": 30,
    "address": {
        "street": "123 Main St",
        "city": "Boston",
        "coordinates": {
            "lat": 42.3601,
            "lon": -71.0589
        }
    },
    "hobbies": ["reading", "hiking", "coding"],
    "friends": [
        {"name": "Bob", "age": 32},
        {"name": "Charlie", "age": 28}
    ]
}
parse_json_value(data)

# Output shows nested structure with proper indentation

⚠️ Common Recursion Pitfalls


1️⃣ Missing or Wrong Base Case

python

# ❌ WRONG - Infinite recursion
def countdown(n):
    print(n)
    countdown(n - 1)  # No base case!

# countdown(5)  # Stack overflow!

# ✅ CORRECT
def countdown_correct(n):
    if n <= 0:  # Base case
        print("Done!")
        return
    print(n)
    countdown_correct(n - 1)

countdown_correct(5)

2️⃣ Not Progressing Toward Base Case

python
# ❌ WRONG - Infinite recursion
def bad_sum(n):
    if n == 0:
        return 0
    return n + bad_sum(n)  # Should be n-1, not n!

# bad_sum(5)  # Infinite recursion!

# ✅ CORRECT
def good_sum(n):
    if n == 0:
        return 0
    return n + good_sum(n - 1)  # Progress toward base case

print(good_sum(5))  # Output: 15

3️⃣ Inefficient Recursion (Redundant Calculations)

python
# ❌ VERY SLOW - Exponential time
def fib_slow(n):
    if n <= 1:
        return n
    return fib_slow(n-1) + fib_slow(n-2)

# fib_slow(40)  # Takes many seconds!

# ✅ FAST - Linear time with memoization
def fib_fast(n, memo={}):
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fib_fast(n-1, memo) + fib_fast(n-2, memo)
    return memo[n]

# fib_fast(40)   # Instant!
# fib_fast(100)  # Still instant!

4️⃣ Stack Overflow with Large Inputs

python
# ❌ Stack overflow with large n
def sum_to_n(n):
    if n == 0:
        return 0
    return n + sum_to_n(n - 1)

# sum_to_n(10000)  # RecursionError!

# ✅ Use iteration for large inputs
def sum_to_n_iterative(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

print(sum_to_n_iterative(10000))  # Works fine!

# ✅ Or use mathematical formula
def sum_to_n_formula(n):
    return n * (n + 1) // 2

print(sum_to_n_formula(10000))  # Instant!
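
The depth at which the RecursionError fires is configurable via the sys module. A sketch; raising the limit trades safety for depth, since a deep enough recursion can still exhaust the real C stack:

python

import sys

print(sys.getrecursionlimit())  # typically 1000 by default
sys.setrecursionlimit(20000)    # raise with care
print(sum_to_n(15000))          # 112507500 - now fits under the new limit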

5️⃣ Modifying Input During Recursion

python
# ❌ WRONG - Modifies shared list
def process_list(arr):
    if not arr:
        return
    arr.pop(0)  # Modifies original list!
    process_list(arr)

my_list = [1, 2, 3, 4, 5]
process_list(my_list)
print(my_list)  # Output: [] (original list destroyed!)

# ✅ CORRECT - Use slicing or index
def process_list_correct(arr, index=0):
    if index >= len(arr):
        return
    print(arr[index])
    process_list_correct(arr, index + 1)

my_list = [1, 2, 3, 4, 5]
process_list_correct(my_list)
print(my_list)  # Output: [1, 2, 3, 4, 5] (original preserved!)

🎓 When to Use Recursion


✅ Good Use Cases for Recursion

1. Tree/Graph Traversal
   - Binary trees, N-ary trees
   - Graph depth-first search
   - File system traversal
2. Divide and Conquer Algorithms
   - Quick sort, merge sort
   - Binary search
   - Closest pair of points
3. Backtracking Problems
   - N-Queens problem
   - Sudoku solver
   - Maze solving
   - Generating permutations/combinations
4. Dynamic Programming (with Memoization)
   - Fibonacci numbers
   - Longest common subsequence
   - Knapsack problem
5. Nested/Hierarchical Structures
   - JSON/XML parsing
   - Expression evaluation
   - Recursive data structures

❌ Avoid Recursion When

1. Simple Loops Would Work
   - Counting, summing simple arrays
   - Simple iterations
2. Very Deep Recursion
   - Processing 10,000+ items
   - Risk of stack overflow
3. Performance Critical Code
   - Real-time systems
   - High-frequency operations
4. Redundant Calculations Without Memoization
   - Naive Fibonacci
   - Overlapping subproblems

💡 Recursion Best Practices


1️⃣ Always Define Clear Base Case(s)

python

def recursive_function(n):
    # Base case(s) first
    if n <= 0:
        return base_value
    if n == 1:
        return another_base_value

    # Recursive case
    return recursive_function(n - 1)

2️⃣ Ensure Progress Toward Base Case

python
# Ensure parameter moves toward base case
def factorial(n):
    if n <= 1:
        return 1
    return n * factorial(n - 1)  # n decreases each call

3️⃣ Use Memoization for Overlapping Subproblems

python

def expensive_recursive(n, memo={}):
    if n in memo:
        return memo[n]

    # ... recursive logic ...
    result = expensive_recursive(n - 1, memo)
    memo[n] = result
    return result

4️⃣ Consider Tail Recursion When Possible

python

# Tail recursive version
def sum_tail(n, acc=0):
    if n == 0:
        return acc
    return sum_tail(n - 1, acc + n)

5️⃣ Add Helper Functions

python
def public_function(data):
    """Public API with simple interface"""
    return _recursive_helper(data, 0, len(data) - 1)

def _recursive_helper(data, left, right):
    """Internal recursive implementation"""
    # ... recursion logic with extra parameters ...

6️⃣ Document Recursive Logic

python

def complex_recursion(n):
    """
    Description of what the function does.

    Base case: When n <= 0
    Recursive case: Breaks problem into subproblems by...

    Time complexity: O(...)
    Space complexity: O(...) due to call stack
    """
    pass

7️⃣ Test with Small Inputs First

python

# Test with base cases and small inputs
assert factorial(0) == 1
assert factorial(1) == 1
assert factorial(5) == 120

# Then test larger inputs
assert factorial(10) == 3628800
🧠 The Recursion Mindset
Think of Recursion Like Russian Nesting Dolls

🪆 Big Problem (Original)


└── 🪆 Smaller Problem
└── 🪆 Even Smaller
└── 🪆 Smallest (Base Case)

Each doll contains a smaller version of itself until you reach the smallest doll that can't be opened further—that's your base
case!

Three Key Questions

When approaching any recursive problem, ask:

1. What's the simplest case? (Base case)
   - When can I solve this directly?
   - What's the trivial solution?
2. How do I break down the problem? (Recursive case)
   - How can I make the problem smaller?
   - What's the relationship between problem and subproblem?
3. How do I combine results? (Return statement)
   - How do I use the solution to the smaller problem?
   - What operation combines sub-solutions?

Example: Sum of Array

Question 1: What's the simplest case?
Answer: Empty array → sum is 0

Question 2: How do I break it down?
Answer: Sum = first element + sum of rest

Question 3: How do I combine results?
Answer: Add current element to recursive result

def sum_array(arr):
    if not arr:  # Q1: Base case
        return 0
    return arr[0] + sum_array(arr[1:])  # Q2+Q3: Break down and combine

📊 Complexity Analysis
Time Complexity

Linear Recursion: O(n)

python

def factorial(n):  # Makes n recursive calls
    if n <= 1:
        return 1
    return n * factorial(n - 1)
# Time: O(n)

Binary Recursion: O(2^n) or O(n log n)

python
def fib(n):  # Makes ~2^n calls (without memoization)
    if n <= 1:
        return n
    return fib(n-1) + fib(n-2)
# Time: O(2^n)

def merge_sort(arr):  # Divides problem in half each time
    # ... merge sort logic ...
    # Time: O(n log n)

Space Complexity
Stack Space: O(depth of recursion)

python

def factorial(n):
    if n <= 1:
        return 1
    return n * factorial(n - 1)
# Space: O(n) for call stack

def binary_search(arr, target, left, right):
    # ... binary search logic ...
    # Space: O(log n) for call stack

🎯 Recursion Patterns Cheat Sheet


Pattern 1: Linear Recursion

python

def process(data, index=0):
    if index >= len(data):  # Base case
        return base_value
    return combine(data[index], process(data, index + 1))

Pattern 2: Tree Recursion

python

def process_tree(node):
    if node is None:  # Base case
        return base_value
    left_result = process_tree(node.left)
    right_result = process_tree(node.right)
    return combine(node.value, left_result, right_result)

Pattern 3: Divide and Conquer

python

def divide_conquer(data):
    if len(data) <= 1:  # Base case
        return data
    mid = len(data) // 2
    left = divide_conquer(data[:mid])
    right = divide_conquer(data[mid:])
    return merge(left, right)

Pattern 4: Backtracking

python
def backtrack(state, choices):
    if is_solution(state):  # Base case
        solutions.append(state)
        return

    for choice in choices:
        if is_valid(choice):
            make_choice(state, choice)
            backtrack(state, remaining_choices)
            undo_choice(state, choice)  # Backtrack
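
A concrete instance of this pattern, enumerating all subsets of a list (a small sketch; all_subsets is an illustrative name, not from the examples above):

python

def all_subsets(items):
    solutions = []

    def backtrack(index, current):
        if index == len(items):           # base case: decided on every item
            solutions.append(current[:])  # record a copy of this solution
            return
        backtrack(index + 1, current)     # choice 1: exclude items[index]
        current.append(items[index])      # choice 2: include items[index]
        backtrack(index + 1, current)
        current.pop()                     # undo the choice (backtrack)

    backtrack(0, [])
    return solutions

print(all_subsets([1, 2]))  # [[], [2], [1], [1, 2]]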

🎉 Conclusion
Recursion is a powerful tool that:

Simplifies complex problems
Makes code more elegant and maintainable
Mirrors the problem's natural structure

Remember:

1. Always have a base case
2. Progress toward the base case
3. Use memoization for overlapping subproblems
4. Consider iteration for simple cases
5. Draw recursion trees to visualize
6. Test with small inputs first

Master recursion by:

Practicing with simple problems
Understanding the call stack
Identifying patterns
Knowing when NOT to use it

"Recursion is the art of defining something in terms of itself, while cleverly avoiding infinite regress."

📚 Summary
Static vs Dynamic
| Concept | Static                           | Dynamic                     |
|---------|----------------------------------|-----------------------------|
| Typing  | Fixed types, compile-time        | Flexible types, runtime     |
| Memory  | Compile-time, stack              | Runtime, heap               |
| Binding | Compile-time method resolution   | Runtime method resolution   |
| Linking | Libraries copied into executable | Libraries linked at runtime |
Recursion Types
| Type        | Calls              | Best For                       |
|-------------|--------------------|--------------------------------|
| Linear      | One call           | Lists, simple sequences        |
| Binary/Tree | Two+ calls         | Trees, divide-and-conquer      |
| Tail        | Last operation     | Optimization opportunity       |
| Head        | First operation    | Reverse processing             |
| Indirect    | Mutual calls       | Complex relationships          |
| Nested      | Recursive argument | Complex mathematical functions |

When to Use What

Use Static Typing: type safety, large teams, long-lived projects
Use Dynamic Typing: rapid prototyping, scripts, flexibility

Use Static Allocation: fixed-size data, performance-critical code
Use Dynamic Allocation: runtime-sized data, large structures

Use Recursion: trees, graphs, naturally recursive structure
Use Iteration: simple loops, large datasets, performance

You now have a complete understanding of Static, Dynamic, and Recursion! 🎓🚀
