Module1 NIA Detail Notes
Fundamentals of Optimization
Optimization is the process of finding the best solution from all feasible solutions.
It involves selecting input values within given constraints to maximize or minimize
an objective function. Optimization is widely used in various fields such as
engineering, economics, computer science, and operations research.
Types of Optimization Problems
1. Linear Optimization: The objective function and constraints are linear.
o Example:
Maximize profit: Z = 3x + 5y
Subject to constraints: 2x + y ≤ 100, x + 2y ≤ 80, x, y ≥ 0
2. Nonlinear Optimization: The objective function or constraints involve
nonlinear relationships.
o Example:
Minimize cost: Z = x² + y²
Subject to: x + y ≥ 10, x, y ≥ 0
3. Integer Optimization: Decision variables take integer values.
o Example:
Minimize cost: Z = 4x + 7y
Subject to: x + y ≤ 20, x, y ∈ Z⁺ (x and y must be positive integers)
4. Combinatorial Optimization: Finding an optimal object from a finite set of
objects.
o Example: The travelling salesman problem — choosing the shortest tour through a set of cities.
5. Convex and Non-Convex Optimization: Convex problems have a single global optimum; non-convex problems may have several local optima.
o Example:
Convex: Z = x² + y²
Non-convex: Z = x⁴ − 3x² + y²
6. Dynamic Optimization: Optimization over time, where decision variables
change over different time periods.
Advantages of Optimization
Improves efficiency: Reduces waste of resources.
Cost-effectiveness: Helps in minimizing costs.
Better decision-making: Provides structured methods to solve complex
problems.
Scalability: Can be applied to small- and large-scale problems.
Automates processes: Reduces the need for manual intervention.
Disadvantages of Optimization
Complexity: Some optimization problems are computationally intensive.
Requires precise data: Inaccurate data can lead to suboptimal solutions.
Local vs. Global Optima: Some methods get stuck in local optima rather
than finding the best overall solution.
Computationally expensive: Large problems require high computational
power and time.
Difficult formulation: Defining constraints and objective functions
accurately can be challenging.
Conclusion
Optimization is a powerful tool for improving efficiency and effectiveness in
various fields. Understanding its principles, applications, and limitations is
essential for making informed decisions and solving real-world problems
effectively.
1. Linear Optimization (Linear Programming - LP)
Advantages:
Disadvantages:
2. Nonlinear Optimization (Nonlinear Programming - NLP)
Advantages:
Disadvantages:
Computationally expensive.
No general solution methods like LP (uses iterative methods like gradient descent).
3. Integer Optimization (Integer Programming - IP)
Definition: Optimization problems where some or all variables must be
integers.
Mathematical Formulation:
Maximize Z = 3X + 5Y
Subject to:
2X + 4Y ≤ 20
X, Y ∈ Z⁺ (non-negative integers)
Example:
A warehouse needs to decide how many trucks (X) and vans (Y) to buy:
Maximize Z = 400X + 300Y
Subject to:
3X + 2Y ≤ 30
X, Y are integers
Advantages:
Disadvantages:
4. Dynamic Programming
Advantages:
Disadvantages:
Computationally expensive.
Requires well-defined stages and states.
5. Stochastic Optimization
Definition: Optimization problems where data or constraints include
uncertainty (random variables).
Mathematical Example:
A retailer deciding how much stock to order, given uncertain demand:
Minimize E[C] = P(d ≤ Q)·C_s + P(d > Q)·C_h
where C_s is the shortage cost and C_h is the holding cost.
Example:
An airline overbooking flights while considering customer no-shows.
Advantages:
Disadvantages:
Disadvantages
❌ May require complex mathematical models.
❌ Computationally expensive for large problems.
❌ Assumptions (e.g., linearity) may not always hold.
Formulation of Optimization Problems
1. Introduction
Optimization is the process of finding the best possible solution under given
constraints. It involves selecting the optimal values for decision variables to
maximize or minimize an objective function while satisfying constraints.
Minimize (or Maximize): f(x1, x2, ..., xn)
Subject to constraints:
g_i(x1, x2, ..., xn) ≤ 0, i = 1, 2, ..., m
h_j(x1, x2, ..., xn) = 0, j = 1, 2, ..., p
where:
x1, x2, ..., xn are the decision variables, g_i are the inequality constraint functions, and h_j are the equality constraint functions.
Subject to Constraints:
1. Resource Constraint:
4X1 + 3X2 ≤ 240
2. Production Capacity:
2X1 + X2 ≤ 100
3. Non-Negativity Constraints:
X1, X2 ≥ 0
Solution: Solving this with the graphical or simplex method yields the optimal
production quantities for X1 and X2.
Subject to:
x1 + 2x2 ≥ 10
x1, x2 ≥ 0
Solution Approach: This can be solved using the Lagrange multiplier method or
numerical techniques such as gradient descent.
Maximize:
Z = 5X1 + 7X2
Subject to:
X1 + 2X2 ≤ 6
X1, X2 ≥ 0 and integer
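Because the feasible region here is tiny, the optimum can be checked by brute-force enumeration. A minimal Python sketch (an illustration, not part of the original notes):

best = None
for x1 in range(0, 7):          # X1 <= 6 because X1 + 2*X2 <= 6
    for x2 in range(0, 4):      # X2 <= 3 because 2*X2 <= 6
        if x1 + 2 * x2 <= 6:
            z = 5 * x1 + 7 * x2
            if best is None or z > best[0]:
                best = (z, x1, x2)
print(best)  # (30, 6, 0): Z = 30 at X1 = 6, X2 = 0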
5. Conclusion
Formulating an optimization problem involves defining decision variables, an
objective function, and constraints. Different types of optimization techniques
(linear, nonlinear, integer, etc.) are used depending on the problem's nature. While
optimization provides efficiency and cost-saving benefits, it also comes with
challenges like complexity and computational cost.
Examples:
1. Gradient Descent (GD)
o Used to minimize a function f ( x ) by iteratively moving in the direction
of the negative gradient.
o Update rule:
x_{k+1} = x_k − α ∇f(x_k)
o Example:
If f(x) = x² + 4x + 4, then ∇f(x) = 2x + 4.
Using gradient descent with step size α = 0.1:
x_{k+1} = x_k − 0.1(2x_k + 4)
2. Newton’s Method
o Uses second-order derivatives (Hessian matrix) to accelerate
convergence.
o Update rule:
x_{k+1} = x_k − H⁻¹ ∇f(x_k)
o Example:
If f(x) = x² + 4x + 4, then ∇f(x) = 2x + 4 and H = 2.
Newton's update:
x_{k+1} = x_k − (2x_k + 4) / 2
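A short Python sketch (illustrative, not from the notes) applying both update rules to f(x) = x² + 4x + 4, whose minimum is at x = −2:

def grad(x):
    return 2 * x + 4           # f'(x) for f(x) = x^2 + 4x + 4

hessian = 2.0                  # f''(x) is constant for this quadratic

x_gd = 0.0
alpha = 0.1                    # step size for gradient descent
for _ in range(50):
    x_gd = x_gd - alpha * grad(x_gd)             # x_{k+1} = x_k - alpha * grad f(x_k)

x_newton = 0.0
x_newton = x_newton - grad(x_newton) / hessian   # one Newton step is exact for a quadratic

print(round(x_gd, 4), x_newton)   # both approach the minimizer x = -2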
Advantages:
Efficient for smooth functions.
Faster convergence than derivative-free methods.
Newton’s method has quadratic convergence.
Disadvantages:
Requires derivative computation.
Sensitive to step size selection.
Newton’s method requires Hessian computation.
Examples:
1. Nelder-Mead Method (Simplex)
Disadvantages:
Summary Table

Category            | Methods                                 | Advantages                  | Disadvantages
Gradient-Based      | Gradient Descent, Newton's Method       | Fast convergence            | Requires derivatives
Derivative-Free     | Nelder-Mead, PSO, GA                    | Works for any function      | Slower convergence
Deterministic       | Newton's Method                         | Consistent results          | May get stuck in local minima
Stochastic          | Genetic Algorithm, Simulated Annealing  | Avoids local minima         | Computationally expensive
Constrained         | Lagrange Multipliers                    | Solves constrained problems | Complex formulation
Unconstrained       | Gradient Descent                        | Simpler to implement        | Limited applicability
Global Optimization | Genetic Algorithm, PSO                  | Finds the best solution     | High computational cost
Local Optimization  | Gradient Descent                        | Fast for smooth functions   | May get stuck in local minima
Travelling Salesman Problem (TSP)
1. Problem Statement
Given a list of cities and the distances between each pair of cities, find the shortest
possible route that visits each city exactly once and returns to the starting city.
2. Mathematical Formulation
TSP can be formulated as an optimization problem using graph theory.
Graph Representation
Represent the problem as a complete weighted graph G = (V, E), where:
o V is the set of n cities (nodes).
o E is the set of edges representing paths between cities.
o d(i, j) is the distance (or cost) between city i and city j.
Objective Function
Minimize the total travel cost:
Minimize ∑_{i=1}^{n} ∑_{j=1}^{n} d(i, j) · x_ij
where:
x_ij = 1 if the tour travels directly from city i to city j, and x_ij = 0 otherwise.
Constraints
1. Each city is visited exactly once:
∑_{j=1}^{n} x_ij = 1, ∀ i ∈ V (each city is left exactly once)
∑_{i=1}^{n} x_ij = 1, ∀ j ∈ V (each city is entered exactly once)
2. Subtour elimination (Miller–Tucker–Zemlin constraints):
u_i − u_j + n·x_ij ≤ n − 1, ∀ i, j ∈ V, i ≠ j
3. Example
Example with 4 Cities
Let’s consider 4 cities A, B, C, D with the following distance matrix (row = from, column = to):

      A    B    C    D
A     0   10   15   20
B    10    0   35   25
C    15   35    0   30
D    20   25   30    0
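Since only 3! = 6 distinct tours start and end at A, this instance can be checked by brute force. A small Python sketch (illustrative, not part of the notes):

from itertools import permutations

dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]

# Fix city A (index 0) as the start to avoid counting rotations of the same tour.
best_cost, best_tour = min(
    (sum(dist[a][b] for a, b in zip((0,) + p, p + (0,))), (0,) + p + (0,))
    for p in permutations(range(1, 4))
)
print(best_cost, best_tour)  # 80, e.g. the tour A -> B -> D -> C -> A

The shortest route for this matrix has length 10 + 25 + 30 + 15 = 80.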
4. Solution Approaches
Since TSP is NP-hard, exact solutions are computationally expensive for large n .
Various approaches exist:
Exact Algorithms
1. Brute Force (Exponential Time Complexity)
o Enumerates every possible tour and keeps the shortest; feasible only for very small n.
2. Integer Linear Programming (ILP)
o Solves the TSP as an ILP problem using solvers like CPLEX or Gurobi.
Disadvantages
❌ Computationally Expensive: Exact algorithms do not scale to large numbers of cities.
❌ Sensitive to Input Data: A small change in distance values can significantly alter the optimal solution.
For small instances, exact algorithms (Brute force, Dynamic Programming) work
well, but for larger instances, heuristics (Nearest Neighbor, Christofides) and
metaheuristics (Genetic Algorithm, Simulated Annealing) are preferred.
Knapsack Problem
Introduction
The Knapsack Problem is a classic combinatorial optimization problem. It arises in
decision-making scenarios where we must determine the best way to pack items
into a knapsack to maximize value without exceeding a weight limit. The problem
has applications in finance, resource allocation, cryptography, and more.
Objective:
Maximize the total value V while ensuring the total weight does not exceed W .
Mathematical Formulation:
Define x_i as: x_i = 1 if item i is placed in the knapsack, and x_i = 0 otherwise.
Maximize: ∑_{i=1}^{n} v_i x_i
Subject to:
∑_{i=1}^{n} w_i x_i ≤ W
x_i ∈ {0, 1}
Example:
Given:
Item   Weight (w_i)   Value (v_i)
1 2 10
2 3 20
3 4 30
4 5 40
Knapsack Capacity: W =7
Fractional Knapsack Problem
Mathematical Formulation:
Instead of x_i being 0 or 1, it can be any value between 0 and 1:
0 ≤ x_i ≤ 1
Solution Approach:
Use the Greedy Algorithm:
Example:
Using the same item set as before, the sorted value-to-weight ratios are:
Item 4 (40/5 = 8), Item 3 (30/4 = 7.5), Item 2 (20/3 ≈ 6.67), Item 1 (10/2 = 5)
Knapsack Capacity = 7
Greedy Selection:
1. Take Item 4 completely (w4 = 5, v4 = 40).
2. Take a fraction of Item 3 to fill the remaining space (2/4 = 0.5 of the item).
3. Item 3 contributes 0.5 × 30 = 15.
Total value = 40 + 15 = 55.
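A minimal Python sketch of the greedy procedure for this instance (illustrative; the variable names are not from the notes):

weights = [2, 3, 4, 5]
values = [10, 20, 30, 40]
capacity = 7

# Sort items by value-to-weight ratio, highest first.
items = sorted(zip(weights, values), key=lambda wv: wv[1] / wv[0], reverse=True)

total_value = 0.0
remaining = capacity
for w, v in items:
    take = min(w, remaining)            # whole item, or the fraction that still fits
    total_value += v * take / w
    remaining -= take
    if remaining == 0:
        break

print(total_value)  # 55.0 = all of Item 4 (40) + half of Item 3 (15)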
Solution Approaches
1. Brute Force (Exponential Time)
Generate all possible subsets and pick the best.
Time Complexity: O(2^n) (very inefficient for large n).
❌ Disadvantages
Exponential Time Complexity in Brute Force: Inefficient for large
problems.
Greedy Algorithm Doesn't Work for 0/1 Knapsack: It may give suboptimal
results.
DP Can Be Inefficient for Large Capacity W: Requires O(nW) space.
Applications
1. Finance: Portfolio selection, investment decisions.
2. Cryptography: Public-key encryption schemes.
3. Resource Allocation: Cloud computing, storage optimization.
4. Logistics: Cargo loading, space management.
5. Manufacturing: Cutting stock problem, material selection.
Conclusion
The Knapsack Problem is a fundamental optimization problem with diverse
applications. The 0/1 Knapsack is best solved using Dynamic Programming,
while the Fractional Knapsack is efficiently handled by the Greedy Algorithm.
Understanding the advantages and limitations of each approach is crucial for
selecting the right algorithm in real-world scenarios.
Classical Optimization Techniques:
Mathematical Model of Optimization
1. Introduction to Optimization
Optimization refers to the process of finding the best possible solution from a set
of feasible solutions. It is widely used in engineering, economics, machine
learning, and operations research to maximize or minimize objective functions.
where:
1. Single-Variable Optimization
Solution:
2. Multi-Variable Optimization
Gradient: ∇f = (∂f/∂x1, ∂f/∂x2, ..., ∂f/∂xn) = 0
Solution: For f(x, y) = x² + y² − 4x − 6y:
∂f/∂x = 2x − 4 = 0 ⇒ x = 2
∂f/∂y = 2y − 6 = 0 ⇒ y = 3
Hessian: H = [[2, 0], [0, 2]], which is positive definite, so the stationary point (2, 3) is a minimum.
Steps:
Solution: Define:
L(x, y, λ) = x² + y² + λ(x + y − 1)
Solve:
∂L/∂x = 2x + λ = 0
∂L/∂y = 2y + λ = 0
∂L/∂λ = x + y − 1 = 0
The first two equations give x = y; substituting into the constraint gives x = y = 1/2 and λ = −1.
For inequality-constrained problems, the Karush-Kuhn-Tucker (KKT) conditions are:
Stationarity: ∇f + λ∇g = 0
Primal feasibility: g(x) ≤ 0
Dual feasibility: λ ≥ 0
Complementary slackness: λg(x) = 0
Disadvantages
1. Limited to Differentiable Functions: Requires the objective function to be
continuous and differentiable.
2. Not Suitable for Complex Problems: Struggles with large-scale, highly
nonlinear, or discrete problems.
3. Difficult Handling Constraints: Handling multiple constraints requires
specialized methods.
Conclusion
Classical optimization techniques provide powerful tools for solving optimization
problems analytically. However, they are often impractical for real-world problems
with complex constraints, requiring numerical optimization techniques like
Gradient Descent or Evolutionary Algorithms.
Simplex Method
Objective Function:
Maximize Z = c1x1 + c2x2 + ... + cnxn
Subject to Constraints:
a11x1 + a12x2 + ... + a1nxn ≤ b1
a21x1 + a22x2 + ... + a2nxn ≤ b2
...
am1x1 + am2x2 + ... + amnxn ≤ bm
x1, x2, ..., xn ≥ 0
Example Problem
Given LP Problem:
Maximize Z = 3x1 + 5x2
Basis x1 x2 s1 s2 RHS
x2 0.5 1 0.5 0 2
s2 2 0 -1 1 2
Z -0.5 0 2.5 0 10
Solution:
x1 = 0, x2 = 2, Z_max = 10
Conclusion
The Simplex Method is a powerful tool for solving linear programming problems.
By following systematic steps, one can determine the optimal solution efficiently.
However, for very large problems, alternative methods like the Interior-Point
Method might be preferred.
Revised Simplex Method
Mathematical Formulation
A general Linear Programming Problem (LPP) is given by:
Maximize Z = Cᵀ X
Subject to: A X = b, X ≥ 0
Where:
The standard Simplex Tableau stores all coefficients, but the Revised Simplex
Method works with:
where:
θ = min(X_B / d) (minimum ratio test)
Numerical Example
Problem Statement
Maximize:
Z=3 x1 +2 x 2
Subject to:
x1 + x2 ≤ 4
2x1 + x2 ≤ 6
x1, x2 ≥ 0
Adding slack variables s1 and s2 gives the standard form:
x1 + x2 + s1 = 4
2x1 + x2 + s2 = 6
x1, x2, s1, s2 ≥ 0
Objective function:
Z = 3x1 + 2x2 + 0s1 + 0s2
Basis matrix (columns of s1 and s2):
B = [[1, 0], [0, 1]]
Inverse of B:
B⁻¹ = [[1, 0], [0, 1]]
Initial basic solution:
X_B = B⁻¹ b = [[1, 0], [0, 1]] [4, 6]ᵀ = [4, 6]ᵀ
Step 3: Compute Reduced Costs
Non-basic columns (of x1 and x2):
N = [[1, 1], [2, 1]]
Cost vectors:
C_B = [0, 0], C_N = [3, 2]
C_N′ = C_N − C_B B⁻¹ N = [3, 2] − [0, 0] [[1, 1], [2, 1]] = [3, 2]
Since C_N′ contains positive values, the solution is not optimal.
Conclusion
The Revised Simplex Method is an optimized version of the standard simplex
method, improving efficiency for large linear programming problems. While it is
mathematically complex, its advantages in storage and computational speed
make it essential for solving high-dimensional LP problems.
Karmarkar's Method (Interior-Point Method)
It is polynomial-time and significantly faster than the Simplex method for large-scale LP problems.
Mathematical Formulation
A Linear Programming Problem (LPP) is typically formulated as:
Minimize or Maximize Z = cᵀx
Subject to: Ax ≤ b, x ≥ 0
where:
o The new transformation maps the current point inside the feasible
region while improving the objective function.
4. Move Toward the Optimal Solution
Subject to:
2x1 + 3x2 ≤ 6
x1 + 2x2 ≤ 3
x1, x2 ≥ 0
Adding slack variables x3 and x4:
2x1 + 3x2 + x3 = 6
x1 + 2x2 + x4 = 3
x1, x2, x3, x4 ≥ 0
x = [x1, x2, x3, x4]ᵀ
A = [[2, 3, 1, 0], [1, 2, 0, 1]]
D = diag(x)
Using an appropriate projective transformation, convert the system into the
interior of a unit simplex.
where:
o Moves inside the feasible region rather than along edges, reducing
computational burden.
4. Better Performance for Sparse Matrices
Conclusion
Karmarkar’s method is a powerful alternative to the Simplex Method,
especially for large-scale linear programming problems.
It uses interior-point techniques and projective transformations to reach
an optimal solution.
While faster in theory, it is more complex to implement and requires
significant computational power.
Linear Programming – Duality Theorem
1. Introduction to Duality in Linear Programming
Linear Programming (LP) is a mathematical technique used for optimization,
where we aim to maximize or minimize an objective function subject to a set of
constraints. Every LP problem has a corresponding dual problem, and the
relationship between the primal and dual problems is known as duality.
Primal Problem (standard form):
Maximize Z = c1x1 + c2x2 + ... + cnxn
where c_j are the objective coefficients, x_j the decision variables, a_ij the constraint coefficients, and b_i the resource limits,
subject to:
a11x1 + a12x2 + ... + a1nxn ≤ b1
a21x1 + a22x2 + ... + a2nxn ≤ b2
...
am1x1 + am2x2 + ... + amnxn ≤ bm
x1, x2, ..., xn ≥ 0
Dual Problem:
Minimize W = b1y1 + b2y2 + ... + bmym
subject to:
a11y1 + a21y2 + ... + am1ym ≥ c1
a12y1 + a22y2 + ... + am2ym ≥ c2
...
a1ny1 + a2ny2 + ... + amnym ≥ cn
y1, y2, ..., ym ≥ 0
Duality Relationships
Primal (Maximization)          | Dual (Minimization)
Maximize Z                     | Minimize W
n decision variables           | m dual variables
m constraints (≤)              | n constraints (≥)
Objective coefficients (c)     | Become right-hand sides of the dual constraints
Right-hand side values (b)     | Become coefficients of the dual objective
Constraints are ≤              | Constraints are ≥
Example (Primal Problem):
Maximize Z = 3x1 + 2x2
subject to:
2x1 + x2 ≤ 10
x1 + 3x2 ≤ 15
x1, x2 ≥ 0
Step 1: Form the Dual Problem
Minimize W = 10y1 + 15y2
subject to:
2y1 + y2 ≥ 3
y1 + 3y2 ≥ 2
y1, y2 ≥ 0
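Strong duality can be checked numerically. A small sketch using scipy.optimize.linprog (the solver choice is an assumption, not from the notes; linprog minimizes, so the maximization objective is negated and ≥ constraints are multiplied by −1):

from scipy.optimize import linprog

# Primal: maximize Z = 3x1 + 2x2  s.t.  2x1 + x2 <= 10,  x1 + 3x2 <= 15,  x >= 0
primal = linprog(c=[-3, -2], A_ub=[[2, 1], [1, 3]], b_ub=[10, 15], bounds=[(0, None)] * 2)

# Dual: minimize W = 10y1 + 15y2  s.t.  2y1 + y2 >= 3,  y1 + 3y2 >= 2,  y >= 0
dual = linprog(c=[10, 15], A_ub=[[-2, -1], [-1, -3]], b_ub=[-3, -2], bounds=[(0, None)] * 2)

print(-primal.fun, dual.fun)   # both optimal values coincide (17.0), as duality predicts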
Disadvantages
1. Complexity for Large Problems: Can be difficult for large LP problems.
2. Dual Form Transformation: Writing the dual correctly requires careful
attention to constraints and coefficients.
3. Interpretation Challenges: Understanding the economic meaning of dual
variables is not always straightforward.
6. Conclusion
The duality theorem in linear programming is a powerful tool that allows
solving problems efficiently by analyzing their duals. Understanding duality
provides deeper insights into optimization problems and helps in economic and
sensitivity analysis.
Linear Programming – Decomposition
Principle
Introduction
In large-scale linear programming (LP) problems, the Decomposition Principle is
a technique used to break down a complex LP problem into smaller, more
manageable subproblems. This method is particularly useful when dealing with
problems that have a block structure with linking constraints.
where:
Subject to: A_0 x_0 + ∑_{i=1}^{m} A_i x_i = b
x_0 ≥ 0, x_i ≥ 0, i = 1, 2, ..., m
Here:
Subject to: A_i x_i = b_i − A_0 x_0
x_i ≥ 0
Each subproblem is independent but interacts with the master problem through
the linking variables x 0.
Step-by-Step Solution Using the Decomposition Principle
Example Problem
Given Problem
Maximize:
Z = 3x1 + 5x2 + 2y1 + 4y2
Subject to:
x1 + x2 ≤ 10
y1 + y2 ≤ 8
x1 + y1 ≤ 6
x2 + y2 ≤ 7
x1, x2, y1, y2 ≥ 0
Master Problem (in the x-variables):
Subject to x1 + x2 ≤ 10
x1 + y1 ≤ 6
x2 + y2 ≤ 7
Subproblem (in the y-variables, for fixed x1, x2):
Subject to y1 + y2 ≤ 8
y1 ≤ 6 − x1
y2 ≤ 7 − x2
Solving this iteratively with the Simplex Method, we obtain values for y 1 , y 2 that
are fed back into the master problem.
Thus, an optimal solution is x1 = 4, x2 = 6, y1 = 2, y2 = 1, with optimal objective
value Z = 3(4) + 5(6) + 2(2) + 4(1) = 50. (The constraint x2 + y2 ≤ 7 caps y2 at 1 once
x2 = 6; the optimum is not unique — for example x1 = 3, x2 = 7, y1 = 3, y2 = 0 also gives Z = 50.)
Tabular Representation
Iteration   x1   x2   y1   y2   Z
Initial     0    0    0    0    0
Optimal     4    6    2    1    50
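As a check, the full problem (before decomposition) can be solved directly with an LP solver. A sketch using scipy.optimize.linprog (the solver choice is an assumption, not from the notes):

from scipy.optimize import linprog

# Maximize Z = 3x1 + 5x2 + 2y1 + 4y2 (negated, since linprog minimizes)
c = [-3, -5, -2, -4]
A_ub = [[1, 1, 0, 0],    # x1 + x2 <= 10
        [0, 0, 1, 1],    # y1 + y2 <= 8
        [1, 0, 1, 0],    # x1 + y1 <= 6
        [0, 1, 0, 1]]    # x2 + y2 <= 7
b_ub = [10, 8, 6, 7]

res = linprog(c=c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print(-res.fun)   # 50.0 -- the same optimal value as the decomposition approach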
Conclusion
The Decomposition Principle in Linear Programming is a powerful method for
solving large-scale optimization problems efficiently. It divides a large LP into
manageable subproblems while maintaining coordination through a master
problem. Though it has some computational challenges, it remains widely used in
operations research, supply chain optimization, and multi-stage decision-making
problems.
Transportation Problem
Mathematical Formulation
Let x_ij be the quantity shipped from source i to destination j, c_ij the unit shipping cost, s_i the supply at source i, and d_j the demand at destination j.
Objective Function:
Minimize the total transportation cost:
Z = ∑_{i=1}^{m} ∑_{j=1}^{n} c_ij x_ij
where x_ij ≥ 0.
Constraints:
1. Supply Constraints (total sent from each source equals its supply):
∑_{j=1}^{n} x_ij = s_i, ∀ i = 1, 2, …, m
2. Demand Constraints (total received at each destination equals its demand):
∑_{i=1}^{m} x_ij = d_j, ∀ j = 1, 2, …, n
3. Non-Negativity Constraint:
x_ij ≥ 0, ∀ i, j
After finding the Initial Feasible Solution, use MODI (Modified Distribution
Method) or Stepping-Stone Method for optimization.
         D1   D2   D3   Supply
S1       19   30   50   7
S2       70   30   40   10
S3       40    8   70   18
Demand    5    8   22
Total supply = 7 + 10 + 18 = 35
Total demand = 5 + 8 + 22 = 35
(Since supply = demand, it is a balanced problem.)
Find the difference between the smallest and second-smallest cost in each row and
column.
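After an initial feasible solution is obtained, the optimal shipping plan for this instance can also be verified by solving it directly as an LP. A sketch using scipy.optimize.linprog (the solver is an assumption, not part of the notes):

import numpy as np
from scipy.optimize import linprog

cost = np.array([[19, 30, 50],
                 [70, 30, 40],
                 [40, 8, 70]], dtype=float)
supply = [7, 10, 18]
demand = [5, 8, 22]

m, n = cost.shape
A_eq, b_eq = [], []
for i in range(m):                     # row sums: each source ships its full supply
    row = np.zeros(m * n)
    row[i * n:(i + 1) * n] = 1
    A_eq.append(row)
    b_eq.append(supply[i])
for j in range(n):                     # column sums: each destination gets its demand
    col = np.zeros(m * n)
    col[j::n] = 1
    A_eq.append(col)
    b_eq.append(demand[j])

res = linprog(c=cost.flatten(), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (m * n))
print(res.x.reshape(m, n))
print(res.fun)   # minimum total cost for this instance: 1359.0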
Disadvantages
❌ Complex Calculations – Large problems require extensive computations.
❌ Assumes Linear Costs – Real-world costs may not be strictly linear.
❌ Fixed Supply and Demand – Assumes static supply and demand, which may
change in real scenarios.
❌ Ignores Other Factors – Does not consider real-life constraints like traffic, labor,
etc.
6. Conclusion
The Transportation Problem is a special case of Linear Programming used in
logistics and supply chain management to minimize costs while meeting supply
and demand constraints. Vogel’s Approximation Method (VAM) provides a near-
optimal initial solution, which can be further optimized using MODI or Stepping-
Stone Methods.
Quadratic Programming
Mathematical Formulation
A standard Quadratic Programming problem is formulated as follows:
Objective Function:
min f(x) = (1/2) xᵀQx + cᵀx
where:
x is an n-dimensional decision variable vector,
Q is an n × n symmetric matrix,
c is an n -dimensional vector.
Constraints:
Ax ≤ b, x ≥ 0
where:
A is an m ×n constraint matrix,
b is an m -dimensional vector.
Example Problem
Consider the quadratic programming problem:
Minimize:
f(x1, x2) = (1/2) [x1 x2] [[2, 1], [1, 2]] [x1, x2]ᵀ + [−4, −6] [x1, x2]ᵀ
1
Subject to:
x1 + x2 ≤ 2
x1, x2 ≥ 0
Step-by-Step Solution
Step 1: Convert Quadratic Function
Expanding the quadratic term:
f(x1, x2) = (1/2)(2x1² + 2x2² + 2x1x2) − 4x1 − 6x2 = x1² + x2² + x1x2 − 4x1 − 6x2
Gradient:
∇f = [2x1 + x2 − 4, x1 + 2x2 − 6]ᵀ
Hessian matrix (second derivatives):
H = [[2, 1], [1, 2]]
The Hessian matrix is positive definite, ensuring convexity and guaranteeing
a unique global minimum.
Thus, the optimal solution is x1 = 1, x2 = 1 with objective function value f(1, 1) = 1 + 1 + 1 − 4 − 6 = −7.
Summary Table
Variable Value
x1 1
x2 1
λ 1
Objective Function -7
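The same QP can be checked numerically with a general constrained solver. A sketch using scipy.optimize.minimize with the SLSQP method (the solver choice is an assumption; a dedicated QP solver would also work):

import numpy as np
from scipy.optimize import minimize

Q = np.array([[2.0, 1.0], [1.0, 2.0]])
c = np.array([-4.0, -6.0])

def f(x):
    return 0.5 * x @ Q @ x + c @ x     # (1/2) x^T Q x + c^T x

constraints = [{"type": "ineq", "fun": lambda x: 2.0 - x[0] - x[1]}]  # x1 + x2 <= 2
bounds = [(0, None), (0, None)]                                       # x1, x2 >= 0

res = minimize(f, x0=np.zeros(2), method="SLSQP", bounds=bounds, constraints=constraints)
print(res.x, res.fun)   # approximately [1, 1] and -7, matching the table above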
Disadvantages:
1. Complexity: Larger problems require more computational effort compared
to linear programming.
2. Non-Convexity: If the Hessian is not positive definite, finding a global
minimum becomes difficult.
3. Constraint Handling: Some quadratic problems with nonlinear constraints
may require specialized solvers.
Conclusion
Quadratic programming is a powerful optimization tool for solving real-world
problems with quadratic objective functions and linear constraints. While convex
QP problems are efficiently solvable, non-convex ones can be challenging and may
require advanced techniques.
Dynamic Programming
Example 1: Fibonacci Numbers
Compute the n-th Fibonacci number, defined by the recurrence:
F(n) = F(n−1) + F(n−2), with F(0) = 0, F(1) = 1
Solution
1. Top-Down (Memoization) Approach
def fibonacci(n, memo={}):
    # memo caches already-computed values (top-down memoization)
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fibonacci(n-1, memo) + fibonacci(n-2, memo)
    return memo[n]

print(fibonacci(6))  # Output: 8
Steps:
dp[i] stores F(i).
dp[i] = dp[i−1] + dp[i−2].

i     : 0  1  2  3  4  5  6
dp[i] : 0  1  1  2  3  5  8
def fibonacci_tabulation(n):
    if n <= 1:
        return n
    dp = [0] * (n+1)
    dp[1] = 1
    for i in range(2, n+1):
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]

print(fibonacci_tabulation(6))  # Output: 8
Example 2: 0/1 Knapsack Problem
Let:
dp[i][w] = the maximum value achievable using the first i items with capacity w.
Recurrence relation:
dp[i][w] = dp[i−1][w], if wt[i] > w (exclude the item)
dp[i][w] = max(dp[i−1][w], val[i] + dp[i−1][w − wt[i]]), otherwise (take the better of excluding or including the item)
Solution (Tabulation Approach)
Input:
weights = [1, 3, 4, 5], values = [1, 4, 5, 7], W = 7
DP Table (rows = items considered, columns = capacity w):
Items↓ \ w→   0  1  2  3  4  5  6  7
0             0  0  0  0  0  0  0  0
1             0  1  1  1  1  1  1  1
2             0  1  1  4  5  5  5  5
3             0  1  1  4  5  6  6  9
4             0  1  1  4  5  7  8  9
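The snippet below calls a knapsack function that is not defined in the notes. A minimal tabulation-based sketch consistent with the recurrence above (the function name and signature are chosen to match the call below):

def knapsack(weights, values, W, n):
    # dp[i][w] = best value using the first i items with capacity w
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            if weights[i - 1] > w:                 # item i does not fit: exclude it
                dp[i][w] = dp[i - 1][w]
            else:                                  # take the better of exclude / include
                dp[i][w] = max(dp[i - 1][w],
                               values[i - 1] + dp[i - 1][w - weights[i - 1]])
    return dp[n][W]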
weights = [1, 3, 4, 5]
values = [1, 4, 5, 7]
W = 7
n = len(weights)
print(knapsack(weights, values, W, n)) # Output: 9
Disadvantages:
❌ Consumes extra memory: Requires a table to store values.
❌ Not useful for problems without overlapping subproblems: DP is not
necessary when problems can be solved independently.
❌ Implementation complexity: Requires careful table construction and recursive
thinking.
Conclusion
Dynamic Programming is a powerful technique for solving optimization problems
by breaking them down into subproblems, solving each subproblem only once,
and storing results. It is widely used in real-world applications like shortest path
problems, sequence alignment in bioinformatics, and financial portfolio
optimization.
Geometric Programming
1. Standard Form of a Geometric Program
Minimize f0(x) = ∑_{k=1}^{K} c_k x1^{a1k} x2^{a2k} ⋯ xn^{ank}
Subject to constraints:
∑_{j=1}^{J} c_ij x1^{a1ij} x2^{a2ij} ⋯ xn^{anij} ≤ 1, i = 1, 2, …, m
x_j > 0, j = 1, 2, …, n
Where:
Definitions
Monomial: A function of the form
f(x) = c · x1^{a1} x2^{a2} ⋯ xn^{an}, with c > 0.
Posynomial: A sum of monomials with positive coefficients:
f(x) = ∑_k c_k x1^{a1k} x2^{a2k} ⋯ xn^{ank}
2. Example of Geometric Programming
Example 1: Minimizing Cost of a Beam Design
Problem Statement
Minimize the cost of manufacturing a beam with width x 1 and height x 2, subject to
constraints on bending stress and deflection.
min C(x1, x2) = 2x1 + 3x2
Subject to:
1 / (x1 x2²) ≤ 5
x1 / x2 ≤ 2
x1, x2 > 0
Solution Steps
Step 1: Convert Constraints into Standard GP Form
1. The first constraint
1 / (x1 x2²) ≤ 5
can be rewritten as
x1 x2² ≥ 1/5
2. The second constraint remains
x1 / x2 ≤ 2
Step 2: Logarithmic Transformation
Substituting y1 = ln(x1) and y2 = ln(x2), the first constraint
ln(x1 x2²) ≥ ln(1/5)
becomes:
y1 + 2y2 ≥ −ln(5)
and the second constraint becomes:
y1 − y2 ≤ ln(2)
The objective function becomes:
min (2e^{y1} + 3e^{y2})
Now, we can solve the problem using convex optimization techniques (such as
interior-point methods).
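In log-space the problem is convex, so a general nonlinear solver can handle it. A sketch using scipy.optimize.minimize (the solver and starting point are assumptions, not from the notes):

import numpy as np
from scipy.optimize import minimize

# Variables y1 = ln(x1), y2 = ln(x2); objective 2*x1 + 3*x2 = 2*e^{y1} + 3*e^{y2}
def objective(y):
    return 2 * np.exp(y[0]) + 3 * np.exp(y[1])

constraints = [
    {"type": "ineq", "fun": lambda y: y[0] + 2 * y[1] + np.log(5)},  # y1 + 2*y2 >= -ln(5)
    {"type": "ineq", "fun": lambda y: np.log(2) - (y[0] - y[1])},    # y1 - y2 <= ln(2)
]

res = minimize(objective, x0=np.zeros(2), method="SLSQP", constraints=constraints)
x1, x2 = np.exp(res.x)
print(x1, x2, res.fun)   # roughly x1 ≈ 0.48, x2 ≈ 0.64, minimum cost ≈ 2.9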
Disadvantages
1. Limited to Posynomials
o Only applies when the objective function and constraints are
posynomials.
2. Requires Logarithmic Transformation
o Needs transformation into convex form, which may not always be
straightforward.
3. Numerical Stability Issues
o Some problems may suffer from round-off errors when converted to
log-space.
Conclusion
Geometric programming is a powerful optimization tool for specific types of
nonlinear problems. While it has limitations, it is widely used in engineering,
economics, and operations research. Understanding the log-space
transformation and convex optimization techniques is crucial for solving GP
problems effectively.
Stochastic Programming
Introduction
Stochastic Programming (SP) is a mathematical framework for modeling
optimization problems that involve uncertainty. Unlike deterministic models,
where all parameters are known with certainty, stochastic programming
incorporates random variables to represent uncertainty in constraints, objective
functions, or both. The goal is to find an optimal decision considering these
uncertainties.
min_{x ∈ X} E[f(x, ξ)]
where:
3. Chance-Constrained Programming
This approach ensures that constraints hold with a certain probability 1 −α :
P(Ax ≤ b) ≥ 1 − α, where α is a small probability level.
Given Data: production cost = 10 per unit, holding cost = 5 per unsold unit, shortage cost = 20 per unit of unmet demand; demand is 50 with probability 0.4 and 100 with probability 0.6. The decision variable x is the production quantity.
x      E[Cost]
50     10(50) + 0.4(0) + 0.6(20 × 50) = 500 + 0 + 600 = 1100
75     10(75) + 0.4(5 × 25) + 0.6(20 × 25) = 750 + 50 + 300 = 1100
100    10(100) + 0.4(5 × 50) + 0.6(0) = 1000 + 100 + 0 = 1100
Since all options yield the same expected cost, any of these production quantities
is optimal.
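The expected costs in the table can be reproduced with a few lines of Python (the cost data are as stated above; the code itself is only an illustration):

scenarios = [(50, 0.4), (100, 0.6)]   # (demand, probability)

def expected_cost(x):
    cost = 10 * x                                  # production cost
    for demand, p in scenarios:
        cost += p * (5 * max(x - demand, 0)        # holding cost on unsold units
                     + 20 * max(demand - x, 0))    # shortage cost on unmet demand
    return cost

for x in (50, 75, 100):
    print(x, expected_cost(x))   # each candidate quantity gives an expected cost of 1100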
Disadvantages:
1. Computational Complexity: Requires solving large-scale models.
2. Data Requirement: Needs probability distributions for uncertainties.
3. Solution Interpretability: Some solutions may be difficult to interpret.
Conclusion
Stochastic programming is a powerful optimization tool for decision-making under
uncertainty. It balances risk and cost by incorporating probabilistic elements into
mathematical models, enabling better strategic planning in various fields like
finance, supply chain management, and energy systems.
Lagrange Multiplier Method
1. Introduction
The Lagrange multiplier method is a strategy for finding the local maxima and
minima of a function subject to equality constraints. It is widely used in
optimization problems in economics, physics, and engineering.
2. Mathematical Formulation
The Lagrange function is defined as:
L(x, y, λ) = f(x, y) + λ g(x, y)
where:
3. Example 1: Optimization with an Equality Constraint
Problem: Find the extremum of f(x, y) = x² + y² subject to the constraint g(x, y) = x + y − 1 = 0.
Solution
Lagrange function: L(x, y, λ) = x² + y² + λ(x + y − 1)
1. With respect to x:
∂L/∂x = 2x + λ = 0 ⇒ λ = −2x
2. With respect to y:
∂L/∂y = 2y + λ = 0 ⇒ λ = −2y
3. With respect to λ (constraint equation):
∂L/∂λ = x + y − 1 = 0
Equating the two expressions for λ:
−2x = −2y ⇒ x = y
Substituting into the constraint:
x + x = 1 ⇒ 2x = 1 ⇒ x = 1/2, y = 1/2
Substituting into f(x, y):
f(1/2, 1/2) = (1/2)² + (1/2)² = 1/4 + 1/4 = 1/2
Final Answer:
The constrained extremum of f(x, y) is 1/2 at x = 1/2, y = 1/2 (on the line x + y = 1 this point is a minimum of f).
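The result of Example 1 can be verified numerically with a constrained solver. A sketch using scipy.optimize.minimize with an equality constraint (the solver choice is an assumption, not part of the notes):

import numpy as np
from scipy.optimize import minimize

res = minimize(lambda v: v[0] ** 2 + v[1] ** 2,          # f(x, y) = x^2 + y^2
               x0=np.array([0.0, 1.0]),                   # any feasible starting point
               method="SLSQP",
               constraints=[{"type": "eq", "fun": lambda v: v[0] + v[1] - 1}])  # x + y = 1

print(res.x, res.fun)   # approximately [0.5, 0.5] and 0.5, matching the analytical result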
4. Example 2: Minimization with a Constraint
Problem:
Find the minimum of f ( x , y )=x 2+ y 2 subject to the constraint x 2+ y 2=4 .
Solution
1. Lagrange Function:
L(x, y, λ) = x² + y² + λ(x² + y² − 4)
2. Partial Derivatives:
∂L/∂x = 2x + 2λx = 0
∂L/∂y = 2y + 2λy = 0
∂L/∂λ = x² + y² − 4 = 0
3. Solving for x, y, λ:
2x(1 + λ) = 0 ⇒ x(1 + λ) = 0
2y(1 + λ) = 0 ⇒ y(1 + λ) = 0
x² + y² = 4
Since x = y = 0 would violate the constraint, we must have λ = −1, and every point on the circle x² + y² = 4 satisfies these equations, giving f(x, y) = 4.
Final Answer:
The minimum value is 4.
5. Steps in Tabular Form
Step   Description
1      Define the function f(x, y) and the constraint g(x, y) = 0.
2      Construct the Lagrange function L = f + λg.
3      Compute the partial derivatives and set them to zero.
4      Solve for x, y, λ.
5      Substitute the values into f(x, y) to find the extrema.
Disadvantages
Only Works with Equality Constraints: Cannot handle inequalities directly.
May Lead to Complex Equations: Solving the system can be difficult.
Needs Differentiability: Requires smooth functions.
7. Conclusion
The Lagrange Multiplier Method is a powerful tool for solving constrained
optimization problems. By introducing a multiplier λ , it converts the problem into a
system of equations that can be solved systematically.