Unit-VI - Dynamic Programming Problem

Dynamic Programming Problem (D.P.P.) is a mathematical technique for finding optimum solutions to multi-stage decision problems, proposed by Richard Bellman and G.B. Dantzig in 1952. It involves breaking down complex problems into smaller sub-problems or stages, where decisions at one stage affect subsequent stages, and uses recursive equations for optimization. Key components include stages, state variables, and return functions, with applications in various optimization problems like recurrence relations and capital budgeting.


DYNAMIC PROGRAMMING PROBLEM

Definition:

“Dynamic Programming Problem (D.P.P.)” is a mathematical technique used for determining the optimum solution to a multi-stage decision problem.

The dynamic programming technique was proposed by Richard Bellman and G.B. Dantzig in the year 1952. In its initial stage, dynamic programming was known as “Stochastic Linear Programming”.

Dynamic programming helps in solving problems in which a number of decisions have to be taken. In most business situations, decision variables change with time; such business situations are said to be dynamic in nature. The dynamic programming technique can be used effectively in situations of this type.

Solving optimization problems having many decision variables through calculus or other methods is very difficult, as it involves huge calculations. In such situations the dynamic programming method can be used, wherein a big problem is divided into various sub-problems (stages). The decision taken at one stage influences the decisions of the other stages. Recursive equations are used in dynamic programming for solving large and difficult problems.
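The stage-wise decomposition and the recursive equations described above can be sketched directly in code. The short Python example below is an illustration added to these notes (the names solve_backward and stage_return, and the quadratic return used in the demonstration, are assumptions and not part of the original material): it works backwards from the last stage, storing the best achievable return for every state so that earlier stages can reuse it.

# Illustrative sketch of backward recursion over stages (not from the original notes).
def solve_backward(num_stages, states, decisions, stage_return, transition):
    """Return {(stage, state): best value}, computed from the last stage backwards."""
    best = {}
    for stage in range(num_stages, 0, -1):            # stages N, N-1, ..., 1
        for s in states:
            values = []
            for d in decisions(stage, s):             # feasible decisions at this state
                v = stage_return(stage, s, d)         # immediate return of the decision
                if stage < num_stages:                # plus best return of the later stages
                    v += best[(stage + 1, transition(stage, s, d))]
                values.append(v)
            best[(stage, s)] = max(values)
    return best

# Hypothetical demonstration: allocate 5 whole units over 3 stages, where allocating
# d units at a stage is assumed to return d*d (an arbitrary illustrative return).
B = 5
table = solve_backward(
    num_stages=3,
    states=range(B + 1),                              # state = units still available
    decisions=lambda stage, s: range(s + 1),          # allocate 0..s units at this stage
    stage_return=lambda stage, s, d: d * d,
    transition=lambda stage, s, d: s - d,             # units left for the later stages
)
print(table[(1, B)])                                  # best total return starting with 5 units

In the worked problems below the stage returns combine by multiplication rather than addition, but the backward, stage-by-stage structure is the same.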

Definitions in D.P.P

1. STAGE:

The D.P.P. can be decomposed or divided into a sequence of smaller sub-problems called “stages”. At each stage there are a number of decision alternatives (courses of action), and a decision is made by selecting the most suitable alternative. Stages very often represent different time periods in the planning horizon of the problem, but they may also represent places, people or other entities.

Example:

In a replacement problem each year is a stage; in the salesman allocation problem each territory represents a stage.

2. STATE:

Each stage in a D.P.P. has a certain number of states associated with it. These states represent the various conditions of the decision process at that stage. The variables that specify the condition of the decision process, or describe the status of the system at a particular stage, are called “state variables”. These variables provide information for analysing the possible effects that the current decision could have upon the future course of action. At any stage of the decision process there could be a finite or infinite number of states.

Example:

A specific city is referred to as the state variable at any stage of the shortest-route problem.
3. RETURN FUNCTION:

At each stage a decision is made that can affect the state of the system at the next stage and help in arriving at the optimal solution at the current stage. Every decision that is made has its own merit in terms of worth or benefit associated with it, and this can be described in an algebraic equation form. This equation is generally called a “Return function”, since for every set of decisions a return on each decision is obtained. This return function, in general, depends upon the state variable as well as the decision made at a particular stage.

For a multi-stage decision process, the functional relationship among state, stage and decision may be described by a recursive equation of the form shown below.
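One commonly used general form of this relationship (stated here for reference; the notation is an addition to these notes) is the recursive equation

    f_n(u_n) = Opt over y_n { r_n(y_n) ∘ f_(n-1)(u_(n-1)) },   with u_(n-1) = u_n - y_n in the allocation problems below,

in which u_n is the state entering stage n, y_n is the decision taken at that stage, r_n(y_n) is its return, ‘Opt’ stands for Max or Min as the problem requires, and ‘∘’ indicates how the stage returns combine (addition in some problems; multiplication in the allocation problems worked below, giving f_n(u_n) = Max { y_n · f_(n-1)(u_n - y_n) }).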
TYPES OF PROBLEMS IN D.P.P:

1) Recurrence Relation

2) L.P.P. in D.P.P

3) Shortest Path Method (or) Critical Path Method

4) Capital Budgeting Problem

1) RECURRENCE RELATION:

PROBLEM-1

Use dynamic programming to solve


Maximize Z = y1 · y2 · y3

Subject to y1 + y2 + y3 = 5

And y1, y2, y3 ≥ 0

Solution:

Let us define three state variables u1, u2, u3 such that

At stage 3:   u3 = y1 + y2 + y3   →①

At stage 2:   u2 = y1 + y2 = u3 - y3   →②

At stage 1:   u1 = y1 = u2 - y2   →③

The maximum value of Z for any feasible value of the state variables is given by

f1(u1) = y1 = u2 - y2   →④

f2(u2) = Max {y1 · y2} = Max {(u2 - y2) · y2}   →⑤

f3(u3) = Max {y1 · y2 · y3} = Max {(y1 · y2) · y3} = Max {f2(u2) · y3}   →⑥
By using Bellman's principle of optimality, we have:

Consider equation ⑤ and put equation ④ in ⑤:

f2(u2) = Max {y1 · y2}
       = Max {(u2 - y2) · y2}
       = Max {y2·u2 - y2²}   →⑦

Differentiating w.r.t. y2 and equating to 0, we get

d/dy2 [f2(u2)] = 0
d/dy2 {y2·u2 - y2²} = 0
⇒ d/dy2 (y2·u2) - d/dy2 (y2²) = 0
⇒ (1 · u2) - 2y2 = 0
⇒ u2 - 2y2 = 0
⇒ u2 = 2y2
⇒ y2 = u2/2   →⑧

Substituting equation ⑧, i.e. y2 = u2/2, in f2(u2), we get

f2(u2) = Max {y2(u2 - y2)}
       = Max {(u2/2)(u2 - u2/2)}
       = Max {(u2/2)((2u2 - u2)/2)}
       = Max {(u2/2)(u2/2)}
⇒ f2(u2) = Max {u2²/4}   →⑨

Consider equation ⑥ and put f2(u2) in equation ⑥:

f3(u3) = Max {y1 · y2 · y3}
       = Max {(y1 · y2) · y3}
       = Max {f2(u2) · y3}
       = Max {(u2²/4) · y3}
       = Max {((u3 - y3)²/4) · y3}
Again differentiating f3(u3) w.r.t. y3 and equating to zero, we get

d/dy3 [f3(u3)] = 0
d/dy3 {((u3 - y3)²/4) · y3} = 0
⇒ (1/4) d/dy3 {y3 (u3 - y3)²} = 0
⇒ (1/4) [y3 · d/dy3 (u3 - y3)² + (u3 - y3)² · d/dy3 (y3)] = 0
⇒ [y3 · 2(u3 - y3)(-1) + (u3 - y3)² (1)] = 0
⇒ -2y3(u3 - y3) + (u3 - y3)² = 0
⇒ (u3 - y3) [-2y3 + (u3 - y3)] = 0
⇒ (u3 - y3) [-2y3 + u3 - y3] = 0
⇒ (u3 - y3) [u3 - 3y3] = 0
⇒ (u3 - y3) = 0 or [u3 - 3y3] = 0
⇒ u3 = y3 or u3 = 3y3
⇒ u3 = y3 or y3 = u3/3   →⑩

⇒ u3 ≠ y3, since u3 = y1 + y2 + y3 (taking u3 = y3 would force y1 = y2 = 0 and give Z = 0)

From equation ①,

u3 = y1 + y2 + y3 = 5
⇒ u3 = 5

Put u3 = 5 in equation ⑩:

⇒ y3 = 5/3   →⑪

Consider equation ②:

⇒ u2 = u3 - y3
      = 3y3 - y3           [since u3 = 3y3, from ⑩]
      = 3(5/3) - (5/3)
      = (15/3) - (5/3) = (15 - 5)/3
      = 10/3
⇒ u2 = 10/3   →⑫

-------------------------------------------------------------------------------------------------------------------------

⇒ y3 = 5/3   →⑬

y2 = u2/2 = (1/2)(10/3) = 5/3
⇒ y2 = 5/3   →⑭

y1 = u2 - y2 = (10/3) - (5/3) = 5/3
⇒ y1 = 5/3   →⑮

-----------------------------------------------------------------------------------------------

Thus y1 = 5/3, y2 = 5/3, y3 = 5/3

Maximum Z = y1 · y2 · y3 = (5/3)·(5/3)·(5/3) = 125/27

∴ Max Z = 125/27 at y1 = y2 = y3 = 5/3
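As a quick numerical cross-check (an addition to these notes, not part of the original solution; the grid resolution is an arbitrary choice), the short Python script below searches feasible values of y1 and y2, with y3 fixed by the constraint, and confirms that the product is largest near y1 = y2 = y3 = 5/3, where it is about 125/27 ≈ 4.63.

# Brute-force check of Problem 1: maximize y1*y2*y3 subject to y1 + y2 + y3 = 5.
best_value, best_point = -1.0, None
steps = 1000                                   # grid resolution (arbitrary choice)
for i in range(steps + 1):
    for j in range(steps + 1 - i):
        y1 = 5.0 * i / steps
        y2 = 5.0 * j / steps
        y3 = 5.0 - y1 - y2                     # constraint used to eliminate y3
        value = y1 * y2 * y3
        if value > best_value:
            best_value, best_point = value, (y1, y2, y3)

print(best_point, best_value)                  # approximately (5/3, 5/3, 5/3) and 125/27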
PROBLEM-2

Determine the values of u1, u2, u3 so as to

Maximize Z = u1 · u2 · u3

Subject to u1 + u2 + u3 = 10

And u1, u2, u3 ≥ 0

Solution: Let us define three state variables xj (j = 1, 2, 3) such that

At stage 3:   x3 = u1 + u2 + u3   →①

At stage 2:   x2 = u1 + u2 = x3 - u3   →②

At stage 1:   x1 = u1 = x2 - u2   →③

The maximum value of Z for any feasible value of the state variables is given by

f1(x1) = u1 = x2 - u2   →④

f2(x2) = Max {u1 · u2} = Max {(x2 - u2) · u2}   →⑤

f3(x3) = Max {u1 · u2 · u3} = Max {(u1 · u2) · u3} = Max {f2(x2) · u3}   →⑥

By using Bellman's principle of optimality, we have:

Consider equation ⑤ and put equation ④ in ⑤:

f2(x2) = Max {u1 · u2}
       = Max {(x2 - u2) · u2}
       = Max {u2·x2 - u2²}   →⑦

Differentiating f2(x2) w.r.t. u2 and equating to 0, we get

d/du2 [f2(x2)] = 0
d/du2 {u2·x2 - u2²} = 0
⇒ d/du2 (u2·x2) - d/du2 (u2²) = 0
⇒ (1 · x2) - 2u2 = 0
⇒ x2 - 2u2 = 0
⇒ x2 = 2u2
⇒ u2 = x2/2   →⑧

Substituting equation ⑧, i.e. u2 = x2/2, in f2(x2), we get

f2(x2) = Max {u2(x2 - u2)}
       = Max {(x2/2)(x2 - x2/2)}
       = Max {x2²/2 - x2²/4}
⇒ f2(x2) = Max {x2²/4}   →⑨

Consider equation ⑥ and put f2(x2) in equation ⑥:

f3(x3) = Max {u1 · u2 · u3}
       = Max {(u1 · u2) · u3}
       = Max {f2(x2) · u3}
       = Max {(x2²/4) · u3}
       = Max {((x3 - u3)²/4) · u3}

Again differentiating f3(x3) w.r.t. u3 and equating to zero, we get

d/du3 [f3(x3)] = 0
d/du3 {((x3 - u3)²/4) · u3} = 0
⇒ (1/4) d/du3 {u3 (x3 - u3)²} = 0
⇒ (1/4) [u3 · d/du3 (x3 - u3)² + (x3 - u3)² · d/du3 (u3)] = 0
⇒ [u3 · 2(x3 - u3)(-1) + (x3 - u3)² (1)] = 0
⇒ -2u3(x3 - u3) + (x3 - u3)² = 0
⇒ (x3 - u3) [-2u3 + (x3 - u3)] = 0
⇒ (x3 - u3) [-2u3 + x3 - u3] = 0
⇒ (x3 - u3) [x3 - 3u3] = 0
⇒ (x3 - u3) = 0 or [x3 - 3u3] = 0
⇒ x3 = u3 or x3 = 3u3
⇒ x3 = u3 or u3 = x3/3   →⑩

⇒ x3 ≠ u3, since x3 = u1 + u2 + u3 (taking x3 = u3 would force u1 = u2 = 0 and give Z = 0)

From equation ①,

x3 = u1 + u2 + u3 = 10
⇒ x3 = 10

Put x3 = 10 in equation ⑩:

⇒ u3 = 10/3   →⑪

Consider equation ②:

⇒ x2 = x3 - u3
      = 3u3 - u3           [since x3 = 3u3, from ⑩]
      = 3(10/3) - (10/3)
      = (30/3) - (10/3) = (30 - 10)/3
      = 20/3
⇒ x2 = 20/3   →⑫

-------------------------------------------------------------------------------------------------------------------------

⇒ u3 = 10/3   →⑬

u2 = x2/2 = (1/2)(20/3) = 10/3
⇒ u2 = 10/3   →⑭

u1 = x2 - u2 = (20/3) - (10/3) = 10/3
⇒ u1 = 10/3   →⑮

-----------------------------------------------------------------------------------------------

Thus u1 = 10/3, u2 = 10/3, u3 = 10/3

Maximum Z = u1 · u2 · u3 = (10/3)·(10/3)·(10/3) = 1000/27

∴ Max Z = 1000/27 at u1 = u2 = u3 = 10/3
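The same recursion can be checked numerically. The sketch below is an illustration added to these notes (the 0.01 step is an arbitrary discretisation, so the result is only approximate); it evaluates f2 and f3 on a grid of decisions and reproduces a value close to 1000/27 ≈ 37.04 for x3 = 10.

# Approximate check of Problem 2 using the two-stage recursion derived above:
# f2(x2) = max over u2 of u2*(x2 - u2),  f3(x3) = max over u3 of u3*f2(x3 - u3).
def f2(x2, step=0.01):
    grid = [k * step for k in range(round(x2 / step) + 1)]
    return max(u2 * (x2 - u2) for u2 in grid)

def f3(x3, step=0.01):
    grid = [k * step for k in range(round(x3 / step) + 1)]
    return max(u3 * f2(x3 - u3, step) for u3 in grid)

print(f3(10.0))   # close to 1000/27 = 37.04, attained near u1 = u2 = u3 = 10/3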
2) L.P.P. IN D.P.P:

PROBLEM-1

Maximize Z = 50x + 80y

Such that x ≤ 80

y ≤ 60

5x + 6y ≤ 600

x + 2y ≤ 160

And x, y ≥ 0

Solution:

Given that

Maximize Z = 50x + 80y

x ≤ 80   →①

y ≤ 60   →②

5x + 6y ≤ 600   →③

x + 2y ≤ 160   →④

From equation ③:

⇒ 5x + 6y ≤ 600
⇒ 5x ≤ 600 - 6y
⇒ x ≤ (600 - 6y)/5   →⑤

From equation ④:

⇒ x + 2y ≤ 160
⇒ x ≤ 160 - 2y   →⑥

Equating the right-hand sides of equations ⑤ and ⑥ (both constraints bind at the optimum), we have

⇒ (600 - 6y)/5 = 160 - 2y

⇒ 600 – 6y = 5(160 – 2y)


⇒ 600 – 6y = 800 – 10 y

⇒ 10 y – 6y = 800 – 600

⇒ 4 y = 200

⇒ y = 200/ 4

⇒ y = 50 →⑦ [∵ y ≤ 60]

Put y = 50 in equation 3, then we get

⇒ 5 x + 6 y ≤ 600

⇒ 5 x + 6 (50) ≤ 600

⇒ 5 x + 300 ≤ 600

⇒ 5 x ≤ 600 – 300

⇒ 5x = 300

⇒ x = 300 / 5

⇒ x = 60 →⑧ [∵x ≤ 80]

∴ The maximum value of Z = 50x + 80y = 50(60) + 80(50) = 3000 + 4000 = 7000

Maximum Z = 7000

x = 60, y = 50
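The result can also be verified with an off-the-shelf LP solver. The snippet below is an added illustration and assumes SciPy is available; since scipy.optimize.linprog minimises, the objective coefficients are negated.

# Check of the LP above with SciPy's linear programming routine
# (assumes SciPy is installed; linprog minimises, so the objective is negated).
from scipy.optimize import linprog

res = linprog(
    c=[-50, -80],                      # maximise 50x + 80y  <=>  minimise -50x - 80y
    A_ub=[[5, 6], [1, 2]],             # 5x + 6y <= 600,  x + 2y <= 160
    b_ub=[600, 160],
    bounds=[(0, 80), (0, 60)],         # 0 <= x <= 80,  0 <= y <= 60
)
print(res.x, -res.fun)                 # expected: x = 60, y = 50, Z = 7000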

PROBLEM-2

Maximize Z = 3a + 5b

Such that a ≤ 4

b ≤ 6

3a + 2b ≤ 18

And a, b ≥ 0

Solution:

Given that

Maximize Z = 3a + 5b

Such that a ≤ 4   →①

b ≤ 6   →②

3a + 2b ≤ 18   →③

Consider equation ③:

⇒ 3a + 2b ≤ 18
⇒ 2b ≤ 18 - 3a
⇒ b ≤ (18 - 3a)/2
⇒ b ≤ 9 - (3/2)a   →④

Equating equations ② and ④:

⇒ 6 = 9 - (3/2)a
⇒ 6 = (18 - 3a)/2
⇒ 12 = 18 - 3a
⇒ 3a = 18 - 12
⇒ 3a = 6
⇒ a = 6/3 = 2   →⑤

CHECKING:

Substitute a = 2 in equation ④; then we get

⇒ b = 9 - (3/2)a
⇒ b = 9 - (3/2)(2)
⇒ b = 9 - 3
⇒ b = 6   →⑥

which is true, so

∴ The maximum value of Z is

Maximum Z = 3a + 5b = 3(2) + 5(6) = 6 + 30 = 36

Maximum Z = 36

⇒ a = 2, b = 6
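A rough numerical confirmation of this second problem (added here; the grid step is an arbitrary choice and this is not a proper LP solver) is to scan feasible (a, b) pairs directly:

# Quick check of the second LP by searching a fine grid of feasible points.
best = (0.0, 0.0, 0.0)                                # (Z, a, b)
steps_a, steps_b = 400, 600                           # a in [0, 4], b in [0, 6]
for i in range(steps_a + 1):
    a = 4.0 * i / steps_a
    for j in range(steps_b + 1):
        b = 6.0 * j / steps_b
        if 3 * a + 2 * b <= 18:                       # remaining constraint
            z = 3 * a + 5 * b
            if z > best[0]:
                best = (z, a, b)

print(best)                                           # expected: Z = 36 at a = 2, b = 6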

3) SHORTEST PATH METHOD:

PROBLEM-1

Mr. Banarjee, a salesman, has decided to travel from city 1 to city 10. He
wants to plan for minimum distance programme and visit
