- Minimum and Maximum
- Divide and Conquer
- Dynamic Programming Concepts
- Fibonacci Numbers
- Subset Sum Problem
- Longest Increasing Subsequence
- Longest Common Subsequence
- Other Dynamic Programming Usages

- The `minimum` of a set of `N` elements is the first element when the elements are ordered in increasing order (`index = 1`)
- The `maximum` of a set of `N` elements is the last element when the elements are ordered in increasing order (`index = N`)
- The `median` is the “halfway point” of the set
  - When `N` is odd, index = `(N+1)/2` gives a unique value
  - When `N` is even, index = `⌊(N+1)/2⌋` (lower median) or index = `⌈(N+1)/2⌉` (upper median)

- Naive maximum element (every element is compared against all others, `O(N²)`)

```
int FindMaxStupid(int[] numbers)
{
    foreach (var number in numbers)
    {
        // Check whether the current number is >= every candidate
        var currentNumberIsBest = true;
        foreach (var candidate in numbers)
            if (number < candidate)
                currentNumberIsBest = false;
        if (currentNumberIsBest)
            return number;
    }
    return int.MinValue; // unreachable for non-empty input
}
```

- Minimum element

```
int FindMin(int[] numbers)
{
    int min = numbers[0];
    for (int i = 1; i < numbers.Length; i++)
        if (numbers[i] < min) min = numbers[i];
    return min;
}
```

- Maximum element

```
int FindMax(int[] numbers)
{
    int max = numbers[0];
    for (int i = 1; i < numbers.Length; i++)
        if (numbers[i] > max) max = numbers[i];
    return max;
}
```
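The two scans above can also be combined into a single pass that tracks both extremes at once; a minimal sketch (Python for brevity, the function name `find_min_max` is my own):

```python
def find_min_max(numbers):
    """Return (minimum, maximum) of a non-empty list in one pass."""
    minimum = maximum = numbers[0]
    for x in numbers[1:]:
        if x < minimum:
            minimum = x
        elif x > maximum:
            maximum = x
    return minimum, maximum

print(find_min_max([3, 5, -1, 4, 2]))  # (-1, 5)
```

Note the `elif`: an element smaller than the current minimum cannot also be a new maximum, so most elements cost only one comparison.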

- `Divide`: If the input size is too large to deal with in a straightforward manner, divide the problem into two or more disjoint sub-problems
- `Conquer`: Recursively solve the sub-problems
- `Combine`: Take the solutions to the sub-problems and “merge” them into a solution for the original problem

- MergeSort
- The sub-problems are independent, all different

```
// Merge(arr, left, mid + 1, right) merges the two sorted
// halves arr[left..mid] and arr[mid+1..right] (not shown here)
void MergeSort(int[] arr, int left, int right) {
    if (right > left) {
        int mid = (right + left) / 2;
        MergeSort(arr, left, mid);
        MergeSort(arr, (mid + 1), right);
        Merge(arr, left, (mid + 1), right);
    }
}
```
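Since the `Merge` helper is not shown above, here is a self-contained sketch of the same divide–conquer–combine scheme, in Python:

```python
def merge_sort(arr):
    """Divide: split in half; Conquer: sort each half; Combine: merge."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Combine: merge the two sorted halves into one sorted list
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```

This version returns a new list rather than sorting in place, which keeps the merge step short; the C# version above works in place on index ranges instead.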

- Binary search
- Quick sort
- Merging arrays
- Merge sort

- Finding a majority element (majorant)
- Tower of Hanoi
- Fast multiplication
- Strassen’s Matrix Multiplication

- How does dynamic programming (DP) work?
  - An approach to solving problems
  - Store partial solutions of the smaller sub-problems
  - The sub-problems are usually solved bottom-up

- Steps to designing a DP algorithm:
  - Characterize the `optimal substructure`
  - `Recursively` define the value of an optimal solution
  - Compute the value `bottom up`
  - (if needed) `Construct` an optimal solution

- DP has the following characteristics:
  - Simple sub-problems
    - We break the original problem into smaller sub-problems that have the same structure
  - Optimal substructure
    - The optimal solution to the problem contains within it optimal solutions to its sub-problems
  - Overlapping sub-problems
    - The same sub-problems are solved more than once

- Using divide-and-conquer to solve problems that can be solved using DP is `inefficient`
  - The same common sub-problems have to be solved many times
- DP solves each of them once and stores the answers in a `table` for future use
  - This technique is known as `memoization`

- The Fibonacci numbers are the numbers in the following integer sequence: `0`, `1`, `1`, `2`, `3`, `5`, `8`, `13`, `21`, `34`, `55`, `89`, `144`, …
  - The first two numbers are `0` and `1`
  - Each subsequent number is the sum of the previous two numbers

- In mathematical terms:
  - `F`_{n} = `F`_{n-1} + `F`_{n-2}
  - `F`_{0} = `0`, `F`_{1} = `1`

- How can we find the `n`^{th} Fibonacci number using recursion (“divide and conquer”)?
  - Directly applying the recurrence formula:

```
decimal Fibonacci(int n)
{
    if (n == 0) return 0;
    if (n == 1) return 1;
    return Fibonacci(n - 1) + Fibonacci(n - 2);
}
```

- We can save the result of each function call
- Every time we call the function, we first check whether the value is already calculated
- This saves a lot of useless calculations!
- http://en.wikipedia.org/wiki/Memoization

```
// memo is an array of size n + 1, zero-initialized
decimal Fibonacci(int n)
{
    if (memo[n] != 0) return memo[n];
    if (n == 0) return 0;
    if (n == 1) return 1;
    memo[n] = Fibonacci(n - 1) + Fibonacci(n - 2);
    return memo[n];
}
```

- How to find the `n`^{th} Fibonacci number using the dynamic programming approach?
  - We can start solving the Fibonacci problem bottom-up, calculating partial solutions
  - We know the answer for the `0`^{th} and the `1`^{st} numbers of the Fibonacci sequence
  - And we know the `formula` to calculate each of the next numbers (`F`_{i} = `F`_{i-1} + `F`_{i-2})
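The bottom-up idea can be sketched in a few lines; only the last two partial solutions need to be kept (Python for brevity, the function name is my own):

```python
def fibonacci(n):
    """Bottom-up DP: build partial solutions from F(0) and F(1) upward."""
    if n < 2:
        return n
    prev, curr = 0, 1  # F(0), F(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr  # F(i) = F(i-1) + F(i-2)
    return curr

print(fibonacci(10))  # 55
```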

- Recurrent (naive recursive) solution
  - Complexity: ~`O(φ`^{n}`)` = `O(1.618`^{n}`)`
- DP or memoization solution
  - Complexity: ~`O(n)`
- The dynamic programming solution is way faster than the recurrent solution
  - If we want to find the `36`^{th} Fibonacci number:
    - The dynamic programming solution takes ~`36` steps
    - The recurrent solution takes ~`48 315 633` steps
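The step count of the naive recursion can be checked directly by counting its calls; a quick Python sketch (the counter itself is, fittingly, memoized so it runs instantly):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def naive_call_count(n):
    """Number of function calls the naive recursive Fibonacci(n) makes."""
    if n < 2:
        return 1  # a single base-case call
    return 1 + naive_call_count(n - 1) + naive_call_count(n - 2)

print(naive_call_count(36))  # 48315633
```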

- Given a set of integers, is there a non-empty subset whose sum is zero?
- Given a set of integers and an integer
`S`

, does any non-empty subset sum to`S`

? - Given a set of integers, find all possible sums
- Can you separate a set of coins into two piles of equal value?

- Solving the subset sum problem:
  - numbers = `{3, 5, -1, 4, 2}`, sum = `6`
  - start with possible = `{0}`
  - Step 1: obtain all possible sums of `{3}`
    - possible = `{0}` ∪ `{0+3}` = `{0, 3}`
  - Step 2: obtain all possible sums of `{3, 5}`
    - possible = `{0, 3}` ∪ `{0+5, 3+5}` = `{0, 3, 5, 8}`
  - Step 3: obtain all possible sums of `{3, 5, -1}`
    - possible = `{0, 3, 5, 8}` ∪ `{0-1, 3-1, 5-1, 8-1}` = `{-1, 0, 2, 3, 4, 5, 7, 8}`
  - …
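The step-by-step set construction above translates almost literally into code; a Python sketch (which, unlike the table-based C# version below, also handles negative numbers):

```python
def all_subset_sums(numbers):
    """Grow the set of reachable subset sums one element at a time."""
    possible = {0}
    for x in numbers:
        # Keep every old sum, and add x to every old sum
        possible |= {s + x for s in possible}
    return possible

sums = all_subset_sums([3, 5, -1, 4, 2])
print(sorted(sums))
print(6 in sums)  # True
```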

```
// Note: the pruning step below assumes non-negative set elements
bool IsSubsetSumRecursive(int[] set, int n, int sum)
{
    // Base cases
    if (sum == 0)
        return true;
    if (n == 0 && sum != 0)
        return false;

    // If the last element is greater than sum, then ignore it
    if (set[n - 1] > sum)
    {
        return IsSubsetSumRecursive(set, n - 1, sum);
    }

    /* Check if sum can be obtained by either
       (a) excluding the last element, or
       (b) including the last element */
    return IsSubsetSumRecursive(set, n - 1, sum)
        || IsSubsetSumRecursive(set, n - 1, sum - set[n - 1]);
}
```

```
// Note: assumes non-negative set elements and 0 <= sum <= set.Sum()
bool IsSubsetSum(int[] set, int sum)
{
    const int NotSet = -1;
    var sumOfAll = set.Sum(); // requires using System.Linq
    // last[j] = index of the last element used to reach sum j
    var last = new int[sumOfAll + 1];
    var currentSum = 0;
    for (var i = 1; i <= sumOfAll; i++) last[i] = NotSet;
    for (var i = 0; i < set.Length; i++)
    {
        // Go downward so each element is used at most once
        for (var j = currentSum; j >= 0; j--)
        {
            if (last[j] != NotSet &&
                last[j + set[i]] == NotSet)
            {
                last[j + set[i]] = i;
            }
        }
        currentSum += set[i];
    }
    return last[sum] != NotSet;
}
```

- Find a subsequence of a given sequence in which the subsequence elements are in increasing order, and in which the subsequence is as long as possible
- This subsequence is not necessarily contiguous nor unique

- The longest increasing subsequence problem is solvable in `O(n*log(n))` time [more info]
- We will review one simple DP algorithm with complexity `O(n*n)`
- Example: `1`, 8, `2`, 7, `3`, `4`, 1, `6` (the highlighted elements form the LIS: `1, 2, 3, 4, 6`)

```
// S = input sequence; L[i] = LIS length ending at index i;
// P[i] = predecessor of i in that LIS;
// maxLength = 1 and bestIndex = 0 are assumed initialized beforehand
L[0] = 1; P[0] = NoPrevious;
for (int i = 1; i < S.Length; i++)
{
    L[i] = 1;
    P[i] = NoPrevious;
    for (int j = i - 1; j >= 0; j--)
    {
        if (L[j] + 1 > L[i] && S[j] < S[i])
        {
            L[i] = L[j] + 1;
            P[i] = j;
        }
    }
    if (L[i] > maxLength)
    {
        bestIndex = i;
        maxLength = L[i];
    }
}
```

```
void PrintLongestIncreasingSubsequence(
    int[] sequence, int[] predecessor, int maxIndex)
{
    var lis = new List<int>();
    while (maxIndex != NoPrevious)
    {
        lis.Add(sequence[maxIndex]);
        maxIndex = predecessor[maxIndex];
    }
    lis.Reverse();
    Console.WriteLine("subsequence = "
        + string.Join(", ", lis));
}
```
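The same `O(n*n)` algorithm, including the reconstruction step, combined into one self-contained Python sketch (names are my own):

```python
def longest_increasing_subsequence(s):
    """O(n*n) DP: lengths[i] = LIS length ending at i, prev[i] = predecessor."""
    if not s:
        return []
    n = len(s)
    lengths = [1] * n
    prev = [-1] * n  # -1 plays the role of NoPrevious
    best = 0
    for i in range(1, n):
        for j in range(i):
            if s[j] < s[i] and lengths[j] + 1 > lengths[i]:
                lengths[i] = lengths[j] + 1
                prev[i] = j
        if lengths[i] > lengths[best]:
            best = i
    # Walk the predecessor chain back from the best end index
    lis = []
    while best != -1:
        lis.append(s[best])
        best = prev[best]
    return lis[::-1]

print(longest_increasing_subsequence([1, 8, 2, 7, 3, 4, 1, 6]))  # [1, 2, 3, 4, 6]
```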

- Given two sequences `x[1..m]` and `y[1..n]`, find their longest common subsequence (LCS)
- For example, if we have `x = "ABCBDAB"` and `y = "BDCABA"`, their longest common subsequence will be `"BCBA"`
- `S`_{1} = `GCCCTAGCG`, `S`_{2} = `GCGCAATG`

- Let `C`_{1} = the right-most character of `S`_{1}
- Let `C`_{2} = the right-most character of `S`_{2}
- Let `S`_{1}`'` = `S`_{1} with `C`_{1} “chopped-off”
- Let `S`_{2}`'` = `S`_{2} with `C`_{2} “chopped-off”
- There are three recursive sub-problems:
  - `L`_{1} = `LCS(S`_{1}`', S`_{2}`)`
  - `L`_{2} = `LCS(S`_{1}`, S`_{2}`')`
  - `L`_{3} = `LCS(S`_{1}`', S`_{2}`')`
- The solution to the original problem is whichever of these is the longest:
  - `L`_{1}
  - `L`_{2}
  - `L`_{3}, if `C`_{1} is not equal to `C`_{2}
  - `L`_{3} appended with `C`_{1}, if `C`_{1} equals `C`_{2}

- This recursive solution requires multiple computations of the same sub-problems
- This recursive solution can be replaced with DP

- To compute the LCS efficiently using dynamic programming, we start by constructing a table in which we build up partial results
- We’ll fill up the table from top to bottom, and from left to right
- Each cell = the length of an LCS of the two string prefixes up to that row and column
- Each cell will contain a solution to a sub-problem of the original problem
  - `S`_{1} = `GCCCTAGCG`
  - `S`_{2} = `GCGCAATG`
- An empty string has nothing in common with any other string, therefore the 0-length prefixes will have the value `0` in the LCS table

```
for (i = 0; i <= m; i++)
{
    c[i, 0] = 0;
}
```

```
for (j = 0; j <= n; j++)
{
    c[0, j] = 0;
}
```

```
int[,] LCS(string firstString, string secondString)
{
    var m = firstString.Length;
    var n = secondString.Length;
    var c = new int[m + 1, n + 1];
    for (var i = 1; i <= m; i++)
    {
        for (var j = 1; j <= n; j++)
        {
            if (firstString[i - 1] == secondString[j - 1])
                c[i, j] = c[i - 1, j - 1] + 1;
            else
                c[i, j] = Math.Max(c[i, j - 1], c[i - 1, j]);
        }
    }
    return c; // Answer in c[m, n]
}
```

```
// FirstString and SecondString are the two input strings
// (the same ones used to build the table c)
void PrintLCS(int i, int j, int[,] c)
{
    if (i == 0 || j == 0) return;
    if (FirstString[i - 1] == SecondString[j - 1])
    {
        PrintLCS(i - 1, j - 1, c);
        Console.Write(SecondString[j - 1]);
    }
    else if (c[i, j] == c[i - 1, j])
    {
        PrintLCS(i - 1, j, c);
    }
    else
    {
        PrintLCS(i, j - 1, c);
    }
}
```
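The table construction and the reconstruction combined into one runnable Python sketch (an iterative walk-back replaces the recursive print; names are my own):

```python
def lcs(x, y):
    """Build the DP table, then walk it back to recover one LCS."""
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i][j - 1], c[i - 1][j])
    # Reconstruct by walking from c[m][n] back toward c[0][0]
    result = []
    i, j = m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            result.append(x[i - 1])
            i -= 1; j -= 1
        elif c[i][j] == c[i - 1][j]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(result))

print(lcs("ABCBDAB", "BDCABA"))  # "BCBA"
```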

- In many DP problems there is a moving object with some restrictions
- For example: in how many ways can you reach the bottom-right corner of a grid starting from the top-left corner?
  - You can move only right and down
  - Some cells are unreachable
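The grid-walk question above can be answered with a small DP table: each cell's count is the sum of the counts of the cell above and the cell to the left. A Python sketch (the blocked-cell example is my own):

```python
def count_paths(rows, cols, blocked):
    """Count right/down paths from top-left to bottom-right, skipping blocked cells."""
    ways = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if (r, c) in blocked:
                continue  # unreachable cell contributes 0 ways
            if r == 0 and c == 0:
                ways[r][c] = 1
            else:
                ways[r][c] = (ways[r - 1][c] if r > 0 else 0) + \
                             (ways[r][c - 1] if c > 0 else 0)
    return ways[rows - 1][cols - 1]

print(count_paths(3, 3, blocked=set()))     # 6
print(count_paths(3, 3, blocked={(1, 1)}))  # 2
```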

- Mathematical, financial and economic optimizations
- Optimal consumption and saving
- The core idea of DP is to avoid repeated work by remembering partial results. This is a very common technique whenever performance problems arise

- Bioinformatics
- sequence alignment, protein folding, RNA structure prediction and protein-DNA binding

- Control theory, information theory
- Operations research, decision making
- Computer science:
- Theory, Graphics, AI
- Markov chains
- Spelling correction

- Integer Knapsack Problem
- Unix diff for comparing two files
- Dijkstra’s algorithm for the shortest path problem
- Bellman–Ford algorithm shortest distance in a graph
- Floyd’s All-Pairs shortest path algorithm
- Cocke–Younger–Kasami (CYK) algorithm for parsing context-free grammars
- Methods for solving the travelling salesman problem
- Some methods for solving interval scheduling problems
- Edit distance (Levenshtein distance)
- Many other string and graph algorithms
- en.wikipedia.org/wiki/Dynamic_programming
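One of the applications listed above, edit distance (Levenshtein distance), follows the same table-filling pattern as LCS; a minimal Python sketch:

```python
def edit_distance(a, b):
    """Levenshtein distance: minimum insertions, deletions, substitutions."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j  # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[m][n]

print(edit_distance("kitten", "sitting"))  # 3
```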

- Divide-and-conquer is a method for algorithm design
- Dynamic programming is a way of improving on inefficient divide-and-conquer algorithms
- Dynamic programming is applicable when the sub-problems are dependent, that is, when sub-problems share sub-sub-problems
- Recurrence relations can be computed efficiently
- The longest increasing subsequence and longest common subsequence problems can be solved efficiently using the dynamic programming approach

© 2011-2024 **Nikolay Kostov's Blog**