2) [9 points] The following can be solved greedily.
You are given a set of posters with a limited number of types. You want to make a
symmetrical display with the posters, with the tallest posters in the middle. All
posters have the same width. Can you make the display symmetrical, and if so,
what is the largest symmetrical display you can make with the tallest posters in the middle?
a. [3 points] What is the optimal substructure for placement?
b. [3 points] What is the greedy property? That is, what is your rule for selecting where a poster goes?
c. [3 points] Prove the correctness of your algorithm by contradiction.
a.
Optimal Substructure: A given problem has the Optimal Substructure Property if an optimal solution to the problem can be obtained from optimal solutions of its subproblems.
For example, the Shortest Path problem has the following optimal substructure property: if a node x lies on the shortest path from a source node u to a destination node v, then the shortest path from u to v is the combination of the shortest path from u to x and the shortest path from x to v. Standard shortest-path algorithms such as Floyd–Warshall (all pairs) and Bellman–Ford (single source) are typical examples of dynamic programming.
On the other hand, the Longest Path problem does not have the Optimal Substructure property. (By Longest Path we mean the longest simple path, i.e. a path without a cycle, between two nodes.) Consider the unweighted graph given in the CLRS book. There are two longest paths from q to t: q→r→t and q→s→t.

Unlike shortest paths, these longest paths do not have the optimal substructure property. For example, the longest path q→r→t is not a combination of the longest path from q to r and the longest path from r to t, because the longest path from q to r is q→s→t→r and the longest path from r to t is r→q→s→t.
b.
A greedy algorithm, as the name suggests, always makes the choice that seems to be the best at that moment. This means that it makes a locally-optimal choice in the hope that this choice will lead to a globally-optimal solution.
How do you decide which choice is optimal?
Assume that you have an objective function that needs to be optimized (either maximized or minimized) at a given point. A greedy algorithm makes a greedy choice at each step to optimize this objective function. It has only one shot at the solution: it never goes back and reverses a decision.
Greedy algorithms have advantages (they are simple and usually fast) and disadvantages.
Note: a greedy strategy is not correct for every problem; its correctness must be argued for each problem. Coin change with certain denomination sets is a classic case where the greedy choice fails.
How do you create a greedy algorithm? Consider the following example.
Being a very busy person, you have exactly T time to do some interesting things, and you want to do as many of them as possible.
You are given an array A of integers, where each element is the time a thing takes to complete. You want to calculate the maximum number of things you can do in the limited time that you have.
This is a simple greedy problem. Sort A in ascending order, then in each iteration greedily select the thing that takes the least remaining time, maintaining two variables, currentTime and numberOfThings. At each step:
1) Add the current element's time to currentTime.
2) Increment numberOfThings.
Repeat this as long as currentTime is less than or equal to T.
Let A = {5, 3, 4, 2, 1} and T = 6
After sorting, A = {1, 2, 3, 4, 5}
After the 1st iteration: currentTime = 1, numberOfThings = 1.
After the 2nd iteration: currentTime = 1 + 2 = 3, numberOfThings = 2.
After the 3rd iteration: currentTime = 3 + 3 = 6, numberOfThings = 3.
After the 4th iteration, currentTime is 6 + 4 = 10, which is greater than T. Therefore, the answer is 3.
Implementation
#include <iostream>
#include <algorithm>
using namespace std;

const int MAX = 105;
int A[MAX];

int main() {
    int T, N, numberOfThings = 0, currentTime = 0;
    cin >> N >> T;
    for (int i = 0; i < N; ++i)
        cin >> A[i];
    sort(A, A + N);                 // greedy: cheapest things first
    for (int i = 0; i < N; ++i) {
        currentTime += A[i];
        if (currentTime > T)
            break;
        numberOfThings++;
    }
    cout << numberOfThings << endl;
    return 0;
}
c.
I've seen correctness proofs for other search algorithms; however, for this particular algorithm, search in a row-wise and column-wise sorted matrix, I'm not able to produce a proper proof.
The algorithm is:
Let x = the element we're trying to search for in the matrix, and e = the current element we're examining.
1) Start with the top-right element.
2) Loop: compare the element e with x:
   i) if e = x, return the position of e, since we found x in the given matrix;
   ii) if e > x, move left to check elements smaller than e (if we move out of the bounds of the matrix, break and return false);
   iii) if e < x, move down to check elements greater than e (if we move out of the bounds of the matrix, break and return false).
3) Repeat i), ii), and iii) until the element is found or false is returned.
Essentially, we start from the top right and traverse toward the bottom left: if the element we're searching for is smaller than the current element, we move left; if it's bigger, we move down.
I'd like to use contradiction by saying the following: "If we traversed all the way to the bottom left and didn't find the element we're searching for (assuming it's in the matrix), then ..."
However, I'm not sure how to follow up the "then" properly, because whatever I say just repeats what the algorithm does without really proving anything.
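One way to get past merely restating the algorithm is to phrase the argument around a loop invariant. This is a sketch of how the contradiction could be organized, under the assumption that rows and columns are sorted ascending; it is not the assignment's official proof:

```latex
\textbf{Invariant.} At every step, if $x$ is in the matrix, then $x$ lies in the
submatrix consisting of rows $\geq r$ and columns $\leq c$, where $(r, c)$ is
the current position.

\textbf{Why each move preserves it.} When $M[r][c] > x$, every element of
column $c$ in a row $\geq r$ is at least $M[r][c] > x$ (columns are sorted),
so column $c$ cannot contain $x$ and may be discarded ($c \leftarrow c - 1$).
Symmetrically, when $M[r][c] < x$, every element of row $r$ in a column
$\leq c$ is at most $M[r][c] < x$ (rows are sorted), so row $r$ may be
discarded ($r \leftarrow r + 1$).

\textbf{Contradiction.} Suppose $x$ is in the matrix but the algorithm returns
false. The loop exits only when $r$ or $c$ leaves the matrix, i.e.\ when the
remaining submatrix is empty. By the invariant, $x$ must lie in that empty
submatrix, a contradiction. Hence if $x$ is present, the algorithm finds it.
```

The key point is that each move is justified by the sortedness, not by the algorithm's own description, which is what makes the "then ..." non-circular.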