In: Computer Science
a) Write Quick Sort and its function algorithm
b) Derive the computation time for each statement in the algorithm. (You must explain your reason for each statement about how you get the answer of each computation time in at one or two sentences each.)
Answer:
(a) Quick Sort:
* Quick sort is a highly efficient sorting algorithm and is based on partitioning an array of data into smaller arrays.
* A large array is partitioned into two arrays: one holds values smaller than a specified value, called the pivot, on which the partition is based, and the other holds values greater than the pivot.
* Quicksort partitions an array and then calls itself recursively twice to sort the two resulting subarrays.
* In this algorithm the best case performance is O(n log n), the average case performance is O(n log n), and the worst case performance is O(n^2); the auxiliary space is O(n) for the naive version and O(log n) for Hoare's 1962 version.
* How quick sort works:
* Taking the analogical view in perspective, consider a situation where one had to sort the papers bearing the names of the students, by name from A-Z. One might use the approach as follows:
a. Select any splitting value, say L. The splitting value is also known as the pivot.
b. Divide the stack of papers into two. A-L and M-Z. It is not necessary that the piles should be equal.
c. Repeat the above two steps with the A-L pile, splitting it into two smaller piles, and likewise with the M-Z pile. The process is repeated until the piles are small enough to be sorted easily.
d. Ultimately, the smaller piles can be placed one on top of the other to produce a fully sorted and ordered set of papers.
e. The approach used here is reduction at each split to get to the single-element array.
f. At every split, the pile was divided and then the same approach was used for the smaller piles by using the method of recursion.
* Technically, quick sort follows the steps below (a C sketch of these steps is given after the list):
Step 1 − Pick any element as the pivot.
Step 2 − Partition the array on the basis of the pivot.
Step 3 − Apply quick sort on the left partition recursively.
Step 4 − Apply quick sort on the right partition recursively.
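Below is a minimal sketch of these steps in C. It is one possible implementation, not the only one: it assumes the last element of each sub-array is chosen as the pivot and partitions with a single left-to-right pass (the Lomuto scheme); the names quickSort, partition and swap are illustrative.

#include <stdio.h>

/* Exchange two array elements. */
void swap(int *a, int *b) {
    int t = *a;
    *a = *b;
    *b = t;
}

/* Step 1 and Step 2: use the last element as pivot and rearrange
 * arr[low..high] so that smaller values end up left of the pivot. */
int partition(int arr[], int low, int high) {
    int pivot = arr[high];          /* pivot = last element        */
    int i = low - 1;                /* end of the "smaller" region */
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {       /* element belongs on the left */
            i++;
            swap(&arr[i], &arr[j]);
        }
    }
    swap(&arr[i + 1], &arr[high]);  /* put the pivot in its place  */
    return i + 1;                   /* final pivot index           */
}

/* Steps 3 and 4: sort the left and right partitions recursively. */
void quickSort(int arr[], int low, int high) {
    if (low < high) {
        int p = partition(arr, low, high);
        quickSort(arr, low, p - 1);
        quickSort(arr, p + 1, high);
    }
}

int main(void) {
    int a[] = {9, 3, 7, 1, 8, 2};
    int n = sizeof(a) / sizeof(a[0]);
    quickSort(a, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);        /* prints: 1 2 3 7 8 9 */
    printf("\n");
    return 0;
}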
Quick sort function algorithm-
* The algorithm was developed by the British computer scientist Tony Hoare in 1959.
* The name "Quick Sort" comes from the fact that quick sort can sort a list of data elements significantly faster in practice (often two to three times faster) than comparable algorithms such as heap sort.
* It is one of the most efficient sorting algorithms and is based on splitting an array into smaller partitions and swapping (exchanging) elements based on comparison with a selected 'pivot' element.
* Because of this, quick sort is also called "Partition Exchange" sort. Like merge sort, quick sort falls into the divide-and-conquer category of problem-solving methods.
* Because it works around a chosen pivot, this sorting algorithm is also known as a pivot-based algorithm.
Quick Sort Pivot Algorithm-
* Based on our understanding of partitioning in quick sort, we can now write an algorithm for the partition step, as follows (a C sketch follows the steps):
Step 1 − Choose the value at the highest index as the pivot.
Step 2 − Take two variables, left and right, pointing to the two ends of the list, excluding the pivot.
Step 3 − left points to the low index.
Step 4 − right points to the high index.
Step 5 − While the value at left is less than the pivot, move left to the right.
Step 6 − While the value at right is greater than the pivot, move right to the left.
Step 7 − If neither step 5 nor step 6 applies, swap the values at left and right.
Step 8 − When left ≥ right, the point where they meet is the new pivot position.
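As a rough sketch only, here is how this two-pointer partition can look in C. It deviates from Step 1 in one detail: it takes the first element as the pivot, the usual choice for this Hoare-style scheme, because taking the last element can make the naive two-pointer loop fail to terminate. The comments give the approximate cost of each statement, in the spirit of part (b); the name hoarePartition is illustrative.

/* Two-pointer (Hoare-style) partition of arr[low..high].
 * Returns an index r such that arr[low..r] <= arr[r+1..high]. */
int hoarePartition(int arr[], int low, int high) {
    int pivot = arr[low];    /* O(1): one array read, one assignment */
    int left  = low - 1;     /* O(1): one assignment                 */
    int right = high + 1;    /* O(1): one assignment                 */

    while (1) {
        /* Step 5: advance left while its value is less than the pivot.
         * Over the whole call the two inner loops visit each of the n
         * elements a constant number of times, so together they account
         * for the O(n) cost of one partition call.                      */
        do { left++;  } while (arr[left]  < pivot);

        /* Step 6: move right leftwards while its value exceeds the pivot. */
        do { right--; } while (arr[right] > pivot);

        /* Step 8: if the pointers have met or crossed, we are done. O(1). */
        if (left >= right)
            return right;

        /* Step 7: both elements are on the wrong side, so swap them. O(1). */
        int t = arr[left];
        arr[left]  = arr[right];
        arr[right] = t;
    }
}

With this scheme the two recursive calls would be made on arr[low..r] and arr[r+1..high], where r is the returned index.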
(b)-
* The computation time of an algorithm is analysed through its time complexity; by considering the best case, worst case and average case of the algorithm we can derive its time complexity.
* Time complexity :
Big O notation
f(n) = O(g(n)) means
* There are positive constants c and k such that:
0 <= f(n) <= c*g(n) for all n >= k.
* For large problem sizes the dominant term (the one with the highest exponent) almost completely determines the value of the complexity expression, so the abstract complexity is expressed in terms of the dominant term for large N. Multiplicative constants are also ignored.
N^2 + 3N + 4 is O(N^2),
since for N > 4, N^2 + 3N + 4 < 2N^2 (c=2 & k=4)
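Applying this notation to the quick sort of part (a), a brief sketch of where its bounds come from, assuming one partition pass over a sub-array of size n costs about c*n steps for some constant c:
Best/average case (the pivot splits the array roughly in half):
T(n) = 2*T(n/2) + c*n, which solves to T(n) = O(n log n).
Worst case (the pivot is always the smallest or largest element):
T(n) = T(n-1) + c*n = c*(n + (n-1) + ... + 1), which is O(n^2).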
O(1) constant time
* This means that the algorithm requires the same fixed number of steps regardless of the size of the task.
Example:
a) A statement involving basic operations.
Here are some examples of basic operations:
one arithmetic operation (e.g., +, *)
one assignment
one test (e.g., x == 0)
one read (accessing an element from an array)
b) Sequence of statements involving basic operations.
statement 1;
statement 2;
.....
statement k;
* Time for each statement is constant and the total time is also constant: O(1).
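For illustration, a small C sequence of such basic operations (the function and variable names are arbitrary):

/* A fixed number of basic operations, independent of the size of the task. */
int constantTimeExample(const int a[]) {
    int x = a[0];       /* one array read and one assignment: O(1)         */
    int y = x * 2 + 5;  /* two arithmetic operations, one assignment: O(1) */
    if (y == 0)         /* one test: O(1)                                  */
        y = 1;          /* one assignment: O(1)                            */
    return y;           /* a fixed sequence of O(1) steps is still O(1)    */
}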
O(n) linear time
* This means that the algorithm requires a number of steps proportional to the size of the task.
Examples:
a. Traversing an array.
b. Sequential/Linear search in an array.
c. Best case time complexity of Bubble sort (i.e., when the elements of the array are already in sorted order).
Basic structure is :
for (i = 0; i < N; i++) {
sequence of statements of O(1)
}
* The loop executes N times, so the total time is N*O(1), which is O(N).
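A concrete C version of this structure, summing an array of N elements (sumArray is just an illustrative name):

/* Each of the N iterations does a constant amount of work,
 * so the total time is N*O(1) = O(N).                       */
long sumArray(const int a[], int N) {
    long total = 0;                  /* O(1): one assignment                    */
    for (int i = 0; i < N; i++) {    /* loop control runs about N times         */
        total += a[i];               /* O(1): one read, one add, one assignment */
    }
    return total;                    /* O(1)                                    */
}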
O(n^2) quadratic time
* The number of operations is proportional to the size of the task squared.
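For illustration, a C example with two nested loops over the same array, counting out-of-order pairs (the function name is arbitrary); the worst case of quick sort, where every partition is maximally unbalanced, also falls into this O(n^2) class:

/* The inner loop runs up to N-1 times for each of the N iterations of
 * the outer loop, so the total work grows like N*N, i.e. O(N^2).      */
int countOutOfOrderPairs(const int a[], int N) {
    int count = 0;
    for (int i = 0; i < N; i++) {           /* outer loop: N iterations         */
        for (int j = i + 1; j < N; j++) {   /* inner loop: up to N-1 iterations */
            if (a[i] > a[j])                /* O(1) comparison for each pair    */
                count++;
        }
    }
    return count;
}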
______________THE END_________________