In: Computer Science
3.1
(b & c)
Python Program:

total_hours = 10000          # hours of inspection/prevention effort
lines_of_code = 400000
potential_defects = 6000     # defects the effort is expected to prevent
programmer_cost = 55         # cost per programmer-hour in dollars

total_cost1 = total_hours * programmer_cost
# use true division so the per-defect cost is not truncated
cost_one_defect = round(total_cost1 / potential_defects, 2)

print("total cost of preventing 6000 defects :", total_cost1)
print("the cost of preventing one defect :", cost_one_defect)

Output:

total cost of preventing 6000 defects : 550000
the cost of preventing one defect : 91.67
3.2
(a & b)
Python Program:

defects_per_1000 = 3         # defects that escape per 1,000 lines of code
cost_per_defect = 30000      # cost to fix one defect in the field
lines_of_code = 400000

total_defects = defects_per_1000 * (lines_of_code // 1000)
total_cost2 = total_defects * cost_per_defect

print("defects that will escape in 400,000 lines of code :", total_defects)
print("the overall cost of fixing field defects :", total_cost2)

Output:

defects that will escape in 400,000 lines of code : 1200
the overall cost of fixing field defects : 36000000
(c.)
3.1 ====> 6000 defects <--> $550,000
3.2 ====> 1200 defects <--> $36,000,000
Preventing the 6,000 defects in 3.1 costs $550,000, while fixing the 1,200 defects that escape to the field in 3.2 costs $36,000,000. As the figures show, prevention is by far the more cost-effective option, so the company should invest in the programmer inspection effort of 3.1. Even if it set aside half of the $36,000,000 field-repair cost as a budget for finding defects before release, it would still spend far less than fixing them in the field.
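As a quick check, the savings from choosing prevention (3.1) over field repair (3.2) can be computed directly from the two totals above; this is a minimal sketch using only the figures already derived:

prevention_cost = 550_000      # total cost of preventing 6000 defects (3.1)
field_fix_cost = 36_000_000    # total cost of fixing 1200 escaped defects (3.2)

savings = field_fix_cost - prevention_cost
ratio = field_fix_cost / prevention_cost

print("savings by preventing instead of fixing :", savings)
print("field fixing is about", round(ratio, 1), "times more expensive")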