In: Economics
Consider a situation where there is a cost that is either incurred or not. It is incurred only if the value of some random input is less than a specified cutoff value. Why might a simulation of this situation give a very different average value of the cost incurred than a deterministic model that treats the random input as fixed at its mean? What does this have to do with the “flaw of averages”?
Here the situation says that the cost may or may not be incurred. In economics, when an event is not certain, we work out its payoff by taking the probability-weighted average of its possible outcomes, that is, its expected value.
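As a small worked illustration of that probability-weighted average, the sketch below computes an expected cost; the probability of 0.3 and the cost of 100 are assumed numbers, not values given in the question.

```python
# Minimal sketch: expected cost as a probability-weighted average of outcomes.
# The probability and cost figures below are assumed purely for illustration.
p_cost = 0.3              # assumed probability that the cost is incurred
cost_if_incurred = 100.0  # assumed size of the cost when it is incurred
cost_if_not = 0.0         # no cost otherwise

expected_cost = p_cost * cost_if_incurred + (1 - p_cost) * cost_if_not
print(expected_cost)      # 30.0
```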
Suppose, for example, that a fair, unbiased coin has a 50-50 chance of heads and tails. Yet on a given day, tossing the coin 10 times might produce heads 9 out of 10 times, so the actual outcome looks nothing like the average. Treating the single average value as if it described what actually happens is the “flaw of averages”, and it can appear whenever we summarize a sample by its average alone.
This discrepancy shrinks if the experiment is repeated a very large (in the limit, infinite) number of times: the observed frequencies of heads and tails then settle toward their true probabilities, which also does away with the flaw of averages.
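A minimal sketch of the coin example follows, assuming a fair coin simulated with Python's random module and an arbitrary seed chosen only for reproducibility; it contrasts a short run of 10 tosses with a much longer run.

```python
# Minimal sketch of the coin example: a short run of tosses can look very
# different from the 50-50 average, while a very long run settles toward it.
import random

random.seed(7)  # arbitrary seed, only so the sketch is reproducible

heads_short = sum(random.random() < 0.5 for _ in range(10))         # heads in 10 tosses
heads_long = sum(random.random() < 0.5 for _ in range(1_000_000))   # heads in 1,000,000 tosses

print(f"Heads in 10 tosses: {heads_short} out of 10")
print(f"Share of heads in 1,000,000 tosses: {heads_long / 1_000_000:.4f}")
```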
In the case above, the simulation gives a very different average value because the deterministic model fixes the random input at its mean: if the mean lies above the cutoff, the model reports that the cost is never incurred, while if it lies below, the model reports the full cost every time. The simulation instead draws many values of the input, and the cost is incurred in exactly those runs where the draw falls below the cutoff, so the average simulated cost is roughly the probability of falling below the cutoff times the cost. Treating the mean of the input as if it told the whole story is the flaw of averages, and the resulting gap between the two answers is the error; in econometrics this is denoted as an error term and is taken into account to obtain more appropriate results.
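To connect this back to the cutoff situation in the question, the sketch below compares the two approaches; the normal distribution for the input, its mean of 100 and standard deviation of 20, the cutoff of 90, and the cost of 1,000 are all assumed for illustration rather than taken from the question.

```python
# Minimal sketch, under assumed numbers: the cost is incurred only when a
# random input falls below a cutoff. The deterministic model plugs in the
# mean of the input; the simulation averages over many random draws.
import random

random.seed(1)  # arbitrary seed for reproducibility

MEAN, STD = 100.0, 20.0   # assumed distribution of the random input
CUTOFF = 90.0             # assumed cutoff value
COST = 1_000.0            # assumed cost incurred whenever input < cutoff

# Deterministic model: the mean (100) is above the cutoff, so it reports no cost at all.
deterministic_cost = COST if MEAN < CUTOFF else 0.0

# Simulation: the cost is incurred in the fraction of runs where the draw falls
# below the cutoff, so the average cost is roughly P(input < cutoff) * COST.
n_runs = 100_000
draws = (random.gauss(MEAN, STD) for _ in range(n_runs))
simulated_avg_cost = sum(COST for x in draws if x < CUTOFF) / n_runs

print(f"Deterministic cost (input fixed at mean): {deterministic_cost:.2f}")
print(f"Average simulated cost:                   {simulated_avg_cost:.2f}")
```

With these assumed numbers the deterministic answer is 0, while the simulated average is on the order of 300 (about 31% of draws fall below the cutoff), which is exactly the kind of large discrepancy the question asks about.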