Explain in your own words the concept of relative efficiency and draw a graph that shows it (one single graph). Be precise and explain your graph. Make sure to define all quantities and symbols you use in your explanation and on the graph.
In the comparison of various statistical procedures, efficiency is a measure of quality of an estimator, of an experimental design,[1] or of a hypothesis testing procedure.[2] Essentially, a more efficient estimator, experiment, or test needs fewer observations than a less efficient one to achieve a given performance. This article primarily deals with efficiency of estimators.
The relative efficiency of two procedures is the ratio of their efficiencies, although often this concept is used where the comparison is made between a given procedure and a notional "best possible" procedure. The efficiencies and the relative efficiency of two procedures theoretically depend on the sample size available for the given procedure, but it is often possible to use the asymptotic relative efficiency (defined as the limit of the relative efficiencies as the sample size grows) as the principal comparison measure.
An efficient estimator is characterized by a small variance or mean squared error, indicating a small deviation between the estimated value and the "true" value.[1]
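Since the question asks for a graph, the idea can be visualized by plotting the sampling distributions of two unbiased estimators of the same parameter. Below is a minimal matplotlib sketch (an illustration with hypothetical variances, not part of the quoted article): $T_1$ and $T_2$ are both unbiased for the true value $\theta$, but $T_2$ has the smaller variance, so its sampling distribution is more tightly concentrated around $\theta$ and it is the more efficient estimator. Under the definition given below (efficiency as inverse variance), the relative efficiency of $T_1$ with respect to $T_2$ is $\mathrm{Var}(T_2)/\mathrm{Var}(T_1)$, here 0.25.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

# Illustrative sketch (hypothetical variances, not from the quoted article):
# sampling distributions of two unbiased estimators T1 and T2 of the same
# parameter theta.  T2 has the smaller variance, so its distribution is more
# concentrated around theta and it is the more efficient estimator.
theta = 0.0
var_T1, var_T2 = 4.0, 1.0          # assumed variances for illustration

x = np.linspace(-6, 6, 500)
plt.plot(x, norm.pdf(x, theta, np.sqrt(var_T1)), label=r"$T_1$ (less efficient)")
plt.plot(x, norm.pdf(x, theta, np.sqrt(var_T2)), label=r"$T_2$ (more efficient)")
plt.axvline(theta, linestyle="--", color="gray", label=r"true value $\theta$")
plt.xlabel("value of the estimate")
plt.ylabel("sampling density")
plt.title(r"Relative efficiency of $T_1$ w.r.t. $T_2$: "
          r"$\mathrm{Var}(T_2)/\mathrm{Var}(T_1) = 0.25$")
plt.legend()
plt.show()
```

On the graph, the horizontal axis is the value an estimator might take, the vertical axis is the density of its sampling distribution, the dashed line marks the true parameter $\theta$, and the narrower curve belongs to the more efficient estimator.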
Estimators
The efficiency of an unbiased estimator, T, of a parameter θ is defined as [3]
$e(T) = \frac{1/\mathcal{I}(\theta)}{\mathrm{Var}(T)}$
where $\mathcal{I}(\theta)$ is the Fisher information of the sample. Thus $e(T)$ is the minimum possible variance for an unbiased estimator divided by its actual variance. The Cramér–Rao bound can be used to prove that $e(T) \leq 1$.
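As a concrete illustration (a Monte Carlo sketch assuming a normal model, not part of the quoted article): for $n$ i.i.d. draws from $N(\theta, \sigma^2)$ the Fisher information is $\mathcal{I}(\theta) = n/\sigma^2$, so the bound is $1/\mathcal{I}(\theta) = \sigma^2/n$. The sample mean attains this bound ($e \approx 1$), while the sample median does not:

```python
import numpy as np

# Monte Carlo sketch (assumed normal model, not from the quoted article):
# efficiency of the sample mean and sample median as estimators of theta
# for n i.i.d. draws from N(theta, sigma^2).  The Fisher information of
# the sample is I(theta) = n / sigma^2, so 1 / I(theta) = sigma^2 / n.
rng = np.random.default_rng(0)
theta, sigma, n, reps = 5.0, 2.0, 100, 100_000

samples = rng.normal(theta, sigma, size=(reps, n))
crlb = sigma**2 / n                          # 1 / I(theta)

for name, est in [("mean", samples.mean(axis=1)),
                  ("median", np.median(samples, axis=1))]:
    e = crlb / est.var()                     # e(T) = (1/I(theta)) / Var(T)
    print(f"{name:6s}  Var = {est.var():.5f}   e(T) = {e:.3f}")

# Expected output: e(mean) close to 1 (the mean attains the bound) and
# e(median) close to 2/pi (about 0.64) for normal data.
```

The ratio of these two efficiencies, about 0.64 here, is the relative efficiency of the median with respect to the mean; as $n$ grows it converges to the asymptotic relative efficiency $2/\pi$ mentioned above.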
Efficient estimators
In general, the spread of an estimator around the parameter θ is a measure of estimator efficiency and performance. This performance can be quantified by the mean squared error:
Let $T$ be an estimator for the parameter θ. The mean squared error of $T$ is $\mathrm{MSE}(T) = E[(T-\theta)^2]$.
Here,

$\mathrm{MSE}(T) = E[(T-\theta)^2] = E[(T - E[T] + E[T] - \theta)^2] = E[(T-E[T])^2] + 2(E[T]-\theta)E[T-E[T]] + (E[T]-\theta)^2 = \mathrm{Var}(T) + (E[T]-\theta)^2,$

where the cross term vanishes because $E[T-E[T]] = 0$, leaving the variance plus the squared bias.
Therefore, an estimator $T_1$ performs better than an estimator $T_2$ if $\mathrm{MSE}(T_1) < \mathrm{MSE}(T_2)$.[4]
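The decomposition $\mathrm{MSE}(T) = \mathrm{Var}(T) + (E[T]-\theta)^2$ can be checked numerically. Here is a small sketch (using a deliberately biased estimator chosen for this example, not taken from the text):

```python
import numpy as np

# Numeric sketch (deliberately biased estimator, chosen for illustration):
# verify the decomposition MSE(T) = Var(T) + (E[T] - theta)^2 by Monte
# Carlo for the shrunken mean T = 0.9 * sample_mean, so E[T] = 0.9 * theta.
rng = np.random.default_rng(1)
theta, sigma, n, reps = 5.0, 2.0, 50, 200_000

samples = rng.normal(theta, sigma, size=(reps, n))
T = 0.9 * samples.mean(axis=1)

mse  = np.mean((T - theta) ** 2)     # direct estimate of E[(T - theta)^2]
var  = T.var()
bias = T.mean() - theta

print(f"MSE          = {mse:.5f}")
print(f"Var + bias^2 = {var + bias**2:.5f}")   # agrees up to Monte Carlo noise
```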
For a more specific case, if $T_1$ and $T_2$ are two unbiased estimators for the same parameter θ, then their variances can be compared to determine performance. $T_2$ is more efficient than $T_1$ if the variance of $T_2$ is smaller, i.e. $\mathrm{Var}(T_1) > \mathrm{Var}(T_2)$ for all values of θ.
This relationship follows by simplifying the general mean-squared-error expression above: since the expected value of an unbiased estimator equals the parameter value, $E[T] = \theta$, the $(E[T]-\theta)^2$ term is 0 and drops out, so $\mathrm{MSE}(T) = \mathrm{Var}(T)$.[4]
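For a concrete sketch of this unbiased case (a hypothetical comparison, not from the quoted article), compare two unbiased estimators of a normal mean: $T_1$, the first observation alone, and $T_2$, the sample mean. Both are unbiased, so their MSEs equal their variances, and $T_2$ is the more efficient:

```python
import numpy as np

# Sketch (hypothetical comparison, not from the quoted article): two
# unbiased estimators of the mean theta of N(theta, sigma^2).  T1 uses
# only the first observation; T2 is the sample mean.  Both are unbiased,
# so MSE = Var, and T2 is more efficient since sigma^2/n < sigma^2.
rng = np.random.default_rng(2)
theta, sigma, n, reps = 5.0, 2.0, 25, 200_000

samples = rng.normal(theta, sigma, size=(reps, n))
T1 = samples[:, 0]                   # Var(T1) = sigma^2
T2 = samples.mean(axis=1)            # Var(T2) = sigma^2 / n

print(f"Var(T1) = {T1.var():.4f}   (theory: {sigma**2:.4f})")
print(f"Var(T2) = {T2.var():.4f}   (theory: {sigma**2 / n:.4f})")
```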
If an unbiased estimator of a parameter θ attains $e(T) = 1$ for all values of the parameter, then the estimator is called efficient.[3]
Equivalently, the estimator achieves equality in the Cramér–Rao inequality, $\mathrm{Var}(T) \geq 1/\mathcal{I}(\theta)$, for all θ. The Cramér–Rao lower bound is a lower bound on the variance of an unbiased estimator, representing the "best" (smallest-variance) performance an unbiased estimator can achieve.