In: Economics
Now suppose the classical assumptions are met, except that E[ε|X] = 15. Prove fully that in this case b, the OLS estimator, is unbiased for β1 and β2 but biased for β0. Hint: it is always possible to rewrite the model so that it has an error with a mean of zero.
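The hint can be made concrete with a short re-parameterization (a sketch in the question's own notation; the recentered error ε* is the only symbol added here):

```latex
% Original model, with a non-zero conditional error mean:
Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \varepsilon_i,
\qquad \mathrm{E}[\varepsilon_i \mid X] = 15 .

% Define the recentered error \varepsilon_i^* = \varepsilon_i - 15, so that
% \mathrm{E}[\varepsilon_i^* \mid X] = 0, and absorb the constant into the intercept:
Y_i = (\beta_0 + 15) + \beta_1 X_{1i} + \beta_2 X_{2i} + \varepsilon_i^* .
```

The rewritten model satisfies all the classical assumptions, including the zero conditional mean of the error, so OLS is unbiased for its coefficients: the slope estimators have expectations β1 and β2, while the intercept estimator has expectation β0 + 15, i.e., it is biased for β0 by exactly 15.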
We derived in Note 2 the OLS (Ordinary Least Squares) estimators β̂_j (j = 0, 1) of the regression coefficients β_j (j = 0, 1) in the simple linear regression model given by the population regression equation, or PRE,

    Y_i = β0 + β1 X_i + u_i   (i = 1, …, N)   (1)

where u_i is an iid random error term. The OLS sample regression equation (SRE) corresponding to PRE (1) is

    Y_i = β̂0 + β̂1 X_i + û_i   (i = 1, …, N)   (2)

where β̂0 and β̂1 are the OLS coefficient estimators given by the formulas

    β̂1 = ∑_i x_i y_i / ∑_i x_i²   (3)

    β̂0 = Ȳ − β̂1 X̄   (4)

where x_i ≡ X_i − X̄, y_i ≡ Y_i − Ȳ, X̄ = ∑_i X_i / N, and Ȳ = ∑_i Y_i / N.
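Formulas (3) and (4) are easy to check numerically. The sketch below (Python with NumPy; the simulated data and true coefficient values are made-up assumptions, not from the notes) computes β̂1 and β̂0 from the deviation-from-means formulas and cross-checks them against NumPy's least-squares fit:

```python
import numpy as np

# Hypothetical sample: the true coefficients and error distribution are
# illustrative assumptions only.
rng = np.random.default_rng(0)
N = 50
X = rng.uniform(0.0, 10.0, size=N)
u = rng.normal(0.0, 1.0, size=N)
Y = 2.0 + 0.5 * X + u          # true beta0 = 2.0, beta1 = 0.5

# Deviations from sample means: x_i = X_i - Xbar, y_i = Y_i - Ybar.
x = X - X.mean()
y = Y - Y.mean()

# Formula (3): beta1_hat = sum_i x_i y_i / sum_i x_i^2.
beta1_hat = np.sum(x * y) / np.sum(x ** 2)

# Formula (4): beta0_hat = Ybar - beta1_hat * Xbar.
beta0_hat = Y.mean() - beta1_hat * X.mean()

# Cross-check against NumPy's degree-1 least-squares fit.
slope_np, intercept_np = np.polyfit(X, Y, 1)
print(beta1_hat, beta0_hat)    # should match slope_np, intercept_np
```

The two routes agree to floating-point precision, since `np.polyfit` with degree 1 solves the same least-squares problem.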
Why Use the OLS Coefficient Estimators?

The reason we use these OLS coefficient estimators is that, under assumptions A1–A8 of the classical linear regression model, they have several desirable statistical properties. This note examines these desirable statistical properties of the OLS coefficient estimators primarily in terms of the OLS slope coefficient estimator β̂1; the same properties apply to the intercept coefficient estimator β̂0.
Unbiasedness of β̂1 and β̂0

The OLS coefficient estimator β̂1 is unbiased, meaning that E(β̂1) = β1.
The OLS coefficient estimator β̂0 is unbiased, meaning that E(β̂0) = β0.

• Definition of unbiasedness: The coefficient estimator β̂1 is unbiased if and only if E(β̂1) = β1; i.e., its mean or expectation is equal to the true coefficient β1.

Proof of unbiasedness of β̂1:

1. Start with the formula β̂1 = ∑_i k_i Y_i, where k_i ≡ x_i / ∑_i x_i² (the observation weights defined under Property 1 below). Since assumption A1 states that the PRE is Y_i = β0 + β1 X_i + u_i,

    β̂1 = ∑_i k_i Y_i
        = ∑_i k_i (β0 + β1 X_i + u_i)      since Y_i = β0 + β1 X_i + u_i by A1
        = β0 ∑_i k_i + β1 ∑_i k_i X_i + ∑_i k_i u_i
        = β1 + ∑_i k_i u_i,                since ∑_i k_i = 0 and ∑_i k_i X_i = 1.

2. Now take expectations of the above expression for β̂1, conditional on the sample values {X_i : i = 1, …, N} of the regressor X. Conditioning on the sample values of the regressor X means that the k_i are treated as nonrandom, since the k_i are functions only of the X_i.

    E(β̂1) = E[β1 + ∑_i k_i u_i]
           = β1 + ∑_i k_i E(u_i|X_i)      since β1 is a constant and the k_i are nonrandom
           = β1 + ∑_i k_i · 0             since E(u_i|X_i) = 0 by assumption A2
           = β1.

Result: The OLS slope coefficient estimator β̂1 is an unbiased estimator of the slope coefficient β1: that is, E(β̂1) = β1.
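The conditional-expectation argument can be illustrated by simulation: holding the X_i fixed (conditioning on the sample values of the regressor) and redrawing the errors u_i many times, the average of β̂1 across replications should be close to the true β1. A minimal sketch, in which the true coefficients, sample size, and number of replications are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
beta0, beta1 = 2.0, 0.5
N, reps = 30, 20_000

X = rng.uniform(0.0, 10.0, size=N)   # fixed regressor values across replications
x = X - X.mean()
k = x / np.sum(x ** 2)               # weights k_i = x_i / sum_j x_j^2

estimates = np.empty(reps)
for r in range(reps):
    u = rng.normal(0.0, 1.0, size=N)  # errors with E(u_i | X_i) = 0, as in A2
    Y = beta0 + beta1 * X + u
    estimates[r] = np.sum(k * Y)      # beta1_hat = sum_i k_i Y_i

print(estimates.mean())   # close to the true beta1 = 0.5
```

Each replication gives a different β̂1 (it depends on the drawn errors), but their average converges to β1, which is what E(β̂1) = β1 asserts.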
Statistical Properties of the OLS Slope Coefficient Estimator

• PROPERTY 1: Linearity of β̂1

The OLS coefficient estimator β̂1 can be written as a linear function of the sample values of Y, the Y_i (i = 1, …, N).

Proof: Start with formula (3) for β̂1:

    β̂1 = ∑_i x_i y_i / ∑_i x_i²
        = ∑_i x_i (Y_i − Ȳ) / ∑_i x_i²
        = (∑_i x_i Y_i − Ȳ ∑_i x_i) / ∑_i x_i²
        = ∑_i x_i Y_i / ∑_i x_i²,     because ∑_i x_i = 0.

• Defining the observation weights k_i ≡ x_i / ∑_i x_i² for i = 1, …, N, we can re-write the last expression above for β̂1 as:

    β̂1 = ∑_i k_i Y_i, where k_i ≡ x_i / ∑_i x_i²   (i = 1, …, N)   … (P1)

• Note that formula (3) and the definition of the weights k_i imply that β̂1 is also a linear function of the y_i's, such that β̂1 = ∑_i k_i y_i.

Result: The OLS slope coefficient estimator β̂1 is a linear function of the sample values Y_i or y_i (i = 1, …, N), where the coefficient of Y_i or y_i is k_i.
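The weights representation (P1), together with the identities ∑_i k_i = 0 and ∑_i k_i X_i = 1 used in the unbiasedness proof, can be verified numerically. A short check (the data here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 40
X = rng.normal(5.0, 2.0, size=N)
Y = 1.0 - 0.7 * X + rng.normal(0.0, 1.0, size=N)

x = X - X.mean()
y = Y - Y.mean()
k = x / np.sum(x ** 2)               # k_i = x_i / sum_j x_j^2

beta1_ratio    = np.sum(x * y) / np.sum(x ** 2)  # formula (3)
beta1_linear   = np.sum(k * Y)                   # linear form (P1)
beta1_centered = np.sum(k * y)                   # linear in the y_i as well

# All three expressions give the same slope estimate, and the weights
# satisfy the two identities the unbiasedness proof relies on.
print(beta1_ratio, beta1_linear, beta1_centered)
print(np.sum(k), np.sum(k * X))      # 0 and 1, up to floating-point error
```

The identities hold by construction: ∑_i k_i = ∑_i x_i / ∑_i x_i² = 0 because the deviations x_i sum to zero, and ∑_i k_i X_i = ∑_i x_i (x_i + X̄) / ∑_i x_i² = 1.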