Question: Suppose that we fit Model (1) to the n observations (y1, x11, x21), …, (yn, x1n, x2n):
yi = β0 + β1·x1i + β2·x2i + εi, i = 1, …, n, (1)
where the ε's are independently and identically distributed as a normal random variable with mean zero and variance σ², and all the x's are fixed.
a) Suppose that Model (1) is the true model. Show that at any observation yi, the point estimator of the mean response and its residual are two statistically independent normal random variables.
b) Suppose the true model is Model (1), but we fit the data to the following Model (2) (that is, ignore the variable x2):
yi = β0 + β1·x1i + εi, i = 1, …, n. (2)
Assume that the average of the x1i is 0, the average of the x2i is 0, and Σ x1i·x2i = 0. Derive the least-squares estimator of β1 obtained from fitting Model (2). Is this least-squares estimator biased for β1 under Model (1)?
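For part (a), a standard argument uses the hat matrix; a sketch in matrix notation (H, e, and the projection facts below are the usual OLS definitions, not given in the original answer):

```latex
\[
\hat{y} = Hy, \qquad e = (I - H)y, \qquad H = X(X^\top X)^{-1}X^\top .
\]
\[
\operatorname{Cov}(\hat{y}, e) = H\,\operatorname{Cov}(y)\,(I - H)^\top
  = \sigma^2 H(I - H) = \sigma^2 (H - H^2) = 0,
\]
since $H$ is symmetric and idempotent ($H^2 = H$). Both $\hat{y}$ and $e$ are linear
transformations of the normal vector $y$, so they are jointly normal; zero covariance
between jointly normal vectors implies independence, and in particular the fitted
value $\hat{y}_i$ and the residual $e_i$ are independent normal random variables.
```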
OLS estimator for part (b). Write x for x1. Because the average of x is 0, the intercept and slope estimates in Model (2) decouple (the intercept estimate is just ȳ), so the slope β1-hat can be derived from the no-intercept form: let y-hat = β1-hat·x be the predicted value, so u-hat = y − y-hat = y − β1-hat·x is the residual.
To get the sum of squared residuals:
u-hat = y − β1-hat·x
(u-hat)² = (y − β1-hat·x)² = y² − 2·β1-hat·x·y + β1-hat²·x²
Σ (u-hat)² = Σ [y² − 2·β1-hat·x·y + β1-hat²·x²] = Σ y² − Σ(2·β1-hat·x·y) + Σ(β1-hat²·x²)
Σ (u-hat)² = Σ y² − 2·β1-hat·Σ(x·y) + β1-hat²·Σ x²
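The expansion of the sum of squared residuals can be checked numerically; a small sketch, where x, y, and the candidate slope b are made-up illustrative values:

```python
import numpy as np

# Hypothetical data and an arbitrary candidate slope b.
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 3.0, 7.0])
b = 0.5

# Direct sum of squared residuals: sum((y - b*x)^2).
ssr_direct = np.sum((y - b * x) ** 2)

# Expanded form: sum(y^2) - 2*b*sum(x*y) + b^2*sum(x^2).
ssr_expanded = np.sum(y ** 2) - 2 * b * np.sum(x * y) + b ** 2 * np.sum(x ** 2)

print(ssr_direct, ssr_expanded)  # the two forms agree
```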
Take the derivative with respect to β1-hat and set it equal to zero to minimize the sum of squared residuals:
∂/∂β1-hat [Σ y² − 2·β1-hat·Σ(x·y) + β1-hat²·Σ x²] = −2·Σ(x·y) + 2·β1-hat·Σ x² = 0
2·Σ(x·y) = 2·β1-hat·Σ x²
β1-hat = Σ(x·y) / Σ x²
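The closed-form slope can be cross-checked against a generic least-squares solver; a sketch using hypothetical data (the values of x and y below are made up, with x centered as in the problem):

```python
import numpy as np

# Hypothetical centered regressor and response.
rng = np.random.default_rng(0)
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = 3.0 * x + rng.normal(0, 0.5, size=x.size)

# Closed-form least-squares slope: beta1_hat = sum(x*y) / sum(x^2).
beta1_hat = np.sum(x * y) / np.sum(x ** 2)

# Cross-check against numpy's least-squares solver on the same no-intercept model.
beta1_lstsq = np.linalg.lstsq(x[:, None], y, rcond=None)[0][0]

print(beta1_hat, beta1_lstsq)  # the two agree
```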
Expected value under the true Model (1): substitute y = β0 + β1·x1 + β2·x2 + ε into the numerator (here ε is the Model (1) error). Using Σ x1 = 0 and Σ x1·x2 = 0:
Σ(x1·y) = β0·Σ x1 + β1·Σ x1² + β2·Σ(x1·x2) + Σ(x1·ε) = β1·Σ x1² + Σ(x1·ε),
so β1-hat = β1 + Σ(x1·ε)/Σ x1². Since the x's are fixed and E[ε] = 0,
E[β1-hat] = β1 + Σ x1·E[ε] / Σ x1² = β1.
Under the stated assumption Σ x1i·x2i = 0, the estimator is unbiased for β1. (Without that orthogonality, the omitted variable would contribute a bias of β2·Σ(x1·x2)/Σ x1².)
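A Monte Carlo sketch of the unbiasedness claim, with assumed values for β0, β1, β2, and σ (all hypothetical) and fixed centered regressors satisfying Σ x1·x2 = 0:

```python
import numpy as np

# Fixed regressors: both centered, and orthogonal to each other.
x1 = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
x2 = np.array([1.0, -2.0, 2.0, -2.0, 1.0])  # sum(x2) = 0 and sum(x1*x2) = 0

# Hypothetical true parameters for Model (1).
beta0, beta1, beta2, sigma = 1.0, 2.0, -1.5, 1.0

rng = np.random.default_rng(1)
estimates = []
for _ in range(20000):
    eps = rng.normal(0, sigma, size=x1.size)
    y = beta0 + beta1 * x1 + beta2 * x2 + eps          # data from true Model (1)
    estimates.append(np.sum(x1 * y) / np.sum(x1 ** 2))  # slope from Model (2)

print(np.mean(estimates))  # close to beta1 = 2.0
```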
Variance: using β1-hat = β1 + Σ(x1·ε)/Σ x1² from above,
Var[β1-hat] = E[(β1-hat − β1)²] = E[(Σ(x1·ε)/Σ x1²)²] = E[(Σ x1·ε)²] / (Σ x1²)².
Since the ε's are independent with mean 0 and variance σ² (and the x's are fixed), the cross terms vanish and E[(Σ x1·ε)²] = σ²·Σ x1², so
Var[β1-hat] = σ²·Σ x1² / (Σ x1²)² = σ² / Σ x1².
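The variance formula σ²/Σ x² can likewise be checked by simulation; a sketch with hypothetical β1 and σ and a fixed centered regressor:

```python
import numpy as np

# Fixed centered regressor and hypothetical parameters.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
beta1, sigma = 2.0, 1.0

rng = np.random.default_rng(2)
estimates = []
for _ in range(50000):
    y = beta1 * x + rng.normal(0, sigma, size=x.size)
    estimates.append(np.sum(x * y) / np.sum(x ** 2))

# Theoretical variance: sigma^2 / sum(x^2) = 1 / 10 = 0.1 here.
theoretical = sigma ** 2 / np.sum(x ** 2)
print(np.var(estimates), theoretical)  # empirical variance close to 0.1
```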