Question

In: Economics


1. Consider the linear regression model for a random sample of size n: yi = β0 + vi ; i = 1, . . . , n, where v is a random error term. Notice that this model is equivalent to the one seen in the classroom, but without the slope β1.

(a) State the minimization problem that leads to the estimation of β0.

(b) Construct the first-order condition for a minimum of the above objective function and use it to obtain the expression of the ordinary least squares estimator of β0. Call this estimator β̃0.

(c) Suppose now that the true population model is given by: Y = β0 + β1X + u, where u is an error term which satisfies E(u) = 0, E(u|X) = 0 and Var(u|X) = σ² (homoskedasticity). This implies that, in our sample, yi = β0 + β1xi + ui; i = 1, . . . , n. Show that, in general, β̃0 is a biased estimator of β0. When is the bias equal to zero?

Solutions

Expert Solution

(a) The minimization problem is the minimization of the residual sum of squares (RSS). Since the fitted value is ŷi = β0, the residual for observation i is yi − β0, so the problem is

min over β0 of  RSS(β0) = Σᵢ₌₁ⁿ (yi − β0)²,

i.e. the minimization of the RSS with respect to the single parameter β0.

(b) The first-order condition (FOC) is

dRSS/dβ0 = −2 Σᵢ₌₁ⁿ (yi − β̃0) = 0
⟺ Σᵢ₌₁ⁿ yi − n β̃0 = 0
⟺ β̃0 = (1/n) Σᵢ₌₁ⁿ yi = ȳ.

That is, the OLS estimator of β0 in this model is the sample mean of y. (The second derivative of the RSS is 2n > 0, confirming a minimum.)
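As a quick numerical check of the FOC result (a sketch using NumPy; the seed, the true β0 = 2, and the sample size are arbitrary choices), the sample mean should yield a lower RSS than any nearby value of β0:

```python
import numpy as np

rng = np.random.default_rng(0)
y = 2.0 + rng.normal(size=100)  # yi = beta0 + vi with beta0 = 2

# OLS estimate from the FOC: beta0_tilde = sample mean of y
beta0_tilde = y.mean()

# RSS as a function of the candidate intercept
def rss(b):
    return np.sum((y - b) ** 2)

# The mean should not be beaten by small perturbations in either direction
assert rss(beta0_tilde) <= rss(beta0_tilde + 0.1)
assert rss(beta0_tilde) <= rss(beta0_tilde - 0.1)
```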

(c) If the true model is yi = β0 + β1xi + ui, then the estimator from part (b) becomes

β̃0 = ȳ = (1/n) Σᵢ₌₁ⁿ (β0 + β1xi + ui) = β0 + β1x̄ + ū.

Taking the expectation conditional on the regressors and using E(ū | x1, . . . , xn) = 0,

E(β̃0) = β0 + β1x̄.

Since E(β̃0) ≠ β0 in general, β̃0 is a biased estimator of the true β0.

The bias is

Bias(β̃0) = E(β̃0) − β0 = β1x̄.

The bias is zero when β1x̄ = 0, i.e. when β1 = 0 and/or x̄ = 0.

Hence, if the sample mean of the independent variable is zero and/or the true slope coefficient is zero, the bias is zero.
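The bias formula Bias(β̃0) = β1x̄ can be illustrated by Monte Carlo simulation (a sketch; the parameter values β0 = 1, β1 = 0.5, the regressor mean of 3, and the replication count are illustrative assumptions, not part of the question):

```python
import numpy as np

rng = np.random.default_rng(1)
beta0, beta1, n, reps = 1.0, 0.5, 200, 5000

# Fixed regressors with a nonzero sample mean, so the bias should be nonzero
x = rng.normal(loc=3.0, size=n)

estimates = np.empty(reps)
for r in range(reps):
    u = rng.normal(size=n)            # E(u|x) = 0, homoskedastic errors
    y = beta0 + beta1 * x + u         # true model: yi = beta0 + beta1*xi + ui
    estimates[r] = y.mean()           # beta0_tilde = ybar, ignoring the slope

bias_mc = estimates.mean() - beta0    # simulated bias of beta0_tilde
bias_theory = beta1 * x.mean()        # derived bias: beta1 * xbar
```

With these values the simulated bias should sit very close to β1x̄ ≈ 1.5; setting beta1 = 0 or recentering x to have zero mean drives both quantities to zero, matching the conclusion above.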

