Question

In: Economics

Consider the simple linear regression: Yi = β0 + β1Xi + ui where Yi and Xi...

Consider the simple linear regression: Yi = β0 + β1Xi + ui, where Yi and Xi are random variables, β0 and β1 are the population intercept and slope parameters, respectively, and ui is the error term. Suppose the estimated regression equation is given by: Ŷi = β̂0 + β̂1Xi, where β̂0 and β̂1 are the OLS estimates of β0 and β1. Define the residuals ûi as: ûi = Yi − Ŷi. Show that: (a) (2 pts.) Σ(i=1 to n) ûi = 0; (b) (2 pts.) Σ(i=1 to n) ûiXi = 0.

Solutions

Expert Solution

The OLS estimates β̂0 and β̂1 minimize the sum of squared residuals:

S(β̂0, β̂1) = Σ(i=1 to n) ûi² = Σ(i=1 to n) (Yi − β̂0 − β̂1Xi)²

Differentiating S with respect to β̂0 and β̂1 and setting each derivative to zero gives the first-order (normal) equations:

(a) ∂S/∂β̂0 = −2 Σ(i=1 to n) (Yi − β̂0 − β̂1Xi) = 0, which is exactly Σ(i=1 to n) ûi = 0.

(b) ∂S/∂β̂1 = −2 Σ(i=1 to n) Xi(Yi − β̂0 − β̂1Xi) = 0, which is exactly Σ(i=1 to n) ûiXi = 0.

Hence both results follow directly from the OLS first-order conditions.
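The two first-order conditions can also be confirmed numerically. The sketch below (not part of the original answer; the simulated data and variable names are illustrative) fits OLS via the standard closed-form formulas and checks both residual properties:

```python
import numpy as np

# Illustrative check of the two residual properties on simulated data.
rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
y = 1.5 + 2.0 * x + rng.normal(size=n)   # true beta0 = 1.5, beta1 = 2.0

# Closed-form OLS estimates (the solutions of the first-order conditions)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

u_hat = y - (b0 + b1 * x)                # residuals u_hat_i = y_i - y_hat_i

# Both sums are zero up to floating-point error
print(abs(np.sum(u_hat)) < 1e-8)         # True  (part a)
print(abs(np.sum(u_hat * x)) < 1e-8)     # True  (part b)
```

The check holds for any data set, since it rests only on the normal equations, not on the particular sample.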


Related Solutions

(i) Consider a simple linear regression yi = β0 + β1xi + ui Write down the...
(i) Consider a simple linear regression yi = β0 + β1xi + ui. Write down the formula for the estimated standard error of the OLS estimator and the formula for the White heteroskedasticity-robust standard error on the estimated coefficient β̂1. (ii) What is the intuition for the White test for heteroskedasticity? (You do not need to describe how the test is implemented in practice.)
In a simple linear regression model yi = β0 + β1xi + εi with the usual...
In a simple linear regression model yi = β0 + β1xi + εi with the usual assumptions show algebraically that the least squares estimator β̂0 = b0 of the intercept has mean β0 and variance σ2[(1/n) + x̄2 / Sxx].
Consider the simple regression model: Yi = β0 + β1Xi + e (a) Explain how the...
Consider the simple regression model: Yi = β0 + β1Xi + e (a) Explain how the Ordinary Least Squares (OLS) estimator formulas for β0 and β1 are derived. (b) Under the Classical Linear Regression Model assumptions, the OLS estimators are the "Best Linear Unbiased Estimators (B.L.U.E.)." Explain. (c) Other things equal, the standard error of β̂1 will decline as the sample size increases. Explain the importance of this.
Consider the simple regression model: Yi = β0 + β1Xi + ei (a) Explain how the...
Consider the simple regression model: Yi = β0 + β1Xi + ei (a) Explain how the Ordinary Least Squares (OLS) estimator formulas for β̂0 and β̂1 are derived. (b) Under the Classical Linear Regression Model assumptions, the OLS estimators are the "Best Linear Unbiased Estimators (B.L.U.E.)." Explain. (c) Other things equal, the standard error of β̂1 will decline as the sample size increases. Explain the importance of this.
Suppose you estimate a simple linear regression Yi = β0 + β1Xi + ei. Next suppose...
Suppose you estimate a simple linear regression Yi = β0 + β1Xi + ei. Next suppose you estimate a regression only going through the origin, Yi = β ̃1Xi + ui. Which regression will give a smaller SSR (sum of squared residuals)? Why?
Question 1: Consider the simple regression model: Yi = β0 + β1Xi + ei (a) Explain...
Question 1: Consider the simple regression model: Yi = β0 + β1Xi + ei (a) Explain how the Ordinary Least Squares (OLS) estimator formulas for β̂0 and β̂1 are derived. (b) Under the Classical Linear Regression Model assumptions, the OLS estimators are the "Best Linear Unbiased Estimators (B.L.U.E.)." Explain. (c) Other things equal, the standard error of β̂1 will decline as the sample size increases. Explain the importance of this. Question 2: Consider the following...
Consider a regression model Yi=β0+β1Xi+ui and suppose from a sample of 10 observations you are provided...
Consider a regression model Yi = β0 + β1Xi + ui and suppose from a sample of 10 observations you are provided the following information: Σ(i=1 to 10) Yi = 71; Σ(i=1 to 10) Xi = 42; Σ(i=1 to 10) XiYi = 308; Σ(i=1 to 10) Xi² = 196. Given this information, what is the predicted value of Y, i.e., Ŷ for x = 12? 1. 14 2. 11 3. 13 4. 12 5. 15
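For this prediction question, the answer can be checked directly from the sample moments given (a quick sketch in plain Python; the variable names are illustrative):

```python
# Sample moments given in the question
n = 10
sum_y, sum_x, sum_xy, sum_x2 = 71, 42, 308, 196

x_bar, y_bar = sum_x / n, sum_y / n                              # 4.2, 7.1
b1 = (sum_xy - n * x_bar * y_bar) / (sum_x2 - n * x_bar ** 2)    # 9.8 / 19.6 = 0.5
b0 = y_bar - b1 * x_bar                                          # 7.1 - 2.1 = 5.0
y_hat = b0 + b1 * 12                                             # prediction at x = 12
print(round(y_hat, 6))                                           # 11.0
```

So the predicted value is 11, i.e., answer choice 2.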
3. Consider the simple linear regression Yi = β2Xi + ui for i = 1, 2,...
3. Consider the simple linear regression Yi = β2Xi + ui for i = 1, 2, . . . , n. The ui are IID(0, σ2). a. Derive the OLS estimator of β2 and call it β̂2. b. Find its variance. c. Is β̂2 unbiased? Show it. d. What is the risk we run when we do not include an intercept in the regression? Do question d.
Consider a simple linear regression model with nonstochastic regressor: Yi = β1 + β2Xi + ui...
Consider a simple linear regression model with nonstochastic regressor: Yi = β1 + β2Xi + ui. 1. [3 points] What are the assumptions of this model so that the OLS estimators are BLUE (best linear unbiased estimators)? 2. [4 points] Let β̂1 and β̂2 be the OLS estimators of β1 and β2. Derive β̂1 and β̂2. 3. [2 points] Show that β̂2 is an unbiased estimator of β2.
The regression model Yi = β0 + β1X1i + β2X2i + β3X3i + β4X4i + ui...
The regression model Yi = β0 + β1X1i + β2X2i + β3X3i + β4X4i + ui has been estimated using Gretl. The output is below.

Model 1: OLS, using observations 1-50

            coefficient   std. error   t-ratio   p-value
  const       -0.6789      0.9808      -0.6921    0.4924
  X1           0.8482      0.1972       4.3005    0.0001
  X2           1.8291      0.4608       3.9696    0.0003
  X3          -0.1283      0.7869      -0.1630    0.8712
  X4           0.4590      0.5500       0.8345    0.4084

Mean dependent var 4.2211   S.D. dependent var 2.3778
Sum squared resid 152.79    S.E. of regression 1.8426
R-squared 0 Adjusted...