Question

In: Statistics and Probability

Consider the model: yi = βxi + ei, i = 1,...,n where E(ei) = 0 and...

Consider the model:

yi = βxi + ei, i = 1,...,n

where E(ei) = 0, Var(ei) = σ², and the ei are uncorrelated errors.

a) Obtain the least-squares estimator for β and propose an unbiased estimator for σ².

b) Specify the approximate distribution of the β estimator.

c) Specify an approximate confidence interval for the parameter β with confidence

coefficient γ, 0 < γ < 1.

Solutions

Expert Solution

a) We have the model

yi = βxi + ei, i = 1,...,n.

To obtain the least-squares estimator we use the sum of squared errors

Q(β) = Σ (yi − βxi)², with the sum running over i = 1,...,n.

We minimise Q to get the least-squares (minimum-square) estimator: we take the derivative of Q with respect to β, set it equal to 0, and solve the resulting equation:

dQ/dβ = −2 Σ xi(yi − βxi) = 0, which gives Σ xiyi = β Σ xi².

Since d²Q/dβ² = 2 Σ xi² > 0, this stationary point is indeed a minimum. The above steps result in the least-squares estimator

β̂ = (Σ xiyi) / (Σ xi²).

For σ², the residual sum of squares divided by its degrees of freedom (the mean squared error of the fit) is an unbiased estimator. One parameter (β) is estimated, so there are n − 1 degrees of freedom:

σ̂² = Σ (yi − β̂xi)² / (n − 1).
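As a quick numerical check of these two formulas, here is a minimal Python sketch; the arrays x and y below are made-up example data, not part of the original problem.

```python
import numpy as np

# Hypothetical example data, assumed purely for illustration.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
n = len(y)

# Least-squares estimator for the no-intercept model y_i = beta * x_i + e_i.
beta_hat = np.sum(x * y) / np.sum(x ** 2)

# Unbiased estimator of sigma^2: residual sum of squares divided by
# n - 1 degrees of freedom (one parameter, beta, was estimated).
residuals = y - beta_hat * x
sigma2_hat = np.sum(residuals ** 2) / (n - 1)

print(beta_hat, sigma2_hat)
```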

b) Since β̂ = β + (Σ xiei) / (Σ xi²) is a linear combination of the uncorrelated errors, E(β̂) = β and Var(β̂) = σ² / (Σ xi²). By the central limit theorem, β̂ will approximately follow a Normal distribution with parameters β and σ² / (Σ xi²), i.e. β̂ ≈ N(β, σ² / Σ xi²).
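The quality of this Normal approximation can be checked by simulation. The sketch below is a small Monte Carlo experiment; the true β, σ, sample size, design points, and the choice of Normal errors are all assumptions made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed true values, chosen only for this illustration.
beta, sigma, n = 2.0, 1.5, 50
x = rng.uniform(1.0, 10.0, size=n)  # fixed design points

# Recompute beta_hat over many simulated samples.
reps = 10_000
estimates = np.empty(reps)
for r in range(reps):
    e = rng.normal(0.0, sigma, size=n)  # mean-zero, uncorrelated errors
    y = beta * x + e
    estimates[r] = np.sum(x * y) / np.sum(x ** 2)

# The simulated mean and variance should be close to beta and
# sigma^2 / sum(x_i^2), matching the approximate Normal distribution.
print(estimates.mean(), beta)
print(estimates.var(), sigma ** 2 / np.sum(x ** 2))
```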

c) An approximate confidence interval for β with confidence coefficient γ, 0 < γ < 1, is given by

β̂ ± z_(1+γ)/2 · σ̂ / √(Σ xi²),

where z_(1+γ)/2 denotes the (1 + γ)/2 quantile of the standard Normal distribution. For example, γ = 0.95 gives z_(1+γ)/2 ≈ 1.96.
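Putting the pieces together, the interval can be computed as in the following sketch; it reuses the hypothetical data from part (a), picks γ = 0.95 arbitrarily, and uses scipy only to obtain the Normal quantile.

```python
import numpy as np
from scipy.stats import norm

def conf_interval(x, y, gamma):
    """Approximate CI for beta in y_i = beta * x_i + e_i, coefficient gamma."""
    n = len(y)
    beta_hat = np.sum(x * y) / np.sum(x ** 2)
    sigma2_hat = np.sum((y - beta_hat * x) ** 2) / (n - 1)
    z = norm.ppf((1 + gamma) / 2)  # (1 + gamma)/2 quantile of N(0, 1)
    half_width = z * np.sqrt(sigma2_hat / np.sum(x ** 2))
    return beta_hat - half_width, beta_hat + half_width

# Hypothetical data from the part (a) sketch, gamma = 0.95.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
print(conf_interval(x, y, 0.95))
```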


Related Solutions

Consider 2 models: yi = β1 + β2xi + ei (1) Y = X0β + e;...
Consider 2 models: yi = β1 + β2xi + ei (1) Y = X0β + e; (2) where Equation (1) represents a system of n scalar equations for individuals i = 1; ...; n , and Equation (2) is a matrix representation of the same system. The vector Y is n x 1. The matrix X0 is n x 2 with the first column made up entirely of ones and the second column is x1; x2; ...; xn. a. Set...
Consider the following regression model: Yi = αXi + Ui , i = 1, .., n...
Consider the following regression model: Yi = αXi + Ui , i = 1, .., n (2) The error terms Ui are independently and identically distributed with E[Ui |X] = 0 and V[Ui |X] = σ^2 . 1. Write down the objective function of the method of least squares. 2. Write down the first order condition and derive the OLS estimator αˆ. Suppose model (2) is estimated, although the (true) population regression model corresponds to: Yi = β0 + β1Xi...
Question 1: Consider the simple regression model: Yi = β0 + β1Xi + ei (a) Explain...
Question 1: Consider the simple regression model: Yi = β0 + β1Xi + ei (a) Explain how the Ordinary Least Squares (OLS) estimator formulas for β̂0 and β̂1 are derived. (b) Under the Classical Linear Regression Model assumptions, the ordinary least squares estimator, OLS estimators are the “Best Linear Unbiased Estimators (B.L.U.E.).” Explain. (c) Other things equal, the standard error of β̂1 will decline as the sample size increases. Explain the importance of this. Question 2: Consider the following...
Consider the simple regression model: Yi = β0 + β1Xi + ei (a) Explain how the...
Consider the simple regression model: Yi = β0 + β1Xi + ei (a) Explain how the Ordinary Least Squares (OLS) estimator formulas for β̂0 and β̂1 are derived. (b) Under the Classical Linear Regression Model assumptions, the ordinary least squares estimator, OLS estimators are the “Best Linear Unbiased Estimators (B.L.U.E.).” Explain. (c) Other things equal, the standard error of β̂1 will decline as the sample size increases. Explain the importance of this.
Consider the instrumental variable regression model Yi = β0 + β1Xi + β2Wi + ui where...
Consider the instrumental variable regression model Yi = β0 + β1Xi + β2Wi + ui where Xi is correlated with ui and Zi is an instrument. Suppose that the first three assumptions in Key Concept 12.4 are satisfied. Which IV assumption is not satisfied when: (a) Zi is independent of (Yi , Xi , Wi)? (b) Zi = Wi? (c) Wi = 1 for all i? (d) Zi = Xi?
Consider a simple linear model: yi = β1 + β2xi + εi, where εi ∼ N...
Consider a simple linear model: yi = β1 + β2xi + εi, where εi ∼ N(0, σ²) Derive the maximum likelihood estimators for β1 and β2. Are these the same as the estimators obtained from ordinary least squares? Is there a reason to prefer ordinary least squares or maximum likelihood in this case?
Assume that the population regression function is Yi = BXi + ei (B is beta, e...
Assume that the population regression function is Yi = BXi + ei (B is beta, e is the error term). This is a regression through the origin (no intercept). A. Under the homoskedastic normal regression assumptions, the t-statistic will have a Student t distribution with n-1 degrees of freedom (not n-2 degrees of freedom). Explain. B. Will the residuals sum to zero in this case? Explain and show your derivations
1. Consider the linear regression model for a random sample of size n: yi = β0...
1. Consider the linear regression model for a random sample of size n: yi = β0 + vi ; i = 1, . . . , n, where v is a random error term. Notice that this model is equivalent to the one seen in the classroom, but without the slope β1. (a) State the minimization problem that leads to the estimation of β0. (b) Construct the first-order condition to compute a minimum from the above objective function and use...
Consider the simple regression model: Yi = β0 + β1Xi + e (a) Explain how the...
Consider the simple regression model: Yi = β0 + β1Xi + e (a) Explain how the Ordinary Least Squares (OLS) estimator formulas for β0 and β1 are derived. (b) Under the Classical Linear Regression Model assumptions, the ordinary least squares estimator, OLS estimators are the “Best Linear Unbiased Estimators (B.L.U.E.).” Explain. (c) Other things equal, the standard error of β̂1 will decline as the sample size increases. Explain the importance of this.
Suppose that the random variables Y1,...,Yn satisfy Yi = βxi + εi, i = 1,...,n, where the set...
Suppose that the random variables Y1,...,Yn satisfy Yi = βxi + εi, i = 1,...,n, where the xi are fixed constants and the εi are iid random variables following a normal distribution with mean zero and variance σ². Then β̂ = (Σ Yixi) / (Σ xi²), with sums over i = 1,...,n, is an unbiased estimator for β, and its variance is Var(β̂) = σ² / (Σ xi²). What is the distribution of this estimator?