Question

In: Advanced Math

Compare and explain “random error of a regression” with “residuals of regression.”

Solutions

Expert Solution

In statistics and optimization, errors and residuals are two closely related and easily confused measures of the deviation of an observed value of an element of a statistical sample from its "theoretical value". The error (or disturbance) of an observed value is the deviation of the observed value from the (unobservable) true value of a quantity of interest (for example, a population mean), and the residual of an observed value is the difference between the observed value and the estimated value of the quantity of interest (for example, a sample mean). The distinction is most important in regression analysis, where the concepts are sometimes called the regression errors and regression residuals and where they lead to the concept of studentized residuals.

Suppose there is a series of observations from a univariate distribution and we want to estimate the mean of that distribution (the so-called location model). In this case, the errors are the deviations of the observations from the population mean, while the residuals are the deviations of the observations from the sample mean.

A statistical error (or disturbance) is the amount by which an observation differs from its expected value, the latter being based on the whole population from which the statistical unit was chosen randomly. For example, if the mean height in a population of 21-year-old men is 1.75 meters, and one randomly chosen man is 1.80 meters tall, then the "error" is 0.05 meters; if the randomly chosen man is 1.70 meters tall, then the "error" is −0.05 meters. The expected value, being the mean of the entire population, is typically unobservable, and hence the statistical error cannot be observed either.

A residual (or fitting deviation), on the other hand, is an observable estimate of the unobservable statistical error. Consider the previous example with men's heights and suppose we have a random sample of n people. The sample mean could serve as a good estimator of the population mean. Then we have:

  • The difference between the height of each man in the sample and the unobservable population mean is a statistical error, whereas
  • The difference between the height of each man in the sample and the observable sample mean is a residual.

Note that, because of the definition of the sample mean, the sum of the residuals within a random sample is necessarily zero, and thus the residuals are necessarily not independent. The statistical errors, on the other hand, are independent, and their sum within the random sample is almost surely not zero.
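A minimal sketch (hypothetical numbers, using Python/NumPy) makes this concrete: the errors are deviations from the population mean, which is assumed known here only because the data are simulated, and they almost surely do not sum to zero; the residuals are deviations from the sample mean and sum to zero by construction.

```python
# Minimal sketch (hypothetical numbers): errors vs. residuals in the
# location model. The population mean mu is known only because the data
# are simulated; in practice it is unobservable.
import numpy as np

rng = np.random.default_rng(0)
mu = 1.75                                   # "true" population mean height (m)
heights = rng.normal(mu, 0.07, size=10)     # random sample of n = 10 men

errors = heights - mu                       # deviations from the population mean
residuals = heights - heights.mean()        # deviations from the sample mean

print(errors.sum())      # almost surely not zero
print(residuals.sum())   # zero, up to floating-point rounding
```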

One can standardize statistical errors (especially of a normal distribution) in a z-score (or "standard score"), and standardize residuals in a t-statistic, or more generally studentized residuals.

In regression analysis, the distinction between errors and residuals is subtle and important, and leads to the concept of studentized residuals. Given an unobservable function that relates the independent variable to the dependent variable – say, a line – the deviations of the dependent variable observations from this function are the unobservable errors. If one runs a regression on some data, then the deviations of the dependent variable observations from the fitted function are the residuals. If the linear model is applicable, a scatterplot of the residuals against the independent variable should be random about zero, with no trend. [2] If the residuals show a trend, the linear model is likely misspecified, and a quadratic or higher-order model may be needed. If they show no trend but "fan out", they exhibit a phenomenon called heteroscedasticity. If the residuals have roughly constant spread and do not fan out, they exhibit homoscedasticity.
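A small sketch of this, assuming a hypothetical true line y = 2 + 3x with standard normal errors: the errors are the deviations from the true line (unobservable in practice), while the residuals are the deviations from the fitted least-squares line.

```python
# Minimal sketch, assuming a hypothetical true line y = 2 + 3x with
# standard normal errors.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
eps = rng.normal(0, 1.0, size=x.size)       # unobservable errors
y = 2.0 + 3.0 * x + eps                     # observed data

b1, b0 = np.polyfit(x, y, 1)                # least-squares slope and intercept
residuals = y - (b0 + b1 * x)               # observable residuals

# If the linear model is adequate, these residuals scatter randomly about
# zero with no trend in x; a curved pattern or "fanning out" with x would
# suggest misspecification or heteroscedasticity, respectively.
print(residuals.mean())                     # ~0 by construction
```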

However, a terminological difference arises in the expression mean squared error (MSE). The mean squared error of a regression is a number computed from the sum of squares of the computed residuals, not of the unobservable errors. If that sum of squares is divided by n, the number of observations, the result is the mean of the squared residuals. Since this is a biased estimate of the variance of the unobserved errors, the bias is removed by dividing the sum of the squared residuals by df = n − p − 1 instead of n, where df is the number of degrees of freedom (n minus the number of slope parameters p being estimated, minus one for the intercept). This forms an unbiased estimate of the variance of the unobserved errors, and is called the mean squared error.[4]
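As a rough illustration on simulated data (hypothetical values), dividing the residual sum of squares by n underestimates the error variance, while dividing by n − p − 1 (here p = 1 slope parameter, so n − 2) gives the unbiased mean squared error.

```python
# Minimal sketch (simulated, hypothetical values). The true error
# variance in this simulation is 1.0.
import numpy as np

rng = np.random.default_rng(2)
n = 50
x = np.linspace(0, 10, n)
y = 2.0 + 3.0 * x + rng.normal(0, 1.0, size=n)

b1, b0 = np.polyfit(x, y, 1)
residuals = y - (b0 + b1 * x)
ssr = np.sum(residuals**2)                  # residual sum of squares

biased_mse = ssr / n                        # biased downward
mse = ssr / (n - 2)                         # unbiased: df = n - p - 1

print(biased_mse, mse)
```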

Another way to calculate the mean square of the error arises when analyzing the variance of a linear regression with the technique used in ANOVA (the two are equivalent because ANOVA is a type of regression): the sum of squares of the residuals (also called the sum of squares of the error) is divided by its degrees of freedom, n − p − 1, where p is the number of parameters estimated in the model (one for each variable in the regression equation, not counting the intercept). One can likewise calculate the mean square of the model by dividing the sum of squares of the model by its degrees of freedom, which is simply the number of parameters. The F value is then the mean square of the model divided by the mean square of the error, and it is used to determine the significance of the regression (which is why the mean squares are wanted in the first place).[5]
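A sketch of that ANOVA-style decomposition, again on simulated data (hypothetical values); SciPy is used here only to convert the F value into a p-value.

```python
# Sketch of the ANOVA decomposition for a simple linear regression.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, p = 50, 1                                # n observations, p = 1 slope parameter
x = np.linspace(0, 10, n)
y = 2.0 + 3.0 * x + rng.normal(0, 1.0, size=n)

b1, b0 = np.polyfit(x, y, 1)
fitted = b0 + b1 * x

ss_model = np.sum((fitted - y.mean())**2)   # explained (model) sum of squares
ss_error = np.sum((y - fitted)**2)          # residual (error) sum of squares

ms_model = ss_model / p                     # mean square of the model
ms_error = ss_error / (n - p - 1)           # mean square of the error

F = ms_model / ms_error
p_value = stats.f.sf(F, p, n - p - 1)       # upper tail of the F distribution
print(F, p_value)
```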

However, because of the behavior of the process of regression, the distributions of residuals at different data points (of the input variable) may vary even if the errors themselves are identically distributed. Concretely, in a linear regression where the errors are identically distributed, the variability of residuals of inputs in the middle of the domain will be higher than the variability of residuals at the ends of the domain[6]: linear regressions fit endpoints better than the middle. This is also reflected in the influence functions of various data points on the regression coefficients: endpoints have more influence.

Thus, to compare residuals at different inputs, one needs to adjust the residuals by the expected variability of residuals, which is called studentizing. This is particularly important for detecting outliers, where the case in question is somehow different from the others in the dataset. For example, a large residual may be expected in the middle of the domain but considered an outlier at the end of the domain.
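The following sketch illustrates internal studentization for a simple linear regression on simulated data (hypothetical values): each raw residual is divided by an estimate of its own standard deviation, which depends on the leverage of that point taken from the hat matrix. Leverage is highest at the ends of the domain, so raw residuals there are expected to be smaller, and studentizing puts them on a common scale.

```python
# Sketch of (internally) studentized residuals for a simple linear regression.
import numpy as np

rng = np.random.default_rng(4)
n = 30
x = np.linspace(0, 10, n)
y = 2.0 + 3.0 * x + rng.normal(0, 1.0, size=n)

X = np.column_stack([np.ones(n), x])        # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta

H = X @ np.linalg.inv(X.T @ X) @ X.T        # hat matrix
leverage = np.diag(H)                       # h_ii, larger at the endpoints

s2 = residuals @ residuals / (n - X.shape[1])        # estimated error variance
studentized = residuals / np.sqrt(s2 * (1 - leverage))

print(leverage[[0, n // 2, -1]])            # endpoint vs. middle leverage
print(studentized[:5])
```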


Related Solutions

Explain what is meant by autocorrelation of regression residuals and detail what estimation problems it causes. How could you detect and solve the residual autocorrelation problem?
Linear regression Hello What does it mean that the residuals in linear regression is normal distributed? Why is it only the residuals that is, and not the "raw" data? And why do we want our residuals to be normal?
1.In a multiple regression model, the error term α is assumed to be a random variable with a mean of: a. Zero b. ‐1 c. 1 d. Any value 2. In regression analysis, the response variable is the: a. Independent variable b. Dependent variable c. Slope of the regression function d. Intercept 3. A multiple regression model has: a. Only one independent variable b. More than one dependent variable c. More than one independent variable d. At least two dependent...
Show that the residuals in the bivariate regression model sum to zero. What is the interpretation of this result?
Discuss the differences in a regression model between making the random error being multiplicative and making the random error being additive regarding how you approach estimation of the model coefficient(s), how you apply linearization for estimating the model coefficient(s), and how you obtain starting values for estimation of the model coefficient(s).
What is Standard Error in a Regression? The average deviation from the mean The average deviation of the correlation coefficient to the R2 value The average deviation of X to Y values The average deviation of the predicted scores and the actual scores
Explain the difference between a random sampling error, and a nonrandom sampling error. Please be descriptive and use an example.
4. Name 2 sources of systematic error and random error 5. The 2sd random error in this scenario is 10%. a. What is your total allowable error (TAE) If the TAE for the analyte you are investigating is 12% (per CAP), b. your TAE acceptable? Why or why not? 6. Dr. X informs you that the values for the test in question are most discrepant at higher concentrations. You decide to perform a linearity on your test method. Describe how...
Consider the simple linear regression model y = 10+25x+e where the random error term is normally and independently distributed with mean zero and standard deviation 2. Do NOT use software, generate a sample of eight observations, one each at the levels x = 10, 12, 14, 16, 18, 20, 22, and 24. DO NOT USE SOFTWARE! A.Fit the linear regression model by least squares and find the estimates of the slope and intercept. B.Find the estimate of s^2. C. Find...
The residuals for 15 consecutive time periods from a simple linear regression with one independent variable are given in the following table. Time_Period   Residual 1   +4 2   -6 3   -1 4   -5 5   +3 6   +6 7   -3 8   +7 9   +7 10   -3 11   +2 12   +3 13   0 14   -5 15   -7 B) Compute the​ Durbin-Watson statistic. At the 0.05 level of​significance, is there evidence of positive autocorrelation among the​ residuals? The​ Durbin-Watson statistic is D= ​(Round to...