In: Statistics and Probability
1. Thinking about H0 and Ha, what kind of hypothesis test will you have to perform if you want to see the influence of a single IV on the DV?
2. Why is the adjusted coefficient of determination different from the regular R^2?
(1)
To see the influence of a single IV on the DV, we test H0: the slope coefficient is zero against Ha: the slope coefficient is not zero. Generally we carry out a two-tailed t-test on the slope (unless we are interested in a directional outcome, in which case a one-tailed test is used).
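A minimal sketch of this two-tailed slope test, using `scipy.stats.linregress` on illustrative data (the variable names and numbers are made up for this example, not taken from the question):

```python
import numpy as np
from scipy.stats import linregress

# Illustrative data: one IV (x) and one DV (y)
rng = np.random.default_rng(1)
x = rng.normal(size=25)
y = 1.5 * x + rng.normal(size=25)      # true slope is nonzero

# H0: slope = 0 (the IV has no influence on the DV)
# Ha: slope != 0 (two-tailed alternative)
res = linregress(x, y)

# linregress reports the two-tailed p-value for the slope test
print("slope:", res.slope)
print("two-tailed p-value:", res.pvalue)
```

With a clearly nonzero true slope and n = 25, the two-tailed p-value comes out far below 0.05, so H0 would be rejected at the usual significance levels.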
(2)
In regression, R^2, called the coefficient of determination, measures the goodness of fit of a model. It is a statistical measure of how well the regression line approximates the actual data points. An R^2 value of 1 indicates that the regression line perfectly fits the data. R^2 is often interpreted as the proportion of response variation "explained" by the regressors in the model. For example, an R^2 value of 0.8 means that 80% of the variation in the response variable is explained by the variation in the independent variable(s).
Adjusted R^2, given by [1 - {(1 - R^2)(n - 1)/(n - p - 1)}], is a modification of R^2 that adjusts for the number of explanatory terms in a model. Unlike R^2, the adjusted R^2 increases only if a new term improves the model more than would be expected by chance alone. The adjusted R^2 can be negative, and will always be ≤ R^2. In the formula for adjusted R^2, p is the total number of regressors in the linear model (excluding the intercept), and n is the sample size. Unlike R^2, adjusted R^2 allows for the degrees of freedom associated with the sums of squares. Therefore, even though the residual sum of squares decreases or remains the same as new explanatory variables are added, the residual variance need not. For this reason, adjusted R^2 is generally considered to be a more accurate measure of goodness of fit than R^2.
Thus there will be a considerable difference between R^2 and adjusted R^2 when some of the variables in the model lack sufficient explanatory power.
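The point above can be illustrated numerically: adding a pure-noise regressor cannot lower R^2, but adjusted R^2 penalises it. This sketch fits ordinary least squares with numpy and applies the formula quoted earlier (all data and function names are illustrative, not from the original answer):

```python
import numpy as np

# Toy data: one informative predictor plus one pure-noise predictor
rng = np.random.default_rng(0)
n = 30
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)                  # noise regressor, no real effect
y = 2.0 * x1 + rng.normal(size=n)

def r2_and_adjusted(X, y):
    """Fit OLS by least squares; return (R^2, adjusted R^2)."""
    X1 = np.column_stack([np.ones(len(y)), X])      # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = resid @ resid                           # residual sum of squares
    ss_tot = ((y - y.mean()) ** 2).sum()             # total sum of squares
    r2 = 1 - ss_res / ss_tot
    p = X.shape[1]                                   # number of regressors
    adj = 1 - (1 - r2) * (len(y) - 1) / (len(y) - p - 1)
    return r2, adj

r2_a, adj_a = r2_and_adjusted(x1.reshape(-1, 1), y)              # x1 only
r2_b, adj_b = r2_and_adjusted(np.column_stack([x1, x2]), y)      # x1 and x2

print(f"x1 only:   R^2 = {r2_a:.4f}, adjusted R^2 = {adj_a:.4f}")
print(f"x1 and x2: R^2 = {r2_b:.4f}, adjusted R^2 = {adj_b:.4f}")
```

R^2 for the two-regressor fit is at least as large as for the one-regressor fit, while the adjusted value sits below its R^2 because the (n - p - 1) denominator charges a degree of freedom for the useless x2.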