We have seen that adding useless predictors to a regression model will increase R^2. Here, let's examine what our inference methods say when the predictors are in fact useless. Suppose the true (population) model is y = 1 plus noise (i.e., it involves no x's at all), so a possible sample from the population could be generated as follows:
set.seed(123)
n = 20
y = 1 + rnorm(n, 0, 1)   # response depends only on an intercept plus noise
a) Write code to generate data on 10 useless predictors (and no useful
predictors), each drawn from Unif(-1, +1); fit the model
y = alpha + beta1*x1 + ... + beta10*x10; perform the F-test of model
utility; and perform t-tests on each of the 10 coefficients to see
whether they are zero. Show/turn in your R code. (One possible
approach is sketched after part c.)
b) According to the F-test of model utility, are any of the
predictors useful at alpha = 0.1?
c) According to the t-tests, are any of the predictors useful at
alpha = 0.1?
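
A minimal sketch of one way to carry out part a), continuing from the code
above; the seed for the predictors and the object names X, dat, and fit are
illustrative choices, and summary() is used because its output reports both
the per-coefficient t-tests and the overall F-test:

set.seed(456)                               # illustrative seed for the predictors
X = matrix(runif(n * 10, -1, 1), nrow = n)  # 10 useless predictors, each Unif(-1, +1)
colnames(X) = paste0("x", 1:10)
dat = data.frame(y = y, X)
fit = lm(y ~ ., data = dat)                 # fits y = alpha + beta1*x1 + ... + beta10*x10
summary(fit)                                # t-test for each coefficient and the F-test of model utility

The p-values in the summary output can then be compared against
alpha = 0.1 to answer parts b) and c).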