In: Statistics and Probability
How is the interpretation of slopes in a multiple regression model different from the simple regression slope?
How does repeated measures ANOVA control for individual differences?
1.
Multiple regression is a natural extension of simple linear regression that incorporates multiple explanatory (or predictor) variables. By adding more explanatory variables to a simple regression model to strengthen its ability to explain real-world data, we convert it into a multiple regression model. The least squares approach we used in the case of simple regression can still be used for multiple regression analysis.
A multiple linear regression model with k predictor variables X1, X2, ..., Xk and a response Y can be written as y = β0 + β1x1 + β2x2 + ··· + βkxk + ε.
The ε term is the residual of the model, and the distributional assumptions we place on the residuals allow us to do inference on the remaining model parameters. The key difference lies in how we interpret the slope coefficients β1, β2, ..., βk. In simple regression, the slope is the expected change in Y per unit increase in the single predictor X. In multiple regression, each slope βj is the expected change in Y per unit increase in Xj while holding all the other predictors constant; it is a partial effect, and its estimated value (even its sign) can change depending on which other predictors are included in the model. More complex models may include higher powers of one or more predictor variables, e.g., y = β0 + β1x1 + β2x2² + ... + ε.
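As a sketch of the "holding other predictors constant" interpretation, the following example fits a two-predictor model by least squares with NumPy. All numbers and variable names here are made up for illustration; the data are generated noise-free so the fitted slopes recover the true coefficients exactly.

```python
import numpy as np

# Hypothetical data: y depends on two predictors, y = 2 + 3*x1 - 1*x2.
rng = np.random.default_rng(0)
x1 = rng.uniform(0, 10, size=50)
x2 = rng.uniform(0, 10, size=50)
y = 2.0 + 3.0 * x1 - 1.0 * x2      # noise-free, so OLS recovers the betas

# Design matrix with an intercept column, then ordinary least squares.
X = np.column_stack([np.ones_like(x1), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# beta[1] estimates the change in y per unit increase in x1
# with x2 held constant, and beta[2] the analogous partial effect of x2.
```

If x1 and x2 are correlated, refitting the model with x2 dropped would generally change the estimated slope on x1, which is exactly why a multiple regression slope must be read as a partial, not a marginal, effect.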
2.
The repeated measures ANOVA can be compared to the dependent-samples t-test, as it compares the mean scores of the same group across different observations (measurement occasions). It is a necessary condition for the repeated measures ANOVA that the cases in one observation be directly connected with the cases in all other observations. This happens automatically when measures are taken repeatedly on the same subjects, or when analyzing matched units or comparable specimens. Such pairing of observations or repeated measurements is very common when experiments are conducted or when observations are repeated over time. Pairing the measured data points in this way excludes confounding or hidden factors, and it is also often used to account for differences in baselines. Concretely, repeated measures ANOVA controls for individual differences by partitioning the total variability into a between-conditions component, a between-subjects component, and a residual error component. Because each subject serves as their own control, the stable subject-to-subject variability (individual differences) is removed from the error term, which makes the F-test for the condition effect more powerful than in an equivalent between-subjects design.
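The variance partitioning described above can be sketched with NumPy on a small made-up data set (the subjects, conditions, and scores below are hypothetical):

```python
import numpy as np

# Hypothetical scores: 4 subjects (rows) measured under 3 conditions (columns).
# Subjects have very different baselines, but similar condition effects.
scores = np.array([
    [10.0, 12.0, 14.0],
    [20.0, 21.0, 25.0],
    [30.0, 33.0, 34.0],
    [ 5.0,  8.0,  9.0],
])
n_subj, n_cond = scores.shape
grand_mean = scores.mean()

# Partition the total sum of squares.
ss_total = ((scores - grand_mean) ** 2).sum()
ss_cond = n_subj * ((scores.mean(axis=0) - grand_mean) ** 2).sum()
ss_subj = n_cond * ((scores.mean(axis=1) - grand_mean) ** 2).sum()

# Repeated measures: subject variability is removed from the error term ...
ss_error_rm = ss_total - ss_cond - ss_subj
# ... whereas a between-subjects design would leave it in the error term.
ss_error_between = ss_total - ss_cond

# F statistic for the condition effect in the repeated measures design.
df_cond = n_cond - 1
df_error = (n_subj - 1) * (n_cond - 1)
f_rm = (ss_cond / df_cond) / (ss_error_rm / df_error)
```

Because ss_subj captures the large baseline differences between subjects, ss_error_rm is far smaller than ss_error_between, so the condition effect is tested against a much smaller error term.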