What is descriptive analysis? What are examples of measures used in descriptive analysis?
▪ Know definitions, examples, and types of statistical tests used for each of the following analyses: inference, difference, association, predictive.
▪ Know definitions and uses for each of the measures of central tendency.
▪ Know definition and uses for each of the measures of variability.
▪ Understand when to use each of the measures of central tendency and measures of variability.
▪ Know the SPSS commands for producing descriptive statistics for nominal or ordinal variables.
▪ Know the Z values for 95 and 99% confidence levels.
▪ What does a particular % confidence interval allow us to say?
▪ Know the purpose of hypothesis testing.
Chapter 13: Implementing Basic Differences Tests
▪ What does a meaningful difference mean to the marketing manager?
▪ What characteristics do differences need to possess to be useful? What does each of these characteristics mean?
▪ What is the difference between a t-value test and a z-value test? How is this handled in SPSS?
▪ If the confidence level is 95%, at what level of the p-value (Sig. in SPSS) will we reject the null hypothesis? (What is the decision-making process for evaluating the Sig. value in SPSS?)
▪ What is the null hypothesis for a differences test?
▪ What are the SPSS commands for testing differences between percentages?
▪ What are the SPSS commands for testing differences between means, for both independent samples and paired samples tests?
▪ Be able to recognize examples of when a researcher would use an independent samples and paired samples analysis.
▪ What is the purpose of the Levene’s Test, what are the null and alternative hypotheses, and when do we reject the null?
▪ What is the purpose of an ANOVA, what is the null hypothesis, when do you reject the null, what are the SPSS commands for running an ANOVA? What do we mean by the “signal flag” procedure?
Chapter 14: Making Use of Associative Tests
▪ What is associative analysis? When is it used?
▪ Know the definition of the four relationships (nonmonotonic, monotonic, linear, curvilinear) and be able to recognize examples.
▪ Know the three characteristics of relationships and the definition of each.
▪ What is the purpose of a cross-tabulation and chi-square test? What is the null hypothesis? What does the chi-square value mean, and what else must the researcher do if the value indicates a relationship exists? What are the SPSS commands?
▪ What is the purpose of the correlation coefficient? What is the range of the value and how does the researcher evaluate the value? Be able to analyze an example correlation coefficient.
▪ What types of variables does the Pearson product moment correlation test? Know the SPSS commands and understand the output – what value does the researcher analyze first, what is the null hypothesis, when does the researcher reject the null? If a relationship is established statistically, what value does the researcher analyze second and what is the interpretation of the value?
▪ Understand the interpretation of different scatter diagrams.
Chapter 15: Understanding Regression Analysis Basics
▪ What is the purpose of bivariate regression analysis?
▪ Know what each of the variables in y = a + bx mean.
▪ What does the ANOVA analysis tell the researcher in bivariate regression tests?
▪ How is rejection of the null for the ANOVA table in bivariate regression interpreted?
▪ What is the definition of multiple regression analysis? How is it best described with regard to the intercept and independent variables?
▪ What is multicollinearity?
▪ What does VIF measure?
▪ What is a prediction?
▪ What does multiple R measure?
Answers:
Chapter 15:
1)
Bivariate analysis is one of the simplest forms of quantitative (statistical) analysis. It involves the analysis of two variables (often denoted as X, Y), for the purpose of determining the empirical relationship between them. Bivariate analysis can be helpful in testing simple hypotheses of association.
2)
The equation has the form Y = a + bX, where Y is the dependent variable (plotted on the Y-axis), X is the independent variable (plotted on the X-axis), b is the slope of the line, and a is the Y-intercept.
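As a minimal sketch of how a and b are estimated, the bivariate least-squares formulas (b = covariance of X and Y divided by the variance of X; a = mean of Y minus b times the mean of X) can be worked through in plain Python. This is not SPSS, and the data below are made up so that Y = 1 + 2X exactly:

```python
# Fit Y = a + bX by ordinary least squares on hypothetical data.
x = [1, 2, 3, 4, 5]           # independent variable (X-axis)
y = [3, 5, 7, 9, 11]          # dependent variable: exactly Y = 1 + 2X

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# b = covariance(X, Y) / variance(X); a = mean(Y) - b * mean(X)
b = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / \
    sum((xi - mean_x) ** 2 for xi in x)
a = mean_y - b * mean_x
print(a, b)   # → 1.0 2.0
```

Because the data lie exactly on a line, the fit recovers the intercept and slope perfectly.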
3)
In bivariate regression, the ANOVA table reports an F-test of the overall model. The null hypothesis is that the model explains none of the variance in the dependent variable (equivalently, that the slope b = 0). The F-value and its p-value tell the researcher whether the regression explains a statistically significant share of that variance.
4)
Compare the p-value for the F-test to your significance level. If the p-value is less than the significance level, your sample data provide sufficient evidence to conclude that your regression model fits the data better than the model with no independent variables.
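The decision rule can be sketched in a few lines of Python (not SPSS); the Sig. value below is hypothetical:

```python
# Decision rule for the ANOVA F-test p-value (the "Sig." column in SPSS).
alpha = 0.05            # significance level for a 95% confidence level
sig = 0.012             # hypothetical p-value from the SPSS output

# Reject the null (model explains no variance) when Sig. < alpha.
reject_null = sig < alpha
print("Reject the null hypothesis:", reject_null)
```

With a Sig. of 0.012 and alpha of 0.05, the null is rejected and the model is judged to fit better than one with no independent variables.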
5)
Multiple linear regression (MLR) models the linear relationship between two or more explanatory (independent) variables and a response (dependent) variable.
The intercept (often labeled the constant) is the expected mean value of Y when all X = 0. Start with a regression equation with one predictor, X. If X sometimes equals 0, the intercept is simply the expected mean value of Y at that value. If X never equals 0, then the intercept has no intrinsic meaning.
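A small sketch of the "one intercept plus one coefficient per independent variable" idea, using NumPy least squares rather than SPSS; the two predictors and the exact relationship Y = 10 + 2·X1 − 1·X2 are hypothetical:

```python
# Fit a two-predictor multiple regression Y = a + b1*X1 + b2*X2 with NumPy.
import numpy as np

X1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
X2 = np.array([2.0, 1.0, 4.0, 3.0, 5.0])
Y = 10.0 + 2.0 * X1 - 1.0 * X2        # exact relationship, no noise

# Design matrix with a column of ones so the model includes an intercept a.
A = np.column_stack([np.ones_like(X1), X1, X2])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
a, b1, b2 = coef
print(a, b1, b2)   # the intercept a is the expected Y when X1 = X2 = 0
```

Since the data follow the equation exactly, the fit recovers a = 10, b1 = 2, and b2 = −1 (up to floating-point error).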
6)
Multicollinearity occurs when two or more independent variables in a regression model are highly correlated with one another. It is a problem because it inflates the standard errors of the regression coefficients and thereby undermines the statistical significance of the independent variables: other things being equal, the larger the standard error of a regression coefficient, the less likely it is that the coefficient will be statistically significant.
7)
Variance inflation factor (VIF) is a measure of the amount of multicollinearity in a set of multiple regression variables. The ratio is calculated for each independent variable. A high VIF indicates that the associated independent variable is highly collinear with the other variables in the model.
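The VIF for a predictor is 1 / (1 − R²), where R² comes from regressing that predictor on the remaining predictors; with only two predictors, R² is just the squared correlation between them. A plain-Python sketch with made-up, deliberately collinear data:

```python
# VIF for one predictor in a two-predictor model: VIF = 1 / (1 - r^2).
import math

x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [2.1, 3.9, 6.2, 8.1, 9.8]   # nearly 2 * x1, so highly collinear

n = len(x1)
m1, m2 = sum(x1) / n, sum(x2) / n
cov = sum((u - m1) * (v - m2) for u, v in zip(x1, x2))
r = cov / math.sqrt(sum((u - m1) ** 2 for u in x1) *
                    sum((v - m2) ** 2 for v in x2))
vif = 1.0 / (1.0 - r ** 2)
print(vif)   # far above the common rule-of-thumb cutoff of 10
```

Because x2 is almost an exact multiple of x1, the VIF is very large, flagging severe multicollinearity.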
8)
In regression, a prediction is an estimate of the dependent (target) variable obtained by substituting values of the independent variable(s) (predictors) into the fitted equation. Regression analysis is a form of predictive modeling technique that investigates the relationship between a dependent variable and one or more independent variables; it is used for forecasting, time-series modeling, and finding causal relationships between variables.
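A prediction is simply the fitted equation evaluated at a new X. A minimal sketch with hypothetical coefficients (not taken from any SPSS output):

```python
# Predict Y for a new X under a fitted bivariate model y = a + b*x.
a, b = 1.0, 2.0          # hypothetical fitted intercept and slope

def predict(x):
    """Predicted Y for a given X under the fitted model."""
    return a + b * x

print(predict(10.0))     # → 21.0
```

Plugging X = 10 into Y = 1 + 2X yields a predicted value of 21.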
9)
Multiple R measures the strength of the linear relationship between the model's predictions and the observed values: it is the correlation between the observed and predicted values of the dependent variable, and equals the square root of R squared. It ranges from 0 to 1; a value of 1 means the model predicts the dependent variable perfectly, and a value of 0 means no linear relationship at all.
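This can be sketched directly: compute the correlation between observed Y and the model's predicted Y. The data and predictions below are hypothetical, standing in for some fitted model:

```python
# Multiple R as the correlation between observed Y and predicted Y.
import math

def correlation(u, v):
    """Pearson correlation between two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    return cov / math.sqrt(sum((a - mu) ** 2 for a in u) *
                           sum((b - mv) ** 2 for b in v))

y_obs = [3.0, 5.0, 6.0, 9.0, 12.0]    # observed dependent variable
y_hat = [3.5, 4.8, 6.5, 9.2, 11.0]    # hypothetical model predictions

multiple_r = correlation(y_obs, y_hat)
print(multiple_r)   # close to 1: predictions track the observations well
```

Here the predictions track the observations closely, so multiple R comes out near 1.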