Question

In: Statistics and Probability

When examining the relation between two variables, when is it better to use the regression coefficient β (beta) instead of the correlation coefficient r to test for significant effects?

Answer the questions below based on the following table:

              Fatigue     Vigor       Sleepiness
Tension       .36*        –.31*       .08
Fatigue                   –.73***     .57**
Vigor                                 –.49**

* p < .05, ** p < .01, *** p < .001

The strongest correlation is between _________________________ and ________________________. (Give variable names)

The weakest correlation is between __________________________ and ________________________. (Give variable names)

The correlation between sleepiness and fatigue is ___________________ (indicate direction) and ________________________ (indicate strength).

1. When examining the relation between two variables, when is it better to use the regression coefficient β (beta) instead of the correlation coefficient r to test for significant effects?

2. Psychologists are growing less and less enthused with p-values (or cut-off points). What statistics do they prefer (as a better alternative to p-values) for evaluating studies’ results, and why?

Solutions

Expert Solution

-> The strongest correlation is between Fatigue and Vigor (r = –.73, p < .001).

-> The weakest correlation is between Tension and Sleepiness (r = .08, not significant).

-> The correlation between Sleepiness and Fatigue is .57 (p < .01): it is positive in direction and moderately strong.
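
For reference (the sample size n is not reported in the excerpt, so this is only the generic form), the significance stars attached to each r come from testing H0: ρ = 0 with the statistic

$$ t = \frac{r\sqrt{n-2}}{\sqrt{1-r^{2}}}, $$

which follows a t-distribution with n − 2 degrees of freedom under H0; the resulting p-value is then compared against the .05, .01, and .001 cut-offs to assign the stars.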

1.) If we just want to measure how strongly two variables are associated, the correlation coefficient r is the appropriate statistic. Note, however, that a low correlation does not mean the variables are unrelated: r captures only linear relationships, and the variables may still be related non-linearly.

If we want to predict the value of one variable from another, or to quantify how much one variable changes when another variable is changed, then we should use regression and test the regression coefficient β.
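
A minimal sketch of the distinction, using hypothetical simulated data (the names x and y and all numbers below are illustrative, not from the question): pearsonr tests the strength and direction of the linear association, while the regression slope answers "by how much does y change per one-unit change in x".

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=100)                # hypothetical predictor
y = 2.0 * x + rng.normal(size=100)      # hypothetical outcome

# Correlation: unitless strength/direction of the linear association
r, p_r = stats.pearsonr(x, y)

# Regression: change in y per one-unit change in x (in y's units)
fit = stats.linregress(x, y)

print(f"r = {r:.2f} (p = {p_r:.3g})")
print(f"slope = {fit.slope:.2f} (p = {fit.pvalue:.3g})")
```

With a single predictor, the standardized β is numerically equal to r, so the choice matters most when there are several predictors to control for, or when the unstandardized slope (in the outcome's units) is what the research question asks about.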

2.)

In practice, psychologists increasingly prefer to report effect sizes together with confidence intervals (and, in some traditions, Bayesian quantities) instead of bare p-values, because these convey the magnitude and precision of an effect rather than a pass/fail verdict at a cut-off. The deeper reason for the dissatisfaction lies in the logic of testing itself.

J. Neyman and E. S. Pearson did not agree with Fisher's logic of using p-values for decisions. They developed a purely decision-theoretic approach that allows one to decide between two alternative hypotheses (A vs. B) with a pre-defined confidence. The aim is to make a rational decision between A and B; it is not about rejecting a null hypothesis. Either one accepts A or one accepts B. The rationality of the decision is introduced by the attempt to maximize its expected utility, which requires defining the utility under the various possible outcomes.

The critical point here is that both hypotheses must be defined, and it must be reasonable to assume that one of them is true. Note that Fisher was never concerned with the "truth" or "falsehood" of the null hypothesis; here, however, we must be convinced that either A or B is a good description of reality. Only then can we make a rational decision based on the sampled data. This requires stating the utility (wins or losses) of correctly and wrongly accepting A and B. From these utilities one can derive the required confidences for the decisions, which translate into the "probability of accepting B when A is actually the good description of reality" (→ alpha; type I error) and the "probability of accepting A when B is actually the good description of reality" (→ beta; type II error). Sometimes the complement 1 − beta, the power, is used instead of beta.
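
As an illustrative sketch of these two error rates (a hypothetical simple-vs-simple setup, not anything from the question: hypothesis A says μ = 0, hypothesis B says μ = 1, with known σ = 1 and n = 25), a short simulation can estimate alpha, beta, and the power for a fixed decision threshold:

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma = 25, 1.0
mu_A, mu_B = 0.0, 1.0        # the two hypothetical simple hypotheses
threshold = 0.33             # decision rule: accept B if the sample mean > 0.33

def p_accept_B(mu, trials=100_000):
    """Fraction of samples drawn under `mu` for which we would accept B."""
    means = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)
    return (means > threshold).mean()

alpha = p_accept_B(mu_A)       # P(accept B | A true) -- type I error
beta = 1 - p_accept_B(mu_B)    # P(accept A | B true) -- type II error
print(f"alpha ≈ {alpha:.3f}, beta ≈ {beta:.4f}, power ≈ {1 - beta:.4f}")
```

Choosing the threshold is exactly where the utilities enter: moving it upward lowers alpha but raises beta, and only a utility function says which trade-off is rational.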

It seems obvious that this procedure is not well suited to research questions. In research it is usually impossible to specify two precise hypotheses (what should B be, precisely?), and there is usually no way to state a utility function. But then it is not possible to select alpha and beta (or the power) so as to justify a rational decision. And if we do not know whether the decision is rational, why do we do all this?

A possible way out might be to use Bayesian statistics to obtain a posterior distribution and a utility function over the hypothesis space. The decision may then be based on the sign of the integral of the product posterior × utility. This would have the advantage that B need not be specified in advance (the posterior tells us what we should believe about it, given the data), and such decisions would be rational in a sense, but it would again forgo control of the error rates.
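
Written out (notation assumed here for illustration: θ ranges over the hypothesis space, u is the utility function, and D is the observed data), that decision rule would read:

$$ \text{accept } B \iff \int u(\theta)\, p(\theta \mid D)\, d\theta > 0. $$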

