Answer:
- Each time you conduct a t-test there is a chance you will make a
Type I error. This error rate is typically set at 5% (α = 0.05).
- By running two t-tests on the same data you increase your chance
of making an error to roughly 10%.
- The formula for determining the new error rate for multiple
t-tests is not as simple as multiplying 5% by the number of tests:
the familywise error rate is 1 − (1 − α)^k, where k is the number
of tests.
- However, if you are only making a few comparisons, the results
are very similar to what simple multiplication would give.
- Accordingly, three t-tests would give about 15% (more precisely,
14.3%), and so on.
- These are unacceptable error rates.
- An ANOVA controls for these errors so that the Type I error rate
remains at 5%, and you can be more confident that any significant
result you find is not simply due to chance.
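The inflation described above can be checked numerically. This is a minimal sketch (the function name `familywise_error` is illustrative, not from the answer): for k independent tests each run at α = 0.05, the probability of at least one Type I error is 1 − (1 − α)^k.

```python
# Familywise Type I error rate for k independent tests at level alpha.
def familywise_error(k, alpha=0.05):
    """Probability of at least one Type I error across k independent tests."""
    return 1 - (1 - alpha) ** k

for k in (1, 2, 3):
    print(f"{k} test(s): familywise error = {familywise_error(k):.4f}")
```

For k = 2 this gives 0.0975 (roughly 10%), and for k = 3 it gives 0.142625, matching the 14.3% figure quoted above.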
There are three main assumptions, listed
here:
- The dependent variable is normally distributed in each
group being compared in the one-way ANOVA.
- So, for example, if we were
comparing three groups (e.g., amateur, semi-professional and
professional rugby players) on their leg strength, their leg
strength values (dependent variable) would need to be normally
distributed for the amateur group of players, normally distributed
for the semi-professionals and normally distributed for the
professional players.
- You can test for normality in SPSS Statistics.
- There is homogeneity of variances.
- This means the population variances in each group are
equal.
- If you use SPSS Statistics, Levene's Test for Homogeneity of
Variances is included in the output when you run a one-way ANOVA
in SPSS Statistics.
- Independence of observations.
- Finally, with these assumptions met, the ANOVA compares the groups
themselves.
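The answer describes this workflow in SPSS Statistics; as an illustration, the same assumption checks and the one-way ANOVA can be sketched in Python with SciPy. The leg-strength values below are made up for the rugby example, and the group names are placeholders.

```python
# Sketch of the one-way ANOVA workflow above, using SciPy instead of SPSS.
from scipy import stats

# Hypothetical leg-strength scores for the three groups in the example.
amateur = [150, 160, 155, 148, 162, 158]
semi_pro = [165, 170, 168, 172, 169, 171]
pro = [180, 185, 178, 182, 184, 181]

# Assumption: normality of the dependent variable within each group
# (Shapiro-Wilk test; large p suggests no evidence against normality).
for name, grp in [("amateur", amateur), ("semi_pro", semi_pro), ("pro", pro)]:
    stat, p = stats.shapiro(grp)
    print(f"{name}: Shapiro-Wilk p = {p:.3f}")

# Assumption: homogeneity of variances (Levene's test, the same test
# SPSS reports in its one-way ANOVA output).
stat, p = stats.levene(amateur, semi_pro, pro)
print(f"Levene's test p = {p:.3f}")

# The one-way ANOVA itself, comparing leg strength across the groups.
f_stat, p_value = stats.f_oneway(amateur, semi_pro, pro)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A single `f_oneway` call replaces the three pairwise t-tests, keeping the overall Type I error rate at the chosen 5% level.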