Reply to the following Discussion Question with a substantive post - one that demonstrates that you understand the mathematical concepts and provides an explanation rather than just making a simple statement about the topic. How does a researcher decide whether to choose a 1%, 5%, or 10% level of significance for a hypothesis test that they are running? Explain by giving examples.
The choice of significance level depends on how much protection against a false positive (a Type I error) the researcher needs. The significance level, alpha, is the probability of rejecting a true null hypothesis, so a smaller alpha means stronger evidence is required before the null hypothesis is rejected. If the researcher uses a 1% level rather than 5%, a statistically significant result is more convincing, because the chance that it arose purely from sampling variation is smaller. For this reason, settings where a false positive is costly, such as medical trials or safety testing, often demand a 1% level.

There is a tradeoff, however: lowering alpha makes it harder to detect a real effect, which raises the probability of a Type II error unless the sample size is increased. In some situations it is impractical to collect enough data to work at the 1% level, and researchers settle for 5% instead. Conversely, in exploratory work where missing a real effect is the bigger concern, a researcher may use a 10% level; results that are significant at 10% would not necessarily be significant at 5% or 1%.

For example, suppose a researcher wants to test whether the mean diameter of a product from a manufacturing process is 5 cm. After collecting sample data, the test might yield a p-value of about 0.02: the result is significant at the 5% level but not at the 1% level, so the conclusion depends directly on the chosen alpha.

In short, the selection of a significance level rests on the consequences of a Type I versus a Type II error, the importance and reliability required of the conclusion, the available sample size and resources, and the researcher's judgment. In practice, most researchers use the 5% level by default.
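To make the diameter example concrete, here is a minimal sketch in Python using SciPy's one-sample t-test. The measurements are hypothetical values constructed so that the p-value falls between 0.01 and 0.05, purely to illustrate how the same data can be significant at one level but not another.

```python
from scipy import stats

# Hypothetical sample of 10 measured diameters (cm); the target value is 5 cm.
diameters = [5.02, 5.05, 4.98, 5.07, 5.03, 5.06, 5.01, 5.04, 4.99, 5.05]

# Two-tailed one-sample t-test of H0: mu = 5.0 against Ha: mu != 5.0.
t_stat, p_value = stats.ttest_1samp(diameters, popmean=5.0)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")  # p is roughly 0.011 for this sample

# The same p-value leads to different conclusions at different alpha levels.
for alpha in (0.10, 0.05, 0.01):
    decision = "reject H0" if p_value < alpha else "fail to reject H0"
    print(f"alpha = {alpha:.2f}: {decision}")
```

For this sample the null hypothesis is rejected at the 10% and 5% levels but not at the 1% level, which is exactly the situation described above: the decision follows from the alpha chosen before the test, not from the data alone.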