In: Psychology
The validity of a study refers to the strategies a researcher uses to ensure the data collected are true and certain. What are the steps GCU doctoral learners must take to ensure the validity of a qualitative research study? Give examples.
Validity refers to how well a test measures what it purports to measure. For example, if the results of a personality test claimed that a very shy person was, in fact, outgoing, the test would be invalid.
Types of Validity
1. Face Validity ascertains that the measure appears to be assessing the intended construct under study. It is easy to assess. Although this is not a very “scientific” type of validity, it may be an essential component in enlisting the motivation of stakeholders. For example: If a measure of art appreciation is created, all of the items should be related to the different components and types of art. If the questions ask only about historical time periods, with no reference to any artistic movement, stakeholders may not be motivated to give their best effort or invest in the measure because they do not believe it is a true assessment of art appreciation.
2. Construct Validity is used to ensure that the measure actually measures what it is intended to measure (i.e., the construct), and not other variables. Using a panel of “experts” familiar with the construct is one way this type of validity can be assessed. For example: Suppose a questionnaire is designed to test awareness among farmers, but the questions are written with complicated wording and phrasing. The test can then inadvertently become a test of reading comprehension rather than a test of awareness among farmers. It is important that the measure assesses the intended construct rather than an extraneous factor.
3. Criterion-Related Validity is used to predict future or current performance — it correlates test results with another criterion of interest. For example: Suppose a physics program designs a measure to assess cumulative student learning throughout the major. The new measure could be correlated with a standardized measure of ability in the discipline, such as an ETS field test or the GRE subject test. The higher the correlation between the established measure and the new measure, the more faith stakeholders can have in the new assessment tool.
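To make the correlation idea concrete, here is a minimal sketch of how the physics example might be checked numerically. The student scores below are entirely invented for illustration; in practice you would use real scores on the new measure and on the established test (e.g., an ETS field test).

```python
# Hypothetical example: correlating a new physics assessment with an
# established standardized measure. All scores below are invented.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented scores for six students on the new measure and the established test
new_measure = [72, 85, 90, 65, 78, 88]
established = [70, 82, 95, 60, 75, 91]

print(round(pearson_r(new_measure, established), 2))  # → 0.98
```

A coefficient near 1.0, as here, would give stakeholders confidence that the new assessment tracks the established criterion; a coefficient near 0 would suggest the new measure is capturing something else.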
4. Formative Validity, when applied to outcomes assessment, is used to assess how well a measure is able to provide information that helps improve the program under study. For example: When designing a rubric for a history course, one could assess students’ knowledge across the discipline. If the measure can show that students are lacking knowledge in a certain area — for instance, the Civil Rights Movement — then that assessment tool is providing meaningful information that can be used to improve the course or program requirements.
5. Sampling Validity (similar to content validity) ensures that the measure covers the broad range of areas within the concept under study. Not everything can be covered, so items need to be sampled from all of the domains. This may need to be completed using a panel of “experts” to ensure that the content area is adequately sampled. Additionally, a panel can help limit “expert” bias (i.e., a test reflecting what an individual personally feels are the most important or relevant areas). For example: When designing an assessment of learning in the theatre department, it would not be sufficient to cover only issues related to acting. Other areas of theatre, such as lighting, sound, and the functions of stage managers, should all be included. The assessment should reflect the content area in its entirety.
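The sampling procedure above can be sketched in a few lines: draw the same number of items from every content domain so no single area (such as acting) dominates the assessment. The item bank and domain names below are invented placeholders for the theatre example.

```python
# Hypothetical sketch: sampling items evenly from each content domain so a
# theatre assessment covers the whole discipline. Item IDs are invented.
import random

item_bank = {
    "acting": ["act_01", "act_02", "act_03", "act_04"],
    "lighting": ["light_01", "light_02", "light_03"],
    "sound": ["sound_01", "sound_02", "sound_03"],
    "stage management": ["mgmt_01", "mgmt_02", "mgmt_03"],
}

def sample_items(bank, per_domain, seed=0):
    """Pick the same number of items from every domain of the bank."""
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    return {domain: rng.sample(items, per_domain)
            for domain, items in bank.items()}

test_form = sample_items(item_bank, per_domain=2)
# Every domain contributes exactly two items to the assessment
assert all(len(items) == 2 for items in test_form.values())
```

In a real study, the per-domain quotas would typically be set (and the drawn items reviewed) by the expert panel, which also helps limit the “expert” bias mentioned above.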