Inter-scorer reliability would most likely be reported for a scale with what response format?
a. Open-ended
b. Multiple choice
c. True/False
d. Rating scale
In educational and psychological testing, reliability refers broadly to the consistency of measurement: the precision of the measurement process and the consistency of the scores a test produces.
Inter-Rater Reliability
When multiple people rate the same subjects or score the same test, comparable raters should produce the same scores. Inter-rater reliability therefore evaluates consistency of measurement across different people, and it can be used to calibrate raters, for example observers in an experiment.
Two major ways in which inter-rater reliability is used are calibrating observers who record behavior in an experiment and checking the consistency of scorers who evaluate test responses. Inter-rater reliability is also known as inter-observer reliability or inter-coder reliability.
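For categorical codes, the simplest index of inter-coder agreement is the proportion of items on which two observers assign the same code. The sketch below uses hypothetical data (the trial codes and variable names are invented for illustration):

```python
# Hypothetical example: two observers independently code the same 10 trials.
codes_a = ["on", "off", "on", "on", "off", "on", "off", "on", "on", "off"]
codes_b = ["on", "off", "on", "off", "off", "on", "off", "on", "on", "on"]

def percent_agreement(a, b):
    """Fraction of items on which two raters assign the same code."""
    matches = sum(x == y for x, y in zip(a, b))
    return matches / len(a)

print(percent_agreement(codes_a, codes_b))  # 0.8 (8 of 10 trials match)
```

Percent agreement is easy to interpret but does not correct for agreement expected by chance; chance-corrected indices such as Cohen's kappa are often preferred in practice.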
For example, if an IQ test is administered to several people whose true score is 120, it should yield a score of 120 for each of them. In practice, there will usually be some variation between people.
The basic strategy for determining inter-scorer reliability is to obtain a series of responses from a single client and to have these responses scored by two different individuals.
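With rating-scale data, the agreement between the two scorers' score series can be quantified with a correlation coefficient. The following is a minimal sketch using hypothetical scores on a 1-5 scale (the data and function names are invented for illustration; the Pearson correlation is one common choice of index):

```python
import math

# Hypothetical data: two scorers independently rate the same 6 responses
# from a single client on a 1-5 rating scale.
scorer_1 = [4, 2, 5, 3, 4, 1]
scorer_2 = [5, 2, 4, 3, 4, 2]

def pearson_r(x, y):
    """Pearson correlation between two raters' scores: an index of
    inter-scorer reliability for rating-scale data."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(round(pearson_r(scorer_1, scorer_2), 2))  # high agreement, near 1.0
```

A correlation near 1.0 indicates that the two scorers rank and scale the responses consistently; a low correlation signals that the scoring rules need clarification or the scorers need recalibration.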
Hence the response format for which inter-scorer reliability would most likely be reported is the rating scale (option d). Open-ended responses require judgment to score but are not a scale, while multiple-choice and true/false items are scored objectively, leaving no scorer disagreement to measure.