Raters' Knowledge of Students' Proficiency Levels as a Source of Measurement Error in Oral Assessments



Tanriverdi-Koksal F., Ortactepe D.

HACETTEPE UNIVERSITESI EGITIM FAKULTESI DERGISI-HACETTEPE UNIVERSITY JOURNAL OF EDUCATION, vol. 32, no. 3, pp. 581-599, 2017 (ESCI)

Abstract

There has been an ongoing debate on the reliability of oral exam scores, given the involvement of human raters and the factors that might account for differences in their scoring. This quasi-experimental study investigated the possible effect(s) of raters' prior knowledge of students' proficiency levels on their scoring in oral interview assessments. The study was carried out in a pre- and post-test design with 15 EFL instructors who served as raters in oral assessments at a Turkish state university. In both the pre- and post-test, the raters assigned scores to the same video-recorded oral interview performances of 12 students from three different proficiency levels. While rating the performances, the raters also provided verbal reports on their thought processes. The raters were not informed of the students' proficiency levels in the pre-test, whereas this information was provided in the post-test. According to the findings, the majority of the Total Scores were ranked lower or higher in the post-test. The thematic analysis of the raters' video-recorded verbal reports revealed that most of the raters referred to the students' proficiency levels while assigning scores in the post-test. The findings suggest that, besides factors such as the accent, nationality, and gender of test-takers and assessors, raters' prior knowledge of students' proficiency levels could be a variable that needs to be controlled for more reliable test results.
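For illustration only, the sketch below shows one common way such paired pre-/post-test ratings could be compared: a Wilcoxon signed-rank test on the scores a rater assigned to the same recorded performances before and after learning the students' proficiency levels. The abstract does not report the authors' statistical procedure, and all scores and variable names here are hypothetical assumptions.

```python
# Minimal sketch (not the authors' reported analysis): paired comparison of
# hypothetical pre- and post-test Total Scores for the same 12 recorded
# oral interview performances, rated by one rater.
from scipy.stats import wilcoxon

# Hypothetical Total Scores assigned to 12 performances in each condition
pre_test = [72, 65, 80, 58, 90, 77, 61, 84, 69, 73, 55, 88]   # level unknown to rater
post_test = [70, 68, 78, 52, 93, 75, 58, 87, 66, 70, 50, 91]  # level known to rater

# Wilcoxon signed-rank test: a nonparametric check for a systematic shift
# in paired scores between the two rating conditions
statistic, p_value = wilcoxon(pre_test, post_test)
print(f"Wilcoxon signed-rank: W = {statistic}, p = {p_value:.3f}")
```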