Computational Brain and Behavior, vol. 4, no. 2, pp. 178-190, 2021 (Scopus)
© 2020, Society for Mathematical Psychology.

We conducted three experiments designed to simultaneously evaluate the effects on recognition accuracy of adding items during study and adding items during test. The recognition memory list-length effect (LLE) is small and unreliable (Annis et al. 2015; Dennis et al. 2008), whereas additional test trials produce a robust decrease in accuracy, termed output interference (OI; Criss et al. 2011; Kılıç et al. 2017). This is puzzling: why should the size of the effect of additional stimulus exposures depend on whether the item was studied or tested (Malmberg et al. 2012)? We found a decrease in accuracy when stimulus exposures were added at any stage, but the harm of adding items during study was smaller than the output interference produced by testing. In addition, feedback presented during test served as a moderator: when feedback was given, OI was diminished and the LLE increased. Within the framework of our model, this pattern suggests that testing without feedback often results in additional information being encoded into a trace originally formed during study, and that feedback reduces the tendency to update traces during test. Several possible accounts of how feedback reduces trace updating are discussed.
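To make the trace-updating interpretation concrete, the following is a minimal toy simulation, not the authors' model: it assumes a simple global-matching recognition rule and illustrative parameter values (N_FEATURES, ENCODING_NOISE, UPDATE_BLEND, p_update are all assumptions). It shows how blending test probes into stored study traces can lower accuracy over test positions (OI), and how a lower update probability, as feedback is hypothesized to produce, shrinks that decline.

```python
# Toy sketch (not the authors' model): trace updating during test as a source of
# output interference. All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES = 20        # length of each item vector (assumed)
ENCODING_NOISE = 0.5   # noise added when a study trace is stored (assumed)
UPDATE_BLEND = 0.5     # how strongly a test probe overwrites a matched trace (assumed)
CRITERION = 0.0        # old/new decision criterion on the familiarity score (assumed)

def simulate(n_study=40, n_test=40, p_update=0.3):
    """Run one study-test cycle; return a boolean accuracy vector over test positions."""
    items = rng.normal(size=(n_study + n_test, N_FEATURES))
    study_items = items[:n_study]
    # Store one noisy trace per studied item.
    traces = study_items + rng.normal(scale=ENCODING_NOISE, size=study_items.shape)
    # Test list: half old (studied) items, half new lures, in random order.
    old_idx = rng.choice(n_study, size=n_test // 2, replace=False)
    probes = np.vstack([study_items[old_idx], items[n_study:n_study + n_test // 2]])
    is_old = np.array([True] * (n_test // 2) + [False] * (n_test // 2))
    order = rng.permutation(n_test)
    probes, is_old = probes[order], is_old[order]

    correct = np.zeros(n_test, dtype=bool)
    for t, probe in enumerate(probes):
        sims = traces @ probe / N_FEATURES      # match of the probe to every trace
        familiarity = sims.mean()               # global-matching familiarity signal
        say_old = familiarity > CRITERION
        correct[t] = (say_old == is_old[t])
        # Trace updating: with probability p_update, blend the probe's features into
        # the best-matching study trace, which can corrupt it for later test trials.
        if rng.random() < p_update:
            j = int(np.argmax(sims))
            traces[j] = (1 - UPDATE_BLEND) * traces[j] + UPDATE_BLEND * probe
    return correct

def accuracy_by_half(p_update, n_runs=500):
    """Mean accuracy in the first vs. second half of the 40-trial test list."""
    acc = np.mean([simulate(p_update=p_update) for _ in range(n_runs)], axis=0)
    return acc[:20].mean(), acc[20:].mean()

for label, p in [("no feedback (high updating)", 0.5), ("feedback (low updating)", 0.1)]:
    first, second = accuracy_by_half(p)
    print(f"{label}: first half = {first:.3f}, second half = {second:.3f}")
```

In this sketch, the drop from the first to the second half of the test list is larger when the update probability is high, which is the qualitative pattern the abstract attributes to testing without feedback; the specific mechanism and numbers are placeholders, not results from the paper.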