Abstract

It has been argued that the ability to distinguish patterns (i.e., "sameness" from "differentness") holds adaptive value for humans and animals (Wasserman, Young, & Cook, 2004). Indeed, the apparent need to identify patterns is so strong as to produce relatively frequent (and newsworthy) accounts of pareidolia, such as the "Face on Mars" and religious images burned into grilled cheese sandwiches. The broad term for perceiving patterns in random or meaningless data, where such patterns are neither present nor intended, is apophenia (cf. Carroll, 2003). To the extent that students perceive underlying patterns (whether intentional or not) within the answer keys of multiple-choice exams, outcomes unrelated to content knowledge seem likely. Although numerous exam formats are available, multiple-choice exams are particularly popular among students and teachers, though for differing reasons. Students tend to prefer multiple-choice questions because recognition-based performance is often superior to recall-based performance (Hart, 1965). For teachers, advantages of multiple-choice exams include ease of construction: many instructor resources linked to textbooks include test generators, which automate a great deal of the process of test building. This benefit extends to online learning management systems such as Blackboard, given that most textbook publishers provide textbook-specific test-bank modules that can be imported directly into course shells. In addition, multiple-choice exams are almost always less effortful and less time-consuming to score than written-response exams. Despite these apparent advantages, however, a great deal of research has shown that there are many pitfalls to avoid when constructing effective multiple-choice exams (Haladyna & Downing, 1989; Haladyna, Downing, & Rodriguez, 2002; Hogan & Murphy, 2007). 
Among these pitfalls are the construction biases that can emerge when instructors over-represent or under-use certain response options. As Mitchell (1974) showed, some response alternatives tend to be either over-represented (often "C") or under-represented (often "A") in exams. More recently, Attali and Bar-Hillel (2003) showed that both test takers and test makers appear to be biased, by a ratio of 3 or 4 to 1, in favor of answer choices located centrally among the alternatives. This bias produces answer keys that are unbalanced in the sense that not all answer choices are equally represented. However, evidence regarding whether this concern is important is both mixed and outdated.