OMBEA is an audience response system (ARS). You can use it in educational contexts to understand what learners know and how they are learning, or in a consultative context where you ask questions of an audience to gain insight into an issue or situation. We work with it every day and are passionate about how it can improve learning outcomes, help presenters communicate with and engage their audiences even better, and serve as a key part of an overall strategy to improve teaching and communication in any organisation.

Do MCQs only test shallow knowledge and recall of facts?

Sometimes people raise objections to the use of MCQs, arguing that they encourage guessing at the correct answer and test only the shallow recall of facts rather than deep knowledge of a subject. Indeed, the most conspicuous use of MCQs in recent popular culture is on the game show Who Wants to Be a Millionaire?, where the ‘ask the audience’ option gives the contestant the results of an audience poll on the answer. Who Wants to Be a Millionaire? is, however, a poor example of what MCQs can do. For the rest of this blog we will look at the evidence that, far from testing knowledge in a shallow way or encouraging guessing at answers, MCQs can engage students at a very deep level with the material they are learning.

The efficacy of MCQs as an assessment tool has been the subject of much research and debate among researchers specialising in attainment. Studies have compared the usefulness of MCQ testing with other forms of testing, such as free-text answers. One such study compared the grade outcomes of students tested with MCQs with those tested using constructed-response (CR) questions, where students write an answer to an open-ended essay question (Hickson, Reed and Sander, 2012). The students were first-year university economics students in the USA, and the authors analysed 7,754 assessment records, so the sample is more than large enough to be robust. They found that students tested with MCQs did as well as the students tested with CR, and that variations between the two formats were no greater than the variations seen between different tests. They concluded:

Our analysis suggests that switching from an all-CR assessment to an all-MC assessment would produce grade variations that are similar to the differences that are observed for students across different tests.

The results are somewhat counterintuitive here. Surely the students asked to write in-depth essay-type answers would develop a better understanding of the subject than those tested with MCQs? It appears not, and Hickson et al. point out that MCQs are a much more time-efficient and cost-effective method of testing than constructed-response questions, so the use of CR needs to be carefully justified.

MCQs are effective at testing prior knowledge of a topic

Studies such as this are useful reminders that well-constructed MCQs can be used very effectively and can test far more than recall of facts or shallow knowledge of a topic. If MCQs are written in such a way that they require reasoning and the application of theory to practice, then they can be very useful as both a summative and a formative strategy. A paper by Ozuru and others (2013) compared free-text answers with MCQs for testing comprehension of a text. The text was a piece about evolutionary biology and covered some difficult concepts and ideas. The results ‘suggest that open-ended and multiple-choice format questions measure different aspects of comprehension processes’ (p. 215). Specifically, they found that free-text responses were correlated with the quality of self-explanations, while MCQ responses were correlated with the level of prior knowledge related to the text. The findings of the Ozuru paper add to our understanding of what MCQs do, and like Hickson et al. they found MCQs a very useful assessment strategy for testing what students know about a particular topic.

So we can see that using an audience response system for MCQs is far more than a ‘Who Wants to Be a Millionaire’ gimmick: used correctly, MCQs can test deep knowledge of a topic. When used as a formative assessment strategy, with students getting on-the-spot feedback on how they did and a chance to reflect on how they answered, MCQs are a potent educational tool in complex and demanding knowledge domains.

References

Hickson, S., Reed, W., & Sander, N. (2012). Estimating the effect on grades of using multiple-choice versus constructed-response questions: Data from the classroom. Educational Assessment, 17(4), 200-213.

Ozuru, Y., Briner, S., Kurby, C. A., & McNamara, D. S. (2013). Comparing comprehension measured by multiple-choice and open-ended questions. Canadian Journal of Experimental Psychology/Revue canadienne de psychologie expérimentale, 67(3), 215-227.