Ambiguity detection and mental models of discourse

A new study by Calvin Laursen, Timothy Slattery, Martin Vassilev, and Jan Wiener, all from Bournemouth University, examines how people detect ambiguities in naturalistic texts. They show that participants often fail to detect an ambiguity in a description when the description is consistent with a previously constructed mental model. The abstract of their paper is here:

Previous research on mental models (e.g., Goodwin & Johnson-Laird, 2005; Rips, 1994; Johnson-Laird, 2001; Rauh et al., 2005) has focused on which, how, and why mental models are created by readers; however, it is not clear whether people notice, without being prompted, if/when there is more than one interpretation (ambiguity). We present a novel naturalistic reading methodology for studying readers’ ability to “detect” whether an object/subject order was ambiguous. This paper presents a methodology for investigating ambiguity “detection” that uses premises similar to those in mental models research (e.g., Goodwin & Johnson-Laird, 2005; Nejasmic, Bucher & Knauff, 2015), which were embedded into paragraphs. Our findings are consistent with prior research (e.g., Johnson-Laird, 1994): participants often fail to detect ambiguity if the premises are consistent with two viable models. We found that older participants (60+) perform no better or worse than younger participants in detecting ambiguity or in their ability to make inferences from mental models. However, consistent with prior research, we found that older participants took significantly longer both to read and to answer. Through the use of paragraph stimuli, this paper replicates prior findings of people’s “blindness” to alternative models and shows that, although slower, older readers do not differ from younger readers in their ability to construct and make inferences from mental models.

and a preprint is available for download through PsyArXiv.