If humans reason by building mental models, i.e., mental simulations of situations in the real world, then they should exhibit certain systematic patterns: some inferences should be easy, and others difficult. Cognitive scientists have developed computational models to mimic those patterns across a wide variety of domains. Many of those computational models are made freely available below. Note: To run the programs yourself, you’ll need a Lisp interpreter (such as LispWorks or Clozure CL).
mReasoner is a unified computational system that implements the model theory. It is written in Common Lisp, and it is a psychologically plausible inferential engine for syllogistic, quantificational, monadic, spatiotemporal, causal, and sentential reasoning. It can yield both deductive and probabilistic inferences, and it’s been used to model over two dozen datasets on human reasoning.
This program (v15) performs several kinds of sentential reasoning (based on sentential connectives such as if, or, and, and not) according to the theory of mental models (see Khemlani, Byrne, & Johnson-Laird, 2018). It also builds statements and verifies them against factual and counterfactual possibilities (see Johnson-Laird, Byrne, & Khemlani, 2023).
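As a rough illustration of the core idea (a minimal Python sketch, not mSentential itself), mental models represent only the possibilities in which an assertion holds, and by default only what is true in each possibility:

```python
# Illustrative sketch: mental models of "A or B" (inclusive disjunction).
from itertools import product

def fully_explicit_models(atoms, truth_fn):
    """All assignments of the atoms under which the assertion is true."""
    return [dict(zip(atoms, vals))
            for vals in product([True, False], repeat=len(atoms))
            if truth_fn(dict(zip(atoms, vals)))]

models = fully_explicit_models(["A", "B"], lambda m: m["A"] or m["B"])

# Mental models omit what is false, keeping only the true atoms
# in each possibility.
mental = [sorted(atom for atom, val in m.items() if val) for m in models]
print(mental)  # [['A', 'B'], ['A'], ['B']]
```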
mAbducer solves universal rearrangement problems containing a single static loop, and it automatically programs two functions for solving any instance of a class of problems, such as reversing the order of a list, sorting palindromes, and parity-sorts (the inverse of riffle shuffles). Rearrangement problems can be set in the “railway” environment (described in Khemlani, Mackiewicz, Bucciarelli, & Johnson-Laird, 2013).
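To convey the flavor of the domain (a toy Python sketch, not mAbducer's own code, assuming a stack-like siding as in the railway environment), reversing a train follows from moving every car through the siding in a single loop:

```python
# Railway-style rearrangement: cars move from the left track onto a
# siding (a stack), then out to the right track, reversing their order.
def reverse_via_siding(left_track):
    siding, right_track = [], []
    for car in left_track:                # a single loop over the cars
        siding.append(car)                # push each car onto the siding
    while siding:
        right_track.append(siding.pop())  # popping restores them reversed
    return right_track

print(reverse_via_siding([1, 2, 3, 4]))  # [4, 3, 2, 1]
```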
Contains a command-line Python implementation of the Wason selection task algorithm proposed in Johnson-Laird and Wason's (1970) paper, “A theoretical analysis of insight into a reasoning task,” adapted to incorporate the mental model theory.
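The logic of the task itself can be sketched in a few lines (a toy rendition, not the repository's code): given a rule “if p then q,” only the cards whose hidden side could falsify the rule need to be turned over:

```python
# Selection task: turn over only potential falsifiers of "if p then q",
# i.e., cards showing p or showing not-q.
def cards_to_turn(cards, is_p, is_not_q):
    return [c for c in cards if is_p(c) or is_not_q(c)]

# Classic materials: "if a card has a vowel on one side, it has an even
# number on the other."
cards = ["A", "K", "4", "7"]
vowel = lambda c: c in "AEIOU"
odd_number = lambda c: c.isdigit() and int(c) % 2 == 1

print(cards_to_turn(cards, vowel, odd_number))  # ['A', '7']
```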
PRISM is a computational cognitive model that can be used to simulate and explain how preferred mental models are constructed, inspected, and varied in a spatial array that functions as if it were a spatial working memory. A spatial focus inserts tokens into the array, inspects the array to find new spatial relations, and relocates tokens in the array to generate alternative models of the problem description, if necessary.
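A bare-bones sketch of such an array (illustrative Python, not PRISM itself) shows how inserting tokens relative to one another lets inspection read off relations that were never stated as premises:

```python
# A spatial array as a list of tokens; a "focus" inserts tokens relative
# to existing ones, and inspection discovers emergent relations.
def insert_right_of(array, new, anchor):
    """Place `new` immediately to the right of `anchor` (one preferred model)."""
    array.insert(array.index(anchor) + 1, new)

def left_of(array, a, b):
    return array.index(a) < array.index(b)

model = ["circle"]
insert_right_of(model, "triangle", "circle")   # "triangle is right of circle"
insert_right_of(model, "square", "triangle")   # "square is right of triangle"

# Inspection yields a relation not given in the premises:
print(left_of(model, "circle", "square"))  # True
```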
Note: This computational model is deprecated in favor of mReasoner and mSentential, available above.
This computational model implements a revised version of a psychological theory of propositional reasoning originally developed by Johnson-Laird and Ruth Byrne (see their book Deduction). It postulates four stages of reasoning performance (increasing in accuracy): the first three are psychological, and the fourth is an exercise in artificial intelligence.
Note: This computational model is deprecated in favor of mReasoner, available above.
This program models the psychological theory of syllogistic reasoning developed by Johnson-Laird as a successor to the theory sketched in Deduction (Johnson-Laird & Byrne, 1991).
This is a program for constructing mental models of instances of concepts. Its input is a set of fully explicit models, which it then seeks to simplify. (Written by Phil Johnson-Laird in April 2007).
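One plausible simplification step can be sketched as follows (a hedged Python illustration; the program's actual procedure may differ): two fully explicit models that differ only in the value of a single attribute can be merged into one model that omits that attribute:

```python
# Merge two fully explicit models that differ on exactly one attribute;
# the differing attribute is irrelevant to the concept and can be dropped.
def merge_pair(m1, m2):
    if set(m1) != set(m2):
        return None
    diff = [k for k in m1 if m1[k] != m2[k]]
    if len(diff) == 1:
        return {k: v for k, v in m1.items() if k != diff[0]}
    return None

# Fully explicit models of the concept "A and (B or not B)":
m1 = {"A": True, "B": True}
m2 = {"A": True, "B": False}
print(merge_pair(m1, m2))  # {'A': True} -- B is irrelevant to the concept
```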
The program tries to reverse engineer simple electrical circuits in the way that Dr. N.Y. Louis Lee found naive human reasoners do (see Lee & Johnson-Laird, under review).
The program makes spatial deductions using a compositional semantics and a bottom-up, backtracking parser. The program constructs only one falsifying model. It does not allow premises to assert that one item is in the same place as another, but it does put items in the same place temporarily in the course of searching for models that refute conclusions.
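The counterexample strategy can be sketched in Python (an illustration, not the original Lisp program): a conclusion follows validly only if no model of the premises falsifies it, so the search can stop at the first falsifying model it finds:

```python
# Search one-dimensional arrangements for a single falsifying model of a
# conclusion "x left of y", given premises of the same form.
from itertools import permutations

def falsifying_model(items, premises, conclusion):
    """Return the first arrangement that satisfies every premise but
    falsifies the conclusion, or None if the conclusion is valid."""
    for order in permutations(items):
        pos = {item: i for i, item in enumerate(order)}
        premises_hold = all(pos[a] < pos[b] for a, b in premises)
        conclusion_holds = pos[conclusion[0]] < pos[conclusion[1]]
        if premises_hold and not conclusion_holds:
            return list(order)
    return None

# A left of B, B left of C; "A left of C" is valid -- no falsifier exists.
print(falsifying_model("ABC", [("A", "B"), ("B", "C")], ("A", "C")))  # None
# "C left of A" is invalid -- a falsifying model is found.
print(falsifying_model("ABC", [("A", "B"), ("B", "C")], ("C", "A")))
```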
It makes simple temporal deductions using a compositional semantics driven by a bottom-up parser to update models (in the form of arrays). The program constructs all the possible models for premises as it interprets them, including the multiple models for indeterminacies. Hence, it does not need to search for alternative models in order to test for validity. If the number of models exceeds its *capacity*, then the program tries an alternative strategy in which it uses the question to search for just those premises that are relevant to the answer. In this way, it ignores irrelevant premises and deals with relevant ones in a referentially coherent order.
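The multiple-models idea can be conveyed in a short Python sketch (not the original program): each premise “x before y” constrains the set of event orderings, so an indeterminate description yields several models at once, and validity needs no separate search for alternatives:

```python
# Build every ordering of events consistent with "before" premises;
# indeterminate premises yield multiple models.
from itertools import permutations

def models_for(premises):
    events = sorted({e for premise in premises for e in premise})
    return [order for order in permutations(events)
            if all(order.index(a) < order.index(b) for a, b in premises)]

# "A before B" and "C before B" leave A and C indeterminate: two models.
print(models_for([("A", "B"), ("C", "B")]))
# [('A', 'C', 'B'), ('C', 'A', 'B')]
```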
Marcin Hitczenko developed a program to build models from recursive and iterative quantifier premises, e.g., “Anyone who loves someone loves Bob.” This system builds quantified relational mental models, and it is written in Common Lisp. It provides a computational account of the models that underlie iterative reasoning, as explored by Cherubini and Johnson-Laird in their (2004) paper. The source code of the program, as well as documentation on how to use it, is available in the package below.
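The iterative consequences of such a premise can be sketched in Python (an illustration, not Hitczenko's Lisp code): “anyone who loves someone loves Bob” keeps adding loves-Bob facts to the model until it reaches a fixed point:

```python
# Close a set of loves(x, y) facts under the premise
# "anyone who loves someone loves Bob".
def close_under_premise(loves):
    loves = set(loves)
    while True:
        new = {(x, "Bob") for (x, y) in loves} - loves
        if not new:          # fixed point: the premise adds nothing further
            return loves
        loves |= new

facts = {("Ann", "Beth"), ("Bob", "Ann")}
print(sorted(close_under_premise(facts)))
# [('Ann', 'Beth'), ('Ann', 'Bob'), ('Bob', 'Ann'), ('Bob', 'Bob')]
```

Note the recursive twist: because Bob loves someone, the closure also makes Bob love Bob.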