Thinking is founded on models of possibilities: An interview with Johnson-Laird
Marco Ragni recently interviewed Phil Johnson-Laird for an article in KI – Künstliche Intelligenz. Here’s a small excerpt from the interview:
MR: From your perspective—what are the current limitations of AI approaches to explaining human reasoning?
PJL: When smart people use their intuitions to develop algorithms, the results can be startling. But psychologists learn after a few experiments that intuitions about human reasoning are often wrong. So AI can be both brilliant and irrelevant to cognitive science. Its biggest discovery about cognitive reasoning was the need for systems to be able to advance tentative conclusions and to withdraw them should they turn out to be wrong, i.e., the need for nonmonotonic logic. Yet, in my view, these logics overlook a critical aspect of human reasoning.
Here’s an illustration. A friend and I were sitting outside a restaurant nearly opposite Picasso’s Château in Provence. Two other friends had gone to get the car, which we’d parked elsewhere. And we inferred that they’d be back in about 10 minutes. After 20 minutes, there was no sign of them. An AI nonmonotonic logic would allow us to withdraw our conclusion, and to amend our premises. Well, we did withdraw our conclusion, but by far the most important part of our thinking was to come up with a plausible explanation of what had happened to our friends. We needed it in order to decide our best course of action, i.e., whether to walk to the car or to stay where we were. We inferred that our friends had had difficulty starting the car—this problem had happened before. Such abductions are not part of nonmonotonic logics. Ours enabled us to infer that our best course of action was to stay where we were, and to wait. After another 5 minutes or so, sure enough, the car came spluttering into view—it had needed a tow to get it started.
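The two steps in the anecdote can be sketched in code. This is a minimal illustration, not Johnson-Laird's model or any standard nonmonotonic formalism: the function names, the plausibility numbers, and the withdrawal tolerance are all assumptions made up for the example. It shows a default conclusion being withdrawn when evidence contradicts it (the nonmonotonic step), and then a most-plausible explanation being selected to guide action (the abductive step that the interview argues such logics omit).

```python
# Hypothetical sketch: nonmonotonic withdrawal plus abduction.
# Names, priors, and the tolerance are invented for illustration only.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Explanation:
    cause: str          # a candidate cause of the delay
    prior: float        # plausibility, informed by past experience
    best_action: str    # what to do if this explanation is right

def reason(elapsed_minutes: float, expected_minutes: float,
           explanations: list[Explanation]) -> tuple[bool, Explanation | None]:
    """Hold the default conclusion until evidence contradicts it; then abduce."""
    # Default (tentative) conclusion: the friends will be back roughly on time.
    # The factor-of-two tolerance is an arbitrary assumption.
    if elapsed_minutes < 2 * expected_minutes:
        return True, None
    # Nonmonotonic step: withdraw the conclusion...
    # Abductive step: ...and pick the most plausible explanation of the delay.
    best = max(explanations, key=lambda e: e.prior)
    return False, best

explanations = [
    Explanation("car would not start (has happened before)", 0.6, "stay and wait"),
    Explanation("they got lost walking back", 0.25, "stay and wait"),
    Explanation("they drove to the wrong restaurant", 0.15, "walk to the car"),
]

holds, best = reason(elapsed_minutes=20, expected_minutes=10,
                     explanations=explanations)
print(holds)             # the 10-minute conclusion is withdrawn (False)
print(best.best_action)  # action implied by the most plausible cause
```

Run against the anecdote's numbers, the default conclusion fails after 20 minutes, and the highest-prior explanation (trouble starting the car) selects "stay and wait" — which is what actually happened. The point of the sketch is structural: withdrawal alone says nothing about what to do next; it is the abduced explanation that determines the course of action.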