Applying Cognitive Principles to Model-Finding Output: The Positive Value of Negative Information

Tristan Dyer, Tim Nelson, Kathi Fisler, Shriram Krishnamurthi

ACM SIGPLAN Conference on Object-Oriented Programming Systems, Languages & Applications, 2022

Abstract

Model-finders, such as SAT/SMT-solvers and Alloy, are widely used both directly and embedded in domain-specific tools. They support both conventional verification and, unlike other verification tools, property-free exploration. To do so effectively, they must produce output that helps users with these tasks. Unfortunately, the output of model-finders has seen relatively little rigorous human-factors study.

Conventionally, these tools tend to show one satisfying instance at a time. Drawing inspiration from the cognitive science literature, we investigate two aspects of model-finder output: how many instances to show at once, and whether all instances must actually satisfy the input constraints. Using both controlled studies and open-ended talk-alouds, we show that there is benefit to showing negative instances in certain settings; the impact of multiple instances is less clear. Our work is a first step in a theoretically grounded approach to understanding how users engage cognitively with model-finder output, and how those tools might better support users in doing so.
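To make "satisfying instance" concrete for readers unfamiliar with model-finders, here is a minimal sketch (not from the paper) using the Z3 SMT solver from Python; the constraint and variable names are illustrative assumptions, not artifacts of this work:

```python
# Minimal illustration of model-finding: ask a solver for one
# satisfying instance of a set of constraints.
# Assumes the z3-solver package is installed (pip install z3-solver).
from z3 import Ints, Solver, sat

x, y = Ints("x y")
s = Solver()
s.add(x + y == 10, x > 0, y > 0)

if s.check() == sat:
    # Prints a single satisfying instance, e.g. [x = 1, y = 9],
    # even though many other instances also satisfy the constraints.
    print(s.model())
```

Conventional tools stop at one such instance per query; the paper studies whether showing several at once, or deliberately including non-satisfying ("negative") instances, helps users understand their specifications.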

Blog

See our post for a quick overview.

Paper

PDF


These papers may differ in formatting from the versions that appear in print. They are made available only to support the rapid dissemination of results; the printed versions, not these, should be considered definitive. The copyrights belong to their respective owners.