p-95-07

Abduction, Experience, and Goals: A Model of Everyday Abductive Explanation
David B. Leake
The Journal of Experimental and Theoretical Artificial Intelligence, 7:407-428, 1995.

Abstract

Many abductive understanding systems generate explanations by a backwards-chaining process that is neutral both to the explainer's previous experience in similar situations and to why the explainer is attempting to explain. This article examines the relationship of such models to an approach that uses case-based reasoning to generate explanations. In this case-based model, the generation of abductive explanations is focused by prior experience and by goal-based criteria reflecting current information needs. The article analyzes the commitments and contributions of this case-based model as applied to the task of building good explanations of anomalous events in everyday understanding. The article identifies six central issues for abductive explanation, compares how these issues are addressed in traditional and case-based explanation models, and discusses benefits of the case-based approach for facilitating the generation of plausible and useful explanations in domains that are complex and imperfectly understood.

A PostScript file of the full paper is available electronically. To get a copy by anonymous ftp, see ftp://ftp.cs.indiana.edu/pub/leake/README. On the web, open the URL ftp://ftp.cs.indiana.edu/pub/leake/INDEX.html.