%TI Remembering Why to Remember: Performance-Guided Case-Base Maintenance %AU David Leake %AU David Wilson %PU Advances in Case-Based Reasoning: Proceedings of EWCBR-2K %OR INDAI %LT p-00-03 %AV url http://www.cs.indiana.edu/~leake/p-00-03.pdf %YR 2000 %AB An important focus of recent CBR research is on how to develop strategies for achieving compact, competent case-bases, as a way to improve the performance of CBR systems. However, compactness and competence are not always good predictors of performance, especially when problem distributions are non-uniform. Consequently, this paper argues for developing methods that tie case-base maintenance more directly to performance concerns. The paper begins by examining the relationship between competence and performance, discussing the goals and constraints that should guide addition and deletion of cases. It next illustrates the importance of augmenting competence-based criteria with quantitative performance-based considerations, and proposes a strategy for closely reflecting adaptation performance effects when compressing a case-base. It then presents empirical studies examining the performance tradeoffs of current methods and the benefits of applying fine-grained performance-based criteria to case-base compression, showing that performance-based methods may be especially important for task domains with non-uniform problem distributions. %KW case-based reasoning %KW case-base maintenance %KW case-base competence %TI Capture, Storage and Reuse of Lessons about Information Resources: Supporting Task-Based Information Search %AU David Leake %AU Travis Bauer %AU Ana Maguitman %AU David Wilson %PU Proceedings of the AAAI-00 Workshop on Intelligent Lessons Learned Systems %OR INDAI %LT p-00-02 %AV url http://www.cs.indiana.edu/~leake/p-00-02.pdf %YR 2000 %AB Learning how to find relevant information sources is an important part of solving novel problems and mastering new domains. 
This paper introduces work on developing a lessons learned system that supports task-driven research by (1) automatically storing cases recording which information resources researchers consult during their decision-making; (2) using these cases to proactively suggest information resources to consult in similar future task contexts; and (3) augmenting existing information resources by providing tools to support users in elucidating and capturing records of useful information that they have found, for future reuse. Our approach integrates aspects of case-based reasoning, ``just-in-time'' task-based information retrieval, and concept mapping. We describe the motivations for this work and how lessons learned systems for suggesting research resources complement those that store task solutions. We present an initial system implementation that illustrates the desired properties, and close with a discussion of the primary questions and open issues to address. %KW case-based reasoning %KW lessons learned systems %KW knowledge management %KW just-in-time retrieval %KW web browsing %TI Case-Based Recommender Components for Scientific Problem-Solving Environments %AU David Wilson %AU David Leake %AU Randall Bramley %PU Proceedings of the Sixteenth IMACS World Congress %OR INDAI %LT p-00-01 %AV url http://www.cs.indiana.edu/~leake/p-00-01.pdf %YR 2000 %AB Component-based problem-solving environments (PSEs) provide scientists and engineers with a framework of integrated problem-solving tools and resources that they can easily compose and apply in their particular task domains. Developing effective solution strategies within these environments depends on making good choices about the selection, parameterization, and organization of component tools and resources. 
Because making good choices may require considerable effort and expertise, designing ``intelligent'' components that can make informed recommendations about solution development will play a valuable role in realizing the full potential of PSEs. As part of an overall effort in software component systems and PSEs for scientific computing at Indiana University, the CBMatrix project is developing ``intelligent recommender components'' that use case-based reasoning (CBR) methods to assist in selection, organization, and application of scientific PSE tools and resources. This paper gives an overview of the CBMatrix project, the issues involved, initial results, and the recommender components under development. %KW artificial intelligence %KW case-based reasoning %KW scientific computation %KW recommender systems %TI On Constructing the Right Sort of CBR Implementation %AU David Wilson %AU Arijit Sengupta %AU David Leake %PU Proceedings of the IJCAI-99 Workshop on Automating the Construction of Case Based Reasoners %OR INDAI %LT p-99-08 %AV url http://www.cs.indiana.edu/~leake/p-99-08.pdf %YR 1999 %AB Case-based reasoning implementations as currently constructed tend to fit three general models, characterized by implementation constraints: task-based (task alone), enterprise (integrating databases), and web-based (integrating web representations). These implementations represent the targets for automatic system construction, and it is important to understand the strengths of each, how they are built, and how one may be constructed by transforming another. This paper describes a framework that relates the three types of CBR implementation, discusses their typical strengths and weaknesses, and describes practical methods for automating the construction of new CBR systems by transforming and synthesizing existing resources.
%KW knowledge management %KW case-based reasoning %KW XML %KW database %KW corporate memory %TI Constructing and Transforming CBR Implementations: Techniques for Corporate Memory Management %AU David Wilson %AU Arijit Sengupta %AU David Leake %PU Proceedings of the Workshop on Practical Case-Based Reasoning Strategies for Building and Maintaining Corporate Memories, ICCBR-99 %OR INDAI %LT p-99-07 %AV url http://www.cs.indiana.edu/~leake/p-99-07.pdf %YR 1999 %AB Achieving widespread case-based reasoning support for corporate memories will require the flexibility to integrate implementations with existing organizational resources and infrastructure. Case-based reasoning implementations as currently constructed tend to fall into three categories, characterized by implementation constraints: \textit{task-based} (task constraints alone), \textit{enterprise} (integrating databases), and \textit{web-based} (integrating web representations). These implementation types represent the possible targets in constructing corporate memory systems, and it is important to understand the strengths of each, how they are built, and how one may be constructed by transforming another. This paper describes a framework that relates the three types of CBR implementation, discusses their typical strengths and weaknesses, and describes practical strategies for building corporate CBR memories to meet new requirements by transforming and synthesizing existing resources. %KW knowledge management %KW case-based reasoning %KW XML %KW database %KW corporate memory %TI Combining CBR with Interactive Knowledge Acquisition, Manipulation and Reuse %AU David Leake %AU David Wilson %PU Proceedings of the Third International Conference on Case-Based Reasoning, ICCBR-99, Springer-Verlag, Berlin.
%OR INDAI %LT p-99-06 %AV url http://www.cs.indiana.edu/~leake/p-99-06.pdf %YR 1999 %AB Because of the complexity of aerospace design, intelligent systems to support and amplify the abilities of aerospace designers have the potential for profound impact on the speed and reliability of design generation. This article describes a framework for supporting the interactive capture of design cases and their application to new problems, illustrating the approach with a discussion of its use in a support system for aircraft design. The project integrates case-based reasoning with interactive tools for capturing expert design knowledge through ``concept mapping.'' Concept mapping tools provide crucial functions for interactively generating and examining design cases and navigating their hierarchical structure, while CBR techniques provide capabilities to facilitate retrieval and to aid interactive adaptation of designs. The project aims simultaneously to develop a useful design aid and more generally to develop practical interactive approaches to fundamental issues of case acquisition and representation, context-sensitive retrieval, and case adaptation. %KW case-based reasoning %KW design support %KW mixed-initiative systems %KW interactive systems %TI Integrating Information Resources: A Case Study of Engineering Design Support %AU David Leake %AU Larry Birnbaum %AU Cameron Marlow %AU Hao Yang %PU Proceedings of the Third International Conference on Case-Based Reasoning, ICCBR-99, Springer-Verlag, Berlin. 
%OR INDAI %LT p-99-05 %AV url http://www.cs.indiana.edu/~leake/p-99-05.pdf %YR 1999 %AB The development of successful case-based design aids depends both on the CBR processes themselves and on crucial questions of integrating the CBR system into the larger task context: how to make the CBR component provide information at the right time and in the right form, how to access relevant information from additional information sources to supplement the case library, how to capture information for use downstream, and how to unobtrusively acquire new cases. This paper presents a set of design principles and techniques that integrate methods from CBR and information retrieval to address these questions. The paper illustrates their application through a case study of the Stamping Advisor, a tool to support feasibility analysis for stamped metal automotive parts. %KW knowledge management %KW knowledge acquisition %KW case-based reasoning %KW design support %KW mixed-initiative systems %KW just-in-time retrieval %KW interactive systems %TI When Experience is Wrong: Examining CBR for Changing Tasks and Environments %AU David Leake %AU David Wilson %PU Proceedings of the Third International Conference on Case-Based Reasoning, ICCBR-99, Springer-Verlag, Berlin. %OR INDAI %LT p-99-04 %AV url http://www.cs.indiana.edu/~leake/p-99-04.ps.Z %YR 1999 %AB Case-based problem-solving systems reason and learn from experiences, building up case libraries of problems and solutions to guide future reasoning.
The expected benefits of this learning process depend on two types of regularity: (1) problem-solution regularity, the relationship between problem-to-problem and solution-to-solution similarity measures that assures that solutions to similar prior problems are a useful starting point for solving similar current problems, and (2) problem-distribution regularity, the relationship between old and new problems that assures that the case library will contain cases similar to the new problems it encounters. Unfortunately, these types of regularity are not assured. Even in contexts for which initial regularity is sufficient, problems may arise if a system's users, tasks, or external environment change over time. This paper defines criteria for assessing the two types of regularity, discusses how the definitions may be used to assess the need for case-base maintenance, and suggests maintenance approaches for responding to those needs. In particular, it discusses the role of analysis of performance over time in responding to environmental changes. %KW case-based reasoning %KW case-base maintenance %KW lazy updating %KW diachronic maintenance %TI Selecting Task-Relevant Sources for Just-in-Time Retrieval %AU David Leake %AU Ryan Scherle %AU Jay Budzik %AU Kristian Hammond %PU Proceedings of the AAAI-99 Workshop on Intelligent Information Systems, AAAI Press, Menlo Park, 1999. %OR INDAI %LT p-99-03 %AV url http://www.cs.indiana.edu/~leake/p-99-03.ps.Z %YR 1999 %AB ``Just-in-time'' information systems monitor their users' tasks, anticipate task-based information needs, and proactively provide their users with relevant information. The effectiveness of such systems depends both on their capability to track user tasks and on their ability to retrieve information that satisfies task-based needs. 
The Watson system \cite{budzik-et-al98,budzik-hammond99} provides a framework for monitoring user tasks and identifying relevant content areas, and uses this information to generate focused queries for general-purpose search engines and for specialized search engines integrated into the system. The proliferation of specialized search engines and information repositories on the Web provides a rich source of additional information pre-focused for a wide range of information needs, potentially enabling just-in-time systems to exploit that focus by querying the most relevant sources. However, putting this into practice depends on having general, scalable methods for selecting the best sources to satisfy the user's needs. This paper describes early research on augmenting Watson with a general-purpose capability for automatic information source selection. It presents a source selection method that has been integrated into Watson and discusses general issues and research directions for task-relevant source selection. %KW intelligent information systems %KW just-in-time retrieval %KW information retrieval %KW information integration %TI Task-Based Knowledge Management %AU David Leake %AU Larry Birnbaum %AU Cameron Marlow %AU Hao Yang %PU Proceedings of the AAAI-99 Workshop on Exploring Synergies of Knowledge Management and Case-Based Reasoning. AAAI Press, Menlo Park, 1999. %OR INDAI %LT p-99-02 %AV url http://www.cs.indiana.edu/~leake/p-99-02.ps.Z %YR 1999 %AB Case-based reasoning is receiving much attention as a technology for building knowledge repositories that can be queried for task-relevant information. Taking the CBR problem-solving model seriously, however, suggests the value of a much stronger integration between knowledge management systems and the tasks that they serve.
In this integrated view, knowledge management systems should be designed to do {\it just-in-time retrieval,} anticipating task-based information needs and satisfying them automatically before the user requests information, and should learn unobtrusively by monitoring the user's task performance. Key issues include how to integrate knowledge access into the user's problem-solving process, how to automatically provide the user with task-relevant information from multiple sources, and how to build up knowledge for transmission between task phases and for long-term storage. This paper describes how these issues are addressed in the Stamping Advisor, a system to aid the design of stamped automotive parts. This system automatically presents the designer with needed information in a natural way, uses CBR and task-focused information retrieval to access useful information, and automatically captures relevant information to support downstream task processes and build its memory of cases. %KW knowledge management %KW knowledge acquisition %KW case-based reasoning %KW design support %KW mixed-initiative systems %TI Managing, Mapping, and Manipulating Conceptual Knowledge %AU Alberto Canas %AU David Leake %AU David Wilson %PU Proceedings of the AAAI-99 Workshop on Exploring Synergies of Knowledge Management and Case-Based Reasoning. AAAI Press, Menlo Park, 1999. %OR INDAI %LT p-99-01 %AV url http://www.cs.indiana.edu/~leake/p-99-01.ps.Z %YR 1999 %AB Effective knowledge management maintains the knowledge assets of an organization by identifying and capturing useful information in a usable form, and by supporting refinement and reuse of that information in service of the organization's goals. A particularly important asset is the ``internal'' knowledge embodied in the experiences of task experts that may be lost with shifts in projects and personnel.
Concept Mapping provides a framework for making this internal knowledge explicit in a visual form that can easily be examined and shared. However, it does not address how relevant concept maps can be retrieved or adapted to new problems. CBR is playing an increasing role in knowledge retrieval and reuse for corporate memories, and its capabilities are appealing for augmenting the concept mapping process. This paper describes ongoing research on a combined CBR/CMap framework for managing aerospace design knowledge. Its approach emphasizes interactive capture, access, and application of knowledge representing different experts' perspectives, and unobtrusive learning as knowledge is reused. %KW concept mapping %KW knowledge acquisition %KW case-based reasoning %KW case adaptation %KW design support %KW mixed-initiative systems %TI Categorizing Case-Base Maintenance: Dimensions and Directions %AU David Leake %AU David Wilson %PU Proceedings of EWCBR-98, Springer-Verlag, Berlin, 1998. %OR INDAI %LT p-98-03 %AV url http://www.cs.indiana.edu/~leake/p-98-03.ps.Z %YR 1998 %AB Experience with the growing number of large-scale CBR systems has led to increasing recognition of the importance of case-base maintenance. Multiple researchers have addressed pieces of the case-base maintenance problem, considering such issues as maintaining consistency and controlling case-base growth. However, despite the existence of these cases of case-base maintenance, there is no general framework of dimensions for describing case-base maintenance systems. Such a framework would be useful both to understand the state of the art in case-base maintenance and to suggest new avenues of exploration by identifying points along the dimensions that have not yet been studied. This paper presents a first attempt at identifying the dimensions of case-base maintenance.
It shows that characterizations along such dimensions can suggest avenues for future case-base maintenance research and presents initial steps exploring one of those avenues: identifying patterns of problems that require generalized revisions and addressing them with lazy updating. %KW case-based reasoning %KW case-base maintenance %KW lazy updating %KW case adaptation %TI Integrating CBR Components within a Case-Based Planner %AU David Leake %AU Andrew Kinley %PU Proceedings of the AAAI-98 Workshop on Case-Based Reasoning Integrations, AAAI Press, San Mateo, 1998. %OR INDAI %LT p-98-02 %AV url http://www.cs.indiana.edu/~leake/p-98-02.ps.Z %YR 1998 %AB Multimodal reasoning systems can improve the effectiveness of reasoning by integrating multiple reasoning methods, each selectively applied to the tasks for which it is best-suited. One integration approach is to bring CBR into other systems, by developing case-based {\it intelligent components} \cite{riesbeck96} that collaborate with other reasoning systems, monitoring their successes and failures and suggesting solutions when prior experiences are relevant. Another approach is to bring other reasoning processes into a CBR system's own architecture, to facilitate subprocesses of CBR such as case adaptation and similarity assessment. This paper describes a project combining both approaches: It discusses motivations and methods for a case-based components approach to integrating multiple reasoning modes, styles, and levels within a case-based reasoning system. The fundamental principle is for the system to use case-based components to learn by monitoring, capturing, and exploiting traces of multiple types of prior reasoning within the CBR system. The paper considers the benefits of this approach for improving CBR and its potential applicability to integrations in other contexts. 
%KW machine learning %KW introspective reasoning %KW metacognition %KW case-based reasoning %KW case adaptation %KW memory search %KW similarity %KW multistrategy learning %KW multimodal reasoning %TI Combining Reasoning Modes, Levels, and Styles through Internal CBR %AU David Leake %AU Andrew Kinley %PU Proceedings of the 1998 AAAI Spring Symposium on Multimodal Reasoning, AAAI Press, San Mateo, 1998. %OR INDAI %LT p-98-01 %AV url http://www.cs.indiana.edu/~leake/p-98-01.ps.Z %YR 1998 %AB This paper discusses motivations and proposes methods for integrating multiple reasoning modes, styles, and levels within a case-based reasoning system. It describes a CBR system in which rule-based internal processing is augmented with two styles of case-based reasoning, derivational and transformational CBR, and which reasons at both the domain-level and the meta-level, in order to respond to the requirements of different processing tasks. The fundamental principle is for the system to learn by monitoring, capturing, and exploiting multiple types of prior system reasoning. The paper considers the ramifications of this approach and its potential as a strategy for multimodal reasoning in other contexts. %KW machine learning %KW introspective reasoning %KW metacognition %KW case-based reasoning %KW case adaptation %KW memory search %KW similarity %KW multistrategy learning %KW multimodal reasoning %TI Learning to Integrate Multiple Knowledge Sources for Case-Based Reasoning %AU David Leake %AU Andrew Kinley %AU David Wilson %PU Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence, Morgan Kaufmann, San Francisco, 1997. %OR INDAI %LT p-97-03 %AV url http://www.cs.indiana.edu/~leake/p-97-03.ps.Z %YR 1997 %AB The case-based reasoning process depends on multiple overlapping knowledge sources, each of which provides an opportunity for learning. 
Exploiting these opportunities requires not only determining the learning mechanisms to use for each individual knowledge source, but also how the different learning mechanisms interact and their combined utility. This paper presents a case study examining the relative contributions and costs involved in learning processes for three different knowledge sources---cases, case adaptation knowledge, and similarity information---in a case-based planner. It demonstrates the importance of interactions between different learning processes and identifies a promising method for integrating multiple learning methods to improve case-based reasoning. %KW machine learning %KW introspective reasoning %KW metacognition %KW case-based reasoning %KW case adaptation %KW memory search %KW similarity %KW multistrategy learning %TI Case-Based Similarity Assessment: Estimating Adaptability from Experience %AU David Leake %AU Andrew Kinley %AU David Wilson %PU Proceedings of the Fourteenth National Conference on Artificial Intelligence, AAAI Press, Menlo Park, CA, 1997. %OR INDAI %LT p-97-02 %AV url http://www.cs.indiana.edu/~leake/p-97-02.ps.Z %YR 1997 %AB Case-based problem-solving systems rely on {\it similarity assessment} to select stored cases whose solutions are easily {\it adaptable} to fit current problems. However, widely-used similarity assessment strategies, such as evaluation of semantic similarity, can be poor predictors of adaptability. As a result, systems may select cases that are difficult or impossible for them to adapt, even when easily adaptable cases are available in memory. This paper presents a new similarity assessment approach which couples similarity judgments directly to a case library containing the system's adaptation knowledge. It examines this approach in the context of a case-based planning system that learns both new plans and new adaptations. 
Empirical tests of alternative similarity assessment strategies show that this approach enables better case selection and increases the benefits accrued from learned adaptations. %KW machine learning %KW introspective reasoning %KW metacognition %KW case-based reasoning %KW case adaptation %KW memory search %KW similarity %TI A Case Study of Case-Based CBR %AU David Leake %AU Andrew Kinley %AU David Wilson %PU Proceedings of the Second International Conference on Case-Based Reasoning, Providence, RI, 1997. %OR INDAI %LT p-97-01 %AV url http://www.cs.indiana.edu/~leake/p-97-01.ps.Z %YR 1997 %AB Case-based reasoning depends on multiple knowledge sources beyond the case library, including knowledge about case adaptation and criteria for similarity assessment. Because hand coding this knowledge accounts for a large part of the knowledge acquisition burden for developing CBR systems, it is appealing to acquire it by learning, and CBR is a promising learning method to apply. This observation suggests developing {\it case-based} CBR systems, CBR systems whose components themselves use CBR. However, despite early interest in case-based approaches to CBR, this method has received comparatively little attention. Open questions include how case-based components of a CBR system should be designed, the amount of knowledge acquisition effort they require, and their effectiveness. This paper investigates these questions through a case study of issues addressed, methods used, and results achieved by a case-based planning system that uses CBR to guide its case adaptation and similarity assessment. The paper discusses design considerations and presents empirical results that support the usefulness of case-based CBR, that point to potential problems and tradeoffs, and that directly demonstrate the overlapping roles of different CBR knowledge sources. The paper closes with general lessons about case-based CBR and areas for future research. 
%KW machine learning %KW introspective reasoning %KW metacognition %KW case-based reasoning %KW case adaptation %KW similarity %KW memory search %TI Acquiring Case Adaptation Knowledge: A Hybrid Approach %AU David Leake %AU Andrew Kinley %AU David Wilson %PU Proceedings of the Thirteenth National Conference on Artificial Intelligence, AAAI Press, Menlo Park, CA, 1996. %OR INDAI %LT p-96-04 %AV url http://www.cs.indiana.edu/~leake/p-96-04.ps.Z %YR 1996 %AB The ability of case-based reasoning (CBR) systems to apply cases to novel situations depends on their case adaptation knowledge. However, endowing CBR systems with adequate adaptation knowledge has proven to be a very difficult task. This paper describes a hybrid method for performing case adaptation, using a combination of rule-based and case-based reasoning. It shows how this approach provides a framework for acquiring flexible adaptation knowledge from experiences with autonomous adaptation and suggests its potential as a basis for acquisition of adaptation knowledge from interactive user guidance. It also presents initial experimental results examining the benefits of the approach and comparing the relative contributions of case learning and adaptation learning to reasoning performance. %KW machine learning %KW introspective reasoning %KW metacognition %KW cognitive modeling %KW case-based reasoning %KW case adaptation %KW memory search %TI Linking Adaptation and Similarity Learning %AU David Leake %AU Andrew Kinley %AU David Wilson %PU Proceedings of the Eighteenth Annual Conference of the Cognitive Science Society, Lawrence Erlbaum, 1996. %OR INDAI %LT p-96-03 %AV url http://www.cs.indiana.edu/~leake/p-96-03.ps.Z %YR 1996 %AB The case-based reasoning (CBR) process solves problems by retrieving prior solutions and adapting them to fit new circumstances. Many studies examine how case-based reasoners learn by storing new cases and refining the indices used to retrieve cases. 
However, little attention has been given to learning to refine the process for applying retrieved cases. This paper describes research investigating how a case-based reasoner can learn strategies for adapting prior cases to fit new situations, and how its similarity criteria may be refined pragmatically to reflect new capabilities for case adaptation. We begin by highlighting psychological research on the development of similarity criteria and summarizing our model of case adaptation learning. We then discuss initial steps towards pragmatically refining similarity criteria based on experiences with case adaptation. %KW machine learning %KW similarity %KW introspective reasoning %KW metacognition %KW cognitive modeling %KW case-based reasoning %KW case adaptation %KW memory search %TI Multistrategy Learning to Apply Cases for Case-Based Reasoning %AU David Leake %AU Andrew Kinley %AU David Wilson %PU Proceedings of the Third International Workshop on Multistrategy Learning, AAAI Press, Menlo Park, 1996. %OR INDAI %LT p-96-02 %AV url http://www.cs.indiana.edu/~leake/p-96-02.ps.Z %YR 1996 %AB Investigations of learning in case-based reasoning (CBR) have traditionally focused on learning two types of knowledge: new cases and new indexing criteria for case retrieval. However, there is increasing recognition that other types of knowledge also play crucial roles in the case-based reasoning process. The effectiveness of a CBR system depends not only on having and retrieving relevant cases, but also on selecting which retrieved cases to apply and determining how to adapt them to fit new situations. Consequently, case-based reasoning can benefit from using multiple learning strategies to acquire, in addition to new cases and indices, new case adaptation strategies and similarity criteria. 
This paper describes ongoing research that studies how multiple types of learning can improve the case-based reasoning process and examines their interrelationship in contributing to the overall performance of a CBR system. %KW machine learning %KW multistrategy learning %KW introspective reasoning %KW metacognition %KW cognitive modeling %KW case-based reasoning %KW case adaptation %KW memory search %TI CBR in Context: The Present and Future %AU David Leake %PU In Leake, D., ed. Case-Based Reasoning: Experiences, Lessons, and Future Directions, AAAI Press. %OR INDAI %LT p-96-01.ps.Z %AV url http://www.cs.indiana.edu/~leake/p-96-01.ps.Z %YR 1996 %KW case-based reasoning %KW case adaptation %KW memory search %KW case retrieval %KW indexing %KW similarity %KW adaptability %KW analogical reasoning %KW machine learning %TI Case-Based Reasoning: Experiences, Lessons, and Future Directions %AU David Leake %PU AAAI Press. %OR INDAI %LT a-96-book.html %AV url http://www.cs.indiana.edu/~leake/a-96-book.html %YR 1996 %KW case-based reasoning %KW analogical reasoning %KW machine learning %KW case adaptation %KW memory search %KW case retrieval %KW indexing %KW similarity %KW adaptability %KW expert systems %KW planning %KW design %KW help desks %TI Introspective Learning for Case-Based Reasoning %AU Susan Fox %PU Ph.D. dissertation, Indiana University, 1995.
%OR INDAI %LT p-95-14 %AV url http://www.cs.indiana.edu/~leake/p-95-14.ps.Z %YR 1995 %KW machine learning %KW introspective reasoning %KW metacognition %KW model-based reasoning %KW case-based reasoning %KW case retrieval %KW indexing %KW case adaptation %KW similarity %KW adaptability %KW failure-driven learning %KW expectation failures %TI Experience, Introspection, and Expertise: Learning to Refine the Case-Based Reasoning Process %AU David Leake %PU Journal of Experimental and Theoretical Artificial Intelligence %OR INDAI %LT p-95-13 %AV url http://www.cs.indiana.edu/~leake/p-95-13.ps.Z %YR 1995 %AB The case-based reasoning paradigm models how reuse of stored experiences contributes to expertise. In a case-based problem-solver, new problems are solved by retrieving stored information about previous problem-solving episodes and adapting it to suggest solutions to the new problems. The results are then themselves added to the reasoner's memory in new cases for future use. Despite this emphasis on learning from experience, however, experience generally plays a minimal role in models of how the case-based reasoning process is itself performed. Case-based reasoning systems generally do not refine the methods they use to retrieve or adapt prior cases, instead relying on static pre-defined procedures. The thesis of this article is that learning from experience can play a key role in building expertise by refining the case-based reasoning process itself. To support that view and to illustrate the practicality of learning to refine case-based reasoning, this article presents ongoing research into using introspective reasoning about the case-based reasoning process to increase expertise at retrieving and adapting stored cases. 
%KW machine learning %KW introspective reasoning %KW metacognition %KW cognitive modeling %KW case-based reasoning %KW case adaptation %KW memory search %KW model-based reasoning %KW case retrieval %KW indexing %KW similarity %KW adaptability %KW failure-driven learning %KW expectation failures %TI Adaptive Similarity Assessment for Case-Based Explanation %AU David Leake %PU International Journal of Expert Systems Research and Applications, 8(2):165--194, 1995. %OR INDAI %LT p-95-12 %AV url http://www.cs.indiana.edu/~leake/p-95-12.ps.Z %YR 1995 %AB Guiding the generation of abductive explanations is a difficult problem. Applying case-based reasoning to abductive explanation generation---generating new explanations by retrieving and adapting explanations for prior episodes---offers the benefit of re-using successful explanatory reasoning but raises new issues concerning how to perform similarity assessment to judge the relevance of prior explanations to new situations. Similarity assessment affects two points in the case-based explanation process: deciding which explanations to retrieve and evaluating the retrieved candidates. We address the problem of identifying similar explanations to retrieve by basing that similarity assessment on a categorization of anomaly types. We show that the problem of evaluating retrieved candidate explanations is often impeded by incomplete information about the situation to be explained, and address that problem with a novel similarity assessment method which we call constructive similarity assessment. Constructive similarity assessment contrasts with traditional ``feature-mapping'' similarity assessment methods by using the contents of memory to hypothesize important features in the new situation, and by using a pragmatic criterion---the system's ability to adapt features of the old case into features that apply in the new circumstances---as the basis for comparing features. 
Thus constructive similarity assessment does not merely compare new cases to old; instead, based on adaptation of prior cases in memory, it addresses the problem of incomplete input cases by building up and reasoning about augmented descriptions of those cases. %KW machine learning %KW cognitive modeling %KW case-based reasoning %KW case adaptation %KW memory search %KW explanation %KW abduction %KW similarity %TI Learning to Improve Case Adaptation by Introspective Reasoning and CBR %AU David Leake %AU Andrew Kinley %AU David Wilson %PU Proceedings of the First International Conference on Case-Based Reasoning, Sesimbra, Portugal, 1995. %OR INDAI %LT p-95-11 %AV url http://www.cs.indiana.edu/~leake/p-95-11.ps.Z %YR 1995 %AB In current CBR systems, case adaptation is usually performed by rule-based methods that use task-specific rules hand-coded by the system developer. The ability to define those rules depends on knowledge of the task and domain that may not be available a priori, presenting a serious impediment to endowing CBR systems with the needed adaptation knowledge. This paper describes ongoing research on a method to address this problem by acquiring adaptation knowledge from experience. The method uses reasoning from scratch, based on introspective reasoning about the requirements for successful adaptation, to build up a library of adaptation cases that are stored for future re-use. We describe the tenets of the approach and the types of knowledge it requires. We sketch initial computer implementation, lessons learned, and open questions for further study. %KW machine learning %KW introspective reasoning %KW metacognition %KW cognitive modeling %KW case-based reasoning %KW case adaptation %KW memory search %TI Learning to Refine Indexing by Introspective Reasoning %AU Susan Fox %AU David Leake %PU Proceedings of the First International Conference on Case-Based Reasoning, Sesimbra, Portugal, 1995. 
%OR INDAI %LT p-95-10 %AV url http://www.cs.indiana.edu/~leake/p-95-10.ps.Z %YR 1995 %AB A significant problem for case-based reasoning (CBR) systems is determining the features to use in judging case similarity for retrieval. We describe research that addresses the feature selection problem by using introspective reasoning to learn new features for indexing. Our method augments the CBR system with an introspective reasoning component which monitors system performance to detect poor retrievals, identifies features which would lead to retrieval of more adaptable cases, and refines the indexing criteria to include the needed features to avoid future failures. We explore the benefit of introspective reasoning by performing empirical tests on the implemented system. These tests examine the effect of introspective index refinement, and the effects of problem order on case and index learning, and show that introspective learning of new index features improves performance across the different problem orders. %KW machine learning %KW introspective reasoning %KW metacognition %KW model-based reasoning %KW case-based reasoning %KW case retrieval %KW indexing %KW case adaptation %KW similarity %KW adaptability %KW failure-driven learning %KW expectation failures %TI Combining Rules and Cases to Learn Case Adaptation %AU David Leake %PU Proceedings of the Seventeenth Annual Conference of the Cognitive Science Society, Pittsburgh, PA, 1995. %OR INDAI %LT p-95-09 %AV url http://www.cs.indiana.edu/~leake/p-95-09.ps.Z %YR 1995 %AB Computer models of case-based reasoning (CBR) generally guide case adaptation using a fixed set of adaptation rules. A difficult practical problem is how to identify the knowledge required to guide adaptation for particular tasks. Likewise, an open issue for CBR as a cognitive model is how case adaptation knowledge is learned. We describe a new approach to acquiring case adaptation knowledge. 
In this approach, adaptation problems are initially solved by reasoning from scratch, using abstract rules about structural transformations and general memory search heuristics. Traces of the processing used for successful rule-based adaptation are stored as cases to enable future adaptation to be done by case-based reasoning. When similar adaptation problems are encountered in the future, these adaptation cases provide task- and domain-specific guidance for the case adaptation process. We present the tenets of the approach concerning the relationship between memory search and case adaptation, the memory search process, and the storage and reuse of cases representing adaptation episodes. These points are discussed in the context of ongoing research on DIAL, a computer model that learns case adaptation knowledge for case-based disaster response planning. %KW machine learning %KW introspective reasoning %KW metacognition %KW cognitive modeling %KW case-based reasoning %KW case adaptation %KW memory search %TI Using Introspective Reasoning to Refine Indexing %AU Susan Fox %AU David Leake %PU Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, Montreal, CA, 1995. %OR INDAI %LT p-95-08 %AV url http://www.cs.indiana.edu/~leake/p-95-08.ps.Z %YR 1995 %AB Introspective reasoning about a system's own reasoning processes can form the basis for learning to refine those reasoning processes. The ROBBIE system uses introspective reasoning to monitor the retrieval process of a case-based planner to detect retrieval of inappropriate cases. When retrieval problems are detected, the source of the problems is explained and the explanations are used to determine new indices to use during future case retrieval. 
The goal of ROBBIE's learning is to increase its ability to focus retrieval on relevant cases, with the aim of simultaneously decreasing the number of candidates to consider and increasing the likelihood that the system will be able to successfully adapt the retrieved cases to fit the current situation. We evaluate the benefits of the approach in light of empirical results examining the effects of index learning in the ROBBIE system. %KW machine learning %KW introspective reasoning %KW metacognition %KW model-based reasoning %KW case-based reasoning %KW case retrieval %KW indexing %KW case adaptation %KW failure-driven learning %KW expectation failures %TI Abduction, Experience, and Goals: A Model of Everyday Abductive Explanation %AU David Leake %PU The Journal of Experimental and Theoretical Artificial Intelligence. %OR INDAI %LT p-95-07 %AV url http://www.cs.indiana.edu/~leake/p-95-07.ps.Z %YR 1995 %AB Many abductive understanding systems generate explanations by a backwards chaining process that is neutral both to the explainer's previous experience in similar situations and to why the explainer is attempting to explain. This article examines the relationship of such models to an approach that uses case-based reasoning to generate explanations. In this case-based model, the generation of abductive explanations is focused by prior experience and by goal-based criteria reflecting current information needs. The article analyzes the commitments and contributions of this case-based model as applied to the task of building good explanations of anomalous events in everyday understanding. The article identifies six central issues for abductive explanation, compares how these issues are addressed in traditional and case-based explanation models, and discusses benefits of the case-based approach for facilitating generation of plausible and useful explanations in domains that are complex and imperfectly understood. 
%KW machine learning %KW cognitive modeling %KW case-based reasoning %KW explanation %KW abduction %KW diagnosis %KW anomaly detection %KW plausibility evaluation %KW story understanding %KW operationality %KW goal-based explanation %KW failure-driven learning %TI Towards Goal-Driven Integration of Explanation and Action %AU David Leake %PU Goal-Driven Learning, eds. A. Ram and D. Leake, MIT Press/Bradford Books, Cambridge, MA, 1995. %OR INDAI %LT p-95-06 %AV url http://www.cs.indiana.edu/~leake/p-95-06.ps.Z %YR 1995 %AB Explanation-based methods have been shown to play a valuable role in focusing learning. However, the value of their results depends not just on the methods themselves but on strategies for applying them. Using explanation-based methods effectively depends on developing methods for answering three fundamental questions of goal-driven learning---when to learn, what to learn, and how to focus learning effort---as they apply to explanation. This chapter discusses the questions of how a goal-driven explanation system can decide when explanations are needed, can characterize its information needs, and can use awareness of its needs to focus the search for explanations. The first part of the chapter provides an overview of the theory of focusing goal-driven explanation implemented in the story understanding program ACCEPTER. The second part takes a wider view, considering how the standard model of explanation as an isolated reasoning process can be extended into a model that integrates explanation with other types of goal-driven activity and information search. 
%KW machine learning %KW introspective reasoning %KW metacognition %KW cognitive modeling %KW diagnosis %KW explanation %KW active learning %KW goal-driven learning %KW goal-based explanation %KW failure-driven learning %KW expectation failures %TI Becoming an expert case-based reasoner: Learning to adapt prior cases %AU David Leake %PU Invited paper, Proceedings of the Eighth Annual Florida Artificial Intelligence Research Symposium, Melbourne, FL, pp. 218-222, 1995 %OR INDAI %LT p-95-05 %AV url http://www.cs.indiana.edu/~leake/p-95-05.ps.Z %YR 1995 %AB Experience plays an important role in the development of human expertise. One computational model of how experience affects expertise is provided by research on case-based reasoning, which examines how stored cases encapsulating traces of specific prior problem-solving episodes can be retrieved and re-applied to facilitate new problem-solving. Much progress has been made in methods for accessing relevant cases, and case-based reasoning is receiving wide acceptance both as a technology for developing intelligent systems and as a cognitive model of a human reasoning process. However, one important aspect of case-based reasoning remains poorly understood: the process by which retrieved cases are adapted to fit new situations. The difficulty of encoding effective adaptation rules by hand is widely recognized as a serious impediment to the development of fully autonomous case-based reasoning systems. Consequently, an important question is how case-based reasoning systems might learn to improve their expertise at case adaptation. We present a framework for acquiring this expertise by using a combination of general adaptation rules, introspective reasoning, and case-based reasoning about the case adaptation task itself. 
%KW machine learning %KW introspective reasoning %KW metacognition %KW cognitive modeling %KW case-based reasoning %KW expertise %KW case adaptation %KW memory search %TI An Architecture for Goal-Driven Explanation %AU Raja Sooriamurthi %AU David Leake %PU Proceedings of the Eighth Annual Florida Artificial Intelligence Research Symposium, Melbourne, FL, pp. 218-222, 1995 %OR INDAI %LT p-95-04 %AV url http://www.cs.indiana.edu/~leake/p-95-04.ps.Z %YR 1995 %AB In complex and changing environments explanation must be a dynamic and goal-driven process. This paper discusses an evolving system implementing a novel model of explanation generation --- Goal-Driven Interactive Explanation --- that models explanation as a goal-driven, multi-strategy, situated process inter-weaving reasoning with action. We describe a preliminary implementation of this model in GOBIE, a system that generates explanations for its internal use to support plan generation and execution. %KW machine learning %KW introspective reasoning %KW metacognition %KW cognitive modeling %KW case-based reasoning %KW explanation %KW active learning %KW goal-driven learning %KW goal-based explanation %TI Combining Case-Based Planning and Introspective Reasoning %AU Susan Fox %AU David Leake %PU Proceedings of the Sixth Midwest Artificial Intelligence and Cognitive Science Conference, Carbondale, IL, pp. 32-36. %OR INDAI %LT p-95-03 %AV url http://www.cs.indiana.edu/~leake/p-95-03.ps.Z %AB There is much current interest in introspective reasoning, reasoning about reasoning processes themselves. One application of introspective reasoning is to detect flaws in a system's own reasoning, and to refine its reasoning methods to correct those flaws. We propose a framework for performing such introspective refinement, and describe its implementation in a system which combines introspective learning with case-based planning. 
We describe empirical tests performed to evaluate the effect of introspective reasoning for this system. %YR 1995 %KW machine learning %KW introspective reasoning %KW metacognition %KW case-based reasoning %KW case retrieval %KW indexing %KW case adaptation %KW failure-driven learning %KW expectation failures %TI Modeling Case-based Planning for Repairing Reasoning Failures %AU Susan Fox %AU David Leake %PU Proceedings of the 1995 AAAI Spring Symposium on Representing Mental States and Mechanisms. Pp. 31-38. %OR INDAI %LT p-95-02 %AV url http://www.cs.indiana.edu/~leake/p-95-02.ps.Z %AB One application of models of reasoning behavior is to allow a reasoner to introspectively detect and repair failures of its own reasoning process. We address the issues of the transferability of such models versus the specificity of the knowledge in them, the kinds of knowledge needed for self-modeling and how that knowledge is structured, and the evaluation of introspective reasoning systems. We present the ROBBIE system which implements a model of its planning processes to improve the planner in response to reasoning failures. We show how ROBBIE's hierarchical model balances model generality with access to implementation-specific details, and discuss the qualitative and quantitative measures we have used for evaluating its introspective component. %YR 1995 %KW machine learning %KW introspective reasoning %KW introspective learning %KW metacognition %KW evaluation %KW model generality %KW model-based reasoning %KW case-based reasoning %KW case retrieval %KW case adaptation %KW memory search %KW failure-driven learning %KW expectation failures %TI Representing Self-Knowledge for Introspection about Memory Search %AU David Leake %PU Proceedings of the 1995 AAAI Spring Symposium on Representing Mental States and Mechanisms. Pp. 84-88. Invited position paper. 
%OR INDAI %LT p-95-01 %AV url http://www.cs.indiana.edu/~leake/p-95-01.ps.Z %AB This position paper sketches a framework for modeling introspective reasoning and discusses the relevance of that framework for modeling introspective reasoning about memory search. It argues that effective and flexible memory processing in rich memories should be built on five types of explicitly represented self-knowledge: knowledge about information needs, relationships between different types of information, expectations for the actual behavior of the information search process, desires for its ideal behavior, and representations of how those expectations and desires relate to its actual performance. This approach to modeling memory search is both an illustration of general principles for modeling introspective reasoning and a step towards addressing the problem of how a reasoner---human or machine---can acquire knowledge about the properties of its own knowledge base. %YR 1995 %KW machine learning %KW introspective reasoning %KW metacognition %KW cognitive modeling %KW memory search %KW failure-driven learning %KW knowledge goals %TI Introspective Reasoning in a Case-based Planner %AU Susan Fox %AU David Leake %PU Proceedings of the Twelfth National Conference on Artificial Intelligence. Student research abstract. %OR INDAI %LT p-94-06 %AV url http://www.cs.indiana.edu/~leake/p-94-06.ps.Z %YR 1994 %KW machine learning %KW introspective reasoning %KW metacognition %KW cognitive modeling %KW case-based reasoning %KW case retrieval %KW case adaptation %KW failure-driven learning %TI Towards Situated Explanation %AU Raja Sooriamurthi %AU David Leake %PU Proceedings of the Twelfth National Conference on Artificial Intelligence. Student research abstract. 
%OR INDAI %LT p-94-05 %AV url http://www.cs.indiana.edu/~leake/p-94-05.ps.Z %YR 1994 %KW machine learning %KW introspective reasoning %KW metacognition %KW cognitive modeling %KW case-based reasoning %KW explanation %KW active learning %KW goal-driven learning %KW goal-based explanation %TI Using Introspective Reasoning to Guide Index Refinement in Case-Based Reasoning %AU Susan Fox %AU David Leake %PU Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society, Lawrence Erlbaum Associates, 1994. %OR INDAI %LT p-94-04 %AV url http://www.cs.indiana.edu/~leake/p-94-04.ps.Z %YR 1994 %AB Case-based reasoning research on indexing and retrieval focuses primarily on developing specific retrieval criteria, rather than on developing mechanisms by which such criteria can be learned as needed. This paper presents a framework for learning to refine indexing criteria by introspective reasoning. In our approach, a self-model of desired system performance is used to determine when and how to refine retrieval criteria. We describe the advantages of this approach for focusing learning on useful information even in the absence of explicit processing failures, and support its benefits with experimental results on how an implementation of the model affects performance of a case-based planning system. %KW machine learning %KW introspective reasoning %KW metacognition %KW cognitive modeling %KW memory search %KW case-based reasoning %KW case retrieval %KW failure-driven learning %KW model-based reasoning %KW expectation failures %TI Towards A Computer Model of Memory Search Strategy Learning %AU David Leake %PU Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society, Lawrence Erlbaum Associates, 1994. %OR INDAI %LT p-94-03 %AV url http://www.cs.indiana.edu/~leake/p-94-03.ps.Z %YR 1994 %AB Much recent research on modeling memory processes has focused on identifying useful indices and retrieval strategies to support particular memory tasks. 
Another important question concerning memory processes, however, is how retrieval criteria are learned. This paper examines the issues involved in modeling the learning of memory search strategies. It discusses the general requirements for appropriate strategy learning and presents a model of memory search strategy learning applied to the problem of retrieving relevant information for adapting cases in case-based reasoning. It discusses an implementation of that model, and, based on the lessons learned from that implementation, points towards issues and directions in refining the model. %KW machine learning %KW introspective reasoning %KW metacognition %KW cognitive modeling %KW memory search %KW learning goals %KW failure-driven learning %KW case-based reasoning %KW case adaptation %KW knowledge planning %TI A Framework for Goal-Driven Learning %AU Ashwin Ram %AU David Leake %PU Proceedings of the 1994 AAAI Spring Symposium on Goal-Driven Learning, pp. 1-11 %OR INDAI %LT p-94-02 %AV url http://www.cs.indiana.edu/~leake/p-94-02.ps.Z %YR 1994 %KW machine learning %KW introspective reasoning %KW metacognition %KW cognitive modeling %KW active learning %KW multistrategy learning %KW learning goals %KW utility of learning %KW goal-driven learning %KW failure-driven learning %KW knowledge planning %TI Issues in Goal-Driven Explanation %AU David Leake %PU Proceedings of the 1994 AAAI Spring Symposium on Goal-Driven Learning, pp. 72-79. %OR INDAI %LT p-94-01 %AV url http://www.cs.indiana.edu/~leake/p-94-01.ps.Z %YR 1994 %AB When a reasoner explains surprising events for its internal use, a key motivation for explaining is to perform learning that will facilitate the achievement of its goals. Human explainers use a range of strategies to build explanations, including both internal reasoning and external information search, and goal-based considerations have a profound effect on their choices of when and how to pursue explanations. 
However, standard AI models of explanation rely on goal-neutral use of a single fixed strategy---generally backwards chaining---to build their explanations. This paper argues that explanation should be modeled as a goal-driven learning process for gathering and transforming information, and discusses the issues involved in developing an active multi-strategy process for goal-driven explanation. %KW introspective reasoning %KW metacognition %KW cognitive modeling %KW active learning %KW multistrategy learning %KW learning goals %KW utility of learning %KW goal-driven learning %KW explanation %KW knowledge planning %KW goal-based explanation %TI Goal-Driven Learning %AU Ashwin Ram %AU David Leake %PU MIT Press/Bradford Books, 1995. %OR INDAI %AV url http://www.cc.gatech.edu/cogsci/gdl.html %YR 1995 %KW machine learning %KW introspective reasoning %KW metacognition %KW cognitive modeling %KW active learning %KW multistrategy learning %KW learning goals %KW utility of learning %KW goal-driven learning %KW knowledge planning %TI Learning Adaptation Strategies by Introspective Reasoning about Memory Search %AU David B. Leake %PU Proceedings of the AAAI-93 Workshop on Case-Based Reasoning, AAAI Press, Menlo Park, CA, pp. 57-63, 1993. %OR INDAI %LT p-93-03 %AV url http://www.cs.indiana.edu/~leake/p-93-03.ps.Z %YR 1993 %AB In case-based reasoning systems, the case adaptation process is traditionally controlled by static libraries of hand-coded adaptation rules. This paper proposes a method for learning adaptation knowledge in the form of _adaptation strategies_ of the type developed and hand-coded by Kass [90]. Adaptation strategies differ from standard adaptation rules in that they encode general memory search procedures for finding the information needed during case adaptation; this paper focuses on the issues involved in learning memory search procedures to form the basis of new adaptation strategies. 
It proposes a method that starts with a small library of abstract adaptation rules and uses introspective reasoning about the system's memory organization to generate the memory search plans needed to apply those rules. The search plans are then packaged with the original abstract rules to form new adaptation strategies for future use. This process allows a CBR system not only to learn about its domain, by storing the results of case adaptation, but also to learn how to apply the cases in its memory more effectively. %KW machine learning %KW introspective reasoning %KW metacognition %KW cognitive modeling %KW memory search %KW learning goals %KW failure-driven learning %KW case-based reasoning %KW case adaptation %KW knowledge planning %TI Goal-Driven Learning: Fundamental Issues (A Symposium Report) %AU David Leake %AU Ashwin Ram %PU AI Magazine, 14(4):67-72, 1993 %OR INDAI %LT p-93-02 %AV url http://www.cs.indiana.edu/~leake/p-93-02.ps.Z %YR 1993 %AB In Artificial Intelligence, Psychology, and Education, a growing body of research supports the view that learning is a goal-directed process. Psychological experiments show that people with different goals process information differently; studies in education show that goals have strong effects on what students learn; and functional arguments from machine learning support the necessity of goal-based focusing of learner effort. At the Fourteenth Annual Conference of the Cognitive Science Society, a symposium brought together researchers in AI, psychology, and education to discuss goal-driven learning. This article presents the fundamental points illuminated by the symposium, placing them in the context of open questions and current research directions in goal-driven learning. 
%KW machine learning %KW introspective reasoning %KW metacognition %KW cognitive modeling %KW active learning %KW multistrategy learning %KW learning goals %KW utility of learning %KW goal-driven learning %KW knowledge planning %TI Focusing Construction and Selection of Abductive Hypotheses %AU David Leake %PU Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence, Morgan Kaufmann, 1993, pp. 24-29. %OR INDAI %LT p-93-01 %AV url http://www.cs.indiana.edu/~leake/p-93-01.ps.Z %YR 1993 %AB Many abductive understanding systems explain novel situations by a chaining process that is neutral to explainer needs beyond generating some plausible explanation for the event being explained. This paper examines the relationship of standard models of abductive understanding to the case-based explanation model. In case-based explanation, construction and selection of abductive hypotheses are focused by specific explanations of prior episodes and by goal-based criteria reflecting current information needs. The case-based method is inspired by observations of human explanation of anomalous events during everyday understanding, and this paper focuses on the method's contributions to the problems of building good explanations in everyday domains. We identify five central issues, compare how those issues are addressed in traditional and case-based explanation models, and discuss motivations for using the case-based approach to facilitate generation of plausible and useful explanations in domains that are complex and imperfectly understood. %KW machine learning %KW cognitive modeling %KW case-based reasoning %KW explanation %KW abduction %KW anomaly detection %KW story understanding %KW plausibility evaluation %KW operationality %KW goal-based explanation %TI Evaluating Explanations: A Content Theory %AU David Leake %PU Lawrence Erlbaum Associates, Hillsdale, NJ, 1992. ISBN 0-8058-1064-1. 
%OR INDAI %LT a-92-book %YR 1992 %AV url http://www.cs.indiana.edu/~leake/a-92-book.txt %AB Psychology and philosophy have long studied the nature and role of explanation. More recently, artificial intelligence research has developed promising theories of how explanation facilitates learning and generalization. By using explanations to guide learning, explanation-based methods allow reliable learning in complex situations.

This volume addresses fundamental issues in generating and judging explanations: When to explain, what constitutes an explanation, how to build explanations, and how to evaluate candidate explanations. It examines the problem of everyday explanation of anomalous events, and argues that context---involving explainer goals, beliefs, and experience---is crucial to generating and judging those explanations. The theory developed is not only a theory of the _process_ of explanation, but also of the _content_ of the knowledge required to detect anomalies and guide search for explanations.

The book presents models of pattern-based anomaly detection as a means to automatically generate appropriate target concepts for explanation; of how the search for explanations of anomalies can be focused by case-based reasoning; and of goal-based evaluation of candidate explanations. It describes the implementation of these theories in ACCEPTER, a computer system that understands stories, detects anomalous events, retrieves relevant explanations from memory, and evaluates candidate explanations in light of overarching goals. %KW machine learning %KW cognitive modeling %KW case-based reasoning %KW case retrieval %KW explanation %KW abduction %KW anomaly detection %KW comprehension monitoring %KW story understanding %KW plausibility evaluation %KW operationality %KW belief maintenance %KW goal-based explanation %TI Using Goals and Experience to Guide Abduction %AU David Leake %PU Indiana University Computer Science Department Technical Report number 359, July 1992. %OR INDAI %LT p-92-02 %AV url http://www.cs.indiana.edu/~leake/p-92-02.ps.Z %YR 1992 %AB Standard methods for abductive understanding are neutral to prior experience and current goals. Candidate explanations are built from scratch by backwards chaining, without considering how similar situations were previously explained, and selection of the candidate to accept is based on its likelihood, without considering the information needs beyond routine understanding. Problems arise when applying these methods to everyday understanding: The vast range of possible explanations makes it difficult to control the cost of explanation construction and to assure that the explanations generated will actually be useful.

We argue that these problems can be overcome by using goals and experience to guide both explanation generation and evaluation. Our work is within the framework of case-based explanation, which builds explanations by retrieving and adapting prior explanations stored in memory. We substantiate our model by describing mechanisms that enable it to effectively generate good explanations. First, we demonstrate that there exists a theory of anomaly and explanation that can guide retrieval of relevant explanations. Second, we present a plausibility evaluation process that efficiently detects conflicts and confirmations of an explanation's assumptions by prior patterns, making it possible to focus explanation adaptation when retrieved explanations are implausible. Third, we present methods for judging whether explanations provide the information needed to satisfy explainer goals beyond routine understanding. By reflecting experience and goals in the search for explanations, case-based explanation provides a practical mechanism for guiding search towards explanations that are both plausible and useful. %KW machine learning %KW cognitive modeling %KW case-based reasoning %KW case retrieval %KW case adaptation %KW explanation %KW abduction %KW anomaly detection %KW plausibility evaluation %KW story understanding %KW operationality %KW goal-based explanation %TI Constructive Similarity Assessment: Using Stored Cases to Define New Situations %AU David Leake %PU Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society, Lawrence Erlbaum Associates, 1992, pp. 313-318 %OR INDAI %LT p-92-01 %AV url http://www.cs.indiana.edu/~leake/p-92.01.ps.Z %YR 1992 %AB A fundamental issue in case-based reasoning is similarity assessment: determining similarities and differences between new and retrieved cases. Many methods have been developed for comparing input case descriptions to the cases already in memory. 
However, the success of such methods depends on the input case description being sufficiently complete to reflect the important features of the new situation, which is not assured. In case-based explanation of anomalous events during story understanding, the anomaly arises because the current situation is incompletely understood; consequently, similarity assessment based on matches between known current features and old cases is likely to fail because of gaps in the current case's description.

Our solution to the problem of gaps in a new case's description is an approach that we call constructive similarity assessment. Constructive similarity assessment treats similarity assessment not as a simple comparison between fixed new and old cases, but as a process for deciding which types of features should be investigated in the new situation and, if the features are borne out by other knowledge, added to the description of the current case. Constructive similarity assessment does not merely compare new cases to old: using prior cases as its guide, it dynamically carves augmented descriptions of new cases out of memory. %KW cognitive modeling %KW similarity assessment %KW case-based reasoning %KW explanation %TI Goal-based explanation evaluation %AU David Leake %PU Cognitive Science 15(4):509-545, 1991 %OR INDAI %LT p-91-01 %AV url http://www.cs.indiana.edu/~leake/p-91-01.ps.Z %YR 1991 %AB Many theories of explanation evaluation are based on context-independent criteria. Such theories either restrict their consideration to explanation towards a fixed goal, or assume that all valid explanations are equivalent, so that evaluation criteria can be neutral to the goals underlying the attempt to explain. However, explanation can serve a range of purposes that place widely divergent requirements on the information an explanation must provide. We argue that understanding what determines explanations' goodness requires a dynamic theory of evaluation, based on analysis of the information needed to satisfy the many goals that can prompt explanation; this view conforms to the common-sense idea that people accept and apply explanations precisely if those explanations give the information they need. We examine a range of goals that can underlie explanation, and present a theory for evaluating whether an explanation provides the information an explainer needs for these goals. 
We illustrate our theory by sketching its implementation in the computer program ACCEPTER, which does goal-based evaluation of the goodness of explanations for surprising events in news stories. %KW cognitive modeling %KW case-based reasoning %KW explanation %KW abduction %KW plausibility evaluation %KW story understanding %KW operationality %KW goal-based explanation