Browsing by Author "Miller, Michael"
Now showing 1 - 9 of 9
Item: Active Logics: A Unified Formal Approach to Episodic Reasoning (1999-10-14)
Elgot-Drapkin, Jennifer; Kraus, Sarit; Miller, Michael; Nirkhe, Madhura; Perlis, Donald

Artificial intelligence research falls roughly into two categories: formal and implementational. This division is not completely firm: there are implementational studies based on (formal or informal) theories (e.g., CYC, SOAR, OSCAR), and there are theories framed with an eye toward implementability (e.g., predicate circumscription). Nevertheless, formal/theoretical work tends to focus on very narrow problems (and even on very special cases of very narrow problems) while trying to get them "right" in a very strict sense, while implementational work tends to aim at fairly broad ranges of behavior, but often at the expense of any kind of overall conceptually unifying framework that informs understanding. It is sometimes urged that this gap is intrinsic to the topic: intelligence is not a unitary thing for which there will be a unifying theory, but rather a "society" of subintelligences whose overall behavior cannot be reduced to useful characterizing and predictive principles. Here we describe a formal architecture that is more closely tied to implementational constraints than is usual for formalisms, and which has been used to solve a number of commonsense problems in a unified manner. In particular, we address the issue of formal, integrated, and longitudinal reasoning: inferentially-modeled behavior that incorporates a fairly wide variety of types of commonsense reasoning within the context of a single extended episode of activity, requiring keeping track of ongoing progress and altering plans and beliefs accordingly. Instead of aiming at optimal solutions to isolated, well-specified, and temporally narrow problems, we focus on satisficing solutions to under-specified and temporally extended problems, much closer to real-world needs.
We believe that such a focus is required for AI to arrive at truly intelligent mechanisms with the ability to behave effectively over considerably longer time periods and ranges of circumstances than is common in AI today. While this will surely lead to less elegant formalisms, it is also surely requisite if AI is to get fully out of the blocks-world and into the real world. (Also cross-referenced as UMIACS-TR-99-65)

Item: Calibrating, Counting, Grounding, Grouping (1998-10-15)
Elgot-Drapkin, Jennifer; Gordon, Diana; Kraus, Sarit; Miller, Michael; Nirkhe, Madhura; Perlis, Don

Even an "elementary" intelligence for control of the physical world will require very many kinds of knowledge and ability. Among these are ones related to perception, action, and reasoning about "near space": that region comprising one's body and the portion of space within reach of one's effectors; chief among these are individuation and categorization of objects. These in turn are made useful in part by the additional capacities to estimate category size, change one's beliefs about categories, and form new categories or revise old ones. In this position paper we point out some issues in knowledge representation that can arise with respect to the above capacities, and suggest that the framework of "active logics" (see below) may be marshaled toward solutions. We will conduct our discussion in terms of learning to understand, in a semantically explicit way, one's own sensori-motor system and its interactions with near-space objects. (Also cross-referenced as UMIACS-TR-94-63)

Item: A Commentary on the Literature of Self-Reference (1985)
Perlis, D.; Miller, Michael; ISR

Self-reference, far from being just a logician's and a philosopher's puzzle, is in fact a central feature of human language and reason. It thus seems natural that intelligent machines will also have to deal with the issue of self-reference. We discuss some of the formal problems, and potential solutions and applications.
Portions of this essay are descriptive in nature, portions prescriptive. We are involved in the development of some of the ideas in the relevant literature, and make no apology for injecting a certain subjective note into the text, as opposed to forcing a false objectivity. We have also freely drawn on portions of essays written by one of the authors.

Item: Defaults Denied (1998-10-15)
Miller, Michael; Perlis, Don; Purang, Khemdut

We take a tour of various themes in default reasoning, examining new ideas as well as those of Brachman, Delgrande, Poole, and Schlechta. An underlying issue is that of stating that a potential default principle is not appropriate. We see this arise most dramatically as a problem in an attempt to formalize what are often loosely called "prototypes", although it also arises in other formal approaches to default reasoning. Some formalisms in the literature provide solutions, but not without costs. We propose a formalism that appears to avoid these costs; it can be seen as a step toward a population-based, set-theoretic modification of these approaches that may ultimately provide a closer tie to recent work on statistical (quantitative) foundations of (qualitative) defaults ([1]). Our analysis in particular indicates the need to resolve a conflation between use and mention in many default formalisms. Our treatment proposes such a resolution, and also explores the use of sets toward a more population-based notion of default. (Also cross-referenced as UMIACS-TR-96-61)

Item: A Memory Model for Real-time Common Sense Reasoning (1986)
Drapkin, J.; Miller, Michael; Perlis, D.; ISR

This paper reports on a significantly improved version of a system for real-time common sense reasoning. The research is based on the hypothesis that a simple conceptual architecture for memory suffices for a very broad range of behaviors in the common sense world.
In particular, we describe a working example of a mechanical reasoner that is rather flexible and robust, in that it can tolerate some inconsistencies; can work on goals; can "ruminate" without goals; can forget; can remember; can make assumptions and subsequently detect a conflict between a default conclusion and another assertion (or conclusion); can, under suitable conditions, decide between them; and can maintain this decision indefinitely until overridden by later information.

Item: On Default Handling: Consistency Before and After (1986)
Drapkin, J.; Miller, Michael; Perlis, D.; ISR

In common sense reasoning it is important to be able to handle conflicting data. We discuss this issue specifically in the context of default reasoning. We contrast two choices: either to constantly monitor the reasoning system in an effort to preserve consistency, or to allow inconsistencies to arise and then (try to) restore a semblance of order. That these are computationally virtually the same is granted; but there are other rather important distinctions between them bearing on default reasoning.

Item: Presentations and this and that: logic in action (1998-10-15)
Miller, Michael; Perlis, Don

The tie between linguistic entities (e.g., words) and their meanings (e.g., objects in the world) is one that a reasoning agent had better know about and be able to alter when occasion demands. This has a number of important commonsense uses. The formal point, though, is that a new treatment is called for so that rational behavior via a logic can measure up to the constraint that it be able to change usage, employ new words, change meanings of old words, and so on. Here we do not offer a new logic per se; rather, we borrow an existing one (step logic) and apply it to the specific issue of language change.
(Also cross-referenced as UMIACS-TR-94-36)

Item: Real-Time Default Reasoning, Relevance, and Memory Models (1985)
Drapkin, J.; Miller, Michael; Perlis, D.; ISR

We describe a working example of mechanical default reasoning that is rather robust, in that it can detect a conflict between a default conclusion and another assertion (or conclusion), can (under suitable conditions) decide between them, and can maintain this decision indefinitely until overridden by later information. Our model contains five key elements: STM, LTM, ITM, QTM, and RTM. STM, LTM, and ITM are standard parts of cognitively-based models of memory. QTM is a technical device that controls the flow of information into STM, and RTM is the repository of default resolution and relevance.

Item: What experts deny, novices must understand (1998-10-15)
Miller, Michael; Perlis, Don

We consider the problem of representing the denial of default information. We show that such denials are important parts of commonsense reasoning. Moreover, their representation is not a simple matter of negating traditional representations of default information. We have found a solution by separating default information into use and trend portions. This approach may also afford a more compact way to represent defaults in general. (Also cross-referenced as UMIACS-TR-94-64)
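The five-element memory model named in the Real-Time Default Reasoning abstract (STM, LTM, ITM, QTM, RTM) can be pictured with a minimal sketch. The store names and their stated roles come from the abstract; everything else here — the bounded STM capacity, FIFO queuing in QTM, displacement into ITM, and the `perceive`/`step`/`resolve_default` methods — is an illustrative assumption, not the authors' actual mechanism.

```python
from collections import deque

class MemoryModel:
    """Illustrative five-store model. Store names follow the abstract;
    the mechanics (capacities, FIFO policy) are assumptions for illustration."""

    def __init__(self, stm_capacity=7):
        self.stm_capacity = stm_capacity
        self.STM = []       # short-term memory: the current focus of reasoning
        self.LTM = set()    # long-term memory: stable background knowledge
        self.ITM = []       # intermediate-term memory: items displaced from STM
        self.QTM = deque()  # per the abstract, controls flow of information into STM
        self.RTM = {}       # per the abstract, repository of default resolution/relevance

    def perceive(self, fact):
        """New information enters through QTM rather than directly into STM."""
        self.QTM.append(fact)

    def resolve_default(self, default, decision):
        """Record a decision between a default conclusion and a conflicting
        assertion; it persists until overridden by later information."""
        self.RTM[default] = decision

    def step(self):
        """One reasoning step: admit one queued item into STM, displacing
        the oldest STM item into ITM when capacity is exceeded."""
        if self.QTM:
            self.STM.append(self.QTM.popleft())
        while len(self.STM) > self.stm_capacity:
            self.ITM.append(self.STM.pop(0))

m = MemoryModel(stm_capacity=2)
for fact in ["bird(tweety)", "penguin(tweety)", "not flies(tweety)"]:
    m.perceive(fact)
for _ in range(3):
    m.step()
m.resolve_default("flies(tweety)", "not flies(tweety)")
print(m.STM)  # the two most recently admitted facts
print(m.ITM)  # the fact displaced when STM overflowed
```

The point of the sketch is the control-flow separation the abstract describes: information never enters STM directly (QTM gates it in), and default resolutions live in a separate store (RTM) so they survive STM turnover.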