Perlis, Donald R
This dissertation studies the role of reflection in intelligent autonomous systems. A reflective system is one that contains an internal representation of itself, so that it can introspect and make controlled, deliberate changes to itself. It is postulated that a reflective capability is essential for a system to expect the unexpected---to adapt to situations not foreseen by its designer. Two principal goals motivated this work: to explore the power of reflection (1) in a practical setting, and (2) as a method for approaching bounded-optimal rationality via learning. Toward the first goal, a formal model of a reflective agent is proposed, based on the Beliefs, Desires and Intentions (BDI) architecture but free from the logical omniscience problem. The model is reflective in the sense that aspects of its formal description, comprising a set of logical sentences, form part of its belief component and hence are available for reasoning and manipulation. As a practical application, this model is suggested as a foundation for constructing conversational agents capable of meta-conversation, i.e., agents that can reflect on the ongoing conversation. Toward the second goal, a new reflective form of reinforcement learning is introduced and shown to have a number of advantages over existing methods.
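The core idea of the reflective BDI model---that the agent's own rules are stored among its beliefs, where they can be inspected and changed---can be illustrated with a toy sketch. All names here (BDIAgent, introspect, adopt_rule) and the tuple encoding of rules are illustrative assumptions, not the dissertation's formal model:

```python
# A minimal sketch of a reflective BDI agent. Rules are stored as
# belief tuples ("rule", condition, action), so the agent can reason
# about and modify its own behavior; this encoding is hypothetical.

class BDIAgent:
    def __init__(self):
        self.beliefs = set()      # sentences the agent holds true
        self.desires = set()      # goals (unused in this toy sketch)
        self.intentions = []      # actions the agent has committed to
        # Reflection: part of the agent's own description lives in
        # its belief set, available for reasoning and manipulation.
        self.beliefs.add(("rule", "hungry", "eat"))

    def introspect(self):
        """Return the agent's beliefs about its own rules."""
        return {b for b in self.beliefs if b[0] == "rule"}

    def adopt_rule(self, condition, action):
        """Deliberate self-modification: add a new behavioral rule."""
        self.beliefs.add(("rule", condition, action))

    def deliberate(self, percepts):
        """Form intentions by matching percepts against rule-beliefs."""
        self.intentions = [action
                           for (_, cond, action) in self.introspect()
                           if cond in percepts]
        return self.intentions
```

Because rules are ordinary beliefs, a change made through `adopt_rule` is immediately visible both to introspection and to subsequent deliberation, which is the sense of "reflective" used above.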
The main contributions of this thesis are the following. In Part II, Chapter 2, the outline of a formal model of reflection based on the BDI agent model; in Chapter 3, the preliminary design and implementation of a conversational agent based on this model. In Part III, Chapter 4, the design and implementation of a novel benchmark problem that arguably captures all the essential and challenging features of an uncertain, dynamic, time-sensitive environment, setting the stage for clarifying the relationship between bounded-optimal rationality and computational reflection under the universal environment defined by Solomonoff's universal prior; in Chapter 5, the design and implementation of a computational-reflection-inspired reinforcement learning algorithm, reflective reinforcement learning (RRL), that can successfully handle POMDPs and non-stationary environments, together with comparative performance studies of RRL and some existing algorithms.
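For context on the comparative studies mentioned above, the conventional baseline in such comparisons is tabular Q-learning, which assumes a fully observable, stationary environment and so struggles with the POMDP and non-stationary settings RRL targets. The sketch below is a standard Q-learning loop on a toy deterministic environment; the environment, parameter values, and function name are illustrative and not taken from the thesis:

```python
import random

# Tabular Q-learning baseline (not the RRL algorithm itself).
# transitions[s][a] gives the next state; rewards[s][a] the reward.

def q_learning(transitions, rewards, n_states, n_actions,
               episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0                      # every episode starts in state 0
        for _ in range(20):        # bounded episode length
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: q[s][x])
            s2, r = transitions[s][a], rewards[s][a]
            # one-step temporal-difference update
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

Because the update assumes the observed state is the true state and that `transitions`/`rewards` never change, this baseline degrades under partial observability or drift, which is the gap the reflective approach is meant to address.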