Browsing by Author "Reggia, James A."
Now showing 1 - 8 of 8
Item: A Behavior-to-Brain Map (2006-05-18)
Tinnirella, Matt; Tagamets, Malle A.; Weems, Scott; Contreras-Vidal, Jose; Reggia, James A.

Development and study of large-scale computational models of the human brain, and their use to simulate cognitive functions, is becoming increasingly feasible. However, construction of integrated models that span multiple cognitive systems (language, memory, reasoning, learning, sensorimotor control, executive functions, etc.) is currently inhibited by the absence of any systematic catalog of experimentally documented associations between specific behavioral functions and specific brain regions. In this report we provide a prototype for such a mapping in the form of a semantic network. While preliminary and not comprehensive, the results presented here support the idea that an online mapping between cognitive function and cortical/subcortical structures can be developed as a useful reference source.

Item: Development of a Large-Scale Integrated Neurocognitive Architecture Part 1: Conceptual Framework (2006-06-15)
Reggia, James A.; Tagamets, Malle; Contreras-Vidal, Jose; Weems, Scott; Jacobs, David; Winder, Ransom; Chabuk, Timur

The idea of creating a general-purpose machine intelligence that captures many of the features of human cognition goes back at least to the earliest days of artificial intelligence and neural computation. In spite of more than a half-century of research on this issue, there is currently no existing approach to machine intelligence that comes close to providing a powerful, general-purpose, human-level intelligence.
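The behavior-to-brain mapping described in "A Behavior-to-Brain Map" above could be represented as a small labeled semantic network. A minimal Python sketch follows; the region names, relation labels, and entries are illustrative assumptions, not drawn from the actual catalog:

```python
from collections import defaultdict

class BehaviorBrainMap:
    """Toy semantic network linking behavioral functions to brain regions.

    Edges carry relation labels (e.g., "activates"), loosely mirroring the
    kind of experimentally documented associations the report catalogs.
    The specific entries below are illustrative, not from the real catalog.
    """

    def __init__(self):
        # function -> list of (relation, region) pairs
        self.edges = defaultdict(list)

    def associate(self, function, relation, region):
        self.edges[function].append((relation, region))

    def regions_for(self, function):
        """Return all brain regions associated with a behavioral function."""
        return sorted({region for _, region in self.edges[function]})

bmap = BehaviorBrainMap()
bmap.associate("speech production", "activates", "Broca's area")
bmap.associate("speech production", "activates", "primary motor cortex")
bmap.associate("language comprehension", "activates", "Wernicke's area")

print(bmap.regions_for("speech production"))
```

Looking up a function then returns every region it has been linked to, which is the kind of query a reference source like this would serve.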
However, substantial progress made during recent years in neural computation, high-performance computing, neuroscience, and cognitive science suggests that a renewed effort to produce a general-purpose and adaptive machine intelligence is timely, likely to yield qualitatively more powerful approaches to machine intelligence than those currently existing, and certain to lead to substantial progress in cognitive science, AI, and neural computation. In this report, we outline a conceptual framework for the long-term development of a large-scale machine intelligence that is based on the modular organization, dynamics, and plasticity of the human brain. Some basic design principles are presented, along with a review of some of the relevant existing knowledge about the neurobiological basis of cognition. Three intermediate-scale prototypes for parts of a larger system are successfully implemented, providing support for the effectiveness of several of the principles in our framework. We conclude that a human-competitive neuromorphic system for machine intelligence is a viable long-term goal, but that for the short term, substantial integration with more standard symbolic methods, as well as substantial research, will be needed to make this goal achievable.

Item: Exploring the Computational Explanatory Gap (MDPI, 2017-01-16)
Reggia, James A.; Huang, Di-Wei; Katz, Garrett

While substantial progress has been made in the field known as artificial consciousness, at the present time there is no generally accepted phenomenally conscious machine, nor even a clear route to how one might be produced should we decide to try. Here, we take the position that, from our computer science perspective, a major reason for this is a computational explanatory gap: our inability to understand or explain the implementation of high-level cognitive algorithms in terms of neurocomputational processing.
We explain how addressing the computational explanatory gap can identify computational correlates of consciousness. We suggest that bridging this gap is not only critical to further progress in the area of machine consciousness, but would also inform the search for neurobiological correlates of consciousness and would, with high probability, contribute to demystifying the “hard problem” of understanding the mind–brain relationship. We compile a listing of previously proposed computational correlates of consciousness and, based on the results of recent computational modeling, suggest that the gating mechanisms associated with top-down cognitive control of working memory should be added to this list. We conclude that developing neurocognitive architectures that contribute to bridging the computational explanatory gap provides a credible and achievable roadmap to understanding the ultimate prospects for a conscious machine, and to a better understanding of the mind–brain problem in general.

Item: The Maryland Virtual Demonstrator Environment for Robot Imitation Learning (2014-06-20)
Huang, Di-Wei; Katz, Garrett E.; Gentili, Rodolphe J.; Reggia, James A.

Robot imitation learning, where a robot autonomously generates the actions required to accomplish a task demonstrated by a human, has emerged as a potential replacement for the more conventional hand-coded approach to programming robots. Many past studies in imitation learning have human demonstrators perform tasks in the real world. However, this approach is generally expensive and requires high-quality image processing and complex human-motion understanding. To address this issue, we developed a simulated environment for imitation learning in which the visual properties of objects are simplified to lower the barriers of image processing. The user is provided with a graphical user interface (GUI) to demonstrate tasks by manipulating objects in the environment, from which a simulated robot in the same environment can learn.
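The gating mechanism proposed as a computational correlate of consciousness in "Exploring the Computational Explanatory Gap" above can be illustrated with a minimal sketch: a gate vector decides, element by element, whether working memory keeps its stored contents or loads new input. This is a generic gating scheme for illustration, not the authors' actual neurocognitive model:

```python
def gated_update(memory, candidate, gate):
    """Blend candidate input into working memory under gate control.

    Gate values near 1 let the candidate overwrite the stored item;
    values near 0 shield memory from distractors. A simplified sketch
    of top-down gating, not the model from the paper.
    """
    return [(1.0 - g) * m + g * c
            for m, c, g in zip(memory, candidate, gate)]

memory = [1.0, 0.0, 0.5]
distractor = [9.0, 9.0, 9.0]

held = gated_update(memory, distractor, [0.0, 0.0, 0.0])    # gate closed
loaded = gated_update(memory, distractor, [1.0, 1.0, 1.0])  # gate open
print(held)    # stored contents preserved
print(loaded)  # new input loaded
```

The design point is that the same circuit either maintains or updates memory depending solely on the top-down gate signal.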
We hypothesize that in many situations, imitation learning can be significantly simplified, while being more effective, when based solely on the objects being manipulated rather than on the demonstrator's body and motions. For this reason, the demonstrator in the environment is not embodied, and a demonstration as seen by the robot consists of sequences of object movements. A programming interface in Matlab is provided for researchers and developers to write code that controls the robot's behaviors. An XML interface is also provided to generate objects that form task-specific scenarios. This report describes the features and usage of the software.

Item: Measuring Organization and Asymmetry in Bihemispheric Topographic Maps (1998-10-15)
Alvarez, Sergio A.; Levitan, Svetlana; Reggia, James A.

We address the problem of measuring the degree of hemispheric organization and asymmetry of organization in a computational model of a bihemispheric cerebral cortex. A theoretical framework for such measures is developed and used to produce algorithms for measuring the degree of organization, symmetry, and lateralization in topographic map formation. The performance of the resulting measures is tested on several topographic maps obtained by self-organization of an initially random network, and the results are compared with subjective assessments made by humans. It is found that the closest agreement with the human assessments is obtained by using organization measures based on sigmoid-type error averaging. Measures are developed which correct for large constant displacements as well as curving of the hemispheric topographic maps. (Also cross-referenced as UMIACS-TR-96-51)

Item: A Simulation Environment for Evolving Multiagent Communication (2000-09-15)
Reggia, James A.; Schultz, Reiner; Uriagereka, Juan; Wilkinson, Jerry

A simulation environment has been created to support the study of emergent communication.
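The sigmoid-type error averaging favored in "Measuring Organization and Asymmetry in Bihemispheric Topographic Maps" above can be sketched generically: each positional error in the map is squashed through a saturating sigmoid before averaging, so a few very large displacements cannot dominate the organization score. The scale parameter and exact formula here are illustrative assumptions, not the paper's measure:

```python
import math

def sigmoid_error_average(errors, scale=1.0):
    """Score map organization via sigmoid-squashed error averaging.

    Each non-negative positional error is mapped into [0, 1) by a
    saturating sigmoid, then averaged; the score is 1 minus that mean,
    so 1.0 means perfect organization and values near 0 mean disorder.
    A generic sketch of sigmoid-type averaging, not the exact published
    formula.
    """
    squashed = [2.0 / (1.0 + math.exp(-e / scale)) - 1.0 for e in errors]
    return 1.0 - sum(squashed) / len(squashed)

# A well-organized map (small errors) scores near 1; a scrambled one near 0.
print(sigmoid_error_average([0.0, 0.1, 0.05]))
print(sigmoid_error_average([10.0, 12.0, 11.0]))
```

The saturation is what makes the measure robust: doubling an already-huge displacement barely changes the score, which matches how a human judge would rate both maps as simply "disorganized".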
Multiple agents exist in a two-dimensional world where they must find food and avoid predators. While non-communicating agents may survive, the world is configured so that survival and fitness can be enhanced through the use of inter-agent communication. The goal with this version of the simulator is to determine the conditions under which simple communication (signaling) emerges and persists during an evolutionary process. (Also cross-referenced as UMIACS-TR-2000-64)

Item: SMILE: Simulator for Maryland Imitation Learning Environment (2016-05-19)
Huang, Di-Wei; Katz, Garrett E.; Gentili, Rodolphe J.; Reggia, James A.

As robot imitation learning begins to replace conventional hand-coded approaches to programming robot behaviors, much work is focusing on learning from the actions of demonstrators. We hypothesize that in many situations, procedural tasks can be learned more effectively by observing object behaviors while completely ignoring the demonstrator's motions. To support studying this hypothesis, and robot imitation learning in general, we built a software system named SMILE, a simulated 3D environment. In this virtual environment, both a simulated robot and a user-controlled demonstrator can manipulate various objects on a tabletop. The demonstrator is not embodied in SMILE, so a recorded demonstration appears as if the objects move on their own. In addition to recording demonstrations, SMILE also allows programming the simulated robot via Matlab scripts, as well as creating highly customizable objects for task scenarios via XML. This report describes the features and usage of SMILE.

Item: Towards Imitation Learning of Dynamic Manipulation Tasks: A Framework to Learn from Failures (2014-06)
Langsfeld, Joshua D.; Kaipa, Krishnanand N.; Gentili, Rodolphe J.; Reggia, James A.; Gupta, Satyandra K.
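The two-dimensional signaling world of "A Simulation Environment for Evolving Multiagent Communication" above might be sketched as follows. The grid size, movement rule, and broadcast mechanism here are illustrative assumptions, not the actual simulator's design:

```python
import random

class Agent:
    """A minimal agent on a grid; some agents can broadcast signals."""
    def __init__(self, x, y, can_signal):
        self.x, self.y = x, y
        self.can_signal = can_signal
        self.energy = 0

def step(agents, food, width=20, height=20):
    """One tick of a toy signaling world (illustrative, not the real
    simulator): an agent that lands on food gains energy and, if it can
    signal, broadcasts the food location; hearers move one cell toward
    the signal, while everyone else wanders randomly."""
    signal = None
    for a in agents:
        if (a.x, a.y) in food:
            a.energy += 1
            if a.can_signal:
                signal = (a.x, a.y)
    for a in agents:
        if signal and (a.x, a.y) != signal:
            # Move one cell toward the signaled food location.
            a.x += (signal[0] > a.x) - (signal[0] < a.x)
            a.y += (signal[1] > a.y) - (signal[1] < a.y)
        else:
            # Random walk, clipped to the world bounds.
            a.x = min(max(a.x + random.choice([-1, 0, 1]), 0), width - 1)
            a.y = min(max(a.y + random.choice([-1, 0, 1]), 0), height - 1)
    return signal
```

Even this toy version exhibits the setup the report describes: non-signaling populations still find food by random search, but a broadcast shortens everyone else's path to it, so signaling can confer the fitness advantage that an evolutionary process might then preserve.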