Browsing by Author "Katz, Garrett"
Now showing 1 - 2 of 2
Item: Exploring the Computational Explanatory Gap (MDPI, 2017-01-16)
Authors: Reggia, James A.; Huang, Di-Wei; Katz, Garrett

While substantial progress has been made in the field known as artificial consciousness, at the present time there is no generally accepted phenomenally conscious machine, nor even a clear route to how one might be produced should we decide to try. Here, we take the position that, from our computer science perspective, a major reason for this is a computational explanatory gap: our inability to understand/explain the implementation of high-level cognitive algorithms in terms of neurocomputational processing. We explain how addressing the computational explanatory gap can identify computational correlates of consciousness. We suggest that bridging this gap is not only critical to further progress in the area of machine consciousness, but would also inform the search for neurobiological correlates of consciousness and would, with high probability, contribute to demystifying the “hard problem” of understanding the mind–brain relationship. We compile a listing of previously proposed computational correlates of consciousness and, based on the results of recent computational modeling, suggest that the gating mechanisms associated with top-down cognitive control of working memory should be added to this list. We conclude that developing neurocognitive architectures that contribute to bridging the computational explanatory gap provides a credible and achievable roadmap to understanding the ultimate prospects for a conscious machine, and to a better understanding of the mind–brain problem in general.

Item: Identifying Fixed Points in Recurrent Neural Networks using Directional Fibers: Supplemental Material on Theoretical Results and Practical Aspects of Numerical Traversal (2016-12-12)
Authors: Katz, Garrett; Reggia, James

Fixed points of recurrent neural networks can represent many things, including stored memories, solutions to optimization problems, and waypoints along non-fixed attractors. As such, they are relevant to a number of neurocomputational phenomena, ranging from low-level motor control and tool use to high-level problem solving and decision making. Therefore, global solution of the fixed point equations can improve our understanding and engineering of recurrent neural networks. While local solvers and statistical characterizations abound, we do not know of any method for efficiently and precisely locating all fixed points of an arbitrary network. To solve this problem we have proposed a novel strategy for global fixed point location, based on numerical traversal of mathematical objects we defined called directional fibers [2]. This report supplements our results in [2] by presenting certain technical aspects of our method in more depth.
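The second abstract contrasts abundant local solvers with the lack of a global method. As a minimal sketch of the kind of local solver it alludes to (this is not the directional-fiber traversal of [2]; the network, function names, and parameters below are illustrative assumptions), Newton's method can locate a single fixed point of a network with update rule v = tanh(W v):

```python
import numpy as np

def find_fixed_point(W, v0, tol=1e-10, max_iter=200):
    """Locate one fixed point of v = tanh(W v) by Newton's method.

    A local solver: it returns at most one fixed point per starting
    guess v0, which is exactly why global location is the hard part.
    """
    v = v0.astype(float).copy()
    for _ in range(max_iter):
        t = np.tanh(W @ v)
        f = t - v                      # residual of the fixed point equation
        if np.linalg.norm(f) < tol:
            return v
        # Jacobian of f(v) = tanh(W v) - v is diag(1 - tanh(W v)^2) W - I
        J = (1.0 - t**2)[:, None] * W - np.eye(len(v))
        v = v - np.linalg.solve(J, f)  # Newton step
    return v

# Small random example network; different choices of v0 may converge
# to different fixed points (v = 0 is always one of them).
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 3))
v = find_fixed_point(W, rng.standard_normal(3))
```

Repeating this from many random starts gives only a statistical sample of the fixed points, with no guarantee of completeness; the directional-fiber approach in [2] is aimed at that gap.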