
Please use this identifier to cite or link to this item: http://hdl.handle.net/1903/12841

Title: Extracting Symbolic Representations Learned by Neural Networks
Authors: Huynh, Thuan Quang
Advisors: Reggia, James A
Department/Program: Computer Science
Type: Dissertation
Sponsors: Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Subjects: Computer science
Keywords: hidden layer representation; neural network; penalty function; rule extraction
Issue Date: 2012
Abstract: Understanding what neural networks learn from training data is of great interest in data mining, data analysis, critical applications, and the evaluation of neural network models. Unfortunately, the product of neural network training is typically a set of opaque matrices of floating-point numbers that are not readily understandable. This difficulty has inspired substantial past research on how to extract symbolic, human-readable representations from a trained neural network, but the results obtained so far are very limited (e.g., the rule sets produced are large). This problem occurs in part because of the distributed hidden layer representation created during learning. Most past symbolic knowledge extraction algorithms have focused on progressively more sophisticated ways to cluster this distributed representation. In contrast, in this dissertation I take a different approach: I develop ways to alter the error backpropagation training process itself so that it creates a representation of what has been learned in the hidden layer activation space that is more amenable to existing symbolic extraction methods. In this context, the dissertation research makes four main contributions. First, modifications to the backpropagation learning procedure are derived mathematically, and it is shown that these modifications can be accomplished as local computations. Second, the effectiveness of the modified learning procedure for feedforward networks is established by showing that, on a set of benchmark tasks, it produces rule sets that are substantially simpler than those produced by standard backpropagation learning. Third, the approach is extended to simple recurrent networks, where experimental evaluation shows a remarkable reduction in the size of the finite state machines extracted from networks trained this way. Finally, the method is further modified to work on echo state networks, and computational experiments again show a significant improvement in finite state machine extraction from these networks. These results clearly establish that principled modification of error backpropagation, so that it constructs a better-separated hidden layer representation, is an effective way to improve contemporary symbolic extraction methods.
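The abstract does not spell out the penalty function or the modified update rules, so the following is only a minimal sketch of the general idea described above: ordinary error backpropagation on a small sigmoid network, plus an assumed penalty term lam * h * (1 - h) whose gradient is a purely local computation and which pushes each hidden activation toward 0 or 1, yielding a better-separated hidden layer. The network size, the XOR task, and names such as lam and n_hidden are illustrative assumptions, not the dissertation's actual formulation.

# Minimal sketch (not the dissertation's exact method): backpropagation on a
# one-hidden-layer sigmoid network with a hypothetical separation penalty
# lam * h * (1 - h) added to the loss for each hidden activation h.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR as a tiny stand-in benchmark task (assumed here purely for illustration).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

n_hidden, lr, lam = 4, 0.5, 0.05          # lam weights the separation penalty
W1 = rng.normal(scale=0.5, size=(2, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
b2 = np.zeros(1)

for epoch in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)              # hidden activations
    o = sigmoid(h @ W2 + b2)              # network output

    # backward pass for the squared-error term
    d_o = (o - y) * o * (1 - o)           # gradient at output pre-activations
    d_h = (d_o @ W2.T) * h * (1 - h)      # gradient at hidden pre-activations

    # Hypothetical penalty lam * h * (1 - h): its derivative lam * (1 - 2h)
    # depends only on each unit's own activation (a local computation) and
    # drives h toward the extremes 0 or 1.
    d_h += lam * (1 - 2 * h) * h * (1 - h)

    # gradient-descent updates
    W2 -= lr * (h.T @ d_o)
    b2 -= lr * d_o.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

# After training, hidden activations sit near 0 or 1, so thresholding them
# gives a small set of distinct binary codes for rule extraction to work with.
print(np.round(sigmoid(X @ W1 + b1), 2))

Thresholding the printed hidden activations at 0.5 produces a handful of distinct binary codes per input, which is the kind of well-separated representation that existing symbolic extraction methods can consume directly.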
URI: http://hdl.handle.net/1903/12841
Appears in Collections: UMD Theses and Dissertations; Computer Science Theses and Dissertations

Files in This Item:

File                         Size      Format      Downloads
Huynh_umd_0117E_13271.pdf    1.56 MB   Adobe PDF   175

All items in DRUM are protected by copyright, with all rights reserved.

 
