Extraction of Rules from Discrete-Time Recurrent Neural Networks


Files

CS-TR-3465.ps (502.83 KB)

Date

1998-10-15

Abstract

The extraction of symbolic knowledge from trained neural networks and the direct encoding of (partial) knowledge into networks prior to training are important issues: they allow the exchange of information between symbolic and connectionist knowledge representations. The focus of this paper is the quality of the rules extracted from recurrent neural networks. Discrete-time recurrent neural networks can be trained to correctly classify strings of a regular language. Rules defining the learned grammar can be extracted from networks in the form of deterministic finite-state automata (DFAs) by applying clustering algorithms in the output space of the recurrent state neurons. Our algorithm can extract, from the same network, different finite-state automata that are consistent with a training set. We compare the generalization performance of these different models and of the trained network, and we introduce a heuristic that permits us to choose, among the consistent DFAs, the model which best approximates the learned regular grammar. (Also cross-referenced as UMIACS-TR-95-54)
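The extraction procedure the abstract describes — partitioning the output space of the recurrent state neurons and reading transitions off the trained network — can be sketched as follows. This is a minimal illustration under assumed details, not the paper's exact algorithm: the function names, the simple per-neuron quantization (in place of a general clustering algorithm), and the toy "network" used in the usage example are all hypothetical.

```python
import numpy as np

def extract_dfa(rnn_step, h0, alphabet, quant=2, max_states=64):
    """Extract a DFA by partitioning the recurrent state space.

    rnn_step(h, sym) -> next hidden state (NumPy array with entries in [0, 1]).
    Each state neuron's output is quantized into `quant` bins; each distinct
    quantized vector becomes one DFA state, explored breadth-first from h0.
    """
    def key(h):
        # Map a continuous hidden state to its quantization-cell index vector.
        return tuple(np.minimum((h * quant).astype(int), quant - 1))

    states = {key(h0): 0}    # quantized vector -> DFA state id
    reps = {0: h0}           # DFA state id -> representative hidden state
    trans = {}               # (state id, input symbol) -> DFA state id
    frontier = [0]
    while frontier and len(states) <= max_states:
        s = frontier.pop()
        for sym in alphabet:
            h = rnn_step(reps[s], sym)
            k = key(h)
            if k not in states:
                states[k] = len(states)
                reps[states[k]] = h
                frontier.append(states[k])
            trans[(s, sym)] = states[k]
    return trans, len(states)

# Toy stand-in for a trained network: an exact parity (XOR) update,
# whose hidden state stays at 0 or 1.
def step(h, sym):
    return np.array([abs(h[0] - sym)])

trans, n = extract_dfa(step, np.array([0.0]), [0, 1])
# Yields the two-state automaton for parity of the number of 1s.
```

Varying the quantization level `quant` (or, in the paper's setting, the clustering granularity) is what makes it possible to extract several different automata from the same network, all consistent with the training set.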
