Stability Analysis of Various Symbolic Rule Extraction Methods from Recurrent Neural Network
CoRR (2024)
Abstract
This paper analyzes two competing rule extraction methodologies: quantization
and equivalence query. We trained 3600 RNN models, extracting 18,000 DFAs
with quantization approaches (k-means and SOM) and 3600 DFAs with the
equivalence-query (L^*) method, across 10 initialization seeds. We sampled
datasets from 7 Tomita and 4 Dyck grammars and trained 4 RNN cells on them:
LSTM, GRU, O2RNN, and MIRNN. Our experiments establish the superior
performance of O2RNN and of quantization-based rule extraction over the
alternatives. L^*, originally proposed for regular grammars, performs
similarly to quantization methods on Tomita languages when the networks are
perfectly trained. For partially trained RNNs, however, L^* is unstable in
the number of DFA states; for the Tomita 5 and Tomita 6 languages, for
example, it produced more than 100 states. In contrast, quantization methods
yield rules whose number of states is very close to that of the ground-truth
DFA. Among RNN cells, O2RNN consistently produces more stable DFAs than the
other cells. For Dyck languages, we observe that although GRU outperforms the
other RNNs in network performance, the DFAs extracted from O2RNN have higher
accuracy and better stability. Stability is computed as the standard
deviation of test-set accuracy across networks trained with 10 seeds. On Dyck
languages, quantization methods outperformed L^* with better stability in
both accuracy and the number of states. L^* often showed accuracy deviations
on the order of 16%-22% for GRU and MIRNN, while the deviations for
quantization methods varied between 5% and 15%. In many instances with LSTM
and GRU, the DFAs extracted by L^* failed even to beat chance accuracy (50%),
while those extracted by the quantization methods had standard deviations in
the 7%-17% range. For O2RNN, both rule extraction methods had deviations in
the 0.5%-3% range.
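The quantization approach mentioned above clusters an RNN's hidden-state vectors and reads DFA transitions off the clustered trajectories. The following is a minimal sketch of that idea, not the paper's implementation: the `toy_hidden_states` function is a hypothetical stand-in for a trained RNN (here it simply encodes the parity of '1's, a regular language, in a noisy 2-D hidden state), and the clustering is a small hand-rolled k-means with farthest-point initialization.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_hidden_states(string):
    """Hypothetical stand-in for a trained RNN over the alphabet {0, 1}.

    Returns the sequence of hidden states visited while reading `string`;
    the 2-D state noisily encodes the parity of '1's seen so far."""
    parity = 0
    hs = [np.array([0.0, 0.0])]  # initial hidden state
    for symbol in string:
        if symbol == "1":
            parity ^= 1
        hs.append(np.array([parity, parity], dtype=float) + 0.05 * rng.normal(size=2))
    return hs

def kmeans(points, k, iters=25):
    """Tiny Lloyd's-algorithm k-means with farthest-point initialization."""
    centroids = [points[0]]
    for _ in range(1, k):
        dists = np.min([((points - c) ** 2).sum(-1) for c in centroids], axis=0)
        centroids.append(points[dists.argmax()])
    centroids = np.array(centroids)
    for _ in range(iters):
        labels = ((points[:, None] - centroids[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = points[labels == j].mean(0)
    return centroids

def extract_dfa(traces, centroids):
    """Quantize each hidden state to its nearest centroid (an abstract DFA
    state) and record every observed transition (state, symbol) -> state."""
    def quantize(h):
        return int(((centroids - h) ** 2).sum(-1).argmin())
    delta = {}
    for symbols, hiddens in traces:
        state = quantize(hiddens[0])
        for sym, h in zip(symbols, hiddens[1:]):
            nxt = quantize(h)
            delta[(state, sym)] = nxt
            state = nxt
    return delta

# Collect hidden-state trajectories, cluster them, and read off transitions.
strings = ["0", "1", "10", "110", "0101", "1001"]
traces = [(s, toy_hidden_states(s)) for s in strings]
all_points = np.vstack([h for _, hs in traces for h in hs])
delta = extract_dfa(traces, kmeans(all_points, k=2))
```

With k=2 the recovered transition table matches the two-state parity DFA: reading '0' stays in the current state, reading '1' toggles it. In the paper's setting, k is a hyperparameter, and the extracted DFA is evaluated on held-out strings against the ground-truth grammar.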