Interpretable extractive text summarization with meta-learning and Bi-LSTM: A study of meta-learning and explainability techniques

Song-Nguyen Vo, Tien-Thinh Vo, Bac Le


Text summarization is a widely researched problem in the field of natural language processing. Multiple techniques have been proposed to tackle it, yet these methodologies may still exhibit limitations, such as the requirement for large training datasets, which are not always available, and, more importantly, a lack of interpretability or transparency in the model. In this paper, we propose using a meta-learning algorithm to train a deep learning model for extractive text summarization, and then applying various explanatory techniques, such as SHAP (Shapley, 1953), linear regression (Lederer, 2022), decision trees (Fürnkranz, 2010), and input modification, to gain insights into the model's decision-making process. To evaluate the effectiveness of our approach, we compare it to other popular natural language processing models, such as BERT (Miller, 2019) and XLNet (Yang et al., 2020), using the ROUGE metrics (Lin, 2004). Overall, our proposed approach provides a promising solution to the limitations of existing methods and a framework for improving the explainability of deep learning models in other natural language processing tasks.
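The abstract evaluates summaries with the ROUGE metrics (Lin, 2004), which score n-gram overlap between a candidate summary and a reference. A minimal sketch of ROUGE-N scoring (simple whitespace tokenization; not the official implementation, and function names here are illustrative) could look like:

```python
from collections import Counter

def rouge_n(reference: str, candidate: str, n: int = 1) -> dict:
    """Compute ROUGE-N precision, recall, and F1 between a reference
    summary and a candidate summary (lowercased, whitespace-tokenized)."""
    def ngrams(text: str) -> Counter:
        toks = text.lower().split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))

    ref, cand = ngrams(reference), ngrams(candidate)
    overlap = sum((ref & cand).values())  # clipped n-gram matches
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Toy example: 5 of 6 unigrams overlap in each direction.
scores = rouge_n("the cat sat on the mat", "the cat lay on the mat")
```

In practice, published ROUGE numbers also involve stemming and ROUGE-L (longest common subsequence), which this sketch omits.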
Key words
Extractive text summarization, Text summarization, Meta-learning, XAI, Interpretability, Deep learning