
Explaining Sentiments: Improving Explainability in Sentiment Analysis Using Local Interpretable Model-Agnostic Explanations and Counterfactual Explanations

Xin Wang, Jianhui Lyu, J. Dinesh Peter, Byung-Gyu Kim, B. D. Parameshachari, Keqin Li, Wei

IEEE Transactions on Computational Social Systems (2025)

Northeastern University

Abstract
Sentiment analysis of social media platforms is crucial for extracting actionable insights from unstructured textual data. However, modern deep-learning-based sentiment analysis models lack explainability, acting as black boxes and limiting trust. This study focuses on improving the explainability of sentiment analysis models for social media platforms by leveraging explainable artificial intelligence (XAI). We propose a novel explainable sentiment analysis (XSA) framework incorporating intrinsic and post hoc XAI methods, i.e., local interpretable model-agnostic explanations (LIME) and counterfactual explanations. Specifically, to address the lack of local fidelity and stability in interpretations caused by LIME's random perturbation sampling, a new model-independent interpretation method is proposed that generates samples with an isometric-mapping virtual sample generation method based on manifold learning instead of LIME's random perturbation sampling. Additionally, a generative link tree is presented to create counterfactual explanations with strong data fidelity; it constructs counterfactual narratives from examples in the training data using a divide-and-conquer strategy combined with local greedy search. Experiments on social media datasets from Twitter, YouTube comments, Yelp, and Amazon demonstrate XSA's ability to provide local aspect-level explanations while maintaining sentiment analysis performance. Analyses reveal improved model explainability and enhanced user trust, demonstrating XAI's potential in sentiment analysis of social media platforms. The proposed XSA framework provides a valuable direction for developing transparent and trustworthy sentiment analysis models for social media platforms.
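For intuition only, and not the paper's implementation, the sketch below illustrates the first idea in the abstract: replacing LIME-style random perturbation with virtual samples interpolated between an instance and its nearest training neighbours, so that samples stay close to the data manifold, and then fitting a proximity-weighted linear surrogate whose coefficients act as the local explanation. The function name `explain_instance`, the Ridge surrogate, the Gaussian proximity kernel, and the interpolation scheme are illustrative assumptions.

```python
# Minimal sketch: LIME-style local surrogate with manifold-respecting
# virtual samples (interpolation toward nearest neighbours) instead of
# unconstrained random perturbation. Assumes a binary classifier with a
# scikit-learn-like predict_proba; all names here are hypothetical.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.linear_model import Ridge


def explain_instance(x, X_train, predict_proba, n_neighbors=10, n_samples=200):
    """Return per-feature weights of a local linear surrogate around x."""
    # 1. Find the local neighbourhood of x in the training data.
    nn = NearestNeighbors(n_neighbors=n_neighbors).fit(X_train)
    _, idx = nn.kneighbors(x.reshape(1, -1))
    neighbours = X_train[idx[0]]

    # 2. Generate virtual samples by convex interpolation between x and its
    #    neighbours, keeping them near the data manifold.
    alphas = np.random.uniform(0.0, 1.0, size=(n_samples, 1))
    picks = neighbours[np.random.randint(0, n_neighbors, size=n_samples)]
    virtual = alphas * x + (1.0 - alphas) * picks

    # 3. Query the black-box model and weight samples by proximity to x.
    y = predict_proba(virtual)[:, 1]
    dists = np.linalg.norm(virtual - x, axis=1)
    weights = np.exp(-(dists ** 2) / (dists.std() ** 2 + 1e-8))

    # 4. Fit an interpretable linear surrogate; its coefficients are the
    #    local, feature-level explanation.
    surrogate = Ridge(alpha=1.0).fit(virtual, y, sample_weight=weights)
    return surrogate.coef_
```

Likewise, a simplified stand-in for the example-based counterfactual idea (the paper's generative link tree additionally uses divide-and-conquer, which is omitted here) is to greedily copy feature values from a nearby training example of the target class until the black-box prediction flips:

```python
# Simplified, example-based counterfactual search via locally greedy feature
# substitution from a training "donor"; a sketch, not the paper's algorithm.
import numpy as np


def greedy_counterfactual(x, X_train, y_train, predict_proba, target_label):
    """Greedily edit x toward a nearby training example of the target class."""
    # Donor: the closest training example with the target label, so the
    # counterfactual stays faithful to observed data.
    candidates = X_train[y_train == target_label]
    donor = candidates[np.argmin(np.linalg.norm(candidates - x, axis=1))]

    cf, changed = x.copy(), []
    for _ in range(x.shape[0]):
        if predict_proba(cf.reshape(1, -1))[0].argmax() == target_label:
            return cf, changed  # prediction flipped with minimal edits
        # Locally greedy step: copy the single donor feature that raises the
        # target-class probability the most.
        best_j, best_gain = None, -np.inf
        for j in range(x.shape[0]):
            if j in changed:
                continue
            trial = cf.copy()
            trial[j] = donor[j]
            gain = predict_proba(trial.reshape(1, -1))[0, target_label]
            if gain > best_gain:
                best_j, best_gain = j, gain
        cf[best_j] = donor[best_j]
        changed.append(best_j)
    return cf, changed  # may not flip if x and the donor are nearly identical
```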
Key words
Explainable artificial intelligence (XAI), explainability, local interpretable model-agnostic explanations (LIME), sentiment analysis (SA)