Low-Resource Language Information Processing Using Dwarf Mongoose Optimization with Deep Learning Based Sentiment Classification
ACM Transactions on Asian and Low-Resource Language Information Processing (2023)
Harcourt Butler Technical University
Abstract
Asian and low-resource language information processing refers to the field of computational linguistics that aims to develop natural language processing (NLP) technologies for languages that have few available language resources or are less commonly spoken. This is an important field of study because many languages in Asia and other parts of the world are underrepresented in NLP, which can limit access to information and technology for their speakers. The growing volume of user-generated content on the web has made sentiment analysis (SA) a significant tool for extracting data about human emotional states. Twitter sentiment detectors provide a superior solution for assessing the quality of products and services compared with other conventional technologies. The detection performance and accuracy of SA are highly dependent on the classifier method and the quality of the input features. Deep learning (DL) methods use distinct techniques to extract information from raw data such as tweets or texts and represent it in different forms of models. Therefore, this article presents a Dwarf Mongoose Optimization with Deep Learning-Based Twitter Sentiment Classification (DMODL-TSC) technique to classify sentiments in tweets. The presented DMODL-TSC technique leverages the concepts of NLP and DL. First, the raw tweets are preprocessed to transform them into a useful format. Next, the DMODL-TSC technique applies the FastText word embedding technique. Then, the bidirectional recurrent neural network (BiRNN) method is used to recognize sentiments. Finally, the DMO technique is used to optimize the hyperparameters of the BiRNN method, which leads to effective classification performance.
The DMODL-TSC system was evaluated comprehensively on three datasets, and the obtained outcomes illustrate its superiority over comparable approaches.
Key words
Low-resource language information processing, Sentiment classification, Twitter data, Sentiment analysis, Deep learning, Language information, Machine learning, Text classification, Natural language processing
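The final stage of the pipeline described in the abstract, tuning the BiRNN hyperparameters with Dwarf Mongoose Optimization, can be sketched as a population-based search. The sketch below is an illustrative simplification (loosely modeled on the alpha-group foraging update of DMO), not the authors' implementation: the objective function is a mock stand-in for a full BiRNN train/validate cycle, and the parameter ranges and update rule are assumptions.

```python
import random

# Hypothetical stand-in for BiRNN validation accuracy; in the paper this would
# be a full train/evaluate cycle of the BiRNN on the tweet datasets.
def mock_validation_accuracy(lr, hidden_units):
    # Illustrative surface peaking near lr=1e-3 and 128 hidden units.
    return 1.0 / (1.0 + abs(lr - 1e-3) * 500 + abs(hidden_units - 128) / 256)

def dmo_style_search(objective, pop_size=10, iterations=30, seed=42):
    """Simplified population-based hyperparameter search, loosely modeled on
    the alpha-group foraging step of Dwarf Mongoose Optimization."""
    rng = random.Random(seed)
    # Each candidate is a (learning_rate, hidden_units) pair.
    pop = [(rng.uniform(1e-4, 1e-1), rng.randint(16, 512)) for _ in range(pop_size)]
    best = max(pop, key=lambda p: objective(*p))
    for _ in range(iterations):
        new_pop = []
        for lr, h in pop:
            # Move each candidate toward the current best with a random step
            # phi in [-1, 1], analogous to the mongoose foraging update.
            phi = rng.uniform(-1.0, 1.0)
            lr2 = min(1.0, max(1e-5, lr + phi * (best[0] - lr)))
            h2 = min(1024, max(8, int(h + phi * (best[1] - h))))
            # Greedy acceptance: keep the move only if it improves the score.
            cand = (lr2, h2) if objective(lr2, h2) > objective(lr, h) else (lr, h)
            new_pop.append(cand)
        pop = new_pop
        best = max(pop + [best], key=lambda p: objective(*p))
    return best

best_lr, best_hidden = dmo_style_search(mock_validation_accuracy)
```

In the actual DMODL-TSC method, `objective` would retrain the FastText + BiRNN model for each candidate setting, and the full DMO algorithm also includes scout and babysitter groups that this sketch omits.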