A DGA Domain Name Detection Method Based on Two-Stage Feature Reinforcement

2023 IEEE 22nd International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom/BigDataSE/CSE/EUC/iSCI 2023), published 2024

Civil Aviation University of China

Abstract
The domain name features used by existing detection methods for domain generation algorithms (DGAs) are generally easy to evade, so common DGA domain name detection methods often fail to detect DGA domains effectively. To address this issue, we propose a DGA domain name detection method based on two-stage feature reinforcement. First, we encode the domain name to obtain a domain name word vector. Second, a slice pyramid network (SPN) processes the word vector to extract domain name features. Third, we reinforce these features with our proposed two-stage reinforcement method, which adds domain name semantic information to the extracted features and reduces redundant feature information to improve feature stability; the reinforced features are then converted into primary capsules to reduce feature loss. Finally, a dynamic routing algorithm processes the primary capsules to generate digital capsules, which are used to detect domain names. Experimental results on both domain name detection and domain name family classification show that our method achieves better detection performance than state-of-the-art methods.
Key words
DGA domain name detection, slice pyramid network, two-stage feature reinforcement, dynamic routing algorithm
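
No source code accompanies this page. As a rough illustration of the final stage the abstract describes, the sketch below implements the standard routing-by-agreement procedure from capsule networks (Sabour et al., 2017) in NumPy; whether the paper uses this exact variant is an assumption, and the shapes, names (u_hat, num_primary, num_out), and the two-class benign/DGA setup are illustrative, not details taken from the paper.

    import numpy as np

    def squash(s, axis=-1, eps=1e-8):
        # Capsule nonlinearity: shrinks vector length into [0, 1) while preserving direction.
        sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
        return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

    def dynamic_routing(u_hat, num_iters=3):
        # u_hat: votes of shape (num_primary, num_out, dim_out), i.e. each primary
        # capsule's prediction vector for each output ("digital") capsule.
        num_primary, num_out, _ = u_hat.shape
        b = np.zeros((num_primary, num_out))                      # routing logits
        for _ in range(num_iters):
            c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coefficients
            s = np.einsum('ij,ijk->jk', c, u_hat)                 # weighted sum of votes
            v = squash(s)                                         # output capsules
            b = b + np.einsum('ijk,jk->ij', u_hat, v)             # raise logits where votes agree
        return v

    # Hypothetical usage: 8 primary capsules vote for 2 output capsules (benign vs. DGA);
    # the length of each output capsule acts as the class score.
    u_hat = np.random.randn(8, 2, 16) * 0.1
    v = dynamic_routing(u_hat)
    print('class scores:', np.linalg.norm(v, axis=-1))

In this formulation the length of a capsule's output vector encodes class presence, which is how the digital capsules can be used directly for detection: the predicted class is simply the output capsule with the greatest length.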