
Evolutionary Dynamic Database Partitioning Optimization for Privacy and Utility

IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING (2024)

Victoria Univ

Cited 21 | Views 16
Abstract
Distributed database system (DDBS) technology offers advantages in query processing efficiency, scalability, and reliability. Moreover, by partitioning the attributes of sensitive associations into different fragments, DDBSs can be used to protect data privacy. However, designing a DDBS is complex when privacy and utility must be optimized in a time-varying environment. This paper proposes a distributed prediction-randomness framework for the evolutionary dynamic multiobjective partitioning optimization of databases. In the proposed framework, two sub-populations contain individuals representing database partitioning solutions. One sub-population uses a Markov chain-based predictor to predict discrete-domain partitioning solutions when the environment changes, and the other sub-population uses a random initialization operator to maintain population diversity. In addition, a knee-driven migration operator exchanges information between the two sub-populations. Experimental results show that the proposed algorithm outperforms competing solutions with respect to accuracy, convergence speed, and scalability.
Key words
Dynamic multiobjective optimization, database privacy and utility, database partitioning, evolutionary algorithm, prediction
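
Illustrative Sketch

To make the framework's mechanics concrete, below is a minimal Python sketch of the prediction-randomness loop described in the abstract; it is not the authors' implementation. The multiobjective privacy/utility evaluation is collapsed into a single placeholder fitness, knee-point selection is approximated by the best scalar score, and the per-environment evolutionary search (selection, crossover, mutation) is omitted. All identifiers (random_solution, estimate_transitions, predict, knee_migrate, NUM_ATTRS, NUM_FRAGMENTS) are hypothetical.

import random
from collections import defaultdict

NUM_ATTRS = 8       # attributes to be partitioned (hypothetical size)
NUM_FRAGMENTS = 3   # fragments in the distributed database (hypothetical)
POP_SIZE = 20

def random_solution():
    """A partitioning solution assigns each attribute to one fragment."""
    return tuple(random.randrange(NUM_FRAGMENTS) for _ in range(NUM_ATTRS))

def estimate_transitions(history):
    """First-order Markov estimate: for each attribute, count how its
    fragment assignment moved between consecutive environments."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, curr in zip(history, history[1:]):
        for attr, (a, b) in enumerate(zip(prev, curr)):
            counts[(attr, a)][b] += 1
    return counts

def predict(solution, counts):
    """Sample each attribute's next fragment from the learned chain,
    keeping the current assignment when no history is available."""
    new = []
    for attr, frag in enumerate(solution):
        row = counts.get((attr, frag))
        if row:
            frags, weights = zip(*row.items())
            new.append(random.choices(frags, weights=weights)[0])
        else:
            new.append(frag)
    return tuple(new)

def fitness(solution, env):
    """Placeholder scalar objective. The paper optimizes privacy and
    utility jointly; here we only score mismatch against a stand-in
    environment-dependent target assignment."""
    return sum(a != b for a, b in zip(solution, env))

def knee_migrate(pred_pop, rand_pop, score):
    """Knee-driven migration, simplified: copy each sub-population's
    best individual into the other to exchange information."""
    best_pred = min(pred_pop, key=score)
    best_rand = min(rand_pop, key=score)
    pred_pop[-1] = best_rand
    rand_pop[-1] = best_pred

history = [random_solution()]   # best solutions from past environments
pred_pop = [random_solution() for _ in range(POP_SIZE)]
rand_pop = [random_solution() for _ in range(POP_SIZE)]
for t in range(5):              # each iteration models an environment change
    env = random_solution()     # stand-in for the changed workload
    counts = estimate_transitions(history)
    pred_pop = [predict(s, counts) for s in pred_pop]        # predicted restart
    rand_pop = [random_solution() for _ in range(POP_SIZE)]  # diversity restart
    knee_migrate(pred_pop, rand_pop, lambda s: fitness(s, env))
    history.append(min(pred_pop + rand_pop, key=lambda s: fitness(s, env)))

The predictor here learns, per attribute, how fragment assignments tended to move across past environment changes, which is one plausible reading of a discrete-domain Markov chain-based predictor; the actual algorithm would additionally evolve both sub-populations within each environment and select migrants at the knee of the privacy-utility Pareto front.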