DEvS: data distillation algorithm based on evolution strategy.

Annual Conference on Genetic and Evolutionary Computation (GECCO), 2022

Abstract
The development of machine learning solutions often relies on training with large labeled datasets. This raises challenges in terms of data storage, data privacy protection, and longer model training times. One possible way to overcome these problems is dataset distillation: the process of creating a smaller dataset while maximally preserving its task-related information. In this paper, a new dataset distillation algorithm called DEvS is proposed. It uses an evolution strategy to condense the training samples initially available for an image classification task while minimizing the loss of classification accuracy. Experiments on CIFAR-10 demonstrate the competitiveness of the proposed approach. Moreover, contrary to recent trends, DEvS performs derivative-free image generation and therefore scales better to larger input image sizes.
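To make the idea concrete, the following is a minimal, hypothetical sketch of derivative-free dataset distillation with a (1+λ) evolution strategy, not the authors' DEvS implementation: synthetic prototypes are mutated with Gaussian noise and kept whenever they do not hurt a cheap fitness proxy (here, nearest-prototype classification accuracy on the original data). The function name, fitness proxy, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def evolve_distilled_set(X, y, n_per_class=1, offspring=20,
                         sigma=0.5, gens=100, seed=0):
    """Hypothetical (1+lambda) ES sketch of dataset distillation.

    Condenses (X, y) into n_per_class synthetic prototypes per class,
    using derivative-free Gaussian mutations (no backprop through a model).
    """
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    proto_y = np.repeat(classes, n_per_class)

    # Initialize prototypes from randomly chosen real samples of each class.
    idx = np.concatenate(
        [rng.choice(np.flatnonzero(y == c), n_per_class) for c in classes])
    best = X[idx].astype(float).copy()

    def fitness(protos):
        # Cheap proxy for "train on distilled set, test on original data":
        # accuracy of a 1-nearest-prototype classifier on (X, y).
        d = ((X[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
        return (proto_y[d.argmin(axis=1)] == y).mean()

    best_fit = fitness(best)
    for _ in range(gens):
        for _ in range(offspring):
            cand = best + sigma * rng.standard_normal(best.shape)
            fit = fitness(cand)
            if fit >= best_fit:  # elitist selection: keep ties or improvements
                best, best_fit = cand, fit
    return best, proto_y, best_fit
```

On two well-separated Gaussian blobs, one prototype per class already recovers near-perfect nearest-prototype accuracy; for images, the fitness proxy would instead train a small classifier on the distilled set, which is far more expensive but follows the same derivative-free loop.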
Keywords
dataset distillation, image classification, neural networks, evolution strategy, optimization