DarwiNN: efficient distributed neuroevolution under communication constraints

GECCO '20: Genetic and Evolutionary Computation Conference, Cancún, Mexico, July 2020

Abstract
Neuroevolution (NE), defined as the application of evolution-based training methods to Deep Neural Networks (DNNs), has recently demonstrated encouraging results on a variety of learning tasks. NE is highly parallel and relies on DNN inference as its main computational kernel and therefore can potentially leverage large-scale distributed inference-specific hardware infrastructure in the cloud or edge. We introduce chromosome updates (CU), a novel communication-optimized method for distributing NE computation, and DarwiNN, an open-source, GPU-accelerated distributed NE toolbox based on PyTorch, which implements CU and other algorithms for distributing NE.
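To illustrate the kind of computation the abstract describes, below is a minimal sketch of an evolution-strategies-style NE update in which each of the population's perturbations is regenerated from a shared random seed, so workers would only need to exchange seeds and scalar fitnesses rather than full parameter vectors. This is an illustrative toy (the fitness function, hyperparameters, and function names are assumptions), not DarwiNN's actual API or the chromosome-updates algorithm itself.

```python
import numpy as np

def evaluate(theta):
    # Toy stand-in for DNN inference: negative squared distance to a target.
    target = np.ones_like(theta)
    return -np.sum((theta - target) ** 2)

def es_step(theta, pop_size=50, sigma=0.1, lr=0.05, seed=0):
    # Every worker can regenerate the whole population's noise from `seed`,
    # so only the seed and the fitness scalars need to cross the network.
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((pop_size, theta.size))
    fitness = np.array([evaluate(theta + sigma * eps) for eps in noise])
    # Center fitnesses (baseline subtraction) and estimate the gradient
    # of expected fitness under the Gaussian perturbations.
    centered = fitness - fitness.mean()
    grad = centered @ noise / (pop_size * sigma)
    return theta + lr * grad

theta = np.zeros(10)
for step in range(200):
    theta = es_step(theta, seed=step)  # per-step seed shared by all workers
```

Because the perturbation noise is deterministic given the seed, this scheme keeps per-generation communication independent of the model size, which is the general motivation behind communication-optimized distribution of NE.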