A Learned Cache Eviction Framework with Minimal Overhead

arXiv (2023)

Abstract
Recent work shows the effectiveness of Machine Learning (ML) in reducing cache miss ratios by making better eviction decisions than heuristics. However, state-of-the-art ML caches require many predictions to make a single eviction decision, making them impractical for high-throughput caching systems. This paper introduces Machine learning At the Tail (MAT), a framework for building efficient ML-based caching systems by integrating an ML module with a traditional cache system based on a heuristic algorithm. MAT treats the heuristic algorithm as a filter that supplies both high-quality samples for training an ML model and likely candidate objects for eviction. We evaluate MAT on 8 production workloads spanning storage, in-memory caching, and CDNs. The simulation experiments show that MAT reduces the number of costly ML predictions per eviction from 63 to 2, while achieving miss ratios comparable to the state-of-the-art ML cache system. We compare a MAT prototype system with an LRU-based caching system in the same setting and show that they achieve similar request rates.
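The core idea in the abstract — using a heuristic as a filter so the ML model scores only a few tail candidates per eviction — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name `MATCache`, the choice of LRU as the heuristic, and the `model` callable (assumed to map a key to a predicted reuse score, higher meaning less likely to be reused soon) are all hypothetical.

```python
from collections import OrderedDict

class MATCache:
    """Sketch of MAT-style eviction: a heuristic (LRU here) filters a few
    candidates from its tail; the ML model is consulted only on those."""

    def __init__(self, capacity, model, num_candidates=2):
        self.capacity = capacity
        self.model = model          # hypothetical callable: key -> reuse score
        self.k = num_candidates     # ML predictions per eviction (paper: ~2)
        self.store = OrderedDict()  # recency order serves as the LRU heuristic

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        elif len(self.store) >= self.capacity:
            # Heuristic filter: take only k candidates from the LRU tail...
            candidates = list(self.store)[:self.k]
            # ...and let the ML model pick the victim among them,
            # so each eviction costs only k predictions instead of one
            # per cached object.
            victim = max(candidates, key=self.model)
            del self.store[victim]
        self.store[key] = value
```

The design point this sketch illustrates is that the model's per-prediction cost is amortized away: the heuristic does the cheap bulk filtering, and the expensive learned policy only arbitrates among the handful of objects the heuristic already considers eviction-worthy.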
Keywords
learned cache eviction framework, minimal overhead