Online coded caching

IEEE/ACM Transactions on Networking (TON), 2016

Citations 335 | Views 99
Abstract
We consider a basic content distribution scenario consisting of a single origin server connected through a shared bottleneck link to a number of users each equipped with a cache of finite memory. The users issue a sequence of content requests from a set of popular files, and the goal is to operate the caches as well as the server such that these requests are satisfied with the minimum number of bits sent over the shared link. Assuming a basic Markov model for renewing the set of popular files, we characterize approximately the optimal long-term average rate of the shared link. We further prove that the optimal online scheme has approximately the same performance as the optimal offline scheme, in which the cache contents can be updated based on the entire set of popular files before each new request. To support these theoretical results, we propose an online coded caching scheme termed coded least-recently sent (LRS) and simulate it for a demand time series derived from the dataset made available by Netflix for the Netflix Prize. For this time series, we show that the proposed coded LRS algorithm significantly outperforms the popular least-recently used (LRU) caching algorithm.
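As a rough illustration only, and not the paper's actual coded scheme, the following minimal Python sketch contrasts classic LRU eviction (recency keyed to when a user last requested a file) with a "least-recently sent" style eviction rule (recency keyed to when the server last transmitted a file over the shared link). The class names LRUCache and LRSCacheSketch and the notify_sent hook are hypothetical; the paper's coded LRS additionally relies on coded placement and coded multicast delivery across all users, which this sketch does not model.

from collections import OrderedDict

class LRUCache:
    """Classic least-recently-used cache: a hit refreshes the file's recency;
    a miss evicts the least-recently-requested file."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()           # file_id -> True, ordered by recency of request

    def request(self, file_id):
        if file_id in self.store:
            self.store.move_to_end(file_id)  # refresh recency on a hit
            return True                      # served from the local cache
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)   # evict least-recently requested file
        self.store[file_id] = True
        return False                         # miss: file fetched over the shared link

class LRSCacheSketch:
    """Toy least-recently-sent eviction: recency is keyed to when a file was last
    SENT over the shared link (i.e., last caused a transmission for any user),
    not to when this particular user last requested it."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()           # file_id -> True, ordered by recency of transmission

    def notify_sent(self, file_id):
        # Called whenever the server transmits file_id on the shared link,
        # regardless of which user triggered the transmission.
        if file_id in self.store:
            self.store.move_to_end(file_id)
            return
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)   # evict least-recently sent file
        self.store[file_id] = True

    def request(self, file_id):
        return file_id in self.store         # hit/miss check; no recency update on a request

# Hypothetical usage: replaying a demand trace against both policies.
lru, lrs = LRUCache(capacity=100), LRSCacheSketch(capacity=100)
for file_id in [3, 7, 3, 42, 7]:             # placeholder demand sequence
    hit = lru.request(file_id)
    if not lrs.request(file_id):
        lrs.notify_sent(file_id)              # server transmission updates LRS recency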
Keywords
content distribution, coded caching, LRS scheme, cache storage, Markov model, LRU caching algorithm, online coded caching scheme, finite memory cache, least-recently used caching algorithm, Netflix, least-recently sent scheme, demand time series, content distribution scenario, Markov processes, time series, online scheme, content requests