Joint Content Caching and Recommendation in Opportunistic Mobile Networks Through Deep Reinforcement Learning and Broad Learning

IEEE Transactions on Services Computing (2023)

Abstract
Edge caching has been a research hotspot in Mobile Edge Computing (MEC) in recent years, as it is an effective way to ease the traffic burden in cellular networks. It places content at the edge of the network and assists content transmission via Device-to-Device (D2D) links. Traditional caching strategies depend strictly on the personal preferences of users, and they can hardly reduce the transmission cost while ensuring high effectiveness. Fortunately, recent studies have found that combining caching with recommendation can effectively improve the efficiency of edge caching and reduce the transmission cost. In this article, we jointly consider content caching and recommendation through Opportunistic Mobile Networks (OMNs) to reduce the cost of the Content Service Center (CSC). To obtain the optimal caching and recommendation solutions under a sparse rating matrix, we propose a Joint Content Caching and Recommender System (JCCRS). In JCCRS, a Broad Incremental Learning based Collaborative Filtering algorithm, named BILCF, is first proposed to predict the missing ratings. Afterwards, we quantify the relationship between each pair of Mobile Users (MUs) according to their mobility, similarity, and preference. The content caching and recommendation problem is then modeled as a Non-Linear Integer Programming (NLIP) problem, which we prove is NP-hard. To solve this problem, a Deep Deterministic Policy Gradient (DDPG) based Content Caching and Recommendation Method, named DCRM, is further proposed to obtain approximately optimal solutions. Extensive experiments on both a realistic dataset and a synthetic dataset validated against the realistic data show that the proposed algorithms outperform other baseline methods under different scenarios.
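The rating-prediction step mentioned in the abstract can be illustrated with a minimal sketch of plain user-based collaborative filtering on a sparse rating matrix. This is not the paper's BILCF algorithm (its broad incremental learning component is not reproduced); the function `predict_missing_ratings` and the toy matrix below are hypothetical, shown only to make the "predict the missing ratings" step concrete:

```python
import numpy as np

def predict_missing_ratings(R):
    """Fill zero entries (missing ratings) of a user-item matrix R with
    cosine-similarity-weighted averages over other users' ratings.

    Illustrative only: stands in for the rating-completion step that
    BILCF performs in the paper, not for BILCF itself.
    """
    R = np.asarray(R, dtype=float)
    norms = np.linalg.norm(R, axis=1, keepdims=True)
    norms[norms == 0] = 1.0                     # avoid division by zero
    unit = R / norms
    sim = unit @ unit.T                         # user-user cosine similarity
    np.fill_diagonal(sim, 0.0)                  # ignore self-similarity
    filled = R.copy()
    for u, i in zip(*np.where(R == 0)):         # each missing entry
        rated = R[:, i] > 0                     # users who rated item i
        w = sim[u, rated]
        if w.sum() > 0:
            filled[u, i] = w @ R[rated, i] / w.sum()
    return filled

# Hypothetical 3-user x 3-item rating matrix; 0 marks a missing rating.
R = [[5, 0, 3],
     [4, 2, 3],
     [0, 2, 4]]
P = predict_missing_ratings(R)
```

Known ratings are left untouched; only the zero entries are replaced by similarity-weighted estimates, which JCCRS would then feed into the caching and recommendation decision.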
Keywords
opportunistic mobile networks,deep reinforcement learning,reinforcement learning