GraMeR: Graph Meta Reinforcement Learning for Multi-Objective Influence Maximization

Journal of Parallel and Distributed Computing (2024)

Abstract
Influence maximization (IM) is a combinatorial problem of identifying a subset of seed nodes in a network (graph) which, when activated, provides a maximal spread of influence in the network for a given diffusion model and a budget on the seed set size. IM has numerous applications such as viral marketing, epidemic control, sensor placement, and other network-related tasks. However, its practical use is limited by the computational complexity of current algorithms. Recently, deep reinforcement learning has been leveraged to solve IM in order to ease the computational burden. However, current approaches have serious limitations, including narrow IM formulations that consider only influence via spread and ignore self-activation, low scalability to large graphs, and a lack of generalizability across graph families, which leads to a long running time for every test network. In this work, we address these limitations through a unique approach that involves: (1) formulating a generic IM problem as a Markov decision process that handles both intrinsic and influence activations; and (2) incorporating generalizability via meta-learning across graph families. Previous works have combined deep reinforcement learning with graph neural networks, but this work solves a more realistic IM problem and incorporates generalizability across graphs via meta reinforcement learning. Extensive experiments are carried out on various standard networks to validate the performance of the proposed Graph Meta Reinforcement learning (GraMeR) framework. The results indicate that GraMeR is multiple orders of magnitude faster and more generic than conventional approaches when applied to small- to medium-scale graphs.
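To make the IM problem concrete, the following is a minimal sketch of the classical baseline the paper improves upon: greedy seed selection with Monte Carlo spread estimation under the independent cascade diffusion model. This is an illustrative assumption, not the GraMeR method itself (which replaces this costly simulation loop with a learned Q-function); all names (`simulate_ic`, `greedy_im`) and parameters (activation probability `p`, number of runs) are hypothetical choices for the sketch.

```python
import random

def simulate_ic(adj, seeds, p=0.1, rng=random):
    """One Monte Carlo run of the independent cascade model:
    each newly activated node gets one chance to activate each
    inactive neighbor with probability p. Returns spread size."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_im(adj, budget, p=0.1, runs=200, seed=0):
    """Greedy IM baseline: repeatedly add the node with the largest
    estimated marginal spread. Cost is O(budget * |V| * runs) cascade
    simulations, which is why learned approaches are attractive."""
    rng = random.Random(seed)
    nodes = set(adj) | {v for vs in adj.values() for v in vs}
    chosen = []
    for _ in range(budget):
        best, best_gain = None, -1.0
        for cand in nodes - set(chosen):
            gain = sum(simulate_ic(adj, chosen + [cand], p, rng)
                       for _ in range(runs)) / runs
            if gain > best_gain:
                best, best_gain = cand, gain
        chosen.append(best)
    return chosen

# Example: star graph with hub 0
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
seeds = greedy_im(adj, budget=2)
```

Note that this baseline also captures only spread-based activation; the paper's MDP formulation additionally models intrinsic (self-) activation of nodes.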
Keywords
Graph neural networks, Q-learning, influence maximization, multi-objective, meta-learning