RELINK: Edge Activation for Closed Network Influence Maximization via Deep Reinforcement Learning
Shivvrat Arya, Smita Ghosh, Bryan Maruyama, et al.
In Proceedings of the 34th ACM International Conference on Information and Knowledge Management, Seoul, Republic of Korea, 2025
Influence Maximization aims to select a subset of elements in a social network to maximize information spread under a diffusion model. While existing work primarily focuses on selecting influential nodes, these approaches assume unrestricted message propagation, an assumption that fails in closed social networks, where content visibility is constrained and node-level activations may be infeasible. Motivated by the growing adoption of privacy-focused platforms such as Signal, Discord, Instagram, and Slack, our work addresses the following fundamental question: How can we learn effective edge activation strategies for influence maximization in closed networks? To answer this question, we introduce Reinforcement Learning for Link Activation (RELINK), the first deep reinforcement learning (DRL) framework for edge-level influence maximization in privacy-constrained networks. RELINK models edge selection as a Markov Decision Process in which the agent learns to activate edges under budget constraints. Unlike prior node-based DRL methods, it uses an edge-centric Q-learning approach that accounts for structural constraints and constrained information propagation. Our framework combines a rich node embedding pipeline with an edge-aware aggregation module, and the agent is trained with an n-step Double DQN objective guided by dense reward signals that capture marginal gains in influence spread. Extensive experiments on real-world networks show that RELINK consistently outperforms existing edge-based methods, achieving up to 15% higher influence spread and improved scalability across diverse settings.
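To make the training objective mentioned in the abstract concrete, the following is a minimal, generic sketch of an n-step Double DQN target computation. It is not the paper's implementation; the function name, arguments, and values are illustrative. In Double DQN, the online network selects the greedy next action while the target network evaluates it, which reduces the overestimation bias of vanilla DQN; the n-step return folds in several intermediate rewards before bootstrapping.

```python
import numpy as np

def n_step_double_dqn_target(rewards, gamma, q_online_next, q_target_next):
    """Generic n-step Double DQN target (illustrative, not RELINK's code).

    rewards       : list of the n rewards r_t, ..., r_{t+n-1}
    gamma         : discount factor
    q_online_next : Q-values of the online network at state s_{t+n}
    q_target_next : Q-values of the target network at state s_{t+n}
    """
    n = len(rewards)
    # Discounted sum of the n intermediate rewards
    g = sum(gamma**k * r for k, r in enumerate(rewards))
    # Double DQN bootstrap: action chosen by the online net,
    # value read from the target net
    best_action = int(np.argmax(q_online_next))
    return g + gamma**n * q_target_next[best_action]

# Example: 3-step return with a 2-action Q-head
target = n_step_double_dqn_target(
    rewards=[1.0, 0.5, 0.25],
    gamma=0.9,
    q_online_next=np.array([0.2, 0.8]),   # online net prefers action 1
    q_target_next=np.array([0.3, 0.6]),   # target net evaluates action 1
)
```

In an edge-centric setting such as the one the abstract describes, the "actions" would index candidate edges rather than generic moves, and the dense rewards would correspond to marginal gains in influence spread after each activation.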