Distributed Link Sparsification for Scalable Scheduling using Graph Neural Networks
Published in IEEE TWC, 2025
Recommended citation: Zhongyuan Zhao, Gunjan Verma, Ananthram Swami, and Santiago Segarra, "Distributed Link Sparsification for Scalable Scheduling using Graph Neural Networks," IEEE Transactions on Wireless Communications, accepted for publication. DOI: 10.1109/TWC.2025.3606741, https://doi.org/10.1109/TWC.2025.3606741
- Received 27 February 2024; revised 20 January 2025, 17 June 2025, and 13 August 2025; accepted 1 September 2025.
- Related conference paper: Distributed Link Sparsification for Scalable Scheduling using Graph Neural Networks
- Preprint: https://arxiv.org/abs/2509.05447
- Code: https://github.com/zhongyuanzhao/gcn-sparsify (full code will be released soon; stay tuned!)
Abstract
In wireless networks characterized by dense connectivity, the significant signaling overhead generated by distributed link scheduling algorithms can exacerbate congestion, energy consumption, and radio footprint expansion. To mitigate these challenges, we propose a distributed link sparsification scheme employing graph neural networks (GNNs) to reduce scheduling overhead for delay-tolerant traffic while maintaining network capacity. A GNN module is trained to adjust contention thresholds for individual links based on traffic statistics and network topology, enabling links to withdraw from scheduling contention when they are unlikely to succeed. Our approach is facilitated by a novel offline constrained reinforcement learning algorithm capable of balancing two competing objectives: minimizing scheduling overhead while ensuring that the total utility meets the required level. In simulated wireless multi-hop networks with up to 500 links, our link sparsification technique effectively alleviates network congestion and reduces radio footprints across four distinct distributed link scheduling protocols.
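To make the mechanism concrete, here is a minimal, self-contained sketch of threshold-based link sparsification on a conflict graph (nodes are links, edges mark interference). This is not the paper's released code: the one-layer mean-aggregation GNN, its random weights, and the `utility` feature (e.g., queue length times rate) are all illustrative assumptions; a link simply withdraws from contention when its utility falls below its GNN-derived threshold.

```python
# Toy sketch (NOT the authors' code) of per-link contention thresholds
# produced by a one-layer GNN on the conflict graph.
import numpy as np

rng = np.random.default_rng(0)
n = 8                                    # number of links (conflict-graph nodes)
adj = (rng.random((n, n)) < 0.3).astype(float)
adj = np.triu(adj, 1)
adj += adj.T                             # symmetric conflict graph, no self-loops

utility = rng.exponential(1.0, n)        # assumed per-link utility (placeholder)
deg = adj.sum(1, keepdims=True)
feats = np.hstack([utility[:, None], deg])  # node features: [utility, degree]

# One round of mean-aggregation message passing with random toy weights.
w_self, w_nbr = rng.normal(size=(2, 1)), rng.normal(size=(2, 1))
agg = adj @ feats / np.maximum(deg, 1)
hidden = np.maximum(feats @ w_self + agg @ w_nbr, 0)   # ReLU
threshold = np.exp(hidden).ravel()       # positive per-link contention thresholds

contend = utility >= threshold           # links below threshold withdraw
print(f"{contend.sum()} of {n} links stay in contention:", np.flatnonzero(contend))
```

The constrained training objective can likewise be sketched as a primal-dual (Lagrangian) update, again with assumed names, gradients, and step sizes rather than the paper's actual offline algorithm:

```python
# Hedged sketch of a Lagrangian-style step for the constrained objective:
# minimize expected scheduling overhead subject to total utility staying
# above a required level. All arguments are placeholder estimates.
def dual_update(theta, lam, grad_overhead, grad_utility, utility_gap,
                lr=1e-3, lr_lam=1e-2):
    """One step on L = overhead - lam * (utility - requirement).

    theta: policy (GNN) parameters; lam: nonnegative multiplier;
    utility_gap = achieved_utility - required_utility (estimated offline).
    """
    theta = theta - lr * (grad_overhead - lam * grad_utility)  # primal descent
    lam = max(0.0, lam - lr_lam * utility_gap)  # dual ascent: lam grows on violation
    return theta, lam
```

Both snippets only illustrate the two ideas named in the abstract; the actual architecture and training procedure are in the preprint and repository linked above.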
Keywords: Threshold, massive access, scalable scheduling, graph neural networks, constrained reinforcement learning.
