Distributed Link Sparsification for Scalable Scheduling using Graph Neural Networks

Published in IEEE TWC, 2024

Recommended citation: Zhongyuan Zhao, Gunjan Verma, Ananthram Swami, Santiago Segarra, "Distributed Link Sparsification for Scalable Scheduling using Graph Neural Networks," IEEE Transactions on Wireless Communications, 2024

Abstract

In wireless networks characterized by dense connectivity, the significant signaling overhead generated by distributed link scheduling algorithms can exacerbate issues such as congestion, energy consumption, and radio footprint expansion. To mitigate these challenges, we propose a distributed link sparsification scheme employing graph neural networks (GNNs) to reduce scheduling overhead for delay-tolerant traffic while maintaining network capacity. A GNN module is trained to adjust contention thresholds for individual links based on traffic statistics and network topology, enabling links to withdraw from scheduling contention when they are unlikely to succeed. Our approach is facilitated by a novel offline constrained reinforcement learning algorithm capable of balancing two competing objectives: minimizing scheduling overhead while ensuring that total utility meets the required level. In simulated wireless multi-hop networks with up to 500 links, our link sparsification technique effectively alleviates network congestion and reduces radio footprints across four distinct distributed link scheduling protocols.
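The core sparsification idea above — each link compares a local utility against a per-link contention threshold and withdraws when it falls below — can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's implementation: `utilities` stands in for queue- or traffic-weighted link metrics, and `thresholds` stands in for the per-link outputs of the trained GNN module.

```python
import random


def sparsify_links(utilities, thresholds):
    """Keep only links whose local utility exceeds its
    (hypothetical) GNN-predicted contention threshold;
    all other links withdraw from scheduling contention."""
    return [i for i, (u, t) in enumerate(zip(utilities, thresholds)) if u > t]


random.seed(0)
n_links = 10
utilities = [random.random() for _ in range(n_links)]  # stand-in link metrics
thresholds = [0.5] * n_links                            # stand-in GNN outputs
active = sparsify_links(utilities, thresholds)
print(f"{len(active)}/{n_links} links enter contention")
```

Only the links returned by `sparsify_links` participate in the (more expensive) distributed contention phase, which is where the reduction in signaling overhead comes from.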

Keywords: threshold, massive access, scalable scheduling, graph neural networks, constrained reinforcement learning.