“Distributed Link Sparsification for Scalable Scheduling Using Graph Neural Networks” submitted to ICASSP 2022.

7 minute read

Preprint: https://arxiv.org/abs/2203.14339

If you have ever been in a crowded café or stadium, you may have experienced slow or broken Internet even with full signal on your smartphone or laptop. This phenomenon boils down to a small scheduling overhead for every connection to the network. This overhead is not a big deal in wired networks, but in wireless networks the per-connection overhead typically grows with the number of devices sharing the medium. That means you cannot get $1/N$ of the total bandwidth of a wireless network, where $N$ is the number of active wireless devices. In fact, if you keep increasing $N$, the overhead grows and eventually takes up all the resources, leaving the network congested over-the-air. The network needs to limit the maximum number of admitted users to prevent this from happening. That’s a big problem.
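To make this concrete, here is a toy model (my own illustration, not from the paper): assume a total bandwidth $B$ and a scheduling overhead that grows linearly with the number of active devices, $c$ per device. The effective per-device throughput then falls below the naive $B/N$ and hits zero once the overhead consumes everything:

```python
def per_device_throughput(B, c, N):
    """Effective throughput each of N devices gets after scheduling overhead.

    Toy linear-overhead model: of the total bandwidth B, an overhead of
    c per active device is lost before the remainder is split N ways.
    """
    usable = max(0.0, B - c * N)  # overhead grows with every active device
    return usable / N

# With B = 100 Mbps and c = 1 Mbps of overhead per device:
print(per_device_throughput(100, 1, 10))   # 9.0 -- close to the naive 10.0
print(per_device_throughput(100, 1, 50))   # 1.0 -- far below the naive 2.0
print(per_device_throughput(100, 1, 100))  # 0.0 -- overhead ate everything
```

The exact overhead model varies by protocol; the point is only that per-device throughput collapses well before the naive $B/N$ estimate suggests.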

Now, imagine that there will be 10 million wireless connections per $km^2$ by 2030, mostly from Internet-of-Things (IoT) devices. In that scenario, nothing could get through the network even if the total traffic demand is theoretically well below the bandwidth of the network. This is one of the biggest challenges for 5G and beyond wireless networks, termed ‘massive access’ or ‘massive connectivity’ (ref. [Chen 2021]).

In this work, we come up with a scalable scheduling scheme with reduced overhead for wireless multihop networks. Our solution can generally reduce the overhead by two orders of magnitude while retaining almost $70\%$ of the network capacity, and it is applicable to both synchronized and random access networks. Wireless multihop networks are infrastructureless, self-organizing wireless networks, which have been used in harsh or hostile environments, e.g., military communications, disaster relief, and environmental and border monitoring. In the future wireless ecosystem, wireless multihop networks will also play a bigger role, e.g., in supporting IoT, connected vehicles, drones, and wireless backhaul for 5G and beyond (small cells, mmWave, satellite constellations) (ref. [Akyildiz 2022]).

The highlight of this work is the methodology of using node embeddings generated by Graph Neural Networks as parameters of localized decision functions. These localized decision functions allow different nodes to make decisions in slightly different manners based on their localities in the network. In our case, the decision is whether or not to join the contention for scheduling: the scheduling overhead grows as more nodes contend for access. Conventionally, one would obtain these parameters through centralized or distributed optimization, both of which require thousands of rounds of message exchanges. With Graph Neural Networks, however, the (possibly suboptimal) parameters can be generated in a distributed manner with as little as a single round of message exchanges, even asynchronously by listening to the neighborhood. The idea of using one neural network (the slow network) to generate weights for another neural network (the fast network) can be traced back to Jürgen Schmidhuber’s 1991 paper (ref. [FWP0]). Check out his recent post “26 March 1991: Neural nets learn to program neural nets with fast weights—like today’s Transformer variants. 2021: New stuff!”. With a slow network made of Graph Neural Networks, we might be able to automatically configure individual nodes in a network to improve the performance of the entire system.
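The slow-network/fast-network pattern can be sketched in a few lines. This is a hypothetical illustration of the general idea, not the paper’s exact architecture: the topology, features, and (untrained) weights below are all my own placeholders. One round of message passing gives each node an embedding, and that embedding is then used as the weights of the node’s own local contention decision:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 5-node topology (adjacency matrix) and per-node features.
# Feature choice is an assumption: normalized queue length and degree.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
X = np.column_stack([rng.uniform(0, 1, 5),   # normalized queue length
                     A.sum(axis=1) / 4.0])   # normalized degree

# "Slow network": one GNN layer with shared weights (random here; in
# practice these would be trained). A @ X is a one-hop neighborhood
# aggregation, i.e., a single round of message exchanges.
W_self = rng.normal(size=(2, 3))
W_neigh = rng.normal(size=(2, 3))
H = np.tanh(X @ W_self + A @ X @ W_neigh)    # per-node embeddings

def local_decision(h, local_state):
    """'Fast network': the node's embedding h supplies the weights and
    bias of its own decision rule applied to its local observation."""
    w, b = h[:2], h[2]
    return float(w @ local_state + b) > 0.0

# Each node decides, on its own, whether to join the contention.
for i in range(5):
    print(f"node {i}: join contention = {local_decision(H[i], X[i])}")
```

The key property is that the decision rule itself differs per node, yet it is produced from purely local information after a single neighborhood exchange, rather than from thousands of optimization rounds.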

The preprint is linked above, and the source code will hopefully be online soon. I thank my co-authors, Ananthram Swami and Santiago Segarra, for putting this work together.


  • [Chen 2021] X. Chen, D. W. K. Ng, W. Yu, E. G. Larsson, N. Al-Dhahir, and R. Schober, “Massive access for 5G and beyond,” IEEE Journal on Selected Areas in Communications, vol. 39, no. 3, pp. 615–637, 2021.
  • [Akyildiz 2022] I. F. Akyildiz, A. Kak, and S. Nie, “6G and beyond: The future of wireless communications systems,” IEEE Access, vol. 8, pp. 133995–134030, 2020.
  • [FWP0] J. Schmidhuber. Learning to control fast-weight memories: An alternative to recurrent nets. Technical Report FKI-147-91, Institut für Informatik, Technische Universität München, 26 March 1991. PDF.