Implementation of Deep Reinforcement Learning for Dynamic Multichannel Access in Wireless Networks

Authors

  • B. Vamsi Krishna  M.Tech Student, Department of Electronics and Communication Engineering, S.V.University College of Engineering, Tirupati, A.P., India
  • Dr. S. Varadarajan  Professor, Department of Electronics and Communication Engineering, S.V.University College of Engineering, Tirupati, A.P., India

Keywords

Deep Q Learning, Control Overhead Protocol, Multi-User Reinforcement Learning

Abstract

The ever-increasing demand for high-speed wireless communication services has underscored the importance of efficient spectrum utilization and dynamic network access. This research introduces an approach that leverages Deep Reinforcement Learning (DRL) with Deep Q-Networks (DQN) for dynamic multichannel access in wireless networks, and conducts a comprehensive comparative analysis between this method and conventional DRL-based Dynamic Spectrum Access (DSA). In the proposed method, DRL with DQN optimizes multichannel access for wireless devices: each device learns to select and utilize available channels based on real-time network conditions and user requirements. By learning from interactions with the environment, the system adapts its channel-selection strategy, improving spectrum utilization and network performance. To assess the effectiveness of the proposed method, extensive simulations and real-world experiments compare its performance with the existing DRL-based DSA system across key metrics: spectrum utilization, network throughput, latency, and Quality of Service (QoS). The comparative analysis reveals significant advantages of the DRL-with-DQN approach: it outperforms the conventional DSA system in spectrum utilization and network resource allocation, and it adapts more readily to changing network conditions, making it suitable for a wide range of wireless communication scenarios. This research highlights the potential of DRL with DQN for dynamic multichannel access, emphasizing its role in enhancing network efficiency and meeting the demands of modern wireless communication services. By comparing it with the established DSA approach, the study provides insight into the benefits and implications of adopting DRL-based strategies for optimizing wireless network access.
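The DQN-based channel-selection idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: the channel count, the two-state (Gilbert-Elliott-style) Markov channel model, the network size, and the learning hyperparameters below are all illustrative assumptions, and for simplicity the agent observes every channel's state each step (in the cited DSA setting it would typically observe only the channel it accessed). The Q-network is a single hidden layer with hand-written gradients so the sketch runs with NumPy alone.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS = 4          # illustrative channel count
HISTORY = 4             # past sensing results fed to the Q-network
STATE_DIM = N_CHANNELS * HISTORY
GAMMA, LR, EPS = 0.9, 0.01, 0.1

# Toy environment: each channel is an independent two-state Markov chain
P_STAY_GOOD, P_STAY_BAD = 0.9, 0.8
channels = rng.integers(0, 2, N_CHANNELS)   # 1 = channel is usable

def env_step():
    """Advance every channel one step in its Markov chain."""
    global channels
    stay = np.where(channels == 1, P_STAY_GOOD, P_STAY_BAD)
    flip = rng.random(N_CHANNELS) >= stay
    channels = np.where(flip, 1 - channels, channels)
    return channels.copy()

# One-hidden-layer Q-network (no frameworks, gradients by hand)
H = 32
W1 = rng.normal(0, 0.1, (STATE_DIM, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, N_CHANNELS)); b2 = np.zeros(N_CHANNELS)

def q_values(s):
    h = np.maximum(0.0, s @ W1 + b1)        # ReLU hidden layer
    return h, h @ W2 + b2                   # one Q-value per channel

def sgd_update(s, a, target):
    """One SGD step on the loss 0.5 * (Q(s, a) - target)^2."""
    global W1, b1, W2, b2
    h, q = q_values(s)
    err = q[a] - target                     # TD error for the chosen channel
    dh = W2[:, a] * err
    dh[h <= 0.0] = 0.0                      # ReLU gradient mask
    W2[:, a] -= LR * err * h; b2[a] -= LR * err
    W1 -= LR * np.outer(s, dh); b1 -= LR * dh

# Training loop with epsilon-greedy exploration and a replay buffer
replay, state, successes = [], np.zeros(STATE_DIM), 0
STEPS, BATCH = 3000, 16
for t in range(STEPS):
    if rng.random() < EPS:
        action = rng.integers(N_CHANNELS)
    else:
        action = int(np.argmax(q_values(state)[1]))
    obs = env_step()
    reward = float(obs[action])             # 1 if the chosen channel was good
    successes += reward
    # Next state: shift the history window, prepend this step's observation
    next_state = np.roll(state, N_CHANNELS)
    next_state[:N_CHANNELS] = obs
    replay.append((state, action, reward, next_state))
    replay = replay[-500:]
    if len(replay) >= BATCH:
        for s, a, r, s2 in [replay[i] for i in rng.integers(len(replay), size=BATCH)]:
            target = r + GAMMA * np.max(q_values(s2)[1])
            sgd_update(s, a, target)
    state = next_state

print(f"success rate: {successes / STEPS:.3f}")
```

Because the good state is sticky (probability 0.9 of remaining good), a learned policy that revisits recently good channels can beat blind random selection; the replay buffer and bootstrapped TD target are the two DQN ingredients the sketch preserves from the full approach.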

References

  1. S. Wang, H. Liu, P. Gomes, and B. Krishnamachari, “Deep reinforcement learning for dynamic multichannel access in wireless networks,” in ICNC, 2017. Code: https://github.com/ANRGUSC/MultichannelDQN-channelModel
  2. R. Knopp and P. Humblet, “Information capacity and power control in single-cell multiuser communications,” in IEEE ICC, 1995.
  3. I. F. Akyildiz, W.-Y. Lee, M. C. Vuran, and S. Mohanty, “Next generation/dynamic spectrum access/cognitive radio wireless networks: A survey,” Computer networks, vol. 50, no. 13, pp. 2127–2159, 2006.
  4. C. Papadimitriou and J. N. Tsitsiklis, “The complexity of Markov decision processes,” Math. Oper. Res., vol. 12, no. 3, pp. 441–450, 1987.
  5. V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, “Playing Atari with deep reinforcement learning,” arXiv preprint arXiv:1312.5602, 2013.
  6. E. Stevens-Navarro, Y. Lin, and V. W. S. Wong, “An mdp-based vertical handoff decision algorithm for heterogeneous wireless networks,” IEEE Trans on Vehicular Technology, vol. 57, no. 2, pp. 1243–1254, March 2008.
  7. P. Sakulkar and B. Krishnamachari, “Online learning of power allocation policies in energy harvesting communications,” in SPCOM, 2016.
  8. Q. Zhao, B. Krishnamachari, and K. Liu, “On myopic sensing for multi-channel opportunistic access: structure, optimality, and performance,” IEEE Trans. Wireless Commun., vol. 7, no. 12, pp. 5431–5440, Dec. 2008.
  9. S. H. A. Ahmad, M. Liu, T. Javidi, Q. Zhao, and B. Krishnamachari, “Optimality of myopic sensing in multichannel opportunistic access,” IEEE Trans. Inf. Theory, vol. 55, no. 9, pp. 4040–4050, 2009.
  10. K. Liu and Q. Zhao, “Indexability of restless bandit problems and optimality of Whittle index for dynamic multichannel access,” IEEE Trans. Inf. Theory, vol. 56, no. 11, pp. 5547–5567, Nov. 2010.
  11. P. Venkatraman, B. Hamdaoui, and M. Guizani, “Opportunistic bandwidth sharing through reinforcement learning,” IEEE Trans. on Vehicular Technology, vol. 59, no. 6, pp. 3148–3153, July 2010.
  12. Y. Zhang, Q. Zhang, B. Cao, and P. Chen, “Model free dynamic sensing order selection for imperfect sensing multichannel cognitive radio networks: A q-learning approach,” in IEEE ICC, 2014.
  13. S. Levine, C. Finn, T. Darrell, and P. Abbeel, “End-to-end training of deep visuomotor policies,” arXiv preprint arXiv:1504.00702, 2015.
  14. J.-A. M. Assael, N. Wahlström, T. B. Schön, and M. P. Deisenroth, “Data-efficient learning of feedback policies from image pixels using deep dynamical models,” arXiv preprint arXiv:1510.02173, 2015.
  15. J. Ba, V. Mnih, and K. Kavukcuoglu, “Multiple object recognition with visual attention,” arXiv preprint arXiv:1412.7755, 2014.
  16. “SolvePOMDP,” http://erwinwalraven.nl/solvepomdp/.
  17. H. Liu, K. Liu, and Q. Zhao, “Logarithmic weak regret of non-Bayesian restless multi-armed bandit,” in IEEE ICASSP, 2011.
  18. C. Tekin and M. Liu, “Online learning in opportunistic spectrum access: A restless bandit approach,” in IEEE INFOCOM, 2011.
  19. W. Dai, Y. Gai, and B. Krishnamachari, “Efficient online learning for opportunistic spectrum access,” in IEEE INFOCOM, 2012.
  20. W. Dai, Y. Gai, and B. Krishnamachari, “Online learning for multi-channel opportunistic access over unknown Markovian channels,” in IEEE SECON, 2014.

Published

2023-10-30

Section

Research Articles

How to Cite

[1]
B. Vamsi Krishna and S. Varadarajan, “Implementation of Deep Reinforcement Learning for Dynamic Multichannel Access in Wireless Networks,” International Journal of Scientific Research in Science and Technology (IJSRST), Online ISSN: 2395-602X, Print ISSN: 2395-6011, Volume 10, Issue 5, pp. 267–278, September–October 2023.