Training an AI agent to play a Snake Game via Deep Reinforcement Learning

Authors

  • Reetej Chindarkar, Department of Computer Engineering, Dr. D.Y.Patil School of Engineering, Lohegaon, Maharashtra, India
  • Kartik Kaushik, Department of Computer Engineering, Dr. D.Y.Patil School of Engineering, Lohegaon, Maharashtra, India
  • Rutuja Vetal, Department of Computer Engineering, Dr. D.Y.Patil School of Engineering, Lohegaon, Maharashtra, India
  • Ronak Thusoo, Department of Computer Engineering, Dr. D.Y.Patil School of Engineering, Lohegaon, Maharashtra, India
  • Prof. Pallavi Shimpi, Assistant Professor, Department of Computer Engineering, Dr. D.Y.Patil School of Engineering, Lohegaon, Maharashtra, India

Keywords:

Deep Reinforcement Learning, Snake Game, Autonomous Agent, Deep Learning, Experience Replay

Abstract

Deep Reinforcement Learning (DRL) has become a commonly adopted methodology for enabling agents to learn complex control policies in various video games, ever since DeepMind used the technique to play Atari games. In this paper, we develop a Deep Reinforcement Learning model based on the Deep Q-Learning algorithm that enables an autonomous agent to play the classic Snake game. Specifically, we employ a Deep Neural Network (DNN) trained with a variant of Q-Learning. No rules about the game are specified, and the agent is initially given no information about what it needs to do. The goal for the system is to figure out the rules on its own and develop a strategy that maximizes the score, i.e., the cumulative reward.
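This page does not reproduce the paper's implementation. As a rough illustration of the approach the abstract describes (a Q-network trained with experience replay and epsilon-greedy exploration), the sketch below shows one possible Deep Q-Learning training step for a Snake agent in PyTorch. The state encoding, network sizes, action set, reward handling, hyperparameters, and class names here are assumptions made for illustration only, not the authors' code.

```python
# Hypothetical sketch of Deep Q-Learning with experience replay for Snake (PyTorch).
# All design choices below (11-feature state, 3 relative actions, hyperparameters)
# are illustrative assumptions, not taken from the paper.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

class QNetwork(nn.Module):
    """Small MLP mapping a state vector to Q-values for [straight, turn left, turn right]."""
    def __init__(self, state_dim=11, hidden=256, n_actions=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, x):
        return self.net(x)

class DQNAgent:
    def __init__(self, gamma=0.9, lr=1e-3, buffer_size=100_000, batch_size=64):
        self.q_net = QNetwork()
        self.optimizer = optim.Adam(self.q_net.parameters(), lr=lr)
        self.memory = deque(maxlen=buffer_size)   # experience replay buffer
        self.gamma = gamma
        self.batch_size = batch_size
        self.epsilon = 1.0                        # exploration rate, decayed over training

    def act(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.randrange(3)
        with torch.no_grad():
            q_values = self.q_net(torch.tensor(state, dtype=torch.float32))
        return int(torch.argmax(q_values).item())

    def remember(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def train_step(self):
        # Sample a random mini-batch of past transitions and regress the Q-network
        # toward the Bellman target r + gamma * max_a' Q(s', a') for non-terminal s'.
        if len(self.memory) < self.batch_size:
            return
        batch = random.sample(self.memory, self.batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        states = torch.tensor(states, dtype=torch.float32)
        actions = torch.tensor(actions, dtype=torch.int64).unsqueeze(1)
        rewards = torch.tensor(rewards, dtype=torch.float32)
        next_states = torch.tensor(next_states, dtype=torch.float32)
        dones = torch.tensor(dones, dtype=torch.float32)

        q_pred = self.q_net(states).gather(1, actions).squeeze(1)
        with torch.no_grad():
            q_next = self.q_net(next_states).max(dim=1).values
        q_target = rewards + self.gamma * q_next * (1.0 - dones)

        loss = nn.functional.mse_loss(q_pred, q_target)
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()
        self.epsilon = max(0.01, self.epsilon * 0.995)  # decay exploration over time
```

In a full training loop, the agent would repeatedly call act, apply the chosen move in the game, store the observed transition with remember (for example, a positive reward for eating food and a negative one for colliding), and then call train_step. The abstract's point that the agent starts with no knowledge of the rules corresponds to the purely random exploration at the initial epsilon of 1.0.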

References

  1. G. Tesauro, “Temporal difference learning and TD-Gammon,” Communications of the ACM, vol. 38, no. 3, pp. 58–68, 1995.
  2. V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, “Playing Atari with deep reinforcement learning,” ArXiv e-prints, 2013.
  3. E. A. O. Diallo, A. Sugiyama, and T. Sugawara, “Learning to coordinate with deep reinforcement learning in doubles pong game,” in Proceedings of the IEEE International Conference on Machine Learning and Applications (ICMLA), 2017, pp. 14–19.
  4. S. Yoon and K. J. Kim, “Deep Q networks for visual fighting game AI,” in Proceedings of the IEEE Conference on Computational Intelligence and Games (CIG), 2017, pp. 306–308.
  5. M. Andrychowicz, D. Crow, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin, P. Abbeel, and W. Zaremba, “Hindsight experience replay,” in Proceedings of the Annual Conference on Neural Information Processing Systems, 2017, pp. 5055–5065.
  6. L.-J. Lin, “Reinforcement learning for robots using neural networks,” Ph.D. dissertation, Pittsburgh, PA, USA, 1992, UMI Order No. GAX93-22750.
  7. T. Schaul, J. Quan, I. Antonoglou, and D. Silver, “Prioritized experience replay,” Computing Research Repository, vol. abs/1511.05952, 2015.
  8. M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling, “The arcade learning environment: An evaluation platform for general agents,” Journal of Artificial Intelligence Research, vol. 47, pp. 253–279, 2013.
  9. C. J. C. H. Watkins and P. Dayan, “Q-learning,” Machine Learning, vol. 8, no. 3-4, pp. 279–292, 1992.
  10. M. Roderick, J. MacGlashan, and S. Tellex, “Implementing the deep Q-network,” ArXiv e-prints, 2017.
  11. A. J. Almalki and P. Wocjan, “Exploration of reinforcement learning to play Snake game,” in Proceedings of the International Conference on Computational Science and Computational Intelligence (CSCI), 2019, pp. 377–381.

Published

2020-12-18

Section

Research Articles

How to Cite

[1]
Reetej Chindarkar, Kartik Kaushik, Rutuja Vetal, Ronak Thusoo, Prof. Pallavi Shimpi, "Training an AI agent to play a Snake Game via Deep Reinforcement Learning," International Journal of Scientific Research in Science and Technology (IJSRST), Online ISSN: 2395-602X, Print ISSN: 2395-6011, Volume 5, Issue 8, pp. 59-61, November-December 2020.