Mobile Robot Navigation Using Deep Reinforcement Learning in Unknown Environments

International Journal of Electrical and Electronics Engineering
© 2020 by SSRG - IJEEE Journal
Volume 7 Issue 8
Year of Publication : 2020
Authors : Roan Van Hoa, L. K. Lai, Le Thi Hoan
How to Cite?

Roan Van Hoa, L. K. Lai, Le Thi Hoan, "Mobile Robot Navigation Using Deep Reinforcement Learning in Unknown Environments," SSRG International Journal of Electrical and Electronics Engineering, vol. 7,  no. 8, pp. 15-20, 2020. Crossref, https://doi.org/10.14445/23488379/IJEEE-V7I8P104

Abstract:

Mobile robots can cover a wide range of real-world missions, such as environment surveillance, delivery, and search and rescue. Such missions require different levels of self-navigation in order to react to dynamic changes in the environment. However, most navigation methods rely on a static obstacle map and lack the ability to learn autonomously. In this paper, we propose an end-to-end approach that uses deep reinforcement learning for the navigation of mobile robots in an unknown environment. Building on dueling network architectures for deep reinforcement learning (Dueling DQN) and deep reinforcement learning with double Q-learning (Double DQN), a dueling-architecture-based double deep Q-network (D3QN) is adapted in this paper. Simulation results in the Gazebo framework show the feasibility of the proposed method: the robot completes navigation tasks safely in an unpredictable dynamic environment and becomes a truly intelligent system with strong self-learning and adaptive abilities.
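
As a concrete illustration of the two ingredients the abstract combines into D3QN, the sketch below shows a dueling Q-network (separate state-value and advantage streams) together with a double-DQN target (the online network selects the greedy next action, the target network evaluates it). This is a minimal sketch only, not the authors' implementation; the PyTorch framework, layer sizes, and state/action dimensions are assumptions made purely for illustration.

```python
# Minimal D3QN-style sketch (illustrative only; not the paper's released code).
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling architecture: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU())  # assumed sizes
        self.value = nn.Linear(128, 1)               # state-value stream V(s)
        self.advantage = nn.Linear(128, n_actions)   # advantage stream A(s, a)

    def forward(self, s):
        h = self.feature(s)
        v, a = self.value(h), self.advantage(h)
        # Dueling aggregation with the mean-advantage baseline
        return v + a - a.mean(dim=1, keepdim=True)

def double_dqn_target(online, target, r, s_next, done, gamma=0.99):
    """Double DQN: online net picks the greedy next action, target net evaluates it."""
    with torch.no_grad():
        a_star = online(s_next).argmax(dim=1, keepdim=True)    # action selection
        q_next = target(s_next).gather(1, a_star).squeeze(1)   # action evaluation
        return r + gamma * (1.0 - done) * q_next               # bootstrapped target
```

In a training loop, the online network would be regressed toward this target (for example with a Huber loss and the Adam optimizer of reference [21]), while the target network is periodically synchronised with the online weights.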

Keywords:

Autonomous navigation, deep reinforcement learning, artificial intelligence, mobile robots.

References:

[1] Li Y, "Deep reinforcement learning", In: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018.
[2] Sun ZJ, Xue L, Xu YM, et al, “Overview of deep learning”, Appl Res Comput 2012, 12, pp. 2806–2810.
[3] Sutton RS and Barto AG, “Reinforcement learning: an introduction”, IEEE Transactions on Neural Networks, 2005.
[4] Hosu I-A and Rebedea T, “Playing Atari games with deep reinforcement learning and human checkpoint replay”, 2016. ArXiv, abs/1607.05077.
[5] Lillicrap TP, Hunt JJ, Pritzel A, et al, “Continuous control with deep reinforcement learning”, Comput Sci 2015, 8(6): A187.
[6] Caicedo JC and Lazebnik S, “Active object localization with deep reinforcement learning”, In: Proceedings of the IEEE international conference on computer vision, Santiago, Chile, 2015, pp. 2488–2496.
[7] Meganathan RR, Kasi AA, and Jagannath S, “Computer vision based novel steering angle calculation for autonomous vehicles”, In: IEEE international conference on robotic computing, Laguna Hills, CA, USA, 31 January–2 February, 2018.
[8] Gupta S, Tolani V, Davidson J, et al, “Cognitive mapping and planning for visual navigation”, In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, 2017, pp. 7272–7281.
[9] Zhu Y, Mottaghi R, Kolve E, et al, "Target-driven visual navigation in indoor scenes using deep reinforcement learning", In: 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 2017, pp. 3357–3364.
[10] S. Amarjyoti, “Deep reinforcement learning for robotic manipulation-the state of the art”, Bull. Transilv. Univ. Braşov, vol. 10, no. 2, 2017.
[11] A. V. Bernstein, E. Burnaev, and O. Kachan, “Reinforcement learning for computer vision and robot navigation”, in Proc. International Conference on Machine Learning and Data Mining in Pattern Recognition, 2018, pp. 258-272: Springer.
[12] V. Matt and N. Aran, "Deep reinforcement learning approach to autonomous driving", arXiv preprint, 2017.
[13] X. Da and J. Grizzle, “Combining trajectory optimization, supervised machine learning, and model structure for mitigating the curse of dimensionality in the control of bipedal robots”, Int. J. Rob. Res., vol. 38, no. 9, pp. 1063–1097, 2019.
[14] I. Zamora, N. G. Lopez, V. M. Vilches, and A. H. Cordero, "Extending the OpenAI Gym for robotics: A toolkit for reinforcement learning using ROS and Gazebo", arXiv preprint arXiv:1608.05742, 2016.
[15] H. Kretzschmar, M. Spies, C. Sprunk, and W. Burgard, “Socially compliant mobile robot navigation via inverse reinforcement learning”, The International Journal of Robotics Research, vol. 35, no. 11, pp. 1289-1307, 2016.
[16] L. Tai and M. Liu, "A robot exploration strategy based on Q-learning network", in Proc. 2016 IEEE International Conference on Real-time Computing and Robotics (RCAR), 2016, pp. 57-62.
[17] L. Tai, G. Paolo, and M. Liu, “Virtual-to-real deep reinforcement learning: Continuous control of mobile robots for mapless navigation”, in Proc. 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017, pp. 31-36.
[18] Mnih V, Kavukcuoglu K, Silver D, Rusu AA, Veness J, Bellemare MG, Graves A, Riedmiller M, Fidjeland AK, Ostrovski G, et al, "Human-level control through deep reinforcement learning", Nature, vol. 518, pp. 529-533, 2015.
[19] Van Hasselt H, Guez A, and Silver D, "Deep reinforcement learning with double Q-learning", In: Proceedings of the AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 2016, vol. 2, p. 5.
[20] Wang Z, Schaul T, Hessel M, Van Hasselt H, Lanctot M, and De Freitas N, "Dueling network architectures for deep reinforcement learning", arXiv 2015, arXiv:1511.06581. Available online: https://arxiv.org/pdf/1511.06581.pdf (accessed on 12 September 2018).
[21] Diederik P. Kingma and Jimmy Ba, "Adam: A method for stochastic optimization", CoRR, abs/1412.6980, 2015.
[22] V. V. Narendra Kumar and T. Satish Kumar, "Smarter Artificial Intelligence with Deep Learning", SSRG International Journal of Computer Science and Engineering, vol. 5, no. 6, 2018.