Building Intelligent Navigation System for Mobile Robots Based on the SARSA Algorithm

International Journal of Electrical and Electronics Engineering
© 2021 by SSRG - IJEEE Journal
Volume 8 Issue 4
Year of Publication: 2021
Authors : Nguyen Thi Thu Huong
How to Cite?

Nguyen Thi Thu Huong, "Building Intelligent Navigation System for Mobile Robots Based on the SARSA Algorithm," SSRG International Journal of Electrical and Electronics Engineering, vol. 8, no. 4, pp. 19-24, 2021. Crossref, https://doi.org/10.14445/23488379/IJEEE-V8I4P104

Abstract:

This article presents the construction of an intelligent automatic navigation system for mobile robots operating in a flat environment with both known and unknown obstacles. The system is built on the Robot Operating System (ROS). Using continuously updated information about the map, the operating environment, the robot's position, and obstacles, obtained through Simultaneous Localization and Mapping (SLAM), the navigation system computes global and local motion trajectories for the robot by applying the SARSA algorithm. Simulation studies in the Gazebo environment and experimental runs on a real TurtleBot3 mobile robot demonstrate the practical effectiveness of automatic navigation for this mobile robot.
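For readers unfamiliar with the method, the following minimal Python sketch illustrates the on-policy SARSA update that underlies the trajectory learning described above. The environment interface (reset/step), the action set, and the hyperparameters are illustrative assumptions for a small discretized map, not the paper's actual implementation.

    # Minimal SARSA sketch (illustrative only; not the paper's implementation).
    # The environment, action set, and hyperparameters below are hypothetical.
    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1   # learning rate, discount, exploration
    ACTIONS = ["forward", "left", "right"]   # hypothetical robot action set

    Q = defaultdict(float)  # Q[(state, action)] -> estimated return

    def epsilon_greedy(state):
        """Pick a random action with probability EPSILON, else the greedy one."""
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    def sarsa_episode(env):
        """Run one SARSA episode; env is any object exposing
        reset() -> state and step(action) -> (next_state, reward, done)."""
        state = env.reset()
        action = epsilon_greedy(state)
        done = False
        while not done:
            next_state, reward, done = env.step(action)
            next_action = epsilon_greedy(next_state)
            # On-policy TD update: the target uses the action actually
            # selected by the current policy in the next state.
            target = reward + (0.0 if done else GAMMA * Q[(next_state, next_action)])
            Q[(state, action)] += ALPHA * (target - Q[(state, action)])
            state, action = next_state, next_action

Because the update target uses the action the policy actually takes next, SARSA learns the value of the behavior policy itself, which tends to produce more cautious trajectories near obstacles than off-policy Q-learning.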

Keywords:

Artificial intelligence, Mobile robot, Robotics, Reinforcement learning, SARSA algorithm.
