Research and Apply Deep Reinforcement Learning Technology to Control Mobile Robot

International Journal of Electrical and Electronics Engineering
© 2021 by SSRG - IJEEE Journal
Volume 8 Issue 4
Year of Publication: 2021
Authors : Roan Van Hoa, Nguyen Duc Dien, Lai Khac Lai
How to Cite?

Roan Van Hoa, Nguyen Duc Dien, Lai Khac Lai, "Research and Apply Deep Reinforcement Learning Technology to Control Mobile Robot," SSRG International Journal of Electrical and Electronics Engineering, vol. 8, no. 4, pp. 30-35, 2021. Crossref, https://doi.org/10.14445/23488379/IJEEE-V8I4P106

Abstract:

Today, mobile robots are being rapidly developed and widely used in everyday life, for example as cargo robots, medical robots, and wheelchairs for the disabled. However, enabling a robot to move intelligently in dynamic environments without a prior map remains an open research problem that attracts considerable attention. This paper presents the application of deep reinforcement learning (DRL) to navigate a mobile robot in an unknown environment. The system is built on the Robot Operating System (ROS). Simulation results in the Gazebo software verify the effectiveness of the proposed method.
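The abstract names the building blocks (a DRL policy, ROS, and Gazebo simulation) but gives no implementation detail. The following is a minimal illustrative sketch of how such a navigation node is commonly wired together, assuming a TurtleBot3-style robot with a /scan laser topic and /cmd_vel velocity commands, a small fully connected Q-network, and three discrete steering actions; the topic names, state encoding, and network architecture are assumptions for illustration, not the authors' implementation.

```python
#!/usr/bin/env python
# Illustrative sketch only: a DQN-style navigation node for a differential-drive
# robot in ROS/Gazebo. Topic names (/scan, /cmd_vel), the 24-beam state encoding,
# and the network size are assumed here, not taken from the paper.
import rospy
import torch
import torch.nn as nn
import numpy as np
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

class QNetwork(nn.Module):
    """Small fully connected network mapping a laser-scan state to action values."""
    def __init__(self, state_dim=24, n_actions=3):
        super(QNetwork, self).__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

class DRLNavigationNode(object):
    # Angular velocities for three discrete actions: turn left, go straight, turn right.
    ACTIONS = [0.8, 0.0, -0.8]

    def __init__(self):
        rospy.init_node('drl_navigation_sketch')
        self.q_net = QNetwork()          # in practice, load trained weights here
        self.state = None
        self.cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
        rospy.Subscriber('/scan', LaserScan, self.scan_callback)

    def scan_callback(self, msg):
        # Downsample the scan to a fixed-size state vector; clip 'inf' readings.
        ranges = np.nan_to_num(np.array(msg.ranges, dtype=np.float32), posinf=3.5)
        idx = np.linspace(0, len(ranges) - 1, 24).astype(int)
        self.state = torch.tensor(ranges[idx]).unsqueeze(0)

    def step(self):
        if self.state is None:
            return
        with torch.no_grad():
            action = int(self.q_net(self.state).argmax(dim=1).item())
        cmd = Twist()
        cmd.linear.x = 0.15              # constant forward speed (assumed)
        cmd.angular.z = self.ACTIONS[action]
        self.cmd_pub.publish(cmd)

    def run(self):
        rate = rospy.Rate(10)            # 10 Hz control loop
        while not rospy.is_shutdown():
            self.step()
            rate.sleep()

if __name__ == '__main__':
    DRLNavigationNode().run()
```

In a full training pipeline the same loop would also compute a reward (for example, penalizing collisions and rewarding progress toward the goal) and update the network from replayed transitions, as in the DQN family of methods; the sketch above only shows the inference side of such a controller.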

Keywords:

Robot operating system (ROS), Deep reinforcement learning, Navigation, Simultaneous localization and mapping (SLAM).
