Deep Deterministic Policy Gradient Algorithm for Dynamic Task Scheduling in Edge-Cloud Environment Using Reinforcement Learning

International Journal of Electronics and Communication Engineering
© 2025 by SSRG - IJECE Journal
Volume 12, Issue 6
Year of Publication: 2025
Authors: D. Mamatha Rani, Supreethi K P, Bipin Bihari Jayasingh
How to Cite?
D. Mamatha Rani, Supreethi K P, Bipin Bihari Jayasingh, "Deep Deterministic Policy Gradient Algorithm for Dynamic Task Scheduling in Edge-Cloud Environment Using Reinforcement Learning," SSRG International Journal of Electronics and Communication Engineering, vol. 12, no. 6, pp. 315-325, 2025. Crossref, https://doi.org/10.14445/23488549/IJECE-V12I6P125
Abstract:
In the contemporary era, cloud computing supports high-performance computing applications by providing scalable and affordable computing resources. However, the latency of cloud resources is relatively high compared with edge computing. In this context, exploiting the edge cloud for task scheduling has become indispensable for reaping latency benefits. However, assigning every task to the edge cloud is impossible, and resource-intensive tasks should be scheduled to the cloud. An edge-cloud environment is therefore very complex, and scheduling in it is an NP-hard problem. Many existing methods based on reinforcement learning fall short when dealing with a large action space coupled with a large state space. This paper proposes an algorithm known as the Deep Deterministic Policy Gradient Algorithm for Dynamic Task Scheduling (DDPGA-TS). Our algorithm has a novel pruning strategy that continuously monitors the action space and reduces it to improve overall task-scheduling performance. Our method is evaluated in environments of three scales, using several performance indicators. In the experimental findings, the proposed algorithm outperforms existing methods such as DDPG-NN and DDPG-CNN.
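The pruning idea described in the abstract, monitoring the action space and discarding low-value actions so the DDPG agent searches a smaller candidate set, can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's actual DDPGA-TS procedure: the function name, the value-estimate dictionary, and the keep-fraction rule are all assumptions made for illustration.

```python
def prune_action_space(q_estimates, keep_fraction=0.5, min_actions=2):
    """Keep only the top-valued scheduling actions (hypothetical pruning rule).

    q_estimates: dict mapping a candidate placement (edge/cloud node) to a
    running estimate of its value, e.g. from the critic network.
    Returns the reduced action set the agent may sample from.
    """
    k = max(min_actions, int(len(q_estimates) * keep_fraction))
    ranked = sorted(q_estimates, key=q_estimates.get, reverse=True)
    return set(ranked[:k])

# Toy example: six candidate placements with illustrative value estimates.
q = {"edge-1": 0.9, "edge-2": 0.7, "edge-3": 0.6,
     "cloud-1": 0.4, "cloud-2": 0.2, "cloud-3": 0.1}
active = prune_action_space(q, keep_fraction=0.5)
# The scheduler would then restrict action selection to `active`
# and re-run the pruning step as the estimates are updated.
```

In the actual algorithm this monitoring would happen continuously during training, so the retained set can change as value estimates evolve.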
Keywords:
Cloud computing, Edge computing, Dynamic task scheduling, Reinforcement learning, Deep learning.
References:
[1] Jaber Almutairi, and Mohammad Aldossar, “A Novel Approach for IoT Tasks Offloading in Edge-Cloud Environments,” Journal of Cloud Computing, vol. 10, pp. 1-19, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[2] Shaoshuai Ding et al., “Partitioning Stateful Data Stream Applications in Dynamic Edge Cloud Environments,” IEEE Transactions on Services Computing, vol. 15, no. 4, pp. 2368-2381, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[3] Lubomír Bulej et al., “Managing Latency in Edge–Cloud Environment,” Journal of Systems and Software, vol. 172, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[4] Xu Zhao et al., “Low Load DIDS Task Scheduling Based on Q-Learning in Edge Computing Environment,” Journal of Network and Computer Applications, vol. 188, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[5] Yu Zhang et al., “Deadline-Aware Dynamic Task Scheduling in Edge-Cloud Collaborative Computing,” Electronics, vol. 11, no. 15, pp. 1-24, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[6] Abdullah Lakhan et al., “Delay Optimal Schemes for Internet of Things Applications in Heterogeneous Edge Cloud Computing Networks,” Sensors, vol. 22, no. 16, pp. 1-30, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[7] Qihe Huang, Xiaolong Xu, and Jinhui Chen, “Learning-Aided Fine Grained Offloading for Real-Time Applications in Edge-Cloud Computing,” Wireless Networks, vol. 30, pp. 3805-3820, 2024.
[CrossRef] [Google Scholar] [Publisher Link]
[8] Ali Asghari, Mohammad Karim Sohrabi, and Farzin Yaghmaee, “Task Scheduling, Resource Provisioning, and Load Balancing on Scientific Workflows Using Parallel SARSA Reinforcement Learning Agents and Genetic Algorithm,” The Journal of Supercomputing, vol. 77, pp. 2800-2828, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[9] Hongman Wang et al., “Service Migration in Mobile Edge Computing: A Deep Reinforcement Learning Approach,” International Journal of Communication Systems, vol. 36, no. 1, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[10] Bassem Sellami et al., “Deep Reinforcement Learning for Energy-Efficient Task Scheduling in SDN-Based IoT Network,” 2020 IEEE 19th International Symposium on Network Computing and Applications, Cambridge, MA, USA, pp. 1-4, 2020.
[CrossRef] [Google Scholar] [Publisher Link]
[11] Guanjin Qu et al., “DMRO: A Deep Meta Reinforcement Learning-Based Task Offloading Framework for Edge-Cloud Computing,” IEEE Transactions on Network and Service Management, vol. 18, no. 3, pp. 3448-3459, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[12] Yixue Hao et al., “Deep Reinforcement Learning for Edge Service Placement in Softwarized Industrial Cyber-Physical System,” IEEE Transactions on Industrial Informatics, vol. 17, no. 8, pp. 5552-5561, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[13] Shreshth Tuli et al., “Dynamic Scheduling for Stochastic Edge-Cloud Computing Environments Using A3C Learning and Residual Recurrent Neural Networks,” IEEE Transactions on Mobile Computing, vol. 21, no. 3, pp. 940-954, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[14] Qi Zhang et al., “Task Offloading and Resource Scheduling in Hybrid Edge-Cloud Networks,” IEEE Access, vol. 9, pp. 85350-85366, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[15] Yicen Liu et al., “SFC Embedding Meets Machine Learning: Deep Reinforcement Learning Approaches,” IEEE Communications Letters, vol. 25, no. 6, pp. 1926-1930, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[16] Miaojiang Chen et al., “Deep Reinforcement Learning for Computation Offloading in Mobile Edge Computing Environment,” Computer Communications, vol. 175, pp. 1-12, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[17] M.S. Mekala et al., “Resource Offload Consolidation Based on Deep-Reinforcement Learning Approach in Cyber-Physical Systems,” IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 6, no. 2, pp. 245-254, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[18] Bing Lin et al., “Computation Offloading Strategy Based on Deep Reinforcement Learning for Connected and Autonomous Vehicle in Vehicular Edge Computing,” Journal of Cloud Computing, vol. 10, pp. 1-17, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[19] Qiang Liu, Tao Han, and Ephraim Moges, “EdgeSlice: Slicing Wireless Edge Computing Network with Decentralized Deep Reinforcement Learning,” 2020 IEEE 40th International Conference on Distributed Computing Systems, Singapore, pp. 234-244, 2020.
[CrossRef] [Google Scholar] [Publisher Link]
[20] Yiwei Zhang et al., “Computing Resource Allocation Scheme of IOV Using Deep Reinforcement Learning in Edge Computing Environment,” EURASIP Journal on Advances in Signal Processing, vol. 2021, pp. 1-19, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[21] Xiaokang Zhou et al., “Edge-Enabled Two-Stage Scheduling Based on Deep Reinforcement Learning for Internet of Everything,” IEEE Internet of Things Journal, vol. 10, no. 4, pp. 3295-3304, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[22] Bassem Sellami et al., “Energy-Aware Task Scheduling and Offloading Using Deep Reinforcement Learning in SDN-Enabled IoT Network,” Computer Networks, vol. 210, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[23] Bassem Sellami, Akram Hakiri, and Sadok Ben Yahia, “Deep Reinforcement Learning for Energy-Aware Task Offloading in Join SDN-Blockchain 5G Massive IoT Edge Network,” Future Generation Computer Systems, vol. 137, pp. 363-379, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[24] Jiwei Huang et al., “Joint Computation Offloading and Resource Allocation for Edge-Cloud Collaboration in Internet of Vehicles via Deep Reinforcement Learning,” IEEE Systems Journal, vol. 17, no. 2, pp. 2500-2511, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[25] Lixiang Zhang et al., “Distributed Real-Time Scheduling in Cloud Manufacturing by Deep Reinforcement Learning,” IEEE Transactions on Industrial Informatics, vol. 18, no. 12, pp. 8999-9007, 2022.
[CrossRef] [Google Scholar] [Publisher Link]