FedTaskRL: A Reinforcement Learning Based Framework for Efficient Task Scheduling in Federated Cloud Environments

International Journal of Electronics and Communication Engineering
© 2025 by SSRG - IJECE Journal
Volume 12, Issue 7
Year of Publication: 2025
Authors: M. Chandra Sekhar, P. Kumaraswamy, Nagendar Yamsani, Giri Babu K, Rajitha Kotoju
How to Cite?
M. Chandra Sekhar, P. Kumaraswamy, Nagendar Yamsani, Giri Babu K, Rajitha Kotoju, "FedTaskRL: A Reinforcement Learning Based Framework for Efficient Task Scheduling in Federated Cloud Environments," SSRG International Journal of Electronics and Communication Engineering, vol. 12, no. 7, pp. 74-89, 2025. Crossref, https://doi.org/10.14445/23488549/IJECE-V12I7P107
Abstract:
The emergence of federated cloud-edge computing has raised challenging dynamic task scheduling problems that require jointly considering energy efficiency, latency, SLA satisfaction, and resource migration. Conventional approaches such as rule-based and heuristic scheduling are too rigid to cope with the volatile and heterogeneous behavior of contemporary distributed systems. Although recent deep reinforcement learning (DRL) methods have achieved favorable performance, most existing methods suffer from difficult reward specification and a lack of support for multi-objective optimization, federated scalability, and privacy. To fill these gaps, this paper proposes FedTaskRL, a new federated DRL-based dynamic task (DT) scheduling framework for cloud-edge ecosystems. The proposed model employs a neural Q-learning algorithm with an augmented state representation and a multi-objective reward function. This design allows the model to adaptively learn customized scheduling policies that cut energy consumption, reduce response time, comply with SLAs, and lower migration cost over time, while federated learning preserves the data locality of participating clients. We conduct thorough experiments in a simulated federated setting and show that FedTaskRL outperforms state-of-the-art methods, including DRL-TS, A3C Scheduler, DRLIS, EdgeTimer, and MA-DRL. The proposed framework achieves an energy consumption of 28 kWh, an average response time of 145 ms, 97.5% SLA fulfilment, and a migration cost as low as $4.8. These findings confirm the effectiveness and efficiency of FedTaskRL in real-time cloud workload management. In summary, FedTaskRL provides a scalable, adaptive, and privacy-preserving solution for intelligent task scheduling, resulting in significantly improved performance and practicality in federated cloud-edge resource management.
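The abstract does not reproduce the paper's exact reward shaping or aggregation rule, so the sketch below is only a minimal illustration of the two ideas it describes: a linear scalarization of the four scheduling objectives (energy, response time, SLA compliance, migration cost) and FedAvg-style weighted averaging of per-client Q-network parameters. The function names, weights, and normalization assumptions here are hypothetical, not the authors' specification.

```python
import numpy as np

def multi_objective_reward(energy, response_time, sla_met, migration_cost,
                           weights=(0.3, 0.3, 0.3, 0.1)):
    """Scalarize the four scheduling objectives into one reward signal.

    Inputs are assumed pre-normalized to [0, 1] by the environment;
    the weights are illustrative placeholders, not tuned values.
    """
    w_e, w_r, w_s, w_m = weights
    return (-w_e * energy            # penalize energy consumption
            - w_r * response_time    # penalize latency
            + w_s * float(sla_met)   # reward SLA compliance
            - w_m * migration_cost)  # penalize task migration cost

def fed_avg(client_params, client_sizes):
    """FedAvg-style aggregation of per-client Q-network parameters.

    Raw workload traces never leave a client; only parameter vectors
    are shared and averaged, weighted by local dataset size.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    return sum(c * p for c, p in zip(coeffs, client_params))

# Usage: aggregate three clients' (flattened) Q-network weights, then
# score one scheduling decision with the scalarized reward.
clients = [np.random.rand(4) for _ in range(3)]
global_params = fed_avg(clients, client_sizes=[120, 80, 200])
reward = multi_objective_reward(energy=0.4, response_time=0.2,
                                sla_met=True, migration_cost=0.1)
```

Scalarizing the objectives lets a single Q-network trade them off through one reward signal, while averaging only parameters (never workload traces) is what preserves each client's data locality.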
Keywords:
Federated Cloud Computing, Task Scheduling, Deep Reinforcement Learning, SLA Compliance, Energy Efficiency.
References:
[1] Prashanth Choppara, and S. Sudheer Mangalampalli, “Adaptive Task Scheduling in Fog Computing Using Federated DQN and K-Means Clustering,” IEEE Access, vol. 13, pp. 75466-75492, 2025.
[2] Guruh Fajar Shidik et al., “Novel Unsupervised Cluster Reinforcement Q-Learning in Minimizing Energy Consumption of Federated Edge Cloud,” IEEE Access, vol. 13, pp. 92577-92595, 2025.
[3] Jinming Wang et al., “Deep Reinforcement Learning Task Scheduling Method for Real-Time Performance Awareness,” IEEE Access, vol. 13, pp. 31385-31400, 2025.
[4] S. Nambi, and P. Thanapal, “EMO-TS: An Enhanced Multi-Objective Optimization Algorithm for Energy-Efficient Task Scheduling in Cloud Data Centers,” IEEE Access, vol. 13, pp. 8187-8200, 2025.
[5] Xiaojing Chen et al., “Toward Dynamic Resource Allocation and Client Scheduling in Hierarchical Federated Learning: A Two-Phase Deep Reinforcement Learning Approach,” IEEE Transactions on Communications, vol. 72, no. 12, pp. 7798-7813, 2024.
[6] Mazin Abed Mohammed et al., “Federated-Reinforcement Learning-Assisted IoT Consumers System for Kidney Disease Images,” IEEE Transactions on Consumer Electronics, vol. 70, no. 4, pp. 7163-7173, 2024.
[7] Somayeh Kianpisheh, and Tarik Taleb, “Collaborative Federated Learning for 6G With a Deep Reinforcement Learning-Based Controlling Mechanism: A DDoS Attack Detection Scenario,” IEEE Transactions on Network and Service Management, vol. 21, no. 4, pp. 4731-4749, 2024.
[8] Rongqi Zhang et al., “Federated Deep Reinforcement Learning for Multimedia Task Offloading and Resource Allocation in MEC Networks,” IEICE Transactions on Communications, vol. E107-B, no. 6, pp. 446-457, 2024.
[9] Tai Manh Ho, Kim-Khoa Nguyen, and Mohamed Cheriet, “Federated Deep Reinforcement Learning for Task Scheduling in Heterogeneous Autonomous Robotic System,” IEEE Transactions on Automation Science and Engineering, vol. 21, no. 1, pp. 528-540, 2024.
[10] Hojjat Baghban et al., “Edge-AI: IoT Request Service Provisioning in Federated Edge Computing Using Actor-Critic Reinforcement Learning,” IEEE Transactions on Engineering Management, vol. 71, pp. 12519-12528, 2024.
[11] Zhu Tianqing et al., “Resource Allocation in IoT Edge Computing via Concurrent Federated Reinforcement Learning,” IEEE Internet of Things Journal, vol. 9, no. 2, pp. 1414-1426, 2022.
[12] Wei Huang et al., “FedDSR: Daily Schedule Recommendation in a Federated Deep Reinforcement Learning Framework,” IEEE Transactions on Knowledge and Data Engineering, vol. 35, no. 4, pp. 3912-3924, 2023.
[13] Peiying Zhang et al., “Deep Reinforcement Learning Assisted Federated Learning Algorithm for Data Management of IIoT,” IEEE Transactions on Industrial Informatics, vol. 17, no. 12, pp. 8475-8484, 2021.
[14] S.R. Shishira, and A. Kandasamy, “BeeM-NN: An Efficient Workload Optimization Using Bee Mutation Neural Network in Federated Cloud Environment,” Journal of Ambient Intelligence and Humanized Computing, vol. 12, pp. 3151-3167, 2021.
[15] Xu Zhao et al., “Low Load DIDS Task Scheduling Based on Q-Learning in Edge Computing Environment,” Journal of Network and Computer Applications, vol. 188, 2021.
[16] N. Yuvaraj, T. Karthikeyan, and K. Praghash, “An Improved Task Allocation Scheme in Serverless Computing Using Gray Wolf Optimization (GWO) Based Reinforcement Learning (RIL) Approach,” Wireless Personal Communications, vol. 117, pp. 2403-2421, 2021.
[17] Zheyi Chen et al., “Adaptive and Efficient Resource Allocation in Cloud Datacenters Using Actor-Critic Deep Reinforcement Learning,” IEEE Transactions on Parallel and Distributed Systems, vol. 33, no. 8, pp. 1911-1923, 2022.
[18] G. Matheen Fathima, and L. Shakkeera, “Efficient Task Scheduling and Computational Offloading Optimization with Federated Learning and Blockchain in Mobile Cloud Computing,” Results in Control and Optimization, vol. 18, pp. 1-16, 2025.
[19] Gaith Rjoub et al., “Enhanced Dynamic Deep Q-Network for Federated Learning Scheduling Policies on IoT Devices Using Explanation-Driven Trust,” Knowledge-Based Systems, vol. 318, pp. 1-17, 2025.
[20] Amjad Iqbal, Mau-Luen Tham, and Yoong Choon Chang, “Double Deep Q-Network-Based Energy-Efficient Resource Allocation in Cloud Radio Access Network,” IEEE Access, vol. 9, pp. 20440-20449, 2021.
[21] Tinghao Zhang, Kwok-Yan Lam, and Jun Zhao, “Deep Reinforcement Learning Based Scheduling Strategy for Federated Learning in Sensor-Cloud Systems,” Future Generation Computer Systems, vol. 144, pp. 219-229, 2023.
[22] Zhiyu Wang, Mohammad Goudarzi, and Rajkumar Buyya, “TF-DDRL: A Transformer-Enhanced Distributed DRL Technique for Scheduling IoT Applications in Edge and Cloud Computing Environments,” IEEE Transactions on Services Computing, vol. 18, no. 2, pp. 1039-1053, 2025.
[23] Mahdi Safaei Yaraziz, and Richard Hill, “A Review of Resource Allocation for Maximizing Performance of IoT Systems,” IEEE Access, vol. 13, pp. 98426-98451, 2025.
[24] Himani Chaudhary et al., “Advanced Queueing and Scheduling Techniques in Cloud Computing Using AI-Based Model Order Reduction,” Discover Computing, vol. 28, pp. 1-40, 2025.
[25] Ashwin Singh Slathia et al., “SHERA: SHAP-Enhanced Resource Allocation for VM Scheduling and Efficient Cloud Computing,” IEEE Access, vol. 13, pp. 92816-92832, 2025.
[26] Reena Panwar, and M. Supriya, “RLPRAF: Reinforcement Learning-Based Proactive Resource Allocation Framework for Resource Provisioning in Cloud Environment,” IEEE Access, vol. 12, pp. 95986-96007, 2024.
[27] Yang Yu, and Xiaoqing Tang, “Hybrid Centralized-Distributed Resource Allocation Based on Deep Reinforcement Learning for Cooperative D2D Communications,” IEEE Access, vol. 12, pp. 196609-196623, 2024.
[28] Hafiz Muhammad Fahad Noman et al., “FeDRL-D2D: Federated Deep Reinforcement Learning- Empowered Resource Allocation Scheme for Energy Efficiency Maximization in D2D-Assisted 6G Networks,” IEEE Access, vol. 12, pp. 109775-109792, 2024.
[29] James Adu Ansere et al., “Optimal Computation Resource Allocation in Energy-Efficient Edge IoT Systems With Deep Reinforcement Learning,” IEEE Transactions on Green Communications and Networking, vol. 7, no. 4, pp. 2130-2142, 2023.
[30] Yuao Wang et al., “Cooperative End-Edge-Cloud Computing and Resource Allocation for Digital Twin Enabled 6G Industrial IoT,” IEEE Journal of Selected Topics in Signal Processing, vol. 18, no. 1, pp. 124-137, 2024.
[31] Sowmya Madhavan et al., “Cybertwin Driven Resource Allocation Using Optimized Proximal Policy Based Federated Learning in 6G Enabled Edge Environment,” Digital Communications and Networks, pp. 1-15, 2025.
[32] Antonio Scarvaglieri, Sergio Palazzo, and Fabio Busacca, “A Lightweight, Fully-Distributed AI Framework for Energy-Efficient Resource Allocation in LoRa Networks,” Proceedings of the IEEE/ACM 16th International Conference on Utility and Cloud Computing, Taormina, Messina, Italy, pp. 1-6, 2023.
[33] Jiayin Zhang et al., “Elastic Task Offloading and Resource Allocation Over Hybrid Cloud: A Reinforcement Learning Approach,” IEEE Transactions on Network and Service Management, vol. 21, no. 2, pp. 1983-1997, 2024.
[34] Duc-Dung Tran et al., “Multi-Agent DRL Approach for Energy-Efficient Resource Allocation in URLLC-Enabled Grant-Free NOMA Systems,” IEEE Open Journal of the Communications Society, vol. 4, pp. 1470-1486, 2023.
[35] Syed Usman Jamil, M. Arif Khan, and Sabih Ur Rehman, “Resource Allocation and Task Off-Loading for 6G Enabled Smart Edge Environments,” IEEE Access, vol. 10, pp. 93542-93563, 2022.
[36] Bushra Jamil et al., “IRATS: A DRL-Based Intelligent Priority and Deadline-Aware Online Resource Allocation and Task Scheduling Algorithm in a Vehicular Fog Network,” Ad Hoc Networks, vol. 141, pp. 1-19, 2023.
[37] Palash Roy et al., “Distributed Task Allocation in Mobile Device Cloud Exploiting Federated Learning and Subjective Logic,” Journal of Systems Architecture, vol. 113, 2021.
[38] P. Nehra, and Niththa Kesswani, “Efficient Resource Allocation and Management by Using Load Balanced Multi-Dimensional Bin Packing Heuristic in Cloud Data Centers,” Journal of Supercomputing, vol. 79, pp. 1398-1425, 2023.
[39] Lujie Tang et al., “Joint Optimization of Network Selection and Task Offloading for Vehicular Edge Computing,” Journal of Cloud Computing, vol. 10, pp. 1-13, 2021.
[40] Shashank Swarup, Elhadi M. Shakshuki, and Ansar Yasar, “Task Scheduling in Cloud Using Deep Reinforcement Learning,” Procedia Computer Science, vol. 184, pp. 42-51, 2021.
[41] Abdelkarim Ben Sada et al., “Multi-Agent Deep Reinforcement Learning-Based Inference Task Scheduling and Offloading for Maximum Inference Accuracy under Time and Energy Constraints,” Electronics, vol. 13, no. 13, pp. 1-27, 2024.
[42] Yijun Hao et al., “EdgeTimer: Adaptive Multi-Timescale Scheduling in Mobile Edge Computing with Deep Reinforcement Learning,” IEEE INFOCOM 2024 - IEEE Conference on Computer Communications, Vancouver, BC, Canada, pp. 671-680, 2024.
[43] Zhiyu Wang et al., “Deep Reinforcement Learning-Based Scheduling for Optimizing System Load and Response Time in Edge and Fog Computing Environments,” Future Generation Computer Systems, vol. 152, pp. 55-69, 2024.
[44] Shreshth Tuli et al., “Dynamic Scheduling for Stochastic Edge-Cloud Environments Using A3C Learning and Residual Recurrent Neural Networks,” IEEE Transactions on Mobile Computing, vol. 21, no. 3, pp. 940-954, 2020.
[45] Shuran Sheng et al., “Deep Reinforcement Learning-Based Task Scheduling in IoT Edge Computing,” Sensors, vol. 21, no. 5, pp. 1-19, 2021.