Enhancing Spectral Efficiency in Vehicular Optical Camera Communications Using Multi-Agent Deep Reinforcement Learning

International Journal of Electronics and Communication Engineering
© 2025 by SSRG - IJECE Journal
Volume 12, Issue 5
Year of Publication: 2025
Authors: A. Kondababu, S. Vinaya Kumar, K.H.K. Prasad, Kasetty Lakshminarasimha, U.S.B.K. Mahalaxmi
How to Cite:
A. Kondababu, S. Vinaya Kumar, K.H.K. Prasad, Kasetty Lakshminarasimha, U.S.B.K. Mahalaxmi, "Enhancing Spectral Efficiency in Vehicular Optical Camera Communications Using Multi-Agent Deep Reinforcement Learning," SSRG International Journal of Electronics and Communication Engineering, vol. 12, no. 5, pp. 80-93, 2025. Crossref, https://doi.org/10.14445/23488549/IJECE-V12I5P107
Abstract:
Vehicular communication systems are central to modern transportation: they allow vehicles to share data and improve road safety. Optical Camera Communication (OCC) is an emerging technique that lets vehicles communicate over visible light, offering a larger spectrum, lower cost, and better security than traditional Radio Frequency (RF) communication. This paper addresses the optimization of spectral efficiency in vehicular OCC: the goal is to select the modulation order and vehicle speed that maximize spectral efficiency while guaranteeing a low Bit Error Rate (BER) and ultra-low latency. Because the resulting formulation is a mixed-integer programming problem with nonlinear constraints, it is difficult to solve with traditional optimization methods. Instead, a multi-agent Deep Reinforcement Learning (DRL) approach is proposed in which each vehicle acts as an autonomous agent that learns, via Q-learning, how to adjust its speed and modulation order. Since the problem is large and complex, DRL is used to improve learning efficiency. The proposed system meets the reliability and latency constraints, and the results show that it is more effective than existing approaches.
Keywords:
Deep Reinforcement Learning, Optical Camera Communication, Open car control, Markov decision process, Multi-agent reinforcement learning.
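The learning scheme described in the abstract can be illustrated with a minimal, purely hypothetical sketch. This is not the paper's implementation: the SNR/BER/latency models, all numeric thresholds, and the stateless (bandit-style) Q-learning simplification below are assumptions made for demonstration only. Each vehicle is an independent epsilon-greedy agent that learns a (modulation order, speed) pair maximizing spectral efficiency, with constraint violations penalized.

```python
import math
import random

# Illustrative sketch only: the SNR/BER/latency models and all thresholds
# below are invented for demonstration; they are not the paper's values.

MOD_ORDERS = [2, 4, 8, 16]   # candidate M-ary modulation orders
SPEEDS = [10, 20, 30]        # candidate vehicle speeds (m/s)
ACTIONS = [(m, v) for m in MOD_ORDERS for v in SPEEDS]

BER_MAX = 1e-3               # assumed reliability constraint
LATENCY_MAX = 1.0            # assumed latency budget (ms)

def snr_db(speed):
    """Toy channel model: a faster vehicle sees a noisier OCC link."""
    return 30.0 - 0.5 * speed

def ber(mod_order, speed):
    """Crude M-ary BER approximation, purely illustrative."""
    gamma = 10 ** (snr_db(speed) / 10)
    return 0.2 * math.exp(-1.5 * gamma / (mod_order - 1))

def latency(mod_order):
    """More bits per symbol -> fewer symbols per packet -> lower latency."""
    return 2.0 / math.log2(mod_order)

def reward(mod_order, speed):
    """Spectral efficiency (bits/s/Hz) if feasible, else a penalty."""
    if ber(mod_order, speed) > BER_MAX or latency(mod_order) > LATENCY_MAX:
        return -1.0
    return math.log2(mod_order)

class VehicleAgent:
    """One independent epsilon-greedy Q-learner per vehicle."""
    def __init__(self, lr=0.1, eps=0.2):
        self.q = [0.0] * len(ACTIONS)
        self.lr, self.eps = lr, eps

    def act(self):
        if random.random() < self.eps:
            return random.randrange(len(ACTIONS))
        return max(range(len(ACTIONS)), key=lambda a: self.q[a])

    def update(self, a, r):
        self.q[a] += self.lr * (r - self.q[a])

random.seed(0)
agents = [VehicleAgent() for _ in range(3)]   # three vehicles learning independently
for _ in range(2000):
    for ag in agents:
        a = ag.act()
        ag.update(a, reward(*ACTIONS[a]))

best_actions = [ACTIONS[max(range(len(ACTIONS)), key=lambda a: ag.q[a])]
                for ag in agents]
for m, v in best_actions:
    print(f"M={m}, speed={v} m/s, SE={math.log2(m):.0f} bits/s/Hz")
```

In this toy setting each agent settles on a feasible, high-order modulation. The paper scales this idea up by replacing the Q-table with a deep network, which is what makes the large, complex state-action space tractable.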