Robust Vehicle Detection in Fog: Integrating Spatial Correlation, Fused Features, and Shape Semantics

International Journal of Electronics and Communication Engineering
© 2026 by SSRG - IJECE Journal
Volume 13 Issue 3
Year of Publication : 2026
Authors : Nookala Sairam, Lavadya Nirmala Devi, Renu Madhavi
How to Cite?

Nookala Sairam, Lavadya Nirmala Devi, Renu Madhavi, "Robust Vehicle Detection in Fog: Integrating Spatial Correlation, Fused Features, and Shape Semantics," SSRG International Journal of Electronics and Communication Engineering, vol. 13, no. 3, pp. 99-118, 2026. Crossref, https://doi.org/10.14445/23488549/IJECE-V13I3P108

Abstract:

Foggy weather poses a serious challenge to vehicle detection and classification in autonomous driving and surveillance systems because of low visibility and reduced contrast. This paper presents a new model that improves the quality of foggy images and reduces vehicle detection errors, pursuing four primary goals: boosting contrast with a spatial mutual correlation-based enhancement mechanism; achieving reliable detection with a fused feature vector combining color intensity, gradients, texture, and shape; classifying multiple vehicle types using shape semantics and a customized deep learning model; and creating a labelled foggy-image dataset from real-time images. The architecture adapts YOLOv8, incorporating a 7-channel input pipeline and a shape-aware classification head, and is trained on a foggy dataset of 1951 instances spanning 10 vehicle classes. The results show a significant improvement, with a precision of 0.7046, well above that of baseline strategies in foggy conditions. The proposed work is also the first to jointly use spatial correlation-based dehazing, multi-feature fusion, and shape-aware classification, providing an effective solution for object detection in adverse weather.
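To make the 7-channel input pipeline concrete, the sketch below shows one plausible way to assemble such a tensor from the feature families named in the abstract (color intensity, gradients, texture, and shape). The exact channel composition is an assumption for illustration — the paper does not enumerate the seven channels here — so the choices (RGB, gradient magnitude and orientation, local-variance texture, and a thresholded edge map as a crude shape cue) are hypothetical, implemented in pure NumPy.

```python
import numpy as np

def gradients(gray):
    # Central-difference gradients with edge padding (pure NumPy).
    g = np.pad(gray, 1, mode="edge")
    gx = (g[1:-1, 2:] - g[1:-1, :-2]) / 2.0
    gy = (g[2:, 1:-1] - g[:-2, 1:-1]) / 2.0
    return gx, gy

def local_std(gray, k=3):
    # Texture proxy: standard deviation over a k x k neighbourhood.
    p = k // 2
    g = np.pad(gray, p, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(g, (k, k))
    return windows.std(axis=(-1, -2))

def build_7channel(rgb):
    """Stack color, gradient, texture, and edge cues into an
    (H, W, 7) tensor; channel choices are illustrative only."""
    rgb = rgb.astype(np.float32) / 255.0
    gray = rgb.mean(axis=2)
    gx, gy = gradients(gray)
    mag = np.hypot(gx, gy)                      # gradient magnitude
    ori = np.arctan2(gy, gx) / np.pi            # orientation in [-1, 1]
    tex = local_std(gray)                       # texture channel
    edge = (mag > mag.mean() + mag.std()).astype(np.float32)  # shape cue
    return np.stack(
        [rgb[..., 0], rgb[..., 1], rgb[..., 2], mag, ori, tex, edge],
        axis=-1,
    )
```

In practice, a detector such as YOLOv8 would need its first convolution widened from 3 to 7 input channels to consume this tensor; the sketch only covers the feature-stacking step.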

Keywords:

Foggy Image Enhancement, Vehicle Detection, Spatial Mutual Correlations, Fused Feature Vector, Shape Semantics, Deep Learning, YOLOv8, Dataset Creation.

References:

[1] Joseph Redmon et al., “You Only Look Once: Unified, Real-Time Object Detection,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, pp. 779-788, 2016.
[CrossRef] [Google Scholar] [Publisher Link]
[2] Shaoqing Ren et al., “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137-1149, 2017.
[CrossRef] [Google Scholar] [Publisher Link]
[3] Vedant Kumar et al., “Object Detection Using SSD,” Proceedings of the 5th International Conference on Advances in Computing, Communication Control and Networking (ICAC3N), Greater Noida, India, pp. 743-748, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[4] T. Kokul and S. Anparasy, “Single Image Defogging using Depth Estimation and Scene-Specific Dark Channel Prior,” 2020 20th International Conference on Advances in ICT for Emerging Regions (ICTer), Colombo, Sri Lanka, pp. 190-195, 2020.
[CrossRef] [Google Scholar] [Publisher Link]
[5] Boyi Li et al., “Benchmarking Single Image Dehazing and Beyond,” IEEE Transactions on Image Processing, vol. 28, no. 1, pp. 492-505, 2019.
[CrossRef] [Google Scholar] [Publisher Link]
[6] Qingsong Zhu, Jiaming Mai, and Ling Shao, “A Fast Single Image Haze Removal Algorithm using Color Attenuation Prior,” IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3522-3533, 2015.
[CrossRef] [Google Scholar] [Publisher Link]
[7] Kyungil Kim, Soohyun Kim, and Kyung-Soo Kim, “Effective Image Enhancement Techniques for Fog-Affected Indoor and Outdoor Images,” IET Image Processing, vol. 12, no. 4, pp. 465-471, 2018.
[CrossRef] [Google Scholar] [Publisher Link]
[8] Qazi Mazhar ul Haq et al., “An Incremental Learning of YOLOv3 Without Catastrophic Forgetting for Smart City Applications,” IEEE Consumer Electronics Magazine, vol. 11, no. 5, pp. 56-63, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[9] Hasan Abbasi, Marzieh Amini, and F. Richard Yu, “Fog-Aware Adaptive YOLO for Object Detection in Adverse Weather,” 2023 IEEE Sensors Applications Symposium (SAS), Ottawa, ON, Canada, pp. 1-6, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[10] Kaiming He et al., “Deep Residual Learning for Image Recognition,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, pp. 770-778, 2016.
[CrossRef] [Google Scholar] [Publisher Link]
[11] Chien-Yao Wang, Alexey Bochkovskiy, and Hong-Yuan Mark Liao, “Scaled-YOLOv4: Scaling Cross Stage Partial Network,” 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, pp. 13024-13033, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[12] Zineb Haimer et al., “YOLO Algorithms Performance Comparison for Object Detection in Adverse Weather Conditions,” 2023 3rd International Conference on Innovative Research in Applied Science, Engineering and Technology (IRASET), Mohammedia, Morocco, pp. 1-7, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[13] Yuhua Chen et al., “Domain Adaptive Faster R-CNN for Object Detection in the Wild,” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, pp. 3339-3348, 2018.
[CrossRef] [Google Scholar] [Publisher Link]
[14] Martin Hahner et al., “Semantic Understanding of Foggy Scenes with Purely Synthetic Data,” 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, pp. 3675-3681, 2019.
[CrossRef] [Google Scholar] [Publisher Link]
[15] Mingkang Chen et al., “Weather-Aware Object Detection Method for Maritime Surveillance Systems,” Future Generation Computer Systems, vol. 151, pp. 111-123, 2024.
[CrossRef] [Google Scholar] [Publisher Link]
[16] Lizhao Liu et al., “Enhancing Foggy Day Target Detection using Plant Intelligence-Inspired Algorithms,” Measurement, vol. 255, 2025.
[CrossRef] [Google Scholar] [Publisher Link]
[17] Lucai Wang et al., “R-YOLO: A Robust Object Detector in Adverse Weather,” IEEE Transactions on Instrumentation and Measurement, vol. 72, pp. 1-11, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[18] Zongdai Liu et al., “AutoShape: Real-Time Shape-Aware Monocular 3D Object Detection,” 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, pp. 15621-15630, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[19] Marius Cordts et al., “The Cityscapes Dataset for Semantic Urban Scene Understanding,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, pp. 3213-3223, 2016.
[CrossRef] [Google Scholar] [Publisher Link]
[20] German Ros et al., “The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, pp. 3234-3243, 2016.
[CrossRef] [Google Scholar] [Publisher Link]
[21] Ming-An Chung, Chih-Wei Yang, and Chia-Wei Lin, “Evaluation of the Effect of Foggy Image Generation on the Performance of the YOLOv7-Based Object Detection Model,” 2024 International Conference on Consumer Electronics - Taiwan (ICCE-Taiwan), Taichung, Taiwan, pp. 815-816, 2024.
[CrossRef] [Google Scholar] [Publisher Link]
[22] A. Geiger et al., “Vision Meets Robotics: The KITTI Dataset,” The International Journal of Robotics Research, vol. 32, no. 11, pp. 1231-1237, 2013.
[CrossRef] [Google Scholar] [Publisher Link]
[23] Andreas Geiger, Philip Lenz, and Raquel Urtasun, “Are We Ready for Autonomous Driving? The KITTI Vision Benchmark Suite,” 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, pp. 3354-3361, 2012.
[CrossRef] [Google Scholar] [Publisher Link]
[24] Yu Wan et al., “DFI-FODN: Dehazing Feature Injected Foggy Image Object Detection Network under Multi-task Learning Framework,” IGARSS 2025 - 2025 IEEE International Geoscience and Remote Sensing Symposium, Brisbane, Australia, pp. 8462-8465, 2025.
[CrossRef] [Google Scholar] [Publisher Link]
[25] Xianglin Meng et al., “YOLOv5s-Fog: An Improved Model Based on YOLOv5s for Object Detection in Foggy Weather Scenarios,” Sensors, vol. 23, no. 11, pp. 1-16, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[26] Qiyang Xie et al., “A Deep CNN-Based Detection Method for Multi-Scale Fine-Grained Objects in Remote Sensing Images,” IEEE Access, vol. 12, pp. 15622-15630, 2024.
[CrossRef] [Google Scholar] [Publisher Link]
[27] Alexey Bochkovskiy, Chien-Yao Wang, and Hong-Yuan Mark Liao, “YOLOv4: Optimal Speed and Accuracy of Object Detection,” arXiv preprint, pp. 1-17, 2020.
[CrossRef] [Google Scholar] [Publisher Link]
[28] Cong Liu et al., “DeMatchNet: A Unified Framework for Joint Dehazing and Feature Matching in Adverse Weather Conditions,” Electronics, vol. 14, no. 5, pp. 1-20, 2025.
[CrossRef] [Google Scholar] [Publisher Link]