Visual Object Tracking via Feature Fusion of Local Binary Patterns and Gradient Local Auto-Correlation

International Journal of Electronics and Communication Engineering
© 2025 by SSRG - IJECE Journal
Volume 12 Issue 10
Year of Publication: 2025
Authors: Villari Sreenatha Sarma, P.M. Ashok Kumar, Vadamala Purandhar Reddy
How to Cite?

Villari Sreenatha Sarma, P.M. Ashok Kumar, Vadamala Purandhar Reddy, "Visual Object Tracking via Feature Fusion of Local Binary Patterns and Gradient Local Auto-Correlation," SSRG International Journal of Electronics and Communication Engineering, vol. 12, no. 10, pp. 73-83, 2025. Crossref, https://doi.org/10.14445/23488549/IJECE-V12I10P108

Abstract:

Tracking a single object with feature fusion techniques is a pivotal problem in computer vision, as it involves detecting and following a target object over a sequence of images. Recent work has focused on feature-based methods to improve tracking accuracy and stability, especially in challenging conditions. Nevertheless, many traditional approaches struggle to run in real time because of their high computational cost. In this paper, we introduce a new visual object tracking framework that fuses Local Binary Pattern (LBP) and Gradient Local Auto-Correlation (GLAC) features. Integrating LBP, which captures robust texture information, with GLAC, which encodes spatial gradient correlations, improves the discrimination of the object's appearance. Tracking proceeds in four stages: feature extraction, feature fusion, similarity matching, and model update. LBP and GLAC features of the object are extracted and fused into a discriminative feature vector, which is matched against previously tracked features to locate the object in subsequent frames. Motion prediction further refines the estimated location and improves tracking accuracy. Experimental results show that the proposed LBP-GLAC feature-fusion tracker outperforms previously proposed techniques, achieving a tracking accuracy of 83% while running in real time.
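
The four-stage pipeline summarized above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes grayscale frames, fixed square patches, a small dense search grid, a constant-velocity motion model, and an exponential template update with learning rate alpha, and it uses a simplified approximation of GLAC alongside scikit-image's local_binary_pattern; all parameter values are illustrative.

# Illustrative sketch of LBP + GLAC feature-fusion tracking (assumptions noted above).
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(patch, points=8, radius=1):
    """Texture descriptor: normalized uniform-LBP histogram."""
    lbp = local_binary_pattern(patch, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def glac_features(patch, bins=8):
    """Simplified GLAC: magnitude-weighted co-occurrence of gradient
    orientations between each pixel and its right/bottom neighbours."""
    gy, gx = np.gradient(patch.astype(np.float64))
    mag = np.hypot(gx, gy)
    ori = np.clip(np.digitize(np.arctan2(gy, gx),
                              np.linspace(-np.pi, np.pi, bins + 1)) - 1, 0, bins - 1)
    feat = np.zeros((2, bins, bins))
    for k, (dy, dx) in enumerate([(0, 1), (1, 0)]):  # two displacement vectors
        a = ori[: ori.shape[0] - dy, : ori.shape[1] - dx]
        b = ori[dy:, dx:]
        w = mag[: mag.shape[0] - dy, : mag.shape[1] - dx] * mag[dy:, dx:]
        np.add.at(feat[k], (a.ravel(), b.ravel()), w.ravel())
    feat = feat.ravel()
    return feat / (np.linalg.norm(feat) + 1e-12)

def fused_descriptor(patch):
    """Feature fusion: concatenate LBP and GLAC into one discriminative vector."""
    return np.concatenate([lbp_histogram(patch), glac_features(patch)])

def track_step(frame, template, prev_center, velocity, search=24, box=32, alpha=0.1):
    """One tracking iteration: motion prediction, search, similarity matching, model update."""
    pred = prev_center + velocity  # constant-velocity prediction of the new centre
    best_sim, best_center, best_desc = -1.0, pred, None
    for cy in range(int(pred[0]) - search, int(pred[0]) + search + 1, 4):
        for cx in range(int(pred[1]) - search, int(pred[1]) + search + 1, 4):
            patch = frame[cy - box // 2: cy + box // 2, cx - box // 2: cx + box // 2]
            if patch.shape != (box, box):
                continue
            d = fused_descriptor(patch)
            # Cosine similarity between candidate descriptor and the tracked template.
            sim = d @ template / (np.linalg.norm(d) * np.linalg.norm(template) + 1e-12)
            if sim > best_sim:
                best_sim, best_center, best_desc = sim, np.array([cy, cx]), d
    # Model update: exponential blend of the template with the matched descriptor.
    if best_desc is not None:
        template = (1 - alpha) * template + alpha * best_desc
    velocity = best_center - prev_center
    return best_center, velocity, template

# Usage: initialise template = fused_descriptor(first_patch) and velocity = np.zeros(2),
# then call track_step(frame, template, centre, velocity) once per frame.

The dense grid search above only stands in for the similarity-matching strategy of the paper; the sketch shows how the fused descriptor, matching, motion prediction, and model update fit together rather than how they are implemented by the authors.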

Keywords:

Visual Object Tracking, Texture and Gradient Features, Similarity Matching, Motion Prediction, Real-Time Tracking, Tracking Accuracy.
