Robust Adaptive Learning for Real-World Data with Changing Features and Imperfect Labels
| International Journal of Electronics and Communication Engineering |
| © 2026 by SSRG - IJECE Journal |
| Volume 13 Issue 2 |
| Year of Publication: 2026 |
| Authors: Akula Suneetha, K. Ratna Babu, Rama Vasantha Adiraju, V. V. Jaya Rama Krishnaiah, Desamala Prabhakara Rao, Vullam Naga Gopi Raju |
How to Cite?
Akula Suneetha, K. Ratna Babu, Rama Vasantha Adiraju, V. V. Jaya Rama Krishnaiah, Desamala Prabhakara Rao, Vullam Naga Gopi Raju, "Robust Adaptive Learning for Real-World Data with Changing Features and Imperfect Labels," SSRG International Journal of Electronics and Communication Engineering, vol. 13, no. 2, pp. 291-306, 2026. Crossref, https://doi.org/10.14445/23488549/IJECE-V13I2P122
Abstract:
In many machine learning tasks, the environment is complex and unstable: the feature space changes over time, and the available labels contain errors. Either problem alone makes model training difficult; this paper addresses the setting in which both occur simultaneously, which is especially challenging when data in the new stage is scarce. A new method, termed ALDN (Adaptive Learning for Dynamic Features and Noisy Labels), is proposed. The method has two stages. In the first stage, a model is learned from clean data in the original feature space. In the second stage, after the feature space has changed, that model is used to guide the training of a new model: optimal transport maps the old model into the new feature space, and a regularizer then aligns the behavior of the new model with the mapped one, helping it learn reliably despite noisy labels. Two variants of the method are presented: one uses a direct regularizer that keeps the new model's parameters close to the mapped model, while the other uses an indirect regularizer that aligns the two models' predictions. Theoretical guarantees are established for both, and extensive experiments compare the method against existing approaches, showing better performance in most cases even when features change and labels are unreliable. By reusing past models and adapting them to new data, ALDN offers a practical, theoretically grounded way to learn when clean data is scarce and the incoming data is messy.
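For readers who want a concrete picture of the two-stage recipe the abstract describes, the sketch below illustrates it on a linear model with NumPy. It is not the authors' implementation: the entropic (Sinkhorn) OT solver, the column-wise transport cost, the logistic loss, and the assumption that a small batch of samples is observed in both feature spaces are all illustrative choices made here for self-containment.

```python
import numpy as np

def sinkhorn(a, b, M, reg=0.1, n_iters=200):
    """Entropy-regularized optimal transport (Sinkhorn iterations).
    Returns a coupling P with row marginals a and column marginals b."""
    K = np.exp(-M / reg)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u + 1e-16)
        u = a / (K @ v + 1e-16)
    return u[:, None] * K * v[None, :]

def map_old_model(w_old, X_old, X_new, reg=0.1):
    """Map old-space weights into the new feature space via an OT coupling
    between feature columns. The cost between an old and a new column is
    their mean squared difference after standardization, computed on a
    batch of samples observed in both spaces (an assumption of this demo)."""
    Zo = (X_old - X_old.mean(0)) / (X_old.std(0) + 1e-8)
    Zn = (X_new - X_new.mean(0)) / (X_new.std(0) + 1e-8)
    M = ((Zo.T[:, None, :] - Zn.T[None, :, :]) ** 2).mean(-1)  # (d_old, d_new)
    d_old, d_new = M.shape
    P = sinkhorn(np.full(d_old, 1.0 / d_old), np.full(d_new, 1.0 / d_new), M, reg)
    return d_new * (P.T @ w_old)  # barycentric projection of the weights

def train_new_model(X, y_noisy, w_mapped, lam=1.0, mode="direct", lr=0.1, epochs=500):
    """Logistic regression on noisy labels, regularized toward the mapped model.
    mode="direct" penalizes parameter distance to w_mapped;
    mode="prediction" penalizes disagreement of the two models' outputs on X."""
    w = w_mapped.copy()
    n = len(y_noisy)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # sigmoid predictions
        grad = X.T @ (p - y_noisy) / n          # logistic-loss gradient
        if mode == "direct":
            grad += lam * (w - w_mapped)
        else:  # gradient of (lam / 2n) * ||X (w - w_mapped)||^2
            grad += lam * (X.T @ (X @ (w - w_mapped))) / n
        w -= lr * grad
    return w
```

A toy run, with some features dropped and some new ones appearing between stages, and 20% of the second-stage labels flipped:

```python
rng = np.random.default_rng(0)
X_old = rng.normal(size=(200, 10))
w_old = rng.normal(size=10)
X_new = np.hstack([X_old[:, 3:], rng.normal(size=(200, 4))])  # 7 kept + 4 new
y = (X_new @ rng.normal(size=11) > 0).astype(float)
y_noisy = np.where(rng.random(200) < 0.2, 1 - y, y)           # 20% label flips
w0 = map_old_model(w_old, X_old, X_new)
w = train_new_model(X_new, y_noisy, w0, lam=0.5, mode="prediction")
```

The "direct" mode corresponds to the abstract's rule that keeps the two models close in parameter space; the "prediction" mode corresponds to the rule that compares their outputs instead.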
Keywords:
Adaptive learning, Dynamic features, Noisy labels, Optimal transport, ALDN.
References:
[1] Yang Tang et al., “Introduction to Focus Issue: When Machine Learning Meets Complex Systems: Networks, Chaos, and Nonlinear Dynamics,” Chaos: An Interdisciplinary Journal of Nonlinear Science, vol. 30, no. 6, 2020.
[CrossRef] [Google Scholar] [Publisher Link]
[2] Victor S. Sheng, Foster Provost, and Panagiotis G. Ipeirotis, “Get Another Label? Improving Data Quality and Data Mining Using Multiple, Noisy Labelers,” Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 614-622, 2008.
[CrossRef] [Google Scholar] [Publisher Link]
[3] Tony Jinks, Disappearing Object Phenomenon: An Investigation, McFarland, pp. 1-192, 2016.
[Google Scholar] [Publisher Link]
[4] Guozhu Dong et al., “Online Mining of Changes From Data Streams: Research Problems and Preliminary Results,” Proceedings of the 2003 ACM SIGMOD Workshop on Management and Processing of Data Streams, pp. 739-747, 2003.
[Google Scholar] [Publisher Link]
[5] Włodzisław Duch, and Geerd H.F. Diercksen, “Feature Space Mapping as a Universal Adaptive System,” Computer Physics Communications, vol. 87, no. 3, pp. 341-371, 1995.
[CrossRef] [Google Scholar] [Publisher Link]
[6] Subutai Ahmad, and Volker Tresp, “Some Solutions to the Missing Feature Problem in Vision,” Advances in Neural Information Processing Systems, vol. 5, pp. 393-400, 1992.
[Google Scholar] [Publisher Link]
[7] Shilin Gu et al., “Adaptive Learning for Dynamic Features and Noisy Labels,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 47, no. 2, pp. 1219-1237, 2025.
[CrossRef] [Google Scholar] [Publisher Link]
[8] Yang Liu, and Jialu Wang, “Can Less be More? When Increasing-to-Balancing Label Noise Rates Considered Beneficial,” Advances in Neural Information Processing Systems, vol. 34, pp. 17467-17479, 2021.
[Google Scholar] [Publisher Link]
[9] K.J.M. Janssen et al., “Updating Methods Improved the Performance of a Clinical Prediction Model in New Patients,” Journal of Clinical Epidemiology, vol. 61, no. 1, pp. 76-86, 2008.
[CrossRef] [Google Scholar] [Publisher Link]
[10] Tong Xiao et al., “Learning from Massive Noisy Labeled Data for Image Classification,” 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, pp. 2691-2699, 2015.
[CrossRef] [Google Scholar] [Publisher Link]
[11] Subhas Chandra Mukhopadhyay, “Wearable Sensors for Human Activity Monitoring: A Review,” IEEE Sensors Journal, vol. 15, no. 3, pp. 1321-1330, 2014.
[CrossRef] [Google Scholar] [Publisher Link]
[12] Zhenning Kong, Salah A. Aly, and Emina Soljanin, “Decentralized Coding Algorithms for Distributed Storage in Wireless Sensor Networks,” IEEE Journal on Selected Areas in Communications, vol. 28, no. 2, pp. 261-267, 2010.
[CrossRef] [Google Scholar] [Publisher Link]
[13] Hans Jonas, “The Practical Uses of Theory [with Comments],” Social Research, vol. 26, no. 2, pp. 127-166, 1959.
[Google Scholar] [Publisher Link]
[14] Xindong Wu et al., “Online Feature Selection with Streaming Features,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 5, pp. 1178-1192, 2012.
[CrossRef] [Google Scholar] [Publisher Link]
[15] Quanmao Lu, Xuelong Li, and Yongsheng Dong, “Structure Preserving Unsupervised Feature Selection,” Neurocomputing, vol. 301, pp. 36-45, 2018.
[CrossRef] [Google Scholar] [Publisher Link]
[16] Qin Zhang et al., “Online Learning from Trapezoidal Data Streams,” IEEE Transactions on Knowledge and Data Engineering, vol. 28, no. 10, pp. 2709-2723, 2016.
[CrossRef] [Google Scholar] [Publisher Link]
[17] Chenping Hou, and Zhi-Hua Zhou, “One-Pass Learning with Incremental and Decremental Features,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 11, pp. 2776-2792, 2018.
[CrossRef] [Google Scholar] [Publisher Link]
[18] Bo-Jian Hou, Lijun Zhang, and Zhi-Hua Zhou, “Prediction with Unpredictable Feature Evolution,” IEEE Transactions on Neural Networks and Learning Systems, vol. 33, no. 10, pp. 5706-5715, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[19] Chenping Hou et al., “Incremental Learning for Simultaneous Augmentation of Feature and Class,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 12, pp. 14789-14806, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[20] Koby Crammer et al., “Online Passive-Aggressive Algorithms,” Journal of Machine Learning Research, vol. 7, no. 19, pp. 551-585, 2006.
[Google Scholar] [Publisher Link]
[21] Tongliang Liu, and Dacheng Tao, “Classification with Noisy Labels by Importance Reweighting,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 3, pp. 447-461, 2015.
[CrossRef] [Google Scholar] [Publisher Link]
[22] Benoit Frenay, and Michel Verleysen, “Classification in the Presence of Label Noise: A Survey,” IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 5, pp. 845-869, 2014.
[CrossRef] [Google Scholar] [Publisher Link]
[23] Jing Zhou et al., “Streamwise Feature Selection,” Journal of Machine Learning Research, vol. 7, no. 67, pp. 1861-1885, 2006.
[CrossRef] [Google Scholar] [Publisher Link]
[24] Jiaheng Wei et al., “Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations,” arXiv Preprint, pp. 1-23, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[25] Bo Han et al., “Co-Teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels,” Advances in Neural Information Processing Systems, vol. 31, 2018.
[Google Scholar] [Publisher Link]
[26] Giorgio Patrini et al., “Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1944-1952, 2017.
[Google Scholar] [Publisher Link]
[27] Curtis G. Northcutt, Lu Jiang, and Isaac L. Chuang, “Confident Learning: Estimating Uncertainty in Dataset Labels,” Journal of Artificial Intelligence Research, vol. 70, pp. 1373-1411, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[28] Nagarajan Natarajan et al., “Learning with Noisy Labels,” Advances in Neural Information Processing Systems, vol. 26, pp. 1-9, 2013.
[Google Scholar] [Publisher Link]
[29] Hao Chen et al., “A General Framework for Learning from Weak Supervision,” arXiv Preprint, 2024.
[CrossRef] [Google Scholar] [Publisher Link]
[30] Xiaobo Xia et al., “Part-Dependent Label Noise: Towards Instance-Dependent Label Noise,” Advances in Neural Information Processing Systems, vol. 33, pp. 1-14, 2020.
[Google Scholar] [Publisher Link]
[31] Antonin Berthon et al., “Confidence Scores Make Instance-dependent Label-noise Learning Possible,” Proceedings of the 38th International Conference on Machine Learning, vol. 139, pp. 825-836, 2021.
[Google Scholar] [Publisher Link]
[32] Jacob Goldberger, and Ehud Ben-Reuven, “Training Deep Neural Networks Using a Noise Adaptation Layer,” International Conference on Learning Representations, pp. 1-9, 2017.
[Google Scholar] [Publisher Link]
[33] Yuncheng Li et al., “Learning from Noisy Labels with Distillation,” Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 1910-1918, 2017.
[Google Scholar] [Publisher Link]
