Machine Learning: Key Algorithms, Practical Applications, and Current Research Directions

International Journal of Electrical and Electronics Engineering
© 2025 by SSRG - IJEEE Journal
Volume 12 Issue 4
Year of Publication: 2025
Authors : Mohammad Nazmul Alam, Vijay Laxmi, Abhishek Sharma, Sarishma Dangi
How to Cite?

Mohammad Nazmul Alam, Vijay Laxmi, Abhishek Sharma, Sarishma Dangi, "Machine Learning: Key Algorithms, Practical Applications, and Current Research Directions," SSRG International Journal of Electrical and Electronics Engineering, vol. 12, no. 4, pp. 12-46, 2025. Crossref, https://doi.org/10.14445/23488379/IJEEE-V12I4P102

Abstract:

We generate enormous amounts of data every day across various fields, such as finance, healthcare, sales, marketing, social media, and industry. State-of-the-art technology leverages this big data to make decisions and gain valuable insights. Machine learning, one of the most advanced and dynamic artificial intelligence techniques, utilizes large datasets to make predictions and develop intelligent applications. Machine learning algorithms enable computers to learn without being explicitly programmed. In this paper, we identify key algorithms and discuss fundamental algorithmic concepts. We explore various categories of machine learning, including supervised, unsupervised, semi-supervised, and reinforcement learning, along with their respective algorithms. Furthermore, we identify advanced machine learning applications across diverse fields. Finally, we discuss the challenges associated with machine learning techniques and potential future directions for developing algorithms and services.
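
A minimal sketch of these learning categories, assuming scikit-learn and its bundled Iris toy dataset (the model and data choices below are illustrative assumptions, not code from the paper): a supervised classifier is fit on labelled data, clustering uses no labels, and a semi-supervised self-training classifier learns from partially labelled data. Reinforcement learning is omitted here because it additionally requires an interactive environment.

# Illustrative sketch (assumes scikit-learn is installed); not taken from the paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# Supervised learning: fit on fully labelled training examples.
clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("supervised accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Unsupervised learning: group the same features without using any labels.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))

# Semi-supervised learning: withhold most labels (marked -1) and let
# self-training propagate predictions from the few labelled points.
y_partial = y_train.copy()
rng = np.random.default_rng(0)
y_partial[rng.random(len(y_partial)) < 0.8] = -1  # roughly 80% of labels hidden
semi = SelfTrainingClassifier(KNeighborsClassifier(n_neighbors=5)).fit(X_train, y_partial)
print("semi-supervised accuracy:", accuracy_score(y_test, semi.predict(X_test)))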

Keywords:

Machine Learning, Reinforcement Learning, Supervised, Unsupervised, Semi-Supervised, Neural Network.

References:

[1] Tom M. Mitchell, Machine Learning, McGraw-Hill, 2017.
[Publisher Link]
[2] Aurélien Géron, Hands-on Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, O'Reilly Media, Inc., 2019.
[Google Scholar] [Publisher Link]
[3] Pang-Ning Tan, Michael Steinbach, and Vipin Kumar, Introduction to Data Mining, Pearson Education Limited, 2019.
[Google Scholar] [Publisher Link]
[4] Elaine Rich, Kevin Knight, and Shivashankar B. Nair, Artificial Intelligence, 3rd ed., Tata McGraw-Hill, 2009.
[Publisher Link]
[5] Harry Henderson, Artificial Intelligence, Milestones in Discovery and Invention, Infobase Publishing, 2007.
[Google Scholar] [Publisher Link]
[6] Mahdi Rezaei, and Reinhard Klette, Computer Vision for Driver Assistance, Springer International Publishing, 2017.
[CrossRef] [Google Scholar] [Publisher Link]
[7] Taiwo Oladipupo Ayodele, Types of Machine Learning Algorithms, New Advances in Machine Learning, pp. 19-48, 2010.
[CrossRef] [Google Scholar] [Publisher Link]
[8] Zhi-Hua Zhou, Ensemble Methods: Foundations and Algorithms, CRC Press, 2012.
[Google Scholar] [Publisher Link]
[9] Rahul Saxena, How Decision Tree Algorithm Works, Dataaspirant, 2017. [Online]. Available: https://dataaspirant.com/how-decision-tree-algorithm-works/
[10] Aurélien Géron, Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow, O'Reilly Media, Inc., 2022.
[Google Scholar] [Publisher Link]
[11] Antonio Gulli, and Sujit Pal, Deep Learning with Keras: Implementing Deep Learning Models and Neural Networks with the Power of Python, Packt Publishing Ltd, 2017.
[Google Scholar] [Publisher Link]
[12] Tom M. Mitchell, The Discipline of Machine Learning, Carnegie Mellon University, School of Computer Science, Machine Learning Department, 2006.
[Google Scholar] [Publisher Link]
[13] Iqbal H. Sarker, “Machine Learning: Algorithms, Real-World Applications and Research Directions,” SN Computer Science, vol. 2, no. 3, pp. 1-21, 2021.
[CrossRef] [Google Scholar] [Publisher Link] 
[14] Diah Puspitasari et al., “Heart Disease: Application of the K-Nearest Neighbor (KNN) Method,” International Information and Engineering Technology Association, vol. 29, no. 4, pp. 1275-1281, 2024.
[CrossRef] [Google Scholar] [Publisher Link]
[15] Gongde Guo et al., “KNN Model-Based Approach in Classification,” OTM Confederated International Conferences, ‘On the Move to Meaningful Internet Systems’, Catania, Italy, vol. 1, pp. 986-996, 2003.
[CrossRef] [Google Scholar] [Publisher Link]
[16] V. Kecman, Support Vector Machines-An Introduction, Support Vector Machines: Theory and Applications, Springer, Berlin, Heidelberg, pp. 1-47, 2005.
[CrossRef] [Google Scholar] [Publisher Link]
[17] Dustin Boswell, “Introduction to Support Vector Machines,” Department of Computer Science and Engineering, University of California San Diego, vol. 11, pp. 16-17, 2002.
[Google Scholar]
[18] Vikramaditya Jakkula, “Tutorial on Support Vector Machine (SVM),” School of EECS, Washington State University, vol. 37, no. 2-5, 2006.
[Google Scholar]
[19] Kecheng Qu, “Research on Linear Regression Algorithm,” 2nd International Conference on Physics, Computing and Mathematical, vol. 395, pp. 1-6, 2024.
[CrossRef] [Google Scholar] [Publisher Link]
[20] Supichaya Sunthornjittanon, “Linear Regression Analysis on Net Income of an Agrochemical Company in Thailand,” Portland State University, 2015.
[CrossRef] [Google Scholar] [Publisher Link]
[21] Riccardo Trinchero, and Flavio Canavero, “Machine Learning Regression Techniques for the Modeling of Complex Systems: An Overview,” IEEE Electromagnetic Compatibility Magazine, vol. 10, no. 4, pp. 71-79, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[22] Shruthi H. Shetty et al., Supervised Machine Learning: Algorithms and Applications, Fundamentals and Methods of Machine and Deep Learning: Algorithms, Tools and Applications, pp. 1-16, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[23] F.Y. Osisanwo et al., “Supervised Machine Learning Algorithms: Classification and Comparison,” International Journal of Computer Trends and Technology, vol. 48, no. 3, pp. 128-138, 2017.
[CrossRef] [Google Scholar] [Publisher Link]
[24] S.B. Kotsiantis, Supervised Machine Learning: A Review of Classification Techniques, Frontiers in Artificial Intelligence and Applications, Emerging Artificial Intelligence Applications in Computer Engineering, vol. 160, pp. 3-24, 2007.
[Google Scholar] [Publisher Link]
[25] José-Luis Solorio-Ramírez et al., “Random Forest Algorithm for the Classification of Spectral Data of Astronomical Objects,” Algorithms, vol. 16, no. 6, pp. 1-16, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[26] Mohiuddin Ahmed, Raihan Seraj, and Syed Mohammed Shamsul Islam, “The K-Means Algorithm: A Comprehensive Survey and Performance Evaluation,” Electronics, vol. 9, no. 8, pp. 1-12, 2020.
[CrossRef] [Google Scholar] [Publisher Link]
[27] Xin Jin, and Jiawei Han, “K-Means Clustering,” Encyclopedia of Machine Learning, pp. 563-564, 2011.
[CrossRef] [Google Scholar] [Publisher Link]
[28] Jonathon Shlens, “A Tutorial on Principal Component Analysis,” arXiv Preprint, 2014.
[CrossRef] [Google Scholar] [Publisher Link]
[29] Andrzej Maćkiewicz, and Waldemar Ratajczak, “Principal Components Analysis (PCA),” Computers & Geosciences, vol. 19, no. 3, pp. 303-342, 1993.
[CrossRef] [Google Scholar] [Publisher Link]
[30] Shuangshuang Chen, and Wei Guo, “Auto-Encoders in Deep Learning-A Review with New Perspectives,” Mathematics, vol. 11, no. 8, pp. 1-54, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[31] Pascal Vincent et al., “Extracting and Composing Robust Features with Denoising Autoencoders,” Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, pp. 1096-1103, 2008.
[CrossRef] [Google Scholar] [Publisher Link]
[32] Andrew Ng, “Sparse Autoencoder,” CS294A Lecture Notes, 2011.
[Google Scholar] [Publisher Link]
[33] Diederik P. Kingma, and Max Welling, “Auto-Encoding Variational Bayes,” arXiv Preprint, 2014.
[CrossRef] [Google Scholar] [Publisher Link]
[34] Jonathan Masci et al., “Stacked Convolutional Auto-Encoders for Hierarchical Feature Extraction,” International Conference on Artificial Neural Networks, Espoo, Finland, pp. 52-59, 2011.
[CrossRef] [Google Scholar] [Publisher Link]
[35] Salah Rifai et al., “Contractive Auto-Encoders: Explicit Invariance During Feature Extraction,” Proceedings of the 28th International Conference on International Conference on Machine Learning, Bellevue, Washington, USA, pp. 833-840, 2011.
[Google Scholar] [Publisher Link]
[36] Pascal Vincent et al., “Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion,” Journal of Machine Learning Research, vol. 11, pp. 3371-3408, 2010.
[Google Scholar] [Publisher Link]
[37] Yoshua Bengio, Aaron Courville, and Pascal Vincent, “Representation Learning: A Review and New Perspectives,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1798-1828, 2013.
[CrossRef] [Google Scholar] [Publisher Link]
[38] Bruno A. Olshausen, and David J. Field, “Sparse Coding with an Overcomplete Basis Set: A Strategy Employed by V1?,” Vision Research, vol. 37, no. 23, pp. 3311-3325, 1997.
[CrossRef] [Google Scholar] [Publisher Link]
[39] Nitish Srivastava, and Ruslan Salakhutdinov, “Multimodal Learning with Deep Boltzmann Machines,” Advances in Neural Information Processing Systems, vol. 25, pp. 2222-2230, 2012.
[Google Scholar] [Publisher Link]
[40] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le, “Sequence to Sequence Learning with Neural Networks,” Advances in Neural Information Processing Systems, vol. 27, pp. 3104-3112, 2014.
[Google Scholar] [Publisher Link]
[41] Kyunghyun Cho et al., “Learning Phrase Representations Using RNN Encoder-Decoder for Statistical Machine Translation,” Proceedings of the Conference on Empirical Methods in Natural Language Processing, Doha, Qatar, pp. 1724-1734, 2014.
[CrossRef] [Google Scholar] [Publisher Link]
[42] Arash Vahdat, and Jan Kautz, “NVAE: A Deep Hierarchical Variational Autoencoder,” Advances in Neural Information Processing Systems, vol. 33, pp. 19667-19679, 2020.
[Google Scholar] [Publisher Link]
[43] Fei Tony Liu, Kai Ming Ting, and Zhi-Hua Zhou, “Isolation Forest,” 8th IEEE International Conference on Data Mining, Pisa, Italy, pp. 413-422, 2008.
[CrossRef] [Google Scholar] [Publisher Link]
[44] Y. Wang, J. Wong, and A. Miner, “Anomaly Intrusion Detection Using One-Class SVM,” Proceedings from the 5th Annual IEEE SMC Information Assurance Workshop, pp. 358-364, 2004.
[CrossRef] [Google Scholar] [Publisher Link]
[45] Kun-Lun Li et al., “Improving One-Class SVM for Anomaly Detection,” Proceedings of the International Conference on Machine Learning and Cybernetics (IEEE Cat. No.03EX693), Xi'an, vol. 5, pp. 3077-3081, 2003.
[CrossRef] [Google Scholar] [Publisher Link]
[46] Markus M. Breunig et al., “LOF: Identifying Density-Based Local Outliers,” Proceedings of the ACM SIGMOD International Conference on Management of Data, Dallas Texas, USA, pp. 93-104, 2000.
[CrossRef] [Google Scholar] [Publisher Link]
[47] Laurens Van Der Maaten, and Geoffrey Hinton, “Visualizing Data Using t-SNE,” Journal of Machine Learning Research, vol. 9, pp. 2579-2605, 2008.
[Google Scholar] [Publisher Link]
[48] Martin Wattenberg, Fernanda Viégas, and Ian Johnson, “How to Use t-SNE Effectively,” Distill, vol. 1, no. 10, 2016.
[CrossRef] [Google Scholar] [Publisher Link]
[49] Haoyu Xie, “Research and Case Analysis of Apriori Algorithm Based on Mining Frequent Itemsets,” Open Journal of Social Sciences, vol. 9, no. 4, pp. 458-463, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[50] Hongfei Xu et al., “Research on an Improved Association Rule Mining Algorithm,” IEEE International Conference on Power Data Science, Taizhou, China, pp. 37-42, 2019.
[CrossRef] [Google Scholar] [Publisher Link]
[51] Massih-Reza Amini et al., “Self-Training: A Survey,” arXiv Preprint, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[52] Zixing Song et al., “Graph-Based Semi-Supervised Learning: A Comprehensive Review,” IEEE Transactions on Neural Networks and Learning Systems, vol. 34, no. 11, pp. 8174-8194, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[53] L. Ruthotto, and E. Haber, “An Introduction to Deep Generative Modeling,” GAMM-Mitteilungen, vol. 44, no. 2, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[54] Xibin Dong et al., “A Survey on Ensemble Learning,” Frontiers of Computer Science, vol. 14, pp. 241-258, 2020.
[CrossRef] [Google Scholar] [Publisher Link]
[55] Thomas G. Dietterich, “Ensemble Methods in Machine Learning,” International Workshop on Multiple Classifier Systems, Cagliari, Italy, vol. 1, pp. 1-15, 2000.
[CrossRef] [Google Scholar] [Publisher Link]
[56] David Opitz, and Richard Maclin, “Popular Ensemble Methods: An Empirical Study,” Journal of Artificial Intelligence Research, vol. 11, pp. 169-198, 1999.
[CrossRef] [Google Scholar] [Publisher Link]
[57] Ying Zhou, Thomas A. Mazzuchi, and Shahram Sarkani, “M-Adaboost-A Based Ensemble System for Network Intrusion Detection,” Expert Systems with Applications, vol. 162, 2020.
[CrossRef] [Google Scholar] [Publisher Link]
[58] Batta Mahesh, “Machine Learning Algorithms-A Review,” International Journal of Science and Research, vol. 9, no. 1, pp. 381-386, 2020.
[CrossRef] [Google Scholar] [Publisher Link]
[59] A.D. Dongare, R.R. Kharde, and Amit D. Kachare, “Introduction to Artificial Neural Network,” International Journal of Engineering and Innovative Technology, vol. 2, no. 1, pp. 189-194, 2012.
[Google Scholar]
[60] Yu-chen Wu, and Jun-wen Feng, “Development and Application of Artificial Neural Network,” Wireless Personal Communications, vol. 102, no. 2, pp. 1645-1656, 2018.
[CrossRef] [Google Scholar] [Publisher Link]
[61] Saad Albawi, Tareq Abed Mohammed, and Saad Al-Zawi, “Understanding of A Convolutional Neural Network,” International Conference on Engineering and Technology, Antalya, Turkey, pp. 1-6, 2017.
[CrossRef] [Publisher Link]
[62] Keiron O'Shea, and Ryan Nash, “An Introduction to Convolutional Neural Networks,” arXiv Preprint, 2015.
[CrossRef] [Google Scholar] [Publisher Link]
[63] Laith Alzubaidi et al., “Review of Deep Learning: Concepts, CNN Architectures, Challenges, Applications, Future Directions,” Journal of Big Data, vol. 8, no. 1, pp. 1-74, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[64] Anjar Wanto, Yuhandri Yuhandri, and Okfalisa Okfalisa, “RetMobileNet: A New Deep Learning Approach for Multi-Class Eye Disease Identification,” Artificial Intelligence Review, vol. 38, no. 4, pp. 1055-1067, 2024.
[CrossRef] [Google Scholar] [Publisher Link]
[65] Yoshua Bengio, “Learning Deep Architectures for AI,” Foundations and Trends® in Machine Learning, vol. 2, no. 1, pp. 1-127, 2009.
[CrossRef] [Google Scholar] [Publisher Link]
[66] Raffaele Pugliese, Stefano Regondi, and Riccardo Marini, “Machine Learning-Based Approach: Global Trends, Research Directions, and Regulatory Standpoints,” Data Science and Management, vol. 4, pp. 19-29, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[67] S. Reema Sree, S.B. Vyshnavi, and N. Jayapandian, “Real-World Application of Machine Learning and Deep Learning,” International Conference on Smart Systems and Inventive Technology, Tirunelveli, India, pp. 1069-1073, 2019.
[CrossRef] [Google Scholar] [Publisher Link]
[68] Vineet Chaoji, Rajeev Rastogi, and Gourav Roy, “Machine Learning in the Real World,” Proceedings of the VLDB Endowment, vol. 9, no. 13, pp. 1597-1600, 2016.
[CrossRef] [Google Scholar] [Publisher Link]
[69] George Tzanis et al., “Modern Applications of Machine Learning,” Proceedings of the 1st Annual SEERC Doctoral Student Conference, vol. 1, no. 1, pp. 1-10, 2006.
[Google Scholar]
[70] Shahid Tufail et al., “Advancements and Challenges in Machine Learning: A Comprehensive Review of Models, Libraries, Applications, and Algorithms,” Electronics, vol. 12, no. 8, pp. 1-43, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[71] Hafsa Habehh, and Suril Gohel, “Machine Learning in Healthcare,” Current Genomics, vol. 22, no. 4, pp. 291-300, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[72] Qi An et al., “A Comprehensive Review on Machine Learning in Healthcare Industry: Classification, Restrictions, Opportunities and Challenges,” Sensors, vol. 23, no. 9, pp. 1-21, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[73] Yang Lu, and Li Da Xu, “Internet of Things (IoT) Cybersecurity Research: A Review of Current Research Topics,” IEEE Internet of Things Journal, vol. 6, no. 2, pp. 2103-2115, 2018.
[CrossRef] [Google Scholar] [Publisher Link]
[74] Yoram Reich, and S.V. Barai, “Evaluating Machine Learning Models for Engineering Problems,” Artificial Intelligence in Engineering, vol. 13, no. 3, pp. 257-272, 1999.
[CrossRef] [Google Scholar] [Publisher Link]
[75] Yoram Reich, “Machine Learning Techniques for Civil Engineering Problems,” Computer-Aided Civil and Infrastructure Engineering, vol. 12, no. 4, pp. 295-310, 1997.
[CrossRef] [Google Scholar] [Publisher Link]
[76] Huu-Tai Thai, “Machine Learning for Structural Engineering: A State-of-the-Art Review,” Structures, vol. 38, pp. 448-491, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[77] Guannan Huang et al., “Application of Machine Learning in Material Synthesis and Property Prediction,” Materials, vol. 16, no. 17, pp. 1-30, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[78] Sabrina C. Shen et al., “Computational Design and Manufacturing of Sustainable Materials through First-Principles and Materiomics,” Chemical Reviews, vol. 123, no. 5, pp. 2242-2275, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[79] Baiyu Lu, “The Application of Machine Learning in Chemical Engineering: A Literature Review,” Proceedings of the 9th International Conference on Humanities and Social Science Research, Atlantis Press, pp. 57-66, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[80] Periklis Gogas, and Theophilos Papadimitriou, “Machine Learning in Economics and Finance,” Computational Economics, vol. 57, no. 1, pp. 1-4, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[81] Komal et al., Opportunities and Challenges of AI/ML in Finance, The Impact of AI Innovation on Financial Sectors in the Era of Industry 5.0, pp. 238-260, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[82] Konstantinos G. Liakos et al., “Machine Learning in Agriculture: A Review,” Sensors, vol. 18, no. 8, pp. 1-29, 2018.
[CrossRef] [Google Scholar] [Publisher Link]
[83] Wenxiang Liu et al., “Applications of Machine Learning in Computational Nanotechnology,” Nanotechnology, vol. 33, no. 16, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[84] Avnish Pareek et al., “Nanotechnology for Green Applications: How Far on the Anvil of Machine Learning!,” Biobased Nanotechnology for Green Applications, pp. 1-38, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[85] Rahul Rai et al., “Machine Learning in Manufacturing and Industry 4.0 Applications,” International Journal of Production Research, vol. 59, no. 16, pp. 4773-4778, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[86] Tingting Chen et al., “Machine Learning in Manufacturing Towards Industry 4.0: From ‘For Now’ to ‘Four-Know’,” Applied Sciences, vol. 13, no. 3, pp. 1-32, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[87] Massimo Bertolini et al., “Machine Learning for Industrial Applications: A Comprehensive Literature Review,” Expert Systems with Applications, vol. 175, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[88] Thorsten Wuest et al., “Machine Learning in Manufacturing: Advantages, Challenges, and Applications,” Production & Manufacturing Research, vol. 4, no. 1, pp. 23-45, 2019.
[CrossRef] [Google Scholar] [Publisher Link]
[89] Sam Yang, Bjorn Vaagensmith, and Deepika Patra, “Power Grid Contingency Analysis with Machine Learning: A Brief Survey and Prospects,” Resilience Week, Salt Lake City, UT, USA, pp. 119-125, 2020.
[CrossRef] [Google Scholar] [Publisher Link]
[90] Kuldeep Singh Kaswan, Jagjit Singh Dhatterwal, and Rudra Pratap Ojha, AI in Personalized Learning, Advances in Technological Innovations in Higher Education, CRC Press, pp. 103-117, 2024.
[CrossRef] [Google Scholar] [Publisher Link]
[91] Md. Zahurul Haque, “E-Commerce Product Recommendation System Based on ML Algorithms,” arXiv Preprint, 2024.
[CrossRef] [Google Scholar] [Publisher Link]
[92] Badrul Sarwar et al., “Analysis of Recommendation Algorithms for E-Commerce,” Proceedings of the 2nd ACM Conference on Electronic Commerce, pp. 158-167, 2000.
[CrossRef] [Google Scholar] [Publisher Link] 
[93] Yizhao Ni et al., “An End-to-End Machine Learning System for Harmonic Analysis of Music,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 6, pp. 1771-1783, 2012.
[CrossRef] [Google Scholar] [Publisher Link]
[94] Iria Santos et al., “Artificial Neural Networks and Deep Learning in the Visual Arts: A Review,” Neural Computing and Applications, vol. 33, pp. 121-157, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[95] Ravil I. Mukhamediev et al., “Review of Artificial Intelligence and Machine Learning Technologies: Classification, Restrictions, Opportunities and Challenges,” Mathematics, vol. 10, no. 15, pp. 1-25, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[96] Enrico Barbierato, and Alice Gatti, “The Challenges of Machine Learning: A Critical Review,” Electronics, vol. 13, no. 2, pp. 1-30, 2024.
[CrossRef] [Google Scholar] [Publisher Link]
[97] Eric Horvitz, “Machine Learning, Reasoning, and Intelligence in Daily Life: Directions and Challenges,” Proceedings of, Microsoft, vol. 360, 2006.
[Google Scholar]
[98] Niki Parmar et al., “Image Transformer,” Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, pp. 4055-4064, 2018.
[Google Scholar] [Publisher Link]
[99] Jung Min Ahn, Jungwook Kim, and Kyunghyun Kim, “Ensemble Machine Learning of Gradient Boosting (XGBoost, LightGBM, CatBoost) and Attention-Based CNN-LSTM for Harmful Algal Blooms Forecasting,” Toxins, vol. 15, no. 10, pp. 1-15, 2023.
[CrossRef] [Google Scholar] [Publisher Link]