PerfDetectiveAI - Performance Gap Analysis and Recommendation in Software Applications

International Journal of Computer Science and Engineering
© 2023 by SSRG - IJCSE Journal
Volume 10 Issue 5
Year of Publication: 2023
Authors: Vivek Basavegowda Ramu

How to Cite?

Vivek Basavegowda Ramu, "PerfDetectiveAI - Performance Gap Analysis and Recommendation in Software Applications," SSRG International Journal of Computer Science and Engineering, vol. 10, no. 5, pp. 40-46, 2023. Crossref, https://doi.org/10.14445/23488387/IJCSE-V10I5P106

Abstract:

This research introduces PerfDetectiveAI, a conceptual framework for performance gap analysis and recommendation in software applications. For software developers, retaining a competitive edge and delivering exceptional user experiences depend on maximizing application performance, yet the complexity of identifying performance gaps and devising effective improvement strategies calls for cutting-edge approaches. PerfDetectiveAI applies modern machine learning (ML) and artificial intelligence (AI) techniques to monitor performance measurements and identify areas of underperformance in software applications. By employing sophisticated algorithms and advanced data analysis methodologies, the framework is intended to help software developers and performance engineers enhance application performance and raise system productivity. Drawing on theoretical foundations from AI, ML, and software engineering, PerfDetectiveAI envisions a system capable of uncovering subtle performance discrepancies and identifying potential bottlenecks. By integrating advanced algorithms, statistical modelling, and predictive analytics, it aims to provide practitioners with data-driven recommendations that guide their decision-making. While PerfDetectiveAI is currently at the conceptual stage, this paper outlines the framework's fundamental principles, underlying methodologies, and envisioned workflow. By introducing this conceptual framework, we hope to encourage further research and development in AI-driven performance optimization and to lay the foundation for future advances in the pursuit of software excellence.
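Because PerfDetectiveAI is presented only at the conceptual stage, the paper does not prescribe concrete algorithms. The minimal Python sketch below illustrates one plausible realization of the envisioned workflow, monitoring performance measurements, flagging underperforming intervals, and mapping them to data-driven recommendations. The metric names, the IsolationForest detector, and the rule-based recommendation step are assumptions introduced here for illustration; they are not part of the published framework.

# Illustrative sketch only: the detector choice and metric names are assumptions,
# since PerfDetectiveAI is described conceptually rather than as an implementation.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-minute performance samples: [response_time_ms, cpu_pct, error_rate]
rng = np.random.default_rng(42)
baseline = rng.normal(loc=[120.0, 45.0, 0.5], scale=[15.0, 5.0, 0.2], size=(500, 3))
degraded = rng.normal(loc=[480.0, 92.0, 4.0], scale=[40.0, 3.0, 1.0], size=(10, 3))
samples = np.vstack([baseline, degraded])

# Step 1: learn the "normal" performance envelope and flag underperforming intervals.
detector = IsolationForest(contamination=0.02, random_state=0).fit(samples)
flags = detector.predict(samples)          # -1 marks an anomalous interval
anomalies = samples[flags == -1]

# Step 2: map each flagged interval to a coarse recommendation; a stand-in for the
# statistical modelling and predictive analytics the framework envisions.
def recommend(sample):
    response_ms, cpu_pct, error_rate = sample
    if cpu_pct > 85:
        return "CPU saturation: consider horizontal scaling or profiling hot code paths"
    if error_rate > 2.0:
        return "Elevated error rate: inspect recent deployments and dependency health"
    if response_ms > 300:
        return "High latency: review database queries, caching, and external calls"
    return "Anomaly detected without an obvious single cause: investigate further"

for sample in anomalies[:5]:
    print(f"{sample.round(1)} -> {recommend(sample)}")

In a fuller realization, the hand-written rules would be replaced by learned models that rank probable root causes and recommendations from historical performance data.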

Keywords:

Performance Gap Analysis, Recommendation Systems, Software Applications, Artificial Intelligence (AI), Machine Learning (ML).
