Investigating Awareness and Perception of Bias in AI Driven Platforms: A Survey-Based Study
International Journal of Humanities and Social Science
© 2025 by SSRG - IJHSS Journal
Volume 12, Issue 5
Year of Publication: 2025
Authors: Aishaani Agarwal
How to Cite?
Aishaani Agarwal, "Investigating Awareness and Perception of Bias in AI Driven Platforms: A Survey-Based Study," SSRG International Journal of Humanities and Social Science, vol. 12, no. 5, pp. 97-113, 2025. Crossref, https://doi.org/10.14445/23942703/IJHSS-V12I5P110
Abstract:
Artificial Intelligence (AI) increasingly influences everyday decisions, from education and health care to employment and social media. While AI offers clear advantages in efficiency and innovation, concerns about fairness, accountability, and transparency in AI systems remain at the forefront of both news coverage and public debate. Researchers have shown that bias can be introduced at multiple points in an AI system: in the data it is trained on, in the design choices its developers make, and in the ways users interact with its decisions. They have also raised important questions about how AI systems affect society and how people respond to them. This study examines how participants perceive and experience bias in AI decision-making systems, with a focus on trust, accountability, and demographic characteristics such as age, gender, and education. Drawing on survey data from youth participants, the research investigates where participants perceive bias in AI systems, how they feel about AI making day-to-day decisions in their lives, and who they believe should be held accountable when an AI system is biased. The findings contribute to the ongoing dialogue about fairness and trust in AI and carry implications for design, governance, and public engagement.
Keywords:
Accountability, AI bias, Artificial Intelligence, Human-Computer Interaction, Perception Bias.
References:
[1] Raymond S. T. Lee, Artificial Intelligence in Daily Life, Springer, 2020.
[2] Hanguang Xiao et al., “A Comprehensive Survey of Large Language Models and Multimodal Large Language Models in Medicine,” Information Fusion, vol. 117, 2024.
[3] Chenjun Liu, and Yan Xu, “A Model for Evaluating the Effectiveness of News Dissemination under the Combination of Big Data and Artificial Intelligence,” 3rd International Conference on Data Analytics, Computing and Artificial Intelligence, Zakopane, Poland, pp. 606-611, 2024.
[4] Lama H. Nazer et al., “Bias in Artificial Intelligence Algorithms and Recommendations for Mitigation,” PLOS Digital Health, vol. 2, no. 6, pp. 1-14, 2023.
[5] Drona P. Rasali et al., “Cross-Disciplinary Rapid Scoping Review of Structural Racial and Caste Discrimination Associated with Population Health Disparities in the 21st Century,” Societies, vol. 14, no. 9, pp. 1-24, 2024.
[6] Vittoria Scatiggio, “Tackling the Issue of Bias in Artificial Intelligence to Design AI-Driven Fair and Inclusive Service Systems. How Human Biases are Breaching into AI Algorithms, with Severe Impacts on Individuals and Societies, and What Designers Can Do to Face This Phenomenon and Change for the Better,” Thesis, pp. 1-84, 2022.
[7] Haroon Sheikh, Corien Prins, and Erik Schrijvers, “AI as a System Technology,” Mission AI, pp. 85-134, 2023.
[8] Kwadwo Asante, David Sarpong, and Derrick Boakye, “On the Consequences of AI Bias: When Moral Values Supersede Algorithm Bias,” Journal of Managerial Psychology, vol. 40, no. 5, pp. 493-516, 2024.
[9] Xiaoyu Zhu et al., “Algorithm and Analytical Verification of Roller Straightening Process Model Considering Stress Inheritance Behavior,” AIP Advances, vol. 15, no. 3, pp. 1-10, 2025.
[10] Priyansh, and Amrit Kaur Saggu, “Building Trust in AI Systems: A Study on User Perception and Transparent Interactions,” International Journal on Science and Technology, vol. 16, no. 1, pp. 1-13, 2025.
[11] Fatih Bildirici, “Open-Source AI: An Approach to Responsible Artificial Intelligence Development,” Reflektif Journal of Social Sciences, vol. 5, no. 1, pp. 73-81, 2024.
[12] Philipp Brauner et al., “What Does the Public Think about Artificial Intelligence?—A Criticality Map to Understand Bias in the Public Perception of AI,” Frontiers in Computer Science, vol. 5, pp. 1-12, 2023.
[13] Dirk Ifenthaler et al., “Artificial Intelligence in Education: Implications for Policymakers, Researchers, and Practitioners,” Technology, Knowledge and Learning, vol. 29, pp. 1693-1710, 2024.
[14] Shengnan Han et al., “Aligning Artificial Intelligence with Human Values: Reflections from a Phenomenological Perspective,” AI & Society, vol. 37, pp. 1383-1395, 2022.
[15] Andrea Papenmeier, Gwenn Englebienne, and Christin Seifert, “How Model Accuracy and Explanation Fidelity Influence User Trust,” arXiv preprint, pp. 1-7, 2017.
[16] Anna Fine, Emily R. Berthelot, and Shawn Marsh, “Public Perceptions of Judges’ Use of AI Tools in Courtroom Decision-Making: An Examination of Legitimacy, Fairness, Trust, and Procedural Justice,” Behavioral Sciences, vol. 15, no. 4, pp. 1-21, 2025.
[17] Teresa Sandoval-Martin, and Ester Martínez-Sanzo, “Perpetuation of Gender Bias in Visual Representation of Professions in the Generative AI Tools DALL·E and Bing Image Creator,” Social Sciences, vol. 13, no. 5, pp. 1-17, 2024.
[18] Andrea Ferrario, and Michele Loi, “How Explainability Contributes to Trust in AI,” Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, pp. 1457-1466, 2022.
[19] Florian Pethig, and Julia Kroenung, “Biased Humans, (Un)Biased Algorithms?,” Journal of Business Ethics, vol. 183, pp. 637-652, 2023.
[20] Philipp Brauner et al., “Misalignments in AI Perception: Quantitative Findings and Visual Mapping of How Experts and the Public Differ in Expectations and Risks, Benefits, and Value Judgments,” arXiv preprint, pp. 1-27, 2024.
[21] Kasper Trolle Elmholdt et al., “The Hopes and Fears of Artificial Intelligence: A Comparative Computational Discourse Analysis,” AI & Society, vol. 40, pp. 4765-4782, 2025.
[22] M. Callahan, Algorithms Were Supposed to Reduce Bias in Criminal Justice—Do They?, Boston University Today, 2023.
[23] Ethics of Artificial Intelligence and Robotics, Stanford Encyclopedia of Philosophy, 2020. [Online]. Available: https://plato.stanford.edu/entries/ethics-ai/
[24] Ayesha Nadeem, Babak Abedin, and Olivera Marjanovic, “Gender Bias in AI: A Review of Contributing Factors and Mitigating Strategies,” Proceedings of the Australasian Conference on Information Systems, pp. 1-12, 2020.
[25] James Manyika, Jake Silberg, and Brittany Presten, “What Do We Do about the Biases in AI?,” Harvard Business Review, pp. 1-5, 2019.
[26] Jeff Larson et al., How We Analyzed the COMPAS Recidivism Algorithm, ProPublica, 2016. [Online]. Available: https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
[27] Eirini Ntoutsi et al., “Bias in Data-Driven Artificial Intelligence Systems—An Introductory Survey,” Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, vol. 10, no. 3, pp. 1-14, 2020.
[28] Ziad Obermeyer et al., “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations,” Science, vol. 366, no. 6464, pp. 447-453, 2019.
[29] Mary K. Pratt, 5 Ways AI Bias Hurts Your Business, TechTarget Enterprise AI, 2021. [Online]. Available: https://www.techtarget.com/searchenterpriseai/feature/5-ways-AI-bias-hurts-your-business
[30] Maximilian Kasy, “IZA DP No. 16944: Algorithmic Bias and Racial Inequality: A Critical Review,” IZA Discussion Papers, Institute of Labor Economics (IZA), pp. 1-27, 2024.