Between Human Trust and Algorithmic Prediction: Designing Intelligent Systems Using XAI and Modern Big Data Platforms

Ali Hussein Khalaf Al-Sammerraie, Ministry of Education, Directorate General of Education Diyala, Baquba, Iraq
Keywords: Explainable AI (XAI), Big Data Analytics, Decision Support Systems, Business Intelligence, SHAP, XGBoost, Intelligent Systems

Abstract

Big data and artificial intelligence have changed how organizations make decisions: businesses increasingly rely on automated decision-making to improve performance, manage risk, and shape business strategy. However, the complexity and opacity of AI models often erode user trust, particularly in high-stakes industries such as finance, healthcare, and customer analytics. To address this problem, this research engineers an intelligent analytical system that bridges algorithmic accuracy and human interpretability by integrating explainable AI (XAI) with modern big data platforms, using a layered architecture optimized for scalable, real-time deployment in industrial environments. The architecture comprises data ingestion and pre-processing with Spark and Pandas, model training, an explainability layer, and a user-facing interface built with Streamlit and FastAPI. We validated the proposed approach on a real customer churn prediction case, showing that it delivers competitive predictive accuracy together with transparent decisions. The experimental results indicate that SHAP enhances users' understanding of and trust in AI-based decisions. This work offers a scalable, interpretable, and practical UAD-based approach to deploying intelligent decision support systems in enterprise settings, contributing to the literature on trustworthy AI.
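To make the explainability layer concrete, the following is a minimal pure-Python sketch of the Shapley attribution idea that SHAP implements at scale. The feature names, weights, and baseline values are illustrative assumptions, not the paper's model: a toy linear churn scorer stands in for the trained gradient-boosting model, and exact Shapley values are computed by enumerating feature coalitions.

```python
from itertools import combinations
from math import factorial

def predict(x):
    # Hypothetical linear churn scorer; weights are illustrative only.
    w = {"tenure": -0.04, "monthly_charges": 0.02, "support_calls": 0.3}
    return sum(w[f] * x[f] for f in w) + 0.5

def shapley_values(instance, baseline):
    """Exact Shapley attributions: features absent from a coalition
    are replaced by their baseline (e.g. dataset-mean) values."""
    feats = list(instance)
    n = len(feats)

    def v(subset):
        x = {f: (instance[f] if f in subset else baseline[f]) for f in feats}
        return predict(x)

    phi = {}
    for f in feats:
        others = [g for g in feats if g != f]
        total = 0.0
        for k in range(len(others) + 1):
            for s in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(set(s) | {f}) - v(set(s)))
        phi[f] = total
    return phi

instance = {"tenure": 2, "monthly_charges": 80, "support_calls": 5}
baseline = {"tenure": 30, "monthly_charges": 60, "support_calls": 1}
phi = shapley_values(instance, baseline)

# Efficiency property: attributions sum to f(x) - f(baseline).
assert abs(sum(phi.values()) - (predict(instance) - predict(baseline))) < 1e-9
```

Per-feature attributions of this kind are what the system surfaces to end users: each prediction is decomposed into additive feature contributions, so an analyst can see, for example, that a short tenure and frequent support calls pushed a customer's churn score upward. In practice, the `shap` library's TreeExplainer computes these values efficiently for tree ensembles such as XGBoost rather than by subset enumeration.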

Published
2025-08-09
How to Cite
Al-Sammerraie, A. H. K. (2025). Between Human Trust and Algorithmic Prediction: Designing Intelligent Systems Using XAI and Modern Big Data Platforms. CENTRAL ASIAN JOURNAL OF MATHEMATICAL THEORY AND COMPUTER SCIENCES, 6(4), 747-757. Retrieved from https://cajmtcs.centralasianstudies.org/index.php/CAJMTCS/article/view/806