Human-Centered AI for Real-Time Self-Checkout Assistance: An Event-Driven Architecture with Human-in-the-Loop Decision Support for Enhanced Customer Experience and Shrink Reduction

Authors

  • Sri Harsha Konda

DOI:

https://doi.org/10.47941/jts.3523

Keywords:

Human-in-the-loop, Retail self-checkout, Shrink reduction, Event-driven architecture, Customer experience

Abstract

Purpose: Self-checkout (SCO) systems face persistent operational challenges, including delayed assistance, inconsistent triage, and false-positive intervention rates of 18 to 25 percent, contributing to industry shrink losses exceeding 112 billion dollars annually. Stores with high SCO utilization experience shrinkage 75 to 147 percent above industry averages. This paper proposes a human-centered artificial intelligence (AI) architecture grounded in explainable AI (XAI) and human-in-the-loop (HITL) principles, treating AI as decision support rather than autonomous enforcement.

Methodology: The event-driven system prioritizes assistance events using multi-factor scoring based on wait time, anomaly likelihood, and associate workload, while preserving human discretion and providing transparent explanations for all recommendations. The framework is analyzed through a theoretical lens using parameters calibrated to publicly available retail industry benchmarks.
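The multi-factor scoring described above can be illustrated with a minimal sketch. The weights, field names, and normalization cap below are illustrative assumptions, not the paper's calibrated parameters: longer waits and higher anomaly likelihood raise an event's priority, while a heavily loaded associate pool dampens it so only the most urgent events interrupt busy staff.

```python
# Hypothetical sketch of multi-factor assistance-event scoring.
# Weights and the 300-second wait cap are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class AssistanceEvent:
    wait_seconds: float    # how long the customer has been waiting
    anomaly_score: float   # model-estimated anomaly likelihood, 0..1
    associate_load: float  # fraction of associates currently busy, 0..1


def priority(event: AssistanceEvent,
             w_wait: float = 0.5,
             w_anomaly: float = 0.3,
             w_load: float = 0.2,
             max_wait: float = 300.0) -> float:
    """Blend normalized factors into a single score in [0, 1]."""
    wait_norm = min(event.wait_seconds / max_wait, 1.0)
    return (w_wait * wait_norm
            + w_anomaly * event.anomaly_score
            + w_load * (1.0 - event.associate_load))


# Rank pending events; associates still decide how to act on each one.
events = [
    AssistanceEvent(wait_seconds=240, anomaly_score=0.1, associate_load=0.5),
    AssistanceEvent(wait_seconds=60, anomaly_score=0.9, associate_load=0.5),
]
ranked = sorted(events, key=priority, reverse=True)
```

Under these assumed weights, a long wait can outrank a higher anomaly score, reflecting the framework's emphasis on customer experience alongside shrink reduction; the ranking is surfaced as a recommendation with its contributing factors, preserving associate discretion.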

Findings: Theoretical analysis suggests the framework can substantially reduce assistance wait times, decrease false alert rates, and improve detection accuracy while preserving associate decision authority.

Unique contribution to theory, practice and policy: This analysis demonstrates how responsible AI design can enhance operational efficiency without compromising human autonomy or customer experience, offering a replicable blueprint for human-centered AI deployment in consumer-facing retail environments.


Author Biography

Sri Harsha Konda

Independent Researcher


Published

2026-02-20

How to Cite

Konda, S. H. (2026). Human-Centered AI for Real-Time Self-Checkout Assistance: An Event-Driven Architecture with Human-in-the-Loop Decision Support for Enhanced Customer Experience and Shrink Reduction. Journal of Technology and Systems, 8(1), 7–34. https://doi.org/10.47941/jts.3523

Section

Articles