Integrating Human-Centered AI into the Technology Acceptance Model: Understanding AI-Chatbot Adoption in Higher Education
Abstract
Artificial intelligence (AI) is transforming education by enhancing assessment, personalizing learning, and improving administrative efficiency. However, the adoption of AI-powered chatbots in higher education remains limited, primarily due to concerns about trust, transparency, explainability, perceived control, and alignment with human values. While the Technology Acceptance Model (TAM) is widely used to explain technology adoption, it does not fully address the challenges posed by AI systems, which require human-centered safeguards. To address this gap, this study extends TAM with four Human-Centered AI (HCAI) principles: explainability, transparency, trust, and perceived control. The resulting HCAI-TAM framework was tested in an empirical study of 300 respondents who completed a structured questionnaire administered in English, and regression analysis was used to assess the relationships among the variables. The model explained 65% of the variance in behavioral intention (R² = 0.65) and 55% of the variance in usage behavior (R² = 0.55). The findings indicate that integrating HCAI principles into TAM strengthens user adoption of AI chatbots in higher education, offering both theoretical and practical contributions.
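For readers interested in how regression results of this kind are typically computed, the sketch below shows one possible two-stage ordinary least squares setup: HCAI-TAM constructs predicting behavioral intention, and behavioral intention predicting usage behavior. The column names, the CSV file, and the exact model specification are illustrative assumptions, not taken from the paper, which does not publish its dataset or estimation script.

```python
# Minimal illustrative sketch, not the authors' actual analysis.
# Assumes a hypothetical file "hcai_tam_survey.csv" with per-respondent
# construct scores (e.g., Likert-scale means) under the column names below.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("hcai_tam_survey.csv")  # assumed file, not provided by the paper

# Stage 1: TAM + HCAI constructs predicting behavioral intention (BI).
predictors_bi = [
    "perceived_usefulness", "perceived_ease_of_use",
    "explainability", "transparency", "trust", "perceived_control",
]
X_bi = sm.add_constant(df[predictors_bi])
bi_model = sm.OLS(df["behavioral_intention"], X_bi).fit()
print(bi_model.summary())               # coefficients and p-values per predictor
print("R^2 (BI):", bi_model.rsquared)   # paper reports R^2 = 0.65 for this stage

# Stage 2: behavioral intention predicting actual usage behavior (UB).
X_ub = sm.add_constant(df[["behavioral_intention"]])
ub_model = sm.OLS(df["usage_behavior"], X_ub).fit()
print("R^2 (UB):", ub_model.rsquared)   # paper reports R^2 = 0.55 for this stage
```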