Trustworthy enough? Examining trustworthiness assessments of large language model-based medical agents
Publisher
American Psychological Association
Abstract
This research advances trust theory by examining factors shaping the development of a trustor’s perceived trustworthiness in the context of real-world interactions with a large language model-driven virtual doctor (VD). Employing a qualitative approach to elaborate the trustworthiness assessment model, we conducted 51 interviews with 65 participants. Our findings reveal a heterogeneity in the trustworthiness perceptions of and reported trust in VDs, ranging from a complete absence to a complete presence of trust, with many participants expressing conditional trust. The key factors contributing to this heterogeneity were participants’ benchmarks for trustworthiness, naïve theories, risk–benefit assessments, individual standards, and strategies for cue detection and utilization in assessing the trustworthiness of the VD. Our findings also highlight the crucial influence of third-party involvement in artificial intelligence system development and testing on trustworthiness assessments. These insights underscore the trustworthiness assessment model’s utility in understanding trust development processes.
Metadata
Philipps-Universität Marburg
License
Except where otherwise noted, this item's license is described as Attribution-NoDerivatives 4.0 International
