Item type: Article, Open Access

Trustworthy enough? Examining trustworthiness assessments of large language model-based medical agents

Abstract

This research advances trust theory by examining factors shaping the development of a trustor’s perceived trustworthiness in the context of real-world interactions with a large language model-driven virtual doctor (VD). Employing a qualitative approach to elaborate the trustworthiness assessment model, we conducted 51 interviews with 65 participants. Our findings reveal a heterogeneity in the trustworthiness perceptions of and reported trust in VDs, ranging from a complete absence to a complete presence of trust, with many participants expressing conditional trust. The key factors contributing to this heterogeneity were participants’ benchmarks for trustworthiness, naïve theories, risk–benefit assessments, individual standards, and strategies for cue detection and utilization in assessing the trustworthiness of the VD. Our findings also highlight the crucial influence of third-party involvement in artificial intelligence system development and testing on trustworthiness assessments. These insights underscore the trustworthiness assessment model’s utility in understanding trust development processes.

Metadata

Philipps-Universität Marburg
Schlicker, Nadine; Lechner, Fabian; Wehrle, Katja; Greulich, Berit; Hirsch, Martin C.; Langer, Markus: Trustworthy enough? Examining trustworthiness assessments of large language model-based medical agents. In: Technology, Mind, and Behavior, 6(2), 1–29.

License

Except where otherwise noted, this item's license is described as Attribution-NoDerivatives 4.0 International.