Types of Human-AI Role Development - Benefits, Harms and Risks of AI-Based Assistance from the Perspective of Professionals in Radiology
Abstract
In this paper we analyse the role development of healthcare professionals in the face of AI applications in their workplaces. The conceptual background is role development theory aligned to human-AI work settings. The empirical foundation is a case study conducted at Charité, comprising a profile analysis of survey data from radiology (N=128) and a structured content analysis of ten semi-structured interviews with professionals. The outcome is a distinction between the two most typical human-AI role concepts: (1) the AI-embracing human-AI role concept and (2) the AI-ambivalent human-AI role concept. Both types rest on the same set of antecedents: AI literacy, prior digital experience, the individual perspective on the technology, and the impact of AI on the overall change of individual tasks. This makes it possible to understand why the first type experiences benefits from human-AI role development while the second type cannot exclude personal harms. The AI-embracing role concept enhances role making with AI and incorporates AI implementation, whereas the latent risk of AI in the AI-ambivalent concept leads to role taking against the technology.
Keywords: artificial intelligence, role theory, role development, healthcare, technology acceptance
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.