
APA Urges Caution About Incorporating AI Into Clinical Practice

Abstract

The use of artificial intelligence for clinical decision-making remains risky and raises concerns about accuracy, bias, privacy, and confidentiality.

Photo: Floating image of a computer with the letters “AI” superimposed. iStock/Vertigo3d

Following the release of ChatGPT, artificial or augmented intelligence (AI) has been in the news more than ever, and it is already being employed in some limited ways in medicine and psychiatry.

However, a core feature of AI is that it is built on machine learning: tools like ChatGPT are continually learning and, therefore, continually evolving, making it difficult to define a static role for AI in psychiatric practice now or in the future. Who knows how it may be used in five years?

For now, it is an area with little or no regulatory guidance or evidence base; moreover, there are safety and efficacy concerns, as well as the potential for bias and discrimination in how AI is trained and employed.

For these and other reasons, APA is urging members to approach the use of AI with caution. “Given the regulatory grey area, expansive data use practices of many platforms, and lack of an evidence base currently surrounding many AI applications in health care, clinicians need to be especially cautious about using AI-driven tools when making decisions, feeding any patient data into AI systems, or recommending AI-driven technologies as treatments,” according to an APA advisory. The advisory, titled “The Basics of Augmented Intelligence: Some Factors Psychiatrists Need to Know Now,” was written jointly by APA’s Office of General Counsel and Division of Policy, Programs, and Partnerships.

The advisory provides a general overview of AI. It does not contain legal advice, is not intended to be comprehensive, and does not cover all relevant aspects of AI. In addition, it is not a statement of APA’s position on AI or APA’s role in the future of AI.

“Some potential uses of AI within health care are to automate administrative tasks such as billing, scheduling, and basic patient communications,” according to the advisory. “AI could be used to take notes for clinicians, provide indicators of potential diagnoses, and deliver other decision-support interventions. AI tools are increasingly used by health systems to automate documentation in electronic health records and trawl clinical data for risk and quality indicators, by payers to automate coverage decisions, and by both consumer and clinical technology companies to replicate human speech in ‘chatbots’ and other interactive modalities.”

AI is also increasingly being considered and deployed in patient-facing capacities (psychotherapy and care navigation chatbots), but these kinds of interventions lack an evidence base around quality, safety, and effectiveness. APA’s App Evaluation Model can be consulted to help assess key details about an app or other technology.

  • Effectiveness and safety: Generative AI systems can promulgate biased information and have been found to fabricate information (for instance, inventing citations to peer-reviewed medical texts). Physicians are responsible for the care they provide and can be liable for treatment decisions made in reliance on AI that result in patient harm. AI is a tool, not a therapy, and physicians are ultimately responsible for clinical outcomes even when they are guided by AI.

  • Risk of bias and discrimination: Based on the data on which large language models (LLMs, like ChatGPT) are trained, these models run a significant risk of incorporating existing bias into clinical decision-making. For instance, AI models that listen to patient visits and assist in notetaking may not have adequate cultural competencies to take into account factors such as hearing impairments, accents, or verbal cues and may propagate disparities that impact care. Racial and other biases in AI-driven systems can be introduced and propagated as a result of structural discrimination affecting outcomes in specific patient populations.

  • Transparency: Patients have an expectation of honesty from their physicians, and thus “psychiatrists should strive to provide complete information to patients about their health and all aspects of their care, unless there are strong contravening cultural factors or overriding therapeutic factors such as risk of harm to the patient or others that would make full disclosure medically harmful” (see Topic 3.2.2. in the APA Commentary on Ethics in Practice). In fulfilling this ethical responsibility of honesty, physicians should ensure that they are transparent with patients about how AI is being used in their practice, particularly if AI is acting in a “human” capacity.

  • Protecting patient privacy: Although there is no regulation specific to large language models in the United States, existing regulatory frameworks still apply. For example, any use of AI in your practice must comply with HIPAA and state requirements protecting the confidentiality of medical information.

    “We strongly recommend that clinicians avoid entering any patient data into generative AI systems like ChatGPT,” according to the advisory. “The terms and conditions of many of these AI tools provide access to and use of any information put into them, so entering patients’ medical information into an LLM could violate a physician’s obligations under HIPAA.”

The advisory concluded: “Overall, physicians should approach AI technologies with caution, particularly being aware of potential biases or inaccuracies; ensure that they are continuing to comply with HIPAA in all uses of AI in their practices; and take an active role in oversight of AI-driven clinical decision support, viewing AI as a tool intended to augment rather than replace clinical decision-making.” ■