Technology in Psychiatry

AI in Psychiatry: What APA Members Need to Know

Abstract

Caution is the watchword, for now, when applying artificial intelligence to psychiatric practice.

Psychiatrists have been inundated with ideas and information about how artificial intelligence (AI) is going to impact—even revolutionize—the future of psychiatry. To help members understand AI better, APA hosted a webinar on the subject in August. Here, I am going to discuss some of the material presented as well as answer questions about AI that we have received from APA members.

APA uses the term “augmented intelligence” when referring to AI to focus on AI’s assistive role in augmenting human decision-making, not replacing it. Augmented or artificial intelligence (AI) has been proposed for a variety of clinical uses: assisting with documentation, automating elements of billing and prior authorizations, detecting potential medical errors, supporting literature reviews, and more. Clinicians wonder whether the technology is already available to support these tasks and how to harness it to improve their patient care and workflows. However, generative AI and other large language models (LLMs) can also propagate biased or substandard care and pose new challenges to protecting patient privacy.

The webinar was led by me; Khatiya Moon, M.D., an assistant professor of psychiatry at Zucker Hillside Hospital and a member of APA’s Committee on Mental Health Information Technology; and Abby Worthen, APA’s deputy director of digital health. In the webinar we addressed clinical, ethical, and legal considerations for AI, specifically LLMs such as ChatGPT and Google’s Bard. Here are the main takeaways from the webinar:

Clinical Considerations

  • Output from AI can be misleading or incorrect. It can draw conclusions that may lead to bias-related harm.

  • Knowing tech sources, algorithm features, and training methods may provide some insight into the accuracy of output and what biases may exist, but this information is often not disclosed by tech companies.

  • New evaluation metrics and benchmarks are needed to assess the performance and utility of specific generative AI models in psychiatry.

  • We need to educate patients on the risks of using LLMs to answer personal health questions and make clear that LLMs do not maintain confidentiality.

  • If AI is used to make clinical decisions, patients must be informed.

Ethical and Legal Considerations

  • APA urges caution in the application of untested technologies in clinical settings. Clinicians should approach AI technologies with caution, remaining aware of potential biases or inaccuracies and ensuring that they continue to comply with HIPAA in all uses of AI.

  • Physicians remain responsible for the care they provide and can be liable for treatment decisions made in reliance on AI that result in patient harm. As such, physicians should always carefully review any AI-generated output before incorporating it into a treatment plan.

  • Physicians should ensure that they are transparent with patients about how AI is being used in their practice, particularly if AI is acting in a “human” capacity.

  • Regulatory guardrails and best practices exist to protect patient privacy (that is, HIPAA best practices), including informed consent, data minimization, data security, and accountability. To utilize LLMs or generative artificial intelligence, health care entities generally need to enter into business associate agreements with technology companies to safeguard protected health information.

  • Prompts entered into LLMs are stored on company servers and are subject to the company’s privacy policy. Prompts containing private health information could be leaked or sold to third parties, compromising patient privacy; a rough data-minimization sketch follows this list.
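To make the idea of data minimization concrete, below is a rough, illustrative Python sketch of stripping a few easily recognized identifiers from text before it is pasted into an LLM prompt. It is a toy example only: the function name, patterns, and sample note are hypothetical; it catches just a handful of formats; and it is no substitute for validated de-identification tooling or a business associate agreement.

    # Illustrative only: a crude pre-prompt redaction pass (hypothetical helper).
    # Validated de-identification tools and a business associate agreement are
    # still required before protected health information leaves the clinic.
    import re

    def redact_obvious_identifiers(text: str) -> str:
        """Replace a few easily recognized identifiers with placeholder tags."""
        text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", text)              # Social Security numbers
        text = re.sub(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b", "[PHONE]", text)  # U.S. phone numbers
        text = re.sub(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]", text)       # dates such as 3/14/2024
        text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)      # email addresses
        return text

    note = "Pt called 214-555-0188 on 3/14/2024 to reschedule."
    print(redact_obvious_identifiers(note))
    # Prints: Pt called [PHONE] on [DATE] to reschedule.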

FAQs

  • Q  What are some available tools?

  • A  There are many LLMs available to the public. The most popular are ChatGPT, Google Bard, and Bing Chat powered by GPT-4. GPT4All is an open-source ecosystem of chatbots that includes uncensored models that can run locally and offline (a minimal sketch of local, offline use appears after these FAQs). Some LLMs focus on medical applications, including BioBERT, ClinicalBERT, Med-BERT, and Google’s Med-PaLM 2. There are also generative AI models that create images, video, and audio. A multitude of apps and services use generative AI to offer specific functionalities such as editing photos, creating presentation slides, summarizing journal articles, and more. Regardless of which model you try, keep the privacy considerations in mind to avoid HIPAA violations. References provided by LLMs are often fabricated, so double-check output for accuracy.

  • Q  How can we use AI to our advantage, especially regarding documentation, without violating HIPAA or patient trust?

  • A  While publicly available models can assist with documentation to a limited degree, the risks of HIPAA violations and inaccurate output are too great. Entering into a business associate agreement with a company that develops generative AI for clinical use may offer a HIPAA-compliant way to harness the technology as it continues to improve. ■
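For readers curious what “locally and offline” looks like in practice, here is a minimal Python sketch assuming the open-source gpt4all package with a model file already downloaded to the local machine; the model filename and prompt are illustrative. Because the model runs on local hardware, prompts are not sent to a vendor’s servers, but the accuracy, bias, and fabricated-reference caveats above still apply, and any output must be reviewed before clinical use.

    # Minimal sketch, assuming the open-source gpt4all Python package is installed
    # and the (illustrative) model file is already present on the local machine.
    from gpt4all import GPT4All

    # allow_download=False keeps the session fully offline; setting up a model
    # file in the first place would still require a one-time download.
    model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf", allow_download=False)

    prompt = "Summarize common side effects of SSRIs for a patient handout."
    with model.chat_session():
        reply = model.generate(prompt, max_tokens=300)

    print(reply)  # Review every statement for accuracy before any clinical use.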

APA members who have questions about AI may send them to [email protected].

Darlene King, M.D.

Darlene King, M.D., is an assistant professor in the Department of Psychiatry at UT Southwestern Medical Center, deputy medical information officer at Parkland Health, and the chair of APA’s Committee on Mental Health Information Technology. She graduated from the University of Texas at Austin with a degree in mechanical engineering prior to attending medical school and residency at UT Southwestern.