
Microsoft AI Expert: APA, MH Professionals Must Help Guide Future of AI and Mental Health

Abstract

The future of artificial intelligence and its impact on psychiatry was prominent among the topics discussed at APA’s Board of Trustees meeting in March.


Psychiatrists and mental health professionals need to proactively help shape the future of artificial intelligence (AI) as it relates to psychiatric practice—or AI may end up shaping psychiatric practice instead.

That’s the message Jina Suh, Ph.D., principal researcher in the Human Understanding and Empathy Group at Microsoft Research, brought to APA’s Board of Trustees at its March meeting in Washington, D.C. She was joined in her presentation by Tim Althoff, Ph.D., an assistant professor of computer science at the University of Washington. He directs the Behavioral Data Science Group, which works on research related to AI, language models, and their application to mental health.

Suh described a future in which generative AI—machine learning systems, such as ChatGPT, that generate content derived from vast amounts of data—will loom large in all areas of psychiatric training and practice. But it is a future that APA and allied mental health organizations have the opportunity to help mold and direct. (At the meeting, trustees approved the Position Statement on the Role of Augmented Intelligence in Clinical Practice and Research; see box below.)

“Psychiatrists must thoughtfully and proactively envision the future of AI in mental health to support the patients and communities they serve and to train the next generation of psychiatrists with AI literacy,” Suh said.

Trustees Approve Cautionary Statement on AI

Artificial, or augmented, intelligence offers opportunities to improve quality of care by clinicians—but also comes with unacceptable risks of biased or substandard care or violations of privacy and informed consent, according to a position statement approved by the APA Board of Trustees in March.

AI can assist in clinical documentation, suggest care plans and lifestyle modifications, identify potential diagnoses and risks from medical records, and automate elements of billing and prior authorization. It may also be useful in detecting potential medical errors or systemic quality issues, according to the statement.

But oversight of and accountability for the role of AI-driven technologies in clinical care are critical. The position statement notes that the European Union’s AI Act, the first major effort to regulate AI systems, established a regulatory framework that sorts AI applications into risk categories and assigns oversight requirements according to risk level.

The position statement asserts the following:

  • AI should augment treatment and should not replace clinicians.

  • Patients should be educated and informed, in a culturally and linguistically appropriate way, if clinical decisions are being driven by AI.

  • AI-driven systems must safeguard health information, and information should not be used for unauthorized purposes.

  • AI-driven systems used in health care should be labeled as AI-driven and categorized in a standardized and transparent way for practitioners as posing “minimal,” “medium,” “high,” or “unacceptable” risk to patients.

  • AI-driven systems should incorporate existing evidence-based practices and standards of care, and AI developers should be held accountable and liable for injury caused by their failure to do so.

  • Research about AI must include investigation regarding algorithmic bias, ethical use, mental health equity, public trust, and effectiveness.

  • The input of people with lived experience of mental illness and substance use disorders should be solicited in the design and implementation of AI systems for treatment purposes.

The position statement is posted on APA’s website.

The growth of generative AI mental health products is projected to be enormous. Towards Healthcare, a global health care consulting firm, estimates that the market for mental health “chatbots” will surpass $6.51 billion by 2032. This growth is being driven by the shortage of mental health professionals and the demand for scalable, accessible, convenient, and affordable mental health services, Suh told trustees.

But, predictably, there are many ways AI can go wrong when it is untested or used for purposes other than those for which it was tested. A report by the Center for Countering Digital Hate found that popular AI tools generate harmful content about 41% of the time when prompted to provide information on eating disorders.

Althoff, in comments to Psychiatric News, noted that current generative AI technology is making overly simplistic assumptions and said his lab has been working on addressing these shortcomings. “Current AI technology too often assumes that a third person, without any psychiatric expertise, often embedded in a different socio-cultural context, can judge what is harmful,” he said. “That doesn’t make any sense, and psychiatrists have known this for a long time, which is why the best version of this technology will come from multidisciplinary teams that integrate [their] expertise.”

Suh said that APA and individual psychiatrists should collaborate with other mental health professionals and with organizations that have a vested interest in the technology to do the following:

  • Collect and share AI failures, along with strategies for mitigating the harm those failures may cause.

  • Develop guidelines for how AI is applied to psychiatry and to mental health–related products accessible to the public.

  • Develop a checklist for guiding the design of chatbots.

  • Develop a framework for evaluating AI safety, including long-term effects on mental health professionals and patients, especially children.

Suh explained that popular products such as ChatGPT are built on foundation models, or “general-purpose AI systems.” These are capable of a range of general tasks (such as text synthesis, image manipulation, and audio generation). Notable examples of foundation models are OpenAI’s GPT-3 and GPT-4, which underpin the conversational chat agent ChatGPT.

“Because foundation models can be built ‘on top of’ to develop different applications for many purposes, this makes them difficult—but important—to regulate,” according to the Ada Lovelace Institute, an independent AI research institute in the United Kingdom. “When foundation models act as a base for a range of applications, any errors or issues at the foundation-model level may impact any applications built on top of that foundation model.”
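To make that layering concrete, the sketch below shows how a downstream mental health application might sit on top of a general-purpose foundation model through an API call, adding only a thin prompt layer of its own. It is a minimal illustration, assuming the OpenAI Python SDK (v1.x), the model name "gpt-4," and a hypothetical system prompt; any hallucination or bias in the underlying model passes straight through to the application.

```python
# Minimal sketch: a mental health "application" layered on a general-purpose
# foundation model. Assumes the OpenAI Python SDK (v1.x) and the illustrative
# model name "gpt-4"; the system prompt is hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a supportive wellness assistant. You do not diagnose or "
    "prescribe; you encourage users to contact a licensed clinician."
)


def wellness_chat(user_message: str) -> str:
    """Send one turn to the foundation model with an application-level system prompt.

    The application adds only this thin prompt layer, so any error or bias
    at the foundation-model level is inherited by the application.
    """
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(wellness_chat("I've been feeling overwhelmed lately."))
```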


Incorporation of AI into clinical workflows should augment human participation, not replace it, said Jina Suh, Ph.D., in a presentation to the Board of Trustees.

Suh said the foundation models upon which AI applications are built are far from perfect: they are prone to “hallucinations” (unrealistic, false, or nonexistent content) as well as misinformation and bias. For these reasons, the models require the oversight of professionals with a stake in their use, including mental health professionals, who can think strategically about where, when, and how AI should be integrated into various settings.

Suh posed some questions that are ripe for the input of psychiatry:

  • How can conversational data be mined by AI to improve patient-provider communication, patient understanding of diagnosis and treatment, and/or utilization of patient-generated data for personalized treatment?

  • How can humans collaborate with AI to augment the therapeutic power of human therapists?

  • How can AI be used to support reflective thinking by clinicians and the training of new physicians?

“There are exciting new opportunities in treatment delivery, especially when we focus on the generative capabilities that can aid in personalized brainstorming and planning, act as provocateurs to challenge thoughts or behaviors, or participate in role-playing and skills practice,” Suh told trustees.

A startling example is a simulation model that uses generative AI to provide real-time feedback to clinicians practicing dialectical behavior therapy (DBT). Suh and Althoff were coauthors of a report on the model posted on arXiv, an open-access archive for scholarly articles in physics, mathematics, computer science, statistics, and other fields.

“We built a system that performs bespoke simulation and role-play and gives expert level feedback through generative AI in the context of teaching interpersonal effectiveness skills in DBT,” Althoff told Psychiatric News.
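The general pattern behind such a system can be sketched briefly: one generative model call plays a simulated client in the role-play, and a second call evaluates the trainee’s reply against a DBT interpersonal effectiveness rubric. The code below is a minimal illustration of that pattern, assuming the OpenAI Python SDK and placeholder prompts; it is not the authors’ published system, and the persona, rubric, and model name are hypothetical.

```python
# Minimal sketch of the role-play-plus-feedback pattern described above:
# one call simulates a client persona, a second call gives skills feedback.
# Assumes the OpenAI Python SDK (v1.x); the prompts, persona, and rubric are
# hypothetical and are not taken from the published system.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CLIENT_PERSONA = (
    "Role-play a client who needs to ask a roommate to respect quiet hours. "
    "Stay in character and respond in one or two sentences."
)

FEEDBACK_RUBRIC = (
    "You are a DBT skills trainer. Evaluate the trainee's reply for use of the "
    "DEAR MAN interpersonal effectiveness skill (Describe, Express, Assert, "
    "Reinforce, stay Mindful, Appear confident, Negotiate). Give brief, "
    "concrete feedback."
)


def _complete(system: str, user: str) -> str:
    """Single chat completion with a system prompt; illustrative model name."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return response.choices[0].message.content


def practice_turn(trainee_reply: str, scenario: str) -> tuple[str, str]:
    """Return (simulated client's next utterance, feedback on the trainee's reply)."""
    client_utterance = _complete(CLIENT_PERSONA, scenario + "\nTrainee: " + trainee_reply)
    feedback = _complete(FEEDBACK_RUBRIC, "Scenario: " + scenario + "\nTrainee reply: " + trainee_reply)
    return client_utterance, feedback


if __name__ == "__main__":
    utterance, feedback = practice_turn(
        trainee_reply="I feel frustrated when it's loud at night; could we agree on quiet hours after 10?",
        scenario="The trainee is practicing asking a roommate for quiet hours.",
    )
    print("Simulated client:", utterance)
    print("Feedback:", feedback)
```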

In her remarks to the Board, Suh said AI should be applied to clinical practice selectively. “Because off-the-shelf generative AI models have demonstrated only surface-level knowledge of psychotherapy, it is important to [apply AI] strategically to select aspects of treatment rather than attempting to replace therapy or treatments entirely.”

She emphasized that the incorporation of AI into clinical workflows, whatever the setting, needs to enhance, not replace, human participation. “When considering AI innovation in clinical workflows, it is important to design for human augmentation through collaboration, reflection, or training rather than human replacement to preserve the importance of genuine human connection that is a cornerstone of psychiatry.”

The future of AI will be astonishing, Suh said, with interactive effects on the human mind and brain that are both exciting and potentially surprising. What happens, she asked, when we have intelligence at our fingertips that completes our thoughts before those thoughts are fully formed?

“We need to anticipate and monitor short- and long-term effects of generative AI use on individuals’ cognition and mental health, including AI risks to vulnerable populations,” Suh said. “We also need to observe the impact of AI innovation in psychiatry on the psychiatric profession itself to avoid the future where mental health professionals are working on behalf of AI.”

Other Board Actions

In other business, the Board approved several recommendations from the APA Nominating Committee to increase member awareness of opportunities to serve on the Board of Trustees. These include expanding communication about elections through social media, APA’s website, and videos that can be posted on both; working with the Nominating Committee to host workshops, webinars, Q&A sessions, and other forums; and establishing mentorship opportunities between Board members and interested APA members.

Trustees also approved the following:

  • A 5% increase in member dues for 2025.

  • Participation in the FDA Total Product Life Cycle (TPLC) Advisory Program (TAP). TAP is intended to help ensure that U.S. patients have access to high-quality, safe, effective, and innovative medical devices for years to come by promoting early, frequent, and strategic communications between the FDA and medical device sponsors.

  • Reappointment to the APA Foundation Board of Directors for three-year terms of Michelle Durham, M.D., M.P.H., Ben Zobrist, Edmond Pi, M.D., and Monica Taylor-Desir, M.D., M.P.H., and appointment of Farha Abbasi, M.D.

  • Reappointment for five-year terms of Lisa Dixon, M.D., as editor of Psychiatric Services; Kimberly Yonkers, M.D., as editor of the Journal of Psychiatric Research and Clinical Practice; and Laura Roberts, M.D., as editor in chief of APA’s Publishing Book Division. ■