Clinical and Research News

As Psychiatry Confronts AI, Human Connection Still Prime

Published Online:https://doi.org/10.1176/appi.pn.2018.11a10

Abstract

Artificial intelligence offers both promises and challenges as it becomes increasingly prevalent.

The accelerating embrace of diagnostic algorithms and artificial intelligence (AI) in psychiatry offers both promise and peril, highlighting the need for psychiatrists to stay patient-focused and not let technology displace the human element.

Graphic: Tech Time
Joel Silverman

An algorithm is a set of rules or instructions given to an AI program or other machine to help it learn on its own. While not all algorithms are AI, all AI relies on algorithms; the primary distinction is that AI refers to the ability of a system to learn in response to data. AI is what enables “machine learning”—that is, the ability of a machine to learn from experience—in contrast to simply responding to a specific, static program (Psychiatric News, http://apapsy.ch/AI).
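The distinction between a static program and a learning one can be sketched in a few lines of code. This is an illustrative toy (not from the article, and not a clinical tool): a fixed rule hard-codes its decision threshold, while the “learning” version derives its threshold from observed data, so its behavior changes as the data changes. All names and numbers are hypothetical.

```python
def static_flag(score: float) -> bool:
    """Static program: the decision threshold is fixed in the code."""
    return score > 10.0

def learn_threshold(scores: list[float]) -> float:
    """A minimal 'learning' step: derive the threshold from data
    (here, simply the mean of previously observed scores)."""
    return sum(scores) / len(scores)

# Hypothetical past observations the system "learns" from.
training_scores = [4.0, 6.0, 8.0, 14.0, 18.0]
learned = learn_threshold(training_scores)  # mean = 10.0 for this data

def learned_flag(score: float) -> bool:
    """Learned program: same rule shape, but the threshold came from data."""
    return score > learned

print(static_flag(12.0))   # True
print(learned_flag(9.0))   # False
```

Real machine-learning systems replace the mean with far richer statistical models, but the point the passage makes carries over directly: the learned behavior is only as good as the data it was derived from.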

The possibility that AI systems may someday supersede the ability of psychiatrists in diagnosis and treatment recommendations poses a real challenge to the field, remarked John Luo, M.D. Luo is a clinical professor of psychiatry and director of the psychiatry residency training program at the University of California, Riverside, and an expert in medical informatics.

“We as a field must remain advocates for our patients and maintain the human connection,” he said. “We need to embrace technology but also treat it with a critical assessment in terms of risks and benefits for our patients, just as we do with medications or psychotherapy.”

John Torous, M.D., is a psychiatry instructor at Harvard Medical School and director of the Digital Psychiatry Division at Beth Israel Deaconess Medical Center. Torous sits on the Ethics Committee of the Society of Behavioral Medicine and is co-chair of the APA Ad Hoc Work Group on Access Through Innovation in Psychiatric Care.

“AI is only as powerful as the data behind it and people using it,” said Torous. The risk, he continued, is that if no thought is given to how these systems are being trained, biases may be amplified. Torous gave an alarming example of how Tay, an online Microsoft AI chatbot, developed a racist and xenophobic personality based on user input.

It is critical that psychiatry not minimize the importance of the therapeutic alliance, said Torous. “We know that the therapeutic alliance—the bond between a psychiatrist and a patient—is one of the strongest predictors of successful treatment outcomes.” But at this time, little is known about how a digital therapeutic alliance may work when there is no direct patient-psychiatrist interaction. While technology can increase access to mental health tools, we need to determine how well they work and for whom, said Torous.

Despite such concerns, it is critical that the field “be curious and have an open mind,” said Torous. “We need to ask the same questions and hold AI to the same standards as any other medical tool—how do we know that it is safe, effective, and useful and makes a meaningful difference?”

Photo: John Torous

“AI is only as powerful as the data behind it and people using it.” —John Torous, M.D.

Arshya Vahabzadeh, M.D., is former chair of the APA Council on Communications and is involved in the commercialization of assistive technologies as chief medical officer for Brain Power LLC, which develops and markets computerized glasses for people with autism and other brain-related challenges.

As psychiatrists increasingly confront AI diagnostic systems, they must be aware that the quality of data determines the accuracy of the outcome, he noted. “If you are analyzing things without context and have algorithms that have some intrinsic bias in them and don’t appreciate culture, that could really impact your findings,” he warned. When he hears about amazing new algorithms, he is skeptical. “As physicians, we have to be extra cautious and extra mindful about what our patients can use and the potential risks to their mental health. As physicians, our first duty is not to do any harm.”

There is no doubt that “the technology is going to continue to advance, computing power is going to get better, we’re going to have improved software, and we’re going to have larger datasets,” he said. Given this reality, psychiatrists will need to determine how they can be more involved in determining the appropriate role of AI systems in clinical care, he said.

Jamie Feusner, M.D., is a psychiatry professor at the Semel Institute for Neuroscience and Human Behavior at the University of California, Los Angeles, and senior author of a study on combining algorithms and brain imaging to predict treatment response in obsessive-compulsive disorder.

Feusner is optimistic about the “upside potential” for AI algorithms in mental health and thinks the downsides “can be managed and are probably a bit overblown.”

AI algorithms can “cast a very wide net and identify people who may be at risk” so they can be evaluated and diagnosed properly, he said. The risk, he continued, is the public’s overreliance on such tools without recognizing that they are just signposts pointing to whether in-person clinical expertise is needed. ■