Letters to the Editor

Use of AI in Psychiatry

I appreciate the comments of Dr. Steven Hyler in the June issue on the “enablement and encouragement” of the use of large language models (like ChatGPT) in psychiatric education (“Is It Cheating to Use Chat GPT-4 in the Clinical Practice of Psychiatry?”). Discussion of this technology often operates from a lens of fear rooted in misunderstanding. As clinicians, we are well versed in the impact of technology on human beings (for example, how social media use can affect teenage patients’ moods and anxieties), yet we lack adequate training in the technological tools on which we opine. For this reason, I advocate for clinicians to learn as much as they can about AI, as it will have far-reaching implications for our education and our patients’ care.

However, ChatGPT-4 (developed by OpenAI) is only one of myriad large language models (LLMs); Google, Meta, and other companies have released their own. Our discussions about AI in clinical psychiatry should therefore reflect this diversity, investigating the efficacy of each model. Furthermore, each LLM has internal variables that shape the output it generates. The dataset on which a model is trained affects its results; if a model is trained on imperfect or biased data, it will likely produce imperfect and biased responses.

Another aspect of an LLM like ChatGPT is an adjustable setting called “temperature,” which controls how much randomness the model introduces when generating text. If the temperature is set to a high level, the model will give varying responses to the same question; at a low setting, its answers become more deterministic. Even the manner in which you “prompt” ChatGPT before a question can change its response, as the sketch below illustrates.
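To make these two levers concrete, here is a minimal sketch using the OpenAI Python client; the model name, prompt wording, and temperature values are illustrative assumptions on my part, and the client interface may differ across versions:

```python
# A minimal sketch: the same question asked at a low temperature (more
# deterministic) and a high temperature (more varied). The model name,
# prompts, and temperature values below are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

question = "Summarize first-line treatments for major depressive disorder."

for temperature in (0.2, 1.2):
    response = client.chat.completions.create(
        model="gpt-4",  # one of many LLMs; other vendors expose similar settings
        messages=[
            # The "prompt" framing (here, a system message) also shapes the answer.
            {"role": "system", "content": "You are a concise psychiatric educator."},
            {"role": "user", "content": question},
        ],
        temperature=temperature,  # higher values yield more varied responses
    )
    print(f"--- temperature={temperature} ---")
    print(response.choices[0].message.content)
```

Run repeatedly, the low-temperature call tends to return nearly identical answers, while the high-temperature call does not; changing the system message alone can likewise alter the substance of the reply.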

We should enthusiastically embrace the careful employment of AI in psychiatric education, but we should be mindful of these variables while also respecting the diversity of AI technologies. ■

DECLAN GRABB, M.D.

Chicago, Ill.