Clinical and Research News

Data Mining May Help Identify Suicide Risk

Published Online: https://doi.org/10.1176/appi.pn.2018.8b6

Abstract

By analyzing hundreds of pieces of medical, behavioral, and demographic information with advanced software programs, researchers are trying to identify the people at greatest risk of suicide.

As highlighted by data released this summer by the Centers for Disease Control and Prevention, U.S. suicide rates have climbed steadily over the past decade. Despite this uptick, however, suicide remains rare overall. Even within populations considered vulnerable to suicide, such as individuals hospitalized for a psychiatric condition, most people will not attempt suicide.


To better target suicide prevention efforts to those in greatest need, researchers must first find better ways to identify patients who might be truly at risk.

Some researchers believe people with suicidal ideation may leave clues to their future intent each time they visit a doctor—akin to a trail of clinical “bread crumbs.” These investigators are turning to sophisticated software programs to mine electronic health records for patterns that indicate a forthcoming suicide attempt.

The measures scanned by these programs include known suicide-associated variables such as history of mental illness and/or history of self-harm, along with prescription drug history, demographic information, and scores on clinical assessments such as the 9-item Patient Health Questionnaire (PHQ-9).

The programs scan through a patient’s medical history and use an algorithm to develop a risk score. These scores are then used to stratify patients into suicide risk categories. Such information could help clinicians initiate conversations with patients at highest risk of suicide earlier in treatment.
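In spirit, the scoring step resembles the sketch below. The feature names, weights, and cutoffs here are invented for illustration; the published models draw on hundreds of variables and machine-learning methods, not a fixed hand-weighted formula.

```python
# Hypothetical illustration only: feature names, weights, and cutoffs are
# invented for this sketch and do not come from any published model.
RISK_WEIGHTS = {
    "prior_self_harm": 3.0,              # documented history of self-harm
    "psychiatric_hospitalization": 2.0,  # prior inpatient psychiatric stay
    "phq9_item9_positive": 2.5,          # endorsed PHQ-9 item 9 (self-harm thoughts)
    "recent_med_change": 1.0,            # recent psychotropic prescription change
}

def risk_score(record: dict) -> float:
    """Sum the weights of the risk factors present in a patient's record."""
    return sum(w for feature, w in RISK_WEIGHTS.items() if record.get(feature))

def stratify(score: float) -> str:
    """Map a raw score onto coarse risk categories (cutoffs are arbitrary)."""
    if score >= 5.0:
        return "high"
    if score >= 2.0:
        return "moderate"
    return "low"

patient = {"prior_self_harm": True, "phq9_item9_positive": True}
score = risk_score(patient)
print(score, stratify(score))  # 5.5 high
```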

Developing algorithms to identify people who may attempt suicide is not a one-formula-fits-all endeavor. Colin Walsh, M.D., an assistant professor of psychiatry and biomedical informatics at Vanderbilt University Medical Center who has developed programs to assess suicide risk in adults and adolescents, noted that each program has its own distinct parameters. His adolescent risk algorithms, for example, consider childhood attention-deficit/hyperactivity disorder and oppositional defiant disorder as part of the suicide risk profile, whereas the adult algorithms do not use these disorders.

Ronald Kessler, Ph.D., the McNeil Family Professor of Health Care Policy at Harvard Medical School, has been exploring the potential of suicide risk algorithms in military personnel. His programs include combat-specific information such as how much time soldiers have between deployments.
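One way to picture these population-specific parameters is as separate feature sets feeding a common scoring pipeline. The sketch below echoes the examples above, but the structure and variable names are purely illustrative and are not taken from Walsh’s or Kessler’s models.

```python
# Illustrative only: per-population feature lists echoing the examples in
# the text (ADHD and oppositional defiant disorder for adolescents,
# deployment timing for soldiers). Real models select variables empirically.
BASE_FEATURES = ["prior_self_harm", "psychiatric_diagnosis", "phq9_score"]

POPULATION_FEATURES = {
    "adult": BASE_FEATURES,
    "adolescent": BASE_FEATURES + ["adhd_diagnosis", "oppositional_defiant_disorder"],
    "military": BASE_FEATURES + ["months_between_deployments", "combat_exposure"],
}

def extract_features(record: dict, population: str) -> dict:
    """Pull only the variables relevant to the patient's population."""
    return {f: record.get(f) for f in POPULATION_FEATURES[population]}

soldier = {"prior_self_harm": False, "months_between_deployments": 6}
print(extract_features(soldier, "military"))
```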

More research is needed before these algorithms can be routinely used in practice. In a report appearing in the American Journal of Psychiatry in May, Gregory Simon, Ph.D., and his colleagues at Kaiser Permanente described an algorithm they used to predict suicide attempts and suicide deaths following an outpatient visit. They found that among patients in the top 5 percent of risk scores, just 5.4 percent attempted suicide and 0.26 percent died by suicide in the 90 days following their outpatient visit.
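In evaluation terms, those figures are positive predictive values at a risk-score cutoff. A minimal sketch of how such a number is computed, using synthetic scores and outcomes rather than the Kaiser Permanente data:

```python
import numpy as np

# Synthetic illustration of a positive-predictive-value (PPV) calculation
# among the top 5 percent of risk scores; not the actual study data.
rng = np.random.default_rng(0)
scores = rng.random(100_000)                    # model risk scores in [0, 1)
attempt = rng.random(100_000) < 0.01 * scores   # attempt probability rises with score

cutoff = np.percentile(scores, 95)              # threshold for the top 5 percent
flagged = scores >= cutoff
ppv = attempt[flagged].mean()                   # share of flagged patients who attempt
print(f"PPV in top 5 percent: {ppv:.2%}")
```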

“Some critics have raised concerns that these outcomes are not common enough to warrant intense interventions, even among this high-risk group,” said Kessler. However, he countered that the predictive value of these algorithms is on par with other established risk calculators, including the Framingham Risk Score, which predicts people at high risk of a heart attack.

More important, Kessler noted, his research shows that patients at high risk of suicide also tend to be at greater risk of other serious health problems than lower-risk patients. “About 1 in 3 high-risk patients will experience an outcome like an accidental death, nonfatal but serious self-harm, or severe depressive disorder,” he said. “Our interventions for suicide also treat all these other possibilities.”

Other roadblocks still stand between these suicide risk programs and routine practice. First, the nature of these “big data” tools means they would be limited to systems with large clinical databases, such as Kaiser Permanente or the Veterans Administration. Even in these health systems, it is not yet known whether the risk programs are cost-effective.

Then there’s an important consideration at the level of the individual patient, Kessler said. Because these screening algorithms piece together so many subtle signals, they may flag patients who have not acknowledged, or are not even fully aware of, their own suicidal ideation. “How do you elicit suicidality from those who don’t want to admit it?” Kessler asked. “Is it even the job of the health care system to make someone admit they have a problem, or is that asking too much?”

Walsh agrees that turning research into practice will be challenging, “but this is where pragmatics and partnerships come into play.” He told Psychiatric News that such risk-screening tools could be valuable if selectively implemented in emergency departments (EDs), where many suicidal patients find themselves in the weeks and months before a suicide attempt. EDs, especially ones affiliated with academic centers, have the robust records to enable these screening algorithms to work, as well as the needed staff to administer questionnaires such as the PHQ-9 before discharge.
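The PHQ-9 itself is straightforward to score: nine items rated 0 to 3 are summed to a total of 0 to 27, with item 9 asking about thoughts of self-harm or of being better off dead. A minimal scoring sketch follows; the severity bands match the published PHQ-9 cutoffs, but the flagging logic is a simplification for illustration.

```python
# Standard PHQ-9 scoring: nine items rated 0-3, summed to a 0-27 total.
# Severity bands follow the published PHQ-9 cutoffs; the item-9 flag is
# a simplified illustration, not a clinical protocol.
SEVERITY_BANDS = [(20, "severe"), (15, "moderately severe"),
                  (10, "moderate"), (5, "mild"), (0, "minimal")]

def score_phq9(responses: list[int]) -> tuple[int, str, bool]:
    """Return (total score, severity band, item-9 flag) for nine 0-3 responses."""
    assert len(responses) == 9 and all(0 <= r <= 3 for r in responses)
    total = sum(responses)
    severity = next(label for cutoff, label in SEVERITY_BANDS if total >= cutoff)
    item9_flag = responses[8] > 0  # any endorsement of self-harm thoughts
    return total, severity, item9_flag

print(score_phq9([2, 2, 1, 1, 2, 1, 1, 1, 1]))  # (12, 'moderate', True)
```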

As for patient discussions, he thinks these algorithms can be valuable if seen as conversation starters rather than as lab results. “When you see a person’s score, and the main items that contribute to that score, it may help guide your discussion,” he said. “I think we still have to be direct with patients about what the risk programs reveal, but we can be respectful and compassionate as well.” ■
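For a simple weighted model like the earlier sketch, the “main items that contribute to that score” can be read straight off the weights; more complex machine-learning models require dedicated attribution methods. A hypothetical readout, reusing the invented weights from above:

```python
# Hypothetical feature-attribution readout for a weighted-sum model: list
# the highest-weighted risk factors present in a record, intended as
# conversation starters rather than a verdict. Weights are invented.
RISK_WEIGHTS = {"prior_self_harm": 3.0, "phq9_item9_positive": 2.5,
                "psychiatric_hospitalization": 2.0, "recent_med_change": 1.0}

def top_contributors(record: dict, n: int = 3) -> list[tuple[str, float]]:
    """Return the n highest-weighted risk factors present in the record."""
    present = [(f, w) for f, w in RISK_WEIGHTS.items() if record.get(f)]
    return sorted(present, key=lambda fw: fw[1], reverse=True)[:n]

print(top_contributors({"prior_self_harm": True, "recent_med_change": True}))
```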

“Predicting Suicide Attempts and Suicide Deaths Following Outpatient Visits Using Electronic Health Records” can be accessed here.