Clinical and Research News

Mental Health Apps Miss the Mark on Usability Standards, Study Shows

Published Online: https://doi.org/10.1176/appi.pn.2019.5a21

Abstract

A lack of consensus on what it means to have a “positive user experience” may limit real-world uptake of apps.

A review of 40 studies that evaluated mental health apps found that they all reported positive user-engagement scores—an unusual finding given that health apps are known to have problems keeping users engaged. Underlying problems noted in the review, published March 27 in Psychiatric Services in Advance, were that each study used a different set of subjective and/or objective measures, and none used consistent benchmarks to define a “positive” user experience.


“As with a medication, we need to make sure mobile apps are tolerable before we recommend them to a patient,” said John Torous, M.D., director of the Digital Psychiatry Division at Harvard-affiliated Beth Israel Deaconess Medical Center and a co-author of the study. Digital “tolerability” refers to whether an app is easy to use and is engaging so that it is used repeatedly. These findings indicate that app developers have their own idea of what constitutes usability, he said.

As Torous and his colleagues wrote in the article, “This lack of consensus makes it difficult to compare results across studies, hinders understanding of what makes apps engaging for different users, and limits their real-world uptake.”

Most App Privacy Policies Omit Pertinent Data-Sharing Details

Usability is just one area where mobile apps might not be all they are advertised to be. Another recent analysis co-authored by John Torous, M.D., found that mental health apps often have inadequate and/or misleading privacy policies. Torous and colleagues assessed the privacy disclosures of 36 popular apps for treating depression or quitting smoking and examined what data (both encrypted and unencrypted) were transmitted following simulated use.

While 25 of the 36 apps (69%) had a privacy policy, in many cases the policy was not comprehensive. For example, 22 of these 25 policies provided information about the primary uses of collected data (such as using user data to improve app performance), but only 16 described secondary uses (such as sharing data with legal authorities if needed). Further, only 13 of the policies described how to opt out of data sharing, only eight provided information about data-retention practices, and only three discussed what happens to a person’s data in the event the company is bought or dissolved.

Almost all of the studied apps (33 of 36) transmitted user data to third parties, with analytics services operated by Google and Facebook being the dominant destinations. In several cases, the apps either failed to disclose in their privacy policies that such third-party transmission would occur or stated outright that such sharing would not occur. The researchers did not observe any transmission of personally identifiable information, but data sent to third parties routinely included information that could be linked back to the device.

“Our data highlight that, without sustained and technical efforts to audit actual data transmissions, relying solely on either self-certification or policy audit may fail to detect important privacy risks,” Torous and colleagues wrote. “For example, consolidation of data processing into a few transnational companies underlines the risk that user data may be inadvertently moved into jurisdictions with fewer user protections or that this may be exploited by malicious actors.”
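As a rough illustration of what auditing actual data transmissions might involve (this is not the study authors’ methodology), the sketch below flags hostnames observed in an app’s network traffic that match known third-party analytics domains. The domain list, example hosts, and function name are all invented for the example.

```python
# Illustrative sketch only: flag outbound requests to known third-party
# analytics services, given hostnames observed in an app's network
# traffic (e.g., exported from an intercepting proxy). The domain list
# is hypothetical and far from exhaustive.
THIRD_PARTY_ANALYTICS = {
    "google-analytics.com",
    "graph.facebook.com",
    "app-measurement.com",  # commonly associated with mobile analytics SDKs
}

def flag_third_party(observed_hosts: list[str]) -> set[str]:
    """Return the observed hosts that match a listed analytics domain."""
    return {
        host
        for host in observed_hosts
        if any(host == d or host.endswith("." + d) for d in THIRD_PARTY_ANALYTICS)
    }

print(flag_third_party(["api.example-app.com", "www.google-analytics.com"]))
# -> {'www.google-analytics.com'}
```

A check like this inspects what an app actually sends rather than what its privacy policy claims, which is the gap the researchers highlight.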

This study was published April 19 in JAMA Network Open and can be accessed here.

Of the 40 studies in the analysis, nine evaluated mobile apps for depression, four evaluated apps for bipolar disorder, seven for schizophrenia, seven for anxiety, and 13 for apps designed for multiple psychiatric disorders. The studies were selected because they all reported user-engagement indicators (UEIs), a variety of measures describing the degree to which users find an app easy to use and engaging.

All of the studies reported that their app had a positive UEI rating. Of these, 15 studies used only subjective data (such as participant surveys or interviews), four used only objective data (such as verified number of login sessions), and 21 used a combination of measures.

“It is concerning that 15 of the 40 (38%) studies concluded that their app had positive UEIs without considering objective data,” Torous and colleagues wrote.

“Qualitative data are unquestionably valuable for creating a fuller, more nuanced picture of participants. ... However, there is also a need for objective measurements that can be reproduced to validate initial results and create a baseline for generalizing results of any single study.”

A problem with the studies that used objective data, however, was that most (20 of 25) did not set thresholds for a good score in advance; all analyses were retrospective.


Of the studies that included both subjective and objective measures, many set low thresholds for a positive UEI rating. For example, one study considered a user-satisfaction score of 60% to be sufficient, while another required app users to complete only one-third of their assigned tasks in a week.

In addition to low thresholds within individual studies, thresholds were inconsistent across studies. For example, frequency of usage was a common objective marker, but acceptable usage rates varied from once a day to just a few times a month.
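To make the alternative concrete, here is a minimal, hypothetical sketch of a prespecified objective benchmark: the threshold is declared before any data are examined, and the same reproducible calculation could then be applied across studies. The threshold value, names, and example numbers are invented, not drawn from the review.

```python
from datetime import datetime

# Hypothetical sketch of a prespecified objective UEI benchmark.
# The threshold would be declared before any data are examined
# (e.g., in a registered study protocol); the value here is invented.
PRESPECIFIED_SESSIONS_PER_WEEK = 3.0

def sessions_per_week(logins: list[datetime]) -> float:
    """Average weekly login frequency over the observation window."""
    if len(logins) < 2:
        return float(len(logins))
    span_days = max((max(logins) - min(logins)).days, 1)
    return len(logins) / (span_days / 7)

def positive_uei(logins: list[datetime]) -> bool:
    """Apply the prespecified benchmark to one user's login history."""
    return sessions_per_week(logins) >= PRESPECIFIED_SESSIONS_PER_WEEK

# Example: six logins over a 14-day window -> 3.0 sessions per week.
logins = [datetime(2019, 3, d) for d in (1, 3, 5, 8, 10, 15)]
print(round(sessions_per_week(logins), 1), positive_uei(logins))  # 3.0 True
```

Fixing the metric and the threshold up front is what allows a “positive” rating in one study to mean the same thing in another.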

Torous acknowledged that the apps in these 40 studies were developed for different purposes; therefore, some variation is expected. Still, he believes it is possible to develop usability standards that would make comparisons and evaluations easier and more reliable.

This study was funded by National Institutes of Health career development awards given to Torous and study co-author Mia Minen, M.D. ■

“User Engagement in Mental Health Apps: A Review of Measurement, Reporting, and Validity” can be accessed here.