The quality of mental-health data that’s been collected during the pandemic is so shockingly low that it can’t be used to make policy.
The COVID-19 pandemic has produced an urgent need for policy-relevant scientific evidence in Canada, as it has in other countries. Research funding organizations, providers of health data, and the private sector have all stepped up in an effort to fulfill these needs. Mental health’s importance has not been neglected in these discussions, but from the perspective of scientifically robust knowledge generation, the efforts so far appear to have mostly missed the mark.
Some of the problems relate to a “one size fits all” view of mental health. Mental health is a very broad term and encompasses many different emotional, cognitive, and behavioural states. It does not simply mean “feeling good all the time.” Indeed, negative emotional states in appropriate situations are necessary to drive adaptation and are a normal component of mental health. Mental disorders often cause negative emotional states, but negative emotional states are not always due to mental disorders.
Anxiety disorders are a good example. The hyperarousal response is a natural reaction to a circumstance of threat or danger. This response is universal and, in its healthy manifestations, is adaptive, helping the body prepare for a threatening or dangerous situation ("fight or flight"). Even though anxiety may be experienced as unpleasant, it reflects a healthy emotional response and does not require treatment. Anxiety disorders, on the other hand, can be disabling and impairing, and require professional attention.
Determining whether psychological or emotional symptoms represent a mental disorder or an adaptive failure, or whether they are normal responses to challenging circumstances, is a distinction essential to developing appropriate mental health policy. Despite major efforts in recent decades to refine this determination, that progress appears to have been neglected during the current pandemic.
Starting with the earliest reports from the outbreak in Wuhan, China, much of the mental health research related to the pandemic has used brief, self-reported questionnaires or scales designed to measure psychological symptoms. Such instruments are unable to make a diagnosis. By focusing on symptoms and not addressing a disorder’s presence or the presence of adaptive behaviours, the data produced from these studies cannot guide policy.
Psychological symptoms that do not reach the threshold for a disorder may require interventions that promote understanding of normal and expected emotional responses, or advice on self-care. On the other hand, mental disorders, or substantial symptoms in the context of impaired functioning, will require interventions supported by health and social services systems. These distinctions shape policy development, and data must be collected and analysed with them in mind. Surveys that primarily ask about symptoms do not allow these important distinctions to be made.
During the pandemic, mental health surveys have most often used questionnaires or symptom scales. A questionnaire can contain questions seeking to determine whether a symptom is present (“do you feel anxious?” for example), or the response can be scaled to assess the severity of a symptom (for example: “rate your anxious feelings from 0 to 10 where 0 means ‘not at all’ and 10 means ‘extremely.’”). Symptom scales can also assess clusters of related symptoms (for example, anxiety with its various psychological and physical manifestations such as anxious feelings or a racing heart) and add up the item scores to calculate a total score for the syndrome of anxiety. A diagnostic instrument must go well beyond these levels of assessment and address other issues such as the persistence of the symptoms over time and whether they cause additional problems such as impaired functioning or severe distress, etc.
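To make the mechanics concrete, here is a minimal sketch of how such a symptom scale is typically scored. The items, the 0-3 rating range and the cutoff of 5 are invented for illustration; they are not taken from any validated instrument:

```python
# Hypothetical symptom-scale scoring; items and cutoff are illustrative only.
ITEMS = [
    "Feeling anxious or on edge",
    "Unable to stop worrying",
    "Racing heart or physical tension",
]
CUTOFF = 5  # hypothetical screening threshold, not a clinical standard

def total_score(ratings):
    """Sum per-item ratings (each 0-3) into a total syndrome score."""
    if len(ratings) != len(ITEMS) or not all(0 <= r <= 3 for r in ratings):
        raise ValueError("expected one 0-3 rating per item")
    return sum(ratings)

def screens_positive(ratings):
    """A total at or above the cutoff flags a *possible* disorder only.
    Duration, impairment and distress -- what a diagnostic interview
    would probe -- are not assessed by the scale at all."""
    return total_score(ratings) >= CUTOFF
```

Nothing in the scored total distinguishes a transient, situation-appropriate response from a persistent, impairing disorder; that information simply is not collected.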
Questionnaires and symptom scales are therefore not policy-informative. But many other surveys use measures that are even further removed from meaningful assessment of mental health states. These measures tap subjective perceptions of mental health through single questions, asking respondents to provide global ratings (mild, moderate or severe, for example) for symptoms, or simply asking participants to summarize their mental health without making clear what the phrase "mental health" means. Whereas symptom scales have at least been confirmed to measure symptoms, perceptions of mental health are of unknown significance for determining the need for health or social services interventions, and are not even a good reflection of symptoms. Nevertheless, the frequency of fair or poor perceived mental health, or responses to a single survey question about a predetermined state (such as anxiety), has emerged as a core data element in studies conducted during this pandemic.
Having low-quality evidence is not better than having no evidence at all. Bad evidence does not reflect reality and is easily distorted by biases. Measurement bias is one example. Symptom rating scales have usually been designed for screening, meaning they are sensitive for detecting mental disorders but non-specific: many people experiencing the negative emotions the scales measure do not have mental disorders. In other words, if you have an anxiety disorder, you are likely to score high on an anxiety screening scale, but many people with high scores do not have anxiety disorders. This can lead to overestimating the burden of mental disorders and, consequently, precious resources may be misdirected toward people who do not need them.
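This over-counting can be shown with simple arithmetic. The figures below (a true prevalence of 10 percent, a scale with 90 percent sensitivity and 75 percent specificity) are assumptions chosen for illustration, not estimates from any actual survey or instrument:

```python
# Hypothetical numbers illustrating measurement bias in a screening scale.
prevalence = 0.10    # assumed true rate of anxiety disorders
sensitivity = 0.90   # assumed: 90% of true cases score above the cutoff
specificity = 0.75   # assumed: 25% of non-cases also score above it

true_positives = prevalence * sensitivity                # 0.090
false_positives = (1 - prevalence) * (1 - specificity)   # 0.225
apparent_rate = true_positives + false_positives         # 0.315

# Share of high scorers who actually have the disorder:
ppv = true_positives / apparent_rate                     # about 0.29

print(f"survey reports {apparent_rate:.1%} vs. a true prevalence of {prevalence:.0%}")
print(f"only {ppv:.0%} of those flagged actually have the disorder")
```

Under these assumptions the survey would triple the apparent burden, and most of the people it flags would not need clinical services.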
Low-quality sampling procedures (such as self-selection into a volunteer sample) and low response rates can grossly distort estimates through selection bias. Vulnerable and high-risk populations may not be appropriately represented, and bad estimates may lead to bad policy.
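Selection bias can be illustrated the same way. Suppose, purely hypothetically, that 10 percent of the population is in high distress and that distressed people are three times as likely as others to answer a volunteer online survey:

```python
# Hypothetical illustration of selection bias in a volunteer sample.
true_rate = 0.10        # assumed share of the population in high distress
resp_distressed = 0.30  # assumed response propensity if distressed
resp_other = 0.10       # assumed response propensity otherwise

responders_distressed = true_rate * resp_distressed   # 0.03
responders_other = (1 - true_rate) * resp_other       # 0.09
observed_rate = responders_distressed / (responders_distressed + responders_other)

print(f"volunteer sample shows {observed_rate:.0%} distress; true rate is {true_rate:.0%}")
```

Under these assumed propensities, the volunteer sample would report 25 percent distress against a true 10 percent, and no amount of analysis can correct this after the fact unless the response propensities are known.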
Extensive resources have been invested in understanding the epidemiology of the novel coronavirus. Canada would be well served by research that helps us better understand the mental health aspects of the pandemic. To realize this, we need high-quality data. At the very least, the following considerations should be applied:
- Measurement must differentiate normal emotional responses to a crisis from problematic psychological states or the presence of mental disorders. Suitable instruments usually take the form of diagnostic interviews rather than symptom rating scales. Diagnostic interviews used in surveys consist of an interview script that resembles the questions a health professional would ask while making a diagnosis. Many suitable instruments are well-known and widely used in Canada.
- A wide range of unique emotional states (both positive and negative) should be measured, preferably using instruments widely used in previous Canadian health surveys.
- Self-report measures of perceived mental health states should not be used, at least not in isolation.
- Accurate measures of functioning and quality of life should be employed. Instruments with proven accuracy have previously been used in Canadian population-based studies.
- Low quality approaches to selection of survey participants, such as relying upon volunteers, should not be applied.
Statistics Canada has extensive experience conducting high-quality mental health surveys, including the use of appropriate diagnostic measurement instruments and the measurement of symptoms, functioning and quality of life. Sadly, this capacity is not currently being applied. Instead, there has been a disturbing acceptance of trivial and often misrepresented information, delivered by sub-optimal surveys and problematic interpretation of results. Canada has previously demonstrated the capacity to provide high-quality mental health data. During this pandemic, we must not default to collecting or using data that has little potential to inform our mental health policy response and may indeed be detrimental to it.
This article is part of The Coronavirus Pandemic: Canada's Response special feature.