Research methodology 101 in psychology typically starts by explaining statistical hypothesis testing: how data can be understood through a certain lens (a model) to draw inference. A theory-based statistical model is the approach by which researchers make meaning out of a constellation of data points – in a systematic and falsifiable way that differentiates inference from astrology.
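To make this concrete, here is a minimal sketch of what a theory-based hypothesis test looks like in code – the variables, effect sizes and model form are invented purely for illustration, not drawn from any real study:

```python
# Minimal sketch: theory specifies the model and a falsifiable prediction;
# the data only tell us whether that prediction holds.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical variables (illustrative only): the theory predicts that more
# social support is associated with fewer symptoms.
social_support = rng.normal(size=200)
symptoms = 2.0 - 0.5 * social_support + rng.normal(size=200)

X = sm.add_constant(social_support)   # the model form comes from theory
fit = sm.OLS(symptoms, X).fit()

# Falsifiable prediction: the slope should be negative and distinguishable
# from zero. A null result would count against the theory.
print(fit.params[1], fit.pvalues[1])
```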
Research is not easy. Researchers make many decisions and assumptions in the process: how are concepts defined, how are those concepts measured, what are the relationships between the variables, do they overlap? Researchers design, collect, clean and frame data so that they can tell a story – data may speak for itself, but the theatre is built by the researchers. It is more than choosing which variables to put into the model, or discovering which variables are statistically associated with the predictors. It is about how the confirmation or rejection of the statistical model should be interpreted, in what context, for which populations – and more.

Photo by Tara Winstead on Pexels.com
The industrial revolution automated jobs and led to an expansion of productivity; the “artificial intelligence (AI) revolution” appears to share similar aims. The first questions that pop into people’s minds are: “Can we automate this process? If so, how?” The same ideology has been applied to understanding data – AI models spring up like mushrooms after rain, with approaches like “covariate auto-selection” that promise to perform as well as (or outperform) “traditional analysis” – whatever that means.
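To give a flavour of what “covariate auto-selection” might look like in practice, here is a minimal sketch using lasso regularisation – an assumed stand-in, since no specific method is named above – where the model, not the researcher, decides which covariates stay:

```python
# Sketch of automatic covariate selection via lasso: uninformative
# coefficients are shrunk to exactly zero, so "the data choose" the model.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))                 # 20 unnamed candidate covariates
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(size=500)

lasso = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(lasso.coef_)         # columns the model kept
print(selected)

# The output says which columns predict y; it says nothing about what those
# columns mean, how they were measured, or whether the sampling was sensible –
# exactly the theoretical questions that get skipped.
```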
I am no fan of such practices. Data analysis is only a small part of the whole scientific process; there are limited ways you can “let the data speak” if the paradigm of data collection, conceptualisation and so on is never challenged. This AI-do-all approach, if deemed the best – or worse, the default – practice, will leave little room for users to challenge the premises and assumptions on which inferences are drawn: no true empirical theoretical advancement, only post-hoc theory-making. But can you really blame AI data scientists for this?
There is no point in finger-pointing [maybe 1 >:o)]. The problem of weak theory is prevalent in (mental) health research (more discussion of formal theory here: https://eiko-fried.com/on-theory/ – Eiko’s blogs have a lot of resources on theory, do check them out!). An example highly relevant to my work is the use of ethnicity in health research – is it biology? Is it country of origin? Is it migration status? Is it social support and network? What is its relationship with the covariates? Papers often describe whether their findings fit with previous research, but most of the time they stop at that level – “more research is needed” – with little discussion of theory. It is this tendency to focus on inference and neglect theory that allows AI-based analytical practice to expand.

Photo by Tara Winstead on Pexels.com
This phenomenon begs the question: why is theory playing less of a role in mental health research? What is driving this change in scientific practice? I believe a particular emotion – frustration – plays a role. I see this frustration arising from the huge implementation gap and the insurmountable unmet needs, made worse by the replication crisis.
We are said to be in a mental health crisis. The healthcare system has become more sensitive at detecting mental health problems – they are recognised earlier and more broadly in primary care – but our ability to treat patients has not improved to the same extent. It takes an estimated 17 years to translate health research into practice. IAPT, new waves of psychotherapy, medications… These attempts to improve service provision (in quantity and access) and quality have not matched the increasing demand. With record levels of demand for mental health support (even before Covid-19), the whole community is pressured to provide solutions. The frustration stems from compassion for the plight of patients.
The same frustration is felt by funders too: decades of funding to find a pill to eradicate dementia, resources piled into prioritising “what works”, a stronger-than-ever appetite for interventions. The positioning of researchers in the field is no longer “neutral observer of (natural) phenomena” but “proactive driver of change”. The increasing need to demonstrate “impact” is evidence of this change in positioning. Measuring impact depends on the ability to demonstrate progress, and theory development is often a winding journey – it intrinsically fares worse than randomised controlled trials in that regard under the current paradigm.
In conjunction with the replication crisis – where small sample sizes and poor methods (but not weak theory) were deemed the culprits – strength in numbers feels like a prerequisite to publish in high-impact journals. This shapes the ecosystem of academia. Bigger institutes are in a better position to run larger studies, sustaining the self-fulfilling loop of impact as top research institutes. Smaller institutes have fewer options to compete, and so rely on impact-driven evidence-making rather than theory testing or development. Research has become more focused on interventions and local adaptations than on coming up with a grand theory for a disorder.

Researchers do not have to choose binarily between “theory” and “intervention”; there is plenty of middle ground between the two. In fact, they go hand in hand in the development of any field. An “intervention”-leaning environment amplifies the need for researchers to understand and clarify “context” – how accumulated evidence can be applied to the situation at hand. I don’t think we are very well trained in this regard (yet); it hasn’t been a focus in the past, nor included in the curriculum. Approaches such as realist evaluation and rapid qualitative reviews have arisen to address this gap.

A “theory”-leaning environment, on the other hand, emphasises understanding the nature of a phenomenon. For example, the biopsychosocial framework encourages multidisciplinary treatment, which the restructured integrated care systems are hopefully in a better position to provide. Another example: where digital mental health intervention apps, taking many different approaches, have failed to live up to expectations, perhaps revisiting the positioning and theory of such interventions is the bridge to success. Theory serves as a foundation for knowledge to be generated and decisions to be justified, and helps the field explore alternative explanations of “reality”.
What’s next? It is for us, members of the scientific community, to live out the direction of our field. We need to be pragmatic in coming up with solutions to address the huge mental health need, but we also need to remain observant and patient, and preserve space for new theories and alternative frameworks for understanding mental health to be developed and tested.