The RDS SE runs a quarterly newsletter which acts both as promotional material for the service and as a way to update readers on local news and on what research has been funded and is in progress in the region. There’s also a Q&A section: ‘Ask Dr. Claire’.
I’m not actually sure where the ‘Dr. Claire’ came from. Despite it actually being my name, the section existed before I started work at the RDS SE. However, in true nominative determinism, this section has ended up as my responsibility. And, in the spirit of multi-tasking, I thought I’d use some of the latest Q&A I wrote as part of this blog. I don’t think this particular newsletter has been published yet, so consider this an advance viewing.
The Q&A was all about randomised controlled trials (RCTs). I had originally intended it to be quite a factual piece talking about biases, blinding and groups. However, thanks to a well-worded poke by a colleague, I found myself going off on a bit of a rant.
You see, all the factual stuff is true. RCTs are the gold-standard design for investigating the efficacy and effectiveness of interventions. Random allocation of participants into groups does minimise bias and, therefore, results from RCTs do provide the most robust evidence for (or against) interventions.
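If you want to see why randomisation matters, here’s a minimal simulation sketch (Python, with entirely made-up numbers, purely for illustration): when sicker patients self-select into a new treatment, a naive comparison is confounded by baseline severity; random allocation breaks that link.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

severity = rng.normal(0, 1, n)  # baseline prognostic factor (worse = higher)
treatment_effect = 0.3          # hypothetical true benefit of the new treatment

# Self-selection: sicker patients are more likely to opt for the new treatment
chose_new = rng.random(n) < 1 / (1 + np.exp(-severity))
outcome = treatment_effect * chose_new - severity + rng.normal(0, 1, n)
naive = outcome[chose_new].mean() - outcome[~chose_new].mean()

# Random allocation: treatment arm is independent of severity
randomised = rng.random(n) < 0.5
outcome_r = treatment_effect * randomised - severity + rng.normal(0, 1, n)
unbiased = outcome_r[randomised].mean() - outcome_r[~randomised].mean()

print(f"self-selected difference: {naive:.2f}")     # biased well below 0.3
print(f"randomised difference:    {unbiased:.2f}")  # close to the true 0.3
```

Run it and the self-selected comparison can even make a genuinely helpful treatment look harmful, while the randomised comparison recovers the true effect.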
However, this is only the case when they are appropriately designed. And, by this, I mean when they are appropriately controlled.
Now, this may be a very obvious comment, but the issue of the comparator is one of the most frequent discussions I have with researchers.
And, to be honest, it’s often less of a discussion and more of an argument. Hence my rant.
In my experience, researchers have often not given enough thought to an appropriate comparator for their intervention. This is a problem I come across both when advising on research plans and when reviewing research applications. It is not enough that a trials unit is doing the randomisation, that participants and researchers will be blinded, and that treatment as usual (TAU) or a wait-list will be the control. All too often, in my experience, TAU turns out to mean no treatment at all, and that is not a particularly robust design.
The placebo effect is well-established and is why an appropriate comparator is a necessity. In surgical trials, participants undergoing sham surgery have been shown to have improved outcomes. In psychological therapies, using a wait-list control has been shown to inflate intervention effect sizes. Ideally, all RCTs would have a NICE-recommended, gold-standard treatment as their comparator. By comparing novel interventions against such an active control, the results of RCTs have real meaning in a clinical context. We want to demonstrate that the new therapy is better than (or at least as good as, and more cost-effective than) the current best therapy. This kind of result can lead to changes in both practice and policy.
However, it often isn’t as simple as that. Issues arise when there isn’t a current gold-standard treatment. In these cases, it can be tempting to use TAU or a wait-list as the comparator. But doing so reintroduces bias: it risks inflating the effect size and unblinding participants.
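To make that inflation concrete, here’s another toy sketch (again Python, again with hypothetical effect sizes): an active comparator delivers the placebo response to both arms, so it cancels out of the between-group difference; a wait-list delivers it to neither arm, so the placebo response in the intervention arm gets counted as treatment effect.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200  # participants per arm

# Hypothetical standardised effects (illustrative only, not real data)
true_effect = 0.3       # genuine benefit of the novel intervention
placebo_response = 0.4  # improvement from expectation and attention alone

# Outcomes: higher = better, unit-variance noise
intervention = rng.normal(true_effect + placebo_response, 1, n)
active_control = rng.normal(placebo_response, 1, n)  # comparator also elicits the placebo response
wait_list = rng.normal(0, 1, n)                      # wait-list elicits neither

def cohens_d(a, b):
    """Standardised mean difference between two arms."""
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled_sd

print(f"vs active control: d = {cohens_d(intervention, active_control):.2f}")  # ~0.3
print(f"vs wait-list:      d = {cohens_d(intervention, wait_list):.2f}")       # ~0.7, inflated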
So, what is to be done?
The answer is to conduct feasibility work. Give some thought to what would make a good, active comparator. Talk to patients, clinicians, carers, and others. Work out some options. Then do some pilot work. In this way, whatever comparator you decide upon, you have justification for choosing it and you know that your RCT design will withstand scrutiny.
We all want the results of our research to be robust and meaningful. Getting the comparator right is a good way to ensure that they will be.
And if writing this particular post means I have this argument, sorry, conversation, with one researcher fewer? Well, so much the better.