RCTs – control or chaos?

The RDS SE runs a quarterly newsletter which acts both as a piece of promotional material for the service and as a way to update readers about local news and about what research has been funded and is in progress in the region. There’s also a Q&A section: ‘Ask Dr. Claire’.

I’m not actually sure where the ‘Dr. Claire’ came from. Despite it being my name, the section existed before I started work at the RDS SE. However, in true nominative determinism, this section has ended up as my responsibility. And, in the spirit of multi-tasking, I thought I’d use some of the last Q&A I wrote as part of this blog. I don’t think that particular newsletter has been published yet, so consider this an advance viewing.

The Q&A was all about randomised controlled trials (RCTs). I had originally intended it to be quite a factual piece talking about biases, blinding and groups. However, thanks to a well-worded poke by a colleague, I found myself going off on a bit of a rant.

You see, all the factual stuff is true. RCTs are the gold-standard design for investigating the efficacy and effectiveness of interventions. Random allocation of participants into groups does minimise bias and, therefore, results from RCTs do provide the most robust evidence for (or against) interventions.

However, this is only the case when they are appropriately designed. And, by this, I mean when they are appropriately controlled.

Now, this may be a very obvious comment, but the issue of the comparator is one of the most frequent discussions I have with researchers.

And, to be honest, it’s often less of a discussion and more of an argument. Hence my rant.

In my experience, researchers have often not given enough thought to an appropriate comparator for their intervention. This is a problem I come across both when advising on research plans and when reviewing research applications. It is not enough that a trials unit is doing the randomisation, that participants and researchers will be blinded, and that treatment as usual (TAU) or a wait-list will be the control. All too often, TAU turns out to be no treatment at all, and that is not a particularly robust design.

The placebo effect is well established, and it is why an appropriate comparator is a necessity. In surgical trials, participants undergoing sham surgery have been shown to have improved outcomes. In psychological therapies, using a wait-list control has been shown to inflate intervention effect sizes. Ideally, all RCTs would have a NICE-recommended, gold-standard treatment as their comparator. When a novel intervention is compared against such an active control, the results of the RCT have real meaning in a clinical context. We want to demonstrate that the new therapy is better than the current best therapy (or at least as good and more cost-effective). This kind of result can lead to changes in both practice and policy.

However, it often isn’t as simple as that. Issues arise when there isn’t a current gold-standard treatment. In these cases, it can be tempting to use TAU or a wait-list as the comparator. But this reintroduces bias, risks inflating the effect size and can effectively unblind participants.

So, what is to be done?

The answer is to conduct feasibility work. Give some thought to what would make a good, active comparator. Talk to patients, clinicians, carers and others. Work out some options. Then do some pilot work. That way, whatever comparator you decide upon, you have justification for choosing it and you know that your RCT design will withstand scrutiny.

We all want the results of our research to be robust and meaningful. Getting the comparator right is a good way to ensure that they will be.

And if writing this particular post means I have to have this argument (sorry, conversation) with one researcher fewer? Well, so much the better.

The NIHR – lost in the acronyms?

As I’ve commented before, the NIHR loves its acronyms. Work in the system for long enough and you can have entire conversations that consist largely of seemingly random strings of letters.

There are the research programmes – RfPB, HS&DR, HTA, EME, PHR, PGfAR, and PDG. Of course, then there’s i4i, which goes for a trendy look by using the now-ubiquitous lower case ‘i’ and adding a number into the mix.

Then there are the two centres that manage these research programmes – NETSCC and the CCF.

And let’s not forget the need to ensure your costings are in line with AcoRD guidance. And the CRN that provides the infrastructure and support for research in the NHS.

And then there are the RDSs that support researchers. There are 10 altogether. I’ll spare you the entire list, but let’s just say their acronyms are all a bit like the one for which I work – the RDS SE.

Now, I won’t dispute that these can be useful shorthand when talking to colleagues well habituated to this alphabet soup. But they often present a real barrier to researchers on the ‘outside’.

And, to be honest, they can be a barrier to those who work inside the system as well.

When I worked at NCCHTA (the National Co-ordinating Centre for Health Technology Assessment, now NETSCC, the NIHR Evaluation, Trials and Studies Coordinating Centre), I was the Programme Manager for ID&P, which translates as identification and prioritisation. We would have monthly internal meetings, each one run by a different area. When it came time for my area to present, I put together a Blockbusters-style game, complete with a hexagonal-celled board, for us all to play, to introduce everyone to the acronyms used by that one department alone.

The point of all this is the importance of simplicity. From the start, the NIHR puts up a pretty big barrier to engaging with researchers, many of whom don’t even know what those four letters stand for, let alone the acronyms for the myriad research programmes, initiatives, documents and support organisations.

So, in an effort to cut through the thicket of letters, let me give a simple message:

I’m Claire, a research adviser. If you’d like to conduct health research into an issue you see in your clinical practice, then come and talk to me. I can help you with your research question and study design, and advise you on who to approach for funding. This is a free service, and there are advisers located throughout England.

Find out more here.

Alternatively, comment on this post and I’ll help point you in the right direction.

ETA: There are a couple of good glossaries, of which I’ve just been reminded.
– The NIHR’s glossary is here.
– NETSCC’s glossary is here.
Many thanks to Nicola Tose for reminding me!

ETA2: Sarah Seaton has kindly added to the acronym list — see below for even more! I’m sure there are many more out there as well.

ETA3: Another one for the list: the lovely people at the Complex Reviews Support Unit (CRSU) who provide support for the delivery of complex reviews that are funded and/or supported by NIHR.

Who’s on your team?

Through my role as an adviser for the Research Design Service South East (RDS SE), I most often find myself working with clinicians in the NHS. To me, this is one of the most important roles of the RDS – to offer busy clinicians advice and support on how to design, conduct and gain funding for research into issues that they see in their everyday practice. However, I have found myself working more frequently of late with academic researchers based primarily in universities. Perhaps this is an indication of the growing competition for research funds as the research councils, the traditional funders of university-based research, reduce their budgets and become more specific about the types of research they will fund. It is also a reflection of the growing commitment to health research within the National Institute for Health Research (NIHR). Whatever the cause, I’ve been interested to note the differences in the expectations of researchers from these very different backgrounds.

One of the main differences I find between the two centres on expectations of the type of research team funders are looking for when assessing applications. The clinical researchers I advise are very open to, and appreciative of, larger research teams, where every individual has their own area of expertise to bring to the table. This is something the NIHR requires. If you’re planning to conduct a clinical trial, the NIHR wants to see involvement from methodologists, statisticians, health economists and service users, all in addition to the team’s clinical expertise in the specific subject area. And brokering these collaborations is something with which RDSs can help.

By contrast, this notion of a large research team is something that can be less familiar in academic circles. I met with an academic researcher a few weeks ago who summed it up quite nicely. ‘We’re too used to doing everything ourselves,’ he said. ‘If a new skill is required for a project, then I’ll teach it to myself.’

This is a notion I recognise. From the earliest stages of academic research – the PhD – many researchers are left on their own to get on with their projects. You get some tips from your supervisor and maybe a post-doc in your group, but if something needs to be done, then it’s up to you to make sure that it is.

However, from the perspective of many funders, this is a waste of time and money. If your project involves collecting vast amounts of data, the funder wants to see that you have someone on your research team with a proven track record of analysing such data. Otherwise, it represents a risk. Therefore, for every task you have highlighted, you should have someone on your team dedicated to completing it, with the necessary knowledge, experience and/or supervision to do so.

At the end of the day, the thing that everyone involved cares about is that the research is successful. So maximise your chances of success: think carefully about who’s on your research team and make sure you’ve got the support to see your project through to successful completion.