Social care research – a heated debate…?

I, along with my colleagues from the Research Design Service (RDS) South East and RDS South West, recently attended the 5th International Clinical Trials Methodology Conference, which was held this year in Brighton. In addition to attending various talks and sessions, we were involved in one of the conference’s debates – a new item on the programme. The debates were held to encourage discussion around potentially conflicting areas, and the conference organisers were keen to play up, in a fun and playful way, the antagonistic views held by the speakers on either side of each motion. For our debate, RDS SE Director Jörg Huber decided to focus on one of the NIHR’s more recent research priorities – social care research – and to facilitate discussion on whether research in this area is ready to make use of the methodologies employed by clinical trials in health research. The debate was very good-natured and we purposely played up the opposing points of view.

Helen Weatherly, from the Centre for Health Economics at the University of York, took the first position: that the social care evidence base could be enhanced by using the methodologies of clinical trials. She argued that reductions in public expenditure on social care amidst increasing costs meant that the area was ripe for robust research to identify effective and cost-effective interventions. This position is underlined by the work of funders such as the NIHR in establishing social care research bodies such as the NIHR School for Social Care Research and funding streams such as the NIHR Research for Social Care call. Given this need for high-quality research, and provided the investment is there for researchers and research infrastructure, is social care research really that different to health care research? Indeed, there are many similarities – both cover broad-span, complex interventions for which guidance on research methodologies and design already exists (the MRC guidance, for example). Trial design offers technical and ethical advantages in demonstrating the effectiveness of interventions, and the methods of trials have been developed in a culture that has been able to embed research in its practice. There are challenges, of course, and we do need to be mindful of the differences of the social care environment, the training and education needs of social care practitioners, and the need to raise awareness of the value of involvement in research. However, there remains value in utilising clinical health care research methods in social care research, and this is something for which we should strive.

Rosemary Greenwood, from RDS South West and the University of Bristol, and Ann-Marie Towers, from RDS South East and the University of Kent, argued for the second position: that social care research is not yet ready to employ the methodologies of clinical health research. They argued that social care research is still at far too early a stage for such an approach; that even getting a PICO right for a social care research project is fraught with difficulty. How do you get an ‘unbiased’ group? How can you randomise care homes to interventions which *they* are going to have to pay for? This last point is crucial – who is going to pay for these novel social care interventions? Social care interventions require co-production, but the resources for this co-production simply haven’t existed until very recently. We first need to concentrate on using the available research funding to develop such interventions before we even consider evaluating their effectiveness in randomised controlled trials. And, in the meantime, considerable work needs to be done across the social care sector as a whole – from workforce capacity and training to research management infrastructure and incentivisation – before it can deliver such trials. We also need to figure out the pathways to impact for social care research, especially given the multiple providers of social care and an ever-changing policy backdrop.

We had great engagement from the floor. Issues raised included where children’s social care fits into this picture, given that current NIHR funding initiatives focus solely on adult social care, and the need to talk about ‘health *and* social care’, as the two are often indivisible from a clinical perspective. Points were also made about the challenges of engaging different providers from different sectors, each with different economic and finance models.

It was a fascinating debate with excellent points from all sides – thank you so much to our excellent speakers! Ultimately, I think we all really do agree that both opportunities and challenges exist as we design and fund social care research, and that we must work through these issues together. Social care research is undoubtedly a priority for funding by the NIHR, and rightly so. But, as with all NIHR-funded research, it must be of high quality and use appropriate research methods. Research questions need to be clearly defined and centred on service users, with outcomes leading to tangible changes to practice which will benefit both the care system itself and the lives and experiences of those who use it. The research methodologies employed to answer these research questions therefore need to be appropriate for producing the right kinds of data. It may very well be the case that these methods will be those already being employed successfully in health care research, but they may equally be entirely different. We need to be aware of, and sensitive to, the different professional and research environments, and open to the potential need to adapt or develop new methodologies that may be better suited to the social care arena.

Social care research is here to stay. And your local Research Design Service is here to help you design your research study and apply for funding.

Adaptive Trials

We recently had a speaker come and talk about adaptive clinical trials. It was a good seminar with lots of clear, real-life examples, and it was well attended by RDS advisers and by researchers based both in universities and in the NHS. I won’t go into detail about the content of the seminar – people far better qualified than me have already done so in abundance – but it did get me thinking about how such designs could be used by the researchers with whom I and my colleagues work.

A large proportion of the health research funded by the NIHR, through programmes such as HTA, EME and RfPB, consists of clinical trials. And there are many issues to consider when designing an RCT, some of which I’ve discussed before. Most researchers engaged in such projects have input from one of the many Clinical Trials Units (CTUs) and have experienced statisticians, data managers and trial managers on their research teams. And, of course, many of us RDS advisers who support such studies have this experience ourselves.

What is particularly relevant about adaptive trials, not least in the context of providing advice for health researchers, is the flexibility they offer. The goal of an adaptive design is to enable researchers to learn from the accumulating data and to make key design changes as the trial progresses. Usually, while a trial is still in progress, we do not look at any of the data. Yet the accumulating data may be more informative than the data that were available before the trial started – and it is on those earlier data (things like effect sizes, recruitment strategies, randomisation and dosages) that the design of the trial was based. The idea behind adaptive designs is that a trial can be improved by making use of interim data to refine certain aspects of it.

Improvement can mean a number of things, usually to do with making a clinical trial more efficient. It can, for example, lead to doing away with a treatment arm if a particular dosage or intervention is demonstrated to be inappropriate. It can also mean identifying with greater accuracy the number of patients needed, or lead to refinements in the recruitment strategy, target population or treatment randomisation. It can even inform decision-making about, amongst other things, key objectives, end-points, test statistics, or subsequent phases of the research.

The trick to it all is in the pre-specification: making it very clear in the trial protocol what purpose will be served by carrying out an interim analysis, when it will be done, on what measures, by whom, and to what end. You also need to put procedures in place to ensure that blinding is maintained and to limit who sees the data at this pre-specified interim stage.
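To make that concrete, here is a toy sketch in Python of one common pre-specified adaptation: a blinded sample size re-estimation, where the interim look is used only to check the variability assumption behind the original sample size calculation. All the numbers are invented, and this is far simpler than anything a CTU statistician would actually write into a protocol.

```python
import math
import random

random.seed(42)

def n_per_arm(delta, sigma):
    """Standard sample size per arm for a two-arm trial with a continuous
    outcome: two-sided alpha = 0.05 (z = 1.96), 80% power (z = 0.8416)."""
    return math.ceil(2 * ((1.96 + 0.8416) * sigma / delta) ** 2)

# Design stage: the calculation rests on an assumed effect size and SD.
planned_n = n_per_arm(delta=5.0, sigma=10.0)   # 63 per arm

# Pre-specified interim look: once the first `planned_n` patients are in,
# re-estimate the nuisance parameter (the SD) from blinded, pooled data and
# recalculate the sample size. Here the true SD turns out to be 12 – larger
# than the 10 assumed at the design stage (numbers invented).
interim_data = [random.gauss(0.0, 12.0) for _ in range(planned_n)]
mean = sum(interim_data) / len(interim_data)
sd_hat = math.sqrt(sum((x - mean) ** 2 for x in interim_data)
                   / (len(interim_data) - 1))
revised_n = n_per_arm(delta=5.0, sigma=sd_hat)

print(f"planned n/arm: {planned_n}, revised n/arm: {revised_n}")
```

Because only a pooled variability estimate is examined, no treatment comparison is made at the interim stage and the blind is preserved – which is exactly the kind of thing the protocol needs to spell out in advance.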

As an RDS adviser, I can see merit in this approach. The ability to carry out a sample size re-estimation, for example, is something that I think could benefit many projects. So too is the ability to drop inferior treatment groups or use preferential randomisation. There are, of course, many other options for adaptive designs, and these are just two examples that I can think of which could be applied to many projects.
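For a flavour of the second of those examples, here is a minimal, illustrative sketch of a response-adaptive allocation rule in the ‘play-the-winner’ family, where the probability of allocating the next patient to an arm grows with that arm’s observed success rate. The response rates and patient numbers are entirely invented.

```python
import random

random.seed(3)

def adaptive_allocation(successes, failures):
    """Allocate the next patient with probability proportional to each
    arm's (Laplace-smoothed) observed success rate."""
    rates = [(s + 1) / (s + f + 2) for s, f in zip(successes, failures)]
    weights = [r / sum(rates) for r in rates]
    return random.choices(range(len(rates)), weights=weights)[0]

# Toy two-arm trial: arm 1 is truly better (invented response rates).
true_response = [0.4, 0.7]
succ, fail = [0, 0], [0, 0]
for _ in range(300):
    arm = adaptive_allocation(succ, fail)
    if random.random() < true_response[arm]:
        succ[arm] += 1
    else:
        fail[arm] += 1

sizes = [s + f for s, f in zip(succ, fail)]
print("patients per arm:", sizes)  # the better arm receives more patients
```

The ethical appeal is that, as evidence accumulates, fewer patients are randomised to the apparently inferior arm – but, as with everything adaptive, the rule has to be fixed in the protocol before the trial starts.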

The drive for efficiency in health research is nothing new. I’ve written about it before and there are examples of it in the NIHR funding programmes themselves – a recent example being the call the HTA programme issued in the summer of 2014 for ‘efficient study designs’. And, as always, an integral part of any NIHR funding application is the demonstration of the value for money of the research itself. This is, of course, not to say that adaptive trials are the answer or even appropriate. Such designs come with their own risks – errors due to their greater complexity, more time needed in the planning stages, uncertainties around the ethical implications, and the need for greater regulatory review, to name just a few.

As always, the design of a study needs to fit its research question. Adaptive trials offer an intriguing option when uncertainties mean refinements are required during the trial itself in order to optimise its design.

RCTs – control or chaos?

The RDS SE runs a quarterly newsletter which acts both as a piece of promotional material for the service and as a way to update readers about local news and about what research has been funded and is in progress in the region. There’s also a Q&A section: ‘Ask Dr. Claire’.

I’m not actually sure where the ‘Dr. Claire’ came from. Despite it actually being my name, the section existed before I started work at the RDS SE. However, in true nominative determinism, this section has ended up as my responsibility. And, in the spirit of multi-tasking, I thought I’d use some of the last Q&A I wrote as part of this blog. I don’t think this particular newsletter has been published yet, so consider this an advance viewing.

The Q&A was all about randomised controlled trials (RCTs). I had originally intended it to be quite a factual piece talking about biases, blinding and groups. However, thanks to a well-worded poke by a colleague, I found myself going off on a bit of a rant.

You see, all the factual stuff is true. RCTs are the gold-standard design for investigating the efficacy and effectiveness of interventions. Random allocation of participants into groups does minimise bias and, therefore, results from RCTs do provide the most robust evidence for (or against) interventions.
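For anyone who hasn’t seen it written down, random allocation can be as simple as permuted blocks. A minimal illustrative sketch (the arm names and block size are arbitrary choices of mine, not a recommendation):

```python
import random

random.seed(1)

def block_randomise(n_participants, arms=("intervention", "control"),
                    block_size=4):
    """Permuted-block randomisation: within each block, every arm appears
    equally often in a random order, keeping group sizes balanced."""
    per_block = block_size // len(arms)
    allocation = []
    while len(allocation) < n_participants:
        block = list(arms) * per_block
        random.shuffle(block)
        allocation.extend(block)
    return allocation[:n_participants]

alloc = block_randomise(20)
print(alloc.count("intervention"), alloc.count("control"))  # 10 10 exactly
```

In a real trial, of course, the randomisation would sit with a trials unit, concealed from the recruiting team – the sketch is only meant to show why blocking keeps the groups balanced as recruitment proceeds.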

However, this is only the case when they are appropriately designed. And, by this, I mean when they are appropriately controlled.

Now, this may be a very obvious comment, but the issue of the comparator is one of the most frequent discussions I have with researchers.

And, to be honest, it’s often less of a discussion and more of an argument. Hence my rant.

In my experience, researchers have often not given enough thought to an appropriate comparator for their intervention. This is a problem I come across both when advising on research plans and when reviewing research applications. It is not enough that a trials unit is doing the randomisation, that participants and researchers will be blinded, and that treatment as usual (TAU) or a wait-list will be the control. All too often, TAU turns out to be no treatment at all, and this is not a particularly robust design.

The placebo effect is well-established and is why an appropriate comparator is a necessity. In surgical trials, participants undergoing sham surgery have been shown to have improved outcomes. In psychological therapies, using a wait-list control has been shown to inflate intervention effect sizes. Ideally all RCTs would have a NICE-recommended, gold-standard treatment as their comparator. By comparing novel interventions against such an active control, results of RCTs have real meaning in a clinical context. We want to demonstrate that the new therapy is better (or at least as good and more cost-effective) than the current best therapy. This kind of result can lead to changes in both practice and policy.
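A toy simulation makes the point about comparators nicely. All the numbers below are invented, but the pattern is the one seen in practice: if the control arm receives no attention or expectation at all, the apparent effect of the new therapy absorbs the placebo response as well.

```python
import random
import statistics

random.seed(7)

TRUE_EFFECT = 3.0       # genuine specific benefit of the new therapy
PLACEBO_RESPONSE = 4.0  # improvement from expectation and attention alone
N = 500                 # patients per arm

def arm_outcomes(specific_effect, placebo_response, n):
    # symptom improvement = placebo response + specific effect + noise
    return [random.gauss(placebo_response + specific_effect, 5.0)
            for _ in range(n)]

new_therapy = arm_outcomes(TRUE_EFFECT, PLACEBO_RESPONSE, N)
active_ctrl = arm_outcomes(0.0, PLACEBO_RESPONSE, N)  # sham / active comparator
wait_list = arm_outcomes(0.0, 0.0, N)                 # no expectation effect

vs_active = statistics.mean(new_therapy) - statistics.mean(active_ctrl)
vs_waitlist = statistics.mean(new_therapy) - statistics.mean(wait_list)
print(f"effect vs active control: {vs_active:.1f} (true 3.0); "
      f"vs wait-list: {vs_waitlist:.1f} (inflated)")
```

The wait-list comparison mixes the therapy’s specific effect with the placebo response, which is exactly why an active, credible comparator gives results with real clinical meaning.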

However, it often isn’t as simple as that. Issues arise when there isn’t a current gold-standard treatment. In these cases, it can be tempting to use TAU or a wait-list as the comparator. However, doing so reintroduces bias, risks inflating the effect size and can effectively unblind participants.

So, what is to be done?

The answer is to conduct feasibility work. Give some thought about what would make a good, active comparator. Talk to patients, clinicians, carers, and others. Work out some options. Then do some pilot work. In this way, whatever comparator you decide upon, you have justification for choosing it and you know that your RCT design will withstand scrutiny.

We all want the results of our research to be robust and meaningful. Getting the comparator right is a good way to ensure that they will be.

And if writing this particular post means I have to have this ~~argument~~ conversation with one researcher fewer? Well, so much the better.