Research Retreats – do they work?

I was fortunate recently to be invited to speak at a Research Retreat run by the Research Design and Conduct Service (RDCS – another acronym for you!). The RDCS operates much as the Research Design Service (RDS) does, but in Wales rather than in England. There are some differences between us – the acronyms change (NISCHR rather than NIHR; RfPPB rather than RfPB) and there are slight differences in the NHS set-up (Health Boards rather than NHS Trusts; differences in how support costs are provided) – but nothing too different overall. Most importantly, however, the research and funding landscape remains pretty familiar.

The concept of a research retreat is not a new one. I know that certain RDSs in England, RDS SW for example, have been running them for years. But it was not something I had been a part of before and I was keen to see one in action. How productive would researchers be when provided with a few days' dedicated research time, with the support of a range of methodologists and advisers, away from the pressures of their clinical responsibilities?

The RDCS team had done their homework. They’d booked out a small country hotel that had lots of room where the teams could work, wonderful food and, perhaps most importantly, a continual supply of tea and coffee. The timing was also good; there was a funding deadline that many of the teams were working toward. And the group was the right size – 9 teams of researchers in total, small enough to get some really interesting discussions going, yet large enough to allow teams to work independently on their own projects.

There were a couple of talks each day – my own on mixed-methods research, an invaluable one on project management and costings, a fascinating talk by a PPI representative, and a thorough run-down of the funding remit and requirements. These gave the teams direction and tips, and brought us together for group discussions.

The rest of the time was dedicated to writing, with advisers circulating to help out where needed. I spent time with almost all of the teams, as did the other advisers. It was an illuminating process: to give the advice as I normally do, but then, instead of waiting a month or so for the next meeting to see how the team had progressed, to meet with them again later on that day to work on the next part of the project. It was like packing several months' worth of advising work into a couple of days. It was also extremely fulfilling to see the teams grow in confidence and enthusiasm as they had the time together to work on their projects with expertise on tap to really move things forward.

Of course, the proof of it all will be in the outcomes of the projects. How will the 3 applications to RfPPB fare? Will the research teams at the start of their research journey make it all the way through to submission and ultimate project funding?

I am optimistic.

When given the time and space to connect with the others in their team and really focus on their research question, study design and funding application, the teams' dedication to and enthusiasm for their projects were tangible. These were people who had seen problems in their clinical practice and were driven to explore solutions through research in order to make things better for their patients.

And that, in a nutshell, is what fundable health research is really all about.

***~~~***

I have to say a big thank you to Mark Kelson, Kerry Hood and everyone else at the RDCS for inviting me!

GUEST POST by Sarah Seaton, research adviser from the RDS East Midlands

Prognostic modelling: what is it and how can it be used?

by Sarah Seaton, RDS East Midlands

Within the RDS, different members of the team have varying expertise and interests. Mine is largely in observational research and prognostic modelling, and this is generally what I advise on. Recently, I gave some staff training on this topic within the East Midlands, and what follows is a transcribed version of that session.

What is prognosis?

Prognostic modelling aims to estimate the risk of a future outcome in an individual. The outcomes are generally very specific; examples include mortality, disease progression and disease recurrence. Different factors (variables) which help predict the risk of this event occurring are used together, which is known as multivariable analysis. There are two kinds of studies: development and validation. Here I will focus on development, but validation studies are just as important, if not more so. So, what we are doing is taking factors about the patient (e.g. age, weight, smoking status) and using them to predict an outcome (e.g. development of lung cancer).
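To make that concrete, the sketch below shows what the development step boils down to in code: a multivariable logistic regression fitted to entirely made-up patient data, then used to predict the risk for a new patient. This is my own minimal illustration, not part of the original training session – the factors, data and coefficients are all hypothetical and purely for demonstration.

```python
# Minimal sketch of a prognostic model "development" step on simulated data.
# Everything here is hypothetical and illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Hypothetical patient factors, all measurable before the outcome occurs
age = rng.normal(55, 10, n)        # years
weight = rng.normal(80, 15, n)     # kg
smoker = rng.integers(0, 2, n)     # 0 = non-smoker, 1 = smoker

# Hypothetical binary outcome (e.g. disease developed during follow-up),
# simulated so that risk genuinely depends on the three factors above
true_lin_pred = -8 + 0.08 * age + 0.02 * weight + 0.9 * smoker
outcome = rng.binomial(1, 1 / (1 + np.exp(-true_lin_pred)))

# Development: multivariable logistic regression of the outcome on the factors
X = sm.add_constant(np.column_stack([age, weight, smoker]))
model = sm.Logit(outcome, X).fit(disp=0)
print(model.params)  # log-odds coefficients: intercept, age, weight, smoker

# Use the fitted model to predict the risk for one new patient:
# 60 years old, 85 kg, smoker
new_patient = np.array([[1.0, 60.0, 85.0, 1.0]])
print(f"Predicted risk: {model.predict(new_patient)[0]:.2f}")
```

A validation study would then test how well those predictions hold up in a different group of patients, which is why it matters at least as much as the development step.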

Selecting factors that predict the outcome

The factors which are used need to be objective (easy to measure), and the data for them need to be available at the time the prediction is made. For example, think of a situation where a GP might want to predict a patient’s risk of developing diabetes in the next 12 months. They have a fifteen-minute consultation in which only very basic information will be available to them, for example blood pressure, weight, height and ethnic group. These simple, objective factors are appropriate for the setting (a GP practice) and can then be used to predict the risk of diabetes.

Statistical parsimony

Keeping in mind our example of a GP appointment brings us to the idea of statistical parsimony. This is basically the idea that everything should be as simple as it can be (for medical people this is very much like Occam’s Razor). Choosing which factors predict an outcome can be a statistical exercise, but (and bear in mind I am a statistician!) it is probably far more sensible to treat this as a clinical question, one which needs literature reviewing and discussion amongst colleagues. The key issue here is to have as few variables as you can, both for statistical reasons and for simplicity's sake.

Statistical analysis and sample size calculations

This is not the time or place to discuss the methods behind this, but I will recommend a few excellent articles and books for anyone interested in further reading. Suffice it to say, the work is not always trivial and definitely needs a statistician. Sample size calculations for these types of studies are perhaps a little “unorthodox” and won’t be what you are used to, so make sure the statistician involved has done one before, or has at least read about them.
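As one very rough illustration of why these calculations feel unorthodox: a commonly quoted rule of thumb for development studies is to aim for around ten outcome events per candidate predictor, which ties the sample size to the expected event rate rather than to a conventional power calculation. The sketch below implements only that crude rule of thumb; it is not a substitute for a proper calculation done with a statistician.

```python
# Rough "events per variable" (EPV) sketch for a model development study.
# A crude rule of thumb only, not a formal sample size calculation.
def required_sample_size(n_candidate_predictors: int,
                         expected_event_rate: float,
                         events_per_variable: int = 10) -> int:
    """Approximate number of patients needed so that the expected number of
    outcome events is at least events_per_variable * n_candidate_predictors."""
    events_needed = events_per_variable * n_candidate_predictors
    return round(events_needed / expected_event_rate)

# Example: 6 candidate predictors, 20% of patients expected to have the outcome
print(required_sample_size(6, 0.20))   # -> 300 patients (i.e. ~60 events)
```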

Presentation of results

The main issue with risk scores is their presentation and ease of use. There are some very good examples of risk score presentation and use. For example, take a look at the CRASH-2 “calculator” for predicting mortality after head injury. Although not beautiful, it makes it very simple to get a result out.

However, this remains a difficult area, and careful consideration needs to be given to how easily the model can be implemented in practice. I think, possibly, there is great iPhone app potential here, if a sensible model is developed.
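To show how little computation actually sits behind such a calculator, here is a sketch that turns a fitted logistic model into a predicted risk for one patient – exactly the sum an app or web page would do behind the scenes. The coefficients and factor names are made up for illustration; they are not from CRASH-2 or any published model.

```python
import math

# Hypothetical, made-up coefficients from a fitted logistic prognostic model.
# Not from CRASH-2 or any real score; they only illustrate the calculation.
INTERCEPT = -5.0
COEFFS = {"age": 0.05, "systolic_bp": -0.01, "smoker": 0.8}

def predicted_risk(age: float, systolic_bp: float, smoker: bool) -> float:
    """Turn a patient's factors into a predicted probability of the outcome."""
    lin_pred = (INTERCEPT
                + COEFFS["age"] * age
                + COEFFS["systolic_bp"] * systolic_bp
                + COEFFS["smoker"] * (1 if smoker else 0))
    return 1 / (1 + math.exp(-lin_pred))  # inverse logit -> probability

# What a calculator would display for one patient
print(f"Predicted risk: {predicted_risk(age=70, systolic_bp=140, smoker=True):.0%}")
```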

Getting funding

It is very hard to get funding for prognostic studies, and they are often “bolt-ons” to other studies. In my opinion there is a good reason for this. It is not enough to tell a patient they are “at risk” of a disease; something has to change in light of this new-found risk. If nothing changes, then it was perhaps unethical to identify the patient when nothing could be offered, and a feeling of “so what?” is left behind. This, I think, is particularly true of the NIHR funders. So, as you write that application, or develop that model, think to yourself: what is going to change with this knowledge?

This is a whistle-stop article, and much more detail is needed for anyone wanting to do one of these studies. Good starting points are:

Moons KGM, et al. Prognosis and prognostic research: what, why and how? BMJ 2009;338:b375.

Steyerberg EW. Clinical Prediction Models: A Practical Approach to Development, Validation and Updating. Springer, 2009.