GUEST POST: Report from the COMET IV meeting Rome 19-21st Nov 2014 by Duncan Barron

My colleague, Duncan Barron, recently attended the COMET IV meeting in Rome and has written a guest post about the conference, the reasons for developing Core Outcome Sets (COS) and some of the presentations.

COMET IV meeting Rome 19-21st Nov 2014

Guest post by Duncan Barron, PPI Lead, Research Design Service South East

“Clinical trials are only as credible as their outcomes” (Tugwell, 1993).

The fourth COMET (Core Outcome Measures in Effectiveness Trials) meeting was held in Rome on the 19th-21st November 2014, and I was lucky enough to attend.

The COMET Initiative brings together people interested in the development and application of agreed standardised sets of outcomes, known as a ‘core outcome set.’ These sets should represent the minimum that should be measured and reported in all clinical trials, audits of practice or other forms of research for a specific condition.

There were speakers attending from all over Europe (with good representation from the UK) including patient representatives, some of whom presented on their experiences of being involved in developing outcome measures relevant to patients and their conditions.

A Core Outcome Set (COS) is an agreed standardised set of outcomes that should be measured and reported as a minimum in all trials for a particular condition. The aim of a COS is to increase consistency of reporting across trials and improve the quality of research.

The outcomes need to be appropriate, including to patients and the public. COMET provides guidance on what should be measured when developing a COS. This includes considering things such as which domains to measure (e.g. QoL; adverse effects) and the different ways of measuring outcomes.

A summary of some of the sessions

Roberto D’Amico spoke about his research in relapsing-remitting MS. He found that inconsistent reporting of data across trials made it impossible to analyse the results in a systematic review, which amounts to a waste of patient data and demonstrates the need for Core Outcome Sets.

Silvio Garattini spoke on surrogate and composite end-points, highlighting that end-points of relevance to patients (e.g. better QoL; decreased mortality) are important. He gave the example of cholesterol as a surrogate for lowering MI risk, noting that cholesterol is not a good surrogate for all drugs. In cancer research, decreasing the volume of a tumour is usually viewed as good, but this is not always equivalent to a therapeutic end-point: survival is often the patients’ focus, and tumour shrinkage can come with considerable side effects that are not positive for the patient. Composite end-points combine individual end-points into one single measure (which can reduce the number of patients needed in a trial). However, the combined measure can be dominated by the contribution of just one of its components, which may be less meaningful overall. Therefore, each component should be equally meaningful.

Paula Williamson, from the Uni of Liverpool, spoke about COMET’s remit to encourage evidence-based COS development and uptake. She highlighted a systematic review (Gargon et al., 2014) of 198 COS (from 250 papers) that has been used to populate the COMET COS database – see here for more details. The review revealed patient and public involvement (PPI) in only 16% of the published COS; however, 90% of ongoing COS projects include PPI. The Delphi technique is a regularly used component of COS development (85% of ongoing COS work).
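For readers unfamiliar with how a Delphi round feeds into a COS decision, here is a minimal sketch. It assumes the 1-9 rating scale mentioned later in this post and one commonly used consensus rule (≥70% of panellists rating 7-9 and <15% rating 1-3); actual COS projects define their own thresholds in advance, so treat the numbers here as illustrative.

```python
def delphi_consensus(scores):
    """Classify one outcome's panel ratings (1-9 scale) after a Delphi round.

    Illustrative thresholds (one commonly used rule, not universal):
      'consensus in'  : >=70% rate 7-9 AND <15% rate 1-3
      'consensus out' : >=70% rate 1-3 AND <15% rate 7-9
      otherwise       : no consensus (carry to the next round)
    """
    n = len(scores)
    high = sum(1 for s in scores if s >= 7) / n  # proportion rating 7-9
    low = sum(1 for s in scores if s <= 3) / n   # proportion rating 1-3
    if high >= 0.70 and low < 0.15:
        return "consensus in"
    if low >= 0.70 and high < 0.15:
        return "consensus out"
    return "no consensus"
```

In a real Delphi survey the "no consensus" outcomes are fed back to panellists, along with the group's ratings, for re-scoring in the next round.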

Trial funders including the NIHR Health Technology Assessment programme endorse COMET and recommend consulting the COMET database. “Where established Core Outcomes exist they should be included amongst the list of outcomes unless there is good reason to do otherwise. Please see The COMET Initiative website to identify whether Core Outcomes have been established.”
See here for more details.

Christian Apfelbacher (Uni of Regensburg) spoke about the methods used to develop COS in the field of atopic eczema in the HOME study.
1) Define the scope of the condition, e.g. setting (such as trials); geographical scope; stakeholders
2) Define the domains: what to measure
– a Delphi exercise was used to decide on domains
– 4 different domains were agreed (e.g. QoL; long-term control of flares)
3) Define the core set of outcome measurement instruments
– beginning with a systematic review of existing tools, to identify which instruments are good enough
4) Dissemination and implementation:
– a roadmap has been completed for one domain (“clinical signs”)
See here for more details.

Finn Gottrup, Prof of Surgery, Uni of Southern Denmark, spoke about the development of a wound COS. Previous COS had focused mainly on healing. His team undertook a Delphi study to identify consensus on core outcomes for wound research. The work is still underway, but is linked with the European Wound Management Association (EWMA).

Holger Schünemann (McMaster Uni, Canada) gave an interesting talk on Summary of Findings (SOF) tables in published journals (as part of the Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group). SOF tables help improve journal readers’ understanding of findings. He is working on interactive (electronic) SOF tables, where the reader can manipulate the way information is presented, and diagnostic SOF tables, where the reader can test the accuracy of outcomes.

Peter Tugwell (Uni of Ottawa) & David Tovey (Cochrane Collaboration) spoke about core outcomes in pain research. Different Cochrane working groups all have an interest in chronic pain, and these groups are beginning to form alliances to develop outcomes across groups. The focus should be on the patient perspective, for example pain interference with function versus pain intensity.

Rosemary Humphreys spoke about the patient perspective on COS in the HOME (eczema) study at the Uni of Notts. She highlighted the following:
– Patients can help define outcomes, including new ones researchers haven’t considered
– Clinicians know about the condition, but patients know about its impact on their lives, including family and relationships
– Involve patients early to help design the COS
– Challenges for patients include language and jargon – produce a glossary for the condition of interest

Iain Bruce (Royal Manchester Children’s Hospital), speaking about the MOMENT (cleft palate) study, highlighted some lessons learned in involving patients and carers:
– Benefit from patients with direct experience of condition
– Involved the CEO of CLAPA, a patient support group in the cleft palate field
– Ensure patient voice is heard (lived experience)
– Ensure outcomes of most importance to patients are considered
– PPI is not an add on. It’s a fundamental theme!
– Challenges: engaging patients and use of language; practicalities of meeting dates/costs; scientific studies can be daunting
– Researchers need to explain why patients’ opinions are important.
– Use the same PES for clinicians + patients for Delphi survey
– Use SMOG calculator for PES and readability
– GRADE 1-9 rating scale (Guyatt et al.)
– Traffic light system for children
– All views are given equal value
– Advice to researchers developing COS: use lay language and seek a variety of different views; sell the benefits, telling people why it’s important for them to be involved; engage early at the design stage
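The SMOG calculator mentioned in the list above is a standard readability formula (McLaughlin, 1969): it estimates the school grade needed to understand a text from the number of polysyllabic words per 30 sentences. A minimal sketch, using a naive vowel-run syllable counter rather than a dictionary (real SMOG tools count syllables more carefully, so scores will differ slightly):

```python
import math
import re

def count_syllables(word):
    """Rough syllable count: number of vowel runs (a heuristic, not a dictionary lookup)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_grade(text):
    """SMOG readability grade: 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291.

    Polysyllables are words of three or more syllables; lower grades
    indicate easier reading.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * math.sqrt(polysyllables * 30 / len(sentences)) + 3.1291
```

Running this over a draft plain English summary gives a quick, if rough, check on whether the language is accessible before sending a Delphi survey to patients.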

RCTs – control or chaos?

The RDS SE runs a quarterly newsletter which acts both as promotional material for the service and as a way to update readers about local news and about what research has been funded and is in progress in the region. There’s also a Q&A section: ‘Ask Dr. Claire’.

I’m not actually sure where the ‘Dr. Claire’ came from. Despite it actually being my name, the section existed before I started work at the RDS SE. However, in true nominative determinism, this section has ended up as my responsibility. And, in the spirit of multi-tasking, I thought I’d use some of the last Q&A I wrote as part of this blog. I don’t think this particular newsletter has been published yet, so consider this an advanced viewing.

The Q&A was all about randomised controlled trials (RCTs). I had originally intended it to be quite a factual piece talking about biases, blinding and groups. However, thanks to a well-worded poke by a colleague, I found myself going off on a bit of a rant.

You see, all the factual stuff is true. RCTs are the gold-standard design for investigating the efficacy and effectiveness of interventions. Random allocation of participants into groups does minimise bias and, therefore, results from RCTs do provide the most robust evidence for (or against) interventions.
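For readers who haven’t seen how random allocation is actually done, here is a minimal sketch of one common approach, permuted-block randomisation, which keeps the two arms balanced throughout recruitment. The function name and parameters are illustrative, not taken from any trials unit’s software; real trials add stratification, concealment, and varying block sizes.

```python
import random

def block_randomise(n_participants, block_size=4, seed=None):
    """Permuted-block randomisation for a two-arm trial.

    Within each block, half the participants go to each arm, in a
    random order, so group sizes never drift far apart.
    """
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n_participants:
        block = (["intervention"] * (block_size // 2)
                 + ["control"] * (block_size // 2))
        rng.shuffle(block)  # randomise order within the block
        allocations.extend(block)
    return allocations[:n_participants]
```

The point of the blog post stands regardless of the mechanics: randomisation only buys you robust evidence if the "control" arm is a meaningful comparator.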

However, this is only the case when they are appropriately designed. And, by this, I mean when they are appropriately controlled.

Now, this may be a very obvious comment, but the issue of the comparator is one of the most frequent discussions I have with researchers.

And, to be honest, it’s often less of a discussion and more of an argument. Hence my rant.

In my experience, researchers have often not given enough thought to an appropriate comparator for their intervention. This is a problem I come across both when advising on research plans and when reviewing research applications. It is not enough that a trials unit is doing the randomisation, that participants and researchers will be blinded, and that treatment as usual (TAU) or a wait-list will be the control. All too often, in my experience, TAU turns out to be no treatment at all, and that is not a particularly robust design.

The placebo effect is well-established and is why an appropriate comparator is a necessity. In surgical trials, participants undergoing sham surgery have been shown to have improved outcomes. In psychological therapies, using a wait-list control has been shown to inflate intervention effect sizes. Ideally all RCTs would have a NICE-recommended, gold-standard treatment as their comparator. By comparing novel interventions against such an active control, results of RCTs have real meaning in a clinical context. We want to demonstrate that the new therapy is better (or at least as good and more cost-effective) than the current best therapy. This kind of result can lead to changes in both practice and policy.

However, it often isn’t as simple as that. Issues arise when there isn’t a current gold-standard treatment. In these cases, it can be tempting to use TAU or a wait-list as the comparator. However, this reintroduces bias, risks inflating the effect size, and can effectively unblind participants.

So, what is to be done?

The answer is to conduct feasibility work. Give some thought about what would make a good, active comparator. Talk to patients, clinicians, carers, and others. Work out some options. Then do some pilot work. In this way, whatever comparator you decide upon, you have justification for choosing it and you know that your RCT design will withstand scrutiny.

We all want the results of our research to be robust and meaningful. Getting the comparator right is a good way to ensure that they will be.

And if writing this particular post means I have to have this argument conversation with one researcher fewer? Well, so much the better.

The NIHR – lost in the acronyms?

As I’ve commented before, the NIHR loves its acronyms. Work in the system for long enough and you can have entire conversations that consist largely of seemingly random strings of letters.

There are the research programmes – RfPB, HS&DR, HTA, EME, PHR, PGfAR, and PDG. Of course, then there’s i4i, which goes for a trendy look by using the now-ubiquitous lower case ‘i’ and adding a number into the mix.

Then there are the two centres that manage these research programmes – NETSCC and the CCF.

And let’s not forget the need to ensure your costings are in line with AcoRD guidance. And the CRN that provides the infrastructure and support for research in the NHS.

And then there are the RDSs that support researchers. There are 10 altogether. I’ll spare you the entire list, but let’s just say their acronyms are all a bit like the one for which I work – the RDS SE.

Now, I won’t dispute that these can be useful short-hand when talking to colleagues well habituated to this alphabet soup. But, they often present a real barrier to researchers on the ‘outside’.

And, to be honest, they can also be a barrier to those who work inside the system as well.

I used to be the Programme Manager for ID&P (identification and prioritisation) when I worked at NCCHTA, the National Co-ordinating Centre for Health Technology Assessment, which is now NETSCC, the NIHR Evaluation, Trials and Studies Coordinating Centre. There, we would have monthly internal meetings, each one run by a different area. When it came time for my area to present, I put together a Blockbusters game, complete with a hexagonal-celled board, for us all to play, to introduce everyone to the acronyms used by this one department alone.

The point of this is the importance of simplicity. From the start, the NIHR puts up a pretty big barrier to engaging with researchers, many of whom don’t even know what these letters stand for, let alone the acronyms for the myriad of research programmes, initiatives, documents and support organisations.

So, in an effort to cut through the minefield of letters, let me give a simple message:

I’m Claire, a research adviser. If you’d like to conduct health research into an issue you see in your clinical practice, then come talk to me. I can help you with your research question and design and also who to approach for funding. This is a free service and there are advisers located throughout England.

Find out more here.

Alternatively, comment on this post and I’ll help point you in the right direction.

ETA: There are a couple of good glossaries, of which I’ve just been reminded.
– The NIHR’s glossary is here.
– NETSCC’s glossary is here.
Many thanks to Nicola Tose for reminding me!

ETA2: Sarah Seaton has kindly added to the acronym list — see below for even more! I’m sure there are many more out there as well.

ETA3: Another one for the list: the lovely people at the Complex Reviews Support Unit (CRSU) who provide support for the delivery of complex reviews that are funded and/or supported by NIHR.