Research topics – the benefits of an eternal outsider?

I meet with a lot of NHS clinicians to talk about research questions and study design. Our region covers many different Trusts and the level of clinical expertise is immense. These meetings are always interesting and challenging. I have to confess, however, that I don’t usually have the first clue about the specifics of the topic area we’re discussing.

When I first started as research adviser, this bothered me greatly. It was what made me most nervous about meeting a clinical researcher for the first time. However, I’ve now learned that there is actually benefit in not knowing all the ins and outs of the topic area. After all, it’s the clinician who is the expert and it’s not clinical expertise they need.

Far better to let the clinician explain the research topic to me. I can then ask the simple and obvious questions that help elucidate the research question. This discussion usually forms the basis of the argument for why the research needs to be done. And determining the priority of the research topic is the very first thing a funder will do. For NIHR commissioned calls, this prioritisation process has already been at least partially done. However, for the plethora of researcher-led funding streams – both NIHR and otherwise – the argument about the importance of the research topic is the first one you must make.

The fact that I’m at a distance from the research topic means that I can play devil’s advocate. I have a tendency to cover draft protocols in comments and track changes and send emails full of questions to researchers who send me initial project proposals. It’s much better for me to be the one to do it than a funding panel who will ultimately reject a proposal if there are too many unanswered questions.

It’s difficult to know exactly who will be present at a given panel meeting where the funding decision will be made. Even though lists of members are available – and they are definitely worth a look – actual attendance varies. And, when it comes to RfPB, which panel will assess your application can vary too. Relatively recently, a bunch of ‘South East’ applications were discussed at a panel meeting in a different area entirely.

When it comes to the day of the actual meeting of the funding panel, there may be someone there just as experienced as, or even more experienced than, the PI. But, then again, there may not. You also can’t predict which direction the discussion will take. Will the panel focus more on the priority of the topic area, or will it be the design or the plan of analysis that takes up the bulk of the discussion? The discussion time will be tiny relative to the time that has gone into preparing the application, so there’s simply no way of knowing which aspect in particular will capture the panel’s attention.

It’s my job to try to cover every possible angle.

Many RDSs offer researchers the opportunity to put their application forward for a ‘dry run’. At the RDS SE, we hold a regular ‘pre-submission panel’, where we all get together and try to replicate an NIHR funding panel meeting. As a group, we cover a range of specialities, both methodological and topic-specific. We have lay representatives who attend, and we even use the RfPB programme’s assessment criteria to rate each submission. It’s a useful exercise for both the adviser and the researcher. From my perspective, I find it fascinating to see how my comments fit with those of my colleagues, and it’s reassuring to have actual proof that we’re all pretty much on the same page when it comes to assessing proposals. And, from the researcher’s perspective, they get a lot of feedback on their proposal – far more than the few bullet points of doom that accompany the formal NIHR letters informing PIs about funding decisions.

I’ve come to embrace my role as the eternal outsider when it comes to the majority of specific health research topics. I believe it allows me to offer researchers a far more honest assessment of their research proposal and gives me the tools to push researchers to improve the quality of their applications. It’s also a strength of the RDS as a whole, allowing us to attempt to replicate funding panels for researchers.

If you are preparing an application for NIHR funding, it is worth talking to your local RDS to benefit from our expertise… and our lack of it.

Top tips from the RDS National Training Day!

I attended the NIHR Research Design Service National Training Day recently. I believe it’s the second one that’s been held, but it’s the first one that I’ve been able to attend.

Even within the Research Design Service South East (RDS SE), it’s fairly rare for members from our 3 sites to get together in person. We do it perhaps a handful of times a year – mainly for our thrice-yearly Pre-Submission Panels and our annual Away Day. So, the prospect of meeting up not only with the other RDS SE people, but with those from all 9 of the other RDSs, was an exciting one.

Attendance was good, even from RDSs for whom the trip to London was considerably more arduous than my own one-and-a-half-hour direct train trip from Brighton. At my level – that of a general research adviser with no overarching strategic role – there’s basically no formal opportunity to contact people from the other RDSs. It’s one of the reasons that I’ve embraced social media to the extent that I have – it’s a great way of talking to advisers in other RDSs whom I wouldn’t otherwise meet. So to have this opportunity present itself, and for it to be face-to-face, was great.

I have to say the experience didn’t disappoint. There must have been about 150 of us there – a good representation across the board. And, for the most part, we were able to share experiences and reflect on the fact that, despite the regional variations across the country, we face the same issues and challenges when it comes to supporting researchers in designing fundable projects. The take-home message was that we are doing a good job, but that there is always room for improvement.

I’m not going to do a formal report of the day – that would, I imagine, be fairly dull. But I did want to share some of the things I found most valuable in terms of research advice. Hopefully this will resonate with anyone reading who works for an RDS – and perhaps some of you were there too? – and also be useful to any researchers reading, regardless of whether you’re located in an NHS Trust or an HEI.

One of the opening plenary talks was by Prof. Tom Walley, Director of the HTA programme and of the other NETS programmes. In terms of things changing, he was able to tell us that the HTA programme are piloting a new Expression of Interest form, which will take the place of the longer and more detailed outline application form currently being used. The goal is to make this first step in the application process easier and faster, both to write and to be reviewed. You can find out more about this here.

I was also able to pick out four top tips from Tom’s talk.

The first was about making very clear to funders the importance of the research question to the NHS and its users. Determining the priority of the question is the first thing a funding panel will do, before they even look at who is on the team or how they’re proposing to address the question. This case needs to be made clearly and convincingly to a non-expert reader.

The second point was about clearly defining the evidence gap, as established by systematic reviews. Tom quoted the figure that half of studies are designed without reference to a systematic review of the evidence that already exists. This raises a question: if a systematic review doesn’t yet exist in your area, should conducting one be the first thing you do? Indeed, the RfPB programme has recently issued new guidance on applications for systematic reviews, with a suggested funding limit of £150K. You can find out more about that here.

The third point was the observation that there are too few NHS applicants, especially acting as principal investigators. We RDSs need to be proactive about engaging with clinical researchers. We have work to do in liaising with Trusts and demonstrating the benefits engaging with us will have for their clinicians interested in research.

The final point was around value for money. Tom observed that we have likely reached, or will very soon reach, a funding ceiling. This makes clearly describing the importance and relevance of your research question all the more important. Funders will increasingly fund only ‘priority’ questions. There is a drive for efficiency, in terms of both study design and making use of existing data sets and routinely collected data. The recent HTA call for ‘efficient study designs’ is a nice demonstration of this. Such studies also need to be pragmatic, to reflect the realities of care in the NHS. This drive for value will also mean fewer extensions and closer monitoring of milestones and targets. Studies are more likely to be closed down if it becomes apparent that they will require a big extension in time and money in order to succeed. There will be more importance placed on having feasibility and pilot data to demonstrate that patients are out there and are willing to be recruited into, and remain in, clinical trials.

I found these messages helpful and will definitely be referring to them in my advising in the future. I was gratified to see a very clear role for the RDS in facilitating high quality research and hope that this is a message we can get out to researchers.

That is, in essence, the aim of this blog.

GUEST POST: Report from the COMET IV meeting Rome 19-21st Nov 2014 by Duncan Barron

My colleague, Duncan Barron, recently attended the COMET IV meeting in Rome and has written a guest post about the conference, the reasons for developing Core Outcome Sets (COS) and some of the presentations.

COMET IV meeting Rome 19-21st Nov 2014

Guest post by Duncan Barron, PPI Lead, Research Design Service South East

“Clinical trials are only as credible as their outcomes” (Tugwell, 1993).

The 4th COMET (Core Outcome Measures in Effectiveness Trials) meeting was held recently in Rome on the 19th-21st November, and I was lucky enough to attend.

The COMET Initiative brings together people interested in the development and application of agreed standardised sets of outcomes, known as a ‘core outcome set.’ These sets should represent the minimum that should be measured and reported in all clinical trials, audits of practice or other forms of research for a specific condition.

There were speakers attending from all over Europe (with good representation from the UK) including patient representatives, some of whom presented on their experiences of being involved in developing outcome measures relevant to patients and their conditions.

Core Outcome Sets (COS) are an “agreed standardised set of outcomes” to be measured and reported as a minimum. The aim of a COS is to increase consistency of reporting across trials and improve the quality of research.

The outcomes need to be appropriate, including being so to patients and the public. COMET provides guidance on what should be measured when developing a COS. This includes considering things such as which domains to measure (e.g. QoL; adverse effects) and the different ways of measuring outcomes.

A summary of some of the sessions

Roberto D’Amico spoke about his research in relapsing-remitting MS. He found that inconsistent reporting of data across trials made it impossible to pool the results in a systematic review – a waste of patient data. Hence the need for core outcome sets.

Silvio Garattini spoke on surrogate and composite end-points, highlighting that end-points of relevance to patients (e.g. better QoL; decreased mortality) are important. He gave the example of cholesterol as a surrogate end-point for MI, noting that cholesterol is not a good surrogate for all drugs. In cancer research, decreasing tumour volume is usually viewed as good, but this is not always equivalent to a therapeutic end-point. Survival is often the patients’ focus. Tumour size can be reduced, but with considerable side effects that are not positive for the patient. Composite end-points combine individual end-points into one single measure (which can reduce the number of patients needed in a trial). However, the new measure can be dominated by the contribution of just one of the components, which may be less meaningful overall. Therefore, each component should be equally meaningful.

Paula Williamson, from the Uni of Liverpool, spoke about the remit of COMET to encourage evidence-based COS development and uptake. She highlighted a systematic review (Gargon et al, 2014) of 198 COS (from 250 papers) that has been used to populate the COMET COS database – see here for more details. The systematic review revealed PPI in the development of only 16% of the published COS. However, PPI features in 90% of ongoing COS projects. The Delphi technique is a regularly used component of COS development (85% of ongoing COS work).
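For readers unfamiliar with how a Delphi round feeds into a COS, a common convention (not a COMET requirement – the 1-9 scale, the 7-9 ‘critical’ band and the 70% threshold here are illustrative assumptions) is to declare consensus that an outcome is core when enough panellists score it in the top band. A minimal sketch:

```python
def reaches_consensus(scores, upper=(7, 8, 9), threshold=0.7):
    """Illustrative Delphi 'consensus in' rule: at least `threshold`
    of panellists rate the outcome in the top band (7-9 on a 1-9 scale)."""
    in_band = sum(1 for s in scores if s in upper)
    return in_band / len(scores) >= threshold

# Ratings from ten panellists for one candidate outcome:
ratings = [9, 8, 7, 9, 6, 8, 7, 9, 5, 8]
reaches_consensus(ratings)  # 8/10 in the 7-9 band -> True
```

In practice the rule, the number of rounds and how dissenting stakeholder groups are handled all vary from study to study.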

Trial funders including the NIHR Health Technology Assessment programme endorse COMET and recommend consulting the COMET database. “Where established Core Outcomes exist they should be included amongst the list of outcomes unless there is good reason to do otherwise. Please see The COMET Initiative website to identify whether Core Outcomes have been established.”
See here for more details.

Christian Apfelbacher (Uni of Regensburg) spoke about the methods used to develop COS in the field of atopic eczema in the HOME study.
1) Define the scope of the condition, e.g. setting (e.g. trials); geographic scope; stakeholders
2) Define the domains: what to measure
– used a Delphi to decide on domains
– 4 different domains decided (e.g. QoL; long-term flares)
3) Define core set of outcome measurement instruments
– beginning with a systematic review of existing tools (to identify which instruments are good enough)
4) Dissemination and implementation:
– roadmap completed for one domain (“clinical signs”)
See here for more details.

Finn Gottrup, Prof of Surgery, Uni of Southern Denmark, spoke about the development of a wound COS. Previous COS had focused mainly on healing. His team undertook a Delphi study to identify consensus on core outcomes for wound research. The work is still underway, but is linked with the European Wound Management Association (EWMA).

Holger Schünemann (McMaster Uni, Canada) gave an interesting talk on Summary of Findings (SOF) tables in published journals (as part of the Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group). SOF tables help improve understanding of findings for journal readers. He is working on interactive SOF tables (electronic), where the reader can manipulate the way information is presented, and diagnostic SOF tables, where the reader can test the accuracy of outcomes.

Peter Tugwell (Uni of Ottawa) and David Tovey (Cochrane Collaboration) spoke about core outcomes in pain research. Different Cochrane working groups all have an interest in chronic pain, and these groups are beginning to develop alliances to develop outcomes across groups. The focus should be on the patient perspective – for example, pain interference with function versus pain intensity.

Rosemary Humphreys spoke about a patient perspective in COS in the HOME (Eczema) study at the Uni of Notts. She highlighted the following:
– Patients can help define outcomes, including new ones researchers haven’t considered
– Clinicians know about the condition, but patients know about its impact on their lives, including family and relationships
– Involve patients early to help design the COS
– Challenges for patients: language and jargon – produce a glossary for the condition of interest

Iain Bruce (Royal Manchester Children’s Hospital), of the MOMENT (cleft palate) study, highlighted some lessons learned in involving patients and carers:
– Benefit from patients with direct experience of condition
– Involved the CEO of CLAPA, a patient support group in the cleft palate field
– Ensure patient voice is heard (lived experience)
– Ensure outcomes of most importance to patients are considered
– PPI is not an add on. It’s a fundamental theme!
– Challenges: engaging patients and use of language; practical meeting dates/ costs; scientific studies can be daunting
– Researchers need to explain why patients’ opinions are important.
– Use the same PES for clinicians + patients for Delphi survey
– Use SMOG calculator for PES and readability
– GRADE system 1-9 (Guyatt et al)
– Traffic light system for children
– All views are given equal value
– Advice to researchers developing COS: use lay language and variety of diff views; sell the benefits, tell them why it’s important for them to be involved; engage early at the design stage
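On the SMOG calculator mentioned above: SMOG estimates the reading grade of a text from its polysyllabic words. As a rough sketch only – this uses the published SMOG formula (McLaughlin, 1969) but a crude vowel-group syllable counter of my own, not any official tool:

```python
import math
import re

def count_syllables(word):
    # Very rough heuristic: count runs of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_grade(text):
    # SMOG reading grade: 3.1291 + 1.0430 * sqrt(polysyllables * 30 / sentences)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return 3.1291 + 1.0430 * math.sqrt(polysyllables * 30 / len(sentences))
```

Real readability tools use dictionary-based syllable counts, but even this toy version shows why jargon-heavy plain English summaries score badly: every extra three-syllable word pushes the grade up.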

RCTs – control or chaos?

The RDS SE runs a quarterly newsletter which acts both as a piece of promotional material for the service and as a way to update readers about local news and what research has been funded and is in progress in the region. There’s also a Q&A section: ‘Ask Dr. Claire’.

I’m not actually sure where the ‘Dr. Claire’ came from. Despite it actually being my name, the section existed before I started work at the RDS SE. However, in true nominative determinism, this section has ended up as my responsibility. And, in the spirit of multi-tasking, I thought I’d use some of the last Q&A I wrote as part of this blog. I don’t think this particular newsletter has been published yet, so consider this an advance viewing.

The Q&A was all about randomised controlled trials (RCTs). I had originally intended it to be quite a factual piece talking about biases, blinding and groups. However, thanks to a well-worded poke by a colleague, I found myself going off on a bit of a rant.

You see, all the factual stuff is true. RCTs are the gold-standard design for investigating the efficacy and effectiveness of interventions. Random allocation of participants into groups does minimize bias and, therefore, results from RCTs do provide the most robust evidence for (or against) interventions.
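The allocation step itself is simple to sketch. Here is a toy version of permuted-block randomisation (the function name, block size and arm labels are mine, for illustration – in a real trial this would be handled by a trials unit’s validated, concealed system):

```python
import random

def block_randomise(n_participants, block_size=4,
                    arms=("intervention", "control"), seed=None):
    """Allocate participants using permuted blocks.

    Each block holds an equal number of each arm, shuffled, so group
    sizes stay balanced throughout recruitment.
    """
    if block_size % len(arms) != 0:
        raise ValueError("block size must be a multiple of the number of arms")
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_participants:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n_participants]
```

Note that randomisation only guards against selection bias and confounding; as the rest of this post argues, it does nothing to rescue a trial whose comparator arm is inadequate.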

However, this is only the case when they are appropriately designed. And, by this, I mean when they are appropriately controlled.

Now, this may be a very obvious comment, but the issue of the comparator is one of the most frequent discussions I have with researchers.

And, to be honest, it’s often less of a discussion and more of an argument. Hence my rant.

In my experience, researchers have often not given enough thought to an appropriate comparator for their intervention. This is a problem I come across both when advising on research plans and when reviewing research applications. It is not enough that a trials unit is doing the randomisation, that participants and researchers will be blinded, and that treatment as usual (TAU) or a wait-list will be the control. All too often, in my experience, TAU turns out to be no treatment, and this is not a particularly robust design.

The placebo effect is well-established and is why an appropriate comparator is a necessity. In surgical trials, participants undergoing sham surgery have been shown to have improved outcomes. In psychological therapies, using a wait-list control has been shown to inflate intervention effect sizes. Ideally all RCTs would have a NICE-recommended, gold-standard treatment as their comparator. By comparing novel interventions against such an active control, results of RCTs have real meaning in a clinical context. We want to demonstrate that the new therapy is better (or at least as good and more cost-effective) than the current best therapy. This kind of result can lead to changes in both practice and policy.

However, it often isn’t as simple as that. Issues arise when there isn’t a current gold-standard treatment. In these cases, it can be tempting to use TAU or a wait-list as the comparator. However, this reintroduces bias, risks inflating the effect size, and can unblind participants.

So, what is to be done?

The answer is to conduct feasibility work. Give some thought about what would make a good, active comparator. Talk to patients, clinicians, carers, and others. Work out some options. Then do some pilot work. In this way, whatever comparator you decide upon, you have justification for choosing it and you know that your RCT design will withstand scrutiny.

We all want the results of our research to be robust and meaningful. Getting the comparator right is a good way to ensure that they will be.

And if writing this particular post means I have to have this argument conversation with one researcher fewer? Well, so much the better.

The NIHR – lost in the acronyms?

As I’ve commented before, the NIHR loves its acronyms. Work in the system for long enough and you can have entire conversations that consist largely of seemingly random strings of letters.

There are the research programmes – RfPB, HS&DR, HTA, EME, PHR, PGfAR, and PDG. Of course, then there’s i4i, which goes for a trendy look by using the now-ubiquitous lower case ‘i’ and adding a number into the mix.

Then there are the two centres that manage these research programmes – NETSCC and the CCF.

And let’s not forget the need to ensure your costings are in line with AcoRD guidance. And the CRN that provides the infrastructure and support for research in the NHS.

And then there are the RDSs that support researchers. There are 10 altogether. I’ll spare you the entire list, but let’s just say their acronyms are all a bit like the one for which I work – the RDS SE.

Now, I won’t dispute that these can be useful shorthand when talking to colleagues well habituated to this alphabet soup. But they often present a real barrier to researchers on the ‘outside’.

And, to be honest, they can also be a barrier to those who work inside the system as well.

I used to be the Programme Manager for ID&P at NCCHTA, which is now NETSCC. (Translation: the manager for identification and prioritisation at the National Co-ordinating Centre for Health Technology Assessment, now the NIHR Evaluation, Trials and Studies Coordinating Centre.) We would have monthly internal meetings, each one run by a different area. When it came time for my area to present, I put together a Blockbusters-style game, complete with a hexagonal-celled board, for us all to play, to introduce everyone to the acronyms used by this one department alone.

The point of this is the importance of simplicity. From the start, the NIHR puts up a pretty big barrier to engaging with researchers, many of whom don’t even know what these letters stand for, let alone the acronyms for the myriad of research programmes, initiatives, documents and support organisations.

So, in an effort to cut through this alphabet soup, let me give a simple message:

I’m Claire, a research adviser. If you’d like to conduct health research into an issue you see in your clinical practice, then come talk to me. I can help you with your research question and design and also who to approach for funding. This is a free service and there are advisers located throughout England.

Find out more here.

Alternatively, comment on this post and I’ll help point you in the right direction.

ETA: There are a couple of good glossaries, of which I’ve just been reminded.
– The NIHR’s glossary is here.
– NETSCC’s glossary is here.
Many thanks to Nicola Tose for reminding me!

ETA2: Sarah Seaton has kindly added to the acronym list — see below for even more! I’m sure there are many more out there as well.

ETA3: Another one for the list: the lovely people at the Complex Reviews Support Unit (CRSU) who provide support for the delivery of complex reviews that are funded and/or supported by NIHR.

Who’s on your team?

Through my role as an adviser for the Research Design Service South East (RDS SE), I most often find myself working with clinicians in the NHS. To me, this is one of the most important roles of the RDS – to offer busy clinicians advice and support on how to design, conduct and gain funding for research on issues that they see in their everyday practice. However, I have found myself working more frequently of late with academic researchers based primarily in universities. Perhaps this is an indication of the growing competition for research funds as the research councils, the traditional funders of university-based research, reduce their budgets and become more specific about the types of research they will fund. It is also a reflection of the growing commitment to health research embodied by the National Institute for Health Research (NIHR). Whatever the cause, I’ve been interested to note the differing expectations of researchers from these very different backgrounds.

One of the main differences I find between the two concerns the type of research team funders expect to see when assessing applications. The clinical researchers I advise are very open to, and appreciative of, larger research teams, where every individual has their own area of expertise to bring to the table. This is something the NIHR requires. If you’re planning to conduct a clinical trial, the NIHR want to see involvement from methodologists, statisticians, health economists and service users – all in addition to the team’s clinical expertise in the specific subject area. And brokering these collaborations is something with which RDSs can help.

By contrast, this notion of a large research team is something that can be less familiar in academic circles. I met with an academic researcher a few weeks ago who summed it up quite nicely. ‘We’re too used to doing everything ourselves,’ he said. ‘If a new skill is required for a project, then I’ll teach it to myself.’

This is a notion I recognize. From the earliest stages of academic research – the PhD – many researchers are left on their own to get on with their projects. You get some tips from your supervisor and maybe a post-doc in your group, but if something needs to be done, then it’s up to you to make sure that it is.

However, from the perspective of many funders, this is a waste of time and money. If your project involves collecting vast amounts of data, the funder wants to see that you have someone on your research team with a proven track record of analyzing such data. Otherwise, this represents a risk. Therefore, for every task you have highlighted, you should have someone on your team dedicated to completing it, with the necessary knowledge, experience and/or supervision to do so.

At the end of the day, the thing that everyone involved cares about is that the research is successful. Therefore, maximize your chances of success. Think carefully about who’s on your research team and make sure you’ve got the support to see your project through to successful completion.

Winding paths

Research careers are often meandering. You move from one position to the next, at times by luck as much as judgment. Different universities, different countries, different projects. Great importance is given to this semi-nomadic existence. It certainly has its difficulties, especially as you grow older and add a partner and children into the equation. Yet, it also has its benefits and I recognize that I would not have the job I do today had I not had experiences of these different places, projects and roles.

I’m a methodologist; a job I would not have considered when I first got my PhD in psychopathology back in 2004. I work for the National Institute for Health Research (NIHR), specifically for the Research Design Service South East (RDS SE). The NIHR does love its acronyms. There are 10 RDSs nationally, each covering a different area of England, and the RDS SE covers the counties of Sussex, Surrey and Kent.

I don’t mean for this blog to turn into an advertisement for the NIHR or RDS, but the service we offer is, in my experience at least, unique. We help researchers turn their research ideas into projects capable of competing for NIHR funding. You come to us with an idea to improve patient care and we help you to formulate a research question, plan a study to address that question and write an application to get the work funded.

Many of the researchers I meet are surprised at just how much support we offer. It’s not just the mechanics of bid writing, although we do that too, but also advice on study design, statistics, health economics and patient and public involvement (PPI; another great NIHR acronym), amongst other things. I have friends who work in different disciplines who would love to access the kind of help the RDS offers, but nothing that I know of like this service exists outside the structure of the NIHR.

I love planning studies and considering the best design to address each research question. It’s a challenge to design something that satisfies as a robust, academic experiment and also as a practical, clinical piece of work applicable to patients in the NHS. I also relish being able to work with a wide range of people – each an expert in their field, but who share the commitment to improving the care they can give their patients. It’s a privilege to help them achieve this aim in some small way.

I’m fortunate that my particular winding path has led me here. Hopefully, from time to time, I can help others who are taking the next steps of their own.

Late to the party…?

I’m fairly new to using social media in a professional context. In my personal life, I’m young enough to have a Facebook, but too old to have a tumblr. And, even on Facebook, the friends I have are just that – people I’ve known for years and with whom I have regular contact. I only have 86 of them and I post a lot of pictures of my children. I imagine it’s pretty boring to the vast majority of people, especially those who don’t know me.

The thought of actually using such a platform for work was, until recently, entirely foreign. When I came back from maternity leave in January, a colleague mentioned that the Research Design Service South East, for whom I work as a methodologist, has a Twitter account. It’s @NIHR_RDSSE for those of you interested. Hearing this, I was intrigued – what possible benefit could we get as a service from the occasional posting of 140 characters?

As I explored Twitter, I was surprised not only by the number of professional organizations that have accounts, but also by the number of individuals who tweet in an at least semi-professional capacity. I came across links to papers that I immediately downloaded, events of which I hadn’t been aware, and funding deadlines that I hadn’t yet got around to flagging. It was a revelation.

However, even more than realising what a great resource Twitter was, I found myself intrigued by the conversations in which individuals were engaged. It soon became clear that there was a real community of people on Twitter who were dedicated to, and passionate about, health research and improving patient care. And, perhaps more importantly, they were using Twitter as a medium to pursue these interests and engage with others who shared them.

For a while, that’s as far as my involvement went. I bookmarked some accounts, lurked on a few blogs, but still didn’t feel that I had anything to add to the conversation. Some weeks later, however, I found myself buried under a pile of draft NIHR applications for various funding programmes. I spent a long week going through and commenting on them all. This process is a regular part of my job, but having so many to review in the same few days is, thankfully, rare. As I worked my way through them, I found myself writing the same set of comments time and again. These weren’t long, specific comments about individual applications, but rather short, global comments that I was repeating verbatim. I should write a list and hand them out to researchers, I thought to myself, and that’s when it hit me – I could do just that and I knew the perfect platform. By the end of the day, I had a Twitter account, @ClaireRosten, and a hashtag, #NIHRtips.

I still tweet #NIHRtips whenever new ones come to mind and, to my delight, others also use the hashtag. I get re-tweets and have interactions with others because of them. A blog seems to be the logical progression. I hope that it will lead to more interactions and conversations with others who share my interests.

Of course, it also gives me something else to tweet about.

I realise I’m late to the party. But hopefully there’s enough time left for me to pick up a drink and join in.

Hi, I’m Claire and I’m a methodologist.

I’m an NIHR Research Design Service Adviser, methodologist and psychologist. I have interests specifically in research methodologies, trial design and mental health. I envisage the contents of this blog will centre on these things.

I also have a strong interest in science in general, the importance of STEM to the wider public and in promoting the role of women in science. I’ll probably be blogging about these things from time to time too.

You can also find me on Twitter: @ClaireRosten, where I tweet about many health and science related things and tips for applying to the NIHR.

And, finally, the standard disclaimer: all views and advice offered in this blog are my own.