Health Technology Assessment 2007; Vol 11: number 48
Executive Summary
MK Campbell,1* C Snowdon,2 D Francis,3 D Elbourne,2 AM McDonald,1 R Knight,2 V Entwistle,1 J Garcia,2 I Roberts4 and A Grant1 (the STEPS group)
1 Health Services Research Unit, University of Aberdeen, UK
2 Medical Statistics Unit, London School of Hygiene and Tropical Medicine, UK
3 Centre for Research and Innovation Management, University of Sussex Campus, Brighton, UK
4 Public Health Intervention Research Unit, London School of Hygiene and Tropical Medicine, UK
* Corresponding author
Randomised controlled trials are widely accepted as the gold standard for evaluating healthcare interventions. If, however, the target sample size is not achieved, the trial's results will usually be less reliable. If recruitment has to be extended to reach the required sample size, this will delay the use of the results in clinical practice, and usually cost more, so fewer trials can be conducted within the limited resources available.
It is unclear why certain trials recruit well whereas others do not. The aim was, therefore, to identify factors associated with good and poor recruitment to multicentre trials.
The study used a number of different perspectives ('multiple lenses'), and three components: Part A: an epidemiological review of a cohort of trials funded by the UK's Medical Research Council (MRC) and Health Technology Assessment (HTA) Programme; Part B: case studies of trials that appeared to have particularly interesting lessons for recruitment; and Part C: a single, in-depth case study of a large multicentre trial to examine the feasibility of applying a business-orientated analytical framework as a reference model in future trials.
Part A was based on 114 multicentre MRC and HTA Programme trials that started in or after 1994 and were due to end before 2003.
Whereas in Part A the planned level of recruitment was used as a surrogate measure for the 'success' of a trial, Part B was based on in-depth analyses of 45 interviews with people playing a wide range of roles within four trials that their funders identified as 'exemplars' (trials which had met, or were scheduled to meet, agreed targets and that the funders publicised as successes).
Part C complements the emphasis on trial 'processes' in Parts A and B through a case study of a large multicentre trial (the CRASH trial) of treatment for head injury.
Of the trials in Part A, fewer than one-third recruited their original target within the time originally specified, and around one-third had extensions. Factors observed more often in trials that recruited successfully were: having a dedicated trial manager, being a cancer or drug trial, and having interventions available only inside the trial. However, these findings should be interpreted cautiously: the confidence intervals were wide; associations were, at best, only marginally statistically significant; and the trend for some factors was towards a negative association. The most commonly reported strategies to improve recruitment were newsletters and mailshots, but it was not possible to assess whether they were causally linked to changes in recruitment.
The analyses in Part B suggested that successful trials were those addressing clinically important questions at a timely point. The investigators were held in high esteem by the interviewees, and the trials were firmly grounded in existing clinical practice, so that trial processes were not alien to clinical collaborators and the results would be readily applicable to future practice. The interviewees considered that the needs of patients were well served by participation in the trials. Clinical collaborators particularly appreciated clear delineation of roles, which released them from much of the workload associated with trial participation. Interviewees expressed a strong sense of pride at being part of a successful team, and this pride fed into further success. Good groundwork and excellent communication across the many levels of complex trial structures were considered extremely important, including training in trial interventions and processes, and team building. All four trials had faced recruitment problems, and the strategies invoked to address them afforded extra insights into the workings of trials. Teams within trials that were not exemplars were not interviewed, so it is not known to what extent the perceptions observed in these exemplars differed from those in less successful trials.
The case study in Part C drew attention to a body of research and practice in a different discipline (academic business studies). It generated a reference model derived from a combination of business theory and work within CRASH, which enabled identification of weaker managerial components within CRASH and initiatives to strengthen them. Although it is not clear, even within CRASH, whether the initiatives that follow from developing and applying the model will increase recruitment or improve other aspects of the trial's success, the reference model could provide a template that those managing other trials could use or adapt, especially at the foundation stages. The model could also serve as a diagnostic tool when trials run into difficulties, and hence as a basis for deciding what type of remedial action to take. It may also be useful for auditing the progress of trials, for example during external review.
While the results are not definitive enough to support strong recommendations, this work suggests that those undertaking future trials should at least consider the different needs at different phases in the life of a trial, and place greater emphasis on 'conduct' (the process of actually doing trials). This implies learning lessons from successful trialists and trial managers, and better training in issues relating to trial conduct.
The complexity of large trials means that unanticipated difficulties are highly likely at some point in every trial. Part B suggested that successful trials were those flexible and robust enough to adapt to unexpected issues. Arguably, trialists should also expect agility from funders as part of a proactive approach to monitoring ongoing trials.
Three important areas for further research arise. First, an extension of Part B to trials with different recruitment patterns (including 'failures') may help to clarify whether the patterns seen in the 'exemplar' trials differ or are similar. Second, Part C was based around a single large trial with the unusual feature that patients were mainly unconscious. Before use as an audit tool for diagnosing and/or addressing management factors, the reference model needs to be considered in other similar and different trials to assess its robustness. Finally, these and other strategies aimed at increasing recruitment and making trials more successful need to be formally evaluated for their effectiveness in a range of trials.
Campbell MK, Snowdon C, Francis D, Elbourne D, McDonald AM, Knight R, et al. Recruitment to randomised trials: strategies for trial enrolment and participation study. The STEPS study. Health Technol Assess 2007;11(48).
The Health Technology Assessment (HTA) Programme, now part of the National Institute for Health Research (NIHR), was set up in 1993. It produces high-quality research information on the costs, effectiveness and broader impact of health technologies for those who use, manage and provide care in the NHS. 'Health technologies' are broadly defined to include all interventions used to promote health, prevent and treat disease, and improve rehabilitation and long-term care, rather than settings of care.
The research findings from the HTA Programme directly influence decision-making bodies such as the National Institute for Health and Clinical Excellence (NICE) and the National Screening Committee (NSC). HTA findings also help to improve the quality of clinical practice in the NHS indirectly in that they form a key component of the 'National Knowledge Service'.
The HTA Programme is needs-led in that it fills gaps in the evidence needed by the NHS. There are three routes to the start of projects.
First is the commissioned route. Suggestions for research are actively sought from people working in the NHS, the public and consumer groups and professional bodies such as royal colleges and NHS trusts. These suggestions are carefully prioritised by panels of independent experts (including NHS service users). The HTA Programme then commissions the research by competitive tender.
Secondly, the HTA Programme provides grants for clinical trials for researchers who identify research questions. These are assessed for importance to patients and the NHS, and scientific rigour.
Thirdly, through its Technology Assessment Report (TAR) call-off contract, the HTA Programme commissions bespoke reports, principally for NICE, but also for other policy-makers. TARs bring together evidence on the value of specific technologies.
Some HTA research projects, including TARs, may take only months; others need several years. They can cost from as little as £40,000 to over £1 million, and may involve synthesising existing evidence, undertaking a trial, or other research collecting new data to answer a research problem.
The final reports from HTA projects are peer-reviewed by a number of independent expert referees before publication in the widely read monograph series Health Technology Assessment.
Criteria for inclusion in the HTA monograph series
Reports are published in the HTA monograph series if (1) they have resulted from work for the HTA Programme, and (2) they are of a sufficiently high scientific quality as assessed by the referees and editors.
Reviews in Health Technology Assessment are termed 'systematic' when the account of the search, appraisal and synthesis methods (to minimise biases and random errors) would, in theory, permit the replication of the review by others.
The research reported in this monograph was commissioned by the National Coordinating Centre for Research Methodology (NCCRM), and was formally transferred to the HTA Programme in April 2007 under the newly established NIHR Methodology Panel. The HTA Programme project number is 06/90/13. The contractual start date was in March 2003. The draft report began editorial review in March 2007 and was accepted for publication in April 2007. The commissioning brief was devised by the NCCRM, who specified the research question and study design. The authors have been wholly responsible for all data collection, analysis and interpretation, and for writing up their work. The HTA editors and publisher have tried to ensure the accuracy of the authors' report and would like to thank the referees for their constructive comments on the draft document. However, they do not accept liability for damages or losses arising from material published in this report.
The views expressed in this publication are those of the authors and not necessarily those of the HTA Programme or the Department of Health.
Editor-in-Chief: Professor Tom Walley
Series Editors: Dr Aileen Clarke, Dr Peter Davidson, Dr Chris Hyde, Dr John Powell, Dr Rob Riemsma and Professor Ken Stein
Programme Managers: Sarah Llewellyn Lloyd, Stephen Lemon, Kate Rodger, Stephanie Russell and Pauline Swinburne
© 2007 Crown Copyright