Health Technology Assessment 2007; Vol. 11: No. 53
Executive Summary
S Hanney,1 M Buxton,1 C Green,2 D Coulson2 and J Raftery3*†
1 Health Economics Research Group, Brunel University, Uxbridge, UK
2 NCCHTA, University of Southampton, UK
3 Department of Epidemiology and Public Health, University of Birmingham, UK
* Corresponding author
† Present address: Wessex Institute of Health Research and Development, University of Southampton, UK
This project aimed to address two sets of questions: first, how the impact of health research programmes can best be assessed; and secondly, what impact the NHS HTA Programme has had.
The first set of questions was answered by a literature review of assessments of research programmes. Using standard search techniques, an initial 1600 papers were identified; about 200 of these were put on a preliminary list and 46 studies were reviewed in detail. The review identified the methods used (desk analysis, questionnaires, interviews and case studies) and the main models or conceptual frameworks applied, and assessed the strengths and weaknesses of the main approaches.
The answers to the second set of questions were sought using a 'multiple methods' approach, which triangulated National Coordinating Centre for Health Technology Assessment (NCCHTA) documentation, a survey of lead researchers and detailed case studies. The survey, using an established questionnaire, covered 204 eligible researchers who had led a project completed between 1993 and 2003.
Sixteen case studies provided more detailed examples of impact, the factors associated with impact and the best methods to assess impact. The 16 comprised nine clinical trials, four evidence synthesis research projects and three Technology Assessment Reports (TARs) for the National Institute for Health and Clinical Excellence (NICE). These were selected by stratified random sampling, to the best of our knowledge the first use of such sampling in payback analysis. The case studies consisted of interviews with lead investigators; analysis of relevant documents, including the main published papers and reports; and any other relevant reviews. Interviewees were asked to identify factors linked to the level of impact achieved. Each case study was written up using the payback framework, and a cross-case analysis informed the assessment of factors associated with achieving payback.
Each case study was scored for impact before and after the interview to assess the gain in information due to the interview. The draft write-up of each study was checked with each respondent for accuracy and changed if necessary.
The literature review identified a highly diverse literature, but confirmed that the 'payback' framework pioneered by Buxton and Hanney was the most widely used and most appropriate model available; it encompassed key elements of many of the alternatives. The review confirmed that impact on knowledge generation was more easily quantified than impact on policy, behaviour or, especially, health gain. The review of the included studies indicated a higher level of impact on policy than is often assumed to occur.
The diverse literature suggested that two different sets of studies might provide the most appropriate comparators for the two main parts of the NHS HTA Programme. Studies of the impact of 'HTA Programmes' for policy-making bodies can best be compared with the TARs for NICE. The group 'Other Health Research Programmes' provides comparisons for the primary and secondary research projects of the NHS HTA Programme.
The survey showed that data pertinent to payback exist and can be collected. However, over one-third of projects did not respond, despite repeated reminders. Against this, a 100% response rate was not feasible, as over the 10-year period several lead researchers had died, retired, moved or were otherwise not reachable.
The completed questionnaires confirmed, corrected and extended the data collated by NCCHTA on publications and other indicators. They showed that the HTA Programme had considerable impact in terms of publications, dissemination, policy and behaviour. They also showed, as expected, that different parts of the Programme had different impacts. The TARs for NICE had the clearest impact on policy, in the form of NICE guidance. Other policy 'customers' included the National Screening Committee (NSC) and National Service Frameworks.
Overall, impacts measured in the survey were consistent with, or somewhat better than, those for other programmes identified in the literature review. Mean publications per project were 2.93 (1.98 excluding the monographs), above the level reported for other programmes. The proportion of NICE TARs reporting an impact on policy, at 96%, was among the highest for the 'HTA Programmes' group. The 60% of primary and secondary studies reporting an impact on policy was above the other programmes in its group (although some of the latter were responsive mode programmes which would not have been expected to make much impact on policy). The percentage of primary and secondary projects reporting an impact on behaviour was in the middle of the range for the 'Other Health Research Programmes'. Comparisons with other programmes must be treated with considerable caution due to differences in programme objectives, topics researched and methods of assessing impact.
The NCCHTA's reliance on researchers to inform it of publications was shown to lead to incomplete data. Around one-quarter of publications in peer-reviewed journals were missed. These data could probably be better collected using the Internet and then asking the researchers to correct and amend the resulting list. Other data such as those on presentations and further research could only be collected from the researchers.
The case studies revealed the large diversity in the levels and forms of impacts and the ways in which they arise. All the NICE TARs and more than half of the other case studies had some impact on policy making at the national level, whether through NICE, NSC, National Service Frameworks, professional bodies or the Department of Health. This underlines the importance of having a customer or 'receptor' body. A few case studies had very considerable impact in terms of knowledge production and in informing national and international policies. In some of these the Principal Investigator had prior expertise and/or a research record in the topic. The case studies confirmed the questionnaire responses, but also provided more details, including information on how some projects led to further research.
All but one of the case studies with high impact had successful peer-reviewed publications and engaged in active dissemination. Although researchers were generally satisfied with NCCHTA, some complained about lengthy procedures and one about changes in study design.
The pre- and post-interview scoring showed reasonable correlations and high inter-rater reliability. This indicated that most researchers were not making exaggerated claims for impact in their questionnaire responses.
This study concluded that the HTA Programme has had considerable impact in terms of knowledge generation and perceived impact on policy and to some extent on practice. This high impact may have resulted partly from the HTA Programme's objectives, in that topics tend to be of relevance to the NHS and have policy customers. The required use of scientific methods, notably systematic reviews and trials, coupled with strict peer reviewing, may have helped projects publish in high-quality peer-reviewed journals.
It could be argued on the basis of the review that the NHS would benefit from an expansion of the HTA Programme, and that more should be done to encourage NHS customers to seek research relevant to their own work with a view to changing practice.
Recommendations were made on how the HTA Programme could improve, including:
Three main areas for further research were identified:
Hanney S, Buxton M, Green C, Coulson D, Raftery J. An assessment of the impact of the NHS Health Technology Assessment Programme. Health Technol Assess 2007;11(53).
The Health Technology Assessment (HTA) Programme, now part of the National Institute for Health Research (NIHR), was set up in 1993. It produces high-quality research information on the costs, effectiveness and broader impact of health technologies for those who use, manage and provide care in the NHS. 'Health technologies' are broadly defined to include all interventions used to promote health, prevent and treat disease, and improve rehabilitation and long-term care, rather than settings of care.
The research findings from the HTA Programme directly influence decision-making bodies such as the National Institute for Health and Clinical Excellence (NICE) and the National Screening Committee (NSC). HTA findings also help to improve the quality of clinical practice in the NHS indirectly in that they form a key component of the 'National Knowledge Service'.
The HTA Programme is needs-led in that it fills gaps in the evidence needed by the NHS. There are three routes to the start of projects.
First is the commissioned route. Suggestions for research are actively sought from people working in the NHS, the public and consumer groups and professional bodies such as royal colleges and NHS trusts. These suggestions are carefully prioritised by panels of independent experts (including NHS service users). The HTA Programme then commissions the research by competitive tender.
Secondly, the HTA Programme provides grants for clinical trials for researchers who identify research questions. These are assessed for importance to patients and the NHS, and scientific rigour.
Thirdly, through its Technology Assessment Report (TAR) call-off contract, the HTA Programme commissions bespoke reports, principally for NICE, but also for other policy-makers. TARs bring together evidence on the value of specific technologies.
Some HTA research projects, including TARs, may take only months, while others need several years. They can cost from as little as £40,000 to over £1 million, and may involve synthesising existing evidence, undertaking a trial, or other research collecting new data to answer a research problem.
The final reports from HTA projects are peer-reviewed by a number of independent expert referees before publication in the widely read monograph series Health Technology Assessment.
Criteria for inclusion in the HTA monograph series
Reports are published in the HTA monograph series if (1) they have resulted from work for the HTA Programme, and (2) they are of a sufficiently high scientific quality as assessed by the referees and editors.
Reviews in Health Technology Assessment are termed 'systematic' when the account of the search, appraisal and synthesis methods (to minimise biases and random errors) would, in theory, permit the replication of the review by others.
The research reported in this monograph was commissioned by the HTA Programme as project number 03/67/01. The contractual start date was in April 2005. The draft report began editorial review in October 2006 and was accepted for publication in May 2007. As the funder, by devising a commissioning brief, the HTA Programme specified the research question and study design. The authors have been wholly responsible for all data collection, analysis and interpretation, and for writing up their work. The HTA editors and publisher have tried to ensure the accuracy of the authors' report and would like to thank the referees for their constructive comments on the draft document. However, they do not accept liability for damages or losses arising from material published in this report.
The views expressed in this publication are those of the authors and not necessarily those of the HTA Programme or the Department of Health.
Editor-in-Chief: Professor Tom Walley
Series Editors: Dr Aileen Clarke, Dr Peter Davidson, Dr Chris Hyde, Dr John Powell, Dr Rob Riemsma and Professor Ken Stein
Programme Managers: Sarah Llewellyn Lloyd, Stephen Lemon, Kate Rodger, Stephanie Russell and Pauline Swinburne
© 2007 Crown Copyright