Health Technology Assessment 2004; Vol 8: number 6
Executive Summary
JM Grimshaw,1* RE Thomas,1 G MacLennan,1 C Fraser,1 CR Ramsay,1 L Vale,1,2 P Whitty,3 MP Eccles,4 L Matowe,1 L Shirran,1 M Wensing,5 R Dijkstra5 and C Donaldson6
1 Health Services Research Unit, University of Aberdeen, UK
2 Health Economics Research Unit, University of Aberdeen, UK
3 Department of Epidemiology and Public Health, University of Newcastle upon Tyne, UK
4 Centre for Health Services Research, University of Newcastle upon Tyne, UK
5 Centre for Quality of Care Research, University of Nijmegen, The Netherlands
6 Department of Community Health Sciences, University of Calgary, Canada
* Corresponding author. Current affiliation: Clinical Epidemiology Programme, Ottawa Health Research Institute and Center for Best Practices, Institute of Population Health, University of
Current affiliation: Department of Pharmacy Practice, Faculty of Pharmacy, Kuwait University, Kuwait
Current affiliation: Centre for Health Services Research, University of Newcastle upon Tyne, UK
Clinical practice guidelines are an increasingly common element of clinical care throughout the world. Such guidelines have the potential to improve the care received by patients by promoting interventions of proven benefit and discouraging ineffective interventions. However, the development and introduction of guidelines are not without costs. In some circumstances, the costs of development and introduction are likely to outweigh their potential benefits. In other circumstances, it may be more efficient to adopt less costly but less effective dissemination and implementation strategies. Local healthcare organisations have relatively few resources for clinical effectiveness activities and policy makers need to consider how best to use these to maximise benefits.
The aims of the study were to review the effectiveness and efficiency of different guideline dissemination and implementation strategies and to estimate their resource implications.
Medline (1966–1998), Healthstar (1975–1998), Cochrane Controlled Trials Register (4th edition 1998), EMBASE (1980–1998), SIGLE (1980–1988) and the specialised register of the Cochrane Effective Practice and Organisation of Care (EPOC) group were searched using a gold standard search strategy developed from handsearches of key journals. The search strategy was 93% sensitive and 18% specific.
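The sensitivity and specificity figures follow the standard definitions for bibliographic searches. The sketch below illustrates the arithmetic with hypothetical record counts (the review reports only the resulting percentages, not the underlying tallies):

```python
# Illustrative calculation of search-strategy sensitivity and specificity.
# The counts below are hypothetical, chosen only to reproduce the quoted
# percentages (93% sensitive, 18% specific).

def search_accuracy(relevant_retrieved, relevant_total,
                    irrelevant_retrieved, irrelevant_total):
    """Return (sensitivity, specificity) of a bibliographic search strategy.

    Sensitivity: proportion of truly relevant records the search retrieves.
    Specificity: proportion of irrelevant records the search correctly excludes.
    """
    sensitivity = relevant_retrieved / relevant_total
    specificity = (irrelevant_total - irrelevant_retrieved) / irrelevant_total
    return sensitivity, specificity

# Hypothetical gold-standard handsearch: 100 relevant records, of which the
# strategy finds 93; of 1000 irrelevant records it wrongly retrieves 820.
sens, spec = search_accuracy(93, 100, 820, 1000)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")
```

A highly sensitive but poorly specific strategy, as here, retrieves almost all relevant studies at the cost of many irrelevant records that must be screened by hand.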
Two reviewers independently abstracted data on the methodological quality of the studies (using the Cochrane EPOC group's methodological quality criteria), characteristics of study setting, participants, targeted behaviours and characteristics of interventions. Studies reporting economic evaluations and cost analyses were further assessed against the British Medical Journal guidelines for reviewers of economic evaluations.
Single estimates of dichotomous process variables (e.g. proportion of patients receiving appropriate treatment) were derived for each study comparison based upon the primary end-point (as defined by the authors of the study) or the median measure across several reported end-points. An attempt was made to reanalyse studies with common methodological weaknesses. Separate analyses were undertaken for comparisons of single interventions against no-intervention controls, single interventions against controls receiving interventions, multifaceted interventions against no-intervention controls and multifaceted interventions against controls receiving interventions. The study also explored whether the effects of multifaceted interventions increased with the number of intervention components. For each intervention, the number of comparisons showing a positive direction of effect, the median effect size across all comparisons, the median effect size across comparisons without unit of analysis errors, and the number of comparisons showing statistically significant effects were reported. A planned meta-regression analysis could not be undertaken owing to the large number of different combinations of multifaceted interventions.
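The derivation of a single estimate per comparison, and the summary statistics reported across comparisons, can be sketched as follows. The data are hypothetical (study names and effect values are invented for illustration); the logic follows the rule stated above: use the authors' primary end-point where one is defined, otherwise the median across reported end-points.

```python
from statistics import median

# Hypothetical data: each study comparison reports one or more dichotomous
# process end-points as absolute differences (percentage points) between
# intervention and control groups.
comparisons = {
    "study_A": {"primary": 12.0, "endpoints": [12.0, 9.5, 15.0]},
    "study_B": {"primary": None, "endpoints": [3.0, 7.0, 5.5]},  # no primary end-point defined
    "study_C": {"primary": None, "endpoints": [-2.0, 4.0]},
}

def single_estimate(comp):
    """Primary end-point if the study authors defined one, otherwise the
    median measure across all reported end-points."""
    if comp["primary"] is not None:
        return comp["primary"]
    return median(comp["endpoints"])

estimates = [single_estimate(c) for c in comparisons.values()]
positive = sum(e > 0 for e in estimates)  # comparisons with a positive direction of effect
median_effect = median(estimates)         # median effect size across comparisons
print(positive, median_effect)
```

In the review itself these summaries were computed separately for each intervention type and each comparison design (single versus multifaceted, no-intervention versus active controls), and again after excluding comparisons with unit of analysis errors.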
Telephone interviews were conducted with key informants from primary and secondary care.
In total, 235 studies reporting 309 comparisons met the inclusion criteria. The overall quality of the studies was poor. Seventy-three per cent of comparisons evaluated multifaceted interventions, although the maximum number of replications of a specific multifaceted intervention was 11 comparisons. Overall, the majority of comparisons reporting dichotomous process data (86.6%) observed improvements in care; however, there was considerable variation in the observed effects both within and across interventions. Commonly evaluated single interventions were reminders (38 comparisons), dissemination of educational materials (18 comparisons) and audit and feedback (12 comparisons). There were 23 comparisons of multifaceted interventions involving educational outreach. The majority of interventions observed modest to moderate improvements in care. For example, the median absolute improvement in performance across interventions was 14.1% in 14 cluster randomised comparisons of reminders, 8.1% in four cluster randomised comparisons of dissemination of educational materials, 7.0% in five cluster randomised comparisons of audit and feedback and 6.0% in 13 cluster randomised comparisons of multifaceted interventions involving educational outreach. No relationship was found between the number of component interventions and the effects of multifaceted interventions.
Only 29.4% of comparisons reported any economic data. Eleven reported cost-effectiveness analyses, 38 reported cost consequence analyses (where differences in cost were set against differences in several measures of effectiveness) and 14 reported cost analyses (where some aspect of cost was reported but not related to benefits). The majority of studies only reported costs of treatment; only 25 studies reported data on the costs of guideline development or guideline dissemination and implementation. The majority of studies used process measures for their primary end-point, despite the fact that only three guidelines were explicitly evidence based (and may not have been efficient). Overall, the methods of the economic evaluations and cost analyses were poor. The viewpoint adopted in economic evaluations was only stated in ten studies. The methods to estimate costs were comprehensive in about half of the studies, and few studies reported details of resource use. Owing to the poor quality of reporting of the economic evaluation, data on resource use and cost of guideline development, dissemination and implementation were not available for most of the studies; only four studies provided sufficiently robust data for abstraction.
Respondents rarely identified existing budgets to support guideline dissemination and implementation strategies and frequently commented that such activities were supported by 'soft money' or by resources earmarked for specific initiatives. In general, the respondents thought that only dissemination of educational materials and short (lunchtime) educational meetings were generally feasible within current resources.
There is an imperfect evidence base to support decisions about which guideline dissemination and implementation strategies are likely to be efficient under different circumstances. Decision makers need to use considerable judgement about how best to use the limited resources they have for clinical governance and related activities to maximise population benefits. They need to consider the potential clinical areas for clinical effectiveness activities, the likely benefits and costs required to introduce guidelines, and the likely benefits and costs resulting from any changes in provider behaviour. Further research is required to develop and validate a coherent theoretical framework of health professional and organisational behaviour and behaviour change, to better inform the choice of interventions in research and service settings, and to estimate the efficiency of dissemination and implementation strategies in the presence of different barriers and effect modifiers.
Grimshaw JM, Thomas RE, MacLennan G, Fraser C, Ramsay CR, Vale L, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess 2004;8(6).
The NHS R&D Health Technology Assessment (HTA) Programme was set up in 1993 to ensure that high-quality research information on the costs, effectiveness and broader impact of health technologies is produced in the most efficient way for those who use, manage and provide care in the NHS.
Initially, six HTA panels (pharmaceuticals, acute sector, primary and community care, diagnostics and imaging, population screening, methodology) helped to set the research priorities for the HTA Programme. However, during the past few years there have been a number of changes in and around NHS R&D, such as the establishment of the National Institute for Clinical Excellence (NICE) and the creation of three new research programmes: Service Delivery and Organisation (SDO); New and Emerging Applications of Technology (NEAT); and the Methodology Programme.
The research reported in this monograph was identified as a priority by the HTA Programme's Methodology Panel and was funded as project number 94/08/29.
The views expressed in this publication are those of the authors and not necessarily those of the Methodology Programme, HTA Programme or the Department of Health. The editors wish to emphasise that funding and publication of this research by the NHS should not be taken as implicit support for any recommendations made by the authors.
Criteria for inclusion in the HTA monograph series
Reports are published in the HTA monograph series if (1) they have resulted from work commissioned for the HTA Programme, and (2) they are of a sufficiently high scientific quality as assessed by the referees and editors.
Reviews in Health Technology Assessment are termed systematic when the account of the search, appraisal and synthesis methods (to minimise biases and random errors) would, in theory, permit the replication of the review by others.
Methodology Programme Director: Professor Richard Lilford
HTA Programme Director: Professor Tom Walley
Series Editors: Dr Ken Stein, Professor John Gabbay, Dr Ruairidh Milne and Dr Rob Riemsma
Managing Editors: Sally Bailey and Caroline Ciupek
The editors and publisher have tried to ensure the accuracy of this report but do not accept liability for damages or losses arising from material published in this report. They would like to thank the referees for their constructive comments on the draft document.
© 2004 Crown Copyright