Health Technology Assessment 2005; Vol 9: number 26
Executive Summary
AM Glenny,1* DG Altman,2 F Song,3 C Sakarovitch,2 JJ Deeks,2 R D'Amico,2 M Bradburn2 and AJ Eastwood4
In collaboration with the International Stroke Trial Collaborative Group
1 Cochrane Oral Health Group, Dental School, University of Manchester, UK
2 Cancer Research UK, Centre for Statistics in Medicine, Wolfson College Annexe, Oxford, UK
3 School of Medicine Health Policy and Practice, University of East Anglia, Norwich, UK
4 Centre for Reviews and Dissemination, University of York, UK
The randomised controlled trial (RCT) is the most valid design for evaluating the relative efficacy of healthcare technologies. However, many competing interventions have never been directly compared in RCTs, and indirect methods have commonly been used in meta-analyses instead. Such indirect comparisons are subject to greater bias (especially selection bias) than head-to-head randomised comparisons, because the benefit of randomisation does not hold across trials. It is therefore essential to evaluate the extent of such bias, which may lead to inaccurate estimates of treatment effects and, in turn, to inappropriate policy decisions.
The objectives of this study were to assess how often indirect comparisons are used in published systematic reviews, to identify and appraise the statistical methods available for making them, and to evaluate empirically how reliable those methods are.
The Database of Abstracts of Reviews of Effects (DARE) (1994 to March 1999) was searched for systematic reviews involving meta-analysis of RCTs that reported both direct and indirect comparisons, or indirect comparisons alone. A systematic review of MEDLINE (1966 to February 2001) and other databases was carried out to identify published methods for analysing indirect comparisons.
Simulated meta-analyses were created using data from the International Stroke Trial. Random samples of patients receiving aspirin, heparin or placebo in 16 centres were used to construct meta-analyses in which half of the trials compared aspirin with placebo and half compared heparin with placebo. Methods for indirect comparison were then used to estimate the contrast between aspirin and heparin. The whole process was repeated 1000 times and the results were compared with those of the direct comparisons and with theoretical results.
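The simulation design described above can be sketched in outline. The following is a minimal illustration, not the report's actual procedure: the event risks, trial counts and patients per arm are hypothetical, each simulated trial's log odds ratio uses a simple 0.5 continuity correction, and trials are pooled with a fixed-effect inverse-variance model before the indirect contrast is formed.

```python
import math
import random

def trial_log_or(p_trt, p_ctl, n, rng):
    """Simulate one two-arm trial (n patients per arm) and return the log
    odds ratio and its variance. A 0.5 continuity correction is added to
    every cell for simplicity (an illustrative choice)."""
    e1 = sum(rng.random() < p_trt for _ in range(n))  # events on treatment
    e0 = sum(rng.random() < p_ctl for _ in range(n))  # events on control
    a, b = e1 + 0.5, n - e1 + 0.5
    c, d = e0 + 0.5, n - e0 + 0.5
    return math.log(a * d / (b * c)), 1 / a + 1 / b + 1 / c + 1 / d

def pool_fixed(results):
    """Fixed-effect inverse-variance pooling of (estimate, variance) pairs."""
    weights = [1 / v for _, v in results]
    est = sum(w * y for w, (y, _) in zip(weights, results)) / sum(weights)
    return est, 1 / sum(weights)

def one_replication(rng, n_trials=8, n_per_arm=200):
    # Hypothetical event risks: aspirin 0.18, heparin 0.20, placebo 0.20.
    asp = [trial_log_or(0.18, 0.20, n_per_arm, rng) for _ in range(n_trials)]
    hep = [trial_log_or(0.20, 0.20, n_per_arm, rng) for _ in range(n_trials)]
    d_asp, v_asp = pool_fixed(asp)   # aspirin vs placebo, pooled
    d_hep, v_hep = pool_fixed(hep)   # heparin vs placebo, pooled
    # Adjusted indirect contrast of aspirin vs heparin via placebo.
    return d_asp - d_hep, v_asp + v_hep

rng = random.Random(96)
estimates = [one_replication(rng)[0] for _ in range(1000)]
mean_indirect = sum(estimates) / len(estimates)
```

Averaged over the 1000 replications, the indirect estimate should recover the true aspirin-heparin contrast implied by the assumed risks, which is the kind of check against theoretical results that the study performed.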
Further detailed case studies comparing the results from both direct and indirect comparisons of the same effects were undertaken.
Of the reviews identified through DARE that included meta-analyses of two or more RCTs, 31/327 (9.5%) included indirect comparisons. A further five reviews including indirect comparisons were identified through electronic searching. Few reviews carried out a formal analysis; some based their analysis on the naive addition of data from the treatment arms of interest. Interpretation of indirect comparisons was not always appropriate.
Few methodological papers were identified. Some valid approaches for aggregate data that could be applied using standard software were found: the adjusted indirect comparison, meta-regression and, for binary data only, multiple logistic regression (fixed effect models only).
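The adjusted indirect comparison listed above can be expressed in a few lines. Given pooled results for A versus a common comparator C and for B versus C, the indirect log odds ratio for A versus B is the difference of the two contrasts, and its variance is the sum of their variances. A minimal sketch follows; the function name and the numbers in the example are hypothetical.

```python
import math

def adjusted_indirect_comparison(log_or_ac, se_ac, log_or_bc, se_bc):
    """Adjusted indirect comparison of A vs B via a common comparator C.

    log_or_ac, se_ac: pooled log odds ratio (and its SE) for A vs C
    log_or_bc, se_bc: pooled log odds ratio (and its SE) for B vs C
    Returns the indirect log OR of A vs B, its SE, and a 95% CI.
    """
    log_or_ab = log_or_ac - log_or_bc        # contrasts share comparator C
    se_ab = math.sqrt(se_ac**2 + se_bc**2)   # variances add
    ci = (log_or_ab - 1.96 * se_ab, log_or_ab + 1.96 * se_ab)
    return log_or_ab, se_ab, ci

# Hypothetical pooled results: A vs C log OR -0.10 (SE 0.05),
# B vs C log OR -0.05 (SE 0.05).
est, se, ci = adjusted_indirect_comparison(-0.10, 0.05, -0.05, 0.05)
# est = -0.05; se = sqrt(0.0025 + 0.0025) ≈ 0.071
```

Because the two pooled contrasts come from separate sets of trials, they are independent, which is why their variances simply add; this is also why the indirect estimate is always less precise than a direct estimate built from the same amount of data.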
Simulation studies showed that the naive method is liable to bias and also produces over-precise answers. Several methods provide correct answers if strong but unverifiable assumptions are fulfilled. Four times as many similarly sized trials are needed for the indirect approach to have the same power as directly randomised comparisons.
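The fourfold-trials result follows directly from the variance arithmetic. The figures below are illustrative, not from the report's data: if every similarly sized trial contributes the same variance v to a pooled log odds ratio, then k directly randomised trials give variance v/k, while the indirect estimate, being the difference of two independent pooled contrasts, has variance v/k + v/k = 2v/k.

```python
# Illustrative arithmetic with hypothetical values of v and k.
v, k = 0.04, 10

var_direct = v / k        # k directly randomised A-vs-B trials
var_indirect = 2 * v / k  # k trials on each side of the common comparator

# Matching the direct precision requires 2k trials per contrast,
# i.e. 2 * 2k = 4k trials in total: four times as many as the direct route.
k_match = 2 * k
assert 2 * v / k_match == var_direct
total_trials_indirect = 2 * k_match
```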
Detailed case studies comparing direct and indirect comparisons of the same effect show considerable statistical discrepancies, but the direction of such discrepancy is unpredictable.
When conducting systematic reviews to evaluate the effectiveness of interventions, direct evidence from good-quality RCTs should be used wherever possible. If little or no such evidence exists, it may be necessary to look for indirect comparisons from RCTs. The reviewer needs, however, to be aware that the results may be susceptible to bias.
When making indirect comparisons within a systematic review, an adjusted indirect comparison should ideally be made using a random effects model. If both direct and indirect comparisons are possible within a review, it is recommended that these be carried out separately before considering whether to pool the data.
There is a need for evaluation of methods for analysis of indirect comparisons for continuous data.
There is a need for empirical research into how different methods of indirect comparison perform in cases where there is a large treatment effect.
Further research is required to consider how to determine when it is appropriate to look at indirect comparisons and how to judge when to combine both direct and indirect comparisons. Research into how evidence from indirect comparisons compares to that from non-randomised studies may also be warranted.
Empirical investigations were based on one large, multicentre trial with a common protocol across each centre. It would be useful to repeat the investigations using individual patient data from a meta-analysis of several RCTs using different protocols.
The odds ratio was used as the measure of effect within this simulation study. Although logistic regression calls for the effect measure to be the odds ratio, it would be interesting to evaluate the impact of choosing different binary effect measures for the inverse variance method.
Glenny AM, Altman DG, Song F, Sakarovitch C, Deeks JJ, D'Amico R, et al. Indirect comparisons of competing interventions. Health Technol Assess 2005;9(26).
The research findings from the NHS R&D Health Technology Assessment (HTA) Programme directly influence key decision-making bodies such as the National Institute for Health and Clinical Excellence (NICE) and the National Screening Committee (NSC), which rely on HTA outputs to help raise standards of care. HTA findings also help to improve the quality of the service in the NHS indirectly, in that they form a key component of the National Knowledge Service that is being developed to improve the evidence base for clinical practice throughout the NHS.
The HTA Programme was set up in 1993. Its role is to ensure that high-quality research information on the costs, effectiveness and broader impact of health technologies is produced in the most efficient way for those who use, manage and provide care in the NHS. Health technologies are broadly defined to include all interventions used to promote health, prevent and treat disease, and improve rehabilitation and long-term care, rather than settings of care.
The HTA Programme commissions research only on topics where it has identified key gaps in the evidence needed by the NHS. Suggestions for topics are actively sought from people working in the NHS, the public, consumer groups and professional bodies such as Royal Colleges and NHS Trusts.
Research suggestions are carefully considered by panels of independent experts (including consumers) whose advice results in a ranked list of recommended research priorities. The HTA Programme then commissions the research team best suited to undertake the work, in the manner most appropriate to find the relevant answers. Some projects may take only months, others need several years to answer the research questions adequately. They may involve synthesising existing evidence or designing a trial to produce new evidence where none currently exists.
Additionally, through its Technology Assessment Report (TAR) call-off contract, the HTA Programme is able to commission bespoke reports, principally for NICE, but also for other policy customers, such as a National Clinical Director. TARs bring together evidence on key aspects of the use of specific technologies and usually have to be completed within a limited time period.
Criteria for inclusion in the HTA monograph series
Reports are published in the HTA monograph series if (1) they have resulted from work commissioned for the HTA Programme, and (2) they are of a sufficiently high scientific quality as assessed by the referees and editors.
Reviews in Health Technology Assessment are termed systematic when the account of the search, appraisal and synthesis methods (to minimise biases and random errors) would, in theory, permit the replication of the review by others.
The research reported in this monograph was commissioned by the HTA Programme as project number 96/51/99. The contractual start date was in February 1999. The draft report began editorial review in May 2002 and was accepted for publication in November 2004. As the funder, by devising a commissioning brief, the HTA Programme specified the research question and study design. The authors have been wholly responsible for all data collection, analysis and interpretation, and for writing up their work. The HTA editors and publisher have tried to ensure the accuracy of the authors' report and would like to thank the referees for their constructive comments on the draft document. However, they do not accept liability for damages or losses arising from material published in this report.
The views expressed in this publication are those of the authors and not necessarily those of the HTA Programme or the Department of Health.
Editor-in-Chief: Professor Tom Walley
Series Editors: Dr Peter Davidson, Professor John Gabbay, Dr Chris Hyde, Dr Ruairidh Milne, Dr Rob Riemsma and Dr Ken Stein
Managing Editors: Sally Bailey and Caroline Ciupek
© 2005 Crown Copyright