Health Technology Assessment 2004; Vol 8: number 36
Executive Summary
Z Philips,1 L Ginnelly,1* M Sculpher,1 K Claxton,1,2 S Golder,3 R Riemsma,3 N Woolacott3 and J Glanville3
1 Centre for Health Economics, University of York, UK
2 Department of Economics, University of York, UK
3 Centre for Reviews and Dissemination, University of York, UK
* Corresponding author
Decision-analytic models represent an explicit way to synthesise evidence currently available on the outcomes and costs of alternative (mutually exclusive) healthcare interventions. Usually their objective is to obtain a clear understanding of the relationship between incremental cost and effect in order to assess relative cost-effectiveness and to determine which interventions should be adopted given existing information. Given that the use of decision-analytic modelling for health technology assessment has increased exponentially in recent years, there is a need to consider how good practice in the field has been defined. Since the 1980s, several published guidelines have been available for those developing and evaluating decision-analytic models for health technology assessment. However, given the speed at which economic evaluation methodology has progressed, it is timely to review, critically appraise and consolidate those existing guidelines on the use of decision-analytic modelling in health technology assessment, and to identify key issues where guidance is lacking.
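The relationship between incremental cost and effect described above is usually summarised as an incremental cost-effectiveness ratio (ICER): the extra cost per extra unit of health gained when moving from one intervention to a mutually exclusive alternative. The sketch below illustrates that arithmetic only; the costs, effects and threshold are hypothetical figures, not data from this report.

```python
# Illustrative incremental cost-effectiveness calculation for two
# mutually exclusive interventions (all figures are hypothetical).

def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra
    unit of effect (e.g. per QALY gained)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical comparison: new intervention vs. standard care.
ratio = icer(cost_new=12_000, effect_new=6.5,   # cost in GBP, effect in QALYs
             cost_old=8_000,  effect_old=6.0)

threshold = 20_000  # illustrative willingness-to-pay per QALY (GBP)
print(f"ICER = {ratio:,.0f} per QALY; adopt: {ratio <= threshold}")
```

Under these assumed figures the new intervention costs an extra 4,000 for an extra 0.5 QALYs, giving an ICER of 8,000 per QALY, below the illustrative threshold. In practice a decision model would also characterise the uncertainty around such an estimate, which is part of what the guidance reviewed here addresses.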
The project consisted of four key elements:
Systematic searches identified 26 papers offering general guidance on good quality decision-analytic modelling. Of these, 15 met the inclusion criteria and were reviewed and consolidated into a single set of brief statements of good practice. Based on this review, a checklist was developed and applied to three independent decision-analytic models.
Elements were summarised under the headings of Structure, Data and Consistency. Within the published literature, the process of developing a framework for good practice has been iterative. Although the checklist provided excellent guidance on some key issues for model evaluation, it was too general to pick up on the specific nuances of each model.
The searches that were developed helped to identify important data for inclusion in the model. However, the quality of life searches proved to be problematic: the published search filters did not focus on those measures specific to cost-effectiveness analysis and, although the strategies developed as part of this project were more successful, few data were found.
Fourteen relevant references were identified, although three of these did not provide actual estimates of bias. Of the remaining 11 studies, five concluded that a non-randomised trial design is associated with bias and six studies found similar estimates of treatment effects from observational studies or non-randomised clinical trials and randomised controlled trials (RCTs).
Decision modelling is central to the NICE technology assessment process and it is essential to assess the quality of those models that are developed to inform the Appraisal Committee. One purpose of developing the synthesised guideline and checklist was to provide a framework for critical appraisal by the various parties involved in the health technology assessment process. First, the guideline and checklist can be used by groups reviewing other analysts' models; secondly, they can be used by analysts themselves as a check on how they are developing and reporting their analyses.
The Expert Advisory Group (EAG) that was convened to discuss the potential role of the guidance and checklist in the NICE TAR process felt that, in general, the guidance and checklist would be a useful tool in the NICE TAR process for the assessment team, technical leads and committee members. However, some caution must be applied when using the checklist, and it is particularly important to realise that the checklist is not meant to be used exclusively to determine a model's quality, and so should not be used as a substitute for critical appraisal.
Currently, no common checklist is used in the review process. It is hoped that further discussion between the assessment teams and NICE will lead to the use of the same checklists across the groups. This would include those used for economic evaluation in general, as well as decision models in particular.
The review of current guidelines showed that although authors may provide a consistent message regarding some aspects of modelling, in other areas conflicting attributes are presented in different guidelines.
A preliminary assessment showed that, in general, the checklist appears to perform well in terms of identifying those aspects of the model that should be of particular concern to the reader. The checklist cannot, however, determine the appropriateness of the model structure and structural assumptions; this is a general limitation of generic checklists rather than a shortcoming of the synthesised guidance and checklist developed here. The assessment of the checklist, as well as feedback from the EAG, indicated the importance of its use in conjunction with a more general checklist or guidelines on economic evaluation.
The review of current guidance for good quality decision-analytic modelling for health technology assessment highlighted a number of methodological areas that have not received attention in the literature on good practice. Many such areas exist and it was therefore only possible to consider two specific methods areas in decision modelling: the identification of parameter estimates from published literature, and the issue of adjusting treatment effect estimates taken from observational studies for potential bias. Literature reviews showed that both of these areas are under-researched and in need of further work.
This project has highlighted many areas where further methods research may be of value.
Philips Z, Ginnelly L, Sculpher M, Claxton K, Golder S, Riemsma R, et al. Review of guidelines for good practice in decision-analytic modelling in health technology assessment. Health Technol Assess 2004;8(36).
The research findings from the NHS R&D Health Technology Assessment (HTA) Programme directly influence key decision-making bodies such as the National Institute for Clinical Excellence (NICE) and the National Screening Committee (NSC), which rely on HTA outputs to help raise standards of care. HTA findings also help to improve the quality of the service in the NHS indirectly, in that they form a key component of the National Knowledge Service that is being developed to improve the evidence base for clinical practice throughout the NHS.
The HTA Programme was set up in 1993. Its role is to ensure that high-quality research information on the costs, effectiveness and broader impact of health technologies is produced in the most efficient way for those who use, manage and provide care in the NHS. Health technologies are broadly defined to include all interventions used to promote health, prevent and treat disease, and improve rehabilitation and long-term care, rather than settings of care.
The HTA programme commissions research only on topics where it has identified key gaps in the evidence needed by the NHS. Suggestions for topics are actively sought from people working in the NHS, the public, consumer groups and professional bodies such as Royal Colleges and NHS Trusts.
Research suggestions are carefully considered by panels of independent experts (including consumers) whose advice results in a ranked list of recommended research priorities. The HTA Programme then commissions the research team best suited to undertake the work, in the manner most appropriate to find the relevant answers. Some projects may take only months, others need several years to answer the research questions adequately. They may involve synthesising existing evidence or designing a trial to produce new evidence where none currently exists.
Additionally, through its Technology Assessment Report (TAR) call-off contract, the HTA Programme is able to commission bespoke reports, principally for NICE, but also for other policy customers, such as a National Clinical Director. TARs bring together evidence on key aspects of the use of specific technologies and usually have to be completed within a limited time period.
Criteria for inclusion in the HTA monograph series
Reports are published in the HTA monograph series if (1) they have resulted from work commissioned for the HTA Programme, and (2) they are of a sufficiently high scientific quality as assessed by the referees and editors.
Reviews in Health Technology Assessment are termed systematic when the account of the search, appraisal and synthesis methods (to minimise biases and random errors) would, in theory, permit the replication of the review by others.
The research reported in this monograph was commissioned and funded by the HTA Programme on behalf of NICE as project number 02/32/01. The authors have been wholly responsible for all data collection, analysis and interpretation and for writing up their work. The HTA editors and publisher have tried to ensure the accuracy of the authors' report and would like to thank the referees for their constructive comments on the draft document. However, they do not accept liability for damages or losses arising from material published in this report.
The views expressed in this publication are those of the authors and not necessarily those of the HTA Programme, NICE or the Department of Health.
HTA Programme Director: Professor Tom Walley
Series Editors: Dr Peter Davidson, Professor John Gabbay, Dr Chris Hyde, Dr Ruairidh Milne, Dr Rob Riemsma and Dr Ken Stein
Managing Editors: Sally Bailey and Caroline Ciupek
© 2004 Crown Copyright