Health Technology Assessment 2005; Vol 9: number 35
Executive Summary
M King,1* I Nazareth,2 F Lampe,2 P Bower,3 M Chandler,1 M Morou,1 B Sibbald3 and R Lai4
1 Department of Mental Health Sciences,
Royal Free and University College Medical School, London, UK
2 Department of Primary Care and Population Sciences, Royal Free and University College Medical School, London, UK
3 National Primary Care Research and Development Centre, University of Manchester, UK
4 Medical Library, Royal Free and University College Medical School, London, UK
* Corresponding author
Participants in randomised controlled trials (RCTs) may have preferences for particular interventions that threaten external and internal validity. We tested three hypotheses: preferences affect recruitment to RCTs; preferences are important effect modifiers in RCTs; and the size of the effect modifier is larger in RCTs that require greater effort and participation by participants.
The objective of this study was to develop a conceptual framework of preferences for interventions in the context of RCTs, and to examine, through a systematic review of RCTs that incorporated participants' and professionals' preferences, the extent to which preferences affect recruitment to RCTs and modify their measured outcomes. A further objective was to make recommendations on the role of participants' and professionals' preferences in the evaluation of health technologies.
The conceptual framework and review of measurement methods was based on a review of published papers in the psychology and economics literature concerning concepts of relevance to patient decision-making and preferences, and their measurement.
For the systematic review we included RCTs in the world literature that measured or recorded preferences, trials that allocated participants on the basis of preference, and follow-ups of non-randomised cohorts (registry studies) in which patients received their preferred treatment. We excluded reviews in which there was no measurement or recording of preferences, RCTs of decision aids, reviews with post hoc measurement of preferences, registry studies with follow-up conducted without regard to preferences, and experimental studies in healthy volunteers.
The following data were extracted:
Data were synthesised and analysed as follows:
The following were found to be key elements for a conceptual framework of preferences in the context of RCTs:
The search identified 10,023 citations, of which 44 papers were eventually included in the systematic review; these covered 34 RCTs.
Our findings support our first hypothesis, namely that preferences affect trial recruitment. However, there was less evidence of bias in the characteristics of individuals agreeing to be randomised, and therefore limited evidence that external validity was seriously compromised. With regard to our second hypothesis, there was some evidence that participant or physician preferences influenced outcome in a proportion of trials. However, evidence for moderate or large preference effects was weaker in large trials and after accounting for baseline differences, and preference effects were inconsistent in direction. There was no evidence that preferences influenced attrition. The available evidence therefore does not support the operation of a consistent and important preference effect. Interventions cannot be categorised consistently by degree of participation, and examining differential preference effects based on unreliable categories risked drawing incorrect conclusions, so we refrained from testing our third hypothesis.
Preferences are hypothesised to be based on expectancies concerning the processes and outcomes associated with an intervention and the perceived value placed on those processes and outcomes. However, participants' preferences may be based on insufficient or incorrect information. In addition, decisions about treatment choice may not always accord with preferences and may be influenced by clinicians, relatives or friends. When preferences are likely to affect the external validity of an RCT, it is important to present potential participants with appropriate evidence, without straying into coercion. We have suggested how preferences might best be measured. Once participants have been recruited, preferences may affect perceptions of the intervention and satisfaction, but appear to exert few major effects on further participation or clinical outcome. Comprehensive cohort designs may still be worthwhile when a significant proportion of patients refuse to be randomised and (1) follow-up data are economical to collect, for example from routinely collected sources, or (2) when costs of follow-up are higher, a random sub-sample of participants allocated to their preferred treatment is followed up.
Our review also adds to the growing evidence that when preferences based on informed expectations or strong ethical objections to an RCT exist, observational methods are a valuable alternative. Data from observational studies may be valuable in situations where:
All RCTs in which participants and/or professionals cannot be masked to treatment arms should attempt to estimate participants' preferences. This would increase the amount of evidence available to answer questions about the effect of treatment preferences within and outwith RCTs. Furthermore, RCTs should routinely attempt to report the proportion of eligible patients who refused to take part because of their preferences for treatment. Beyond these two general recommendations, our findings also indicate a number of approaches to the design, conduct and analysis of RCTs that take account of participants' and/or professionals' preferences. We refer to these as a methodological toolkit for undertaking RCTs that incorporate some consideration of patients' or professionals' preferences.
Besides understanding more about how participants' and professionals' preferences affect the internal validity of RCTs, and informing professionals and patients about the need for good evidence of efficacy, we need greater application of information systems within the NHS to make use of routine data collection as a source of evidence on effectiveness.
The following areas are suggested for future research:
King M, Nazareth I, Lampe F, Bower P, Chandler M, Morou M, et al. Conceptual framework and systematic review of the effects of participants' and professionals' preferences in randomised controlled trials. Health Technol Assess 2005;9(35).
The research findings from the NHS R&D Health Technology Assessment (HTA) Programme directly influence key decision-making bodies such as the National Institute for Health and Clinical Excellence (NICE) and the National Screening Committee (NSC), which rely on HTA outputs to help raise standards of care. HTA findings also help to improve the quality of the service in the NHS indirectly, in that they form a key component of the National Knowledge Service being developed to improve the evidence base of clinical practice throughout the NHS.
The HTA Programme was set up in 1993. Its role is to ensure that high-quality research information on the costs, effectiveness and broader impact of health technologies is produced in the most efficient way for those who use, manage and provide care in the NHS. Health technologies are broadly defined to include all interventions used to promote health, prevent and treat disease, and improve rehabilitation and long-term care, rather than settings of care.
The HTA programme commissions research only on topics where it has identified key gaps in the evidence needed by the NHS. Suggestions for topics are actively sought from people working in the NHS, the public, consumer groups and professional bodies such as Royal Colleges and NHS Trusts.
Research suggestions are carefully considered by panels of independent experts (including consumers) whose advice results in a ranked list of recommended research priorities. The HTA Programme then commissions the research team best suited to undertake the work, in the manner most appropriate to find the relevant answers. Some projects may take only months, others need several years to answer the research questions adequately. They may involve synthesising existing evidence or designing a trial to produce new evidence where none currently exists.
Additionally, through its Technology Assessment Report (TAR) call-off contract, the HTA Programme is able to commission bespoke reports, principally for NICE, but also for other policy customers, such as a National Clinical Director. TARs bring together evidence on key aspects of the use of specific technologies and usually have to be completed within a limited time period.
Criteria for inclusion in the HTA monograph series
Reports are published in the HTA monograph series if (1) they have resulted from work commissioned for the HTA Programme, and (2) they are of a sufficiently high scientific quality as assessed by the referees and editors.
Reviews in Health Technology Assessment are termed systematic when the account of the search, appraisal and synthesis methods (to minimise biases and random errors) would, in theory, permit the replication of the review by others.
The research reported in this monograph was commissioned by the HTA Programme as project number 98/26/03. The contractual start date was in November 2001. The draft report began editorial review in August 2003 and was accepted for publication in June 2004. As the funder, by devising a commissioning brief, the HTA Programme specified the research question and study design. The authors have been wholly responsible for all data collection, analysis and interpretation, and for writing up their work. The HTA editors and publisher have tried to ensure the accuracy of the authors' report and would like to thank the referees for their constructive comments on the draft document. However, they do not accept liability for damages or losses arising from material published in this report.
The views expressed in this publication are those of the authors and not necessarily those of the HTA Programme or the Department of Health.
Editor-in-Chief: Professor Tom Walley
Series Editors: Dr Peter Davidson, Professor John Gabbay, Dr Chris Hyde, Dr Ruairidh Milne, Dr Rob Riemsma and Dr Ken Stein
Managing Editors: Sally Bailey and Caroline Ciupek
© 2005 Crown Copyright