Tom MacDonald, Isla Mackenzie, and Li Wei
Medical Research Institute of Dundee2
Traditional randomised clinical trials are very expensive and time-consuming and often have poor external validity (Ware and Hamel, 2011). The challenge for modern medicine is to find ways of producing good-quality evidence with good external validity and to do so more efficiently, with less bureaucracy, more expeditiously, and above all less expensively than we have done in the past. To achieve these goals seems daunting at first but most of us work in organisations that already have amazing infrastructures that can be harnessed for research.
This paper will use the example of the United Kingdom (UK) National Health Service (NHS) but the methods are likely to be generalisable to other health care systems and health maintenance organisations.
The UK NHS is an organisation that has 61 million subjects about whom everything is known (at least theoretically). These patients are treated from cradle to grave in a system that is free at the point of delivery. So, there are data on all drug treatments (both prescribed and dispensed), all physician visits, all comorbidities, all laboratory tests, all hospitalisations, and all certified causes of death. In addition, there are data about ancestors and offspring and much other detail about habits, social deprivation, etc. When asked in surveys, the public appears to support the use of these data to inform about the effectiveness and safety of medicines (Mackenzie et al., 2012).
1 The views expressed in this discussion paper are those of the authors and not necessarily of the authors’ organizations or of the Institute of Medicine. The paper is intended to help inform and stimulate discussion. It has not been subjected to the review procedures of the Institute of Medicine and is not a report of the Institute of Medicine or of the National Research Council.
2 Participants in the activities of the IOM Forum on Drug Discovery, Development, and Translation. This discussion paper is based on a submission to the Forum’s November 2011 workshop, Envisioning a Transformed Clinical Trials Enterprise in the United States: Establishing an Agenda for 2020, to inform the workshop discussions surrounding international case studies in the area of clinical research transformation.
The NHS in Scotland has traditionally had good record-linkage abilities due to far-sighted public health physicians in the 1960s (Kendrick, undated, 1997). This system has been utilised for observational research such as pharmacovigilance (Evans and MacDonald, 1999) and to develop clinical disease registers and managed clinical networks (Morris et al., 1997), but more recently record-linkage has been seen as an accurate way to track the outcomes of subjects randomised in clinical trials (Ford et al., 2007; West of Scotland Coronary Prevention Study Group, 1995).
RANDOMISING PATIENTS: STREAMLINED STUDIES
Randomising patients to different treatments within the NHS and using record-linkage to track outcomes is the concept behind the “streamlined study.” Such studies can be double-blind, with subjects being provided with masked investigational medicinal products (IMPs), or, if better external validity is required, internal validity can be traded away by using open designs. The Febuxostat versus Allopurinol Streamlined Trial (FAST), which is running in the United Kingdom and Denmark, is an example of a streamlined study in which IMPs are provided to patients by mail. Follow-up is by a composite of e-mail, phone calls, family doctors, and record-linkage to hospitalisations and deaths.3 Blinding the end-points of this type of study (which are unlikely to be influenced by the patient’s knowledge of what they are taking) results in the Prospective, Randomised, Open, Blinded Endpoint or PROBE design (Hansson et al., 1992). Of course, the ultimate “randomised effectiveness” study simply randomises therapy, which is then prescribed and compared to normal-care prescribing. One such study, the Standard Care versus Celecoxib Outcome Trial (SCOT), is currently running in the United Kingdom, Denmark, and the Netherlands. These “naturalistic” designs most closely mimic usual care and are designed to inform about the safety and effectiveness of medicines, and they can help policy makers decide whether these medicines should be reimbursed.
3 For more information on the FAST study, visit http://www.ukctg.nihr.ac.uk/trialdetails/ISRCTN72443728.
The use of this type of trial design specifically for post-approval safety research has recently been reviewed (Reynolds et al., 2011). Only 13 studies of this design were identified in the review, so such studies do not yet have an extensive track record. However, the authors concluded that this type of design has demonstrated utility for comparative research on medicines and vaccines.
There have been calls for studies of medicines to be run independently from the pharmaceutical industry (Steinbrook and Kassirer, 2010). In Europe, the study sponsor is the legal entity responsible for the conduct of a study and is independent of the study funder. We believe that academic and/or NHS sponsorship is the best way to carry out independent research on medicines and this is the mechanism under which we carry out this research, such as the SCOT and FAST trials (MacDonald et al., 2010).
PROS AND CONS OF STREAMLINED STUDIES
Streamlined studies make use of information technology (IT) to assist with the identification and invitation of patients to participate. In the case of the NHS, individual primary care physician practices can be recruited and contracted. These practices then allow electronic searches of their practice records to identify subjects who meet the study entry criteria. A list of potentially suitable subjects is scrutinised by the primary care physicians to remove subjects whom they deem unsuitable to be contacted. The final list is then used to send letters to these subjects (the text of which is preapproved by the ethics committee) on practice notepaper, signed by the patient’s doctor. A study patient information sheet is enclosed with this letter. Those patients who reply and who express willingness to be considered for study inclusion are then contacted by the study research nurse, who takes consent, formally screens them using an electronic case report form, and randomises them using an interactive voice response system or online randomisation tool.
The benefit of using such a system is that large populations of patients who meet the inclusion and exclusion criteria for a study can be screened efficiently and invited to participate. Thus, in the SCOT study, for example, more than 630 family practices, representing a total population in excess of 4 million patients, have signed research contracts and had their records electronically searched to identify suitable study subjects. This is a highly efficient way to screen large populations.
The follow-up of subjects is also efficient as all hospitalisations and deaths are recorded centrally in the United Kingdom, so subjects can be efficiently tracked by record-linkage. Secure, study-specific web portals allow family physicians and study staff to report adverse events, adjust medication, track laboratory results, etc. NHS records of hospitalisations suspected of being endpoints can be retrieved, scanned into portable document format, redacted where necessary, and abstracted to forms to allow endpoint committee adjudication. Such adjudication is also done using remote secure systems that allow geographically diverse end-point committee members to interact efficiently.
These electronic systems allow efficiencies that reduce the overall costs of research. It seems to be even better in Denmark, where prescribing records are collated centrally along with hospitalisation and mortality data and where the population seems to welcome, and even expect, electronic records to be used for research purposes. However, these electronic systems do not solve all study problems.
The bureaucratic processes of obtaining multiple consents and approvals required to carry out clinical research are not diminished by such study designs (Duley et al., 2008; McMahon et al., 2009). This article is not the place to rehearse these issues. Suffice it to say that the UK Academy of Medical Sciences has produced a report for the UK government on this matter that has made a number of recommendations aimed at reducing this burden of bureaucracy (The Academy of Medical Sciences, 2011). (See also the Institute of Medicine discussion paper by Sir Michael Rawlins, Health Research as a Public Good, 2012.)
Engaging Patients in Research
A major hurdle facing streamlined studies (as well as other clinical studies) is how to engage patients in the health research agenda. While we can efficiently identify suitable subjects electronically based on eligibility criteria, for every 100 patients written to, our experience is that an average of only 14 subjects are randomised. The dominant reason for this is that most patients do not reply to letters of invitation written to them by their family physicians. In inner-city practices in the United Kingdom, 70 percent or more of subjects do not reply at all. In rural practices, we get more replies and more positive replies, perhaps because patients have closer relationships with their family doctors in the rural setting. However, while electronically searching family physician records provides a way of writing to large cohorts of suitable patients, it does not solve the problem of how to engage patients in research and enhance recruitment. Several reviews have addressed this issue (Caldwell et al., 2010; Treweek et al., 2010). Despite trying numerous initiatives, we have not yet found a good solution to this problem (see Box 1 and Mackenzie et al., 2010). Cracking this difficult nut will require effort.
We have recently received ethical committee approval for a study to formally evaluate the effect of paying patients an incentive of £100 (about $150) to participate in clinical trials. This method has some evidence base (Halpern et al., 2004; Martinson et al., 2000) and appears to be common practice in the United States (Dickert et al., 2002). Perhaps this could provide a solution both to recruiting more subjects and to recruiting subjects who are more representative of the population at large.
Clinical outcome trials are designed to include subjects who are likely to experience outcome events. Even for a composite outcome such as hospitalisation for myocardial infarction, cerebrovascular accident, or vascular death, the expected event rate may be only 1 to 2 percent per year even in an “at-risk” older population. A problem that bedevils trialists is that the patients enrolled into studies often have event rates lower than expected. There are many potential explanations for this phenomenon, but one is that patients who respond positively to letters of invitation to participate are those who have an interest in their health, and thus exhibit good health behaviour and are at lower risk of events. Subjects with poor health behaviour are less likely to participate. Clearly, trials of younger patients with few risk factors could not feasibly examine outcome events, as these studies would have to be impractically large or long, or both, to generate sufficient events. Because of this, streamlined studies have to limit recruitment to older subjects, preferably with additional risk factors, in order to keep the size and duration of the trial within reasonable parameters. As with all studies that restrict inclusion of subjects, the generalisability, or external validity, of such studies is reduced by such restrictions. However, even when entry is restricted to high-risk and older age groups, event rates can be low. The very elderly and the socially deprived are underrepresented in clinical trials. Part of engaging the public in research must be to target these groups and stress how important it is that they are included.
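The size constraint described above can be made concrete with the standard two-proportion sample-size formula. The sketch below is purely illustrative: the event rates, effect size, significance level, and power are assumed figures, not parameters of any trial discussed here.

```python
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.8):
    """Subjects per arm to compare two event proportions
    (normal approximation; illustrative figures only)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

# Assumed annual event rates of 1.5% vs 1.2% (a 20% relative reduction):
# roughly 23,000 subjects per arm per year of follow-up
print(round(n_per_arm(0.015, 0.012)))
```

At higher event rates the same relative effect needs far fewer subjects, which is why recruitment is restricted to older, higher-risk groups.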
A potential criticism of streamlined studies is that there is relatively little patient contact post-randomisation. While a lack of scheduled patient visits dramatically reduces the costs of follow-up, an argument can be made that it also results in a loss of post-randomisation “control” of the study. Usually, subjects in randomised trials are “encouraged” to persist with randomised medication until the end of the trial. In streamlined studies (by design), persistence with medication more closely resembles normal care. Thus, subjects may be more likely to switch randomised medication. The effect of this is that streamlined studies become more “observational” with time. For superiority studies, where the primary analysis is of the “intention to treat” population, switching of medication post-randomisation will dilute the observed efficacy. However, the result will be more informative of the likely effectiveness of such an intervention introduced into normal care. Clearly, such clinical effectiveness is the metric that drives decisions about cost-effectiveness. For non-inferiority designs, in which the primary outcome of interest is a per-protocol analysis, switching therapy post-randomisation results in subjects being censored at the point of switching, with a resulting reduction in the person-years of exposure to medication and thus reduced power of the study. Streamlined studies need to take this factor into account at the design stage and over-recruit subjects to compensate for this effect. In addition, these studies should prospectively plan to carry out an observational type of analysis, by treatment taken at the time of an event, as a supporting post hoc analysis. Such analyses need to be developed.
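The required over-recruitment can be sketched with a simple model of person-years lost to switching. Everything here is an assumption chosen for illustration: the exponential time-to-switch, the switching rate, and the planned numbers.

```python
import math

def inflated_n(base_n, annual_switch_rate, years):
    """Inflate recruitment to offset person-years lost when subjects
    are censored at the point of switching (per-protocol analysis).
    Assumes an exponential time-to-switch, purely for illustration."""
    lam_t = annual_switch_rate * years
    # expected fraction of planned person-years actually spent
    # on the randomised medication: E[min(T_switch, T)] / T
    retained = (1 - math.exp(-lam_t)) / lam_t
    return math.ceil(base_n / retained)

# e.g. 10,000 subjects planned, 10% switching per year, 5-year follow-up
print(inflated_n(10_000, 0.10, 5))  # → 12708
```

The faster subjects switch, the more the recruitment target must grow to preserve the per-protocol person-years.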
ALTERNATIVES TO RANDOMISING PATIENTS INTO STREAMLINED STUDIES
Whilst streamlined studies have many advantages, they are not a perfect solution to getting good data quickly and inexpensively. For this reason we have explored other methods of evaluating treatments.
Randomising Family Practice Prescribing or Cluster Randomisation
In the United Kingdom, family practices invariably adopt a limited list or practice formulary of medications to which their practice computer systems default when they prescribe. These formularies are often derived from regional formularies, which in turn are derived from the recommendations of bodies such as the Scottish Medicines Consortium (SMC) or the National Institute for Clinical Excellence (NICE).
There are 15,158 practices in the UK NHS. If, for a particular indication, even a small proportion of these practices were randomised so that half used one medication and the other half used a different medication, then a cluster randomised design would produce excellent outcome data very quickly, as the sheer numbers involved would allow studies of even quite rare conditions to be done. This method provides a framework for the pragmatic evaluation of the comparative effectiveness of medications (Maclure, 2009), and we have found that such designs are supported by the public (Mackenzie et al., 2012).
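The cost of randomising practices rather than patients is a loss of statistical power, which can be quantified with the standard Kish design effect. The practice size, patient counts, and intracluster correlation below are invented figures for illustration, not data from any study discussed here.

```python
def design_effect(cluster_size, icc):
    """Kish design effect: variance inflation when randomising clusters
    (practices) rather than individual patients."""
    return 1 + (cluster_size - 1) * icc

def effective_n(total_n, cluster_size, icc):
    """Individually randomised sample size the cluster design is worth."""
    return total_n / design_effect(cluster_size, icc)

# e.g. 100 practices of 200 eligible patients each, and an assumed
# intracluster correlation (ICC) of 0.01
print(round(effective_n(100 * 200, 200, 0.01)))  # → 6689
```

Even with a small ICC, large clusters shrink the effective sample size substantially, which is why the sheer scale of the NHS is what makes the design attractive.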
One such study is already under way as a pilot in the United Kingdom. This is a British Hypertension Society Research Network study titled A Randomised Policy Trial to Evaluate the Optimal Policy Diuretic for the Treatment of Hypertension. This trial seeks to formally evaluate the new proposals from NICE to change diuretic therapy for hypertension in the United Kingdom from bendroflumethiazide to chlortalidone or indapamide, guidance which has been criticised for having a poor evidence base (Brown et al., 2012).
How Randomising Practice Formulary Studies Are Done
The way these studies work is that practices agree to participate and are then randomly allocated a drug (or treatment strategy) to implement. Each practice writes to all patients affected by this formulary change to tell them that the practice has decided to change (or not to change) its formulary first-choice drug in order to help determine which drug is the better treatment. Patients are informed that this change will be evaluated and that their anonymised data will be used in this evaluation. Patients are offered the opportunity to opt out of the change of drug or to opt out of their anonymised data being used in the evaluation.
The pilot phase of the current diuretic study seeks to determine the workload generated to practices by writing to patients and dealing with feedback and potential opt-outs. Clearly, the level of remuneration required by practices will depend on this workload. However, the majority of general practitioners surveyed are supportive of this type of evaluation, and we believe that the majority of patients also support the NHS evaluation of drugs used in the NHS (Mackenzie et al., 2012).
Recently, the UK government has announced an initiative suggesting that everyone in the NHS should contribute to research and become research patients. Such an initiative will hopefully support this type of trial design, in which the NHS evaluates the medicines the NHS uses (Mackenzie et al., 2012).
Ethical issues concerning cluster randomised trial designs have been debated but no clear consensus has been reached (Taljaard et al., 2009). When asked about the ethics of cluster-randomising new practice guidelines that were drawn up based on opinion versus previous guidelines, most panelists at a recent conference in Ottawa were of the view that it would be unethical not to do such an evaluation (International Consensus Conference, 2011). Such randomised policy designs and variants have long been promoted by Malcolm Maclure and others (Maclure et al., 2007). Their widespread use would enable us to know which drug-prescribing policies are good or bad. At present, we never know.
Evaluating New Medicines
Family practice–based cluster randomised studies could provide the framework to study the effectiveness of newly licensed therapies. Mechanisms to limit the use of novel drugs exist worldwide because of financial constraints on drug budgets. Such policies make it very difficult for manufacturers of novel therapies to collect observational, postmarketing data on safety and effectiveness. However, if half of a large group of participating practices changed their prescribing to a novel medicine from the standard therapy, then half of the population would enjoy the latest therapy at no cost to the NHS, as the pharmaceutical industry would provide such study medication free (or reimburse the NHS for its cost). Such a system provides a low-cost framework to judge the effectiveness and safety of novel therapies expeditiously. Since clinical effectiveness is the principal driver of cost-effectiveness, such a system would provide the data to support the widespread introduction (or not) of new treatments.
Advantages of Cluster Randomising Practice Formularies
A major advantage of cluster randomisation is that the costs and bureaucracy of doing these studies are minimal. A feature of the design of these trials is that the analyses are done using anonymised data. This means that the trial sponsor has no way of determining which patients experience serious adverse events. However, family doctors can still report such events directly to the regulatory authorities in an anonymous fashion.
Discussions have been held with the UK Medicines and Healthcare products Regulatory Agency (MHRA) as to the requirement for such studies to obtain clinical trial authorisation. The ruling at present is that the particular diuretic comparison study described above is not within the scope of the clinical trials directive because, although practices are randomised, individual patients still have the ability to determine their own treatment (M. Ward & E. Godfrey, MHRA, personal communication). It is probable that similar cluster randomised designs comparing other drugs would be regarded in the same way.
Design variants of randomising practice policies might be appropriate if, for example, the purpose of the evaluation were to determine the effectiveness of a new therapy thought to be beneficial when added to existing therapy. Designed delay studies (sometimes known as the stepped wedge design) might be judged ethically appropriate in such instances, especially where the implementation of such a new prescribing policy is limited by resource constraints. Such designs introduce the new policy gradually, in random order. For example, if 100 practices were studied, a few would start with the new policy in the first month, a few in the second month, and so on, until all 100 practices had introduced the new policy. The beauty of such a system is that it produces data with excellent external validity and everyone gets the new policy over the course of the study.
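The rollout described above amounts to randomly allocating each practice to a crossover step. A minimal sketch of such an allocation follows; the practice names, step count, and seed are hypothetical, and real stepped wedge trials would of course add stratification and governance around this.

```python
import random

def stepped_wedge_schedule(practices, steps, seed=2012):
    """Randomly assign practices to the step at which they adopt the
    new prescribing policy; by the final step, all have adopted it."""
    rng = random.Random(seed)  # fixed seed so the allocation is auditable
    order = list(practices)
    rng.shuffle(order)
    per_step = -(-len(order) // steps)  # ceiling division
    return {step + 1: order[step * per_step:(step + 1) * per_step]
            for step in range(steps)}

# e.g. 100 hypothetical practices switched over 10 monthly steps
schedule = stepped_wedge_schedule(
    [f"practice_{i}" for i in range(100)], steps=10)
```

Each practice appears in exactly one step, so every practice contributes control time before its switch and intervention time after it.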
New European legislation on the post-licensing risk management of novel marketed medicines will place a greater onus on manufacturers to gather postmarketing data on their products (Waller, 2011). This will stimulate better methods of post-licensing data collection, and NICE and the SMC will drive the need for better comparative-effectiveness data.
Other Trial Designs
Good-quality prospective safety data can be collected directly from patients, as was shown in a recent prospective follow-up study of subjects vaccinated against the H1N1 virus (Mackenzie et al., 2011). Here, subjects being vaccinated responded to posters and volunteered to be followed up by e-mail, text message, or telephone. This system worked well and has stimulated other study designs that could be adapted. An example is the British Hypertension Society Research Network Treatment In the Morning versus Evening (TIME) study. This study advertises for potential participants who are willing to log on, consent, and be randomised to taking their antihypertensive medication in the morning or the evening. Patients are followed up by regular e-mails and record-linkage. One can envision a future scenario in which patients are recruited on the Internet, screened online, and, with their own physician’s assent, randomised, mailed medication, and followed up by e-mail, with their physician or record-linkage providing outcome data. Investigator training in Good Clinical Practice and trial start-up training can be provided by webinars, avoiding the cost of the usual face-to-face training. Table 1 summarises the pros and cons of each of the trial designs discussed above.
Most of the design concepts presented here have been implemented by us at least into the pilot phase. Not everyone will agree with the ethical approach, or the robustness or feasibility, of these designs, but experience will teach us how to adapt these concepts to improve the cost-effectiveness of obtaining high-quality data. We have found that the public are largely supportive of initiatives to improve the safety and effectiveness of medicines (Mackenzie et al., 2012). As a society we need to continue to think up better ways to acquire high-quality data that enhance health care and make its delivery more efficient.

REFERENCES
Academy of Medical Sciences. 2011. A new pathway for the regulation and governance of health research. http://www.acmedsci.ac.uk/p47prid88.html (accessed January 16, 2012).
Brown, M. J., J. K. Cruickshank, and T. M. Macdonald. 2012. Navigating the shoals in hypertension: Discovery and guidance. British Medical Journal 344:d8218, doi:10.1136/bmj.d8218.
Caldwell, P. H., S. Hamilton, A. Tan, and J. C. Craig. 2010. Strategies for increasing recruitment to randomised controlled trials: Systematic review. PLoS Med 7(11):e1000368.
Dickert, N., E. Emanuel, and C. Grady. 2002. Paying research subjects: An analysis of current policies. Annals of Internal Medicine 136:368-373.
Duley, L., K. Antman, J. Arena, A. Avezum, M. Blumenthal, J. Bosch, S. Chrolavicius, T. Li, S. Ounpuu, A. C. Perez, P. Sleight, R. Svard, R. Temple, Y. Tsouderous, C. Yunis, and S. Yusuf. 2008. Specific barriers to the conduct of randomized trials. Clinical Trials 5:40-48.
Evans, J. M. M., and T. M. MacDonald. 1999. Record-linkage for pharmacovigilance in Scotland. British Journal of Clinical Pharmacology 47:105-110.
Ford, I., H. Murray, C. J. Packard, J. Shepherd, P. W. Macfarlane, S. M. Cobbe (West of Scotland Coronary Prevention Study Group). 2007. Long-term follow-up of the West of Scotland Coronary Prevention Study. New England Journal of Medicine 357:1477-1486.
Halpern, S. D., J. H. Karlawish, D. Casarett, J. A. Berlin, and D. A. Asch. 2004. Empirical assessment of whether moderate payments are undue or unjust inducements for participation in clinical trials. Archives of Internal Medicine 164:801-803.
Hansson, L., T. Hedner, and B. Dahlof. 1992. Prospective randomised open blinded end-point (PROBE) study. A novel design for intervention trials. Blood Pressure 1:113-119.
International Consensus Conference to Generate Ethics Guidelines for Cluster Randomized Trials. 2011. Ottawa, Ontario, November 28-30.
Kendrick, S. Undated. The Scottish record linkage system. http://www.isdscotland.org/Products-and-Services/Medical-Record-Linkage/Files-for-upload/The_Scottish_Record_Linkage_System.doc (accessed January 9, 2012).
Kendrick, S. 1997. Chapter 10: The development of record linkage in Scotland. Record Linkage Techniques. www.fcsm.gov/working-papers/skendrick.pdf (accessed January 9, 2012).
MacDonald, T., C. Hawkey, and I. Ford. 2010. Academic sponsorship. Time to treat as independent. British Medical Journal 341, doi:10.1136/bmj.c6837.
Mackenzie, I. S., L. Wei, D. Rutherford, E. A. Findlay, W. Saywood, M. K. Campbell, and T. M. MacDonald. 2010. Promoting public understanding of randomised clinical trials using the media: The “Get Randomised” campaign. British Journal of Clinical Pharmacology 69:128-135.
Mackenzie, I. S., T. M. Macdonald, S. Shakir, M. Dryburgh, B. J. Mantay, P. McDonnell, and D. Layton. 2011. Influenza H1N1 (swine flu) vaccination: A safety surveillance feasibility study using self-reporting of serious adverse events and pregnancy outcomes. British Journal of Clinical Pharmacology (Nov 15), doi:10.1111/j.1365-2125.2011.04142.x [Epub ahead of print].
Mackenzie, I. S., L. Wei, K. R. Paterson, and T. M. MacDonald. 2012. Cluster randomised trials of prescription medicines or prescribing policy—public and general practitioner opinions in Scotland. British Journal of Clinical Pharmacology (Jan 30), doi:10.1111/j.1365-2125.2012.04195.x [Epub ahead of print].
Maclure, M. 2009. Explaining pragmatic trials to pragmatic policy-makers. Canadian Medical Association Journal 180:1001-1003.
Maclure, M., B. Carleton, and S. Schneeweiss. 2007. Designed delays versus rigorous pragmatic trials: Lower carat gold standards can produce relevant drug evaluations. Medical Care 45(10 Suppl 2):S44-S49.
Martinson, B. C., D. Lazovich, H. A. Lando, C. L. Perry, P. G. McGovern, and R. G. Boyle. 2000. Effectiveness of monetary incentives for recruiting adolescents to an intervention trial to reduce smoking. Preventive Medicine 31:706-713.
McMahon, A. D., D. I. Conway, T. M. Macdonald, and G. T. McInnes. 2009. The unintended consequences of clinical trials regulations. PLoS Med 6(11):e1000131.
Morris, A. D., D. I. R. Boyle, R. McAlpine, A. Emslie-Smith, R. T. Jung, R. W. Newton, and T. M. MacDonald. 1997. The Diabetes Audit and Research in Tayside Scotland (DARTS) study: Electronic record linkage to create a diabetes register. British Medical Journal 315:524-528.
Reynolds, R. F., J. A. Lem, N. M. Gatto, and S. M. Eng. 2011. Is the large simple trial design used for comparative, post-approval safety research? A review of a clinical trials registry and the published literature. Drug Safety 34:799-820.
Steinbrook, R., and J. P. Kassirer. 2010. Data availability for industry sponsored trials: What should medical journals require? British Medical Journal 341, doi:10.1136/bmj.c5391.
Taljaard, M., C. Weijer, J. M. Grimshaw, J. B. Brown, A. Binik, R. Boruch, J. C. Brehaut, S. H. Chaudhry, M. P. Eccles, A. McRae, R. Saginur, M. Zwarenstein, and A. Donner. 2009. Ethical and policy issues in cluster randomized trials: Rationale and design of a mixed methods research study. Trials 10:61, doi:10.1186/1745-6215-10-61.
Treweek, S., M. Pitkethly, J. Cook, M. Kjeldstrøm, T. Taskila, M. Johansen, F. Sullivan, S. Wilson, C. Jackson, R. Jones, and E. Mitchell. 2010. Strategies to improve recruitment to randomised controlled trials. Cochrane Database Syst Rev (4):MR000013.
Waller, P. 2011. Getting to grips with the new European Union pharmacovigilance legislation. Pharmacoepidemiology and Drug Safety 20:544-549, doi:10.1002/pds.2119.
Ware, J. H., and M. B. Hamel. 2011. Pragmatic trials—guides to better patient care? New England Journal of Medicine 364:1685-1687.
West of Scotland Coronary Prevention Study Group. 1995. Computerised record linkage: Compared with traditional patient follow-up methods in clinical trials and illustrated in a prospective epidemiological study. Journal of Clinical Epidemiology 48:1441-1452.
BOX 1
Patient Recruitment/Public Engagement Initiatives That Have Not Worked in Our Experience
• Television campaigns aimed at engaging patients (Mackenzie et al., 2010)
— Very costly, raised awareness but did not change recruitment
• Advertisements in local and national newspapers and local radio
— Costly, ineffective, and attracted small numbers of mostly unsuitable subjects
• Publicity for clinical trials in local newspaper articles
— Attracted mostly unsuitable subjects
• Study-specific websites (i.e., http://www.scottrial.co.uk/)
— Did not attract study subjects
• Multiple revisions of patient letters from family doctors
— Did not affect patient response rate
— Follow-up letters to non-responders were not effective
• Postage options
— Normal, first-class, registered/courier delivery made no difference in patient response rate
• Publicised open public meetings to discuss clinical research, with suitable patients also invited on an individual basis by physicians (including general talks of interest, e.g., a physiotherapist discussing exercises for arthritis)
— Public meetings did not attract patients
TABLE 1 Pros and Cons of the Trial Designs Discussed

Streamlined randomised trials
Pros:
• Population-based, so potentially good external validity
• Large populations searched
• Only suitable subjects invited
• Reflects normal care
• Reduced cost
Cons:
• Self-selected “healthier” subjects sign up, so external validity is not perfect
• ~50% of practices do not participate
• Few subjects accept
• Become “observational” rapidly
• Similar bureaucracy

Cluster randomised trials
Pros:
• Reduced bureaucracy
• Large populations studied efficiently
• Most patients participate
• No individual patient consent
• Inexpensive, efficient, and quick
• Treatments prescribed
• Record-linkage outcomes
• Do not (usually) need a clinical trials authorisation
• Only anonymous data retrieved
Cons:
• Reduced statistical power from cluster design
• Opt-out patients may bias study
• Requires practices to consent
• Requires efficient electronic tracking of anonymous subjects
• Only suitable for products with a marketing authorisation; may be criticised as “seeding studies”
• End-point validation required
• Not suitable for all health outcome studies

Prospective observational follow-up studies
Pros:
• Consented subjects
• Patient-reported outcomes
• Able to contact patients
• Large cohorts and frequent e-mail contact
• Observes normal care
• Reduced data privacy issues
• Ease of capture of further data
Cons:
• Patients self-selected and more likely to take part in other research
• Outcomes need to be validated (via family doctor) and coded
• IT and clinical support resource issues
• May not be representative of the population
• Not randomised

Internet-only randomised trials of drug therapy
Pros:
• Theoretically feasible
• Patient-driven recruitment
• Recruited by targeted advertising (e.g., doctor surgeries)
• Internet only
• Less field-based manpower required
• Single-centre studies
Cons:
• Must comply with clinical trials legislation
• Data-quality assurance required
• Require family doctor assent
• Need to engage public in research agenda to maximise utility
• Requires Internet access and IT-literate subjects
• More IT and office-based clinical staff
• Bureaucracy may still be significant