How is it that guidelines for treatment often seem unrelated to the patient sitting in front of the doctor? Guidelines are mostly based on evidence gathered from randomised controlled trials. These trials are very good at assessing efficacy - that is, whether a treatment can work. Nevertheless, trials are not without substantial biases. Many people may be screened before a few are chosen for inclusion in a study, yet the results of the study will be applied to the very people who were excluded. The population studied in trials tends to be young, male, white, suffering from a single condition and using a single treatment. Most patients, at least in general practice, do not fit this description. They often have multiple illnesses, take multiple medications and are either too young or too old to have been included in clinical trials. Perhaps we should accept a proposal to define efficacy, in relation to medications, as ‘the extent to which a drug has the ability to bring about its intended effect under ideal circumstances, such as in a randomised clinical trial’.1

Efficacy is not the same as effectiveness.2 A treatment is effective if it works in real life, in non-ideal circumstances. In real life, medications will be used in doses and frequencies never studied, and in patient groups never assessed in the trials. Drugs will be used in combination with other medications that have not been tested for interactions, and by people other than the patient (the ‘over the garden fence’ syndrome). Effectiveness cannot be measured in controlled trials, because the act of inclusion in a study is itself a distortion of usual practice.

Effectiveness can be defined as ‘the extent to which a drug achieves its intended effect in the usual clinical setting’.1 It can be evaluated through observational studies of real practice. This allows practice to be assessed in qualitative as well as quantitative terms.3

Australia is well suited to conducting observational studies because we have a high standard of relatively unrestricted practice and good national databases, such as those held by the Health Insurance Commission. These databases can be used to validate researchers’ separate database studies of effectiveness. In America there are very large patient databases held by the Health Maintenance Organisations. Their size is impressive, but size is not everything. The data may have been collected primarily for billing, and they may be incomplete. Clinical practice is often governed by protocols, and medications are limited to those supplied by the current preferred providers. The reimbursement mechanism for doctors may mean that they code conditions at the highest severity level. Patients belonging to one of these organisations may not represent the American population as a whole. In Britain, the General Practice Research Database, compiled from practice electronic records, is very useful, especially for studies in pharmacoepidemiology. The British enjoy relatively unrestricted clinical practice, but they do not have readily usable national datasets against which to check the validity of their database studies.

It is an irony that drugs are licensed for use almost exclusively on the results of controlled trials, yet they are withdrawn from use because of observational data that would not be acceptable to licensing authorities. Biases are present in observational studies, just as they are in trials, but they can be defined and often controlled for, giving these studies a much greater value than that currently accorded to them.

Efficiency depends on whether a drug is worth its cost to individuals or society. The most efficacious treatment, based on the best evidence, may not be the most cost-effective option, and it may not be acceptable to patients. In every country, rationing of health care is a reality. No country, however wealthy, can afford to deliver all the health care possible to the whole of its population at all times. Rationing may be implicit or explicit, but it will happen. Good effectiveness and efficiency studies will make this rationing more informed.

Good practical guidelines, such as the Therapeutic Guidelines series, are clearly very important and extremely useful. They could be made even more relevant to the patient in front of the doctor by being less dependent on efficacy studies. We should make more use of effectiveness and efficiency studies and abandon the censorship of the evidence drawn from them.

E-mail: [email protected]

 

John Marley

Professor, Department of General Practice, University of Adelaide, Adelaide