Medical guidelines should insist on proof that time-honored medical practices and procedures that cost money and may harm or kill patients are actually effective. This Forum is about how to force organized veterinary medicine to issue Evidence Based Guidelines.


Postby malernee » Fri Nov 05, 2004 8:06 pm

BMJ 2000;320:1283 ( 6 May )

Personal views

The sins of expertness and a proposal for redemption

Two decades ago I was an expert on the subject of compliance with therapeutic regimens. I enjoyed the topic enormously, lectured internationally on it, had my opinion sought by other researchers and research institutes, and my colleagues and I ran international compliance symposiums and wrote two books, chapters for several others, and dozens of papers about it. Whether at a meeting or in print, I was always given the last word on the matter.

There are still far more experts around than is healthy

It then dawned on me that experts like me commit two sins that retard the advance of science and harm the young. Firstly, adding our prestige to our opinions gives the latter far greater persuasive power than they deserve on scientific grounds alone. Whether through deference, fear, or respect, others tend not to challenge them, and progress towards the truth is impaired in the presence of an expert. The second sin of expertness is committed on grant applications and manuscripts that challenge the current expert consensus. Reviewers face the unavoidable temptation to accept or reject new evidence and ideas, not on the basis of their scientific merit, but on the extent to which they agree or disagree with the public positions taken by experts on these matters. Sometimes this rejection of "unpopular" ideas is overt (and sometimes it is accompanied by comments that devalue the investigators as well as their ideas, but this latter sin is by no means unique to experts). At other times, the expert bias against new ideas is unconscious. The result is the same: new ideas and new investigators are thwarted by experts, and progress toward the truth is slowed.

Chastened by these realisations, in 1983 I wrote a paper calling for the compulsory retirement of experts and never again lectured, wrote, or refereed anything to do with compliance. I received lots of fan mail about this paper from young investigators, but almost none from experts. I repeated my training in inpatient internal medicine, spent much more time in clinical practice, and applied my methodological skills to a new set of challenges in appraising and applying evidence at the bedside.

As before, the experience was challenging and exhilarating. Working with gifted colleagues, first at McMaster and later in Oxford and throughout Europe, I became an expert in an old field with a new name: evidence based medicine. Because interest in these ideas was so great, especially among young clinicians around the world, my writing and editing was published in several languages, and when I was not running a clinical service I was out of town demonstrating evidence based medicine at the bedside and lecturing about it (over 100 times in 1998).

Although acceptance of my views was not universal, once again my conclusions came to be given too much credence and my opinions too much weight. And newcomers to the field who regarded me with affection faced an additional deterrent to challenging my expertness: they feared hurting my feelings as well as earning my disapproval. Two clinical signs confirmed that I was once again an expert. The first was the reception of an honorary degree and the second bears my name: "Sackettisation," defined as "the artificial linkage of a publication to the evidence based medicine movement in order to improve sales."

As before, I decided to get out of the way of the young people now entering this field, and will never again lecture, write, or referee anything to do with evidence based clinical practice. My energies are now devoted to thinking, teaching, and writing about randomised trials, and my new career is as challenging and exhilarating as its predecessors.

Is redemption possible for the sins of expertness? The only one I know that works requires the systematic retirement of experts. To be sure, many of them are sucked into chairs, deanships, vice presidencies, and other black holes in which they are unlikely to influence the progress of science or anything else for that matter. Surely a lot more people could retire from their fields and turn their intelligence, imagination, and methodological acumen to new problem areas where, having shed most of their prestige and with no prior personal pronouncements to defend, they could enjoy the liberty to argue new evidence and ideas on the latter's merits.

But there are still far more experts around than is healthy for the advancement of science. Because their voluntary retirement does not seem to be any more frequent in 2000 than it was in 1980, I repeat my proposal that the retirement of experts be made compulsory at the point of their academic promotion and tenure.


If you would like to submit a personal view please send no more than 850 words to the Editor, BMJ, BMA House, Tavistock Square, London WC1H 9JR or e-mail

David L Sackett, director.

BMJ 2000;320 ( 6 May )

Choice GP
Experts: off with their heads

Dave Sackett, the father of evidence based medicine, announces on p 1283 that he will "never again lecture, write, or referee anything to do with evidence based clinical practice." And so an era ends.

Sackett is not doing this because he has ceased to believe in evidence based clinical practice but because he is worried about the power of experts. In characteristically provocative style he proposes that "the retirement of experts be made compulsory at the point of their academic promotion and tenure." There are two problems with experts. Firstly, their opinions are given more weight than they deserve on scientific grounds and so impede progress. Secondly, experts are likely to review grant applications and manuscripts in terms of how much they agree with the expert view. They are thus biased against new ideas.

This is Sackett's second retirement from being an expert. In 1983 he renounced his expertness on compliance with treatment and received a flood of enthusiastic mail from young investigators, but almost none from experts. Perhaps this time more experts will follow Sackett, not into retirement but into a new branch of study. This is, of course, a third argument for stepping down from being an expert: people who had the ability to become experts in one subject can now bring the power of their thinking to a new subject. Innovation often comes from combining ideas from different disciplines. Evidence based medicine came from applying epidemiological ideas to clinical practice.

The BMJ challenges its many expert readers either to renounce their expertness or to produce compelling arguments, preferably evidence based, in favour of experts.

Excluding the experts? CMAJ editorial policy

Postby guest » Tue Oct 11, 2005 10:36 am

CMAJ • October 11, 2005; 173 (8). doi:10.1503/cmaj.1050003.
© 2005 CMA Media Inc. or its licensors

CMAJ editorial policy


Excluding the experts?
Steve Arshinoff
York Finch Eye Associates, Toronto, Ont.

Although it is a laudable objective to publish only articles by authors who have no financial relationship with corporations or the products and issues discussed in the articles, as outlined in a recent CMAJ editorial,1 it may not be beneficial for readers to be deprived of the information thus excluded. Furthermore, the new CMAJ policy on conflict of interest guarantees that, no matter how carefully and without bias a drug company studies its product, the report of such a study will never appear in the journal. However, journals like CMAJ are only too willing to criticize drug companies for not publishing drug studies, accusing them of trying to hide information.

Because of my unusual academic background and interests, I serve (or have served) as a paid consultant for almost every company that manufactures or sells ophthalmic viscosurgical devices, as well as the Canadian government and the US Food and Drug Administration. I sat on the International Organization for Standardization (ISO) committee that set the world standard for ophthalmic viscosurgical devices.2 I have published over 200 peer-reviewed articles, most dealing with these devices, and I am on the editorial boards of 3 ophthalmic journals. In this capacity, I review a significant proportion of the major articles about ophthalmic viscosurgical devices before they appear in the medical literature.

Like other medical editors and reviewers, I am extremely careful to avoid any possible bias in my own articles and in my reviews of articles by other researchers. I consult for all sides on most issues; I do not care who wins an argument from the financial point of view, but I do care passionately that the academic issues are resolved honestly and correctly. Undoubtedly there are many other "experts" like me, who will henceforth be excluded from contributing to your journal.

Your opinion of the ability of your readers to distinguish good articles from bad (as suggested by this policy) seems rather insulting. I am unaware of any example where censorship benefited the reader, and the new CMAJ policy appears to be nothing less than misguided blanket censorship.


1. Conflicts of interests and investments [editorial]. CMAJ 2004;171(11):1313.
2. Ophthalmic implants — ophthalmic viscosurgical devices. ISO standard 15798:2001. Geneva: International Organization for Standardization; 2001.

CMAJ • October 11, 2005; 173 (8). doi:10.1503/cmaj.1050192.
© 2005 CMA Media Inc. or its licensors
CMAJ editorial policy


Excluding the experts?
Stephen Choi
Deputy Editor, CMAJ

Steve Arshinoff is concerned that CMAJ's new conflict of interest policy1 will censor legitimate science from the journal. This is certainly not the intention of the policy, which does not apply to original research papers but is restricted to narrative review articles and commentaries. The editors are well aware that companies producing drugs and medical devices frequently conduct research and fund clinical trials; the resultant papers will continue to be considered and published in the journal on the basis of their scientific merit.

Commentaries and narrative reviews, on the other hand, do not follow protocols and are inherently prone to bias. Arshinoff suggests that authors who receive a substantial income from drug companies can maintain their objectivity. His own case in this regard notwithstanding, there is ample evidence that many physicians who receive income or gifts from drug companies are indeed influenced and are more likely to hold favourable views of the products of those companies than might otherwise be the case.2,3

Readers also understand that financial conflicts of interest can challenge authors' objectivity. Given that the information published in the journal is used by our readers to practise medicine, that patient care is at stake and that public trust in physicians understandably erodes when drug companies influence the care that physicians provide, the editors feel a responsibility to safeguard the highest possible level of objectivity in those pages of the journal most directly devoted to the practice of medicine.


1. Conflicts of interests and investments [editorial]. CMAJ 2004;171(11):1313.
2. Wazana A. Physicians and the pharmaceutical industry: Is a gift ever just a gift? JAMA 2000;283:373-80.
3. Stelfox HT, Chua G, O'Rourke K, Detsky AS. Conflict of interest in the debate over calcium-channel antagonists. N Engl J Med 1998;338(2):101-6.

EXPERT OPINION not a weak form of evidence

Postby guest » Tue Dec 27, 2005 12:31 pm

Expert opinion usually refers to the views of professionals
who have expertise in a particular form of practice or field of
inquiry, such as clinical practice or research methodology.
Expert opinion may refer to one person’s views or to the
consensus view of a group of experts. When the concept of
evidence based practice was first introduced, expert opinion
was identified as the least reliable form of evidence on the
effectiveness of interventions, and positioned at the lowest
level in ‘‘levels of evidence’’ hierarchies.9 Other developments
have determined that ranking expert opinion with
levels of evidence is not useful or appropriate because
expert opinion is qualitatively different to the forms of
evidence that are derived from research.10 Opinion can be
identified as a means by which research is judged and
interpreted rather than as a weaker form of evidence.
Lay knowledge refers to the understanding that members of
the lay public bring to an issue or problem. Lay knowledge
encompasses ‘‘the meanings that health, illness, disability
and risk have for people.’’11 Formal identification and
examination of lay knowledge is mostly conducted through
qualitative forms of inquiry.12 Adequate attention to lay
knowledge has been proposed as a criterion for critically
appraising qualitative research.13 Concerns that some
health professionals may not adequately value lay knowledge
have been expressed.14 Lay knowledge can be difficult
to access and synthesise, and focus on quantitative
forms of evidence can lead decision makers to undervalue
the lay knowledge that is derived from narratives and
stories.15 16
A fundamental principle of evidence based public health is
the close linkage between sound argument and evidence. The
following terms are relevant to this principle.
Argument refers to a sequence of statements in which the
premise purports to give reason to accept the conclusion.17
Hence the premise is the proposition from which the
conclusion is drawn.18 In scientific or legal debate ‘‘investigating
hunches in the light of evidence or defending arguments as
rational are two fundamental concerns of critical analysis’’.19
Reasoning refers to the process of drawing inferences or
conclusions from premises, facts, or other evidence. It is
valuable to distinguish between three types of reasoning.
- Induction refers to reasoning that proceeds from the
particular to the general. Thus induction is applied to
infer general conclusions or general theory from empirical
data, such as particular observations or cases.
- Deduction refers to reasoning that proceeds from the
general to the particular. Thus deduction relies on general
theory to infer particular conclusions.
- Abduction refers to reasoning that makes an inference to
the best available explanation; that is, selecting from a
number of possibilities the hypothesis that provides the
best explanation of the available evidence.6
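Abduction, the least familiar of the three, can be sketched as a toy program: given observed evidence and a set of candidate hypotheses, select the hypothesis under which the evidence is most likely. The hypotheses and likelihood values below are invented purely for illustration.

```python
# Toy sketch of abductive reasoning: inference to the best available
# explanation, here implemented as "pick the hypothesis that assigns the
# highest likelihood to the observed evidence".

def abduce(evidence, hypotheses):
    """Return the hypothesis that best explains the evidence."""
    return max(hypotheses, key=lambda h: h["likelihood"].get(evidence, 0.0))

# Invented candidate explanations for an observed symptom pattern.
hypotheses = [
    {"name": "seasonal flu", "likelihood": {"fever and cough": 0.6}},
    {"name": "common cold",  "likelihood": {"fever and cough": 0.3}},
    {"name": "hay fever",    "likelihood": {"fever and cough": 0.05}},
]

best = abduce("fever and cough", hypotheses)
print(best["name"])  # seasonal flu
```

Note that, unlike deduction, the conclusion is not guaranteed: it is only the best of the available explanations, and a new hypothesis or new evidence can overturn it.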
Logic is the science of ‘‘correct’’ reasoning. If the logic of an
argument is concerned with validity, then the key question is
whether, if the premises are true, we have a valid reason to
accept the conclusion.17
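For argument forms built from simple propositions, this notion of validity can be checked mechanically by enumerating truth assignments, as in the following sketch (the two example argument forms are standard ones, not taken from the glossary):

```python
from itertools import product

def valid(premises, conclusion):
    """An argument form is valid if the conclusion is true under every
    truth assignment that makes all the premises true."""
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False
    return True

implies = lambda a, b: (not a) or b

# Modus ponens: from "p implies q" and "p", infer "q" -- valid.
print(valid([lambda p, q: implies(p, q), lambda p, q: p],
            lambda p, q: q))   # True

# Affirming the consequent: from "p implies q" and "q", infer "p" -- invalid.
print(valid([lambda p, q: implies(p, q), lambda p, q: q],
            lambda p, q: p))   # False
```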
Validity is derived from the Latin word validus, meaning
strong. It refers to the degree to which something is well
founded, just, or sound. Validity is often used in conjunction
with qualifying terms that attribute specific meanings, as follows.
- Measurement validity refers to the degree to which a
measurement actually measures what it purports to.
Measurement validity is classified into three types.
Construct validity is the extent to which the measurement
corresponds to theoretical concepts or constructs; content
validity is the extent to which the measurement incorporates
the scope or domain of the phenomenon under
study; and criterion validity is the extent to which the
phenomenon correlates with an external criterion of that
phenomenon. Criterion validity can be concurrent (the
measurement and criterion refer to the same point in
time) or predictive (the ability of the measurement to
predict the criterion).5
- Study validity refers to the degree to which the inferences
drawn from a study are warranted when account is taken
of the study methods; the representativeness of the study
sample; and the nature of the population from which it is drawn.
There are two types of study validity. Internal validity is the
degree to which the results of a study are correct for the
sample of people being studied. External validity (generalisability)
is the degree to which the study results hold true for a
population beyond the subjects in the study or in other settings.
Reliability is the degree to which observations or measures
can be replicated, when repeated under the same conditions.
Reliability is necessary, but not sufficient, to establish the
validity of a proposition. Poor reliability can be due to
variability in the observer or measurement tool, or instability
in the actual phenomenon under study.5
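One common (simplified) way to quantify the reliability of a continuous measurement is to repeat it under the same conditions and correlate the two rounds. The sketch below computes a Pearson correlation from scratch; the blood-pressure readings are invented for illustration.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between paired measurements; used here as a
    simple test-retest reliability index for continuous measures."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical blood-pressure readings taken twice, same conditions.
first  = [120, 135, 118, 142, 128]
second = [122, 133, 119, 140, 130]
print(round(pearson(first, second), 3))  # close to 1: highly replicable
```

A high value indicates replicability only; it says nothing about whether blood pressure is the right thing to measure, which is the separate question of validity.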
Proof is the evidence that produces belief in the ‘‘truth’’ of a
proposition or argument.18 In a dispute, the burden of proof
lies with the party responsible for providing evidence of their
proposition, or for shifting a conclusion from the default
position. For example, under the legal system of many
countries an accused person is presumed innocent (default
position) until proven guilty. The burden of proof lies with
the prosecution. Standard legal questions such as: ‘‘Who has
the burden of proof?’’ ‘‘What must be proven?’’ and ‘‘By what
standard must it be proven?’’ apply to public health.21 There
are often significant differences however, in how these
questions are answered.
The burden of proof in public health determines how
evidence based practice is interpreted and applied. For
example, should strategies be ‘‘considered useful until proven
ineffective or assumed to be useless until proven effective? We
must decide where the burden of proof lies. If the burden of proof
rests on demonstrating ineffectiveness, the default is to do everything;
if it rests on demonstrating efficacy, the default is to do nothing.’’22

[Rychetnik, Hawe, Waters, et al. Evidence based public health, pp 539-41]
The magnitude and severity of public health problems are
often expressed as measures of frequency or proportions and rates.
Prevalence is the proportion of people in a population who
have some attribute or condition at a given point in time or
during a specified time period.
Incidence (incidence rate) is the number of new events (for
example, new cases of a disease) in a defined population,
occurring within a specified period of time.
Incidence proportion (cumulative incidence) is the proportion
of people who develop a condition within a fixed time
period. An incidence proportion is synonymous with risk.
For example, the proportion of people who develop a
condition during their lifespan represents the lifetime risk
of disease.23
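These three frequency measures can be expressed as small functions. The cohort numbers below are invented for illustration:

```python
def prevalence(cases, population):
    """Proportion of a population with the condition at a point in time."""
    return cases / population

def incidence_rate(new_cases, person_years):
    """New events per unit of person-time at risk."""
    return new_cases / person_years

def incidence_proportion(new_cases, at_risk):
    """Cumulative incidence (risk): proportion of initially disease-free
    people who develop the condition within a fixed period."""
    return new_cases / at_risk

# Hypothetical cohort: 10 000 people, 400 existing cases at baseline,
# 120 new cases over 2 years among the 9 600 initially disease-free.
print(prevalence(400, 10_000))           # 0.04
print(incidence_proportion(120, 9_600))  # 0.0125
print(incidence_rate(120, 9_600 * 2))    # 0.00625 per person-year
```

The distinction matters in practice: prevalence mixes new occurrence with duration of disease, whereas the two incidence measures capture new occurrence only.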
Causality is ‘‘the relating of causes to the effects they
produce’’.5 Broadly, causality is about production in the sense
that a cause is something that produces or creates an effect.24
Causality is fundamental to two aspects of evidence based
public health: (1) demonstrating and understanding the
causes of public health problems; and (2) establishing the
probability and nature of causal relations between an
intervention and its effects. Traditional public health research
has focused on the former (the magnitude and aetiology of
disease), but the literature on evidence based practice has
emphasised methods and processes for generating, appraising,
and applying intervention research. (See also
‘‘Evaluation’’ and ‘‘Critical Appraisal Criteria’’).
Various definitions of causality exist, and differing
perspectives can result in different conclusions on whether
a causal relation has been established. Such differences also
lead to different expectations of what constitutes ‘‘good’’
evidence for public health decisions. Some alternative
formulations of causality are described below.
Causes are sometimes described as necessary or sufficient
causes. A cause is necessary when it must always precede the
effect in order for that effect to occur; without the cause, the
effect cannot occur. Alternatively, a cause is sufficient when it
inevitably produces an effect; if the cause is present the effect
must occur. In a relation between a cause and an effect, the
cause may be necessary, sufficient, neither, or both.5 Such
deterministic and clear cut causal relations are not commonly
observed in public health research.
Probabilistic or statistical causality is an alternative to
determinism. A probabilistic cause is one that increases or
decreases the chance (likelihood) that the effect will occur. A
probabilistic statement about a cause and effect provides
quantitative information about an estimate of the strength
and nature of that relation. It also provides quantitative
information on potential effect modification, and about any
dose-response relation that may exist between the cause and
its effect.23 The application of probabilistic causality is the
cornerstone of clinical epidemiology, evidence based medicine,
and evidence based public health.25–27
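A minimal sketch of a probabilistic causal statement is the risk ratio and risk difference from a two-group comparison; the exposed/unexposed counts below are invented:

```python
def risk(cases, n):
    """Probability of the effect in a group."""
    return cases / n

def risk_ratio(exposed_cases, exposed_n, unexposed_cases, unexposed_n):
    """How much the exposure multiplies the chance of the effect:
    a quantitative estimate of the strength of a probabilistic cause."""
    return risk(exposed_cases, exposed_n) / risk(unexposed_cases, unexposed_n)

# Hypothetical data: 30/1000 exposed vs 10/1000 unexposed develop disease.
rr = risk_ratio(30, 1000, 10, 1000)
rd = risk(30, 1000) - risk(10, 1000)
print(round(rr, 6))  # 3.0  (exposure triples the chance of the effect)
print(round(rd, 6))  # 0.02 (2 extra cases per 100 exposed)
```

Note the probabilistic framing: exposure raises the chance of disease without being necessary or sufficient for it, which is the typical situation in public health research.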
Counterfactual causality describes how the observed effect
differs under different sets of conditions. A counterfactual
relation can be described in deterministic or probabilistic
terms, to show how the outcome (or its probability) differs
when the cause is present or absent (while, ideally, all other
variables are held constant).23 Counterfactual causality
underlies the use of control groups in research.
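The control-group logic can be sketched with a toy potential-outcomes simulation (all probabilities and sample sizes invented). Each simulated person has an outcome under treatment and an outcome without it; since only one can be observed, the control group stands in for the missing counterfactual:

```python
import random

random.seed(0)

# Each hypothetical person carries two potential outcomes: one if treated,
# one if untreated. Only one is ever observed in a real study.
people = [{"if_treated": random.random() < 0.3,
           "if_untreated": random.random() < 0.5}
          for _ in range(50_000)]

# Alternating (effectively random) allocation to treatment and control.
treated = people[0::2]
control = people[1::2]

observed_treated = sum(p["if_treated"] for p in treated) / len(treated)
observed_control = sum(p["if_untreated"] for p in control) / len(control)
estimated_effect = observed_treated - observed_control

# The "true" counterfactual effect uses both potential outcomes per person,
# which no real study can observe.
true_effect = (sum(p["if_treated"] for p in people)
               - sum(p["if_untreated"] for p in people)) / len(people)

print(round(estimated_effect, 3), round(true_effect, 3))
```

With random allocation the control group's outcomes approximate what would have happened to the treated group without the intervention, so the estimated effect tracks the true counterfactual effect.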
An intervention comprises an action or programme that aims
to bring about identifiable outcomes. A public health
intervention can be defined by the fact that it is applied to
many, most, or all members in a community, with the aim of
delivering a net benefit to the community or population as
well as benefits to individuals. Public health interventions
include policies of governments and non-government organisations;
laws and regulations; organisational development;
community development; education of individuals and
communities; engineering and technical developments;
service development and delivery; and communication,
including social marketing.28
Public health interventions sometimes harm some individuals,
and it is important that these harms are identified in
any evaluation of the interventions. This allows for informed
consideration of whether the harms to individuals are
so small (and/or so rare) that the benefits to others outweigh
those harms. For example, population immunisation programmes
benefit many people who are protected by the effect
of the vaccine, and the whole community benefits if the
‘‘herd immunity’’ becomes so great that the infectious
organism finds it difficult to survive. To obtain this benefit
however, many people are inconvenienced (for example, by
having sore arms for a few hours) and a very few may be
harmed by the side effects of the vaccine. In most countries,
however, the net benefit of selected immunisation programmes
is considered sufficient to warrant population level implementation.
Public health interventions can also be described according
to whether the ‘‘community’’ that is the focus of intervention
was: (a) the setting for intervention; (b) the target of change;
(c) a resource for intervention; or (d) the agent of change.29
McLeroy et al also distinguish between the level of intervention,
and target of intervention, in that an intervention may occur at
one level but produce change at other levels.29 These
distinctions between different types of intervention assist
with the specification of public health objectives and can
guide evaluation of intervention outcomes. They help target
research for public health evidence.
The Dictionary of Epidemiology defines evaluation as ‘‘a process
that attempts to determine as systematically and objectively
as possible the relevance, effectiveness, and impact of
activities in the light of their objectives.’’5 Evaluation
generates type 2 and type 3 evidence (see ‘‘research
evidence’’) and thus identifies what should or could be done
to address a health problem, and how it can be done. The
term evaluation is often used interchangeably with evaluative
research,30 and intervention evaluations are also referred to as
intervention studies. We have observed that the term
‘‘evaluation’’ is sometimes used to refer only to ‘‘in-house’’
quality control studies or internal audits, which, regrettably,
do not have the status (or funding support) of research. The
tendency to devalue evaluation may explain why there is
little type 3 evidence in the published literature, despite the
importance of such evidence to decision makers.
The term evaluation does not imply a particular type of a
study design. An evaluation could be a randomised controlled
trial, an interrupted time series design, or a case study.
Hierarchies of study design indicate the degree to which
these studies are susceptible to bias.31 (See also ‘‘levels of
evidence’’). Although up to 32 different types of evaluation
have been identified,32 we include below those most
commonly used in public health programme evaluation.
Process evaluation is an assessment of the process of
programme delivery. The components of a process evaluation
of an intervention may include assessment of the following:
recruitment of participants and maintenance of participation
(also known as programme reach); the context in which the
programme is conducted and evaluated; resources required
and used; implementation of the programme relative to the
programme plan; barriers and problems encountered; the
magnitude of exposure to materials and activities; initial use or
engagement in programme activities at start of programme;
continued use of activities over time; and attainment of quality
standards.33 34
Formative evaluation refers to the programme planners’ use
of data from process evaluation that has been conducted
early in the development of an intervention, so that
adjustments to the programme can be made if necessary.35
Impact evaluation examines the initial effect of a programme
on proximal targets of change, such as policies, behaviours,
or attitudes. Thus impact evaluation corresponds to assessment
of the initial objectives of the programme.33 34
Outcome evaluation refers to the consequent effect of a
programme on the health outcomes in populations, corresponding
to the programme goal or target.33 34 Outcome
evaluation has also been called summative evaluation, because
upon its completion a researcher or policy maker would be in
a position to make an overall statement about the worth of a
programme.35 Such a statement assumes prior successful
completion of process and impact evaluation.
Evaluability assessment is a systematic process to check
whether or not a programme is logically theorised, planned,
and resourced, and sufficiently well implemented, before the
conduct of an impact or outcome evaluation.36 The term
‘‘evaluability assessment’’ was first coined in the early 1980s
with the aim of preventing wasteful outcome evaluations;
that is, preventing the investment of funds to seek the effects
of programmes that were so poorly designed or implemented
that one would not expect effects to be present.37
Goal free evaluation is an assessment of all programme
effects, whether or not they are part of the intended
objectives or goals.38 The programme effects examined in
goal free evaluation may be those that initially occur after
intervention (corresponding to impact evaluation) and/or
subsequent effects (corresponding to outcome evaluation).
Utilisation focused evaluation starts with the evaluator asking
decision makers what type of information (evidence) they
would find most useful.39 The purpose is to increase the
transfer of evidence into practice. Part of this may include
scenario setting using hypothetical findings from a proposed
study to determine how (or if) decision makers will use the
data produced from the research.40 Utilisation focused
evaluation can also encompass process, impact, or outcome
evaluation; depending on the user’s needs. Note: goal free
evaluation can also be utilisation focused—that is, tied to the
interests of the intended users of that evaluation.
The logic of evidence based practice identifies a cyclic relation
between evaluation, evidence, practice, and further evaluation.
It is based on the premise that evaluations determine
whether anticipated intervention effects occur in practice,
and identify unanticipated effects. The reports of such
evaluations are a valuable source of evidence to maximise
the benefits, and reduce the harms, of public health policy
and practice. The evidence can also inform evaluation
planning, and thus improve the quality and relevance of
new research.
The various stages in this cycle tend to be completed by
different groups with differing imperatives and priorities. To
understand the challenges that may arise in evidence based
public health, it is valuable to distinguish the following:
Evidence reviews
To interpret and use evaluation research, the research must
itself be evaluated to determine the degree to which it
provides credible (valid and reliable) information, and
whether the information is useful (relevant and generalisable)
in a different context.41 Hence an evidence review refers
to the process of critically appraising evaluation research and
summarising the findings, with the purpose of answering a
specified review question. In the context of evidence based
practice, evidence reviews tend to be technical processes that
require a good understanding of research methods and that
are guided by standardised criteria and review protocols.42–44
(See also ‘‘Systematic Reviews’’ and ‘‘Critical Appraisal Criteria’’).
Evidence based recommendations
Formulating evidence based recommendations or guidelines
draws on reviews of evidence and interprets the findings to
make a statement on the implications of the evidence for
current policy and practice. This requires substantial input
from practitioners, policy makers, and consumers who can
integrate the findings in the evidence with the necessary
practical and social considerations.45 46
Evidence based guidelines specify the nature and strength
of the evidence on which the recommendations are based. In
many cases the recommendations are themselves graded;
with the grade of recommendation determined by the
strength of the evidence.9 47–49 Evidence based recommendations
may also be graded with respect to the balance of
benefits and harms.50
Consideration of the context in which the recommendations
are to be implemented (and the implications of that
implementation) inevitably raises questions of interpretation
that do not emerge when summaries of evidence are
considered in isolation. This can lead to disagreement about
recommendations; poor compliance with guidelines even
when they are evidence based; or conflicting guidelines on
the same topic from different organisations.51–54
Evidence based policy and practice (public health action)
The advocacy and lobbying that are required to influence
policies, change practice, and achieve public health action are
an important component of public health.55 The process of
achieving influence is often more difficult, and requires more
complex social and political negotiations, than appraising
evidence and formulating recommendations. In public health
advocacy, research provides only one type of evidence, and
evidence of any type is but one consideration that is taken
into account.56 Social, political, and commercial factors often
drive or determine the use of evidence in policy settings.57–59 A
key feature of evidence based policy and practice is that it is
informed by a consideration of the evidence, but the
decisions made will depend on prevailing values and circumstances.
Evidence based public health action is also often inhibited
by a mismatch between the magnitude and importance of a
public health problem, and the adequacy of evidence on
potential interventions to address the problem. For example,
despite the fact that health inequalities and childhood obesity
are major, high priority public health problems, evidence is
lacking to determine the most effective (or cost effective)
policy and practice initiatives to address them.60 61
Linkage and exchange strategies
An ongoing challenge in public health is to close the gap
between research and practice.62 Linkage and exchange
strategies refer to initiatives that seek to promote research
utilisation in decision contexts, and encourage research that
generates purposeful and useful evidence.63
Evidence based public health 541
on 27 December 2005 Downloaded from
Disentanglement strategies
If evidence based proposals are given primacy over others,
there are real incentives for those with interests in policy and
practice directions to influence the creation and use of
evidence.57 64 Clear demarcation between those who generate
or review evidence and those with political or commercial
interests is essential. Disentanglement strategies seek to
establish structures and systems that protect independent
research and reviews from the influence of vested interests.65 66
A systematic review is a method of identifying, appraising,
and synthesising research evidence. The aim is to evaluate
and interpret all available research that is relevant to a
particular review question. A systematic review differs from a
traditional literature review in that the latter describes and
appraises previous work, but does not specify methods by
which the reviewed studies were identified, selected, or
evaluated. In a systematic review, the scope (for example, the
review question and any sub-questions and/or sub-group
analyses) is defined in advance, and the methods to be used
at each step are specified. The steps include: a comprehensive
search to find all relevant studies; the use of criteria to
include or exclude studies; and the application of established
standards to appraise study quality. A systematic review also
makes explicit the methods of extracting and synthesising
study findings.31 42 43
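As a toy illustration of the predefined inclusion/exclusion step described above, consider the following short Python sketch; the study records, their fields, and the criteria themselves are all hypothetical, invented only to show that criteria fixed in advance can be applied identically to every candidate study:

```python
# Hypothetical candidate studies retrieved by a comprehensive search.
studies = [
    {"id": "S1", "design": "rct", "year": 1998},
    {"id": "S2", "design": "case series", "year": 2001},
    {"id": "S3", "design": "rct", "year": 2002},
]

def include(study):
    # Inclusion criteria specified in advance and applied identically
    # to every record, so another reviewer can reproduce the selection.
    return study["design"] == "rct" and study["year"] >= 1995

included = [s["id"] for s in studies if include(s)]
excluded = [s["id"] for s in studies if not include(s)]
```

Because the criteria are explicit, a second reviewer running the same screen over the same records arrives at the same included and excluded sets, which is the repeatability premise of systematic reviewing.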
A systematic review can be conducted on any type of
research; for example, descriptive, analytical (experimental
and observational), and qualitative studies.67 The methods of
synthesis or summary that are used in a systematic review
can be quantitative or narrative/qualitative (see ‘‘meta-analysis’’
and ‘‘narrative systematic review’’). Systematic
reviews are used to answer a wide range of questions, such as
questions on: burden of illness, aetiology and risk, prediction
and prognosis, diagnostic accuracy, intervention effectiveness
and cost effectiveness, and social phenomena.31 Systematic
reviews in public health are increasingly used to answer
questions about health sector initiatives, as well as other
social policies that affect health.68 69
The relevance and value of a systematic review are enhanced
if potential users of the review are involved in relevant stages
of the process. For example, users can help to ensure that the
review question is relevant to policy and practice decisions;
that the review considers all relevant measures and outcomes;
and that the review findings and recommendations
are presented in a format that is easy for the user to
follow.70 71
The premise of systematic reviews is that another reviewer
using the same methods to address the same review question
will identify the same results. Although such repeatability
has tended to be more achievable in quantitative reviews and
meta-analyses, there are ongoing developments to improve
and standardise methods of narrative synthesis.72
Meta-analysis is a specific method of statistical synthesis
that is used in some systematic reviews, where the results
from several studies are quantitatively combined and
summarised.31 The pooled estimate of effect from a meta-analysis
is more precise (that is, has narrower confidence
intervals) than the findings of each of the individual
contributing studies, because of the greater statistical power
of the pooled sample.
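The gain in precision from pooling can be sketched with a fixed-effect, inverse-variance pooled estimate, one common meta-analytic method (the glossary does not prescribe a particular estimator); the effect estimates and standard errors below are purely illustrative, not drawn from any real studies:

```python
import math

# Hypothetical effect estimates (e.g. log odds ratios) and standard
# errors from three studies; the numbers are illustrative only.
effects = [0.40, 0.25, 0.55]
ses = [0.20, 0.15, 0.30]

# Fixed-effect inverse-variance pooling: each study is weighted by
# the reciprocal of its variance (1/SE^2).
weights = [1 / se**2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval, normal approximation.
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# The pooled standard error is smaller than that of any single
# contributing study, so the pooled interval is narrower.
assert pooled_se < min(ses)
```

Because the pooled variance is the reciprocal of the summed weights, it is always smaller than the variance of any contributing study, which is why the pooled confidence interval is narrower.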
Narrative review is sometimes used to describe a non-systematic
review.73 The term narrative systematic review is used
for systematic reviews of heterogeneous studies, where it is
more appropriate to describe the range of available evidence
than to combine the findings into an overall result.74 A
narrative systematic review can be conducted on both
quantitative and qualitative research.
Cochrane reviews are systematic reviews carried out under
the auspices of the Cochrane Collaboration. Review protocols
are peer reviewed and published electronically before the
reviews are conducted. Cochrane reviews are also peer reviewed for
method and content before publication, and there is
commitment to update the reviews every two years.42
Publication bias is the bias that can result in a systematic
review because studies with statistically significant results
are more likely to be published than those that show no effect
(particularly for intervention studies). Publication bias can be
minimised if an attempt is made to include in a systematic
review all relevant published and unpublished studies. This
process can be facilitated by international registers of trials.
Heterogeneity is used generically to refer to any type of
significant variability between studies contributing to a
meta-analysis that renders the data inappropriate for pooling. This
may include heterogeneity in diagnostic procedure, intervention
strategy, outcome measures, population, study samples,
or study methods. The term heterogeneity can also refer to
differences in study findings. Statistical tests can be applied
to compare study findings to determine whether differences
between the findings are statistically significant.23 For
example, significant heterogeneity between estimates of
effect from intervention studies suggests that the studies
are not estimating a single common effect. In the presence of
significant heterogeneity, it is more appropriate to describe
the variations in study findings than to attempt to combine
the findings into one overall estimate of effect.31
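Two statistics often used to quantify heterogeneity of this kind are Cochran's Q and I². A minimal sketch follows, again using hypothetical study data (illustrative numbers only, not from any real review):

```python
# Hypothetical study effect estimates and standard errors.
effects = [0.40, 0.25, 0.55]
ses = [0.20, 0.15, 0.30]

# Inverse-variance weights and fixed-effect pooled estimate.
weights = [1 / se**2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations of each study's estimate
# from the pooled estimate; compared against a chi-squared
# distribution with k-1 degrees of freedom.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1

# I-squared: the proportion of total variation across studies that
# is attributable to heterogeneity rather than chance.
i_squared = max(0.0, (q - df) / q) if q > 0 else 0.0
```

In this toy example Q falls below its degrees of freedom, so I² is zero, consistent with no detectable heterogeneity; a large Q relative to df (and a high I²) would instead suggest the studies are not estimating a single common effect.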
Critical appraisal criteria are checklists or standards that are
used to evaluate research evidence. Critical appraisal criteria
can be applied to assess the value of a single study, or to
appraise several studies as part of the process of
systematic review. Critical appraisal criteria address different
variables, depending on the nature and purpose of the
research, and the expectations and priorities of the reviewers.
Methodological rigour refers to the robustness and credibility
of the methods that are used in a study, and whether the
study methods are appropriate to the study question.
An explicit and standardised approach to the critical
appraisal of study methods is an important feature of
evidence based public health. The aim is to determine
whether the research findings are valid or credible as a piece
of evidence. Critical appraisal checklists for assessing
methodological rigour now exist for almost all types of
research questions and study designs.13 31 75
Levels of evidence refer to a hierarchy of study designs that
have been grouped according to their susceptibility to bias.
The hierarchy indicates which studies should be given most
weight in an evaluation where the same question has been
examined using different types of study.9 31
Strength of evidence is often assessed on a combination of the
study design (level of evidence), study quality (how well it
was implemented), and statistical precision (p value and
confidence intervals).10
Magnitude refers to the size of the estimate of effect, and the
statistical significance and/or importance (clinical or social) of
a quantitative finding. Magnitude and statistical significance
are numerical calculations, but judgements about the
importance of a measured effect are relative to the topic
and the decision context.
Completeness considers whether the research evidence
provides all the information that is required. For example,
when evaluating evidence on public health interventions,
reviewers need descriptive information on the intervention
strategies that were adopted; the implementation of the
intervention and how well it was done; the setting and
circumstances in which it was implemented; whom the
intervention reached (or did not reach); and how the
intervention was received. Reviewers should also seek
information on the unanticipated intervention effects, effect
modification, and the potential harms of intervention.76
Relevance refers to whether the research is appropriate to
the identified review question and whether the study
findings are transferable (generalisable) to the population
or setting that the question concerns.
Criteria of causation refer to a set of criteria used to assess the
strength of a relation between a cause and an effect. The
criteria were first proposed by Bradford Hill to assess whether
the relation between an identified risk factor and a disease
was one of causation, or merely association.77 78 The refined
and widely adopted criteria are as follows5 79 80:
- Temporality means that the exposure always precedes the outcome.
- Strength of the association is defined by the magnitude and statistical significance of the measured risk.
- Dose-response relation means that an increasing level of exposure (amount and/or time of exposure) increases the risk of disease.
- Reversibility/Experiment means a reduction in exposure is associated with lower rates of disease, and/or the condition can be changed or prevented by an appropriate experimental regimen.
- Consistency means the results are replicated in studies in different settings or using different methods, and thus the measured association is consistent.
- Biological plausibility means that the relation makes sense according to the prevailing understanding of pathobiological processes.
- Specificity is established when a single putative cause produces a specific effect.
- Analogy/Coherence means that the cause and effect relation is already established for a similar exposure or disease, and/or the relation coheres with existing theories.
Assumptions are beliefs or tenets that are taken for granted.
They are fundamental to effective communication. In the
absence of assumptions, every interaction would need to
begin with a detailed exposition of all that is believed or
understood by all involved. Assumptions usually remain
implicit, and often invisible, until they are questioned or
challenged. However, the invisibility of assumptions can be
problematic when, for example, collaborators think differently
but use the same language or terminology.
Although the purpose of using research evidence is to
introduce clarity and greater objectivity to deliberations about
policy and practice, all evidence based claims are founded on
assumptions. Assumptions shape the questions that are
posed, influence the arguments that are made, and determine
the evidence that is presented to support arguments. This
explains why we may be ‘‘resistant to and not persuaded by
evidence that relies on divergent or antagonistic assumptions; while
the same evidence merely confirms what people wedded to those
assumptions already know’’.4
One way to uncover assumptions is to generate a range of
hypothetical findings from a piece of research, and discuss the
implications of these findings before real data are collected.40
This would help in revealing the assumptions or prejudices
that both decision makers and researchers bring to their
responses to, and interpretation of, particular potential
results (see also ‘‘Utilisation focused evaluation’’).
Problem framing refers to how different people often have
different ways of thinking about a problem, and their various
perspectives are enmeshed in the way they define, present,
and examine that problem.81 82 This can affect how concepts
like aetiology, causality, and evidence are discussed, described
in writing, and researched. Thus, how a problem is
framed determines the research questions that are asked, and
the type of evidence that becomes available as a consequence.
For example, researchers may privilege genetic explanations
of health patterning over environmental explanations; or
individual level analyses over group or contextual level analyses.
Frames are often tied to disciplinary perspectives, ideologies,
or particular historical or political contexts (see also
‘‘Paradigm’’). Like assumptions, frames are sometimes
implicit rather than explicit. Thus researchers may unconsciously
frame their study questions, and report findings in
ways that do not make their framing of an issue visible or
accountable.16 83
A paradigm encapsulates the commitments, beliefs, assumptions,
values, methods, outlooks, and philosophies of a
particular ‘‘world view’’. The term was popularised by
Thomas Kuhn (1922–1996) whose text on The Structure of
Scientific Revolutions examined the notion that throughout
history, scientific inquiry has been driven by different
paradigms; and thus what may be considered ‘‘normal
science’’ at one period is subject to change when enough
people adopt new ways of looking at the world.84
Some differences of opinion about evidence in public
health can be attributed to differences of paradigm. For
example, earlier in this glossary we distinguished between
reviewing evidence (a technical process that requires a sound
understanding of research methods); formulating evidence
based recommendations (which requires technical and
practical expertise); and achieving public health action
(social and political negotiations). Sometimes those who
generate or review evidence and those who interpret and use
evidence have differing views on fundamental issues such as
the nature of inquiry, what reliable knowledge is, and
substantiation. That is, they have different perspectives on
the following:
- Ontology—the study of reality or the real nature of what is (also called metaphysics);
- Epistemology—the study of knowledge and justification;
- Methodology—the theory of how inquiry should proceed, or the principles and procedures of a particular field of inquiry.
Commonly cited paradigms of inquiry include:
- Positivism—this is now outmoded because it was based on a ‘‘naive realism’’ that assumed all reality was completely independent of the observer, and thus with the right scientific methods it could be measured or apprehended as ‘‘objective truth’’.
- Post-positivism—this is the paradigm of many scientific and social-science methods of inquiry (also known as ‘‘critical realism’’). It incorporates a belief in some independent forms of reality, accepting that they can be only imperfectly (or probabilistically) apprehended, and that understanding of the reality is always subject to change. A majority of the premises and principles of evidence based public health fall within the post-positivist paradigm.
There are also many areas of public health research and
action that reflect paradigms that are alternatives to post-positivism,
for example, critical theory, constructivism, and
participatory paradigms.85 These paradigms give greater
emphasis to plural realities, and how these are shaped by
social, political, cultural, economic, ethnic, and gender
values. They also focus on locally constructed realities, and
value subjective interpretations of those realities. Participatory
research highlights the importance of inquiry based on
collaborative action. Aspects of these paradigms are also
reflected in some analyses and critiques of evidence based
practice.16 86–88
Authors’ affiliations
L Rychetnik, M Frommer, Sydney Health Projects Group, School of
Public Health, University of Sydney, Australia
P Hawe, Alberta Heritage Foundation for Medical Research,
Department of Community Health Sciences, University of Calgary,
Canada and School of Public Health, LaTrobe University, Victoria, Australia
E Waters, Centre for Community Child Health, University of Melbourne,
Murdoch Children’s Research Institute, Victoria, Australia, and
Cochrane Health Promotion and Public Health Field
A Barratt, Screening and Test Evaluation Program, School of Public
Health, University of Sydney, Australia
References
1 Trumble WR, Stevenson A, eds. Shorter Oxford English dictionary on
historical principles. Oxford: Oxford University Press, 2002.
2 Detels R, Breslow L. Current scope and concerns in public health. In: Detels R,
McEwen J, Beaglehole R, et al, eds. Oxford textbook of public health. Vol 1.
Oxford: Oxford University Press, 2002.
3 Sackett DL, Rosenberg WM, Gray JA, et al. Evidence-based medicine: what it
is and what it isn’t. BMJ 1996;312:71–2.
4 Marston G, Watts R. Tampering with the evidence: a critical appraisal of
evidence-based policy making. The drawing board: an Australian review of
public affairs 2003;3:143–63.
drawingboard/ (accessed 13 Aug 2003) (page 152).
5 Last JM, ed. A dictionary of epidemiology. 4th edn. New York: Oxford
University Press, 2001.
6 Schwandt TA. Qualitative inquiry; a dictionary of terms. Thousand Oaks, CA:
Sage, 1997.
7 Brownson RC, Gurney JG, Land GH. Evidence-based decision making in
public health. Journal of Public Health Management Practice 1999;5:86–97.
8 Brownson RC, Baker EA, Leet TL, et al. Evidence-based public health. Oxford:
Oxford University Press, 2003:7.
9 Oxford Centre for Evidence-based Medicine. Levels of evidence and grades
of recommendation. Web site hosted by University Department of Psychiatry,
Warneford Hospital, Headington, Oxford.
levels_of_evidence.asp (accessed 2 Oct 2003).
10 National Health and Medical Research Council. How to use the evidence:
assessment and application of scientific evidence. Canberra: Commonwealth
of Australia, 2000:10, 6.
11 Popay J, Williams G. Public health research and lay knowledge. Soc Sci Med
12 Popay J, Thomas C, Williams G, et al. A proper place to live: health
inequalities, agency and the normative dimensions of space. Soc Sci Med
13 Popay J, Rogers A, Williams G. Rationale and standards for the systematic
review of qualitative literature in health services research. Qualitative Health
Research 1998;8:341–51.
14 El Ansari W, Phillips CJ, Zwi AB. Narrowing the gap between academic
professional wisdom and community lay knowledge: perceptions from
partnerships. Public Health 2002;116:151–9.
15 Watkins F, Bendel N, Scott-Samuel A, et al. Through a glass darkly: what
should public health observatories be observing? J Public Health Med
16 Little M. Assignments of meaning in epidemiology. Soc Sci Med
17 Audi R, ed. The Cambridge dictionary of philosophy. Cambridge: Cambridge
University Press, 1995.
18 Delbridge A, Bernard JRL. The compact Macquarie dictionary. Sydney: The
Macquarie Library and Macquarie University, 1994.
19 Phelan P, Reynolds P. Argument and evidence; critical analysis for the social
sciences. London: Routledge, 1996:12.
20 Fletcher RH, Fletcher SW, Wagner EH. Clinical epidemiology; the essentials.
3rd edn. Baltimore: Williams and Wilkins, 1996:12.
21 Annas GJ. Burden of proof: judging science and protecting public health in
(and out of) the courtroom. Am J Public Health 1999;89:490–3.
22 Welch HG, Lurie JD. Teaching evidence-based medicine: caveats and
challenges. Acad Med 2000;75:235–40.
23 Rothman KJ, Greenland S. Modern epidemiology. 2nd edn. Philadelphia:
Lippincott-Raven, 1998:30, 662.
24 Parascandola M, Weed DL. Causation in epidemiology. J Epidemiol
Community Health 2001;55:905–12.
25 Sackett DL, Haynes RB, Guyatt GH, et al. Clinical epidemiology: a basic
science for clinical medicine. 2nd edn. Boston: Little, Brown, 1991.
26 Sackett DL, Richardson WS, Rosenberg W, et al. Evidence-based medicine;
how to practice and teach EBM. New York: Churchill Livingstone, 1997.
27 Glasziou P, Longbottom H. Evidence-based public health practice.
Aust N Z J Public Health 1999;23:436–40.
28 Frommer M, Rychetnik L. From evidence-based medicine to evidence-based
public health. In: Lin V, Gibson B, eds. Evidence-based health policy; problems
and possibilities. Melbourne: Oxford University Press, 2003:61.
29 McLeroy K, Norton B, Kegler M, et al. Community-based intervention.
Am J Public Health 2003;93:529–33.
30 Suchman EA. Evaluative research. New York: Russell Sage, 1967.
31 Glasziou P, Irwig L, Bain C, et al. Systematic reviews in health care; a practical
guide. Cambridge: Cambridge University Press, 2001.
32 Patton MQ. Practical evaluation. Beverly Hills: Sage, 1982.
33 Baranowski T, Stables G. Process evaluation of the 5-a-day projects. Health
Education and Behavior 2000;27:157–66.
34 Hawe P, Degeling D, Hall J. Evaluating health promotion; a health workers
guide. Sydney: MacLennan and Petty, 1990.
35 Patton MQ. Qualitative evaluation and research methods. 2nd edn. Newbury
Park: Sage, 1990.
36 Scanlon JW, Horst P, Jay JN, et al. Evaluability assessment: avoiding type III
and type IV errors. In: Gilbert GR, Conklin PJ, eds. Evaluation
management. A source book of readings. Charlottesville: US Civil Service
Commission, 1977.
37 Wholey JS. Evaluability assessment. In: Rutman L, ed. Evaluative research
methods; a basic guide. Beverly Hills: Sage, 1977.
38 Scriven M. Prose and cons about goal free evaluation. Evaluation Comment
39 Patton MQ. Utilization-focused evaluation. 2nd edn. Beverly Hills: Sage,
40 Hawe P. Needs assessment must become more change-focused.
Aust N Z J Public Health 1996;20:473–8.
41 Oxman AD, Sackett DL, Guyatt GH for the evidence based medicine working
group. Users guides to the medical literature I. How to get started. JAMA
42 The Cochrane Collaboration. The Cochrane reviewers handbook. Current
version 4.2, last updated March 2003.
hbook.htm (accessed 26 Sep 2003).
43 NHS Centre for Reviews and Dissemination. Undertaking systematic reviews
of research on effectiveness: CRD’s guidance for carrying out or
commissioning reviews (CRD report 4; 2nd edn 2001). http:// (accessed 26 Sep 2003).
44 Rychetnik L, Frommer M. A schema for evaluating evidence on public health
interventions—version 4. Melbourne: National Public Health Partnership,
July 2002.
schemaV4.pdf (accessed 2 Oct 2003).
45 National Health and Medical Research Council. A guide to the development,
implementation and evaluation of clinical practice guidelines. Canberra:
Commonwealth of Australia, 1999.
46 International Union for Health Promotion and Education. The evidence of
health promotion effectiveness; shaping public health in a new Europe. Part
one (core document) and part two (evidence book). 2nd edn. A report for the
European Commission. Brussels. Luxembourg: European Commission, 2000.
47 Canadian Task Force on the Periodic Health Examination. The periodic
health examination: Canadian task force on the periodic health examination.
Can Med Assoc J 1979;121:1193–1254.
48 Woolf SG, DiGuiseppi CG, Atkins D, et al. Developing evidence-based
clinical practice guidelines: lessons learned by the US Preventive Services Task
Force. Annu Rev Public Health 1996;17:511–38.
49 Briss PA, Zaza S, Pappaioanou M, et al. Developing an evidence-based guide
to community preventive services—methods. Am J Prev Med
2000;18(suppl 1):35–43.
50 Report of the US Preventive Services Task Force. Guide to clinical preventive
services. 3rd edn. Periodic updates.
ajpmsuppl/harris3.htm#codes (accessed 7 Nov 2003).
51 Flegg KM, Rowling YJ. Clinical breast examination. A contentious issue in
screening for breast cancer. Aust Fam Physician 2000;29:343–6.
52 Harris R, Lohr KN. Screening for prostate cancer: an update of the evidence
for the US Preventive Services Task Force. Ann Intern Med 2002;137:917–29.
53 Report of the US Preventive Services Task Force. Screening for prostate
cancer; recommendations and rationale. Guide to clinical preventive services.
3rd edn. ... taterr.htm
(accessed 7 Nov 2003).
54 American Cancer Society. Cancer reference information; detailed guide:
prostate cancer.
CRI_2_4_3X_Can_prostate_cancer_be_found_early_36.asp (accessed 7 Nov 2003).
55 Chapman S. Advocacy in public health: roles and challenges. Int J Epidemiol
56 Canadian Health Services Research Foundation. Knowledge transfer in
health. A report on a two-day conference jointly organised by the Canadian
Research Transfer Network and the Health Research Transfer Network of
Alberta. Calgary, 2002.
57 Lin V, Gibson G, eds. Evidence-based health policy; problems and
possibilities: policy case studies. Melbourne: Oxford University Press,
58 Shamasunder B, Bero L. Financial ties and conflicts of interest between
pharmaceutical and tobacco companies. JAMA 2002;288:738–44.
59 Bero L. Implications of the tobacco industry documents for public health and
policy. Annu Rev Public Health 2003;24:267–88.
60 Mackenbach JP. Tackling inequalities in health: the need for building a
systematic evidence base. J Epidemiol Community Health 2003;57:162.
61 Campbell K, Waters E, O’Meara S, et al. Interventions for preventing obesity
in children. Cochrane Library. Issue 4. Oxford: Update Software, 2002.
62 Davis P, Howden-Chapman P. Translating research findings into health
policy. Soc Sci Med 1996;43:865–72.
63 Lomas J. Using ‘linkage and exchange’ to move research into policy at a
Canadian Foundation. Health Aff 2000;19:236–40.
64 Moynihan R. Who pays for the pizza? Redefining the relationships between
doctors and drug companies. 1: Entanglement. BMJ 2003;326:1189–92.
65 Moynihan R. Who pays for the pizza? Redefining the relationships between
doctors and drug companies. 2: Disentanglement. BMJ 2003;326:1193–6.
66 Moynihan R. Cochrane plans to allay fears over industry influence. BMJ
67 Petticrew M. Systematic reviews from astronomy to zoology: myths and
misconceptions. BMJ 2001;322:98–101.
68 Thomson H, Petticrew M, Morrison D. Health effects of housing improvement:
systematic review of intervention studies. BMJ 2001;323:187–90.
69 Farrington DP, Welsh BC. Improved street lighting and crime prevention.
Justice Quarterly 2002;19:313–42.
70 Oliver S. Exploring lay perspectives on questions of effectiveness. In:
Maynard A, Chalmers I, eds. Non-random reflections on health services
research. London: BMJ Books, 1997:272–91.
71 Oliver S, Dezateux C, Kavanagh J, et al. Disclosing carrier status to parents
following newborn screening. Cochrane Library. Issue 2. Oxford: Update
Software, 2003.
72 Popay J, Sowden A, Robert H, et al. Developing methods for the narrative
synthesis of quantitative and qualitative data in systematic reviews of
effectiveness. Economic and Social Research Council; research methods
program. ... opay.shtml
(accessed 7 Nov 2003).
73 Cook DJ, Mulrow CD, Haynes RB. Synthesis of best evidence for health care
decisions. In: Mulrow CD, Cook DJ, eds. Systematic reviews; synthesis of best
evidence for health care decisions. Philadelphia: American College of
Physicians, 1998:7.
74 Thomas R. School-based programmes for preventing smoking. Cochrane
Library. Issue 3. Oxford: Update Software, 2003. (Most recent update May
2003, and last substantive update January 2003).
75 Deeks JJ, Dinnes J, D’Amico R, et al. Evaluating non-randomised intervention
studies. Health Technol Assess 2003;7.
summ727.htm (accessed 7 Nov 2003).
76 Rychetnik L, Frommer M, Hawe P, et al. Criteria for evaluating evidence
on public health interventions. J Epidemiol Community Health
77 Hill AB. The environment and disease: association or causation. Proc R Soc
Med 1965;58:295–300.
78 Hill AB. A short textbook of medical statistics. London: Hodder and Stoughton,
79 Susser MW. What is a cause and how do we know one? A grammar for
pragmatic epidemiology. Am J Epidemiol 1991;133:635–48.
80 Fletcher RH, Fletcher SW, Wagner EH. Clinical epidemiology; the essentials.
3rd edn. Baltimore: Williams and Wilkins, 1996.
81 Entman RM. Framing: toward clarification of a fractured paradigm. Journal of
Communication 1993;43:51–8.
82 Schon DA. The reflective practitioner; how professionals think in action. New
York: Basic Books, 1983.
83 Lloyd B, Hawe P. Solutions forgone? How health professionals frame postnatal
depression as a problem. Soc Sci Med 2003;57:1783–95.
84 Kuhn TS. The structure of scientific revolutions. 3rd edn. Chicago: University of
Chicago Press, 1996.
85 Lincoln YS, Guba EG. Paradigmatic controversies, contradictions, and
emerging influences. 2nd edn. In: Denzin NK, Lincoln YS, eds. Handbook of
qualitative research. Thousand Oaks: Sage, 2000.
86 Sindall C. Health policy and normative analysis: ethics, evidence and politics.
In: Lin V, Gibson B, eds. Evidence-based health policy; problems and
possibilities. Melbourne: Oxford University Press, 2003:80–94.
87 Willis E, White K. Evidence-based medicine, the medical profession and
health policy. In: Lin V, Gibson B, eds. Evidence-based health policy; problems
and possibilities. Melbourne: Oxford University Press, 2003:33–43.
88 Willis K. Challenging the evidence—women’s health policy in Australia. In:
Lin V, Gibson B, eds. Evidence-based health policy; problems and
possibilities. Melbourne: Oxford University Press, 2003:211–23.
APHORISM OF THE MONTH
Epidemiology and a sense of place
‘‘My books are always about living in places, not just rushing through them. As we get to
know Europe slowly, tasting the wines, cheeses, and characters of different countries you
begin to realize that the important determinant of any culture is after all … the spirit of
place. Just as one particular vineyard will always give you a special wine with discernible
characteristics so a Spain, an Italy, a Greece will always give you the same type of culture
… will express itself through the human beings just as it does through its wild flowers’’
(Lawrence Durrell, New York Times Magazine, 12 June 1960)
As one of the major disciplines underpinning the study of public health, epidemiology
concerns itself with the analysis of time, place, and person. Durrell’s beautiful insight can
make our epidemiological teaching seem one dimensional. If we are to unpack the
epidemiological study of place we need help from the social sciences, anthropology, and not
least from personal narrative. We must also embrace the interrelationship between humans
and the environment that is our habitat.
