2. Interest in the issue of ‘research quality’ is at an all-time high. Undoubtedly, one of the key spurs to the quest for higher standards in social research is the evidence-based policy movement. The chosen instrument for figuring out the best possible future interventions in a policy domain is the systematic review of all first-rate evidence from previous studies in that realm. A key step in the logic is to provide an ‘inclusion criterion’ as a means of identifying those existing studies upon which most reliance should be placed.
3. Pawson notes: “The very idea of evidence-based policy rests on a pair of remarkably brave claims. The first is the proposition that evidence can be heard amidst the political clamour of modern policy-making. The second is that evidence should have a privileged voice in policy formation because there are objective methods available to judge and justify the quality of the advice provided by social research”. The starting point for quality assessment in evidence-based policy lies with this assertion that there are objective means of detecting factual ‘truths’: truth is “out there” waiting to be discovered.
4. It isn’t good data as such that makes science honest. It is
the ceaseless scientific interrogation of its significance that
makes data honest. It is now commonly accepted that we
have no direct observational access to reality and that all
observation is ‘theory-laden’. (Pawson)
When your doctor suspects you have the ‘flu and pops a
thermometer in your mouth, she is not observing
temperature directly. She is utilizing a theory confirmed a
century previously about the linear expansion of mercury.
A theory about your illness is tested against a different,
well-established theory.
5. The modernisation of social care places a high premium on evidence. At the level of central government, commitment to service reform is increasingly based on evidence about effectiveness. At the level of citizens, acceptance of professional expertise is increasingly tempered by a well-informed critique, supported by improved access to high-quality information. At the level of service providers, accountable, regulated services mean ensuring that practice is based on evidence rather than on past practice or current patterns of service.
See https://www.sheffield.gov.uk/.../corporate.../professionals/.../Supporting-...
6. Within the various kinds of evidence required to
inform social care (including citizens’ views,
practitioners’ experience, and organizational audit
and inspection), that provided by research plays a
special role. The best research is specifically designed
to be as free as possible from bias in favour of any
interest group or policy position, and potentially
provides the most secure basis to inform national
policy.
7. Evidence-based health care originated because of gaps between evidentiary, ethical, and application concerns. Sackett et al. (1997) estimated that about two questions arose for every three patients and that 30 percent of all questions remained unanswered. (We do not know how many of the questions that arise in the course of social work remain unanswered.) Evidence-based practice (EBP) suggests and explores ways to decrease these gaps, both at the level of clinical practice and in decision making about groups or populations, for example in purchasing services (Gray, 2001). It is as much about the ethics of educators and researchers as it is about the ethics of practitioners and agency administrators.
8. Despite codes of ethics and accreditation standards, ethical and evidentiary issues are often far apart in practice. Consider the gaps between the obligations described in the Code of Ethics of the National Association of Social Workers (1996) and everyday practice regarding informed consent and drawing on practice- and policy-related research. For example, research findings suggest that social workers do not draw on practice-related research findings (e.g. Rosen, 1994; Rosen, Proctor, Morrow-Howell, & Staudt, 1995). The survey conducted by Sheldon and Chilvers (2000) found that 18% of the social workers surveyed (n = 2,285) had read nothing related to practice within the last six months.
9. If professionals are not familiar with the evidentiary status of alternative practices and policies, they cannot pass this information on to their clients, and so they cannot honor informed-consent obligations. If some alternatives are effective in attaining outcomes that clients and significant others value, and practice proceeds in ignorance of this information, clients are deprived of opportunities to achieve hoped-for changes in their lives.
10. EBP involves a shift in paradigm. Intuition and unsystematic clinical expertise are considered insufficient grounds on which to make decisions. On the other hand, the “value-laden nature of clinical decisions” (Guyatt & Rennie, 2002, p. 4) implies that we cannot rely on evidence alone. Thus, knowing the tools of evidence-based practice is necessary but not sufficient for delivering the highest quality of [client] care. The philosophy of evidence-based practice encourages practitioners to be effective advocates for their clients: “physicians concerned about the health of their patients as a group, or about the health of the community, should consider how they might contribute to reducing poverty” (Guyatt & Rennie, 2002, p. 9).
11. EBP originated within the medical school of McMaster University in Hamilton, Ontario, in the early 1990s (Evidence-Based Medicine Working Group [EBMWG], 1992). By definition, EBM involves the conscientious, explicit, and judicious application of best research evidence to a range of domains: clinical examinations, diagnostic tests, prognostic markers, and the safety and efficacy of interventions whose purposes may be therapeutic, rehabilitative, or preventative, with therapeutic interventions understandably getting most of the attention.
12. “EBM is laid out in a neat and orderly way, with a painstakingly described set of five steps that compose its practice, a list of questions to answer when following each of these steps, flow charts, a classification of evidence in terms of its relevance and value, and careful descriptions of blind, randomized clinical trials (RCTs) as the gold standard for deciding the efficacy of interventions” (Gilgun, 2005). RCTs are called experimental designs in the social sciences.
13. EBM relies heavily on “measurement”, that is, the quantification of indicators such as confidence intervals, effect size, experimental event rate, control event rate, and the number needed to treat to prevent one event. Guyatt et al. (2000) recommended the quantification of both evidence and values, stating that this is “the most rigorous approach to making recommendations” (p. 1839). Evidence about diagnosis, prognosis, or harm can arise from other forms of research besides RCTs, including case studies and qualitative research. Evidence about the efficacy of interventions whose face validity is self-evident and whose withholding poses ethical issues does not require RCTs (Ellis, Mulligan, Rowe, & Sackett, 1995).
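These indicators are straightforward arithmetic on trial counts. A minimal Python sketch, using invented counts purely for illustration, shows how the experimental event rate (EER), control event rate (CER), absolute risk reduction, number needed to treat, and a normal-approximation 95% confidence interval are typically computed:

```python
import math

# Invented counts for illustration: an 'event' is a poor outcome.
control_events, control_n = 30, 100            # control group
experimental_events, experimental_n = 18, 100  # experimental group

cer = control_events / control_n               # control event rate
eer = experimental_events / experimental_n     # experimental event rate

arr = cer - eer           # absolute risk reduction
rrr = arr / cer           # relative risk reduction
nnt = math.ceil(1 / arr)  # number needed to treat to prevent one event

# Normal-approximation 95% confidence interval for the ARR.
se = math.sqrt(cer * (1 - cer) / control_n
               + eer * (1 - eer) / experimental_n)
ci_low, ci_high = arr - 1.96 * se, arr + 1.96 * se

print(f"CER={cer:.2f}  EER={eer:.2f}  ARR={arr:.2f}  RRR={rrr:.2f}")
print(f"NNT={nnt}  95% CI for ARR: ({ci_low:.3f}, {ci_high:.3f})")
```

With these invented counts the sketch reports an ARR of 0.12 and an NNT of 9: roughly nine patients must be treated to prevent one additional poor outcome.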
14. The five steps of EBP:
1. Converting information needs related to practice decisions into answerable questions.
2. Tracking down, with maximum efficiency, the best evidence with which to answer them.
3. Critically appraising that evidence for its validity, impact (size of effect), and applicability (usefulness in practice).
4. Applying the results of this appraisal to practice and policy decisions. This involves deciding whether the evidence found (if any) applies to the decision at hand (e.g., is a client similar to those studied?) and considering client values and preferences, as well as other applicability concerns, in making decisions.
5. Evaluating our effectiveness and efficiency in carrying out these steps and seeking ways to improve them in the future (Sackett et al., 2000, pp. 3-4).
15. The stages of a systematic review of treatment effectiveness:
- Clarifying the question for review about the effectiveness of a particular treatment.
- Searching for primary studies that address this question.
- Appraising the quality of these studies in terms of their ability to answer the efficacy question.
- Extracting the data from each proficient study on the outcomes of the treatment in that particular trial.
- Synthesizing the data by aggregating the results of all competent trials.
- Disseminating the findings about the overall efficacy of the treatment.
16. If it is assumed that there is only one research design that permits authoritative statements to be made on treatment efficacy, then it follows that appraisal of research quality can be made directly and early in the review process. This is exactly the function of the quality standards articulated by Oxman and now well established in evidence-based medicine. Meta-analysis seeks to establish a causal linkage between a particular treatment and a specific outcome, and RCTs are deemed the only permissible design for making such inferences.
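The ‘synthesizing’ stage listed in point 15 is, at its core, a weighted average of the individual trial results. Below is a minimal Python sketch of the standard fixed-effect, inverse-variance pooling; the effect sizes and standard errors are invented for illustration:

```python
import math

# Invented per-trial results: (effect size, standard error) from each RCT.
trials = [(0.30, 0.12), (0.15, 0.10), (0.42, 0.20), (0.22, 0.08)]

# Fixed-effect, inverse-variance pooling: weight each trial by 1 / SE^2,
# so more precise trials contribute more to the pooled estimate.
weights = [1 / se ** 2 for _, se in trials]
pooled = sum(w * es for (es, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(f"Pooled effect: {pooled:.3f}  95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")
```

This is only the aggregation step; a full meta-analysis would also test for heterogeneity between trials and, where it is present, prefer a random-effects model.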
17. “The awesome logic of experimental control is
brought to bear here. If subjects are randomly
allocated into experimental and control
conditions, and the treatment is applied to the
former but not to the latter, then any subsequent
differences between the two groups must be the
result of the only matter on which they differ –
namely, the application of the treatment”
(Pawson, ESRC Working Paper 1).
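A small simulation makes this logic concrete. In the Python sketch below (the sample size and treatment effect are invented), subjects are randomly allocated to control and experimental conditions and the treatment is applied only to the latter; the observed difference between group means then recovers the treatment effect, because randomisation has balanced everything else:

```python
import random

random.seed(1)
TRUE_EFFECT = 2.0  # invented treatment effect

# Baseline outcomes vary across subjects for reasons unrelated to treatment.
subjects = [random.gauss(10, 3) for _ in range(1000)]

random.shuffle(subjects)                             # random allocation
control = subjects[:500]                             # no treatment applied
treated = [x + TRUE_EFFECT for x in subjects[500:]]  # treatment applied

diff = sum(treated) / len(treated) - sum(control) / len(control)
print(f"Observed difference: {diff:.2f}  (true effect: {TRUE_EFFECT})")
```

Because allocation was random, the observed difference estimates the treatment effect alone; any gap from the true value is sampling error, which the confidence intervals of point 13 are designed to quantify.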
18. “Do not mistake science for procedural uniformity. Science is not all control and calculation, checking and double-checking. It also proceeds through insight and imagination, speculative hunches and bold conjectures” (Pawson). Polanyi’s (1966) felicitous phrase, ‘we can know more than we can tell’, captures the point that any particular scientific inquiry has thousands of decision points, and that their resolution often relies on the experience, judgement, and tacit wisdom of the researcher. This is especially so when it comes to the matter of hypothesis generation and inference making. So science is truly value-laden!
19. Post-empiricist philosophy no longer dwells on technical definitions of science but seeks its identification in such matters as its propositional, critical, and normative structure. What matters is not research practice but research programmes (Lakatos, 1970). What counts is the logic of scientific discovery (Popper, 1959). It seems that science “is science” because it balances theory and method, concepts and evidence, guesswork and surveillance. With Haack (1993), we may say that findings are justified when they accord with the evidence and are consonant with other theories.