The field of program evaluation presents a diversity of images
and claims about the nature and role of evaluation that
confounds any attempt to construct a coherent account of its
methods or confidently identify important new developments.
We take the view that the overarching goal of the program
evaluation enterprise is to contribute to the improvement of
social conditions by providing scientifically credible
information and balanced judgment to legitimate social agents
about the effectiveness of interventions intended to produce
social benefits. Because of its centrality in this perspective, this
review focuses on outcome evaluation, that is, the assessment of
the effects of interventions upon the populations they are
intended to benefit. The coverage of this topic is concentrated
on literature published within the last decade with particular
attention to the period subsequent to the related reviews by
Cook and Shadish (1994) on social experiments and Sechrest &
Figueredo (1993) on program evaluation.
The word ‘evaluation’ has become increasingly used in the
language of community, health and social services and
programs. The growth of talk and practice of evaluation in these
fields has often been promoted and encouraged by funders and
commissioners of services and programs. Following the interest
of funders, has been a growth in the study and practice of
evaluation by community, health and social service practitioners
and academics. When we consider why this move in evaluative
thinking and practice has occurred, we can assume the position
of the funder and simply answer, ‘...because we want to know if
this program or service works’. Practitioners, specialists and
academics in these fields have been called upon by governments
and philanthropists to aid the development of effective
evaluation. Over time, they have led their own thinking and
practice independently. Evaluation in its simplest form is about
understanding the effect and impact of a program, service, or
indeed a whole organization. Evaluation as a practice is not so
simple, however, largely because in order to assess impact, we
need to be very clear at the beginning what effect or difference
we are trying to achieve.
The literature review begins with an overview of qualitative and
quantitative research methods, followed by a description of key
forms of evaluation. Health promotion evaluation and advocacy
and policy evaluation will then be explored as two specific
domains. These domains are not evaluation methodologies, but
forms of evaluation that present unique requirements for
effective community development evaluation. Following this
discussion, the review will explore seven key evaluation
methodologies: appreciative enquiry, empowerment evaluation,
social capital, social return on investment, outcomes-based
evaluation, performance dashboards and scorecards, and developmental
evaluation. Each of these sections will include specific
methods, the values base of each methodology, the resources
required, and examples of the methodologies in action.
Research Methods and Types of Evaluation
This literature review will focus on methodologies and models
of evaluation. Comprehensive evaluation frameworks are likely
to involve a range of evaluation activities to assess different
aspects of a programme or organisation. Different types of
evaluation will answer different questions, and use different
types of data to answer those questions. For example, an
organisation may develop a comprehensive outcomes plan that
aligns its projects and activities with particular objectives and
outcomes, which lead to the achievement of its core goals and
aim. There will be different indicators chosen that will track
progress for certain outcomes and/or objectives. However, there
may still be a requirement or desire to examine in detail the
impact or effect of a specific project or activity or set of
activities. These evaluation exercises may be specific one-off
projects.
There are two major groups of methods that will be used in such
exercises, qualitative and quantitative methods. Quantitative
methods gather data that are countable and measurable. Common
forms of data analysed in quantitative evaluation include survey
data, service usage data, data from control groups (where one
group receives a particular service and the other does not),
published statistics and demographic data. Qualitative methods
of inquiry have their genesis in the social sciences, but are
widely used across a range of disciplines. These methods seek to
understand human behaviour: the reasons people do what they do,
and why they make the decisions they do. Common forms of
qualitative data that are collected and analysed include
interviews, focus groups, observations of practice or work, and
case studies (Trotman, 2008; Scriven, 2001; Davidson, 2005).
Many research projects will use both qualitative and
quantitative lines of enquiry; these projects are known as
mixed-methods studies. In this way qualitative data can help
unpack and strengthen quantitative hypotheses and results (and
vice versa). In terms of evaluation, there are two types of
evaluation that are useful to understand. Both can use
qualitative and quantitative data.
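The mixed-methods idea above can be made concrete with a small sketch. All data below are invented for illustration: survey ratings stand in for the quantitative strand, and theme codes assigned to interview excerpts stand in for the qualitative strand.

```python
# Minimal mixed-methods sketch with invented data: a quantitative
# summary of survey ratings alongside a tally of themes coded from
# qualitative interviews, so each strand can inform the other.
from collections import Counter

survey_scores = [4, 5, 3, 4, 2, 5, 4]            # 1-5 satisfaction ratings
interview_themes = ["access", "cost", "access",  # codes an evaluator might
                    "staff attitude", "access"]  # assign to interview excerpts

mean_score = sum(survey_scores) / len(survey_scores)
theme_counts = Counter(interview_themes)

print(f"mean satisfaction: {mean_score:.2f}")  # → mean satisfaction: 3.86
print(theme_counts.most_common(1)[0])          # → ('access', 3)
```

Here the dominant qualitative theme ("access") hints at why the satisfaction scores sit where they do, which is the sense in which qualitative data can unpack quantitative results.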
Formative Evaluation
Formative evaluation is often undertaken at the development
stage of a programme or service. This may happen at the
beginning of a programme design process or at review stages of
a programme. Formative evaluation complements (and perhaps
overlaps) with some of the outcome planning models described
earlier and seeks to understand:
· what resources are available
· what needs the community or people being served have
· what is already known about the community
· whether the programme or service design has been well thought
through (for example, are the assumptions and links made between
aims, outcomes, objectives and activities ‘right’? Have
indicators been identified?)
· who needs to be involved (Barnes, 2009; Dehar et al, 1993).
Needs assessments often form a part of a formative evaluation
process. Formative evaluations are usually for internal use, to
inform programme design and review (Davidson, 2005; Barnes,
2009).
Summative Evaluation
Summative evaluation provides information on the product's
efficacy (its ability to do what it was designed to do). For
example, did the learners learn what they were supposed to learn
after using the instructional module? In a sense, it lets the
learner know "how they did," but more importantly, by looking at
how the learners did, it helps you know whether the product
teaches what it is supposed to teach. Summative evaluation is
typically quantitative, using numeric scores or letter grades to
assess learner achievement. Summative evaluation looks at the
impact of an intervention on the target group. This type of
evaluation is arguably what is considered most often as
'evaluation' by project staff and funding bodies: that is,
finding out what the project achieved.
Summative evaluation can take place during the project
implementation, but is most often undertaken at the end of a
project. As such, summative evaluation can also be referred to
as ex-post evaluation (meaning after the event).
Summative evaluation is often associated with more objective,
quantitative methods of data collection. Summative evaluation
is linked to the evaluation drivers of accountability. It is
recommended to use a balance of both quantitative and
qualitative methods in order to get a better understanding of
what your project has achieved, and how or why this has
occurred. Using qualitative methods of data collection can also
provide a good insight into unintended consequences and lessons
for improvement.
Summative evaluation is outcome-focused more than
process-focused. It is important to distinguish outcome from output.
Summative evaluation is not about stating that three workshops
were held, with a total of fifty people attending (outputs), but
rather the result of these workshops, such as increased
knowledge or increased uptake of rainwater tanks (outcomes).
· Summative evaluation provides a means to find out whether
your project has reached its goals/objectives/outcomes.
· Summative evaluation allows you to quantify the changes in
resource use attributable to your project so that you can track
the impact of your project.
· Summative evaluation allows you to compare the impact of
different projects and make results-based decisions on future
spending allocations (taking into account unintended
consequences).
· Summative evaluation allows you to develop a better
understanding of the process of change, finding out what works,
what doesn’t, and why. This allows you to gather the knowledge
to learn and improve future project designs and implementation.
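The quantify-and-compare steps listed above can be sketched in a few lines. All figures are invented; the outcome indicator (rainwater tanks installed, echoing the example earlier in this section) is purely illustrative.

```python
# Hypothetical summative comparison: quantify the change in an outcome
# indicator for each project, then rank projects to support
# results-based decisions on future spending. All counts are invented.
projects = {
    "workshop series": {"before": 120, "after": 165},
    "door knocking":   {"before": 118, "after": 140},
    "media campaign":  {"before": 122, "after": 150},
}

# The outcome is the change in tanks installed, not the number of
# workshops held (that would be an output, not an outcome).
changes = {name: d["after"] - d["before"] for name, d in projects.items()}
ranked = sorted(changes.items(), key=lambda kv: kv[1], reverse=True)

for name, change in ranked:
    print(f"{name}: +{change} tanks")
# → workshop series: +45 tanks
# → media campaign: +28 tanks
# → door knocking: +22 tanks
```

A real comparison would of course also weigh cost per unit of change and any unintended consequences, as the bullets above note.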
The Evaluations Reviewed
Tobacco use is the leading preventable cause of death and
disease in the United States, contributing to more than 430,000
deaths annually (1a). Tobacco control programs are designed
ultimately to help reduce disease, disability, and death related
to tobacco use. To determine the effectiveness of these
programs, one must document and measure both their
implementation and their effect. Program evaluation is a tool
used to assess the implementation and outcomes of a program,
to increase a program’s efficiency and impact over time, and to
demonstrate accountability. This paper reviews research papers
that use formative and summative evaluation to determine the
effectiveness of projects.
The task of evaluation encourages us to examine the operations
of a program, including which activities take place, who
conducts the activities, and who is reached as a result. In
addition, evaluation will show how well the program adheres to
implementation protocols. Through program evaluation we can
determine whether activities are implemented as planned and
identify program strengths, weaknesses, and areas for
improvement. For example, a smoking cessation program may
be very effective for those who complete it, but it may not be
attended by many people. Evaluation activities may determine
that the location of the program or prospective participants’
lack of transportation is an attendance barrier. As a result,
program managers can try to increase attendance by moving the
class location or meeting times, or by providing free public
transportation.
The CDC has identified four goals that tobacco control
programs should work within to reduce tobacco-related
morbidity and mortality:
· Preventing the initiation of tobacco use among young people.
· Promoting quitting among young people and adults.
· Eliminating nonsmokers’ exposure to environmental tobacco
smoke (ETS).
· Identifying and eliminating the disparities related to tobacco
use and its effects among different population groups.
Comprehensive tobacco control programs use multiple strategies
to address these goals. Typically, strategies are grouped into
three program components: community mobilization, policy and
regulatory action, and the strategic use of media. Program
evaluation includes documenting the effectiveness of these
strategies in meeting program goals. Program evaluation is a
tool with which to demonstrate accountability to program
stakeholders (including state and local officials, policymakers,
and community leaders) by showing them that a program really
does contribute to reduced tobacco use and less exposure to
ETS. Evaluation findings can thus be used to show that money
is being spent appropriately and effectively and that further
funding, increased support, and policy change might lead to
even more improved health outcomes. Evaluation helps ensure
that only effective approaches are maintained and that resources
are not wasted on ineffective programs.
Getting Young Adults to Quit Smoking: A Formative Evaluation of
the X-Pack Program
Lorien C. Abroms, Sc.D., Richard Windsor, Ph.D., and Bruce
Simons-Morton, Ph.D.
In this study, Getting Young Adults to Quit Smoking: A Formative
Evaluation of the X-Pack Program, the authors use formative
evaluation to review the X-Pack Program. The X-Pack Program is a
program geared toward adult college students to help them quit
smoking tobacco. The evaluation found the program to be
effective. The lack of promising smoking cessation interventions
targeting young adults is a recognized public health problem.
This study was designed to determine the feasibility of a
young-adult-oriented program, the X-Pack Program, when
administered to college student smokers, and to estimate its
effect on smoking cessation. Participants (N=83) were randomized
after enrollment to receive either a moderately intensive,
E-mail-based, young-adult intervention (the X-Pack group) or a
less-intensive program aimed at a general adult audience (the
Clearing the Air group). Participants were assessed at baseline
and at 3 and 6 months after enrollment. Participants in the
X-Pack group rated their treatment more favorably overall, were
more engaged in program activities, and quit for more
consecutive days at the 3- and 6-month follow-ups, compared with
the Clearing the Air group. Differences in quit rates favored
the X-Pack group at the two follow-ups, but the differences were
not significant. These findings offer some support for the
X-Pack Program when administered to college smokers.
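A non-significant quit-rate difference is typical of a trial this small. As a hedged illustration (the quit counts below are invented, not the study's actual figures), a two-proportion z-test shows how a difference favoring one arm can still fail to reach significance at N = 83:

```python
# Why a quit-rate difference in a small trial can favor one arm yet not
# be statistically significant. Counts are hypothetical, not taken from
# the X-Pack study.
import math

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test using a pooled standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p from normal tail
    return z, p_value

# Invented 3-month quit counts: 10/42 (X-Pack) vs. 5/41 (Clearing the Air)
z, p = two_proportion_z_test(10, 42, 5, 41)
print(f"z = {z:.2f}, p = {p:.3f}")  # p > 0.05: favors X-Pack, not significant
```

With these toy numbers the X-Pack arm's quit rate is nearly double the comparison arm's, yet p remains well above 0.05, mirroring the pattern the study reports.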
Lorelei Cropley, F Mitchell, and Peter B. Anderson. Report on a
formative evaluation conducted for the Youth Against Tobacco
counter marketing campaign
Another paper also used formative evaluation to report on the
effectiveness of a tobacco cessation program. This paper reports
the formative evaluation findings of a tobacco counter-
marketing campaign and recommendations based on those
findings. Focus groups were used to determine reactions to four
messages for the Youth Against Tobacco (YAT) campaign.
Participants aged eleven to eighteen who resided in the
metropolitan New Orleans area were selected from area
schools using convenience sampling. Results indicated that
while reactions to the messages were positive overall, some
aspects of the messages seemed too complex to be used
effectively in the campaign. These results helped to guide
modification and selection of messages prior to use in the
campaign.
Adolescents from the metropolitan New Orleans area were
pretested using focus groups to determine if any of the four
messages or YAT logo were understandable, relevant,
memorable, and/or acceptable. Inclusion criteria specified that
participants be between the ages of eleven and eighteen and
reside in the metropolitan New Orleans area. Participants were
recruited by convenience sampling from two schools. A total of
24 adolescents participated in the focus groups. Instrumentation
for the focus group consisted of an unstructured moderator's
discussion guide with five open-ended questions (see Figure 3).
The instrument was developed based on existing pre-test
instruments (National Cancer Institute, 1992). A limitation of
focus group methodology is the subjectivity of responses;
therefore results cannot be applied to the general population.
However, this methodology is appropriate for the formative
evaluation stage of this campaign, where it was used to develop
message concepts and direct the use of these messages for a
tobacco counter-marketing campaign for adolescents in the
Greater New Orleans area.
Lorien Abroms, ScD, Meenakshi Ahuja, MBBS, MPHc, Yvonne Kodl,
MPHc, Justin Sims, CEO, Jonathan Winickoff, MD and Richard
Windsor, PhD, MS. The Text2Quit Program: Results From a
Formative Evaluation of An Interactive Mobile Health Smoking
Cessation Program
A formative evaluation method was also used to measure
“Text2Quit”. “Text2Quit is a personalized, interactive mobile
health program that provides a series of text messages over the
course of a 3-month period, before and after a participant’s quit
date. The text messages include educational messages, peer ex-
smoker messages, medication reminders and relapse messages.
Text2Quit also lets participants text in for support when they
need additional motivation or are having a craving”. The
participants were surveyed at the beginning of the project and
at two and four weeks post-enrollment.
Design and testing of an interactive smoking cessation
intervention for inner-city women. Anna M McDaniel, Gail R
Casper, Sondra K Hutchison, Renee M Stratton
The purpose of this study was to design and test the usability of
a computer-mediated smoking cessation program for inner-city
women. Design and content were developed consistent with
principles of user-centered design. Formative and summative
evaluation strategies were utilized in its testing. The summative
evaluation was designed to test usability in a naturalistic
environment. A sample of 100 women who receive care at an
inner-city community health center participated in the study.
Average time for completing the computer program was 13.9
minutes. Participants reported a high level of satisfaction with
usability of the program. Standardized instruments to measure
cognitive processes of change related to smoking were completed
at baseline and at 1 week. Participants reported a decrease in
favorable attitudes toward smoking (P=0.014) and an increase in
cognitive change processes at follow-up (P=0.037). These results
indicate that interactive computer technology is acceptable to,
and potentially useful for, promoting smoking cessation in
low-income women.
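The baseline-to-follow-up comparisons reported for this study (P=0.014, P=0.037) are within-subject tests. A minimal sketch of that kind of analysis, run on invented attitude scores rather than the study's data, is a paired t statistic on the pre/post differences:

```python
# Paired t statistic for within-subject pre/post change, with invented
# 1-5 favorable-attitude scores (not the McDaniel et al. data).
import math

def paired_t(before, after):
    """t statistic for the mean of the paired differences (after - before)."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

before = [4.1, 3.8, 4.5, 4.0, 3.6, 4.2, 3.9, 4.4]  # baseline scores
after  = [3.6, 3.5, 4.1, 3.8, 3.3, 3.9, 3.4, 4.0]  # 1 week later

t = paired_t(before, after)
print(f"t = {t:.2f}")  # |t| > 2.36 (df = 7, alpha = .05): attitudes decreased
```

Because each participant serves as their own control, this design detects modest shifts that a between-group comparison of the same size might miss.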
Tobacco and Literacy Education Project Pilot Test of Three
Tobacco Education Lessons: Evaluation Report
This paper uses both formative and summative methodology to test
the effectiveness of Tobacco Education lessons whose aims are
smoking prevention and cessation. This evaluation had two main
aspects: formative and summative. The purpose of the formative
aspect was to collect information from the teachers and students
that would help the team to make decisions about how to improve
the Tobacco Education lessons and materials. The purpose of the
summative (or outcome) aspect of the evaluation was to collect
information from the teachers and students about how the Tobacco
Education lessons affected them, in order to make a judgment
about the value of the lessons.
The writer conducted a mixed-method evaluation, using
qualitative data from teachers and students to make decisions
about how to revise the Tobacco Education curriculum, and using
a combination of qualitative and quantitative data to make a
judgment about the value of the lessons in meeting the key
objectives of the project. In addition to the four teachers who
taught the pilot lessons, and the adult students who
participated in their classes, we also collected questionnaire
data from a small comparison group of adult students in another
program who did not participate in the Tobacco Education
lessons.
The evaluators used a combination of individual written
feedback, focus groups and interviews to collect information
from the teachers about their suggestions for improving the
lessons and materials. The authors used student focus groups to
get information from students about their ideas for improving
the lessons and materials. To get input from teachers about what
they liked or would change in the lessons, they asked teachers
to provide specific comments about each lesson while and after
they taught it. They also gave them feedback forms with which to
provide suggestions about each lesson, including recording in
detail the particular activities they designed to introduce the
lesson to their students.
Using summative evaluation, the pre‐ and post‐lesson
questionnaires, interviews, and teacher and student focus groups
gave the evaluators information about whether and how much the
Tobacco Education lessons contributed to teachers’ intentions to
use the lessons in their future adult basic education classes.
In the teachers’ focus groups and interviews, the evaluators
asked teachers to rate the overall quality of the lessons.
Conclusion
Evaluation encompasses the set of tools that are used to
measure the effectiveness of public health programs by
determining what works. Traditional evaluations in public
health have focused on assessing the impact of specific program
activities on defined outcomes. Evaluation is also a conceptual
approach to the use of data—as part of a quality improvement
process—in public health management. Public health
organizations must continually improve upon the standards of
evidence used in the evaluation of public health so that results
can inform managerial and policy decision making. As public
health interventions become more integrated within the
community, collaboration in evaluation efforts is a growing
imperative. Evaluation concepts and methods are of growing
importance to public health organizations, as well as to
education and social services programs. Increasingly, public
health managers are being held accountable for their actions,
and managers, elected officials, and the public are asking
whether programs work, for whom, and under what conditions.
Public health decision makers need to know which program
variants work best, whether the public is getting the best
possible value for its investment, and how to increase the
impact of existing programs. These evaluation questions are
being asked of long-standing programs, new activities, and
proposed interventions. These developments parallel today’s
emphasis on “evidence-based medicine” in clinical areas and
suggest the growing role of “evidence-based
management” within public health organizations. In this context,
evaluation is, first of all, a set of tools used to improve the
effectiveness of public health programs and activities by
determining which programs work, and which program variants work
most effectively. These tools derive from social science and
health services research and include concepts of study design, a
variety of statistical methods, and economic evaluation tools.