Evaluation Framework
Fogarty International Center
Advancing science for global health
Initial Document: December 2002 | Last Modified: July 2008 | Contact: Linda Kupfer, Ph.D.

I. Evaluation Criteria
The goals of evaluation at Fogarty are:
• To stimulate the performance of Fogarty programs and to encourage innovative approaches to address problems and issues relating to improving global health
• To provide a transparent process for assessment of Fogarty programs and to demonstrate sound stewardship of federal funds and the programs they support
• To provide information for strategic planning, strengthen programs, improve performance, enhance funding decisions, demonstrate public health and economic benefits, and provide new directions for Fogarty programs
• To provide mechanisms to identify program accomplishments to Fogarty, NIH, HHS, funding agencies, national and international partners, and the U.S. Congress
• To identify important lessons learned and best management practices in the performance of Fogarty programs as a whole, and to make recommendations for the implementation of future programs

Continuing evaluation is designed to strengthen Fogarty programs and enhance their impact. Several important areas of evaluation can be used to assess the effectiveness of a Fogarty program.
Areas of Evaluation
1. Program Planning
2. Program Management
a) Project Selection, b) Recruiting Talent, c) Institutional Setting, d) Program Components,
e) Human Subjects and Fiscal Accountability, and f) Best Practices
3. Partnerships and Communications
4. Program Results
a) Program Input, b) Program Outputs, and c) Program Outcomes
5. Program Impacts
a) Program Efficiency/Effectiveness and b) Program Relevance
The evaluation criteria are described in detail below, along with the corresponding metrics for assessment:
1. Program Planning
Criteria: Effective programs will use the strategic planning framework of Fogarty, as well as that of program partners, as the basis for development of the program RFA/PA. The RFA/PA should also be based on the needs of the U.S. scientific community and host countries, as identified in collaboration with stakeholders such as other government agencies, foreign scientists, and experts in the field.

Metrics: Program Planning
▪ Evidence of a planning process and a plan
(priority determination, clear articulation)
▪ Relevance of program to Fogarty, NIH ICs,
and HHS strategic plans
▪ Stakeholder involvement in planning
▪ Re-evaluation of program over time
▪ Integration of recommendations into planning
▪ Planning for sustainability of program results

2. Program Management
a. Project Selection: An effective program
should incorporate a strong peer review
process.
The selection/review process
should take into account host country needs
in the program’s scientific area as well as any
other criteria listed in the RFA or PA. Peer
review should include reviewers with relevant
developing country research experience.
b. Recruiting Talent: Every program will attract
a variety of talent. Strong programs will have
mechanisms in place to identify and attract
the best and most appropriate talent
available.

c. Program Components: Each program is
made up of various projects or grants that
together form a program. It is the role of the
Program Officer to ensure that the various
projects or grantees have a chance to interact
and gain experience from one another.
Network meetings should have goals and objectives that are clear to all participants from the beginning. Stakeholders and partners should be involved in the network meetings.
d. Institutional Setting: Programs vary in their institutional setting and institutional support. The program should be well supported by both the academic institution(s) and the federal institutions involved. Appropriate business practices must be in place at both the domestic and the foreign institution for grant implementation to go smoothly.

Metrics: Project Selection
▪ Composition of panels
▪ Review criteria
▪ Quality of feedback to PI
▪ Amount of time allowed for review
▪ Conflict of interest issues
▪ Involvement of the Program Officer

Metrics: Recruiting Talent
▪ Recruitment of new/young investigators
▪ Recruitment of foreign investigators
▪ Minority applicants
▪ Interdisciplinary teams
▪ Success rate
▪ Turnover of investigators

Metrics: Program Components
▪ Network meetings – goals and objectives of the meetings
▪ Other meetings/ways in which PIs and/or trainees get together to exchange ideas
▪ Program operation (award size, length of time, funding mechanism, funding amount, reapplication restrictions)

Metrics: Institutional Setting
▪ Matching funds
▪ Mentorship support
▪ Laboratory support
▪ Administrative support and good business practices

e. Human Subjects and Fiscal Accountability: Programs should demonstrate that they have appropriate mechanisms in place to account for federal funds and are properly documenting protocol reviews for human subjects.

Metrics: Human Subjects and Fiscal Accountability
▪ Presence of operational IRB
▪ Good accounting practices
▪ Good documentation practices
▪ Assurance that all intended funding is reaching foreign collaborators and trainees

f. Best Practices: As a result of ongoing evaluation, strong programs will help identify best practices with regard to various program factors, for example, prevention of brain drain, sustainability, and mentorship.

Metrics: Best Practices (Examples)
▪ Strategies used to prevent brain drain
▪ Strategies used to target program goals (e.g., interdisciplinarity)
▪ Strategies used to promote long-term mentoring
▪ Strategies for selecting trainees
▪ Strategies used to promote long-term networking
▪ Other best practices

3. Partnerships and Communication
a. Partnerships: Federal, national, and international partnerships are essential to addressing global health issues. Partnerships should be pursued, nurtured, and maintained.
b. Communications: To be fully successful, scientific results must be communicated to the user community and utilized. During the evaluation of the program, the link to the user community will be reviewed, and the implementation of the science into policy or practice will be assessed.

Metrics: Partnerships
▪ Number of partnerships
▪ Different types of partnerships (NIH, HHS,
other federal, NGO, private sector)
▪ Involvement of partners in development of the
program and its strategic goals
▪ Funds from partners
▪ Cost of partnership

Metrics: Communications
▪ Appropriate community input into strategic planning through informational meetings/training sessions held with the community
▪ Involvement of the community on the program’s advisory board
▪ Involvement of the program in the community
▪ Requests for information and presentations
▪ Community needs surveys
▪ User community feedback (mechanisms and tracking)
4. Program Results
Depending upon the age of a program, significant results will fall into different categories. The following should be documented, reported, analyzed, and evaluated (a simple data sketch follows the metrics below):
a. Program Input: The total of the resources put into the program (funds and in-kind input
from partners nationally and internationally – any “enabling resources”).
b. Program Outputs: The program must be
managed to produce program outputs that are
the immediate, observable products of
research and training activities, such as
publications or patent submissions, citations,
and degrees conferred. Quantitative indices
of output are tools for the program that allow
POs and PIs to track changes, highlight
progress and identify potential problems.

c. Program Outcomes: Longer-term results to which a program is designed to contribute, such as strengthened research capacity within the U.S. and foreign sites, effective transfer of scientific principles and methods, and success in obtaining/attracting further scientific and/or international support (expected for more mature programs).

Metrics: Outputs
▪ Number and list of publications (journal
articles, book chapters, reports, etc.)
▪ List of trainees as first author
▪ Number and list of presentations
▪ Number of trainees
▪ Fields of training
▪ Number and type of degrees/certificates
earned
▪ New curriculum developed and implemented
▪ Number and list of meetings

Metrics: Outcomes
▪ Number of laboratories started
▪ Scientific departments started or strengthened
▪ Scientific methods discovered – number and
type
▪ Number of new grants or new funding
procured
▪ Awards received
▪ Career paths initiated or enhanced
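
To make the input/output/outcome taxonomy concrete, here is a minimal illustrative sketch in Python, not part of the Fogarty framework itself; every field name and value is hypothetical rather than a Fogarty data standard.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical logic-model record mirroring the results categories above;
# the field names are illustrative assumptions, not an official schema.
@dataclass
class ProgramResults:
    inputs: List[str] = field(default_factory=list)    # enabling resources (funds, in-kind support)
    outputs: List[str] = field(default_factory=list)   # immediate products (publications, degrees)
    outcomes: List[str] = field(default_factory=list)  # longer-term results (capacity, new funding)

results = ProgramResults(
    inputs=["NIH award", "partner in-kind laboratory space"],
    outputs=["12 journal articles", "8 trainees, 3 MS degrees conferred"],
    outcomes=["new epidemiology department strengthened at host institution"],
)
print(results.outputs)  # the immediate, observable products described above
```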

5. Program Impacts
The total consequences of the program, including
unanticipated benefits. These can include the
influence of research activities on clinical public
health practice or health policy, success in
establishing a sustainable career structure,
affecting the career path of trainees, changes in
health care systems, and alterations in health
care laws. Demonstrating impacts requires more
complex analysis and synthesis of multiple lines
of evidence of both a quantitative and qualitative
nature (expected for the most mature programs).


Metrics: Impacts
▪ New policies adopted or advanced
▪ New scientific advancement developed
▪ Alteration of health care system
▪ Alteration of health care laws
▪ Alteration of health care practice
▪ Alteration of intervention implementation
▪ New clinical procedures adopted
▪ New career structure in place
▪ Improved health of population
a. Program Efficiency/Effectiveness: In addition to assessing program impacts, assessment of program efficiency can help strengthen program effectiveness (a brief computation sketch follows the metrics below).
b. Program Relevance: An effective program will demonstrate relevance to the progress of its scientific field as well as utility to the greater program community (e.g., practitioners, policymakers).

Metrics: Efficiency/Effectiveness
▪ Publications per dollar
▪ Publications per program
▪ Cost per trainee
▪ Trainees skilled per program

Metrics: Relevance
▪ Evidence of research outcomes disseminated
▪ Citation/impact factor related to program publications
▪ Qualitative evidence that program outcomes were useful to the program field/greater program community
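
As an illustrative aid only, the following minimal Python sketch shows how the ratio metrics above might be computed from basic program records; the record fields and numbers are hypothetical assumptions, not Fogarty data.

```python
from dataclasses import dataclass

@dataclass
class ProgramRecord:
    """Hypothetical per-program figures; field names are illustrative only."""
    name: str
    total_funding_usd: float  # program input: total funds
    publications: int         # output: journal articles, chapters, reports
    trainees: int             # output: trainees supported

def efficiency_metrics(p: ProgramRecord) -> dict:
    """Compute the ratio metrics listed above for a single program."""
    return {
        "publications_per_dollar": p.publications / p.total_funding_usd,
        "cost_per_trainee": p.total_funding_usd / p.trainees if p.trainees else None,
    }

# Worked example: a hypothetical program with $2.5M in funding,
# 40 publications, and 25 trainees.
example = ProgramRecord("Hypothetical Training Program", 2_500_000, 40, 25)
print(efficiency_metrics(example))
# -> {'publications_per_dollar': 1.6e-05, 'cost_per_trainee': 100000.0}
```

Ratios like these are most useful for tracking a single program over time; comparing them across programs requires the normalization to program resources and objectives described in Section II.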

II. Evaluation Principles and Elements
1. Principles of Evaluation at Fogarty
2. Elements and Basis for Review and Evaluation
3. Program Development
4. Self-Evaluation Process

Each is described in detail below:
1. Principles of Evaluation at Fogarty:
• Evaluation at Fogarty is a routine, continuous quality-improvement and review process.
• Evaluation focuses on outputs, outcomes, and impacts and mechanisms to ensure that these
occur. While reporting of metrics (number of trainees achieving advanced degrees, number
of publications, etc.) is necessary, reviews will go beyond metrics and will incorporate
qualitative data and depend on the basic principle of external peer review to generate
recommendations.
• Programs are assessed against their own goals and objectives, taking into account fiscal
resources and granting mechanisms.
• Review and evaluation use retrospective measurements of achievements over a specific time period (eventually a cyclical period), based in part on measured quantitative outputs, outcomes, and impacts (metrics), as well as success stories and more qualitative outputs, outcomes, and impacts. This information is used to make recommendations for the future.

2. Elements and Basis for Review and Evaluation
The review and evaluation process is a continuum that spans a period of time beginning with
strategic planning. Fogarty programs arise from the Fogarty Strategic Plan. Specific program
plans are then developed with input from stakeholders, in the form of well-developed Requests
for Applications (RFA) and Program Announcements (PA).

Program Officers then monitor the progress of trainees and projects. At the five-year point, a team of experts conducts a process evaluation and makes suggestions for improving the program; this type of mid-course correction can improve a program. During year 9/10 of the program, an outcome evaluation is conducted that includes data collection and analysis by a contractor who specializes in evaluation.
A key to effective program review is the degree to which the review is normalized to the
resources, objectives and program planning of the individual program. Given that each program
has different financial resources, utilizes different talent pools with various specialties, faces
different issues in host countries, works under unique institutional policies, and uses different
approaches to reducing global health disparities, the reviews are tailored to take program
variability into account.
3. Program Development
The foundation for individual program review is a well-developed program plan that culminates
in an RFA (PA). Importantly, planning a program at NIH normally requires a two-year lead time
to allow sufficient input, partnership development and administrative review. Each program has
its own RFA (PA) that can act as a strategic plan for that program. The RFA (PA) stems from
Fogarty and NIH strategic plans, as well as the strategic plans of the program partners.
Planning is imperative to program effectiveness and should be based on experience, past
program results, and stakeholder needs and expectations. Each program should develop a
plan that addresses its goals and objectives. Although this plan need not be formal, putting it in written form helps ensure continuity for the program. The program plan can be
developed and informed through consultations, workshops, and meetings and should be
specific to resource needs, managing the program to meet those needs, data needs, and data
gathering, analysis and storage.
A program plan, reflecting the input of management and constituents, will include:
• Articulation of the vision and focus of the program, as well as why this direction is being taken;
• Background on the scientific relevance of the program area, program implementation issues, and mechanisms for establishing priorities for investment of resources; and
• Goals, objectives, and performance milestone targets that provide guidance for evaluating program performance.

Planning is fundamental to program evaluation; a program must develop the understanding, communication, and data collection processes needed to meet its basic goals. A program should be reassessed, with new planning (planning workshops, planning meetings, etc.) taking place every five years or as appropriate. Network meetings can also be used as part of the continuous review and planning for a program.
4. Self-Evaluation Process
Each program should conduct self-evaluation and analysis on a regular basis, in between the more formal program evaluations. Each program’s self-evaluation will be based on performance milestones unique to that program, as well as the criteria given above for all programs. Annual self-evaluation can be accomplished at network meetings or following the submission of progress reports from the projects under the program. It is important that the self-evaluation include identification of results, potential problems, and mechanisms for resolving these problems. Analysis of program data should be conducted as part of the program self-analysis. In some cases, both collection and analysis of program data may need to be contracted out. Data collected by the program could include the metrics mentioned in the criteria section above.
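
For illustration only, here is one way such an annual roll-up might look in code: a minimal Python sketch, under assumed data shapes and hypothetical field names, that aggregates per-project progress-report counts and collects flagged problems for resolution.

```python
from collections import Counter
from typing import List, Dict

def aggregate_progress_reports(reports: List[Dict]) -> Dict:
    """Roll up hypothetical per-project progress reports into program-level
    self-evaluation figures (counts drawn from the metrics lists above)."""
    totals = Counter()
    problems = []
    for r in reports:
        totals["publications"] += r.get("publications", 0)
        totals["trainees"] += r.get("trainees", 0)
        totals["degrees_conferred"] += r.get("degrees_conferred", 0)
        problems.extend(r.get("problems", []))  # collect issues flagged for resolution
    return {"totals": dict(totals), "problems_to_resolve": problems}

# Example with two hypothetical project reports
reports = [
    {"publications": 5, "trainees": 3, "degrees_conferred": 1,
     "problems": ["IRB renewal delayed"]},
    {"publications": 2, "trainees": 4, "degrees_conferred": 2},
]
print(aggregate_progress_reports(reports))
```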

III. Evaluation Roles
1. Role of the Fogarty International Center Advisory Board (FICAB) and Fogarty Administration
2. Role of the Program Officer (PO)
3. Role of the Evaluation Officer (EO)
4. Expert Review Panels – Make-up and Role

Each is described in detail below:

1. Role of the Fogarty International Center Advisory Board (FICAB) and Fogarty Administration

It is anticipated that the Fogarty International Center Advisory Board (FICAB) will play a role in evaluation, either by participating in program reviews or by reviewing the evaluations as they are distributed. Fogarty will communicate the results of all Fogarty evaluations to the FICAB and Fogarty administration. It is anticipated that Fogarty administration will use program evaluations to make strategic funding and programmatic decisions.
2. Role of the Program Officer (PO)
Fogarty has ultimate responsibility for the effectiveness of its programs. The PO is responsible for the day-to-day evaluation and analysis of program progress. The PO works with the Evaluation Officer to analyze program progress, synthesize program results, and set up the review or evaluation. Together they select the appropriate outside experts for the review, determine its specifics (e.g., dates and the questions to be asked within the Framework), and review the report to ensure accuracy before it is finalized. Once the review is finalized, the PO will write a response to the review recommendations in a timely manner.
3. Role of the Evaluation Officer (EO)
The Evaluation Officer, in coordination with the Fogarty POs and Fogarty administration, is responsible for setting the annual schedule for review and evaluation and applies for all funds for reviews and evaluations. The Evaluation Officer works with the PO to set the agenda and schedule for the reviews and provides training for reviewers and experts. The Evaluation Officer works with the review panel to conduct the review and write the final report, and works with other NIH ICs and other experts on evaluation to ensure that Fogarty evaluations are current. The EO serves as the overall planner and interface for program evaluations. Review recommendations should be incorporated into the following evaluation.
4. Expert Review Panels – Make-up and Role
An expert panel can be used to help conduct any evaluation of Fogarty programs using the formal Framework and criteria. The panel can be made up of 3–5 members, including, if possible, one Fogarty Advisory Board member, and 3 to 6 experienced administrators and decision-makers, health care professionals, and scientists, as well as people experienced in program review from other disciplines as appropriate. Expert panel members should be highly respected and recognized in their fields. Panel membership should be jointly determined and agreed to by Fogarty staff and the Evaluation Officer. If needed, the panel should be chaired by an individual who is respected by all parties, very familiar with Fogarty objectives and programs, and committed to Fogarty over the longer term.
