
Importance of M&E


  1. 1. Importance of Monitoring and Evaluation
  2. 2. Lecture Overview  Monitoring and Evaluation  How to Build an M&E System
  3. 3. What is M, what is E, why and how to monitor MONITORING & EVALUATION
  4. 4. What is Monitoring  Ongoing process that generates information to inform decisions about the program while it is being implemented.  Routine collection and analysis of information to track progress against set plans and check compliance with established standards  Helps identify trends & patterns, adapt strategies, and inform decisions  Key words: • Continuous – ongoing, frequent in nature • Collecting and analyzing information – to measure progress towards goals • Comparing results – assessing the performance of a program/project
  5. 5. Why is Monitoring Important?  Evidence of how much has been or has NOT been achieved • Quantitative: numbers, percentage • Qualitative: narrative or observation  Examination of trends  Highlight problems  Early warning signs  Corrective actions  Evaluate effectiveness of management action  Determine achievement of results
  6. 6. What is Evaluation  Evaluation is an assessment of an intervention to determine its relevance, efficiency, effectiveness, impact, and sustainability. The intention is to provide information that is credible and useful, enabling incorporation of lessons learned into decision-making processes.  Key Words: • Assessment – of the value of an event or action • Relevance • Efficiency • Effectiveness • Impact • Sustainability • Lessons learned
  7. 7. What is Evaluation? (Diagram: Evaluation, Program Evaluation, Impact Evaluation)
  8. 8. What is M and what is E? Monitoring: measures progress towards goals, but does not tell us the extent to which results were achieved or the impact; continuous and frequent; has to take place during the intervention. Evaluation: measures whether progress towards the goal is caused by the intervention – causality; infrequent and time-bound; can evaluate an ongoing or completed intervention. (A minimal numeric impact-estimation sketch follows the slide list.)
  9. 9. Monitoring and Evaluation (Diagram: Monitoring; Evaluation, Program Evaluation, Impact Evaluation)
  10. 10. Components of Program Evaluation  What are the characteristics of the target population? What are the risks and opportunities? What programs are most suitable?  What is the logical chain connecting our program to the desired results?  Is the program being rolled out as planned? Is there high uptake among clients? What do they think of it?  What was the impact and the magnitude of the program?  Given the magnitude of impact and cost, how efficient is the program? (These questions map to: needs assessment; program theory assessment; monitoring and process evaluation; impact evaluation; cost effectiveness.) Are your questions connected to decision-making?
  11. 11. Evaluation (Diagram: Evaluation, Program Evaluation, Impact Evaluation)
  12. 12. Who is this Evaluation For?  Academics  Donors • Their Constituents  Politicians / policymakers  Technocrats  Implementers  Proponents, Skeptics  Beneficiaries
  13. 13. How can Impact Evaluation Help Us?  Answers the following questions • What works best, why and when? • How can we scale up what works?  Surprisingly little hard evidence exists on what works  We can do more with a given budget with better evidence  If people knew money was going to programmes that worked, it could help increase the pot for anti-poverty programmes
  14. 14. Programs and their Evaluations: Where do we Start? Intervention  Start with a problem  Verify that the problem actually exists  Generate a theory of why the problem exists  Design the program  Think about whether the solution is cost effective Program Evaluation  Start with a question  Verify the question hasn’t been answered  State a hypothesis  Design the evaluation  Determine whether the value of the answer is worth the cost of the evaluation
  15. 15. Life Cycle of a Program (diagram): Theory of Change / Needs Assessment → Designing the program to implement → Background preparation, logistics, roll-out of the program → Baseline evaluation → Monitoring implementation (process evaluation, progress towards targets) → Endline evaluation → Reporting findings (impact and process-evaluation findings) → Using the findings to improve the program model and delivery → Planning for continuous improvement → Change or improvement. Example – distributing reading materials and training volunteers: reading materials delivered; volunteers trained; target children are reached; classes are run and volunteers show up; attendance in classes; entire district is covered; refresher training of teachers; tracking the target children and convincing parents to send their child; incentives for volunteers to run classes daily and efficiently (motivation); efforts made for children to attend regularly; improved coverage.
  16. 16. Program Theory – a Snapshot: Inputs → Activities → Outputs → Outcomes → Impacts; inputs, activities, and outputs cover Implementation, while outcomes and impacts are the Results. (See the results-chain sketch after the slide list.)
  17. 17. With a focus on measuring both implementation and results: HOW TO BUILD AN M&E SYSTEM
  18. 18. Methods of Monitoring  First-hand information  Citizens' reporting  Surveys  Formal reports by project/programme staff • Project status report • Project schedule chart • Project financial status report  Informal reports  Graphic presentations
  19. 19. Monitoring: Questions  Is the intervention implemented as designed? Does the program perform? (Inputs → Outputs → Outcomes; implementation plans and targets)  Are intervention money, staff, and other inputs available and put to use as planned? Are inputs used effectively?  Are the services being delivered as planned?  Is the intervention reaching the right population and target numbers?  Is the target population satisfied with the services? Are they utilizing the services?  What is the intensity of the treatment?
  20. 20. Implementing Monitoring  Develop a monitoring plan • How should implementation be carried out? What is going to be changed? • Are the staff's incentives aligned with the project? Can they be incentivized to follow the implementation protocol? • How will you train staff? How will they interact with beneficiaries or other stakeholders? • What supplies or tools can you give your staff to make following the implementation design easier? • What can you do to monitor? (Field visits, tracking forms, administrative data, etc.) • Intensity of monitoring (frequency, resources required, …)?
  21. 21. Ten Steps to a Results-Based Monitoring and Evaluation System: 1. Conducting a readiness and needs assessment; 2. Agreeing on outcomes to monitor and evaluate; 3. Selecting key indicators to monitor outcomes; 4. Gathering baseline data on indicators; 5. Planning for improvement – selecting realistic targets; 6. Monitoring for results; 7. Using evaluation information; 8. Reporting findings; 9. Using findings; 10. Sustaining the M&E system within the organization
  22. 22. Conducting a needs and readiness assessment  What are the current systems that exist?  What is the need for monitoring and evaluation?  Who will benefit from this system?  At what levels will the data be used?  Do we have the organizational willingness and capacity to establish the M&E system?  Who has the skills to design and build the M&E system? Who will manage it?  What are the barriers to implementing an M&E system on the ground (e.g., a resource crunch)?  How will we overcome these barriers?  Will there be pilot programs that can be evaluated within the M&E system? - DO WE GO AHEAD?
  23. 23. Agreeing on outcomes (to monitor and evaluate)  What are we trying to achieve? What is the vision that our M&E system will help us achieve?  Are there national or sectoral goals (e.g., a commitment to achieving the MDGs)?  Is there political/donor-driven interest in the goals?  In other words, what are our Outcomes: improving coverage, learning outcomes… broader than focusing on merely inputs and activities
  24. 24. Selecting key indicators to monitor outcomes  Identify WHAT needs to be measured so that we know we have achieved our results  Avoid broad-based results; assess based on feasibility, time, cost, relevance  Indicator development is a core activity in building an M&E system and drives all subsequent data collection, analysis, and reporting  Arriving at indicators will take some time  Identify plans for data collection, analysis, reporting PILOT! PILOT! PILOT!
  25. 25. Gathering baseline data on indicators  Where are we today?  What is the performance of the indicators today?  Sources of baseline information: primary or secondary data  Data types: qualitative or quantitative  Data collection instruments
  26. 26. Planning for improvement – selecting realistic targets  Targets – quantifiable levels of the indicators  Sequential, feasible and measurable targets  If we reach our sequential set of targets, then we will reach our outcomes! (Target 1 → Target 2 → Target 3 → Outcomes)  Time-bound – universal enrolment by 2015 (outcome – better economic opportunities), every child immunized by 2013 (outcome – reduction in infant mortality), etc.  Available funding and resources to be taken into account. (A small indicator-and-target tracking sketch follows the slide list.)
  27. 27. Monitoring for implementation and results: Implementation monitoring tracks inputs, activities, and outputs (e.g., provision of materials; training of volunteers; usage of material; number of volunteers teaching), while results monitoring tracks outcomes and impacts (e.g., change in the percentage of children who cannot read; change in teacher attendance).
  28. 28. Evaluation – Using Evaluation Information Monitoring does not provide information on attribution and causality. Information from evaluation can be useful because it  helps determine whether the right things are being done  helps select among competing strategies by comparing results – are there better ways of doing things?  helps build consensus on scale-up  investigates why something did not work – scope for in-depth analysis  evaluates the costs relative to benefits and helps allocate limited resources
  29. 29. Reporting findings, Using results, Sustaining the M&E system  Reporting findings: what findings are reported to whom, in what format, and at what intervals. A good M&E system should provide an early warning system to detect problems or inconsistencies, as well as being a vehicle for demonstrating the value of an intervention – so do not hide poor results.  Using results: recognize both internal and external uses of your results  Sustaining the M&E system: some ways of doing this are generating demand, assigning responsibilities, increasing capacity, and gathering trustworthy data.
  30. 30. THANK YOU
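
The causality point on slides 8 and 13 can be made concrete with a minimal sketch: an impact evaluation compares outcomes for a group that received the intervention with a comparable group that did not. The Python sketch below is purely illustrative; the reading scores and group sizes are hypothetical, not data from the deck.

    from statistics import mean, stdev
    from math import sqrt

    # Hypothetical endline reading scores (0-100) for children in villages that
    # received the reading programme (treatment) and comparable villages that
    # did not (comparison).
    treatment = [62, 58, 71, 65, 60, 68, 74, 59, 66, 70]
    comparison = [55, 52, 60, 58, 49, 63, 57, 54, 61, 56]

    # Estimated impact: the difference in mean outcomes between the two groups.
    impact = mean(treatment) - mean(comparison)

    # Rough standard error of that difference, assuming independent samples.
    se = sqrt(stdev(treatment) ** 2 / len(treatment)
              + stdev(comparison) ** 2 / len(comparison))

    print(f"Estimated impact: {impact:.1f} points (SE ~ {se:.1f})")

Monitoring alone cannot produce this number, because it tracks the treated group only against its own plans and targets.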
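
The results chain on slides 16 and 27 can also be written down as a small data structure that separates implementation monitoring (inputs, activities, outputs) from results monitoring (outcomes, impacts). The entries below are hypothetical examples built around the reading programme, not taken from the deck.

    # A minimal, illustrative results chain for the reading programme example.
    # The level names follow the slides; the example entries are hypothetical.
    results_chain = {
        "inputs":     ["reading materials", "volunteer trainers", "budget"],
        "activities": ["distribute materials", "train volunteers", "run classes"],
        "outputs":    ["materials delivered", "volunteers trained", "children attending"],
        "outcomes":   ["change in share of children who cannot read"],
        "impacts":    ["long-term learning and economic opportunities"],
    }

    # Implementation monitoring covers the first three levels;
    # results monitoring covers the last two.
    IMPLEMENTATION = {"inputs", "activities", "outputs"}

    for level, items in results_chain.items():
        kind = "implementation" if level in IMPLEMENTATION else "results"
        print(f"{level:<10} ({kind} monitoring): {', '.join(items)}")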
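
Steps 3 to 6 of the ten-step model (indicators, baselines, targets, monitoring for results) can be tracked with a very small record per indicator. The Indicator class and the literacy numbers below are hypothetical, loosely based on the example in the notes (share of children in school, teacher attendance).

    from dataclasses import dataclass

    @dataclass
    class Indicator:
        """One monitoring indicator with its baseline and time-bound target."""
        name: str
        baseline: float  # value measured before the programme starts
        target: float    # realistic, time-bound target level
        current: float   # latest measured value

        def progress(self) -> float:
            """Share of the baseline-to-target gap closed so far."""
            gap = self.target - self.baseline
            return (self.current - self.baseline) / gap if gap else 1.0

    # Hypothetical indicators for the literacy example.
    indicators = [
        Indicator("Children in school (%)", baseline=50, target=100, current=72),
        Indicator("Teacher attendance (%)", baseline=60, target=90, current=75),
    ]

    for ind in indicators:
        print(f"{ind.name}: {ind.progress():.0%} of the way to the target")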

Editor's Notes

  • Evaluation is an assessment of a program in terms of its:
    RELEVANCE: is the program suited to the country context, and will it be useful for the people? (e.g., the water is already pure, but I distribute chlorine tablets)
    EFFICIENCY: How economically are resources being used? (e.g., the cost is more than the benefit)
    EFFECTIVENESS: How much effect did it have – were the objectives achieved or not?
    IMPACT: Long-term effects, intended or not intended
    SUSTAINABILITY: Will the program's benefits continue?
  • First let’s narrow down our definition of Evaluation
    Evaluation is a very big term and could mean many things…

    In general, we’ll be talking about program evaluation
    So that means, not the type of evaluation that’s more administrative in nature…
    Performance evaluations, audits, etc…
    Unless those are part of a new policy or program that we wish to evaluate…
    Programs are still a general term
    Could include Policies, or more generally, “interventions”
    What distinguishes impact evaluation?
    What makes “randomized evaluation” distinct?

    Where does monitoring fit in?
    The quiz you took at the beginning… that’s part of an evaluation. You’ll take one at the end as well. And you’ll also give us some course feedback
    something we’ll look at after this whole course is done and use it to make design and implementation changes to the course
    But it’s not something we consider part of the course design itself. It’s not really meant to serve as a pedagogical device.
    The clickers. That’s more part of monitoring. It’s specifically designed as part of the pedagogy.
    It gives us instant feedback based on which we make mid-course adjustments, corrections, etc.
    Part of the pedagogy is that we have a specific decision tree that the implementers (in this case, lecturers) use based on the results of the survey.
  • Progress (Hindi: vikaas); extent (Hindi: had)
    Building an evaluation system allows for:
    • a more in-depth study of results-based outcomes and impacts
    • bringing in other data sources than just extant indicators
    • addressing factors that are too difficult or expensive to continuously monitor
    • tackling the issue of why and how the trends being tracked with monitoring data are moving in the directions they are (perhaps most important).
  • Who is your audience? And what questions are they asking?
    Academics: we have quite a few academics in the audience.

    Beneficiaries: This may be slightly different from “who are your stakeholders”?

    This affects the type of evaluation you do, but also the questions you seek to answer.
  • This question is larger than a question of aid
    Aid accounts for less than 10% of development spending.
    Governments have their own budgets, their own programmes.
  • Before thinking about evaluations, we should think about what it is that we’re evaluating… Here I’ll generically call it, an “intervention”

    You’d be surprised how many policies are implemented that address non-existent problems. One of our evaluations in India is of a policy called continuous and comprehensive evaluation…

    Now let’s ask a very specific question…
  • Ignore blue boxes
  • If this is our program theory (Hindi: anumaan, parikalpna – a hypothesis), then it has two parts: the first is where the program is being implemented and the second is where the results have started to show.
  • How do we build an M&E system that measures both how the program is implemented and the results it produces?
  • How the World Bank says we can do it – the handbook for development practitioners
  • Give examples
  • Outcome (Hindi: parinaam)
    MDG example – the eight international development goals that all UN member states aimed to achieve by the year 2015
    - Eradicate extreme poverty and hunger
    - Achieve universal primary education
  • Indicator (Hindi: suchak, nideshak)
    Relevant (Hindi: uchit)

  • Instruments (Hindi: jariya)
  • Measure (Hindi: naap); feasible (Hindi: sambhav)
    Another example – I want higher literacy levels
    1) 50% children in school; 2) 100% children in school; 3) teacher attendance increases; 4) children's scores improve
  • Implementation monitoring – data collected on inputs, activities and immediate outputs; information on administrative, implementation, management issues
    Results monitoring – sets indicators for outcomes and collects data on outcomes; systematic reporting on progress towards outcomes
  • Evaluation – establishing causality; it can be expensive but is very useful in certain cases
