Program evaluation methodologies

Writing down or diagramming the intervention logic ensures that you clearly understand how the project is supposed to work and what was expected at the beginning. The intervention logic is dynamic and might change during the course of the project, but these changes can be documented and give good insight into areas where plans and expectations had to be adjusted to reality. Once you have spelled out in detail how your project is planned to work and what the expected impact is, you are in a position to formulate good evaluation questions.
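
To make this concrete, the intervention logic can also be captured as a small structured sketch. The Python snippet below is only an illustration, assuming a standard logic-model breakdown into inputs, activities, outputs, outcomes, and intended impact; all field names and example entries (trainers, workshops, teachers) are hypothetical placeholders, not taken from any particular project.

```python
# Minimal sketch of a written-down intervention logic (logic model).
# All field names and example entries are illustrative placeholders.
intervention_logic = {
    "inputs": ["project budget", "two trainers", "training materials"],
    "activities": ["run teacher training workshops in partner schools"],
    "outputs": ["number of teachers trained", "number of workshops held"],
    "outcomes": ["teachers apply the new methods in class"],
    "intended_impact": "improved quality of teaching in partner schools",
    "assumptions": ["teachers can attend workshops during school hours"],
}

def print_logic(logic):
    """Print the chain from inputs to intended impact for review."""
    for level in ["inputs", "activities", "outputs", "outcomes",
                  "intended_impact", "assumptions"]:
        print(f"{level}: {logic[level]}")

if __name__ == "__main__":
    print_logic(intervention_logic)
```

Keeping the logic in one place like this also makes it easy to record later changes next to the original expectations.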

Evaluation questions are the questions that your evaluation is supposed to answer. Formulating them gives you the opportunity to specify what you actually want to analyze in your evaluation. If you word your questions carefully, you can make sure that a critical analysis can take place.

At the same time, you should try to find questions that you will actually be able to answer. While project applications are full of promised impact, in reality it is quite difficult to measure impact.

To assess the impact of your project, you would also need a great deal of data from outside your project to be sure that no external events influenced its outcome.

Sometimes that means breaking one issue down into several questions. These questions can be quantitative, answered by hard data, numbers, and so on. As shown in figure 3 below, you have to find the middle ground between being too broad and too narrow.

The question on the left, about the impact on education, is too broad; it would not be possible to answer it in a project evaluation. An impact evaluation on education would have to take into consideration many other factors, such as a general shift in attitudes, all other initiatives in the sector, policy changes, and so on. Even if all these data were available, it would still be very difficult to quantify the impact in comparison to other interventions.

If you tried to answer a question of this scope, the donors would know that your evaluation must be flawed.

Figure 3: Comparison between different evaluation questions.

The question on the right, about the number of schools, is by comparison too narrow. It could be answered with a simple number and would not give any further information about the quality of education or the actual use of these schools.

It leaves no room for critical analysis and would therefore not be a good evaluation question. The questions in the middle show one way to combine quantitative and qualitative questions that can lead to a more critical assessment of the project activities while still giving a good picture of what the project has actually achieved. Once you have defined the appropriate evaluation questions, the next step is to think about the necessary data and the methods to analyze that data.

There are plenty of tools and instruments available to conduct an evaluation, but to decide which ones are appropriate you have to take into consideration the availability of data, the quality of your data, and the resources available for the evaluation.

Some instruments are very time-intensive, so if you did not allocate sufficient time and manpower for the evaluation, these instruments are also not a good fit.

It is also important to allocate sufficient resources to your evaluation methodology so that it can actually be carried out. Often the evaluation does not get enough attention in terms of resources, and people do not have enough designated time to carry it out.

This poses several risks: not enough time is designated for this important task, and a project manager evaluating his or her own work might be biased.

Once you have carried out the above-mentioned steps, ideally in a team, you have gathered enough information to design your evaluation methodology. You have decided which methods you will need and which data you will have to collect for them, and ideally you have already allotted the corresponding responsibilities to the assigned staff so that everybody knows what his or her role is in the process.
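
One way to put these decisions in writing is a simple evaluation matrix that links each evaluation question to its data, analysis method, responsible person, and deadline. The sketch below is hypothetical: the questions, data sources, methods, roles, and field names are invented for illustration and would be replaced by your own.

```python
# Sketch of an evaluation matrix: each question is linked to the data it
# needs, the method used to analyze it, and the person responsible.
# All questions, methods, roles, and deadlines are illustrative placeholders.
evaluation_plan = [
    {
        "question": "How many teachers completed the full training?",
        "data": "attendance records",
        "method": "count of completions per workshop",
        "responsible": "project assistant",
        "deadline": "end of month 6",
    },
    {
        "question": "Do trained teachers use the new methods in class?",
        "data": "classroom observations, teacher interviews",
        "method": "qualitative analysis of observation notes",
        "responsible": "external evaluator",
        "deadline": "end of month 12",
    },
]

def unassigned(plan):
    """Return questions that still lack a responsible person."""
    return [row["question"] for row in plan if not row.get("responsible")]

if __name__ == "__main__":
    for row in evaluation_plan:
        print(f"- {row['question']} -> {row['method']} ({row['responsible']})")
    print("Unassigned questions:", unassigned(evaluation_plan))
```

A matrix like this makes it immediately visible when a question has no data source, no method, or no owner.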

If you put this information together in a document, it is also a good opportunity to share it with your donors or potential donors. A well-thought-out evaluation methodology shows that you and your organization are very familiar with the working area of your project, have put a lot of thought into the design, and are able and willing to critically analyze your project interventions.

It creates transparency and thus gives donors more reason to trust you and your organization. It also establishes common ground with respect to expectations for the final evaluation report and gives all stakeholders the opportunity to add input if needed and desired. Of course, designing the methodology is only the first step. Throughout the project, you have to make sure that it is actually implemented according to plan and that no big problems arise.

Most program managers assess the value and impact of their work all the time when they ask questions, consult partners, make assessments, and obtain feedback.

They then use the information collected to improve the program. What distinguishes program evaluation from ongoing informal assessment is that program evaluation is conducted according to a set of guidelines.

Evaluation should be practical and feasible and conducted within the confines of resources, time, and political context. Moreover, it should serve a useful purpose, be conducted in an ethical manner, and produce accurate findings. Evaluation findings should be used both to make decisions about program implementation and to improve program effectiveness. Many different questions can be part of a program evaluation, depending on how long the program has been in existence, who is asking the question, and why the information is needed.

All such questions are appropriate evaluation questions and might be asked with the intention of documenting program progress, demonstrating accountability to funders and policymakers, or identifying ways to make the program better. Increasingly, public health programs are accountable to funders, legislators, and the general public. Many programs do this by creating, monitoring, and reporting results for a small set of markers and milestones of program progress. Linking program performance to program budget is the final step in accountability.
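
As a rough illustration of such monitoring (not drawn from any real program), a small set of markers can be compared against milestone targets at each reporting date; the indicator names and numbers below are invented.

```python
# Sketch of reporting progress against a small set of markers and milestones.
# Indicator names and target values are invented for illustration only.
milestones = {
    "teachers_trained": {"target": 120, "actual": 95},
    "schools_reached": {"target": 15, "actual": 14},
    "workshops_held": {"target": 24, "actual": 20},
}

def progress_report(data):
    """Print each marker's progress toward its milestone target."""
    for name, values in data.items():
        pct = 100 * values["actual"] / values["target"]
        print(f"{name}: {values['actual']}/{values['target']} ({pct:.0f}% of target)")

if __name__ == "__main__":
    progress_report(milestones)
```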

The early steps in the program evaluation approach, such as logic modeling, clarify these relationships, making the link between budget and performance easier and more apparent. While the terms surveillance and evaluation are often used interchangeably, each makes a distinctive contribution to a program, and it is important to clarify their different purposes.

Surveillance is the continuous monitoring of, or routine data collection on, various factors over time. Surveillance systems have existing resources and infrastructure. Data gathered by surveillance systems are invaluable for performance measurement and program evaluation, especially of longer-term and population-based outcomes.

There are limits, however, to how useful surveillance data can be for evaluators. For example, these surveillance systems may have limited flexibility to add questions for a particular program evaluation.

In the best of all worlds, surveillance and evaluation are companion processes that can be conducted simultaneously.

Evaluation may supplement surveillance data by providing tailored information to answer specific questions about a program. Data from specific questions for an evaluation are more flexible than surveillance data and may allow program areas to be assessed in greater depth. Evaluators can also use qualitative methods.

Both research and program evaluation make important contributions to the body of knowledge, but fundamental differences in the purpose of research and the purpose of evaluation mean that good program evaluation need not always follow an academic research model.

Research is generally thought of as requiring a controlled environment or control groups. In field settings directed at prevention and control of a public health problem, this is seldom realistic. Of the ten concepts contrasted in the table, the last three are especially worth noting. Unlike pure academic research models, program evaluation acknowledges and incorporates differences in values and perspectives from the start, may address many questions besides attribution, and tends to produce results for varied audiences.

Program staff may be pushed to do evaluation by external mandates from funders, authorizers, or others, or they may be pulled to do evaluation by an internal need to determine how the program is performing and what can be improved.

While push or pull can motivate a program to conduct good evaluations, program evaluation efforts are more likely to be sustained when staff see the results as useful information that can help them do their jobs better.

Data gathered during evaluation enable managers and staff to create the best possible programs, to learn from mistakes, to make modifications as needed, to monitor progress toward program goals, and to judge the success of the program in achieving its short-term, intermediate, and long-term outcomes.

Most public health programs aim to change behavior in one or more target groups and to create an environment that reinforces sustained adoption of these changes, with the intention that changes in environments and behaviors will prevent and control diseases and injuries. Through evaluation, you can track these changes and, with careful evaluation designs, assess the effectiveness and impact of a particular program, intervention, or strategy in producing these changes.

The Working Group prepared a set of conclusions and related recommendations to guide policymakers and practitioners. Program evaluation is one of ten essential public health services [8] and a critical organizational practice in public health. The underlying logic of the Evaluation Framework is that good evaluation does not merely gather accurate evidence and draw valid conclusions, but produces results that are used to make a difference.

You determine the market by focusing evaluations on questions that are most salient, relevant, and important. You ensure the best evaluation focus by understanding where the questions fit into the full landscape of your program description, and especially by ensuring that you have identified and engaged stakeholders who care about these questions and want to take action on the results.

The steps in the CDC Framework are informed by a set of standards for evaluation. The 30 standards cluster into four groups:

Utility: Who needs the evaluation results? Will the evaluation provide relevant information in a timely manner for them?

Feasibility: Are the planned evaluation activities realistic given the time, resources, and expertise at hand?

Propriety: Does the evaluation protect the rights of individuals and protect the welfare of those involved? Does it engage those most directly affected by the program and changes in the program, such as participants or the surrounding community?

Accuracy: Will the evaluation produce findings that are valid and reliable, given the needs of those who will use the results?

For example, in 1999, CDC published a framework to guide public health professionals in developing and implementing a program evaluation (CDC, 1999). Although the components are interdependent and might be implemented in a nonlinear order, the earlier domains provide a foundation for subsequent areas.

The components include engaging stakeholders, describing the program, focusing the evaluation design, gathering credible evidence, justifying conclusions, and ensuring use and sharing lessons learned.

Five years before CDC issued its framework, the Joint Committee on Standards for Educational Evaluation created an important and practical resource for improving program evaluation. The Joint Committee, a nonprofit coalition of major professional organizations concerned with the quality of program evaluations, identified four major categories of standards (propriety, utility, feasibility, and accuracy) to consider when conducting a program evaluation.

Propriety standards focus on ensuring that an evaluation will be conducted legally, ethically, and with regard for promoting the welfare of those involved in or affected by the program evaluation. In addition to the rights of human subjects that are the concern of institutional review boards, propriety standards promote a service orientation.

Utility standards are intended to ensure that the evaluation will meet the information needs of intended users. Involving stakeholders, using credible evaluation methods, asking pertinent questions, including stakeholder perspectives, and providing clear and timely evaluation reports represent attention to utility standards. The scope of the information collected should ensure that the data provide stakeholders with sufficient information to make decisions regarding the program.

Accuracy standards are intended to ensure that evaluation reports use valid methods for evaluation and are transparent in the description of those methods. Meeting accuracy standards might, for example, include using mixed methods. Both the CDC framework and the Joint Committee standards identify the need to be pragmatic and to serve intended users with the goal of determining the effectiveness of a program.
