Policy 101
U.S.-funded education programs must be able to collect data on the costs and benefits of basic education investments to (a) increase transparency and accountability in using U.S. taxpayer dollars; (b) learn which programs are and are not effective; and (c) help countries track their progress toward education goals, particularly those aligned with USAID's Education Strategy. BEC regularly provides technical feedback, from the practitioners' perspective, on how to strengthen USAID's monitoring and evaluation practices.
Recommendations
The following best practices and policy recommendations were collaboratively written by BEC's 2016-2017 Monitoring & Evaluation working group. Download here.
MONITORING AND EVALUATION (M&E) IN USG PROGRAMS
USG-supported education programs, whether funded through projects administered by U.S.-based implementing partners or through transfers of funds to multilateral agencies such as the Global Partnership for Education, must be able to collect timely, actionable data on the costs and benefits of education investments to (a) increase transparency and accountability in using USG resources; (b) inform practice and programming with the goal of replicating successes (and eliminating those programs that are not effective); and (c) assist beneficiary countries in measuring and reporting against international education goals, particularly those aligned with USAID’s Education Strategy (2011–2017).
Successes and shortcomings can be understood through a variety of evaluations, including (1) rigorous impact evaluations that test the effectiveness of educational interventions against a counterfactual, (2) performance evaluations that use qualitative and quantitative methods to determine under what conditions an intervention works best and to inform “best-practice” design, and (3) value-for-money evaluations that compare outcomes to costs to assess the return on, and feasibility of, the investments made.
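As a purely illustrative aid, the minimal sketch below shows, in highly simplified form, how the first and third ideas fit together: an impact estimate is the difference between outcomes for a treatment group and a counterfactual, and a basic value-for-money figure relates that impact to program cost. All function names and figures are hypothetical and are not drawn from any USAID evaluation or tool.

```python
# Illustrative sketch only: relating an impact estimate (treatment vs. counterfactual)
# to program cost. Names and numbers below are hypothetical.

def impact_estimate(treatment_mean: float, control_mean: float) -> float:
    """Simple difference in mean outcomes between treatment and counterfactual groups."""
    return treatment_mean - control_mean

def cost_per_unit_gain(total_cost: float, impact: float, beneficiaries: int) -> float:
    """Cost per beneficiary per unit of outcome gained (a basic value-for-money measure)."""
    if impact <= 0:
        raise ValueError("No positive impact to cost out")
    return total_cost / (impact * beneficiaries)

# Hypothetical example: a reading program raises mean fluency from 20 to 28
# correct words per minute for 5,000 pupils at a total cost of $400,000.
gain = impact_estimate(treatment_mean=28.0, control_mean=20.0)   # 8 words/minute
vfm = cost_per_unit_gain(total_cost=400_000, impact=gain, beneficiaries=5_000)
print(f"Impact: {gain} words/minute; cost per pupil per word/minute gained: ${vfm:.2f}")
```

Real evaluations, of course, involve sampling, statistical inference, and cost models far beyond this arithmetic; the sketch is meant only to make the underlying comparison concrete.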
Combined, these evaluation strategies have had a powerful impact on the effectiveness and sustainability of education strategies and programming. For example, over the past decade, multiple impact assessments have revealed that learning outcomes are best improved by providing strong early grade reading programs in local languages, resulting in USAID’s Goal 1 strategy and multiple host government education reforms (Pflepsen, 2011). Countries as diverse as Egypt, Kenya, Yemen, and Nepal have made early grade reading a priority based on USG-supported evaluations. Formative and summative performance evaluations and targeted research have resulted in operational guidance and further refined the global community’s understanding of how to support education programs. For early grade reading, these evaluations have shown the efficacy of scripted lessons, supplementary reading materials, interactive teacher training, and ongoing coaching, as well as using student assessment in the classroom and for system management. Employing these approaches, the majority of USAID’s reading programs have shown significant positive effects on reading skill development. This information is foundational to national education plans, “including credible strategies informed by effective practices and standards” that many USAID partner countries are now preparing to take to scale.
LEVERAGING DATA FOR INCREASED IMPACT
Whether developing new books to improve reading outcomes or creating training programs to ensure that marginalized youth have the skills they need for employment, development practitioners need “just-in-time data” to tell them whether they are on the right path. Monitoring of program implementation is a powerful accountability mechanism, revealing whether promised activities and inputs have been provided. Monitoring also provides opportunities to observe whether the implementation and roll-out of activities are proceeding as planned and adhering to the intended design; when implementation or the distribution of inputs is unequal or does not match need, program officers can help redirect those activities. Monitoring is critical both to adjusting the approach, if needed, and to understanding evaluation results. Evaluation should always be connected to monitoring, allowing implementers to understand how activities and inputs contribute to and result in the desired outcomes.
It is important that a full range of monitoring and evaluation activities continue to be part of education programming to address the core questions of effectiveness, efficiency, and sustainability. Use of affordable information and communication technology for rapid data collection, analysis, and feedback should be expanded. External evaluations should be conducted to verify reported results and to bring added perspective that deepens understanding of program strengths and weaknesses. Experimentation and research are integral to the learning process. Whether incorporated into program design or conducted independently, targeted inquiries can answer key design questions as part of a deliberately developed research agenda formulated at the program, country, agency, or global level.
Guidance about conducting M&E should explore different options and methods for collecting data. Not all questions require costly randomized controlled trials or large surveys to yield credible information for informing and assessing programs; “just-enough” approaches to M&E are also needed to contain costs, conserve resources for implementation, and avoid overburdening program or country staff. More efficient instruments to measure results, especially student outcomes, must be designed for routine use, such as group-administered reading and math assessments, school readiness assessments, and social-emotional learning assessments.
Standardized monitoring data has been very helpful in allowing Washington-based decision makers to understand implementation progress across dozens of countries and field sites. However, both practitioners and managers need a better understanding of how to obtain, use, and share data. Program designers and managers must understand M&E design well enough to ensure that their plans, requirements, and budgets are realistic. Feedback loops are critical if data is to inform decision-making, whether at the project, agency, or government level. Provision must also be made for sharing data and its interpretation within and among partner countries and across the education development community.
Education programs vary greatly in dosage, duration, and context. An early grade reading program in a conflict-ridden country may differ significantly in results and costs from a similar program conducted in a stable, well-resourced environment, yet both may be considered successful. Aggregation and cross-country comparison must therefore be undertaken cautiously, if at all. At the same time, targets for program performance must be realistic, based on historical and comparable country data rather than wished-for results. Finally, a greater understanding of M&E should foster an appreciation that even interventions that demonstrate no discernible statistical effect may still be viable, and that valuable lessons can be drawn from “failure” and transformed into successful new practices, approaches, and techniques that further education for all.
Banner Photo: American Institutes for Research (AIR)