2020 Federal Standard of Excellence


Millennium Challenge Corporation

Score
9
Leadership

Did the agency have senior staff members with the authority, staff, and budget to build and use evidence to inform the agency’s major policy and program decisions in FY20?

1.1 Did the agency have a senior leader with the budget and staff to serve as the agency’s Evaluation Officer (or equivalent)? (Example: Evidence Act 313)
  • The Monitoring and Evaluation (M&E) Managing Director serves as the Millennium Challenge Corporation’s (MCC) Evaluation Officer, a designation made in accordance with the Foundations for Evidence-Based Policymaking Act. The Managing Director is a career civil service position with the authority to execute M&E’s budget, an estimated $17.4 million in due diligence funds in FY20, and a staff of 30 people.
1.2 Did the agency have a senior leader with the budget and staff to serve as the agency’s Chief Data Officer (or equivalent)? (Example: Evidence Act 202(e))
  • The Director of Product Management in the Office of the Chief Information Officer is MCC’s Chief Data Officer. The Chief Data Officer manages a staff of eight and an estimated FY20 budget of $1.5 million in administrative funds. In accordance with the Foundations for Evidence-Based Policymaking Act, MCC designated a Chief Data Officer.
1.3 Did the agency have a governance structure to coordinate the activities of its evaluation officer, chief data officer, statistical officer, and other related officials in order to inform policy decisions and evaluate the agency’s major programs?
  • The MCC Evaluation Management Committee (EMC) oversees decision-making, integration, and quality control of the agency’s evaluation and programmatic decision-making in accordance with the Foundations for Evidence-Based Policymaking Act. The EMC integrates evaluation with program design and implementation to ensure that evaluations are designed and carried out in a manner that increases their utility to MCC, in-country stakeholders, and external stakeholders. The EMC includes the agency’s Evaluation Officer, Chief Data Officer, representatives from M&E, the project lead, sector specialists, the economist, and gender and environmental safeguards staff. For each evaluation the EMC holds between 11 and 16 meetings or touchpoints, from the evaluation scope of work to final evaluation publication. The EMC plays a key role in coordinating MCC’s Evidence Act implementation.
Score
7
Evaluation & Research

Did the agency have an evaluation policy, evaluation plan, and learning agenda (evidence-building plan), and did it publicly release the findings of all completed program evaluations in FY20?

 2.1 Did the agency have an agency-wide evaluation policy? (Example: Evidence Act 313(d))
2.2 Did the agency have an agency-wide evaluation plan? (Example: Evidence Act 312(b))
  • Every MCC investment must adhere to MCC’s rigorous Policy for Monitoring and Evaluation (M&E), which requires a comprehensive M&E Plan for each investment. For each investment MCC makes in a country, the country’s M&E Plan must be published within 90 days of entry into force. The M&E Plan lays out the evaluation strategy and includes two main components. The monitoring component lays out the methodology and process for assessing progress toward the investment’s objectives. The evaluation component identifies and describes the evaluations that will be conducted, the key evaluation questions and methodologies, and the data collection strategies that will be employed. Each country’s M&E Plan represents the evaluation plan and learning agenda for that country’s set of investments.
2.3 Did the agency have a learning agenda (evidence-building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including, but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda? (Example: Evidence Act 312)
  • In FY20, to advance its evidence base and respond to the Evidence Act, MCC embarked on a learning agenda around women’s economic empowerment (WEE) with short- and long-term objectives. Women’s economic empowerment is a priority of MCC leadership and, as such, the agency is focused on expanding the evidence base to answer these key research questions:
    • How do MCC’s WEE activities contribute to MCC’s overarching goal of reducing poverty through economic growth?
    • How does MCC’s WEE work contribute to increased income and assets for households—beyond what those incomes would have been without the gendered/WEE design?
    • How does MCC’s WEE work increase income and assets for women and girls within those households?
    • How does MCC’s WEE work increase women’s empowerment, defined through measures relevant to the WEE intervention and project area?
  • These research questions were developed through extensive consultation within MCC and with external stakeholders.
2.4 Did the agency publicly release all completed program evaluations?
  • MCC publishes each independent evaluation of every project, underscoring the agency’s commitment to transparency, accountability, learning, and evidence-based decision-making. All independent evaluations and reports are publicly available on the MCC Evaluation Catalog. As of September 2020, MCC had contracted, planned, and/or published 208 independent evaluations. All MCC evaluations produce a final report to present final results, and some evaluations also produce an interim report to present interim results. To date, 110 Final Reports and 41 Interim Reports have been finalized and released to the public.
  • In FY20, MCC also continued producing Evaluation Briefs, a new MCC product that distills key findings and lessons learned from MCC’s independent evaluations. MCC will produce Evaluation Briefs for each evaluation moving forward, and is in the process of writing Evaluation Briefs for the backlog of all completed evaluations. As of October 2020, MCC has published 76 Evaluation Briefs.
  • Finally, in FY20, MCC began the process of re-imagining its Evaluation Catalog to seamlessly link evaluation, data, and access with better usability and findability features. The new MCC Evidence Platform will transform stakeholders’ ability to access and use MCC evaluation data and evidence.
2.5 What is the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts? (Example: Evidence Act 315, subchapter II (c)(3)(9))
  • MCC is currently working on a draft capacity assessment in accordance with the Evidence Act. Additionally, once a compact or threshold program is in implementation, Monitoring and Evaluation (M&E) resources are used to procure evaluation services from external independent evaluators, who directly measure high-level outcomes to assess the attributable impact of all of MCC’s programs. MCC sees its independent evaluation portfolio as an integral tool to remain accountable to stakeholders and the general public, demonstrate programmatic results, and promote internal and external learning. Through the evidence generated by monitoring and evaluation, the M&E Managing Director, Chief Economist, and Vice President for the Department of Policy and Evaluation are able to continuously update estimates of expected impacts with actual impacts to inform future programmatic and policy decisions. In FY20, MCC began or continued comprehensive, independent evaluations for every compact or threshold project at MCC, a requirement stipulated in Section 7.5.1 of MCC’s Policy for M&E. All evaluation designs, data, reports, and summaries are available on MCC’s Evaluation Catalog.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
  • MCC employs rigorous, independent evaluation methodologies to measure the impact of its programming, evaluate the efficacy of program implementation, and determine lessons learned to inform future investments. As of September 2020, 37% of MCC’s evaluation portfolio consists of impact evaluations, and 63% consists of performance evaluations. All MCC impact evaluations use random assignment to determine which groups or individuals will receive an MCC intervention, which allows for a counterfactual and thus for attribution to MCC’s project, and best enables MCC to measure its impact in a fair and transparent way. Each evaluation is conducted according to the program’s Monitoring and Evaluation (M&E) Plan, in accordance with MCC’s Policy for M&E.
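  • For illustration only, the following minimal sketch shows the random-assignment logic described above: eligible units are randomly split into a treatment group and a control group, and the impact estimate is the difference in mean outcomes between the two. All names and numbers below are invented for illustration and are not MCC data.

      import random
      import statistics

      random.seed(42)

      # Hypothetical eligible units (e.g., communities considered for an intervention).
      units = [f"community_{i}" for i in range(200)]

      # Randomly assign half of the units to receive the intervention; the
      # unassigned half serves as the counterfactual (control) group.
      random.shuffle(units)
      treatment = set(units[:100])

      # Simulated outcomes (e.g., a household income index), purely for illustration.
      outcomes = {u: random.gauss(110 if u in treatment else 100, 15) for u in units}

      # The estimated impact is the difference in mean outcomes between groups.
      treatment_mean = statistics.mean(outcomes[u] for u in treatment)
      control_mean = statistics.mean(outcomes[u] for u in units if u not in treatment)
      print(f"Estimated impact: {treatment_mean - control_mean:.1f}")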
Score
8
Resources

Did the agency invest at least 1% of program funds in evaluations in FY20? (Examples: Impact studies; implementation studies; rapid cycle evaluations; evaluation technical assistance, rigorous evaluations, including random assignments)

3.1 ____ (Name of agency) invested $____ on evaluations, evaluation technical assistance, and evaluation capacity-building, representing __% of the agency’s $___ billion FY20 budget.
  • MCC invested $16.1 million on evaluations, evaluation technical assistance, and evaluation capacity-building, representing 2.3% of the agency’s $694 million FY20 budget (minus staff/salary expenses).
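  • For reference, the stated share follows directly from the figures cited above: $16.1 million / $694 million ≈ 0.023, or roughly 2.3% of the agency’s FY20 budget (excluding staff/salary expenses).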
3.2 Did the agency have a budget for evaluation and how much was it? (Were there any changes in this budget from the previous fiscal year?)
  • MCC budgeted $16.1 million on monitoring and evaluation in FY20, a decrease of $10.2 million compared to FY19 ($26.3 million total). MCC notes that this is not a reduction in investment, but a reflection of the number of evaluations conducted in FY20, which was lower than in FY19.
3.3 Did the agency provide financial and other resources to help city, county, and state governments or other grantees build their evaluation capacity (including technical assistance funds for data and evidence capacity building)?
  • In support of MCC’s emphasis on country ownership, MCC also provides substantial, intensive, and ongoing capacity building to partner country Monitoring and Evaluation staff in every country in which it invests. As a part of this, MCC provides training and ongoing mentorship in the local language. This includes publishing select independent evaluations, Evaluation Briefs, and other documentation in the country’s local language. The dissemination of local language publications helps further MCC’s reach to its partner country’s government and members of civil society, enabling them to fully reference and utilize evidence and learning beyond the program. MCC also includes data strengthening and national statistical capacity as a part of its evidence-building investments. This agency-wide commitment to building and expanding an evidence-based approach with every partner country is a key component of MCC’s investments.
  • In FY20, MCC also realized a first-of-its-kind partnership in its Morocco investment. MCA-Morocco, the local implementing entity, signed MCC’s first Cooperation Agreement, a funded partnership within a country program, under the new Partnership Navigator Program Partnership Solicitation process. This first MCA-driven partnership agreement will bring Nobel Prize-winning economic analysis approaches from MIT and Harvard together with a Moroccan think tank to create an Employment Lab that conducts rigorous research into Moroccan labor market programs and policies. This research will be coupled with training and capacity building for key Moroccan policymakers to promote evidence-based decision-making.
Score
5
Performance Management / Continuous Improvement

Did the agency implement a performance management system with outcome-focused goals and aligned program objectives and measures, and did it frequently collect, analyze, and use data and evidence to improve outcomes, return on investment, and other dimensions of performance in FY20?
(Example: Performance stat systems, frequent outcomes-focused data-informed meetings)

4.1 Did the agency have a strategic plan with outcome goals, program objectives (if different), outcome measures, and program measures (if different)?
4.2 Does the agency use data/evidence to improve outcomes and return on investment?
  • MCC is committed to using high-quality data and evidence to drive its strategic planning and program decisions. The Monitoring and Evaluation plans for all programs and tables of key performance indicators for all projects are available online by compact and threshold program and by sector, for use by both partner countries and the general public. Prior to investment, MCC performs a Cost-Benefit Analysis to assess the potential impact of each project and estimates an Economic Rate of Return (ERR); a generic formulation of an ERR is sketched after this list. MCC uses a 10% ERR hurdle to more effectively prioritize and fund projects with the greatest opportunity for maximizing impact. MCC then recalculates ERRs at investment closeout, drawing on information from MCC’s monitoring data (among other data and evidence), to test original assumptions and assess the cost effectiveness of MCC programs. In FY20, MCC also pushed to undertake and publish evaluation-based ERRs. As part of the independent evaluation, the evaluators reanalyze the MCC-produced ERR five or more years after investment close to understand if and how benefits actually accrued. These evaluation-based ERRs add to the evidence base by capturing the long-term effects and sustainable impact of MCC’s programs.
  • In addition, MCC produces periodic reports that capture the results of MCC’s learning efforts in specific sectors and translate that learning into actionable evidence for future programming. In FY20, MCC produced two new Principles into Practice reports. MCC compiled evidence and learning on its technical and vocational education and training activities in the education sector in Training Service Delivery for Jobs and Productivity, and is finalizing a report on its learning in the water, sanitation, and hygiene sector, Lessons from Evaluations of MCC Water, Sanitation, and Hygiene Programs.
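  • As referenced above, a generic formulation of an economic rate of return is sketched here for illustration only (the notation is generic, not MCC’s internal model): the ERR is the discount rate r* at which the present value of a project’s estimated economic benefits equals the present value of its costs,

      \[
      \sum_{t=0}^{T} \frac{B_t - C_t}{(1 + r^{*})^{t}} = 0,
      \]

    where B_t and C_t are the estimated economic benefits and costs in year t over the analysis horizon T. Under the 10% hurdle described above, a project is prioritized when r* is at least 10%.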
4.3 Did the agency have a continuous improvement or learning cycle processes to identify promising practices, problem areas, possible causal factors, and opportunities for improvement? (Examples: stat meetings, data analytics, data visualization tools, or other tools that improve performance)
  • MCC continues to implement and expand a new reporting system that enhances MCC’s credibility around results, transparency, learning, and accountability. The Star Report and its associated quarterly business process capture key information to provide a framework for results and improve MCC’s ability to promote and disseminate learning and evidence throughout the compact and threshold program lifecycle. For each compact and threshold program, evidence is collected on performance indicators, evaluation results, partnerships, sustainability efforts, and learning, among other elements. Critically, this information is available in one report after each program ends. Each country will have a Star Report published roughly seven months after completion.
  • Continual learning and improvement are key aspects of MCC’s operating model. MCC monitors progress toward compact and threshold program results quarterly, using performance indicators that are specified in the Monitoring and Evaluation (M&E) Plan for each country’s investments. The M&E Plans specify indicators at all levels (process, output, and outcome) so that progress towards final results can be tracked. Every quarter each partner country submits an Indicator Tracking Table that shows actual performance of each indicator relative to the baseline that was established before the activity began and the performance targets that were established in the M&E Plan. Key performance indicators and their accompanying data by country are updated every quarter and published online. MCC management and the relevant country team review this data in a formal Quarterly Performance Review meeting to assess whether results are being achieved and integrate this information into project management and implementation decisions.
  • Also in FY20, MCC began producing and publishing a new product called MCC Sector Packages. For each sector in which MCC works, MCC will have a one-stop, interactive repository of sector-level common indicators, research questions, evaluation findings, and applied learnings. These documents will also show how past evidence is being used in developing new investments.
Score
7
Data

Did the agency collect, analyze, share, and use high-quality administrative and survey data – consistent with strong privacy protections – to improve (or help other entities improve) outcomes, cost-effectiveness, and/or the performance of federal, state, local, and other service providers programs in FY20? (Examples: Model data-sharing agreements or data-licensing agreements; data tagging and documentation; data standardization; open data policies; data-use policies)

5.1 Did the agency have a strategic data plan, including an open data policy? (Example: Evidence Act 202(c), Strategic Information Resources Plan)
  • In FY20, MCC began development of a strategic data plan. As detailed on MCC’s Digital Strategy and Open Government pages, MCC promotes transparency to provide people with access to information that facilitates their understanding of MCC’s model, MCC’s decision-making processes, and the results of MCC’s investments. Transparency, and therefore open data, is a core principle for MCC because it is the basis for accountability, provides strong checks against corruption, builds public confidence, and supports informed participation of citizens.
  • As a testament to MCC’s commitment to and implementation of transparency and open data, the agency was the highest-ranked U.S. government agency in the 2020 Publish What You Fund Aid Transparency Index, marking the sixth consecutive Index in which MCC has held that position. In addition, the U.S. government is part of the Open Government Partnership, a signatory to the International Aid Transparency Initiative, and must adhere to the Foreign Aid Transparency and Accountability Act. All of these initiatives require foreign assistance agencies to make it easier to access, use, and understand data. All of these actions have created further impetus for MCC’s work in this area, as they establish specific goals and timelines for adoption of transparent business processes.
  • Additionally, MCC convened an internal Data Governance Board, an independent group consisting of representatives from departments throughout the agency, to streamline MCC’s approach to data management and advance data-driven decision-making across its investment portfolio.
5.2 Did the agency have an updated comprehensive data inventory? (Example: Evidence Act 3511)
  • MCC makes extensive program data, including financials and results data, publicly available through its Open Data Catalog, which includes an “enterprise data inventory” of all data resources across the agency for release of data in open, machine readable formats. The Department of Policy and Evaluation leads the MCC Disclosure Review Board process for publicly releasing the de-identified microdata that underlies the independent evaluations on the Evaluation Catalog, following MCC’s Microdata Management Guidelines to ensure appropriate balance in transparency efforts with protection of human subjects’ confidentiality.
5.3 Did the agency promote data access or data linkage for evaluation, evidence-building, or program improvement? (Examples: Model data-sharing agreements or data-licensing agreements; data tagging and documentation; data standardization; downloadable machine-readable, de-identified tagged data; Evidence Act 3520(c))
  • In addition to the Evaluation Catalog, which links and provides access to all of MCC’s microdata from evaluation packages, MCC’s Data Analytics Program (DAP) enables enterprise data-driven decision-making through the capture, storage, analysis, publishing, and governance of MCC’s core programmatic data. The DAP streamlines the agency’s data lifecycle, facilitating increased efficiency. Additionally, the program promotes agency-wide coordination, learning, and transparency. For example, MCC has developed custom software applications to capture program data, established the infrastructure for consolidated storage and analysis, and connected robust data sources to end-user tools that power up-to-date, dynamic reporting and also streamline content maintenance on MCC’s public website. As a part of this effort, the Monitoring and Evaluation team has developed an Evaluation Pipeline application that provides up-to-date information on the status, risk, cost, and milestones of the full evaluation portfolio for better performance management.
5.4 Did the agency have policies and procedures to secure data and protect personal, confidential information? (Example: differential privacy; secure, multiparty computation; homomorphic encryption; or developing audit trails)
  •  MCC’s Disclosure Review Board ensures that data collected from surveys and other research activities is made public according to relevant laws and ethical standards that protect research participants, while recognizing the potential value of the data to the public. The board is responsible for: reviewing and approving procedures for the release of data products to the public; reviewing and approving data files for disclosure; ensuring de-identification procedures adhere to legal and ethical standards for the protection of research participants; and initiating and coordinating any necessary research related to disclosure risk potential in individual, household, and enterprise-level survey microdata on MCC’s beneficiaries.
  • The Microdata Management Guidelines inform MCC staff and contractors, as well as other partners, on how to store, manage, and disseminate evaluation-related microdata. This microdata is distinct from other data MCC disseminates because it typically includes personally identifiable information and sensitive data as required for the independent evaluations. With this in mind, MCC’s Guidelines govern how to manage three competing objectives: share data for verification and replication of the independent evaluations, share data to maximize usability and learning, and protect the privacy and confidentiality of evaluation participants. These Guidelines were established in 2013 and updated in January 2017. Following these Guidelines, MCC has publicly released 76 de-identified, public-use microdata files for its evaluations. MCC’s experience with developing and implementing this rigorous process for data management and dissemination while protecting human subjects throughout the evaluation life cycle is detailed in Opening Up Evaluation Microdata: Balancing Risks and Benefits of Research Transparency. MCC is committed to ensuring transparent, reproducible, and ethical data and documentation and seeks to further encourage data use through a new MCC Evidence Platform.
5.5 Did the agency provide assistance to city, county, and/or state governments, and/or other grantees on accessing the agency’s datasets while protecting privacy?
  • Both MCC and its partner in-country teams produce and provide data that is continuously updated and accessed. MCC’s website is routinely updated with the most recent information, and in-country teams are required to do the same on their respective websites. As such, all MCC program data is publicly available on MCC’s website and individual MCA websites for use by MCC country partners, in addition to other stakeholder groups. As a part of each country’s program, MCC provides resources to ensure data and evidence are continually collected, captured, and accessed. In addition, each project’s evaluation has an Evaluation Brief that distills key learning from MCC-commissioned independent evaluations. Select Evaluation Briefs have been posted in local languages, including Mongolian, Georgian, French, and Romanian, to better facilitate use by country partners.
  • MCC also has a partnership with the President’s Emergency Plan for AIDS Relief (PEPFAR), referred to as the Data Collaboratives for Local Impact (DCLI). This partnership is improving the use of data analysis for decision-making within PEPFAR and MCC partner countries by working toward evidence-based programs to address challenges in HIV/AIDS and health, empowerment of women and youth, and sustainable economic growth. Data-driven priority setting and insights gathered from citizen-generated data and community mapping initiatives contribute to improved allocation of resources in target communities to address local priorities, such as job creation, access to services, and reduced gender-based violence. DCLI continues to inform and improve the capabilities of PEPFAR activities through projects such as the Tanzania Data Lab, which has trained nearly 700 individuals, nearly 50% of whom are women, and has hosted a one-of-a-kind “Data Festival.” Recently, the Lab announced a partnership with the University of Virginia Data Science Institute and catalyzed the launch of the first Master’s in Data Science program in East Africa, in partnership with the University of Dar es Salaam.
Score
6
Common Evidence Standards / What Works Designations

Did the agency use a common evidence framework, guidelines, or standards to inform its research and funding purposes; did that framework prioritize rigorous research and evaluation methods; and did the agency disseminate and promote the use of evidence-based interventions through a user-friendly tool in FY20? (Example: What Works Clearinghouses)

6.1 Did the agency have a common evidence framework for research and evaluation purposes?
  • For each investment, MCC’s Economic Analysis (EA) division undertakes a Constraints Analysis to determine the binding constraints to economic growth in a country. To determine the individual projects in which MCC will invest in a given sector, MCC’s EA division combines root cause analysis with a cost-benefit analysis. The results of these analyses allow MCC to determine which investments will yield the greatest development impact and return on MCC’s investment. Every investment also has its own set of indicators as well as standard, agency-wide sector indicators for monitoring during the lifecycle of the investment and an evaluation plan for determining the results and impact of a given investment. MCC’s Policy for Monitoring and Evaluation details MCC’s evidence-based research and evaluation framework. Per the Policy, each completed evaluation requires a summary of findings, now called the Evaluation Brief, to summarize the key components, results, and lessons learned from the evaluation. Evidence from previous MCC programming is considered during the development of new programs. Per the Policy, “monitoring and evaluation evidence and processes should be of the highest practical quality. They should be as rigorous as practical and affordable. Evidence and practices should be impartial. The expertise and independence of evaluators and monitoring managers should result in credible evidence. Evaluation methods should be selected that best match the evaluation questions to be answered. Indicators should be limited in number to include the most crucial indicators. Both successes and failures must be reported.”
6.2 Did the agency have a common evidence framework for funding decisions?
  • MCC uses a rigorous evidence framework to make every decision along the investment chain, from country partner eligibility to sector selection to project choices. MCC uses evidence-based selection criteria, generated by independent, objective third parties, to select countries for grant awards. To be eligible for selection, World Bank-designated low- and lower-middle-income countries must first pass the MCC Scorecard – a collection of 20 independent, third-party indicators that objectively measure a country’s policy performance in the areas of economic freedom, investing in people, and ruling justly. An in-depth description of the country selection procedure can be found in the annual report.
6.3 Did the agency have a user-friendly tool that disseminated information on rigorously evaluated, evidence-based solutions (programs, interventions, practices, etc.) including information on what works where, for whom, and under what conditions?
  • All evaluation designs, data, reports, and summaries are made publicly available on MCC’s Evaluation Catalog, which includes evaluation information for every MCC program. Evaluation packages have a depth of information for each program including evaluation designs and questions, baseline data, surveys, questionnaires, microdata, interim reports, and final reports. To further the dissemination and use of MCC’s evaluations’ evidence and learning, the Agency publishes Evaluation Briefs, a new product to capture and disseminate the results and findings of its independent evaluation portfolio. An Evaluation Brief will be produced for each evaluation and offers a succinct, user-friendly, systematic format to better capture and share the relevant evidence and learning from MCC’s independent evaluations. These accessible products will take the place of MCC’s Summaries of Findings. Evaluation Briefs will be published on the Evaluation Catalog and will complement the many other products published for each evaluation. In FY20, MCC also began the process of re-designing the Evaluation Catalog into a new MCC Evidence Platform, in part to make MCC’s evaluation evidence and data easier to find and use.
6.4 Did the agency promote the utilization of evidence-based practices in the field to encourage implementation, replication, and application of evaluation findings and other evidence?
  • Using internal research and analysis to understand where and how its published evaluations, datasets, and knowledge products are utilized, MCC is embarking on a re-designed Evaluation Catalog, prioritizing evidence-building in key sectors, and continuing to refine and publish new evidence dissemination products. Under this comprehensive approach, Evaluation Briefs act as a cornerstone to promoting utilization across audience groups. Enhanced utilization of MCC’s vast evidence base and learning was a key impetus behind the creation and expansion of the Evaluation Briefs and Star Reports, two new MCC products. A push to ensure sector-level evidence use has led to renewed emphasis of the Principles into Practice series, with recent reports on the transport, education, and water & sanitation (forthcoming) sectors.
  • MCC has also enhanced its in-country evaluation dissemination events to ensure further results and evidence building with additional products in local languages and targeted stakeholder learning dissemination strategies.
Score
7
Innovation

Did the agency have staff, policies, and processes in place that encouraged innovation to improve the impact of its programs in FY20? (Examples: Prizes and challenges; behavioral science trials; innovation labs/accelerators; performance partnership pilots; demonstration projects or waivers with rigorous evaluation requirements) 

7.1 Did the agency engage leadership and staff in its innovation efforts to improve the impact of its programs?
  • MCC supports the creation of multidisciplinary country teams to manage the development and implementation of each compact and threshold program. Teams meet frequently to gather evidence, discuss progress, make project design decisions, and solve problems. Prior to moving forward with a program investment, teams are encouraged to use the lessons from completed evaluations to inform their work going forward.
  • In FY20, MCC launched its second-ever internal Millennium Efficiency Challenge (MEC) designed to tap into the extensive knowledge of MCC’s staff to identify efficiencies and innovative solutions that can shorten the compact and threshold program development timeline while maintaining MCC’s rigorous quality standards and investment criteria.
  • In September 2014, MCC’s Monitoring and Evaluation division launched the agency’s first Open Data Challenge, which continued into FY20. The Open Data Challenge initiative is intended to facilitate broader use of MCC’s U.S.-taxpayer funded data, encourage innovative ideas, and maximize the use of data that MCC finances for its independent evaluations.
 7.2 Did the agency have policies, processes, structures, or programs to promote innovation to improve the impact of its programs?
  • MCC’s approach to development assistance hinges on its innovative and extensive use of evidence to inform investment decisions, guide program implementation strategies, and assess and learn from its investment experiences. As such, MCC’s Office of Strategic Partnerships offers an Annual Program Statement (APS) opportunity that allows MCC divisions and country teams to tap the most innovative solutions to new development issues. In FY20, the Monitoring and Evaluation division, using MCC’s APS and traditional evaluation firms, has been piloting partnerships with academics and in-country think tanks to leverage innovative, lower-cost data technologies across sectors and regions. These include:
    • using satellite imagery in Sri Lanka to measure visible changes in investment on land to get early indications of whether improved land rights are spurring investment;
    • leveraging big data and cell phone applications in Colombo, Sri Lanka to monitor changes in traffic congestion and the use of public transport;
    • independently measuring power outages and voltage fluctuations using cell phones in Ghana, where utility outage data is unreliable and where outage reduction is a critical outcome targeted by the Compact;
    • using pressure loggers on piped water at the network and household levels to get independent readings on access to water in Dar es Salaam, Tanzania; and
    • using remote sensing to measure water supply in water kiosks in Freetown, Sierra Leone.
  • MCC regularly engages in implementing test projects as part of its overall compact programs. A few examples include: (1) in Morocco, an innovative pay-for-results mechanism to replicate or expand proven programs that provide integrated support; (2) a “call-for-ideas” in Benin for information regarding potential projects that would expand access to renewable off-grid electrical power; (3) a regulatory strengthening project in Sierra Leone that includes funding for a results-based financing system; and (4) an Innovation Grant Program in Zambia to encourage local innovation in pro-poor service delivery in the water sector.
7.3 Did the agency evaluate its innovation efforts, including using rigorous methods?
  • Although MCC rigorously evaluates all program efforts, MCC takes special care to ensure that innovative or untested programs are thoroughly evaluated. In addition to producing final program evaluations, MCC is continuously monitoring and evaluating all programs throughout the program lifecycle, including innovation efforts, to determine if mid-program course-correction actions are necessary. This interim data helps MCC continuously improve its innovation efforts so that they can be most effective and impactful. Although 37% of MCC’s evaluations use random-assignment methods, all of MCC’s evaluations – both impact and performance – use rigorous methods to achieve the three-part objectives of accountability, learning, and results in the most cost-effective way possible. Of particular interest in the innovation space in FY20, MCC conducted its first impact evaluation of an institutional reform program with the publication of the evaluation of the Indonesia Procurement Modernization Project. This project was specifically designed as a pilot with the evaluation results being used to determine further scale. MCC also published an evaluation of another pilot effort in Namibia that sought to improve community-based rangeland and livestock management. MCC took a comprehensive approach to measuring the various aspects of the program logic, including direct measurement of livestock (weighing and aging cows), direct measurement of rangeland health (measuring grass height), and direct observation to verify self-reported behaviors. The resulting learning is extremely nuanced, which has proved especially useful to MCC and Namibian stakeholders since the intervention was advertised as a pilot.
Score
15
Use of Evidence Competitive Grant Programs

Did the agency use evidence of effectiveness when allocating funds from its competitive grant programs in FY20? (Examples: Tiered-evidence frameworks; evidence-based funding set-asides; priority preference points or other preference scoring for evidence; Pay for Success provisions)

8.1 What were the agency’s five largest competitive programs and their appropriations amount (and were city, county, and/or state governments eligible to receive funds from these programs)?
  • MCC awards all of its agency funds through two competitive grants: (1) the compact program ($634.5 million in FY20; eligible grantees: developing countries) and (2) the threshold program ($30.0 million in FY20; eligible grantees: developing countries).
8.2 Did the agency use evidence of effectiveness to allocate funds in its five largest competitive grant programs? (e.g., Were evidence-based interventions/practices required or suggested? Was evidence a significant requirement?)
  • For country partner selection, as part of the compact and threshold competitive programs, MCC uses 20 different indicators within the categories of economic freedom, investing in people, and ruling justly to determine country eligibility for program assistance. These objective indicators of a country’s performance are collected by independent third parties.
  • When considering granting a second compact, MCC further considers whether countries have (1) exhibited successful performance on their previous compact; (2) improved Scorecard performance during the partnership; and (3) exhibited a continued commitment to further their sector reform efforts in any subsequent partnership. As a result, the MCC Board of Directors has an even higher standard when selecting countries for subsequent compacts. Per MCC’s policy for Compact Development Guidance (p. 6): “As the results of impact evaluations and other assessments of the previous compact program become available, the partner country must use this data to inform project proposal assessment, project design, and implementation approaches.”
8.3 Did the agency use its five largest competitive grant programs to build evidence? (e.g., requiring grantees to participate in evaluations) 
  • Per its Policy for Monitoring and Evaluation (M&E), MCC requires independent evaluations of every project to assess progress in achieving outputs and outcomes and program learning based on defined evaluation questions throughout the lifetime of the project and beyond. As described above, MCC publicly releases all these evaluations on its website and uses findings, in collaboration with stakeholders and partner countries, to build evidence in the field so that policymakers in the United States and in partner countries can leverage MCC’s experiences to develop future programming. In line with MCC’s Policy for M&E, MCC projects are required to submit quarterly Indicator Tracking Tables showing progress toward projected targets.
8.4 Did the agency use evidence of effectiveness to allocate funds in any other competitive grant programs in FY20 (besides its five largest grant programs)?
  • MCC uses evidence of effectiveness to allocate funds in all its competitive grant programs as noted above.
8.5 What are the agency’s 1-2 strongest examples of how competitive grant recipients achieved better outcomes and/or built knowledge of what works or what does not?  
  • A rigorous impact evaluation of MCC’s compact in Burkina Faso found that the compact improved educational infrastructure by renovating 396 classrooms in 132 primary schools and funding ancillary educational needs for students (e.g., latrines, school supplies, and food) and adults (e.g., teachers’ housing and gender-sensitivity training). In intervention schools, overall student enrollment rates increased by 6% (with girls’ enrollment increasing by 10.3%), test scores and primary school graduation rates rose, and early marriage rates fell. In completing this program, MCC learned that addressing the factors that specifically threaten female education helps girls access and remain in school. Additionally, addressing schools’ weak educational quality (e.g., curriculum, faculty, management), coupled with improving the quality of students’ access to and facilities for education, should further improve students’ learning. This learning has since been applied in current education investments.
 8.6 Did the agency provide guidance which makes clear that city, county, and state government, and/or other grantees can or should use the funds they receive from these programs to conduct program evaluations and/or to strengthen their evaluation capacity-building efforts?
  • As described above, MCC develops a Monitoring & Evaluation (M&E) Plan for every grantee, which describes the independent evaluations that will be conducted, the key evaluation questions and methodologies, and the data collection strategies that will be employed. As such, grantees use program funds for evaluation.
  • MCC’s Policy for Monitoring and Evaluation stipulates that the “primary responsibility for developing the M&E Plan lies with the MCA [grantee] M&E Director with support and input from MCC’s M&E Lead and Economist. MCC and MCA Project/Activity Leads are expected to guide the selection of the indicators at the process and output levels that are particularly useful for management and oversight of activities and projects.” The M&E policy is intended primarily to guide MCC and partner country staff decisions to utilize M&E effectively throughout the entire program life cycle in order to improve outcomes. All MCC investments also include M&E capacity-building for grantees.
Score
10
Use of Evidence in Non-Competitive Grant Programs

Did the agency use evidence of effectiveness when allocating funds from its non-competitive grant programs in FY20? (Examples: Evidence-based funding set-asides; requirements to invest funds in evidence-based activities; Pay for Success provisions)

MCC does not administer non-competitive grant programs (relative score for criteria #8 applied).

Score
8
Repurpose for Results

In FY20, did the agency shift funds away from or within any practice, policy, or program that consistently failed to achieve desired outcomes? (Examples: Requiring low-performing grantees to re-compete for funding; removing ineffective interventions from allowable use of grant funds; incentivizing or urging grant applicants to stop using ineffective practices in funding announcements; proposing the elimination of ineffective programs through annual budget requests; incentivizing well-designed trials to fill specific knowledge gaps; supporting low-performing grantees through mentoring, improvement plans, and other forms of assistance; using rigorous evaluation results to shift funds away from a program)

10.1 Did the agency have policy(ies) for determining when to shift funds away from grantees, practices, policies, interventions, and/or programs that consistently failed to achieve desired outcomes, and did the agency act on that policy?
  • MCC has established a Policy on Suspension and Termination that lays out the reasons for which MCC may suspend or terminate assistance to partner countries, including if a country “engages in a pattern of actions inconsistent with the MCC’s eligibility criteria,” by failing to achieve desired outcomes such as:
    • A decline in performance on the indicators used to determine eligibility;
    • A decline in performance not yet reflected in the indicators used to determine eligibility; or
    • Actions by the country which are determined to be contrary to sound performance in the areas assessed for eligibility for assistance, and which together evidence an overall decline in the country’s commitment to the eligibility criteria.
  • Of 61 compact selections by MCC’s Board of Directors, including regional compacts, 14 have had their partnerships or a portion of their funding ended due to concerns about country commitment to MCC’s eligibility criteria or a failure to adhere to their responsibilities under the compact. MCC’s Policy on Suspension and Termination also allows MCC to reinstate eligibility when countries demonstrate a clear policy reversal, a remediation of MCC’s concerns, and an obvious commitment to MCC’s eligibility indicators, including achieving desired results.
  • In a number of cases, MCC has repurposed investments based on real-time evidence. In MCC’s first compact with Lesotho, MCC cancelled the Automated Clearing House Sub-Activity within the Private Sector Development Project after monitoring data indicated that it would not accomplish the economic growth and poverty reduction outcomes envisioned during compact development. The remaining $600,000 in the sub-activity was transferred to the Debit Smart Card Sub-Activity, which targeted expanding financial services to people living in remote areas of Lesotho. In Tanzania, the $32 million Non-Revenue Water Activity was re-scoped after the final design estimates on two of the activity’s infrastructure investments indicated higher costs that would significantly impact their economic rates of return. As a result, $13.2 million was reallocated to the Lower Ruvu Plant Expansion Activity, $9.6 million to the Morogoro Water Supply Activity, and $400,000 to other environmental and social activities. In all of these country examples, the funding was either reallocated to activities with continued evidence of results or returned to MCC for investment in future programming.
10.2 Did the agency identify and provide support to agency programs or grantees that failed to achieve desired outcomes?
  • For every investment in implementation, MCC undertakes a Quarterly Performance Review with senior leadership to review, among many issues, quarterly results indicator tracking tables. If programs are not meeting evidence-based targets, MCC undertakes mitigation efforts to work with the partner country and program implementers to achieve desired results. These efforts are program- and context-specific but can take the form of increased technical assistance, reallocated funds, and/or new methods of implementation. For example, in FY20 MCC reallocated funds in its compact with Ghana after the country failed to achieve agreed-upon policy reforms to ensure the sustainability of the investments. Upon program completion, if a program does not meet expected results targets, MCC works to understand and memorialize why and how this occurred, beginning with program design, the theory of change, and program implementation. The results and learning from this inquiry are published through the country’s Star Report.
  • MCC also consistently monitors the progress of compact programs and their evaluations across sectors, using the learning from this evidence to make changes to MCC’s operations. For example, as part of MCC’s Principles into Practice initiative, in November 2017 MCC undertook a review of its portfolio investments in roads in an attempt to better design, implement, and evaluate road investments. Through evidence collected across 16 countries with road projects, MCC uncovered seven key lessons, including the need to prioritize and select projects based on a road network analysis, to standardize the content and quality of road data collection across road projects, and to consider cost and the potential for learning in determining how road projects are evaluated. Since FY19, the lessons from this analysis have been applied to road projects in compacts in Côte d’Ivoire and Nepal as MCC roads investments shift toward increased maintenance investments. Critically, the evidence also pointed to MCC shifting how it undertakes road evaluations, which led to a new request for proposals and re-bid of MCC’s roads evaluations based on new guidelines and principles.