2020 Federal Standard of Excellence


Research and Evaluation

Did the agency have an evaluation policy, evaluation plan, and learning agenda (evidence-building plan), and did it publicly release the findings of all completed program evaluations in FY20?

Score
8
Administration for Children and Families (HHS)
2.1 Did the agency have an agency-wide evaluation policy? (Example: Evidence Act 313(d))
  • ACF’s evaluation policy confirms ACF’s commitment to conducting evaluations and using evidence from evaluations to inform policy and practice. ACF seeks to promote rigor, relevance, transparency, independence, and ethics in the conduct of evaluations. ACF established the policy in 2012 and published it in the Federal Register on August 29, 2014. In late 2019, ACF released a short video about the policy’s five principles and how ACF uses them to guide its work.
  • As ACF’s primary representative to the HHS Evidence and Evaluation Council, the ACF Deputy Assistant Secretary for Planning, Research, and Evaluation co-chairs the HHS Evaluation Policy Subcommittee—the body responsible for developing an HHS-wide evaluation policy.
2.2 Did the agency have an agency-wide evaluation plan? (Example: Evidence Act 312(b))
  • In accordance with OMB guidance, ACF is contributing to an HHS-wide evaluation plan. ACF’s Office of Planning, Research, and Evaluation (OPRE) also annually identifies questions relevant to ACF’s programs and policies and proposes a research and evaluation spending plan to the Assistant Secretary for Children and Families. This plan focuses on activities that OPRE plans to conduct during the following fiscal year.
2.3 Did the agency have a learning agenda (evidence-building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including, but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda? (Example: Evidence Act 312)
  • In accordance with OMB guidance, HHS is developing an HHS-wide evidence-building plan. To develop this document, HHS asked each sub-agency to submit examples of its priority research questions, potential data sources, anticipated approaches, challenges and mitigation strategies, and stakeholder engagement strategies. ACF drew from its existing program-specific learning agendas and research plans and has contributed example priority research questions and anticipated learning activities for inclusion in the HHS evidence-building plan. ACF also intends to publicly release a broad learning plan.
  • In addition to fulfilling requirements of the Evidence Act, ACF has supported and continues to support systematic learning and stakeholder engagement activities across the agency. For example:
    • Many ACF program offices have developed or are currently developing detailed program-specific learning agendas to systematically learn about and improve their programs—studying existing knowledge, identifying gaps, and setting program priorities. For example, ACF and HRSA have developed a learning agenda for the MIECHV program, and ACF is supporting ongoing efforts to build a learning agenda for ACF’s Healthy Marriage and Responsible Fatherhood (HMRF) programming.
    • ACF will continue to release annual portfolios that describe key findings from past research and evaluation work and how ongoing projects are addressing gaps in the knowledge base to answer critical questions in the areas of family self-sufficiency, child and family development, and family strengthening. In addition to describing key questions, methods, and data sources for each research and evaluation project, the portfolios provide narratives describing how evaluation and evidence-building activities unfold in specific ACF programs and topical areas over time, and how current research and evaluation initiatives build on past efforts and respond to remaining gaps in knowledge.
    • ACF works closely with many stakeholders to inform priorities for its research and evaluation efforts and solicits their input through conferences and meetings such as the Research and Evaluation Conference on Self-Sufficiency, the National Research Conference on Early Childhood, and the Child Care and Early Education Policy Research Consortium Annual Meetings; meetings with ACF grantees and program administrators; engagement with training and technical assistance networks; surveys, focus groups, interviews, and other activities conducted as a part of research and evaluation studies; and through both project-specific and topical technical working groups, including the agency’s Family Self-Sufficiency Research Technical Working Group. ACF’s ongoing efforts to engage its stakeholders will be described in more detail in ACF’s forthcoming description of its learning activities.
2.4 Did the agency publicly release all completed program evaluations?
  • ACF’s evaluation policy requires that “ACF will release evaluation results regardless of findings…Evaluation reports will present comprehensive findings, including favorable, unfavorable, and null findings. ACF will release evaluation results timely – usually within two months of a report’s completion.” ACF has publicly released the findings of all completed evaluations to date. In 2019, OPRE released over 110 research publications. OPRE publications are publicly available on the OPRE website.
2.5 What is the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts? (Example: Evidence Act 315, subchapter II (c)(3)(9))
  • In accordance with OMB guidance, ACF is contributing to an HHS-wide capacity assessment to be released by September 2020. ACF also continues to support the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts as follows:
  • Coverage: ACF conducts research in areas where Congress has given authorization and appropriations. Programs for which ACF is able to conduct research and evaluation using dedicated funding include Temporary Assistance for Needy Families, Health Profession Opportunity Grants, Head Start, Child Care, Child Welfare, Home Visiting, Healthy Marriage and Responsible Fatherhood, Personal Responsibility Education Program, Sexual Risk Avoidance Education, Teen Pregnancy Prevention, Runaway and Homeless Youth, Family Violence Prevention Services, and Human Trafficking services. These programs represent approximately 85% of overall ACF spending.
  • Quality: ACF’s Evaluation Policy states that ACF is committed to using the most rigorous methods that are appropriate to the evaluation questions and feasible within budget and other constraints, and that rigor is necessary not only for impact evaluations, but also for implementation/process evaluations, descriptive studies, outcome evaluations, and formative evaluations; and in both qualitative and quantitative approaches.
  • Methods: ACF uses a range of evaluation methods. ACF conducts impact evaluations as well as implementation and process evaluations, cost analyses and cost benefit analyses, descriptive and exploratory studies, research syntheses, and more. ACF is committed to learning about and using the most scientifically advanced approaches to determining effectiveness and efficiency of ACF programs; to this end, OPRE annually organizes meetings of scientists and research experts to discuss critical topics in social science research methodology and how innovative methodologies can be applied to policy-relevant questions.
  • Effectiveness: ACF’s Evaluation Policy states that ACF will conduct relevant research and disseminate findings in ways that are accessible and useful to policymakers and practitioners. OPRE engages in ongoing collaboration with ACF program office staff and leadership to interpret research and evaluation findings and to identify their implications for programmatic and policy decisions such as ACF regulations and funding opportunity announcements. For example, when ACF’s Office of Head Start significantly revised its Program Performance Standards—the regulations that define the standards and minimum requirements for Head Start services—the revisions drew from decades of OPRE research and the recommendations of the OPRE-led Secretary’s Advisory Committee on Head Start Research and Evaluation. Similarly, ACF’s Office of Child Care drew from research and evaluation findings related to eligibility redetermination, continuity of subsidy use, use of funds dedicated to improving the quality of programs, and other information to inform the regulations accompanying the reauthorization of the Child Care and Development Block Grant.
  • Independence: ACF’s Evaluation Policy states that independence and objectivity are core principles of evaluation and that it is important to insulate evaluation functions from undue influence and from both the appearance and the reality of bias. To promote objectivity, ACF protects independence in the design, conduct, and analysis of evaluations. To this end, ACF conducts evaluations through the competitive award of grants and contracts to external experts who are free from conflicts of interest. In addition, the Deputy Assistant Secretary for Planning, Research, and Evaluation, a career civil servant, has authority to approve the design of evaluation projects and analysis plans and to approve, release, and disseminate evaluation reports.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
Score
10
Administration for Community Living (HHS)
2.1 Did the agency have an agency-wide evaluation policy? (Example: Evidence Act 313(d))
  • ACL’s public evaluation policy confirms ACL’s commitment to conducting evaluations and using evidence from evaluations to inform policy and practice. ACL seeks to promote rigor, relevance, transparency, independence, and ethics in the conduct of evaluations. The policy addresses each of these principles.
2.2 Did the agency have an agency-wide evaluation plan? (Example: Evidence Act 312(b))
  • ACL’s agency-wide evaluation plan was submitted to the Department of Health and Human Services (HHS) in support of HHS’ requirement to submit an annual evaluation plan to OMB in conjunction with its Agency Performance Plan. ACL’s annual evaluation plan includes the evaluation activities the agency plans related to the learning agenda and any other “significant” evaluations, such as those required by statute. The plan describes the systematic collection and analysis of information about the characteristics and outcomes of programs, projects, and processes as a basis for judgments, to improve effectiveness, and/or to inform decision-makers about current and future activities.
2.3 Did the agency have a learning agenda (evidence-building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including, but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda? (Example: Evidence Act 312)
  • Based on the learning agenda approach that ACL adopted in 2018, ACL published a learning agenda in FY20. In developing the plan, ACL engaged stakeholders through meetings with program staff and grantees as required under OMB M-19-23. Additional meetings with stakeholder groups, such as conference sessions, were put on hold for 2020 due to COVID-19 travel restrictions.
2.4 Did the agency publicly release all completed program evaluations?
  • ACL releases all evaluation reports as well as interim information such as issue briefs, webinar recordings, and factsheets based on data from its evaluation and evidence building activities.
2.5 What is the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts? (Example: Evidence Act 315, subchapter II (c)(3)(9))
  • Staff from the Office of Performance and Evaluation (OPE) played an active role in HHS’s capacity assessment efforts, serving on the Capacity Assessment Subcommittee of the HHS Evidence and Evaluation Council. ACL’s self-assessment results were provided to HHS to support HHS’ ability to submit the required information to OMB. ACL’s self-assessment results, which provided information about planning and implementing evaluation activities, disseminating best practices and findings, incorporating employee views and feedback, and carrying out capacity-building activities to use evaluation, research, and analysis approaches and data in day-to-day operations, will be reviewed by the ACL Data Council in order to develop a capacity-building plan.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
  • ACL typically funds evaluation design contracts, such as those for the Older Americans Act Title VI Tribal Grants Program evaluation and the Long Term Care Ombudsman Evaluation, that are used to determine the most rigorous evaluation approach that is feasible given the structure of a particular program. While the Ombudsman program is a full-coverage program for which comparison groups are not possible, ACL most frequently uses propensity score matching to identify comparison group members (a generic sketch of this approach appears after this list). This was the case for the Older Americans Act Nutrition Services Program and National Family Caregivers Support Program evaluations and for the Wellness Prospective Evaluation Final Report conducted by CMS in partnership with ACL and published in January 2019.
  • ACL’s National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) funds the largest percentage of ACL’s RCTs. Of the 718 research projects being conducted by grantees, 23% (163/718) are employing a randomized clinical trial (RCT) or “true experimental” design. To ensure research quality, NIDILRR adheres to strict peer reviewer evaluation criteria that are used in the grant award process (see part (c) for details on rigor of research projects and part (d) for details on the design of research projects). In addition, ACL’s evaluation policy states that “In assessing the effects of programs or services, ACL evaluations will use methods that isolate to the greatest extent possible the impacts of the programs or services from other influences such as trends over time, geographic variation, or pre-existing differences between participants and non-participants. For such causal questions, experimental approaches are preferred. When experimental approaches are not feasible, high-quality quasi-experiments offer an alternative.” ACL is in the process of implementing a method for rating each proposed evaluation against OMB’s Program Evaluation Standards and Practices as defined in OMB M-20-12.
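To make the comparison-group construction concrete, the sketch below illustrates a generic propensity score matching workflow of the kind named above: estimate each individual’s probability of program participation from observed covariates, then pair each participant with the most similar non-participant. The data, covariates, and matching rule are illustrative assumptions only, not ACL’s actual evaluation design, data, or code.

```python
# Illustrative sketch only: generic propensity score matching on synthetic data.
# NOT ACL's evaluation code; covariates and sample sizes are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic covariates (e.g., age and a prior-outcome measure) for program
# participants (treated) and a candidate comparison pool (untreated).
n_treated, n_pool = 200, 1000
X_treated = rng.normal(loc=[72, 50], scale=[6, 10], size=(n_treated, 2))
X_pool = rng.normal(loc=[70, 45], scale=[8, 12], size=(n_pool, 2))

X = np.vstack([X_treated, X_pool])
treated = np.concatenate([np.ones(n_treated), np.zeros(n_pool)])

# Step 1: estimate propensity scores (probability of participation given
# observed covariates) with logistic regression.
model = LogisticRegression(max_iter=1000).fit(X, treated)
scores = model.predict_proba(X)[:, 1]
ps_treated, ps_pool = scores[:n_treated], scores[n_treated:]

# Step 2: 1-to-1 nearest-neighbor matching on the propensity score, without
# replacement, to build the comparison group.
available = np.ones(n_pool, dtype=bool)
matches = []
for i, ps in enumerate(ps_treated):
    gaps = np.abs(ps_pool - ps)
    gaps[~available] = np.inf          # skip pool members already matched
    j = int(np.argmin(gaps))
    matches.append((i, j))
    available[j] = False

print(f"Matched {len(matches)} comparison members to {n_treated} participants")
```

In practice, evaluators would also check covariate balance after matching and might use calipers or matching with replacement; the sketch omits those refinements for brevity.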
Score
9
U.S. Agency for International Development
2.1 Did the agency have an agency-wide evaluation policy? (Example: Evidence Act 313(d))
2.2 Did the agency have an agency-wide evaluation plan? (Example: Evidence Act 312(b))
  • USAID has an agency-wide evaluation registry that collects information on all evaluations planned to commence within the next three years (as well as tracking ongoing and completed evaluations). Currently, this information is used internally and is not published. To meet the Evidence Act requirement, USAID will include an agency-wide evaluation plan in the Agency’s draft Annual Performance Plan/Annual Performance Report submitted to OMB in September 2020.
  • In addition, USAID’s Office of Learning, Evaluation, and Research works with bureaus to develop internal annual Bureau Monitoring, Evaluation and Learning Plans that review evaluation quality and evidence building and use within each bureau and identify challenges and priorities for the year ahead.
2.3 Did the agency have a learning agenda (evidence-building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including, but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda? (Example: Evidence Act 312)
  • USAID has an agency-wide learning agenda called the Self-Reliance Learning Agenda (SRLA). The SRLA prioritizes evidence needs related to the Agency’s mission to foster country self-reliance, which covers all development program/sector areas, humanitarian assistance and resilience, and agency operations. This vision and mission are articulated in USAID’s Policy Framework, which reorients the Agency’s programs, operations, and workforce around the vision of self-reliance, or ending the need for foreign assistance.
  • USAID used a strongly consultative process to develop the SRLA, as described in the SRLA Fact Sheet. First, the Agency compiled learning questions from a number of feedback processes, initially capturing 260 questions, which were then reduced through consultations to a final thirteen that represent the Agency’s priority learning needs related to self-reliance.
  • USAID is currently implementing the learning agenda and partnering with internal and external stakeholders to generate and gather evidence and facilitate the utilization of learning. These stakeholders include USAID’s implementing partners, other U.S. agencies, private coalitions and think tanks, researchers and academics, bilateral/multilateral organizations, and local actors and governments in the countries in which it works. Examples of learning products generated to date include a Paper Series on Capacity and Capacity Strengthening and an SRLA Review of Selected Evidence.
2.4 Did the agency publicly release all completed program evaluations?
  • All final USAID evaluation reports are published on the Development Experience Clearinghouse (DEC), except for a small number of evaluations that receive a waiver to public disclosure (typically less than 5% of the total completed in a fiscal year). The process to seek a waiver to public disclosure is outlined in the document Limitations to Disclosure and Exemptions to Public Dissemination of USAID Evaluation Reports and includes exceptions for circumstances such as those when “public disclosure is likely to jeopardize the personal safety of U.S. personnel or recipients of U.S. resources.”
  • To increase awareness of available evaluation reports, USAID has created infographics showing the number and type of evaluations completed in FY2015, FY2016, and FY2017. These include short narratives that describe findings from selected evaluations and how that information informed decision-making. USAID is creating a public dashboard to share evaluation data from FY2016 through the most recent year of reporting. The information for FY2019 is being finalized.
2.5 What is the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts? (Example: Evidence Act 315, subchapter II (c)(3)(9))
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
  • USAID uses rigorous evaluation methods, including randomized controlled trials (i.e., random assignment studies) and quasi-experimental methods, for research and evaluation purposes. For example, in FY2019, USAID completed 12 impact evaluations, four of which used randomized controlled trials.
  • The Development Innovation Ventures (DIV) program makes significant investments using randomized controlled trials and quasi-experimental evaluations to provide evidence of impact for pilot approaches to be considered for scaled funding. USAID is also experimenting with cash benchmarking—using household grants to benchmark traditional programming. USAID has conducted five randomized controlled trials (RCTs) of household grants or “cash lump sum” programs, and three RCTs of more traditional programs with household grant elements.
Score
8
AmeriCorps
2.1 Did the agency have an agency-wide evaluation policy? (Example: Evidence Act 313(d))
  • AmeriCorps has an evaluation policy that presents five key principles that govern the agency’s planning, conduct, and use of program evaluations: rigor, relevance, transparency, independence, and ethics.
2.2 Did the agency have an agency-wide evaluation plan? (Example: Evidence Act 312(b))
  • In FY19, AmeriCorps finalized and posted a five-year, agency-wide strategic evaluation plan. The AmeriCorps CEO’s goal is to use the plan to guide FY20 budget planning.
2.3 Did the agency have a learning agenda (evidence-building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including, but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda? (Example: Evidence Act 312)
  • AmeriCorps uses the terms learning agenda, evaluation plan, and evidence-building plan synonymously. AmeriCorps has a strategic evidence plan that includes an evergreen learning agenda. The plan will be reviewed and updated annually. While the agency is open to the feedback of external stakeholders, it has not engaged external stakeholders in the development of the evidence plan.
2.4 Did the agency publicly release all completed program evaluations?
  • All completed evaluation reports are posted to the Evidence Exchange, an electronic repository for evaluation studies and other reports. This virtual repository was launched in September 2015.
2.5 What is the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts? (Example: Evidence Act 315, subchapter II (c)(3)(9))
  • A comprehensive portfolio of research projects has been built to assess the extent to which AmeriCorps is achieving its mission. As findings emerge, future studies are designed to continuously build the agency’s evidence base. AmeriCorps’ research and evaluation (R&E) work relies on scholarship in relevant fields of academic study; a variety of research and program evaluation approaches, including field, experimental, and survey research; multiple data sources, including internal and external administrative data; and different statistical analytic methods. AmeriCorps relies on partnerships with universities and third-party research firms to ensure independence and access to state-of-the-art methodologies. AmeriCorps supports its grantees with evaluation technical assistance and courses to ensure their evaluations are of the highest quality and requires grantees receiving $500,000 or more in annual funding to engage an external evaluator. These efforts have resulted in a robust body of evidence that national service allows (1) participants to experience positive benefits, (2) nonprofit organizations to be strengthened, and (3) national service programs to effectively address local issues.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
  • AmeriCorps uses the research design most appropriate for addressing the research question. When experimental or quasi-experimental designs are warranted, the agency uses them and encourages its grantees to use them, as noted in the agency evaluation policy: “AmeriCorps is committed to using the most rigorous methods that are appropriate to the evaluation questions and feasible within statutory, budget and other constraints.” As of May 2020, AmeriCorps has received 42 grantee evaluation reports that use experimental design and 124 that use quasi-experimental design.
Score
9
U.S. Department of Education
2.1 Did the agency have an agency-wide evaluation policy? (Example: Evidence Act 313(d))
  • The Department’s new Evaluation Policy is posted online at ed.gov/data. Key features of the policy include the Department’s commitment to: (1) independence and objectivity; (2) relevance and utility; (3) rigor and quality; (4) transparency; and (5) ethics. Special features include additional guidance to ED staff on considerations for evidence-building conducted by ED program participants, which emphasizes the need for grantees to build evidence in a manner consistent with the parameters of their grants (e.g., purpose, scope, and funding levels), up to and including rigorous evaluations that meet WWC standards without reservations.
2.2 Did the agency have an agency-wide evaluation plan? (Example: Evidence Act 312(b))
  • ED’s FY22 Draft Annual Evaluation Plan will be shared with OMB in the fall and finalized in the spring. Consistent with OMB Circular A-11 Section 290, the FY22 Annual Evaluation Plan will be posted publicly in February 2021, concurrent with the Budget Release. ED anticipates that the plan will include all current and planned program evaluations across ED, along with the details required by the Evidence Act and associated OMB guidance.
  • ED’s current evaluation plan covers the subset of agency activities funded by ESSA FY18 and FY19 appropriations, for work to be procured in FY19 and FY20, and begun—effectively—in FY20 and FY21. Since the passage of ESSA, IES has worked with partners across ED, including the Evidence Leadership Group, to prepare and submit to Congress a biennial, forward-looking evaluation plan covering all mandated and discretionary evaluations of education programs funded under ESSA (known as ED’s “8601 plan”).
2.3 Did the agency have a learning agenda (evidence-building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including, but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda? (Example: Evidence Act 312)
  • ED is developing its Learning Agenda consistent with milestones established by the Evidence Act and OMB guidance. Per OMB guidance on performance management systems, ED will share priority questions in its draft learning agenda with OMB in June 2020 as part of the Strategic Review Process. The complete draft Learning Agenda will be shared with OMB in Fall 2020. After receiving feedback from OMB and external stakeholders, ED will submit a final Learning Agenda to OMB in Fall 2021. OMB Circular A-11 Section 290 does not require the Learning Agenda be publicly released prior to February 2022, concurrent with the FY23 Budget Release.
  • To develop its draft Learning Agenda, ED has expanded the question generation and prioritization process used in the development of its “8601 Plan” (see above) to all principal operating components across ED. To help ensure alignment of the draft learning agenda to ED’s Strategic Plan, the Evidence Leadership Group has been expanded to include a member from ED’s Performance Improvement Office. The Evaluation Officer regularly consults with ED’s Enterprise Risk Management (ERM) function to explore the intersection between the Learning Agenda and high-priority issues identified in ERM processes. Broad stakeholder feedback will be received on topics addressed in the Draft Learning Agenda after initial comments have been received from OMB on its format and sufficiency.
2.4 Did the agency publicly release all completed program evaluations?
2.5 What is the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts? (Example: Evidence Act 315, subchapter II (c)(3)(9))
  • ED completed its Interim Capacity Assessment, meeting all milestones established by the Evidence Act and OMB guidance. It addresses six dimensions of the Department’s capacity to build and use evidence, with an emphasis on evaluation. Specific components include: (1) a list of existing activities being evaluated by the Department; and assessments of the extent to which those activities (2) meet the needs of the Department’s operating components; (3) meet the Department’s most important learning, management, and accountability needs; (4) use appropriate methods; (5) are supported by agency capacity for effective planning, execution, and dissemination; and (6) are supported by agency capacity for effective use of evaluation evidence and data for analysis.
  • A distinguishing feature of ED’s Interim Capacity Assessment is an agency-wide survey of all employees that focuses on two domains: (1) their capacity to build and use evidence and (2) their capacity to use data. The specific questions employees received depended upon their position level (i.e., supervisory or non-supervisory) and their job role (i.e., grant maker/monitor; non-grant maker/monitor; data professional). The results of this survey are already being used to develop training related to evidence building, evidence use, and analytics, and fulfill, in part, requirements of the Federal Data Strategy.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
  • The IES website includes a searchable database of planned and completed evaluations, including those that use experimental, quasi-experimental, or regression discontinuity designs. As of July 2020, that list includes 43 completed or planned experimental studies, two quasi-experimental studies, and five regression discontinuity studies. All impact evaluations rely upon experimental trials. Other methods, including matching and regression discontinuity designs, are classified as rigorous outcomes evaluations. Not included in this count are studies that are descriptive or correlational in nature, including implementation studies and less rigorous outcomes evaluations.
Score
10
U.S. Dept. of Housing & Urban Development
2.1 Did the agency have an agency-wide evaluation policy? (Example: Evidence Act 313(d))
  • PD&R has published a Program Evaluation Policy that establishes core principles and practices of PD&R’s evaluation and research activities. The six core principles are rigor, relevance, transparency, independence, ethics, and technical innovation.
2.2 Did the agency have an agency-wide evaluation plan? (Example: Evidence Act 312(b))
  • HUD’s learning agendas, called the Research Roadmap, have served as agency-wide evaluation plans that list and describe research and evaluation priorities for a five-year planning period. Annual evaluation plans are developed based on a selection of Roadmap proposals, newly emerging research needs, and incremental funding needs for major ongoing research and are submitted to Congress in association with PD&R’s annual budget requests. Actual research activities are substantially determined by Congressional funding and guidance. Under the Evidence Act, PD&R will prepare public Annual Evaluation Plans informed by the new Research Roadmap to be submitted in conjunction with the Annual Performance Plan.
2.3 Did the agency have a learning agenda (evidence-building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including, but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda? (Example: Evidence Act 312)
  • HUD’s Research Roadmap has served as the Department’s evidence-building plan and learning agenda for eight years, and a new Roadmap was developed in FY19-20. HUD’s participatory process (see for example pp. 14–16 of Roadmap Update 2017) engages internal and external stakeholders to identify research questions and other evidence-building activities to support effective policy-making. Stakeholders include program partners in state and local governments and the private sector; researchers and academics; policy officials; and members of the general public who frequently access the HUDuser.gov portal. Outreach mechanisms for learning agenda development include email, web forums, conferences and webcasts, and targeted listening sessions. The 2019 roadmapping process added a new public-access conference and webcast. The updated Roadmap provides critical content for developing a learning agenda under the Evidence Act as a component of the next Strategic Plan.
2.4 Did the agency publicly release all completed program evaluations?
  • PD&R’s Program Evaluation Policy requires timely publishing and dissemination of all evaluations that meet standards of methodological rigor. Completed evaluations and research reports are posted on PD&R’s website, HUDUSER.gov. Additionally, under the policy, research and evaluation contracts include language that allows researchers to independently publish results, even without HUD approval, after not more than six months.
2.5 What is the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts? (Example: Evidence Act 315, subchapter II (c)(3)(9))
  • PD&R is HUD’s independent evaluation office, with scope spanning all the Department’s program operations. In FY20 PD&R is leading the effort to assess the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts, consistent with the values established in HUD’s Evaluation Policy. The forthcoming Research Roadmap covers much of this content, and a formal Capacity Assessment process was designed by evaluation leaders in coordination with the Chief Data Officer and performance management personnel. The initial Capacity Assessment addresses updated content requirements of OMB Circular A-11 (2020) and includes primary data collection through an exploratory key informant survey of senior managers across the Department. The identified weaknesses in evidence-building capacity will become the focus of subsequent in-depth assessments and interventions to be integrated in the Department’s next Strategic Plan.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
Score
7
U.S. Department of Labor
2.1 Did the agency have an agency-wide evaluation policy? (Example: Evidence Act 313(d))
  • DOL has an Evaluation Policy that formalizes the principles that govern all program evaluations in the Department, including methodological rigor, independence, transparency, ethics, and relevance. The policy represents a commitment to using evidence from evaluations to inform policy and practice.
2.2 Did the agency have an agency-wide evaluation plan? (Example: Evidence Act 312(b))
  • The Chief Evaluation Office (CEO) develops, implements, and publicly releases an annual DOL evaluation plan. The evaluation plan is based on the agency learning agendas as well as the Department’s Strategic Plan priorities, statutory requirements for evaluations, and Secretarial and Administration priorities. The evaluation plan includes the studies CEO intends to undertake in the next year using set-aside dollars. Appropriations language requires the Chief Evaluation Officer to submit a plan to the U.S. Senate and House Committees on Appropriations outlining the evaluations that will be carried out by the Office using dollars transferred to CEO; the DOL evaluation plan serves that purpose. CEO also works with agencies to undertake evaluations and evidence-building strategies to answer other questions of interest identified in learning agendas but not undertaken directly by CEO.
2.3 Did the agency have a learning agenda (evidence-building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including, but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda? (Example: Evidence Act 312)
  • In FY20, the Department is developing its annual evaluation plan, building from individual agencies’ learning agendas to create a combined document. DOL has leveraged its existing practices and infrastructure to develop the broad, four-year prospective research agenda, or Evidence-Building Plan, per the Evidence Act requirement. Both documents will outline the process for internal and external stakeholder engagement.
2.4 Did the agency publicly release all completed program evaluations?
  • All DOL program evaluation reports and findings funded by the CEO are publicly released and posted on the complete reports section of the CEO website. DOL agencies, such as the Employment & Training Administration (ETA), also post and release their own research and evaluation reports. CEO is also in the process of ramping up additional methods of communicating and disseminating CEO-funded studies and findings, and published its first quarterly newsletter in September 2020.
2.5 What is the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts? (Example: Evidence Act 315, subchapter II (c)(3)(9))
  • DOL’s Evaluation Policy affirms the agency’s commitment to high-quality, methodologically rigorous research through funding independent research activities. Further, CEO staff have expertise in research and evaluation methods as well as in DOL programs and policies and the populations they serve. The CEO also convenes technical working groups, whose members have deep technical and subject matter expertise, on the majority of evaluation projects. The CEO has leveraged the FY20 learning agenda process to create an interim Capacity Assessment, per Evidence Act requirements, and will conduct a more detailed assessment of individual agencies’ capacity, as well as DOL’s overall capacity, in these areas for publication in 2022.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
  • DOL employs a full range of evaluation methods to answer key research questions of interest, including, when appropriate, impact evaluations. Among DOL’s active portfolio of approximately 50 projects, study types range from rigorous evidence syntheses to implementation studies to quasi-experimental outcome studies to impact studies. Examples of current DOL studies with a random assignment component include an evaluation of a Job Corps demonstration pilot, the Cascades Job Corps College and Career Academy. An example of a multi-arm randomized controlled trial is the Reemployment Eligibility Assessments evaluation, which assesses a range of strategies to reduce Unemployment Insurance duration and examines effects on wage outcomes.
Score
7
Millennium Challenge Corporation
2.1 Did the agency have an agency-wide evaluation policy? (Example: Evidence Act 313(d))
2.2 Did the agency have an agency-wide evaluation plan? (Example: Evidence Act 312(b))
  • Every MCC investment must adhere to MCC’s rigorous Policy for Monitoring and Evaluation (M&E), which requires each investment to contain a comprehensive M&E Plan. For each investment MCC makes in a country, the country’s M&E Plan must be published within 90 days of entry into force. The M&E Plan lays out the evaluation strategy and includes two main components. The monitoring component lays out the methodology and process for assessing progress towards the investment’s objectives. The evaluation component identifies and describes the evaluations that will be conducted, the key evaluation questions and methodologies, and the data collection strategies that will be employed. Each country’s M&E Plan represents the evaluation plan and learning agenda for that country’s set of investments.
2.3 Did the agency have a learning agenda (evidence-building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including, but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda? (Example: Evidence Act 312)
  • In FY20, in an effort to advance MCC’s evidence base and respond to the Evidence Act, MCC specifically embarked on a learning agenda around women’s economic empowerment (WEE) with short- and long-term objectives. Women’s economic empowerment is one of the priorities of MCC leadership and, as such, the agency is focused on expanding the evidence base to answer these key research questions:
    • How do MCC’s WEE activities contribute to MCC’s overarching goal of reducing poverty through economic growth?
    • How does MCC’s WEE work contribute to increased income and assets for households—beyond what those incomes would have been without the gendered/WEE design?
    • How does MCC’s WEE work increase income and assets for women and girls within those households?
    • How does MCC’s WEE work increase women’s empowerment, defined through measures relevant to the WEE intervention and project area?
  • These research questions were developed through extensive consultation within MCC and with external stakeholders.
2.4 Did the agency publicly release all completed program evaluations?
  • MCC publishes each independent evaluation of every project, underscoring the agency’s commitment to transparency, accountability, learning, and evidence-based decision-making. All independent evaluations and reports are publicly available on the MCC Evaluation Catalog. As of September 2020, MCC had contracted, planned, and/or published 208 independent evaluations. All MCC evaluations produce a final report to present final results, and some evaluations also produce an interim report to present interim results. To date, 110 Final Reports and 41 Interim Reports have been finalized and released to the public.
  • In FY20, MCC also continued producing Evaluation Briefs, a new MCC product that distills key findings and lessons learned from MCC’s independent evaluations. MCC will produce Evaluation Briefs for each evaluation moving forward, and is in the process of writing Evaluation Briefs for the backlog of all completed evaluations. As of October 2020, MCC has published 76 Evaluation Briefs.
  • Finally, in FY20, MCC began the process of re-imagining its Evaluation Catalog to seamlessly link evaluation, data, and access with better usability and findability features. The new MCC Evidence Platform will transform stakeholders’ ability to access and use MCC evaluation data and evidence.
2.5 What is the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts? (Example: Evidence Act 315, subchapter II (c)(3)(9))
  • MCC is currently working on a draft capacity assessment in accordance with the Evidence Act. Additionally, once a compact or threshold program is in implementation, Monitoring and Evaluation (M&E) resources are used to procure evaluation services from external independent evaluators to directly measure high-level outcomes to assess the attributable impact of all of MCC’s programs. MCC sees its independent evaluation portfolio as an integral tool to remain accountable to stakeholders and the general public, demonstrate programmatic results, and promote internal and external learning. Through the evidence generated by monitoring and evaluation, the M&E Managing Director, Chief Economist, and Vice President for the Department of Policy and Evaluation are able to continuously update estimates of expected impacts with actual impacts to inform future programmatic and policy decisions. In FY20, MCC began or continued comprehensive, independent evaluations for every compact or threshold project at MCC, a requirement stipulated in Section 7.5.1 of MCC’s Policy for M&E. All evaluation designs, data, reports, and summaries are available on MCC’s Evaluation Catalog.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
  • MCC employs rigorous, independent evaluation methodologies to measure the impact of its programming, evaluate the efficacy of program implementation, and determine lessons learned to inform future investments. As of September 2020, 37% of MCC’s evaluation portfolio consists of impact evaluations, and 63% consists of performance evaluations. All MCC impact evaluations use random assignment to determine which groups or individuals will receive an MCC intervention, which allows for a counterfactual and thus for attribution to MCC’s project, and best enables MCC to measure its impact in a fair and transparent way. Each evaluation is conducted according to the program’s Monitoring and Evaluation (M&E) Plan, in accordance with MCC’s Policy for M&E.
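As a concrete illustration of why random assignment supports a counterfactual and attribution, the hedged sketch below estimates a program impact as a simple difference in mean outcomes between randomly assigned treatment and control groups. The sample sizes, outcome measure, and effect size are invented for illustration and do not reflect any MCC evaluation, data, or results.

```python
# Illustrative sketch only: impact estimation under random assignment on
# synthetic data. NOT MCC's evaluation code or results.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical eligible households; half randomly assigned to the intervention.
n = 2000
treat = rng.permutation(np.array([1] * (n // 2) + [0] * (n // 2)))

# Simulated outcome (e.g., a household income index): a common baseline plus a
# true effect of 5 units for treated households, plus noise.
outcome = 100 + 5 * treat + rng.normal(0, 20, size=n)

# Because assignment is random, the control-group mean estimates what treated
# households would have looked like without the program (the counterfactual),
# so the difference in means estimates the program's impact.
diff = outcome[treat == 1].mean() - outcome[treat == 0].mean()
se = np.sqrt(outcome[treat == 1].var(ddof=1) / (treat == 1).sum()
             + outcome[treat == 0].var(ddof=1) / (treat == 0).sum())

print(f"Estimated impact: {diff:.2f} "
      f"(95% CI: {diff - 1.96 * se:.2f} to {diff + 1.96 * se:.2f})")
```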
Score
2
Substance Abuse and Mental Health Services Administration
2.1 Did the agency have an agency-wide evaluation policy? (Example: Evidence Act 313(d))
  • Formerly, SAMHSA had an Evaluation Policy and Procedure (P&P) that provided guidance across the agency regarding all program evaluations. Under “Evaluation Policies,” the SAMHSA website states: “Under [the] Evidence Act, federal agencies are expected to expand their capacity for engaging in program evaluation by designating evaluation officers, developing learning agendas; producing annual evaluation plans, and enabling a workforce to conduct internal evaluations. To this end, SAMHSA seeks to promote rigor, relevance, transparency, independence, and ethics in the conduct of its evaluations.”
2.2 Did the agency have an agency-wide evaluation plan? (Example: Evidence Act 312(b))
  • While the Evaluation P&P served as the agency’s formal evaluation plan, an updated draft evaluation plan is not available.
2.3 Did the agency have a learning agenda (evidence-building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including, but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda? (Example: Evidence Act 312)
2.4 Did the agency publicly release all completed program evaluations?
  • As of August 2020, no evaluation reports or summaries are posted on the website, nor are any ongoing evaluation studies listed. However, the publications page lists 63 reports, of which nine appear to be evaluation reports. A word search of SAMHSA’s website for the term “evaluation” yielded five results, none of which are evaluation reports.
  • The following criteria are used to determine whether an evaluation is significant: (1) whether the evaluation was mandated by Congress; (2) whether there are high-priority needs in states and communities; (3) whether the evaluation is for a new or congressionally mandated program; (4) the extent to which the program is linked to key agency initiatives; (5) the level of funding; (6) the level of interest from internal and external stakeholders; and (7) the potential to inform practice, policy, and/or budgetary decision-making. Results from significant evaluations are made available on SAMHSA’s evaluation website.
2.5 What is the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts? (Example: Evidence Act 315, subchapter II (c)(3)(9))
  • SAMHSA did not describe progress in developing an interim or draft Capacity Assessment. In 2017, SAMHSA formed a new workgroup, the Cross-Center Evaluation Review Board (CCERB). According to the former Evaluation P&P, the CCERB reviews and provides oversight of significant evaluation activities for SAMHSA, from contract planning to evaluation completion and at critical milestones, and is composed of representatives from each of the centers and from the Office of Tribal Affairs and Policy (OTAP), which provides cultural competency consultation as necessary. CCERB staff provide support for program-specific and administration-wide evaluations. It is unclear whether the CCERB still exists; a word search of the SAMHSA website (August 2020) for “Cross-Center Evaluation Review Board” yielded no results.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
  • SAMHSA does not list any completed evaluation reports on its evaluation website. Of the nine evaluation reports found on the publications page, none appear to use experimental methods. According to the Evaluation P&P (p. 5): “evaluations should be rigorously designed to the fullest extent possible and include ‘…inferences about cause and effect [that are] well founded (internal validity), […] clarity about the populations, settings, or circumstances to which results can be generalized (external validity); and requires the use of measures that accurately capture the intended information (measurement reliability and validity).’”