2020 Federal Standard of Excellence
Use of Evidence in Competitive Grant Programs
Did the agency use evidence of effectiveness when allocating funds from its competitive grant programs in FY20? (Examples: Tiered-evidence frameworks; evidence-based funding set-asides; priority preference points or other preference scoring for evidence; Pay for Success provisions)
Score
7
Administration for Children and Families (HHS)
8.1 What were the agency’s five largest competitive programs and their appropriations amount (and were city, county, and/or state governments eligible to receive funds from these programs)?
- In FY20, the five largest competitive grant programs were:
- Head Start ($10.6 billion; eligible applicants: public or private non-profit organizations, including community-based and faith-based organizations, or for-profit agencies);
- Unaccompanied Children Services ($1.3 billion; eligible applicants: private non-profit and for-profit agencies);
- Preschool Development Grants ($275 million; eligible applicants: states);
- Healthy Marriage Promotion and Responsible Fatherhood Grants ($148.8 million; eligible applicants: states, local governments, tribal entities, and community-based organizations, both for profit and not-for-profit, including faith-based);
- Runaway and Homeless Youth Program ($113.8 million; eligible applicants: community-based public and private organizations).
8.2 Did the agency use evidence of effectiveness to allocate funds in its five largest competitive grant programs? (e.g., Were evidence-based interventions/practices required or suggested? Was evidence a significant requirement?)
- ACF reviewed performance data from current Healthy Marriage and Responsible Fatherhood grantees (using the nFORM system) to set priorities, interests, and expectations for 2020 HMRF grant applicants. For example, because nFORM data indicated that organizations were more likely to meet enrollment targets and engage participants when they focused on implementing one program model, ACF’s 2020 FOA expresses specific interest in grantee projects “that implement only one specific program model designed for one specific youth service population” (p. 12).
- ACF “anticipates giving preference to those applicants that were awarded a Healthy Marriage or Responsible Fatherhood grant between 2015 and 2019, and that (a) are confirmed by ACF to have met all qualification requirements under Section IV.2, The Project Description, Approach, Organizational Capacity of this FOA; and (b) are confirmed by ACF to have received an acceptable rating on their semi-annual grant monitoring statements during years three and four of the project period. Particular consideration will be given to applicants that: (1) designed and successfully implemented, through to end of 2019, an impact evaluation of their program model, and that the impact evaluation was a fair impact test of their program model and that was not terminated prior to analysis; or (2) successfully participated in a federally-led impact evaluation” (p. 17).
- ACF will evaluate HMRF grant applicants based upon their capacity to conduct a local impact evaluation and their proposed approach (for applicants required to or electing to conduct local evaluations); their ability to provide a reasonable rationale and/or research base for the program model(s) and curriculum(a) proposed; and their inclusion of a Continuous Quality Improvement Plan that clearly describes the organizational commitment to data-driven approaches for identifying areas for improved program performance, testing potential improvements, and cultivating a culture and environment of learning and improvement, among other things. Further, the Compliance and Performance (CAPstone) reviews entail a thorough review of each grantee’s performance. The Office of Family Assistance (OFA) sends a formal set of questions about grantee performance that the grant program specialists and TA providers answer ahead of time; OFA, OPRE, and the TA provider then convene meetings where each grantee’s performance is discussed at length using nFORM data and the answers to those formal questions.
- The Head Start Designation Renewal System (DRS) determines whether Head Start/Early Head Start grantees are delivering high-quality comprehensive services to the children and families that they serve. These determinations are based on seven conditions, one of which looks at how Head Start classrooms within programs perform on the Classroom Assessment Scoring System (CLASS), an observation-based measure of the quality of teacher-child interactions. When the DRS deems grantees to be underperforming, grantees are denied automatic renewal of their grant and must apply for funding renewal through a standard open competition process. In the most recent Head Start FOA language, grantees who are re-competing for Head Start funds must include a description of any violations, such as deficiencies, areas of non-compliance, and/or audit findings, in their record of Past Performance (p. 26). Applicants may describe the actions they have taken to address these violations. According to Head Start policy, in competitions to replace or potentially replace a current grantee, the responsible HHS official will give priority to applicants that have demonstrated capacity in providing effective, comprehensive, and well-coordinated early childhood education and development services and programs (see section 1304.20: Selection among applicants).
- ACF manages the Runaway and Homeless Youth Training and Technical Assistance Center (RHYTTAC), the national training and technical assistance entity that provides resources and direct assistance to the Runaway and Homeless Youth (RHY) grantees and other youth serving organizations eligible to receive RHY funds. RHYTTAC disseminates information about and supports grantee implementation of high-quality, evidence-informed, and evidence-based practices. The RHYTTAC funding opportunity announcement evaluates applicants based on their strategy for tracking RHY grantee uptake and implementation of evidence-based or evidence-informed strategies.
- ACF also evaluates Unaccompanied Children Services, Preschool Development Grants, and Runaway and Homeless Youth grant applicants based upon: their proposed program performance evaluation plan; how their data will contribute to continuous quality improvement; and their demonstrated experience with comparable program evaluation, among other factors.
8.3 Did the agency use its five largest competitive grant programs to build evidence? (e.g., requiring grantees to participate in evaluations)
- ACF’s template (see p. 14 in Attachment C) for competitive grant announcements includes standard language that funding opportunity announcement drafters may select to require grantees to either 1) collect performance management data that contributes to continuous quality improvement and is tied to the project’s logic model, or 2) conduct a rigorous evaluation for which applicants must propose an appropriate design specifying research questions, measurement and analysis.
- As a condition of award, Head Start grantees are required to participate fully in ACF-sponsored evaluations, if selected to do so. As such, ACF has an ongoing research portfolio that is building evidence in Head Start. Research sponsored through Head Start funding over the past decade has provided valuable information not only to guide program improvement in Head Start itself, but also to guide the field of early childhood programming and early childhood development. Dozens of Head Start programs have collaborated with researchers in making significant contributions in terms of program innovation and evaluation, as well as the use of systematic data collection, analysis and interpretation in program operations.
- ACF’s 2020 Healthy Marriage and Responsible Fatherhood (HMRF) Grants establish required evidence activities by scope of grantee services (p. 4). For example, large scope services (requesting funding between $1M-$1.5M) “must propose a rigorous impact evaluation (i.e., randomized-controlled trial (RCT) or high-quality, quasi-experimental design (QED) study)…and must allocate at least 15 percent, but no more than 20 percent, of their total annual funding for evaluation” (p. 19). Regardless of their scope of services, all 2020 HMRF grantees must plan for and carry out continuous quality improvement activities (p. 18) and conduct a local evaluation (p. 18) or participate in a federally led evaluation or research effort (p. 22). ACF has an ongoing research portfolio building evidence related to Strengthening Families, Healthy Marriage, and Responsible Fatherhood, and has conducted randomized controlled trials with grantees in each funding round of these grants.
- The 2003 Reauthorization of the Runaway and Homeless Youth Act called for a study of long-term outcomes for youth who are served through the Transitional Living Program (TLP). In response, ACF is sponsoring a study that will capture data from youth at program entry and at intermediate- and longer-term follow-up points after program exit and will assess outcomes related to housing, education, and employment. ACF is also sponsoring a process evaluation of the 2016 Transitional Living Program Special Population Demonstration Project.
- Additionally, Unaccompanied Children Services (p. 33), Preschool Development Grants (p. 30), and Runaway and Homeless Youth (p. 24) grantees are required to develop a program performance evaluation plan.
8.4 Did the agency use evidence of effectiveness to allocate funds in any other competitive grant programs in FY20 (besides its five largest grant programs)?
- ACF’s Personal Responsibility Education Program includes three individual discretionary grant programs that fund programs exhibiting evidence of effectiveness, innovative adaptations of evidence-based programs, and promising practices that teach youth about abstinence and contraception to prevent pregnancy and sexually transmitted infections.
- To receive funding through ACF’s Sexual Risk Avoidance Education (SRAE) program, applicants must cite evidence published in a peer-reviewed journal and/or a randomized controlled trial or quasi-experimental design to support their chosen interventions or models.
8.5 What are the agency’s 1-2 strongest examples of how competitive grant recipients achieved better outcomes and/or built knowledge of what works or what does not?
- As mentioned above, ACF is conducting a multi-pronged evaluation of the Health Profession Opportunity Grants Program (HPOG). Findings from the first cohort of HPOG grants influenced the funding opportunity announcement for the second round of HPOG (HPOG 2.0) funding. ACF used findings from the impact evaluation of the first cohort of HPOG grants to provide insights to the field about which HPOG program components are associated with stronger participant outcomes. For example, based on the finding that many participants engaged in short-term training for low-wage, entry-level jobs, the HPOG 2.0 FOA more carefully defined the career pathways framework, described specific strategies for helping participants progress along a career pathway, and identified and defined key HPOG education and training components. Applicants were required to more clearly describe how their program would support career pathways for participants. Based on an analysis that indicated limited collaboration with healthcare employers, the HPOG 2.0 FOA required applicants to demonstrate the use of labor market information, consult with local employers, and describe their plans for employer engagement. And based on the finding that many programs were screening out applicants with low levels of basic literacy, reading, and numeracy skills, the HPOG 2.0 FOA placed more emphasis on providing basic skills education and assessing barriers, so that programs would be accessible to the clients most prepared to benefit.
- ACF’s Personal Responsibility Education Innovative Strategies Program (PREIS) grantees must conduct independent evaluations of their innovative strategies for the prevention of teen pregnancy, births, and STIs, supported by ACF training and technical assistance. These rigorous evaluations are designed to meet the HHS Teen Pregnancy Prevention Evidence-Based Standards and are expected to generate lessons learned so that others can benefit from these strategies and innovative approaches.
- In 2019, ACF awarded two child welfare discretionary grants to build knowledge of what works: (1) Regional Partnership Grants to Increase the Well-Being of, and to Improve the Permanency Outcomes for, Children and Families Affected By Opioids and Other Substance Abuse: these grants aim to build evidence on the effectiveness of targeted approaches that improve outcomes for children and families affected by opioids and other substance use disorders. To this end, grantees will evaluate their local program; select and report on performance indicators that align with proposed program strategies and activities; and participate in a national cross-site evaluation that will describe outcomes for children, adults, and families enrolled in RPG projects as well as the outcomes of the partnerships. (2) Community Collaboratives to Strengthen and Preserve Families: these grants will support the development, implementation, and evaluation of primary prevention strategies to improve the safety, stability, and well-being of all families through a continuum of community-based services and supports. Projects will include both process and outcome evaluations.
8.6 Did the agency provide guidance which makes clear that city, county, and state government, and/or other grantees can or should use the funds they receive from these programs to conduct program evaluations and/or to strengthen their evaluation capacity-building efforts?
- ACF’s template (see p. 14 in Attachment C) for competitive grant announcements includes standard language instructing grantees to conduct evaluation efforts. Program offices may use this template to require grantees to collect performance data or conduct a rigorous evaluation. Applicants are instructed to include third-party evaluation contracts in their proposed budget justifications.
- ACF’s 2020 Healthy Marriage and Responsible Fatherhood (HMRF) Grants establish required evidence activities by scope of grantee services (p. 4). For example, large scope services (requesting funding between $1M-$1.5M) “must propose a rigorous impact evaluation (i.e., randomized-controlled trial (RCT) or high-quality, quasi-experimental design (QED) study)…and must allocate at least 15 percent, but no more than 20 percent, of their total annual funding for evaluation” (p. 19). Regardless of their scope of services, all 2020 HMRF grantees must plan for and carry out continuous quality improvement activities (p. 18) and conduct a local evaluation (p. 18) or participate in a federally led evaluation or research effort (p. 22).
- ACF’s 2018 Preschool Development Grants funding announcement notes that “it is intended that States or territories will use a percentage of the total amount of their [renewal] grant award during years 2 through 4 to conduct the proposed process, cost, and outcome evaluations, and to implement a data collection system that will allow them to collect, house, and use data on the populations served, the implementation of services, the cost of providing services, and coordination across service partners.”
- ACF’s rules (section 1351.15) allow Runaway and Homeless Youth grant awards to be used for “data collection and analysis.”
- Regional Partnership Grants (RPG) (p. 1) require a minimum of 20 percent of grant funds to be spent on evaluation elements. ACF has supported the evaluation capacity of RPG grantees by providing technical assistance for data collection, performance measurement, and continuous quality improvement; implementation of the cross-site evaluation; support for knowledge dissemination; and provision of group TA via webinars and presentations.
- Community Collaboratives to Strengthen and Preserve Families (CCSPF) grants (p. 7) require a minimum of 10 percent of grant funds to be used on data collection and evaluation activities. ACF has supported the evaluation capacity of CCSPF grantees by providing technical assistance for developing research questions, methodologies, process and outcome measures; implementing grantee-designed evaluations and continuous quality improvement activities; analyzing evaluation data; disseminating findings; and supporting data use in project and organizational decision-making processes.
- ACF also provides evaluation technical assistance to:
- support grantees participating in federal evaluations (e.g., projects supporting grantees from Health Profession Opportunity Grants 2.0 and Tribal Health Profession Opportunity Grants 2.0); and
- support grantees who are conducting their own local evaluations (e.g., projects supporting Healthy Marriage and Responsible Fatherhood grantees, Personal Responsibility Education Program grantees, and YARH).
Score
7
Administration for Community Living (HHS)
8.1 What were the agency’s five largest competitive programs and their appropriations amount (and were city, county, and/or state governments eligible to receive funds from these programs)?
- In FY20, the five largest competitive grant programs were:
- National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) ($112.0 million; eligible applicants: State, local, and tribal governments and nonprofits, public and State controlled institutions of higher education)
- NIDILRR’s largest competitive grants are its Disability and Rehabilitation Research Projects (DRRP)
- Centers for Independent Living ($90.8 million; eligible applicants: Nonprofits; Public and State controlled institutions of higher education)
- One of the largest competitive grants was the Centers for Independent Living Training and Technical Assistance Grant.
- State Health Insurance Assistance Program ($52.1 million; eligible applicants: Unrestricted)
- One of the relevant NOFAs is the 2020 State Health Insurance Assistance Program (SHIP) Base Grant.
- University Centers for Excellence in Developmental Disabilities Education, Research and Service ($41.6 million; eligible applicants: entities in each State designated as UCEDDs to carry out the four core functions of interdisciplinary pre-service preparation and continuing education, community services, research, and information dissemination)
- Medicare Improvements for Patients and Providers Act Programs (MIPPA) ($38 million; eligible applicants: nonprofits; city or township governments; public and State controlled institutions of higher education; Native American tribal governments; public housing authorities/Indian housing authorities; private institutions of higher education; Native American tribal organizations; special district governments; county governments; State governments; and independent school districts).
8.2 Did the agency use evidence of effectiveness to allocate funds in its five largest competitive grant programs? (e.g., Were evidence-based interventions/practices required or suggested? Was evidence a significant requirement?)
- Based on a strict interpretation of the phrase “evidence of prior effectiveness to make grant awards,” NIDILRR does not currently use evidence of prior effectiveness to make grant awards. Instead, ACL makes these grant awards largely by relying on the expert evaluative judgments of its peer reviewers. Making grant awards by using peer review is a standard, widely accepted, evidence-based practice. For example, see page 7 of the full DPCP announcement.
- Independent Living (IL) NOFAs describe evaluation criteria including plans for technical assistance to enhance grant effectiveness and the provision of information developed about best practices (full announcement (p. 21)). To continue receiving CIL program funding, eligible centers must provide evidence that they have previously had an impact on the goals and objectives for this funding.
- SHIP NOFAs describe evaluation criteria including plans to improve alignment of policies, processes, and procedures to program goals and increased accountability to program expectations at all levels (full announcement (p. 25)).
- University Centers for Excellence in Developmental Disabilities Education, Research & Service (UCEDDs) are a nationwide network of independent but interlinked centers, representing an expansive national resource for addressing issues, finding solutions, and advancing research related to the needs of individuals with developmental disabilities and their families. According to the funding opportunity announcement, applications are also reviewed based on their description of current or previous evidence of relevant experience.
- MIPPA funds are awarded to State grantees and to the National Center for Benefits Outreach and Enrollment. To continue funding without restrictions, State grantees are required to submit state plans that ACL staff review for the specific strategies that grantees will employ to enhance efforts through statewide and local coalition building. National Center applicants must describe the rationale for using the particular intervention, including factors such as evidence of intervention effectiveness. In 2019, the Center was awarded additional funding based on prior performance—specifically, assisting over 7.6 million individuals to identify over $29.6 billion in potential annual benefits.
8.3 Did the agency use its five largest competitive grant programs to build evidence? (e.g., requiring grantees to participate in evaluations)
- NIDILRR and its grantees are in the disability and rehabilitation evidence-building business. NIDILRR grantees generate new knowledge on particular disability topics or develop new disability products, which eventually become part of a larger evidence base. To generate this new knowledge, NIDILRR grantees must conduct a series of research and development activities that produce important outputs. These research and development activities are guided by two frameworks: the NIDILRR Stages of Research Framework, published in 45 CFR 1330.4, and the NIDILRR Stages of Development Framework, published in 45 CFR 1330.5.
- Independent Living/Centers for Independent Living grants are required to show that they are “improving performance, outcomes, operations, and governance of CILs” (Full Announcement (p. 5)).
- SHIP grantees are required to build and disseminate evidence of what works through documenting and promoting “knowledge, successes, and lessons learned within the SHIP network. This includes sharing ideas, products, and materials with other SHIP grantees, ACL, and the SHIP Technical Assistance Center” (Full Announcement (p. 5)).
- A central purpose of UCEDD grants is the building and dissemination of evidence of what works. UCEDDs are a nationwide network of independent but interlinked centers, representing an expansive national resource for addressing issues, finding solutions, and advancing research related to the needs of individuals with developmental disabilities and their families.
- MIPPA Grant funds support the identification and dissemination of promising practices (i.e., practices built upon evidence of effectiveness) for improving benefits outreach and enrollment.
8.4 Did the agency use evidence of effectiveness to allocate funds in any other competitive grant programs in FY20 (besides its five largest grant programs)
- ACL requires that evidence of effectiveness be used in all award decisions. Grant officers attend training regarding ways to include information about evidence building in funding opportunity announcements. This includes information about text that can be included in funding announcements: 1) describing requirements for developing measurable outcomes; 2) explaining how the inclusion of evidence and evidence-building plans can be used to score grant applications; and 3) instructing grant reviewers regarding rating applicants’ presentation of evidence and evidence-building plans. The training was recorded and is available to all staff.
- ACL’s Alzheimer’s Disease Programs Initiative (ADPI) translates and implements evidence-based supportive services for persons with ADRD and their caregivers at the community level. Award criteria include the extent to which applicants “describe partnerships, collaborations and innovative activities that will be implemented in support of goal/objective achievement, including the dementia specific evidence-based/evidence informed intervention(s) to be implemented in the project” (Full Announcement (p. 24)).
- The review criteria for the Lifespan Respite Care Program: State Program Enhancement Grants include the applicant’s description of “how the proposed project will build upon the accomplishments made in previous Lifespan Respite Care Program grants” (Full Announcement (p. 23)).
8.5 What are the agency’s 1-2 strongest examples of how competitive grant recipients achieved better outcomes and/or built knowledge of what works or what does not?
- Prior to the development of visual scene displays by the NIDILRR-funded Augmentative and Alternative Communication Rehabilitation Engineering Research Center (AAC-RERC), the only Augmentative and Alternative Communication (AAC) option was traditional grid displays with isolated symbols presented in rows and columns. It was difficult for many adults with acquired conditions resulting in significant language and cognitive limitations to use these traditional grid displays. Visual Scene Displays (VSDs) offer an easier alternative to traditional grid displays. They go beyond standard pictures and symbols organized in rows and columns by providing information on the situation or context. Put more simply, VSDs are photos or pictures that people can use to communicate messages to others. These photos depict familiar scenes, objects, or people—and users can touch “hot spots” on the photo to speak messages that relate to the pictured scene or object. For example, a person with aphasia might touch a hotspot on a picture of a sibling and say “this is my sister.” This additional information on the situation and context makes it easier for persons with complex communication needs to express their wants and needs and therefore enhances their ability to interact and participate with others in the community. Research from the AAC-RERC and external researchers demonstrates the effectiveness of VSDs with adults with severe chronic aphasia, primary progressive aphasia, dementia, and other conditions. As a result of the continued efforts of the AAC-RERC and its partners, this VSD technology has been successfully transferred to all of the major AAC manufacturers and app developers.
- ACL’s Alzheimer’s Disease Supportive Services Program (ADSSP) encourages the translation of dementia specific interventions for use in communities. Examples include: the Savvy Caregiver (evidence-based) psychoeducational intervention focused on training family caregivers about the basic knowledge, skills, and attitudes needed to handle the challenges of caring for a family member with Alzheimer’s disease and to be an effective caregiver; Cuidando con Respeto (evidence-informed), Spanish version of the original Savvy Caregiver Program; and Savvy Caregiver Express (evidence-informed), a condensed version of the original Savvy Caregiver Program. ACL’s requirement for inclusion of dementia specific evidence-based interventions is demonstrated in the 2018 funding opportunity announcement entitled Alzheimer’s Disease Programs to States and Communities.
8.6 Did the agency provide guidance which makes clear that city, county, and state government, and/or other grantees can or should use the funds they receive from these programs to conduct program evaluations and/or to strengthen their evaluation capacity-building efforts?
- Funding opportunity announcements and grant reviews stress the need for strong performance measurement and evaluation. ACL’s technical assistance centers—the National Resource Center on Nutrition and Aging (NRC), the Alzheimer’s Disease Supportive Services Program (ADSSP), and the University Centers for Excellence in Developmental Disabilities Education, Research, and Service—promote the use and generation of evidence with ACL grantees. Grantee manuals also include information about the importance of and requirements for evaluation (see the Administration on Aging: Title VI Resource Manual). Staff of ACL’s Office of Performance and Evaluation give presentations on the importance of evidence to regional staff who are in frequent contact with State grantees and at grantee conferences (see ACL Track: The ACL Older Americans Act (OAA) Performance System – Crossing the Finish Line and ACL/CMS Track: Raising the Bar in Medicaid HCBS & Community Inclusion – Showcasing Transformation, presented at the 2019 home- and community-based services (HCBS) conference; and ACL Track: Assuring the Health & Welfare of Medicaid HCBS Beneficiaries: Federal Findings, Investments, & Promising Practices in Systems Change and ACL Track: Innovative Housing & Health & Human Services Collaborations: A Game-Changer in Supportive Housing & Community Living, presented at the 2018 HCBS conference).
Score
10
U.S. Agency for International Development
8.1 What were the agency’s five largest competitive programs and their appropriations amount (and were city, county, and/or state governments eligible to receive funds from these programs)?
- USAID’s top five program accounts based on actual appropriation amounts in FY19 were:
- International Disaster Assistance ($4.39 billion; eligible grantees: any U.S. or non-U.S. organization, individual, nonprofit, or for-profit entity that meets the requirements described in ADS 303);
- Economic Support Fund ($3.69 billion; eligible grantees: any U.S. or non-U.S. organization, individual, nonprofit, or for-profit entity that meets the requirements described in ADS 303);
- Migration and Refugee Assistance ($3.43 billion; eligible grantees: any U.S. or non-U.S. organization, individual, nonprofit, or for-profit entity that meets the requirements described in ADS 303);
- Global Health (USAID) ($3.15 billion; eligible grantees: any U.S. or non-U.S. organization, individual, nonprofit, or for-profit entity that meets the requirements described in ADS 303);
- Development Assistance ($3 billion; eligible grantees: any U.S. or non-U.S. organization, individual, nonprofit, or for-profit entity that meets the requirements described in ADS 303).
- See the U.S. Foreign Assistance Reference Guide for more information on each of these accounts. More information can also be found in the FY2021 Congressional Budget Justification (pages 2 and 3, column 4). USAID generally does not limit eligibility when awarding grants and cooperative agreements; eligibility may be restricted for an individual notice of funding opportunity in accordance with the procedures in ADS 303.
8.2 Did the agency use evidence of effectiveness to allocate funds in its five largest competitive grant programs? (e.g., Were evidence-based interventions/practices required or suggested? Was evidence a significant requirement?)
- USAID is committed to using evidence of effectiveness in all of its competitive contracts, cooperative agreements, and grants, which comprise the majority of the Agency’s work. USAID’s Program Cycle Policy ensures evidence from monitoring, evaluation and other sources informs funding decisions at all levels, including during strategic planning, project and activity design, procurement and implementation.
- USAID’s Senior Obligation Alignment Review (SOAR) helps to ensure the Agency is using evidence to design and approve funding for innovative approaches to provide long-term sustainable outcomes and provides oversight on the use of grant or contract mechanisms and proposed results.
- Past performance comprises 30% of the non-cost evaluation criteria for USAID contracts. As part of determining grant awards, USAID’s policy requires an applicant to provide a list of all its cost-reimbursement contracts, grants, or cooperative agreements involving similar or related programs during the past three years. The grant Selection Committee chair must validate the applicant’s past performance reference information based on existing evaluations to the maximum extent possible, and make a reasonable, good-faith effort to contact all references to verify or corroborate how well an applicant performed.
- For assistance, as required by 2 CFR 200, USAID also does a risk assessment to review an organization’s ability to meet the goals and objectives outlined by the agency. Internal procedures for conducting the risk assessment are found in ADS 303.3.9, with guidance on how to look for evidence of effectiveness from potential grantees. Per the ADS, this can be done through reviewing past performance and evaluation/performance reports such as the Contractor Performance Assessment Reporting System (CPARS).
- Even though there is no federal requirement (as there is with CPARS), USAID also assesses grantee past performance for use when making funding decisions (detailed in ADS 303, p. 66). Per USAID’s ADS 303 policy, before making an award of any grant or cooperative agreement, the Agreement Officer must state in the memorandum of negotiation that the applicant has a satisfactory record of performance. When making the award, the Agreement Officer may consider withholding authority to proceed to the next phase of a grant until provided evidence of acceptable performance within a given period.
- USAID was recognized by GAO in its recent report published on September 5, 2018, Managing for Results: Government-wide Actions Needed to Improve Agencies’ Use of Performance Information in Decision Making (GAO-18-609SP) as one of four agencies (out of 23 surveyed) with proven practices for using performance information. USAID was also the only CFO Act agency with a statistically significant increase in the Agency Use of Performance Information Index since 2007.
8.3 Did the agency use its five largest competitive grant programs to build evidence? (e.g., requiring grantees to participate in evaluations)
- Grantees report on the progress of activities through documentation such as Activity Monitoring, Evaluation, and Learning (MEL) Plans, periodic performance reporting, and external and internal evaluation reports (if applicable). These reports help USAID remain transparent and accountable and also help the Agency build evidence of what does and does not work in its interventions. Any internal evaluation undertaken by a grantee must also be provided to USAID for learning purposes. All datasets compiled under USAID-funded projects, activities, and evaluations are to be submitted by grantees to the USAID Development Data Library. All final evaluation reports must also be submitted to the Agency’s Development Experience Clearinghouse (DEC), unless they receive a waiver to the USAID’s public dissemination requirements. These are rare and require the concurrence of the Director of the Office of Learning, Evaluation, and Research.
8.4 Did the agency use evidence of effectiveness to allocate funds in any other competitive grant programs in FY20 (besides its five largest grant programs)
- USAID is actively engaged in utilizing evidence of effectiveness to allocate funds. For example, Development Innovation Ventures (DIV) invests in innovations that demonstrate evidence of impact, cost-effectiveness, and a viable pathway to scale. DIV provides four types of grants: 1) proof of concept, 2) positioning for scale, 3) scaling proven solutions, and 4) evidence grants.
- The more funding requested (up to $5 million), the more DIV requires in an innovation’s evidence base, the deeper the due diligence process, and the greater the expectation that the applicant will be able to demonstrate development impact and potential to scale. After a decision is made to allocate funding, 98% of all DIV awards are structured as fixed-amount pay-for-performance grants, ensuring that awards maximize the impact of U.S. taxpayer dollars. Over the past eight years, DIV has invested $118 million in nearly 200 innovations across 45 countries.
8.5 What are the agency’s 1-2 strongest examples of how competitive grant recipients achieved better outcomes and/or built knowledge of what works or what does not?
- No USAID examples.
8.6 Did the agency provide guidance which makes clear that city, county, and state government, and/or other grantees can or should use the funds they receive from these programs to conduct program evaluations and/or to strengthen their evaluation capacity-building efforts?
- USAID’s Program Cycle Policy states that “[f]unding may be dedicated within a project or activity design for implementing partners to engage in an internal evaluation for institutional learning or accountability purposes.”
- USAID’s Development Innovation Ventures (DIV) specifically references evaluations and rigorous evidence in the official solicitation: “Larger scale Stage 2 innovations (over $500,000) must include or test the evidence of impact of an innovation. This evidence of impact must be causal and rigorous—the grantee must either have rigorous underlying evidence already established, use this funding to run an evaluation with an evaluation partner, or run an evaluation with its own funding during the grant period.” More on DIV’s funding framework can be found in its evaluation criteria (see DIV’s most recent Annual Program Statement for the evaluation criteria (p. 6)).
Score
13
AmeriCorps
8.1 What were the agency’s five largest competitive programs and their appropriations amount (and were city, county, and/or state governments eligible to receive funds from these programs)?
- In FY20, the two largest competitive grant programs were:
- AmeriCorps State and National program (excluding State formula grant funds) ($253,704,774; eligible grantees: nonprofit organizations, state governments, tribal governments, local governments, institutions of higher education);
- Senior Corps RSVP program ($51,355,000; eligible grantees: nonprofit organizations, local governments).
- The Social Innovation Fund (SIF) grants were integrated into the Office of Research and Evaluation in FY19.
8.2 Did the agency use evidence of effectiveness to allocate funds in its five largest competitive grant programs? (e.g., Were evidence-based interventions/practices required or suggested? Was evidence a significant requirement?)
- The AmeriCorps State and National grants program (excluding State formula grant funds) allocated up to 44 out of 100 points in FY20 to organizations whose applications were supported by performance and evaluation data. Specifically, up to 24 points could be assigned to applications with theories of change supported by relevant research literature, program performance data, or program evaluation data, and up to 20 points for an applicant’s incoming level of evidence and the quality of that evidence. Further, in 2020 AmeriCorps prioritized funding for specific education, economic opportunity, and health interventions with moderate or strong levels of evidence.
- Since AmeriCorps’ implementation of a scoring process that assigns specific points for level of evidence, the percentage of grant dollars allocated to the strong, moderate, preliminary, and no-evidence categories has shifted over time: more FY20 grant dollars were awarded to applicants with strong and moderate levels of evidence for proposed interventions, and fewer grant dollars were awarded to applicants with little to no evidence of effectiveness. Notably, 51% of FY20 grant dollars versus 41% of FY19 grant dollars were invested in interventions with a strong or moderate evidence base.
- In FY18, Senior Corps RSVP embedded evidence into their grant renewal processes by offering supplemental funding, “augmentation grants,” to grantees interested in deploying volunteers to serve in evidence-based programs. More than $3.3 million of Senior Corps program dollars were allocated, over three years, toward new evidence-based programming augmentations. Grantees will be operating with their augmentations through fiscal year 2021.
- In a survey completed in FY20, Senior Corps grantees reported that 4,043 volunteer stations and 20,320 volunteers (10% of all volunteers) were engaged in evidence-based programming.
8.3 Did the agency use its five largest competitive grant programs to build evidence? (e.g., requiring grantees to participate in evaluations)
- AmeriCorps State and National grantees are required to evaluate their programs as part of the grant’s terms and conditions. Grantees receiving more than $500,000 are required to conduct an independent, external evaluation (see p. 23 of the FY20 notice of funding for a description of these requirements).
8.4 Did the agency use evidence of effectiveness to allocate funds in any other competitive grant programs in FY20 (besides its five largest grant programs)?
- AmeriCorps administers only two competitive grant programs, described above.
8.5 What are the agency’s 1-2 strongest examples of how competitive grant recipients achieved better outcomes and/or built knowledge of what works or what does not?
- AmeriCorps has summarized the accomplishments of its competitive grant programs in a series of research briefs that describe the core components of effective interventions in the areas of education, economic opportunity, and health. The education brief was used to justify the FY19 funding priority for evidence-based interventions in the AmeriCorps State and National competition. All interventions described in these briefs illustrate how AmeriCorps competitive grant recipients have achieved better outcomes and built knowledge about what works. The most current list was updated in FY20 and will be published as part of a larger report in the fall of 2020.
8.6 Did the agency provide guidance which makes clear that city, county, and state government, and/or other grantees can or should use the funds they receive from these programs to conduct program evaluations and/or to strengthen their evaluation capacity-building efforts?
- AmeriCorps State and National grantees, including city, county, tribal, and state governments, are required to use their AmeriCorps funds to evaluate their programs. In FY20, AmeriCorps awarded $8.5 million for the Commission Investment Fund that supports State Commissions, which are typically housed within state government; approximately one third of these grants will focus on building the capacity of State Commissions and their grantees to collect and use performance and evaluation data. AmeriCorps’ Evidence Exchange includes a suite of scaling products to help grantees replicate evidence-based interventions.
Score
13
U.S. Department of Education
8.1 What were the agency’s five largest competitive programs and their appropriations amount (and were city, county, and/or state governments eligible to receive funds from these programs)?
- ED’s top five program accounts based on actual appropriation amounts in FY20 were:
- TRIO ($1.96 billion; eligible grantees: institutions of higher education, public and private organizations);
- Charter Schools Program ($440 million; eligible grantees: local charter schools);
- GEAR UP ($365 million; eligible grantees: state agencies; partnerships that include IHEs and LEAs);
- Teacher and School Leader Incentive Program (TSL) ($200 million; eligible grantees: local education agencies, partnerships between state and local education agencies);
- Comprehensive Literacy Development Grants ($192 million; eligible grantees: state education agencies).
8.2 Did the agency use evidence of effectiveness to allocate funds in its five largest competitive grant programs? (e.g., Were evidence-based interventions/practices required or suggested? Was evidence a significant requirement?)
- ED uses evidence of effectiveness when making awards in its largest competitive grant programs.
- The vast majority of TRIO funding in FY20 was used to support continuation awards to grantees that were successful in prior competitions that awarded competitive preference priority points for projects that proposed strategies supported by moderate evidence of effectiveness. Within the TRIO program, ED will make new awards under Student Support Services. That competition provides points for applicants that propose a project with a key component in its logic model that is informed by research or evaluation findings that suggest it is likely to improve relevant outcomes.
- Under the Charter Schools Program, ED generally requires or encourages applicants to support their projects through logic models; however, applicants are not expected to develop their applications based on rigorous evidence. Within CSP, the Grants to Charter School Management Organizations for the Replication and Expansion of High-Quality Charter Schools (CMO Grants) support charter schools with a previous track record of success.
- For the 2019 competition for GEAR UP State awards, ED used a competitive preference priority for projects implementing activities that are supported by promising evidence of effectiveness. FY20 funds are supporting continuation awards.
- The TSL statute requires applicants to provide a description of the rationale for their project and describe how the proposed activities are evidence-based, and grantees are held to these standards in the implementation of the program.
- The Comprehensive Literacy Development (CLD) statute requires that grantees provide subgrants to local educational agencies that conduct evidence-based literacy interventions. ESSA requires ED to give priority to applicants demonstrating strong, moderate, or promising levels of evidence.
8.3 Did the agency use its five largest competitive grant programs to build evidence? (e.g., requiring grantees to participate in evaluations)
- The Evidence Leadership Group (ELG) advises program offices on ways to incorporate evidence in grant programs through encouraging or requiring applicants to propose projects that are based on research and by encouraging applicants to design evaluations for their proposed projects that would build new evidence.
- ED’s grant programs require some form of an evaluation report on a yearly basis to build evidence, demonstrate performance improvement, and account for the utilization of funds. For examples, please see the annual performance reports of TRIO, the Charter Schools Program, and GEAR UP. The Teacher and School Leader Incentive Program is required by ESSA to conduct a national evaluation. The Comprehensive Literacy Development Grant requires evaluation reports. In addition, IES is currently conducting rigorous evaluations to identify successful practices in TRIO-Educational Opportunities Centers and GEAR UP. In FY19, IES released a rigorous evaluation of practices embedded within TRIO-Upward Bound that examined the impact of enhanced college advising practices on students’ pathway to college.
8.4 Did the agency use evidence of effectiveness to allocate funds in any other competitive grant programs in FY20 (besides its five largest grant programs)?
- The Education Innovation and Research (EIR) program supports the creation, development, implementation, replication, and taking to scale of entrepreneurial, evidence-based, field-initiated innovations designed to improve student achievement and attainment for high-need students. The program uses three evidence tiers to allocate funds based on evidence of effectiveness, with larger awards given to applicants who can demonstrate stronger levels of prior evidence and produce stronger evidence of effectiveness through a rigorous, independent evaluation. The FY19 competition included checklists and PowerPoints to help applicants clearly understand the evidence requirements.
- ED incorporates the evidence standards established in EDGAR as priorities and selection criteria in many competitive grant programs.
8.5 What are the agency’s 1-2 strongest examples of how competitive grant recipients achieved better outcomes and/or built knowledge of what works or what does not?
- The Education Innovation and Research (EIR) program supports the creation, development, implementation, replication, and scaling up of evidence-based, field-initiated innovations designed to improve student achievement and attainment for high-need students. IES released The Investing in Innovation Fund: Summary of 67 Evaluations, which can be used to inform efforts to move to more effective practices. ED is exploring the results to determine what lessons learned can be applied to other programs.
8.6 Did the agency provide guidance which makes clear that city, county, and state government, and/or other grantees can or should use the funds they receive from these programs to conduct program evaluations and/or to strengthen their evaluation capacity-building efforts?
- In 2016, ED released non-regulatory guidance to provide state educational agencies, local educational agencies (LEAs), schools, educators, and partner organizations with information to assist them in selecting and using “evidence-based” activities, strategies, and interventions, as defined by ESSA, including carrying out evaluations to “examine and reflect” on how interventions are working. However, the guidance does not specify that federal competitive funds can be used to conduct such evaluations. Frequently, though, programs do include a requirement to evaluate the grant during and after the project period.
Score
8
U.S. Dept. of Housing & Urban Development
8.1 What were the agency’s five largest competitive programs and their appropriations amount (and were city, county, and/or state governments eligible to receive funds from these programs)?
- In FY20, HUD’s five largest competitive grant programs were:
- Continuum of Care ($2.35 billion; eligible grantees: state and local governments and coalitions)
- Lead-Hazard Reduction ($275 million; eligible grantees: local governments)
- Choice Neighborhoods Implementation ($182 million; eligible grantees: state and local governments)
- Section 202 Service Coordinators ($100 million; eligible grantees: service coordinators/housing providers)
- Indian Housing ($91 million; eligible grantees: tribes and tribally designated housing entities).
8.2 Did the agency use evidence of effectiveness to allocate funds in its five largest competitive grant programs? (e.g., Were evidence-based interventions/practices required or suggested? Was evidence a significant requirement?)
- The Continuum of Care program (CoC) provides homelessness assistance awards on the basis of system performance measures focused on outcomes and evidence of effectiveness. This includes up to 56 points (out of 200) for past “performance related to reducing homelessness” and four points for “reallocat[ing] lower performing projects to create new higher performing projects that are based on performance review of existing projects.” Additionally, a precondition for Continuum of Care applicants to be awarded FY19 expansion bonus funding was that they rank homeless assistance projects on the basis of how they improve system performance (p. 34).
- Lead Hazard Reduction Grants require applicants to demonstrate a strategic approach to addressing low-income neighborhoods with concentrated lead hazards for children. The FY20 grants required grantees to use evidence-based lead hazard control methods and meet cost-savings, productivity, and grant compliance benchmarks. The application assigned 13 points (out of 100) based on grantees’ past performance. Past research showing large returns on investment supported HUD’s decision to request a 26 percent increase in program funding for FY20, and HUD is funding studies using an implementation science framework to continue improving the efficiency and efficacy of lead interventions.
- The Indian Housing competitive grant program was established to address issues of overcrowded and physically inadequate housing identified by a PD&R needs assessment completed in 2017, Housing Needs of American Indians and Alaska Natives in Tribal Areas.
8.3 Did the agency use its five largest competitive grant programs to build evidence? (e.g., requiring grantees to participate in evaluations)
- As a condition of grant award, all HUD competitive grantees are required to cooperate (p. 5) in any HUD-sponsored research or evaluation studies.
- The Continuum of Care program is supported by the National Homeless Data Analysis Project, which provides communities with resources to improve data collection and consistent reporting about individuals experiencing homelessness to support national Annual Homeless Assessment Reports.
- HUD Lead Paint grantees are required to integrate evidence into their work by conducting clearance testing of all housing units treated. Technical studies provide evidence to improve lead hazard detection, evaluation, and control technologies, as well as implementation, and rigorous evaluation has demonstrated the large return on investment related to children’s health from controlling lead hazards.
- All HUD-funded programs require recipients to submit, not less than annually, a report documenting achievement of outcomes under the purpose of the program and the work plan in the award agreement for accountability purposes and to build evidence of effective practices in the field.
8.4 Did the agency use evidence of effectiveness to allocate funds in any other competitive grant programs in FY20 (besides its five largest grant programs)?
- HUD’s Housing Counseling Grant Program ($43 million in FY19) provides counseling services to tenants and homeowners. One of the program’s main objectives is to “Distribute federal financial support to housing counseling agencies based on past performance.” As such, the program allocates seven points (out of 100) for past performance based on “the positive impacts that an Applicant’s housing counseling services had on clients.” HUD scores this item based on its own performance records.
- HUD continues to extend the Standards for Success reporting framework to additional competitive grant programs. This framework establishes performance outcomes that will both drive performance and determine future funding recipients by providing strategically aligned performance metrics that are standardized and sufficiently granular to show relative effectiveness.
8.5 What are the agency’s 1-2 strongest examples of how competitive grant recipients achieved better outcomes and/or built knowledge of what works or what does not?
- Continuum of Care programs are the nation’s primary structure for assisting people experiencing homelessness. Over more than a decade, increased CoC effectiveness has been supported by Homeless Management Information Systems and evidence-based funding of increased permanent supportive housing. As a result, the estimated number of chronically homeless individuals declined 27 percent between 2010 and 2016; subsequent increases in unsheltered chronically homeless individuals, however, motivated increases in emergency shelter beds. Following federal criteria, 78 communities and 3 states have effectively ended veteran homelessness.
8.6 Did the agency provide guidance which makes clear that city, county, and state government, and/or other grantees can or should use the funds they receive from these programs to conduct program evaluations and/or to strengthen their evaluation capacity-building efforts?
- HUD operates a centralized evaluation program under the guidance of the evaluation officer. As a condition of grant award, all HUD competitive grantees are required to cooperate in any HUD-sponsored research or evaluation studies and to provide program monitoring data. A number of program statutes do not authorize formal evaluation as an eligible use of program funds. HUD also provides technical assistance to strengthen grantees’ evaluation and performance management capacity.
- The Continuum of Care FY19 homelessness assistance program NOFA offers one point to applicants who propose to use requested funds to improve their ability to evaluate the outcomes of projects funded by the CoC Program and the Emergency Solutions Grants program (p. 39).
Score
6
6
U.S. Department of Labor
8.1 What were the agency’s five largest competitive programs and their appropriations amount (and were city, county, and/or state governments eligible to receive funds from these programs)?
- In FY20, the five largest competitive programs and their appropriation amounts were:
- Scaling Apprenticeship Through Sector-Based Strategies ($183,800,000; eligible grantees: public/private partnerships where lead applicants are institutions of higher education (IHE) representing a consortium of IHEs, or a state system of higher education, such as a community college system office or single state higher education board);
- Expanding Opportunity Through Industry Recognized Apprenticeship Programs (IRAP) ($150,000,000; expected Funding Opportunity Announcement (FOA) release FY19);
- Apprenticeships: Closing the Skills Gap ($100,000,000; eligible grantees: public/private partnerships where lead applicants are IHEs or an IHE representing a consortium of IHEs, a state system of higher education, such as a community college system office or a single state higher education board, a nonprofit trade, industry, or employer association, labor unions, or labor-management organizations);
- Reentry Projects ($82,000,000; eligible grantees: non-profit organizations, state or local governments, Indian and Native American entities eligible for grants under Section 166 of the Workforce Innovation and Opportunity Act (WIOA)); and
- YouthBuild ($80,000,000; eligible grantees: private non-profit or public agencies including community and faith-based organizations, local workforce development boards or one-stop center partner programs, educational institutions, community action agencies, state or local housing development agencies, any Indian and Native American entity eligible for grants under Section 166 of WIOA, community development corporations, state or local youth service conservation corps, and any other public or private non-profit entity that is eligible to provide education or employment training under a federal program and can meet the required elements of the grant).
8.2 Did the agency use evidence of effectiveness to allocate funds in its five largest competitive grant programs? (e.g., Were evidence-based interventions/practices required or suggested? Was evidence a significant requirement?)
- The Reentry Projects funding opportunity provides up to eight points (out of 100) for past performance. Grant applicants must specifically provide information on their performance goals. The application states, “[a]ll applicants must specifically address the placement in education or employment and certificate/degree attainment outcomes.”
- The Employment & Training Administration’s (ETA) YouthBuild applicants are also awarded points based on past performance (a possible 28 points out of 100), with ETA viewing these metrics as important for demonstrating successful career outcomes for youth.
8.3 Did the agency use its five largest competitive grant programs to build evidence? (e.g., requiring grantees to participate in evaluations)
- All five of DOL’s largest grant programs have been or will be involved in evaluations designed by the Chief Evaluation Office (CEO) and the relevant DOL agencies. In each case, DOL required or encouraged grantees (through language in the funding announcement and proposal review criteria) to use evidence-based models or strategies in grant interventions and/or to participate in an evaluation, especially to test new interventions that theory or research suggests are promising.
- For example, DOL will conduct an implementation evaluation of the Sector-Based Apprenticeship Program. This evaluation will also include an impact evaluation design options paper identifying opportunities for testing the impacts of apprenticeship strategies on employment and other outcomes. The objective of the study is to identify innovative and promising models, practices, and partnership strategies for expanding apprenticeship opportunities in high-growth occupations and industries, thereby building the evidence base on apprenticeship. The contract includes options for more rigorous evaluations as appropriate.
- Additionally, DOL currently has an evaluation underway of the Reentry Projects grant program. The program used a tiered evidence framework: applicants had to propose evidence-based or evidence-informed interventions, new interventions that theory or research suggests are promising, or a combination of both, that lead to increased employment outcomes for their target populations, and had to frame their goals and objectives accordingly. The evaluation will identify and assess promising practices used in reentry employment programs through both an implementation study and an impact study among select grantees, to understand their effectiveness in improving participant outcomes such as employment, earnings, and recidivism.
8.4 Did the agency use evidence of effectiveness to allocate funds in any other competitive grant programs (besides its five largest grant programs)?
- DOL allocates funds based both on demonstrated effectiveness and on commitments to building new evidence; the two carry equal weight because many DOL-funded program areas lack a body of evidence sufficient to fund only approaches that are already evidence-based. Among current Employment & Training Administration (ETA) competitive grant programs, this has involved requiring: (1) a demonstration that an approach is evidence-based or promising as a condition of receiving funds (e.g., the Reentry Funding Opportunity Announcement) or of receiving additional funds (e.g., TechHire); (2) an independent third-party local or grantee evaluation, with priority incentives for rigorous designs (e.g., tiered funding, scoring priorities, bonus scoring for evidence-based interventions or multi-site rigorous tests); or (3) full participation in a federal evaluation as well as rigorous grantee (or local) evaluations. Additionally, applicants for the Bureau of International Labor Affairs’ (ILAB) competitive funding opportunities are required to conduct and/or participate in evaluations as a condition of award.
8.5 What are the agency’s 1-2 strongest examples of how competitive grant recipients achieved better outcomes and/or built knowledge of what works or what does not?
- In 2015, DOL funded an evaluation of the 36-month Linking Employment Activities Pre-Release (LEAP) program, which included an implementation study of LEAP pilot programs that provided jail-based American Job Centers (AJCs) to individuals preparing to re-enter society after time in jail. The evaluation identified many promising practices for offering both pre- and post-release services; its findings were published in 2018 (see the Final Report and Issue Brief Compendium). In 2020, DOL funded the 42-month Pathway Home Pilot Project and an accompanying evaluation that builds on lessons learned from LEAP by providing workforce services to incarcerated individuals pre- and post-release. For example, the Pathway Home grant requirement that participants keep the same caseworker pre- and post-release was suggested as a promising practice in the LEAP implementation study.
- DOL funded a national evaluation of the Trade Adjustment Assistance Community College and Career Training (TAACCCT) grant program, which was a $1.9 billion initiative consisting of four rounds of grants, from 2011 to 2018. The grants were awarded to institutions of higher education (mainly community colleges) to build their capacity to provide workforce education and training programs. The implementation study assessed the grantees’ implementation of strategies to better connect and integrate education and workforce systems, address employer needs, and transform training programs and services to adult learners. The synthesis identified key implementation and impact findings based on a review of evaluation reports completed by grantees’ third-party evaluators. The outcomes study examined the training, employment, earnings, and self-sufficiency outcomes of nearly 2,800 participants from nine grants in Round 4. Findings from these studies provide evidence-based practices and insights that are being applied to the new Strengthening Community College Initiative Funding Opportunity Announcement, as well as future DOL investments.
8.6 Did the agency provide guidance which makes clear that city, county, and state government, and/or other grantees can or should use the funds they receive from these programs to conduct program evaluations and/or to strengthen their evaluation capacity-building efforts?
- DOL has a formal Evaluation Policy. Guidance on using funds to conduct and/or participate in program evaluations and/or to strengthen evaluation capacity-building efforts can be found in each grant funding opportunity and is a condition of each grant. The “Special Program Requirements” section of the respective grant funding opportunity notifies grantees of this responsibility. Generally, this section states: “As a condition of grant award, grantees are required to participate in an evaluation, if undertaken by DOL. The evaluation may include an implementation assessment across grantees, an impact and/or outcomes analysis of all or selected sites within or across grantees, and a benefit/cost analysis or assessment of return on investment. Conducting an impact analysis could involve random assignment (which involves random assignment of eligible participants into a treatment group that would receive program services or enhanced program services, or into control group(s) that would receive no program services or program services that are not enhanced). We may require applicants to collect data elements to aid the evaluation. As a part of the evaluation, as a condition of award, grantees must agree to: (1) make records available to the evaluation contractor on participants, employers, and funding; (2) provide access to program operating personnel, participants, and operational and financial records, and any other pertaining documents to calculate program costs and benefits; (3) in the case of an impact analysis, facilitate the assignment by lottery of participants to program services (including the possible increased recruitment of potential participants); and (4) follow evaluation procedures as specified by the evaluation contractor under the direction of DOL, including after the period of operation.” After award, grantees will receive detailed guidance on ETA’s evaluation methodology, including requirements for data collection. Grantees will receive technical assistance to support their participation in these activities.
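The assignment-by-lottery design described in this boilerplate can be illustrated with a minimal sketch. The function below is hypothetical and greatly simplified; actual DOL evaluations are carried out by independent contractors, typically with stratification by site and enrollment cohort.

```python
import random

def assign_lottery(participant_ids: list[str], seed: int = 42) -> dict[str, str]:
    """Randomly assign eligible participants to treatment or control.

    A simple 50/50 lottery; real evaluation contractors typically
    stratify by site and cohort and document the procedure for audit.
    """
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = rng.sample(participant_ids, k=len(participant_ids))
    half = len(shuffled) // 2
    assignment = {pid: "treatment" for pid in shuffled[:half]}
    assignment.update({pid: "control" for pid in shuffled[half:]})
    return assignment

# Example: ten eligible applicants, five receive program services.
print(assign_lottery([f"P{i:03d}" for i in range(10)]))
```

Because eligibility is determined before the lottery and assignment is random, any later difference in outcomes between the two groups can be attributed to the program rather than to who chose to enroll, which is what makes this design the benchmark for impact analysis.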
Score
15
15
Millennium Challenge Corporation
8.1 What were the agency’s five largest competitive programs and their appropriations amount (and were city, county, and/or state governments eligible to receive funds from these programs)?
- MCC awards all of its agency funds through two competitive grants: (1) the compact program ($634.5 million in FY20; eligible grantees: developing countries) and (2) the threshold program ($30.0 million in FY20; eligible grantees: developing countries).
8.2 Did the agency use evidence of effectiveness to allocate funds in its five largest competitive grant programs? (e.g., Were evidence-based interventions/practices required or suggested? Was evidence a significant requirement?)
- For country partner selection under the compact and threshold competitive programs, MCC uses 20 different indicators within the categories of economic freedom, investing in people, and ruling justly to determine country eligibility for program assistance. These objective indicators of a country’s performance are collected by independent third parties (a simplified sketch of this indicator-based screening appears after this list).
- When considering granting a second compact, MCC further considers whether countries have (1) exhibited successful performance on their previous compact; (2) improved Scorecard performance during the partnership; and (3) exhibited a continued commitment to further their sector reform efforts in any subsequent partnership. As a result, the MCC Board of Directors has an even higher standard when selecting countries for subsequent compacts. Per MCC’s policy for Compact Development Guidance (p. 6): “As the results of impact evaluations and other assessments of the previous compact program become available, the partner country must use this data to inform project proposal assessment, project design, and implementation approaches.”
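As a rough illustration of indicator-based eligibility screening, the sketch below applies a simple pass-at-least-half rule relative to the peer-group median. The country names, indicator names, data, and threshold are all hypothetical simplifications for exposition; MCC’s actual scorecard criteria are more detailed.

```python
from statistics import median

# Hypothetical indicator data: each country's score on a few third-party
# indicators (names simplified; MCC uses 20 indicators across three categories).
INDICATORS = {
    "Country A": {"control_of_corruption": 0.61, "girls_education": 0.75, "fiscal_policy": 0.48},
    "Country B": {"control_of_corruption": 0.42, "girls_education": 0.70, "fiscal_policy": 0.66},
    "Country C": {"control_of_corruption": 0.35, "girls_education": 0.40, "fiscal_policy": 0.30},
}

def passes_scorecard(country: str, data: dict) -> bool:
    """Pass/fail screen: the country must beat the peer-group median on at
    least half of the indicators (a simplified stand-in for MCC's rules)."""
    scores = data[country]
    passed = sum(
        1 for name, value in scores.items()
        if value > median(d[name] for d in data.values())
    )
    return passed >= len(scores) / 2

for country in INDICATORS:
    print(country, "eligible" if passes_scorecard(country, INDICATORS) else "not eligible")
```

The key property the sketch captures is that eligibility turns on independently collected data rather than on the agency’s own judgment, which is what makes the screen objective and repeatable from year to year.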
8.3 Did the agency use its five largest competitive grant programs to build evidence? (e.g., requiring grantees to participate in evaluations)
- Per its Policy for Monitoring and Evaluation (M&E), MCC requires independent evaluations of every project to assess progress in achieving outputs and outcomes and to support program learning, based on defined evaluation questions, throughout the lifetime of the project and beyond. As described above, MCC publicly releases all of these evaluations on its website and uses the findings, in collaboration with stakeholders and partner countries, to build evidence in the field so that policymakers in the United States and in partner countries can leverage MCC’s experiences to develop future programming. In line with the Policy for M&E, MCC projects are required to submit quarterly Indicator Tracking Tables showing progress toward projected targets.
8.4 Did the agency use evidence of effectiveness to allocate funds in any other competitive grant programs in FY20 (besides its five largest grant programs)?
- MCC uses evidence of effectiveness to allocate funds in all its competitive grant programs as noted above.
8.5 What are the agency’s 1-2 strongest examples of how competitive grant recipients achieved better outcomes and/or built knowledge of what works or what does not?
- MCC’s compact in Burkina Faso improved educational infrastructure, renovating 396 classrooms in 132 primary schools and funding ancillary educational needs for students (e.g., latrines, school supplies, and food) and adults (e.g., teachers’ housing and gender-sensitivity training). A rigorous impact evaluation found that in intervention schools, overall student enrollment increased by 6% and girls’ enrollment by 10.3%, and that students had higher test scores, higher primary school graduation rates, and lower early marriage rates. In completing this program, MCC learned that addressing the factors that specifically threaten female education helps girls access and remain in school. Additionally, addressing schools’ weak educational quality (e.g., curriculum, faculty, management), coupled with improving the quality of students’ access to and facilities for education, should further improve students’ learning. This learning has since been applied in current education investments.
8.6 Did the agency provide guidance which makes clear that city, county, and state government, and/or other grantees can or should use the funds they receive from these programs to conduct program evaluations and/or to strengthen their evaluation capacity-building efforts?
- As described above, MCC develops a Monitoring & Evaluation (M&E) Plan for every grantee, which describes the independent evaluations that will be conducted, the key evaluation questions and methodologies, and the data collection strategies that will be employed. As such, grantees use program funds for evaluation.
- MCC’s Policy for Monitoring and Evaluation stipulates that the “primary responsibility for developing the M&E Plan lies with the MCA [grantee] M&E Director with support and input from MCC’s M&E Lead and Economist. MCC and MCA Project/Activity Leads are expected to guide the selection of the indicators at the process and output levels that are particularly useful for management and oversight of activities and projects.” The M&E policy is intended primarily to guide MCC and partner country staff decisions to utilize M&E effectively throughout the entire program life cycle in order to improve outcomes. All MCC investments also include M&E capacity-building for grantees.
Score
6
6
Substance Abuse and Mental Health Services Administration
8.1 What were the agency’s five largest competitive programs and their appropriations amount (and were city, county, and/or state governments eligible to receive funds from these programs)?
- In FY20, the five largest competitive grant programs are:
- State Opioid Response Grants ($1.5 billion; eligible applicants: states);
- Children’s Mental Health Services ($1.25 billion; eligible applicants: states, tribes, communities, territories);
- Strategic Prevention Framework ($119.5 million; eligible applicants: public and private nonprofit entities);
- Targeted Capacity Expansion – General ($100.2 million; eligible applicants: domestic public and private nonprofit entities);
- Project AWARE ($92 million; eligible applicants: state education agencies).
8.2 Did the agency use evidence of effectiveness to allocate funds in its five largest competitive grant programs? (e.g., Were evidence-based interventions/practices required or suggested? Was evidence a significant requirement?)
- The FY20 State Opioid Response Grants application required states to use evidence-based practices to address opioid use disorder (p. 19) as one of five evaluation criteria; however, the application did not allot points to the individual criteria.
- The FY20 Strategic Prevention Framework Grants application states that applicants are expected to use evidence-based practices (p. 8), but this expectation does not factor into the evaluation of applications (pp. 14-16).
- The FY20 Project AWARE State Education Agency Grants application gave applicants 25 out of 100 points for the following: “Identify the Evidence-Based Practice(s) (EBPs) that will be used in each of the three LEAs [local educational agencies]. Discuss how each EBP chosen is appropriate for your population(s) of focus and the outcomes you want to achieve. Describe any modifications that will be made to the EBP(s) and the reason the modifications are necessary” (p. 21).
- The FY19 Targeted Capacity Expansion Grants application gave applicants 25 out of 100 points for proposing evidence-based services or practices (p. 19).
8.3 Did the agency use its five largest competitive grant programs to build evidence? (e.g., requiring grantees to participate in evaluations)
- The FY20 Strategic Prevention Framework Grants application states that SAMHSA may negotiate additional terms and conditions with applicants prior to grant award, including “requirements relating to participation in a cross-site evaluation” (p. 51).
- The FY20 Project AWARE State Education Agency Grants application states that SAMHSA may negotiate additional terms and conditions with applicants prior to grant award, including “requirements relating to participation in a cross-site evaluation” (p. 58).
- The FY19 Targeted Capacity Expansion Grants application stated that SAMHSA may negotiate additional terms and conditions with applicants prior to grant award, including “requirements relating to participation in a cross-site evaluation” (p. 57).
8.4 Did the agency use evidence of effectiveness to allocate funds in any other competitive grant program (besides its five largest grant programs)?
- No examples available.
8.5 What are the agency’s 1-2 strongest examples of how competitive grant recipients achieved better outcomes and/or built knowledge of what works or what does not?
- No examples available.
8.6 Did the agency provide guidance which makes clear that city, county, and state government, and/or other grantees can or should use the funds they receive from these programs to conduct program evaluations and/or to strengthen their evaluation capacity-building efforts?
- No examples available.