Office of Population Affairs (OPA) evaluation grantees receive training and technical assistance (TA) to ensure their evaluations are designed and implemented to meet research quality standards. OPA offers evaluation TA through a variety of mechanisms, including individual TA, group training, webinars, and written documents.
FY 2018 Teen Pregnancy Prevention (TPP) Tier 2 grantees, who are implementing new and innovative strategies to prevent teen pregnancy, will receive evaluation TA for conducting formative and process/implementation evaluations. Mathematica Policy Research, along with the Center for Relationship Education, will provide coordination, training, and support related to implementation evaluation to position grantees for successful delivery of TPP services to teens and to inform related national efforts.
Training
- March 9, 2020: Qualitative Analysis; Summary, Slides, Transcript, Video
- February 28, 2020: Pre-Post Outcome Analyses; Summary, Slides, Transcript, Video
- January 16, 2020: Assessing Need, Demand, and Local Context; Slides, Transcript, Video
- September 28, 2017: Extending Your Reach: Mounting an Integrated Communications Strategy; Slides, Transcript
- April 5, 2016: Technical Assistance Webinar: Identifying Appropriate Data Sources for Community-Level Evaluations: Tier 1B Grantees and Evaluators; Slides, Audio, Transcript
- February 4, 2016: Technical Assistance Webinar: Designing Community-Level Evaluations: Tier 1B Grantees and Evaluators; Slides, Audio, Transcript
- November 2014: Getting Your Message Heard: Simple and Successful Dissemination; Slides, Audio, Transcript
- June 23, 2011: Working Together: Program Staff’s Role in Effectiveness Evaluations; Slide Set 1, Slide Set 2, Transcript
- Tip Sheet: Evaluation Strategies for Virtual Implementation in Response to COVID-19 (2020): Transitioning quickly to online delivery can be challenging, but it’s also an opportunity to test new approaches to providing content. This tip sheet offers considerations for transitioning implementation and evaluation online, including assessing need and demand, demonstrating feasibility, continuously improving quality, documenting lessons learned, and collecting data.
- Building and Retaining Partnerships Tip Sheet (September 2020): The key to a successful program is creating strong partnerships and maintaining them, especially when conducting an evaluation. Strong partnerships can establish your program within the community, helping sustain it throughout the grant period and into the future.
- Strategies for Engaging Parents and Caregivers Tip Sheet (July 2020): Engaging parents and other caregivers in programming for teen pregnancy prevention can help reinforce lessons that youth receive through a program. This tip sheet identifies strategies to engage parents in teen pregnancy prevention programming, focusing on ways to address attitudinal, interpersonal, and structural barriers to participation.
- Fidelity Monitoring Tip Sheet (July 2020): Fidelity refers to implementing a program or curriculum as intended, which can be a key factor in its success and in whether it improves youth outcomes. Fidelity monitoring refers to a system of measuring and analyzing the degree to which a program is implemented as intended. This tip sheet guides TPP grantees through designing and using a fidelity monitoring system.
- Documenting Adaptations Tip Sheet (July 2020): An adaptation is a change to a program’s content, delivery, or core components. An adaptation can be minor or major. This tip sheet explains why it is important to document adaptations and guides TPP grantees conducting evaluations on how to adapt a program.
- Focus Group Tip Sheet (April 2020): A focus group is a way to collect data in which a group of participants gathers to share knowledge, voices, opinions, beliefs, and attitudes about a specific topic or concept. Researchers moderate the small group conversation to collect data that help answer key research questions. Focus groups can help researchers learn rich details of participants’ experiences.
- Interviewing Tip Sheet (April 2020): An interview is a way to collect data in which an individual shares knowledge, voice, opinions, beliefs, and attitudes about a specific topic or concept. Interviews can help researchers learn nuanced details about a stakeholder’s experience.
- Observation Tip Sheet (April 2020): An observation is a way of using your senses—most often vision and hearing—to collect data. Researchers document their observations of activities, events, and interactions. Observations can provide information on fidelity, quality, engagement, and other topics.
- Understanding How Components of an Intervention Can Influence Outcomes (March 2020): Teen pregnancy prevention interventions are often made up of many program components. This brief describes the foundational elements of program components analysis, the types of research questions that can be asked about how program components influence outcomes, analytic approaches for answering those questions, and guidance on reporting and interpreting results.
- Institutional Review Board Tip Sheet (January 2020): This tip sheet provides a brief overview of institutional review boards (IRBs) and tips on navigating the IRB process for OPA-funded grantees conducting an implementation evaluation.
- Tier 1B Grant Implementation Study Planning (September 2017): OAH (now part of OPA) Tier 1B grantees need well-designed implementation studies that can assess the successes and challenges of implementing the Tier 1B grant project. This brief guides grantees through the initial steps of implementation study design, including research question selection and prioritization, data source mapping, and study timeline development.
- Should Teen Pregnancy Prevention Studies Randomize Students or Schools? The Power Tradeoffs Between Contamination Bias and Clustering (September 2017): Evaluators of TPP programs implemented in schools face a difficult tradeoff in selecting the level of randomization. If schools are randomized, the study’s statistical power is reduced by the larger standard errors that result from clustering. If students are randomized within schools, the study’s power is potentially reduced by the attenuation bias that can occur when members of the program group date members of the control group (contamination bias). This brief quantifies this tradeoff to help evaluators choose the best unit of randomization; a simple design-effect illustration follows this list. See the Appendix for information about the analytic approach, simulation findings for alternative model assumptions, and tables of descriptive statistics.
- Estimating Program Effects on Program Participants (September 2017): A randomized experiment provides the opportunity to calculate an unbiased estimate of the effect of an intervention. Specifically, an intent-to-treat (ITT) analysis allows researchers to credibly estimate the effect of the offer of an intervention. However, not all participants comply with their assigned condition, and this non-compliance can lead to an underestimate of the effect of actually receiving the intervention. This brief describes analytic approaches for estimating a credible treatment-on-the-treated (TOT) estimate as a supplement to the ITT estimate and provides guidance on how to report this finding in a final report or journal article; a sketch of one common TOT adjustment follows this list.
- Selecting Benchmark and Sensitivity Analyses (September 2017): Researchers make a number of decisions in how they prepare and analyze their data to show program effectiveness, and these decisions can influence the findings of an evaluation. This brief highlights common situations in which TPP researchers make decisions that might influence findings (e.g., handling inconsistent or missing data, statistically adjusting for covariates or blocks), suggests approaches to use as benchmark and sensitivity analyses for each of these decision points, especially in the context of the HHS evidence review, and offers guidance on ways to present and interpret benchmark and sensitivity results in reports or journal articles.
- An Overview of Economic Evaluation Methods (October 2016): Programs need cost data to estimate how much it costs to deliver a program, to understand the resources they use, and to answer other questions about the cost of teen pregnancy prevention programming. This brief describes several economic evaluation methodologies and discusses how to plan for and collect the cost data necessary for these analyses.
- Structural Elements of an Intervention (October 2016): Program developers want to be able to accurately describe their interventions and understand which pieces of the intervention contribute to changes in participant outcomes. This brief provides guidance on unpacking interventions—specifically, it discusses how to dissect an intervention into its structural elements (core components), measure aspects of implementation related to structural elements, and assess how those structural elements influence participant outcomes.
- TA Brief for Tier 1B Grantees: How Study Design Influences Statistical Power in Community-Level Evaluations (September 2016): The choice of study design has important implications for the sample size (number of communities) needed to detect policy-relevant impacts. This brief uses a general example to show how statistical power varies across three types of quasi-experimental community-level designs.
- TA Brief for Tier 1B Grantees: Data Sources for Community-Level Outcomes (August 2016): OAH (now part of OPA) Tier 1B evaluations need data that can assess the impact of multiple strategies on a whole community or set of communities. This brief reviews general factors to consider when choosing data sources and highlights advantages and disadvantages of four types of administrative/secondary data sources.
- TA Brief for Tier 1B Grantees: Defining Treatment Communities and Estimating Community Impacts (June 2016): Evaluations of OAH (now part of OPA) Tier 1B projects are designed to assess impacts of the broad strategy at the community level. This brief provides guidance to grantees and evaluators on how to define treatment communities for evaluation purposes and includes a basic example of how to estimate and interpret a community-level impact.
- Developing and Implementing Systems for Tracking Recruitment and Retention for Programs Participating in Effectiveness Evaluations (June 2016): To ensure a program meets enrollment targets, it is essential to monitor the flow of enrollees through the various stages of a recruitment process. This brief provides researchers and practitioners with tools to track both recruitment and retention in TPP programs.
- Recommendations for Successfully Recruiting and Retaining School Participation in a Teen Pregnancy Prevention Impact Evaluation (June 2015): This brief complements the brief on “Recommendations for Successfully Recruiting and Retaining District Participation in a Teen Pregnancy Prevention Impact Evaluation.” Once school district approval is received, school recruitment can begin. This brief provides steps for securing schools’ interest and participation in a TPP impact evaluation.
- Recommendations for Successfully Recruiting and Retaining District Participation in a Teen Pregnancy Prevention Impact Evaluation (June 2015): Almost all school districts require approval before conducting a program evaluation within their schools. Permission must first come from school districts and then individual schools can be recruited. This brief provides steps for obtaining district approval for an evaluation of a TPP program implemented in a school setting.
- Calculating Minimum Detectable Impacts in Teen Pregnancy Prevention Impact Evaluations (December 2014): A common goal of a teen pregnancy prevention impact evaluation is to show that the intervention being tested has a positive and statistically significant effect on participant behavioral outcomes. This brief provides an overview of how researchers can calculate the minimum detectable impacts (MDIs) for a given evaluation, which is analogous to a "power calculation." An accompanying Excel tool allows evaluators to calculate MDIs for their own impact evaluations, and example calculations are presented in the brief; a simple MDI calculation is also sketched after this list.
- Using the Linear Probability Model to Estimate Impacts on Binary Outcomes in Randomized Controlled Trials (December 2014): Researchers are often apprehensive about using linear regression when the outcomes being examined are binary (yes/no) responses. This brief provides a technical explanation for why the linear probability model (the linear regression methodology used for binary outcomes) is appropriate in the context of calculating impacts in an evaluation; an example linear probability model estimated on illustrative data follows this list.
- Sample Attrition in Teen Pregnancy Prevention Impact Evaluations (November 2014): A randomized controlled trial (RCT) can produce an unbiased estimate of the effect of an intervention. However, when only a small or non-representative subset of the initially assigned sample is used to estimate the effect (e.g., the set of individuals who respond to a follow-up survey), the resulting estimate of program effectiveness may be biased. This brief outlines how non-response (i.e., sample attrition) affects individual- and cluster-level RCTs, how the bias from attrition can be assessed, and strategies to limit sample attrition in teen pregnancy prevention evaluations.
- Baseline Inequivalence and Matching (November 2014): For a study to provide compelling evidence of program effectiveness, the intervention and comparison groups should be equivalent on key characteristics measured at baseline. This brief discusses why baseline equivalence is important and how it can be assessed, and provides guidance on matching methods that can improve baseline equivalence and strengthen the evidence of the effect of teen pregnancy prevention interventions; a simple baseline equivalence check is sketched after this list.
- Coping with Missing Data in Randomized Controlled Trials (May 2013): Missing outcome data can pose a serious threat to the validity of experimental impact estimates. This brief provides guidance on how to manage this issue if it occurs, including strategies for clearly describing the problem and for using valid statistical methods to adjust for it.
- Estimating Program Impacts for a Subgroup Defined by Post-Intervention Behavior: Why is it a Problem? What is the Solution? (December 2012): The impact of teenage pregnancy prevention programs on an outcome like contraceptive use for sexually active youth is often of interest to researchers and policymakers. This brief describes a serious pitfall in estimating program impacts on outcomes such as contraceptive use among only a subgroup of youth who are sexually active at follow-up—a strategy that likely produces biased estimates even in a study with a random assignment design. This brief illustrates the source and nature of the bias, and offers alternative strategies for analyzing impacts on sexual risk behavior that produce unbiased estimates by maintaining the integrity of the random assignment design.
- Planning Evaluations Designed to Meet Scientific Standards: Communicating Key Components of the Plan for a Rigorous and Useful Evaluation of a Teenage Pregnancy Prevention Program (July 2011): This brief discusses planning effectiveness evaluations that will meet HHS evidence standards, while also being useful to decision makers, and discusses approaches for clearly communicating key evaluation plan components to funders.
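The following sketch illustrates the randomization tradeoff discussed in the brief above on randomizing students versus schools. It is a minimal illustration, not the brief's simulation code: the intraclass correlation (ICC), cluster size, true impact, and contamination rate are placeholder assumptions chosen only to show the arithmetic, and the simple "shrink the contrast by the contamination rate" adjustment is a rough simplification.

```python
def design_effect(icc: float, cluster_size: int) -> float:
    """Variance inflation from randomizing intact clusters (e.g., schools)
    instead of individual students."""
    return 1 + (cluster_size - 1) * icc

def effective_sample_size(n_total: int, icc: float, cluster_size: int) -> float:
    """Sample size after deflating for clustering."""
    return n_total / design_effect(icc, cluster_size)

def contaminated_contrast(true_impact: float, contamination_rate: float) -> float:
    """Within-school randomization: if some control students are exposed to the
    program (contamination), the measured treatment-control contrast shrinks."""
    return true_impact * (1 - contamination_rate)

# Placeholder values for illustration only.
n_students, per_school, icc = 2000, 50, 0.02
print(f"School-level RCT, effective sample size: "
      f"{effective_sample_size(n_students, icc, per_school):.0f} of {n_students}")
print(f"Student-level RCT, measurable contrast if the true impact is 0.20 and "
      f"15% of controls are contaminated: {contaminated_contrast(0.20, 0.15):.2f}")
```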
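One common way to obtain the treatment-on-the-treated (TOT) estimate described in the brief above on estimating program effects on participants is the Bloom adjustment, which rescales the ITT estimate by the difference in program receipt between the two groups. The sketch below assumes that simple adjustment and uses made-up numbers; it is not necessarily the approach the brief recommends.

```python
def bloom_tot(itt_estimate: float, itt_se: float,
              treatment_take_up: float, control_crossover: float = 0.0):
    """Bloom-style TOT adjustment: scale the ITT estimate (and its standard
    error) by the difference in program receipt between the two groups."""
    compliance_gap = treatment_take_up - control_crossover
    if compliance_gap <= 0:
        raise ValueError("Program receipt must be higher in the treatment group.")
    return itt_estimate / compliance_gap, itt_se / compliance_gap

# Illustrative numbers only: a 3 percentage-point ITT impact when 75% of the
# treatment group attended the program and 5% of the control group crossed over.
tot, tot_se = bloom_tot(itt_estimate=0.03, itt_se=0.012,
                        treatment_take_up=0.75, control_crossover=0.05)
print(f"TOT estimate: {tot:.3f} (SE {tot_se:.3f})")
```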
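A minimum detectable impact of the kind discussed in the MDI brief above can be computed in a few lines of code. The sketch below assumes an individual-level RCT, a two-sided test, and an effect size (standard deviation) metric; the sample size, treatment share, and covariate R-squared are placeholder inputs, and this is not the brief's accompanying Excel tool.

```python
from scipy.stats import norm

def minimum_detectable_impact(n: int, treatment_share: float = 0.5,
                              alpha: float = 0.05, power: float = 0.80,
                              r_squared: float = 0.0) -> float:
    """Minimum detectable impact in standard deviation units for an
    individual-level RCT with a two-sided test.
    r_squared: share of outcome variance explained by baseline covariates."""
    multiplier = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    variance = (1 - r_squared) / (treatment_share * (1 - treatment_share) * n)
    return multiplier * variance ** 0.5

# Placeholder inputs: 800 youth, half assigned to treatment, and a baseline
# covariate (e.g., a pre-test) explaining 20% of the outcome variance.
print(f"MDI: {minimum_detectable_impact(n=800, r_squared=0.20):.3f} SD")
```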
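The linear probability model described in the brief above is ordinary least squares applied to a binary outcome, typically paired with heteroskedasticity-robust standard errors. The sketch below runs that model on simulated data; the variable names (treatment, baseline_risk, outcome) and the data-generating values are illustrative assumptions, not drawn from any grantee data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated records standing in for evaluation data; names are illustrative.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),     # random assignment indicator
    "baseline_risk": rng.normal(0, 1, n),   # baseline covariate
})
# Binary outcome (e.g., reported risk behavior at follow-up).
prob = 0.35 - 0.05 * df["treatment"] + 0.08 * df["baseline_risk"]
df["outcome"] = rng.binomial(1, prob.clip(0.01, 0.99))

# Linear probability model: OLS on the binary outcome with
# heteroskedasticity-robust (HC2) standard errors.
model = smf.ols("outcome ~ treatment + baseline_risk", data=df).fit(cov_type="HC2")
print(model.summary().tables[1])  # treatment coefficient = impact in percentage points
```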
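Baseline equivalence of the kind discussed in the matching brief above is often summarized with standardized mean differences between the intervention and comparison groups. The sketch below computes that statistic on simulated data; the column names and values are placeholder assumptions, and the brief and the HHS evidence review should be consulted for the thresholds that apply.

```python
import numpy as np
import pandas as pd

def standardized_mean_difference(df: pd.DataFrame, covariate: str,
                                 group_col: str = "treatment") -> float:
    """Difference in group means divided by the pooled standard deviation,
    a common summary of baseline equivalence on one characteristic."""
    treat = df.loc[df[group_col] == 1, covariate]
    comp = df.loc[df[group_col] == 0, covariate]
    pooled_sd = np.sqrt((treat.var(ddof=1) + comp.var(ddof=1)) / 2)
    return (treat.mean() - comp.mean()) / pooled_sd

# Illustrative data; column names are placeholders.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, 500),
    "age": rng.normal(15.5, 1.2, 500),
    "baseline_knowledge": rng.normal(60, 10, 500),
})
for cov in ["age", "baseline_knowledge"]:
    print(f"{cov}: SMD = {standardized_mean_difference(df, cov):.3f}")
```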
Evaluation Updates
Evaluation updates are developed to share answers to frequently asked questions from grantees.
- May 2017: Evaluations at a Glance
- December 2016: Evaluations at a Glance
- June 2015: FAQs about School Recruitment
- March 2015: Evaluation Reporting at a Glance
- December 2013: FAQs about the Implications of Clustering in RCTs
- November 2013: Evaluations at a Glance
- December 2011: FAQs about Reporting Implementation Findings
- July 2011: FAQs about Evaluation Start-Up
- January 2011: Evaluation Technical Assistance Update
Technical Assistance (TA) Schedule for Research & Demonstration Projects (TPP Tier 2)
- Reporting timeline for 5-year evaluation grants
- Year 1: Revised evaluation design
- Year 2: Evaluation abstract
- Year 3: Implementation analysis plan
- Year 3: Impact analysis plan
- Year 5: Final impact evaluation report
- Year 5: Final evaluation abstract
- Twice yearly during data collection:
- Baseline equivalence tables
- Reporting templates
- Evaluation Collaboration site for Grantees:
- Grantees FY 2010-2014
- Grantees FY2015-2019
- FY 2015-2019 Tier 2 Measures