Every Summer Counts
  • Author(s): Jennifer Sloan McCombs, Catherine H. Augustine, John F. Pane, and Jonathan Schweig
  • Publisher(s): RAND Corporation
  • Page count: 67 pages

Research Approach

This research followed nearly 6,000 students in five urban school districts from the end of 3rd grade through the spring of 7th grade.

It builds on previous RAND studies, with new data showing that three school years after the programs' second summer, academic benefits had decreased in magnitude but remained educationally meaningful. Researchers also evaluated how different implementation factors affected outcomes.

The districts involved were Boston; Dallas; Duval County, Fla.; Pittsburgh; and Rochester, N.Y. Along with local out-of-school-time intermediaries and community partners, they participated in Wallace's National Summer Learning Project (NSLP), launched in 2011 to understand the implementation and effectiveness of voluntary summer learning programs.

Researchers collected data related to five outcomes:

  • State assessments in mathematics and language arts administered in spring 2017
  • Suspensions
  • End-of-year course grades in mathematics and language arts
  • School-year attendance
  • Social-emotional competencies

The preferred approach for estimating causal effects was intention-to-treat (ITT): researchers compared the outcomes of all students randomly admitted to two summers of programming (2013 and 2014) with the outcomes of all students randomly assigned to the control group, regardless of whether admitted students actually attended the summer program. For correlational analyses, researchers used simple extensions of this model.
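
For intuition, here is a minimal sketch of an ITT comparison in Python. The synthetic data, column names (`assigned`, `attended`, `math_score`), and effect size are hypothetical illustrations, not drawn from the study's data or code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative synthetic data: `assigned` is the randomized lottery outcome,
# `math_score` a later assessment score. All values here are made up.
rng = np.random.default_rng(0)
n = 1000
assigned = rng.integers(0, 2, n)
attended = assigned * rng.integers(0, 2, n)  # not everyone admitted attends
math_score = 500 + 5 * attended + rng.normal(0, 20, n)
df = pd.DataFrame({"assigned": assigned, "math_score": math_score})

# ITT: compare outcomes by random assignment, regardless of actual attendance.
itt = smf.ols("math_score ~ assigned", data=df).fit()
print(itt.params["assigned"])  # ITT effect, diluted by non-attendance
```

Because some admitted students never attend, the ITT estimate is typically smaller than the effect of actually participating, which is why it is the conservative choice for causal claims.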

In the spring 2017 data, researchers observed large differences in attrition across districts. These differences affected the relative precision of the district-level estimates, making some districts more or less influential in the pooled result produced by meta-analysis, which can complicate or mislead interpretation of changes in estimated program impact over time. For that reason, researchers adopted a different approach to handling missing data.
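
To see why attrition shifts district influence, here is a sketch of standard inverse-variance (fixed-effect) meta-analytic pooling; the district estimates and standard errors below are invented for illustration, not taken from the report.

```python
import numpy as np

# Hypothetical district-level effect estimates and standard errors.
estimates = np.array([0.10, 0.05, 0.08, 0.02, 0.06])
ses = np.array([0.04, 0.03, 0.09, 0.05, 0.06])

# Fixed-effect pooling weights each district by 1/SE^2, so a district whose
# sample shrinks through attrition (larger SE) automatically loses weight.
weights = 1.0 / ses**2
pooled = np.sum(weights * estimates) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(pooled, pooled_se)
```

Under this weighting, a change in the pooled estimate between years can reflect shifting weights rather than shifting program effects, which is the interpretation problem the researchers' alternative missing-data approach was meant to avoid.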
