One Day I Will Make It
Author(s): Kristin E. Porter, Sondra Cuban, John P. Comings, and Valerie Chase
Publisher(s): MDRC
Page Count: 77 pages

Research Approach

The design involves measuring student achievement at two points in time, approximately 12 months apart, with a battery of literacy tests. The literacy levels reported in Chapter 2 of this report are based on findings from the first point in time. To the extent possible, all the students who participated in the first wave of the tests have been tracked and retested, regardless of whether they remained active in their Literacy in Libraries Across America (LILAA) program. The results of these tests were supplemented with in-depth qualitative interviews of a subsample of these students, to capture (1) the extent of participation in literacy activities inside and outside the library literacy programs and (2) students’ perceptions of changes in their literacy levels.
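The report presents this design in prose rather than code, but the gain calculation implied by the two testing points can be sketched roughly as follows. The file names and column names below are hypothetical, chosen only for illustration; they are not the study's own data files or variables.

```python
import pandas as pd

# Illustrative two-wave gain calculation (file and column names are hypothetical).
# Each wave has one row per student with that student's test scores;
# wave 2 was collected roughly 12 months after wave 1.
wave1 = pd.read_csv("wave1_scores.csv")   # student_id, towre_swe, towre_pde, ppvt, able_reading
wave2 = pd.read_csv("wave2_scores.csv")   # same layout, second testing point

# Keep every wave-1 student who could be retested, whether or not they
# remained active in their LILAA program.
both = wave1.merge(wave2, on="student_id", suffixes=("_t1", "_t2"))

# Change in each score between the two testing points.
for test in ["towre_swe", "towre_pde", "ppvt", "able_reading"]:
    both[f"{test}_change"] = both[f"{test}_t2"] - both[f"{test}_t1"]

print(both.filter(like="_change").describe())
```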

The Tests Used in the Achievement Study

The battery of tests selected for the achievement study does not consist of instruments that are considered program-based and learner-centered, and the tests do not focus on the learning process as a program outcome. Such instruments rely on different procedures for different students, which makes them more appropriate for each individual and thus more meaningful, but it also makes results difficult to compare across programs. Instead, the selected tests are reputable, standardized tests that are used nationally, which allows library literacy programs to be assessed alongside other adult education providers on an equal base of comparison. This is the first time that a battery of standardized tests has been administered specifically to a cohort of library literacy programs within a systematic study of them. The tests are described below.

  1. The Test of Word Reading Efficiency (TOWRE), published by Pro-Ed, measures reading rate and word recognition. Reading rate and word recognition are important predictors of reading comprehension. (Someone who reads too slowly loses the meaning of long and complicated sentences.) The TOWRE consists of two subtests:
    1. The Sight Word Efficiency (SWE) subtest measures the number of printed words that can be correctly identified within 45 seconds. 
    2. The Phonemic Decoding Efficiency (PDE) subtest measures the number of pronounceable nonwords that the test-taker can correctly decode within 45 seconds.
  2. The Peabody Picture Vocabulary Test (PPVT) measures vocabulary skills, assessing both verbal and auditory attainment of Standard English. It is a measure of listening and reading vocabulary. The test can be administered to persons of any age. Test-takers are asked to select pictures that best match the meaning of words that are read aloud by the person administering the test.
  3. The Adult Basic Learning Examination (ABLE) test measures several skills, including reading comprehension (only the reading comprehension part is used in this study). For the reading comprehension subtest, test-takers are presented with signs and short reading passages about the day-to-day lives of adults. The passages are followed by questions that test comprehension of the text and the ability to make inferences. The test has three levels: Level 1 is for adults with one to four years of education (primary schooling); Level 2 is for adults with five to eight years of schooling (intermediate schooling); and Level 3 is for adults who have had at least eight years of schooling but who have not graduated from high school.
  4. The Basic English Skills Test (BEST) is a special test for students of English as a Second Language (ESL); it measures English speaking and listening skills. It is designed to measure competency-based listening comprehension, speaking, and elementary reading and writing skills. Test-takers are presented with a series of real-life listening and speaking tasks, such as telling time, paying for a store item, and giving and receiving directions. Only those sample members who are learners of English (60 students) took the BEST. 

Though the achievement tests described above were selected to measure different literacy outcomes, some of the relationships among achievement levels on the different tests are notable. The most highly correlated tests are the ABLE and the TOWRE, as would be expected, because the rate of word recognition (as measured by the TOWRE) is an important predictor of reading comprehension (as measured by the ABLE). The absence of correlation between the PPVT and the other tests may suggest that the vocabulary skills measured by the PPVT are distinct and largely unrelated to the other literacy skills as measured with this battery of tests.
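As a rough illustration of how such test-to-test relationships can be checked, the sketch below computes pairwise correlations among first-wave scores. The file and column names are assumptions made for the example, not the study's actual data.

```python
import pandas as pd

# Hypothetical first-wave score table; column names are illustrative,
# not the study's actual variable names.
scores = pd.read_csv("wave1_scores.csv")[
    ["towre_swe", "towre_pde", "able_reading", "ppvt"]
]

# Pairwise Pearson correlations; pandas handles missing scores pairwise,
# so students who skipped a test drop out only of the affected pairs.
print(scores.corr().round(2))
```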

Measuring Student Persistence

The data used to analyze student persistence in Chapter 2 come from attendance records from each of the programs in the LILAA persistence study. These records provide the hours of participation in different types of learning activities, month by month, from January 2000 through December 2002. They also include basic information about the students, including their gender, date of birth, and ethnicity. However, the level of detail and the definitions of participation were not consistent across programs. Therefore, in order to describe students’ experiences consistently across all the programs, using the data available, the researchers developed some basic definitions related to students’ status in a program. These definitions, listed below, may differ from the operational definitions used by the programs and, therefore, may result in the researchers’ finding different trends (or different participation numbers) than the programs themselves reported.

The resulting sample for the participation analyses includes 4,255 new or returning participants (that is, those who became active in a program after at least three months of inactivity) who have a confirmed or undetermined exit. The sample was identified between January 2000 and September 2002. Wakefield—where data collection started late—is slightly underrepresented in this sample. 
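As a sketch of how the "new or returning participant" rule above can be applied to monthly attendance data, the code below flags a program entry whenever a student attends in a month after at least three months with no recorded hours. The table layout and column names are assumptions for illustration, not the programs' actual records.

```python
import pandas as pd

# Hypothetical attendance table: one row per student per calendar month
# from January 2000 through December 2002, with hours = 0 for months in
# which the student did not attend.
hours = pd.read_csv("monthly_hours.csv")            # student_id, month, hours
hours = hours.sort_values(["student_id", "month"])

# Hours recorded in each of the three preceding months, within each student.
g = hours.groupby("student_id")["hours"]
prior_inactive = (
    (g.shift(1, fill_value=0) == 0)
    & (g.shift(2, fill_value=0) == 0)
    & (g.shift(3, fill_value=0) == 0)
)

# A month counts as a program entry (new or returning participant) when the
# student attended that month after at least three months with no recorded hours.
hours["new_or_returning"] = (hours["hours"] > 0) & prior_inactive
print(hours["new_or_returning"].sum(), "program entries identified")
```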

Because of the many challenges that the LILAA programs faced in collecting and managing data for the persistence study, it was inevitable that some student participation was not captured in the data provided for the evaluation. Despite such underreporting, however, the data still reveal valuable trends in participation patterns at each program. Because the data cover a long period of time (from January 2000 to December 2002), periodic lapses in entering students’ attendance data do not invalidate the larger picture of their persistence. Also, although data are likely missing, it can usually be assumed that they are missing at a relatively consistent level over time, because the programs tended to face the same set of challenges throughout the study period. Trends in the recorded data over time should therefore reflect real trends in participation.
