
U.S. Department of Education, Office of Planning, Evaluation and Policy Development



C. The Effect of Additional Upward Bound Participation


Though subject to the caveats discussed in this chapter, findings from this analysis suggest that longer program participation or program completion may yield large, positive effects on several postsecondary outcomes.

1. Postsecondary Enrollment


Our findings suggest that Upward Bound would increase postsecondary enrollment among shorter-duration participants—students who participated for no more than 24 months—if it could keep them in the program longer. An additional year of Upward Bound participation raises enrollment at four-year institutions by 9 percentage points (see Table V.2).

Our findings also suggest that Upward Bound would have a larger effect on noncompleters if it retained them through high school graduation (see Table V.3). If noncompleters remained in Upward Bound through program completion, we estimate that they would, on average, participate for an additional 18 months, as the average duration was just over 13 months for noncompleters and more than 31 months for completers. The impact estimate of Upward Bound completion for any postsecondary enrollment is 19 percentage points, raising enrollment from 77 to 96 percent. The effect of program completion on postsecondary enrollment operates primarily through increased enrollment in a four-year college or university, raising it by 27 percentage points. These effects of program completion are much more pronounced than the effects from increased program duration.





We have also examined the effect of longer participation and completion on the selectivity of the four-year institutions attended by Upward Bound participants. Longer Upward Bound participation increases the likelihood of attending a highly selective four-year college or university by 4 percentage points. For noncompleters, Upward Bound completion would also raise the likelihood of attending a highly selective four-year institution, as indicated by the 10 percentage point impact estimate.






2. Financial Aid


Our estimates suggest that an additional year of Upward Bound participation would increase the likelihood of applying for financial aid by 6 percentage points. The evidence also suggests that the impact is substantially larger for completers, as the estimated impacts of program completion are 21 percentage points for aid application and 20 points for Pell Grant receipt.

3. Postsecondary Completion


Longer participation in Upward Bound increases the likelihood of completing any postsecondary credential, with a statistically significant 8 percentage point impact. Parallel to its effect on postsecondary enrollment, the positive effect of longer participation on overall postsecondary completion appears to be driven by an increase in the likelihood of completing a degree at a four-year institution (an increase of 5 percentage points). Longer participation did not have a detectable effect on the likelihood of completing an associate degree, certificate, or license. Our estimates show similar, though much larger, positive effects of Upward Bound completion on the likelihood of completing a postsecondary credential. The impact estimate for any postsecondary credential is 21 percentage points, primarily attributable to an 18 percentage point increase in the likelihood of completing a bachelor’s degree.

D. Interpretation of the Findings


The potential effects of retaining Upward Bound participants who would otherwise leave the program early may be large: our estimates suggest that additional participation would raise postsecondary enrollment and completion rates for shorter-duration participants and noncompleters. We suspect, however, that the true effects of additional participation are smaller than the estimates presented in this chapter. Although we used rigorous statistical methods in our analysis, we could not randomly assign students to different levels of Upward Bound participation. Because participants decide how long to participate and whether to complete the program (unless they are expelled), the groups may differ along many dimensions, including unmeasured characteristics such as the motivation to attend college. If so, the estimated effects of additional participation, based on comparisons between these groups, may be partly attributable to differences in motivation that predated these students’ Upward Bound participation.

While this selection bias could be positive or negative, we suspect that it leads us to overestimate the effects of additional participation. It seems likely that more motivated students participate longer in Upward Bound and complete Upward Bound at higher rates than less motivated students, leading to higher levels of motivation among longer-duration participants and completers. If more motivated students tend to enroll in college at higher rates than less motivated students, longer-duration participants and completers would have higher college enrollment rates than shorter-duration participants and noncompleters. While matching may reduce the motivational differences between the samples, we expect that remaining unobserved differences partially explain the large positive effects of additional participation and completion reported in this chapter.
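The matching approach discussed above can be illustrated with a minimal sketch of one-to-one nearest-neighbor matching on a precomputed propensity score. This is a hypothetical simplification: the records, scores, and matching rule below are invented for illustration and do not reproduce the study's actual matching model (which used the regression control variables described in the footnotes).

```python
# Minimal sketch of one-to-one nearest-neighbor propensity-score
# matching without replacement. All data below are invented for
# illustration only.

def match_nearest(completers, noncompleters):
    """Pair each completer with the unmatched noncompleter whose
    propensity score is closest."""
    available = list(noncompleters)
    pairs = []
    for c in sorted(completers, key=lambda r: r["pscore"]):
        best = min(available, key=lambda r: abs(r["pscore"] - c["pscore"]))
        available.remove(best)
        pairs.append((c, best))
    return pairs

# Hypothetical records: a propensity score and a postsecondary outcome.
completers = [{"pscore": 0.62, "enrolled": 1}, {"pscore": 0.71, "enrolled": 1}]
noncompleters = [{"pscore": 0.30, "enrolled": 0}, {"pscore": 0.60, "enrolled": 1},
                 {"pscore": 0.74, "enrolled": 0}]

pairs = match_nearest(completers, noncompleters)
# Estimated effect = mean outcome difference across matched pairs.
effect = sum(c["enrolled"] - n["enrolled"] for c, n in pairs) / len(pairs)
print(effect)  # → 0.5
```

In practice the propensity score would itself be estimated, for example from a logistic regression on baseline characteristics, and matching quality would be assessed by checking covariate balance between the matched samples—which is why residual imbalance on unmeasured traits such as motivation can still bias the estimates.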



References


Adelman, Cliff. “Participation in Outreach Programs Prior to High School Graduation: Socioeconomic Status by Race.” Paper presented at the ConnectED Conference, San Diego, CA, January 10, 2000.

Angrist, Joshua D., Guido W. Imbens, and Donald B. Rubin. “Identification of Causal Effects Using Instrumental Variables.” Journal of the American Statistical Association, vol. 91, no. 434, June 1996.

Astin, Alexander W. What Matters in College? Four Critical Years Revisited. San Francisco, CA: Jossey-Bass, 1993.

Avery, Christopher, and Thomas J. Kane. “Student Perceptions of College Opportunities: The Boston COACH Program.” In College Choices, edited by Caroline M. Hoxby. Cambridge, MA: National Bureau of Economic Research, 2004.

Balfanz, Robert, and Nettie Legters. “Locating the Dropout Crisis: Which High Schools Produce the Nation’s Dropouts? Where Are They Located? Who Attends Them?” Baltimore, MD: Johns Hopkins University, June 2004.



Barron’s Profiles of American Colleges 2003. New York, NY: Barron’s, 2002.

Bloom, Howard. “Accounting for No-Shows in Experimental Evaluation Designs.” Evaluation Review, vol. 8, 1984.

Bowen, William G., and Derek Bok. The Shape of the River: Long-Term Consequences of Considering Race in College and University Admissions. Princeton, NJ: Princeton University Press, 1998.

Brogan, D. “Software for Sample Survey Data, Misuse of Standard Packages.” In Encyclopedia of Biostatistics, vol. 5, edited by P. Armitage and T. Colton. New York, NY: Wiley, 1998, pp. 4167-4174.

Cohen, Jacob. Statistical Power Analysis for the Behavioral Sciences. Second Edition. Hillsdale, NJ: Lawrence Erlbaum, 1988.

Coleman, James, E. Campbell, C. Hobson, J. McPartland, A. Mood, F. Weinfield, and R. York. “Equality of Educational Opportunity.” Washington, DC: U.S. Department of Health, Education, and Welfare, 1966.

Congressional Budget Office. “Educational Achievement: Explanations and Implications of Recent Trends.” Washington, DC: Congressional Budget Office, August 1987.

Constantine, Jill, Neil Seftor, Emily Sama Martin, Tim Silva, and David Myers. “A Study of the Effect of the Talent Search Program on Secondary and Postsecondary Outcomes in Florida, Indiana, and Texas.” Final report submitted to the U.S. Department of Education. Princeton, NJ: Mathematica Policy Research, Inc., June 2006.

Fasciano, Nancy, and Jonathan E. Jacobson. “The National Evaluation of Upward Bound: Grantee Survey Report.” Washington, DC: U.S. Department of Education, 1997.

Ingels, Steven J., Thomas R. Curtin, Philip Kaufman, Martha Naomi Alt, and Xianglei Chen. “Coming of Age in the 1990s: The Eighth-Grade Class of 1988 12 Years Later.” (NCES 2002-321). Washington, DC: U.S. Department of Education, National Center for Education Statistics, Office of Educational Research and Improvement, March 2002.

Jacobson, Jonathan, Cara Olsen, Jennifer King Rice, Stephen Sweetland, and John Ralph. “Educational Achievement and Black-White Inequality.” Washington, DC: U.S. Department of Education, National Center for Education Statistics, Office of Educational Research and Improvement, 2001.

James, Donna Walker, Sonia Jurich, and Steve Estes. “Raising Minority Academic Achievement: A Compendium of Education Programs and Practices.” Washington, DC: American Youth Policy Forum, 2001.

Jencks, Christopher, Marshall Smith, Henry Acland, Mary Jo Bane, David Cohen, Herbert Gintis, Barbara Heyns, and Stephen Michelson. Inequality: A Reassessment of the Effect of Family and Schooling in America. New York, NY: Basic Books, 1972.

Kane, Thomas J. The Price of Admission: Rethinking How Americans Pay for College. Washington, DC: Brookings Institution Press, 1999.

King, Jennifer. “Missed Opportunities: Students Who Do Not Apply for Financial Aid.” Washington, DC: American Council on Education, 2004.

Lipsey, M.W., and D.B. Wilson. “The Efficacy of Psychological, Educational, and Behavioral Treatment: Confirmation from Meta-analysis.” American Psychologist, vol. 48, 1993, pp. 1181-1209.

Moore, Mary T. “The National Evaluation of Upward Bound: A 1990s View of Upward Bound Programs Offered, Students Served, and Operational Issues.” Washington, DC: U.S. Department of Education, 1997.

Mosteller, Frederick, and Daniel Moynihan. On Equality of Educational Opportunity. New York, NY: Vintage Books, 1972.

Myers, David, Mary Moore, Allen Schirm, and Zev Waldman. “The National Evaluation of Upward Bound: Design Report.” Report submitted to the U.S. Department of Education. Washington, DC: Mathematica Policy Research, Inc., November 1993.

Myers, David, Robert Olsen, Neil Seftor, Julie Young, and Christina Tuttle. “The Impacts of Regular Upward Bound: Results from the Third Follow-Up Data Collection.” Report submitted to the U.S. Department of Education. Washington, DC: Mathematica Policy Research, Inc., April 2004.

Myers, David, and Allen Schirm. “The Impacts of Upward Bound: Final Report on Phase I of the National Evaluation.” Report submitted to the U.S. Department of Education. Washington, DC: Mathematica Policy Research, Inc., April 1999. Available at [http://www.ed.gov/offices/OUS/PES/higher/upward.pdf].

Myers, David, and Allen Schirm. “The Short-Term Impact of Upward Bound: An Interim Report.” Report submitted to the U.S. Department of Education. Washington, DC: Mathematica Policy Research, Inc., May 1997.

Peske, Heather G., and Kati Haycock. “Teaching Inequality: How Poor and Minority Students are Shortchanged on Teacher Quality.” Washington, DC: The Education Trust, June 2006.

Schirm, Allen, Elizabeth Stuart, and Allison McKie. “The Quantum Opportunity Program Demonstration: Final Impacts.” Washington, DC: Mathematica Policy Research, Inc., July 2006.

Schochet, Peter, John Burghardt, and Steven Glazerman. “National Job Corps Study: The Impacts of Job Corps on Participants’ Employment and Related Outcomes.” Princeton, NJ: Mathematica Policy Research, Inc., June 2001.

St. John, Edward P., Glenda Droogsma Musoba, Ada B. Simmons, and Choong-Geun Chung. “Meeting the Access Challenge: Indiana’s Twenty-First Century Scholar Program.” Indianapolis, IN: Lumina Foundation, 2002.

Swail, Scott, and Laura Perna. “A View of the Landscape: Results of the National Survey of Outreach Programs.” Outreach Program Handbook 2001. New York, NY: The College Board, 2000.

Swanson, Christopher B. “Who Graduates? Who Doesn’t? A Statistical Portrait of Public High School Graduation, Class of 2001.” Washington, DC: Urban Institute Education Policy Center, February 2004.

Trenholm, Christopher, Barbara Devaney, Ken Fortson, Lisa Quay, Justin Wheeler, and Melissa Clark. “Impacts of Four Title V, Section 510 Abstinence Education Programs.” Princeton, NJ: Mathematica Policy Research, Inc., April 2007.

U.S. Department of Education, National Center for Education Statistics. “Access to Postsecondary Education for the 1992 High School Graduates.” NCES 98-105, by Lutz Berkner and Lisa Chavez (MPR Associates). Project Officer: C. Dennis Carroll. Washington, DC, 1997. Available at [http://nces.ed.gov/pubs98/98105.pdf].

U.S. Department of Education, National Center for Education Statistics. “The Condition of Education, 2001.” Washington, DC: U.S. Department of Education, 2001.

U.S. Department of Education, National Center for Education Statistics. “The Condition of Education, 2006.” NCES 2006-071, Washington, DC: U.S. Department of Education, 2006.

U.S. Department of Education, National Center for Education Statistics. “The Condition of Education, 2007.” NCES 2007-064, Washington, DC: U.S. Department of Education, 2007.

U.S. Department of Education, National Center for Education Statistics. “Digest of Education Statistics, 2005.” NCES 2006-030, Washington, DC: U.S. Department of Education, 2006.

U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics. “National Assessment of Educational Progress: The Nation’s Report Card, Mathematics 2005.” NCES 2006-453, Washington, DC: U.S. Department of Education, 2005a.

U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics. “National Assessment of Educational Progress: The Nation’s Report Card, Reading 2005.” NCES 2006-451, Washington, DC: U.S. Department of Education, 2005b.

Wilde, Elizabeth Ty, and Robinson Hollister. “How Close Is Close Enough? Evaluating Propensity Score Matching Using Data from a Class Size Reduction Experiment.” Journal of Policy Analysis and Management, vol. 26, 2007, pp. 455-477.





1 Upward Bound includes three programs: regular Upward Bound, Veterans Upward Bound, and Upward Bound Math-Science. The focus of this report is regular Upward Bound and we use the term “Upward Bound” to refer to that program.

2 These rates compare favorably with those of other studies with similar populations and long follow-up periods. For the Quantum Opportunity Program Demonstration Evaluation’s third telephone survey, conducted about nine to ten years after the demonstration started, the response rate was 76 percent overall, and the treatment group response rate exceeded the control group response rate by 3 percentage points (Schirm, Stuart, and McKie 2006). For the National Job Corps Evaluation Study, the 48-month follow-up survey had an 80 percent overall response rate, and the treatment group response rate exceeded the control group response rate by about 4 percentage points (Schochet, Burghardt, and Glazerman 2001). In the study of Impacts of Four Title V, Section 510 Abstinence Education Programs, the third follow-up survey, conducted about four to six years after random assignment, had an overall response rate of 82 percent, and the treatment group response rate exceeded the control group response rate by 1 percentage point (Trenholm et al. 2007).

3 Looking at outcomes from the FSA data suggests that survey nonresponse bias may be small. For the treatment group, 73.0 percent of survey respondents and 75.0 percent of survey nonrespondents applied for financial aid, while 70.6 and 68.5 percent of survey respondents and nonrespondents, respectively, applied for aid from the control group. Similarly, the rates of Pell Grant receipt were close for survey respondents and nonrespondents in both groups: 58.8 and 58.3 percent for the treatment group; 55.6 and 52.7 percent for the control group.

4 Of the 1,656 institutions reported by sample members in the survey, 1,465 could be matched to the Integrated Postsecondary Education Data System (IPEDS), and 925 of those (63 percent) appear in the NSC’s list of participating institutions. The vast majority of the remaining schools were vocational institutions, along with some two-year schools. A higher proportion (just under 80 percent) of students reported attending a school that was also in the NSC list, with the difference in these rates likely due to attendance at multiple institutions. Based on the differences in coverage by sector, sample members were less likely to be confirmed as enrollees by the NSC data if they attended only vocational institutions.

5 For the CACE analysis presented in this report, we regard participation in regular Upward Bound or Upward Bound Math-Science by control group members as forms of crossover.

6 A CACE estimate is roughly 1 / (0.85-0.14) = 1.408 times the ITT estimate.
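The scaling in this footnote follows the standard no-show adjustment of Bloom (1984), cited in the references: the intent-to-treat (ITT) estimate is divided by the difference between the treatment-group participation rate (0.85) and the control-group crossover rate (0.14). As a sketch, with the rates taken from the footnote and the function name ours:

```python
# Bloom-style adjustment: scale an ITT estimate by the difference
# between the treatment-group participation rate and the control-group
# crossover rate to obtain a CACE estimate.

def cace_from_itt(itt, participation_rate, crossover_rate):
    return itt / (participation_rate - crossover_rate)

# With the rates in the footnote, the scaling factor applied to any
# ITT estimate is 1 / (0.85 - 0.14).
factor = cace_from_itt(1.0, 0.85, 0.14)
print(round(factor, 3))  # → 1.408
```

This adjustment attributes the entire ITT effect to the complier subgroup, which is why the CACE estimates in the report are larger than the corresponding ITT estimates by this fixed factor.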

7 Table II.2 refers to “Project 69.” As discussed below, sample members from this project comprise a large proportion of the weighted evaluation sample. Our regression models include an indicator variable and interaction variables for Project 69 to capture effects of the other control variables that are specific to this project.

8 For the subgroups based on the academic performance index, the bottom 20 percent of ninth-grade academic achievement was labeled in previous reports (Myers and Schirm 1999, Myers et al. 2004) as “higher academic risk,” and the top 80 percent of ninth-grade academic achievement was “lower academic risk.” We instead use the terms “lower performing” and “higher performing,” respectively, to make the labels more intuitive, simplify discussion, and facilitate comparison with the lower and higher expectations groups. Construction of the index is described in Myers et al. (2004).

9 Due to item nonresponse, some subgroups are not defined for all sample members, resulting in a grand total that is smaller than the full sample size of 2,844.

10 For outcomes that were measured using data from sources in addition to the fifth follow-up survey, the weights reflected the probability of having the data needed to measure the outcomes, which is not just the survey response probability.

11 To ensure inclusion in the sample of substantial numbers of some of the less common types of projects, as discussed earlier in this chapter, the sample included only one of the 56 projects that were medium-sized, urban, hosted by four-year public universities, and not serving a group of students that was predominantly Asian, Native American, or Latino (the most common type of project). Because this one project’s probability of selection was much lower than the average selection probability, the students in this one project represent 26 percent of the eligible applicants nationwide and are weighted accordingly. This was a consequence of the study's requirement to over-sample relatively uncommon types of projects. The weights that are used in our main analyses account for the study design. Chapter III presents a summary of the sensitivity analyses pertaining to sample weighting, with full details in Appendix G.

12 The counts and percentages in this paragraph are unweighted.

13 See Appendix H for a list of other supplemental service programs attended by sample members.

14 As discussed in this chapter and elsewhere in the report, each data source has different relative strengths and weaknesses. A specific concern with using NSC data to measure enrollment pertains to its coverage of postsecondary institutions. Specifically, when students in the evaluation first began enrolling in postsecondary institutions, the percentage of institutions that participated in the NSC was lower than it was in later years. Also, coverage rates vary across different types of institutions. In light of concerns about coverage of the NSC, the sensitivity analyses include enrollment measures that are based on survey and FSA data only—relevant NSC data are ignored by these measures. As shown in the main analyses and presented in Table III.1 (and Appendix C), the impacts on overall postsecondary enrollment and enrollment at four-year institutions according to a measure (5B) using all three data sources—survey, FSA, and NSC—are 1.54 and 1.29, respectively. As shown in the sensitivity analyses and presented in Appendix C, the corresponding impacts according to a measure (6B) using only survey and FSA data are 1.38 and 1.29. None of these estimates is statistically significant.

15 The impacts on overall enrollment and completion, for example, are 1.04 and 1.57 (p-values = 0.73 and 0.51), while the impacts from the main analysis are 1.54 and 2.26. The one exception to the pattern is the impact on the receipt of certificates and licenses, which is 4.66 and significant (p-value = 0.09), compared with the impact of 4.54 estimated in the main analysis.

16 A standardized measure has potentially important limitations. It requires data pertaining to the timing of events, which are likely subject to greater recall error than data about whether an event occurred. It also ignores relevant data, specifically, the available longer-run data about postsecondary outcomes that occur after the chosen cut-off date. In light of these limitations, the main analysis examines outcomes that are observed at any time during the period for which data are available from the surveys, the NSC, or the FSA records.

17 In conducting the subgroup analyses, we assessed the sensitivity of the findings to alternative ways of measuring the outcomes, and present the results in Appendix I. We did not, however, conduct sensitivity analyses pertaining to sample weighting.

18 Looking across the various measures of enrollment (see Appendix I), the largest estimated effect on four-year college or university enrollment by sample members with lower educational expectations is 21.0 percentage points, based on the measure that uses only fifth follow-up survey data. This is similar to the 20 percentage point survey-based effect reported in Myers et al. (2004).

19 For students who applied to Upward Bound in eighth or ninth grade, this measure, like ninth-grade GPA, is based on ninth-grade transcripts, and could be affected by participation in Upward Bound if the program has an immediate effect on high school courses taken and achievement in those courses.

20 Among eligible Upward Bound applicants, 76 percent of those who took a course below algebra in ninth grade reported that they expected to earn a bachelor’s degree or above, as compared with 87 percent of those who took algebra or above. Although the level of ninth-grade mathematics class is far from a perfect predictor of self-reported educational expectations, the percentage of applicants with high self-reported expectations is significantly different between the two groups defined by the level of ninth-grade mathematics class. Furthermore, the subgroups defined by ninth-grade mathematics class are interesting in their own right.

21 The model used the same variables that were included as control variables in our regression analyses (see Table II.2), excluding the indicator for Project 69 and its interactions with the other variables.