
U. S. Department of Education Office of Planning, Evaluation and Policy Development



C. Previous Findings


Earlier evaluation reports have presented findings based on data collected through the first three rounds of surveys and transcript collection. Those reports reveal how Upward Bound affected eligible applicants while still in high school—in terms of both precollege services received and academic achievement—and how Upward Bound affected their postsecondary experiences approximately three years after completing high school, including whether they had enrolled in a postsecondary institution, their highest level of postsecondary attendance (four-year, two-year, or vocational), and the number of postsecondary credits earned. The key findings were as follows:

  • For the average eligible applicant, Upward Bound had little effect on most key high school outcomes, including credits, grades, and graduation. Myers et al. (2004) found Upward Bound had no effect on total credits and a small effect on credits earned in high school math. The program increased the number of math credits earned by 0.2 credits; that is, about one in five students completed an additional high school math course because of exposure to Upward Bound. Upward Bound had no effect on credits earned in science, English, social studies, or foreign language courses. Also, the program had no effect on honors and Advanced Placement credits, grades earned in high school, or high school graduation.

  • For the average eligible applicant, Upward Bound had few short-run effects on postsecondary outcomes, but may have increased enrollment at four-year colleges and universities. Upward Bound had little effect on enrollment and credits earned at two-year or vocational postsecondary institutions and on the receipt of college financial aid. Myers et al. (2004) found some inconclusive evidence that Upward Bound may have increased the percentage of treatment group members attending a four-year college or university. When all postsecondary enrollment information reported by sample members was included, the estimated effect was 6 percentage points and was statistically significant; however, when unverified enrollment information was excluded (as described in Appendix H of the report), the effect fell to 5 percentage points and was not statistically significant.

  • Upward Bound had positive effects for eligible applicants with lower educational expectations. For eligible applicants with lower educational expectations—those who did not expect to earn a bachelor’s degree when they applied to Upward Bound—Upward Bound increased Advanced Placement or honors credits as well as credits earned in core academic subjects in high school. It also had short-term effects on some postsecondary outcomes for this group, such as the likelihood of enrolling in a four-year college or university, total postsecondary credits, and credits earned at four-year colleges and universities. For eligible applicants who expected to obtain a bachelor’s degree or more, Upward Bound had little short-term effect on any of these outcomes.



II. Research Design and Analytic Issues

A. Research Design

1. Selection of Upward Bound Projects and Random Assignment


At its inception, the national evaluation of Upward Bound was unusual within education evaluation studies because of two important design elements: (1) a nationally representative sample of Upward Bound projects and (2) random assignment of eligible applicants to Upward Bound and a control group. These two design elements provide for both external validity and internal validity—that is, the ability to generalize the results to the population of regular Upward Bound projects and to make inferences about the causal effects of Upward Bound on eligible applicants’ outcomes. Although the use of random assignment has become more common in recent years, it is still rare for evaluations to include a nationally representative sample of program sites.

a. Selection of Upward Bound Projects

For the evaluation, we randomly selected 70 Upward Bound projects representative of all 395 regular Upward Bound projects operating in the 50 states and the District of Columbia that were hosted by a postsecondary institution, had operated for at least three years as of October 1992, and were not dedicated to serving only students with physical disabilities. Many different designs for selecting the sample of projects were considered. Several designs had relatively modest stratification and modest variability in sampling rates for different types of projects defined by potentially policy-relevant characteristics, including project size and type of host institution. Such a design would have supported precise estimates for many key subgroups while sacrificing very little precision in the estimates for the full sample. Other designs that were considered were much more highly stratified and had highly variable sampling rates to yield substantial overrepresentation—relative to the full universe of projects—of some types of projects with less common characteristics (e.g., serving predominantly Native American students) and substantial underrepresentation of some types of projects with more common characteristics. Objectives of such a design included assuring the Upward Bound community that some relatively rare types of projects were adequately represented and, if policy interest later emerged, allowing more precise estimates of Upward Bound’s effects on particular applicant and project subgroups, even though estimates for other subgroups and the full sample would be less precise as a result of variability in project selection probabilities and, therefore, sample weights. The design that was chosen sought to balance the competing needs of the evaluation. Under the chosen design, project selection probabilities varied substantially across strata that were defined by location (urban or rural), type and control of the host institution (two- or four-year, public or private), size, and racial or ethnic composition.

Of the 70 projects originally selected, 11 could not participate or had to be excluded for various reasons. For example, some did not plan to recruit new students for the 1992–93 school year, some had too few applicants to accommodate random assignment, and some did not have their Upward Bound grants renewed. We replaced eight of these 11 projects with similar, randomly selected projects, arriving at a total sample of 67 projects. See Appendix A for a detailed description of the sample selection and weighting procedures.



b. Random Assignment of Eligible Applicants to Upward Bound and a Control Group

During the 1992–93 and 1993–94 school years, we randomly assigned eligible applicants from each project to either a treatment group, which was invited to participate in Upward Bound, or a control group, which was not invited to participate. Eligible applicants were defined as students whom the projects had recruited and who met both the federal eligibility criteria (low-income or potential first-generation college student status) and any project-specific criteria for participation. All of the projects received more applications than they had openings, and all served the same number of students they would have normally served under their usual selection procedures.

We implemented random assignment over 14 months so that projects could use their standard recruiting procedures and enroll students in accordance with their usual enrollment schedules. Nationwide, the random assignment process resulted in a treatment group of about 1,500 students and a control group of about 1,300 students for subsequent impact analyses. Myers et al. (1993) presented a detailed description of the random assignment procedures.

To accommodate project wishes concerning the composition of the participants served by the program, such as sex, racial, or ethnic group balance, we used stratified random sampling to select the treatment and control groups (and weighted sample members appropriately to account for different random assignment probabilities). Nonetheless, random assignment may have led some Upward Bound projects to serve students they would not normally have served. Before random assignment, we asked project directors to rate each applicant as either most likely, somewhat likely, or least likely to have been selected under normal selection procedures; in this report, we assessed whether the effects of Upward Bound vary across these three groups. Appendix I provides little evidence that the effects on postsecondary enrollment and completion varied across groups; however, there is evidence of significant positive effects on attendance and completion at vocational schools for the somewhat likely to be selected group.
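To make the weighting logic concrete, the sketch below shows how stratified random assignment with varying treatment probabilities can be offset by inverse-probability design weights. The field names, strata, and probabilities are hypothetical, not the evaluation's actual procedures:

```python
import random

def assign_within_strata(applicants, treat_prob_by_stratum, seed=12345):
    """Randomly assign applicants to treatment or control within strata.

    Each member receives a design weight equal to the inverse of his or
    her assignment probability, so strata with different treatment rates
    can be pooled without biasing full-sample estimates.
    """
    rng = random.Random(seed)
    assigned = []
    for a in applicants:
        p = treat_prob_by_stratum[a["stratum"]]
        treated = rng.random() < p
        assigned.append({**a,
                         "treated": treated,
                         "weight": 1.0 / p if treated else 1.0 / (1.0 - p)})
    return assigned

# Two hypothetical strata with different treatment probabilities
applicants = [{"id": i, "stratum": "A" if i % 2 == 0 else "B"}
              for i in range(200)]
sample = assign_within_strata(applicants, {"A": 0.5, "B": 0.6})
```

Weighting each member by the inverse of his or her assignment probability ensures that a stratum assigned to treatment at a higher rate does not end up overrepresented in the pooled treatment group.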

With random assignment, the only systematic difference between the treatment and control groups in the present evaluation is that treatment group members were offered the opportunity to participate in Upward Bound; otherwise, the two groups are statistically equivalent (Myers and Schirm 1997). On important demographic variables such as gender, race or ethnicity, and Upward Bound eligibility status, differences between treatment and control proportions are small. Statistically significant differences between the two groups exist within two categories of background variables: a student’s own educational expectations and the educational expectations held by his or her mother. Even in a randomized experiment, there will generally be a few differences between the groups purely due to chance; using a 10 percent level for statistical significance, we would expect to find significant differences for 10 percent of the comparisons. To adjust for the small differences between the treatment and control groups, we computed regression-adjusted estimates of program effects in which we statistically controlled for these and other background characteristics. We describe our estimation methods in more detail below.
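The effect of regression adjustment can be illustrated with a small sketch. The data below are synthetic and noiseless (the evaluation's actual models control for many baseline characteristics); the point is only that regressing the outcome on a treatment indicator plus a baseline covariate removes the part of the raw treatment-control difference attributable to a chance imbalance in that covariate:

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination; adequate for a tiny illustration."""
    k, n = len(X[0]), len(X)
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(k)]
         for r in range(k)]
    b = [sum(X[i][r] * y[i] for i in range(n)) for r in range(k)]
    for col in range(k):                      # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k                          # back substitution
    for r in range(k - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, k))) / A[r][r]
    return coef

# Synthetic data: the true treatment effect is 1.0, but the treatment
# group happens to start with higher baseline expectations (x), which
# inflates the raw difference in mean outcomes.
treat = [1] * 5 + [0] * 5
x = [1, 2, 3, 4, 5, 0, 1, 2, 3, 4]
y = [2 + 0.5 * xi + ti for ti, xi in zip(treat, x)]

raw_diff = sum(y[:5]) / 5 - sum(y[5:]) / 5    # 1.5: biased by the imbalance
_, effect, _ = ols([[1, t, xi] for t, xi in zip(treat, x)], y)  # 1.0
```

The regression coefficient on the treatment indicator recovers the true effect of 1.0, while the unadjusted difference in means overstates it.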

2. Outcome Measures


The outcomes for which impact estimates are presented in this report can be grouped into three areas: postsecondary enrollment, financial aid application and receipt, and postsecondary completion.

Postsecondary Enrollment. We estimate the impacts of Upward Bound on enrollment at any type of postsecondary educational institution, along with the highest level of postsecondary institution attended, and the selectivity of four-year colleges and universities attended. Highest level of enrollment was defined as four-year for sample members who attended a public or private, nonprofit, four-year college or university; two-year for sample members who attended a public or private, nonprofit, two-year college, but not a four-year college or university; and vocational for sample members who attended a for-profit institution but no two- or four-year institution.
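The hierarchy just described amounts to a simple classification rule, sketched below with hypothetical type labels:

```python
def highest_level(attended):
    """Return the highest level of postsecondary enrollment, given the
    set of institution types a sample member attended: any subset of
    {"four_year", "two_year", "vocational"}.

    Four-year attendance dominates two-year, which dominates vocational.
    """
    if "four_year" in attended:
        return "four-year"
    if "two_year" in attended:
        return "two-year"
    if "vocational" in attended:
        return "vocational"
    return "none"
```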

Selectivity of four-year colleges and universities attended was measured by using school ratings from Barron’s Profiles of American Colleges (2003). If a school was rated as “most competitive,” “highly competitive,” or “very competitive,” we classified the school as more selective. If a school was rated as “competitive,” “less competitive,” “noncompetitive,” “special,” or unrated, or was excluded from Barron’s, we classified the school as less selective. According to the classification system, more selective colleges and universities generally accept less than 75 percent of applicants, and students at more selective institutions were generally in the top half of their high school class. Less selective postsecondary institutions generally admit more than 75 percent of their applicants. The values of the four-year college or university selectivity outcome variables are set to 0 for sample members who did not attend a four-year college or university; that is, such sample members are classified the same as those who attended less selective four-year institutions.
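A sketch of this coding rule follows (the Barron's category strings are paraphrased here, and the function name is illustrative):

```python
MORE_SELECTIVE = {"most competitive", "highly competitive", "very competitive"}

def more_selective_outcome(attended_four_year, barrons_rating):
    """Binary selectivity outcome: 1 if the member attended a four-year
    school in one of the top three Barron's categories; 0 otherwise,
    including members who attended no four-year school at all and
    schools rated lower, rated "special", unrated, or excluded
    (pass None for those)."""
    if not attended_four_year:
        return 0
    return 1 if barrons_rating in MORE_SELECTIVE else 0
```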



Financial Aid Application and Receipt. We also estimate the impacts of Upward Bound on the likelihood of a sample member applying for and receiving any financial aid, as well as on the likelihood of receiving a Pell grant.

Postsecondary Completion. We estimate the impacts of Upward Bound on completion of any postsecondary credential, as well as on the highest postsecondary credential (degree, certificate, or license) earned. Highest credential was defined as a four-year degree for sample members who earned a bachelor’s degree or higher; a two-year degree for sample members who earned an associate degree but not a bachelor’s degree; and a certificate or license for sample members who earned a postsecondary certificate or license but no higher degree.

To measure these postsecondary outcomes, we use data from the fifth follow-up survey, as well as from administrative records. We describe these different data sources below, along with their strengths and weaknesses in providing valid information for measuring these outcomes of interest.


3. Data Sources


The analyses described in this report are based on information provided by treatment and control group members during the follow-up interviews and by the postsecondary institutions that they reported attending, as well as by two administrative data sources.

Surveys and Transcripts. Almost all sample members completed a baseline questionnaire when they applied to Upward Bound (see Table II.1). We then conducted follow-up surveys in 1994–95, 1996–97, 1998–99, 2001–02, and 2003–04 and achieved high response rates for all surveys.2 The estimates in this report rely substantially on data from the fifth follow-up survey, conducted in 2003–04, which yielded a 74 percent response rate; if sample members are weighted to account for unequal selection probabilities (see Appendix A), the response rate is 72 percent. This survey focused on obtaining information from sample members about their postsecondary educational attainment.

The response rate for the treatment group was 4 percentage points higher than for the control group. Given this small difference in response rates, the differences between marginal treatment respondents—treatment group members who would not have responded if they had been assigned to the control group—and other treatment respondents would have to be very large to have any perceptible effect on the impact estimates. Furthermore, we use the extensive baseline data available to incorporate an adjustment for nonresponse into the sample weights.
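One standard way to fold a nonresponse adjustment into sample weights is a weighting-class adjustment, sketched below with hypothetical cell labels (the evaluation's actual weighting procedures are described in Appendix A):

```python
from collections import defaultdict

def nonresponse_adjust(sample):
    """Inflate respondents' base weights by the inverse of the weighted
    response rate within each adjustment cell (cells are formed from
    baseline characteristics), so respondents stand in for the
    nonrespondents who resemble them at baseline."""
    total, resp = defaultdict(float), defaultdict(float)
    for s in sample:
        total[s["cell"]] += s["weight"]
        if s["responded"]:
            resp[s["cell"]] += s["weight"]
    return [{**s, "weight": s["weight"] * total[s["cell"]] / resp[s["cell"]]}
            for s in sample if s["responded"]]
```

After adjustment, the respondents' weights in each cell sum to the cell's original total weight, so no part of the baseline sample is silently dropped.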




After each follow-up survey, we also collected transcripts from high schools and postsecondary educational institutions attended by sample members. Following the fifth follow-up survey, we requested postsecondary transcripts for 2,079 sample members and received transcripts for 1,772 of them (85 percent); Appendix B describes the data collection procedures.



Administrative Data. Survey respondents may differ from nonrespondents in ways that may affect outcomes (see Appendix Table A.3). While we attempt to account for these differences in observables in our estimation methods and weights, there may be differences in unobservables that remain. Therefore, we collected data from other sources to help mitigate any differences due to survey nonresponse.3 These two administrative data sources, the National Student Clearinghouse (NSC) and the federal Student Financial Aid (FSA) records, use completely different reporting systems. The NSC collects enrollment and degree information from the majority of colleges and universities in the United States, enabling it to provide verification of these activities by institution and semester. The FSA records are based on the Free Application for Federal Student Aid (FAFSA) filled out by most college aspirants, and include information on aid application and Pell receipt. Refer to Appendix B for full descriptions of these administrative data sources.

4. Construction of the Outcome Measures


The data available from the follow-up surveys, the NSC, and the FSA records are used to construct various outcome measures in three different ways: using only the fifth follow-up survey, using only administrative records, and blending data from the surveys and the administrative sources in different combinations. As data from the NSC were available for a period of time after the fifth follow-up survey was completed, we construct two versions of an outcome when data from the NSC records are used: one using all the information available from the NSC records (NSC Full), and the other using information available from the NSC by the end of calendar year 2004, when the fifth follow-up survey was complete (NSC Truncated). A more detailed discussion about the construction of various outcome measures using these different data sources is provided in Appendix B. In the main body of this report, we focus on one measure of enrollment (5B) and one measure of completion (7B); these measures use the fifth follow-up survey, full NSC data, and FSA records in combination and, when the data are not definitive, treat a sample member as a nonenrollee or noncompleter only if there is also no application for financial aid. In the appendixes, we present estimates for many measures to assess the robustness of our findings.
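The blending rule for these measures can be sketched roughly as follows. This is a simplification with illustrative names; the precise decision rules appear in Appendix B:

```python
def blended_enrollment(survey_enrolled, nsc_enrolled, applied_for_aid):
    """Combine fifth follow-up survey, full NSC, and FSA data into a
    single enrollment indicator, in the spirit of measure 5B.

    survey_enrolled / nsc_enrolled: True (enrolled), False (reported
    not enrolled), or None (no data from that source).
    """
    if survey_enrolled or nsc_enrolled:
        return "enrolled"          # either source confirms enrollment
    if survey_enrolled is False or nsc_enrolled is False:
        return "not enrolled"      # a definitive report of nonenrollment
    # Neither source is definitive: count the member as a nonenrollee
    # only if there is also no financial aid application on file.
    return "unknown" if applied_for_aid else "not enrolled"
```

The same pattern applies to the completion measure (7B), with degree records in place of enrollment records.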

We use the different data sources because they have different relative strengths and weaknesses. In conducting the impact analysis for this report, our basic principle has been to utilize the maximum amount of information that is available on the sample members. While the follow-up surveys provide data on a broad range of outcomes, we face the problem of not having data for survey nonrespondents, and the nonrespondents might be systematically different from respondents, potentially leading to nonresponse bias in our estimates. The NSC and the FSA data are two convenient resources to mitigate this problem, as we can get information on both survey respondents and nonrespondents from these administrative records.

However, these administrative sources have their own limitations. The NSC does not cover the entire universe of postsecondary schools, and does not cover all member schools for the entire relevant time period. Nationally, current rates of coverage are 87 percent for students attending a two-year institution and 90 percent for students attending a four-year institution. The coverage rates were lower in earlier years (the NSC data go back to 1993–94); in terms of total U.S. college enrollment, coverage by the NSC data rose from 57 percent in 1997 to 88 percent in 2002, with small increases in subsequent years. Thus, the NSC might be missing data for a sample member who attended and potentially completed his or her education at a postsecondary institution because the institution was not covered by the NSC during the relevant years.4 FSA records provide data on all sample members; however, they do not have information on postsecondary completion, and they provide information on enrollment for only some students (those who receive a Pell grant).



