National Academies Press: OpenBook

Quality in Student Financial Aid Programs: A New Approach (1993)

Chapter: 5 Alternative Perspectives on Quality in Student Financial Aid Programs

Suggested Citation:"5 Alternative Perspectives on Quality in Student Financial Aid Programs." National Research Council. 1993. Quality in Student Financial Aid Programs: A New Approach. Washington, DC: The National Academies Press. doi: 10.17226/2226.


5 Alternative Perspectives on Quality in Student Financial Aid Programs

In the preceding chapter, the panel carefully defined the types of error that are encountered in the student financial aid system and then discussed the quality control and measurement procedures that are currently in use. In this chapter we consider quality from the perspective of three participants with major responsibilities for the control of error in the system: the Department of Education, the student applicant, and the postsecondary institution. We begin with a discussion of the attempts by the Department of Education to gauge the "state of error" in the system through sample surveys, particularly the most recent major study, the Integrated Quality Control Measurement Project. After a discussion of the shortcomings of that study and problems with the large-scale survey approach to error measurement in general, we turn to an examination of possible systemic causes of error by considering the complexity of the process from the viewpoint of the individual applicant. In the final section of the chapter, we consider some of the problems in maintaining quality that are faced by the postsecondary institution, the bearer of legal and moral responsibility for the just administration of financial aid. Not surprisingly, the perspectives of the three players differ in many respects. Each viewpoint, however, constitutes an important dimension of the "total picture" of quality in the student financial aid system.

SURVEYS OF ERROR

Thus far, the sole attempts to measure errors throughout the student financial aid system have been through a sequence of special surveys of aid

recipients commissioned by the Department of Education. The panel first discusses these studies and then addresses the issue of nonrecipients. The early studies (see Table 5-1) focused on errors in Pell grants. More recent studies included estimates of error in Campus-Based and Guaranteed Student Loan awards as the survey designs were developed to select cases in those programs. Our focus in this discussion is on two of the department's

TABLE 5-1 Previous Quality Control Studies Commissioned by the Department of Education

Year(s)   Focus of Study

1975      Office of Education study. Compared IRS records with data from applicants. Repeated in 1977 and 1980.

1977      Report of Student Financial Assistance Group. Sources of data were public testimony, previous studies, and audits.

1979-80   Basic Educational Opportunity Grant study. Examined application data and institutional records.

1980-83   Pell Grant Quality Control Study. Studied error in the Pell Grant Program. The study, which consisted of two large national surveys, compared delivery systems, assessed options for redesigning delivery systems, and developed the Institutional Quality Control Handbook. The study concluded that error in the Pell program was significant and that, aside from the need for program and policy restructuring, quality could be improved through implementation of verification and validation efforts.

1983-86   Stage One of the Title IV Quality Control Study. A national survey of recipients of Campus-Based aid and certified applicants of Guaranteed Student Loans (GSLs). This study examined error in the Campus-Based and GSL programs.

1987      Stage Two of the Title IV Quality Control Study. A national survey of recipients of Pell grants, Campus-Based aid, and certified applicants of GSLs. This study examined error in the Title IV programs.

          Stage Two of the Title IV Quality Control Study. Comparison with other federal programs and assessment of potential incentives and disincentives.

1985-92   The Institutional Quality Control Pilot Project. Assisted educational institutions in designing their own quality improvement programs and demonstrated the applicability of quality control procedures for financial aid at the institutional level.

1986      Guaranteed Student Loan Quality Control Project. Identified and measured error in the GSL program due to financial institutions, guaranty agencies, and the Department of Education.

1988-90   The Integrated Quality Control Measurement Project. Examined the extent of error in the delivery of Title IV aid, patterns and trends in error that point to systemic problems in the delivery system, possible causes of error and steps that can be taken to enhance quality in the future, and the extent to which prior actions have improved quality.

SOURCE: Based in part on Chauvin et al. (1989).

commissioned reports, Title IV Quality Control Project: Stage Two, Final Report, Vol. 2, Corrective Actions (Advanced Technology and Westat, 1987b) and Integrated Quality Control Measurement Project, Findings and Corrective Actions (Price Waterhouse, 1990b). We also rely on Pell Grant Validation Imposes Some Costs and Does Not Greatly Reduce Award Errors: New Strategies Are Needed (General Accounting Office, 1985). Each of these studies presents information on error by program, classified by student and institutional error.

The studies reviewed indicate that error in the Title IV student financial aid programs is sizable and persistent. In the 1988-89 academic year, for example, of the approximately $15.4 billion in aid distributed, Price Waterhouse (1990b) estimated that almost 11 percent of the program dollars were awarded in error. Possible inaccuracies in this and other estimates of program error are discussed later in this chapter, but there is little doubt that the errors, whatever their exact total, are considerable. At each step of the delivery process (applying, determining eligibility, calculating awards, disbursing awards, and monitoring educational progress), errors can be made. Despite the apparent magnitude of this problem, little has been done to prevent error, for example, by changing procedures that seem to generate error. Instead, efforts have been focused on detecting errors in examined files and correcting those specific errors identified.

All the parties involved in student financial aid and its administration are potential originators of error, and indeed, a total quality management approach would hold each of them responsible for the efficient working of the system. Key players include the Department of Education, students applying for and receiving aid and their parents, third-party application processors, and the educational institutions. Historically, however, the emphasis in reporting has been on student and institutional error. Table 5-2 summarizes the results of various studies showing estimated student error, institutional error, and total error (i.e., resulting from either or both sources).

Student Error

Student errors may result from the use of incorrect data, difficulties in estimating and projecting data, or the complexities of the application and its instructions. Table 5-3 compares sources of student error over time in the Pell Grant Program, and Table 5-4 compares sources of student error in academic year 1988-89 by type of Title IV program. In all studies the important sources of error were relatively consistent over time, with the exception of dependency status, which was the most important source of error in 1982-83 and 1985-86 but dropped off the list of important errors in 1988-89, at least in part because the definitions of dependency changed considerably. Also note that the estimates of error cited from these studies

TABLE 5-2 Estimated Student, Institutional, and Total Error in the Title IV Programs, by Study

[The body of Table 5-2 is not legible in the source text.]
TABLE 5-3 Components of Student and Institutional Error in the Pell Grant Program

Academic Year   Error Item                                            $ Error (millions)

Student error
1982-83a        Dependency status                                      64
                Other nontaxable income                                46
                Household size                                         34
                Number in postsecondary schools                        24
                Home equity                                            18
                Assets of dependent students                           17
                Adjusted gross income of parents and
                  independent students                                 16
                Income of dependent students                           12
                Taxes paid                                              2
1985-86b        Other nontaxable income                                75.1
                Home equity                                            64.0
                Dependency status                                      45.4
                Dependent students' net assets                         35.5
                Students' expected income                              32.6
                Household size                                         29.9
                Adjusted gross income                                  20.6
                Number in college                                      18.4
1988-89c        Parents' number in college                             72
                Parents' household size                                70
                Parents' home value                                    47
                Students' 1987 adjusted gross income                   44
                Parents' real estate/investment value                  32
                Students' household size                               31
                Students' 1987 other nontaxable income and benefits    24
                Parents' 1987 adjusted gross income                    24

Institutional error
1982-83d        Missing financial aid transcript                       95
                Incorrect determination of enrollment status          -39
                Incorrect calculation or disbursement of award         24
                Incorrect determination of cost of attendance         -21
1985-86e        Missing financial aid transcript                       41.2
                Missing Selective Service compliance statement         30.5
                Missing statement of educational purpose               28.1
                Award to students with bachelor's degree               13.6
                Incorrect determination of enrollment status            9.6
                Incorrect determination of cost of attendance           8.3
                Loan default                                            4.7
                Incorrect calculation or disbursement of award          3.7

continued

TABLE 5-3 Continued

Academic Year   Error Item                                            $ Error (millions)

1988-89f        Missing statement of educational purpose               29
                Missing Selective Service compliance statement         29
                Award to students with bachelor's degree               29
                Missing financial aid transcript                       14
                Incorrect determination of cost of attendance           9
                Independent under unusual circumstances                 1
                Default/repayment                                      not observed
                Half-time enrollment                                   NA
                Factoring other aid                                    [not legible]

aGeneral Accounting Office (1985:Table 20, p. 57).
bAdvanced Technology and Westat (1987a:Exhibit 3-2, p. 3-5).
cThese numbers are rough estimates based on values read from the graph (Price Waterhouse, 1990b:Exhibit III-1, p. 14).
dNegative values are net underawards. General Accounting Office (1985:Table 22, p. 60).
eAdvanced Technology and Westat (1987a:Exhibit 3-3, p. 3-8).
fThese numbers are rough estimates based on values read from the graph (Price Waterhouse, 1990b:Exhibit III-2, p. 16).

are based on sample data and thus are themselves subject to the sources of error associated with surveys. These problems will be addressed later in this chapter.

Assuming that errors are not made deliberately, several types of "student error" could not be corrected even with additional care on the part of the applicant. For example, some problems arise because, to apply for aid from some states and institutions, the application must be completed before the applicant completes his or her federal tax form. Thus, dependency status, taxes, and adjusted gross income are estimates and prone to error. Similarly, household size and the number in postsecondary schools are projections for the following academic year and are subject to change as circumstances change.
In order to simplify the application procedure and include fewer items that are subject to error, the panel recommends the following:

Recommendation 5-1: For applicants who have filed a tax return in either of the prior two tax years, the information used to complete the application and determine the award should be based on the most recently filed of these income tax returns. When the earlier tax year is used, updated information should be required as soon as a new tax return is filed. The updated information should not be used to change the award before the next term. Further, "household size" and "number in postsecondary school" should be based on the situation as of the application date.
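The base-year rule in Recommendation 5-1 can be sketched as a small selection function. This is only an illustrative sketch, not part of the panel's text; the function name and its inputs are hypothetical:

```python
def select_base_year(application_year, filed_years):
    """Hypothetical sketch of Recommendation 5-1's base-year rule.

    Use the most recently *filed* of the two tax years preceding the
    application year; if neither has been filed, no base year applies.
    """
    for year in (application_year - 1, application_year - 2):
        if year in filed_years:
            return year
    return None

# An applicant for the 1994-95 school year (applying in 1994) who has
# filed 1992 but not yet 1993 taxes would use 1992 data:
assert select_base_year(1994, {1991, 1992}) == 1992
# Once the 1993 return has been filed, it becomes the base year:
assert select_base_year(1994, {1992, 1993}) == 1993
```

The second assertion reflects the recommendation's updating requirement: the newer return governs as soon as it is filed, although the update would not change the award before the next term.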

For example, applicants for the 1994-95 school year would use 1992 tax year data if they have not filed 1993 taxes at the time of application.

TABLE 5-4 Components of Student and Institutional Error in the Federal Student Financial Aid Programs, 1988-89 (dollar error in millions, by program: Pell Grants, Campus-Based Programs, Stafford Loans)

Student error:a Parents' number in college; Parents' household size; Parents' home value; Student's adjusted gross income; Parents' real estate/investment value; Student's household size; Student's 1987 other nontaxable income and benefits; Parents' 1987 adjusted gross income. The Pell Grant column matches the 1988-89 student-error figures in Table 5-3 (72, 70, 47, 44, 32, 31, 24, and 24, respectively); the Campus-Based and Stafford Loan columns are not reliably legible in the source text.

Institutional error:b Missing statement of educational purpose; Missing Selective Service compliance statement; Award to students with bachelor's degree; Missing financial aid transcript; Incorrect determination of cost of attendance; Independent under unusual circumstances; Default/repayment; Half-time enrollment; Factoring other aid. The dollar figures for the three program columns are not reliably legible in the source text.

NOTE: "Home value" was removed from consideration under reauthorization.
aThese numbers are rough estimates based on values read from the graph (Price Waterhouse, 1990b:Exhibit III-1, p. 14).
bThese numbers are rough estimates based on values read from the graph (Price Waterhouse, 1990b:Exhibit III-2, p. 16).

Institutional Error

The most recent study (Price Waterhouse, 1990b) reported institutional error for three categories: procedural, calculation, and distribution errors. Procedural errors occur when institutions do not adhere to established guidelines for granting awards. Table 5-3 compares sources of institutional error over time in the Pell Grant Program, and Table 5-4 compares sources of institutional error by type of Title IV program for the 1988-89 academic year. Note that for incorrect determination of enrollment status and of cost of attendance, the estimated net error in the Pell Grant Program for 1982-83 was negative, which indicates that the estimated underpayments (which we show to be underestimated later in this chapter) were greater than the estimated overpayments for these error sources. Enrollment status does not appear to have been a major source of error in the Pell Grant Program in later years, but it persisted as a troublesome issue in the Stafford Loan Program. According to the Advanced Technology and Westat (1987b) report, confusion regarding enrollment status generally occurs in cases involving students attending summer sessions or clock-hour students. Incorrect determination of cost of attendance causes a large dollar error in all three Title IV programs.

Incorrect calculation and distribution of awards are also persistent problems for institutions. Calculation error, which Price Waterhouse (1990b) reported as dominating institutional marginal errors, occurs when incorrect information is used to calculate awards, for example, errors made in determining the correct cost of attendance. Factoring in other aid produces considerable calculation error as well. In the 1988-89 academic year, a calculation error was made for 21 percent of applications, and factoring error was found for 9 out of 10 applications in error. Distribution error occurs when an incorrect award amount is disbursed.

With respect to the expected award or certification, the Price Waterhouse (1990b) study found meaningful errors in the Campus-Based and Stafford loan programs, but acknowledged potential overstatement of the largest source. These errors do not affect the need calculations as do all the other errors discussed in the study.
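Taken together, the two rates quoted above imply, on one reading of that sentence, that roughly a fifth of all applications involved a factoring-related calculation error. A back-of-the-envelope sketch, using only the figures quoted in the text:

```python
# Back-of-the-envelope reading of the 1988-89 figures quoted above.
calc_error_rate = 0.21    # share of applications with a calculation error
factoring_share = 9 / 10  # of those in error, share involving factoring

# Implied share of all applications with a factoring-related error:
factoring_rate = calc_error_rate * factoring_share
print(round(factoring_rate, 3))  # 0.189, i.e. roughly 19 percent
```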
The panel will focus here on the award error components, since audits of financial administration were already discussed. As can be seen from Tables 5-3 and 5-4, technical errors, such as a missing statement of purpose, certification of Selective Service registration, or financial aid transcript, have a large dollar impact, for when these documents are found to be missing, their absence renders the entire award erroneous. Such cases may or may not otherwise be eligible for financial aid, and for those who are eligible, the calculated award may or may not be the correct amount. The General Accounting Office (GAO) requested an analysis by the Department of Education of cases in which technical errors due to missing documentation were ignored and the files reanalyzed for substantive errors. The analysis indicated that the estimated dollar value of institutional errors dropped by about a third when technical errors were removed, and the percentage of cases with "institutional error" dropped by half (General Accounting Office, 1985). Underawards, when technical errors were ignored, were more frequent and had a larger dollar value than overawards. Department of Education staff mentioned that these administrative requirements are easy to observe during audits and reviews and can be indicators of more serious administrative problems. The panel believes that there is a need to refocus the effort used to detect technical errors. For example, potential improvements in the indicators of quality have been suggested by Goodwin (1991).

Recommendation 5-2: The Department of Education should determine which, if any, of the administrative requirements that can lead to "technical error" are useful surrogates for variables that deal directly with program goals but are difficult to measure. Administrative requirements that are useful in this sense should be retained, but others should be eliminated.

The Most Recent Study of Error: The IQCMP

The Integrated Quality Control Measurement Project (IQCMP) is a large-scale survey of institutions and students who received financial aid during the 1988-89 academic year. The survey, designed and executed by Price Waterhouse in association with the Gallup Organization and Pelavin Associates, involved the first-stage selection of 350 academic institutions and second-stage selection of 3,310 students therein, for a total of 2,653 usable award analyses. The study attempts to evaluate the quality of the Title IV delivery systems for the (1) Pell Grant Program, (2) Campus-Based Programs (Supplemental Educational Opportunity Grant, College Work-Study, and Perkins Loan), and (3) Stafford Loan Program.

The IQCMP is by far the most ambitious undertaking of its type since the inception of the Title IV programs, and the panel wishes to emphasize at the start of this discussion that the contractors should be commended for their efforts to obtain measures of error that can lead to meaningful improvements in the system. Indeed, the panel considered the IQCMP to be so important to an understanding of the state of program error that it commissioned a special background paper (see Appendix A).
Many of the comments that follow are based on that paper, although it should be read in its own right for a fuller understanding of some of the problems with the Price Waterhouse study.

Estimated Error

The IQCMP provides estimates of errors of overpayment and underpayment in the distribution of Pell grant and Campus-Based financial aid, and estimates of overaward errors in Stafford loans. The errors are reported in absolute terms, that is, both under- and overawards are assigned positive

values as their measure of error in quality, and they are reported in terms of percentage of total dollars disbursed, percentage of awards in error, and mean dollars of error per recipient in error.

Components of Measured Errors. A detailed description of the components of measured errors is contained in the report and need not be repeated here. However, it should be noted that the definition of error used in the study encompasses discrepancies between the "actual" award and the "best" award permitted under Title IV regulations. In some cases, this definition resulted in inclusion of some measured error that resulted from unavoidable projection errors. For example, an overaward resulting from a student or family incorrectly projecting household size is counted as an error in the IQCMP, whereas a good faith error of this type is not an error under the Title IV regulations. Other cases were labeled as error because of discrepancies between federal regulations and state or institutional rules. For example, the failure of an academic institution to follow its own internal rules, say, giving a certain student a Perkins loan that exceeds a stated maximum allowable amount, is not a violation of federal rules but nevertheless is a measurable error in the IQCMP. Although the appropriateness of some of the cases of error may be debatable, in general the broader definitions result in measures that are more closely associated with "quality of delivery" than would be obtained under stricter statutory definitions. Further, since the sample consists only of recipients, it cannot be used to estimate errors of nonpayment to eligible nonrecipients. For these three reasons (the use of absolute error, the broad definition of error, and the exclusion of nonrecipients from the estimates), the error estimates from the IQCMP do not represent dollars spent in error.
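The difference between absolute and net error totals can be made concrete with a toy calculation. The award figures below are invented for illustration only:

```python
# Toy illustration (hypothetical award data): absolute error counts both
# under- and overawards as positive, while net error lets them offset.
awards = [
    {"actual": 2000, "best": 1800},  # overaward of $200
    {"actual": 1500, "best": 1700},  # underaward of $200
    {"actual": 1000, "best": 1000},  # correct award
]

net_error = sum(a["actual"] - a["best"] for a in awards)
absolute_error = sum(abs(a["actual"] - a["best"]) for a in awards)

print(net_error)       # 0: the over- and underaward cancel
print(absolute_error)  # 400: both deviations count toward error
```

This is why absolute-error totals like those in the IQCMP measure quality of delivery rather than net dollars misspent: offsetting mistakes still count.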
Errors in the Estimation of Error

In any sample survey aimed at assessing error in a process, there are two major sources of error in the summary measurement of the error of interest. To avoid confusion in the multiple use of the term error in the same sentence, we must distinguish between a "true" error in quality and an error in its observation. First, if a deviation in quality, when it occurs, can be detected and its magnitude observed with perfect accuracy (i.e., without error), then the only error of concern in assessment is the sampling error encountered when survey statistics are used to estimate population quantities such as averages and totals. Unfortunately, in most observational phenomena, there is another source of inaccuracy, called nonsampling error. Within the realm of nonsampling error, one can further distinguish among causes due to problems of undercoverage, nonresponse, and mismeasurement. For example, in the IQCMP the frame from which the sample was drawn excluded certain newer institutions. This exclusion can lead to an undercoverage error if the part of the target population that had no chance of falling into the sample is sizeable and has a

different level or patterns of award error. The success of the "ratio estimation" method of correction for undercoverage mentioned in the IQCMP report relies on the validity of an assumption of similarity between the parts of the population that are covered and not covered by the frame.

The effects of nonresponse errors are much the same as those of undercoverage. It is reported that about 20 percent of the original sample of students was dropped from the study because either the students or their parents could not be interviewed. An overall interview response rate of 80 percent is usually considered quite respectable by modern survey standards, but if the nonrespondents differ dramatically from the respondents, potentially serious biases could exist in the reported results.

Finally, nonsampling errors can occur because repeated observations of the same phenomenon generally do not lead to the same reported measurement, especially if the required measures are based on human judgment. For example, marginal error was measured in the IQCMP study by comparing an award calculated with all reported values and the same award calculated with all reported values except for the substitution of one "best value" for the source of potential marginal error. Measurement error occurs in this observational process if repeated attempts at the same measurement result in variability of the marginal error from observation to observation. In many cases, two persons looking at the same award situation will not agree on the best value for a certain item. It is important to have some idea of the magnitude of the contribution of this source of measurement error to the quality (technically called the mean square error) of the reported results. The IQCMP report provides little or no information concerning the possibility of serious measurement error in the survey field operations.
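The contribution of this kind of measurement variability can be quantified when a design includes independent repeated measurements of the same cases. A minimal sketch, using hypothetical dollar figures and the classical estimator of simple response variance (half the mean squared difference between two independent attempts):

```python
# Sketch: with two independent measurements of the same case's marginal
# error, half the mean squared difference between the attempts estimates
# the measurement (simple response) variance.  All dollar figures are
# hypothetical.

pairs = [
    # (first attempt, second attempt) at measuring marginal error, in $
    (300, 340),
    (0, 0),
    (250, 210),
    (120, 160),
    (0, 40),
]

n = len(pairs)
measurement_variance = sum((a - b) ** 2 for a, b in pairs) / (2 * n)
print(measurement_variance)  # 640.0
```

An estimate of this component, alongside the sampling variance, is what would let a reader judge how much of the reported mean square error is attributable to disagreements between observers.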
Although it is often difficult to reduce or eliminate measurement error in sample surveys, it is possible to design the measurement procedures so that the components of variance due to measurement error and to sampling error can be estimated separately. For example, matches with official records can be used to estimate bias, and independent repeated measurements of a subsample can be used to estimate measurement variability. There is no indication in the IQCMP report that measurement error and its effect on accuracy were either controlled or estimated after the fact.1

1There is a discussion of bias in the document on the sampling plan discussed below, but the bias under consideration is that of noncoverage of the frame, not bias due to systematic effects of measurement error.

Recommendation 5-3: Future studies commissioned for student financial aid programs should consider carefully and discuss the problems that result from errors of measurement. The studies should include

estimation of components of the overall error in estimation due to nonsampling sources and discussion of assumptions underlying those estimates.

The IQCMP Sample Design

The design of the IQCMP survey is described in a separate document (Price Waterhouse, 1989). All the institutions, except the 50 institutions that were part of the quality control pilot project,2 were grouped by geographic proximity into clusters ranging in size from one to four schools (but most frequently three). The clusters were then selected with probabilities proportional to the number of students in each program in institutions in the cluster. Second-stage selection rates for students were such that clusters had equal sample sizes and the rates were constant for samples in each of the three programs.

A technical discussion in the report on the sample design demonstrates that the measure of size used at first-stage selection ensures that the expected total sample size (across all program combinations) is constant for each cluster, thus balancing workload. The discussion also states that the design feature of constant overall direct selection rates for each program tends to increase the precision of the estimates of error rates. However, the design was complicated by including in the sample size for a particular program cases that were sampled for one of the other programs but were also found to be in that program. Analyzing a given program using students in multiple programs to augment the direct sample size results in differential weighting, and the ultimately weighted cluster sizes for that program are not equal.
Thus, the final effect of the seeming elegance of the design on the precision of estimation is not clear.3 2The 50 institutions participating in the quality control pilot project (to be discussed in detail in Chapter 8) were considered to be selected at a first stage with certainty, and students were selected separately from each of the three programs proportionate to the program totals within each institution. A student who was selected for one program but also participated in one or more of the remaining programs, however, was used to augment the direct sample for the other programs. This required weighting the sample observations to account for differences in sampling probabilities. 3In Appendix A, Reiser discusses an error in the calculation of the probabilities of selection due to failure to consider the joint probabilities of selection of the same individual from more than one program. The effect of this error on the weights used in ultimate analyses is not known. It should also be noted that the use of a sample consisting of direct and augmented cases for each program introduces some degree of dependency between error estimates across programs. This point may not be of great importance if the data are used to comment on each program separately, but it should be kept in mind if two or more programs are compared.
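The differential weighting that augmentation produces (footnotes 2 and 3) can be sketched as inverse selection probabilities; the probabilities below are hypothetical, chosen only to show why weighted cluster sizes diverge:

```python
# Sketch of the differential weighting that augmentation produces.  A
# case drawn directly for a program is weighted by the inverse of its
# own selection probability; a case that entered through another
# program's sample carries that program's (different) probability, so
# the weighted cluster sizes for a given program are no longer equal.

def base_weight(p_cluster, p_student_within):
    """Inverse of the overall two-stage selection probability."""
    return 1.0 / (p_cluster * p_student_within)

# Direct Pell case: cluster selected with p = 0.10, student with p = 0.05
w_direct = base_weight(0.10, 0.05)     # about 200

# Augmented Pell case drawn via the Stafford sample at different rates
w_augmented = base_weight(0.10, 0.02)  # about 500

print(w_direct, w_augmented)
```

A correct treatment would also account for the joint probability of selection through more than one program, which is precisely the calculation that footnote 3 reports was mishandled.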

The description of the sample design includes a table that reports the expected (planned) sampling error as a function of various sample sizes (Price Waterhouse, 1989:Exhibit IV-2). The table shows that the expected half-width of a 90 percent confidence interval for an estimated percentage seldom exceeded 2.0 percentage points and never was greater than 2.5 percentage points. These prior estimates appear to have held (more or less) in the sole instances where estimated sampling variability is indicated in the final report (Price Waterhouse, 1990b:Exhibits II-1 through II-7). Reiser (Appendix A) observed, however, that in many other instances where reported standard errors would be helpful they are absent. It is especially difficult to judge the statistical significance of reported interdomain comparisons without knowing the standard errors of the differences.

The panel would like to have seen greater reporting of estimated sampling errors and design effects, especially when the total dollars and average error amount for recipients in error are reported. (The Department of Education has not insisted on computation of standard errors for all survey estimates.) The description of the "bootstrap simulation" method of variance estimation at the end of the sampling plan document is vague and confusing. The paucity of reported sampling errors implies that the method was not actually used on a large scale. Commonly used techniques in designs similar to those used in this study employ either the jackknife, balanced repeated replication, or Taylor series methods of variance estimation after grouping first-stage clusters into random groups or pseudo pairs. The panel wonders whether consideration was given to those approaches and, if so, why they were rejected.
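A delete-one-cluster jackknife of the kind mentioned above can be sketched in a few lines; the cluster-level error rates are hypothetical, and a real design would form replicates within strata rather than over all clusters at once:

```python
# Sketch of a delete-one-cluster jackknife variance estimate for an
# overall mean error rate.  Each replicate recomputes the estimate with
# one first-stage cluster removed; the spread of the replicates
# estimates the sampling variance of the full-sample estimate.

cluster_error_rates = [3.1, 4.6, 2.8, 5.0, 3.9, 4.2]  # % of dollars in error
k = len(cluster_error_rates)
overall = sum(cluster_error_rates) / k

replicates = [
    (sum(cluster_error_rates) - r) / (k - 1) for r in cluster_error_rates
]
jackknife_variance = (k - 1) / k * sum((t - overall) ** 2 for t in replicates)
standard_error = jackknife_variance ** 0.5

print(round(standard_error, 3))  # about 0.348
```

The attraction of such replication methods is that the same replicate weights serve every statistic in the report, which is what makes routine publication of standard errors feasible.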
Software that computes estimates and estimated standard errors consistent with survey designs like that used in the IQCMP, such as SUDAAN or PC CARP, is available for personal computers.4

Recommendation 5-4: In future sample surveys commissioned by the Department of Education, the final report should contain estimates of sampling error for all important estimates. The report should also contain estimates of design effects resulting from the complexity of the sampling plan.

Other Troublesome Issues

Finally, we mention some observed inconsistencies in the reporting of the results of the IQCMP that were discovered by the panel staff. First, the

4At a minimum, some attempt at a more sweeping discussion of the precision of the estimation of the reported error rates and amounts, similar to the way in which generalized variance functions are used in the survey reports of many government agencies, would have strengthened the report, even though generalized variance functions are often of doubtful precision themselves.

1990 Price Waterhouse study purportedly breaks down and analyzes the components that are the principal contributors to student marginal error. Upon investigation, however, Exhibit III-1 in the Executive Summary of their report does not agree with the information in the cited exhibits in the main body of the report (Exhibits III-3, III-4, and III-5). For example, according to the Executive Summary, reported information on the parent number in college ranks as the most significant contributor to student error in the Pell program. However, Exhibit III-3 of the report ranks this component's percentage of dollars in error considerably lower (0.6 percent), below that for parent-reported untaxed income (3.6 percent), parent reports on household size (3.4 percent), parent reports of home value (2.3 percent), parent reports of real estate/investment value (1.7 percent), parent reports of Social Security benefits (1.2 percent), student reports of household size (1.1 percent), and student reports of adjusted gross income (0.9 percent). Similarly, the Executive Summary cites student's adjusted gross income as the most significant source of student marginal error in Stafford loan overcertification error. Exhibit III-5 of the main body of the report, however, shows that it is exceeded in percentage of dollar error by five other components. Departmental staff investigated these problems and found that there were typographical errors in the information in the body of the report (the data in the appendix of the report being correct). Inconsistencies may be found in the written text as well. For example, the Executive Summary specifies parent number in college and parent household size as the two largest contributors to student marginal error, while page III-12 of the main report indicates that the two top contributors are untaxed income and parent household size.
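The Exhibit III-3 percentages quoted above lend themselves to the kind of Pareto-style ranking the report uses; a minimal sketch of the ordering and cumulative-share computation (no chart is drawn, and only the figures already quoted are used):

```python
# Pareto-style ranking of marginal-error components, using the Pell
# percentages from Exhibit III-3 quoted in the text.  Components are
# sorted by share of dollars in error; the cumulative share shows how
# few components account for most of the measured error.

components = {
    "parent untaxed income": 3.6,
    "parent household size": 3.4,
    "parent home value": 2.3,
    "parent real estate/investment value": 1.7,
    "parent Social Security benefits": 1.2,
    "student household size": 1.1,
    "student adjusted gross income": 0.9,
    "parent number in college": 0.6,
}

ranked = sorted(components.items(), key=lambda kv: kv[1], reverse=True)
total = sum(components.values())

cumulative = 0.0
for name, pct in ranked:
    cumulative += pct
    print(f"{name:38s} {pct:4.1f}%  cumulative {100 * cumulative / total:5.1f}%")
```

Run on these figures, the ranking places parent number in college last, which is the discrepancy with the Executive Summary that the text describes.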
Upon consideration, these discrepancies lead us to question the reliability of these data and the analytic results. While the presentation of the data in these reports using Pareto charts is a very useful technique for determining the most serious problems, the department must be careful that acceptable judgments and conclusions are made based on the findings of the IQCMP.

Taking all these issues together, the panel doubts seriously that the various so-called quality control or error studies have very accurate measures of dollars spent or not spent in error. At the same time, through the history of these studies, the repeated finding of high error levels and the consistency of findings regarding contributing causes of error have been useful. In fact, the studies raised many issues regarding sources of error in program administration and are useful in identifying initiatives for quality improvement.

Failure to Estimate Error Leading to Nonaward

According to Department of Education data for fiscal year 1982-83, 42 percent of Title IV aid recipients received overawards and 21 percent received underawards. Typically, overaward has been estimated as much larger than underaward whenever such a classification has been made. There is, however, a serious defect in all the major studies of error in student financial aid programs that must be emphasized. The various surveys, including the most recent IQCMP, analyze only cases of awards; there is no coverage of students to whom financial aid was totally denied. Thus, it is not surprising that these studies find more overpayment error than underpayment error.5 Department of Education staff indicated to the panel that the study design flaw was considered several times in the past, but no funding was provided to address this issue.

On a more basic level, a major concern of the panel is whether the emphasis in the Department of Education on trying to measure and control possible overpayment errors leads to an underemphasis on measuring progress toward the program goals of access and choice in postsecondary education. Potential recipients mistakenly denied financial aid, despite actual eligibility, surely have restricted access and choice. The department should obtain some estimate of the number of such inappropriate denials and the dollar amounts involved. In the next section the panel will urge the department to go a bit further and try to ascertain the extent to which students who would have been eligible for aid had they applied nevertheless failed to apply, perhaps because of the perceived difficulty of the application process or perhaps because of ignorance of the available programs.

Recommendation 5-5: The Department of Education should improve estimates of "error" by including estimates of the coverage of student financial aid programs, that is, ascertaining the frequency with which eligible applicants are mistakenly denied financial aid and the underaward amounts associated therewith.
Rethinking the Use of Surveys of Error

The cost of the IQCMP was approximately $2 million for planning, execution, and analysis alone, not including the efforts of Department of Education personnel in administering the contract. An important question is whether large-scale surveys that attempt to measure the state of error in the system and to identify the sources of that error are cost-effective. Do they result in estimates of sufficient precision and relevance that they can guide quality improvement efforts? Or would the resources be better spent on

5The GAO attempted to fill part of this gap by planning to survey by telephone some 2,000 eligible students who did not receive a Pell grant. They were able to contact only 42 of the young people, testimony to the difficulty involved in trying to get some measure of underpayment error (General Accounting Office, 1985). Such studies would be greatly enhanced if they were done immediately upon rejection of the application so that the problem of recontacting the respondent would be greatly reduced.

alternative approaches to total quality management? If such studies are deemed to be useful, how frequently should they be carried out, and on what scale?

The panel believes that the relevant information for informed policymaking can be gleaned from the surveys that historically have been used for estimation of aggregate error, but the methodological problems with the surveys and apparent limitations on the ability of student financial aid program managers in the Department of Education to detect and criticize the shortcomings lead the panel to make the following recommendation:

Recommendation 5-6: The Department of Education should not routinely embark on surveys of the type that have been used in the past to estimate total error levels in student financial aid programs. The resources are likely to be better spent on continuous monitoring (using data from audit and review activities, for example) and other approaches to quality improvement. Special studies should be done, but only when the study objectives are clearly linked to a policy evaluation or process improvement plan and steps can be taken to eliminate the methodological defects of past surveys. Certainly, the National Center for Education Statistics has staff with the expertise to assist student financial aid program managers in developing studies and reviewing the reports of the studies.

THE APPLICANT'S VIEW

As the ultimate customer of financial aid services and as the provider of the data that put the overwhelming inspection processes described in the previous chapter into motion, the applicant or potential applicant must weigh heavily in decisions concerning process improvement. The panel focused on two areas of importance related to the applicant's need for quality: early awareness and the application itself.
Nonapplicants for Financial Aid

Although not a part of the panel's original charge, the subject of students who may be eligible but never apply is an issue of particular concern to the panel. There may be potential students who could benefit from aid but who do not apply for any number of reasons, such as not understanding the process, misunderstanding requirements, believing that they are ineligible, thinking that college costs are too high, and so on. Such individuals, actually eligible, surely have restricted access and choice as a result of the error in award determination, which is contrary to the objective of federally funded aid. The panel's concern rests in the fact that little effort has been

made to measure the frequency of these occurrences. Progress toward correcting the problem has been limited to outreach in selected areas, but it has not been consistent.

In 1990 the GAO reviewed several sources of information in an effort to determine what applicants and their families know about federal financial aid at various stages of the college choice process. The study found that gaps in specific knowledge exist at all stages, as well as misconceptions, lack of information, and misinformation about aid, all of which are critical components of higher education decision making (General Accounting Office, 1990a). Among the GAO's findings was the fact that those individuals who were aware of the financial aid possibilities were more likely to enter postsecondary institutions. Low-income students aware of Pell grants in their sophomore year of high school and middle-income students similarly aware of loans were most likely to enroll in a postsecondary school upon high school graduation. Minority students and students whose parents had a high school diploma or less also enrolled in higher education more often when they had early knowledge of federal grants and loans.

More recently, in a statement before the U.S. House of Representatives, Subcommittee on Postsecondary Education, the GAO's assistant comptroller general for program evaluation and methodology (Chelimsky, 1991:1) noted that, "Currently, knowledge of available student aid is limited and inaccurate, and many students who probably could benefit from higher education end their schooling early." Clearly, this is a problem that must be addressed. In order to move closer to the goal of higher education being accessible to all, changes must be made in reaching and advising potential students and their families.
Although the studies cited above seem to imply that it is obvious that causation runs from knowledge of the programs to enrollment, a word of caution is necessary. It may well be that preexisting intention to enroll encourages early knowledge seeking. A longitudinal study, at least, would be needed to determine the influence of early information on enrollment decisions.

Recommendation 5-7: The panel commends the Department of Education for its recent efforts to improve early awareness of federal financial aid programs, such as providing financial aid software to high schools. The panel recommends increasing those efforts. Studies of the effectiveness of the efforts and users' reactions to the timeliness, accuracy, usefulness, and clarity of outreach and counseling services are needed.

Very few ongoing data sets are available from which to obtain information about students who have not applied for financial aid. The Department

of Education maintains administrative records on only those students applying for financial aid, and the IQCMP studies discussed earlier collected data only on aid recipients. The only sources of data on students not applying for federal financial aid are National Center for Education Statistics surveys: the National Postsecondary Student Aid Study (NPSAS), High School and Beyond, and the National Longitudinal Survey. The NPSAS asks students who did not apply for aid why they did not. Departmental staff indicated to the panel that the unweighted frequency of responses to the question regarding the most important reason for not applying indicates that the overwhelming majority stated either that their families paid for their education or that their income was too high to receive aid.

In addition to collecting self-reported data on why students did not apply for aid, the Department of Education is currently using the NPSAS to analyze the characteristics of postsecondary students who appear eligible for Pell grants but did not receive them. Preliminary results from this comparison reveal that eligible nonrecipients were much more likely than recipients to be part-time and/or part-year students. The eligible nonrecipients were also much more likely to be financially independent, to attend less-than-four-year schools, and to have greater income and assets than Pell recipients.

Recommendation 5-8: The Department of Education should gather data on the reasons for nonutilization of student financial aid by potential recipients. Consideration should be given to ways of estimating the number of potentially qualified applicants who are discouraged for one reason or another from applying for financial aid in the first place. Further, the department should consider ways of estimating the number of potential students who do not even attempt to enter a postsecondary school because of their ignorance of available financial aid.
Also, data on the knowledge of aid should be collected from students and their families before the student finishes secondary school. Such data could be obtained in a variety of ways, including ongoing national surveys. The resulting information will be important in devising a program for reaching those who are eligible but do not apply.

Applying for Federal Student Financial Aid

For the 1992-93 award year, students who wished to apply for the Federal Pell Grant, Federal Stafford Loan, Federal Supplemental Educational Opportunity Grant, Federal Work-Study, or Federal Perkins Loan programs could make application in one of three ways:

· Students could complete the Application for Student Financial Aid (AFSA) produced and distributed by the Department of Education.

· Alternatively, students who wished to apply for the above programs could complete an application form produced by one of the multiple data entry (MDE) contractors. Federal regulations require that MDE-produced forms incorporate verbatim all instructions and data items from the AFSA. However, a major difference between AFSA and MDE-produced forms is that the latter can include additional items needed for the administration of student aid programs, including nonfederal programs, at campuses served by the MDEs. Thus, MDE-produced forms tend to be longer, and more complex, than the AFSA.

· In the 1990-91 award year, the Department of Education introduced the Electronic Data Exchange application process. This process, which is available at an expanding number of schools, enables participating schools to enter and review federal student financial aid application data using a personal computer or mainframe computer.

Panel members were concerned that the design of the three application methods could cause confusion among applicants and financial aid officers, lead to inadvertent errors in financial aid decisions, and deter some students from applying. Such up-front errors could then lead the Department of Education to perceive a need for more inspection. Testimony before the panel, as well as the panel's own deliberations, suggest that the application forms for student financial aid might have been difficult to complete, especially for the typical lower income applicant, and that the design of the form might complicate the work of student financial aid officers who advise students about financial aid, check the accuracy of applicants' responses on application forms, and attempt to correct erroneous data. As a result, the panel undertook a study of the federally produced AFSA and the Electronic Data Exchange process (see the report in Appendix C).
The 1992-93 AFSA

The AFSA for the 1992-93 academic year became available to applicants in late 1991. The form is available in English and Spanish versions, although the panel examined only the English version during its deliberations.

The English version of the 1992-93 AFSA consists of a four-page application form and a preaddressed mailing envelope stapled in the middle of a 12-page booklet, which consists mainly of instructions, definitions, and worksheets needed to fill out the attached application form. The packet contains only a limited amount of information on the five federal programs for which the AFSA is the relevant application, and students wishing more information on those programs are directed (on page 11 of the booklet) to write to the Department of Education for a free copy of another booklet, The Student

Guide: Financial Aid from the Department of Education - Grants, Loans, and Work-Study 1992-1993 (U.S. Department of Education, 1992-93). The Student Guide is a 60-page booklet describing the five programs for which the AFSA is the relevant application, as well as Supplemental Loans for Students and Parent Loans for Undergraduate Students.

The AFSA package and the Student Guide provide different types of information, and information from both is needed if students are to make informed decisions about financial aid applications. For example, the Student Guide describes federal financial aid programs and discusses key eligibility requirements and borrowers' rights and responsibilities. It also lists telephone numbers for Federal Student Aid Information Centers, numbers that are not given or referred to in the AFSA packet, where the need is probably much greater. On the other hand, the Student Guide does not provide definitions of several key terms or discuss special circumstances that can trigger flexibility in student aid eligibility requirements. Instead, those are discussed in the AFSA packet. The important point here is that an applicant who relied only on the documents provided by the federal government in order to apply for student financial aid would apparently need 72 pages of information and instructions in order to complete an informed application for student financial aid from the federal government.

More important, the four-page AFSA form appears dauntingly complex. A cursory inspection of the form by panel members revealed that the form is crowded in appearance, uses a potpourri of response formats presented in multiple colors, and sometimes contains confusing or incomplete directions and inconsistent wording of questions. Moreover, only one copy of the form is included in the AFSA package.
Thus, no working copy is available for use as a draft or to keep as a record should verification or correction be required. Despite this, the instructions specifically state that applicants are to use a pen in filling out the form, and a standard black-and-white photocopy of the AFSA would not adequately reflect the multicolored format. As a result, many applicants routinely request two copies of the entire 12-page AFSA package, thus wasting 12 pages to secure another 4-page form.

In summary, the panel's initial work suggested that the AFSA had been produced without sufficient regard for well-known form design considerations or sufficient attention to the accuracy of the wording of items or clarity of directions to the applicant. To confirm this impression, the panel requested and received help from 13 federal agencies and research organizations known to have expertise in the design, collection, and processing of information similar to that collected by the 1992-93 AFSA. In mid-March of 1992, the panel mailed those organizations a letter that stated, in part:

The panel is especially interested in expert advice concerning the complexity of the form and instructions, especially for the typical applicants (low income, high school graduates and their parents).... Comments or suggestions based on your organization's experience with the collection and study of similar data would be helpful to determine whether errors made by applicants or data processors might be reduced through revisions in wording, sequence, instructions, format, or other aspects of the application and related tasks.

In general, the reviewers were critical of the AFSA and accompanying instructions. For example, reviewers commented as follows:

· "The aid application forms and processes are much more complex and onerous on applicants than they need be, more onerous than a 1040 tax form. Ironically, [many] of the students seeking aid come from low income families which either file no tax return or file a 1040EZ. The student aid application form is probably the most complicated financial form they have ever seen."

· "We agree that this form and accompanying instructions are exacting and complex due to the myriad of financial requirements which must be documented by law. As a result, student-level applicants and low-income parents are likely to encounter much difficulty in both understanding and completing the AFSA."

· "I found the form to be difficult to follow. Part of the difficulty is due to the amount and complexity of the information required, but the overly complicated appearance and structure of the form also contribute to confusion."

· "Looking at the form, I don't think any of the questions are very difficult. But it has the appearance of being overpowering. The first page is cluttered and the rest is red and grey. From the face of it, my guess is that it is confusing."

· "I think the similarity to income tax forms might underline the importance of completing the form correctly, but many people are intimidated by tax forms. I am concerned that this general similarity and the overall lack of "white space" make the task appear more formidable than it is."
"Needed improvements would require a total redesign of all features mentioned in the letter wording, sequence, instructions, format, and so on." · "The form itself appears manageable for typical applicants.... The instructions that accompany the application form are quite difficult and complex. We would expect significant confusion and incorrect responses if the instructions are not modified." Other, more detailed comments were offered by the reviewers, as were specific suggestions for changing the AFSA and accompanying instructions (see Appendix C). Although the reviewers did not cite statistical research demonstrating the extent of error in responses to items similar to those included on the AFSA, several suggested that review and research should be

ALTERNATIVE PERSPECTIVES ON QUALITY undertaken on this point. comments: 105 The following statements exemplify reviewers " a careful review should be conducted to determine minimum data requirements; then extensive questionnaire design work should be un- dertaken to develop a form that collects these data in a way that is simplest for applicants to provide. The questionnaire design work should address not only wording of questions but also form layout and general instructions provided. Cognitive psychology techniques could be very useful." · "Pretest with respondents typical of those who would use the form. If you can observe them completing the forms, and probe where they appear to be having difficulty or are making errors, it may help explain why errors are made and how to correct or avoid them." The informal testing methods suggested by the reviewers are not neces- sarily expensive to conduct, and they have been employed by many federal agencies. Use of such techniques was discussed in the Office of Manage- ment and Budget's Statistical Policy Paper 10 (DeMaio, 1983~; the ex- amples in that report include the Social Security Administration's group interviews with teenagers and adults prior to a revision of the application for a Social Security form. Papers commissioned by the panel describe Internal Revenue Service (IRS) experience with the informal testing of forms (see Appendix E) and two redesign projects conducted at the Behavioral Science Research Center of the Bureau of Labor Statistics (BLS) the IRS Schedule C study and a BLS survey (see Appendix D). centers are sometimes referred to as cognitive labs. Such research The AFSA and related documents have never been field-tested among students. A predecessor of the AFSA, the basic grant form, was field-tested under contract in 1981, but far fewer data elements were required by legis- lation to be collected at that time, and the prototype was an uncrowded, two-page form (Rehab Group and Macro Systems, 1981~. 
Nevertheless, much helpful information was gathered during this field-testing and used to improve the form. In addition, the contractor that conducted the field test made a series of suggestions for future activities designed to improve the application for student financial aid. Those suggestions included using routine activities to gather information on problems applicants are having with the form, engaging in research on the effects of variations in forms on respondents, and reviewing comparable benefit application forms as a means of improving the forms used to apply for federal student financial aid. The panel concurs with those recommendations and believes that, over a decade later, they should be implemented by the Department of Education. Similar concerns were expressed by the General Accounting Office (1985).

The 1992-93 Student Aid Report

The panel also examined the Student Aid Report (SAR), which is sent to applicants after the paper version of the AFSA is processed. If an AFSA is not rejected for one of a variety of technical reasons, an applicant can expect to receive a SAR from the processor within about four weeks. The SAR contains the data given by the applicant on the AFSA, information from the Department of Education about the applicant's eligibility for federal student aid, and instructions about what to do next. At this step, applicants also might receive either an Information Review form or an Information Request form on which to provide additional information or make corrections and to verify any assumption edits that were inserted during processing. Applicants who appear to qualify for Pell grants based on their initial application receive a payment voucher at this time. If corrections or additions to an application are needed, applicants can expect to wait two to three weeks after they submit changes for receipt of a revised SAR.

As part of its study, the panel reviewed the comments of over 600 applicants who wrote to the Department of Education in 1991 in response to an invitation in the SAR to voice their comments or ideas for improving the SAR. Three-fourths of the comments referred to confusion in reviewing or correcting data due to a lack of correspondence between the item sequence on the SAR and the item sequence on the AFSA. Apparently, as students work their way through notations on the SAR, they must move back and forth within the AFSA to make corrections, all the while paying careful attention to the AFSA's color-coding scheme. Thus, in line with the recommendations above, the panel also suggests that the Department of Education review the SAR with the goal of relieving the burden on the applicant. Suggestions about potential changes to the SAR are discussed in Appendix C.
Electronic Version of the 1992-93 AFSA

In addition to examining the paper versions of the AFSA and the SAR, the panel also reviewed the Department of Education's Electronic Data Exchange application process, also known as Stage Zero (for details of this review, see Appendix C). According to the user's manual, the electronic application form for 1992-93 is "intended for student use in the financial aid office environment, with monitoring and counseling done as needed by financial aid staff. Some institutions may want to have aid administrator staff walk through the questions as the student enters his/her information. Others may have staff members use this entry mode while interviewing applicants" (U.S. Department of Education, 1991a:15-1). In practice, most applicants work from an already completed paper version of the AFSA.

Alternatively, an "expert" entry mode is provided so financial aid personnel can quickly enter data from applications initially submitted on paper. An advantage of the Stage Zero process is that it includes an Electronic Needs Analysis System, which enables financial aid personnel to use data already keyed by students to calculate a number of expected results for a given applicant. Using this system, financial aid personnel can calculate an expected family contribution using the Congressional Methodology. Other financial calculations, such as an estimated Pell Grant Index for determining Pell eligibility, as well as an estimated Pell award amount, can be performed (the Congressional Methodology and the now unused Pell Grant Index were discussed in Chapter 3). In addition, Stage Zero allows verification selection criteria to be applied on-site. While the calculations are not official, they do provide immediate information to applicants. Further, an important incentive for using this system is that it can considerably shorten the time from application completion to receipt of federal aid.

Stage Zero also can be used for electronic filing of renewal applications, a process that was introduced for the 1992-93 school year and is available to returning students at selected institutions. Renewals for other students require filling out the entire paper AFSA, no matter how much of the data previously filed are still correct. In the electronic renewal process, only changes in data need be entered, after which a copy of the completed form is generated by the computer.

Learning to use Stage Zero is easy. Experience with a keyboard is helpful, but even hunt-and-peck typists and others inexperienced with computers should be able to adapt to the system easily. Answers to most questions are precoded, and those that are not do not generally require lengthy answers.
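The kind of calculation performed by the Electronic Needs Analysis System can be illustrated with a short sketch. The allowances and assessment rates below are hypothetical stand-ins chosen only to show the shape of the computation; they are not the actual Congressional Methodology tables, which were set by statute.

```python
# Toy need-analysis sketch. The allowance amounts and rates below are
# HYPOTHETICAL illustrations, not the Congressional Methodology tables.

def expected_family_contribution(parent_income, parent_assets,
                                 income_allowance=15_000,
                                 asset_allowance=30_000,
                                 income_rate=0.25, asset_rate=0.05):
    """Assess income and assets above fixed allowances at flat rates."""
    assessable_income = max(parent_income - income_allowance, 0)
    assessable_assets = max(parent_assets - asset_allowance, 0)
    return round(assessable_income * income_rate
                 + assessable_assets * asset_rate)

def estimated_need(cost_of_attendance, efc):
    """Financial need is the cost of attendance less the family contribution."""
    return max(cost_of_attendance - efc, 0)

efc = expected_family_contribution(parent_income=28_000, parent_assets=10_000)
print(efc)                         # 3250
print(estimated_need(9_000, efc))  # 5750
```

The value of performing this calculation at the point of entry is that the applicant sees an immediate, if unofficial, estimate rather than waiting weeks for the central processor.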
The first screen displayed by the program contains simple instructions on using the system, and additional information is provided through help messages. On the screens that follow, the wording, sequencing, and numbering of items match the contents of the paper version of the AFSA. A limited number of questions appear on each screen, which overcomes the "crowded" look of the paper AFSA, and answer cells are consistently located on the right-hand side of the screen (except for long items, such as names and addresses), which eliminates the varied formats of the paper AFSA.

Although the panel's review of Stage Zero revealed several minor problems, the electronic application process appears to have many advantages over the paper process. Data entries can be proofread on screen, and errors can be corrected simply by backing up and rekeying the entries. In addition, edit and query messages automatically appear on screen to alert the user to incorrect, incomplete, inconsistent, or suspect data. In some cases, previously inserted data are entered automatically when called for in later items. In addition, the complex skips called for in the paper version are performed automatically, and help messages are available throughout the program. The program also contains worksheets that do the mathematical calculations called for in some parts of the application and then automatically insert the totals into the proper box on the application form.

When using Stage Zero, an applicant can stop at any point in the program, although if the application is not completed, the entries made cannot be saved. The Department of Education should determine whether this is a shortcoming in need of action. At the conclusion of the program, applicants are asked if they want to review the application. Choosing "yes" causes the display to return to the beginning of the form. Corrections can be made easily, and when the applicant okays the data, the program automatically informs the user that the application is being validated. Corrections can also be made if inconsistencies are flagged during the validation process.

If the application passes the validation stage, the program automatically saves the file and prints a summary of the data entered on a two-page printout, which also contains a certification statement for the student to sign. The Quality Control Guidelines in the user's manual instruct the applicant to then compare the answers on the printout with the responses on the applicant's paper form. However, this is a difficult task because item numbers are not shown on the printout and section names do not match those on the paper version of the AFSA. Moreover, applicants are instructed to make written corrections on the printout, but that is not always possible due to a lack of space. Completed and validated applications can be sent to the Department of Education's Central Processing System automatically, or data files on floppy diskettes can be mailed.
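The on-screen edit and query messages described above amount to consistency rules applied at entry time. A minimal sketch of the idea follows; the field names and the specific rules are invented for illustration and are not the actual Stage Zero edit specifications.

```python
# Hypothetical entry-time edit checks; field names and rules are
# illustrative only, not the actual Stage Zero edit specifications.

def edit_checks(app):
    """Return edit messages for incorrect, incomplete, or
    inconsistent entries in an application record (a dict)."""
    messages = []
    if app.get("age") is not None and not (14 <= app["age"] <= 99):
        messages.append("age: value out of expected range")
    if app.get("marital_status") == "single" and app.get("spouse_income", 0) > 0:
        messages.append("spouse_income: inconsistent with marital status")
    # Skip logic: dependent applicants must also supply parental data,
    # so the related items are required rather than skipped.
    if app.get("dependent") and "parent_income" not in app:
        messages.append("parent_income: required for dependent applicants")
    return messages

app = {"age": 18, "marital_status": "single", "spouse_income": 0,
       "dependent": True}
print(edit_checks(app))  # ['parent_income: required for dependent applicants']
```

Catching such problems at the keyboard, rather than weeks later on a SAR, is precisely the advantage the panel observed in the electronic process.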
Applications can be processed and an Electronic Student Aid Report returned to the institution within 72 hours using the Electronic Data Exchange System.

The Stage Zero process has many advantages. Many of the advantages are nullified, however, by having an applicant first fill out the paper AFSA and get the necessary signatures before filling out the electronic application. The panel believes that this does not take advantage of the many Stage Zero features that make it easier for the applicant to understand the information required on the application or to provide that information accurately. If applicants could start with Stage Zero, working from notes, and fill out the paper form concurrently or subsequently, errors and burden might be reduced.

Moreover, the use of Stage Zero is currently limited to postsecondary educational institutions participating in student aid programs. However, a slightly modified version of the software might be appropriate for off-campus use. In fact, many home and most office personal computers meet the requirements for installation of the 1992-93 Stage Zero version, and with appropriate modifications, it might be possible for high schools, churches, community organizations, civic groups, and mentors with access to a computer to assist aid applicants in filling out the forms.

Recommendations on Financial Aid Forms

Based on its analyses of the various forms used in the federal student financial aid application process, the panel makes two recommendations:

Recommendation 5-9: The complexity of the forms, instructions, and information booklets leads to excessive burden for applicants and is a cause of error. Thus, the Department of Education should consult experts in form and question development, such as those found at cognitive research laboratories, to aid in its efforts to improve the application materials.

Recommendation 5-10: The Department of Education should continue to improve and expand the availability of its electronic data exchanges, making sure to address the issue of appropriate balance between the potential for improved data and additional burdens that might be placed on the schools.

Although the panel was unable to estimate the amount of error caused by current versions of the AFSA and SAR or the reduction in error due to use of Stage Zero, the evidence reviewed above suggests that attention to these recommendations has considerable potential to reduce applicant burden and improve the quality of data. The reduction in burden, alone, is a quality dimension that should be of interest to a governmental agency. Improved data quality can reduce inspection and correction activities during SAR and verification reviews.

POSTSECONDARY INSTITUTIONS

Institutions are increasingly burdened by the administrative complexities imposed by the federal government. From an original promise to administer the Pell eligibility requirements centrally, the government has moved to an increasingly complex student aid report, which must be verified by school personnel.
The addition of information unrelated to student need, such as Selective Service status, resulted in a form that has grown from two pages to four. In addition, hard copies of applications and related "underwriting" materials must be physically retained for long periods of time, which creates the need for expensive warehousing in the financial aid offices.

Postsecondary institutions have other concerns as well. Various layers of aid and the complexity within each layer contribute to a cumbersome aid structure. Additionally, with continuous regulatory changes and requirements, schools must often adapt to changes in rules and accurately administer financial aid simultaneously. All factors considered, the system inevitably becomes error prone. In the remainder of this chapter, the panel relates concerns of the institutions that were repeatedly expressed by representatives of institutions and during visits to student financial aid offices and looks at the available measurements of those concerns.

Administrative Cost and Regulatory Requirements

Federal and Local Costs

The program of oversight of the federal student aid programs appears to impose unreasonably high costs for inspection of processes. Contributing to the high costs of inspection are layers of redundant review. An individual applicant, for example, may be subject to repeated data and process examination of initial information as a student loan recipient, as a Pell grant recipient, as a part of internal audits, and as a part of federal audit requirements or program reviews.

Cycles of reexamination are also problematic. Applications undergo separate inspections at each institution if the student is applying for or considering transfer to several institutions of higher education. Even a student who applies for and attends only one institution is not immune from repeated cycles of review of data each time credit load varies, income changes, a new award is received, or for a myriad of other conditions. Revision of each student award five times per academic year is not uncommon. Besides adding to the institution's work, these changes add uncertainty to the financial planning of the student.
State and School Program Coordination

Although the subject of this study is the federal student aid process, it is important to note that to the student and institutional participants, the federal government, while dominant, is nonetheless only one provider of student aid (see Tables 5-5 and 5-6). Assessment of the perceived quality of the aid process must consider all sources of aid because many students with high financial need would simply not be able to participate in higher education with federal funds alone, even with maximum funds available. This is especially true for a student living away from home at a private university, but it is also true for some public universities. The packaging of the various aid sources, however, adds to the administrative burden of the institution.

State grant programs vary in the proportion of financial aid they provide, but almost every state provides some level of support in addition to the federal programs. In part that is due to the incentive structure of the State Student Incentive Grant Program, a federal grant that requires state matching; however, most states match a much higher proportion of federal aid than is required. States direct funds to needy students, to perceived manpower needs, and to students with high academic performance, in the main. Thus, the states add their own regulations. It is possible that the level of state regulation has been minimized due to the high burden on institutions and on the states themselves from federal student aid regulation. Nonetheless, institutions must often resolve conflicts inherent in federal and state regulations. Further, the student recipient often perceives the layers of state and federal regulation as confusing or, worse, as a roadblock. For students at high-cost institutions, additional information requirements may be requested in order to ration scarce school resources.

TABLE 5-5 Profile of Aid Packages, All Students

                              Average Dollars per Recipient, by Program
Number of Students with       ------------------------------------------------
Aid Combination (000s)        Pell Grant   Campus-Based   Stafford and   Other
                                                          SLS Loans
10,578                                 0              0            0         0
 2,912                                 0              0            0     2,854
   607                                 0              0        3,646         0
   592                                 0              0        3,763     3,891
    50                                 0          1,265            0         0
   141                                 0          1,495            0     3,461
    92                                 0          1,600        4,180         0
   295                                 0          1,598        4,114     4,374
   613                             1,243              0            0         0
   636                             1,339              0            0     1,763
   490                             1,545              0        2,805         0
   485                             1,434              0        2,572     2,095
   189                             1,551          1,135            0         0
   353                             1,525          1,325            0     2,085
   187                             1,603          1,085        2,481         0
   370                             1,545          1,465        2,563     2,819

NOTE: SLS = Supplemental Loans for Students Program; "Other" includes state, Veterans' Administration, and institutional aid.

SOURCE: Data from 1990 National Postsecondary Student Aid Study; provided by Daniel Goldenberg.
For example, a student whose parents are divorced encounters federal and state requirements to provide information from his or her custodial parent and stepparent (if any) but, in addition, may be required to provide parallel data from his or her noncustodial parent and stepparent to meet the institution's needs.

TABLE 5-6 Profile of Aid Packages, Full-Time Undergraduates

                              Average Dollars per Recipient, by Program
Number of Students with       ------------------------------------------------
Aid Combination (000s)        Pell Grant   Campus-Based   Stafford and   Other
                                                          SLS Loans
 3,698                                 0              0            0         0
 1,206                                 0              0            0     2,648
   380                                 0              0        3,067         0
   353                                 0              0        2,722     2,792
    35                                 0          1,400            0         0
   107                                 0          1,482            0     3,259
    54                                 0          1,352        2,902         0
   199                                 0          1,396        2,805     3,758
   360                             1,379              0            0         0
   471                             1,427              0            0     1,823
   416                             1,597              0        2,868         0
   405                             1,459              0        2,599     1,965
   153                             1,621          1,214            0         0
   301                             1,570          1,402            0     2,115
   163                             1,630          1,121        2,514         0
   336                             1,568          1,480        2,552     2,746

NOTE: SLS = Supplemental Loans for Students Program; "Other" includes state, Veterans' Administration, and institutional aid.

SOURCE: Data from 1990 National Postsecondary Student Aid Study; provided by Daniel Goldenberg.

Extensive and Ever-Changing Regulations and Policy

A significant factor in the ability of financial aid administrators in institutions of higher education to perform their jobs thoughtfully, accurately, and compassionately on behalf of students is the extensive and ever-changing regulations and policy governing the student aid programs. Change is rapid and accountability is immediate. Regulations or statutes frequently go into effect within 30 days of being issued or are retroactive, and changes often take effect months before the information required to implement them is received. As a result, financial aid offices and officers must often function with unclear directions. For this reason, the quality and timeliness of information on regulatory changes should be improved, including the interpretation of the regulations and guidance on how to establish administrative systems to implement the changes.

The Higher Education Act, which authorizes funding for federal student financial assistance, is reauthorized every six years. If changes in the student aid programs were limited to a six-year cycle, the process could be manageable and would allow administrators to use their expertise to perform the best job for all students. However, the reality and the ideal are very far apart. In the past 10 years, over 20 new responsibilities for student aid administrators have been enacted. Ryan (1988) lists the following examples:

· verification of data used to award Pell grants, guaranteed student loans (GSLs), and all federal Campus-Based aid;
· increased Pell program reporting and documentation;
· implementation of a determination of satisfactory academic progress;
· multiple disbursements of Guaranteed Student Loans;
· implementation of the Supplemental Loans for Students and Parent Loans for Undergraduate Students programs;
· verification of citizenship;
· documentation of independent student status;
· adherence to changes in the Federal Tax Reform Act, which affected scholarships and grants;
· new refund and repayment policies; and
· collection of Selective Service compliance and antidrug statements.

Ryan indicates that these duties not only required changes in procedure at institutions, they also required reeducation of students and retraining of staff. In some cases, changes to computer systems and informational handouts were needed.

Although a deadline of December 1 for any policy or regulation to go into effect for the next academic year (which starts on July 1) was established during the 1986 reauthorization, the deadline is frequently ignored. Often, regulations are either deemed too significant to delay or budgetary savings take precedence over the need to provide sufficient lead time for implementation. As a result of concerns regarding extensive and ever-changing regulations and policy, the 1992 reauthorization act requires a master calendar for financial aid rules and that the Department of Education engage in negotiated rule making.
Another issue raised by ever-changing regulations and policy is the inappropriate use of common-cause regulations to effect special-cause events. (Recall the discussion of common-cause and special-cause problems in Chapter 2.) Over the years, added layers of regulations have been placed on students and institutions as a whole as a method of achieving compliance from small segments of the student and institutional population. Whether it is worth burdening all to ensure the compliance of a few is questionable. Common-cause regulations do not adequately deter schools with undesirable outcomes nor reward those that exhibit good management of federal programs. Guidelines are not designed with the intent of improving the administration of financial aid programs by setting performance standards; rather, penalties are assessed for noncompliance as an attempt to motivate good performance.

Financial aid offices struggle to keep up with the burdensome administrative requirements. Administrative costs reportedly have risen faster than the overhead recovery rates allowed by federal regulation; consequently, institutions are forced to allocate resources to the financial aid office that could be spent on instructional purposes. This is particularly serious due to the significant financial crisis facing many, if not most, colleges. Only institutions that award a high proportion of their own funds, as contrasted with federal funds, appear able to invest in improvement. Some institutions believe that some loans, for example Perkins loans, are too cumbersome for their limited financial aid staff to administer and therefore they do not offer them.

Burden and Error

Given the complexity of the student financial aid system and the large number of players involved throughout the process, many concerns arise regarding administrative burden and error. How much burden is acceptable? Who should bear the burden? How real is the problem associated with error?

A recent assessment of regulatory burden (Price Waterhouse, 1991) found consensus among all types of institutions regarding such issues as mandatory in-person counseling and the complex determination of dependency status. Mandatory in-person counseling of all loan recipients upon entering and leaving the institution was found to be burdensome, unproven in reducing defaults, and inappropriate for institutions with low default rates. Similarly, collecting documentation each year to prove that a student is independent, when the student has already proven independence in previous years, seems to be an unnecessary burden, particularly given that the rules are difficult to administer.
Other issues of concern agreed upon included the manual reporting and recordkeeping requirements associated with the financial aid transcript and Student Aid Report, and the burdensome need to collect numerous signed statements from the student regarding illegal use of drugs, educational purpose, and other items. To alleviate much of the burden associated with manually keeping records, Price Waterhouse (1991) proposed instituting the use of an electronic system and recommended that changes to specific statutes and regulations be evaluated for their potential to reduce unnecessary burden.

When addressing the concerns related to obtaining signed documentation from students, the issue of technical error arises. Technical errors, or categorical errors as some studies refer to them, are a component of procedural error that occurs when an institution fails to adhere to established guidelines. As noted earlier in this chapter, when any one of the following documents, required by the Department of Education, is not present in a student's file, the student is considered to be categorically ineligible: financial aid transcript, statement of educational purpose, statement of registration for Selective Service, and documentation of independent status under unusual circumstances. These causes of error have the greatest impact on total error because when they occur, the entire amount of an award is considered to be in error. Categorical errors account for about two-thirds of the institutional errors found in studies commissioned by the Department of Education. Three reports (General Accounting Office, 1985; Advanced Technology and Westat, 1987b; Price Waterhouse, 1990b) all found that although the frequency of categorical errors is low, the dollar impact was significant.

In the 1982-83 academic year, 3 percent of Pell grant recipients received awards in error due to a missing financial aid transcript, and the net estimated error equaled $95 million (General Accounting Office, 1985). In the 1985-86 academic year, net marginal error due to a missing financial aid transcript was $41.2 million and affected 2 percent of Pell grant recipients; for GSL recipients, 1 percent had categorical errors totaling $142.8 million (Advanced Technology and Westat, 1987b). In the 1985-86 academic year, some form of categorical error occurred in 4 percent of Pell grant cases, which resulted in $114.2 million in institutional payment error. Estimates related to the error due to a missing financial aid transcript for the 1988-89 academic year, by aid program, are presented in Table 5-7.
Although the dollars in error appear large, as a percentage of total dollars they are no more than the percentage of students with error. Since dollar error is nested within this small number of students, error reduction strategies still require finding those students in error. Further, once the appropriate documents are obtained and added to the student's file, the technical error is corrected and the award is no longer considered to be in error.

TABLE 5-7 Pell Award Error Due to Missing Financial Aid Transcript, 1988-89 Academic Year

                              Percent of     Percent of    Average Error
                              Recipients     Dollars       per Recipient
Program                       with Error     in Error      with Error
Pell                          0.4%           0.3%          $  875
Campus-Based                  0.3%           <0.05%        $  664
Stafford Loan
(overcertification only)      0.2%           0.2%          $3,397

SOURCE: Price Waterhouse (1990b).

While the Department of Education requires institutions to collect signed statements regarding illegal use of drugs and educational purpose, the certification is not of direct use to the institution because it neither prevents drug use nor ensures that the student has a valid educational purpose. Since regulations such as these are driven by statute, Congress must take action if changes are to occur. Under the current system, institutions bear a considerable burden in obtaining and storing this type of certification. There is, however, no reason why the depository cannot be changed so that a centralized process developed by the Department of Education can take on an equitable amount of responsibility for federal requirements not related to the education of the student. (This action is discussed in Chapter 9.)

In an effort to improve the existing system along the lines suggested above, Congress has made some progress in initiating changes. The 1992 reauthorization of the Higher Education Act calls for increased utilization of verification mechanisms in which student statements and supporting documentation can be verified through record matching, by using either an automated or other system. These provisions include requiring that data base matches with the Selective Service be made and that the Social Security number of all aid recipients be verified. With this advancement toward electronic data transfer of student financial aid information, the burden imposed by the collection of student certifications could be greatly reduced and the value returned for these activities could be increased.
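The data base matches called for in the 1992 reauthorization can be thought of as simple joins between the applicant file and an external registry. The sketch below uses stand-in dictionaries for the registries; the record layouts and field names are assumptions for illustration, not the actual Selective Service or Social Security interfaces.

```python
# Illustrative record match: stand-in registries, not real agency interfaces.

def match_applicants(applicants, ssn_registry, selective_service):
    """Flag applicants whose certifications cannot be confirmed
    by matching against external records."""
    flagged = []
    for a in applicants:
        if ssn_registry.get(a["ssn"]) != a["name"]:
            flagged.append((a["ssn"], "SSN/name mismatch"))
        elif a["requires_registration"] and a["ssn"] not in selective_service:
            flagged.append((a["ssn"], "no Selective Service record"))
    return flagged

applicants = [
    {"ssn": "111", "name": "A. Smith", "requires_registration": True},
    {"ssn": "222", "name": "B. Jones", "requires_registration": True},
]
ssn_registry = {"111": "A. Smith", "222": "B. Jones"}
selective_service = {"111"}
print(match_applicants(applicants, ssn_registry, selective_service))
# [('222', 'no Selective Service record')]
```

A match of this kind replaces a signed paper statement with a verifiable lookup, shifting the burden from each institution's filing cabinets to a central process.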
In general, technical errors are an administrative problem that can be solved by obtaining the appropriate documentation. Record matching is often an easy way to obtain the documentation. When this error is corrected, most student award amounts will not change (except in cases in which an award is incorrectly given to students who already have a bachelor's degree). Although technical error is not as severe in its impact on the intent of the programs as other types of error found in Title IV aid administration, it is a regulatory violation that troubles policymakers who are locked into an enforcement/penalty approach.

Recommendation 5-11: The Department of Education should increase its efforts to remove unnecessary burden from students and institutions by further development of automated data matches whenever possible.

The panel will expand on this recommendation in Chapter 9.
