
Quality in Student Financial Aid Programs: A New Approach (1993)

Chapter: 4 Current Quality Control Procedures

Suggested Citation:"4 Current Quality Control Procedures." National Research Council. 1993. Quality in Student Financial Aid Programs: A New Approach. Washington, DC: The National Academies Press. doi: 10.17226/2226.

4 Current Quality Control Procedures

In this chapter, the panel reviews current efforts by the Department of Education to control and monitor the quality of the award and payment processes, the types of error that are uncovered and the importance of detecting them, and the burden quality control places on those in the system. For the purpose of this chapter, and in many respects in the Department of Education's historical view and its charge to the panel, "quality of the award and payment processes" is synonymous with the concept of accuracy in the award and payment to recipients of student financial aid. This narrow, "payment error" view forms the rationale for the department's current control and monitoring activities. We examine the views of other participants in the system in Chapter 5.

ERROR DEFINED

Although reducing payment error (defined below) is an important aspect of quality, particularly in programs that disburse public funds, it is only one of many dimensions of quality, as discussed in Chapter 2. Its pervasiveness, however, makes an understanding of payment error a prerequisite to an understanding of quality control, as practiced by the Department of Education.

In principle, there are two kinds of payment errors in student financial aid programs, or in fact in any program designed to dispense resources to those in need of them: (1) errors of overaward or overpayment and (2) errors of underaward or underpayment. Overpayments can be subdivided into (1) excess payments to eligible recipients and (2) all payments to ineligible recipients.

In parallel fashion, underpayments comprise (1) insufficient payments to eligible recipients and (2) the lack of payments to those mistakenly classified as ineligible.

Two further characterizations of error apply to overpayment and underpayment errors. One distinction is between substantive errors and technical errors. Substantive errors are directly associated with the provision of information that determines eligibility for student financial aid and the calculation based on that information. Technical errors occur when a legally necessary document has not been submitted or has been submitted but is missing from a student's file. The lack of such documents makes the student categorically ineligible for financial aid; the inclusion of the documents, however, may or may not render the student eligible, depending on other factors that could be examined if the file were further investigated. Technical errors include the failure to have on file documentation of satisfactory educational progress, an indication of registration for Selective Service, a statement of educational purpose, or a financial aid transcript (a report of federal aid given to the student by institutions previously attended). Certainly, an existing but temporarily misplaced document, while a possible indication of poor administrative work, is not as serious a technical error as noncompliance with a requirement, such as maintaining satisfactory educational progress.

A second distinction deals with the originating source of the error: whether it is the institution, data processor, or student. These three sources of error are the primary focus of the quality control efforts discussed in this chapter. Data processing errors are basically self-explanatory, but a few comments on institutional and student errors are warranted. Institutional error may occur in the form of failure to follow Title IV regulations or the institution's own policies for Title IV aid, even though failure to follow the latter may not violate Title IV regulations. (Some failures, termed liability errors, require reimbursing the federal government for the amount of the error.)

For their part, students may make intentional reporting errors, for which they are liable for fines or imprisonment. Alternatively, students may make unintentional errors in reporting, which, if found in verification, should result in an adjustment to their award but do not make them liable for fines or imprisonment. Still another type of student error arises from incorrect projections or estimates (e.g., federal income tax to be paid) on the application form. These do not count as errors under Title IV regulations, but they have been tabulated as error in some studies of program quality (e.g., the Integrated Quality Control Measurement Project, discussed in Chapter 5).

The differences among the various Title IV programs also lead to different ways in which errors in the analysis of student need translate into dollars misspent.

Because the Pell program involves a direct grant, the conceptualization of error in that program is the most straightforward: error is simply defined as the difference between the payment made to a student and the amount that should have been paid according to a correct need analysis. Thus, errors in Pell grant need analyses are dollars actually misspent or erroneously not spent.

On the other hand, an error in a Campus-Based program is conceptualized as the discrepancy between calculated need and need if correctly calculated with accurate data. Because funding is limited, however, actual awards are often less than the calculated need. Hence an error, as a concept, is not equivalent to actual dollars misspent or erroneously not spent.

Similarly, in the Stafford Loan Program, error is conceptualized as any mismatch between appropriate and actual certification amounts. A student, however, may choose not to borrow the full amount, so again, errors do not necessarily represent actual dollars misspent or erroneously not spent. In addition, the student may repay the loan, which limits the cost of the error to any subsidies or allowances provided by the federal government on the erroneous part of the loan.

As a final note on these errors, the errors addressed in this chapter involve, for the most part, applicants who were found eligible for an award. Recently, the Department of Education has made an effort to inspect the cases of unsuccessful applicants as well. Eligible individuals who do not apply are also a source of underpayment not addressed by current quality control activities. This is a major issue of quality in the system rather than a source of error and is discussed in Chapter 5.

THE QUALITY CONTROL PROCESS

The questions concerning the quality control process that were posed by the Department of Education in requesting this study included the following:

· How much information should be obtained from applicants, and how intensively should its accuracy be reviewed?
· What are appropriate or realistic levels of performance to be expected from participants in the financial aid system?
· What are reasonable trade-offs between performance and the burden imposed on those in the system?

Materials provided to the panel by the Department of Education's Division of Quality Assurance describe the current quality control process as a three-part effort consisting of prevention, inspection, and oversight. The department has another activity related to quality control, special sample-survey studies, which are discussed in the next chapter.

Prevention consists of activities aimed at avoiding errors. The Department of Education identifies two such activities:

· training, which is provided by the department to data entry contractors, financial aid administrators and other institutional officers, lenders, accrediting agencies, state scholarship and guaranty agencies, and others in the financial aid community; and
· verification of student data, which entails institutional review of student-submitted information and, if necessary, correction of errors.

Inspection consists of after-the-fact monitoring activities during audits and program reviews designed to determine the accuracy of program administration by the schools. Such activities are often developed in the belief that they help to ensure compliance because the possibility of penalties and/or sanctions acts as a deterrent. There are two types of inspection activities:

· Audits. Audits are typically conducted by a certified public accountant and are submitted to the Department of Education for review and approval. Audits focus on determining the reliability of institutional financial data, management controls, and compliance with requirements of participants in the federal student financial aid programs. Until the 1992 reauthorization of the Higher Education Act, participation in federal Title IV programs required that all institutions undergo, at least every two years, an audit of the Title IV financial aid programs in which they participated. The reauthorization requires annual audits.
· Program reviews. The Department of Education's regional and central office staff conduct program reviews to determine compliance with federal rules and regulations governing the student aid programs. The reviews are conducted at educational and lending institutions and guaranty agencies. (The department also requires guaranty agencies to review their largest institutions and lenders. The reviews, which are not discussed in this report, focus on compliance with the rules and regulations for guaranteed student loan, or GSL, programs. The reports must be submitted to the department for review and approval.)

Oversight consists of periodic studies of various program areas within the Department of Education. The studies are conducted by the department's Office of Inspector General (OIG) or the General Accounting Office (GAO) and focus on procedures for monitoring compliance with various requirements and the management of those activities.

The panel reviewed the award determination system and related quality control efforts using an approach that looks for potential to improve the accuracy of awards. In viewing the study this way, we identified several activities related to quality control and process improvement that were not part of the prevention, inspection, and oversight strategy defined by the Department of Education.

Using the simplified process diagram presented in Figure 3-3, we identified six distinct but interconnected activities related to control, improvement, or consequence of poor performance (see Table 4-1).

TABLE 4-1 Current Activities Related to Quality Control and Improvement in the Process of Awarding Student Financial Aid

Stage in the Process           Current Activities
Learning about the programs    Outreach activities; financial aid administrator help; area code 800 telephone information lines; feedback from involved organizations
Filling out forms              Electronic application; financial aid administrator help; renewal applications
Data entry                     Inspection sampling and reporting by data entry contractors
Data editing                   Computer flagging; Student Aid Reports generated with highlights from edits and applicant corrections
Verification                   Verification of data from a selected percentage of applicants
Retrospective activities       Audits; reviews; oversight; special sample-survey studies

In the remainder of this chapter, we review inspection activities after the student submits an application for aid. The inspection activities include the processing activities of data entry, editing, and verification and the retrospective activities of audit and review. In Chapter 5, we review the measured outcomes of the financial aid process and relate the findings to the inspection activities and to problems the applicant faces in understanding and completing the application for aid. The issue of where the burden for ensuring effective performance of the system should be placed is addressed in Chapters 5 and 9.

PROCESSING ACTIVITIES

Data Entry

Data entry is done under contract. A central processor handles the federal application form, and several data entry contractors (called multiple data entry contractors, or MDEs) each handle a separate version of the application form, which includes federal application information and state-required data.1

Entry of the application data and subsequent corrections of the data involve opening and handling mail and entering and processing data. (We also include printing and mailing the Student Aid Report, or SAR, in this category.) The amount of work during peak application periods and the turnaround time required increase the risk of errors. Error in processing is a concern because applicants might be incorrectly informed of their eligibility status, which could affect their decisions about whether to attend school and, if so, where.

The panel examined specifications, developed by the Department of Education and/or the contractors, that would indicate that extraordinary care is taken to ensure that data entry operations are accurately and efficiently accomplished from the time applications and corrections are received to the time that the SAR is produced. At every step of the process, traditional quality control inspection procedures are specified. At the receipt and review stage, for example, documents must successfully pass an initial completeness check and then a further review. A random sample of batches of applications is also selected and checked for key entry errors. At the output stage, quality of print and integrity of data are checked in a sample from each stack of printed SARs.

In its efforts to improve data processing activities, the Department of Education encourages contractors to obtain ongoing feedback from employees at different levels, such as data entry, operations, systems, and project management staff. In addition, the department requires all MDEs to submit an annual requirements analysis, a comprehensive review of all major aspects of the system, comments from applicants and institutions, and recommendations for any changes that are necessary.

To assess ongoing work performance and product quality, the MDEs have developed feedback systems. One MDE, for example, utilizes Corrective Action/Error Cause Removal Sheets, on which employees indicate and describe the existence of a problem. The MDE's quality assurance department then works with management to implement corrective action. Additionally, units specifically assigned to address quality issues collect and maintain detailed statistics on data entry quality and conduct ongoing review and evaluation of processing functions and requested changes.

The panel did not attempt to verify the contractors' strict adherence to, or the success of, the defined quality control activities. However, a General Accounting Office (1985:9) study reported that "a small-scale review that was part of the 1980-1981 error study suggested that keystroke error in entering data from application forms to the computer terminal was low."

1 As explained in more detail in Chapter 5, application forms can be obtained from the Department of Education, lenders, schools, or from one of the MDEs.

This still appears to be true; management reports consistently indicate error rates well below 1 percent for keystroke and other data handling activities. Given the extensive quality control efforts employed over the years, the panel concludes that the handling of applications and data entry are well under control. The quality control methods at data entry may be somewhat excessive, but the total cost of data entry with those quality control activities is low, especially compared with the more controversial areas of controlling student and institutional error.

Data Editing

When an application or correction arrives at the processing facility, data are edited on a very rudimentary level (e.g., completeness of key fields, such as a signature, and decisions on poor handwriting). After the data pass data entry quality control inspections, the processor transmits the data file to the central processor for more extensive computer editing. There, the Central Processing System (CPS) performs checks for consistency with data in several federal data bases (e.g., Selective Service, Immigration and Naturalization, Drug Enforcement, and defaulted federal student loan data bases) and performs edits similar to those used in many survey operations (e.g., assuming and imputing values for missing or illogical data of a noncritical nature, consistency checks among data elements, and range checks of data elements). Processing of applications failing the most critical aspects of these edits is suspended. For other edits, incorrect or "suspect" data elements are "highlighted," and an award index is computed. The data as entered from the application, highlighted data elements, and award index information are mailed to the applicant as part of the Student Aid Report.

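To make the editing step concrete, the sketch below imitates the kind of range checks, consistency checks, imputation of noncritical items, and "highlighting" of suspect fields described for the Central Processing System. It is a minimal illustration under assumed rules: the field names, bounds, and critical-field list are hypothetical stand-ins, not actual CPS edit specifications.

```python
# Illustrative sketch of survey-style edits: range checks, consistency checks,
# imputation of noncritical missing items, and "highlighting" of suspect fields.
# Field names and thresholds are hypothetical, not actual CPS rules.

CRITICAL_FIELDS = {"signature_present", "citizenship_status"}

def edit_application(app: dict) -> dict:
    """Return edit results: suspended flag, highlighted fields, imputed values."""
    highlights, imputations = [], {}

    # Range check with hypothetical bounds.
    agi = app.get("adjusted_gross_income")
    if agi is None:
        imputations["adjusted_gross_income"] = 0   # impute a noncritical missing value
    elif not (0 <= agi <= 500_000):
        highlights.append("adjusted_gross_income")

    # Consistency check between two reported items (hypothetical rule).
    tax_paid = app.get("federal_income_tax_paid", 0)
    if agi is not None and tax_paid > agi:
        highlights.append("federal_income_tax_paid")

    # Critical completeness check: suspend processing if these fail.
    suspended = any(not app.get(f) for f in CRITICAL_FIELDS)

    return {"suspended": suspended,
            "highlights": highlights,
            "imputations": imputations}

# One hypothetical application record.
print(edit_application({"signature_present": True,
                        "citizenship_status": "eligible",
                        "adjusted_gross_income": 18_000,
                        "federal_income_tax_paid": 21_000}))
# {'suspended': False, 'highlights': ['federal_income_tax_paid'], 'imputations': {}}
```

In the actual system, records that are not suspended receive a computed award index, and the highlighted elements are flagged for the applicant's review on the SAR.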
For applications originating with the central processor, SARs are mailed directly by the central processor. For applications originating with an MDE, the information needed to produce a SAR (highlighted and eligibility information) is transmitted back to the MDE. The MDE then prints the SAR and mails it to the student. There was some objection to this process in the past. For example, an MDE complained to the Department of Education about rigid rules on overly specified formats and the wastefulness of having each of the MDE organizations develop programs to print the SAR forms.

The student, whether subject to highlights of possible errors or not, is asked to review all the data on the SAR and, if corrections are needed, return the corrected SAR to the processor. The corrected data are re-edited and a new SAR is produced. Almost one third of applicants must "recycle" their SAR, which delays their award determination (see Table 4-2). Following the completion of the correction cycle, the student provides the SAR to each institution to which he or she has applied for aid (recall Figure 3-3). Data from the Department of Education's Management Information System (MIS) are used to produce tables of aggregate correction rates for each data item, but use of other statistical analyses might be instructive. For example, a cross-tabulation of error items by the number of SAR cycles might indicate the items that are responsible for most of the repeated SAR cycling. Taking actions to reduce the need for SAR recycling could reduce program costs.

TABLE 4-2 Frequency of Valid Applications, by the Number of Transactions, 1990-91 Academic Year

Number of Transactions    Count of Applications    Percent of Applications
1                           4,827,615                68.4
2                           1,555,070                22.0
3                             443,479                 6.3
4                             146,134                 2.1
5                              49,739                 0.7
6                              18,819                 0.3
7                               7,469                 0.1
8-55                            6,086                 0.1
Total                       7,054,411               100.0

NOTE: Number of transactions is the sum of the number of times the Student Aid Report is returned plus one for the initial application.
SOURCE: National Computer Systems (1990-91a:5-1).

There is evidence that the SAR itself is an effective quality control device. On applications for the 1990-91 academic year, for example, more than half of the corrections of data critical to computing the award formula were made without the corrected field having been highlighted. For "federal income tax paid," a field that is influential in the award formula, the proportion of unsolicited corrections was over 80 percent (National Computer Systems, 1990-91a:4-11 to 4-14). Some of these changes may occur as a result of verification initiated by institutions (some schools do 100 percent verification anyway). Thus, further information on why the changes were needed would be useful for planning strategies to get the correct information the first time.

The central processor performs a second edit function: flagging applications for institutional verification. Verification activities are discussed next.

Verification

Verification, formerly called validation, by the institution of applicants' reported data items is the Department of Education's primary tool in its efforts to control applicant error. Each school is required to verify key elements of the student record for all records flagged by the central processor (or by the school's own error-prone profile in the case of the limited number of institutions participating in the quality control project discussed in Chapter 8). Additionally, whether the student's file is selected for verification or not, the institution is responsible for resolving any conflicting data it may contain (e.g., between an unsolicited tax return and the federal record). Corrections, if made, must be reported to the central processor (either through the mailed SAR process or electronically at the institution).

Adding to the complexity of the verification process is the structure that requires the process to occur at each institution to which the student applied for aid. The panel was informed that an institution initiating data corrections based on information from its file frequently finds that the data are then changed by the student or another institution, and the resultant central processing data do not match the institution's record.

One of the burdens of this prolonged SAR process is that with each change the student's Pell award may have to be recalculated and, if adjusted, the interdependencies among the various aid programs may cause changes to any Campus-Based or loan determinations that are part of the student's aid package. For example, for each dollar of Pell correction in the case of a fully awarded student, a Stafford loan dollar often must be changed and the Stafford loan certification process begun again. Data on the Stafford loan must then be revised by the school, the lender, and the guaranty agency.

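The dollar-for-dollar ripple just described follows from the constraint that a fully packaged student's total aid cannot exceed computed need. The sketch below illustrates the arithmetic with hypothetical figures and a deliberately simplified packaging rule; it is not the federal need-analysis or packaging methodology.

```python
# Simplified illustration of why a Pell correction can force a Stafford change
# for a fully awarded (fully packaged) student. All figures, including the
# loan limit, are hypothetical; real need analysis and loan limits are more complex.

def package(cost_of_attendance, efc, pell_grant, stafford_limit=2_625):
    """Fill remaining need with a Stafford loan, up to a hypothetical limit."""
    need = max(cost_of_attendance - efc, 0)
    remaining_need = max(need - pell_grant, 0)
    stafford = min(remaining_need, stafford_limit)
    return {"need": need, "pell": pell_grant, "stafford": stafford}

before = package(cost_of_attendance=6_000, efc=2_000, pell_grant=2_200)
# A verification correction lowers the Pell grant by $300; each dollar removed
# from Pell reappears as a dollar of Stafford certification, so the loan must
# be re-certified and revised by the school, the lender, and the guaranty agency.
after = package(cost_of_attendance=6_000, efc=2_000, pell_grant=1_900)

print(before)  # {'need': 4000, 'pell': 2200, 'stafford': 1800}
print(after)   # {'need': 4000, 'pell': 1900, 'stafford': 2100}
```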
There is little year-to-year comparison of an applicant's information in the student financial aid system unless instituted by the school, in which case it also becomes liable for errors in reconciliation when using the prior data. The need to make quick decisions concerning the award, changing family and student incomes, and the increasing degree to which students move among institutions of higher education are likely barriers to year-to-year data comparisons, which might otherwise lead to increased veracity. Although the verification design, at one time, reportedly called for some applicant records to be purposely reselected for verification in successive years, no formal reports on studies of the data were made available to the panel. Also, since only about 50 percent of applicants reapply for aid the next year, the ability to make good verification decisions on first-time applicants is important.

Verification activities follow a basic cycle for each program year, which includes the following steps:

· The Department of Education develops strategies for selecting applications thought to be more likely than others to be in error.
· The central processor compares each application with the verification selection criteria at the time the application first enters the system (recall that some initial edits lead to the application being rejected).
· Institutions verify applicant data for the selected applications. (Rules concerning the maximum percentage of applicants an institution must verify, the data items that must be verified, and acceptable documentation for data item verification have varied over the years. By federal rules issued prior to reauthorization, the institution need not verify more than 30 percent of applications. Reauthorization, however, allows the Secretary of Education to mandate verification of all applications.)
· Institutions report data changed in the verification process to the central processor for recalculation of the award formula and for creation of the analytic data sets used in the Management Information System. (The data are used in the analysis to create the selection strategy for the following year.)

In essence, verification is reapplication for financial aid with supporting documentation.

The institutional verification process has been the subject of considerable research expenditure by the Department of Education since 1980. From the earliest reports through the most recent, the approach has been criticized. (A list of the reports is provided in the next chapter, where the panel summarizes data from the reports.) Major criticisms include the following:

· The cost-benefit of the approach is questionable.
· Unfair burdens are placed on the academic institutions.
· The timing of and changes to awards create difficulties.
· The approach may possibly unfairly target certain groups.
· The verification data are also prone to error.

Although not a focus of the reports, the panel considers the impact of reapplying on the applicant an additional criticism.

The Department of Education describes the current verification system as an attempt to balance error reduction and the burden imposed on institutions. Efforts to move toward verification of all applicants have been tempered by institutional lobbying that sought to limit the burden that institutions must endure to correct errors not of their making. As a result, the 30 percent of valid applicant records that are selected for verification are chosen through a complex sequence of statistical procedures (see the next section) intended to target the most error-prone applicants.

The verification system has been studied and modified over the years, but the panel could not find evidence that major advances have been made in responding to the major criticisms of the system.

For example, the cost-benefit of the system remains a question. While the MIS estimates the additional error removed by each selection criterion, there are no current estimates of the overall cost of the system, including resources used by the Department of Education and the institutions and the applicant's time. The General Accounting Office (1985) estimated the 1982-83 cost to institutions as slightly more than the dollars of award error eliminated by the verification process. The GAO study did not measure the potential deterrent effect that verification may have on student or institutional error, nor did any other studies reviewed by the panel. But a recent quality control study (Price Waterhouse, 1991) shows little difference in the amount of final award error (based on a "verification" conducted during the study) among applications selected by the Department of Education's verification strategy, those selected for verification by institutions, and those not verified. Thus, the SAR process may provide most of the deterrence, and what appear to be marginal gains at best may result from verification activities. (Besides, 50 percent of applicants are first-time entrants to the process and would likely be much less aware of the verification possibility than a taxpayer is of an Internal Revenue Service, or IRS, audit.)

Going further, several studies suggest a return to the original concept that verification be done centrally and not at the institutions. Verification requires an enormous effort from nearly everyone involved. The institutions must perform this unwanted task, often duplicating efforts for students applying to several institutions. The Department of Education has to use a contractor to help develop efficient selection criteria and conduct studies of verification efforts, and it must monitor the criteria for fairness, react to audits, and conduct reviews of the institutions to determine compliance.

The panel, recognizing the costly consequences of verification activities, devoted considerable attention to verification. Panel members, staff, and consultants visited with Department of Education staff and contractors involved in verification and reviewed several contracted analyses and analysis plans. The panel focused its initial activities on assessing the efficiency of the verification selection methodology and the burdens verification imposes.

Selection Methodology

The panel was interested in the statistical underpinnings of the verification selection strategy and observed the following. The analysis that leads to the criteria for selecting applications and the creation of computer programs that select the applicants are carried out under contract. The contractor provides detailed plans for conducting the activities in accordance with a very comprehensive, long-term analysis plan that was developed in 1986. That plan recognized the importance of such issues as timing, the need to account for student corrections and actual payments, biases in using data from past years to develop current selection criteria, and the need for information on ineligible applicants.

Commendable progress in many of these areas is evident in the system. However, the panel's evaluation was hampered by a lack of complete documentation of the statistical procedures used to create selection criteria, which forced the panel to rely on verbal descriptions provided by the Department of Education and its contractors. Some detail on the methodology was presented in a report prepared for the panel by Carr and Sutton (1992), but limited quantitative information was available to assess the admittedly ad hoc procedures. The approach to determining the criteria can be considered a three-part model, as follows:

1. "Suspect" groups of applicants are defined based on selected responses in categorical data and ranges in other data supplied by the applicant on the initially accepted form. Each group generally constitutes less than 2 percent of the total applicant population. The groups are created (that is, the categories and ranges are selected) on the assumption that they are more error prone than the applicant population in general. Several methods are used to define groups. Groups examined in the past and found to have relatively high error rates, logical groups based on program changes or other consistent ideas, and any other groups identified using statistical methods, such as discriminant analysis, are all considered candidates for use. The ranges and categories that statistically define or characterize a group, say income above a certain value and independent status, form what is called a criterion.

2. A measure of effectiveness is computed for each criterion. The effectiveness of a criterion is measured using data from two control groups that were randomly selected from the prior year: a "verified" and a "not to be verified" control group. As the names imply, these groups contain records that were specifically chosen to be verified or excluded from verification, respectively. Using the verified control group, an estimate is made of the "error" found by verification within the records that meet each criterion. This estimate is the difference in the computation of the award formula using original data and the verified data. Using the "not to be verified" group, an estimate is made of the self-corrections during SAR transactions by computing the difference in the award formula using original data and the final SAR transaction data. A differencing between the estimates of the two control groups for each criterion removes the effect of self-correction by the applicant in SAR processing, which provides a measure of verification-induced corrections.2

2 Note that this is not an exact measure of error removed because verification data are subject to measurement error, but it is a reasonable proxy.

3. The criteria are then rank ordered based on the effectiveness measure. The criteria selected for use in the current year are selected from the rank ordering, beginning with the criterion with the highest ranking and ending when the percentage of the total population to be verified by the criteria reaches a preset cutoff (currently 30 percent). Any applicant with initial data meeting the definition of a criterion selected for use is flagged for verification, and the applicant is so informed on the returned SAR.

At first glance, this ad hoc approach would appear to be better replaced by logistic regression or other techniques that would directly identify individual applicants at high risk of error. But two factors affect the methodology to be used to identify error-prone applicants. First, records to be verified must be selected based on the data in the initial application. This is necessary because the first SAR is often the SAR sent to the school, and the SAR indicates to the school which applicants are to be verified. Logistic regression could certainly do this, but it is the second factor that led to the ad hoc approach. The second factor is that applicants often correct much of their initial error by returning a corrected SAR. This self-correction must be considered; otherwise, the selection technique could inefficiently pick cases that would self-correct anyway. Records selected early do not allow analysts to distinguish between self-corrections and verification-driven changes. The system developers are to be commended for their creativity in setting up the control group of applicants held immune from selection for verification, thus allowing measures of applicant self-correction (returned SARs). Still, logistic regression might provide an efficient and more straightforward methodology if some items that are often corrected in the SAR process were excluded from the analysis (e.g., income items reported before the tax form is filed). The point is that the relevant measures of error corrected in verification are those remaining after the SAR corrections. To the extent that precorrection errors and post-correction errors differ in magnitude and source, a methodology that measures error based on pre- rather than post-SAR data is flawed. Also, a major shortcoming of the logistic regression method is that it identifies applicants with a larger probability of error, not those with a larger magnitude of error.

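The rank-and-cutoff logic of the three-part model can be summarized in a short sketch. The code below is an illustrative reconstruction, not the contractor's actual selection programs: the criterion definitions, record layout, and award-index proxy are hypothetical, and the effectiveness measure follows the differencing of the "verified" and "not to be verified" control groups described in step 2.

```python
# Illustrative reconstruction of criterion ranking with a 30 percent cutoff.
# Criterion definitions, record fields, and the award computation are
# hypothetical stand-ins, not the Department of Education's actual rules.

def criterion_effectiveness(criterion, verified_ctrl, not_verified_ctrl, award):
    """Average verification-induced change in the award for records meeting a criterion.

    verified_ctrl records carry (initial, verified) data; not_verified_ctrl records
    carry (initial, final_sar) data. Differencing the two averages nets out
    applicant self-correction during SAR processing.
    """
    def mean_change(records, corrected_key):
        hits = [abs(award(r["initial"]) - award(r[corrected_key]))
                for r in records if criterion(r["initial"])]
        return sum(hits) / len(hits) if hits else 0.0
    return (mean_change(verified_ctrl, "verified")
            - mean_change(not_verified_ctrl, "final_sar"))

def select_criteria(criteria, population, effectiveness, cutoff=0.30):
    """Take criteria in descending effectiveness until the flagged share hits the cutoff."""
    ranked = sorted(criteria, key=effectiveness, reverse=True)
    chosen, flagged = [], set()
    for c in ranked:
        if len(flagged) / len(population) >= cutoff:
            break
        chosen.append(c)
        flagged |= {i for i, rec in enumerate(population) if c(rec["initial"])}
    return chosen, flagged

# Toy demonstration with two hypothetical criteria and a linear award-index proxy.
def award(d):
    return max(0, 2_400 - d["income"] / 20)

def high_income(d):
    return d["income"] > 30_000

def independent_status(d):
    return d["independent"]

verified_ctrl = [{"initial": {"income": 35_000, "independent": False},
                  "verified": {"income": 42_000, "independent": False}}]
not_verified_ctrl = [{"initial": {"income": 34_000, "independent": False},
                      "final_sar": {"income": 35_000, "independent": False}}]
population = [{"initial": {"income": 31_000, "independent": False}},
              {"initial": {"income": 12_000, "independent": True}}]

eff = lambda c: criterion_effectiveness(c, verified_ctrl, not_verified_ctrl, award)
selected, flagged = select_criteria([high_income, independent_status], population, eff)
print([c.__name__ for c in selected], flagged)   # ['high_income'] {0}
```

In the operating system described above, the chosen criteria are applied to each application's initial data, and flagged applicants are informed on the returned SAR.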
Reports produced by the MIS provide indications about the interaction and performance of verification and SAR processes. The data made available to the panel, however, provided little direct information on the distribution of errors "found" using the criterion and control groups. Additional analysis of the data indicated that only about 14 percent of applicants correct data that result in a change to the award calculation, which implies that a few large errors and not many average-sized errors dominate the average. The similarity of averages among the control groups suggests that many applicants self-correct to an almost error-free state during SAR processes. That is, the SAR process may have corrected most of the errors in the initial application data, and it may be a very effective quality control tool. The control groups could be used as an "embedded experimental design" to assess the value of highlighting strategies. If such experiments lead to improvements in the data, the SAR process could be even more effective. The 1986 MIS management plan indicated that highlighting and criterion methods were to be studied for their influence on correction behaviors and their ability to detect errors. It was not obvious to the panel, however, that the Department of Education supported an extensive effort on those activities by its contractors. Similar criticisms about the lack of knowledge about the effectiveness of those activities were expressed by the General Accounting Office (1985).

Tabulations produced by the MIS indicated that from 1986 to 1989 verification of randomly selected applicants would "find" more than half of the error "found" in the applications selected by criteria. The additional error removed was, on average, only about 70 points (roughly dollars) in the computation of the award index across all records studied during those years. Another MIS output gave evidence that large verification-induced changes were difficult to isolate. Only a few criterion groups had average verification changes that altered the award computation by more than 30 percent, and, combined, those groups contained only 2 percent of the applicants.

Some methodological improvements might be possible in defining the criterion groups. For example, are there more efficient groupings of applicants that would jointly maximize the post-SAR error found and minimize the selection of correct applications? The contractor suggested that incorporating more sophisticated research on tax code relationships would be worth investigating. Other methods mentioned, such as discriminant analysis and procedures like those used by the IRS (Hiniker, 1987), may not perform well when used with mixtures of categorical and nonnormally distributed data like these. Looking for multivariate outliers within categorical data cells might be worth study, as well as nonparametric methods such as classification and regression trees (CART), as proposed by Carr and Sutton (1992). Finally, an alternative prediction hierarchy might be used to classify records into groups that would self-correct versus those that would not. Predicting records in need of verification within each of those groups might then be more accurate.

Although some cases of fraud or abuse undoubtedly will arise in a program of this size, none of the studies the panel reviewed indicated any widespread problems of fraud by students. The panel recognizes that the Department of Education believes verification is necessary to fulfill its congressional mandate, yet the panel concludes that the verification process as currently used is unfairly burdensome for the overwhelming majority of well-intentioned people working within the system.

In using any other technique, the emphasis should be to move from the current focus on catching mistakes to also providing information for future preventive actions.

Recommendation 4-1: A research effort should be initiated to assist the Department of Education's contractor in developing better criteria for selecting records to be verified and, more important, identifying opportunities for earlier removal of errors through instructions, form improvements, and SAR highlighting strategies. Current and future changes to the verification selection methodology should be carefully documented.

A final comment on existing verification methods is needed. The 1992 reauthorization made several important changes to the data structure. Several items associated with wealth that were error prone in the past have been removed from the award formula and the application, either entirely or by raising the income boundary for the "simplified" needs test from $15,000 to $50,000. Unlike IRS audit-scoring methods, which make use of external data such as employer- and bank-transmitted records, or federal benefit programs that require up-front verification of critical data items (see discussion in Chapter 7), the Department of Education's verification selection methods rely on the internal consistency of the applicant's reported data. With fewer data items being collected, the criteria development procedures will most likely be less effective in finding error in the remaining data items. Regardless of the reauthorization changes, the panel believes that the current verification approach is inefficient, and it builds a case for redesign of the verification approach in Chapter 9.

A Further Role of the Financial Aid Administrator

To this point, the quality control activities described have been mechanical in nature. At times, circumstances can be expected that are rare or that occur after the application process has started. In such cases, the institution's financial aid administrator must, for certain circumstances, and may, for other circumstances, by federal law, exercise professional judgment in determining and documenting when the circumstances for an individual student warrant variation from the law or regulations. Most typical are nondiscretionary, recent changes in a family's financial circumstances or structure, such as death of a parent, loss of a job, or legal action to sever the family's relationship with the student. Additional documentation of such changes is required for the Central Processing System and for the institution's own files. The exercise of professional judgment occurs in a small percentage of cases, but it is highly labor intensive and subject to human error and varying interpretation.

Thus, the financial aid administrator stands in the middle of the processes, attempting to serve as the student's advocate to ensure the maximum legitimate award while appearing to the student to be a barrier. The penalty for error when making these decisions is seen by the administrator not only as a financial liability for the aid pool of that cohort of students but also as a potential mortgage against the aid of future cohorts of students. The capacity of financial aid administrators to make such subjectively determined changes has been strongly and increasingly endorsed in each recent reauthorization of the Higher Education Act. Yet this activity may increase the potential for error (as discussed below) and the liabilities assessed during audit and review.

AUDITS AND PROGRAM REVIEWS

The Department of Education has responsibility for ensuring that the approximately 8,000 participating postsecondary institutions administer student financial assistance programs in compliance with law and regulations. The department executes this responsibility through two major retrospective activities:

· requiring compliance audits by an independent auditor; and
· conducting program reviews through its 10 regional offices.

The Title IV student financial assistance audit and program review processes are designed to determine the accuracy of the administration of student financial aid programs by an institution. Specifically, the Title IV audit objectives are to measure the reliability of an institution's financial data, the adequacy of internal control systems, and compliance with program regulations. The objectives of a program review of an institution are to determine compliance with program regulations and to evaluate financial and administrative capabilities. Until recently, program reviewers also attempted to provide technical assistance. The operations of these retrospective activities as they relate to the problem of error in the delivery of student financial aid are described below.

Audits

The objectives of an audit are to provide the Department of Education with information to determine whether an institution has

· provided financial data and reports that can be relied upon;
· internal controls, structure, policies, and procedures in place to provide reasonable assurance that it is managing student financial assistance programs in compliance with applicable laws and regulations; and
· complied with the terms and conditions of federal awards and, thus, has made expenditures of federal funds that are proper and supportable.

Institutions identify eligible audit firms and contract for their services. The Audit Guide prepared by the U.S. Department of Education (1990a) describes the audit and provides procedural guidance.3

Prior to the 1992 reauthorization, audits were required at least every two years (annual compliance and financial audits are now required). Audits must be conducted by an auditor who meets standards of independence specified by the GAO. The GAO's audit standards also require that auditors engaged in auditing Title IV programs have sufficient understanding of the institution's internal control structure and federal compliance requirements to fulfill the task adequately.

The Audit Guide specifies mandatory audit areas and the minimum number of cases to be tested for each area. For example, the auditor is generally required to select at random and test a minimum of 50 cases (25 percent if the total population is less than 200) from appropriate populations to determine procedural correctness in the following areas:

· determining student eligibility,
· coordinating student aid programs,
· calculating and disbursing awards,
· maintaining student files,
· certifying and disbursing loans, and
· calculating refunds.

The auditor must select sufficient samples during an on-site visit to render the required opinions on each program, even if that necessitates selecting larger samples than the specified minimums. The auditor must report as "findings" all instances of noncompliance, regardless of their materiality. The audit process also requires a review of prior audit findings. At the conclusion of the visit, the auditor prepares a report that details every finding and renders opinions on the institution's overall internal control program and compliance with federal law and regulations. The auditor discusses the report with school officials, and the report is then sent to the Department of Education.

3 Authority for audit of the Title IV student financial assistance programs is contained in Section 487(c)(1) of the Higher Education Act of 1965, as amended by Public Law 94-482, Part D, 133. Anderson (Appendix F) describes several forms of audit that apply, depending on the type of institution. For ease of presentation, we focus on the audits contracted for by the schools.

Audit Results

In response to the panel's request, the Department of Education supplied the following list of the areas in which audit findings most frequently occur:

· ability-to-benefit determination
· missing financial aid transcript
· entrance/exit counseling
· late refunds
· student not making satisfactory academic progress
· verification of student-supplied data
· excess cash
· improper Pell disbursements
· excessive loan default rate
· missing certification statements (Selective Service registration, drug abuse nondefault affidavits, educational purpose)
· dropout/withdrawal rate excessive
· late lender notification
· Perkins loan notes unsigned/missing/improper

As a result of the audit findings, institutions may be liable for the amount of erroneous awards and for administrative penalties, such as fines, limitations on Title IV programs, suspensions, or termination of their participation in Title IV programs. Each institution must submit to the Department of Education a corrective action plan detailing the process by which it will correct any deficiencies or findings uncovered in the audit. Material findings or nonsubmission of the audit may result in further inspection activities, such as a program review by the Department of Education student financial aid staff or an audit by the Department of Education's Office of the Inspector General, and punitive actions, such as limitations on, suspension of, or elimination from participation.

According to the Department of Education, in recent years slightly over 3,000 institutional audits have been completed each year. Typically, 40 to 50 percent of the audits yield significant findings. Penalty assessments in the period 1984 to 1990 averaged nearly $11,000 per audit (as best estimated from various data provided to the panel). The department also reported that very little of the penalty assessed is actually collected by the time all appeals and negotiations for informal fines (see below) have been concluded, and it indicated that this is due to an insufficient number of staff.

A recent OIG internal report (U.S. Department of Education, 1992) found widespread problems with the effectiveness of audit programs, including the following:

· Almost half of the schools did not submit the required audit reports during the period 1981-88. (Some departmental staff consider this finding to be inaccurate and believe the rate was lower. Also, there has been more effort during the past two years to identify schools that are late in filing reports.)
· While there is a process for informal agreement between the department and the school on fines for significant program violations, recurring audit deficiencies, or overdue audits, in some instances the fines were not proposed or were for such nominal amounts that they would not deter further violation. Moreover, the department was lax in collecting informal fines.
· The department accepted unaudited documentation provided by the auditee in violation of Office of Management and Budget (OMB) rules. (Departmental staff indicated that they often request that the documentation be verified in the "prior audit" section of the next required nonfederal audit.)
· Error rates and amounts found in the audit sample should have been expanded to obtain an estimate of error in all cases, using the appropriate sampling fraction. (Note that total file reviews are required when evidence indicates excessive problems. Departmental staff indicated that a 10 percent or greater error rate in the sample would trigger a full-file inspection.)

The OIG report attributed these weaknesses to "multiple causes including the lack of a reliable database to monitor compliance with the audit requirements, a lack of reconciliation procedures, and inadequate guidance to the staff" (p. 6).

The panel also noted that assessments made for errors found in the auditor's sample were not expanded by the sampling fraction used in selecting the sample. This favors institutions with nontrivial compliance problems, such as poor academic progress by aid recipients or inappropriate handling of funds. On the other hand, some of the items examined do not lead to serious administrative problems (e.g., misplaced certification statements), which would make the department's strict intolerance of any error, a zero defect standard, especially unfair if findings were expanded to reflect all cases.

A fair solution to assessments based on audit findings might parallel the actions recommended by the NAS panel that reviewed quality control in welfare systems (Kramer, 1988). Such a solution would require obtaining data from the audits to estimate the distribution of institutional error rates.4 This estimated distribution, reflecting the current capabilities of the institutions, becomes the basis for determining the cutoff points for unusually low or unusually high error rates. Institutions found to have unusually high error rates would be subject to penalties. Further, rewards might be considered for those institutions exhibiting exceptionally low error rates.

4 The distribution could be estimated using an iterative process of density estimation that identifies and adjusts outliers during each iteration.

Institutions found to have unusually high error rates would be subject to penalties. Further, rewards might be considered for those institutions exhibiting exceptionally low error rates. The panel has similar concerns with program reviews (discussed later in this chapter).

Recommendation 4-2: The Department of Education should review the use of sample findings in audits. Adequate expansions of sample findings are needed, but their use must also reflect the seriousness of the error and make allowance for tolerance of reasonable error levels. While the panel will not directly suggest an acceptable error level, the choice of level should be based on the distribution of errors by all or similar types of institutions.

Risk-Based and Internally Based Audits

The panel heard many reports of problems with audit activities during its open meetings and in private discussions with departmental staff and financial aid administrators. Those reports led it to commission a study by Urton Anderson, an expert in the field of audits (see paper included as Appendix F). Anderson reinforced the panel's concerns about the ineffectiveness of the audit programs and made a number of suggestions for improvement. For example, he suggested that the Department of Education be encouraged to follow the current movement by the auditing community toward a risk-based approach to auditing compliance.

Risk-based auditing refers to the practice of systematically distinguishing audit areas/locations by factors that predict the likelihood that the area/location has problems and then allocating auditing resources across audit areas/locations based on those predictions (i.e., more audit resources are devoted to places determined to have the higher likelihood of problems). Problems may be variously defined depending on the objective of the audit (e.g., material error in the financial statements or material compliance). Risk-based auditing should be applied to the selection of institutions for audit and to the allocation of audit resources across areas within the audit of each institution. Progress in the latter has already been made with the use in some independent audits of the compliance risk model presented in Statement of Auditing Standards (SAS) No. 68 (American Institute of Certified Public Accountants, 1992). However, such an approach is inconsistent with the current approach sanctioned by the Department of Education and OMB guidelines and by the 1992 reauthorization legislation, which requires annual audits. Congress and the oversight authorities at the Department of Education and OMB should revise those guidelines and legislation to be consistent with a risk-based approach.
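To illustrate the kind of allocation rule a risk-based approach implies, the following sketch is hypothetical and is not a model endorsed by Anderson, the panel, or the department: it scores audit areas on a few assumed risk factors and divides a fixed budget of audit hours in proportion to the scores. The factors, weights, and hours are invented for illustration.

    # Hypothetical sketch of risk-based allocation of audit resources.
    # The risk factors, weights, and hours are invented; they are not drawn
    # from the report or from SAS No. 68.

    from dataclasses import dataclass


    @dataclass
    class AuditArea:
        name: str
        prior_findings: int      # findings in the most recent audit
        dollar_volume: float     # Title IV dollars flowing through the area
        staff_turnover: float    # fraction of aid-office staff new this year


    def risk_score(area: AuditArea) -> float:
        """Combine factors assumed to predict problems into one score.
        The weights are arbitrary placeholders."""
        return (2.0 * area.prior_findings
                + 0.000001 * area.dollar_volume
                + 5.0 * area.staff_turnover)


    def allocate_hours(areas, total_hours):
        """Divide a fixed budget of audit hours in proportion to risk scores."""
        scores = {a.name: risk_score(a) for a in areas}
        total = sum(scores.values()) or 1.0
        return {name: total_hours * s / total for name, s in scores.items()}


    if __name__ == "__main__":
        areas = [
            AuditArea("verification", prior_findings=6, dollar_volume=2_500_000, staff_turnover=0.4),
            AuditArea("refunds", prior_findings=2, dollar_volume=900_000, staff_turnover=0.1),
            AuditArea("satisfactory progress", prior_findings=0, dollar_volume=1_200_000, staff_turnover=0.0),
        ]
        for name, hours in allocate_hours(areas, total_hours=80).items():
            print(f"{name}: {hours:.1f} hours")

The same proportional rule could, in principle, be applied one level up, allocating reviewer or auditor time across institutions rather than across areas within one institution.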

The SAS 68 compliance risk model, however, retains a focus on attestation of compliance (measurement of the level of compliance) rather than on the use of audit resources to improve compliance. Risk models that more effectively identify potential problem areas directly and promote compliance could be developed using techniques such as those employed in internal audit functions to allocate audit resources (Siers and Blyskal, 1987). Indeed, the Department of Education is currently using a direct risk approach based on logical risk assessments to allocate program review resources (discussed in the next section). However, the department has used it only to identify problem areas and has not taken full advantage of its potential to promote compliance by including a strategic component (Anderson and Young, 1988a, b). A direct risk approach could also be used in independent audits that cover multiple administrative units, as is frequently the case for public institutions (e.g., a single audit covers all the public institutions within a state).

Anderson also indicated that other models for ensuring compliance have recently been developed. One such model that has shown promise is the "certification" model, an approach that has been applied in the European Economic Community's "Eco-audit" scheme and by internal auditors in organizations implementing total quality management programs. The approach consists of relying on those people actually involved in the process to take more responsibility for assuring the quality of the process. Critical points in the process are "audited," or certified, by people who are independent of the process but not necessarily independent of the organization. For example, in the European Eco-audit scheme, management at each site participating in the scheme prepares an environmental statement. The statement includes a description of significant environmental issues, a summary of the data on pollutant emissions and the like, a presentation of the organization's environmental policies and specific objectives for the particular site, and an evaluation of the environmental performance of the protection system implemented at the site. The statement is then verified by an accredited environmental auditor (i.e., one who meets specifically outlined standards of training). This auditor can be an employee of the organization (if the organization has set up an appropriate system to provide independence from the activity audited, such as an internal audit function) or an external auditor.

The important modification in this approach is that the emphasis regarding the "auditor" is on being qualified to conduct the audit rather than on organizational independence. One of the frequent and very troubling problems encountered with the independent audit of student financial aid programs has been that independent auditors do not understand program compliance requirements or financial aid operations. Requiring more specific accreditation of the auditor improves the quality of the auditing process. Allowing the audit to be conducted by direct employees of the organization who are independent of the activity can provide for further expertise of the auditor through the auditor's knowledge of the institution and the industry.

Appropriate actions would have to be taken when institutions or auditors violate the independence or objectivity that is expected of these audits.

In Chapter 8, the panel discusses the Department of Education's Institutional Quality Control Pilot Project, an example of applying the certification approach to student financial aid programs. In the project's "sample certification" requirement, a third party, such as an institutional auditor external to the financial aid office, certifies that the sample has been drawn according to the required procedure. The panel views extension of this technique to other compliance areas, such as using certified auditors, as productive.

The panel is concerned about many quality-related aspects of the audit program, including the recent backlog of unresolved audits, the system of fines, the accuracy of audit findings, the burden placed on institutions, the inability to provide recommendations to schools for positive changes, and the apparent lack of information about the effectiveness of audits as a deterrent to improper program administration. Using a risk-based approach to selecting institutions for compliance audit (as is now done for program reviews) appears to offer increased efficiency in the use of audit resources, particularly if the responsibility for conducting the audits is centralized at the state or national level.

Many criticisms of audits also apply to program reviews, and the overlap of purpose and function between audits and reviews should be addressed. Thus, the panel has deferred making a recommendation until after the discussions that follow on program reviews and OIG activities.

Program Reviews

The Department of Education describes the role of program reviews as identifying and addressing, through assessed liabilities, regulatory violations. Authority for the program review function of the Title IV student financial assistance programs is contained in the Higher Education Act. Program reviews of postsecondary institutions, lenders, and guaranty agencies are generally conducted on site by Department of Education staff from one of the 10 regional Student Financial Assistance offices. Typically, a reviewer is at a school from three to five days. With the department's current emphasis on fewer but more thorough reviews, however, on-site stays are more frequently longer than one week. Generally, reviews are conducted by one reviewer, although two or more often work together. The objectives of program review are to (1) determine an institution's (postsecondary school, lender, or guaranty agency) compliance with Title IV program regulations and its capability to perform financial and administrative processes and (2) provide technical assistance to individual schools.

TABLE 4-3 Program Reviews, by Type, Fiscal Years 1985-92

Fiscal Year   School Reviews   Lender Reviews   Guaranty Agency Reviews   Total
1985               763              675                  10              1,488
1986               417              463                  10                890
1987               372              239                   8                619
1988               677              313                  10                969
1989               820              524                  13              1,357
1990             1,139              802                  31              1,972
1991             1,016              728                  36              1,776
1992               699              500*                 24*             1,124

*Estimated.
SOURCE: U.S. Department of Education, Office of Student Financial Assistance.

As noted earlier, guaranty agencies are required to review their high-volume lenders and institutions, potentially adding another level of inspection and burden on schools. (We note that there is some effort to conduct concurrent reviews to avoid duplication.)

Frequency of Reviews and Selection Criteria

Generally, reviews of closed schools and precertification reviews are given first priority.5 Otherwise, the department's goal is to review each institution once every four years. In the past 10 years, however, a shortage of personnel has prevented the department from meeting this objective. Table 4-3 indicates the year-to-year variability in the number of reviews completed. While approximately 8,000 postsecondary institutions participate in Title IV programs, fewer than one in eight is reviewed each year.

To balance the risk associated with the longer time between reviews, the department selects most institutions for program review based on a series of deficiency factors. For the past several years, the great majority (about 80 percent) of program reviews have been conducted based on "screening" criteria, which have been updated annually. The screening criteria (see Table 4-4) are intended to reflect concerns regarding the management of student aid programs. Each criterion is weighted by assigning it a numerical value. The criteria assign the highest review priority to schools with high default rates, high student withdrawal rates, and no program review in the past four years.

5 Certification reviews and, generally, closed-school reviews are not conducted on site.

TABLE 4-4 Program Review Selection Criteria, 1993

CRITERIA (POINTS)
1. Schools with Federal Family Education Loan Program (FFELP) default rates in fiscal year 1990 of 25% and above (includes those schools with default rates that are based on 30 or more borrowers entering repayment and those schools with average default rates): 40
2. High student withdrawal rate of 33% and above:
   33-39%: 10
   40-49%: 20
   50% and above: 30
3. No program review in past 4 years: 30
4. Schools with change of ownership/recertification since January 1, 1990: 25
5. Schools new to Title IV programs since January 1, 1990: 25
6. Schools being monitored for financial capability: 20
7. Significant increases in FFELP loan volume (1988-89 and 1989-90) or Federal Pell Grant volume (1989-90 and 1990-91) based on percentages for most recent award years (the number of points assigned depends on the dollar range of the program and the percentage increase): 10, 15, or 20
8. Regional assessment (e.g., student complaints, adverse publicity): 25

NOTE: Overdue audit report was removed from the criteria in 1993.
SOURCE: U.S. Department of Education (1991b).

Overdue audits were a high-priority criterion before 1993. Within the ranking, schools are reviewed according to the resources at the disposal of each region. Since 1989, an average of 900 school reviews have been conducted each year.6 While regional staffing appears to be roughly proportional to the percentage of schools and the loan volume, staffing reportedly is not proportional to high-risk characteristics, such as schools with high default rates or the rate of schools with potential liabilities identified in reviews.

6 This number reflects reviews of all types, not necessarily those resulting from the screening process. For example, reviews of closed schools frequently do not involve a site visit and complaint-based reviews may be limited to the subject of the complaint. All are counted as school reviews.
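Read as an additive score, the Table 4-4 criteria can be applied mechanically to a school's characteristics. The sketch below is illustrative only: the point values come from the table, but the input fields, the simplified treatment of criterion 7 (which in the table depends on dollar ranges), and the example school are assumptions, not the department's scoring procedure.

    # Illustrative scoring of a hypothetical school against the Table 4-4
    # screening criteria. Point values come from the table; the input fields
    # and the example school are assumptions made for this sketch.

    def withdrawal_points(rate):
        """Points for the withdrawal-rate brackets in criterion 2."""
        if rate >= 0.50:
            return 30
        if rate >= 0.40:
            return 20
        if rate >= 0.33:
            return 10
        return 0


    def review_priority_score(school):
        score = 0
        if school["ffelp_default_rate"] >= 0.25:               # criterion 1
            score += 40
        score += withdrawal_points(school["withdrawal_rate"])  # criterion 2
        if school["years_since_last_review"] >= 4:             # criterion 3
            score += 30
        if school["ownership_change_since_1990"]:              # criterion 4
            score += 25
        if school["new_to_title_iv_since_1990"]:               # criterion 5
            score += 25
        if school["monitored_for_financial_capability"]:       # criterion 6
            score += 20
        score += school.get("loan_volume_increase_points", 0)  # criterion 7: 10, 15, or 20
        if school["regional_assessment_flag"]:                 # criterion 8
            score += 25
        return score


    if __name__ == "__main__":
        example = {
            "ffelp_default_rate": 0.31,
            "withdrawal_rate": 0.42,
            "years_since_last_review": 5,
            "ownership_change_since_1990": False,
            "new_to_title_iv_since_1990": False,
            "monitored_for_financial_capability": True,
            "loan_volume_increase_points": 15,
            "regional_assessment_flag": False,
        }
        print("review priority score:", review_priority_score(example))  # 40 + 20 + 30 + 20 + 15 = 125

Schools would then be ranked by this score within each region and reviewed in order until the region's resources are exhausted, which is essentially the practice the text describes.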

The Visit

The monitoring function of the visit is to assess the institution in terms of (a) the accuracy of its student eligibility determinations and the calculation and disbursement of awards and (b) its general administrative capability and financial responsibility. Institutions are usually informed of the pending visit and are required to provide information and materials, including student rosters and policy manuals, prior to the on-site visit. Each of the following programs is reviewed:

· Federal Pell Grant
· Federal Supplemental Educational Opportunity Grant
· Federal Perkins Loan
· Federal Work-Study
· Federal Family Education Loans

The aspects of the administration of Title IV student financial assistance programs that are subject to review include the following:

· ability to benefit
· satisfactory academic progress by students
· student eligibility
· verification of student-provided information

The reviewer also examines the disbursement of aid to student accounts and the financial accounting for Title IV program funds. At institutions with a high default rate, additional processes are required. As with audits, the program review outcomes can include corrective actions, liabilities, and administrative penalties, such as fines and limitation, suspension, or termination of Title IV programs.

The core of the program review is derived from observations based on a sample of student financial aid files selected by the reviewer. The reviewer selects 10 files from each award year being reviewed (usually two years). There is no set method for the selection. Some reviewers select random samples, others select discretionary samples, and others select a combination of the two. Reviewers are supposed to select files that represent all the Title IV programs in which the school participates. The reviewer then evaluates, through the evidence in the student and financial files, the various operations of the school to test for compliance with law and regulations. Reviewers may expand their sample if a noted deficiency indicates a problem with potentially greater frequency than indicated by its occurrence in the sample.

If an instance of noncompliance is noted, the reviewer is expected to document the deficiency using worksheets provided in the department's Program Review Guide (U.S. Department of Education, 1991b). The reviewer not only identifies errors, but also calculates the value of the errors and identifies necessary corrective actions. The calculations cover payments to ineligible recipients and overpayments and underpayments to eligible recipients. The extent of corrective action depends on the frequency of error based on the sample. It may involve correcting the individual case file in which the deficiency occurred, or it may involve a requirement that the school identify all such cases over a one- to five-year period that have the same characteristics and report the frequency of error noted and the value of such errors for that subuniverse.

The on-site review ends with an exit interview with administrative personnel from the school, during which the reviewer summarizes the findings, makes recommendations, and presents the required actions and any potential liabilities. A written report follows, usually within 30 days, which details findings, required actions, and recommendations for change. The institution then has 30 days to respond to the report and provide any documentation or information that rebuts the findings. The Department of Education then evaluates the institution's response and produces a letter to the institution, which includes the final determination for all findings and any assessed liabilities or proposed fines. This part of the resolution process may extend over several months and may involve many complications, such as extensive file reviews. The program review is closed when all assessed liabilities have been paid and the institution has sufficiently responded to required findings. Table 4-5 identifies the most frequent findings based on recent program reviews.

Based on the regional office's evaluation of monetary liabilities, the reviewer prepares data entry forms that show the codes for violations found during the review and the reviewer's assessment of the overall seriousness of the violations taken as a whole. Liabilities can be based on only the students in the sample or a total file review. Program review liabilities are entered into the data base after they are final (that is, the program review has been resolved and closed). The frequency of violations is not recorded, only the occurrence. However, depending on the type of violation(s), significant liabilities may result. Thus, one instance of a violation, such as inadequate student consumer information, at a large number of schools will appear as a high-frequency error in the data base. On the other hand, an error that occurs at 10 percent of the schools but has serious implications, such as inadequate attendance records at a clock-hour school, will appear as less significant. Similarly, an error that occurs in a large number of cases at 10 percent of the schools will appear as less significant. Thus, a data base is maintained, but it does not serve the purpose of a management tool consistent with comments in Chapter 2. That is, it is not used to identify common and special causes of error and would be difficult to use in such a way.
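The corrective-action calculation described above can also be illustrated briefly. In the sketch below, the sample, dollar amounts, projection rule, and the 10 percent full-file-review trigger (borrowed from the audit discussion earlier in this chapter) are hypothetical assumptions; the sketch does not reproduce the worksheets in the Program Review Guide.

    # Hypothetical illustration of projecting program review sample findings
    # to the subuniverse of similar cases. All values and thresholds are
    # invented; they are not the department's worksheet procedure.

    def project_findings(sample_findings, sample_size, subuniverse_size,
                         full_review_trigger=0.10):
        """Project errors observed in a reviewer's file sample to the
        subuniverse of cases with the same characteristics."""
        error_count = len(sample_findings)
        error_rate = error_count / sample_size
        avg_error_value = sum(sample_findings) / error_count if error_count else 0.0
        projected_cases = error_rate * subuniverse_size
        projected_value = projected_cases * avg_error_value
        return {
            "sample error rate": error_rate,
            "projected cases in subuniverse": projected_cases,
            "projected dollar value": projected_value,
            "full file review indicated": error_rate >= full_review_trigger,
        }


    if __name__ == "__main__":
        # Two overawards (in dollars) found among 10 files reviewed for one
        # award year, out of 400 aid recipients with the same characteristics.
        result = project_findings(sample_findings=[450.0, 300.0],
                                  sample_size=10, subuniverse_size=400)
        for key, value in result.items():
            print(f"{key}: {value}")

A projection of this kind is exactly what the panel argues should be recorded, since a data base that stores only the occurrence of a violation cannot distinguish the isolated overaward from the systemic one.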

TABLE 4-5 Top 15 Program Review Findings, by Occurrence, Fiscal Year 1991

FINDING                                                               OCCURRENCE
Verification procedures not followed/documented                            408
Financial aid transcript missing/incomplete                                371
Consumer information requirement not met                                   274
Satisfactory academic progress standards not adequate/developed/monitored  261
Guaranteed Student Loans-refunds not made/late                             236
Guaranteed Student Loans exit interview not documented                     227
Refunds-late/not made to Title IV account                                  219
Ineligible student citizenship                                             197
Excess cash balances maintained                                            167
Inconsistent information in student file                                   161
Ability to benefit-undocumented                                            145
Accounting records inadequate/not maintained                               141
Bank accounts federal funds not identified                                 136
FISAP income grid not documented                                           135
Student budgets (Pell Grant) improper                                      128

Top 15 total                                                             3,206
Total for all others                                                     3,844
Grand total                                                              7,050

SOURCE: U.S. Department of Education, Office of Student Financial Assistance.

The seriousness of problems uncovered in a program review determines whether the reviewer's report will be reviewed by central office personnel. The central office review is, reportedly, largely an edit for compliance with Title IV regulations, policy, and procedures to ensure that corrective actions are proper and consistent with national guidelines. Central office staff routinely consult with regional office staff to discuss any significant deviations from policy or procedure. The central office review does not include a review of workpapers or other objective information provided or requested on a routine basis; nor is there a standard process to verify the reviewer's assessment of seriousness.

Outcomes

In some cases, the actions of an institution indicate enough of a risk to the federal government that an immediate suspension of participation in Title IV programs is sought.

TABLE 4-6 Summary of Administrative Actions

                                               Fiscal Year
Action                              1986   1987   1988   1989   1990   1991
Terminations initiated                13     14     27     30     53    113
Terminations imposed                   5      1     10     16     18     38
Disqualifications                    N/A    N/A      2      6      7     13
Limitations/settlements                8      2      8      5     17     51
Emergency actions                      2      2      9      1      0     38
Formal fines imposed                   4      3      7     13     27     87
Informal fines imposed                43     81    106    194    215    254
Debarment/suspension                 N/A    N/A    N/A     24     58     57
Put on the reimbursement system       19     13     52    126    192    277
Total school reviews                 417    372    677    820  1,139  1,016

NOTE: N/A - not applicable.
SOURCE: U.S. Department of Education, Office of Student Financial Assistance.

This emergency action occurs when the risk outweighs due process considerations.

Given serious violation of regulations, significant liability, or the exposure of fraud and abuse, the Department of Education may fine an institution and limit, suspend, or terminate its program participation. Fines can range up to $25,000 per violation for schools and may be combined with other actions. Limitations restrict an institution's participation and sometimes are used to settle a termination action. Suspensions range up to 60 days; they are rarely used, however, because they require the same process as a termination and may take up to a year to effect. Terminations, which are effective for up to 18 months, are sought for serious program violations, failure to pay a fine, noncompliance with a limitation agreement, or failure to resolve a program review or audit finding. See Table 4-6 for a summary of administrative actions in recent years.

Data and the Targeting of Reviews

Because on-site reviews are conducted at institutions with high deficiency factors or occasionally at those going out of the program, the results may not be easily used to represent the overall problems occurring with the institutional administration of Title IV programs. Moreover, even in the institutions reviewed, the number of findings per institution is small, on average (e.g., an average of seven per institution in fiscal year 1991; see Table 4-5). Targeting of institutions for review has improved somewhat recently.

Institutional liabilities averaged over $32,000 per review in fiscal year 1992, compared with under $15,000 in fiscal year 1991, and the number of reviews declined from about 1,100 to under 700. The department views this as progress in the right direction, that is, focusing on fewer schools with larger "returns" per school. Another interpretation is that error rates have increased, but there are no procedures for making such inferences from program review data. Further, the liabilities assessed do not equal recoveries of funds. Linking recoveries and measurement of the degree of assurance that all potential assessments were found would not be useful. Nevertheless, the panel is concerned about the targeting of the reviews.

According to Department of Education staff, data management initially focused on tracking the occurrence of a review, not the findings that resulted. In more recent years, in response to requests for data on findings, data have been gathered concerning the findings of noncompliance with regulations. Yet, data continue to be anecdotal, that is, to reflect what individual reviewers said they saw at selected schools.

Although field reviews and the central office coordinating and data system are based on uncontrolled anecdotal observations, refinements in the automated system have permitted the manipulation and combination of data elements and their use to make general statements concerning the types and frequencies of errors. This has guided policy and management practices to an important extent. The panel believes that what is needed is a well-defined and well-maintained data base system that incorporates the results of program reviews in a form that management can use to guide policymaking.

From information provided by Department of Education personnel and discussions with financial aid administrators, the panel learned of the following problems with the data base associated with the program review process:

· The current design of the data base uses a cutoff sample weighted by the screening criteria. This is because neither the selection of institutions for review nor the selection of cases within an institution is based on a sampling strategy that would allow inferences about the population. While the selection methodology does help target reviews toward the most problematic institutions, the process does not result in the development of a data base from which generalizations about either institutions or cases within institutions can be made. Also, reviews do not cover all regulatory, statutory, and administrative requirements, because of time and personnel limitations. However, even if the cases and institutions were statistically sampled, the "findings" data base would be flawed because there is no control for which requirements are subject to review across regions or even within regional offices. The data base is further limited in that it contains only exceptions (violations); information about successful practices that resulted in compliance is not recorded.

· The reliability of the categorization of "error" is a cause of concern. The definitions of some errors are incomplete or unclear. Thus, the experience of the reviewers can contribute to measurement variability. There need to be more specific evidentiary requirements governing observations of error.
· No standards are in place by which to compare conclusions across regions or within regions. Reviews need not be identical in scope and process, nor in documentation. In addition, the levels of seriousness of institutional problems, by which each review is characterized, are not well defined. Although reviews with the highest problem rating go to the central office for clearance, there are no measures of the reliability of the reviewers' ratings. Nor is there assurance that reviews that were not referred to the central office were, in fact, not serious. Verification on a sample of cases would be needed.
· The reviews do not lead to a measure of the success of follow-up or oversight activities. Essentially, schools that are found to have significant problems must identify the extent of the problem through a review of all students' files and report their own error rates. (The Department of Education may require CPA certification.) As an alternative for less significant findings, at the next scheduled audit, the auditor must review the school's self-examination results and report to the Department of Education in the "prior audit" section of the audit report. Moreover, the actions taken to resolve program review findings are minimally observed by the department. The findings do not lead to a program improvement process; they just report "assessed liabilities" over and over, and future reviews are not targeted based on the findings. Most important, the focus on assessed liability may be misleading. As noted above, actual liabilities after appeals are much less than the assessed amounts. The panel did not learn of any efforts to link the denied amounts to problems in the error definitions, risk, or future oversight activities.

OIG Activities

In addition to the audits and program reviews just described, the OIG conducts and supervises audits, investigations, inspections, and other reviews of the department's programs and operations. The OIG's role is to provide leadership, coordination, and policy recommendations to promote economy, efficiency, and effectiveness; prevent fraud and abuse in the department's operations and programs; and review proposed and existing legislation and regulations governing the department's programs. The OIG has also implemented a procedure for evaluating audits performed by nonfederal auditors. This procedure includes a review of audit working papers and information sharing with some state boards of accounting.

Departmental staff indicated that for the past several years, OMB and the OIG have identified the department's student financial assistance programs as vulnerable to fraud and abuse. OIG audits, investigations, inspections, and other reviews disclose problems that involve ability to benefit and other admissions abuses; ineligible courses and course stretching; accreditation, eligibility, and certification; branch campuses and ineligible campuses; refund practices; loan due diligence; issues related to the Supplemental Loans for Students and Parent Loans for Undergraduate Students programs; and issues related to bankrupt and closed schools. In addition to recommending the recovery of funds in individual cases, the OIG makes recommendations for changes in systemic requirements and practices, which, if implemented, are intended to help prevent many of the abuses from occurring in the future.

The Panel's Comments and Recommendations on Audits and Reviews

The rules and regulations governing student eligibility for financial aid are complex. The panel questioned whether and, if so, to what extent the complexities themselves are significant sources of error. The current monitoring and compliance activities and data bases do not address this issue. They are designed to assess absolute performance with respect to the accurate administration of student financial assistance programs, to impose sanctions based on error, and to count the occurrences of error.

Audit and review activities are necessary if the Department of Education is to fulfill its responsibility to ensure that program participation is limited to those institutions that are willing and able to operate in accordance with program goals and expectations. Because factors such as changes in economic conditions and in financial aid personnel affect an institution's ability to maintain desired levels of quality, "problem" institutions will appear sporadically and must be dealt with promptly and efficiently. Audits provide reasonable promptness, especially now that they are an annual requirement. Program reviews occur relatively infrequently because current Department of Education budgetary and personnel ceilings preclude regular reviews of all institutions. Yet, the review process has the greater potential to be proactive and to provide the useful instruction and technical assistance that promote quality and help to build a sense of partnership between the institutions and the department.

The audit and program review processes used by the Department of Education to enforce quality standards are marked by considerable duplication, ineffectiveness, and wasted effort. The independent audit of schools checks for internal controls and compliance with program regulations in a way that essentially duplicates parts of the program review process.

Such checks are more effectively performed by knowledgeable reviewers and should be fully incorporated within the program review process. However, an effective review process would identify high-risk areas that could be part of the audit function.

Inspection is a deterrent only when a better job can be done. To maximize an institution's potential, the audit and review processes must be better integrated and redesigned with a sharper focus on addressing meaningful measures of quality and quality improvement rather than the current concept of measuring compliance with an unrealistic "zero defect" standard. To expend resources where risk is greatest, for example, data from past audits and reviews should be used to improve the methods for selecting institutions for inspection and the methods for sampling records at those institutions. Further, the match between the capabilities of the inspector and the inspection function warrants careful attention. More information is needed about the reliability of independent auditor and reviewer findings. For example, are the relative frequencies of findings comparable between audits and reviews? While the audit function can provide timely information, program reviews are the only system of institutional quality checks that is entirely in the control of the Department of Education. Thus, the reviews should have the highest degree of reliability and objectivity, since all other systems rely on a considerable degree of good faith for self-reporting of problems or successes.

The panel believes the Department of Education should revamp its audit and program review systems in order to realize the potential of those activities to support its gate-keeping function and provide technical assistance to institutions and useful data to policymakers. The panel finds four areas in need of attention.

1. In response to the lack of usable data sets:

· To make efficient use of scarce resources, it is critically important that the Department of Education create more useful data sets and make better use of statistical sampling techniques in its inspection processes that provide the data. Selection of institutions for inspection is a key task. Reviews should continue to be focused primarily on problem institutions. Still, it is wise to include a purely random component in the selection process so that all institutions are on notice that they may be selected and to provide information that could be used to make inferences about all institutions. The majority of resources should be allocated to the highest risk areas, based on models that are derived from the best data available and are frequently refined to reflect the results obtained at previously selected institutions.

· Once an institution is selected for examination, various sampling techniques offer natural and effective means of ensuring that the time devoted to the examination is appropriate. For example, an initial sample can be chosen according to a prescription that uses the results of that sample to determine whether further sampling is necessary and, if so, how large the next stage of sampling should be. Such methods reduce significantly the number of records required on the average to provide accurate inferences about the total population of records in the category. Another plan, multiphase sampling, could be useful. Here, a large sample is selected for a "quick" review, possibly looking for known but simple indicators of problems. If the results of the large sample warrant it, a subsample is drawn for detailed examination. With either procedure, initial sample size should be based on knowledge of the variability of the problem and the risk it poses, not on a flat standard as is now done. (An illustrative sketch of these sampling and selection ideas appears after Recommendation 4-3 below.)
· The overly large number of review areas should be reduced. Areas not known to be error prone should be reviewed on a sampling schedule to provide an assurance of continuing compliance.
· Program reviews must result in the creation of a data base that can be used to do more than simply identify error patterns (national, regional, and type of school) within the group of schools reviewed. Incorporating a designed sample of institutions and recording the frequency of errors at the institutional level rather than just the occurrence of error would permit general conclusions about error levels. Currently, conclusions beyond a particular school are not statistically valid.
· Some details of successful compliance, especially in processes that are problematic at most schools, as opposed to "finding problems" only, should be reported as part of the review process.
· Reviewers should be experienced enough to look beyond prescribed review instructions to report unsound practices that undermine the intent of student financial aid programs even if they do not violate regulations or prescribed procedures.
· Better information on the cost of the audit and review functions and the liabilities actually collected should be maintained so resources can be allocated effectively.

2. In response to the lack of a good definition of "error":

· To provide much-needed clarity in the assessment of error, the inspection processes should maintain clear distinctions among errors of different types, for example, process errors as opposed to material errors and errors for which the institution is responsible as opposed to errors for which student applicants are responsible.

3. In response to the lack of standards:

· The review process itself should be subject to an organized effort focused on quality improvement. Sources of sampling and measurement error should be subjects of study to ensure that review outputs, incorporated into a well-maintained data base, adequately support regional and categorical comparisons, statistical estimates (e.g., of error rates), and policy decisions.
· Quality improvement activities should be part of the audit and review processes, from selection of the institutions to close out of the inspection, and should include verifiable standards for conducting the activity and for a higher level review of the results.

4. In response to the lack of follow-up:

· Reviews should seek to determine the cause of problems and report possible corrective action.
· Review findings, once they incorporate the prior suggestions for improvement, should be used as a basis for new policy and/or legislation.
· Finding violations should stimulate attempts to find causes and, if indicated, to identify systemic failure. There should be systematic follow-up on resolution of findings at each institution, and the results should be used to target areas for future reviews.

Because current program reviews and annual audits duplicate many activities and the skills required may not match the expertise of those doing the "inspection," the panel makes the following recommendation.

Recommendation 4-3: The Department of Education should redesign the current system of program reviews and independent audits. Program reviews should focus on compliance as part of an overall quality improvement program. Checks on institutional compliance and internal controls should be performed only in program reviews, and audits should focus only on financial attestation.

The Department of Education should also develop, test, and implement methods to systematize and standardize the program review process. The department should not interpret the 1992 reauthorization of the Higher Education Act as a requirement to review every school according to a fixed schedule. Risk-based statistical methods should be used to identify problem schools for more frequent reviews, and other schools should be selected randomly at a nominal rate that would fulfill the necessary "gate-keeping" functions. The department must improve the evaluative feedback and technical assistance provided to institutions during reviews. At the same time, the reviews should be used to accumulate data that provide the department with a continuous overview of error rates, compliance levels, and other information of significance for management in making policy.
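As noted in the discussion of usable data sets above, the selection and sampling ideas can be sketched in code. The example below is purely hypothetical: it combines risk-based selection of schools with a small random component, and a two-stage record sample whose second stage is drawn only when the first stage shows problems. The scores, capacities, sample sizes, and thresholds are invented for illustration; they are not the department's procedures or figures endorsed by the panel.

    # Hypothetical sketch: risk-based school selection with a random component,
    # plus a two-stage record sample within a selected school. All numbers and
    # thresholds are invented for illustration.

    import random


    def select_schools(risk_scores, review_capacity, random_fraction=0.1, seed=0):
        """Pick most schools by descending risk score, but reserve a share of
        capacity for a purely random draw so every school faces some chance of
        review and the results support population-level inferences."""
        rng = random.Random(seed)
        n_random = max(1, int(review_capacity * random_fraction))
        ranked = sorted(risk_scores, key=risk_scores.get, reverse=True)
        risk_picks = ranked[: review_capacity - n_random]
        remaining = [s for s in risk_scores if s not in risk_picks]
        random_picks = rng.sample(remaining, min(n_random, len(remaining)))
        return risk_picks, random_picks


    def two_stage_sample(records, first_stage=10, trigger_rate=0.10,
                         second_stage=40, seed=0):
        """Review a small initial sample; expand only if its error rate reaches
        the trigger, so reviewer time tracks the apparent level of problems."""
        rng = random.Random(seed)
        indices = list(range(len(records)))
        rng.shuffle(indices)
        stage1 = indices[:first_stage]
        rate1 = sum(records[i]["error"] for i in stage1) / len(stage1)
        if rate1 < trigger_rate:
            return {"stage 1 error rate": rate1, "expanded": False}
        stage2 = indices[first_stage:first_stage + second_stage]
        examined = stage1 + stage2
        rate = sum(records[i]["error"] for i in examined) / len(examined)
        return {"stage 1 error rate": rate1, "expanded": True, "combined error rate": rate}


    if __name__ == "__main__":
        scores = {f"school {i}": s for i, s in enumerate([9.1, 7.4, 6.8, 3.2, 2.5, 1.1, 0.9, 0.4])}
        by_risk, at_random = select_schools(scores, review_capacity=4)
        print("risk-based picks:", by_risk, "random picks:", at_random)

        files = [{"error": i % 7 == 0} for i in range(300)]
        print(two_stage_sample(files))

In practice, the trigger and the stage sizes would be set from the variability and risk considerations the panel describes, rather than from the flat values used here.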
