4
Current Quality Control Procedures

In this chapter, the panel reviews current efforts by the Department of Education to control and monitor the quality of the award and payment processes, the types of error that are uncovered and the importance of detecting them, and the burden quality control places on those in the system. For the purpose of this chapter, and in many respects in the Department of Education's historical view and its charge to the panel, "quality of the award and payment processes" is synonymous with the concept of accuracy in the award and payment to recipients of student financial aid. This narrow, "payment error" view forms the rationale for the department's current control and monitoring activities. We examine the views of other participants in the system in Chapter 5.

ERROR DEFINED

Although reducing payment error (defined below) is an important aspect of quality, particularly in programs that disburse public funds, it is only one of many dimensions of quality, as discussed in Chapter 2. Its pervasiveness, however, makes an understanding of payment error a prerequisite to an understanding of quality control as practiced by the Department of Education. In principle, there are two kinds of payment errors in student financial aid programs, or in fact in any program designed to dispense resources to those in need of them: (1) errors of overaward or overpayment and (2) errors of underaward or underpayment. Overpayments can be subdivided into (1) excess payments to eligible recipients and (2) all payments
to ineligible recipients. In parallel fashion, underpayments comprise (1) insufficient payments to eligible recipients and (2) the lack of payments to those mistakenly classified as ineligible.

Two further characterizations of error apply to overpayment and underpayment errors. One distinction is between substantive errors and technical errors. Substantive errors are directly associated with the provision of information that determines eligibility for student financial aid and the calculation based on that information. Technical errors occur when a legally necessary document has not been submitted or has been submitted but is missing from a student's file. The lack of such documents makes the student categorically ineligible for financial aid; the inclusion of the documents, however, may or may not render the student eligible, depending on other factors that could be examined if the file were further investigated. Technical errors include the failure to have on file documentation of satisfactory educational progress, an indication of registration for Selective Service, a statement of educational purpose, or a financial aid transcript (a report of federal aid given to the student by institutions previously attended). Certainly, an existing but temporarily misplaced document, while a possible indication of poor administrative work, is not as serious a technical error as noncompliance with a requirement, such as maintaining satisfactory educational progress.

A second distinction deals with the originating source of the error: whether it is the institution, data processor, or student. These three sources of error are the primary focus of the quality control efforts discussed in this chapter. Data processing errors are basically self-explanatory, but a few comments on institutional and student errors are warranted. Institutional error may occur in the form of failure to follow Title IV regulations or the institution's own policies for Title IV aid, even though failure to follow the latter may not violate Title IV regulations. (Some failures, termed liability errors, require reimbursing the federal government for the amount of the error.)

For their part, students may make intentional reporting errors, for which they are liable for fines or imprisonment. Alternatively, students may make unintentional errors in reporting, which, if found in verification, should result in an adjustment to their award but do not make them liable for fines or imprisonment. Still another type of student error arises from incorrect projections or estimates (e.g., federal income tax to be paid) on the application form. These do not count as errors under Title IV regulations, but they have been tabulated as error in some studies of program quality (e.g., the Integrated Quality Control Measurement Project, discussed in Chapter 5).

The differences among the various Title IV programs also lead to different ways in which errors in the analysis of student need translate into dollars misspent. Because the Pell program involves a direct grant, the conceptualization of error in that program is the most straightforward: error is simply defined as the difference between the payment made to a student and the amount that should have been paid according to a correct need analysis. Thus, errors in Pell grant need analyses are dollars actually misspent or erroneously not spent.

On the other hand, an error in a Campus-Based program is conceptualized as the discrepancy between calculated need and need if correctly calculated with accurate data. Because funding is limited, however, actual awards are often less than the calculated need. Hence an error, as a concept, is not equivalent to actual dollars misspent or erroneously not spent. Similarly, in the Stafford Loan Program, error is conceptualized as any mismatch between appropriate and actual certification amounts. A student, however, may choose not to borrow the full amount, so again, errors do not necessarily represent actual dollars misspent or erroneously not spent. In addition, the student may repay the loan, which limits the cost of the error to any subsidies or allowances provided by the federal government on the erroneous part of the loan.
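The distinction between error as a concept and dollars actually misspent can be made concrete with a small sketch. The function names and dollar figures below are hypothetical illustrations, not the department's need-analysis rules; a positive result indicates an overaward and a negative result an underaward.

```python
# Illustrative sketch of the three error concepts described above. All names
# and dollar figures are hypothetical examples, not the Department of
# Education's actual need-analysis rules.

def pell_error(paid, correct_award):
    """Pell: error equals dollars misspent (or erroneously not spent)."""
    return paid - correct_award

def campus_based_error(calculated_need, correct_need):
    """Campus-Based: error is a discrepancy in calculated need; because awards
    are often less than need, it may not equal dollars misspent."""
    return calculated_need - correct_need

def stafford_error(certified_amount, correct_certification):
    """Stafford: error is a mismatch in certification amounts; the student may
    borrow less, and repayment limits the cost to federal subsidies."""
    return certified_amount - correct_certification

# One hypothetical applicant:
print(pell_error(paid=2400, correct_award=2100))                          # 300 overpaid
print(campus_based_error(calculated_need=5000, correct_need=4300))        # 700 of "need" error
print(stafford_error(certified_amount=2625, correct_certification=2000))  # 625 overcertified
```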
As a final note on these errors, the errors addressed in this chapter involve, for the most part, applicants who were found eligible for an award. Recently, the Department of Education has made an effort to inspect the cases of unsuccessful applicants as well. Eligible individuals who do not apply are also a source of underpayment not addressed by current quality control activities. This is a major issue of quality in the system rather than a source of error and is discussed in Chapter 5.

THE QUALITY CONTROL PROCESS

The questions concerning the quality control process that were posed by the Department of Education in requesting this study included the following:

- How much information should be obtained from applicants and how intensively should its accuracy be reviewed?
- What are appropriate or realistic levels of performance to be expected from participants in the financial aid system?
- What are reasonable trade-offs between performance and the burden imposed on those in the system?

Materials provided to the panel by the Department of Education's Division of Quality Assurance describe the current quality control process as a three-part effort consisting of prevention, inspection, and oversight. The department has another activity related to quality control: special sample-survey studies, which are discussed in the next chapter.
Prevention consists of activities aimed at avoiding errors. The Department of Education identifies two such activities:

- training, which is provided by the department to data entry contractors, financial aid administrators and other institutional officers, lenders, accrediting agencies, state scholarship and guaranty agencies, and others in the financial aid community; and
- verification of student data, which entails institutional review of student-submitted information and, if necessary, correction of errors.

Inspection consists of after-the-fact monitoring activities during audits and program reviews designed to determine the accuracy of program administration by the schools. Such activities are often developed in the belief that they help to ensure compliance because the possibility of penalties and/or sanctions acts as a deterrent. There are two types of inspection activities:

- Audits. Audits are typically conducted by a certified public accountant and are submitted to the Department of Education for review and approval. Audits focus on determining the reliability of institutional financial data, management controls, and compliance with requirements of participants in the federal student financial aid programs. Until the 1992 reauthorization of the Higher Education Act, participation in federal Title IV programs required that all institutions undergo at least a biannual audit of the Title IV financial aid programs in which they participated. The reauthorization requires annual audits.
- Program reviews. The Department of Education's regional and central office staff conduct program reviews to determine compliance with federal rules and regulations governing the student aid programs. The reviews are conducted at educational and lending institutions and guaranty agencies. (The department also requires guaranty agencies to review their largest institutions and lenders. The reviews, which are not discussed in this report, focus on compliance with the rules and regulations for guaranteed student loan, or GSL, programs. The reports must be submitted to the department for review and approval.)

Oversight consists of periodic studies of various program areas within the Department of Education. The studies are conducted by the department's Office of Inspector General (OIG) or the General Accounting Office (GAO) and focus on procedures for monitoring compliance with various requirements and the management of those activities.

The panel reviewed the award determination system and related quality control efforts using an approach that looks for potential to improve the accuracy of awards. In viewing the study this way, we identified several activities related to quality control and process improvement that were not part of the prevention, inspection, and oversight strategy defined by the
Department of Education. Using the simplified process diagram presented in Figure 3-3, we identified six distinct but interconnected activities related to control, improvement, or consequence of poor performance (see Table 4-1).

TABLE 4-1 Current Activities Related to Quality Control and Improvement in the Process of Awarding Student Financial Aid

Stage in the Process          Current Activities
Learning about the programs   Outreach activities; financial aid administrator help; area code 800 telephone information lines; feedback from involved organizations
Filling out forms             Electronic application; financial aid administrator help; renewal applications
Data entry                    Inspection sampling and reporting by data entry contractors
Data editing                  Computer flagging; Student Aid Reports generated with highlights from edits and applicant corrections
Verification                  Verification of data from a selected percentage of applicants
Retrospective activities      Audits; reviews; oversight; special sample-survey studies

In the remainder of this chapter, we review inspection activities after the student submits an application for aid. The inspection activities include the processing activities of data entry, editing, and verification and the retrospective activities of audit and review. In Chapter 5, we review the measured outcomes of the financial aid process and relate the findings to the inspection activities and to problems the applicant faces in understanding and completing the application for aid. The issue of where the burden for ensuring effective performance of the system should be placed is addressed in Chapters 5 and 9.

PROCESSING ACTIVITIES

Data Entry

Data entry is done under contract. A central processor handles the federal application form, and several data entry contractors (called multiple data entry contractors, or MDEs) each handle a separate version of the
application form, which includes federal application information and state-required data.[1] Entry of the application data and subsequent corrections of the data involve opening and handling mail and entering and processing data. (We also include printing and mailing the Student Aid Report, or SAR, in this category.) The amount of work during peak application periods and the turnaround time required increase the risk of errors. Error in processing is a concern because applicants might be incorrectly informed of their eligibility status, which could affect their decisions about whether to attend school and, if so, where.

[1] As explained in more detail in Chapter 5, application forms can be obtained from the Department of Education, lenders, schools, or from one of the MDEs.

The panel examined specifications, developed by the Department of Education and/or the contractors, that would indicate that extraordinary care is taken to ensure that data entry operations are accurately and efficiently accomplished from the time applications and corrections are received to the time that the SAR is produced. At every step of the process, traditional quality control inspection procedures are specified. At the receipt and review stage, for example, documents must successfully pass an initial completeness check and then a further review. A random sample of batches of applications is also selected and checked for key entry errors. At the output stage, quality of print and integrity of data are checked in a sample from each stack of printed SARs.

In its efforts to improve data processing activities, the Department of Education encourages contractors to obtain ongoing feedback from employees at different levels, such as data entry, operations, systems, and project management staff. In addition, the department requires all MDEs to submit an annual requirements analysis, a comprehensive review of all major aspects of the system, comments from applicants and institutions, and recommendations for any changes that are necessary.

To assess ongoing work performance and product quality, the MDEs have developed feedback systems. One MDE, for example, utilizes Corrective Action/Error Cause Removal Sheets, on which employees indicate and describe the existence of a problem. The MDE's quality assurance department then works with management to implement corrective action. Additionally, units specifically assigned to address quality issues collect and maintain detailed statistics on data entry quality and conduct ongoing review and evaluation of processing functions and requested changes.

The panel did not attempt to verify the contractors' strict adherence to, or the success of, the defined quality control activities. However, a General Accounting Office (1985:9) study reported that "a small-scale review that was part of the 1980-1981 error study suggested that keystroke error in entering data from application forms to the computer terminal was low." This still appears to be true; management reports consistently indicate error rates well below 1 percent for keystroke and other data handling activities.
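A minimal sketch of the kind of batch inspection sampling described above may be useful. The batch sizes, sample size, and error rate below are fabricated for the example; the contractors' actual acceptance criteria are not specified in this report.

```python
# Hypothetical illustration of batch sampling for keystroke errors: draw a
# random sample of batches, compare each keyed field with its source value,
# and estimate the keystroke error rate.
import math
import random

def estimate_keystroke_error(batches, n_batches_to_check, seed=0):
    """Estimate the keystroke error rate from a random sample of batches,
    with a rough normal-approximation 95 percent confidence interval."""
    rng = random.Random(seed)
    sampled = rng.sample(batches, n_batches_to_check)
    fields = [pair for batch in sampled for pair in batch]
    errors = sum(1 for keyed, source in fields if keyed != source)
    n = len(fields)
    rate = errors / n
    half_width = 1.96 * math.sqrt(rate * (1 - rate) / n)
    return rate, (max(0.0, rate - half_width), rate + half_width)

# Fabricated example: 200 batches of 500 fields, about 0.2 percent mis-keyed.
rng = random.Random(42)
batches = [[("x", "x") if rng.random() > 0.002 else ("x", "y") for _ in range(500)]
           for _ in range(200)]
rate, ci = estimate_keystroke_error(batches, n_batches_to_check=20)
print(f"estimated keystroke error rate: {rate:.4%}, approximate 95% CI: {ci}")
```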
Given the extensive quality control efforts employed over the years, the panel concludes that the handling of applications and data entry are well under control. The quality control methods at data entry may be somewhat excessive, but the total cost of data entry with those quality control activities is low, especially compared with the more controversial areas of controlling student and institutional error.

Data Editing

When an application or correction arrives at the processing facility, data are edited on a very rudimentary level (e.g., completeness of key fields, such as a signature, and decisions on poor handwriting). After the data pass data entry quality control inspections, the processor transmits the data file to the central processor for more extensive computer editing. There, the Central Processing System (CPS) performs checks for consistency with data in several federal data bases (e.g., Selective Service, Immigration and Naturalization, Drug Enforcement, and defaulted federal student loan data bases) and performs edits similar to those used in many survey operations (e.g., assumed and imputed values for missing or illogical data of a noncritical nature, consistency checks among data elements, and range checks of data elements). Processing of applications failing the most critical aspects of these edits is suspended. For other edits, incorrect or "suspect" data elements are "highlighted," and an award index is computed. The data as entered from the application, highlighted data elements, and award index information are mailed to the applicant as part of the Student Aid Report.

For applications originating with the central processor, SARs are mailed directly by the central processor. For applications originating with an MDE, the information needed to produce a SAR (highlighted and eligibility information) is transmitted back to the MDE. The MDE then prints the SAR and mails it to the student. There was some objection to this process in the past. For example, an MDE complained to the Department of Education about rigid rules on overly specified formats and the wastefulness of having each of the MDE organizations develop programs to print the SAR forms.

The student, whether subject to highlights of possible errors or not, is asked to review all the data on the SAR and, if corrections are needed, return the corrected SAR to the processor. The corrected data are re-edited and a new SAR is produced. Almost one third of applicants must "recycle" their SAR, which delays their award determination (see Table 4-2). Following the completion of the correction cycle, the student provides the SAR to each institution to which he or she has applied for aid (recall Figure 3-3). Data from the Department of Education's Management Information System
(MIS) are used to produce tables of aggregate correction rates for each data item, but use of other statistical analyses might be instructive. For example, a cross-tabulation of error items by the number of SAR cycles might indicate the items that are responsible for most of the repeated SAR cycling. Taking actions to reduce the need for SAR recycling could reduce program costs.

TABLE 4-2 Frequency of Valid Applications, by the Number of Transactions, 1990-91 Academic Year

Number of Transactions    Count of Applications    Percent of Applications
1                                 4,827,615                  68.4
2                                 1,555,070                  22.0
3                                   443,479                   6.3
4                                   146,134                   2.1
5                                    49,739                   0.7
6                                    18,819                   0.3
7                                     7,469                   0.1
8-55                                  6,086                   0.1
Total                             7,054,411                 100.0

NOTE: Number of transactions is the sum of the number of times the Student Aid Report is returned plus one for the initial application.
SOURCE: National Computer Systems (1990-91a:5-1).

There is evidence that the SAR itself is an effective quality control device. On applications for the 1990-91 academic year, for example, more than half of the corrections of data critical to computing the award formula were made without the corrected field having been highlighted. For "federal income tax paid," a field that is influential in the award formula, the proportion of unsolicited corrections was over 80 percent (National Computer Systems, 1990-91a:4-11 to 4-14). Some of these changes may occur as a result of verification initiated by institutions (some schools do 100 percent verification anyway). Thus, further information on why the changes were needed would be useful for planning strategies to get the correct information the first time.

The central processor performs a second edit function: flagging applications for institutional verification. Verification activities are discussed next.
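Before turning to verification, the editing and flagging steps can be illustrated with a short sketch. The field names, ranges, and consistency rules below are hypothetical; they are not the Central Processing System's actual edit specifications.

```python
# A sketch of the kinds of edits described above: suspend processing on a
# critical failure, impute and highlight noncritical missing values, and
# highlight range or consistency failures for the Student Aid Report.

def edit_application(app):
    """Return (status, highlights): status is 'suspended' for critical
    failures or 'processed' otherwise; highlights lists suspect items."""
    highlights = []

    # Critical edit: a missing signature suspends processing entirely.
    if not app.get("signature"):
        return "suspended", highlights

    # Noncritical missing value: impute and highlight rather than suspend.
    if app.get("adjusted_gross_income") is None:
        app["adjusted_gross_income"] = 0
        highlights.append("adjusted_gross_income")

    # Range check on a dollar field.
    if not 0 <= app["adjusted_gross_income"] <= 1_000_000:
        highlights.append("adjusted_gross_income")

    # Consistency check between two related data elements.
    if app.get("federal_income_tax_paid", 0) > app["adjusted_gross_income"]:
        highlights.append("federal_income_tax_paid")

    return "processed", highlights

status, highlights = edit_application(
    {"signature": True, "adjusted_gross_income": 18_500, "federal_income_tax_paid": 21_000}
)
print(status, highlights)  # processed ['federal_income_tax_paid']
```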
Verification

Verification, formerly called validation, by the institution of applicants' reported data items is the Department of Education's primary tool in its efforts to control applicant error. Each school is required to verify key elements of the student record for all records flagged by the central processor (or by the school's own error-prone profile in the case of the limited number of institutions participating in the quality control project discussed in Chapter 8). Additionally, whether the student's file is selected for verification or not, the institution is responsible for resolving any conflicting data it may contain (e.g., between an unsolicited tax return and the federal record). Corrections, if made, must be reported to the central processor (either through the mailed SAR process or electronically at the institution).

Adding to the complexity of the verification process is the structure that requires the process to occur at each institution to which the student applied for aid. The panel was informed that an institution initiating data corrections based on information from its file frequently finds that the data are then changed by the student or another institution, and the resultant central processing data do not match the institution's record.

One of the burdens of this prolonged SAR process is that with each change the student's Pell award may have to be recalculated, and if adjusted, the interdependencies among the various aid programs may cause changes to any Campus-Based or loan determinations that are part of the student's aid package. For example, for each dollar of Pell correction in the case of a fully awarded student, a Stafford loan dollar often must be changed and the Stafford loan certification process begun again. Data on the Stafford loan must then be revised by the school, the lender, and the guaranty agency.

There is little year-to-year comparison of an applicant's information in the student financial aid system unless instituted by the school, in which case it also becomes liable for errors in reconciliation when using the prior data. The need to make quick decisions concerning the award, changing family and student incomes, and the increasing degree to which students move among institutions of higher education are likely barriers to year-to-year data comparisons, which might otherwise lead to increased veracity. Although the verification design, at one time, reportedly called for some applicant records to be purposely reselected for verification in successive years, no formal reports on studies of the data were made available to the panel. Also, since only about 50 percent of applicants reapply for aid the next year, the ability to make good verification decisions on first-time applicants is important.

Verification activities follow a basic cycle for each program year, which includes the following steps:

- The Department of Education develops strategies for selecting applications thought to be more likely than others to be in error.
- The central processor compares each application with the verification selection criteria at the time the application first enters into the system (recall that some initial edits lead to the application being rejected); a simplified sketch of this selection step follows the list.
- Institutions verify applicant data for the selected applications. (Rules concerning the maximum percentage of applicants an institution must verify, the data items that must be verified, and acceptable documentation for data item verification have varied over the years. By federal rules issued prior to reauthorization, the institution need not verify more than 30 percent of applications. Reauthorization, however, allows the Secretary of Education to mandate verification of all applications.)
- Institutions report data changed in the verification process to the central processor for recalculation of the award formula and for creation of the analytic data sets used in the Management Information System. (The data are used in the analysis to create the selection strategy for the following year.)
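The selection step can be sketched as follows, assuming a simple additive error-prone score and the 30 percent ceiling. The scoring weights and field names are invented for illustration; the actual criteria are developed under contract and are considerably more complex.

```python
# Hypothetical error-prone scoring: flag the highest-scoring valid
# applications for institutional verification, up to a ceiling (30 percent
# under the pre-reauthorization rules).

def error_prone_score(app):
    score = 0.0
    if app.get("estimated_taxes"):        # projected rather than actual tax figures
        score += 2.0
    if app.get("household_size", 1) >= 6:
        score += 1.0
    if app.get("failed_consistency_edit"):
        score += 3.0
    return score

def select_for_verification(applications, ceiling=0.30):
    ranked = sorted(applications, key=error_prone_score, reverse=True)
    return ranked[: int(len(ranked) * ceiling)]

apps = [
    {"id": 1, "estimated_taxes": True, "household_size": 4},
    {"id": 2, "failed_consistency_edit": True},
    {"id": 3, "household_size": 2},
    {"id": 4, "household_size": 7, "estimated_taxes": True},
]
print([a["id"] for a in select_for_verification(apps)])  # -> [2] (1 of 4 records)
```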
In essence, verification is reapplication for financial aid with supporting documentation.

The institutional verification process has been the subject of considerable research expenditure by the Department of Education since 1980. From the earliest reports through the most recent, the approach has been criticized. (A list of the reports is provided in the next chapter, where the panel summarizes data from the reports.) Major criticisms include the following:

- The cost-benefit of the approach is questionable.
- Unfair burdens are placed on the academic institutions.
- The timing of and changes to awards create difficulties.
- The approach may unfairly target certain groups.
- The verification data are also prone to error.

Although not a focus of the reports, the panel considers the impact of reapplying on the applicant an additional criticism.

The Department of Education describes the current verification system as an attempt to balance error reduction and the burden imposed on institutions. Efforts to move toward verification of all applicants have been tempered by institutional lobbying that sought to limit the burden that institutions must endure to correct errors not of their making. As a result, the 30 percent of valid applicant records that are selected for verification are chosen through a complex sequence of statistical procedures (see the next section) intended to target the most error-prone applicants.

The verification system has been studied and modified over the years, but the panel could not find evidence that major advances have been made
in responding to the major criticisms of the system. For example, the cost-benefit of the system remains a question. While the MIS estimates the additional error removed by each selection criterion, there are no current estimates of the overall cost of the system, including resources used by the Department of Education and the institutions and the applicant's time. The General Accounting Office (1985) estimated the 1982-83 cost to institutions as slightly more than the dollars of award error eliminated by the verification process. The GAO study did not measure the potential deterrent effect that verification may have on student or institutional error, nor did any other studies reviewed by the panel. But a recent quality control study (Price Waterhouse, 1991) shows little difference in the amount of final award error (based on a "verification" conducted during the study) among applications selected by the Department of Education's verification strategy, those selected for verification by institutions, and those not verified. Thus, the SAR process may provide most of the deterrence, and what appear to be marginal gains at best may result from verification activities. (Besides, 50 percent of applicants are first-time entrants to the process and would likely be much less aware of the verification possibility than a taxpayer is of an Internal Revenue Service, or IRS, audit.)

Going further, several studies suggest a return to the original concept that verification be done centrally and not at the institutions. Verification requires an enormous effort from nearly everyone involved. The institutions must perform this unwanted task, often duplicating efforts for students applying to several institutions. The Department of Education has to use a contractor to help develop efficient selection criteria and conduct studies of verification efforts, and it must monitor the criteria for fairness, react to audits, and conduct reviews of the institutions to determine compliance.

The panel, recognizing the costly consequences of verification activities, devoted considerable attention to verification. Panel members, staff, and consultants visited with Department of Education staff and contractors involved in verification and reviewed several contracted analyses and analysis plans. The panel focused its initial activities on assessing the efficiency of the verification selection methodology and the burdens verification imposes.

Selection Methodology

The panel was interested in the statistical underpinnings of the verification selection strategy and observed the following. The analysis that leads to the criteria for selecting applications and the creation of computer programs that select the applicants are carried out under contract. The contractor provides detailed plans for conducting the activities in accordance with a very comprehensive, long-term analysis plan that was developed in 1986. That plan recognized the importance of such issues as timing, the need to
TABLE 4-4 Program Review Selection Criteria, 1993

Criteria                                                                        Points
1. Schools with Federal Family Education Loan Program (FFELP) default rates
   in fiscal year 1990 of 25% and above (includes those schools with default
   rates that are based on 30 or more borrowers entering repayment and those
   schools with average default rates)                                             40
2. High student withdrawal rate of 33% and above
      33-39%                                                                       10
      40-49%                                                                       20
      50% and above                                                                30
3. No program review in past 4 years                                               30
4. Schools with change of ownership/recertification since January 1, 1990          25
5. Schools new to Title IV programs since January 1, 1990                          25
6. Schools being monitored for financial capability                                20
7. Significant increases in FFELP loan volume (1988-89 and 1989-90) or Federal
   Pell Grant volume (1989-90 and 1990-91) based on percentages for most recent
   award years (The number of points assigned depends on the dollar range of
   the program and the percentage increase.)                             10, 15, or 20
8. Regional assessment (e.g., student complaints, adverse publicity)               25

NOTE: Overdue audit report was removed from the criteria in 1993.
SOURCE: U.S. Department of Education (1991b).

high default rates, high student withdrawal rates, and no program review in the past four years. Overdue audits were a high-priority criterion before 1993. Within the ranking, schools are reviewed according to the resources at the disposal of each region. Since 1989, an average of 900 school reviews have been conducted each year.[6] While regional staffing appears to be roughly proportional to the percentage of schools and the loan volume, staffing reportedly is not proportional to high-risk characteristics, such as schools with high default rates or the rate of schools with potential liabilities identified in reviews.

[6] This number reflects reviews of all types, not necessarily those resulting from the screening process. For example, reviews of closed schools frequently do not involve a site visit, and complaint-based reviews may be limited to the subject of the complaint. All are counted as school reviews.
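The point values in Table 4-4 amount to a simple additive risk score. The sketch below illustrates the idea with a subset of the criteria; the school data are fabricated, and the number of top-ranked schools actually reviewed depends on regional resources, as the text notes.

```python
# Illustrative scoring of schools against a few of the Table 4-4 criteria.
# Only a subset of criteria is shown, and the example schools are invented.

def review_priority_score(school):
    score = 0
    if school.get("ffelp_default_rate_fy90", 0) >= 0.25:
        score += 40
    withdrawal = school.get("withdrawal_rate", 0)
    if withdrawal >= 0.50:
        score += 30
    elif withdrawal >= 0.40:
        score += 20
    elif withdrawal >= 0.33:
        score += 10
    if school.get("years_since_last_review", 0) >= 4:
        score += 30
    if school.get("new_to_title_iv_since_1990"):
        score += 25
    return score

schools = [
    {"name": "A", "ffelp_default_rate_fy90": 0.31, "withdrawal_rate": 0.45},
    {"name": "B", "withdrawal_rate": 0.20, "years_since_last_review": 6},
    {"name": "C", "new_to_title_iv_since_1990": True, "withdrawal_rate": 0.36},
]
ranked = sorted(schools, key=review_priority_score, reverse=True)
print([(s["name"], review_priority_score(s)) for s in ranked])
# -> [('A', 60), ('C', 35), ('B', 30)]
```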
The Visit

The monitoring function of the visit is to assess the institution in terms of (a) the accuracy of its student eligibility determinations and the calculation and disbursement of awards and (b) its general administrative capability and financial responsibility. Institutions are usually informed of the pending visit and are required to provide information and materials, including student rosters and policy manuals, prior to the on-site visit. Each of the following programs is reviewed:

- Federal Pell Grant
- Federal Supplemental Educational Opportunity Grant
- Federal Perkins Loan
- Federal Work-Study
- Federal Family Education Loans

The aspects of the administration of Title IV student financial assistance programs that are subject to review include the following:

- ability to benefit
- satisfactory academic progress by students
- student eligibility
- verification of student-provided information

The reviewer also examines the disbursement of aid to student accounts and the financial accounting for Title IV program funds. At institutions with a high default rate, additional processes are required. As with audits, the program review outcomes can include corrective actions, liabilities, and administrative penalties, such as fines and limitation, suspension, or termination of Title IV programs.

The core of the program review is derived from observations based on a sample of student financial aid files selected by the reviewer. The reviewer selects 10 files from each award year being reviewed (usually two years). There is no set method for the selection. Some reviewers select random samples, others select discretionary samples, and others select a combination of the two. Reviewers are supposed to select files that represent all the Title IV programs in which the school participates. The reviewer then evaluates, through the evidence in the student and financial files, the various operations of the school to test for compliance with law and regulations. Reviewers may expand their sample if a noted deficiency indicates a
problem with potentially greater frequency than indicated by its occurrence in the sample.

If an instance of noncompliance is noted, the reviewer is expected to document the deficiency using worksheets provided in the department's Program Review Guide (U.S. Department of Education, 1991b). The reviewer not only identifies errors, but also calculates the value of the errors and identifies necessary corrective actions. The calculations cover payments to ineligible recipients and overpayments and underpayments to eligible recipients. The extent of corrective action depends on the frequency of error based on the sample. It may involve correcting the individual case file in which the deficiency occurred, or it may involve a requirement that the school identify all such cases over a one- to five-year period that have the same characteristics and report the frequency of error noted and the value of such errors for that subuniverse.

The on-site review ends with an exit interview with administrative personnel from the school, during which the reviewer summarizes the findings, makes recommendations, and presents the required actions and any potential liabilities. A written report follows, usually within 30 days, which details findings, required actions, and recommendations for change. The institution then has 30 days to respond to the report and provide any documentation or information that rebuts the findings. The Department of Education then evaluates the institution's response and produces a letter to the institution, which includes the final determination for all findings and any assessed liabilities or proposed fines. This part of the resolution process may extend over several months and may involve many complications, such as extensive file reviews. The program review is closed when all assessed liabilities have been paid and the institution has sufficiently responded to required findings. Table 4-5 identifies the most frequent findings based on recent program reviews.

Based on the regional office's evaluation of monetary liabilities, the reviewer prepares data entry forms that show the codes for violations found during the review and the reviewer's assessment of the overall seriousness of the violations taken as a whole. Liabilities can be based on only the students in the sample or on a total file review. Program review liabilities are entered into the data base after they are final (that is, the program review has been resolved and closed). The frequency of violations is not recorded, only the occurrence. However, depending on the type of violation(s), significant liabilities may result. Thus, one instance of a violation, such as inadequate student consumer information, at a large number of schools will appear as a high-frequency error in the data base. On the other hand, an error that occurs at 10 percent of the schools but has serious implications, such as inadequate attendance records at a clock-hour school, will appear as less significant. Similarly, an error that occurs in a large number of cases at 10 percent of the schools will appear as less significant. Thus, a data base is maintained, but it does not serve the purpose of a management tool consistent with the comments in Chapter 2. That is, it is not used to identify common and special causes of error and would be difficult to use in such a way.
TABLE 4-5 Top 15 Program Review Findings, by Occurrence, Fiscal Year 1991

Finding                                                                   Occurrence
Verification procedures not followed/documented                                 408
Financial aid transcript missing/incomplete                                     371
Consumer information requirement not met                                        274
Satisfactory academic progress standards not adequate/developed/monitored       261
Guaranteed Student Loans: refunds not made/late                                 236
Guaranteed Student Loans: exit interview not documented                         227
Refunds: late/not made to Title IV account                                      219
Ineligible student citizenship                                                  197
Excess cash balances maintained                                                 167
Inconsistent information in student file                                        161
Ability to benefit: undocumented                                                145
Accounting records inadequate/not maintained                                    141
Bank accounts: federal funds not identified                                     136
FISAP income grid not documented                                                135
Student budgets (Pell Grant) improper                                           128

Top 15 total                                                                  3,206
Total for all others                                                          3,844
Grand total                                                                   7,050

SOURCE: U.S. Department of Education, Office of Student Financial Assistance.

The seriousness of problems uncovered in a program review determines whether the reviewer's report will be reviewed by central office personnel. The central office review is, reportedly, largely an edit for compliance with Title IV regulations, policy, and procedures to ensure that corrective actions are proper and consistent with national guidelines. Central office staff routinely consult with regional office staff to discuss any significant deviations from policy or procedure. The central office review does not include a review of workpapers or other objective information provided or requested on a routine basis; nor is there a standard process to verify the reviewer's assessment of seriousness.

Outcomes

In some cases, the actions of an institution indicate enough of a risk to the federal government that an immediate suspension of participation in
Title IV programs is sought. This emergency action occurs when the risk outweighs due process considerations.

Given serious violation of regulations, significant liability, or the exposure of fraud and abuse, the Department of Education may fine an institution and limit, suspend, or terminate its program participation. Fines can range up to $25,000 per violation for schools and may be combined with other actions. Limitations restrict an institution's participation and sometimes are used to settle a termination action. Suspensions range up to 60 days; they are rarely used, however, because they require the same process as a termination and may take up to a year to effect. Terminations, which are effective for up to 18 months, are sought for serious program violations, failure to pay a fine, noncompliance with a limitation agreement, or failure to resolve a program review or audit finding. See Table 4-6 for a summary of administrative actions in recent years.

TABLE 4-6 Summary of Administrative Actions

                                           Fiscal Year
Action                            1986   1987   1988   1989   1990   1991
Terminations initiated              13     14     27     30     53    113
Terminations imposed                 5      1     10     16     18     38
Disqualifications                  N/A    N/A      2      6      7     13
Limitations/settlements              8      2      8      5     17     51
Emergency actions                    2      2      9      1      0     38
Formal fines imposed                 4      3      7     13     27     87
Informal fines imposed              43     81    106    194    215    254
Debarment/suspension               N/A    N/A    N/A     24     58     57
Put on the reimbursement system     19     13     52    126    192    277
Total school reviews               417    372    677    820  1,139  1,016

NOTE: N/A = not applicable.
SOURCE: U.S. Department of Education, Office of Student Financial Assistance.

Data and the Targeting of Reviews

Because on-site reviews are conducted at institutions with high deficiency factors or occasionally at those going out of the program, the results may not be easily used to represent the overall problems occurring with the institutional administration of Title IV programs. Moreover, even in the institutions reviewed, the number of findings per institution is small, on average (e.g., an average of seven per institution in fiscal year 1991; see Table 4-5). Targeting of institutions for review has improved somewhat recently.
Institutional liabilities averaged over $32,000 per review in fiscal year 1992, compared with under $15,000 in fiscal year 1991, and the number of reviews declined from about 1,100 to under 700. The department views this as progress in the right direction, that is, focusing on fewer schools with larger "returns" per school. Another interpretation is that error rates have increased, but there are no procedures for making such inferences from program review data. Further, the liabilities assessed do not equal recoveries of funds. Linking recoveries and measurement of the degree of assurance that all potential assessments were found would now be useful. Nevertheless, the panel is concerned about the targeting of the reviews.

According to Department of Education staff, data management initially focused on tracking the occurrence of a review, not the findings that resulted. In more recent years, in response to requests for data on findings, data have been gathered concerning the findings of noncompliance with regulations. Yet the data continue to be anecdotal, that is, to reflect what individual reviewers said they saw at selected schools.

Although field reviews and the central office coordinating and data system are based on uncontrolled anecdotal observations, refinements in the automated system have permitted the manipulation and combination of data elements and their use to make general statements concerning the types and frequencies of errors. This has guided policy and management practices to an important extent. The panel believes that what is needed is a well-defined and well-maintained data base system that incorporates the results of program reviews in a form that management can use to guide policymaking.

From information provided by Department of Education personnel and discussions with financial aid administrators, the panel learned of the following problems with the data base associated with the program review process:

- The current design of the data base uses a cutoff sample weighted by the screening criteria. This is because neither the selection of institutions for review nor the selection of cases within an institution is based on a sampling strategy that would allow inferences about the population. While the selection methodology does help target reviews toward the most problematic institutions, the process does not result in the development of a data base from which generalizations about either institutions or cases within institutions can be made. Also, reviews do not cover all regulatory, statutory, and administrative requirements, because of time and personnel limitations. However, even if the cases and institutions were statistically sampled, the "findings" data base would be flawed because there is no control over which requirements are subject to review across regions or even within regional offices. The data base is further limited in that it contains only exceptions (violations); information about successful practices that resulted in compliance is not recorded.
- The reliability of the categorization of "error" is a cause of concern. The definitions of some errors are incomplete or unclear. Thus, the experience of the reviewers can contribute to measurement variability. There need to be more specific evidentiary requirements governing observations of error.
- No standards are in place by which to compare conclusions across regions or within regions. Reviews need not be identical in scope and process, nor in documentation. In addition, the levels of seriousness of institutional problems, by which each review is characterized, are not well defined. Although reviews with the highest problem rating go to the central office for clearance, there are no measures of the reliability of the reviewers' ratings. Nor is there assurance that reviews that were not referred to the central office were, in fact, not serious. Verification on a sample of cases would be needed.
- The reviews do not lead to a measure of the success of follow-up or oversight activities. Essentially, schools that are found to have significant problems must identify the extent of the problem through a review of all students' files and report their own error rates. (The Department of Education may require CPA certification.) As an alternative for less significant findings, at the next scheduled audit the auditor must review the school's self-examination results and report to the Department of Education in the "prior audit" section of the audit report. Moreover, the actions taken to resolve program review findings are minimally observed by the department. The findings do not lead to a program improvement process; they just report "assessed liabilities" over and over, and future reviews are not targeted based on the findings. Most important, the focus on assessed liability may be misleading. As noted above, actual liabilities after appeals are much less than the assessed amounts. The panel did not learn of any efforts to link the denied amounts to problems in the error definitions, risk, or future oversight activities.

OIG Activities

In addition to the audits and program reviews just described, the OIG conducts and supervises audits, investigations, inspections, and other reviews of the department's programs and operations. The OIG's role is to provide leadership, coordination, and policy recommendations to promote economy, efficiency, and effectiveness; prevent fraud and abuse in the department's operations and programs; and review proposed and existing legislation and regulations governing the department's programs. The OIG has also implemented a procedure for evaluating audits performed by nonfederal auditors. This procedure includes a review of audit working papers and information sharing with some state boards of accounting.
Departmental staff indicated that for the past several years, the Office of Management and Budget (OMB) and the OIG have identified the department's student financial assistance programs as vulnerable to fraud and abuse. OIG audits, investigations, inspections, and other reviews disclose problems that involve ability to benefit and other admissions abuses; ineligible courses and course stretching; accreditation, eligibility, and certification; branch campuses and ineligible campuses; refund practices; loan due diligence; issues related to the Supplemental Loans for Students and Parent Loans for Undergraduate Students programs; and issues related to bankrupt and closed schools. In addition to recommending the recovery of funds in individual cases, the OIG makes recommendations for changes in systemic requirements and practices, which, if implemented, are intended to help prevent many of the abuses from occurring in the future.

The Panel's Comments and Recommendations on Audits and Reviews

The rules and regulations governing student eligibility for financial aid are complex. The panel questioned whether and, if so, to what extent the complexities themselves are significant sources of error. The current monitoring and compliance activities and data bases do not address this issue. They are designed to assess absolute performance with respect to the accurate administration of student financial assistance programs, to impose sanctions based on error, and to count the occurrences of error.

Audit and review activities are necessary if the Department of Education is to fulfill its responsibility to ensure that program participation is limited to those institutions that are willing and able to operate in accordance with program goals and expectations. Because factors such as changes in economic conditions and in financial aid personnel affect an institution's ability to maintain desired levels of quality, "problem" institutions will appear sporadically and must be dealt with promptly and efficiently. Audits provide reasonable promptness, especially now that they are an annual requirement. Program reviews occur relatively infrequently because current Department of Education budgetary and personnel ceilings preclude regular reviews of all institutions. Yet the review process has the greater potential to be proactive and to provide the useful instruction and technical assistance that promote quality and help to build a sense of partnership between the institutions and the department.

The audit and program review processes used by the Department of Education to enforce quality standards are marked by considerable duplication, ineffectiveness, and wasted effort. The independent audit of schools checks for internal controls and compliance with program regulations in a
way that essentially duplicates parts of the program review process. Such checks are more effectively performed by knowledgeable reviewers and should be fully incorporated within the program review process. However, an effective review process would identify high-risk areas that could be part of the audit function.

Inspection is a deterrent only when a better job can be done. To maximize an institution's potential, the audit and review processes must be better integrated and redesigned with a sharper focus on addressing meaningful measures of quality and quality improvement rather than the current concept of measuring compliance with an unrealistic "zero defect" standard. To expend resources where risk is greatest, for example, data from past audits and reviews should be used to improve the methods for selecting institutions for inspection and the methods for sampling records at those institutions. Further, the match between the capabilities of the inspector and the inspection function warrants careful attention. More information is needed about the reliability of independent auditor and reviewer findings. For example, are the relative frequencies of findings comparable between audits and reviews? While the audit function can provide timely information, program reviews are the only system of institutional quality checks that is entirely in the control of the Department of Education. Thus, the reviews should have the highest degree of reliability and objectivity, since all other systems rely on a considerable degree of good faith for self-reporting of problems or successes.

The panel believes the Department of Education should revamp its audit and program review systems in order to realize the potential of those activities to support its gate-keeping function and provide technical assistance to institutions and useful data to policymakers. The panel finds four areas in need of attention.

1. In response to the lack of usable data sets:

- To make efficient use of scarce resources, it is critically important that the Department of Education create more useful data sets and make better use of statistical sampling techniques in the inspection processes that provide the data.
- Selection of institutions for inspection is a key task. Reviews should continue to be focused primarily on problem institutions. Still, it is wise to include a purely random component in the selection process so that all institutions are on notice that they may be selected and to provide information that could be used to make inferences about all institutions. The majority of resources should be allocated to the highest-risk areas, based on models that are derived from the best data available and are frequently refined to reflect the results obtained at previously selected institutions.
- Once an institution is selected for examination, various sampling techniques offer natural and effective means of ensuring that the time devoted to the examination is appropriate. For example, an initial sample can be chosen according to a prescription that uses the results of that sample to determine whether further sampling is necessary and, if so, how large the next stage of sampling should be. Such methods significantly reduce the number of records required, on average, to provide accurate inferences about the total population of records in the category. Another plan, multiphase sampling, could be useful. Here, a large sample is selected for a "quick" review, possibly looking for known but simple indicators of problems. If the results of the large sample warrant it, a subsample is drawn for detailed examination. With either procedure, initial sample size should be based on knowledge of the variability of the problem and the risk it poses, not on a flat standard as is now done. (A minimal sketch of such a two-stage plan appears at the end of this chapter.)
- The overly large number of review areas should be reduced. Areas not known to be error prone should be reviewed on a sampling schedule to provide an assurance of continuing compliance.
- Program reviews must result in the creation of a data base that can be used to do more than simply identify error patterns (national, regional, and type of school) within the group of schools reviewed. Incorporating a designed sample of institutions and recording the frequency of errors at the institutional level, rather than just the occurrence of error, would permit general conclusions about error levels. Currently, conclusions beyond a particular school are not statistically valid.
- Some details of successful compliance, especially in processes that are problematic at most schools, as opposed to "finding problems" only, should be reported as part of the review process.
- Reviewers should be experienced enough to look beyond prescribed review instructions to report unsound practices that undermine the intent of student financial aid programs even if they do not violate regulations or prescribed procedures.
- Better information on the cost of the audit and review functions and the liabilities actually collected should be maintained so resources can be allocated effectively.

2. In response to the lack of a good definition of "error":

- To provide much-needed clarity in the assessment of error, the inspection processes should maintain clear distinctions among errors of different types, for example, process errors as opposed to material errors and errors for which the institution is responsible as opposed to errors for which student applicants are responsible.

3. In response to the lack of standards:
- The review process itself should be subject to an organized effort focused on quality improvement. Sources of sampling and measurement error should be subjects of study to ensure that review outputs, incorporated into a well-maintained data base, adequately support regional and categorical comparisons, statistical estimates (e.g., of error rates), and policy decisions.
- Quality improvement activities should be part of the audit and review processes, from selection of the institutions to close-out of the inspection, and should include verifiable standards for conducting the activity and for a higher-level review of the results.

4. In response to the lack of follow-up:

- Reviews should seek to determine the cause of problems and report possible corrective action.
- Review findings, once they incorporate the prior suggestions for improvement, should be used as a basis for new policy and/or legislation.
- Finding violations should stimulate attempts to find causes and, if indicated, to identify systemic failure. There should be systematic follow-up on resolution of findings at each institution, and the results should be used to target areas for future reviews.

Because current program reviews and annual audits duplicate many activities and the skills required may not match the expertise of those doing the "inspection," the panel makes the following recommendation.

Recommendation 4-3: The Department of Education should redesign the current system of program reviews and independent audits. Program reviews should focus on compliance as part of an overall quality improvement program. Checks on institutional compliance and internal controls should be performed only in program reviews, and audits should focus only on financial attestation.

The Department of Education should also develop, test, and implement methods to systematize and standardize the program review process. The department should not interpret the 1992 reauthorization of the Higher Education Act as a requirement to review every school according to a fixed schedule. Risk-based statistical methods should be used to identify problem schools for more frequent reviews, and other schools should be selected randomly at a nominal rate that would fulfill the necessary "gate-keeping" functions. The department must improve the evaluative feedback and technical assistance provided to institutions during reviews. At the same time, the reviews should be used to accumulate data that provide the department with a continuous overview of error rates, compliance levels, and other information of significance for management in making policy.
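As a closing illustration, the following is a minimal sketch of the two-stage record-sampling idea referred to under point 1 above. The sample sizes, error threshold, and expansion rule are invented for the example; an operational plan would set them from the known variability of the problem and the risk it poses.

```python
# Hypothetical two-stage review of one school's student aid files: review an
# initial random sample, and expand the sample only if the observed error
# rate suggests a problem worth quantifying more precisely.
import random

def two_stage_review(records, initial_n=25, threshold=0.08, expansion_n=100, seed=1):
    rng = random.Random(seed)
    first = rng.sample(records, min(initial_n, len(records)))
    rate = sum(r["in_error"] for r in first) / len(first)
    if rate < threshold:
        return {"stage": 1, "files_reviewed": len(first), "estimated_error_rate": rate}
    first_ids = {r["id"] for r in first}
    remaining = [r for r in records if r["id"] not in first_ids]
    second = rng.sample(remaining, min(expansion_n, len(remaining)))
    combined = first + second
    rate = sum(r["in_error"] for r in combined) / len(combined)
    return {"stage": 2, "files_reviewed": len(combined), "estimated_error_rate": rate}

# Fabricated file of 1,000 student records with roughly a 10 percent error rate.
rng = random.Random(7)
records = [{"id": i, "in_error": rng.random() < 0.10} for i in range(1000)]
print(two_stage_review(records))
```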