The United States lags behind some other industrialized countries in college participation rates and college attainment, observed William Tierney. In 2006, 40 percent of adults aged 25 to 64 had earned a college degree, putting the nation third in international comparisons (Organization for Economic Co-operation and Development, 2009, 2010). The nation ranks 10th in terms of the percentage of the population that enters college (64 percent in the United States), and 14th in college graduation rate. Indicators for this educational stage could provide better understanding of who attends college, the benefits that college may offer, and college students’ experiences and outcomes, but there are challenges.
There are more than 6,600 postsecondary institutions in the United States, of which approximately 4,400 grant degrees.22 Nearly 1,700 are 2-year institutions, and 2,700 are 4-year institutions. A small but growing segment of the higher education landscape is for-profit institutions, and distance learning opportunities are expanding as well. Within these basic categories, institutions vary enormously in mission, size, population served, resources, and many other characteristics. Making valid and fair comparisons across different types of institutions is a key challenge, and this was a theme throughout the presentations and discussion of possible indicators to monitor the progress of higher education. The indicators suggested by presenters are listed in Table 4-1.
Graduation and Retention Rates
The importance of tracking graduation rates was highlighted by most of the presenters, who also offered a variety of comments.
Kevin Dougherty noted that the best way to measure completion has been the subject of considerable debate. The National Center for Education Statistics collects completion data through the Integrated Postsecondary Education Data System (IPEDS), but it does not currently provide data on part-time students (roughly three-fifths of community college students attend part-time), those who enroll after the fall, or those who transfer (Committee on Measures of Student Success, 2011). “IPEDS is a good dataset but it is
TABLE 4-1 Indicators Suggested for Higher Education
|CHARACTERISTICS OF INSTITUTIONS, SERVICE PROVIDERS, AND RESOURCES|
• Graduation and retention rates, disaggregated to capture community college students and other nontraditional students, and perhaps also financial aid status, family income, and need for remediation at the time of matriculation
• Measure of the highest level of education attained by students 10 years after they first enrolled in a postsecondary institution
• Transfer rates—students who successfully transfer from a community college to a 4-year institution, or proportion of students who graduate or transfer within 200 percent (4 years) or 300 percent (6 years) of normal completion time
• Educational progress rates, such as: measure of proportion of students college-ready at matriculation; percentage of students who persist through graduation; completion rates for remedial coursework and progression to college-level coursework; cumulative credits earned; or an indicator tracking students from pre-K through the highest level of schooling in which they enroll
• Preparation for careers and job placement, using employment rates and salaries at 1 and 5 years postgraduation
• Research and development activity, using, e.g., spending on research and development, number of patents secured, or income earned through licenses; indicator for humanities and social sciences also needed
• Job placement and earnings
• Learning outcomes, such as: cognitive skills or functioning; occupational competence and preparedness; civic awareness and responsibility; global and intercultural competence; moral reasoning
• Navigational capital or understanding of college access and success process
• Participation in different kinds of colleges and programs, including distribution of students by key demographic characteristics (e.g., gender, race/ethnicity, family income, disability status, and age) across different types of postsecondary institutions and higher education outcomes
• Net cost and affordability for families—could include net cost of tuition and fees, minus grants, disaggregated by family income, or average student’s loan burden relative to starting salary
important that it be amplified,” in Dougherty’s view, and he noted that such proposals are currently under consideration.23
Lashawn Richburg-Hayes endorsed this view and emphasized the importance of including in the indicator system disaggregated data that captures community college students and other nontraditional students, and perhaps also financial aid status, family income, and need for remediation at the time of matriculation. These are important points to track, in her view, because they reflect groups who have the greatest obstacles to success. Laura Perna also noted the challenges of capturing the variation in students and their differing pathways through institutions. She suggested including a basic measure of graduation rates for full-time students at the institution in which they first enrolled, but also including tools on the indicators website that allow users to disaggregate the data to reflect graduation rates for different types of colleges and universities and for students with different demographic and academic characteristics.
Another challenge, Dougherty noted, is to decide what time window to use for completion. The current standard is to look at students who graduate within 150 to 200 percent of what is regarded as a normal time for completing the degree. But since so many students attend part time, many take much longer than that. It might be useful, he suggested, to either extend the window or to have several windows, to capture students who stay enrolled or re-enroll. Perna advocated including a measure of the highest level of education attained by students 10 years after they first enrolled in a postsecondary institution, as a way of addressing this concern.
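The practical effect of choosing a completion window can be illustrated with a small sketch. The cohort records, field names, and the 2-year “normal time” below are hypothetical assumptions for illustration, not a proposed data standard:

```python
# Illustrative sketch: share of a cohort counted as completers under
# different time windows. All records and field names are hypothetical.
# "Normal time" for an associate's degree is taken here as 2 years, so
# 150% = 3 years, 200% = 4 years, and 300% = 6 years.

NORMAL_TIME_YEARS = 2  # assumed normal time to degree

cohort = [
    {"id": 1, "completed": True,  "years_to_degree": 2.0},
    {"id": 2, "completed": True,  "years_to_degree": 3.5},  # part-time student
    {"id": 3, "completed": True,  "years_to_degree": 5.0},  # stopped out, re-enrolled
    {"id": 4, "completed": False, "years_to_degree": None},
]

def completion_rate(cohort, pct_of_normal_time):
    """Fraction of the cohort completing within pct% of normal time."""
    window = NORMAL_TIME_YEARS * pct_of_normal_time / 100
    done = sum(1 for s in cohort
               if s["completed"] and s["years_to_degree"] <= window)
    return done / len(cohort)

for pct in (150, 200, 300):
    print(f"{pct}% window: {completion_rate(cohort, pct):.0%}")
```

In this toy cohort the indicator moves from 25 percent under a 150 percent window to 75 percent under a 300 percent window, which is the kind of sensitivity Dougherty's suggestion of multiple windows is meant to surface.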
Tierney cited both graduation and retention rates (the percentage of full-time students who return each year) as important, and he also noted another challenge to consider with this indicator. “If graduation is the criterion,” he commented, “the for-profits know how to do that—they will graduate students.” For that reason it is important to include retention as well, since some students may need more time to meet requirements.
A related and equally important measure, for Dougherty, is of students who successfully transfer from a community college to a 4-year institution without earning a degree at the community college. However, he added, community college graduation rates usually do not include students who transfer without having first received an associate’s degree—a complete picture of the contribution community colleges make would include these data. Such data are available in many state longitudinal data systems and also from the National Student Clearinghouse (an organization that collects data from more than 3,300 participating colleges and universities). Richburg-Hayes also called for a measure of the proportion of students who graduate or transfer within 200 percent (4 years) or 300 percent (6 years) of the normal time to completion.
23Dougherty also noted that the College Board is currently developing a website that will make a variety of outcomes indicators for community colleges publicly available.
Educational Progress Rates
Students’ progress as they move through higher education was important to several of the panelists, and they suggested several possible indicators. For Lashawn Richburg-Hayes, it is important to begin with a baseline—a measure of the proportion of first-time, first-year students who are college ready at the time they matriculate. This is different from the measures of college readiness that might be used to assess the effectiveness of K-12 education, she explained, because many students—particularly those enrolling in community colleges—are not beginning their postsecondary work immediately after high school.
More than half of community college students matriculate 5 or more years after high school, and their average age is 26 to 28. Even if those students were college ready at the time they completed high school, she observed, they are likely to have forgotten “the trigonometry, the algebra, even the fractions, for that matter, because they did not use them” after high school. In order to fairly assess community colleges, then, she believes it is important to understand “what it is they are starting with.”
Similarly, Laura Perna included a measure of college readiness to monitor the pipeline of students entering college, such as the percentage of 9th graders who graduate from high school, enter college, persist through the first year of college, and ultimately graduate. This should be a K-12 indicator, she noted, but since she did not see consensus on this point in that discussion, she included it as a higher education indicator. She noted that college entrance examinations are not ideal measures of college readiness.
Looking next at what takes place during postsecondary schooling, Dougherty noted that several indicators can show whether or not students are on track to complete a degree. These can also be useful to policy makers because they can help identify the “points of blockage students are running into,” he added. One is a measure of completion rates for remedial or developmental programs and whether students go on to complete a college-level course in the area in which they received remediation. Twenty-five states collect these data, and many also collect data on the pace at which students accrue credits toward a degree, and the time it takes them to earn a credential (García and L’Orange, 2010). Richburg-Hayes also advocated measuring completion rates for developmental education requirements.
William Tierney also cited the importance of measuring institutions’ capacity to meet students’ needs for remediation. While needs vary across types of institutions, he noted, “We need criteria that work across the postsecondary sector” to measure this because students at every type of institution come in needing remediation.
Richburg-Hayes also cited the importance of measuring institutions’ effectiveness at remediation, and suggested using “the proportion of students who were not prepared for college-level work upon matriculation who pass the developmental course requirements within three semesters.” Until “you deal with the missing skills,” she noted, “it is not reasonable to think about graduation rates and transfer rates.”
Richburg-Hayes also advocated another indicator of students’ progress through postsecondary education, the cumulative total degree-applicable credits students earn. Community college students tend to approach college differently from the way more traditional students do, she explained. They may have many different reasons for attending and usually integrate their coursework into work and family commitments
differently than do 4-year students. When students need considerable remediation, they may need to delay the courses required for the degree. These students take longer to meet graduation requirements and are the most likely to drop out before earning a degree.
“Since remediation barriers have been identified as a key deterrent to graduation,” she concluded, tracking the rate at which students are meeting actual degree requirements and other goals is important. Doing so is complicated, however, by the fact that institutions have varying methods for assessing the need for remediation as well as varying courses and procedures for providing it.
Perna looked at progression in a different way, suggesting an indicator to track students from pre-K through the highest level of schooling in which they enroll. These data could potentially be linked to occupational data, she noted, which would provide important insights about the short- and long-term benefits of different types of preparation and postsecondary pathways. She noted that state longitudinal data systems and the National Student Clearinghouse have begun to track students more comprehensively, so the pre-K to postsecondary indicator may not be an impossible dream. She agreed that measuring college learning is important but believes that the field does not currently have good ways to measure it.
Preparation for Careers and Job Placement
Regardless of an institution’s mission and the characteristics of the population it serves, it has a responsibility to provide students with the skills they will need to meet their goals. It is important to monitor how well they are doing so, “whether for cosmetology or engineering,” William Tierney noted, and whether schools are placing students in jobs comparable to their education and training.
Perna addressed this in a more concrete way, suggesting a measure of the economic benefits to individuals and society provided by institutions, in terms of employment rates and salaries in the short and long term (e.g., at 1 and 5 years postgraduation). She also cautioned that use of an economic indicator of benefits reflects the reliance on data that are available; numerous other benefits also result from higher education but are less easily quantified and measured.
Research and Development Activity
Higher education institutions have responsibilities beyond educating students, observed Dougherty, which should also be monitored. Obvious measures, such as spending on research and development, number of patents secured, or income earned through licenses, are important but capture primarily activity in the natural and biomedical sciences. Indicators for the humanities and social sciences are needed as well, in his view.
Job Placement and Earnings
For Dougherty, preparation for careers should be considered as an outcome for both individuals and institutions. Federal and state employment data are important sources of information on outcomes for students once they leave higher education, Dougherty explained. There are difficulties with interpreting these data, however. Labor markets are very volatile, he noted, and conditions may vary markedly from one state or region to the next. Thus, employment rates and earnings may be more related to labor market conditions than to the effectiveness of higher education. In his view, an indicator of job placements and earnings will be important but the data must be interpreted carefully.
“The real hole in what we know about higher education is that, for all intents and purposes, we are unable to say anything at the state or national level about what students are learning and how much of it they are learning,” argued Patrick Terenzini, and that was the focus of all of his suggested indicators. A widely cited regular report on higher education produced by the National Center for Public Policy and Higher Education, Measuring Up,24 he noted, provided measures of college preparation, participation, access, affordability, and completion, for example, but reported little about what students learn at postsecondary institutions. There has been some interest in expanding the sampling for the National Assessment of Adult Literacy to provide state-level data for a few states, he added, but in his view, the literacy skills that are assessed are “quite basic.” “At some point we need to look for measures of higher levels of development,” he noted.
The idea of expanding the National Assessment of Educational Progress (NAEP) to cover postsecondary education has been proposed, Terenzini noted, but he acknowledged the difficulty of measuring so diffuse a construct as college learning. There is very little consensus on what students should learn “beyond some fairly high-flying, abstract statements about things like critical thinking,” he added, which is widely supported “until you start trying to define it.” Nevertheless, he noted that there is a considerable body of research on higher order thinking, occupational competence, and civic awareness and participation that provides the basis for possible indicators of college learning (Pascarella and Terenzini, 1991, 2005). Terenzini suggested several areas that would be worth probing, though there are obvious indicators for only some of them. Acknowledging the difficulty of measurement in this area, he noted that “we have to start somewhere.” He suggested several indicators:
• cognitive skills or functioning among college students and adults. This would include critical thinking, problem solving, synthesizing, the ability to evaluate evidence, and the ability to exercise judgment, and could be evaluated using such available measures as the ACT’s Collegiate Assessment of Academic Proficiency
or the Collegiate Learning Assessment.25 In Terenzini’s view, these would be the best available options, but he acknowledged that using either as an indicator of college learning would be challenging.
• occupational competence and preparedness for advanced practice in specific fields. For this, Terenzini would use data from licensure examinations (where available) in such professional fields as nursing, teaching, physical therapy, and engineering, as well as measures of preparedness for graduate study, including the Graduate Record Examination, the Medical College Admissions Test, and the Law School Admissions Test. American College Testing’s WorkKeys program, which evaluates 2-year college students’ preparedness in such areas as applied mathematics, locating information, and reading for information, could also be used.26
• civic awareness and responsibility. Cultivating a sense of membership in a community and the will to participate in that community has been a valued goal in higher education since its beginnings in the United States, Terenzini noted, and there are currently at least two potential sources of measures: the National Conference on Citizenship is developing a Civic Health Index, and the Census Bureau’s annual Current Population Survey recently began including supplemental questions about volunteering, voting, and other indicators of civic health.
• global and intercultural competence. This indicator would include, for example, the ability to understand people who have different cultural backgrounds, both within the United States and worldwide, and the ability to work in groups.
• moral reasoning. Terenzini believes that it is an important goal of higher education to cultivate not any one set of moral stances, but rather the capacity to reach judgments of one’s own about right and wrong—as opposed to relying exclusively or primarily on religious tradition, parental authority, or other authorities for such judgments.
Navigational Capital, or Understanding of College Access
Along with college readiness (whether as a K-12 or higher education indicator), Dougherty argued, it is important also to have an indicator of navigational capital, which he defined as students’ knowledge of the college access and college success process (see Yosso and Solorzano, 2005). A few researchers have explored what happens to students when they get to community college or a for-profit school, he noted (e.g., Rosenbaum, Deil-Amen, and Person, 2006). The admissions and financial aid process; college academic requirements, organizational procedures, and expectations; and curricular pathways may all be unfamiliar and daunting for many students.
26A participant noted that the National Adult Literacy Study includes a measure of technological literacy that could be used to characterize skills in that area by age cohort.
Although he knows of no good way to measure how well students understand what is involved in the college process, he believes it is an important measure to consider as an aspiration. “Providing opportunities is not enough,” he commented, and the knowledge needed to successfully navigate this very complicated process “is very socially stratified.” That is, this factor may be especially important for community college students. Since many of them are the first in their families to attend college, they are less likely to have family members with knowledge about the higher education system who can advise them.
Participation in Different Kinds of Colleges and Programs
The structure of the higher education system, in terms of the array of 2- and 4-year institutions, as well as offerings in different fields, should also be tracked, in Dougherty’s view, because there is evidence that outcomes for students vary according to the type of institution they attend and the programs they complete (Long and Kurlaender, 2009; Pascarella and Terenzini, 2005). For example, he noted, it would be useful to show breakdowns of college choice and major by students’ family background. Such information is likely to illuminate findings about students’ later outcomes. Perna also addressed this issue, framing it as a measure of equity, in terms of the distribution of students by key demographic characteristics (e.g., gender, race/ethnicity, family income, disability status, and age) across different types of postsecondary institutions and higher education outcomes.
A related access issue is net cost and affordability for families, Dougherty added, which he suggested could be measured as the net costs of higher education in relation to average family income (National Center for Public Policy and Higher Education, 2008). Tierney also cited this issue, calling for a measure of debt incurred in relation to the income graduates can earn, which might also be treated as an institutional indicator. Perna addressed this issue as well, advocating a measure of the net cost of tuition and fees, minus grants, disaggregated by family income. Like Tierney, she believes it is also important to have a measure of the magnitude and manageability of student debt, perhaps the average graduate’s loan burden relative to starting salary.
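The two affordability measures described here reduce to simple arithmetic, which can be sketched as follows. All dollar figures are hypothetical examples, not data from the workshop:

```python
# Illustrative sketch of the two affordability measures discussed above.
# All amounts are hypothetical examples, not real data.

def net_price(tuition, fees, grants):
    """Net cost of tuition and fees, minus grant aid."""
    return tuition + fees - grants

def loan_burden_ratio(total_debt, starting_salary):
    """Average graduate's loan burden relative to starting salary."""
    return total_debt / starting_salary

# Example: $9,000 tuition and $1,200 fees offset by a $4,200 grant,
# with $24,000 in loans against a $40,000 starting salary.
print(net_price(9_000, 1_200, 4_200))       # net price in dollars
print(loan_burden_ratio(24_000, 40_000))    # debt-to-salary ratio
```

Disaggregating the first measure by family income, as Perna suggests, would simply mean computing it separately for each income band in the underlying data.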
To open discussion of the many ideas put forward by the panelists, Lisa Lynch identified four basic categories she saw in the suggested indicators: college readiness, affordability, access, and some way of measuring the value that higher education offers to individuals and society. She also noted that relatively little was said about technology—though she believes it is transforming the delivery of higher education—or about the changing demographics of the college-going population. Options for online learning, for example, may mean that students, faculty, and the institution itself could be located in entirely different places, which may complicate the collection of data and the possibilities for making comparisons across institution, student population, geographic location, and
time. At the same time, she added, the traditional student, who enters college at age 18 and attends more or less full-time, may soon be in the minority. Thus, she noted, data collection will need to adapt to the growing proportion of students who do not fit this mold.
In the discussion, participants expanded on these and other issues.
A Complex Sector to Measure
Richburg-Hayes noted that many of the indicators suggested for the higher education stage are not currently available and also that many will be difficult to standardize. Efforts are under way to begin collecting much of the necessary data, but the lack of standardization is a more difficult problem to solve, in her view. In many cases, colleges have different definitions for basic terms that will make it difficult to compare data. For example, for one college, “degree-seeking students” means all who enroll, while for another, that group may include only students who have formally submitted documentation of their intention to pursue a degree offered at the community college or transfer to a 4-year institution. A comparison of the graduation rates of these schools would need to factor in this difference. The purpose and rigor of coursework also vary considerably.
There are few metrics that are common across colleges and can be used to make valid comparisons across institutions and over time, she added. The postsecondary sector lacks a counterpart to the National Assessment of Educational Progress, which provides a standardized measure of proficiency in academic subjects that permits comparison of student cohorts across time and among states. “We would not necessarily compare Harvard’s graduation rate with those of local community colleges,” she added, “but that is essentially what we are talking about doing with indicators.” Similarly, even though Harvard might accept a transfer credit for biology 101 from a community college, the course is likely to be qualitatively different from Harvard’s version.
This is not just a methodological issue, she continued: “There is an underlying conceptual problem with having indicators for higher education that we would need to tackle to avoid the apples to oranges problem.” This is not an insurmountable problem, in her view, but one that cannot be ignored. Others agreed. One commented, “I am not even sure that any two or three institutions can really be put in the same stratum.” It might be more useful, this participant suggested, to compare institutions by outcomes for particular demographic groups that are more comparable. “Being able to disaggregate to an appropriate level is essential to the validity and utility of the data,” this person added. Another noted that, without a new approach, “We will continue to praise the successes of the advantaged, elite institutions, at the expense of all the others.”
Focus on For-Profit Institutions
Many people question the value of for-profit institutions, noted Tierney, but in his view they have an important role. “If we want a small or relatively elite system,” he commented, then they may not have a role, but “if we want to expand enrollment, for-profit [institutions] have to play a role.” He believes that for-profit institutions are not going to take over higher education, but he added that few people would likely have
predicted a decade ago that 12 percent of college students would be attending for-profit institutions (this figure includes institutions that do not grant degrees or certificates). He sees a steady increase in the number of people pursuing higher education, as the demand for a better educated workforce grows nationwide. If participation rates in higher education are to increase, institutions will need to focus on two populations that have traditionally had low rates of college attendance: low-income, first-generation students of color and working adults. Public postsecondary institutions do not currently have the capacity to expand enough to serve these groups adequately. For instance, he predicts that student participation in higher education in California is likely to increase by 100,000 students each year for the next decade, while tight state budgets mean that funding for public institutions is either flat or declining.
Public postsecondary institutions will not be able to meet the need, and, in his view, for-profit ones will be needed to fill the gap. In 1967, fewer than 22,000 students, or less than one-third of 1 percent of postsecondary enrollment, were in for-profit, degree-granting institutions, while today that number is 1.2 million, or 6.5 percent of the total (see Hentschke, Lechuga, and Tierney, 2010, and Horn, 2011, for data and other information on for-profit institutions). The for-profit sector is the fastest growing sector of higher education, and these institutions offer more online courses than their public and private counterparts do. They also serve high percentages of first-generation college students, students of color, and working adults, Tierney added.
For-profit institutions vary significantly in quality, however. This sector is growing very fast and, as in any industry that is growing fast, “there are some fly-by-night operations,” Tierney added. Even though the for-profit institutions may resist the push to measure their outcomes, they will ultimately benefit, in his view, if the distinctions between legitimate institutions and low-quality ones become more evident. “The danger if we don’t implement this correctly is that those students who need it the most—students of color, first-generation students, and working adults—will be left out,” he concluded.
Purposes for the Data
The diversity of the higher education sector is just one reason that many presenters and participants emphasized the importance of being able to disaggregate the data collected for many of the suggested indicators. There are many possible audiences for these data, Perna observed—the general public, consumers, government, employers, and institutions themselves, for example—and each may have different uses for them. Institutions might want to compare themselves to their peers as they pursue improvement, while other users might be more interested in whether funds expended on higher education are being used effectively and efficiently. Each purpose might point to different sorts of indicators, she added. In her view, the overarching goal should be monitoring the educational attainment of students at all levels.
Tools for disaggregating the data available on the national indicators website could, however, allow for multiple uses. Many participants suggested that it should be possible to explore data by type of institution (using more precise categories than, for example, 4-year and 2-year) and also by type of student (e.g., full time, part time, and by such demographic characteristics as race, gender, and socioeconomic status). The
possibility of isolating students who do and do not receive Pell grants, and those who do and do not enroll in developmental courses, for example, would provide a way of exploring socioeconomic factors, noted one participant. Another issue to consider is what would be necessary to make valid temporal and geographical comparisons, a participant observed. In order to support international or even state-by-state comparisons, it is important to control for differences in the composition of student populations across the entities compared. Otherwise, this participant noted, “you could generate data that carry a lot of weight in the media but don’t fairly represent what is going on.”
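The kind of disaggregation participants described, breaking one graduation-rate indicator out by Pell receipt or developmental-course enrollment, can be sketched in a few lines. The student records and field names below are hypothetical:

```python
# Illustrative sketch: disaggregating a graduation-rate indicator by
# Pell grant receipt and developmental-course enrollment.
# All records and field names are hypothetical.
from collections import defaultdict

students = [
    {"pell": True,  "developmental": True,  "graduated": False},
    {"pell": True,  "developmental": False, "graduated": True},
    {"pell": False, "developmental": False, "graduated": True},
    {"pell": False, "developmental": True,  "graduated": True},
    {"pell": True,  "developmental": True,  "graduated": True},
    {"pell": False, "developmental": False, "graduated": True},
]

def grad_rate_by(students, key):
    """Graduation rate for each value of a grouping field."""
    groups = defaultdict(lambda: [0, 0])  # value -> [graduates, total]
    for s in students:
        groups[s[key]][1] += 1
        if s["graduated"]:
            groups[s[key]][0] += 1
    return {value: grads / total for value, (grads, total) in groups.items()}

print(grad_rate_by(students, "pell"))
print(grad_rate_by(students, "developmental"))
```

An indicators website could expose exactly this operation interactively, letting users choose the grouping field rather than publishing only the aggregate rate.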
Equity issues were not as prominent in the indicators suggested as some participants expected, and they raised several points. The legal system has played a large role in determining approaches to equity in the K-12 sector, one noted, but that has not been the case in the postsecondary setting. For higher education, he noted, “whether public or private, it is a moral obligation.” Affirmative action has been significantly scaled back, this person observed, so new ways are needed to consider the equity issues reflected in differential rates of participation in higher education, access and availability, and retention and graduation. There are two distinct issues, another added. One is the idea that there is a responsibility to address past inequities to minority and other groups, and ensure that those inequities do not persist, by means of particular attention to the adequacy of public education for all. The other issue is the right of educational institutions to take particular steps to pursue diversity for the sake of its educational value.
Both issues could be illuminated by data on the distribution of different types of students along different pathways, on transfers, course-taking, and the labor market value of different pathways, and one participant noted that such data are incomplete at present. It appears that there is a disproportionate concentration of students from low-income families in for-profit institutions, one person noted, wondering if data could be collected to shed light on questions such as, “Who are these kids? What fields are they going into? How do they fare?”
Another agreed, noting that simply documenting the differences in employment, experiences, earnings, and other benefits that may accrue from higher education, across groups and pathways, is important. “Part of what a national indicators system should do is lay out what is happening,” this person observed. “Then others can ask why, which characteristics and forces are contributing to differences across groups.”
Some of the suggested indicators address a related issue—what students actually learn in college—but none directly address the quality of institutions, one participant noted. There are possible measures, this person added, such as exposure to curriculum relevant to particular goals, instructional quality, student engagement, diversity, and institutional culture. It is relatively easy to track more concrete indicators, such as financial resources, accomplishments reflected in prizes awarded to faculty, or licenses for faculty inventions, and those are important, several people noted. But in the K-12 sector, a participant commented, it has become clear that it is not enough
to look at how much money a district has: one has to see how it is flowing into individual schools and even classrooms. Similarly, he went on, in the postsecondary sector, “we need to look at the capacity of institutions for organizational learning, because so much of what we are talking about is organizational change.”
As the sector grows and grows more diverse, the challenge of measuring its status, its quality, and what it offers to those who consume it, will become more complex, participants suggested. “We talk about it as if it were a single, coordinated system,” one noted, “but it doesn’t operate as a system—it is a nonsystem.”