3 The Value of Education Research Using Student and School Records

VALUE OF LONGITUDINAL STUDENT RECORD DATA FOR RESEARCH

Jane Hannaway (Urban Institute) directs the Center for the Analysis of Longitudinal Data in Educational Research (CALDER; see http://www.caldercenter.org/), funded by the U.S. Department of Education's Institute of Education Sciences. She described four features of state education databases that make them particularly valuable for analysis and research. First, because they include unique student identifiers, the databases allow researchers to link individual education records over time in order to develop measures of individual learning gains. Researchers can use these measures to eliminate many confounding variables. In the past, investigators sometimes compared student achievement in a classroom at two different points in time, but the classroom might be made up of different students at the later point. Perhaps more importantly, these data allow researchers to address the greatest threat to the validity of many educational studies: the fact that students are not randomly assigned to classrooms and schools. Investigators can use the individual measures of academic achievement and other student characteristics in these databases to statistically adjust for the lack of random assignment.

Second, Hannaway said, some state databases include unique teacher identifiers that allow researchers to link teacher records with student records and track patterns over time. This feature of the databases has
PROTECTING STUDENT RECORDS

allowed CALDER research teams to demonstrate the effect of the teacher on student learning and show how widely individual teachers vary in their effectiveness. This feature has also enabled research on the factors that may account for the variation in teacher effectiveness, such as credentials, training, classroom behavior, and experience. Hannaway argued that these studies are important because they help to clarify which factors do indeed promote student achievement, which in turn has implications for cost efficiency. For example, many school districts provide higher salaries to teachers with master's degrees, but recent CALDER studies indicate that, among elementary school teachers, the presence or absence of a master's degree does not affect student learning gains.

Third, the databases consist of census data, including all students and teachers in the state public education system. This feature of the databases allows an investigator to conduct multiple comparisons. For example, in her recent study of Teach for America teachers in North Carolina, Hannaway was able to compare Teach for America teachers with other new teachers, with all teachers, and with fully licensed and credentialed teachers. She described the potential of the databases for multiple comparisons as "very important" for policy purposes and as a valuable complement to random assignment studies.

Fourth, the databases incorporate historical records, a feature that is critical to understanding the effects of a change in education policy. For example, when studying Florida's A Plus accountability policy, Hannaway expected that the policy would have its largest effects on the lowest-performing schools. Contrary to expectations, she initially found that the low-performing schools were less likely to change their behavior than the high-performing schools, suggesting that the accountability policy was not working as intended.
However, after analyzing additional data from an earlier period, when another policy targeted many of the same low-performing schools, Hannaway and her colleagues concluded that the previous policy had already generated behavioral change in the low-performing schools. Without the historical data, Hannaway said, "you could come to a very faulty inference . . . in policy research."

In response to a question, Hannaway said that the quality of district and state data varies. For example, one North Carolina school district employing a large number of Teach for America teachers provided her research team with data only after a long delay. When Hannaway compared these data with a separate list provided by Teach for America, she found an overlap of only 25 percent. In contrast, some states, including North Carolina and New York, have invested in their data systems, are working with multiple researchers, and have accurate, reliable data, she said.
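The kind of cross-check Hannaway describes, comparing a district's teacher roster against Teach for America's own list, reduces to a set intersection over teacher identifiers. A minimal sketch follows; all identifiers and values here are invented for illustration:

```python
def roster_overlap(district_ids, program_ids):
    """Fraction of the program's list that also appears in the district data."""
    district = set(district_ids)
    program = set(program_ids)
    if not program:
        return 0.0
    return len(district & program) / len(program)

# Invented identifiers: only 1 of the 4 teachers on the program's list
# appears in the district's records, i.e., a 25 percent overlap.
district_roster = ["T014", "T203", "T377", "T450"]
tfa_list = ["T014", "T988", "T990", "T991"]
print(roster_overlap(district_roster, tfa_list))  # 0.25
```

In practice such a check would follow record linkage (normalizing names, matching on certification numbers), but even this simple comparison is enough to flag the data-quality problem Hannaway encountered.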
USING LONGITUDINAL STUDENT RECORD DATA IN HIGHER EDUCATION RESEARCH

Tom Bailey (Columbia University) explained that, as director of both the Community College Research Center, which often analyzes longitudinal student record data from state databases, and the National Center for Postsecondary Research, which conducts more time-consuming and expensive random assignment studies, he sees the value of both approaches. He asserted that it was impossible to carry out "meaningful analysis of student experiences in higher education" without longitudinal data from student records, and that the lack of such analyses limits understanding of higher education. He and other researchers would like to be able to track students across higher education institutions in order to address such critical questions as whether community colleges are successful in preparing students to transfer to four-year colleges and how well elementary and secondary schools prepare students for higher education. They would also like to track students within higher education institutions.

Bailey observed that the Family Educational Rights and Privacy Act (FERPA) is only one of several barriers that stand in the way of using education records for these types of analysis. One barrier is the fact that, until recently, most people had faith in the quality of U.S. higher education. Because colleges were assumed to be effective, public debates focused on access rather than on the quality of higher education. Second, because public funding is based on enrollments, community college administrators often define success in terms of current enrollments, rather than thinking about how to improve student success over the course of their college years. Third, increasing student mobility poses a challenge to measuring the performance of individual higher education institutions.
Bailey likened policy makers' current focus on accountability in higher education without attention to individual student progress to General Motors examining its performance without gathering data on car sales over time.

National longitudinal databases maintained by the National Center for Education Statistics have yielded valuable knowledge and understanding of student progress in higher education, Bailey said. The Center's National Education Longitudinal Study of 1988 (NELS:88) collects and maintains data on students over time, including their college transcripts, providing the source for many published studies. However, the data come from limited national samples of all students, including a smaller group of about 1,000 to 1,500 students who have ever attended community college. Given the small size of the national community college sample, it is impossible to analyze student progress in a single state, in demographic subgroups, or in a single educational institution. In contrast, the state
longitudinal databases, which include records on every student, are much larger, allowing the Community College Research Center to conduct studies of single colleges, subgroups, and other important topics.

Bailey expressed surprise at how rarely community colleges analyze their own internal student records. Many of the 83 colleges participating in the Achieving the Dream project lack information on such questions as how many and which types of students succeed in developmental education and go on to take regular college courses. Although FERPA poses no barrier to a college in analyzing the progress of its own students, weak information technology (IT) systems pose significant barriers. Community college IT systems are designed to track enrollments once each year and to send these data to the state for reimbursement, rather than to track individual students over time. Although some states and colleges are trying to improve their IT systems, many do not place a high priority on analysis of student progress. According to Bailey, members of community college boards do not understand what kinds of research could be conducted using student records and how that research might improve their educational programs.

Studies using longitudinal data to track student progress over time have already yielded important insights, because they allow researchers to track student responses to educational interventions over long time periods, Bailey observed. For example, a study in Ohio (Purnell and Blank, 2004) found that guidance counseling had strong positive effects on student success in the first two semesters of community college, but these effects had vanished two years later. Studies of developmental education in Florida (Calcagno and Long, 2008) and Ohio (Bettinger and Long, 2008) found that taking these remedial courses had little effect over three years, but greater effects over six years.
In another analysis, the Washington State Community College Board found that less affluent students tended to enroll in occupational programs, while their more affluent counterparts more often enrolled in prebaccalaureate transfer programs. The board took these findings to the state legislature and won approval for community colleges to offer applied bachelor's degrees and for expansion of transfer programs.

In another example, Bailey presented an analysis of student progression in developmental and gatekeeper mathematics courses at a single institution. Among students assigned to developmental mathematics courses, 34 percent never enrolled, and another 13 percent completed a course but never enrolled in the following gatekeeper course. This kind of information can be surprising to community college leaders, who often focus on improving instruction in individual courses without considering how to ensure that students actually attend the classes they are assigned to.

In the few states whose longitudinal databases link elementary and
secondary school records with higher education records, it is possible to assess the effectiveness of K-12 education in preparing young people for higher education. A 2006 survey (Ewell and Boeke, 2006) found that only 11 state databases included these links, slowing research on this important topic. For example, dual enrollment programs, in which high school students may take college courses, are growing rapidly, but little research is available on their effectiveness. The Community College Research Center has done a preliminary study of dual enrollment in Florida, because it has one of the few databases with linked records for K-12 and higher education.

Bailey argued that it is also important to link student data among colleges, in order to assess community colleges' effectiveness in preparing students for transfer to four-year colleges. More broadly, the increased mobility of all college students makes linked data across colleges crucial for any analysis of higher education outcomes. Without such linked data sets, education officials must rely on weak measures, such as the Graduation Rate Survey. This measure, which includes only full-time students, is a weak indicator of community college outcomes, since 66 percent of community college students enroll part-time. In addition, the survey excludes transfers and students who register in the spring, and it tracks students for only three years.

To measure community college outcomes more accurately, Bailey and colleagues analyzed longitudinal data on individual college students from the National Center for Education Statistics' Beginning Postsecondary Students study. They found that, although only 22.9 percent of students graduated from the institution at which they initially enrolled within three years, nearly 46 percent graduated from either their original institution or another institution after six years.
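Once student records are linked across institutions, cohort measures like these reduce to straightforward computations over each student's enrollment and completion history. The sketch below is illustrative only; the record fields and the sample cohort are invented, not drawn from the studies Bailey described:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StudentRecord:
    student_id: str
    first_institution: str
    grad_institution: Optional[str]   # None if no degree observed
    years_to_degree: Optional[float]  # None if no degree observed

def grad_rate(records, years, same_institution_only):
    """Share of the cohort graduating within `years`, optionally counting
    only degrees earned at the institution of first enrollment."""
    grads = 0
    for r in records:
        if r.grad_institution is None or r.years_to_degree is None:
            continue  # never completed a degree
        if r.years_to_degree > years:
            continue  # finished, but outside the tracking window
        if same_institution_only and r.grad_institution != r.first_institution:
            continue  # transferred out before graduating
        grads += 1
    return grads / len(records)

# Invented four-student cohort, all starting at community college "CC-A".
cohort = [
    StudentRecord("s1", "CC-A", "CC-A", 2.5),    # graduated in place
    StudentRecord("s2", "CC-A", "Univ-B", 5.0),  # transferred, then graduated
    StudentRecord("s3", "CC-A", None, None),     # no degree observed
    StudentRecord("s4", "CC-A", "CC-A", 6.0),    # part-time, slower finish
]
print(grad_rate(cohort, 3, True))   # 0.25: only s1 within three years
print(grad_rate(cohort, 6, False))  # 0.75: s1, s2, and s4 within six years
```

The gap between the two printed rates mirrors the gap Bailey reported between the 22.9 percent three-year, same-institution measure and the nearly 46 percent six-year, any-institution measure: the more restrictive window and scope exclude transfers and slower part-time completers.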
State education databases allow development of much more flexible accountability measures, in Bailey's view. For example, his center has analyzed three- and six-year graduation rates for different groups of students, including transfer students and part-time students, at Florida's 28 community colleges. While improved accountability measures are important, Bailey said, the real value of the state databases is in allowing more comprehensive and sophisticated analysis of student progress.

In conclusion, Bailey reiterated that research on higher education faces many barriers besides FERPA. Individual colleges and state higher education systems could potentially conduct a great deal of valuable research, but this will require a change of priorities, improvement in their IT systems, and increases in their analytic capabilities. While acknowledging that his center wants to build partnerships with the states, bringing its skills and research priorities, he said it was critically important to increase the states' own skills and priorities.
Responding to a question, Bailey said that Florida and Washington are among the very few states that have linked employment data with education data in their longitudinal databases. For example, researchers analyzing linked data in Washington found that, among community college students who took adult basic education courses, those who continued in other courses and completed at least 30 credit hours earned more money later than those who completed fewer credit hours (Prince and Jenkins, 2005). On the basis of this research, the state created a new program integrating adult basic education and occupational skills training.

BENEFITS OF RESEARCH ACCESS TO LONGITUDINAL STUDENT RECORD DATA

Susanna Loeb (Stanford University) opened by discussing how FERPA affects university-based researchers' access to data from individual student records. She said that FERPA allows an education agency to share personally identifiable information from student records without written consent if the disclosure is to "organizations conducting studies for, or on behalf of, educational agencies or institutions for the purpose of . . . improving instruction." The study must be conducted "in a manner that does not permit personal identification of parents and students by individuals other than representatives of the organization" (Code of Federal Regulations, Title 34, Part 99, Section 99.31(a)(6)). While this provision has allowed valuable research in education policy and practice, Loeb said, it has been interpreted in very different ways. Some schools and education agencies have shared data, while others have not. In her view, the difficulty of interpreting the law has required both researchers and school personnel to expend substantial effort on compliance.

Addressing the question of why an education agency might want to give researchers access to its data, Loeb said that education policy makers often seek research evidence to inform their decisions.
However, most school districts and state departments of education have quite limited capacity to conduct research. Researchers at universities and think tanks can provide the time and some of the expertise needed to make the best use of the information that education agencies have. In addition, outside researchers often have the flexibility to look at medium-run and long-run questions that do not help as directly with day-to-day decisions but can inform better decisions in the future.

The first benefit of allowing access is that researchers have time to compile and analyze data, Loeb said. Because linking and cleaning data from multiple sources is time-consuming, very few states and school districts have done so. For example, she belongs to a team of researchers studying the teacher workforces of New York City and New York state,
who have obtained, compiled, checked, and cleaned data from over 10 different sources. From New York public schools, the team obtained data on student demographics and test scores and teachers' years on the job. From New York state, they obtained data on individual teachers, including whether they were certified, their scores on the certification exam, and which teacher education program they had completed. In addition, the team identified the institutions from which individual teachers earned their undergraduate degrees and combined this information with the Barron's ranking of college selectivity to construct variables measuring the selectivity of the college from which each teacher graduated. While the research team had time to devote to this process, it is unlikely that any single education agency in New York would be able to compile all of these data sets. FERPA protections apply both to the individual student data and to the records of individual teachers from the time when they were themselves students.

Dedicating this time to accessing and compiling data sets, Loeb said, has allowed her team to conduct several important studies, including an analysis of the impact of the No Child Left Behind Act provision requiring school districts to employ only "highly qualified teachers." In response to the law, the New York City Department of Education eliminated emergency-certified teachers between 2002 and 2004, replacing them with teachers prepared by alternative certification programs, including Teach for America and New York City's Teaching Fellows Program (see Figure 3-1). As a result of this change, the average math SAT scores of teachers in the poorest schools increased dramatically. Today, the poorest schools employ higher-scoring new teachers than the richer schools.

The second benefit of allowing access is that researchers provide expertise.
Although school district and state personnel can often answer day-to-day questions by providing accurate, timely descriptive statistics, outside researchers are able to analyze longitudinal data in much more sophisticated ways. They conduct value-added analyses to assess how much various factors contribute to student learning over time and difference-in-differences analyses to compare patterns in two different time periods. Outside researchers also use a variety of techniques for simulating experiments. In addition, they are using longitudinal data and "putting experiments on top of them," Loeb said. After randomly assigning students or schools (or both) to treatment and control groups, Loeb said, the researchers are not required to gather survey data from the two groups, relying instead on the data that are collected on an ongoing basis in a state or school district database.

Loeb offered two examples of important findings resulting from outside researchers' expertise. First, her team used value-added modeling of longitudinal data to estimate the effect of the "highly qualified teacher" requirement on student achievement (Boyd et al., 2008). They found that
FIGURE 3-1 Number of new teachers in New York City by pathway, 2000-2005 (pathways shown: college recommended and other; Teaching Fellows and TFA; temporary license). NOTE: TFA = Teach for America. Source: Boyd et al. (2008: Figure 6).

the largest increases in teacher effectiveness were in low-income schools, where the weakest teachers were eliminated, whereas the policy had little impact on teacher effectiveness in the richer schools. These improvements in teacher qualifications in the poorest schools reduced the gap between rich and poor schools in student achievement by 25 percent (see Figure 3-2).

Second, she described studies by Jacob and Lefgren (2004, 2007) of a policy introduced in Chicago public schools requiring students scoring below a specific cut score on a reading and mathematics test to be retained in grade. The researchers used regression discontinuity analysis to compare quite similar students whose scores fell just below and just above the cutoff score, an improved approach over previous studies, which often simply compared the academic achievement of students who were retained with the achievement of other students who were not. In contrast to previous studies, which generally have found that retention has a negative effect on student achievement, Jacob and Lefgren (2007) found increases in measured academic achievement one year later among students who were retained in third grade. However, in comparison to students not held back, these gains had vanished by the time the students reached sixth grade.

The third benefit of sharing student record data with outside researchers is that researchers' broad perspective allows them to address questions relevant to long-run policy. For example, Boyd et al.
(2005) combined data from college applications to the State University of New York with information from the College Board to describe how close to home teachers
FIGURE 3-2 Effect of all observed teacher qualifications on student mathematics achievement in grades 4 and 5, most affluent and poorest deciles of schools, 2001 and 2005 (axes: average impact on students in standard deviations vs. proportion of teachers; series: rich 2001, poor 2001, rich 2005, poor 2005). Source: Boyd et al. (2008: Figure 8).

tend to teach. They found that most public school teachers in New York take their first public school teaching job very close to their hometowns or to where they attended college. Teacher candidates coming from suburban or rural hometowns strongly prefer to remain in those areas, rather than teach in urban districts. Their findings have particular implications for the long-term policies of urban districts, which are net importers of teachers. The study suggests that urban districts must offer salaries, working conditions, or student populations that are more attractive than those of the surrounding suburban districts in order to attract sufficiently qualified candidates.

The broad perspective that outside researchers bring to education questions is apparent in studies that yield important policy information for more than one education agency. For example, an analysis conducted as part of the study of New York City teachers (Boyd et al., 2005) identified differences in the effectiveness of various teacher education programs, as measured by student achievement. The study also identified features of teacher preparation programs associated with greater gains in student
achievement, such as providing preparation for teaching practice and developing knowledge of content areas. These findings have implications for other organizations, specifically colleges providing teacher education, as well as for the New York City Department of Education.

Despite these important benefits to education agencies that share data, Loeb said, researchers often find it difficult and costly to gain access to education data sets. She asserted that the many local school districts, states, and higher education institutions that are interpreting FERPA lack clarity about how to comply. As a result, people in each organization have to think about compliance before providing access. For example, Loeb said, the research team studying the New York state and city teacher workforces had to obtain approvals for the research from over 20 different state and local education agencies and higher education institutions. Obtaining approval from the institutional review boards at 18 different colleges and universities engaged in teacher preparation was nearly a full-time job for one member of the research team, and compiling the data took all of another researcher's time. The process led to a different data-sharing agreement with each organization. The team has a contract to act as an agent of New York State and has signed memoranda of understanding with many school districts across the country, each of which is slightly different from the others. A few districts do not want a formal memorandum of understanding but require the research team to fill out a form.

This process has "huge time costs," Loeb said, partly because schools and agencies are nervous about complying with FERPA. For example, although her workshop paper (Loeb, 2008) includes sample language from a memorandum of understanding with one school district, most of the school districts were unwilling to publicly share their memoranda because of uncertainty about compliance.
Another result of the process is that researchers must work with incomplete and unrepresentative data, because agencies that do not want to share their data use FERPA as an excuse not to provide them. Even agencies that are willing to share data sometimes do and sometimes do not, depending on how much time they have and whether they know and trust the researchers.

Ultimately, Loeb said, the extent of data sharing depends on researchers' ability to develop trust with individual education officials and analysts, which has benefits. For example, her research on the New York teacher workforce was strengthened by extensive discussions with city and state officials. She explained that she is able to access other data from other school districts because she is part of a group at Stanford that gives executive training to superintendents from around the country about the benefits of sharing data with researchers. The bottom line, Loeb concluded, is that there would be benefits to making FERPA "a little bit more understandable" to school districts and state departments of education.
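The difference-in-differences technique Loeb mentioned earlier is, at its core, a comparison of before-and-after changes between a group exposed to a policy and one that was not. Stripped of the regression machinery that real studies use, the core estimate can be sketched as follows; all of the numbers below are invented for illustration:

```python
def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """Change in the treated group's mean outcome, net of the change the
    control group experienced over the same period (the secular trend)."""
    return (treated_after - treated_before) - (control_after - control_before)

# Invented mean test scores: schools affected by a policy (treated)
# versus comparison schools (control), before and after the change.
effect = diff_in_diff(treated_before=40.0, treated_after=48.0,
                      control_before=60.0, control_after=62.0)
print(effect)  # 6.0: an 8-point gain, minus the 2-point common trend
```

The subtraction of the control group's change is what lets the method separate a policy's effect from trends that would have occurred anyway, which is why it requires exactly the kind of longitudinal, linked records discussed in this chapter.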
She said that the National Center for Education Statistics' data licensing system was a good model for protecting confidentiality while also providing access. Although the protections are much stricter than those included in her team's contracts and memoranda of understanding with other organizations, it is "much, much easier and less time-consuming for us" to comply, because there is a manual to follow.

VALUE OF RESEARCH PARTNERSHIPS

Barbara Schneider complimented the speakers, observing that CALDER was doing "the most important work on the state longitudinal databases that we have" and that no one else was conducting the kinds of analyses of community college education that Bailey's group was undertaking. However, she expressed deep concern about the value and importance of education research. Observing that none of the states has the time or analytic capacity to carry out the types of studies described by the panelists, she said that the real barrier to increased access to state data has been that researchers have gone in, taken a state's data, and then never contacted the state officials again, leading to negative feelings about researchers. Schneider called on researchers to establish a new form of relationship with state education officials, one that includes the ideas not only of the researchers but also of the state and that emphasizes the shared interests of both parties.

Praising Loeb's "spectacular" research, Schneider said its results are important, particularly the finding that, at the high school level, alternatively certified teachers are more effective than traditionally prepared teachers, as measured by student achievement. The real question, she said, is what will happen when Loeb and colleagues publish these findings, which reflect negatively on the traditional teacher education institutions that provided data to Loeb's research team.
She asked how researchers can go back to agencies and institutions with which they have signed memoranda of understanding to discuss findings that may be negative, and whether those agencies or institutions might pressure the researchers not to publish such findings. She asked about the long-term implications, particularly in light of her call for a new relationship between researchers and education agencies.

Loeb responded that she has observed a change in the way her team interacts with New York City school officials. Five years ago, she said, no one wanted to know about any weaknesses in the teacher workforce, but today all school officials want information on their teachers' "fixed effect" on student achievement, are happy to share that information with the city, and want to know how they can do better. She said that, as university-based researchers, her team retains control over the research information
and will publish it. At the same time, however, education officials are likely to recognize errors in the team's data or in its interpretation of the data. Recognizing the value of this expertise, Loeb's team shares draft papers with agency officials, allowing them 30 days to comment, a process she described as "good for them and for us."

Bailey said Schneider had raised a potentially serious issue, as it can create problems if researchers find that an existing policy is ineffective. His team, too, shares draft papers with state officials and discusses the drafts before publication. He said he believes that, in some cases, continued access to data has been limited because of studies reaching negative conclusions and described this as an ongoing problem. However, he emphasized that good access to state education data often is the result of long-term investments by a few states. For example, Florida has held a three-day conference annually for the past 20 years, including all those responsible for sending data into the comprehensive state database, to discuss technical issues. Similarly, the state of Washington has a very good database on community colleges that has been developed with strong political support over 15 years. These states not only have better quality data but also have easier relationships with researchers when discussing such issues as negative findings about education policies. Bailey suggested supporting sustained state efforts like these.

Hannaway noted that, when accepting a grant or contract from an education agency, CALDER always retains the right to publish research results but is flexible about when to publish. The center tries not to blindside education agencies that have provided longitudinal data.
In addition, the researchers try not to prejudge educational programs that are still at an "incubator stage." The researchers take time to develop trust with education agencies and to ensure that they fully understand the policy or program they are investigating.

Helen Ladd (Duke University) said that, although researchers involved in establishing the North Carolina Education Research Data Center developed trusting relationships with the state and school districts, the center now makes the data sets available to outside researchers, both inside and outside North Carolina. This could have drawbacks if an outside researcher conducted a weak study that would put the North Carolina Department of Education on the defensive.

Felice Levine observed that these concerns about publishing negative results, while very important, represent a dimension of conducting responsible, ethical research that is not specific to FERPA. Focusing more specifically on FERPA, Levine observed that access to personally identifiable student record data is often provided without requiring written consent under the law's exception for studies conducted "for, or on behalf of," an education agency. She asked whether contracts between researchers and agencies reflecting this provision of the law
always protect researchers' autonomy to publish their findings. Loeb responded that her team's memoranda of understanding do guarantee the right to publish, as required by the team members' universities. Bailey expressed the view that researchers' right to publish can be guaranteed in the language of the contracts they negotiate, but the real question was whether researchers would be allowed further data access after publishing negative findings about a state or district. His center's researchers have sometimes encountered problems when an individual staff member with whom they have developed a relationship leaves the agency, which has sometimes led to limits on access or lengthy delays before approval of the next data request.

A member of the audience suggested that the take-home message of the panel, including the examples of successful research, was that FERPA and the Common Rule were "really not much of a problem" for researchers. Hannaway disagreed, saying that it was important not to underestimate the costs of obtaining access to these data sets and the "tenuousness" of the relationships researchers had established with states and school districts.

Martin Orland (WestEd) asked whether there were cases in which researchers had tried to gain access to data but FERPA posed a barrier. Hannaway said this had happened in Texas: John Kain, at the University of Texas at Dallas, established relationships and negotiated data-sharing agreements with the state and local school districts, which included confidentiality protections in compliance with FERPA, in the Texas Schools Project. With support from the Spencer Foundation, Kain's team compiled these and other data from multiple sources into a comprehensive longitudinal database with individually linked records on K-12 and higher education and employment outcomes (Kain, 2000).
Analyses using the database yielded important findings about student achievement gaps (Kain and Singleton, 1996) and teachers in Texas (e.g., Hanushek, Kain, and Rivkin, 2004). However, Hannaway said, a change in the state's interpretation of FERPA led it to block access to more recent data (Hanushek, 2007). Although the researchers made several efforts to obtain renewed access, including an appeal to the state legislature, they were unsuccessful. Describing the state's decision as "arbitrary," Hannaway called for central guidance on how to interpret FERPA.

[Footnote: The University of Texas at Dallas has recently established a state-designated Education Research Center in collaboration with the Texas Schools Project. According to its website, the new center will assemble, clean, and document deidentified K-12 and higher education data for analysis by the center and will also facilitate secure use by outside researchers in compliance with FERPA (http://www.utdallas.edu/erc/about/ [accessed July 2008]).]

Marilyn Seastrom said that the value of developing trusting relationships with the states is not unique to outside researchers. National Center for Education Statistics analysts who create the central core data always send the data for each state back to the state where it originated. This is done partly to verify and edit the data, but also so that the state department of education knows in advance what information will be made public. The center has an "elaborate process" of keeping state education officials involved and informed.

Shelly Martinez (Office of Management and Budget) agreed with Seastrom that these issues of data access are not unique to university researchers. She explained that she participates in the Federal Committee on Statistical Methodology, whose members discuss the use and confidentiality of administrative records across all federal agencies. Martinez observed that there are confidentiality laws similar to FERPA in every field, including health care, in which HIPAA protects individual health records. She said that federal statistical agencies need clear guidance on how to interpret these laws across a variety of situations, because federal agencies are often "in just as tenuous a situation as many of you" when they seek access to state or federal administrative data. Based on her monthly discussions with federal analysts studying nutrition, income, and other topics, all of whom face similar challenges, she suggested developing broad, systematic solutions, as well as addressing the more specific data access challenges posed by FERPA.

[Footnote: See http://www.fcsm.gov/committees/cdac/.]

In conclusion, Ladd observed that it is important to remember that access to data is sometimes limited by technical weaknesses in state IT systems, not only by FERPA.