The workshop began with two presentations designed to provide background to the issues and to situate them in the context of prior evidence. The first presentation offered an overview of previous reports published by the National Academies on the general topics of data access and human research protection; the second discussed what is known about the functioning of institutional review boards (IRBs), particularly those involved with the social and behavioral sciences.
Connie Citro, director of the National Research Council’s (NRC’s) Committee on National Statistics, offered an overview of reports published by the NRC and the Institute of Medicine (IOM) that address the general topic of human subjects protection. The NRC and IOM have a long history of attention to this subject, she said, and a major focus of this attention has been the area of privacy, confidentiality, and data access, which is important in social and behavioral research.
One of the earliest studies discussed (National Research Council, 1979) pertained to public opinion about privacy and confidentiality protection and how it influences individuals’ responses to government surveys. Six years later, Sharing Research Data (National Research Council, 1985) addressed the ethical problem of researchers keeping their data to themselves and called for a new approach that emphasized sharing research data. That creates a tension, Citro noted, because “if you share
research data, what happens to confidentiality?” Numerous subsequent reports have presented ways to address that tension and find the proper balance between protecting the confidentiality of the individuals involved in studies and providing access to the studies’ data, which is critical to advancing scientific research. Box S1-1 lists the NRC and IOM reports on the general topic of privacy, confidentiality, and data access.
The 2005 NRC report, Expanding Access to Research Data: Reconciling Risks and Opportunities, is particularly germane, Citro said. It lays out a justification for why research access to rich data is so essential for a healthy social, behavioral, and economic research enterprise. Another report, Putting People on the Map: Protecting Confidentiality with Linked Social-Spatial Data, discusses the challenges posed to confidentiality by data that include details about geographic location, in addition to the usual factors, such as age, sex, occupation, and health (National Research Council, 2007). Another major report that grapples with the issues of confidentiality and access to data is Beyond the HIPAA Privacy Rule: Enhancing Privacy, Improving Health Through Research (Institute of Medicine, 2009).
More challenges to confidentiality are likely in the future, Citro said.
BOX S1-1
National Research Council and Institute of Medicine Reports on Privacy, Confidentiality, and Data Access

1979  Privacy and Confidentiality as Factors in Survey Response
1985  Sharing Research Data
1993  Private Lives and Public Policies: Confidentiality and Accessibility of Government Statistics
2000  Protecting Data Privacy in Health Services Research
2000  Improving Access to and Confidentiality of Research Data: Report of a Workshop
2005  Expanding Access to Research Data: Reconciling Risks and Opportunities
2006  Effect of the HIPAA Privacy Rule on Health Research: Proceedings of a Workshop
2007  Engaging Privacy and Technology in a Digital Age
2007  Putting People on the Map: Protecting Confidentiality with Linked Social-Spatial Data
2009  Beyond the HIPAA Privacy Rule: Enhancing Privacy, Improving Health Through Research
2010  Conducting Biosocial Surveys: Collecting, Storing, Accessing, and Protecting Biospecimens and Biodata

SOURCE: Citro presentation at Workshop on Proposed Revisions to the Common Rule in Relation to the Behavioral and Social Sciences, Washington, DC, March 21, 2013.
“Big data,” the term used to refer to the enormous amounts of data from the Internet and other sources that are threatening to overwhelm current capabilities for analysis, may pose one such challenge. There are already hints of the sorts of capabilities that big data will usher in, such as recent reports of the use of the online social networking service Twitter to predict weekly unemployment claims. “What do data confidentiality and access mean in that context?” Citro asked.
The NRC and IOM have also done two systemwide studies on protection of human research participants, she noted: Responsible Research: A Systems Approach to Protecting Research Participants (Institute of Medicine, 2002) and Protecting Participants and Facilitating Social and Behavioral Sciences Research (National Research Council, 2003). The former was focused mainly on biomedical research, while the latter dealt with the social and behavioral sciences.
A 2010 NRC report, Conducting Biosocial Surveys: Collecting, Storing, Accessing, and Protecting Biospecimens and Biodata, looked at the issues raised when biospecimens are included in social science surveys, Citro noted. Not only does this pose various challenges for social scientists related to the collection, storing, and accessing of the biological samples, it also adds a layer of complexity to the privacy and confidentiality issues. Finally, there have been several reports concerning studies of three special populations: children, students, and prisoners.
Looking across these studies, Citro said, one can discern a number of big-picture lessons. The first is that human subjects protection is an extremely complex topic. It involves three different areas, each quite complex by itself: the biological and psychological characteristics of humans; the physical, social, economic, and technological environment; and the scientific research enterprise, with its research methods, ethical principles, and management structures. Furthermore, these three areas—the human, the environmental, and the research aspects—are constantly evolving, as is our knowledge of them, which means that human subjects protection must also constantly evolve. Citro also noted that mandating a one-size-fits-all approach is likely to have unintended consequences and might actually do harm, either to the research or the participants. The reports also point to the idea that there is often no need to reinvent the wheel, and that available models should be used. “Aristotle’s ‘Golden Mean’ is very applicable,” she added, “although it’s actually hard to implement in a regulatory environment.” The social and behavioral sciences, she concluded, “need to be continually vigilant and proactive to achieve useful improvements in regulations.”
The Changing Environment Related to Data Access and Confidentiality Protection
To understand the evolving challenges related to data access and confidentiality, Citro said, it helps to consider how information technology has changed over the past 50 years. In the 1960s, relatively little information was available in digital form. Most was on paper, such as printed census reports, and digital information was not particularly easy to manipulate. Programming a computer, for example, required the use of a stack of paper punch cards. “So there were less data available, and that actually made it easier to protect,” she said.
In the 1970s and 1980s, the growing power of computers, and the development of personal computers, led to richer datasets and a greater ability to analyze the data, but it also made the data harder to protect. “It was in this era that the government started to get agitated about [privacy and confidentiality],” Citro noted. This concern led to the 1974 Privacy Act and a great deal of discussion about the best ways to protect the privacy of human research subjects and the confidentiality of their data.
By the mid-1990s, the widespread use of the Internet “ballooned the availability of data,” she explained, and made it much harder to protect the confidentiality of individuals. Since then, the availability of biosocial data, geospatial data, linked survey data, and administrative data has increased even more.
With the increasing threats to confidentiality, particularly those posed by the Internet, Citro explained, many data providers, such as statistical agencies, began tightening the rules on access. But with pressure from a variety of sources, the data providers and academia developed new ways and means of access. For example, Citro noted, there are now statistical techniques for synthesizing public-use microdata so that the probability of individuals being identified is very low. There are also a number of research data centers around the country where researchers can work with data onsite. The data are more easily protected because they never leave the center. Another approach has been to license individual researchers, allowing them to work with confidential data as long as they agree to certain strict rules about protecting confidentiality. Researchers increasingly are also using remote monitored access, where they work online with data that are maintained behind a firewall at the agency in which they are housed. Much of the work in developing these new forms of access has been done by federal statistical agencies, but a number of nonprofit data archives have also become involved, such as the Inter-university Consortium for Political and Social Research, headquartered at the University of Michigan, Ann Arbor, and NORC at the University of Chicago.
Implications for IRBs
Based on this overview, Citro offered a number of implications for proposals for revising the Common Rule.
First, she said, although IRBs definitely need better guidance about appropriate levels and methods of confidentiality protection, the approach taken in the Health Insurance Portability and Accountability Act (HIPAA), which has been proposed in the advance notice of proposed rulemaking (ANPRM), is not the right direction to follow. According to IOM reports and other researchers active in this area, HIPAA is outmoded even within its own domain of protecting administrative health records. It overprotects in many ways, Citro suggested, for example by not permitting geographic identification below a state level. It underprotects in other ways, for example, by leaving many rich social and behavioral datasets vulnerable to potential re-identification.
For an alternative to the HIPAA approach, Citro pointed in particular to the recommendations of the 2003 report Protecting Participants and Facilitating Social and Behavioral Sciences Research and of the 2005 report Expanding Access to Research Data: Reconciling Risks and Opportunities. Both recommended exempting secondary research that uses data from established organizations whose confidentiality protection is known and can be certified to be state of the art. Some IRBs, she noted, already do have lists of organizations that can be trusted to maintain the confidentiality of data, but this approach is not yet part of the national Common Rule.
Risks and Harms
The NRC and IOM reports also address possible risks and harms in social and behavioral research. A number of reports have concluded that IRBs do not have sufficient evidence of the risks that participants actually face from participating in surveys and other kinds of research in these fields. Protecting Participants (National Research Council, 2003) recommends that researchers debrief the participants in their studies to learn more about what these participants understand and believe about the risks of participation. Reports have also recommended that the Office for Human Research Protections (OHRP) and other agencies fund research aimed at exploring both perceived risks and actual harms. The goal, Citro said, is to have evidence-based guidance on risk so that IRBs can “hit the Golden Mean between over- and underprotection.”
Even though consent forms have been revised regularly over the years, Citro said, some observers have suggested that the changes often have not improved matters and have taken up valuable time that IRBs could have devoted to other tasks. Another critique is that in many cases obtaining written informed consent is not necessary and, moreover, that insistence on written consent may be excessive and counterproductive, because it discourages participation.
The ANPRM includes proposals on obtaining re-consent (new consent for new uses of previously collected data). In Citro’s opinion, the proposed new rule is not clear. “If people have already consented for research, it certainly doesn’t help research, and it doesn’t benefit the participants unless that original consent was very limited and specific.”
However, she said, getting consent for administrative records in research is another matter. The 2003 report urged that those collecting the data should attempt to get consent from individuals for the use of their data for research purposes at the time the data are collected.
On the issue of exactly what consent forms should look like, Citro said that the typical advice has been to supply guidance instead of hard-and-fast rules. Protecting Participants (National Research Council, 2003) calls for agencies to supply detailed examples of consent guidance, which can help IRBs avoid both over- and underprotection. “The problem here,” she continued, “is that regulatory bodies, including their legal counsel, are often uncomfortable with ambiguity. They want something clear-cut.”
One problem facing those who attempt to improve the performance of IRBs is that there is little evidence on how they really function. Thus, Protecting Participants (National Research Council, 2003) called on OHRP to request yearly information from IRBs on their operating procedures and outcomes, including such things as the percentages of studies that are exempted, given expedited review, or subjected to a full review. A further recommendation was that federal agencies should fund in-depth research into the functioning of IRBs that could be used in the development of performance guidelines.
The 2003 report also offered a useful framework for thinking about IRBs, Citro said. It likened their reviews of research protocols to manufacturing production processes. In carrying out such processes, she said, one goal is to allow for appropriate variation while at the same time minimizing the extremes. Thinking of IRBs in these terms might help the workshop participants frame their discussions, she suggested.
Challenges for Social, Behavioral, and Educational Researchers
Citro concluded by discussing a number of challenges facing
researchers in the social and behavioral sciences. This family of disciplines has often seemed to be a stepchild in the context of human subjects protection because the focus has been on the protection of human subjects in biomedical research. In 1974, when the Department of Health, Education, and Welfare released regulations that were the precursor to the Common Rule, and again in 1979, when the proposed revisions to those regulations were released, the social and behavioral research community reacted vigorously, she explained. Many researchers saw the regulations as focused almost exclusively on the concerns of the biomedical research community and worried that they would make it much more difficult to carry out social and behavioral studies—yet without improving the protection of research subjects in those fields. Some of the responses were extreme, Citro noted—some researchers suggested that social and behavioral studies did not require any oversight—but the response did lead to the creation of categories for exempt and expedited research, even if some IRBs have been slow to use those categories. The social and behavioral community also succeeded in having a number of types of studies added to those categories in a 1998 revision of the Common Rule.
Some of the IRB-related problems facing researchers in social and behavioral fields can be traced to major issues with biomedical research, including the deaths of some participants in the late 1990s, which led to the establishment of the OHRP and strengthening of protections for human research subjects. These developments led to tightened scrutiny by IRBs, which in turn led social and behavioral researchers to become even more frustrated with what they perceived as a one-size-fits-all approach to human subjects protection, Citro explained. After the 2002 IOM report, Responsible Research: A Systems Approach to Protecting Research Participants, which focused on biomedical issues, the NRC produced Protecting Participants and Facilitating Social and Behavioral Sciences Research in 2003 so that social and behavioral issues would not get overlooked.
In Citro’s opinion, the ANPRM is well intentioned with regard to social and behavioral research and offers some good ideas, but is not completely adequate. “An important lack,” she said, “is that it does not reflect the hard-won knowledge we have gained in such areas as the continuing balancing act on confidentiality protection and data access, the need for guidelines for informed consent, and, in general, the role that detailed, evidence-based guidance and guidelines and effective training … could play instead of hard rules.” Hard rules, she added, can “both underprotect and overprotect.”
In conclusion, Citro observed that, “based on past experience this workshop cannot be the be-all and the end-all. You can’t just have a good discussion and go away, satisfied that everything is okay. You’re going to need to work hard to push and continually be vigilant in trying to
improve the Common Rule and the IRB implementation to appropriately protect participants and facilitate research.”
The next presenter was Jeffery Rodamar, of the Department of Education, who noted that the views presented in his talk were his own and did not necessarily reflect the official position of the department. He explained that his perspective on IRBs has been shaped by a variety of experiences throughout his career. He has been a regulator working daily with other regulators, researchers, IRBs, program staff, and study subjects; has had experience working with other agencies and dealing with inter-agency issues; and has been a legislative staffer for a committee dealing with education and labor issues. He has also been a social and behavioral researcher, a consumer of research, an advocate of evidence-based practice, and a long-time critic of IRBs and regulations that needlessly hinder rigorous research.
Rodamar began by listing a number of common complaints about IRBs, including that they take too long and that the delays harm studies; that they cost too much; that reviews are often flawed; and that they undermine research with burdensome requirements. Indeed, there are many strong opinions about the IRB system, Rodamar said, but unfortunately there is relatively little hard evidence about how well IRBs function and what researchers’ experiences with them are really like.
Data on IRB Performance
To illustrate, Rodamar listed a number of the common criticisms about IRBs and analyzed the evidence supporting them. For example, a study by the Federal Demonstration Partnership (Decker et al., 2007; Rockwell, 2009) on the burden created by IRBs reported that principal investigators on federal grants estimated that they spent about 42 percent of their time on “administrative burden” and that of the 24 administrative tasks in the survey, researchers reported that IRB-related tasks account for the largest burden. While the study, which had 6,295 respondents, may look convincing, there are a number of reasons for caution, Rodamar said. There was only a 26 percent response rate, for example, so the results may have been skewed, with researchers unhappy with IRBs being more likely to respond. The burden score for IRBs was quite similar to the scores for other common administrative tasks, such as applying for grants and hiring and training staff. Thus, Rodamar explained, the IRB burden does not seem to be significantly different from that of many other tasks that
scientists take for granted as part of the process of carrying out funded research.
There are several studies of the costs to an institution of operating an IRB,1 Rodamar said, although there are not many on how much IRB review costs a researcher. A 2003 study of a hypothetical cancer research project found that institutions reported that about 1.4 percent of the total time to conduct a study funded by the National Institutes of Health (NIH) would be spent on all of the activities—including IRB review—that took place prior to actually beginning the research (Emanuel et al., 2003). The figure for a similar study funded by the pharmaceutical industry (Emanuel et al., 2003) showed that about 2.9 percent of the total study time was devoted to activities that took place prior to research. The percentage of total costs devoted to activities that took place before the start of research was in line with these figures.
To explore how long it takes to get IRB approval, Rodamar combined data from two studies, one done by the Association for the Accreditation of Human Research Protection Programs (AAHRPP) (2011) and one performed by IRBNet (2011). He concluded that for an expedited review it takes an average of 14.8 days to have a protocol reviewed and an average of 27.6 days to gain approval. For a full review, it takes an average of 23.3 days for the review to be carried out and 48.1 days to gain approval. Because IRBs typically meet every 30 days, he observed, those times seem relatively prompt.
Rodamar noted, however, that there are reports of studies that have taken many months or even years to be approved. These delays might be caused by the IRBs, he observed, or by the researchers applying for approval; better data are needed to explain such delays. There are very few studies that are not eventually approved by their IRBs. AAHRPP data for 2011 showed that 60 percent of IRBs had not rejected a single study and that about another 20 percent had rejected only one study.
A variety of factors influence the length of time it takes an IRB to review a proposal, Rodamar explained. One factor is the complexity of the proposal. For instance, the University of Nebraska reported that in 2008 the average length of time to approve simpler protocols was 18 to 24 days, while the more complex protocols took an average of 63 days (University of Nebraska–Lincoln, 2013). IRBNet reports that roughly a third of the time required for IRB approvals is accounted for by researchers’ omissions and errors in providing information needed for review and responding to IRB comments, he noted (IRBNet, 2011). Case studies indicate that relatively small changes in IRB operations, and in training and support for researchers, can result in substantial reductions in the time required to receive IRB approval, he added (Rosenberg, 2011; University of Nebraska–Lincoln, 2013).

1Several studies have been conducted of the cost to a university or other entity of operating an IRB. For example, Jeremy Sugarman and colleagues (2005) published a survey of 121 U.S. medical schools (63 responded) that showed that those academic centers spent an average of $750,000 (ranging from $400,000 to $1.15 million). Costs ranged from $400 per study review at high-volume medical centers to $600 at low-volume centers; see also Wagner et al. (2010).
Researchers’ Attitudes Toward IRBs
Rodamar also discussed the attitudes of social and behavioral researchers toward IRBs. There is a widespread impression that researchers in these fields see IRBs as more a hindrance than a help, he noted, but studies of the issue do not support this perception. One study asked researchers to rate their own IRBs versus an ideal IRB on a variety of elements, such as “timely review” and “competency in distinguishing exempt from nonexempt research,” using a seven-point scale. The gap between ratings for real and ideal IRBs was not particularly large, Rodamar noted, always less than one point. A 2012 survey found that the attitudes of social and behavioral researchers toward IRBs were not significantly different from the attitudes of biomedical researchers (DeVries et al., 2006; Pennell and Lepkowski, 2010).
Rodamar also explored the perception that social and behavioral research is closely regulated despite the fact that most of it poses little risk to the participants. To the contrary, he said, most such research does not require IRB review under the Common Rule for one or more of the following reasons: (1) it is not “human subjects research” as defined by the regulation, (2) it is not funded by a covered source, or (3) it falls under one or more of the exemptions. For example, Rodamar said, in 2009, review by the full IRB was required for only about 4 percent of social and behavioral studies conducted at the University of Michigan (Kim, 2009). Of the social and behavioral studies submitted to the University of Michigan IRB, 35 percent were deemed to be exempt, and another 61 percent were given expedited review.
Rodamar next discussed data on enforcement, noting that, “we hear a lot about how burdensome the feds are.” A review of new compliance oversight cases initiated by OHRP between 1990 and 2011 (Borror, 2012) shows that the highest number of new cases in any one year was 91. From 2008 to the most recent year for which data are available, the number of new cases never exceeded eight per year. These data cover thousands of studies in the United States and many more outside the United States, Rodamar noted. Most cases are handled through phone calls, e-mails, and letters, and most investigations center on informed consent forms, IRB meeting minutes, and other procedural issues, he added.
Perhaps because of this focus on procedural issues, little is actually known about the effectiveness of IRBs in protecting research subjects, Rodamar said. While there are few reports of problems, it is not clear whether this is because there are no problems or because the problems simply do not get reported, he noted, adding that in this area also, better data are needed.
Risks Posed by Social and Behavioral Research
As an example of the sorts of social and behavioral research that can lead to serious harms to the participants, Rodamar mentioned the work of a sociology graduate student at the University of Chicago who studied people living in Chicago public housing (Venkatesh, 2008). Venkatesh learned how much various individuals were earning from such illegal activities as selling drugs, prostitution, and stealing cars. “In the course of his study, he disclosed some of this information to the people in the gangs,” Rodamar said. “Soon you had people who provided the information being seriously beat up, and various other harms coming to them.” In another example, Rodamar noted, Fulbright scholar Alexander van Schaick was asked by a U.S. embassy official to provide information on any Venezuelan or Cuban doctors or field workers that he encountered during his studies in Bolivia (Zwerling, 2011). The information requested included names, addresses, and activities. Sharing this sort of information could have put the foreign workers—or van Schaick—at risk, Rodamar said. Thus, he added, while much social and behavioral research poses minimal risks for participants, there are reasons for caution.
Rodamar also noted that the threat to the confidentiality of individuals included in large datasets is real. One study has found, for example, that using just three facts—birth date, sex, and five-digit zip code—it is possible to identify 87 percent of the U.S. population (Sweeney, 2000). Educational datasets offer particular challenges, he added, because “education research often involves small samples, with longitudinal data, of students in classrooms in schools that may be linkable to external data sources to re-identify subjects.” Perhaps even more worrisome are two recent articles in Science, Rodamar observed, which reported that researchers were able to figure out the identities of DNA donors (Bohannon, 2013; Gymrek et al., 2013). There are a rapidly growing number of linkable datasets that can be used to re-identify genetic and other data, Rodamar noted.
These risks are important to many people, Rodamar said. Studies have shown that people are less willing to participate in studies when they are told about the risks to confidentiality (Singer, 2011). People also worry that details about their behaviors—including online behaviors—might be made available to various entities without their consent.
Over time, the response rates to surveys in the United States have been dropping steadily. “Clearly this is not all happening due to IRBs, but we have to worry about the impacts that IRBs can have in amplifying things that are going on anyway,” Rodamar noted. On the other hand, he added, IRBs can play a positive role by, for example, lending credibility to studies in the eyes of the potential participants, which can benefit study recruitment, and also by helping to avoid problems with studies that can lead to greater mistrust and increased regulation of research.
Rodamar concluded with the observation that while the Common Rule is not perfect, it is arguably a notable success story. Since its adoption a quarter of a century ago, the quantity and quality of research have continued to increase and there have been few reports of serious problems or unethical research in the United States. There do remain some concerns about IRBs. Some worry that they are too influential in setting the research agenda, for example, or that they have created a certain amount of self-censorship among scientists. It has also been suggested that IRBs favor more traditional approaches over studies that introduce new research methods or that challenge existing paradigms.
To resolve uncertainties over the roles that IRBs play, Rodamar said, it will be important to collect evidence on their functioning and effects, with the ultimate goal of making decisions about them based on systematic evidence rather than on anecdotes and small-scale studies. There is movement toward monitoring the effects of regulations and on creating and modifying regulations based on evidence, he said, and it is “a wave we can ride” (Greenstone, 2009; Executive Office of the President, 2012).
Association for the Accreditation of Human Research Protection Programs. (2011). 2011 metrics on Human Research Protection Program performance. Available: https://admin.share.aahrpp.org/Website%20Documents/2011_%20Metrics_%20on_%20Human_Research_%20Protection_Performance.PDF [June 2013].
Bohannon, J. (2013). Genealogy databases enable naming of anonymous DNA donors. Science 339:262.
Borror, K.C. (2012). When the feds come a-knockin’: How to prepare for an OHRP evaluation of your program. Webinar, Feb. 23. Available: http://www.hhs.gov/ohrp/education/training/ded_webinar.html [June 2013].
Citro, C. (2013, March). Review of the evidence: Previous reports by the National Academies. Presentation to the National Research Council Workshop on Proposed Revisions to the Common Rule in Relation to the Behavioral and Social Sciences, Washington, DC. Available: http://www.tvworldwide.com/events/nas/130321/# [June 2013].
Decker, R.S., L. Wimsatt, A.G. Trice, and J.A. Konstan. (2007). A profile of federal-grant administrative burden among federal demonstration partnership faculty. Available: http://www.iscintelligence.com/archivos_subidos/usfacultyburden_5.pdf [May 2013].
DeVries, R., M.S. Anderson, and B.C. Martinson. (2006). Normal misbehavior: Scientists talk about the ethics of research. Journal of Empirical Research on Human Research Ethics 1(1):43–50.
Emanuel, E.J., L.E. Schnipper, D.Y. Kamin, J. Levinson, and A.S. Lichter. (2003). The costs of conducting clinical research. Journal of Clinical Oncology 21(22):4145–4150.
Executive Office of the President. (2012). Smarter regulations through retrospective review. Available: http://www.whitehouse.gov/sites/default/files/lookback_report_rev_final.pdf [May 2013].
Greenstone, M. (2009). Toward a culture of persistent regulatory experimentation and evaluation. In D. Moss and J. Cisternino, Eds., New perspectives on regulation (pp. 111–125). Cambridge, MA: The Tobin Project.
Gymrek, M., A.L. McGuire, D. Golan, E. Halperin, and Y. Erlich. (2013). Identifying personal genomes by surname inference. Science 339:321–324.
Institute of Medicine. (2002). Responsible research: A systems approach to protecting research participants. Committee on Assessing the System for Protecting Human Research Participants. D.D. Federman, K.E. Hanna, and L. Lyman Rodriguez, Eds. Board on Health Sciences Policy. Washington, DC: The National Academies Press.
Institute of Medicine. (2009). Beyond the HIPAA privacy rule: Enhancing privacy, improving health through research. Committee on Health Research and the Privacy of Health Information: The HIPAA Privacy Rule. S.J. Nass, L.A. Levit, and L.O. Gostin, Eds. Board on Health Sciences Policy and Board on Health Care Services. Washington, DC: The National Academies Press.
IRBNet. (2011). National Research Network: 2010 benchmark report. Available: http://irbnetresources.org/news/benchmark.html [May 2013].
Kim, S. (2009). IRB oversight of minimal risk research: Are we seeing the big picture? Presentation at the Office for Human Research Protections Community Forum on Reducing Regulatory Burden: Real Strategies for Real Change, Ann Arbor, MI, May 14. Available: http://www.hrpp.umich.edu/education/OHRP2009/presentations/minimal-risk.pdf [June 2013].
National Research Council. (1979). Privacy and confidentiality as factors in survey response. Panel on Privacy and Confidentiality as Factors in Survey Response, Committee on National Statistics. Assembly of Behavioral and Social Sciences. Washington, DC: National Academy Press.
National Research Council. (1985). Sharing research data. Committee on National Statistics. S.E. Fienberg, M.E. Martin, and M.L. Straf, Eds. Commission on Behavioral and Social Sciences and Education. Washington, DC: National Academy Press.
National Research Council. (2003). Protecting participants and facilitating social and behavioral sciences research. Panel on Institutional Review Boards, Surveys, and Social Science Research. C.F. Citro, D.R. Ilgen, and C.B. Marrett, Eds. Committee on National Statistics and Board on Behavioral, Cognitive, and Sensory Sciences. Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
National Research Council. (2005). Expanding access to research data: Reconciling risks and opportunities. Panel on Data Access for Research Purposes, Committee on National Statistics, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
National Research Council. (2007). Putting people on the map: Protecting confidentiality with linked social-spatial data. Panel on Confidentiality Issues Arising from the Integration of Remotely Sensed and Self-Identifying Data. M.P. Gutmann and P.C. Stern, Eds. Committee on the Human Dimensions of Global Change, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
National Research Council. (2010). Conducting biosocial surveys: Collecting, storing, accessing, and protecting biospecimens and biodata. Panel on Collecting, Storing, Accessing, and Protecting Biological Specimens and Biodata in Social Surveys. R.M. Hauser, M. Weinstein, R. Pool, and B. Cohen, Eds. Committee on National Statistics and Committee on Population, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Pennell, S., and J. Lepkowski. (2010). 2009 Follow-up survey of investigator experiences in human research. Available: http://www.src.isr.umich.edu/docs/Survey_of_Investigator_Experiences_in_Human_Research_2009_Follow_Up.pdf [May 2013].
Rockwell, S. (2009). The FDP Faculty Burden Survey. Research Management Review 16(2):29–44. Available: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2887040 [May 2013].
Rosenberg, R. (2011). Streamlining IRB approval process yields dramatic results in turnaround time for University of Maryland. CenterWatch, April 25. Available: http://www.huronconsultinggroup.com/Insights/Perspective/Life_Sciences/~/media/Insights-Media-Content/streamlining-irb-approval.pdf [May 2013].
Singer, E. (2011). Toward a benefit-cost theory of survey participation: Evidence, further tests, and implications. Journal of Official Statistics 27(2):379–392.
Sugarman, J., K. Getz, J.L. Speckman, et al. (2005). The cost of institutional review boards in academic medical centers. New England Journal of Medicine 352(17):1825–1827.
Sweeney, L. (2000). Simple demographics often identify people uniquely. Carnegie Mellon University, Data Privacy Working Paper 3, Pittsburgh, PA.
University of Nebraska–Lincoln. (2013). IRB frequently asked questions. Available: http://research.unl.edu/orr/irbfaq.shtml [May 2013].
Venkatesh, S. (2008). Gang leader for a day: A rogue sociologist takes to the streets. New York: Penguin Press.
Wagner, T.H., C. Murray, J. Goldberg, J.M. Adler, and J. Abrams. (2010). Costs and benefits of the National Cancer Institute central institutional review board. Journal of Clinical Oncology 28(4):662–666.
Zwerling, P. (Ed.). (2011). The CIA on campus: Essays on academic freedom and the national security state. Jefferson, NC: McFarland.