3
Draft Agency-Specific Guidelines

The following section includes descriptions of some preliminary draft guidelines developed by selected agencies, followed by several critiques by representatives of scientific organizations. All presentations were made at the third workshop on May 30, 2002.

SCOPE AND COVERAGE OF THE GUIDELINES

Department of Commerce

Lisa K. Westerback of the Department of Commerce said that Commerce had decided to apply broad “umbrella guidelines” to the agency as a whole, and develop specific guidelines or standards for each operating unit. The reason she cited was the “diversity of operating unit missions.” The process was led by the chief information officer (CIO) and supported by a cross-department team. The CIO was to file an annual agency report on data quality to the OMB, while the operating units were to publish their own reports.

The agency also designed agency-wide standards for data quality and asked individual units to “adopt or adapt” these standards “where it makes sense,” including statements on disclaimers, utility, integrity, and administrative mechanisms for corrections. Eventually, Commerce will use a single department standard for financial information, noted Dr. Westerback.

Dr. Westerback concluded that Commerce “is an information agency,” and that “quality is already a hallmark of our information products. We didn’t need this [process], but we welcome the opportunity to document it all.”

Department of Health and Human Services (DHHS)

Mr. James Scanlon said that DHHS had used a process similar to that of the Department of Commerce, publishing its guidelines in two parts: a series of guidelines for the entire agency, and a draft or template to be adopted or adapted by each of the operating agencies and offices.

To create these guidelines, HHS assembled a data quality working group under its Data Policy Council. This Council is responsible for overseeing the dissemination of substantive information by the agency, including:

  • results of scientific research studies;

  • statistical and analytic studies and products;

  • programmatic and regulatory information, including program evaluations;

  • public health surveillance, epidemiological and risk assessment studies and information; and

  • authoritative health, medical, and safety information initiated or sponsored by HHS.

The guidelines apply only to information initiated or sponsored by HHS and bearing its imprimatur. They do not apply to extramural research, where dissemination is the responsibility of the investigator, or to intramural research published independently by the investigator. Mr. Scanlon further noted that “information” was defined as “any communication or representation of facts or knowledge, in any medium or form.” Information does not include:

  • distribution limited to government employees, contractors, or grantees;

  • opinions;

  • intra- or interagency use or sharing of information;

  • responses to FOIA, FACA, or the Privacy Act;

  • hyperlinks to data disseminated by others; and

  • correspondence limited to individuals, press releases, archival records, subpoenas, or judicial proceedings.

DISCUSSION

Professor Morrison said that there might be some confusion about what started out to be an OMB exemption for press releases, on the assumption that they constitute opinions rather than facts. He noted, however, that “we think that ‘x is a carcinogen’ sounds very similar to ‘x is a carcinogen.’ ” He advised caution when seeking a broad exemption for any kind of information, including press releases, testimony to Congress, submissions to states, or other communications.

A questioner asked for the rationale for issuing a disclaimer for information that is not disseminated. Dr. Westerback said that even within her department there were different policies regarding dissemination. When the Bureau of Economic Analysis (BEA) publishes income and productivity information, such data are considered disseminated. If a BEA economist publishes a paper under his or her own name, however, it does not necessarily represent the view of the agency and is not considered disseminated. The policy of the National Institute of Standards and Technology (NIST), by contrast, is that information published by one of its scientists represents the views of the agency and is considered disseminated.

Mr. Scanlon said that the presentation of papers by DHHS scientists is one of the ways the agencies regularly disseminate information.

Robert Ashby of the Department of Transportation said that for testimony to Congress, the political process already deals efficiently with inaccurate data. It would be superfluous to layer another procedural framework on this process through the Data Quality Act.

Professor Morrison said it seemed clear that congressional testimony is dissemination. A concern is that agency people called to testify on short notice may not have time to review the quality of their data.

A questioner asked whether a study concerning a controversial issue could be challenged before the study or the rule-making processes were complete. Dr. Westerback responded that the new law would not remove the regular rule-making process, which would remain the first priority.

A final question concerned whether the new law meant that an agency would be required to disseminate information that it was not otherwise planning to disseminate. Professor Morrison said that the guidelines did not appear to mean this or to apply to data an agency never planned to disseminate.

CORRECTION AND APPEALS PROCESS

Department of Transportation

Mr. Robert Ashby gave the results of a brief, informal survey he had conducted of other agencies:

  • Most agencies will require those who request a correction to fill in a standard fact sheet, including name, reason for the request, the way in which the person was affected, and so on.

  • Time frame: Most agencies will allow 30 to 90 days to file a request for correction, with 45 to 60 days the most common time limit. Agencies will allow themselves 45 to 60 days to consider their response. The appeals process would use the same time frame.

  • Who is an affected person? According to most agencies, it is a person who can show that he or she is harmed in some way, or would benefit from a correction of the information. One agency offered more detail: “someone who has suffered injury to a legally protected interest, who can show a causal connection between agency action and injury, and who can show that correction will correct the injury.” Thus an agency might determine that one person was not affected by a given ruling and would not receive a response, while another person, deemed affected by the same ruling, would receive one.

  • The issue of ‘filters’: The following conditions might filter out a response: the request does not pertain to disseminated information or “information” at all; the request is frivolous, trivial, or made in bad faith; the request is duplicative (e.g., one of many form letters, to which only one response is needed); it does not “state a claim”; or it disrupts agency operations.

  • Who responds to the request? Many agencies said this would be the head of the unit that originally issued the information. In rule making, some agencies specifically said that a request for correction would be answered in the final rule or document rather than in a separate corrections process. The purpose is to avoid creating another layer on an already existing process.

  • Is correction required? The agency may agree that some information could be improved, but would correct it only if it would serve a “useful purpose.” Some corrections would require significant resources or might not advance the material interest of the public or the requester. “You don’t want the correction process driving your budget,” said Mr. Ashby.

  • What is the standard for accepting an appeal? The statute specifies cases where information is not within an acceptable degree of error or precision. This loose definition has not yet been narrowed or tested.

  • Who decides whether to respond to a request for an appeal? Many agencies said that this should be someone different from the person who received the original request for correction: possibly an associate administrator or an executive panel, probably of three people in order to maximize both expertise and objectivity.

Department of Education

Dr. Marilyn Seastrom said that the Department of Education had maintained written standards for information quality since 1992. In accordance with the OMB guidelines, they were adding an appeals process.

The correction process will begin with a consultation with a contact person for the “information product,” which in some cases “may do away with need for correction.” The next step would be a request for a correction by an affected person who thinks a product does not meet the guidelines. To make a request, a person would have to provide personal identification; describe the information (name, office/author, specific item); explain the potential impact of the error and what benefit would be achieved by correction; and give reasons for the request (including elements of the guidelines not followed).

The request will then be reviewed for clarity and completeness. If the elements are in order, the request will be forwarded to the appropriate program office.

During a 60-day response period, the program office may issue a request for clarification; an explanation of why the request is rejected; the findings of a review; or a statement that more time is needed. The findings will include a description of the results and what level of correction will be made.

For appeals, a requester must submit an appeals package within 30 days following receipt of the official response. The appeal request will go one level higher, to the CIO. The CIO has another 60-day period for response, which will be either an answer to the request or a statement that more time is needed.

Dr. Seastrom acknowledged that this process is still at the theoretical stage, and that the real test will be “how we [the Department of Education] end up operationalizing it, and work with program offices to process it.”

Environmental Protection Agency

Ms. Barbara Pace noted that the guidelines were intended to provide guidance, not rules, and that the corrections process had been built on an existing process. Requests for corrections would be received and tracked by EPA’s Office of Environmental Information.

The agency was still weighing the issue of information received from external sources (such as grantees), which she acknowledged to be “a tough one,” and whether there should be time limits for corrections and appeals.

Requests for appeals would be received by the assistant administrator in charge of the program in question, who would make a decision with the help of an executive panel.

The agency will have a mechanism to filter out requests deemed to be frivolous or otherwise ineligible, and this, along with other features, would follow the notice and comment system already in place. “A separate appeals process isn’t really necessary,” she said, adding that while some people had expressed a desire for one, they had offered few reasons why or how it would work. She said that the agency viewed it “as very difficult to establish a separate mechanism” for appeals.

An important consideration, she said, was to balance priorities and resources in meeting requests to correct completed products. “We may elect not to correct such things,” she said. “Also, if we don’t have a lot of detail on affected persons, it’s not clear how we’ll use that as a screening mechanism.”

In commenting on reviewability of complaints by the courts, she suggested that it was not clear why the issuance of guidelines should change the existing landscape for judicial review. She said that the courts already take into account several factors, including standing, the nature of the action, and whether an action is final. Under existing law, the dissemination of information is not a reviewable action. “It is not clear that this would change the legal landscape,” she said.

She concluded by saying that the variability among agency processes is not necessarily bad. Each agency would be expected to tailor its corrections process to its particular mission.

DISCUSSION

Mr. Anderson addressed the mechanism of answering requests for correction in the final rule. He suggested that seeking earlier opportunities to exchange information would provide “an opportunity to test it,” which might be preferable to a potentially lengthy challenge process.

Mr. Ashby responded that agencies would still respond “to legitimate questions about data that will be used for rule making. It’s just doing our job right. We’ll go out and try to fix it.”

Ms. Pace said that the testing of data is already built into the EPA’s notice and comment process, during which it issues notices of “data availability.”

Laura Cleary of Public Citizen asked what sort of administrative review would be required of agencies. Mr. Ashby replied that agencies may not have an obligation to correct certain information, such as “expensive” information, but suggested that there should be a “proportionality” in the appeals mechanism. That is, corrections deemed reasonable would be made, while those requiring significant resources might have to be reviewed individually.

A representative of the Natural Resources Defense Council asked about the EPA’s intention to weave an appeal into its notice and comment process, and whether that would comply with the intentions of the Act. Mr. Ashby commented that the rules for handling an appeal through the Data Quality Act would be similar to those for handling an appeal through the notice and comment process. For example, if a complaint were filed after the comment period expired, it would be filtered out. There already exist ways for outside parties to petition for reconsideration or amendment of a rule if, for example, it is based on inaccurate information. That can be processed as a petition to reconsider the rule itself. There also would be gray areas, Mr. Ashby said, in the case of information that is important to one or a few individuals but has little bearing on the validity of the rule.

One participant said that in the experience of his organization, the EPA may propose a rule and operate under it for a year or more before it becomes final. During the period before it is finalized, he said, the rule might cause harm if it were based on faulty data, and yet there was no appeal before finalization. Ms. Pace agreed that a proposed rule may be “out there for a while,” but that she felt confident the existing appeals process would cover any valid complaints. Mr. Ashby said that a rule that had not yet been finalized would still be subject to the OMB guidelines, and that the only questions were the timing and process for responding.

SUBSTANTIVE ISSUES: “INFLUENTIAL” AND “QUALITY”

Department of Transportation

Mr. Robert Ashby noted the difficulty in determining whether information is “influential,” which the OMB guidelines define as having a “clear and substantial impact” on decisions. Giving the results of his informal survey, Mr. Ashby said that the EPA had addressed this issue directly, listing categories of the most important agency actions, including those whose economic impact could be $100 million or more or that constituted the basis for new or revised policy. According to Mr. Ashby, some agencies had not attempted to define “influential.” Most assumed that certain kinds of information would be influential, given the mission of their particular agency. For example, the Department of Labor said that the Consumer Price Index and Producer Price Index were inherently influential. The State Department said “influential” information was a “narrow category” focused on “objective and quantifiable information forming the basis for policy decisions by the department.”

For the DOT, Mr. Ashby said, a “clear and substantial” impact would be one that “the agency thinks has a high probability of occurring.” He used the “clear and convincing” evidence standard as an analogy, which is “a little more than a preponderance,” adding, “You want more than that.”

Mr. Ashby noted that virtually every decision made by an agency is “important to someone.” He cited the example of rust standards that are set for highway bridge construction; such standards are of critical importance to members of an organization called the National Hot Dip Galvanizers Association, but to few other people. But for information to be “influential,” it should be “outcome-determinative” for significant public- or private-sector rule-making, and must also concern scientific, statistical, or financial information, as described by OMB.

Beyond rule making, Mr. Ashby suggested that information will probably be considered influential if its effect is both broad and deep. For example, new standards for mammography would probably be influential because they would affect a great many people (breadth) for a compelling reason (depth). On the other hand, most decisions are either broad or deep, but not both. For example, the DOT’s quarterly reports on the on-time performance of airlines do not affect many individuals, although the companies affected place great importance on the results.

“Most of these will be judgment calls” by each agency, Mr. Ashby said, and some people will always disagree with those calls. As an aside, he noted that in its comment letter, the Chamber of Commerce took the position that all information pertaining to rule making should be considered influential, while the American Bar Association objected to the use of an arbitrary line of $100 million in economic impact to determine influential information.

RISK ASSESSMENT

Food and Drug Administration

Dr. Jane Axelrad addressed the use of the Safe Drinking Water Act (SDWA) as a model for risk assessment in issues where safety is a central concern. She said that FDA strongly supports the OMB guidelines, which have “enough inherent flexibility” to allow the agency to implement them in ways that are helpful to its mission.

Dr. Axelrad suggested that each agency would likely use its own template for risk assessment, tailored to its particular needs. The FDA, for example, must use qualitative judgments and balance risks against benefits in regulating the manufacture and use of drugs, cosmetics, animal feed, and other products. The agency also must decide what information ought to be included in drug labeling and how that information should be organized. Such actions do not lend themselves to the quantitative risk assessment used in SDWA, and it may be difficult to prove that the information is of high quality. In a risk/benefit environment, the use of peer review, as modeled in SDWA standards, is problematic, both because there may be no single “right” answer to an evaluation and because much of the information used by FDA is received from third parties that prohibit data sharing for proprietary reasons.

As a result, the FDA will adapt SDWA to meet its special needs by using the following criteria for product approvals and other kinds of qualitative risk assessments:

  • Use the best available science and supporting studies, including peer review when possible;

  • Use data collected by accepted methods; and

  • Ensure that information disseminated publicly about risks is clear.

The agency will use the following criteria for risk assessments that can be quantitative:

  • The three criteria listed above;

  • State appropriate upper-bound and/or lower-bound risk estimates;

  • Identify data gaps and other significant uncertainties;

  • Identify studies that would assist in reducing data gaps and uncertainties; and

  • Identify additional studies that support or fail to support the findings of the assessment and explain why they were not used.

Dr. Axelrad concluded by saying that the FDA had received few comments about its proposed guidelines.

DISCUSSION

Mr. Goldberg of the Mitre Corporation asked how agencies should handle intramural data created for their own purposes but later expected to become influential. Dr. Axelrad said that FDA had not yet confronted that situation, but that influential information would usually be identified when it was prepared for dissemination. She said that staff in program offices will have to be aware of information that might be influential at a later date.

Frederick Anderson returned to the issue of data quality, asking whether it was not better to “clear up questions about data” before the potentially lengthy process of rule making. “Unless you seek opportunities to exchange data,” he said, “you’re missing opportunities to test it.” He asked whether it would be a good idea to have a pre-rule data identification and challenge process. Dr. Axelrad said that an agency already will respond to legitimate questions about data that will be used for rule making. Ms. Pace added that the EPA already does this when it issues notices of data availability. “Anything we do would have to go into the notice and comment process anyway,” she said. Mr. Ashby said that the OMB guidelines were “really an embellishment on the original statute.” On the low end, a properly framed appeal for information might function as a due diligence check—that is, did the agency in fact follow proper procedures in producing the information. For more serious appeals, the level of review would rise. “We suggest that there be a built-in proportionality,” Mr. Ashby added.

Evaluation of Several Agency Guidelines

Representatives of several scientific societies offered brief evaluations of the guidelines of agencies they followed or worked with.

Federation of American Societies for Experimental Biology (FASEB)

Dr. Howard Garrison offered an assessment of the draft guidelines adopted by two agencies, the NIH and NSF. Both, he said, “responded responsibly,” while showing “striking differences” because of their different missions.

The NSF focused on its dissemination of substantive information, especially scientific reports, program summaries, and reports used for policy. It omitted information gathered by grantees. The NSF regularly assessed the utility of its data programs through external review panels and assured objectivity through rigorous statistical methods, which led to good reproducibility. Not all statistical summaries were reproducible by outside parties because of confidentiality, but the NSF had “strong guidelines and a distinguished tradition” for producing such documents and for setting a standard for other agencies. The cost of this quality was often high in terms of timeliness, he said, citing a report he had just received that was based on data collected in fall 2000—a year and a half earlier. At the same time, the report was a “model of transparency,” with detailed methodologies, clear presentation for both lay and professional audiences, wide availability in print and electronic forms, and “exemplary” summaries and statistical tables.

The NIH, Dr. Garrison said, faced a more complex challenge, with its 27 institutes and centers. The guidelines differed somewhat for each entity and were tied to their different products. They were based on existing quality assurance programs and limited to information used for official NIH statements. Information from grantees was not covered. Nonetheless, the guidelines covered a large amount of information, including more than 400 publications per year and a hundred thousand pages of material on the Web site. All of it had been subjected to peer and internal review. Studies deemed influential received three checks, to ensure (1) exemplary data quality, (2) transparency, through references, documentation, disclosure of potential sources of error, and disclaimers, and (3) peer review, considered the most important check. He noted that scientists view peer review as an integral feature of quality assurance. When scientists prepare papers, they build in careful documentation of their procedures to prevent misunderstandings.

In conclusion, said Dr. Garrison, both agencies had developed comprehensive policies to demonstrate the quality of the data they disseminated, as well as mechanisms for addressing challenges and providing reasonable avenues for affected parties to request corrections.

American Institute of Biological Sciences

Ms. Ellen Paul reviewed the responses to OMB guidelines by the USDA and the Department of the Interior.

The USDA, she said, had considered both internally and externally produced information under the four OMB standards (objectivity, reproducibility, utility, integrity). But she also raised several areas for improvement. Reproducibility had not been addressed, she said, nor had the issue of what was influential. More serious, she said, was that the agency had proposed consulting with potential users in advance of undertaking a research project. “One can imagine that some users would say, don’t do the study at all if it may result in a regulation they don’t want. There is nothing in the guidelines to resolve that.” She added that such an approach would also neglect the USDA’s internal needs.

Ms. Paul also said that in the agency’s discussion of risk assessment, one would expect some mention of the Safe Drinking Water Act standards; this did not appear, nor was there any other discussion of risk assessment, the use of models, or how the use of models would be affected by the guidelines. Finally, she said that for its correction process, the agency had “put the burden of proof on the complainant.” Also, the agency’s requirements in the correction process were not legally binding,1 and a challenge could be filed at any time without penalty. There was no anticipation that complaints might be filed ad seriatim for months or years.2

1 While an agency may agree that the data are incorrect, the agency is not legally required to change the data. As Ms. Paul noted, many factors can influence whether an agency decides to correct information following a complaint. In summarizing his informal survey of agencies, Robert Ashby of the Department of Transportation said that agencies might agree that some information could be improved, but would correct it only if it would serve a “useful purpose.” Some corrections would require significant resources or might not advance the material interest of the public or the requester. “You don’t want the correction process driving your budget,” said Mr. Ashby.

2 Several speakers noted the possibility that serial suits could be used as a harassment tactic to encumber research and ultimately delay agency action.

Turning to the Department of the Interior guidelines, Ms. Paul said that most of the department’s primary research was done by the U.S. Geological Survey, and most of that was intramural. Other bureaus published information but did not have the capacity for the kind of review contemplated by the OMB guidelines. The department’s guidelines, she said, were limited to two points: first, the agency would do what the OMB requested, and second, it would instruct departmental bureaus to implement the guidelines. This response did not address the issue of different kinds of research (funded, contractor, grantee), and did not address the four standards individually, except to repeat the OMB definitions.

The DOI guidelines did address data quality procedures, but she criticized the agency’s response for not anticipating issues that might arise during challenges, especially the right of the researcher to respond. Ms. Paul suggested that the prospect of unlimited challenges by people trying to obstruct research would likely dissuade talented young researchers from joining the department.

American Association for the Advancement of Science (AAAS)

Dr. Joanne Carney reviewed two agencies, the NSF and EPA. She said that in her view both agencies seemed to provide transparency. She added that the AAAS placed high value on peer review in its own work. Peer review achieved transparency by substantiating the data and controls, and by explaining uncertainties behind the data.

For NSF, Dr. Carney said she agreed with the opinion of Dr. Garrison. The agency was clear about its processes, and the ways it verified the utility and objectivity of its information. It was clear in saying that grantees have sole responsibility for preparing their own information. For statistical data, NSF is clear about its methods of collection, sources, and limitations. She said that the AAAS is a regular user of NSF data, which it found to be of high quality, if not always as timely as some users would like.

The EPA had recently held both a public meeting and an online comment period in regard to its guidelines. Dr. Carney noted that EPA’s guidelines were a “thorough job, given the nature of the work they do and the products they provide.” She said it was important for the EPA to clearly articulate that the guidelines are not intended to replace existing procedures or statutory guidelines. Existing procedures already provide for public comment and requests for correction. EPA had already clarified that extramural researchers were responsible for deciding how to publish research results that were not associated with agency policy, a “separation of church and state.” She cautioned that the agency should take care not to let guidelines “overly hamper” the pursuit of basic research. She also noted that the agency provided a disclaimer for information produced before October 1, 2002 (and thus not initially covered under the guidelines) but disseminated after that date; for such information, the agency should try to ensure that it is reproducible and otherwise of high quality.

The EPA addressed the category of “influential” information, including a case-by-case analysis and a description of the existing standard for information that might have economic significance of $100 million or more. Dr. Carney said that the first category of influential information was overly broad: that is, information disseminated “in support of top agency actions” was not defined. Also, the agency acknowledged that the issue of third-party data is controversial. Such data should still be used, but the confidentiality of the researcher needs to be observed. Some individuals have claimed that EPA is “hiding behind those confidentiality laws,” but she added that one can “still look at analytical results without violating confidentiality.” She agreed with EPA that often one must use the “best available” information, because science is an ongoing process that does not achieve answers that are final or fixed. “We will know more as scientific research moves forward. But sometimes we have to look at what’s available today and make a decision. The EPA did a good job there.”

Dr. Carney concluded by saying that EPA needs to be more specific about its time limits for correction requests, and how a petitioner has to demonstrate being harmed by the information. These guidelines need to be clear, she said, or the agency will be “overburdened.”

Next Steps in the Data Quality Process

Over the course of spring and summer 2002, agencies will receive comments from the public and will work with OMB on revising and finalizing their agency-specific guidelines. The OMB guidelines go into effect on October 1, 2002.
