A STATISTICAL AGENCY SHOULD HAVE a research program that is relevant to its activities. Because a small agency may not be able to afford an appropriate research program, agencies should collaborate and share research results and methods (see Practice 13). Agencies can also augment their staff resources for research by using outside experts.
At least two major components should be part of a statistical agency’s research program: (1) research on the substantive issues for which the agency’s data are compiled, being careful not to take policy positions; and (2) research to evaluate and improve statistical methods and operational procedures, such as data processing flow. In addition, research should be conducted to understand how an agency’s information is used, both inside and outside the government, for policy analysis, decision making, and public understanding (see Practice 6).
Research on data uses and users can contribute to future improvements in the concept and design of data collections and the format of data products. For example, public-use files of statistical microdata were developed in response to the analytic needs of government and academic researchers. Beginning with an understanding of the variety of uses and users of an agency’s data, more in-depth research on the policy uses of an agency’s information might, for example, explore the use of data in microsimulation and other economic models that are used in decision making (see National Research Council, 1991a,b, 1997a, 2000b, 2001b, 2003a, 2010d).
SUBSTANTIVE RESEARCH AND ANALYSIS
A statistical agency should include staff with responsibility for conducting objective substantive analyses of the data that the agency compiles, such as analyses that assess trends over time or compare population groups. Substantive analyses provided by an agency should be relevant to policy by addressing topics of public interest and concern. However, such analyses should not include positions on policy options or be designed to reflect any particular policy agenda (see Martin, 1981; Norwood, 1975; Triplett, 1991).
The existence and output of an analytical staff can contribute not only to the knowledge base in the applicable subject areas, but also to the credibility, relevance, accuracy, timeliness, and cost-effectiveness of the agency’s data collection programs. Benefits that a strong subject-matter staff brings to a statistical agency include:
- Agency analysts are able to understand the need for and purposes of the data from a statistical program and how the data will be used. Such information is essential for refining the design and methods the agency uses to produce the data.
- Agency analysts have access to the complete microdata and so are better able than outside analysts to understand and describe the limitations of the data for analytic purposes and to identify errors or shortcomings in the data that can lead to subsequent improvements.
- Substantive research by agency analysts can benefit from and help reinforce an agency’s credibility through its commitment to openness and to maintaining independence from political influence.
- Substantive research can assist in formulating an agency’s data program, suggesting changes in priorities, concepts, and needs for new data or discontinuance of outmoded or little-used series.
An agency’s subject-matter analysts should be encouraged and have ample opportunity to build networks with analysts in other agencies, academia, the private sector, other countries, and relevant international organizations. Analysts should also be encouraged and have ample opportunity to present their work at relevant conferences and in working papers and refereed journal articles. The goal is for the agency to have widely recognized expertise in the subject areas in its mission.
The leaders of a statistical agency should take steps to ensure that the agency’s subject-matter analysts and its methodological and operational staff are able to interact in a constructive manner. Overcoming barriers to communication is essential so that insights from subject-matter analysis can be translated effectively into improved data collection program design, methodology, and operations.
RESEARCH ON METHODOLOGY AND OPERATIONS
It is important for statistical agencies to be innovative in the methods used for data collection, processing, estimation, analysis, and dissemination, with the goals of improving data accuracy, timeliness, and operational efficiency and reducing respondent burden. Careful evaluation of new methods is required to assess their benefits and costs in comparison with current methods and to determine effective implementation strategies, including the development of methods for bridging time series before and after a change in procedures.
Research on methodology and operational procedures must be ongoing. Currently, some of the important topics for research include:
- determining best uses of paradata to optimize costs and timeliness of data collection and estimation and accuracy of results (see National Research Council, 2013a);
- addressing challenges for computer-assisted interviews, which have included lengthy times to implement questionnaire changes and difficulty in providing adequate documentation of questionnaire content and pathways (see National Research Council, 2003c);
- understanding and minimizing mode effects on quality when obtaining data in two or more different ways (Internet, mail, telephone, and face-to-face response; see National Research Council, 2007b);
- improving the adequacy of the documentation of Internet data products and guidance for users with a wide range of analytical skills and understanding (see National Research Council, 2012);
- developing new methods of confidentiality protection (see National Academies of Sciences, Engineering, and Medicine, 2017b); and
- accelerating the use of multiple data sources by developing measures of error for alternate sources and identifying optimal ways to combine them to achieve such goals as reducing burden and costs and improving accuracy and timeliness, recognizing that it is likely not possible to achieve improvements on all dimensions at once (see Practices 3 and 9).
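As a minimal illustration of one way estimates from multiple sources might be combined once their error measures are known (a hypothetical sketch, not a method prescribed by this practice), inverse-variance weighting pools independent, unbiased estimates of the same quantity so that the more precise source receives more weight; all figures below are invented:

```python
def combine_estimates(estimates):
    """Combine independent, unbiased estimates of the same quantity
    (e.g., one from a survey and one from administrative records)
    by inverse-variance weighting. Under these assumptions, the
    weighted combination minimizes the variance of the result.

    `estimates` is a list of (value, variance) pairs.
    """
    weights = [1.0 / var for _, var in estimates]
    total_weight = sum(weights)
    combined = sum(w * val for w, (val, _) in zip(weights, estimates)) / total_weight
    combined_variance = 1.0 / total_weight
    return combined, combined_variance

# Hypothetical: a survey estimate with higher variance and an
# administrative-records estimate with lower variance of the same total.
value, variance = combine_estimates([(102.0, 4.0), (98.0, 1.0)])
```

Note that the combined variance is smaller than that of either input, which is the sense in which combining sources can improve accuracy; the approach depends on having credible error measures for each source, which is precisely the research challenge described above.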
With regard to the conceptualization and measurement of error, statistical agencies should work, to the extent possible, to adapt the concept of total survey error, which has guided the design of probability surveys, to nonsurvey data sources. The total survey error framework encompasses both nonsampling errors, which typically contribute bias, and sampling error, which contributes variance. It can be adapted to administrative records, alone or in combination with surveys, so long as the statistical agency can obtain sufficient information on sources of error in the records (e.g., coverage). There are nontraditional data sources for which measuring error, alone or in combination with surveys, will be difficult, if not infeasible, and for which it will be necessary to label any statistics as experimental (see Practice 3).
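To make the bias and variance components of this framework concrete, the sketch below decomposes the mean squared error of repeated estimates against a known benchmark; the data and the benchmark value are hypothetical, and real total survey error analysis involves many more error sources than this simple decomposition shows:

```python
import statistics

def decompose_total_error(estimates, benchmark):
    """Decompose the mean squared error (MSE) of repeated estimates of a
    population quantity into a squared-bias term (systematic error, as
    nonsampling errors typically produce) and a variance term (random
    spread, as sampling error produces):

        MSE = bias**2 + variance

    where bias is the difference between the average estimate and the
    benchmark "true" value.
    """
    mean_estimate = statistics.fmean(estimates)
    bias = mean_estimate - benchmark
    variance = statistics.pvariance(estimates)  # spread around the mean
    mse = statistics.fmean((e - benchmark) ** 2 for e in estimates)
    return {"bias_sq": bias ** 2, "variance": variance, "mse": mse}

# Hypothetical replicate estimates of a rate whose true value is 5.0.
result = decompose_total_error([5.3, 5.1, 5.4, 5.2], benchmark=5.0)
```

The identity MSE = bias² + variance holds exactly in this population form, which is why obtaining information on systematic error sources (e.g., coverage in administrative records) is a precondition for adapting the framework to nonsurvey data.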
It is noteworthy that many current practices in statistical agencies were developed through research they conducted or obtained from other agencies. Federal statistical agencies, frequently in partnership with academic researchers, pioneered the use of probability sampling, the national economic accounts, input-output models, and other analytic methods. The U.S. Census Bureau pioneered the use of computers for processing the census. Several statistical agencies apply principles of cognitive psychology, a research strand dating back to the early 1980s (see National Research Council, 1984), to improve the design of questionnaires, the clarity of data presentation, and the ease of use of electronic data collection and dissemination tools. History has shown repeatedly that methodological and operations research can lead to large productivity gains in statistical activities at relatively low cost (see, e.g., Citro, 2016; National Research Council, 2010c).
An effective statistical agency actively partners with the academic community for methodological research. It also seeks out academic and industry expertise for improving data collection, processing, and dissemination operations. For example, a statistical agency can learn techniques and best practices for improving software development processes from computer scientists (see National Research Council, 2003c, 2004d). An effective agency also learns from and contributes to methodological research of statistical agencies in other countries and relevant international organizations.
Statistical agency management should take steps to ensure that methodological research staff are able to interact constructively with operational staff so that improvements to operations can be readily identified and implemented. Agency leaders should also strongly support methodological research and feasibility testing for major data collection programs, through such means as a methods panel that is operated in parallel with the agency’s main program. This kind of testing is essential so that a program does not become locked into methods and procedures that are increasingly out of date and, at the same time, to assess new methods in a test environment before they are put into production.