Possible Remedies to Barriers
This chapter covers the workshop discussions of ways to remedy the barriers to innovation identified in Chapter 3. The background papers by Dillman (1996) and Habermann (2010a) are summarized first, followed by the discussion of the workshop participants.
OVERCOMING THE ENVIRONMENTS THAT HINDER INNOVATION
In his paper, Dillman outlined three steps to overcome the three interconnected features of large government survey organizations that make it difficult to create an environment of innovation, discussed in Chapter 3.
First, he noted that for research to positively affect government survey practice, much of it must be conducted on government surveys themselves, yet agencies do not employ enough research professionals to do this. In particular, he singled out the need to bring into government professionals who are able to resolve issues of measurement and nonresponse error.
The second step is to build the capacity to understand and deal with sources of nonsampling error into the operations culture. This is necessary because many of the decisions that are made at the operations level directly affect measurement and nonresponse. As examples, his paper mentions “question wording and order, the way a form or questionnaire is constructed on a page, the information included as part of an address, the class of postage used, whether letters include dates, the contents of those letters, whether letterhead stationery is used, and the kind of mail
processing equipment purchased” (Dillman, 1996, p. 122). He stresses the importance of instilling this capability into the information technology parts of an organization, since, on one hand, the acquisition of information technology is one of the main driving forces in innovation, and, on the other hand, information technology units are often concerned with operational efficiency and not reduction of survey error. Adoption of new technologies should contribute to the reduction of survey errors, not exacerbate them.
The third step he proposed is to remove obstacles to reliable and timely communication that may be created by the overly hierarchical nature of the organization. These measures have two components. One is that communication should be viewed not as a control function but as a means to promote the flow of information to as many people as possible in as short a time as possible. The other is to ensure that neither the research culture nor the operations culture entirely dominates the other. He notes (Dillman, 1996, p. 123): “A government survey organization that allows either the research culture or operations culture to control the other will neither be innovative in an effective way nor will it conduct, in the long run, high quality surveys. The organizational structure needed is one that encourages each to influence the other and allows disagreements to be worked out quickly, at lower levels under an umbrella of shared purpose.”
Dillman also observes that the barriers and remedies that he proposes are not unique to federal statistical agencies (Dillman, 1996, p. 124): “Universities, large corporations, and others all find themselves struggling with how to facilitate needed innovation, rather than unnecessarily thwarting it.”
LEADERSHIP AND FOCUS
As did the workshop participants, the Habermann paper (2010a) stresses the importance of leadership:
It is the leadership of each agency—including the senior managers—who are responsible for fostering an innovative spirit and for carrying out innovation in that agency. It is leadership who are responsible for inspiring and rewarding staff, and for developing solutions in spite of the constraints placed on the agency. It is, after all, agency leadership that must ensure that the necessary investments in innovation are made and the necessary changes to business processes are accomplished. Moreover, it is not sufficient to manage an agency; one has to manage a technical agency. Leadership then can assist in attracting and retaining staff through improved (less bureaucratic) working conditions and fostering a spirit of excitement. Leadership must also wrestle with the difficult
problem of encouraging change in an environment where day-to-day operational outputs are of paramount importance to the public.
Although there is no silver bullet to solve the problem of finding tomorrow’s leaders for the statistical agencies, Habermann’s paper suggests that there is a system-wide change that might make a significant difference for the future. This change would elevate the status of the heads of statistical agencies in their departments and with Congress, as well as allow these leaders easier access to the decision-making processes of the executive branch. Currently some, though not all, of the heads of the statistical agencies are presidential appointees requiring Senate confirmation (known as PAS). The change would be to require that each agency head be a PAS with a term appointment. In his paper, Habermann asserts that such a change would raise the visibility of all the statistical agencies and make it easier for them to promote their budgets and to gain the appropriate measure of independence necessary for success.
Although leadership is a necessary condition for innovation, it may not be sufficient. In particular, Habermann’s paper addresses three other areas that, in his view, require system-wide attention: recruitment, working relations with academic researchers and with private contractors, and a more centralized focus for research and innovation.
Recruitment
Habermann’s paper acknowledges that there are many excellent statisticians in the federal statistical system. However, their numbers are not sufficient to meet the demands placed on the system for more innovation. Moreover, as mentioned earlier, the skill set required for research and innovation is relatively rare. The primary method of attracting researchers into the federal statistical system is through recruitment of new graduates—who must be U.S. citizens—at the master’s and doctorate levels. However, many otherwise excellent candidates are not U.S. citizens, and, with few exceptions, they cannot be hired by federal statistical agencies. To change this would be a significant task and would require legislation, and the paper acknowledges that such a change is unlikely in the current political climate. Consequently, according to Habermann’s paper, consideration might be given to another idea: creating a private not-for-profit research center, devoted to the problems of the federal statistical system, which is not hindered by the hiring constraints of federal agencies.
Relations with Academic Researchers and Private Contractors
Even if the federal statistical system did employ significant numbers of staff with the correct skill set, it would still be beneficial, and in fact necessary, for them to work cooperatively with researchers in academic institutions and the private sector. System-wide leadership is needed for agencies to develop the ability to enter into flexible cooperative agreements with academic and other institutions. Although there are cooperative agreements in some agencies, even in these cases it is often the department that makes the final decision. According to Habermann’s paper, it would be helpful if all statistical agencies could have flexibility in their grant-making ability and in working with universities to support graduate students.
A Central Focus for Research and Innovation
Undertaking significant innovation projects requires a critical mass of research personnel, and this critical mass is lacking in most agencies. Even if the agencies were able to recruit noncitizens, the inability to create such a critical mass would still exist simply because of the numbers of staff required. Moreover, the absence of a central focus makes it difficult for the statistical system, acting as a whole, to prioritize and focus on system-wide innovation problems.
Habermann’s paper discusses two options that could aid in the solution of these problems. One approach would be to empower a single agency (e.g., the Census Bureau, the Bureau of Labor Statistics) to act as the focus for innovation for the entire statistical system. Direction for the single-agency research program could come from the Statistical Policy Office of the Office of Management and Budget (OMB) through the Interagency Committee on Statistical Policy (ICSP). Such an approach would require a level of integration that does not now exist in the federal statistical system, and the success of this approach would require successful solutions to the problems of recruitment and working relationships discussed above.
If it is not possible to simplify the recruitment process or to hire noncitizens, and if the contracting rules prove intractable to change, then a more innovative approach to providing a central focus for research and innovation may be required. Habermann’s paper discusses the approach of a private not-for-profit research center. The employees would not be federal employees and therefore would not be subject to the recruitment and retention rules of the federal government. Such a center could take advantage of contracting rules that would allow flexible working relationships with the government. It would also require authority from OMB through the ICSP. In this approach, research would take place at
the center, at universities, and by federal contractors (under contract with the center). Habermann’s paper notes that the number of employees at this center need not be large.
WORKSHOP DISCUSSION
Several possible remedies for overcoming the barriers to innovation were discussed by the workshop participants; changes in leadership and research programs were among the most commented on. Three other topics in the discussion were periodic review and feedback from users, innovation incubators, and the criteria for successful innovation.
Leadership
The need to improve leadership was mentioned by many participants as critical to improving innovation, and some had specific ideas about how leadership can provide remedies to innovation barriers.
Robert Groves said that statistical agencies need to reexamine the existing boundaries between agencies. In particular, he suggested, leadership was needed so that the system could reconceptualize agency boundaries and the nature of collaborative activities between agencies, as well as the boundaries with outside entities. Noting that small and large agencies have different barriers and therefore different solutions, he said that one issue is how to systemically encourage coalitions and alliances.
Cynthia Clark (National Agricultural Statistics Service) observed that leadership is needed to develop policies that encourage the movement of staff between different organizations and that this would help break down some of the existing barriers between agencies.
Steven Landefeld (Bureau of Economic Analysis) stressed the importance of leadership in providing the correct incentives for innovative work and changing the incentives as conditions warrant. Since the federal statistical system is a decentralized one and likely to stay that way, he pointed out the need for more centralized leadership and direction. He extended the concept of incentives to advisory committees, noting that it is also important for advisory committees to have the correct incentives to maximize their usefulness. Landefeld stated that, although there is never going to be a Statistics USA centralized statistical system, in his view stronger leadership is needed at the top with the authority to make some decisions about priorities in the budget process and about cross-cutting priorities.
The need for incentives for outside researchers and for agency staff was also noted by John Eltinge (Bureau of Labor Statistics).
Clark noted the need to make greater use of cooperative agreements
with states and land grant universities as well as the need to encourage the development of interdisciplinary teams. Constance Citro remarked that it might be beneficial to extend the ability that the National Agricultural Statistics Service has for making cooperative agreements to other agencies, but that would require new legislation.
Research Programs
Several of the workshop attendees remarked on the importance of research in enhancing innovation in the federal statistical system. Both Landefeld and Steven Dillingham (Bureau of Transportation Statistics) observed that cross-cutting research could be centralized, as suggested in Habermann’s paper. Landefeld, however, said that individual agencies would still need the ability to carry out research on specific topics relevant to their missions, such as national accounts.
Manuel de la Puente (Social Security Administration) pointed out that extramural research can bring into an agency outside people with new skills and abilities. To promote the synergies of extramural research, it is important to pair up outside researchers with internal agency staff. Edward Sondik (National Center for Health Statistics) agreed, asserting that it is stimulating for staff to be involved in extramural work.
Both Thomas Louis and Roderick Little (University of Michigan) commented on the type of research being performed by federal agencies. Little said that research by the federal statistical system may be skewed too far toward the observational end of the spectrum and not enough toward the experimental design end. Louis concurred, saying that it is critical for experiments to have the ability to compare approach A with approach B. Building a productive research program also requires attracting researchers from outside the federal system. One way to do this, according to Groves, is through the use of the Intergovernmental Personnel Act, which allows researchers, primarily from academic institutions, to leave the academic world for a specified period of time, work in a statistical agency, and then return.
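The kind of A-versus-B comparison Louis describes can be sketched as a split-sample (split-ballot) experiment: units are randomly assigned to one of two questionnaire approaches and the difference in mean response is estimated. The sketch below is purely illustrative; the function names and the toy response model are hypothetical, not any agency's actual design.

```python
import random

def split_ballot(sample, response):
    """Randomly assign each sampled unit to questionnaire approach A or B
    and return the estimated difference in mean response (A minus B).
    `response(unit, arm)` returns the unit's answer under that arm.
    Illustrative sketch only."""
    rng = random.Random(42)  # fixed seed for a reproducible assignment
    arm_a, arm_b = [], []
    for unit in sample:
        (arm_a if rng.random() < 0.5 else arm_b).append(unit)
    mean_a = sum(response(u, "A") for u in arm_a) / len(arm_a)
    mean_b = sum(response(u, "B") for u in arm_b) / len(arm_b)
    return mean_a - mean_b

# Toy check: if approach B inflates every answer by exactly 2 units,
# the estimated difference (A minus B) should be -2.
units = list(range(200))
diff = split_ballot(units, lambda u, arm: 10.0 + (2.0 if arm == "B" else 0.0))
```

In a real design the random assignment would be built into the sample selection, and the difference would be accompanied by a standard error and a significance test; the point here is only the structure of the comparison.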
Periodic Review and Feedback from Users
The importance of a periodic review of statistical programs to overcome barriers to innovation was stressed by several participants. Allan Schirm and Little discussed the need for regular evaluations of statistical programs, including their fitness for use. David Banks (Duke University) said that it would be useful to hold a workshop every five years on how to organize a statistical agency as if for the first time.
With respect to review and evaluation, Groves noted that creative
destruction of statistical programs is not usually practiced by statistical agencies, although external pressures, such as budgets, can induce it.
With respect to periodic reviews, Clyde Tucker pointed out that 20 years have passed since enactment of the Government Performance and Results Act, and there have been few serious interagency efforts to develop quantitative performance metrics. He noted that the best example of such a metric is the OMB standard on nonresponse bias.
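For context on the metric Tucker mentions, nonresponse bias in a respondent mean rests on a standard decomposition; the identity below is a textbook statement offered for illustration, not quoted from the OMB guidance itself.

```latex
% Bias of the respondent mean \bar{y}_r relative to the full-sample mean \bar{y},
% where RR is the unit response rate and \bar{y}_{nr} is the mean among
% nonrespondents:
\mathrm{bias}(\bar{y}_r) \;=\; \bar{y}_r - \bar{y}
                        \;=\; (1 - RR)\,(\bar{y}_r - \bar{y}_{nr})
```

The identity follows from writing the full-sample mean as a response-rate-weighted average of respondent and nonrespondent means; it shows why a low response rate alone does not imply large bias unless respondents and nonrespondents actually differ on the measure of interest.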
Both E.J. Reedy (Ewing Marion Kauffman Foundation) and Edward Spar (Council of Professional Associations on Federal Statistics) commented on the issue of feedback from users. Reedy stated that agencies, from the top down, need to interact with data users to provide and receive feedback. He suggested that agencies should put a notice about seeking feedback on the websites where their data reside, so users see it when they want to download data. Spar observed that agencies usually provide information to users on what they (the agencies) are doing; they need to make greater efforts to understand the users’ perspectives. In obtaining that feedback, William O’Hare (Annie E. Casey Foundation) suggested that clearer communication can build public support. In that vein, agencies need to learn how to use Facebook, Twitter, and other social networking tools, he said.
Innovation Incubators
Several participants supported the concept of innovation incubators or laboratories in which ideas for innovation could be developed and tested. Ron Bianchi noted that this could be coupled with the concept of promoting contests for innovation, as is done at the National Aeronautics and Space Administration.
Eltinge said it is important that the statistical system and the individual agencies learn how to infuse new technologies into the system. The example he used was how, in the 1940s, Iowa farmers learned to accept the mechanization of agriculture and hybrid seed corn.
John Haltiwanger suggested that the statistical agencies could work with foundations, communities, and corporations, such as IBM, that have experience with building incubators and allowing them to flourish. The hard part is ensuring that the innovation ideas from incubators are not left to wither and die. He noted that it takes senior management buy-in to move incubation products into the mainstream of an organization.
Emerson Elliott (National Council for Accreditation of Teacher Education) suggested that OMB could establish a culture of innovation. For that to take place, the ICSP would be very important. In this respect, Eltinge observed that the statistical system cannot keep living off previous capital investments in innovation; it needs to make new capital investments.
Peter Meyer (Bureau of Labor Statistics) suggested a specific incubator project: a Wikipedia type of website across the statistical agencies, which he called Statipedia. Such a website could be used to build a common online glossary of terms and to share experiences and knowledge. Thus, for example, computer source code and technical innovations could be shared across agencies. Barry Nussbaum was enthusiastic about the idea and offered cooperation in hosting the website.
Criteria for Successful Innovation
Ivan Fellegi identified three criteria for a successful research or innovation program. First, such a program has to be directly linked to the operational activities of the agency, so that it is driven by acute issues of practice or by opportunities detected in practice. Second, even while directly linked, there needs to be some distance between research and practice. Third, there needs to be a balance between the independence of the research and its relevance.
He also offered comments on the ideas discussed in Habermann’s paper. He agreed with the crucial role of leadership for maintaining independence while ensuring relevance. However, he thought that research centralized in one agency or in a federally funded research and development center was going too far, because of the distance problem. He said that the research function would not be aware of the operational requirements and would not be relevant to the practice issues of the agency.
Instead, Fellegi outlined four specific steps that could be taken to foster innovation in statistical agencies:
the bureaucratic barriers to efficient and effective contracting and recruitment could be removed, with OMB taking the lead;
an organized marketing of the problems and opportunities of the federal statistical system to academic institutions could be undertaken;
case studies of successful—and unsuccessful—examples of innovations could be compiled and disseminated; and
progress could be measured periodically.