Barriers to Innovation
This chapter covers ways to improve the extent and pace of innovation in the federal statistical system by addressing the barriers to innovation, most of which were discussed in the workshop’s third session. The main arguments on this topic from the papers by Dillman (1996) and Habermann (2010a) are summarized first, followed by the points made by the workshop participants.
In Dillman’s (1996) paper, he reflects on his days at the U.S. Census Bureau in the early 1990s and his involvement in projects to reduce measurement and nonresponse errors. He observes that three interconnected features of large government survey organizations make it difficult to create an environment of innovation:
the coexistence of research and operations cultures,
major differences in the dominant value systems of the research and operations cultures, and
the difficulty of resolving these differences in a hierarchical organization.
Coexistence of Research and Operations Cultures
As he notes (Dillman, 1996, p. 115):
Government survey agencies face tasks unlike those usually faced in universities or private sector work. Some government surveys are of great scale and complexity, so that not only do they present huge operational problems, but much of the specific knowledge for designing and implementing the surveys must come from research which only the host agency can design and implement.
A circumstance is thereby created that might be likened to, for example, an aircraft manufacturer attempting to operate an airline while continuing to design aircraft. Putting pilots and flight attendants into the same room with aeronautical and thermal systems engineers—each representing multi-million dollar enterprises with equal investment in the outcome of their common research project—could produce some unpredictable as well as strange outcomes.
It is perhaps inevitable that tension will develop between (a) employees responsible for the visible outcomes of the organization, whom the public and Congress are aware of and keep account of, and (b) employees who are part of the research arm of the organization, who are responsible for learning why and how things work and how to improve them. From the perspective of the operations culture (Dillman, 1996, p. 115):
A good research project is “practice” in order to form impressions of whether something works. Control groups are not really necessary, and the fewer treatment groups the better. From this perspective the real value of research is as the rehearsal is to a stage performance, where one dares not fail. At the same time, those from the research culture have in mind carefully designed treatment factors and a full factorial design. Preordained rules of assignment to treatment and control groups as well as rules for interpretation of evidence must be scrupulously followed.
Both of these cultures are essential to success. However, as Dillman notes, the perceived needs of one culture can often interfere significantly with the needs of the other. The results of the coexistence of these two cultures are “differences in the core value systems of each culture and a division of responsibility that results in the overemphasis of some issues at the expense of others” (Dillman, 1996, p. 116). This is a major barrier to innovation.
Two Value Systems
As described by Dillman, the values and skills of the people in the two cultures are significantly different. Statistics is the core value of those in the research culture. However, people trained in measurement and nonresponse issues are few in number, and, Dillman asserts, they generally lack influence in the design process. The give and take of working
groups may result in decisions about sampling error being made by statisticians and decisions about measurement error and nonresponse being made by operations staff. However, Dillman argues that the skills that are valued in the operations culture have almost nothing to do with reducing measurement and nonresponse errors; rather, the valued skills involve organizing large numbers of people to get tasks done accurately, on time, and at a low cost. The result is that, although measurement and nonresponse issues have emerged as increasingly important sources of data collection error, there has not been a corresponding emergence of significant numbers of professionals to design theoretically based projects needed to ensure the development and implementation of appropriate innovations for resolving the error. Dillman characterizes this situation as a barrier to innovation.
Difficulty of Resolving the Culture Differences
Dillman notes that government statistical agencies, such as the Census Bureau, are highly complex organizations with many different tasks.1 Not only are there several levels of hierarchy inside the organization, which makes identifying the originator of any material difficult, but also, and more importantly, there are several levels of hierarchy outside the statistical agency. He notes, for example, that the then-proposed 2000 research and development program of the Census Bureau might go through a minimum of eight entities outside the agency. Differences in organizational hierarchies may be resolved in a number of ways, such as by which entity is the most powerful or by who won the last time. As a result, compromises are common, and the decisions made by organizational hierarchies are often based on different issues from those that originally led to the negotiation. Dillman asserts that, in particular, measurement and nonresponse issues are decided in this kind of complex hierarchical process, and they “lose” to operations issues on one hand and statistical issues on the other. He notes (Dillman, 1996, p. 119):
From the standpoint of innovation in a rapidly changing technological environment, though, hierarchical processes make such cultural and value system concerns more difficult to resolve. The down-side of hierarchy for innovation is that it forces large amounts of critical information upwards through a series of smaller and smaller funnels. This process is slow, but the information that eventually gets through represents only
a part of the original message. In addition, the information finally communicated may bear very little resemblance to the original message.
A further problem of hierarchical decision making, according to Dillman, is that horizontal flows of innovative ideas and the promotion of active discussion of these ideas at an early stage are discouraged. In the absence of a more formal regularized process to discuss possible innovations, the ideas that do flow horizontally are usually by word of mouth. One result is that the ideas often become distorted and misunderstood by those in the hierarchy. He ends by noting that “the effects of hierarchical processes [are] to slow down and sometimes thwart altogether needed innovation and change” (Dillman, 1996, p. 120).
The paper by Habermann (2010a) agrees with Dillman about the problems produced by the coexistence of different cultures in a statistical agency, but it considers several additional barriers:
the lack of investments in research and innovation by statistical agencies;
the inability or unwillingness of the administration or Congress or both to take a long-term view of research and innovation;
the low standing of statistics as a priority;
the inability of statistical agencies to make a sufficiently strong case for innovation funds;
the inability to attract the needed leadership;
the difficulty of creating an environment in which questioning conventional logic is welcomed;
insufficient numbers of appropriately trained statisticians;
insufficient numbers of appropriately trained managers and leaders;
inadequate pay for senior-level researchers;
the inability to recruit non-U.S. citizens;
the lack of critical mass of research staff in many agencies;
contracting rules that inhibit working relationships with academics and contractors; and
lengthy bureaucratic hiring procedures.
Several of these barriers concern leadership in the federal statistical system. This theme of the importance and need for leadership was echoed by many of the workshop participants.
In discussing the critical path to innovation, John Haltiwanger (University of Maryland) pointed out the necessity for somebody at the top of an organization to take responsibility for innovation. Leaders must make it clear that they want innovation, and they want it now. Innovation does not just happen. With respect to the difficulty of attracting sufficient numbers of new staff with the correct skill set, he observed that prospective students know which organizations are research-friendly and that leadership is needed to create research-friendly organizations.
Following on this subject, Ivan Fellegi (Statistics Canada) noted that one of the principal barriers to attracting staff is one of image; however, he said, the image of a statistical agency can be changed. In identifying the barriers to innovation, he pointedly observed, one need not look any further than to the participants in the workshop, who included the leadership of many U.S. federal statistical agencies. He suggested that, although it is not easy, successful statistical offices can create an environment for innovation. Leaders create this environment or culture through their own behavior and the structures they put in place. Moreover, leadership is required to ensure that the right questions are being asked and that the answers to the questions are correctly and appropriately dealt with.
This theme of the importance of leadership was picked up by several other participants. Ruth Ann Killion (U.S. Census Bureau) noted that managers are responsible for developing a culture of innovation, and Andrew White (National Center for Education Statistics) suggested that implementing a culture of innovation comes from the senior leaders of an agency. He essentially agreed with Fellegi, commenting that everyone attending the workshop is responsible for instituting that culture.
Jennifer Madans noted that this type of meeting has been held before, and little if anything has come from the effort. She observed that the leaders of the statistical system—all of the people in the room—are responsible for the lack of progress. Although it is true that a decentralized system makes it hard to have a joint response, she said, the federal statistical system nevertheless needs to act or die.
Clyde Tucker commented on the importance of leadership in the area of administrative data. If the federal statistical system is to accomplish the necessary organizational change so that the system can take advantage of administrative data, then leadership will be needed to do so. Leadership is particularly important for administrative data, he said, because statisticians are trained to work with surveys, not administrative data, and are accustomed to surveys in their usual activities.
Constance Citro noted that agency staff often do not know how the
data they provide to the public are used. To understand the problems and attributes of the data that agencies produce, it is critical that agency staff analyze those data. It is leadership that sets the direction for an agency to ensure that this kind of analysis is performed.
Richard Newell asked whether the barrier is the inability to implement existing ideas rather than an inability to develop new ones. In order to be innovative, agencies must have a thirst for new knowledge, and leaders have to create an environment for slaking such a thirst.
Communication and Collaboration
In addition to leadership, the other most frequently mentioned barrier to innovation is the lack of communication and collaboration within and between agencies. This issue was raised by Brian Harris-Kojetin (U.S. Office of Management and Budget), who said that, in his observation, not only do different agencies that experience the same problem fail to talk to each other, but even within an agency there can be significant problems of communication.
Marilyn Seastrom (National Center for Education Statistics) developed this theme with reference to specific issues that are common to agencies in the federal statistical system. She suggested that the key to innovation in cost savings, data dissemination tools, data visualization, and metadata standards is collaboration among agencies.
Barry Nussbaum (U.S. Environmental Protection Agency) suggested that the problem of communication extends to the fundamental issue of understanding the needs of users. This is critical for success in innovation: staff have to work with data and understand how they were acquired and what their properties are. To do that requires working in the weeds of data acquisition, where one can truly understand the properties of data and the basics of data collection.
In this context, Haltiwanger said a related problem is the inability of researchers inside and outside the government to drill down in their analysis. He observed that this inability to go deeper into the relationships among variables is due to a lack of data sharing among agencies, limited access to microdata, and lack of federal and state cooperation. In general, he said, improving communications within an agency could bear a great deal of fruit in terms of breaking down barriers to innovation.
Citro noted that one of the barriers related to communication is a lack of understanding between the user community and the research community and between the federal statistical agencies and private contractors. Allen Schirm (Mathematica Policy Research) pointed to the workshop itself as an example of the need to broaden interactions and establish better communication. He noted the absence of significant numbers of young
people at the workshop. He commented that it would also be beneficial if the Interagency Committee on Statistical Policy (ICSP) Working Group on Innovation had participation from the private sector.
Operational and Innovation Cultures
Several participants responded to the points in the Dillman (1996) paper on the coexistence of an operations culture and a research culture. Lynda Carlson agreed that operational matters often trump work on innovation. In response to earlier comments from Graham Kalton about the need for large, major innovation projects, Madans suggested that implementation of big innovation projects can set up a tension between who is thought to be doing innovation and who is not. One of the results of this tension can be a lack of communication between the operations staff and the research staff. It is important, therefore, to build innovation into everyone’s job.
Thomas Louis also suggested that one of the challenges is to make innovation a part of the usual work of staff. Nussbaum reported on experiences at the U.S. Environmental Protection Agency (EPA), stating that the secret to innovation at the EPA is not to call it innovation. In fact, setting up a bureaucratic process that is supposed to innovate inhibits individual initiative.
Fear of Failure
Carlson proposed another barrier to innovation—the fear of failure—that was not discussed in any of the background papers. She noted that innovation is by its very nature risky and that the budget and performance process as practiced by the federal government militates against taking risk. It is safer not to try to innovate and thus avoid the penalties that these systems would impose for failure.
Ronald Fecso (U.S. Government Accountability Office) agreed that fear is an important barrier, and performance plans impede risk-taking by encouraging people to take the easier path of the status quo. It is important to get over the fear of failure, he said. Moreover, the system needs to focus more on understanding user needs and answering the important new questions and not just on improving the answers to existing questions.
Edward Spar (Council of Professional Associations on Federal Statistics) emphasized the point that with risk comes the possibility of failure. Although this may appear to be obvious, people must accept this and learn to live with it. In this vein, Ron Bianchi (Economic Research Service, U.S. Department of Agriculture) suggested that the concept of a risk-free
idea is in a sense vacuous, since a risk-free idea is equivalent to a work-free zone.
Several workshop participants mentioned the lack of successful case studies as a barrier to innovation, commenting on the need for better case studies of innovation that could be used by agencies. According to Thomas Mesenbourg (U.S. Census Bureau), components of such studies include who drove the innovation project, who opposed it, what challenges were encountered, and what were the keys to success. Newell observed that it might be useful to identify a set of best practices and to include in the case studies the human capital dimension and descriptions of the research infrastructure and internal practices. Sally Morton (University of Pittsburgh) provided a cautionary note with respect to case studies. Although they may be helpful, she said, barriers are changing over time and what worked previously may not succeed now.
Christopher Carroll (Council of Economic Advisers) extended the idea of case studies to comparisons with others outside the system. It is important for each agency to compare how it is doing with others. More specifically, the statistical system and individual agencies should seek to understand why federal statistics are different from other statistics that are ostensibly measuring the same thing. For example: Why do different estimates for employment result from household surveys and payroll surveys?
Lack of Statistical Expertise
Harris-Kojetin said in his view the most important barrier is a lack of skilled people and expertise. In this regard, it is important to recognize that there is a wide demand for statisticians outside government, and they may have expertise that is lacking inside the government. Therefore, the federal statistical system should remove any barriers that prevent linking communities inside and outside government that employ statisticians. For example, the federal system should look to other mature organizations that have encountered the barriers discussed in the background papers and in the workshop discussions.
This theme of insufficient skilled personnel as a barrier was taken up by Kalton, who emphasized the importance of updating skills. He pointed out that the need to update skills is greater now than in the past. Statistics has evolved into many subdisciplines, each of which requires particular high-level skills. An example of this was provided by Judith Rowe (Princeton University), who pointed out that there are two kinds of
administrative data, mandatory and voluntary, and that they require different kinds of statistical solutions. Carlson added that the skills needed for innovation projects are often lacking in both agencies and government contractors.
Organizational and Process Barriers
Some barriers can be thought of as structural in nature—that is, they exist because of processes imposed on an agency as a unit in the federal government. Other barriers may be amenable to individual agency initiative. Haltiwanger pictured the federal statistical system as a huge, lumbering organization, which he called a barrier to innovation. Innovation needs to be a grassroots activity, he argued, and it is better to have many smaller activities rather than a few major ones. For innovation to succeed, it is not necessary to have a grand plan with huge numbers of people. Agencies have to understand how to build incubators of innovation and allow them to flourish. Robert Groves also liked the idea of multiple grassroots projects.
Among the system-wide barriers that were noted during the discussion, several related to the nature of the budget process and procurement rules. Michael Horrigan (Bureau of Labor Statistics) said that one of the barriers to innovation is that the budget process requires more certainty than one has when trying to innovate. It is difficult to obtain funding for projects in the absence of a guaranteed outcome, and being up-front about the risk endangers the likelihood of obtaining funding.
In the same vein, Mesenbourg asserted that one needs a clear imperative or mandate to fund large improvement projects. As an example, he stated that the recent recession did create such a mandate for improvements in data collection in the service sector, and the Longitudinal Employer-Household Dynamics Program became a very easy sell in the wake of the recent financial crisis.
With respect to procurement and recruitment, Carlson said that current government-wide rules make it difficult, if not impossible, to enter into flexible agreements with contractors to pursue innovation projects.
Groves suggested that there are different barriers for small and large agencies, and therefore the solutions may be different for different agencies. Particularly for small agencies, coalitions across agencies and improved contracting procedures are important, he said. The issue then becomes one of how to systemically encourage coalitions and alliances.
Newell pointed to a different kind of structural problem: a monopolistic environment and lack of competition that can lead to stagnation and a failure to innovate. Agencies that find themselves in a monopolistic position may not be attentive to changing demand and may develop institutional structures that resist change.
With respect to organizational barriers in an individual agency, Haltiwanger said that in his view the location of a research organization within the agency is very important. Agency research organizations are properly dedicated to the function of research, although they can still be integrated into the work of the agency.
Users as a Barrier
With respect to the barriers to innovation, Eva Jacobs (Bureau of Labor Statistics) noted that sometimes the user is the barrier. For example, users of the consumer price index (CPI) and other surveys do not like change in the time series or the questions that are asked. In order to produce anything new in the CPI, you have to sell it, she said—both to labor organizations and to business organizations.