may matter much less than the timeliness of the idea and the readiness of the environment to address it successfully. Many old ideas, once dismissed as failed concepts, have resurfaced years later at the “right time” and made a significant difference. In other words, the key question is not so much, What are the new ideas? but rather, What are the ideas whose time has come?

  • Measurement of effectiveness and performance. The challenges of software measurement as discussed in the previous chapters—with respect to process measures, architecture evaluation, evidence to support assurance, and overall extent of system capability—apply also to software engineering research. We lack, for example, good ways to measure the impact of any specific research result on software quality, in part because we lack good measures of software quality itself. Without reliable, validated measures it is hard to quantify the impact of innovations in software producibility, even those widely credited with improving quality, such as the introduction of strong typing into programming languages or of traceability in software-development databases. This is analogous to the productivity paradox, which was only recently resolved.13 Because software is an enabling technology—a building material rather than a built structure—it may not fit research program management models that focus on production of artifacts with immediately, clearly, and decisively measurable value.

  • Timescale for impact. Frequently, it is only after a significant research investment has been made and a proof of concept demonstrated that industry steps in to transition a new concept into a commercial or in-house product. Moreover, many novel products and services result from multiple independent research results, none of which is decisive in isolation but which, when creatively combined, lead to breakthroughs. Although it may appear that a new development emerged overnight, closer inspection usually reveals decades of breakthroughs, incremental advances, and insights, primarily funded by federal grants, before a new approach becomes commonly accepted and widely available. CSTB’s 2003 report Innovation in Information Technology reinforces this point. It states, “One of the most important messages … is the long, unpredictable incubation period—requiring steady work and funding—between initial exploration and commercial deployment. Starting a project that requires considerable time often seems risky, but the payoff from successes justifies backing researchers who have vision.”

AREAS FOR FUTURE RESEARCH INVESTMENT

In this section, the committee identifies seven areas for potential future research investment and, for each area, a set of specific topics that the committee judges to be both promising and especially relevant to defense software producibility. These selections are made on the basis of the criteria outlined at the beginning of this chapter. The descriptions summarize scope, challenges, ideas, and pathways to impact. They are not, however, program plans, even in summary form: developing program plans from technical descriptions requires consideration of the various program management risk issues,14 development of management processes and plans on the basis of the risks identified, identification of collaborating stakeholders, and other program management functions. In the development of program plans, choices must be made regarding the scale of the research endeavor and the extent of prototype engineering, field validation, and other activities required to assess the value of emerging research results. In some areas, a larger number of smaller projects may be most effective, while in other areas more experimental engineering is required and the research goals may be best addressed

13 This is analogous to the so-called “productivity paradox,” according to which economists struggled to account for the productivity benefits that accrued from investments made by firms in IT. The productivity improvements due to IT are now identified, but for a long time there was speculation regarding whether the issue was productivity or the ability to measure particular influences on productivity. (This issue is also taken up in Chapter 1.)

14 An inventory of risk issues for research program management appears in Chapter 4 of NRC, 2002, Information Technology Research, Innovation, and E-Government, Washington, DC: National Academy Press. Available online at http://www.nap.edu/catalog.php?record_id=10355. Last accessed August 20, 2010.


