This chapter provides some illustrative examples for implementing the committee’s vision for convergent engineering research centers (CERCs). They are for the National Science Foundation’s (NSF’s) consideration only and are not intended to be prescriptive. Three examples are provided: creating a CERC handbook, options for improving the proposal process, and considerations for choosing appropriate performance metrics.
In multidisciplinary teams, such as CERCs, each technical discipline has its own words, concepts, methods, and, ultimately, business models. As described in the 2015 National Research Council (NRC) report Enhancing the Effectiveness of Team Science,1 unless the team shares a common language, poor results are the rule. An NSF CERC handbook would describe the basic concepts and collaboration processes that partners use to work together successfully.2 The NSF Innovation Corps (I-Corps) program uses a prescribed customer and partner discovery process for technology innovations.3 The NSF CERC handbook would include some of those ideas along with others required for developing major new innovations that address grand-challenge-like opportunities.
As documented in the 2015 NRC report on team science, the only effective way for people to understand novel concepts is by using them while working on their projects. Training that is not task-focused is generally ineffective. Introducing the NSF CERC handbook to teams at the start of their projects in a workshop-like format is a best practice. Early introduction allows the teams to learn and apply the concepts while improving their proposals. The CERC handbook would also describe best practices for promoting innovation during the execution of the project.
1 National Research Council, 2015, Enhancing the Effectiveness of Team Science, The National Academies Press, Washington, D.C.
2 Enterprises that are systematic high-value innovators, such as IDEO, Apple, GE, SRI International, the Defense Advanced Research Projects Agency, ARPA-E (Advanced Research Projects Agency-Energy), I-ARPA (Intelligence Advanced Research Projects Activity), and the National Science Foundation’s I-Corps (Innovation Corps), have value-creation handbooks.
3 I-Corps teams are made up of a principal investigator (PI), typically a faculty member; an entrepreneurial lead (EL), typically a graduate student or postdoctoral researcher; and an industry mentor (IM), drawn from the business community. During the 7-week course, the team conducts 100 interviews with potential customers and partners while developing and refining a business model. This intensive engagement between the academic team and the business world mimics that of a startup.
Rapid improvement is driven by continuous, constructive feedback. The CERC handbook could facilitate recurring forums in which research teams come together in person or virtually to present their value propositions, listen to critiques from their teammates, and learn from one another.4
FINDING 6-1: Successful innovation programs in industry and government often rely on common reference tools that describe best practices for forming research and development (R&D) teams and facilitating innovation.
RECOMMENDATION 6-1: The National Science Foundation should consider creating a convergent engineering research centers (CERC) handbook describing best practices in team research and facilitation of innovation that is appropriate for use by CERC researchers and students.
NSF has a proposal process that has been effective for decades.5 It is based on white papers, pre-proposals, full proposals, and site visits. These remain essential steps in creating high-performance CERCs.
Compelling CERC proposals must articulate an important societal research opportunity, demonstrate a deep understanding of the key technical challenges involved, present a working hypothesis for a solution to those challenges, identify the multiple disciplines needed to pursue it, and describe the formation of the best academic and industry teams. Addressing these issues requires a rigorous pre-proposal process. Specifically, the appropriate team is unlikely to be assembled until the opportunity and its specific technical challenges are understood.
There is significant consensus across many international programs that effective pre-proposal planning can lead to stronger proposals (in terms of team formation, commitment, and identifying integrated challenge goals). In particular, structured exercises designed to support proposal development can help ensure more effective identification (and refinement) of collective research goals and elicit more detailed commitment from industrial partners. Pre-proposal development exercises can also support more effective team formation by more clearly identifying capability and expertise gaps, more clearly revealing the complementary capabilities of potential team members, and creating awareness among potential team members (or collaborators) of individual expectations regarding project outcomes and impact.6
NSF currently has a pre-proposal requirement for ERCs that contains detailed instructions for what must be included. However, the committee believes that forming an outstanding team and achieving deep collaboration in future CERCs will require more upfront work by NSF and the proposing teams before the submission of final proposals. Today much of this upfront effort occurs after the center is formed, which is inefficient. The committee examined models used by other agencies, especially the Defense Advanced Research Projects Agency (DARPA), and identified ideas that NSF might consider for optimizing a pre-proposal process for future CERCs. An example is provided in Box 6.1.
FINDING 6-2: NSF is to be commended for using a pre-proposal process in the development of ERC proposals. However, to facilitate the success of its CERCs, it could employ still other models for this process that call for greater focus on developing a center’s initial value proposition and optimizing team formation.
RECOMMENDATION 6-2: The National Science Foundation should consider developing a rigorous pre-proposal process that allows for the identification and incubation of high-value societal opportunities with a compelling working hypothesis for the solution of the underlying challenges and for the formation of the best research team. This process should be codified in its convergent engineering research centers (CERC) handbook and reviewed and improved periodically to ensure its value for achieving the vision for new CERCs.
4 C.R. Carlson and W.W. Wilmot, 2006, Innovation: The Five Disciplines for Creating What Customers Want, Random House, New York.
5 National Science Foundation, “Gen-3 Engineering Research Centers (ERC) Partnerships in Transformational Research, Education, and Technology,” http://www.nsf.gov/pubs/2015/nsf15589/nsf15589.htm#prep, accessed November 19, 2016.
6 E. O’Sullivan, 2016, “A Review of International Approaches to Center-Based, Multidisciplinary Engineering Research,” paper commissioned for this study, available at https://www.nae.edu/Projects/147474.aspx.
Various metrics can be used to judge the performance of CERCs: the number of students graduated, the number of scientific publications, standing within the research community, industry participation, the number of innovations and start-ups spawned, products commercialized, and overall economic and social impact. Many of these metrics—for instance, the numbers of participating companies, patent disclosures, and participating underrepresented minorities or women—can foster a “box-checking” mentality that is not useful. In addition, intellectual property that goes unused and companies formed with little or no capitalization are not indications of success. Today’s ERC performance metric reports can be hundreds of pages long. Such detailed reporting takes precious resources from the centers and provides only limited useful information.
Meaningful Quantitative Metrics
The goal for metrics is to measure results, not just outputs. Commonly used metrics include licenses, start-ups, and students graduated with the right skills. However, to a great extent, these measures are outputs. The metrics that matter are the economic, health, or security impacts created across society. An example would be evidence of widespread use of a center’s intellectual property within high-value commercial products or in other applications that have broad, recognized societal benefits, such as new standards or tools. Metrics should not discourage CERCs from taking calculated risks in the name of making breakthroughs or from changing course and redefining objectives in response to a changing technological environment. Metrics for the CERCs must be serious but flexible enough to allow the right decisions to be made over the lifetime of the CERC. The goal is to accelerate learning and achievement.
The committee observed that, in attempting to hold centers to high standards and ensure transformative activities and outputs, the burden of measuring every activity can impede innovation, especially in an evolving or emergent discovery environment. In the future, emerging technologies such as business analytics and metrics platforms, already in use in major corporations, should be able to capture this information automatically and thus reduce reporting burdens.
These metrics can be part of a performance “dashboard” that is updated continuously, rather than compiled into an overly detailed written report every year. Yearly reports are out of date by the time they are delivered, and they do not allow continuous adjustments as the CERC progresses. A one-page dashboard containing the essential metrics can provide valuable and effective feedback.
With emerging collaboration platforms, all CERC activities will be open to all appropriate partners. The dashboard, the NSF CERC handbook, the value propositions being developed by the teams, and the progress toward center goals will be visible to everyone. Thus everyone, including students and the government, will be fully apprised of the team’s ongoing performance. This will greatly accelerate research and team learning while allowing new teammates to be added rapidly.
Collaboration in today’s workplaces is increasingly mediated by communication tools, whether for asynchronous communication (e.g., e-mail and document creation and management tools such as Office 365), synchronous communication (e.g., real-time collaboration tools such as Slack), or in-person and online meetings (e.g., Skype and GoToMeeting). These tools allow universities and businesses to track, document, and analyze the effectiveness of collaboration in their environments in ways that previously were not possible. For example, a company called VoloMetrix,7 recently acquired by Microsoft, created software that monitors e-mail traffic and meetings, along with other metrics, and provides analytic insights that tie employee activities to business results. Microsoft’s Office 365 Delve Analytics can track document creation and modification and give people insights into the progress of their work group and the tasks that remain to be performed. These are just a few examples; the number of new business intelligence tools being introduced into the market is increasing. Looking forward, these tools will provide the opportunity to improve not only traditional corporate work but also the way research is performed, managed, and documented.
FINDING 6-3a: Reporting requirements for centers can be burdensome and often do not reflect important impacts or the dynamism of center activities.
FINDING 6-3b: Emerging collaboration platforms allow real-time tracking and longitudinal follow-up of center research activities and of students, faculty, and collaborators who have been engaged at the centers, all with less burden on the centers.
RECOMMENDATION 6-3: Metrics should be minimal, essential, and aligned with center milestones and processes and should be defined in a center’s strategic plan. The convergent engineering research centers should use state-of-the-art web-based collaboration platforms, such as performance dashboards, to amplify team collaboration and simplify reporting requirements.
Evolution of Metrics
Appropriate performance metrics vary according to the stage of maturity of the centers and whether the research problem relates more to economic or to other measures of societal benefit. Very few performance metrics of substance can be obtained during the first 1 to 3 years of a CERC’s existence,8 because the teams are just beginning their research, and the creation of significant new papers and commercial innovations from a CERC’s initiatives during that period is unlikely. The best practice is therefore to measure how well the teams are using the team research and value-creation methodologies, including metrics for collaboration such as jointly authored papers or conference presentations, weekly discussions with colleagues, and quarterly value-creation forums.
7 H. Clancy, 2015, “Microsoft buys ‘people analytics’ startup VoloMetrix,” Fortune, September 3, http://fortune.com/2015/09/03/microsoftbuys-volometrix/.
8 Exceptions may arise when centers grow out of pre-existing collaborative university-industry research efforts.
Later in the life of the center, metrics should be based on progress toward the projected impact in the economic, security, or societal domains—that is, on outcomes rather than specific outputs such as papers or patents. Given the scope and complexity of problems that are expected to be tackled by CERCs, the full impact of a CERC’s work may not be apparent during the life of the center itself, so the assessment of progress must be based on the strategic plan’s milestones.
The CERC, as part of its strategic plan, should identify and collect the metrics appropriate for the center over its lifetime. Many of these may initially be qualitative rather than measures of outputs or outcomes. During the later years of NSF funding, a CERC may produce significant numbers of published papers, patent filings, and so on. The real measures of outcomes resulting from CERC activities, such as economic value and innovation-savvy engineering graduates who rise to become leaders, may not be ascertainable until years after NSF funding has ended. Developing and implementing a framework for measuring and assessing the actual impact of the CERCs would enable NSF to demonstrate their value and build a robust database of economic and noneconomic outcomes over the short and long terms. The framework would need to include a system to track outcomes from a center for at least 10 years after NSF funding has ended; to observe real economic value, the tracking period may in some cases need to be longer.
FINDING 6-4: Appropriate performance metrics for CERCs will vary according to their time in operation and the type of research problem they have chosen to address.
RECOMMENDATION 6-4: Early in the life of convergent engineering research centers (CERCs), performance metrics should be based on their adherence to team research and value creation best practices. Later in the CERC’s National Science Foundation funding life, metrics should be based on the CERC’s impact on the economic, security, or societal domains as laid out in its strategic plan.
Some international center programs highlight the importance of performance metrics that are tailored to the “impact logic” of the center being evaluated. For example, generating patents is not an objective for some centers because the participating partners or sectors do not have this as part of their business logic. Some international programs give funded centers the freedom to track and report additional novel metrics that are not specified in official reporting forms but are identified by the centers themselves.9
FINDING 6-5: The real impact of a CERC, including the career contributions of its students and faculty, changes resulting from new science, and creation of novel technologies, will usually not be fully apparent during the NSF funding life of the center.
RECOMMENDATION 6-5: The National Science Foundation should develop and implement a framework for retrospective studies of the economic and societal impacts of the convergent engineering research centers (CERCs) and apply lessons learned in the establishment of new CERCs.
9 E. O’Sullivan, 2016, “A Review of International Approaches to Center-Based, Multidisciplinary Engineering Research,” paper commissioned for this study, available at https://www.nae.edu/Projects/147474.aspx.