Best Practices in Assessment of Research and Development Organizations
programs have developed that focus primarily on thrusts in materials, power sources, propulsion, protection, sensing, and human interfacing. Recognizing that results from these programs must ultimately be integrated to yield a final system, cross-organizational efforts have been launched, and the area of autonomous systems has been elevated to senior-level visibility. This effort represents a very large component of the ARL science and technology program, yet it grew from many rather separate local programs that were later brought together for coordination. Many of these programs have been maturing for years. In some respects this ARL Autonomous Systems Program resembles what the NIST Advanced Manufacturing Program might look like several years from now. How is the Autonomous Systems Program at ARL progressing? What is the technical quality of the work being done? Are appropriate areas being covered? Are required cross-disciplinary issues being addressed? These were the challenging questions posed by ARL to the ARL Technical Advisory Board (ARLTAB) of the NRC.
The ARLTAB is charged with regular assessments of the technical quality of the work done at ARL. Peer panels of experts visit ARL's roughly discipline-based directorates annually and review most of the ongoing program over a 2-year period. To address the one-time charge of assessing the Autonomous Systems Program, the NRC identified technical experts from the various existing panels and assembled them into a large panel whose range of skills spanned the program. The panel heard overview talks presented by leaders of the various thrusts and integrated programs. It then broke into smaller groups for simultaneous sessions, generally organized around posters, where panel members could meet directly with scientists and engineers to take the pulse of current activity.
This challenging set of presentations and the format of the review created a first-time opportunity for program self-evaluation. Representatives of some parts of the program were learning more about other parts. This learning vehicle proved useful for researchers in a large, loosely connected set of R&D programs in separate buildings on two campuses. One important advantage in such a review is being able to assemble the panel from a large cadre of experts, each of whom has familiarity with the global mission of the laboratory and an in-depth knowledge of part of the laboratory from previous service on a disciplinary peer-review panel.
Both examples of cross-organizational programs cited above represented R&D within a large laboratory structured mainly along disciplinary lines, and they spanned the time frame from program inception to maturity. Matrix management is not the overarching structure in these organizations, nor are most of their efforts cross-organizational. In each instance, however, important aspects of program assessment would not have occurred through a process organized according to disciplines. It is also noteworthy that in each case the crosscutting effort benefited significantly from assembling the peer group from a large cadre of experts who were already fully engaged in disciplinary assessment of elements of the programs. Panels organized for assessment of technical directorates are an ideal source of expertise for specialized, ad hoc panels targeted at cross-organizational issues.