program manager and others he or she invites usually hold a 1-day meeting. Every year, a 6- to 10-page report provides a synopsis of the work. That report cannot encompass all the activities under the award in detail, but it does cite published or otherwise presented research results; the funding is geared toward producing results and publishing them. An annual contractor’s meeting is also held at which all PIs present their results. During the third year of the award, a more extensive review is conducted. The program director’s report on this review meeting is assessed at higher levels in the DOD to decide on renewal.

Shamma continued by briefly describing the nature of the UCLA-led MURI effort (the partners are the California Institute of Technology, Cornell University, and the Massachusetts Institute of Technology), which was focused on control of multivehicle systems and multiagent decision systems. Managerially, all of the partner institutions subcontract to UCLA. The effort is in its fourth year, having been successfully renewed during the option period. It concentrates on the computational end of control theory rather than on hardware and prototype development, although the group has a few hardware test beds that were funded under a DOD Defense University Research Instrumentation Program (DURIP) award, a program specifically designed to help fund special equipment needs.

During the MURI effort, members of the UCLA-led team had several opportunities to interact with one another. Members attended short courses taught by team members from other universities to gain an overview of the research being performed throughout the MURI. Shamma and a few postdoctoral fellows spent time at an Air Force base. Students in the program participated in intercampus visits, spending anywhere from a week to an entire school quarter at other institutions working with team members. The team also conducted unified test bed studies, demonstrating results on a common hardware platform and prototype. This allowed the team to contextualize the results in terms of a particular challenge, whether a simulation challenge or a hardware challenge.

Shamma then critiqued his experience in the MURI program as a co-PI and compared it with his experience as a participant on other DARPA, NSF, and NASA proposals or projects. He described the MURI review process as humane. Feedback is provided early in the process, before the proposers have made any large initial investment; it typically comes immediately after an introductory phone call with the DOD program manager and again after submission of the white paper. Given that weeding-out process, only a few groups compete for the final award. In contrast, some NSF solicitations in which Shamma had participated received over 1,000 letters of intent for approximately 30 awards. He also mentioned that in the MURI proposal process there is always a front-runner who has assisted the program director in identifying areas of research. However, front-runners are not guaranteed funding.

Shamma also mentioned that there was no direct competition with DOD investigators in the MURI process. In contrast, in a NASA solicitation for which he had submitted a proposal, 15 of the 20 winning proposals were NASA proposals—a discouraging result. Shamma said he understood the need to include NASA researchers in the funding but believed it would be better to separate the sources of funding into internal and external sources for the purposes of competition. NASA attendees interjected that this particular outcome had been an artifact of the methods used in the past and was not the way NASA planned to solicit in the future. Shamma insisted that some of the university recipients of the funding had offices at NASA Ames. He called it an unclear competition.
