In addition to addressing the assessment criteria, the Board offers several observations in other areas. First, although there appears to be a significant number of collaborations of various sorts, it is often unclear how those collaborations actually interact with ARL programs (as opposed to simply being funded grants), and what portion of the results reported from the collaborations is attributable to ARL staff rather than to external researchers and contractors. This information is important for judging the overall level of expertise of the ARL staff. The proliferation of Collaborative Technology Alliances (CTAs) and International Technology Alliances (ITAs) in particular represents collaborations that have received less review than other activities, and the Board cannot properly judge their overall effect on ARL's portfolio.
Second, judging CISD's understanding of the state of the art would be aided by more explicit discussion in reviews of CISD's view of the state of the art elsewhere, and by identifying the metrics in which improvements would signal success for ARL projects.
The work at CISD continues to be generally well targeted to Army needs. The machine translation work continues to drive deployments into the field and aids the processing of newly discovered document troves. BED continues to maintain its Army and national science niche in defining and predicting the characteristics of meteorological phenomena that are critical to determining the properties of the atmosphere on the time and space scales relevant to rural and, of increasing importance, urban battlefield situations. The growing focus on networking at multiple levels correlates directly with the growth of the network-centric battlefield and the need to integrate disparate information sources in real time to support decision making.
Prior ARLTAB assessments have noted the recognized exceptional contributions of the machine translation work. This continues to be the case.
Judging the contributions of much of the rest of CISD to the broader community remains more difficult. There appears to be significant variance across the divisions in the number of publications, the quality of the publication venues, and the impact of the work. A variety of indices are used in academia for such purposes, including the h-index for citations and impact factors for publication venues. Data sources for computing such indices can be found at Web sites such as those for Google Scholar, ISI Web of Science, Science Citation Index, and CiteSeer. Performing such self-evaluations in advance of reviews would help both the Board and ARL to identify where the leading contributions originate and which venues should be targeted to maximize the exposure of research results.
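As an illustration of the kind of self-evaluation suggested above, the h-index is straightforward to compute once per-paper citation counts have been gathered from a source such as Google Scholar or ISI Web of Science. This is a minimal sketch, not any particular index service's implementation; the citation counts in the usage example are hypothetical.

```python
def h_index(citations):
    """Compute the h-index: the largest h such that at least
    h of the papers have h or more citations each."""
    # Sort citation counts from highest to lowest.
    counts = sorted(citations, reverse=True)
    h = 0
    # Walk down the ranked list; rank i qualifies while the
    # i-th paper still has at least i citations.
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for a researcher's five papers:
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

A researcher with the counts above has an h-index of 4, since four papers have at least four citations each but not five papers with at least five.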