A detailed methodology is used by the Army’s analysis community to develop measures of performance (MOPs) and measures of effectiveness (MOEs), especially those needed to make design trades among alternatives during design and development and to determine the relative contributions of multiple factors.
The development of MOPs and MOEs follows a sequence of steps similar to the following:
• Identify a military utility (e.g., enhanced tactical small unit (TSU) effectiveness in stability operations) that can be impacted by a Doctrine, Organization, Training, Materiel, Leadership and Education, Personnel and Facilities (DOTMLPF) effort (e.g., access to local sociocognitive networks).
• With this military utility in mind, identify supporting objectives (e.g., determine access to sociocognitive databases).
• Once objectives are formulated, identify essential elements of analysis (EEAs), which are basically the key questions one might ask to support the objectives. For example, an EEA might be, What role does information exchange across a network play in the utilization of these sociocognitive databases?
• Identify issues derived from the EEAs. For example, information exchange (especially for digital images and streaming video) is very poor at the TSU level. Limited bandwidth is one issue. Another is that operations tempo does not give TSUs enough time to download, evaluate, and make judgments based on available information—that is, it is very easy to reach information overload.
• Identify hypotheses that address each issue. For example, Soldiers and TSUs would benefit from advancements in dynamic communications, information, and sociocognitive networks for enhancements of information exchange and assessment of information.
• Identify the data needed to prove or disprove each hypothesis.
• Identify metrics (the MOPs and/or MOEs) needed to collect the needed data.
• Develop scenarios that will generate opportunities for the collection of data to measure performance and effectiveness.
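The sequence of steps above can be sketched as a simple data structure that traces a single military utility down to its metrics and scenarios. This is an illustrative sketch only; the class name, fields, and example strings are hypothetical and not drawn from Army doctrine.

```python
from dataclasses import dataclass, field

@dataclass
class MetricDevelopment:
    """Hypothetical container tracing one military utility through the
    methodology: objectives -> EEAs -> issues -> hypotheses -> data ->
    metrics -> scenarios."""
    military_utility: str
    objectives: list = field(default_factory=list)
    eeas: list = field(default_factory=list)       # essential elements of analysis
    issues: list = field(default_factory=list)
    hypotheses: list = field(default_factory=list)
    data_needs: list = field(default_factory=list)
    metrics: list = field(default_factory=list)    # MOPs and MOEs
    scenarios: list = field(default_factory=list)

effort = MetricDevelopment(
    military_utility="Enhanced TSU effectiveness in stability operations")
effort.objectives.append("Determine access to sociocognitive databases")
effort.eeas.append("What role does information exchange across a network play?")
effort.issues.append("Limited bandwidth degrades image and video exchange")
effort.hypotheses.append("TSUs benefit from advances in dynamic networks")
effort.data_needs.append("Message volume and latency observed in exercises")
effort.metrics.append("MOP: time to disseminate a fragmentation order")
effort.scenarios.append("Stability-operations vignette with degraded comms")

print(len(effort.metrics))  # 1
```

The point of such a structure is traceability: every MOP or MOE at the bottom can be traced back through a hypothesis and an EEA to the military utility it supports.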
In conducting assessments for the dismounted TSU and Soldier, the Army should use a methodology similar to that outlined above to create the most appropriate system and system-of-systems metrics—MOPs and MOEs. MOPs should assess what Soldiers or TSUs achieve in terms of technical performance. In general, the MOPs used by the Army are quantitative, but they can also apply qualitative attributes to task accomplishment. Simply put, MOPs measure what Soldiers and TSUs are doing; they also prompt the system designer and evaluator to ask whether the TSU or the Soldier is doing the right things to achieve the desired effect. Examples of Soldier MOPs include measurable enhancements to Soldier mobility and endurance (e.g., from offloading physical and mental loads, enhancing nutrition, improving sleep cycles, and altering mission duration times); measures of the ability to develop Level I situational awareness; the ability to be “culturally correct” when interacting with local nationals; reductions in the probability of being hit by threat munitions because of improvements in agility; and assessments of the Soldier’s sensory (visual, auditory, tactile, and olfactory) perception using measures such as detection, position, recognition, identification, time, distance, and error. Examples of TSU MOPs include measures of the ability to integrate nonorganic fires and effects; the ability of the TSU to shoot down incoming unmanned aerial threats (e.g., small drones); the ability of the squad to offload and then recover equipment before and after a mission; the time needed for TSU leaders to accurately convey appropriate parts of a mission plan (or fragmentation order) to all members of the TSU; and the time needed to complete a mission.
MOEs assess the impact of the actions of the TSU and the individual Soldier on the effectiveness of achieving mission and task objectives. These measures assess changes in behavior, capability, or operational environment; they do not measure task performance. They measure what is accomplished and help to verify whether objectives, goals, and end states are being met. They are typically more subjective than MOPs and can be defined as either qualitative or quantitative measures. For instance, an MOE may be based on quantitative measures to reflect a trend and show progress toward a measurable threshold. Examples of Soldier MOEs include the percentage of time a Soldier is distracted from focusing on the mission/objective, measures of the ability of a Soldier to exploit his situational understanding, and measures of the ability of a Soldier to contribute to TSU effectiveness. Examples of TSU MOEs include measures of the ability to engage enemy threats outside the range of enemy weapons; the ability to successfully achieve the commander’s intent; the percentage of time the TSU is surprised by the enemy; the ability of the squad to rapidly adapt (mentally and physically) to the loss of personnel or a warfighting capability; and the ability to enhance individual Soldier Level II and Level III situational awareness.
In addition to MOEs and MOPs, a systems engineering approach will also require appropriate indicators. An indicator is an event that serves as evidence that an effect is being accomplished or, for an MOP, that an output or outcome is being achieved. Good indicators are clear, concise, and, most important, reasonably related to an MOP or MOE. Indicators may be quantitative (e.g., the number of weapons and/or shots needed to shoot down a drone) or qualitative (e.g., the number of subject matter experts who agree that a TSU achieved the commander’s intent). A single indicator can support more than one MOP or MOE. For example, a reduction in the number and length of radio calls within a TSU may be an indicator of better shared situational awareness (an MOE); a significant positive impact of Soldier, TSU, and leadership training methodologies (an MOE); more time for the Soldier and TSU to focus on assigned tasks and missions (an MOE); enhanced individual cognitive performance (an MOP); a well-designed Soldier-centric interface to the TSU network (an MOP); and a high-performing information-sharing system on the TSU network (an MOP).
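The one-to-many relationship between an indicator and the metrics it supports can be made concrete with a small mapping. This is a hypothetical sketch; the indicator and metric names are taken from the radio-call example above, and the grouping logic is illustrative.

```python
# One indicator supporting several MOPs and MOEs, per the radio-call example.
indicator = "Reduced number and length of radio calls within a TSU"

supported_metrics = {
    indicator: [
        ("MOE", "Better shared situational awareness"),
        ("MOE", "Positive impact of training methodologies"),
        ("MOE", "More time to focus on assigned tasks and missions"),
        ("MOP", "Enhanced individual cognitive performance"),
        ("MOP", "Well-designed Soldier-centric interface to the TSU network"),
        ("MOP", "High-performing information-sharing system"),
    ],
}

# Group the supported metrics by type to see the indicator's full reach.
by_type: dict[str, list[str]] = {}
for kind, name in supported_metrics[indicator]:
    by_type.setdefault(kind, []).append(name)

print(len(by_type["MOE"]), len(by_type["MOP"]))  # 3 3
```

In an actual assessment, a table of this kind would let analysts confirm that every MOP and MOE has at least one observable indicator, and flag indicators that carry evidentiary weight for many metrics at once.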
Why MOPs and MOEs Are Important
The lack of adequate MOPs and MOEs has resulted in a lack of accountability for dismounted TSU and Soldier performance. Perhaps more important for the subject of this report, the lack of MOPs and MOEs that realistically assess both human and materiel contributions to required capabilities has vitiated real progress toward holistic design and evaluation of the TSU and the Soldier, despite a history of advice on achieving that end (see Appendix D).
Compared to its Marine Corps counterpart, the Army light infantry squad has had an unstable organizational structure and size. No one at the Infantry School or in the research and development (R&D) centers could give the committee a rationale for the current nine-person size of the dismounted TSU other than military judgment. In fact, other infantry-like formations in both the Army (e.g., Special Forces) and other military Services have explored alternative squad sizes and structures, and there seems to be no clear consensus that the current squad size is optimal for any specific environment, let alone all environments encompassed by unified land operations.
At the Army Maneuver Center of Excellence and at the R&D centers visited by the committee, many training technology demonstrations were briefed, but few had been widely adopted. Comments from the two roundtables with postdeployment noncommissioned officers (NCOs) suggest these combat-proven TSU leaders were unaware of many of these training technologies and were too rushed in their deployment readiness training to make use of new training opportunities or approaches.1 The committee found no evidence of objective metrics to indicate state of training for units preparing to deploy or in theater.
Communications, intelligence, and logistics technologies briefed at the R&D centers visited by the committee as having been demonstrated with positive results and available for more widespread fielding or use were described in the NCO roundtables as “Conex-fillers,” which were too much trouble to learn how to use and exploit. Although some of the NCOs attributed these lost technology opportunities to “drive-by fielding,” the committee believes a more likely explanation is the lack of appropriate tactics, techniques, and procedures to guide their use, of system integration, and of training resources to enable TSU mastery of the available technology prior to deployment. Accountability to TSU performance metrics would be an incentive for TSU leaders to continually seek better approaches, including new technologies.
Substantial knowledge exists about the relationships between nutrition and physical and cognitive performance that is directly pertinent to TSU performance. Medical and food technology scientists at the Natick Soldier Research, Development, and Engineering Center reported that available rations were underused or misused. Indeed, based on the committee’s observations during site visits, this appears to be a feature of infantry training: early-stage Soldiers learn informally during training how to take apart or “field strip” the carefully constructed and designed rations now being deployed. Leaders trained to see the relationship between performance and what their Soldiers are (or are not) consuming, together with trainers teaching Soldiers how to eat, could improve overall performance and endurance and make better use of the rations provided. Furthermore, having MOPs and MOEs for field performance will provide baseline performance levels from which to evaluate potentially useful new developments.
MULTIVARIATE ANALYSIS OF MOP AND MOE DATA
To adequately assess the military performance and effectiveness of the Soldier and the TSU as a complex system or system of systems encompassing all the DOTMLPF domains, a systems analysis effort, such as multivariate analysis, is needed to support detailed estimation and prediction techniques. A multivariate analysis involves the observation and analysis of multiple variables at the same time. From a systems engineering perspective, this type of analysis is used to perform trade studies across multiple dimensions while taking into account the effects of all variables on the military performance or effectiveness being assessed. Variables are identified as dependent (that which is being measured or observed) or independent (that which is deliberately varied or controlled).
1Informal discussions between the committee and noncommissioned officers and officer candidates during Meeting 3, July 12, 2012, at Fort Benning, Georgia, and between the Board on Army Science and Technology and noncommissioned officers and officer candidates, February 23, 2010, at Fort Bliss, Texas.
Depending on the design of the experiment, a variable may be dependent in one case and independent in another. If the amount of information flowing to an individual is varied, one may observe an increase in cognitive workload (workload may go up with too much information; it may also go up with too little or no information). In this case, cognitive workload is the dependent variable, a function of the amount of information (the independent variable) presented to the Soldier, and cognitive workload could be used as a measure of performance.
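The U-shaped relationship described above (workload rising with both too little and too much information) can be illustrated with a simple curve fit. The data below are synthetic and purely illustrative; only the shape of the relationship, not the numbers, reflects the text.

```python
import numpy as np

# Synthetic, illustrative data: cognitive workload (dependent variable)
# observed at several levels of information flow (independent variable).
# Workload is high at both extremes, per the inverted-U description above.
information = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
workload    = np.array([8.0, 5.0, 3.0, 2.5, 3.2, 5.1, 8.2])

# A quadratic captures the U-shaped dependence.
coeffs = np.polyfit(information, workload, deg=2)

# The vertex of the fitted parabola suggests an information level
# that minimizes workload for this (synthetic) data set.
optimum = -coeffs[1] / (2 * coeffs[0])
print(float(optimum))
```

Fitting a curve like this is one way an analyst could turn raw experimental observations into an MOP-relevant statement, for example, "workload is minimized near a given rate of information flow."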
Likewise, if cognitive workload is varied, one may observe variability in the quality (e.g., in terms of appropriateness or timeliness) of the decisions being made. In this case, the quality of decisions (the dependent variable) is a function of cognitive workload (the independent variable), and the quality of the decisions is a measure of effectiveness. Note also that the “appropriateness” of decisions may be a subjective assessment that needs indicators (for example, blue forces unnecessarily sent in harm’s way, a choice of approach route that does not offer the tactical advantage of other routes, or calls missed from subordinates and supervisors) to validate its assessment.
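A minimal multivariate sketch of the kind of trade study described in this section might model decision quality (an MOE) as a function of two independent variables observed at the same time. The variables, coefficients, and data below are entirely synthetic assumptions chosen so the example is checkable; they do not come from any Army data set.

```python
import numpy as np

# Synthetic observations: decision quality (dependent variable, an MOE)
# as a function of cognitive workload and time available (independent
# variables). Constructed so quality = 10 - 1.0*workload + 0.5*time.
workload   = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
time_avail = np.array([6.0, 3.0, 5.0, 2.0, 4.0, 1.0])
quality    = 10.0 - 1.0 * workload + 0.5 * time_avail

# Design matrix with an intercept column; solve ordinary least squares.
X = np.column_stack([np.ones_like(workload), workload, time_avail])
beta, *_ = np.linalg.lstsq(X, quality, rcond=None)

# beta recovers [intercept, workload effect, time effect]:
# a negative workload coefficient and a positive time coefficient.
print(np.round(beta, 2))
```

The value of the multivariate formulation is exactly what the text describes: the effect of each variable on the MOE is estimated while accounting for the others, so analysts can trade workload against available time rather than studying either in isolation.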