4
Improving Environmental Information by Reducing Uncertainty
This chapter points out that:
uncertainties about environmental conditions, and information describing them, can have tactical significance;
the most cost-effective method for reducing the impact of uncertainty may vary with different types of environmental information or its intended use;
when additional observations are key to making significant improvements in the quality of environmental information (i.e., reducing uncertainty to acceptable levels), the most cost-effective means should be explored first; and
delivering a sensor to a denied location of interest may represent a significant component of the cost of collecting observations. Thus, using sensors or sensing systems employed for intelligence, surveillance, or reconnaissance purposes may present an attractive option so long as the primary mission is not significantly impaired.
As discussed in the previous chapters, the meteorological and oceanographic (METOC) enterprise can be viewed as an organized effort to provide information useful to naval operations about the current and future state of the environment. This process is not perfect, but instead introduces errors that can be characterized by associated uncertainties. Thus, environmental information either explicitly or implicitly contains uncertainty that is inherent in the stochastic nature of environmental processes or that is introduced by imperfect sampling or numerical calculations using data from imperfect sampling. Understanding and living with these uncertainties, especially understanding when it is important to reduce them and how, are a primary focus of this report.
Webster’s defines uncertainty as (1) the quality or state of being uncertain and (2) something that is uncertain. Thus, to the layperson, environmental predictions might be viewed as efforts to reduce the uncertainty associated with the nature of the terrain just over the horizon or with forthcoming weather conditions. In statistics, the physical sciences, and other technical fields, however, the term uncertainty holds several specific definitions that can be expressed mathematically.
According to Ferson and Ginzburg (1996), there are two basic kinds of uncertainty. The first kind, objective uncertainty, arises from variability in the underlying stochastic system. The second kind, subjective (epistemic) uncertainty, results from incomplete knowledge of a system. Objective uncertainty cannot be eliminated from a prediction regardless of the number of previous observations of the state of a particular environmental condition (e.g., no matter how many thousand previous waves are observed, the height of the next incoming wave cannot be predicted without some uncertainty). Data collection and research into the environmental processes shaping future states can be used to understand and, to some degree, reduce uncertainty, but not without some additional cost (not just in terms of resource expenditure but also in tactical advantage as some efforts may alert opponents to a pending military operation). Probability theory, and other approaches, may provide methods appropriate for projecting random variability through the calculations that result in quantitative predictions of future states.
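The distinction between the two kinds of uncertainty can be illustrated with a small simulation (a Python sketch with hypothetical wave statistics, not figures from the report): observing more waves shrinks the epistemic uncertainty in the estimated mean wave height, but the objective spread of the next individual wave does not shrink.

```python
import random
import statistics

random.seed(0)

# Hypothetical stochastic process: individual wave heights spread
# around a 1.0 m mean with a 0.3 m standard deviation.
def observe_waves(n):
    return [random.gauss(1.0, 0.3) for _ in range(n)]

for n in (10, 100, 10_000):
    waves = observe_waves(n)
    mean = statistics.fmean(waves)
    # Epistemic uncertainty (standard error of the estimated mean)
    # shrinks as more waves are observed ...
    std_err = statistics.stdev(waves) / n ** 0.5
    # ... but the objective (aleatory) spread of the *next* wave does not.
    next_wave_sd = statistics.stdev(waves)
    print(f"n={n:6d}  mean={mean:.3f}  std_err={std_err:.4f}  "
          f"next-wave sd={next_wave_sd:.3f}")
```

No matter how large n grows, the next-wave standard deviation stays near 0.3 m; only the standard error of the mean keeps falling.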
TACTICAL IMPLICATIONS OF UNCERTAINTY
Military decision-makers rarely deal with uncertainty in a formal, statistical manner. Nevertheless, many decisions can be examined in such terms. In situations where uncertainty about environmental conditions is straightforward—that is, when uncertainty about current conditions will not impact predictions of future states—common-sense approaches are very effective and are typical of sound decision-making. Such examples are common in military situations. By examining such tactical decisions involving environmental information one can gain insight into how more complex situations may be dealt with.
In the following example from mine warfare (modified from an example presented at the Symposium on Oceanography and Mine Warfare organized by the National Academies’ Ocean Studies Board, in Corpus Christi, Texas, in September 1998; see National Research Council, 2000), coalition forces have been tasked with securing a seaport to expedite pacification of a hostile coastal nation (see Figure 4-1). The geology of two offshore islands lends itself to the natural development of many small boulders, which occur on the adjacent seafloor in large numbers. Typical seafloor mapping techniques can resolve objects of this size, referred to as mine-like objects, but distinguishing mines from other mine-like objects requires the use of unmanned underwater vehicles (UUVs), EOD (explosive ordnance disposal) divers, or other means. Because this is time-consuming and often dangerous work, practical decision-making simply identifies a shipping lane (referred to by the U.S. Navy as a Q-route) with the fewest mine-like objects. Thus, in Figure 4-1 the uncertainty associated with the nature of mine-like objects (are they mines or not?) is greater along Q1 than along Q2. Q2 becomes the obvious choice, and appropriate mine countermeasures would be applied as needed. No sophisticated or rigorous effort to quantify uncertainty is needed. It is interesting to note, however, that if the opposing forces understood the nature of the seafloor and incorporated consideration of uncertainty, even in an informal manner, into their mine-laying plans, a more efficient use of the limited number of mines available could have been achieved.
FIGURE 4-1 Cartoon of the distribution of mine-like objects and mines offshore of a foreign harbor. Since resources must be expended to determine whether each object is or is not a mine, the cost of uncertainty associated with Q-route 1 is much greater than that for Q-route 2. Thus, Q2 is the preferred route.
As shown in Figure 4-2, by concentrating mine laying in areas with a lower density of naturally occurring mine-like objects, opposing forces would eliminate the most obvious Q-route, forcing coalition forces to expend additional resources to identify and eliminate mines. This tactical application may seem implausible, as Q1 is not actually protected by mines. However, if the commander of the opposing force is convinced that coalition forces will delay efforts to secure the harbor until risk to transport ships can be held to a minimum, mine laying becomes more of a delaying tactic and the distribution of mines in Figure 4-2 is a better tactical decision. This example points out several aspects of dealing with uncertainty that can be explored in more rigorous ways in more complex examples. Foremost among these is the decision to balance the benefit of reducing uncertainty against the cost. In more complex decision-making scenarios involving environmental uncertainty, cost versus benefit will become a more important factor.
FIGURE 4-2 Cartoon of the distribution of mine-like objects and mines offshore of a foreign harbor. Since resources must be expended to determine whether each object is or is not a mine, the cost of uncertainty associated with either possible route is roughly equal.
Propagation of Uncertainty in Complex Systems
In the previous example, uncertain knowledge of the true nature of mine-like objects is easily quantifiable, and given some assumption about the resources needed to determine the true nature of those mine-like objects, the cost of that uncertainty can be approximated. In more complex situations, where uncertainty in one or more parameters is compounded when values are used as input to mathematical models of future states or far-field conditions, understanding the impacts (i.e., the cost of uncertainty) becomes more complicated. Understanding how uncertainty propagates through the various steps involved in converting data into information is an important component of the production process and of conveying the value of that information to the decision-maker (i.e., the commanding officer).
A side-scan sonar unit is lowered from the high-speed vessel Joint Venture (HSV-X1) into the waters off the coast of California during Fleet Battle Experiment Juliet. Sonar was used to locate underwater mines, to enable safe navigation of amphibious forces to reach the shoreline, during exercises in support of “Millennium Challenge 2002” (Photo courtesy of the U.S. Navy).
Environmental uncertainty propagates through a chain to the commanding officer to influence his decision; moreover, there is feedback to the several links that can be used to optimize his decision process. Figure 4-3 illustrates several aspects of this. For sonar systems the useful environmental databases are typically bathymetric charts, GDEMs, DDB (Digital Data Base) at various scales, ETOPO5 by location and time of year, sound speed profiles (e.g., as generated by MODAS [Modular Ocean Data Assimilation System]), and bottom loss (BLUG [bottom loss upgrade]), plus other terms that enter the sonar equation. The coverage and resolution of these databases drive various acoustical prediction models, often incorporated as tactical decision aids (SFMPL and PCIMAT), for describing how the environment modulates the acoustic propagation. Since the ocean is a very reverberant and refractive medium, the propagation can be quite complicated. One of the current perceptions is that these tools are limited solely by the fidelity and resolution of the environmental inputs.
There are, however, some realms where the propagation physics are not well modeled. If there is environmental uncertainty, there is also acoustic uncertainty, and it is not yet clear how to robustly describe how this uncertainty propagates. Certainly, there needs to be feedback from the acousticians to the environmental characterization of what is needed. Next, the acoustical output enters the signal processor. While there is a good understanding of the processor, its acoustical output is often so highly nonlinear with respect to errors in the acoustic models, statistical fluctuations, and system calibration errors that it is often difficult to provide a statistical prediction of its output. The so-called sonar equation is a useful guide, and receiver operating characteristics can be predicted, but they are only descriptive. There is also feedback to the rest of the components, as changing certain parameters of the signal processing can adapt and optimize the performance. Finally, the commanding officer must integrate all this uncertainty. He observes what is happening with the incoming data and must reconcile this with the prediction models. He wants to maintain his tactical advantage in terms of position, situational awareness, and risk. If he is confident in his environmental predictions, he can exploit them to fulfill his mission objectives.
FIGURE 4-3 Propagation of uncertainty in a sonar system and the role that appropriate performance feedback can play in reducing its impact.
The Remote Minehunting System is an organic, off-board mine reconnaissance system that will offer carrier battle group ships an effective defense against mines by using an unmanned remote vehicle. Current plans call for the system to be first installed aboard the destroyer USS Pinckney (DDG 91) in 2004 (Photo courtesy of the U.S. Navy).
Cost of Uncertainty
In this section the goal of the METOC enterprise is assumed to be reducing uncertainty (i.e., lack of knowledge about the nature of environmental conditions at some future time or different location) due to environmental processes, with the operational cost of that uncertainty providing guidance toward optimum strategies. During the committee’s tours of the various centers, members continually tried to explore the issue of expressing uncertainty in METOC products. One of the most interesting insights came from a METOC officer who described his working relationship with a previous operational commander1 in terms of a betting metaphor. With each forecast, particularly those with tactical importance, he was asked to rate his confidence in the forecast for the mission in one of three categories: no bet, minimal bet, and high bet. This case of an individual (and successful) relationship between a METOC officer and his operational commander illustrates several important points. First, it recognizes the fact of uncertainty and the fact that it should affect command decisions. Second, it institutionalizes uncertainty for operational purposes in a simple set of levels (a useful example of a concept of operations, or CONOPS). Third, it includes an implicit assessment of the impact of the prediction on the operation in placing the bet. That is, uncertainty in certain situations or for certain variables has no operational impact, while in other situations it is critical.
This creative solution to dealing with prediction uncertainties parallels the concepts of traditional risk analysis practiced in the business world, as discussed in Chapter 2. In risk analysis, potential adverse conditions are identified, and then estimates are made of the probability and consequences of their occurrence.
1 Current Department of Defense (DOD) doctrine refers to these officers, formerly known as commanders in chief (CINCs), as operational commanders, with two exceptions, COMPACFLEET and COMPAC, who are still considered CINCs.
Rigid Hull Inflatable Boats stand ready for launch in the ship’s well deck, in preparation for an upcoming Mine Countermeasures Exercise. Following the decommissioning of the mine countermeasure support ship USS Inchon (MCS 12), amphibious assault ships have provided transportation and support to the mine countermeasures units operating out of Naval Station Ingleside and Naval Air Station Corpus Christi (Photo courtesy of the U.S. Navy).
Consequences of Uncertainty—Relative Operating Characteristics
In the meteorology literature the consequences of prediction uncertainty have been studied using the concept of Relative Operating Characteristics (ROCs; see Mason and Graham, 1999, for a good introduction). ROCs form a basis for understanding decisions made from predictions for which confidence intervals are available (in this case through ensemble forecasts). In their discussion, Mason and Graham study the problem of issuing warnings for either drought or heavy rainfall seasons over eastern Africa, but the approach is equally applicable to decisions on whether to send the fleet to sea prior to an impending hurricane or a go/no go decision for a SEAL infiltration.
ROC analysis is based on contingency tables, matrices that compare the joint probability of the prediction and occurrence of events (see Table 4-1). For example, if wave heights were correctly predicted to be too large for a SEAL operation, the forecast would be counted as a “hit,” while correct prediction of low wave energy (no warning) is a “correct rejection.” Similarly, incorrect prediction of high waves (issuing a “no go” warning) is an “incorrect warning,” while incorrect prediction of safe conditions is called a “miss.” Each of these outcomes has a cost that can be assessed by the Navy. For example, a miss might be viewed as more expensive than an incorrect warning since the lives of SEALs could be threatened. The cost of uncertainty, then, is the cost of each prediction failure mode times the probability of its occurrence. The extension of this calculation to a range of variables, missions, and conditions is straightforward but is best illustrated by way of an example.
TABLE 4-1 Contingency Table for Risk Analysisa
aStatistics are built up from a history of realizations of predictions and their subsequent true outcomes. The upper categories indicate cases where the forecast call was “go” or “no go” for some particular operation or action. The left-hand categories indicate the actual conditions that ensued.
Example Use of ROC to Find the Cost of Uncertainty
Consider a SEAL infiltration mission onto a sandy beach. The mission depends on a number of environmental variables. This discussion will start with an examination of one variable, wave height, first introducing uncertainty and then moving on to quantify the cost of that uncertainty. The concept will then expand to the composite cost of all variables and then the merging of costs from more than one mission.
For illustration, the threshold criterion for wave height for this mission is assumed to be 1.5 m. Further assume that the predicted wave height for the time of the operation is 1.1 m. In addition, assume there is a known uncertainty to this estimate, represented by a standard deviation of 0.3 m (in the interest of brevity, discussion of possible methods for estimating confidence intervals is omitted here but is an important consideration). Thus, the actual wave height at the time of operation can be represented by a probability distribution function (see Figure 4-4). From this figure it can be seen that waves will most likely not be an impediment, and we would give a “weather go” to the operation. However, there is a 9 percent chance (marked on the graph) that the actual wave height will be greater than the threshold. Thus, if these same circumstances occurred many times, in 9 percent of these cases our prediction would be in error, and we would count these cases in a contingency table as cases of “incorrect go” (Table 4-1). Similarly, if the prediction had been for waves of 1.6 m in height, the METOC prediction would be “no go” or “hold.” Most of the time this would be the correct call. However, there would again be a number of cases (37 percent) where the actual waves at the time of operation were less than the threshold. These would constitute an “incorrect hold” and would contribute to the incorrect hold quadrant of the contingency table. A full contingency table is found by computing an ensemble of example cases (see Table 4-2).
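If the forecast error is taken to be roughly Gaussian (an assumption of ours for this sketch, consistent with the stated standard deviation), the 9 percent and 37 percent figures can be reproduced directly in Python using the standard normal cumulative distribution function:

```python
import math

def normal_cdf(x, mu, sigma):
    """Cumulative probability of a normal distribution via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

THRESHOLD = 1.5   # go/no-go wave-height criterion (m)
SIGMA = 0.3       # assumed standard deviation of the forecast (m)

# Predicted 1.1 m ("go" call): chance the true height exceeds the threshold.
p_incorrect_go = 1.0 - normal_cdf(THRESHOLD, 1.1, SIGMA)

# Predicted 1.6 m ("hold" call): chance the true height is below the threshold.
p_incorrect_hold = normal_cdf(THRESHOLD, 1.6, SIGMA)

print(f"P(incorrect go)   = {p_incorrect_go:.0%}")   # ~9%
print(f"P(incorrect hold) = {p_incorrect_hold:.0%}") # ~37%
```

Both probabilities match the figures quoted in the text for the two forecast scenarios.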
Each type of prediction error from the contingency table would have an associated cost. The cost of an incorrect go is probably judged to be high, since such a situation can lead not only to mission failure but also to a threat to life. On the other hand, the cost of an incorrect hold may be much lower, merely representing a missed mission. However, if the mission is critical or is a critical component of a complex interdependent mission package, the cost of an incorrect hold will rise. For example, a missed evacuation of key civilian personnel prior to a conflict may necessitate very expensive and dangerous rescue operations later on and so would be associated with a higher cost for an incorrect hold. This would not mean that the SEAL operation would be attempted in impossible conditions, but it does mean that the cost of an incorrect hold could be very high depending on the duration.2 The impacts of missed predictions can be represented in a cost
2 In reality, threshold criteria may be better viewed in terms of a green-yellow-red paradigm with a band of wave conditions where the operation becomes dangerous but still possible. Contingency tables can still be built (incorrect red, etc.) and the chance of prediction error (and associated costs) estimated. For simplicity we limit this illustration to the binary go/no go paradigm.
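Under the binary go/no go paradigm, the cost calculation defined in the text (each failure mode's cost times its probability of occurrence) reduces to a weighted sum. The sketch below uses the failure probabilities from the wave-height illustration; the cost figures themselves are hypothetical placeholders, not values from the report:

```python
# Hypothetical cost figures in arbitrary units (placeholders, not from the report).
COSTS = {
    "incorrect go":   100.0,  # mission failure and threat to life
    "incorrect hold":  10.0,  # a missed mission
}

# Failure-mode probabilities from the wave-height illustration above.
PROBS = {
    "incorrect go":   0.09,
    "incorrect hold": 0.37,
}

def expected_cost(probs, costs):
    """Cost of uncertainty: sum over failure modes of probability times cost."""
    return sum(probs[mode] * costs[mode] for mode in probs)

cost_of_uncertainty = expected_cost(PROBS, COSTS)
print(f"expected cost of uncertainty: {cost_of_uncertainty:.1f}")  # 12.7
```

Extending this to a range of variables, missions, and conditions amounts to enlarging the two dictionaries and summing over all failure modes.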
ENVIRONMENTAL ASSESSMENT
Assessing the environment of the 4D battlespace remains among the most vexing tasks facing the naval METOC community. In particular, as the pace and scale of warfare operations increase and become more spatially fragmented, providing accurate environmental information in a timely manner so that warfighters can meaningfully exploit their environment becomes a daunting task. Relevant issues in the development of rapid environmental assessment (REA) capabilities include questions regarding:
determination of appropriate environmental parameters for different mission areas,
the granularity (resolution and sampling frequency) of gathered environmental data,
sensor optimization (types of sensors and optimized sensor arrays) for obtaining environmental information at the appropriate granularity,
quality control and quality assurance of rapidly acquired environmental data,
processing/analysis of data,
development and dissemination of relevant derived products, and
personnel training in the use and significance of environmental data products.
REA Parameters
Defining appropriate REA parameters requires close integration between the naval METOC community and its customers, the warfighters, as well as sensor developers. For each mission area, a decision system identifying required parameters, desired parameters, and relatively unimportant parameters could be made available (the General Requirements Data Base is a start toward a more robust parameter identification system).
REA Granularity and Sampling Frequency
REA data acquisition and assimilation requirements for improved modeling scales and refresh rates need to be identified (see Tables 4-5 and 4-6). These needs are applicable to aerosol modeling efforts, high-resolution tide/surf/wave models, atmospheric dispersion models, etc. In addition, data needs for data-sparse regions of the earth will need to be identified and strategies developed to populate these regions with sensor arrays.
Additional data are necessary in order to enhance initialization of models at all scales and are particularly important as the forecast frequency becomes shorter (from 12 hours to 6 hours or less). Denser sensor arrays will also yield improved tropical storm intensity forecasts and improved storm track prediction.
TABLE 4-5 Present REA Granularity and Production Cycle
Atmosphere/ocean modeling scales
Global: 81 km (FNMOC)
Regional: 27, 9, 3 km (FNMOC)
“On scene”: 27, 9, 3 km (FNMOC, regional centers)
Nowcast: 3 km and less
Atmosphere/ocean timescales
Global: 24 to 144 hours
Regional: 12 to 48 hours (72 hours possible)
“On scene”: 12 to 48 hours (72 hours possible)
Nowcast (6.2 R&D): 0 to 6 hours
TABLE 4-6 Future REA Granularity and Production Cycle
Atmosphere/ocean modeling scales
Global: 27 km (FNMOC)—coupled
Regional: 3, 1, 0.3 km (FNMOC)—coupled
“On scene”: 3, 1, 0.3 km (FNMOC, regional centers)
Nowcast: 3 km and less
Atmosphere/ocean timescales
Global: 24 to 240 hours (ensembles)
Regional: 72 to 96 hours
“On scene”: 72 to 96 hours (72 hours possible)
Nowcast (6.2 R&D): 0 to 6 hours
SENSOR AND SENSOR ARRAY OPTIMIZATION
The need for environmental data on short timescales throughout the 4D battlespace may drive the development of new sensors or multisensor packages or new ways to deploy sensors (e.g., unmanned airborne vehicles [UAVs] and UUVs). Obviously, the necessity for acquiring environmental data will be weighed against other options (such as deployment of weapons systems on UAVs and UUVs), and a decision system for selecting appropriate sensors will need to be developed. Optimization of sensors and sensor arrays will again require close coordination between METOC personnel and warfighters to ensure that appropriate data are collected and transformed into meaningful METOC products.
Quality Control and Quality Assurance of REA Data
Acquiring large quantities of environmental data will require development of new algorithms to assess and assure data quality. While this process might be most efficiently developed utilizing an automated system of some kind, it should be recognized that in some instances data outliers may represent significant perturbations in the environment that might be of interest to warfighters because those perturbations could significantly impact a mission. Thus, mechanisms for recognizing these perturbations and including or excluding them from model runs should be developed. At the very least, some system for determining whether anomalous environmental readings represent real environmental anomalies or sensor malfunctions should be developed. Quality control and quality assurance schemes will also aid in verification and validation of high-frequency forecast models.
Timely weather forecasts play a critical role in the safe operation of aircraft at sea (Photo courtesy of the U.S. Navy).
Processing and Analysis of Data
Present capabilities in the naval METOC community to provide REA are evolving, and several successful trial programs are being evaluated (e.g., the Distributed Atmospheric Mesoscale Prediction System, or DAMPS, and NFN). Each of these REA test systems relies on through-the-sensor data gathering, assimilation, processing, and dissemination. In each case, sophisticated METOC computer models are forward deployed such that on-scene modeling capabilities provide warfighters with enhanced environmental information related to the immediate area of operations. Each of these systems has reach-back capabilities or can serve as the primary data-gathering/processing node and is capable of providing a common operational picture through network dissemination.
Development of Relevant Data Products
The METOC community needs to receive frequent evaluations of its products from the end-user community at sea to determine the relevance of data products forwarded to the fleet. During wartime, METOC products should have a high degree of customizability such that different actors in the fleet can extract the environmental information most relevant to their immediate warfighting needs. For instance, naval aviators involved in close air support and amphibious assault groups both need access to near-surface wind data, but each needs to interpret those data differently. Aviators need to know how wind speed and direction might affect weapons performance, whereas amphibious assault personnel need to know the effect of wind velocity and direction on sea state. In each instance the available data need to be presented as relevant products.
Personnel Training
As the sophistication and complexity of forward-deployed REA sensors, sensor arrays, processing/analytical capabilities, and forecast products increase, there will be an increasing need for highly trained analysts to accompany the fleet and provide interpretive expertise. The naval METOC community should attempt to identify or develop personnel for these roles and implement career reward systems for individuals who serve in these billets. A recurring theme at a number of the sites visited by the committee was the lack of a career path for Navy aerographers and METOC specialists. In view of the evolving importance of METOC (especially forward-deployed REA capabilities), the Office of the Oceanographer of the Navy in particular should initiate a plan to enable motivated personnel to pursue this career path.
Statistical and Other Approaches to Decision-making in the Face of Uncertainty
Many fields of endeavor face the consequences of decision-making in the presence of uncertainty (for a good introduction, see Berger, 1985). One means of providing quantitative support to decision-making in the presence of uncertainty is through the use of Bayesian statistical analysis (Bayesian statistical analysis is a distinct field from Bayesian decision theory). This section discusses the problem of building models of complex systems having many parameters that are unknown or only partially known and the use of Bayesian methodology. There are numerous papers and books related to the application of Bayesian analyses that have implications for using Bayesian statistical approaches to uncertainty analysis (e.g., Berger, 1985; Gelman et al., 1995; Punt and Hilborn, 1997). This discussion is intended simply to suggest that such techniques may have value in helping set priorities for data collection under a variety of conditions. Thus, while aspects of these approaches are discussed here, the reader is encouraged to explore them more fully through the rich literature that exists.
Traditional means of atmospheric sampling are both costly and manpower intensive. New capabilities in remote sensing provide for more timely and efficient synoptic measurements (Photo courtesy of the U.S. Navy).
There are three major elements in the Bayesian approach to statistics: (1) a likelihood function describing the observed data; (2) quantification of prior beliefs about a parameter in the form of a probability distribution and incorporation of those beliefs into the analysis; and (3) inferences about parameters and other unobserved quantities of interest based exclusively on the probability of those quantities given the observed data and the prior probability distributions.
In a fully Bayesian model, unknown parameters of a system are assigned probability distributions, usually called priors, that describe what is known about them before the data are observed. If there is more than one parameter, each individual distribution, as well as the joint probability distribution, must be described.
A distinction must be made between Bayesian models, which assign distributions to the parameters, and Bayesian methods, which provide point estimates and intervals based on the Bayesian model. The properties of the methods can be assessed from the perspective of the Bayesian model or from the frequentist9 perspective. Historically, the “true” Bayesian analyst relied heavily on the use of priors. However, the modern Bayesian has evolved a much more pragmatic view. If parameters can be assigned reasonable priors based on scientific knowledge, these are used (Kass and Wasserman, 1996). Otherwise, “noninformative” or “reference” priors are used.10 These priors are, in effect, designed to give resulting methods properties that are nearly identical to those of the standard frequentist methods. Thus, the Bayesian model and methodology can simply be routes that lead to good statistical procedures, generally ones with nearly optimal frequentist properties. In fact, Bayesian methods can work well from a frequentist perspective as long as the priors are reasonably vague about the true state of nature. In addition to providing point estimates with frequentist optimality properties, the posterior intervals for those parameter estimates are, in large datasets, very close to confidence intervals. Part of the modern Bayesian tool kit involves assessing the sensitivity of the conclusions to the priors chosen, to ensure that the exact form of the priors did not have a significant effect in the analysis.
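The claim that vague priors yield posterior intervals close to frequentist confidence intervals can be illustrated with a conjugate normal-normal update for an unknown mean. The numbers below (a wave-height setting with 25 observations) are hypothetical and chosen only for illustration:

```python
import math

def normal_posterior(prior_mean, prior_sd, data_mean, data_sd, n):
    """Conjugate normal-normal update for an unknown mean (known data sd)."""
    prior_prec = 1.0 / prior_sd ** 2
    data_prec = n / data_sd ** 2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * data_mean)
    return post_mean, math.sqrt(post_var)

# Hypothetical data: 25 wave-height observations averaging 1.2 m, sd 0.3 m.
n, ybar, sigma = 25, 1.2, 0.3

# An informative prior vs. a nearly "noninformative" (very vague) prior.
for prior_sd in (0.05, 1000.0):
    m, s = normal_posterior(prior_mean=1.0, prior_sd=prior_sd,
                            data_mean=ybar, data_sd=sigma, n=n)
    # 95% posterior interval; with the vague prior this is nearly identical
    # to the frequentist interval ybar +/- 1.96 * sigma / sqrt(n).
    print(f"prior sd = {prior_sd:7.2f}: posterior {m:.3f} +/- {1.96 * s:.3f}")
```

With the tight prior the posterior is pulled noticeably toward the prior mean; with the vague prior the posterior mean and interval essentially reproduce the frequentist estimate, as described above.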
There are two general classes of Bayesian methods. Both are based on the posterior density, which describes the conditional probabilities of the parameters
9
Frequentist statistical theory measures the quality of an estimator based on repeated sampling with a fixed nonrandom set of parameters. Bayesian statistical theory measures the quality of an estimator based on repeated sampling in which the parameters also vary according to the prior distributions. Most beginning statistics courses focus on frequentist methods such as the t test and analysis of variance.
10. Such priors are sometimes “improper” in that the specified prior density is not a true density because it does not integrate to 1. A prior distribution is proper if it integrates to 1.
given the observed data. This is, in effect, a modified version of the model's prior distribution, where the modification updates the prior based on new information provided by the data. In one form of methodology, this posterior distribution is maximized over all parameters to obtain “maximum a posteriori” estimators. It has the same potential problem as maximum likelihood in that it may require maximization of a high-dimensional function that has multiple local maxima. The second class of methods generates point estimators for the parameters by finding their expectations under the posterior density. In this class the problem of high-dimensional maximization is replaced with the problem of high-dimensional integration.
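The two classes can be contrasted on a toy conjugate posterior, a Beta density for a probability parameter, where both estimators happen to have closed forms (the parameter values are illustrative):

```python
# posterior for a probability parameter: Beta(a, b) (illustrative values)
a, b = 3.0, 2.0

# first class: maximize the posterior density -- the Beta mode,
# i.e., the "maximum a posteriori" estimator
map_estimate = (a - 1) / (a + b - 2)

# second class: take the expectation under the posterior -- the Beta mean
posterior_mean = a / (a + b)
```

For asymmetric posteriors such as this one the two estimators differ (here 2/3 versus 0.6); in high-dimensional models the mode requires numerical maximization and the mean requires numerical integration, which is the trade-off described above.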
Over the past 10 years, Bayesian approaches have incorporated improved computational methods. Formerly, the process of averaging over the posterior distribution was carried out by traditional methods of numerical integration, which became dramatically more difficult as the number of different parameters in the model increased. In the modern approach the necessary mean values are calculated by simulation using a variety of computational devices related more to statistics than to traditional numerical methods. Although this can greatly increase the efficiency of multiparameter calculations, the model priors must be specified with structures that make the simulation approach feasible.
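A minimal sketch of the simulation approach, using a simple Metropolis sampler on a toy one-parameter posterior; the target density, step size, and sample counts are illustrative assumptions, not a method from this report:

```python
import math
import random

def metropolis(log_post, start, n_samples=20000, step=1.0, seed=0):
    """Draw dependent samples from a density known only up to its log."""
    rng = random.Random(seed)
    x, lp = start, log_post(start)
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        lp_prop = log_post(proposal)
        # accept with probability min(1, posterior ratio)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = proposal, lp_prop
        samples.append(x)
    return samples[n_samples // 5:]  # discard an initial burn-in period

# toy posterior: normal with mean 2 and unit variance (up to a constant)
draws = metropolis(lambda t: -0.5 * (t - 2.0) ** 2, start=0.0)
posterior_mean = sum(draws) / len(draws)
```

The posterior mean is recovered by averaging the simulated draws rather than by numerical integration; the stopping-rule and multimodality difficulties noted later in this chapter arise because, in general, one cannot tell from the chain alone when this average has converged.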
Although they are not dealt with extensively here, a number of classes of models and methods have an intermediate character. For example, there are “empirical Bayes” methods in which some parameters are viewed as arising from a distribution that is not completely known but rather known up to several parameters. There are also “penalized likelihood methods” in which the likelihood is maximized after addition of a term that avoids undesirable solutions by assigning large penalty values to infeasible parameter values. The net effect is much like having a prior that assigns greater weight to more reasonable solutions and then maximizes the resulting posterior. Another methodology used to handle many nuisance parameters is the “integrated” likelihood in which priors are assigned to some of the parameters to integrate them out while the others are treated as unknown. This provides a natural hybrid modeling method that could have fishery applications.
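The penalized likelihood idea can be sketched in its simplest case, estimating a normal mean with a quadratic penalty; the closed-form solution below coincides with the posterior mode under a normal prior centered at zero (the data and penalty weight are invented for illustration):

```python
# observations of a quantity with unit-variance noise (illustrative data)
xs = [1.8, 2.2, 2.0, 1.9, 2.1]
lam = 1.0  # penalty weight

# maximizing sum_i [-(x_i - theta)^2 / 2] - lam * theta^2 / 2
# has the closed-form solution below, which shrinks toward zero
theta_penalized = sum(xs) / (len(xs) + lam)

# for comparison, the unpenalized maximum likelihood estimate
theta_mle = sum(xs) / len(xs)
```

The penalized estimate (10/6) sits between zero and the plain sample mean (2.0), exactly the behavior described above: the penalty acts like a prior that down-weights implausible solutions.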
Limitations of Applying Formal Decision Theory
The case for Bayesian methods presented above must be tempered with some limitations of any formal approach to decision theory. Bayesian decision and risk-benefit analysis requires assignment of the a priori and transition probabilities, as well as development of a cost matrix. This has led to many historical and philosophical discussions about the foundations of this approach. How does one assign a cost matrix? How does one assign a cost to casualties? Often political risks defy the cost assignment that is needed when calculating the posterior probability of mission success or failure.
Similarly, the probabilities must be assigned. Does one estimate priors from a database? If so, how much variability is there in the estimate? What is an appropriate assignment in a changing environment? Extreme values (i.e., low-probability events) are especially vexing because they happen so rarely. Transition probabilities are also problematic. The outcome of an observation can often depend on the system used or on a tactical decision aid whose performance is questionable given an environmental database. Again, how does one quantify these probabilities and incorporate the variability into the analysis?
The important point here is that the inputs needed by a Bayesian-like approach are often not readily available or precisely defined without error. Estimating and/or assigning these inputs are difficult problems in their own right. An investment and infrastructure are needed to quantify the required probabilities and costs for this approach. This cannot be done on intuition alone or subjectively, for doing so renders the results equally subjective. Certainly, one can give conditional outputs, and one of the important aspects of the Bayesian approach is its quantitative consideration of the inputs.
Whether setting priorities based on some cost-benefit analysis or simply determining which actions are the most pressing involves placing some value on the outcome. Such formal analyses require considerable effort to establish a cost function associated with any given action (or even lack of action). In addition, understanding the transfer function (how costs or benefits are accrued) requires detailed understanding of how various processes interact. In a complex system, such as the battlespace, simplifying assumptions can reduce the numerical complexity but also introduce the possibility of reducing the problem to unrealistic levels. At some point, the resulting numerical model simply becomes invalid. Overcoming these barriers will require a consistent and committed effort, one that is likely best undertaken by ONR, working closely with the METOC community and warfighters. Such an effort, however, would likely bring greater focus and awareness of exactly how important specific environmental information is or can be.
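The mechanics of such a cost-based choice can be sketched with a hypothetical two-action, two-state cost matrix; all of the probabilities, costs, and names below are invented for illustration, and real applications face exactly the assignment difficulties discussed above:

```python
# assumed probabilities of environmental states (hypothetical values)
p_state = {"storm": 0.3, "clear": 0.7}

# hypothetical cost matrix: (action, state) -> cost in arbitrary units
cost = {
    ("proceed", "storm"): 100.0,  # mission failure under bad weather
    ("proceed", "clear"): 0.0,
    ("delay", "storm"): 10.0,     # delay cost is paid regardless of weather
    ("delay", "clear"): 10.0,
}

def expected_cost(action):
    """Average the cost of an action over the assumed state probabilities."""
    return sum(p_state[s] * cost[(action, s)] for s in p_state)

best_action = min(("proceed", "delay"), key=expected_cost)
```

Here delaying has the lower expected cost (10 versus 30 units), but the answer is only as trustworthy as the assigned probabilities and costs, which is precisely the limitation this section raises.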
In addition, Bayesian approaches present several important issues of which the user should be aware, including the following:
Specification of priors in a Bayesian model is an emerging art. To do a fully Bayesian analysis in a complex setting, with no reasonable prior distributions available from scientific information, requires a careful construction whose effect on the final analysis is not clear without sensitivity analysis. If some priors are available but not for all parameters, a hybrid methodology, which does not yet fully exist, is preferred.
In a complex model it is known by direct calculation that the Bayesian posterior mean may not be consistent in repeated sampling. However, little information is available to identify circumstances that could lead to bad estimators.
Although modern computer methods have revolutionized the use of Bayesian methods, they have created some new problems. One important practical issue for Markov chain Monte Carlo methods is construction of the stopping rule for the simulation. This problem has yet to receive a fully satisfactory solution for multimodal distributions. Additionally, if one constructs a Bayesian model with noninformative priors, it is possible that even though there is no Bayesian solution, in the sense that a posterior density distribution does not exist, the computer will still generate what appears to be a valid posterior distribution (Hobert and Casella, 1996).
There is a relative paucity of techniques and methods both for diagnostics of model fit, which is usually done with residual diagnostics or goodness-of-fit tests, and for making methods more robust to deviations from the model. Formally speaking, a Bayesian model is a closed system of undeniable truth, lacking an exterior viewpoint to make a rational model assessment or to construct estimators that are robust to the model-building process. To do so and retain the Bayesian structure requires constructing an even more complex Bayesian model that includes all reasonable alternatives to the model in question and then assessing the posterior probability of the original model in this setting.
Despite progress in the numerical integration of posterior densities, models with large numbers of parameters can be difficult to integrate. Two of the most commonly used methods, sampling importance resampling and Markov chain Monte Carlo, have difficulty with multimodal or highly nonelliptical density surfaces.
Other Approaches
There is an ongoing effort to address uncertainty by a variety of means. One approach that may deserve additional investigation in its application to environmental uncertainty is “fuzzy arithmetic.” Fuzzy numbers have been described by Ferson and Kuhn (1994) as a generalization of intervals that can serve as representations of values whose magnitudes are known with uncertainty. Fuzzy numbers can be thought of as a stack of intervals, each at a different level of presumption (alpha), which ranges from 0 to 1. The range of values is narrowest at alpha level 1, which corresponds to the optimistic assumption about the breadth of uncertainty.
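A sketch of this stack-of-intervals view, using triangular fuzzy numbers and interval addition at each alpha level; the depth and tide figures are invented for illustration:

```python
# triangular fuzzy number (left, peak, right): its interval at level alpha
def alpha_cut(left, peak, right, alpha):
    return (left + alpha * (peak - left), right - alpha * (right - peak))

# add two fuzzy numbers by adding their intervals at each alpha level
def fuzzy_add(x, y, levels=(0.0, 0.5, 1.0)):
    out = {}
    for al in levels:
        xl, xh = alpha_cut(*x, al)
        yl, yh = alpha_cut(*y, al)
        out[al] = (xl + yl, xh + yh)
    return out

# hypothetical: a depth estimate of "about 10 m" plus a tide of "about 1 m"
total = fuzzy_add((8.0, 10.0, 12.0), (0.5, 1.0, 1.5))
```

At alpha level 1 the result collapses to the single most plausible value (11 m), while at alpha level 0 it spans the full pessimistic range (8.5 to 13.5 m), so the whole stack conveys both a best estimate and the breadth of uncertainty around it.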
Using fuzzy arithmetic software minimizes the possibility of computational mistakes in complex calculations. In summary, fuzzy arithmetic is possibly an even more effective method than interval analysis for accommodating subjective uncertainty; its utility should be examined more carefully for use in addressing uncertainty in environmental information for naval use.
SUMMARY
ROC is discussed above as a possible method for placing value on METOC information, and decision theory is examined for its potential value for quantifying optimum investment strategies. The committee was not familiar with examples of these approaches having been applied to the type of environmental problems faced by the U.S. Navy and Marine Corps—with one notable exception. ONR, which supported much pioneering work on decision theory, recently made capturing uncertainty in acoustic propagation the focus of a department research initiative (DRI). This DRI, entitled “Capturing Uncertainty in the Common Tactical/Environmental Picture,” is scheduled to continue through 2004 and is intended “to characterize, quantify, and transfer uncertainty in the ocean environment to calculations of acoustic fields and to the subsequent use of acoustic fields in performance prediction, in estimating and displaying the state of targets, and in other Navy relevant applications.” This initiative, though early in its history, appears to offer some opportunity to address a significant tactical problem in undersea warfare (see Chapter 2 for more discussion). ONR should expand this initiative to cover other environmental problems and processes in order to more directly support operational challenges facing the fleet and Marine Corps and the METOC community supporting them.
In the absence of more specific knowledge of the ultimate payoff from such focused ONR activities, this introductory discussion should serve to raise a number of important points concerning METOC within the U.S. Navy and Marine Corps. Foremost is the need to recognize uncertainty in METOC prediction as fundamental to the value system for METOC investment. In the absence of some method to quantify the cost of METOC uncertainty—that is, the risk due to METOC prediction failure and the likelihood and cost of that failure—no objective basis can be found for R&D investment strategies or data acquisition.
As discussed previously, uncertainty can be reduced in many instances by the acquisition of additional data. However, the cost of such acquisition may exceed the benefit derived from its use. In some instances the stochastic nature of some processes means that some uncertainty will remain regardless of the number of observations made (objective uncertainty). In other cases increased understanding of the system, whether through acquisition of additional data or better understanding of the process involved, may be impractical due to lack of assets or time. In other cases, it may be practical to collect additional data, but the need for more accurate predictions of future states is low (i.e., naval operations are not sensitive to the process involved), making additional data collection unnecessary. Understanding the nature of the uncertainty, and its potential cost, associated with any environmental parameter of interest is a key step in improving the quality of environmental information provided to the fleet and Marine Corps. War gaming, sea testing, and other approaches to developing naval doctrine are important opportunities to more fully evaluate the importance of reliable and
accurate environmental information and the impact of uncertainty or efforts to reduce the cost of that uncertainty. The Oceanographer of the Navy and the Chief of Naval Research should make every effort to ensure that environmental conditions and predictions of those conditions are realistically depicted in simulations and that their effects are faithfully and accurately captured so that the fleet and Marine Corps, as well as the METOC community, can more fully evaluate the importance of METOC information in naval operations.
Once a decision for additional information has been made, it seems sensible to assume that the cost of acquiring those data is a function of their availability. Too often, it appears, data collected previously, whether for other purposes or not, are not used by the METOC community; thus, the benefit of collecting those data is not fully realized. Several opportunities for less expensive data acquisition bear fuller evaluation. Because data for denied areas are often the most valuable, especially for REA, particular emphasis should be placed on making fuller use of data collected in these areas. Typically, the best source of such data is the electronic intelligence community, as it possesses the most capable assets. Many data collected in these areas are classified simply because their recovery involves acknowledgement of sensitive ground assets (e.g., it appears that information from UAVs collected for the Defense Threat Reduction Agency [DTRA] may be held for an extended period of time before it is released to METOC archives, so as not to release information about the existence of information-gathering assets).
Current sensors provide data not only in optical bands but also from active sensors (radar) and in other passive bands. Each provides a new window into the environment. Exploitation techniques are already available in the literature and could easily be forward deployed. It is strongly recommended that existing intelligence-gathering sensors be examined for potential exploitation as dual-use METOC sensors. Particular emphasis should be placed on airborne sensor packages, since these are forward-deployed theater assets that are already under the control of operational commanders. Data collection could be either specifically tasked or en route to other missions, while analysis could be easily handled onboard.