issues are not addressed directly by SSHAC). In this regard, it is useful to consider separately two questions:
1. Is the aleatory/epistemic classification unique and clear?
2. Why is a separate treatment of epistemic and aleatory uncertainty needed, and to what degree should it be pursued in a PSHA analysis?
Embedded in the second question are issues of utilization of results in which epistemic uncertainty and aleatory uncertainty are separated (i.e., of results stated in a "probability of frequency" format), either in the process of conducting the PSHA study or in the process of decision making by the ultimate user. In this chapter the panel briefly reviews SSHAC's position on these issues and makes some recommendations.
SSHAC correctly points out that the classification of uncertainty as epistemic or aleatory depends on the model used to represent seismicity and ground motion. For example, epistemic uncertainty would be much greater if, in the assessment of seismic hazard at an eastern U.S. site, instead of representing random seismicity through homogeneous Poisson sources one used a model with an uncertain number of faults, each with an uncertain location, orientation, extent, state of stress, distribution of asperities, and so forth. As little is known about such faults, the total uncertainty about future seismicity and the calculated mean hazard curves would be about the same, irrespective of which model is used. However, the amount of epistemic uncertainty would be markedly different; it would be much greater for the more detailed, fault-based model. Consequently, the fractile hazard curves that represent epistemic uncertainty would also differ greatly.
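The point above can be illustrated numerically. The following sketch (a hypothetical illustration, not taken from the report; the rates and branch weights are invented for the example) compares two representations of seismicity that yield the same mean annual exceedance rate: one with the rate treated as known (all uncertainty aleatory, so the fractiles collapse onto the mean) and one with epistemic branches on the rate (so the fractiles spread apart):

```python
import numpy as np

# Hypothetical example: two models of the annual rate of exceeding some
# ground-motion level. Values are invented for illustration only.

# Model 1: homogeneous Poisson source, rate treated as known.
# All uncertainty is aleatory; there is a single epistemic "branch."
rates_1 = np.array([0.010])
weights_1 = np.array([1.0])

# Model 2: more detailed fault-based model with three epistemic branches
# for the rate, weighted so the mean matches Model 1.
rates_2 = np.array([0.002, 0.010, 0.018])
weights_2 = np.array([1 / 3, 1 / 3, 1 / 3])

def mean_and_fractiles(rates, weights, qs=(0.15, 0.85)):
    """Weighted mean rate and selected fractiles over epistemic branches."""
    mean = float(np.sum(rates * weights))
    order = np.argsort(rates)
    cum = np.cumsum(weights[order])
    fractiles = [float(rates[order][np.searchsorted(cum, q)]) for q in qs]
    return mean, fractiles

mean_1, fr_1 = mean_and_fractiles(rates_1, weights_1)
mean_2, fr_2 = mean_and_fractiles(rates_2, weights_2)

# The mean hazard is the same for both models...
print(mean_1, mean_2)          # both 0.010
# ...but the fractile curves (epistemic spread) differ markedly.
print(fr_1)                    # [0.010, 0.010] -> no epistemic spread
print(fr_2)                    # [0.002, 0.018] -> large epistemic spread
```

The sketch shows in miniature why the choice of model changes the fractile hazard curves while leaving the mean hazard curve essentially unchanged.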
A reasonable interpretation of the probabilistic models used in seismic hazard analysis is that they represent not intrinsic randomness but uncertainty on the part of the analyst about the actual states and laws of nature—for example, about the number of earthquakes of magnitude 6 to 7 that will occur in the next 50 years in a given volume of crust. According