
6 RISK ASSESSMENT

Adequately but efficiently assess the severity (combination of likelihood and various consequences), and therefore significance, of each of the risks (including opportunities) in the risk register.

INTRODUCTION

After identifying risks and opportunities as described in Chapter 5, the next step is to understand the effect that each risk and opportunity has on the project's performance measures. Assessing the severity of each risk and opportunity allows the DOT to better plan risk management actions and make better project decisions.

Objectives

The primary objective of risk assessment is to adequately determine the significance of each risk and opportunity, in order to identify those risks and opportunities that should be refined further (e.g., by gathering additional information) or reduced (if possible) through proactive risk management actions (Chapter 8). Secondarily, when considered collectively over the complete set of risks and opportunities, this significance can provide some insight into ultimate project performance. A more quantitative determination of ultimate project performance is discussed in Chapter 7, and plans for managing that performance (including establishing and managing contingencies) are discussed in Chapter 8.

Another objective of risk assessment is to complete this step in the overall risk management process efficiently, producing accurate and defensible results that are compatible with the other steps of the process. How this information will be used in later steps of the process determines its requirements. In all cases, facilitated consensus among a broad group of project-team and project-independent experts is key to successful risk assessment.

Philosophy and Concepts

Several important concepts regarding risk assessment affect the accuracy and defensibility of the results, as well as the effort required, including

• Implicit versus explicit rankings;
• Qualitative versus quantitative assessments;
• Subjective versus objective assessments; and
• Level of detail.

Implicit Versus Explicit Rankings

The significance of a risk or opportunity is defined in terms of its severity, or likely effect on project performance. This significance can be determined by ranking the various risks and opportunities in one of two basic ways:

• Implicitly assessing each risk's likelihood of occurring and its impacts on project performance if it occurs (e.g., Risk A is more significant than Risk B), both with respect to individual performance measures and to a combined measure. However, because of the many complexities involved (i.e., the difficulty in implicitly combining and adequately accounting for so many factors), this is difficult to do accurately and defensibly.
• Explicitly assessing and then appropriately combining the risk factors that characterize each risk, including
  - Likelihood that the risk occurs (e.g., 25% chance); and
  - Magnitude of the consequences (impacts) to each performance measure if the risk occurs (e.g., $2 million cost increase and 6-month delay to construction).

Assessing the individual risk factors is generally less complex, more tractable, more accurate and defensible, and more informative (if done appropriately) than implicit assessment. Generally, this approach also allows for both identifying and accurately evaluating potential risk management actions (Chapter 8), as well as providing a foundation for risk analysis (Chapter 7), if needed. This guide focuses on the explicit approach.

Qualitative Versus Quantitative Assessments

Qualitative assessment involves characterizing the likelihood and consequences in terms of nonquantitative ratings. A risk might be assessed to have a High (H) likelihood of occurrence and a corresponding Medium (M) cost impact and Low (L) schedule impact if it occurs. Another approach is to use numerical ratings (e.g., 1 through 5) instead of H, M, and L ratings. In both cases, these ratings typically are not defined with respect to quantitative values. On the benefit side, qualitative assessments may be relatively quick to conduct and provide a simple visual rating (depending on the method used). Drawbacks of qualitative assessments can include the following:

• Ratings can be vague if qualitative ratings are not tied to specific values (e.g., what does a "High" likelihood of occurrence really mean?). As a result, different people can interpret qualitative ratings in different ways, which might lead to inaccuracies or problems in developing consensus.
• If the ratings (e.g., for likelihood and consequence) are not combined, then no overall measure of the risk is possible, which means that the register of risks cannot be ranked or prioritized.
• If the ratings are combined, the resulting risk rankings are generally ambiguous, relative (not absolute), and can even be misleading. To rank a risk based on assessed risk factors, the risk factors must generally be combined in some fashion. The most logical approach is to first determine the combined consequence rating from the various consequence types and then to determine the rank as the product of the likelihood rating and the combined consequence rating. However, qualitative ratings cannot actually be added or multiplied and, because the risk-factor ratings are often vague, the resulting risk ranking is ambiguous. For example, suppose a risk has been assessed to have a High (H) likelihood and a Low (L) combined consequence [which in turn was based on a Low (L) cost consequence and a Low (L) schedule consequence]. Is the ranking for this risk H × L = M? And does this risk have the same ranking as another risk with M × M = M? And is this the same ranking as L × H = M?

Quantitative assessment generally involves characterizing the risk factors in one of two ways:

• Ratings. In terms of ratings that are defined by appropriate numerical scales (e.g., a High likelihood of occurrence might be defined as a probability of occurrence between 40% and 70%). An example of this type of semiquantitative assessment is presented later in this chapter.
• Numerically. Directly in terms of numerical values, which avoids ratings altogether. For example, a risk might be assessed to have a 25% probability of occurring, and if it occurs, would result in a mean value of $1 million additional cost and 2-month project delay during construction. An example of this type of quantitative assessment is also presented later in this chapter.

However, to adequately quantify the uncertainty in project performance, it is generally necessary to assess the uncertainties in (and the correlations among) the various "conditional" consequences of the most significant risks, as well as in the base cost and schedule factors (see Chapter 4). This can be done in terms of likely ranges (continuous probability distributions) or scenarios (discrete probability distributions), as discussed further in Chapter 7.

• Mean value is the probability-weighted average value.
• Conditional value is the value if the risk occurs (ignoring the probability of that risk occurring).
• Unconditional mean value is the mean value considering (that is, accounting for) the probability of that risk occurring.
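As a minimal illustration of the definitions above, a risk's unconditional mean impact is its probability of occurrence times its conditional mean impact. The sketch below uses the 25% probability, $1 million, 2-month example from the Numerically bullet; it is illustrative only, not part of the guide's template.

```python
# Minimal sketch of the "unconditional mean value" definition above.
# Values are taken from the illustrative example in the text (25% probability,
# $1M additional cost, 2-month delay if the risk occurs).

def unconditional_mean(probability, conditional_mean_impact):
    """Probability-weighted (unconditional) mean impact of a single risk."""
    return probability * conditional_mean_impact

p = 0.25
print(unconditional_mean(p, 1_000_000))  # unconditional mean cost: 250000.0 ($)
print(unconditional_mean(p, 2.0))        # unconditional mean delay: 0.5 (months)
```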

The benefits of quantitative assessments can include the following:

• There is no ambiguity in values.
• Risk-factor assessments can be meaningfully combined (analytically rather than subjectively):
  - Risk likelihood and consequence can be combined. For example, the "unconditional" mean value of additional cost associated with a particular risk simply equals the product of the conditional mean value of additional cost if the risk occurs and the probability that it will occur.
  - The change in the various project performance measures (i.e., sensitivity) associated with each risk can be determined. For example, for additive project performance measures (such as uninflated cost), either (a) the conditional impacts can be used to determine the conditional change in the performance measure, which is then weighted by its probability of occurrence; or (b) the unconditional impacts can be used directly. However, for nonadditive performance measures (e.g., schedule), these two approaches might give different results, and so conditional impacts should be used.
  - Changes in various individual project performance measures associated with each risk can be combined to create a single performance measure for that risk, as a measure of risk severity. For example, the value (in terms of equivalent cost, in dollars) of schedule, disruption, and longevity can be determined and then combined with capital or direct cost, to determine a single combined performance measure in monetary terms. A method for determining the equivalent monetary value for nonmonetary performance measures is described later in this chapter.
  - If the set of risks is comprehensive and nonoverlapping, then the changes in project performance measures associated with that set of risks can be determined. For example, the mean value of the change in uninflated project cost associated with all the risks is the sum over all risks of the unconditional mean value of additional uninflated cost associated with each risk.
• Risks can be ranked meaningfully and appropriately based on their unconditional mean values by consequence type (e.g., uninflated cost increase, schedule impact) or more completely by combined consequence (severity).
• The basis for quantitative risk analysis (Chapter 7) and for quantitative evaluation of possible risk reduction actions is formed, as part of risk management planning (Chapter 8).

• Performance measure: For example, cost in monetary terms versus schedule in nonmonetary terms.
• Combined performance measure: Nonmonetary performance measures translated into equivalent monetary terms via trade-off value (i.e., willingness to pay to change) and then combined.
• Severity: Change in combined performance measure.
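The combination of performance measures into a single severity can be sketched as follows. The trade-off values shown (for delay and disruption) mirror those used in the worked mean-value example later in this chapter and are otherwise illustrative, not prescribed.

```python
# Sketch of combining changes in individual performance measures into a single
# combined performance measure (severity) in equivalent dollars, using
# trade-off values (willingness to pay). The trade-off values below mirror
# those used in the worked example later in this chapter; they are
# illustrative, not prescribed.

TRADE_OFFS = {
    "delay_per_month": 500_000,   # $ of equivalent cost per month of delay
    "disruption_per_hour": 10,    # $ of equivalent cost per person-hour
}

def severity_equivalent_cost(cost_change, schedule_change_months,
                             disruption_change_hours, trade_offs=TRADE_OFFS):
    """Combine cost ($), schedule (months), and disruption (person-hours)
    changes into one equivalent-cost severity value ($)."""
    return (cost_change
            + schedule_change_months * trade_offs["delay_per_month"]
            + disruption_change_hours * trade_offs["disruption_per_hour"])

# Unconditional mean changes for one hypothetical risk:
print(round(severity_equivalent_cost(480_000, 0.9, 0)))  # 930000 equivalent $
```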

The drawbacks of quantitative assessments can include the following:

• Additional effort is required to adequately
  - Assess the risk factors more precisely and achieve consensus among a broad group of experts. This is especially true if full uncertainty in conditional consequences of risks, as well as in base cost and schedule factors, is assessed, in which case correlations and dependencies must also be considered. This is discussed further in Chapter 7.
  - Determine (by analysis) the change in project performance measures associated with the assessed risk factors, especially for nonadditive performance measures. This can be done to various degrees of approximation and can become very complicated and prone to error (especially for full uncertainty). This is discussed further in Chapter 7.
  - Assess the trade-off values to determine equivalent costs of nonmonetary performance measures so that a single combined performance measure can be developed. This is typically a policy (rather than a technical) issue, which should be addressed by DOT management.
• If computing total project risks (i.e., combining the set of risks), a nonoverlapping and comprehensive set of risks is required to avoid double-counting and missing any items, respectively. A suitable allowance (e.g., loosely based on an 80:20 rule that suggests 80% of the total is associated with 20% of the items) is generally used for unidentified risks to make the set comprehensive. For example, a 50% chance of an extra 50% of identified risks, or a 100% chance of an extra 0% to 50% of identified risks, might be used for this allowance.

Subjective Versus Objective Assessment

When an adequate database of information related to a particular risk is available, an objective, or statistical, approach can be used to assess the risk factors. However, this is rarely the case in transportation construction projects and, in particular, for innovative rapid renewal projects. Similarly, when appropriate analytical methods are available to calculate changes in performance measures as a function of the risk factors, then this objective approach can be used, as opposed to assessing those changes in performance measures directly; for example, it is better to assess the change in an activity duration and then analyze the change in project completion date (considering critical path) than to assess the change in project completion date directly.

For example: If schedule delay is 2 months and the value of such delay has been established at $1 million/month (for deferred operations), then the delay's equivalent cost is $2 million, plus any other time-related delay costs (increased overheads and escalation). The delay's equivalent cost can be compared directly to capital cost.
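The point above about analyzing (rather than directly assessing) the change in project completion date can be sketched simply: assess the delay to a specific activity, then account for that activity's float on the schedule network. The numbers below are hypothetical, and a real project would use full critical-path analysis rather than this single-activity simplification.

```python
# Minimal sketch: assess the delay to one activity, then *analyze* the change
# in project completion by accounting for that activity's float (slack).
# Hypothetical numbers; a real project would run critical-path analysis on the
# full schedule network.

def completion_delay(activity_delay_months, activity_float_months):
    """Delay to project completion caused by delaying a single activity."""
    return max(0.0, activity_delay_months - activity_float_months)

print(completion_delay(6.0, 0.0))  # activity on the critical path -> 6.0 months
print(completion_delay(6.0, 2.0))  # 2 months of float absorbs part -> 4.0 months
print(completion_delay(1.0, 2.0))  # fully absorbed by float        -> 0.0 months
```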

However, when statistical information or appropriate analytical methods are not available, the opinion of subject-matter experts, based on all available information, can be elicited, de-biased, and quantified in the form of subjective assessments. Because most transportation projects, and particularly rapid renewal projects, are relatively unique, adequate data generally are not available, and properly obtained subjective assessments usually are required to develop risk-factor assessments. Subjective assessments, when properly developed and documented, and especially if they represent a consensus among a wide group of experts, are widely accepted in risk assessment practice. However, subjective assessments are subject to bias, which must be identified and mitigated. Guidance on how to mitigate bias is provided later in this chapter.

Level of Detail

The level of detail, and therefore effort, put into risk assessment should be consistent with the level of information available on the project's cost and schedule, the size and complexity of the project, and the objectives for the risk assessment. For example, if the objective for the risk assessment is

• Simply to roughly identify the top risks, then less detail and precision (in terms of approximation, as opposed to the number of digits) is required.
• To be able to quantify the benefits of proposed risk management actions, then higher-quality and more-detailed assessments and analysis are required.
• To quantify the uncertainty in project performance, then full uncertainty in (and correlation among) the various factors and more-detailed probabilistic analysis are needed, as discussed further in Chapter 7.

PROCESS OF RISK ASSESSMENT

Methods

As mentioned previously, various methods exist to conduct risk assessment via risk factors (as well as implicitly). Several of the more common methods for assessing and combining risk factors include, in increasing level of complexity,

• Qualitative:
  - Red/Yellow/Green. This method uses qualitative ratings for risk factors, which generally are not defined and are combined subjectively.
  - Rating Scale. This method uses numerical ratings, which generally are neither appropriately defined nor appropriately combined.
• Quantitative:
  - Mean-Value Ratings. This method is an extension of the qualitative methods mentioned above, with mean-value ratings based on defined numerical scales and combined appropriately (analytically), resulting in mean risk severity ratings.
  - Mean Values. As its name implies, this method bypasses ratings altogether, instead quantifying risk factors directly in terms of mean values (e.g., dollars, time), which are combined appropriately (analytically), resulting in mean risk severity values (dollars) and mean performance values (e.g., dollars, time).
  - Full Uncertainty. This method involves quantifying the uncertainties in (and correlations among) the risk factors, as well as the base factors, and then appropriately combining all of the uncertainties (analytically), as discussed in Chapter 7, resulting in probability distributions for project performance and contributions to specific target percentiles of project performance.

Quantitative Mean-Value Method

The mean-value method characterizes individual risk factors directly in terms of mean values in the corresponding units or dimensions (e.g., probabilities in percent, consequences in dollars and time). Ideally, as discussed later, consensus among a broad group of experts is achieved on these mean values, appropriately considering (either statistically or subjectively) all available information. These mean values of the various risk factors (i.e., probability and conditional consequence by type to specific activities) are then appropriately combined (e.g., by analysis) to determine a mean change in each performance measure, as well as a mean change in a combined performance measure (severity), in terms of equivalent inflated project cost (see below).

Equivalent inflated project cost is one possible combined performance measure (as described previously). The change in equivalent inflated project cost resulting from a risk reflects the following: (a) the indirect cost of delays in the form of additional overhead or staffing costs; (b) the time-value equivalent cost of schedule delay in terms of additional monetary inflation; (c) the time-value equivalent cost of schedule, disruptions, and longevity in terms of value; and (d) the direct-cost consequence in uninflated monetary terms.

If the set of risks is comprehensive and nonoverlapping, then mean total (i.e., base + risk) performance can also be approximated by appropriately combining the base and individual risks, from which the mean collective risk can be determined. However, because this is approximate, it must be done carefully to avoid misleading results. In any case, because it ignores uncertainty in performance, the results should not be used for budgeting (see Chapter 7).

This is the most straightforward method discussed in this chapter because it avoids the ambiguities of intermediate risk-factor ratings and their combination. This method's results can be the least ambiguous and perhaps the most useful, assuming that the DOT wants to use risk assessment results in some quantitative way, providing absolute measures of risk severity and a basis for quantitative risk analysis if needed (Chapter 7). The only drawback is that significant effort might be required to adequately assess the mean values for each risk factor of each risk, and to adequately conduct the analyses to convert the mean values of the risk factors into the mean value of severity. An example of this type of assessment, including an example calculation of the mean value of severity (in equivalent cost terms) and of the collective risk, is shown later in this chapter. Automating this analysis clearly is the key.

The companion Simplified Risk Management Training course addresses this method in more detail. It includes a form (Figure 6.1) and a Microsoft Excel workbook template (see Appendix C) for conducting this type of risk assessment (including automatic analyses of risk severity and mean base + risk performance), appropriately considering risks and opportunities, as well as the performance measures and activities for rapid renewal, especially for simple projects. The risks are defined as impacts (by activity) to the base, with values specified (in equivalent monetary terms) for the various performance measures to determine longevity and severity (see Chapter 4).

Figure 6.1. Form (Appendix C).

Quantitative Mean-Value Rating Method

In this method, rating scales are used instead of actual mean values. These scales are predefined so that each rating (e.g., H) corresponds to a specific range of values. Ultimately, for calculations, a mean value is assumed for each category and used in the same way as for the quantitative mean-value method. For example, if a probability rating of M was defined to represent a range from 40% to 70%, for calculations, a mean value of 55% would be used. This approach therefore involves more approximation, which is the method's main disadvantage compared with the mean-value method.

An example of the mean-value rating assessment is shown below. In this simple example (using only three categories), a High cost consequence rating corresponds to a range of cost change between $100,000 and $1 million, whereas a High probability rating corresponds to a range of probabilities between 50% and 100%. For visualization, the assessments can be color-coded (e.g., red for High, yellow for Medium, and green for Low), as shown.

After the risk factors and risk-factor ratings have been defined, the risk factors (i.e., likelihood and various consequence types) for each risk are assessed using the defined scales. Again, ideally the facilitator will achieve consensus among a broad group of experts. These assessments typically can be done very quickly by comparing against the predefined rating scales, which is the main advantage of this method over the mean-value method. These risk-factor ratings are then combined to get an equivalent combined mean severity rating, via one of the following:

1. An approach that first converts the individual ratings into their equivalent mean values (e.g., middle of the range), then analytically combines those mean values into individual mean performance measures and then a mean combined performance measure in the same way as the mean-value method does, and finally converts the combined value back into an equivalent combined mean severity rating (i.e., an overall mean severity rating for the risk, considering all consequence types or performance measures). Because the combined value is determined before being translated into a rating, risks can be approximately ranked even within each consequence type.
2. An approach that prespecifies the severity rating as a function of the risk-factor ratings (e.g., by matrices), which in turn can be determined beforehand either
  - Analytically, determining the risk severity rating for each possible combination of risk-factor ratings in the same way as discussed above (a minimal sketch of this analytic derivation follows this list); or
  - Subjectively, based on consensus among a wide group of experts, which is difficult to do accurately and defensibly, but relatively easy to do analytically.
  However, in this method, risks cannot be ranked within a category (e.g., all Highs are equal).
3. Pure direct subjective assessment, implicitly considering how the various risk factors combine. However, as discussed above, this can be difficult to do accurately and defensibly, but may be relatively easy to do analytically, and can be very inefficient to do individually for each risk (e.g., in a workshop).

The companion Simplified Risk Management Training course also addresses Method 1 in more detail, including the same form (Figure 6.2) and spreadsheet template (see Appendix C) as used for the mean-value method (in which mean values and ratings can be mixed). Five (rather than three) ratings (VL, L, M, H, VH) are used, including negative values for opportunities. This is applicable for relatively simple projects.

Figure 6.2. Forms (Appendix C).
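Referring to Approach 2 above, a severity-rating matrix can be derived analytically rather than filled in by judgment. The sketch below is a minimal, hypothetical illustration: the rating ranges, representative mean values, and severity thresholds are assumptions for this example, not values defined by the guide.

```python
# Hypothetical sketch of Approach 2: pre-compute a severity-rating matrix
# analytically from representative mean values, instead of filling it in
# subjectively. The ranges, representative values, and severity thresholds
# below are assumptions for illustration only.

PROB_MEAN = {"L": 0.10, "M": 0.30, "H": 0.75}              # representative probabilities
CONSEQ_MEAN = {"L": 50_000, "M": 500_000, "H": 2_000_000}  # representative combined consequences ($)

def severity_rating(unconditional_mean_dollars):
    """Map an unconditional mean severity ($) back onto an L/M/H rating."""
    if unconditional_mean_dollars < 100_000:
        return "L"
    if unconditional_mean_dollars < 500_000:
        return "M"
    return "H"

# Build the matrix once (analytically); workshop participants then only read it.
matrix = {(p, c): severity_rating(PROB_MEAN[p] * CONSEQ_MEAN[c])
          for p in "LMH" for c in "LMH"}

for p in "LMH":
    print(p, [matrix[(p, c)] for c in "LMH"])
# L ['L', 'L', 'M']
# M ['L', 'M', 'H']
# H ['L', 'M', 'H']
```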

Example of a Quantitative Mean-Value Assessment (this is not the hypothetical case study)

For a project, the base performance has been established and a set of risks (relative to that base) has been identified, and their factors (mean value of impacts of various types by activity and likelihood of occurrence) have been assessed quantitatively. For each risk, severity is calculated as follows:

• Calculate the mean-value change in each performance measure as a function of the mean value of unconditional consequences.
• Combine those mean-value changes in each performance measure into a mean-value change in the combined performance measure.

If the set of risks is comprehensive and nonoverlapping, then the mean value of the performance measure can be approximately determined by simply combining the changes associated with each risk. For example:

• Unconditional schedule-change consequence: Schedule critical path change is determined, and related extended overheads (OHs) are added to direct cost:
  - for Risk R1: (6-month delay to ROW – 0 base float for ROW) × 15% probability = 0.9 month (mean-value change to schedule performance measure)
  - for Risk B1: (2 months to procurement – 0 base float for procurement) × 40% probability = 0.8 month (mean-value change to schedule performance measure)
  - for Risks R1 and B1: 0.9 month + 0.8 month = 1.7 months
• Unconditional cost-change consequence: Direct-cost change must be inflated to account for (1) schedule delay and the associated additional OH costs (at $0.1M/month for preconstruction), and (2) additional inflation of total cost due to schedule delay:
  - for Risk R1: {[$0.5M direct uninflated cost to ROW + (6-month delay to ROW – 0 base float for ROW) × $0.1M/month (extended OH for ROW)] × 1.10 (inflation factor for additional direct cost, including delay, for ROW) + $100M (remaining cost after ROW) × 0.02 (increase in inflation in remaining cost after ROW due to 6-month delay in ROW)} × 15% probability = $0.48M (YOE)
  - for Risk B1: {[$2.0M direct uninflated cost to construction + (2-month delay to procurement – 0 float for procurement) × $0.1M/month (extended OH for procurement)] × 1.20 (inflation factor for additional direct cost, including delay, for construction) + $90M (remaining cost after procurement) × 0.01 (increase in inflation in remaining cost after procurement due to 2-month delay in procurement)} × 40% probability = $1.42M (YOE)
  - for Risks R1 and B1: $0.48M (YOE) + $1.42M (YOE) = $1.90M (YOE)
• Unconditional disruption consequence change is determined as follows:
  - for Risk R1: 0 person-hours × 15% probability = 0 person-hours
  - for Risk B1: 0 person-hours × 40% probability = 0 person-hours
  - for R1 and B1: 0 person-hours + 0 person-hours = 0 person-hours

• Longevity change is determined (see Chapter 4) based on changes in cost and disruption associated with operations and maintenance and replacement, as well as schedule of replacement, and various trade-offs, but is zero in this case and not shown.
• Overall severity for a risk, in terms of a combined performance measure, is then determined (see Chapter 4) from changes in individual performance measures and separately assessed trade-offs among the performance measures:
  - for Risk R1: 0.9 month × $0.5M/month (delay value, separate from extended OHs and inflation) + $0.48M + 0 person-hours × $10/person-hour (disruption value) = $0.93M
  - for Risk B1: 0.8 month × $0.5M/month (delay value, separate from extended OHs and inflation) + $1.42M + 0 person-hours × $10/person-hour (disruption value) = $1.82M
  - for R1 and B1: $0.93M + $1.82M = $2.75M

Note: M = million.

The above example of a quantitative mean-value assessment (both inputs and outputs) is summarized in the table below. The first three columns give the scenario for conditional consequence to each performance measure (i.e., if the risk occurs).

Risk                                            | Direct-Cost Change (uninflated $) | Schedule Change (months) | Disruption Change (h) | Scenario Probability    | Risk Severity (equiv $)
------------------------------------------------|-----------------------------------|--------------------------|-----------------------|-------------------------|------------------------
…                                               | …                                 | …                        | …                     | …                       | …
R1. Landowner unwilling to sell key property    | $0.5M to ROW                      | 6 to ROW                 | 0                     | 15% through ROW         | $0.93M
B1. Poor bidding climate for general contractor | $2M to construction               | 2 to procurement         | 0                     | 40% through procurement | $1.82M
…                                               | …                                 | …                        | …                     | …                       | …
Total Unconditional Consequence                 | $1.90M                            | 1.7                      | 0                     |                         | $2.75M

Note: M = million, ROW = right-of-way, YOE = year-of-expenditure (i.e., inflated).
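For readers who want to check the arithmetic, the sketch below reproduces the R1 and B1 calculations above (values in $M and months, taken directly from the example). It is a verification aid only, not the guide's workbook template.

```python
# Reproduces the arithmetic of the R1/B1 example above. Values are in $M and
# months, taken directly from the example; this is a check on the numbers, not
# the guide's workbook template.

def unconditional_cost(p, direct_cost, delay, base_float, oh_rate,
                       inflation_factor, remaining_cost, inflation_increase):
    """Unconditional mean cost change (YOE $M): (direct cost + extended
    overhead) inflated, plus extra inflation on the remaining cost caused by
    the delay, all weighted by the probability of occurrence."""
    extended_oh = max(0.0, delay - base_float) * oh_rate
    return p * ((direct_cost + extended_oh) * inflation_factor
                + remaining_cost * inflation_increase)

def unconditional_schedule(p, delay, base_float):
    """Unconditional mean schedule change (months)."""
    return p * max(0.0, delay - base_float)

def severity(sched_months, cost_yoe, disruption_hours,
             delay_value=0.5, disruption_value=10 / 1e6):
    """Combined performance measure (equivalent $M), using the example's
    trade-off values: $0.5M per month of delay, $10 per person-hour."""
    return sched_months * delay_value + cost_yoe + disruption_hours * disruption_value

# Risk R1 (landowner unwilling to sell key property)
r1_cost = unconditional_cost(0.15, 0.5, 6, 0, 0.1, 1.10, 100, 0.02)  # ~0.48
r1_sched = unconditional_schedule(0.15, 6, 0)                        # ~0.9
# Risk B1 (poor bidding climate for general contractor)
b1_cost = unconditional_cost(0.40, 2.0, 2, 0, 0.1, 1.20, 90, 0.01)   # ~1.42
b1_sched = unconditional_schedule(0.40, 2, 0)                        # ~0.8

r1_sev = round(severity(r1_sched, r1_cost, 0), 2)   # 0.93
b1_sev = round(severity(b1_sched, b1_cost, 0), 2)   # 1.82
print(r1_sev, b1_sev, round(r1_sev + b1_sev, 2))    # 0.93 1.82 2.75
```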

Example of a Quantitative Mean-Value Rating Assessment (this is not the hypothetical case study)

Similar to the previous example, the base performance for a project has been established and a set of risks (relative to that base) has been identified, and their factors (mean value of impacts of various types by activity and likelihood of occurrence) have been assessed qualitatively (i.e., L, M, H in this example). These risk-factor ratings are defined below. The risk-factor ratings are converted into approximate mean values, and then risk severity is calculated by first calculating the mean-value change in each performance measure as a function of the mean value of unconditional consequences, and then combining those mean-value changes in each performance measure into a mean-value change in the combined performance measure in the same way as for the mean-value method (see previous example), which is then translated back into a rating (as also defined below).

For example, to determine the effect of Risk R1 on project completion date:

• H (>3 months) assessed change to duration of ROW translates to about 6 months;
• L (<20%) assessed probability of occurrence translates to about 10%;
• Mean change in critical path can be determined to be (6-month delay to ROW – 0-month base float for ROW) × 10% probability = 0.6 month (which translates back to an L schedule change). Note that the mean-value ratings result in slightly different mean values than those assessed directly in the mean-value method (see previous example) because of the approximation associated with ranges.

Rating definitions are as follows (the Cost, Schedule, and Disruption Change columns are consequence ratings):

Rating | Cost Change(a)      | Schedule Change(b) | Disruption Change(c) | Probability | Severity(d)
-------|---------------------|--------------------|----------------------|-------------|---------------------
L      | <$100,000           | <1                 | <10,000              | <0.2        | <$200,000
M      | $100,000–$1,000,000 | 1–3                | 10,000–100,000       | 0.2–0.5     | $200,000–$2,000,000
H      | >$1,000,000         | >3                 | >100,000             | >0.5        | >$2,000,000

(a) Cost change in direct uninflated dollars (to specific activity).
(b) Schedule change in months of delay to specific activity (regardless of critical path).
(c) Disruption change in equivalent person-hours (to specific activity).
(d) Severity in equivalent inflated dollars.
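A minimal sketch of the rating-based schedule calculation above: ratings are mapped to representative mean values, combined, and the result translated back into a rating. The representative values for an H schedule change (about 6 months) and an L probability (about 10%) follow the example; the M values are assumed midpoints of the defined ranges.

```python
# Sketch of the rating-based calculation above. The representative values for
# an H schedule change (~6 months) and an L probability (~10%) follow the
# example; the M values are assumed midpoints of the defined ranges.

SCHEDULE_MEAN = {"L": 0.5, "M": 2.0, "H": 6.0}   # months (ranges: <1, 1-3, >3)
PROB_MEAN = {"L": 0.10, "M": 0.35, "H": 0.75}    # probability (ranges: <0.2, 0.2-0.5, >0.5)

def schedule_rating(mean_change_months):
    """Translate a mean schedule change (months) back into the L/M/H rating."""
    if mean_change_months < 1:
        return "L"
    if mean_change_months <= 3:
        return "M"
    return "H"

# Risk R1: H-rated change to ROW duration, 0 months of base float, L-rated probability.
base_float = 0.0
mean_change = (SCHEDULE_MEAN["H"] - base_float) * PROB_MEAN["L"]
print(round(mean_change, 2), schedule_rating(mean_change))  # 0.6 L
```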

The above example of a quantitative mean-value rating assessment (both inputs and outputs) is summarized in the table below. The Cost, Schedule, and Disruption Change columns are the rated scenario for conditional consequence to each performance measure (i.e., if the risk occurs).

Risk                                            | Cost Change       | Schedule Change  | Disruption Change | Scenario Probability | Risk Severity
------------------------------------------------|-------------------|------------------|-------------------|----------------------|--------------
…                                               | …                 | …                | …                 | …                    | …
R1. Landowner unwilling to sell key property    | M to ROW          | H to ROW         | L                 | L                    | M
B1. Poor bidding climate for general contractor | H to construction | M to procurement | L                 | M                    | M
…                                               | …                 | …                | …                 | …                    | …
Total unconditional consequence                 | H                 | M                | L                 |                      | H

Other Methods

The qualitative red/yellow/green method is essentially the same as the quantitative mean-value rating method, except that

• The ratings involve only three categories (H, M, L), which are quick and color-coded (and thus visual). However, the ratings are generally undefined and thus ambiguous (How much is "High"? What is the relationship between the risk consequence and the performance measure?).
• The risk factors are usually combined in a purely subjective (rather than in an analytical) way to assess risk severity. If not assessed directly (i.e., implicitly considering the various risk-factor ratings), this combination is sometimes done through predefined matrices showing which combinations of likelihood and various consequences result in various categories of risk, although, generally, there still would not be any mathematical basis for the matrix (only judgment). Conceivably, these matrices could be developed beforehand through analysis, similar to what would be done for the mean-value rating method.

• Risks are only roughly categorized (e.g., as High) without any ranking within categories.
• Except by judgment, total risks cannot be determined (e.g., M + M = ?).

There is no significant advantage to this method compared with the quantitative mean-value rating method, except that it does not require analysis to determine risk severity as a function of the risk-factor ratings. However, this generally results in much less accuracy (and often even errors) in the subsequent severity ratings, with little increase in efficiency, because the analysis can be done relatively easily. Hence, this method is not generally recommended.

The qualitative rating-scale method is basically an extension of the red/yellow/green method, and attempts to improve how the risk factors are combined to determine risk severity. This method is very similar to the mean-value method, except that dimensionless, numerical rating scales (rather than mean values for the mean-value method, or just L, M, H for the red/yellow/green method) are generally used for the risk factors. For example, 1 = "rare" to 5 = "certain" for likelihood, and 1 = "low" to 5 = "catastrophic" for consequences. These numerical ratings are then combined in essentially the same mathematical way as for the mean-value method, to determine unconditional consequences and then severity for each risk. For example, the numerical ratings for likelihood (e.g., P = 1) and combined consequences (e.g., C = 3), which in turn are either assessed directly or determined from the various types of consequences (e.g., as the maximum rating among them), are simply multiplied to determine the severity for each risk (e.g., severity = 1 × 3 = 3). The set of risks (i.e., all risks in the risk register) can then be categorized and ranked on the basis of the severity of each individual risk. This is intended to address a few of the problems associated with the red/yellow/green method (i.e., combining risk factors and ranking risks within categories), while still being quick.

However, this rating-scale approach to combining likelihood and consequence ratings is only mathematically correct if the rating scales have been appropriately defined and the factors appropriately assessed (e.g., consequences in terms of changes in performance measures). This means that if ratings are being multiplied (as described above), then the individual rating scales should be linear, so that, for example, a consequence of 2 is twice as bad as a consequence of 1, and a likelihood of 4 is twice as high as a likelihood of 2. Otherwise, if the scales are not appropriately defined, the combination of individual likelihood and consequence ratings will produce severity ratings that might scale nonlinearly or even be noncomparable (e.g., does 1 × 3 = 3 × 1?). Conceivably, like the mean-value rating method, these numerical ratings (if adequately defined) can be translated into mean values and then used, in which case the method is essentially the same as the mean-value rating method. However, even if done properly, this method provides only a relative measure of risk (i.e., in terms of the nondimensional rating scales, such as 1–5), and not an absolute measure (e.g., in terms of dollars or months), which would be needed to evaluate the cost–benefit of possible risk reduction actions (see Chapter 8). Hence, there is no advantage to this method over the mean-value method, and it is generally not recommended.
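The noncomparability problem above can be made concrete with a hypothetical illustration (the rating-to-value mappings below are invented for this purpose): two risks with the same likelihood × consequence product can have very different mean severities when the underlying scale is not linear.

```python
# Hypothetical illustration of the caution above: with a nonlinear consequence
# scale, equal rating products (P x C) can hide very different mean severities.
# The rating-to-value mappings below are invented for this illustration.

PROB = {1: 0.05, 2: 0.10, 3: 0.25, 4: 0.50, 5: 0.90}   # likelihood ratings -> probability
CONSEQ = {1: 50e3, 2: 200e3, 3: 1e6, 4: 5e6, 5: 20e6}  # consequence ratings -> $

risk_a = (1, 3)   # rare likelihood, large consequence
risk_b = (3, 1)   # moderate likelihood, small consequence

for name, (p_rating, c_rating) in [("A", risk_a), ("B", risk_b)]:
    product = p_rating * c_rating                      # rating-scale "severity"
    mean_dollars = PROB[p_rating] * CONSEQ[c_rating]   # actual mean severity ($)
    print(name, product, mean_dollars)
# Both risks score 1 x 3 = 3 on the rating scale, but A's mean severity is
# about $50,000 while B's is about $12,500: the ratings cannot tell them apart.
```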

Example of a Qualitative "Red/Yellow/Green" Assessment

Severity Rating, as a function of the scenario probability rating (rows) and the conditional consequence scenario rating (columns):

Scenario Probability Rating | Consequence L | Consequence M | Consequence H
L                           | L             | L             | M
M                           | L             | M             | H
H                           | M             | H             | H

Conditional Consequence Scenario Rating, as a function of the conditional schedule consequence rating (rows) and the conditional cost consequence rating (columns):

Conditional Schedule Consequence Rating | Cost L | Cost M | Cost H
L                                       | L      | M      | H
M                                       | M      | M      | H
H                                       | H      | H      | H

Note: Risk severity either is assessed directly (implicitly considering conditional consequence scenario and scenario probability ratings) or is based on predefined matrices (e.g., as shown above), which must be carefully developed to avoid errors.

Guidance

This chapter has introduced a number of concepts and methods related to risk assessment. Although this guide is not meant to be a how-to document (the companion Simplified Risk Management Training course materials address implementation), it is worthwhile here to provide some key guidance related to the previously introduced concepts and methods.

Risks (including opportunities) are uncertain events that might or might not happen, and if they happen, could result in uncertain (i.e., difficult-to-predict) consequences to the project's performance measures. Risk assessment attempts to "wrap its arms around" each risk, and characterize and quantify (or qualify) it. This can be difficult, considering variability in the conditions under which the project will be planned and constructed, and uncertainty in (i.e., our lack of knowledge or ignorance about) those conditions, about what problems and opportunities exist, and about what their impacts might be if they occur. Therefore, a few key points are notable when conducting risk assessment to ensure that the assessment reasonably, accurately, and defensibly quantifies (or qualifies) the risks and opportunities:

• Consequences Must Be Consistent with Likelihoods. The assessed consequences reflect the anticipated magnitude of a risk's impacts. The magnitude of the impacts implies a particular likelihood of occurrence. For example, catastrophic impacts are usually less likely than are minor impacts (but not always, depending on whether thresholds are defined). A number of realistic or feasible scenarios or outcomes could be defined for a particular risk.

Therefore, the authors recommend defining a realistic risk scenario that pairs consistent likelihood and consequence values. Note that from a mean-value perspective, it is the combination of risk-factor values (i.e., the mean risk) that matters, assuming realistic scenarios. Hence, for example, a risk with a 25% probability of occurrence and a $4 million cost impact is equivalent to a risk with a 50% probability of a $2 million cost impact, because both have a mean risk of $1 million. Having said this, however, extreme scenarios (i.e., very low likelihoods of catastrophic consequences) are not usually selected as the basis for mean-value assessments if other, more average scenarios are possible.

• Bias Must Be Identified and Mitigated. The goal of risk-factor assessment is to obtain accurate, defensible assessments. As mentioned previously, subjective assessments are usually required to assess risk factors but are subject to bias. Bias essentially comes in two forms (Roberds 1990):
  - "Motivational bias" occurs when someone says something that contradicts what they believe. This bias can be difficult to detect and counter, but is often present when participants have a stake in a project's continued survival or other conflict of interest. It can also occur when experts intentionally inject some conservatism into their assessments or intentionally exclude some scenarios. The various types of motivational biases include:
    ▪ Management: telling them what they want to hear;
    ▪ Expert: wanting to appear knowledgeable;
    ▪ Conflict: being self-serving;
    ▪ Conservative: erring on the "safe" side; and
    ▪ Peer pressure: going with the crowd.
  - "Cognitive bias" occurs when someone believes something that is inconsistent with the facts. Most people will overestimate what they know about a particular topic, which leads to overoptimism and to underestimating uncertainty. The various types of cognitive biases include:
    ▪ Anchoring: focusing on the starting point (e.g., neglecting extremes);
    ▪ Overconfidence: ignoring unlikely possibilities;
    ▪ Coherence/Conjunctive Distortions: ignoring the combination of component parts (e.g., if Event x requires a set of y independent events, then P[x] = ∏ P[y]; a small numerical illustration follows below);
    ▪ Availability: focusing on easily recalled information;
    ▪ Base Rate: focusing on the most specific information (neglecting data-based frequency of occurrence); and
    ▪ Representativeness: ignoring the relevance of different types of information (treating all information equally).
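As the promised illustration of the conjunctive distortion (with hypothetical probabilities): when an event requires several independent sub-events to all occur, its probability is the product of theirs, which is usually far lower than intuition suggests.

```python
# Hypothetical illustration of the conjunctive (coherence) distortion above:
# an event that requires several independent sub-events to all occur is much
# less likely than any one of them feels individually.
import math

sub_event_probs = [0.8, 0.7, 0.9, 0.6]   # each sub-event feels "likely"
p_event = math.prod(sub_event_probs)     # P[x] = product of the P[y] values
print(round(p_event, 3))                 # 0.302, well under an intuitive "likely"
```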

These biases can often be effectively countered by a qualified facilitator and use of project-independent subject-matter experts. However, simply being aware of these potential biases is the first step toward mitigating them. In addition, avoiding these other common pitfalls (which a qualified facilitator should also help with) can mitigate bias:
  - Poor problem structure (e.g., ambiguous definition of what is to be assessed, such as an average value or a random value);
  - Adverse group interactions (e.g., dominance by one person);
  - Ignoring important relationships among factors; and/or
  - Failing to consider all possibilities and all available information appropriately.

• Experiment with Methods for Assessing Risk Factors. A few methods are covered in the companion Simplified Risk Management Training course, but DOTs should be aware that numerous approaches are available to help ensure reasonable risk-factor assessments. A particular approach or tool might resonate better with one group than another, and so the DOT can experiment with each group to determine which works best for that group. Example methods include
  - Ranges, which use thresholds;
  - Comparative probabilities, which compare the likelihood of the risk being assessed against the likelihood of common events (e.g., coin toss or roll of a die) with known probabilities, bracketing and converging on the risk;
  - Ranking and relative difference, which first ranks possible outcomes by pairwise comparison, then assesses relative likelihoods (in terms of ratios) by pairwise comparison, then uses the ratios from the comparisons to determine individual probabilities;
  - Probability wheel, which uses a wheel with a rotating wheel segment to visually cue for probability, or converging confidence intervals by pairwise comparison;
  - Decomposition, which is the process of graphically breaking down a risk into its component causes or sequence of events or outcomes (a minimal sketch follows this list). Decomposition can be accomplished using well-established graphical tools:
    ▪ "Event trees" (also known as "probability trees") are useful for graphically defining scenarios of outcomes and the corresponding probabilities and consequences that might result from a triggering risk event.
    ▪ "Fault trees" can be used to evaluate the probability that a risk ("failure event") occurs, by building up the various combinations of events that are required to trigger the risk's occurrence.
  - Full probability distributions (see Chapter 7).
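As the decomposition sketch promised above (the triggering event and branch values are hypothetical): an event tree assigns probabilities and consequences to each scenario that can follow the trigger, and the conditional and unconditional means fall out directly.

```python
# Minimal event-tree sketch for the decomposition methods above. The
# triggering event and branch values are hypothetical.

# Triggering event: contaminated soil is encountered during excavation.
p_trigger = 0.30

# Branches, given that the trigger occurs: (description, branch probability, cost $M)
branches = [
    ("minor, handled on site",       0.60, 0.2),
    ("off-site disposal required",   0.30, 1.5),
    ("redesign and schedule impact", 0.10, 4.0),
]
assert abs(sum(p for _, p, _ in branches) - 1.0) < 1e-9  # branches are exhaustive

conditional_mean = sum(p * cost for _, p, cost in branches)   # mean cost if trigger occurs
unconditional_mean = p_trigger * conditional_mean             # probability-weighted mean

for name, p, cost in branches:
    print(f"{name}: scenario probability {p_trigger * p:.2f}, cost ${cost}M")
print(f"conditional mean ${conditional_mean:.2f}M, unconditional mean ${unconditional_mean:.2f}M")
```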

• Use Appropriate Methods for Combining Risk Factors. As described previously, a variety of methods are available for combining risk-factor assessments into a measure of risk severity, ranging from implicit subjective assessment, to explicit mean-value assessment and analysis, to detailed probabilistic analysis (as discussed in Chapter 7). Applying these methods involves different levels of skill and effort, and they result in different levels of accuracy and defensibility. The appropriateness of any particular method depends on how the information will be used, as well as the nature of the risk-factor assessments. Within this context, the analysis of severity should adequately consider (a) all relevant performance objectives and trade-offs among them, (b) the uncertainties in meeting those performance objectives, and (c) how each risk or opportunity affects meeting those objectives, including the relationship between the risk consequence factors (e.g., uninflated direct cost, schedule delay, disruption), as assessed, and the performance objectives (e.g., inflated total cost, overall project schedule). As previously noted, for relatively simple projects, a Microsoft Excel workbook template has been developed to document the assessments and automatically calculate risk severity and mean performance.

CONCLUSIONS ON RISK ASSESSMENT

The objective of risk assessment is to adequately describe the severity of project risks, to rank the risks for subsequent risk reduction planning, and, if done quantitatively, to form a basis for probabilistic risk analysis, if needed (e.g., to objectively establish budgets or contingencies). Various methods are available for conducting risk assessment, and each has its strengths and weaknesses:

• Qualitative methods are quick but prone to inaccuracy, with limited usefulness.
• Quantitative methods involve more effort but are more accurate and useful, although a statistical basis has limited applicability, whereas a subjective basis is prone to bias (requiring mitigation by the facilitator).

Two of the methods (mean-value ratings and mean values), which are appropriate for relatively simple projects, have been incorporated in specific forms and in a Microsoft Excel workbook template, available online at www.trb.org/Main/Blurbs.168369.aspx.

The DOT should select an appropriate method depending on its objectives for the risk assessment. Regardless of the chosen method, the DOT should take steps to ensure that risks are assessed defensibly, accurately, and efficiently, and documented appropriately (in the risk register). A qualified risk facilitator who guides the assessment process (at the appropriate level of detail, considering the model and factors involved), mitigates bias, and develops consensus among a broad group of project-team and independent experts is key.
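To make the documentation point concrete, below is a minimal sketch of a risk-register record; the field names and example risk are assumptions for illustration, and the guide's actual form and workbook template (Appendix C) define the real layout and calculations.

```python
# Minimal sketch of a risk-register record for documenting assessments.
# Field names are assumptions for illustration; the guide's form and Microsoft
# Excel workbook template (Appendix C) define the actual layout and calculations.
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    risk_id: str
    description: str
    affected_activity: str
    probability: float             # chance the risk occurs
    cost_change: float             # conditional, uninflated $ if it occurs
    schedule_change_months: float  # conditional delay to the affected activity
    disruption_hours: float        # conditional disruption, person-hours

    def unconditional(self):
        """Probability-weighted mean changes, for ranking by consequence type."""
        return {
            "cost": self.probability * self.cost_change,
            "schedule": self.probability * self.schedule_change_months,
            "disruption": self.probability * self.disruption_hours,
        }

x1 = RiskAssessment("X1", "Utility relocation delayed (hypothetical)", "Utilities",
                    0.25, 250_000, 2.0, 5_000)
print(x1.unconditional())  # {'cost': 62500.0, 'schedule': 0.5, 'disruption': 1250.0}
```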

Example

The hypothetical QDOT case study (see Appendix D), which is used throughout the guide to adequately illustrate the various steps of the risk management process and includes a risk management plan (RMP), involves assessments of each of the risks in the risk register (using the methods and guidance described in this chapter), as documented in Appendix D and Appendix E, RMP, Chapter 3, and summarized below.

QDOT initially decided that assessing the current risks in terms of mean-value ratings (e.g., L, M, and H) would be sufficient for its intended use of the risk assessment results (i.e., prioritizing the risks for proactive individual risk reduction). Hence, the group first defined mean-value rating scales for the various risk factors:

• Each of the three types (cost, schedule, and disruption) of impacts of occurrence (e.g., a Medium (M) cost impact was defined to correspond to a value between 3% and 10% of the base project cost, in uninflated dollars);
• The probability of occurrence (e.g., M probability corresponded to a probability of occurrence between 0.2 and 0.4); and
• The severity of combined impacts (considering the probability of occurrence and trade-offs) (e.g., M severity was defined to correspond to a value between 3% and 10% of the base combined performance, in equivalent inflated dollars).

Risk Factor Rating Scale Definitions for QDOT US-555 and SR-111 Project

The group then discussed each of the identified risks in the risk register and quantified (by consensus) each of them in terms of mean-value ratings (or sometimes directly in terms of mean values) for the following, before any additional mitigation: (a) the cost, schedule, and disruption impacts (and the affected activity) if the risk occurs; and (b) the probability that the risk (as defined by its impacts) will occur (during the particular project phase under which it is categorized). Subsequently, a quantitative risk analysis was conducted, for which these unmitigated assessments were refined; see Appendix E, Addendum X.

QDOT then used these assessments to determine (using an appropriate risk model, for example, the Microsoft Excel workbook template that incorporates the algorithms presented in this chapter): (a) the approximate unmitigated mean-value contribution of each risk to the project objectives of cost, schedule, and disruption; and (b) by combining with QDOT's established value trade-offs among the objectives, an unmitigated mean-value longevity and then severity for each risk, based on which the risks were ranked. Subsequently, a quantitative risk analysis was conducted, for which the contribution of each risk and other uncertainty to the potential budget, before any additional mitigation, was determined more accurately; see Appendix E, RMP, Addendum X.

Unmitigated Risk Factor Assessments for Select Rapid Renewal Risks for QDOT US-555/SR-111 Project

Phase                                     | Example Risk or Opportunity                                           | Probability of Occurrence | Mean Cost Change if Occurs                      | Mean Duration Change if Occurs                 | Mean Disruption Change if Occurs
------------------------------------------|-----------------------------------------------------------------------|---------------------------|-------------------------------------------------|------------------------------------------------|---------------------------------
Preliminary design/environmental process  | PD13. Change in environmental documentation                           | L                         | +M to preliminary design/environmental process  | +H to preliminary design/environmental process | 0
Right-of-way, utilities, and railroad     | RU3. Unwilling sellers                                                | H                         | +M to ROW/Util/RR                               | 0                                              | 0
Procurement                               | CP2. Uncertain D-B contracting market conditions at time of bid       | 25%                       | +10% of base (i.e., +$1.2M) to D-B construction | +1 month to procurement                        | 0
Construction                              | CN3. Problems with planned accelerated bridge construction technique  | H                         | +L to D-B construction                          | +L to D-B construction                         | +L to D-B construction

Note: The three impact columns give mean values or ratings applied to the affected activity.

Unmitigated Risk Severity Determination and Ranking for Select Rapid Renewal Risks for QDOT US-555/SR-111 Project

Project Phase                             | Example Risk or Opportunity                                                                               | Mean Severity (equiv YOE $M, or rating per scale definitions above) | Rank
------------------------------------------|-----------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------|-----
Preliminary design/environmental process  | PD13. Change in environmental documentation                                                               | L                                                                    | 11
Right-of-way, utilities, and railroad     | RU3. Unwilling sellers                                                                                    | M                                                                    | 4
Procurement                               | CP2. Uncertain D-B contracting market conditions at time of bid                                           | 0.38                                                                 | 9
Construction                              | CN3. Problems with planned accelerated bridge construction (technology, procurement, and implementation)  | L                                                                    | 12
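As a small check on the one row above that is assessed directly in mean values (CP2), its unconditional mean contributions follow from this chapter's method. Converting them into the tabulated mean severity of 0.38 additionally requires QDOT's inflation assumptions and value trade-offs, which are documented in Appendix D and are not reproduced here.

```python
# Unconditional mean contributions of risk CP2, from the table above:
# 25% probability; if it occurs, +$1.2M (uninflated) to D-B construction and
# +1 month to procurement. Converting these into the tabulated mean severity
# of 0.38 (equiv YOE $M) also requires QDOT's inflation and trade-off values
# (Appendix D), which are not shown here.

p_cp2 = 0.25
cost_if_occurs = 1.2    # $M, uninflated, to D-B construction
delay_if_occurs = 1.0   # months, to procurement

print(p_cp2 * cost_if_occurs)   # 0.3  ($M, unconditional mean cost change)
print(p_cp2 * delay_if_occurs)  # 0.25 (months, unconditional mean schedule change)
```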


TRB’s second Strategic Highway Research Program (SHRP 2) S2-R09-RW-2: Guide for the Process of Managing Risk on Rapid Renewal Projects describes a formal and structured risk management approach specifically for rapid renewal design and construction projects that is designed to help adequately and efficiently anticipate, evaluate, and address unexpected problems or “risks” before they occur.

In addition to the report, the project developed three electronic tools to assist with successfully implementing the guide:

• The rapid renewal risk management planning template will assist users with working through the overall risk management process.

• The hypothetical project using the risk management planning template employs sample data to provide users with an example of how to use the rapid renewal risk management template.

• The user's guide for the risk management planning template provides further instructions for users of the rapid renewal risk management template.

Renewal Project R09 also produced a PowerPoint presentation on risk management planning.

Disclaimer: This software is offered as is, without warranty or promise of support of any kind either expressed or implied. Under no circumstance will the National Academy of Sciences or the Transportation Research Board (collectively "TRB") be liable for any loss or damage caused by the installation or operation of this product. TRB makes no representation or warranty of any kind, expressed or implied, in fact or in law, including without limitation, the warranty of merchantability or the warranty of fitness for a particular purpose, and shall not in any case be liable for any consequential or special damages.

Errata: When this prepublication was released on February 14, 2013, the PDF did not include the appendices to the report. As of February 27, 2013, that error has been corrected.
