5
INFRASTRUCTURE IMPROVEMENT THROUGH PERFORMANCE-BASED MANAGEMENT

As illustrated at the conclusion of Chapter 4, the result of applying the committee's process and framework will be a multidimensional assessment of the performance of a particular infrastructure system. Certainly there are challenges in making this assessment, as has been discussed: the involvement of many stakeholders, dealing with multiple measures of performance, collecting and analyzing required data, and adjusting the assessment process to the specific decision situation.

There are also other issues in infrastructure decision making based on performance assessment. The committee discussed these issues in three principal areas: (1) dealing with multiple objectives, dimensions of performance, and stakeholders' points of view, (2) dealing with multiple jurisdictions and multiple infrastructure modes to reach conclusions about system performance, and (3) the significance of uncertainty and risk in infrastructure decisions.

MULTIPLE OBJECTIVES AND VIEWS

Infrastructure performance has multiple dimensions, essentially because infrastructure is intended to serve multiple objectives. The explicit recognition of the multi-objective nature of performance in the assessment process will help create an environment in which decision makers and analysts are able to maintain appropriate roles and in which information essential to effective decision making can be generated and conveyed. Explicitly dealing with the multiple points of view that inevitably come



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.
Terms of Use and Privacy Statement





into play helps to ensure that decisions are consistent with public views and less likely to encounter the resistance embodied in the "Not In My Backyard" response to infrastructure actions.

Considering Multiple Objectives

A fundamental feature of multi-objective problems is that there is no single, optimal solution. Instead, the focus of problem solving and decision making is on finding a set of solutions that seem "better" than others, that is, solutions that are not clearly dominated by any other, and on exploring the tradeoffs among the objectives implied by choosing one of these "better" (i.e., nondominated, noninferior, efficient, or Pareto-optimal) solutions over another. In other words, the measure of good infrastructure decision making is that no one can produce a clearly better plan of action.

Over the last 25 years, dozens of techniques have been developed for analyzing multi-objective problems.1 Rich in variety, reflecting the range of problems and decision contexts for which they were developed, the methods can be conveniently grouped into two categories: generating methods and preference-oriented methods.

As the name implies, generating methods are particularly useful for generating "better" solutions to a problem. Their aim is to create either an approximate or exact representation of the set of nondominated solutions, which forms the basis for exploration of the tradeoffs among objectives. No attempt is made to incorporate decision makers' preferences in any formal or explicit manner. By contrast, preference-oriented methods use explicit quantitative statements of decision makers' preferences to identify a preferred solution (Cohon, 1978).

Though preference-oriented techniques can help policy makers understand the implications of preferences and preference conflicts for decision making, many of them suffer from several disadvantages. They tend to reveal little information about the set of "better" solutions, thus limiting the insight gained from analysis. They are also rigid in the way preferences must be stated and are sensitive to characteristics of decision-making processes typical of environmental problems. The presence of multiple decision makers can cause complications that defeat most of the preference-based methods.

A combination of methods often works best. A generating technique would be used first to develop an appreciation of the range of choices and the tradeoffs. The planning workshop or design "charrette" sometimes used for infrastructure planning is an example of a generating method.2 In reacting to the results generated, decision makers may be able to articulate preferences, for instance, identifying a particular portion of the nondominated set worthy of further, detailed exploration.
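In practice, the nondominated set that generating methods aim to characterize can be computed directly by pairwise dominance checks. The sketch below is a minimal illustration in Python; the plan names and objective scores are hypothetical, not drawn from the study, and all objectives are treated as "larger is better."

```python
# Identify the nondominated (Pareto-optimal) alternatives among a set of
# candidate plans scored on several objectives, all to be maximized.

def dominates(a, b):
    """True if scores a are at least as good as b on every objective
    and strictly better on at least one (all objectives maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def nondominated(plans):
    """Return the plans not dominated by any other plan."""
    return {
        name: scores
        for name, scores in plans.items()
        if not any(dominates(other, scores)
                   for other_name, other in plans.items()
                   if other_name != name)
    }

# Hypothetical plans scored on (service quality, cost savings, safety).
plans = {
    "widen highway":    (8, 3, 5),
    "add transit line": (7, 6, 6),
    "do nothing":       (2, 9, 4),
    "repave only":      (2, 8, 4),   # dominated by "do nothing"
}

print(sorted(nondominated(plans)))
# -> ['add transit line', 'do nothing', 'widen highway']
```

The surviving plans are exactly those among which the tradeoff discussion described in the text takes place: improving any one objective means giving up another.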

A key to the successful implementation of a multi-objective analysis lies in early and frequent involvement of all participants. In that very important sense, the theoretical foundations of multi-objective methods can be viewed as defining the assessment process described in this study. The multi-objective methods do not, and probably should not, yield single solutions to problems, however comforting that prospect might seem. Instead, these methods and the infrastructure performance assessment process highlight tradeoffs that must be made in the real world of decision making.

The specifics of how to convey tradeoffs to decision makers in an accessible manner are themselves challenging from several viewpoints. At the heart of most of these challenges is the rapid increase in dimensionality mentioned earlier. With only two or three objectives, choices and tradeoffs may be illustrated by conventional two- and three-dimensional graphs. It is not at all obvious how best to portray visually a tradeoff response surface in, say, four or five dimensions. The option always exists of showing just two-dimensional slices of the surface at a time (e.g., between highway speed limit and likely accident rates), but such an approach can fail to convey an appreciation for the interconnectedness of the problem across all dimensions.

Some success has been achieved with projection formats, notably the value path, in which all objective values are projected onto parallel (usually normalized) scales and the points on these scales associated with a particular solution are connected to show that they represent one solution alternative.3 Such approaches can effectively deal with certain other problematic aspects of real-world decision-making problems, specifically the presence of noncommensurate objectives. The scales in a value path can represent a common quantified metric (e.g., dollars, noise levels, maximum wait times), but they can also represent qualitative measures (e.g., most to least preferable aesthetic attributes). Note that for this latter kind of objective the need for quantification is not eliminated; the quantification is merely kept implicit in the display. This in turn points up an important caveat: the intent is most assuredly not to suggest that qualitative objectives somehow escape the need for quantification. Assumptions must be clearly stated and the details of the underlying quantification made explicit.

Computer-based systems have in recent years significantly enhanced analysts' ability to elicit decision makers' preferences and apply these preferences consistently in making decisions. The analytic hierarchy process (AHP) and the simple multi-attribute rating technique (SMART) are two increasingly widely used procedures for which simplified software has been developed.
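The value-path projection and a SMART-style aggregation can both be sketched in a few lines. In the Python example below, the alternatives, objective values, and weights are all hypothetical: each objective is min-max normalized onto a common 0-to-1 scale, oriented so that 1 is always best (handling noncommensurate units such as percent coverage, dollars, and minutes), and a simple weighted sum of the kind SMART produces is then computed.

```python
# Project each alternative's objective values onto normalized 0-1 scales
# (the "value path" idea) and compute a SMART-style weighted score.
# direction = +1 means the objective is maximized, -1 means minimized.

objectives = [
    ("coverage (%)", +1),
    ("annual cost ($M)", -1),
    ("mean wait (min)", -1),
]

alternatives = {
    "plan A": [85, 40, 12],
    "plan B": [70, 25, 9],
    "plan C": [95, 60, 15],
}

def normalize(alternatives, objectives):
    """Map each objective onto [0, 1], with 1 = best, via min-max scaling."""
    paths = {name: [] for name in alternatives}
    for j, (_, direction) in enumerate(objectives):
        column = [vals[j] for vals in alternatives.values()]
        lo, hi = min(column), max(column)
        for name, vals in alternatives.items():
            score = (vals[j] - lo) / (hi - lo)        # 1 = largest raw value
            paths[name].append(score if direction > 0 else 1.0 - score)
    return paths

def smart_score(path, weights):
    """SMART-style aggregate: weighted sum of the normalized scores."""
    return sum(w * s for w, s in zip(weights, path))

paths = normalize(alternatives, objectives)
weights = [0.5, 0.3, 0.2]                             # hypothetical priorities
for name, path in paths.items():
    print(name, [round(s, 2) for s in path], round(smart_score(path, weights), 2))
```

Connecting each alternative's three normalized points across the parallel scales yields its value path; the weighted score collapses the path to a single number only after the implicit quantification has been made explicit, as the text cautions.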

Considering Multiple Points of View

Inevitably, at least in the initial stages of assessment and decision making, different decision makers or stakeholders will have different perspectives on the assessment and different preferences for possible actions to "improve" performance. Resolving these differences is not simply a technical problem but involves ethical questions as well.

Schulze and Kneese (1981) discuss the ethical aspects of such decision making, where many people bear the impact of resulting actions. Their context was deciding on levels of risk (a topic that receives further attention in this chapter), but their basic points apply more generally to infrastructure performance. They characterize four primary bases for making decisions: utilitarian, delivering the greatest good for the greatest number of people; egalitarian, measuring the well-being of the group (i.e., society) by the well-being of the worst-off person in that group; elitist, measuring the well-being of the group by the well-being of the best-off individual; and libertarian, an amalgam of principles, in which individual freedoms prevail except where others may be harmed.

Each of these four bases is used in infrastructure development and management, although seldom in pure form. For example, locations of fixed-route transit lines (e.g., rail rapid transit and light rail) and stations may be characterized as typically utilitarian, selected to give access to the greatest possible number of people within an area. Drinking-water standards are egalitarian, set to ensure that no one is likely to contract an illness because of water-borne pathogens. Some people assert that U.S. highway policy is elitist regarding urban personal mobility, favoring those who can afford to own automobiles, although others note that highways often represent the lowest public component of cost for high mobility. Our general approach to the management of much of the infrastructure is essentially libertarian, although the lack of information about harm being done (e.g., air pollution being generated, safety hazards posed by driving above speed limits) can make these decisions seem faulty.

In practical terms, taking care to involve stakeholders and applying a process that helps to ensure that their interests are effectively represented would significantly enhance confidence in infrastructure decision making.

DEALING WITH MULTIPLE JURISDICTIONS AND MODES

While different levels of government have well-defined roles in the planning, development, operation, maintenance, and financing of urban infrastructure systems, the systems themselves do not respect jurisdictional boundaries. In transportation, state-owned and -operated highways, local streets, regional transit facilities and services, and airports serving a regional population can all be found within one local jurisdiction. Air and water pollution and transportation functions have multiregional and national span. Water distribution and wastewater collection systems often cross municipal borders, and solid waste collection increasingly requires regional approaches to recycling and managing disposal site capacity.

Similarly, a variety of issues create interrelationships across infrastructure modes that can be addressed only through cooperation among the agencies responsible for each mode (i.e., transportation, water, wastewater, and solid waste). The potential impacts on water quality of leachate from solid waste disposal and of stormwater runoff from highways are examples of such cross-mode issues.

As a result of these cross-jurisdiction and cross-mode issues, improving the performance of urban infrastructure will require significant cooperation across jurisdictions and across agencies with responsibility for different infrastructure modes. Regional agencies, special-purpose authorities and districts, joint-power agreements, and other voluntary or legislatively defined arrangements have been used to provide for regional and cooperative approaches. In addition, federal and state legislation funding infrastructure often requires multijurisdictional cooperation and involvement, as well as broad public involvement, as a condition for funding eligibility.

Generally, requirements for multijurisdictional cooperation within an infrastructure mode have been more prevalent than requirements for cooperation across infrastructure modes. However, recent growth management legislation in a number of states has mandated coordination of development planning with the provision of all infrastructure required to support that development.
The committee found that legislation, organizational relationships, and other institutional factors are often the most challenging obstacles to effective performance assessment as well as management. The committee recommends that responsible agencies undertake a critical self-assessment to determine the nature and extent of specific regulations, organizational relationships, jurisdictional limitations, customary practices, or other factors that may constitute impediments to adoption of the proposed infrastructure performance measurement framework and assessment process. Such a self-assessment could be conducted within the context of a specific infrastructure management problem or as a generic review, but it will necessarily involve time, money, and a concerted effort to motivate active community involvement with open, candid discussion. The assessment should conclude with explicit recommendations of institutional change that may be needed to enable a systemwide approach to management of infrastructure performance.

While a variety of legal, regulatory, and financial mechanisms exist to encourage multijurisdictional cooperation, formal regional management is unlikely to be possible in many areas. A variety of less formal or less comprehensive arrangements, for example, study commissions, cooperative regional councils, state planning programs, and university-based regional research and policy institutions, may be established to accomplish many of the same ends. The strength of these arrangements and the degree to which regional approaches are followed and supported vary widely from area to area.

The following factors are likely to influence the degree of multijurisdictional cooperation:

- the degree to which regional agencies or approaches are legislatively defined;
- the extent to which regional or multijurisdictional entities have independent taxing authority;
- a regional vision for growth and development around which substantial consensus has been achieved and which can serve as a catalyst for joint action;
- the severity of problems that can only be effectively addressed by joint action (e.g., congestion, solid waste disposal capacity, water supply and quality); and
- strong public and private sector leadership.

Within infrastructure modes, there are many examples of strong multijurisdictional approaches. Both Portland and the Twin Cities provided a number of cases of successful regionalism. However, as further improvements in infrastructure system performance are sought, improving multijurisdictional cooperation is likely to be a critical step. Cross-modal cooperation will also be critical to improving infrastructure performance and is less prevalent than multijurisdictional cooperation within a modal area.
In many cases, improved cross-modal cooperation can be accomplished within a particular jurisdictional level by encouraging more coordination among departments or agencies with responsibilities for different infrastructure systems. The committee observed that in some cases federal programs hinder or preclude cross-modal cooperation; many capital grant and regulatory programs are mode-specific.

The committee recommends that federal infrastructure policies and regulations be reviewed in detail and revised as needed to accommodate local decision-making processes and performance measurement frameworks. There are valid national interests in local infrastructure performance, for example, uniformly high standards of public health and safety, and local decisions should be made within the context of those interests. Nevertheless, one measure of federal policy effectiveness should be its sensitivity to local variations in objectives and subsequent performance assessment.

A final important factor to be considered in measuring infrastructure system performance is the extent of the system included in the analysis. While some aspects of performance may be related to a single facility, other dimensions of performance may require consideration of a group of interrelated facilities or an entire infrastructure system. For example, the structural capacity of a bridge can be measured and evaluated independently of other bridges or elements in the transportation system, but the traffic service provided by the same bridge can be measured only in the context of the other elements of the highway system of which it is a part. Similarly, improvements in transportation, water distribution, and wastewater collection facilities must be considered as part of the systems or subsystems affected by a change in any one facility.

While infrastructure professionals have always recognized the importance of "system effects," continued improvement in computer-based forecasting and simulation methods and new technology for measuring and monitoring system conditions have made more sophisticated approaches for assessing system performance widely available. Remote sensing, real-time monitoring, and network analysis and simulation models provide powerful new capabilities for measuring systemwide conditions and evaluating system changes.

UNCERTAINTY AND RISK IN INFRASTRUCTURE DECISION MAKING

Lack of sufficient information is a source of uncertainty in performance assessment and thus in decision making as well. Uncertainty is inherent in infrastructure performance assessment because information is never complete, the future can only be projected and not accurately predicted, and people's perceptions and judgments depend on the specific context in which they make a decision.
Related to uncertainty is the notion of "risk," which involves both uncertainty and some kind of loss or damage that might be suffered if particular events occur. This loss or damage, in turn, results from the interaction of "hazard," the source of danger (e.g., a toxic substance in water), and the safeguards taken to protect against the hazard (e.g., water treatment to remove the substance). Risk is then the possibility of loss or injury, or the probability of such loss.

Reliability, one of the three principal dimensions of performance, is essentially a measure of uncertainty. As such, this dimension has a crucial link with risk analysis. Risk itself may be selected as a component of effectiveness. Analysis of risk has become an important tool for setting policy in such areas as drug regulation and the setting of environmental standards. Many of the principles and procedures used in risk analysis apply as well to all aspects of performance assessment. Hence, while the principles and methods of risk analysis are well beyond the scope of this study, the committee agreed that the relationship of these topics to performance assessment warrants consideration.

Risk is never zero, but it can be small. Included under the heading of "safeguards" is the idea of simple awareness: awareness of a hazard reduces risk. Thus, if we know there is a pothole in the road pavement around the corner, it poses less risk to us than if we drive around not knowing it is there. Generally, the assessed level of risk is influenced by awareness and perceptions.4

Analytical Methods

Attempts to deal analytically with uncertainty, risk, and reliability have frequently depended on complex applications of statistics and the mathematical theory of probability. Such methods are often useful but pose an ever-present danger of becoming overly sophisticated and unsupported by the available basic data. In addition, decision makers often cannot readily assimilate sophisticated statistical and probabilistic concepts in making actual choices among alternative courses of action.

Several methods have been developed, however, that can deal effectively with these concepts. The most successful applications tend to consider uncertainty in ways that fit naturally into common decision-making contexts, specifically the consideration of alternative scenarios. This approach is no panacea, but if a manageable number of alternative scenarios can be agreed on, and if some agreement can be reached on the assignment of relative probabilities of occurrence to these scenarios, then methods exist with which the information contained in multiple scenarios can be aggregated to the point of policy relevance. Notable among these methods is regret theory, which had its origin as an alternative to the maximization of expected utility as a basis for decision making (von Neumann and Morgenstern, 1947).
"Regret" in this context refers to the disappointment, loss, or damage experienced when things do not occur as hoped or planned. Regret theory has to do with choosing courses of action that will control possible regret. The decision maker may forgo some possibly greater benefit that might accrue if all goes well under one course of action, choosing instead some other course that has less potential for loss. For example, adopting a new process for sewage treatment may save money and reduce the concentration of plant nutrients in the effluent if the process works successfully, but it might force extensive dumping of raw sewage into the river if the process fails. Adopting the apparently more expensive but proven method is a decision to avoid regret.
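The sewage-treatment choice just described can be cast as a minimax-regret calculation. In the Python sketch below the payoff numbers are hypothetical net benefits (say, $M over a planning horizon), not figures from the study: regret in each scenario is the shortfall from the best payoff attainable in that scenario, and the rule selects the action whose worst-case regret is smallest.

```python
# Minimax-regret sketch of the sewage-treatment choice described above.
# Rows are actions; columns are scenarios ("works" / "fails").

payoffs = {
    "new process":    {"works": 120, "fails": -200},
    "proven process": {"works":  80, "fails":   60},
}

def regret_table(payoffs):
    """For each action and scenario: best attainable payoff minus actual payoff."""
    scenarios = next(iter(payoffs.values()))
    best = {s: max(p[s] for p in payoffs.values()) for s in scenarios}
    return {action: {s: best[s] - p[s] for s in p} for action, p in payoffs.items()}

def minimax_regret(payoffs):
    """Pick the action whose worst-case (maximum) regret is smallest."""
    regrets = regret_table(payoffs)
    return min(regrets, key=lambda action: max(regrets[action].values()))

print(minimax_regret(payoffs))   # -> proven process
```

Even though the new process has the higher payoff when all goes well, its 260-unit regret in the failure scenario dwarfs the proven process's worst regret of 40, so the regret-averse decision maker chooses the proven method, exactly the behavior the text describes.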

The Matter of Values

The overarching concerns in dealing with matters of uncertainty, reliability, and risk typically involve the question, "How effective (e.g., safe, inexpensive, nonpolluting, nondisruptive) and reliable is effective and reliable enough?" Given that information is incomplete, the ability to project outcomes is limited, and budgets for avoiding hazards or adopting safeguards are restricted, this question frequently arises in infrastructure decision making. In the end, the answer is generally a matter of values and cannot be resolved except through public discussion.

The committee found that community views inevitably become a part of the decision-making process, sometimes through public resistance when those views are inadequately considered. Efficiencies are to be gained when those views are solicited and considered early in the performance assessment process.

For major decisions such as building new transit systems or waste disposal plants, or imposing downtown parking controls or regional water-use restrictions, conflicts in values among various stakeholders are likely. In addition, values and preferences among difficult and unpleasant consequences may not be clearly defined or even well formed for many stakeholders. For example, many people in a region may have little basis for anticipating what will be involved in imposing traffic control measures to meet air pollution regulations. Their opinions will form after they experience the result and may lead to a call for very different ways to achieve stated objectives.

In our society, the answer to the question of "How effective...?" is often determined in the political process, not by scientific analysis. This is as it should be, but a number of factors make the political process somewhat cumbersome in determining acceptable performance. Regardless of how the decision is made, it is meant to represent how members of the public would individually make the acceptable-performance decision. The collective decision is meant to reflect the judgments, perceptions, and values of each person. Different individuals, however, may have widely varying judgments and perceptions and very diverse values. There are no simple solutions to the collective decision in such a case. Such decisions cannot please everyone. In any specific case, regardless of the level of acceptable risk resulting from the decision, many individuals may be quite disappointed and disagree with the alternative chosen.

A factor that worsens these problems is the high level of technical detail involved in many performance decisions. In most cases, these details are either not known or not understood by members of the general public. In many cases, there is no way that the public can become completely informed. The fact is that one must be a specialist to understand many of the technicalities, and there are enough technical details that no one individual can be a specialist in all aspects of an acceptable-risk decision.

That the solution to an acceptable performance decision is not based solely on technical considerations complicates how such a decision should be made. There are numerous ethical constraints on the entire decision process as well, as discussed in preceding sections of this chapter. Some of these constraints come directly from the charters of various governmental organizations, while others are historical in nature. Ethical constraints mean that there are certain alternatives and certain decision processes that simply cannot be followed. For example, a decision process that excludes the participation of the people who would bear substantial adverse impact (e.g., their farms would be taken to build a new airport) is unethical, although many such decisions were once made largely in secret. Since the ethics of our form of representative democracy are based on the consideration of the rights of individuals, such a decision process conflicts with our basic approach to government.

Ethics are also involved in the political question of choosing the person or group responsible for making acceptable-performance decisions. This responsibility carries with it the understanding that the collective decision process will be representative and consistent with our political ethics and social values, and thus acceptable to the public. To determine acceptable performance with collective decisions, the decision process itself must be acceptable.

The Role of Regulatory Agencies

In many situations in our society, the general responsibility for making decisions about acceptable performance rests with a government regulatory agency. The legislative charters for these regulatory agencies, however, often state general, vague objectives for what the agency should do and seldom clearly indicate what the specific objectives of the agency should be or how to measure or achieve the regulatory objectives. These critical questions are left open for the agency to decide for itself, often outside the reach of public participation.

In principle, the regulatory agency provides a mechanism for the collective decisions that must be made on acceptable performance. Typically, the regulators identify specific technical alternatives for achieving adequate performance; information on the risks of each alternative is then gathered, and a recommendation or ruling is made. This ruling has the effect of either choosing the alternative or specifying guidelines by which it should be chosen by others. Rarely are the technical complications discussed here explicitly addressed to the level of detail that might be useful. While systematic, scientific analysis has many appealing features for aiding (but not replacing) the regulatory agency's decision making, exploiting this potential requires that the technical features and the social, political, and ethical aspects complicating the problem be explicitly recognized and addressed.

NOTES

1. For example, refer to Cohon (1978), Chankong and Haimes (1983), Zeleny (1982), Steuer (1986), and Szidarovsky (1986) for reviews of these methods.

2. These techniques often involve public meetings in which infrastructure professionals work with public participants to propose alternative ways of solving a particular problem, such as a highway route location. Such meetings were held in Baltimore's development of the East Boston Street improvement plan.

3. The field of multi-objective programming and decision making is represented by a substantial body of literature. A thorough review of this literature would be beyond the scope of the present study and of limited value to most participants in performance assessment. Good introductions to the principles and techniques suggested here may be found in Chankong and Haimes (1983), Cohon (1978), Steuer (1986), Szidarovsky (1986), and Zeleny (1982).

4. For example, see Cole and Withey (1981).