Measuring and Improving Infrastructure Performance

4 MEASURES OF INFRASTRUCTURE PERFORMANCE

The selection of specific measures of infrastructure performance is central to the assessment process. The committee recommends that the measures used span the three broad dimensions of effectiveness, reliability, and cost, but many more detailed concerns fall within these principal dimensions. The committee's general measurement framework is the large two-part matrix or table illustrated in Figure 4-1. Rows represent the projects, subsystems, or elements that make up the infrastructure systems being assessed, for example, transportation or water supply, specific transit lines, or landfill operations. Columns in the first part of the table represent the system inventory: indicators of the size, geographic extent, annual costs, employment, and other characteristics of the infrastructure system under consideration. Columns in the second part of the table depict measures of the various aspects of performance selected by stakeholders and decision makers and measured in the assessment process. Taken as a whole, the two-part table presents the results of applying the assessment process described in Chapter 3.

This chapter presents many examples of inventory and performance measures. Because performance should be assessed with the involvement of stakeholders—infrastructure's owners, operators, users, and neighbors—the specific set of measures used may differ from place to place and from time to time, as discussed in Chapters 2 and 3, although a desire for comparability across regions may necessitate inclusion of common basic measures.

FIGURE 4-1 General framework of performance measures.

The committee recommends that local agencies with responsibilities for infrastructure management explicitly define a comprehensive set of performance measures. The measures selected should reflect the concerns of stakeholders about the important consequences of infrastructure systems and recognize interrelationships across infrastructure modes and jurisdictions. The committee's framework—in particular, effectiveness, reliability, and cost as the principal dimensions of performance—is a useful basis for defining these measures.

There are many sources of example measures on which agencies can draw. At the national level, President Clinton's "Reinventing Government" initiative has generated extensive discussion of how the management performance of government agencies can be measured and improved. The Intermodal Surface Transportation Efficiency Act of 1991 has spawned studies in state departments of transportation to develop measures of intermodal performance.1 The American Public Works Association (APWA) has issued a report that offers many rules of thumb for assessing local agency management practices.2 The committee reviewed examples of such work and found them useful but generally less comprehensive and detailed than the framework the committee had in mind.

There are fewer sources of information on the functional interactions across infrastructure modes. Urban planners, for example, have sought to devise mathematical models that can forecast the influence of infrastructure investment on patterns of land use in a metropolitan area. Economists similarly have attempted to assess the influence of total infrastructure
investment rates on national and regional economic output. Such efforts have had only limited success, and their value remains controversial. The committee concluded that describing these intermodal interactions in ways that can aid decision makers warrants further research.

TAKING STOCK

The first steps in the performance assessment process are directed at developing a broad inventory of the infrastructure system. Table 4-1 illustrates the specific types of measures such an inventory might include. The first part of the inventory (i.e., the first column in Table 4-1) entails the objectives, goals, aims, or vision that stakeholders set for the system. As indicated by the first row of the table and as discussed in Chapter 2, many measures will be common to all elements of infrastructure, such as those related to economic productivity and opportunity, protection and improvement of public health and safety, protection and enhancement of environmental quality (in both the natural and the built environment), provision of jobs and economic stimulus, and reduction of income inequalities.

Broad goals may be stated more specifically when individual infrastructure modes are considered. Transportation systems, for example, are expected to provide access, mobility, and efficiency of movement. These objectives presumably contribute to economic productivity. Protecting environmental quality (e.g., by reducing air pollution) is a goal that communities may set for the transportation system, beyond the essential service the system delivers. In contrast, municipal waste systems may enhance productivity, but they are intended essentially to protect public health and enhance environmental quality. The specific size, condition, historical expenditures, technology, and area of extent of the system are then recorded (i.e., as indicated in the next column of the table).
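The two-part matrix just described (rows for infrastructure elements, columns for inventory descriptors and performance measures) can be sketched as a simple data structure. This is an illustrative sketch only, not part of the committee's framework; the field names and figures are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ElementAssessment:
    """One row of the two-part assessment matrix (Figure 4-1)."""
    element: str                       # e.g., "water supply" or a specific transit line
    goals: list = field(default_factory=list)         # stakeholder objectives
    inventory: dict = field(default_factory=dict)     # size, condition, cost, technology, extent
    performance: dict = field(default_factory=dict)   # effectiveness, reliability, cost measures

# A minimal example row for a hypothetical transit line
row = ElementAssessment(
    element="light-rail line",
    goals=["improve access", "reduce air pollution"],
    inventory={"track_miles": 15.2, "fleet_size": 26, "annual_om_cost_usd": 4.1e6},
    performance={"on_time_share": 0.93, "seat_miles_per_labor_hour": 1850.0},
)
print(row.element, row.performance["on_time_share"])
```

Taken over all rows, a list of such records is the "large two-part table" of Figure 4-1 in machine-readable form.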
If a comprehensive database and monitoring system have been set up in a region, all of this information will be readily available. The geographic information systems (GIS) that many local and regional planning and management agencies are establishing enable the user to display infrastructure system information at varying levels of detail and geographic scope with relatively little effort. Before such information systems were developed, the inventory tasks might have involved laborious data collection, mapping of data, resizing of maps to common scales, and voluminous tabulations. Comprehensive performance assessment under such conditions would be cost-prohibitive for all but the most important decisions.

Inventorying the scope and context of the infrastructure system (i.e., the third column of Table 4-1) involves political, institutional, and social concerns. These data also may be contained in a GIS, drawing, for example, on the U.S. decennial population census, zoning and subdivision records,
TABLE 4-1 Framework and Measures of System Inventory*

Columns: Public Works Element, Type; Example Goals, Objectives; Scale, Condition, and Geographic Distribution; Scope and Context

Generic: all elements or types

Goals, objectives:
• Enhance economic productivity, opportunity
• Improve public health, safety
• Protect, enhance environmental quality
• Provide jobs and economic stimulus
• Reduce income inequalities

Scale, condition, and geographic distribution:
• System size
• Condition
• System cost
• Technology
• Area of extent

Scope and context:
• Political jurisdictions
• Formal institutions
• Informal, community structure

Examples for Major Classes

Transportation Systems

Goals, objectives:
• Improve access
• Increase mobility
• Move goods efficiently
• Protect safety
• Reduce air pollution
• Increase construction spending
• Subsidize public transit operations

Scale, condition, and geographic distribution:
• System size
  - Lane-miles, track-miles
  - Number of bridges, airports
  - Fleet size and mix
  - Area covered, network configuration
  - Runway length, terminal gates
• Condition
  - Pavement cracking
  - Bridge load capacity
  - Track condition
• System cost
  - Replacement cost (construction)
  - Annual O&M expenditures
• Technology
  - Fuel types
  - Fleet age distribution
• Area of extent
  - Natural barriers
  - Airsheds, basins

Scope and context:
• Political jurisdictions
  - System ownership
  - Pricing authority
  - Funding and taxing arrangements
• Formal institutions
  - Construction
  - Operations
  - Intermodal coordination
• Informal, community structure
  - Ridership
  - Advocacy groups (e.g., bicycle, pedestrian)
  - Land developers
  - Business groups
  - Environmental resistance groups (e.g., airport noise)
  - Neighborhood associations

Water Supply

Goals, objectives:
• Provide adequate, reliable sources of water
• Protect and improve public health
• Provide fire protection
• Enable and support landscaping, gardening, agriculture
• Provide recreation and environmental amenity
• Support biodiversity

Scale, condition, and geographic distribution:
• System size
  - Miles of main, distributor
  - Number of reservoirs, treatment plants
  - Area piped
  - Total storage capacity
• Condition
  - Pipe leakage
  - Reservoir percent of design capacity
  - Design supply (treatment) capacity
• System cost
  - Replacement cost (construction)
  - Annual O&M expenditures

Scope and context:
• Political jurisdictions
  - System ownership
  - Rate-setting, financing
  - Consumers, service area
  - Supply sources
• Formal institutions
  - Utility
  - Regulatory authorities
  - Bonding, financing authorities
• Informal, community structure
  - Land developers
  - Major users (e.g., industries)
  - Recreation interests
TABLE 4-1 Continued

(Water Supply, continued)

Scale, condition, and geographic distribution (continued):
• Technology
  - Treatment process
  - Supply main materials
• Area of extent
  - Drainage basins
  - Catchment areas
  - Recharge areas

Wastewater (sewage and stormwater)

Goals, objectives:
• Remove sanitary, industrial wastes
• Control, reduce health hazard
• Provide flood control, protection

Scale, condition, and geographic distribution:
• System size
  - Miles of main, collector
  - Number of treatment plants
  - Area sewered
  - Separate/combined system
• Condition
  - Pipe leakage, infiltration
  - Plant percent of design capacity
• System cost
  - Replacement cost (construction)
  - Annual O&M expenditures
  - Average unit treatment cost
• Technology
  - Treatment process
  - Main materials
• Area of extent
  - Drainage basins
  - Recharge areas
  - Ecosystems, biomes

Scope and context:
• Political jurisdictions
  - System ownership
  - Service area
  - Rate setting, financing
  - Receiving waters
  - Disposal sites
• Formal institutions
  - Construction
  - Operations
  - Maintenance
  - Regulatory authorities
• Informal, community structure
  - Major producers (e.g., industrial concerns)
  - Advocacy groups
  - Treatment and disposal neighbors
  - Recreational interests

Municipal Waste

Goals, objectives:
• Remove wastes
• Reduce materials
• Avoid exposure of low-income people to toxic materials

Scale, condition, and geographic distribution:
• System size
  - Number of collection vehicles
  - Number of collection, transfer, disposal sites, facilities
  - Landfill design capacity
  - Labor force
• Condition
  - Incinerator age
  - Landfill percent of design capacity
  - Haul distance
• System cost
  - Replacement cost (construction)
  - Annual O&M expenditures

Scope and context:
• Political jurisdictions
  - Collection areas
  - Disposal sites
  - Transportation routes
• Formal institutions
  - Municipal agencies
  - Concessionaires, contractors
  - Recycling and disposal firms
  - Regulatory agencies
• Informal, community structure
  - Major producers (e.g., industrial concerns)
  - Advocacy groups
  - Treatment and disposal neighbors
TABLE 4-1 Continued

(Municipal Waste, continued)

Scale, condition, and geographic distribution (continued):
• Technology
  - Disposal system and processes
  - Recycling processes
• Area of extent
  - Ecosystems, biomes
  - Airsheds
  - Groundwater regimes

* Assessment may be made at the local, regional, or national level; the level will influence the choice of appropriate inventory descriptors. Specific goals and objectives may vary substantially among particular projects and programs. Absence of a goal, objective, or descriptor does not necessarily imply that the missing item is not relevant to the type of infrastructure being considered. The four major classes shown are based on the work of the NCPWI; other infrastructure modes could be included. The table serves as an example and should be revised to suit specific applications of the framework.

public health and education department records, and the like. In some cases, useful data have been collected by city or state agencies, but differing formats and frequency of data collection from different sources may not allow easy comparison of projects or systems across jurisdictions. Sometimes a major infrastructure project provides an opportunity for assembling a database that can subsequently be maintained for other uses.

Taken as a whole, the inventory represented by the measures in Table 4-1 is a snapshot of the infrastructure system as seen from several perspectives. Like a photograph, the inventory represents a particular time and is taken to serve a particular function, giving perhaps a close-up look at some small part of a region's infrastructure or a broad view of the region within a statewide context.
For example, if the performance of a single sewage treatment plant is being measured, the inventory will be a "close-up" listing of particular equipment such as filters, aerators, and chlorinators; but if the decision to be made concerns federal policies to reduce untreated overflow from combined sewer systems nationwide, such detail in the inventory of combined systems would be superfluous.

Data availability can influence the clarity and coverage of this inventory snapshot. Some data are collected to meet the requirements of particular aspects of public policy and are restricted in their coverage. For example, health data reported to the Centers for Disease Control in Atlanta are aggregated and archived at the National Center for Health Statistics. Such data can often be used to establish regional baselines or look for trends
that indicate changes in infrastructure performance. Similarly, the National Highway Traffic Safety Administration (NHTSA) maintains files of accident statistics, and the Environmental Protection Agency maintains water pollution data under the auspices of the National Pollutant Discharge Elimination System (NPDES). Data files on hazardous materials transport and the sites of known toxic wastes have also been assembled. Because these data sources lack a common purpose, assembly of the data and their analysis for infrastructure performance assessment can be inordinately challenging and expensive.

DATA AS A CONCERN

Sometimes data are not immediately available because data collection is felt by government officials to be too costly or insufficiently useful, or because private-sector firms offer the data at prices that public agencies have been unwilling to pay. Demonstrating the usefulness of data will prompt data collection and justify its cost. This demonstration can occur by showing how other jurisdictions are using data or, over time, through innovation that presents new data requirements. For instance, 20 years ago few firms kept track of the true costs of inventory because of the way most firms were functionally organized. When the field of logistics developed, it was demonstrated that goods in inventory have an opportunity cost (turned into cash, the goods could be invested to earn interest). Once this was recognized, accounting for the carrying cost of inventory became, and continues to be, standard practice for firms. The costs existed 20 years ago, but they were not recognized. The lesson is that those who do not collect data today may need to be educated about the potential uses of such data. A tradeoff must generally be made between the desirability and the availability of data.
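The logistics example above is easy to make concrete. A minimal sketch of the opportunity-cost calculation the text describes; the rates and dollar amounts are hypothetical.

```python
def inventory_carrying_cost(inventory_value: float, opportunity_rate: float,
                            storage_rate: float = 0.0) -> float:
    """Annual carrying cost of goods held in inventory.

    opportunity_rate: return the tied-up cash could earn elsewhere (e.g., interest).
    storage_rate: optional warehousing/insurance cost as a fraction of value.
    """
    return inventory_value * (opportunity_rate + storage_rate)

# $2 million of goods held for a year at a 5% opportunity rate and 2% storage rate
cost = inventory_carrying_cost(2_000_000, 0.05, 0.02)
print(round(cost))  # 140000
```

The point of the passage is that this cost was always being incurred; only once it was routinely computed did it begin to influence decisions, and the same holds for infrastructure data.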
Over time, a prototype system and formats for data collection may be developed that will greatly reduce the effort required in this inventory stage of performance assessment. Such a system might be analogous to the generally accepted accounting principles (GAAP) used in monitoring private businesses' financial performance. Knowledge of political and economic relationships within the region may be less formally inventoried but is embodied in the participation of key stakeholders and so becomes an informal part of the inventory.

The committee found that lack of data, and the consequent inability to measure the infrastructure system and its performance, in many cases limits the system's susceptibility to effective management. The committee therefore recommends that data be collected on a continuing basis to enable long-term performance measurement and assessment. Each region with infrastructure decision-making authority should establish a system for continuing data collection to maintain its infrastructure
inventory and enable longer-term performance monitoring. Metropolitan areas with basic databases and modeling tools already in place should seek to integrate information on separate infrastructure modes into a uniform and accessible system, so that existing data sets are documented in consistent ways and within the context of relevant national data collection activities such as the NPDES, NHTSA, and Centers for Disease Control programs already mentioned. These federal agencies should ensure that their national data sets are compatible with one another (e.g., in geographic detail, time periods, and indexing), computerized, and available online to users via computer and telecommunications access modes.

The committee recognized that many metropolitan areas do not have an agency with clearly defined responsibility or authority to assemble such data. State and federal agencies have the scope to serve as catalysts for establishing regional data collection programs, as the preceding discussion illustrates, but the effort should have a firm local base if it is to succeed. Often an individual local government official or a nongovernmental entity (e.g., a university-based research center) willing to assume leadership can be instrumental in this effort.

PRINCIPLES FOR SELECTING PERFORMANCE MEASURES

Once the inventory has been made, performance measures must be selected or developed. Tables 4-2, 4-3, and 4-4 display a range of example measures for effectiveness, reliability, and cost, respectively. As the committee frequently reminded itself throughout its deliberations, the point of assessing infrastructure performance is to provide a better basis for decision making about how resources are used and, ultimately, to enhance the performance of infrastructure in particular regions.
Starting from this premise, the committee agreed on several principles that should guide the selection of performance measures:

Each measure of performance should be meaningful and appropriate to the needs of the decision makers (e.g., elected and other officials, the business community, citizens groups, local residents), such that
• each measure reflects specific goals, regulations, or the community's vision of the purposes of its infrastructure;
• the measures indicate the outcomes resulting from infrastructure service availability and delivery (e.g., access and movement, health, economic activity, safety);
• the measures reflect local conditions and current or pending issues and decisions to be made;
• the measures facilitate comparisons among alternative means of providing the service (for example, private versus public sector, regional versus local management, or alternative responsible agencies or departments); and
• all stakeholders can accept each measure as a meaningful and objectively measurable indicator or as a reasonable proxy upon which discussion may be based.

As a set, measures should support a thorough assessment of performance in which
• all important management concerns are addressed;
• there is a balanced treatment of qualitative as well as quantitative aspects of performance;
• trends indicating likely future performance during the facility life cycle can be observed;
• preventive as well as corrective management actions can be considered to maintain acceptable performance throughout the facility life cycle;
• asset values and depreciation of facilities and equipment during the facility life cycle can be assessed; and
• comparisons of performance across regions are facilitated when multiregional funding or management issues are involved.

The costs of measurement should be reasonable in relation to
• the costs of the actions being considered;
• the possible consequences of the decisions and the value stakeholders place on those consequences; and
• the possibility that changes in goals or regulations will alter the set of appropriate measures and needs for data.

The decision-making environment (the nature of the decision-making process; who decides what is to be done at the local, state, and federal levels; and why certain decisions are made) defines the context within which performance measures will be used to achieve improvement. These principles must be applied within that context—that is, by stakeholders undertaking the assessment process described in Chapter 3.
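One of the principles above, facilitating comparisons of performance across regions, depends in practice on documenting data sets "in consistent ways." A hypothetical sketch of what a minimal common record format for performance measures might look like; the field names and values are illustrative and are not drawn from any actual federal data set.

```python
import csv
import io

# A common record layout so transit, water, and waste measures can be
# joined on geography and time period across jurisdictions.
FIELDS = ["region_id", "mode", "measure", "value", "unit", "period"]

raw = io.StringIO(
    "region_id,mode,measure,value,unit,period\n"
    "US-MD-BAL,transit,on_time_share,0.91,fraction,1994\n"
    "US-MD-BAL,water,pipe_leakage,12.5,percent,1994\n"
)
records = list(csv.DictReader(raw))

# Group measures by region so cross-modal comparisons become a simple lookup
by_region = {}
for r in records:
    by_region.setdefault(r["region_id"], []).append(
        (r["mode"], r["measure"], float(r["value"]))
    )

print(by_region["US-MD-BAL"][0])  # ('transit', 'on_time_share', 0.91)
```

The design point is modest: once every jurisdiction reports measures against the same column names, units, and period conventions, the multiregional comparisons the principles call for reduce to ordinary joins.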
MEASURES OF EFFECTIVENESS

The committee proposed that effectiveness—the ability of the system to provide the services the community expects—may generally be described in terms of the system's capacity and delivery of services, the quality of services delivered, the system's compliance with regulatory concerns, and the system's broad impact on the community. As Table 4-2 illustrates, each of these four aspects of effectiveness encompasses an extensive and varied
TABLE 4-2 Framework and Measures of System Effectiveness*

Rows: public works element, type (from the inventory, Table 4-1). Columns (dimensions of effectiveness): Service Delivery/Capacity (engineering specifications, technical output, quantity delivered; supply and consumption); Quality of Service to Users (customer acceptance, satisfaction, willingness to pay); Regulatory Concerns; Community Concerns, Community-wide Impact, Externalities.

Generic: all elements or types

Service delivery/capacity:
• Output (per unit time, e.g., hour, day, month; peak, average, minimum)
• Technical productivity (output per unit input)
• Utilization (per unit time, e.g., hour, day, month; peak, average, minimum)
• Access/coverage (e.g., fraction of population served)
• Contingency

Quality of service to users:
• Consumer safety
• Satisfaction
• Availability on demand/congestion
• Environmental/ecological quality

Regulatory concerns:
• Service-related (e.g., pricing)
• External (e.g., Clean Air Act)

Community concerns, community-wide impact, externalities:
• Economic impact
• Public health and safety
• Social well-being (quality of life)
• Environmental residuals and byproducts (e.g., pollution and other NEPA impacts)
• National security
• Equity (e.g., distribution of costs, benefits, consequences)
TABLE 4-2 Continued

Transportation Systems (typical example measures of effectiveness)

Service delivery/capacity:
• Output
  - Vehicle movements
  - Seat-miles
  - Route closures (hours), breakdowns
• Technical productivity
  - Seat-miles per labor hour
  - Seat-miles per route mile
  - Operating cost per passenger
  - Passenger-miles per seat-mile
  - Percent of bridges with weight restrictions
  - Average fuel consumption
• Utilization
  - Mode split
  - Trip purpose distribution
  - Number of trips
  - Passenger-miles
• Access/coverage
  - By jurisdiction
  - Special segments (e.g., mobility impaired)
• Contingency
  - Emergency response capability
  - Severe weather response experience

Quality of service to users:
• Consumer safety
  - Accident events
  - Value of losses
  - Fatalities per capita or per annual user
• Satisfaction
  - Level of service
  - Average speed
  - Space per passenger
  - On-time service
  - Fare, cost to use
  - Ride quality
• Availability on demand
  - Average wait time
  - Route closures (hours), breakdowns
• Environmental/ecological quality
  - Air pollution emission rates
  - Road treatment chemical pollution (e.g., winter salt)

Regulatory concerns:
• Service-related
  - Access to international routes
  - Vehicle inspection effectiveness
  - Speed limits
• External
  - Noise control restrictions
  - Noise emission targets
  - Air pollution emission restrictions
  - Fleet fuel efficiency standards

Community concerns, community-wide impact, externalities:
• Economic impact
  - Person-hours of travel time
  - Transport industry sales
  - Access-based property value increases
• Public health and safety
  - Pedestrian accident rate
• Social well-being
  - Neighborhood disruption
  - Construction/repair disruptions
• Residuals and other NEPA impacts
  - Construction wastes
  - Road salt in stormwater runoff
• National security
  - Bridge heights adequate for military vehicles
• Equity
  - Income versus mode split
  - Service to minority communities
TABLE 4-2 Continued

Municipal Waste (typical example measures of effectiveness)

Service delivery/capacity:
• Output
  - Tons collectable
  - Special waste collection, disposal potential (e.g., medical, nuclear)
• Technical productivity
  - Cost per ton
  - Tons collected per truck
  - Ton-miles haul per ton
• Utilization
  - Tons collected, per capita or per job
• Access/coverage
  - Collection area
  - Industrial customers
• Contingency
  - Alternatives in event of pollution regulatory change

Quality of service to users:
• Consumer safety
  - Hazardous waste control
• Satisfaction
  - Community perception of risk
  - Storage space required per household, employee
• Availability on demand
  - Disposal reserve capacity to accommodate growth
  - Disposal restrictions
• Environmental/ecological quality
  - Air, water pollution emissions

Regulatory concerns:
• Service-related
  - Recycling requirements
  - Incinerator moratoria
• External
  - Clean Air Act restrictions

Community concerns, community-wide impact, externalities:
• Economic impact
  - Disposal restrictions
  - Resource recovery
  - Landfill areas with restricted development potential
• Public health and safety
  - Worker accident rates
• Social well-being
  - Street cleanliness
• Residuals and other NEPA impacts
  - Incinerator emissions
  - Wetlands affected
  - Population exposed to disposal site effects
• National security
• Equity
  - People adjacent to disposal sites, haul routes

* Dimensions and measures may be added or deleted to match local objectives. Absence of a measure does not necessarily imply that the indicator is less relevant to the type of infrastructure being considered. All measures may be determined at particular reliability levels. The table serves as an example and should be modified to suit specific projects and programs.
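Many of the technical-productivity entries in Table 4-2 are simple ratios of an output measure to an input or capacity measure. An illustrative calculation for a few of the transportation measures; all input figures are hypothetical.

```python
# Technical productivity ratios from Table 4-2 (hypothetical daily inputs)
seat_miles = 120_000.0        # seats offered * vehicle-miles operated
labor_hours = 800.0           # operating labor expended
route_miles = 45.0            # length of the route network
passenger_miles = 78_000.0    # travel actually consumed

seat_miles_per_labor_hour = seat_miles / labor_hours   # output per unit labor input
seat_miles_per_route_mile = seat_miles / route_miles   # service intensity of the network
load_factor = passenger_miles / seat_miles             # passenger-miles per seat-mile

print(seat_miles_per_labor_hour, round(load_factor, 2))  # 150.0 0.65
```

Ratios like these are easy to compute once the inventory supplies consistent output and input quantities, which is why the chapter ties the measures back to Table 4-1.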
TABLE 4-3 Examples of Measures of System Reliability

Deterministic:
a. Engineering safety factors
b. Percentage contingency allowances
c. Risk class ratings

Statistical, probabilistic:
d. Confidence limits
e. Conditional probabilities (Bayesian statistics)
f. Risk functions

Composite (typically deterministic indicators of statistical variation):
g. Demand peak indicators
h. Peak-to-capacity ratios
i. Return frequency (e.g., floods)
j. Fault-tree analysis

TABLE 4-4 Examples of Measures of System Cost

1. Investment, replacement, capital, or initial cost
   a. Planning and design costs
   b. Construction costs
   c. Equity
   d. Debt
2. Recurrent or O&M cost
   a. Operations costs
   b. Maintenance costs
   c. Repair and replacement costs
   d. Depreciation costs
   e. Depletion costs
3. Timing and source
   a. Timing of expenditure
   b. Discount and interest rates
   c. Exchange rates and restrictions (e.g., local versus foreign currency)
   d. Sources of funds, by program (e.g., federal or state, taxing authority)
   e. Service life

set of specific indicators and measures. As discussed with respect to the inventory (Table 4-1), many types of measures are generic to all modes of infrastructure. These generic measures, shown in the first row of Table 4-2, derive directly from the goals and objectives of infrastructure. Each one, however, may require a list of several more detailed subsidiary measures to reflect the concerns of a particular mode's performance. Many of these more detailed measures are suggested in the mode-specific rows of Table 4-2.
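The "timing and source" entries of Table 4-4 matter because costs incurred at different times are not directly comparable; discounting puts them on a common footing. A standard present-value sketch, with a hypothetical discount rate and cash flows.

```python
def present_value(cash_flows, rate):
    """Discount a list of (year, amount) pairs back to year 0."""
    return sum(amount / (1 + rate) ** year for year, amount in cash_flows)

# $10M construction now, plus $0.5M O&M in each of years 1-3, at a 6% rate
flows = [(0, 10_000_000)] + [(t, 500_000) for t in (1, 2, 3)]
pv = present_value(flows, 0.06)
print(round(pv))  # a single comparable figure for the whole expenditure stream
```

Combining the initial cost and the recurrent O&M stream into one discounted figure is what allows the capital and recurrent categories of Table 4-4 to be weighed against each other.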
"Service delivery/capacity" and "quality of service to users," in the second and third columns of Table 4-2, include many of the concerns that engineers, planners, public health personnel, and other infrastructure professionals seek to address in design and system management. Items in the fourth column (regulatory concerns) also are addressed by these professionals but often are treated as a check on feasibility after the system's principal configuration has been determined. For example, highway engineers consider whether ambient air quality standards in a region are likely to be violated but seldom adjust highway pavement and intersection designs to reduce pollution emissions.3

A primary source of measures for assessment is the set of design guidelines, codes, regulatory standards, and other indicators that infrastructure professionals routinely use in their work. The decision makers undertaking a performance assessment should carefully examine each indicator to ensure that the measure really reflects stakeholders' current interests rather than some abstract or obsolete concept of "need" (as discussed in Chapter 2).

The final column in Table 4-2, community concerns and community-wide impacts, includes many items that fall outside the scope of the immediate requirements placed on the system. While economists refer to many of these items as "externalities," they often have immediate importance in decision making. For example, Portland's light rail transit system maintains its strong political support in part because its service encourages concentration of economic development along the train's route. This concentration yields benefits in terms of control of land use as well as enhanced public demand for transit. Many communities are coming to recognize the importance of infrastructure's impact on the social, cultural, and aesthetic aspects of our environment.
This fourth dimension of performance encompasses such matters as whether a highway in Baltimore divides or obliterates a formerly vibrant neighborhood, whether a ventilation tower in Boston is obtrusive within the architectural context of its surroundings, and how effectively the water-supply canals in Phoenix convey the significance of water and the importance of conserving it in the desert setting.

Over time there is a tendency for public values to shift, bringing so-called externalities into the mainstream of decision making. For example, clean air was taken for granted in the planning and management of cities until motor-vehicle pollution emissions were found to be an important contributor to declining air quality in urban regions. Rising public concerns eventually led to passage of federal legislation that imposed emissions restrictions on vehicles and set ambient air quality standards. This tendency of values to change means that new aspects of performance may arise and be listed in this fourth dimension, as others already listed move toward the columns to the left.
The measures included in Table 4-2 are meant to be a comprehensive but by no means exhaustive listing of performance indicators. As was the case for inventory measures, many effectiveness measures apply broadly to all modes. These generic measures (e.g., system output, user safety, economic effect) are shown in the first row of the table. More detailed measures, illustrated in subsequent rows, may apply to only a single mode or project.

The particular measures used in any specific situation will depend on the scope and type of the decision to be made and the stakeholders in that decision. For example, decision makers concerned primarily with protection of public health will rely on indicators such as mortality, morbidity, and disability rates; rates of occurrence of specific sentinel illnesses; and costs of hospitalization and liability compensation. Federal agencies concerned with national spending and standards will want comparative local and regional analyses made using common measures specific to their programs. As explained in Chapter 3, the exercise of identifying and seeking to resolve conflicts among objectives and performance measures is an important part of the assessment process. Such comparative analyses, when linked to information on infrastructure design and management, can yield valuable insights on the merits of particular design and management practices.

MEASURES OF RELIABILITY

Performance measurement must unavoidably deal with uncertainty. This uncertainty stems first from the inherently statistical character of the natural phenomena (e.g., the daily flow of water in a stream) with which infrastructure must contend and of the characteristics (e.g., material strength, pipe condition, worker health) of the infrastructure itself. Added uncertainty comes from the inadequacies of data, many of which have been discussed in this report.
Finally, assessing performance when changes in the system are being made requires forecasting of future conditions, which introduces more uncertainties. Reliability is a measure of these uncertainties. Reliability is the likelihood that infrastructure effectiveness will be maintained over an extended period of time, or the probability that service will be available at least at specified levels and times during the design life of the infrastructure system. In principle, each measure of effectiveness can be expressed in statistical terms. The confidence level at which the measurement is made is then an indicator of reliability with respect to that particular measure of effectiveness.4

Reliability is influenced by planning and implementation decisions as well as by inherent uncertainties in the infrastructure system. Construction and operations often extend over periods of many years and affect characteristics of infrastructure elements and their behavior. People make judgments about the value or severity of outcomes of infrastructure-related decisions, also influencing reliability. For example, suppose heavier-than-average rainfall in an area caused slope failures that damaged pipelines, blocked highways, and seriously degraded water and energy supplies and road access for a large number of people. Initial measurements of soil properties, an important basis for designing the facilities, depended on a limited number of samples and tests. If development in the area was anticipated to be low, then decisions may have been made to accept a somewhat higher risk of slope failure. In hindsight, one might question whether adequate soil samples were taken or whether design assumptions were made wisely; at the time, however, the responsible decision makers may have dealt with the uncertainties as well as they could.

Collecting statistical data for large numbers of measures is costly and time-consuming. In addition, some indicators may not be quantified or easily measured in numerical terms. Other indicators of reliability may then be useful. Table 4-3 lists some of these measures. Reliability measures generally apply to all infrastructure modes but may be expressed differently from one application to the next. For example, many aspects of water supply and wastewater infrastructure are analyzed in terms of an anticipated peak flood or water flow. The peak is stated in terms of its anticipated frequency of recurrence (e.g., the "100-year flood"). Similarly, many aspects of transportation infrastructure are analyzed in terms of a relatively infrequent peak level of traffic, for example, the "peak hour of the average day in the peak month" or the "100th busiest hour." Such measures may be used as indicators of reliability.
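The recurrence-interval convention can be restated as a probability over a facility's design life. As a rough sketch (the figures and the assumption of independent years are illustrative, not drawn from the report): if each year carries an annual exceedance probability of 1/T, the chance that a T-year event occurs at least once in n years is 1 minus (1 minus 1/T) raised to the power n.

```python
def prob_at_least_one_event(return_period_years: float,
                            design_life_years: int) -> float:
    """Probability that an event with the given recurrence interval
    (e.g., the "100-year flood") occurs at least once during the design
    life, assuming independent years with annual probability 1/T."""
    annual_p = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_p) ** design_life_years

# A "100-year flood" is far from improbable over a 50-year design life:
risk = prob_at_least_one_event(100, 50)  # about 0.39
```

Under these assumptions the "100-year flood" has roughly a 39 percent chance of occurring at least once during a 50-year service life, which is one way of making the reliability implications of a design peak explicit for stakeholders.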
Engineers and other infrastructure professionals sometimes use a contingency allowance or "safety factor" to assess such parameters as structural load-carrying capacity.5 A higher safety factor is an indicator of greater reliability. As in the case of effectiveness, the specific measures may be selected to suit the problem or needs of the community and the decisions to be made. Regardless of which measures are selected, however, explicit recognition of uncertainty is a key element of the committee's concept of infrastructure performance.

MEASURES OF COST

Measuring infrastructure costs is often a complex financial exercise that goes well beyond simply recording expenditures for facilities construction, operations, and maintenance. The basic elements of expenditure, from which indicators of cost are derived, are included among the inventory measures in Table 4-1. As shown there, consideration must generally be
given to the initial construction or replacement cost of facilities (also called investment or capital cost) and the recurring expenditures for operations and maintenance that will be required throughout the system's service life. Measures of cost will generally reflect such factors as the source of funds (i.e., who pays), timing of expenditures, and relative preferences for short- or long-term commitments. Table 4-4 presents a framework of factors that will influence cost measures. While costs are almost always measured in dollars or some other currency, the actual measure may be an equivalent present value of past and future expenditures, an equivalent uniform annual expenditure, an implied effective rate of return on investment, or some other computed indicator. The calculations may encompass all expenditures or other resource requirements, or only those coming from particular sources. For example, federal government programs that provide funds for state or local construction of new facilities may encourage much more construction than would otherwise be undertaken within the limits of state and local funding. The state or local government may then be responsible for operation and maintenance expenses for new facilities but have no adequate source of revenue to pay these costs. In contrast, private bond lenders may require that an infrastructure agency (e.g., a toll authority) seeking to borrow money for new construction collect and specifically reserve adequate revenues to cover these future expenses as well as to repay the borrowed amount. Whether a government agency or other entity has the ability and authority to tax or charge fees to recover the costs of infrastructure will also influence how some costs are viewed. Both budgetary and functional aspects of performance typically will influence decision making.
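The equivalent-present-value and equivalent-uniform-annual-expenditure indicators mentioned above follow from standard engineering-economy formulas. The sketch below (the dollar figures, discount rate, and service life are hypothetical, not from the report) spreads a capital cost over the service life with the capital recovery factor and adds the recurring operations and maintenance expenditure:

```python
def equivalent_uniform_annual_cost(capital: float,
                                   annual_om: float,
                                   rate: float,
                                   years: int) -> float:
    """Convert a capital outlay plus recurring O&M spending into an
    equivalent uniform annual cost, using the capital recovery factor
    A/P = i(1+i)^n / ((1+i)^n - 1)."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return capital * crf + annual_om

# Hypothetical facility: $10 million to build, $150,000/year O&M,
# 6 percent discount rate, 30-year service life.
annual = equivalent_uniform_annual_cost(10_000_000, 150_000, 0.06, 30)
```

Computed this way, alternatives with different mixes of up-front and recurring costs can be compared on a single annual figure, provided stakeholders agree on the discount rate and service life.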
BENCHMARKS AND STANDARDS FOR ASSESSMENT

Understanding the measures of effectiveness, reliability, and cost in a particular situation is generally accomplished by comparing the measurements to some example or base. The base may be informal and derived from experience, as is the case when most people recognize that traffic congestion on a particular highway is severe or that brown water flowing from the tap is abnormal. Rules of thumb may provide somewhat more formal and numerical bases for judgment; for example, more than 10 people standing in line to check in at the airline ticket counter will represent a significant delay for travelers. In principle, a complete set of such bases is required for performance to be assessed. These bases for judgment are generally termed "benchmarks" and "standards."
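To make the distinction concrete, a minimal sketch (the airport and air-quality numbers are invented for illustration): a benchmark describes a measurement relative to past observation, while a standard is a formally adopted limit whose exceedance is a violation.

```python
def compare_to_benchmark(value: float, benchmark: float) -> str:
    """Describe a measurement relative to a benchmark drawn from past
    observation (e.g., last year's originating-passenger count)."""
    change = (value - benchmark) / benchmark * 100
    return f"{change:+.1f}% versus benchmark"

def violates_standard(value: float, standard: float) -> bool:
    """A standard is a formally adopted limit; exceeding it is a violation."""
    return value > standard

# Hypothetical airport: 2.10 million originating passengers this year
# against a benchmark of 2.00 million last year.
print(compare_to_benchmark(2_100_000, 2_000_000))  # +5.0% versus benchmark
```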
A benchmark is typically developed by observing the past behavior of a system or of comparable systems. For example, an airport may compare its total annual number of originating passengers to the benchmark of the previous year's count. An airline may compare its monthly average load factor to the reported industrywide average.6 Availability of regularly collected data enables benchmarking and thereby facilitates comparisons over time or among regions.

When the basis for comparison is formally adopted by law, regulation, industry convention, or a consensus among stakeholders, it becomes a standard. Air pollution levels that exceed federal standards are a violation of enforceable regulations. Passenger delays exceeding 10 minutes for checking in at the airport ticket counter may be unsatisfactory in terms of the service standard an airline sets for itself. Standards may be derived from benchmarks, theoretical analyses, cross-sectional analyses, or other sources.

The committee recommends that benchmarks or standards be developed for all measures of infrastructure performance. Data collection activities should be designed to facilitate benchmarking (e.g., by ensuring that comparable data are collected in different regions), and emphasis should be given to those aspects of performance for which data on past performance are especially sparse, such as stormwater runoff from transportation facilities and reliability of waste recycling processes.

USING PERFORMANCE MEASURES

The final result of assessment, the "bottom line," is a judgment that performance is adequate or good, or that it needs improvement. In this assessment, effectiveness, reliability, and cost interact with one another in complex ways, both functionally and in terms of how stakeholders and decision makers will make their judgments.
High construction or maintenance costs, constraints on program funding available to cover certain types of cost, high interest costs, or potentially adverse consequences for some stakeholders, to suggest a few examples, may lead decision makers to change their objectives or modify their priorities. The overall performance assessment will involve weighing and effectively trading off the various aspects of effectiveness, reliability, and cost.

In its visits to Baltimore, Portland, and the Twin Cities, the committee heard about situations in which performance assessment was used effectively or could have helped support public discussion. The committee selected one such case as a basis for illustrating how a complete performance assessment might be made. Table 4-5 summarizes this hypothetical performance assessment, prepared by the committee using a situation described in Minneapolis. The final assessment findings reflect the committee's views but are meant to be consistent with the conclusions drawn by community leaders, acting as stakeholders themselves, and representatives of commuters and other stakeholders who did not actually participate in the assessment.

The situation concerns evaluation of an existing highway planned initially to have six lanes but constructed with only four. Two lanes were eliminated because of public concerns expressed about increased air pollution, neighborhood disruption, and other local impacts anticipated to occur when the new road was developed. However, no restriction was placed on land development, which grew substantially along the route once the new highway was opened to traffic. New development increased traffic on the highway, and the resulting traffic congestion brought with it not only the pollution originally feared but also lost time for commuters and concerns that downtown economic activities risked being strangled.

Construction of a rail transit system has been proposed by some community members as a solution to the problem. Transportation studies made by local agencies indicate that such a transit system is unlikely to enhance overall system effectiveness, measured in such terms as average travel times, downtown travel, or energy consumption, because dispersed development patterns and the high cost of transit service seem likely to keep ridership low. The cost of developing and operating the transit system would be substantial, imposing high fiscal burdens on local and state governments. Overall performance of the Twin Cities' infrastructure seems unlikely to improve significantly with construction of rail transit, as proposed.7 However, community leaders agree that the threat to the downtown economy seems to warrant other action.
New technologies (e.g., intelligent vehicle-highway systems) may offer substantially improved traffic flow over the problem route and other parts of the regional network, reducing air pollution as well as relieving congestion.8 In the shorter term, traffic congestion might be relieved by diverting from the problem highway those vehicles bound for destinations outside the downtown area.

These conclusions represent the completion of a performance assessment for making a planning decision. Subsequent repetitions of the process could involve location studies and designs for highway construction. The state and local transportation agencies' public participation and environmental review requirements would substantially broaden the base of stakeholders involved in these future rounds of assessment.

The committee's example is necessarily brief and highly schematic. An actual performance assessment would include a significant depth of analysis and documentation. The level of analysis—and its cost—would depend on the nature of the decision to be made and the likely scrutiny to which the decision would be submitted. For example, a controversial situation subject to federal and state environmental reviews might warrant a great deal more analysis than a strategic planning discussion undertaken by community leaders following the start of a new political administration.

TABLE 4-5 Example of Performance Measurement* (stage or product of assessment, followed by information, measures, and findings)

Inventory

System and motivation for assessment: In-town urban interstate highway was planned for six lanes as part of regional system but constructed with four lanes because of community concerns for noise, air pollution, and neighborhood disruption. Subsequent urban growth in the corridor has generated high travel demand and long daily hours of congestion, contributing in turn to recurring episodes of ambient air pollution levels above federal standards.

Goals, vision:
• Enhance mobility in region
• Serve and facilitate continued economic growth in corridor
• Reduce air pollution levels by relieving congestion
• Make region a center for information and related electronics industries

Scale, condition, geographic distribution:
• Approximately 10 miles of four-lane urban expressway, in good condition
• Entry ramps to left lane are common

Scope and context:
• Federal funds are available for highway construction and rehabilitation
• Federal funds may be available for capital expenditures for transit system expansion or upgrading
• Segment serves primarily suburban commuters traveling to and through downtown area
• State has established regional transit authority that operates public transit system; state transportation department retains authority for highway construction and improvements

Effectiveness measures

Capacity/delivery of services:
• Average daily traffic on the road is approximately four times planned capacity
• Average daily rate of trips per capita in region is double what it was when the road was planned
• Transit ridership has declined in absolute terms

Quality of services:
• Each direction of segment operates at level-of-service "D" or below for approximately four hours each weekday
• Frequent accidents at entry ramps
• Air pollution emissions are high

Regulatory concerns:
• EPA requires transportation control strategy to reduce ambient levels of pollutants

Community concerns:
• Commuters losing hours of time, productivity
• Access to downtown businesses substantially reduced
• Downtown economic vitality is threatened

Reliability measures:
• System has failed, relative to standards set when road was planned and constructed
• Likelihood of future service improvements is low, unless action is taken

Cost measures:
• Road constructed with 90 percent federal funding
• Maintenance costs under current situation are not a problem for state

Assessment of performance: Threat to downtown vitality warrants action to relieve congestion, but substantial shift to transit seems unlikely. New construction/upgrading of roads to divert traffic not destined for downtown should be explored. Over longer term, develop "intelligent vehicle-highway systems" to improve flow on route.

* This example is based on the committee's observations in Minneapolis, Minnesota, but reflects the committee's views and findings only. Rows in this table correspond to columns in tables 4-1, 4-2, 4-3, and 4-4.

Regardless of the level of depth and detail, infrastructure systems are so complex that one can only infer that an observed change in a performance measure—for example, the congestion that motivated this example assessment—is a consequence of actions initiated by the system's planner or manager. External events may have caused or partially caused the change.
However, the inferred linkage of changes in performance to actions taken to implement decisions, for example, construction of the highway with only two lanes in each direction and no means to limit or direct urban growth, is a crucial step in assessment, marking the transition from one decision cycle to the next. In making this transition, decision makers are seeking to improve performance. Generally speaking, performance is clearly improved if one of the following two conditions is met:
• Some measures of effectiveness or reliability (or both) improve and none deteriorate, while costs decrease or do not change.
• Some measures of cost decrease and none increase, while no measures of effectiveness or reliability change.

Such conditions characterize a proposal that clearly is what decision analysts term "nondominated" (as discussed in Chapter 5), and the decision that improvement has occurred is straightforward. Even if neither of these conditions is met, however, performance may be judged to have improved if the community gives sufficiently greater weight to the measures that have improved compared with those that have not. For example, residents of some neighborhoods request installation of speed bumps on their local streets even if their taxes are raised to recover the public works expenditure. These residents prefer to sacrifice their riding quality to achieve the improved safety they attribute to reduced speed of other vehicles passing through the neighborhood.

NOTES

1 Agencies and researchers in Oregon, Pennsylvania, Louisiana, and Minnesota, to name only a few, have presented their work in national forums. Finland and the members of the European Union are also pursuing such work.

2 Public Works Management Practices, APWA Special Report #59, American Public Works Association, Chicago, August 1991.

3 They may, however, adjust route alignments or traffic signal timing because the consequent reductions in pollution emitted are more immediate and substantial.

4 "Confidence level" is a term used in statistics. A parameter that is known to have statistical variation (e.g., the strength of concrete) is estimated by testing samples and then computing from these tests a value for the parameter and the confidence one may have, on the basis of the tests, that the actual values of other (untested) samples are equal to or greater than the computed value.
5 "Safety factor" is generally defined as the ratio of the projected load at which failure would occur to the maximum anticipated load. A safety factor greater than 1.0 is considered safe. However, because of uncertainties in measurement and projection, common practice and sometimes building codes and other regulations may require that facilities be built and operated with safety factors of 1.5, 2.0, or higher.

6 "Load factor" is typically defined as the ratio of paid passengers to available seats on an aircraft, expressed as a percentage. An airline might hope to maintain its load factors greater than, say, 65 percent.

7 The committee did note, however, that its visit to Portland illustrated a case where a different conclusion was drawn by local decision makers committed to implementing land use, parking, and other incentives or restrictions aimed at increasing transit ridership and discouraging automobile usage for downtown travel. Some stakeholders in the Twin Cities area will undoubtedly continue to maintain interest in rail transit development.

8 The committee was told that the University of Minnesota, aided in part by funding from the U.S. Department of Transportation, is active in research in this area.
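The two "clearly improved" conditions stated in the text amount to what decision analysts call Pareto dominance. A minimal sketch of such a check follows; the encoding of every measure as a "higher is better" score, with costs negated, is an assumption made here for illustration, not the committee's method.

```python
def clearly_improved(before: dict, after: dict) -> bool:
    """Return True if 'after' dominates 'before': every measure (scored
    so that higher is better, e.g., costs stored negated) is at least as
    good, and at least one is strictly better."""
    keys = before.keys()
    no_worse = all(after[k] >= before[k] for k in keys)
    some_better = any(after[k] > before[k] for k in keys)
    return no_worse and some_better

# Effectiveness up, reliability and cost unchanged (cost negated):
before = {"effectiveness": 0.6, "reliability": 0.8, "neg_cost": -1.0}
after = {"effectiveness": 0.7, "reliability": 0.8, "neg_cost": -1.0}
print(clearly_improved(before, after))  # True: a nondominated improvement
```

When neither alternative dominates, the judgment falls back on stakeholder weighting of the individual measures, as the text describes.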