Measuring and Improving Infrastructure Performance (1996)

Chapter: Executive Summary

Suggested Citation:"Executive Summary." National Research Council. 1996. Measuring and Improving Infrastructure Performance. Washington, DC: The National Academies Press. doi: 10.17226/4929.

EXECUTIVE SUMMARY

The nation's infrastructure facilitates movement of people and goods, provides adequate safe water for drinking and other uses, provides energy where it is needed, removes wastes, and generally supports our economy and quality of life. Determining how well infrastructure is performing these tasks is essential to effective management of the assets infrastructure represents. Yet current practices for measuring performance are largely inadequate to this management task. Responsibility for these valuable assets is primarily a local matter, with some 80 percent of the annual investment in infrastructure coming from local and state government sources or private enterprises. Nevertheless, the federal government's influence on infrastructure development and management is substantial, exercised through many programs that provide funds for purchasing and construction, set standards, and otherwise seek to ensure the safety and efficacy of various parts of the nation's infrastructure. There currently is no integrated federal policy toward infrastructure as a whole. The Federal Infrastructure Strategy (FIS) is a three-year, interagency program directed and administered by the U.S. Army Corps of Engineers (USACE) to provide the substantive framework for determining whether such a policy is warranted and what its content might be.

SOURCE OF THIS STUDY

As a part of the FIS program, the USACE requested the National Research Council (NRC) to undertake a study on measuring and improving infrastructure performance. The NRC appointed the Committee on


Measuring and Improving Infrastructure Performance, which started its work in October 1993 and met five times during a period of about 10 months. To provide a practical background for its study and to explore how concepts of performance are used by decision makers, the committee visited three cities selected to represent situations in which performance measures might be used: Baltimore, Maryland; Portland, Oregon; and Minneapolis-St. Paul, Minnesota. During these visits, the committee met with government officials and other knowledgeable professionals in each area to discuss particular projects and the region's infrastructure more generally. This document is a report of the committee's work. Principal findings and recommendations are summarized in Tables ES-1 and ES-2 and on the following pages.

THE STUDY'S FOCUS AND LIMITS

The committee's point of departure was the work of the National Council on Public Works Improvement (NCPWI), embodied in the council's 1988 final report, Fragile Foundations. The committee's scope was limited from the study's start to the specific modes of infrastructure addressed in that report. For much of its discussion, the committee grouped these modes into four broad categories: (1) transportation, including highways, mass transit, and aviation; (2) water, including water resources and water supply; (3) wastewater (both sanitary sewage and stormwater runoff); and (4) municipal waste, including both solid and hazardous wastes. Other infrastructure modes, such as telecommunications, energy production and distribution, and parks and open space, inevitably entered the committee's discussion but are beyond the scope of this report. However, the committee sought to generalize its discussions and to deal with performance of infrastructure as an integrated, multifunctional system. Many of the principles and recommendations discussed here apply to all infrastructure modes as a single system.

Infrastructure is built and serves regions on many scales, but the committee focused on issues arising from transportation, water, and waste within urban regions. The organizational context of these issues is primarily local governments, multijurisdictional bodies (e.g., regional councils), and states. This study's systemwide approach, that is, looking across infrastructure modes (water, transportation, wastewater, solid wastes) to define performance in an urban region, runs counter to the typical institutional structure of infrastructure. This institutional structure now consists largely of organizations concerned with both programs and projects within a single mode. Critics cite this structure as an obstacle to improved performance of the nation's infrastructure as a whole because it deters effective thinking about the interactions and tradeoffs among the various modes.


TABLE ES-1 Summary of Principal Findings and Conclusions

Infrastructure Performance and its Measurement

1. Infrastructure constitutes valuable assets that provide a broad range of services at national, state, and local levels. Its performance is defined by the degree to which the system serves multilevel community objectives. Identifying these objectives and assessing and improving infrastructure performance occur through an essentially political process involving multiple stakeholders.

2. Performance measurement, a technical component of the broader task of performance assessment, is an essential step in effective decision making aimed at achieving improved performance of these valuable assets.

3. Despite the importance of measurement, current practices of measuring comprehensive system performance are generally inadequate. Most current measurement efforts are undertaken because they are mandated by federal or state governments or as an ad hoc response to a perceived problem or the demands of an impending short-term project.

4. No adequate, single measure of performance has been identified, nor should there be an expectation that one will emerge. Infrastructure systems are built and operated to meet basic but varied and complex social needs. Their performance must therefore be measured in the context of social objectives and the multiplicity of stakeholders who use and are affected by infrastructure systems.

5. Performance should be assessed on the basis of multiple measures chosen to reflect community objectives, which may conflict. Some performance measures are likely to be location- and situation-specific, but others have broad relevance. Performance benchmarks based on broad experience can be developed as helpful guides for decision makers.

6. The specific measures that communities use to characterize infrastructure performance may often be grouped into three broad categories: effectiveness, reliability, and cost. Each of these categories is itself multidimensional, and the specific measures used will depend on the location and nature of the problem to be decided.

Assessment Process

7. The performance assessment process by which objectives are defined, specific measures are specified, and conflicts among criteria are reconciled is crucial. It is through this process that community values are articulated and decisions made about infrastructure development and management.

8. Methodologies do exist for structuring decision making that involves multiple stakeholders and criteria, but experience is limited in applying these methodologies to infrastructure.

9. Performance assessment requires good data. Continuing, coordinated data collection and monitoring are needed to support benchmarking and performance assessment.

10. The subsystems of infrastructure—transportation, water, wastewater, hazardous and solid waste management, and others—exhibit both important physical interactions and relationships in budgeting and management. Effective performance management requires a broad systems perspective that encompasses these interactions and relationships. Most infrastructure institutions and analytical methodologies currently do not reflect this broad systems perspective.

11. The long-term and sometimes unintended consequences of infrastructure systems, whether beneficial or detrimental, frequently go far beyond the physical installations themselves. Community views of these consequences become a part of the assessment and decision-making process.


TABLE ES-2 Summary of Recommendations

1. Local agencies with responsibilities for infrastructure management should explicitly define a comprehensive set of performance measures and set aside funds sufficient to sustain an adequate performance measurement process. The measures selected should reflect the concerns of stakeholders about the important consequences of infrastructure systems and recognize interrelationships across infrastructure modes and jurisdictions. The committee's framework of effectiveness, reliability, and cost is a useful basis for establishing these measures.

2. While not every aspect of performance is quantifiable, attempts should be made to devise quantitative indicators of qualitative aspects of performance. Quantitative measures should then be used to develop benchmarks that policy makers responsible for assessing infrastructure performance can use for setting goals and comparing performance among systems, considering effectiveness, reliability, and costs (including actual expenditures as compared to budgets).

3. Recognizing that infrastructure performance cannot be managed if it cannot be measured, data should be collected on a continuing basis to enable long-term performance measurement and assessment.

 

a. Each region with infrastructure decision-making authority should establish a system for continuing data collection to give performance assessment a more quantitative basis and enable longer term performance monitoring. Metropolitan areas with basic databases and modeling tools already in place should seek to integrate information on separate infrastructure modes into a uniform and accessible system, so that existing data sets are documented in consistent ways, within the context of relevant national data collection activities (e.g., federal Department of Transportation or Environmental Protection Agency statistics).

b. Federal agencies should assure that national data sets (that is, those collected by or under the requirements of federal programs) are compatible (e.g., in geographic detail, time periods, and indexing), computerized, and made electronically accessible.

c. All such performance data collection should be designed to facilitate benchmarking.

d. New data collection activities should give priority to those functional areas where data currently are sparse (e.g., highway stormwater runoff characteristics, solid waste recycling reliability).

4. Responsible agencies should adopt infrastructure performance measurement and assessment as an ongoing process essential to effective decision making. The selected set of performance measures should be periodically reviewed and revised as needed to respond to changing objectives, budgetary constraints, and regulations.

5. Responsible agencies should undertake a critical self-assessment to determine the nature and extent of specific regulations, organizational relationships, jurisdictional limitations, customary practices, or other factors that may constitute impediments to adoption of the proposed infrastructure performance measurement framework and assessment process. Such a self-assessment could be conducted within the context of a specific infrastructure management problem or as a generic review, but it necessarily will involve time, money, and a concerted effort to motivate active community involvement with open, candid discussion. The assessment should conclude with explicit recommendations of institutional change that may be needed to enable a systemwide approach to management of infrastructure performance.

6. Federal infrastructure policy and regulations should be revised as needed to accommodate local decision-making processes and performance measurement frameworks within the context of valid national interests in local infrastructure performance. Federal policy effectiveness should be evaluated on the basis of its sensitivity to local variations in performance assessment.
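The benchmarking called for in Recommendation 2 can be sketched in code: a system's measured indicators are compared against values collected from peer systems. This is only an illustrative sketch; the indicator names and all figures below are hypothetical, not drawn from the report.

```python
# Hypothetical peer data for two quantitative indicators (invented values).
peer_data = {
    "pipe_leakage_pct":        [8.0, 12.0, 15.0, 9.0, 11.0],
    "o_and_m_cost_per_gallon": [0.21, 0.18, 0.25, 0.20, 0.23],
}

# Measured values for the system being assessed (also invented).
our_system = {"pipe_leakage_pct": 14.0, "o_and_m_cost_per_gallon": 0.19}

def benchmark(measure, value, peers):
    """Report where a measured value falls relative to the peer median."""
    ordered = sorted(peers)
    median = ordered[len(ordered) // 2]  # middle element of an odd-length list
    position = "above" if value > median else "at or below"
    return f"{measure}: {value} ({position} peer median {median})"

for measure, value in our_system.items():
    print(benchmark(measure, value, peer_data[measure]))
```

A fuller implementation would track benchmarks over time and normalize for system size and local conditions, as the recommendation's emphasis on comparability suggests.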


The purpose of measuring performance is to support those who must make decisions about developing, operating, and maintaining infrastructure. These individuals are typically elected officials and the senior technical administrators in a region, for example, public works directors and planning directors. This latter group must on the one hand advise elected officials and the public on infrastructure and on the other hand direct the development and operations of infrastructure facilities and services. Likely the primary users of the committee's work, these individuals must assess how well infrastructure is performing its expected tasks.

The committee's work provides a framework and process for measuring and assessing infrastructure performance that local decision makers and others concerned with development and management can use as a basis for discussion and action to enhance infrastructure's contribution to achieving their community's goals.

While the committee initially considered the premise that infrastructure performance measurement must occur within the existing institutional framework, its visits to Baltimore, Portland, and Minneapolis-St. Paul illustrated that institutional setting is crucial, that a variety of institutional structures are possible, and that changes can be made when change is warranted. The committee believes that in many areas institutional change may be needed over the longer term to permit the truly multijurisdictional and multimodal infrastructure management that will enable infrastructure systems to achieve their best performance.

DEFINING INFRASTRUCTURE PERFORMANCE

Generally, performance is the carrying out of a task or the fulfillment of some promise or claim. For infrastructure, this means providing or enabling movement of goods and people, clean water supplies, waste disposal, and a variety of other services that support other economic and social activities, a safe and healthful environment, and a sustainably high quality of life. Infrastructure is a means to other ends, and the effectiveness, efficiency, and reliability of its contribution to these other ends must ultimately be the measures of infrastructure performance.

Judging whether performance is good or bad, adequate or inadequate, depends on the community's objectives. These objectives are generally set locally but include state and federal elements and widely accepted standards of practice. Performance must ultimately be assessed by the people who own, build, operate, use, or are neighbors to that infrastructure. The committee found that there are few benchmarks or norms of performance that apply to infrastructure as a system, or even that apply comprehensively to all aspects of performance of any one type of infrastructure. More benchmarks are needed to give decision makers a broad basis for judgments about infrastructure performance.


While infrastructure is owned and operated by private enterprises as well as government agencies, its fundamental role makes infrastructure a public asset. Judgments about the adequacy of performance are typically made in a public setting. Many individuals and institutional and corporate entities make up the "community" that has a stake in infrastructure performance, and each stakeholder's view must be considered in assessing that performance. These stakeholders include providers of infrastructure services as well as the individuals, households, and businesses that use infrastructure and are exposed to infrastructure's impacts. Their perspectives may focus on the city or county, the state or province, the nation, or on watersheds, airsheds, neighborhoods, historic districts, and other naturally, socially, or economically defined areas. Reaching consensus can be difficult. Even when one person has clearly defined responsibility for making investment or operating decisions about some element of infrastructure, that person must be prepared for public scrutiny of his or her premises and conclusions.

The assessment process must ensure broad participation in making the judgment and determining the bases on which judgment is to be made. In short, infrastructure performance is defined by the community, and its measurement must start with the tasks the community wants its infrastructure to accomplish. This community, however, is inevitably broad and diverse, including regional and national as well as local perspectives.

Because of these many facets of infrastructure performance and its assessment, no single performance measure has yet been devised, nor is it likely that one can be. Infrastructure performance measurement must be multidimensional, reflecting the full range of the social objectives set for the infrastructure system. The committee nevertheless found that performance—the degree to which infrastructure provides the services that the community expects of that infrastructure—may be defined as a function of effectiveness, reliability, and cost. Infrastructure that reliably meets or exceeds community expectations, at an acceptably low cost, is performing well. The three principal dimensions of performance are each, in turn, complex and multifaceted, typically requiring several measures to indicate how well infrastructure is performing.
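The committee's insistence that performance stay multidimensional can be illustrated with a minimal data-structure sketch: measures are grouped under effectiveness, reliability, and cost and reported together, deliberately without being collapsed into a single score. All measure names and values here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class PerformanceProfile:
    """Groups named measures under the committee's three dimensions."""
    effectiveness: dict = field(default_factory=dict)  # service quality, coverage, ...
    reliability: dict = field(default_factory=dict)    # outages, contingency capacity, ...
    cost: dict = field(default_factory=dict)           # O&M spending, unit costs, ...

    def dimensions(self):
        # Report each dimension separately rather than computing one index.
        return {
            "effectiveness": self.effectiveness,
            "reliability": self.reliability,
            "cost": self.cost,
        }

# A hypothetical transit system's profile.
transit = PerformanceProfile(
    effectiveness={"on_time_pct": 91.0, "coverage_pct": 78.0},
    reliability={"breakdowns_per_10k_miles": 1.4},
    cost={"operating_cost_per_passenger": 2.60},
)
print(transit.dimensions())
```

Keeping the dimensions separate mirrors the finding that no single measure exists or should be expected: decision makers weigh the dimensions against community objectives rather than delegating that judgment to a formula.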

The challenge decision makers face in seeking to develop and manage infrastructure for high performance is one of applying money, land, energy, and other resources to deliver effective and reliable service to the broad community. These resources are used in planning, construction, operation, maintenance, and sometimes demolition of facilities; monitoring and regulating the safety and environmental consequences of these activities; and mitigating adverse impacts of infrastructure. The costs are incurred and paid at different times and places, by different agencies and groups (e.g., users, neighbors, taxpayers), and in nonmonetary and monetary terms. Decisions are made in an uncertain world, with limits to how accurately effectiveness and cost can be measured. Storms, accidents, and sudden failure of materials and equipment can drastically alter the relationship of cost and effectiveness. Assessing performance is a way of dealing with effectiveness, reliability, and costs in an orderly manner to enable decisions to be made about the infrastructure system.

ASSESSING PERFORMANCE AND DECISION MAKING

Figure ES-1 illustrates the committee's concept of a generic assessment process. The first step is to clearly identify who the stakeholders are in the decision-making situation that motivates the assessment. The level at which the decision is made (e.g., local or state) and the type of decision (e.g., planning new facilities, determining how to implement a new regulation) will have much to do with who these stakeholders are.

The next steps deal with clearly defining the infrastructure system of interest, the boundaries and context of the system and area served, the objectives and vision the community sets for the system, and constraints (e.g., budgets, interagency relationships, jurisdictional constraints) and regulations that may limit feasibility of actions. Table ES-3 summarizes the types of information that such an inventory of the infrastructure system is likely to include and presents examples of specific indicators.

This inventory involves the use of databases of the types typically maintained by municipal and regional planning agencies, departments of transportation, water utilities, and sewer authorities. Few areas have brought these typically distinct databases together into a comprehensive resource that will support effective performance assessment. The opportunities for assembling such comprehensive databases are growing as data increasingly are being stored in highly accessible, computer-based geographical information systems that provide a common framework for storage, retrieval, and analysis.
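The integration of separate per-mode databases into a uniform, accessible resource can be sketched as follows. The zone names and inventory records are invented for illustration; in practice the common key would be a geographic unit maintained in a geographic information system, as the text describes.

```python
# Hypothetical per-mode inventories, each keyed by geographic zone.
transportation = {"zone_A": {"lane_miles": 120, "bridges": 4}}
water_supply   = {"zone_A": {"main_miles": 85, "reservoirs": 1},
                  "zone_B": {"main_miles": 40, "reservoirs": 0}}

def integrate(**mode_inventories):
    """Combine separate per-mode inventories into one record per zone."""
    combined = {}
    for mode, inventory in mode_inventories.items():
        for zone, record in inventory.items():
            # setdefault creates the zone's record on first encounter,
            # so modes can be added independently.
            combined.setdefault(zone, {})[mode] = record
    return combined

regional = integrate(transportation=transportation, water_supply=water_supply)
# regional["zone_A"] now holds both modes' records under one key.
```

The design point is the shared key: once every mode's data are indexed the same way, cross-modal questions (the systemwide view the committee advocates) become simple queries rather than separate database projects.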

Selecting measures and measuring performance are central tasks in assessment and may sometimes involve a substantial amount of public discussion. The specific measures may have general use but should always be appropriate for the particular situation being assessed. Local values have overarching influence on the selection and on all other steps in the assessment process.

The assessment process will yield a multidimensional set of specific measurements (qualitative as well as quantitative) of key indicators of performance. These measurements may be used by decision makers to decide what should be done to address the problem or demand or to realize the vision that motivated assessment. Tables ES-4, ES-5, and ES-6 make up the committee's performance measurement framework. These tables illustrate


FIGURE ES-1 Performance assessment as a generic process.


TABLE ES-3 Framework and Measures of System Inventory*

Public Works Element, Type; Example Goals, Objectives

Scale, Condition, and Geographic Distribution

Scope and Context

Generic: all elements or types

• Enhance economic productivity, opportunity

• Improve public health, safety

• Protect, enhance environmental quality

• Provide jobs and economic stimulus

• Reduce income inequalities

• System size

• Condition

• System cost

• Technology

• Area of extent

• Political jurisdictions

• Formal institutions

• Informal, community structure

Examples for Major Classes

Transportation Systems

• Improve access

• Increase mobility

• Move goods efficiently

• Protect safety

• Reduce air pollution

• Increase construction spending

• Subsidize public transit operations

• System size

- Lane-miles, track-miles

- Number of bridges, airports

- Fleet size and mix

- Area covered, network configuration

- Runway length, terminal gates

• Condition

- Pavement cracking

- Bridge load capacity

- Track condition

• System cost

- Replacement cost (construction)

- Annual O&M expenditures

• Technology

- Fuel types

- Fleet age distribution

• Area of extent

- Natural barriers

- Airsheds, basins

• Political jurisdictions

- System ownership

- Pricing authority

- Funding and taxing arrangements

• Formal institutions

- Construction

- Operations

- Intermodal coordination

• Informal, community structure

- Ridership

- Advocacy groups (e.g., bicycle, pedestrian)

- Land developers

- Business groups

- Environmental resistance groups (e.g., airport noise)

- Neighborhood associations

Water Supply

• Provide adequate, reliable, sources of water

• Protect and improve public health

• Provide fire protection

• Enable and support landscaping, gardening, agriculture

• Provide recreation and environmental amenity

• Support biodiversity

• System size

- Miles of main, distributor

- Number of reservoirs, treatment plants

- Area piped

- Total storage capacity

• Condition

- Pipe leakage

- Reservoir percent of design capacity

- Design supply (treatment) capacity

• System cost

- Replacement cost (construction)

- Annual O&M expenditures

• Political jurisdictions

- System ownership

- Rate-setting, financing

- Consumers, service area

- Supply sources

• Formal institutions

- Utility

- Regulatory authorities

- Bonding, financing authorities

• Informal, community structure

- Land developers

- Major users (e.g., industries)

- Recreation interests


Public Works Element, Type; Example Goals, Objectives

Scale, Condition, and Geographic Distribution

Scope and Context

 

• Technology

- Treatment process

- Supply main materials

• Area of extent

- Drainage basins

- Catchment areas

- Recharge areas

 

Wastewater (Sewage and stormwater)

• Remove sanitary, industrial wastes

• Control, reduce health hazard

• Provide flood control, protection

• System size

- Miles of main, collector

- Number of treatment plants

- Area sewered

- Separate/combined system

• Condition

- Pipe leakage, infiltration

- Plant percent of design capacity

• System cost

- Replacement cost (construction)

- Annual O&M expenditures

- Average unit treatment cost

• Technology

- Treatment process

- Main materials

• Area of extent

- Drainage basins

- Recharge areas

- Ecosystems, biomes

• Political jurisdictions

- System ownership

- Service area

- Rate setting, financing

- Receiving waters

- Disposal sites

• Formal institutions

- Construction

- Operations

- Maintenance

- Regulatory authorities

• Informal, community structure

- Major producers (e.g., industrial concerns)

- Advocacy groups

- Treatment and disposal neighbors

- Recreational interests

Municipal Waste

• Remove wastes

• Reduce materials consumption

• Avoid exposure of low-income people to toxic materials

• System size

- Number of collection vehicles

- Number of collection, transfer, disposal sites, facilities

- Landfill design capacity

- Labor force

• Condition

- Incinerator age

- Landfill percent of design capacity

- Haul distance

• System cost

- Replacement cost (construction)

- Annual O&M expenditures

• Political jurisdictions

- Collection areas

- Disposal sites

- Transportation routes

• Formal institutions

- Municipal agencies

- Concessionaires, contractors

- Recycling and disposal firms

- Regulatory agencies

• Informal, community structure

- Major producers (e.g., industrial concerns)

- Advocacy groups

- Treatment and disposal neighbors


Public Works Element, Type; Example Goals, Objectives

Scale, Condition, and Geographic Distribution

Scope and Context

 

• Technology

- Disposal system and process

- Recycling processes

• Area of extent

- Ecosystems, biomes

- Airsheds

- Groundwater regimes

 

* Assessment may be made at local, regional, or national level; level will influence choice of appropriate inventory descriptors. Specific goals and objectives may vary substantially among particular projects and programs. Absence of a goal, objective, or descriptor does not necessarily imply that the missing item is not relevant to the type of infrastructure being considered. The four major classes shown are based on the work of the NCPWI; other infrastructure modes could be included. The table serves as an example and should be revised to suit specific applications of the framework.

the primary dimensions of performance and typical measures of effectiveness, reliability, and cost. The specific measures included in the tables represent a comprehensive but not necessarily complete listing that users might augment or narrow to suit their particular decision-making situation.

Effectiveness, or the ability of the system to provide the services the community expects, is generally described by (1) capacity and delivery of services, (2) quality of services delivered, (3) the system's compliance with regulatory concerns, and (4) the system's broad impact on the community. Each of these four primary dimensions of effectiveness encompasses an extensive and varied set of specific indicators and measures. The committee has suggested what some of those specific measures might be, but the actual measures used will depend on the specific context of the decisions to be made.

The final column, other community concerns or impacts, includes many items that fall outside the scope of the immediate requirements placed on the system, items that economists often refer to as "externalities." Over time there is a tendency for public values to shift, bringing these externalities into the mainstream of decision making. For example, clean air was taken for granted in the planning and management of highways until motor-vehicle pollution emissions were found to be an important contributor to declining air quality in urban regions. Rising public concerns led eventually to passage of federal legislation that imposed emissions restrictions on vehicles, set ambient air quality standards, and


TABLE ES-4 Framework and Measures of System Effectiveness*

Inventory (Table 4-1)

Dimensions, Indicators, and Example Measures of Effectiveness*

Public works element, type

Service Delivery/Capacity (engineering specifications, technical output, quantity delivered; supply and consumption)

Quality of Service to Users (customer acceptance, satisfaction, willingness to pay)

Regulatory Concerns

Community Concerns, Community-wide Impact, Externalities

Generic: all elements or types

• Output (per unit time, e.g., hour, day, month; peak, average, minimum)

• Technical productivity (output per unit input)

• Utilization (per unit time, e.g., hour, day, month; peak, average, minimum)

• Access/coverage (e.g., fraction of population served)

• Contingency

• Consumer Safety

• Satisfaction

• Availability on demand/congestion

• Environmental/ecological quality

• Service-related (e.g., pricing)

• External (e.g., Clean Air Act)

• Economic impact

• Public health & safety

• Social well-being (quality of life)

• Environmental residuals and byproducts (e.g., pollution and other NEPA impacts)

• National security

• Equity (i.e., distribution of costs, benefits, consequences)


Major Classes

Typical Example Measures of Effectiveness*

Transportation Systems

• Output

- Vehicle movements

- Seat-miles

- Route closures (hours), breakdowns

• Technical productivity

- Seat-miles per labor hour

- Seat-miles per route mile

- Operating cost per passenger

- Passenger-miles per seat mile

- Percent of bridges with weight restriction

- Percent of vehicles not meeting latest pollution, noise emission targets

- Average fuel consumption

• Utilization

- Mode split

- Trip purpose distribution

- Number of trips

- Passenger-miles

• Access/coverage

- By jurisdiction

- Special segments (e.g., mobility impaired)

• Contingency

- Emergency response capability

- Severe weather response experience

• Consumer Safety

- Accident events

- Value of losses

- Fatalities (total, per capita, or per annual user)

• Satisfaction

- Level of service

- Average speed

- Space per passenger

- On-time service

- Fare, cost to use

- Ride quality

• Availability on demand

- Average wait time

- Route closures (hours), breakdowns

• Environmental/ecological quality

- Air pollution emissions rates

- Road treatment chemical pollution (e.g., winter salt)

• Service-related

- Access to international routes

- Vehicle inspection effectiveness

- Speed limits

• External

- Noise control restrictions

- Air pollution emissions restrictions

- Fleet fuel efficiency standards

• Economic impact

- Person-hours of travel time

- Transport industry sales

- Access-based property value increases

• Public health & safety

- Pedestrian accident rate

• Social well-being

- Neighborhood disruption

- Construction/repair disruptions

• Residuals and other NEPA impacts

- Construction wastes

- Road salt in stormwater runoff

• National security

- Bridge heights adequate for military vehicles

• Equity

- Income versus mode split

- Service to minority communities


Major Classes

Typical Example Measures of Effectiveness*

Water Supply

• Output

- Gallons treated

- Storage volume

- Maximum head (pressure)

• Technical productivity

- Cost per gallon

- Leakage/loss rate

- Removable trace contaminants

• Utilization

- Per capita consumption

• Access/coverage

- Supply area

- Recycling volumes

• Contingency

- Fire delivery pressure

- Reserve storage

• Consumer safety

- Main breaks

- Fire protection service coverage

- Disease outbreak incidents

• Satisfaction

- Water taste, color

• Availability on demand

- Reservoir capacity versus demand (days)

- Service outages, restrictions

• Environmental/ecological quality

- Low flow restrictions

• Service-related

- Conformance to Safe Drinking Water Act standards

• External

- Heavy metals in sludge

- Water drawn from wetlands sources

• Economic impact

- Water use restrictions

• Public Health & Safety

- Worker accident rates

• Social well-being

- Houses with indoor plumbing

- Public drinking fountains

• Residuals & other NEPA impacts

- Treatment of chemical waste

- Construction/repair disruptions

• National security

- Protection from terrorist acts

• Equity

- Catchment area versus service area

- Low-income houses with indoor plumbing


Major Classes

Typical Example Measures of Effectiveness*

Wastewater (Sewage and stormwater)

• Output

- Gallons treated

- Stormwater storage volume

• Technical productivity

- Cost per gallon

- COD/BOD reduction

- Sludge load

- Percent of heavy metals, other toxics removed

- Sewered area per mile of main

• Utilization

- Per capita sewage treated

• Access/coverage

- Sewered area

• Contingency

- Capability to respond to toxics overload

- Overflow frequency (combined systems)

• Consumer Safety

- Main breaks

- Infiltration to water supply

- Compliance with NPDES requirements

• Satisfaction

- Back-up, retention, overflow during peak load periods

- Noxious odor

• Availability on demand

- Peak capacity limitations

- Overflow incidence

• Environmental/ecological quality

- Wetlands habitats threatened by runoff

- Receiving waters quality

• Service-related

- Effluent restrictions

• External

- Endangered Species Act restrictions

• Economic impact

- Wastewater treatment requirements (e.g., for factories)

• Public health & safety

- Disease outbreak incidents

- Worker accident rates

• Social well-being

- Homes on system

• Residuals & other NEPA impacts

- Construction/repair disruptions

- Landfill volumes

- Hazardous by-products

- Odor in plant area

• National security

• Equity

- Distribution of flooding by neighborhoods

- Disposal impact area versus service area


Major Classes

Typical Example Measures of Effectiveness*

Municipal Waste

• Output

- Tons collectable

- Special waste collection, disposal potential (e.g., medical, nuclear)

• Technical productivity

- Cost per ton

- Tons collected per truck

- Ton-miles hauled per ton

• Utilization

- Tons collected, per capita or per job

• Access/coverage

- Collection area

- Industrial customers

• Contingency

- Alternatives in event of pollution regulatory change

• Consumer safety

- Hazardous waste control

• Satisfaction

- Community perception of risk

- Storage space required per household, employee

• Availability on demand

- Disposal reserve capacity to accommodate growth

- Disposal restrictions

• Environmental/ecological quality

- Air, water pollution emissions

• Service-related

- Recycling requirements

- Incinerator moratoria

• External

- Clean Air Act restrictions

• Economic impact

- Disposal restrictions

- Resource recovery

- Landfill areas with restricted development potential

• Public health & safety

- Worker accident rates

• Social well-being

- Street cleanliness

• Residuals & other NEPA impacts

- Incinerator emissions

- Wetlands affected

- Population exposed to disposal site effects

• National security

• Equity

- People adjacent to disposal sites, haul routes

* Dimensions and measures may be added or deleted to match local objectives. Absence of measures does not necessarily imply that an indicator is less relevant to the type of infrastructure being considered. All measures may be determined at particular reliability levels. The table serves as an example and should be modified to suit specific projects and programs.
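
To make the generic indicators concrete, the arithmetic behind a few of them can be sketched. The figures below are hypothetical transit-system values, not drawn from the report:

```python
# Hypothetical transit-system figures (illustrative only, not from the report).
seat_miles = 1_200_000        # seat-miles supplied in a month
labor_hours = 40_000          # operator and maintenance labor hours
passenger_miles = 540_000     # passenger-miles actually traveled
population = 250_000          # residents in the service area
population_served = 195_000   # residents within walking distance of a stop

technical_productivity = seat_miles / labor_hours      # seat-miles per labor hour
utilization = passenger_miles / seat_miles             # load factor
access = population_served / population                # fraction of population served

print(f"Seat-miles per labor hour: {technical_productivity:.1f}")
print(f"Passenger-miles per seat-mile: {utilization:.2f}")
print(f"Fraction of population served: {access:.2f}")
```

The same ratios apply, with different units, to the water, wastewater, and municipal waste measures listed above.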


TABLE ES-5 Examples of Measures of System Reliability

Type of indicator, measure

Example measures

Deterministic

a. Engineering safety factors

b. Percentage contingency allowances

c. Risk class ratings

Statistical, probabilistic

d. Confidence limits

e. Conditional probabilities (Bayesian statistics)

f. Risk functions

Composite (typically deterministic indicators of statistical phenomena)

g. Demand peak indicators

h. Peak-to-capacity ratios

i. Return frequency (e.g., floods)

j. Fault-tree analysis
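
As an illustration of the statistical and composite indicators above, a short sketch computing confidence limits for mean demand and a peak-to-capacity ratio; the demand observations and rated capacity are invented:

```python
import statistics

# Hypothetical daily water demand observations (million gallons per day).
demand = [42.1, 45.3, 39.8, 47.6, 44.2, 41.5, 48.9, 43.7, 40.4, 46.0]
capacity = 55.0  # assumed rated treatment capacity, MGD

mean = statistics.mean(demand)
sd = statistics.stdev(demand)
n = len(demand)

# Approximate 95% confidence limits for mean demand (normal approximation).
half_width = 1.96 * sd / n ** 0.5
lo, hi = mean - half_width, mean + half_width

# Composite indicator: observed peak demand relative to capacity.
peak_to_capacity = max(demand) / capacity

print(f"Mean demand: {mean:.1f} MGD, 95% CI ({lo:.1f}, {hi:.1f})")
print(f"Peak-to-capacity ratio: {peak_to_capacity:.2f}")
```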

TABLE ES-6 Examples of Measures of System Cost

Basic indicator

Example measures

1. Investment, replacement, capital, or initial cost

a. Planning and design costs

b. Construction costs

c. Equity

d. Debt

2. Recurrent or O&M cost

a. Operations costs

b. Maintenance costs

c. Repair and replacement costs

d. Depreciation costs

e. Depletion costs

3. Timing and source

a. Timing of expenditure

b. Discount and interest rates

c. Exchange rates and restrictions (e.g., local versus foreign currency)

d. Sources of funds, by program (e.g., federal or state, taxing authority)

e. Service life

generally had significant implications for infrastructure planning, implementation, and evaluation.

Reliability, a recognition of the various uncertainties inherent in infrastructure's services, is the likelihood that infrastructure effectiveness will be maintained over an extended period of time or the probability that service will be available at least at specified levels throughout the design life of the infrastructure system. Each measure of effectiveness can in principle be characterized by statistical indicators that measure the system's reliability with respect to that particular element of effectiveness. However, other indicators such as engineering safety factors, anticipated frequencies of recurrence (e.g., the "100-year flood"), or bases for identifying peak load (e.g., the "100th busiest hour") may be equally useful as measures of reliability.
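
The return-frequency idea can be made concrete: assuming independent years, the chance that a T-year event occurs at least once during a facility's design life follows directly from the annual probability 1/T. A brief sketch with illustrative numbers:

```python
def exceedance_probability(return_period_years: float, design_life_years: int) -> float:
    """Probability that an event with the given return period occurs
    at least once during the design life (independent years assumed)."""
    annual_p = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_p) ** design_life_years

# A "100-year flood" is far from a once-in-a-lifetime event for a
# facility with a 50-year design life: roughly a 39% chance.
p = exceedance_probability(100, 50)
print(f"{p:.3f}")
```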

The basic elements of cost are initial construction or replacement cost (also called investment cost) and the recurring expenditures for operations and maintenance that will be required throughout the facility's or system's service life. While total costs (measured in dollars) are always important, the questions of when money is spent, by whom, and from what budgets often have a great impact on the decisions that are ultimately made.
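
The interaction of initial and recurring costs can be sketched as a discounted life-cycle comparison; the two designs, their costs, and the 5 percent discount rate below are hypothetical:

```python
def present_value(initial_cost: float, annual_om: float,
                  service_life: int, discount_rate: float) -> float:
    """Life-cycle cost: initial outlay plus discounted O&M over the service life."""
    pv_om = sum(annual_om / (1 + discount_rate) ** t
                for t in range(1, service_life + 1))
    return initial_cost + pv_om

# Two hypothetical designs for the same facility, 30-year life, 5% discount rate:
cheap_build = present_value(10_000_000, 900_000, 30, 0.05)    # low capital, high O&M
durable_build = present_value(14_000_000, 400_000, 30, 0.05)  # high capital, low O&M
print(f"${cheap_build:,.0f} vs ${durable_build:,.0f}")
```

In this invented case the costlier initial design has the lower life-cycle cost, which is exactly the kind of tradeoff that timing and budget source can obscure.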

Infrastructure systems evolve through an ongoing cycle of planning, implementation (e.g., construction or rule-making), and evaluation of in-service operations. Performance measurement is an important aid to decision making at each of these three stages, but the measures may differ from one stage to the next.

Different institutional entities interact in a number of ways to influence decisions about infrastructure. Government bodies may enact laws and impose regulatory standards or planning and coordination requirements. Private entities may impose such requirements and standards as well, for example, when banks or insurance companies insist that borrowers meet certain conditions before financing is provided for infrastructure. Entities that control money wield great power in all stages of decision making. Negotiation among stakeholders often is the decisive final basis for decision. In all these cases, performance assessment can provide an orderly and ultimately defensible basis for decision making.

IMPROVING INFRASTRUCTURE PERFORMANCE

Recognizing the multiple dimensions of performance and the different points of view of stakeholders makes it clear that there is seldom a single, optimal solution for infrastructure problems. Improving infrastructure performance involves finding solutions that are the best that stakeholders can agree on at the time. There are technical and judgmental challenges in presenting the feasible tradeoffs among various aspects of performance, and techniques have been developed to help meet these challenges. Multiple criteria decision making, risk analysis, and discounted cash flow analysis are examples of such techniques. A key to the successful application of a multiple objective analysis lies in early and frequent involvement of all stakeholders.
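
A weighted-sum score is the simplest form of multiple criteria decision making; the alternatives, weights, and scores below are invented stakeholder judgments, shown only to illustrate the mechanics:

```python
# Illustrative weights agreed among stakeholders (must sum to 1.0).
criteria_weights = {"effectiveness": 0.5, "reliability": 0.3, "cost": 0.2}

# Scores on a 0-10 scale; for cost, a higher score means a cheaper alternative.
alternatives = {
    "rehabilitate existing line": {"effectiveness": 6, "reliability": 7, "cost": 9},
    "build new line":             {"effectiveness": 9, "reliability": 8, "cost": 3},
    "demand management only":     {"effectiveness": 4, "reliability": 6, "cost": 10},
}

def weighted_score(scores, weights):
    return sum(weights[c] * scores[c] for c in weights)

ranked = sorted(alternatives.items(),
                key=lambda kv: weighted_score(kv[1], criteria_weights),
                reverse=True)
for name, scores in ranked:
    print(f"{name}: {weighted_score(scores, criteria_weights):.1f}")
```

Changing the weights reorders the alternatives, which is why early stakeholder agreement on the criteria matters more than the arithmetic itself.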

Inevitably and appropriately, assessing performance adequacy is a matter of value judgments. The procedures used to reach a judgment will have ethical implications, which may become significant for important decisions. Developers of new infrastructure and regulatory agencies may arouse public resistance if these implications are not effectively addressed.

While different levels of government may have well-defined roles in the planning, development, operation, maintenance, and financing of urban infrastructure systems, the systems themselves do not respect jurisdictional boundaries. Similarly, there are a variety of issues that create interrelationships across infrastructure modes that can only be addressed through cooperation among the agencies responsible for each mode (e.g., transportation, water, wastewater, and solid waste). Improving infrastructure performance will require significant cooperation across jurisdictions and across agencies.

In many areas of the United States, regional agencies, special-purpose authorities and districts, joint power agreements, and other voluntary or legislatively defined arrangements have been used to provide for regional and cooperative approaches. In addition, federal and state legislation for funding infrastructure often requires multijurisdictional cooperation and involvement, as well as broad public involvement, as a condition for funding eligibility. The strength of these arrangements and the degree to which regional approaches are followed and supported vary widely from area to area. As improvements in infrastructure system performance are sought, improving multijurisdictional cooperation is likely to be crucial.

Increasingly powerful and cost-effective computer-based forecasting and simulation methods and new technology for measuring and monitoring system conditions have made more sophisticated approaches to assessing system performance widely available. Remote sensing, real-time monitoring, and network analysis and simulation models provide powerful new capabilities for measuring systemwide conditions and evaluating system changes. These tools will support more meaningful multijurisdictional and multimodal cooperation.

Despite the availability of such new tools, there remain many impediments to infrastructure performance measurement and management. Practitioners and researchers should work together to make further improvements in decision-making methods with multiple stakeholders, performance measures, and functional modes of infrastructure working together as a system. Data collection and management and benchmarking will continue to be needed to build a firm basis for achieving high performance from the nation's infrastructure.
