Measuring and Improving Infrastructure Performance

2 INFRASTRUCTURE PERFORMANCE AND ITS MEASUREMENT

The word "performance" is widely used in many contexts. When applied to infrastructure, the word generally is understood to mean supplying clean water, moving people and goods from one point to another, or removing wastes, but judging infrastructure performance is a complex matter. What are the characteristics of "clean" water? Suppose a water system delivers water that is accepted by the community as "clean," but the supply is less than consumers would like to have at certain times of day. Suppose again that the system can be adjusted to increase volumes, but the cost is high. What if developing new supplies means damming a stream to build a reservoir? Facing such questions and contingencies, providers, users, owners, and neighbors of the facilities and services of infrastructure typically differ—often widely—in their views of the relative importance of any single aspect of infrastructure.

As a multifunctional system, infrastructure provides a range of specific services that differ substantially from one mode to another (e.g., transportation, wastewater management). Although costs, social and economic benefits, reliability, environmental consequences, and other factors are widely recognized as important aspects, there is no single generally accepted list, framework, or method for comprehensively describing infrastructure performance. In developing its 1988 report, the NCPWI reviewed "various proxy measures for factors that influence the demand for and supply of public works services..." but found that "none of the individual measures... gives a clear or convincing picture of the state of the nation's infrastructure
because they measure only certain aspects of demand or supply" (NCPWI, 1988). The NCPWI then commissioned new studies to undertake "an assessment of the performance of the nation's infrastructure," which measured performance in terms of "four measures: physical assets, product delivery, quality of service, and cost-effectiveness." Table 2-1 presents "illustrative measures" the NCPWI cited for physical assets, product delivery, and quality of service. The NCPWI report only hints, however, at a clear definition for the term "performance," saying simply that "demand for and supply of public works services jointly determine performance levels and the quality of services provided" (NCPWI, 1988).

A necessary early step in this study therefore was adopting an explicit definition of performance. The committee agreed that no single indicator or index is likely to be a sufficient practical measure of infrastructure performance. Table 2-1 thus became the point of departure for the committee's efforts, and in key aspects the committee diverged substantially from the NCPWI's earlier work.

THE BASIC CONCEPT OF PERFORMANCE

If "performance" is, as a dictionary defines it, the execution of a task or fulfillment of a promise or claim, then infrastructure performance is the accomplishment of tasks set for the system or its parts by the society that builds, operates, uses, or is neighbor to that infrastructure. In short, the bases for measuring infrastructure performance are defined by the broad community. As has already been noted, this community includes national- and state-level as well as local perspectives. As a consequence, there generally may be many measures of performance, and they may vary from place to place. The tasks the community wants its infrastructure to accomplish initially have to do with moving goods and people or providing clean water, but society sets broader tasks as well.
Infrastructure provides jobs to the people who construct, operate, and maintain its facilities and services. By providing more or better services in some regions or to some social groups, infrastructure fosters differential patterns of income, economic opportunity, and growth. As a market and test bed for new technologies, infrastructure enhances or retards technological innovation and the resulting growth of economic productivity. The public objection that its facilities sometimes engender is evidence that infrastructure is failing to meet social, cultural, or aesthetic purposes. The effectiveness of infrastructure as a public investment serving these broader ends also is an essential aspect of infrastructure performance.1

It would be tempting to suppose that a simple indicator of infrastructure performance could be devised, a single index of how well the system
TABLE 2-1 Illustrative Measures of Infrastructure Performance, as presented by the National Council on Public Works Improvement (Source: NCPWI, 1988)

Highways
  Physical Assets: Lane-miles; Number of bridges; Vehicle registration; Fleet size
  Service Delivery: Passenger miles; Vehicle miles; Ton-miles
  Quality of Service to Users: Congestion or travel time; Pavement condition; Volume/Capacity ratio; Accident rates; Population with easy access to freeways

Airports
  Physical Assets: Number of aircraft; Commercial seat-miles; Number and type of airports
  Service Delivery: Passenger miles; Enplanements; Aircraft movements
  Quality of Service to Users: Number and length of delays; Accident rates; Near miss rates; Population with easy access

Transit
  Physical Assets: Number of buses; Miles of heavy rail
  Service Delivery: Subway seat-miles; Bus miles; Passenger miles; Percent of work trips; Transit trips
  Quality of Service to Users: Average delays; Breakdown frequency; Population with easy access; Elderly/handicapped access; Crowding: passenger miles per seat-mile

Water Supply
  Physical Assets: Water production capacity; Number of water facilities; Miles of water main; Reserve capacity
  Service Delivery: Finished water production; Fraction of population served
  Quality of Service to Users: Compliance with MCLs; Water shortages; Rate of water main breaks; Incidence of waterborne disease; Finished water purity; Loss ratios

Wastewater Treatment
  Physical Assets: Capacity (mgd); Number of plants; Miles of sewer; Reserve capacity
  Service Delivery: Volume treated; Fraction of population served; Infiltration/inflow
  Quality of Service to Users: Compliance rate; Compliance with designated stream uses (local); Sewage treatment plant downtime; Sewer moratoria
Water Resources
  Physical Assets: Number of ports, waterways; Reservoir storage capacity; Number of dams; Miles of levees, dikes
  Service Delivery: Cargo ton-miles; Recreation days; Flood-protected acreage; Irrigated acreage; kWh hydropower produced
  Quality of Service to Users: Shipping delays; Dam failure rates; Power loss rate; Value of irrigated agricultural product; Value of flood damages averted

Solid Waste
  Physical Assets: Landfill capacity; Incinerator capacity; Number of solid waste trucks
  Service Delivery: Tons of trash collected; Tons landfilled; Tons incinerated
  Quality of Service to Users: Collection service interruptions; Facility downtime; Rate of groundwater contamination

is meeting objectives. However, for the many reasons already cited the committee found that no adequate, single measure of performance has been identified, nor should there be an expectation that one will emerge. Infrastructure systems are built and operated to meet basic social needs, but those needs are varied and complex. Many people, acting individually and in groups, will have objectives for what infrastructure should do. These stakeholders, at local, state, national, and even international levels, will make their own judgments about whether their objectives are being met. Infrastructure performance must be measured in the context of social objectives and the multiplicity of stakeholders who use and are affected by the infrastructure system.

Making infrastructure effective in achieving its objectives requires money, land, energy, and other resources. These costs are incurred in planning, construction, operation, maintenance, and sometimes demolition of facilities. There are costs of using the facilities to provide services, of monitoring and regulating the safety and environmental consequences of these activities, and of mitigating adverse impacts of infrastructure.
These costs are incurred and paid at different times and places, by different agencies and groups (e.g., users, neighbors, taxpayers), and in nonmonetary and monetary terms. The relationship of these various costs to infrastructure's effectiveness in achieving its tasks is central to the definition of performance.

This relationship of effectiveness and costs exists in an uncertain world. In the best of times, there are limits to the degree to which these relationships can be accurately measured and related to one another. In the worst of times, storms, accidents, and sudden failures of materials and equipment drastically alter these relationships. Long gestation periods and service lives mean that costs of facilities may change and levels of usage may differ dramatically from early expectations. Nevertheless, despite the general uncertainty that underlies infrastructure's ability to provide its services, society expects reliable service. Reliability—the likelihood that infrastructure effectiveness will be maintained over an extended period of time—is another component of performance.

Infrastructure performance is the degree to which infrastructure provides the services that the community expects of that infrastructure, and communities may choose to measure performance in terms of specific indicators reflecting their own objectives. The committee concluded that these indicators generally fall into three broad categories, measuring performance as a function of effectiveness, reliability, and cost. Infrastructure that reliably meets or exceeds broad community expectations, at an acceptably low cost, is performing well. Indicators of these three principal dimensions of performance are considered in detail in Chapter 5.

PERFORMANCE COMPARED WITH OTHER CONCEPTS: NEED, DEMAND, AND BENEFITS

As the committee defines it, performance is related to other concepts used in infrastructure management and decision making. One such concept is "need." The term and its underlying engineering concepts appear widely in public works policy analysis, especially as a basis for determining appropriate levels and allocation of state highway construction monies. A congressional advisory committee defined need "in terms of the investment required to construct, reconstruct, rehabilitate, or repair capital facilities so they may provide a desired level of service, given expected patterns of growth and development" (U.S. Congress Joint Economic Committee, 1984).
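The reliability dimension described above—the likelihood that infrastructure effectiveness will be maintained over an extended period—lends itself to a simple quantitative sketch. The following is illustrative only; the function name, the one-year observation period, and the outage figures are hypothetical, not drawn from the report.

```python
# Illustrative only: a simple reliability indicator for an infrastructure
# service, computed as the fraction of an observation period during which
# service was maintained. The outage records below are hypothetical.

def service_reliability(period_hours: float, outage_hours: list[float]) -> float:
    """Fraction of the observation period during which service was available."""
    downtime = sum(outage_hours)
    if not 0 <= downtime <= period_hours:
        raise ValueError("total outage time must fall within the observation period")
    return 1.0 - downtime / period_hours

# One year of operation with three interruptions totaling 26 hours.
year_hours = 365 * 24
outages = [2.0, 4.0, 20.0]
print(f"reliability = {service_reliability(year_hours, outages):.4f}")
```

A community might track such a figure year over year; a decline would signal that the reliability component of performance is slipping even if effectiveness and cost appear unchanged.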
If forecasts of future highway usage show that the present system of highways is likely to become very congested, the "need" for new highway capacity is inferred and becomes the basis for planning new construction. Service standards that define the "desired level of service" are an important determinant of need. The advisory committee concluded that a clear understanding of need, and of the influence of standards on needs assessment, was lacking on a national scale. It found infrastructure investment needs projections in some states to be "quite speculative," sometimes representing little more than "wish lists" of the agencies responsible for construction. The advisory committee recommended, among other things, a study of the economic, social, and environmental relevance of the diverse standards governing the nation's infrastructure construction.
NCPWI noted the shortcomings of engineering "need." Because the link between service standards and costs is obscured when need is calculated, the NCPWI concluded that the concept is a faulty basis for decision making. The study committee agreed with that assessment.

The NCPWI then considered whether the economist's concept of "demand," useful in describing consumers' behavior, applies well to infrastructure. "Demand" reflects the relationship between levels of service and the price that recipients of that service must pay. As the price gets higher, the demand for a particular level of service—that is, the number of people willing to pay—generally declines. Demand may potentially be greater than available supply when prices are below what people are able and willing to pay. While many of infrastructure's services might be priced as though they were being offered in an open market, such pricing rarely occurs. Failure to charge for the use of clean air and water (and other so-called "free goods"), inability to restrict access to services, giving some users a "free ride," and the use of general taxes rather than user fees to finance facility construction and operations are among the many factors that distort the relationships between prices and levels of service.2 For such reasons, the NCPWI concluded that performance is determined jointly by demand and supply. The study committee endorses that conclusion.

The committee's definition of performance is most closely allied with principles of cost-effectiveness analysis. As is the case with cost-effectiveness analysis, the scope of performance assessment as defined in this study is limited to the objectives set for the system in question, that is, the tasks that infrastructure is to perform. Infrastructure may have other benefits or adverse impacts that go beyond the immediate concerns of transportation, water supply, or waste removal.
These consequences are often long term, sometimes unintended, and frequently extend far beyond infrastructure's facilities. Urban highways are said by some people to have been responsible for urban sprawl and weakening of the sense of community needed to sustain older residential areas. Extensions of trunk sewers and water supplies are similarly credited with enabling suburban growth in previously undeveloped areas and with destruction of wildlife habitat. The committee found that such impacts become concerns in performance assessment when community expectations recognize them as results to be sought or avoided. They then become part of the performance assessment process and subsequent decision making. For example, federal legislation (e.g., the Clean Air Act) mandates that transportation systems reduce their emissions of carbon monoxide and other air pollutants. Passage of that law effectively converted an often neglected environmental impact into a major component of performance. Unpolluted air, formerly an economic "externality," became a measure of how well the infrastructure is doing its
job. Similarly, federal clean water requirements have added dramatically to the number of pollutants to be considered in determining whether a water system's performance is adequate.

THE VARIETY OF STAKEHOLDERS

The committee's concept of performance depends on the composition of the "community" associated with the infrastructure. Many individuals and entities have a stake in infrastructure's performance. At a minimum, there is the distinction between providers of infrastructure services and users. Providers include individuals, private firms, government agencies, and regulated or other public-private entities that own, design, build, operate, maintain, and deliver infrastructure's services. Users are individuals or corporate entities. Sometimes the distinction is difficult to make or depends on context. For example, the driver of a transit bus and the agency or company that operates the bus fleet are users of the road at the same time that they are providers of services to people seeking transportation.

Those who are not providers or are not directly served by infrastructure but who nonetheless have a stake in its performance may be termed "non-users." All residents of a metropolitan area, for example, are exposed to the air pollution originating from highway vehicles. The owners and drivers of those vehicles are exposed as well but may have a different perspective on pollution control strategies than would be held by their transit-riding neighbors.

Then there is the distinction of levels at which infrastructure is viewed, from the individual or household to the national or international scale. Political entities, for example, city or county, state, province, or nation, can serve as a convenient designation for an increasingly broad perspective, and for some aspects of infrastructure these entities have functional significance.
For example, decisions about highway construction and electric power regulation are made largely at state levels, while water supplies and solid waste processing are primarily the concern of local governments. National and international concerns arise as well, for example, when state or local actions restrict interstate commerce, violate national standards, or influence activities covered by treaty.

Regions defined on other bases, however, have importance for infrastructure that at least equals and often exceeds that of political divisions. Metropolitan areas, for example, are identified by the spread of their populations across the land, influenced but seldom limited by political boundaries. River and stream drainage basins are the natural bases for thinking about wastewater management. Neighborhoods, historic districts, and other socially or economically defined areas may also have a stake in infrastructure performance.
DIMENSIONS OF EFFECTIVENESS

The several objectives that stakeholders set for the infrastructure system each will have one or more distinct dimensions or elements. For example, an objective of transportation infrastructure may be to facilitate mobility in an area. Movement of people (e.g., going to work or school) and goods (e.g., deliveries to homes and stores in the area) are dimensions of how well the broader objective is being met. Each such dimension3 should be distinguished as a single aspect of effectiveness (and hence of performance as well) that can be discussed and measured with minimal reference to other aspects, for example, traffic congestion on a highway versus the stormwater runoff from that highway. In principle the links between each objective and one or more dimensions of effectiveness should be readily apparent and can be visualized as a graph such as that illustrated in Figure 2-1.

Each dimension will in turn have associated with it one or more indicators or measures of effectiveness—signs, symbols, or statistics (typically numerical) that people understand to convey information about how well infrastructure is accomplishing its tasks. These measures may be based on some generally used scale (e.g., volume of water or traffic) or relative to a benchmark (e.g., observed throughput as a fraction of theoretical maximum throughput).

FIGURE 2-1 Dimensions of effectiveness link to objectives infrastructure is to achieve.
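The two styles of measure just described—an absolute scale versus a benchmark-relative ratio—can be sketched with the highway volume/capacity ratio that appears in Table 2-1. The traffic counts and capacity below are hypothetical, chosen only to illustrate the computation.

```python
# Illustrative only: an absolute effectiveness measure (observed vehicles per
# hour) versus a benchmark-relative measure (observed volume as a fraction of
# design capacity, the V/C ratio used in highway performance assessment).

def volume_capacity_ratio(observed_vph: float, capacity_vph: float) -> float:
    """Observed traffic volume as a fraction of the segment's design capacity."""
    if capacity_vph <= 0:
        raise ValueError("capacity must be positive")
    return observed_vph / capacity_vph

observed = 1800.0   # vehicles per hour, hypothetical peak-hour count
capacity = 2200.0   # vehicles per hour, hypothetical design capacity
vc = volume_capacity_ratio(observed, capacity)
print(f"V/C ratio = {vc:.2f}")
```

The absolute count (1,800 vehicles per hour) says little by itself; the ratio places the same observation against a benchmark, and values approaching or exceeding 1.0 signal congestion.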
Because the objectives set for infrastructure may change from time to time and place to place, the dimensions of effectiveness may change as well. In medieval Europe, for example, rubbish and other wastes were often dumped just outside the city with an expectation only that the height of the mound should not enable attackers to easily scale the city's walls. Today other dimensions of infrastructure effectiveness are important, for example, when distant incinerators and landfills are expected to accommodate the wastes but not emit noxious fumes or infiltrate groundwater.

As discussed in Chapter 3, the process of engaging stakeholders in selecting a comprehensive, appropriate, and operational set of measures of infrastructure effectiveness is for many purposes the most important task in performance measurement. Understanding the logical relationship among objectives, dimensions, and measures of effectiveness is helpful in judging why one or another measure may be appropriate, but it is the measures alone that people will use to assess effectiveness and performance. The committee recommends the framework of measures presented in Chapter 4.

DETERMINING WHETHER PERFORMANCE IS "GOOD"

Because of the multitude of measures describing performance and the different points of view of stakeholders, judging whether performance in a particular situation is "good" or "adequate" may not be easy. Issues of scale and aggregation influence the assessment. For example, infrastructure is expected to provide its various services reliably for long periods of time, but there is always a chance that service will be interrupted. Interruptions sometimes occur due to structural failures, unusually high usage, required maintenance, or other causes, but a certain degree of redundancy and flexibility in the system can allow performance to remain satisfactory, at least when viewed on a broad scale.
The people directly exposed to local disruptions, however, are likely to be less than fully satisfied, even if they acknowledge that some interruptions of service are in principle acceptable. Similarly, infrastructure services for any one user may be disruptive to the services others receive. In an airport passenger terminal, for example, the arrival or departure of each flight potentially will interfere with the flow of passengers and baggage of other flights. Each airline and passenger served seeks good quality service but may suffer delays, inconvenience, or monetary costs because others seek service as well. Performance of the terminal as a whole will generally differ from what the individual user experiences.

The committee noted that many people agree that infrastructure improvements have an impact on economic development and that disparate levels of investment in infrastructure can cause disparate rates or levels of development in cities and communities. Communities accept that without adequate service they will suffer in comparison (or competition) with those that have better
infrastructure. However, because infrastructure development typically draws on broad sources of funding, central issues in many decisions about infrastructure relate to the questions of who benefits and who pays. These questions arise over the immediate and longer terms and within the jurisdictions where facilities are located and managed as opposed to the broader region where the infrastructure's impacts are felt. The questions concern both intermodal (e.g., water and transport) and intermedia (e.g., air or water pollution) interactions. Many of the resources that infrastructure uses or influences—for example, air, water, open space—are traditionally thought of as what economists term free goods, for which there is no distinct cost.4 Concerns about environmental impacts and limitations on consumable resources have motivated increasing interest in establishing the bases for valuing these free goods, and these values shift the performance assessment even when they are not included as measures of performance.5

The committee also noted that despite the agreement on infrastructure's importance, the judgment of what levels of performance are "good" or "appropriate" may be defined somewhat differently within the context of the specific institutional, technical, social, political, and economic makeup of a region. Sometimes decisions are based primarily on whether federal funds are available. In particular, political jurisdictions or single-mode institutions (e.g., departments of transportation, power authorities) with adequate funds can develop projects while others cannot, regardless of whether arguments might have been made for different priorities at regional or national levels. Some committee members observed that one result may be the development of "excess" capacity in parts of the infrastructure system.
Infrastructure users may experience this "excess" capacity as a high level of service, while the analyst might conclude that users are being effectively subsidized to use the infrastructure's services at charges lower than full cost. This may not be "good" performance.

The committee observed the distinction often made between infrastructure services that "must" be provided and what could be delivered if one chose to commit the resources. With water supply, for example, a community may have water that is basically healthful and meets requirements of the Clean Water Act. Nevertheless, some people may not like the taste or for other reasons choose to purchase bottled water. The choice is available to those who can afford the higher cost but does not indicate whether the system's performance is or is not "good."

BASES FOR JUDGING GOOD PERFORMANCE

The committee observed that when such judgments are to be made there is potentially some tension between public perception and opinion on the one hand and infrastructure professionals acting as experts on the
other. There is an analogy in public health: inoculations and other preventive actions are available to protect against a variety of conditions, but people often seem unwilling to be inoculated. Sometimes education enhances willingness, and sometimes action is required by statute. A balance is struck in public policy among the various costs and risks as they are perceived by the experts and the public. Over time this balance can and does change: public issues evolve and attitudes shift; new information becomes available and technology advances. Such changes have potentially strong impact on what services infrastructure is expected to provide. A system that seemed optimal when it was designed and implemented may become obsolete. Actions that one generation thought were a good idea may be seen differently by a new generation.6

The committee found generally that there are few benchmarks or norms of "good" performance that apply to infrastructure as a system, or even that apply comprehensively to all aspects of performance of any one type of infrastructure. Sometimes decisions are based on nothing more than whether the public has complained. Decision makers, however, often seek guidance as to what are acceptable and achievable levels of performance in particular contexts. They seek this guidance for decisions that span a wide range of scope and detail. At one level, decisions are made about whether to make major investments in infrastructure development and operations, for example, in constructing a new incinerator with new combustion and air pollution emissions technology. At another level, the decisions concern design and operations details such as the reconstruction of street pavement or the scheduling of trash collections.
Because infrastructure is intended at the higher level to support economic and social activity without adverse environmental consequences, performance ultimately has something to do with the outcomes from its use, for example, regional economic growth and quality of life. However, attempting to quantify the link between infrastructure investments and operations on the one hand and these outcomes on the other hand is difficult, uncertain, and likely to be controversial. As discussed in chapters 3 and 4, measuring the output and consumption of services—for example, vehicle-miles of travel, gallons of water delivered—without reference to subsequent use of those services and ultimate outcomes does not really measure performance but is an essential step.

The committee found that federal standards and standards-setting procedures are influential in motivating measurement but may not foster "good" performance. Because problems are not the same everywhere, cities are sometimes forced to incur costs meeting standards that are locally less of a concern than other aspects of performance. The Safe Drinking
Water Act, for example, requires the Environmental Protection Agency (EPA) to issue regulations for a substantial number of chemical contaminants (83 by recent count, with additional contaminants to be added every 5 years). The EPA has actually written regulations for perhaps 30 of those contaminants. Yet in early 1993, newspapers around the country reported on large numbers of illnesses caused by cryptosporidium contamination of Milwaukee's treatment facilities, a problem not covered by federal regulation. Similarly, the Federal Highway Administration sought in the 1970s to impose a 55-miles-per-hour (mph) speed limit on all interstate highways, citing highway safety statistics and automobile energy consumption benefits as the basis for setting this standard. The uniform maximum speed limit proved unpopular and more difficult to enforce in some areas of the country than others. When the states were given the authority to reestablish their own speed limits, many chose to return to the nominal 70 mph they had previously adopted.

In view of such experience, the committee concluded that performance overall should be assessed on the basis of multiple measures chosen to reflect the community's objectives. Some performance measures are likely to be location- and situation-specific, but others have broad relevance. In all cases, developing performance benchmarks that reflect the experience of past performance achieved in many communities will yield helpful guides for decision makers.

DIVERGING FROM THE NCPWI'S FRAMEWORK

The NCPWI's framework for assessing performance, embodied in Table 2-1, was the point of departure for the committee's work, but as the preceding text explains, the committee quickly diverged from this earlier work. The differences between the committee's recommendations, presented in the following chapters, and the NCPWI's work are matters of both concept and detail.
The NCPWI characterized performance in terms of four dimensions: physical assets, service delivery, quality of service to users, and cost-effectiveness.7 The first three of these were then expanded into the lists of illustrative measures included in their table. The fourth dimension the NCPWI referred to as "economic performance" and suggested that its measures fall into two broad categories: economic efficiency and cost-effectiveness. Economic efficiency of a project or program was said to be "reflected by the excess of its benefits over costs," presumably measured in monetary terms. The NCPWI's report stated that "efficiency is only one goal of public works programs and sometimes it is ignored altogether," for example, when Congress asserted (in the 1972 Clean Water Act, P.L. 92-500) that certain levels of pollution control are to be achieved without regard for
cost. Cost-effectiveness was said to provide "simpler measures of services delivered per dollar spent," and was therefore a more generally useful "performance measure."

Within this study committee's framework, the first of the NCPWI's dimensions (physical assets) is not a true dimension of performance, because it has nothing to do with the tasks infrastructure is to perform. In the committee's view, the number of buses or water production capacity of the treatment plant are simply characteristics of the infrastructure, statistics to be recorded when the system is inventoried. Developing this inventory is important and is considered in chapters 3 and 4 but is only a prelude to performance measurement.

The study committee agrees that service delivery and quality of service to users, the second and third of the NCPWI's dimensions, are indeed key dimensions of effectiveness (and therefore of performance). The committee suggests in Chapter 4 that there are others as well. The committee's inclusion of reliability in its framework addresses a dimension of performance given only passing consideration by the NCPWI. Specific measures included in Table 2-1, such as number and length of (airport) delays and reserve capacity (e.g., water supply), have something to do with uncertainties and interruptions of service but do not represent comprehensive measures of reliability.

The committee agreed that the NCPWI's economic measures have much to do with assessing performance. In this committee's view, however, cost is a dimension of performance, but cost-effectiveness, economic efficiency analysis, and other methods such as multicriteria optimization and nondimensional scaling address tradeoffs among cost and other performance dimensions. Such methods for considering tradeoffs are helpful to decision makers seeking to assess performance and choose among options for improving performance of infrastructure.
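The NCPWI's "services delivered per dollar spent" notion can be sketched as a simple ratio. The figures and option names below are hypothetical, and such a ratio captures only one dimension of the tradeoff; a real assessment would weigh it against effectiveness and reliability rather than rely on it alone.

```python
# Illustrative only: cost-effectiveness as units of service delivered per
# dollar of total cost, compared across two hypothetical water-supply options.

def cost_effectiveness(service_units: float, total_cost_dollars: float) -> float:
    """Units of service delivered per dollar of total cost."""
    if total_cost_dollars <= 0:
        raise ValueError("total cost must be positive")
    return service_units / total_cost_dollars

# Hypothetical: millions of gallons of finished water delivered per year,
# against annualized total cost in dollars.
options = {
    "rehabilitate existing plant": (900.0, 12_000_000.0),
    "build new plant":             (1400.0, 30_000_000.0),
}
for name, (units, cost) in options.items():
    print(f"{name}: {cost_effectiveness(units, cost):.2e} units per dollar")
```

In this hypothetical comparison the rehabilitation option delivers more service per dollar, but the ratio says nothing about, for example, the reliability or remaining service life of the rehabilitated plant.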
No one method can be expected to yield a generalized single-number measure of performance as a whole. Performance is essentially and unavoidably multidimensional.

NOTES

1. Even if public funds are not employed, infrastructure is invariably a public investment because it uses land and other natural and community resources that are valuable and could be used in other ways by the public or reserved for future uses.

2. Such issues, the focus of a vast literature and continuing research by economists interested in consumer theory, public welfare, environmental economics, and related fields, are well beyond the scope of this discussion.

3. Definitions of how the committee uses such terms as "dimension," "measure," and "indicator" are included in the glossary in Appendix E.
4. Of particular relevance to infrastructure is the low value (typically no value) assigned to the space underneath public rights of way. Some people argue that the uncoordinated location of utilities is a result of this absence of value.

5. Such methods as "hedonic" pricing and contingent valuation use statistical analysis and market analogies to infer a market price for such goods.

6. Some members of the community may view infrastructure actions as the work of particular groups that stand to benefit at the expense of other groups. Proponents of such views have cited evidence, for example, of the racial makeup of neighborhoods where solid waste facilities are located as a basis for questioning whether equity criteria of "environmental justice" are being met. These criteria could be among the factors influencing what a community judges to be "good" performance.

7. These were termed performance "measures" in the NCPWI's report, as were the constituent items listed in Table 2-1 (NCPWI, 1988).