
Transit Service Evaluation Standards (2019)

Chapter 3 - Survey Results

Suggested Citation: "Chapter 3 - Survey Results." National Academies of Sciences, Engineering, and Medicine. 2019. Transit Service Evaluation Standards. Washington, DC: The National Academies Press. doi: 10.17226/25446.


This chapter presents the results of a survey of transit agencies regarding transit service evaluation standards. The survey was designed to elicit information on

1. Data sources,
2. Data collection and analysis protocols,
3. How standards are defined or updated,
4. Priorities among performance measures,
5. How standards are used in making service decisions,
6. Unintended consequences, and
7. Assessment and lessons learned.

Fifty-one completed surveys were received from the 59 transit agencies in the sample, a response rate of 86%. Not all agencies answered all the questions. The survey responses were broken down by system size. The sample included 12 small agencies (fewer than 75 peak vehicles), 19 medium-sized agencies (between 75 and 299 peak vehicles), and 20 large agencies (300 or more peak vehicles).

Most responding agencies (65%) had formal service standards or guidelines that had been approved by their governing board, ranging from 58% of medium-sized agencies to 75% of large agencies (Figure 4). Two-thirds of responding agencies reported using service evaluation standards for more than 10 years.

A majority of respondents (60%) had updated performance standards or measures within the past 5 years. The most common update was changes to the metrics (redefining how they were measured or how acceptable performance was defined). New metrics for existing modes and new or previously unaddressed modes were added. Some respondents reported changes in goals and objectives or in the evaluation process. Several agencies made global changes that addressed metrics and process, for example, measuring frequent routes by ridership and coverage routes by access to service.

Data Sources and Data Collection and Analysis Protocols

Almost 70% of responding agencies reported using both automated and manual data collection techniques. Only one agency reported sole use of manual data collection. Manual data collection is done via traffic checkers or on-board surveys.
Automatic passenger counter (APC) and automatic vehicle location (AVL) systems were used by 86% of respondents, while 80% reported use of automated fare collection (AFC). Use of APC and AVL systems was more common as agency size increased.

When APCs were first used in transit, agencies would equip a percentage of their buses with APCs and rotate these buses throughout the system. Of the 38 respondents that used APCs, 66% had equipped all buses with APCs. Among other agencies, the percentage of buses with APCs ranged from 10% to 95%, with a median of 60%. Figure 5 indicates that small and midsized agencies were more likely to have APCs on the entire fleet.

[Figure 4. Adoption and use of service guidelines. Source: Survey results.]

[Figure 5. Percentage of agencies with APCs on all buses. Source: Survey results.]

Agencies that either did not have APCs or did not have APCs on all buses used sampling techniques to collect data. The most reported technique was to follow the National Transit Database (NTD) sampling guidelines (30% of agencies that reported using sampling techniques). Twenty-two percent of respondents to this question reported developing their own random samples (some are stratified random

samples) that yield statistically significant results. Rotation of APC buses through the scheduled service, supplemented by field checks and/or data on a specific trip from the previous time period, was also used.

More than 90% of respondents said they collected data at the system, route, and day-type (weekday/Saturday/Sunday) levels. More than 80% said they collected data at more detailed levels (time of day, stop, and individual trip). Data collection at the vehicle and operator levels was less common but was conducted by a majority of respondents.

The survey asked respondents how often their agencies collected, summarized, and reported data. Figure 6 presents the results. Several agencies noted that the reporting frequency varied, depending on the data item. Higher-level data, such as total ridership, was reported more frequently than detailed data such as ridership by stop or time of day. Monthly reporting was most common across agencies of all sizes, followed by annual and quarterly reporting. Some larger systems said they analyzed and reported data by operator pick, typically three or four times per year.

The respondents were split in terms of whether they considered time of year in the data collection process to account for seasonal variations. Medium-sized systems were more likely to consider the time of year, while large systems were much less likely to do so. Respondents suggested that time of year was more likely to be considered if school open/school closed schedules were used extensively. Many respondents noted that they compared monthly results with the same month in the previous year, thus removing the need to account for seasonal variations.

Who collects the data? Figure 7 indicates that this responsibility is most likely to rest with the planning department, regardless of agency size. Multiple departments shared this responsibility in 18% of responding agencies.
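The do-it-yourself random samples that some respondents described can be illustrated with a brief stratified draw. This is a minimal sketch using hypothetical trip identifiers; an NTD-compliant sampling plan involves additional requirements not shown here:

```python
import random

def stratified_trip_sample(trips, rate, seed=0):
    """Draw a stratified random sample of trips for manual ride checks.

    trips: list of (trip_id, day_type) tuples, where day_type is e.g.
    "weekday", "saturday", or "sunday". A fixed sampling rate is applied
    within each day-type stratum so every day type is represented.
    """
    rng = random.Random(seed)  # fixed seed keeps the draw reproducible
    strata = {}
    for trip_id, day_type in trips:
        strata.setdefault(day_type, []).append(trip_id)
    sample = []
    for day_type, ids in sorted(strata.items()):
        # Sample at least one trip per stratum, rounding the target size.
        k = max(1, round(len(ids) * rate))
        sample.extend(rng.sample(ids, k))
    return sample
```

Stratifying by day type (or by route or time period) is one plausible way to obtain the "statistically significant results" respondents mentioned, since it guarantees coverage of each service type rather than leaving representation to chance.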
Large agencies were more likely to have a department dedicated to data collection and performance evaluation.

Who analyzes the data? In agencies without a dedicated department, responsibility for the development and application of performance standards rests either with the planning department or with senior management. At the majority of small and midsized agencies, senior management had primary responsibility, while responsibility was delegated to the planning department at the majority of large agencies.

[Figure 6. Frequency of collecting, summarizing, and reporting data. Source: Survey results (multiple responses allowed).]
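The monthly roll-ups and same-month-last-year comparisons that respondents described can be sketched as follows. The data structures are illustrative only; real agency reporting pipelines differ:

```python
from collections import defaultdict
from datetime import date

def monthly_totals(daily_boardings):
    """Summarize daily systemwide boardings into monthly totals,
    the reporting frequency most surveyed agencies use.
    daily_boardings maps a datetime.date to a boardings count."""
    totals = defaultdict(int)
    for day, boardings in daily_boardings.items():
        totals[(day.year, day.month)] += boardings
    return dict(totals)

def year_over_year_change(totals, year, month):
    """Percent change versus the same month in the previous year --
    the comparison several respondents used in place of explicit
    seasonal adjustment. Returns None if no prior-year figure exists."""
    prior = totals.get((year - 1, month))
    if not prior:
        return None
    return 100.0 * (totals[(year, month)] - prior) / prior
```

Comparing January with the previous January, rather than with December, sidesteps seasonal effects such as school schedules without requiring a seasonal-adjustment model.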

Definition of Standards

The origin of performance standards was most often a senior management initiative at small and midsized agencies. At large agencies, a majority of respondents cited a department-level initiative. Multiple responses were allowed, since there was not necessarily one impetus for performance standards. Involvement of the board or governing body of the agency in the origin of performance standards increased with the size of the agency.

All respondents reported performance evaluation standards for local bus service, although one agency noted that the bus standards had not been formally approved by its board. Paratransit service standards were noted by 62% of all respondents and express bus service standards by 53%. Agencies that operated bus rapid transit or rail generally had performance standards specific to these modes. Only one agency (operating local and express bus and paratransit) reported a standard that applied across all modes operated.

Priorities Among Performance Standards

Respondents were split over the question of whether certain performance standards were viewed as more important than others, either by the agency or its governing board. While a trend toward larger agencies responding affirmatively to this question because of greater complexity in the performance evaluation process might have been expected, there were only minor differences by size of the agency. Twenty-one respondents noted that certain standards do receive more attention.
[Figure 7. Department with primary responsibility for data collection. Source: Survey results.]

These are as follows:

• Ridership and productivity (riders, riders per revenue hour),
• Financial standards (subsidy per boarding, farebox recovery ratio, net income projection),
• Service quality (reliability, overcrowding, on-time performance, wait time, customer and employee satisfaction),

• Service availability (coverage, bus stop locations/amenities, target service levels), and
• Safety (accidents, safety).

Ridership and productivity standards were important across agencies of all sizes. Service availability standards were emphasized by small agencies, which often prioritized coverage over frequency. Midsized and large agencies were more likely to focus on financial and service quality standards. Safety, while important at all agencies, received particular attention at large agencies.

Respondents also indicated that performance on standards that are not viewed as among the most important continues to be reported to the board or governing body (sometimes less often or in an appendix) and used internally because of the value of these standards to specific departments.

Approximately one-quarter of responding agencies had discontinued use of certain performance standards in the past 5 years. All small agencies and more than 80% of large agencies reported that they had not dropped any performance standards from their evaluation process. For whatever reason, a majority of midsized agencies had discontinued certain performance standards. Possibly midsized agencies tweak their performance evaluation process more often than large or small agencies.

In some cases, performance standards were shifted between a policy plan and internal guidelines or removed because the evaluation standard was changed. Two agencies reported that performance standards were no longer enforced or had been downgraded to “advisory” status. Among the standards discontinued were detailed on-time performance, cost per trip, farebox recovery ratio, and ridership (which was seen as more an indicator of larger economic conditions and trends than a transit performance indicator).

Performance Standards and Decision-Making

A typical process in the application of performance standards is as follows:

1. Data are collected and analyzed.
2. Issues are identified.
3. Corrective action (which may be specified in the standards) is taken.

At least 70% of responding agencies of all sizes indicated that service changes had been made as a direct result of the service evaluation process. This is somewhat less likely to happen at small transit agencies; three of the four “no” responses to the survey question were from small agencies. A possible explanation is that governing bodies of smaller agencies, which tend to be city councils or county commissions, may be more likely to consider factors other than performance in making service-related decisions.

Responses to the question of how long an agency monitored the trend of a metric before taking action varied. Between 12 and 18 months was the most common response among small agencies. Larger agencies tended to act more quickly, within 12 months. Half of the midsized agencies reported that no time was specified or that it depended on the specific metric.

Agencies made various types of changes as a result of the evaluation process. The following were mentioned by at least 75% of respondents, with little variation by system size:

• Changes to schedules,
• Changes to frequency,

• Changes to span of service, and
• Route truncation, rerouting, or discontinuation.

Among agencies that had made service changes as a direct result of service evaluation, almost 90% of respondents reported that their agency’s governing board had never withheld approval of a proposed service change arising from the service evaluation process. The percentage of responses ranged from 80% at large agencies to 100% at small agencies. Only two respondents indicated that their agency’s governing board had changed its level of support for the service evaluation process as a result of public controversy over proposed service changes. A majority of respondents in each size category replied that their agency had always taken public input into account before making changes. In a similar vein, a large majority of respondents in each size category indicated that their agency’s governing board had never directed staff to change performance evaluation standards as a result of public controversy.

Respondents cited a variety of approaches to sharing decisions based on performance standards throughout the agency, especially with frontline personnel. The most common approach was to meet with or communicate directly with frontline staff, including bus operators and supervisors. A few respondents reported a formal process, while others noted frankly that the communication process needed improvement. A majority of respondents from agencies of all sizes (75% of small and midsized agencies and all small agencies) stated that employees were surveyed to obtain their input on improving agency performance.

Unintended Consequences

Attempts to meet a specific performance standard can have unintended consequences, as reported by 38% of all respondents.
Unintended consequences can be sorted into several different categories:

• Interaction between standards, including decreased productivity from attempts to improve reliability, decreased maintenance and associated impacts from an emphasis on on-time performance, decreased on-time performance from combining routes to improve crosstown service, and increased complaints as a result of providing additional tools for customer feedback;
• Inflexible application of standards, including route discontinuation instead of modification of its service span (the standard called for all routes to have the same span of service), failure to account for near-term development along a route before discontinuation, and the need for an exemption to the sunset clause for a new route;
• Operator impacts from stop realignment and from the effect on operator break time of trying to squeeze more productivity out of bus routes;
• An underestimation of support by certain segments of the population for poorly performing routes;
• Impacts on specific groups, including customers with disabilities and high school students; and
• Regulatory impact resulting in a review of transit data by the inspector general.

The first two categories are especially relevant in the development and application of service performance measures. It is not uncommon for improvements in response to poor performance on one measure to affect performance in other areas. Flexibility in applying a given standard is also advisable. A comment from one respondent summarizes the issue: “Major service changes do not have entirely predictable results.”

Agency Assessment of Designing and Applying Service Evaluation Standards

The survey asked transit agencies to rate their efforts to design and apply service evaluation standards. The ratings were generally positive (Figure 8). Overall, 22% of respondents rated their efforts as “very successful” and an additional 44% rated their efforts as “somewhat successful.” Compared with overall results, midsized agencies were more likely to rate their efforts as “very successful” (35%), large agencies as “somewhat successful” (56%), and small agencies as “somewhat successful” or “neutral” (40%).

Challenges

Respondents rated various potential elements in the development and use of service standards as major challenges, minor challenges, or not an issue. Limited staff time was rated as a major challenge by more than half of all respondents. Accuracy of data was the only other element mentioned by at least one-quarter of respondents as a major challenge. Small and midsized agencies were most likely to cite limited staff time and accuracy of data as major challenges, with accuracy of data a greater concern among small agencies. Large agencies were more likely than others to cite lack of consensus on appropriate measures and difficulty in obtaining approval for performance-based proposals as major challenges (Figure 9).

Respondents also answered an open-ended question to describe the single major challenge in the development and use of service evaluation standards as well as strategies to overcome this challenge. The four most often-mentioned challenges were limited staff time; data quality and accuracy; the process of achieving buy-in and consensus around evaluation standards and deciding what to do if these are not met; and how to balance evaluation results against competing goals.
Other respondents mentioned the lack of understanding regarding the purpose and value of service standards at all levels (staff, management, customers, and general public), data collection, and how best to explain performance evaluation to the public as the one major challenge.

[Figure 8. Agency rating of efforts to design and apply service evaluation standards. Source: Survey results.]

The most common strategy for overcoming the one major challenge was ongoing education within the organization and with stakeholders and the public. Respondents also mentioned oversight of and investment in new technologies for collecting and evaluating data and attention to data collection protocols as strategies to address the one major challenge.

Benefits and Drawbacks

The two most-cited benefits of service evaluation standards were a transparent rationale for service changes and an objective, data-based framework for analysis (Figure 10). While similar, the “transparent rationale” benefit reflects the process of change, while the “objective” benefit refers to the analysis leading to a proposed change. A transparent rationale for change was more likely to be cited as a benefit by large and midsized agencies, while an objective, data-based framework for analysis was cited more often by small agencies. Small agencies were also more likely to value a process that is easy to understand. Other benefits included a uniform application of standards, a richer narrative of how transit works, peer comparisons, and clear directives for planners.

Respondents also noted drawbacks of transit service evaluation standards (Figure 11). Obtaining buy-in by the board and others (stakeholders, internal staff, and the public) goes hand in hand with an ongoing need for education about the standards and the performance evaluation process. The need to strike a balance between rigid application of standards and the flexibility to consider qualitative factors and/or public input can be difficult. The process can be complex and time-consuming, especially for small and midsized agencies with limited staff. Inability to increase service where warranted as a result of budget constraints and inconsistent application of standards were also perceived as drawbacks. Two respondents reported no drawbacks in the use and application of transit service evaluation standards.
[Figure 9. Major challenges. Source: Survey results (multiple responses allowed).]

[Figure 10. Benefits of transit service evaluation standards. Source: Survey results (multiple responses allowed).]

[Figure 11. Drawbacks of transit service evaluation standards. Source: Survey results (multiple responses allowed).]

More than half of all respondents reported that the most successful (as defined by the respondents) action taken on the basis of service evaluation standards was to adjust service levels in one of four ways: modifying or reallocating service, enabling cuts to low-demand routes, identifying opportunities for successful service improvements, or guiding a major redesign of the existing network. Among the range of successful actions, small agencies were able to reduce service on low-performing routes and also reported using standards to guide the placement of bus stops and stop amenities. Midsized agencies were more likely to cite identifying successful system improvements or guiding a system redesign as the most successful action, and large agencies were more likely to report modifying or reallocating service.

Respondents were asked, “If you could change ONE aspect in the development and use of service evaluation standards by your agency, what would you change?” The most common responses were to formalize the process (both standards and actions to be taken) and apply it uniformly, to educate everyone involved about the process, and to automate data collection and analysis. Education was stressed by large agencies, while midsized and small agencies emphasized data collection and a formalized process. Maintaining a focus on the evaluation process and potential improvements was a high-level suggestion from multiple agencies.

Lessons Learned

One respondent summarized the ideal for the development and application of performance standards: “Judiciously developed, wisely applied.” More specific lessons learned that might be helpful for other transit agencies are described in the paragraphs below.

• Build flexibility into the performance standards and evaluation process. Design standards as guidelines, not hard and fast rules.
Develop standards that provide management and the board with some flexibility but that can force decisions when there is truly justification for them. Allow exceptions for lifeline services. Balance service standards with public engagement. Avoid making things too black and white. Performance standards are a form of triage, but they are not always definitive. Delve deeper to understand why routes perform as they do.
• Use verified and understandable data as the core of the evaluation process. Make sure data sources are accurate and consistent. Base decisions on actual data and make sure that data are readily available to make and support a decision. Good data are the foundation for developing and implementing service evaluation standards. Be consistent in how data are collected and analyzed. Ensure you have the capability to measure the metrics being proposed without spending excessive staff time in collecting, cleaning, and analyzing data. Think very carefully before including metrics that require extensive data collection effort.
• Keep it simple. Keep official metrics simple, broad, and few in number. Lay out clear and simple standards and rules with very few exceptions. Take a minute to think about what value a metric brings to your organization. Just because data are available does not mean analysis will bring value. Establish measures that the operating department can understand and act upon. The less complex, the better.
• Choose standards that can lead directly to taking action. Link standard problem-solving actions to underperforming services (e.g., “If service x is underperforming on metric y, do z to correct it.”). This may seem contrary to the lesson regarding flexibility, but the purpose of developing and using standards is to improve service. Tying improvement initiatives to improved performance is essential. Evaluation and improvement need to go hand in hand.
Develop a few principal objectives that measure the agency goals and show how individual services meet these goals or will lead to changes. Develop other standards that can provide clearly defined support for service development. Provide guidance for what action to take if a route fails multiple standards. One agency noted the need to do a better job of showing the impacts of potential changes—for example, actions to reduce overcrowding—on customer travel time and ridership. Ideally, identical performance metrics are used to assess service and to adjust service.
• Keep the board/governing body involved. Take time to brief board members regarding the service standards and how they are calculated. Brief new board members at the start of their term. Use a simple metric (e.g., passengers per hour) to show the board how to measure different routes and overall performance versus peers. Have the standards adopted formally by the board; this provides credibility and justification for difficult recommendations. Make sure the board knows how standards will be applied. Make frequent reports regarding compliance with performance standards and levels of overall performance. Education of individual board members is critical.
• Base performance standards on agency mission and goals. Use only metrics that relate to the strategic plan or core mission. Be sure to establish standards that speak to the goals that transit is trying to accomplish, especially as they relate to local priorities. Part of the role of performance evaluation standards is to explain the transit story. Ensure that policies and metrics are aligned to facilitate or enable the allocation of resources to fix problems revealed by the metrics. Define stretch goals as well as minimum acceptable standards to achieve continuous improvement.
• Communicate. Be transparent in your process with the public and your board. Obtain buy-in from all stakeholders. Get everyone involved at the beginning and keep everyone involved so that all have some ownership in and input to the process. Manage the evaluation process and communicate with all interested parties. Collect and share feedback. Coordinate service evaluation standards with the regional provider on common corridors. Learn from other agencies. Benchmarking can be very helpful in this regard.
• Educate.
Do not assume that everyone knows what a standard means or how it is used. Conduct as much education and outreach as possible. If management changes, educate the new administration on existing dashboards and performance standards. Educate new board members and other stakeholders about the performance evaluation process.
• Present performance quarterly to management only, and keep reports to the board on an annual basis. Essentially, no one outside of planning, scheduling, and operations has the time or expertise to talk about ridership at a route-by-route level, but if you do not discuss your system- and mode-level trends at least once or twice a year, your customers will begin to think you are not doing your job.
• Be prepared to be accountable to your own standards. Do not be afraid to test standards that may not be mainstream but may have significant impacts on your service and your network.
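The “if service x is underperforming on metric y, do z” pattern recommended above can be expressed as a small rule table. The metrics, thresholds, and actions below are invented for illustration only; an agency would substitute its own adopted standards:

```python
def evaluate_route(route):
    """Apply illustrative standards to one route and return the
    recommended corrective actions. route is a dict of metric values;
    all thresholds here are hypothetical examples, not adopted standards."""
    actions = []
    # Productivity standard: flag low passengers per revenue hour.
    if route["passengers_per_revenue_hour"] < 10:
        actions.append("review for restructuring or reduced frequency")
    # Reliability standard: flag poor on-time performance.
    if route["on_time_pct"] < 80:
        actions.append("investigate running times and adjust the schedule")
    # Crowding standard: flag overloaded peak trips.
    if route["peak_load_factor"] > 1.25:
        actions.append("add trips or assign larger vehicles")
    return actions
```

Encoding the standards this way ties each metric directly to a corrective action, which is the point of the lesson: evaluation and improvement go hand in hand. The flexibility lesson still applies; the output is a recommendation for staff review, not an automatic decision.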


TRB’s Transit Cooperative Research Program (TCRP) Synthesis 139: Transit Service Evaluation Standards provides an overview of the purpose, use, and application of performance measures, service evaluation standards, and data collection methods at North American transit agencies.

The report addresses the service evaluation process, from the selection of appropriate metrics, through the development of service evaluation standards and data collection and analysis, to the identification of actions to improve service and their implementation.

The report also documents effective practices in the development and use of service evaluation standards. The report includes an analysis of the state of the practice of the service evaluation process in agencies of different sizes, geographic locations, and modes.

Appendix D contains performance evaluation standards and guidelines provided by 23 agencies.
