The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.

CHAPTER 10

Evaluating and Improving ASD Efforts

The final step in the cycle of ASD efforts is an evaluation of what has been accomplished. Such an assessment would provide a systematic, critical, and unbiased review and appraisal of the methods and procedures used, as well as the results obtained. Before starting a new ASD effort--whether pursuing expanded service with the same carrier, service from a different carrier to a different destination, or a yet-to-be-defined alternative--ASD teams must ask: What worked and what did not?

Why is evaluation so important?

The evaluation step is often overlooked as part of any program. But sound program management requires an evaluation to improve and build upon past efforts, whether they were successful or not. This evaluation should not be an exercise in assessing blame; rather, it is a critical component of the planning process and of refining future ASD efforts, as shown in Figure 10.1. Evaluation "closes the loop" with the planning and implementation cycle. Measurement allows for improvement.

How is effectiveness in ASD measured?

Measuring the effectiveness of an air service initiative is conceptually straightforward, but can be very difficult to accomplish. The ASD program is generally gauged against the overall ASD goal. The basic conceptual question is: Was the airport's ASD program responsible for the outcome, or were some other intervening factors major contributors? (See Figure 10.2.) Essentially, three key components are needed to determine whether an ASD technique worked effectively: knowing exactly what the objectives were, measuring the outcomes, and accounting for any other factors that may have contributed to or detracted from the efforts. Additionally, it is important to quantify the relative cost-effectiveness of the ASD program.
No airport or community has an unlimited pool of resources that it can apply toward air service development. Each community (or airport, if no community funds are at risk) has to decide what the limits are on its investment in air service and what return on investment it considers adequate. Rigorous evaluations can become major econometric and financial studies. The study team is not presuming to describe in detail how an airport would conduct such a detailed analysis. To the extent that an airport or community is particularly interested in pursuing one, the study team would suggest engaging the assistance of the economics or business department of a local university, which might be able to help with the modeling as a teaching device.
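The return-on-investment question can be framed as a simple screening calculation. The sketch below is purely illustrative--the dollar amounts and the minimum acceptable return are hypothetical values a community might choose, not figures from this report:

```python
# Illustrative ROI screen for a proposed ASD investment.
# All figures below are hypothetical and community-specific.

def roi(total_benefit: float, total_cost: float) -> float:
    """Simple return on investment: net benefit per dollar invested."""
    return (total_benefit - total_cost) / total_cost

program_cost = 250_000        # e.g., revenue guarantee plus marketing spend
estimated_benefit = 400_000   # e.g., projected added airport revenue and traveler savings

r = roi(estimated_benefit, program_cost)
acceptable = r >= 0.25        # a community-chosen minimum return threshold
```

A full evaluation would replace the single benefit estimate with the econometric modeling the text describes; this screen only formalizes the "what return do we consider adequate?" decision.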

[Figure 10.1. Summary of the ASD process: identify available resources; determine ASD goals; select ASD tools; present case to airlines; evaluate efforts; revise program? If yes, identify deficiencies and repeat the cycle.]

Objectives

Most ASD programs have relatively straightforward goals or objectives (e.g., adding service to a new hub). As discussed in Chapter 7, other commonly used objectives include the following:

- Retaining existing service
- Adding service to a new destination
- Adding frequencies to current services
- Lowering fares/introducing new competitive service
- Improving service reliability
- Upgrading aircraft
- Increasing access to global networks

Progress toward these types of goals is easily measured. Other goals--such as decreasing the number of passengers leaked through various marketing efforts--can require more complicated measurement to determine how well the effort succeeded.

Other possible criteria against which the effectiveness of ASD programs is sometimes judged are not included here. For example, several airports reported that their programs were successful in part because their staff learned how to better plan and implement ASD efforts. Although that may be valuable, unless professional development was one of the formal ASD goals, judging a program on that basis would not be appropriate.

[Figure 10.2. Conceptual overview of an evaluation methodology: did the ASD program move the previous situation to a great or poor outcome, or were intervening factors responsible?]
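Because progress toward these objectives is measured by comparing operational data before and after the program, the check reduces to a baseline-versus-current comparison. A minimal sketch--the metric names and all numbers are hypothetical examples, not data from the report:

```python
# Hypothetical sketch: compare baseline vs. post-program operational data
# for measurable ASD objectives. All figures are illustrative.

def evaluate_objectives(baseline: dict, current: dict) -> dict:
    """Return the change in each tracked metric (current minus baseline)."""
    return {metric: current[metric] - baseline[metric] for metric in baseline}

baseline = {"weekly_departures": 56, "nonstop_destinations": 4, "average_fare": 310.0}
current  = {"weekly_departures": 70, "nonstop_destinations": 5, "average_fare": 285.0}

changes = evaluate_objectives(baseline, current)
# Positive changes in departures/destinations and a drop in average fare
# would indicate progress toward frequency, destination, and fare goals.
```

The comparison deliberately says nothing about causation; attributing the change to the ASD program (rather than intervening factors, per Figure 10.2) is the harder, separate step.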

Measuring Outcomes

Survey results: Most responding airports indicated that they used basic operational data (e.g., departures, frequencies, or enplanements) to determine whether their programs were effective. Others reported that they either had no idea how to evaluate their programs or simply did not attempt any evaluation.

With the types of goals and objectives noted previously, measuring the eventual outcomes is relatively simple. Most common operational data will give a general sense of how the ASD program worked. (See Table 10.1.) These data are available from the operating carriers and from U.S.DOT (see Chapter 5).

For example, if a program's objective was to add service from a new network carrier to improve passenger flows to a different part of the country or to add competition in one-stop markets, the most obvious measure is the total number of nonstop destinations served. The airport would also want to record the total number of (daily or weekly) departures, as well as some indication of the total available outbound or inbound capacity, reflected in the number of available seats.

Additional Measures

Coupled with gaining the new service is how passengers reacted to the service. This evaluation would provide important feedback on the assumptions used in the business case (e.g., what the likely passenger response to new service over a particular hub would be, whether the new service would stimulate new travel or help re-capture traffic that was previously leaked to a nearby competing airport). For example:

- Did the service attract the number of passengers expected? What were the passenger loads?
- Did the service help reduce passenger leakage to other airports?
- Is the business/leisure mix as anticipated?
- How do fares in connecting markets compare to those available before the service started?

Considering the importance of maintaining existing service, particularly in difficult economic times, it is vitally important that the effects of any ASD program on the incumbent carriers be considered:

- Did service to the new hub shift passengers from the incumbent's connecting service?
- What happened to their load factors?

Regardless of the airport's goals or objectives, the key point to remember is that changes need to be measured not only for the targeted airline, but also for all traffic and service at the airport. Introducing any change in service may produce "ripple effects" on other carriers' service and passenger traffic. A full accounting of the results of the ASD program needs to incorporate considerations of those effects as well.

Table 10.1. ASD objectives and primary measures of effectiveness.

  Goal/Objective: Principal Outcome Measures
  Retaining existing service: Departures; Available seats
  Service from new entrant network carrier: Nonstop destinations served; Departures; Available seats
  Adding frequencies: Departures
  Lowering fares: Average fares
  Increasing access to global networks: Trips with an international segment
  Improving service reliability: Departures performed vs. scheduled; On-time departures; On-time arrivals
  Upgrading type of aircraft: Available seats

Financial Measures

Survey results: Roughly half of the airports that were surveyed reported that they also tracked financial effects of their ASD efforts. Measures were both revenue and cost related. For example, several tracked parking revenues and spending on concessions that correspond with changes in enplanements. Some tracked airport costs per enplaned passenger.

Evaluation of the results of the ASD efforts should include a financial component. At even a relatively high level, the total cost per additional passenger or operation should be able to be calculated. That "net cost" should include offsetting revenues derived from increased traveler spending on any concessions and parking.

At the same time, it is also important to bear in mind that air service produces a significant economic impact on the community and region. There is a "multiplier effect" of dollars spent by both business and leisure travelers. This effect is one reason why communities are willing to invest in attracting and retaining air service. For example, according to the Air Transport Association (ATA), the trade association that represents most large U.S. airlines, in 2005 every $100 in civil aviation-related output generated an additional $275 in demand, and every 100 civil aviation jobs generated 314 jobs in other industries. ATA's estimates were based on information and a model from the U.S. Department of Commerce, Bureau of Economic Analysis.

Challenges in Measuring the Outcomes of Marketing Efforts

Many airports' ASD goals are linked to a marketing campaign. Several airports surveyed had engaged in various marketing efforts to re-capture a greater share of passengers that they were losing to nearby airports.
Others were promoting service by a new carrier--whether a mainline carrier, a regional affiliate, or a niche carrier such as Allegiant. Airports also have tried to boost their local enplanements by marketing their own unique advantages and potential cost savings to travelers. For small airports, these strengths might be free or inexpensive parking or the ease of traveling through an uncongested airport.

Evaluating marketing efforts can be complicated. On the one hand, airports can simply track total enplanements to see if the number of passengers increased after the marketing began. That would provide a high-level measure of the result. Airports would probably want more in-depth information, however. Marketing is often directed at a specific market segment, so the airport would want to understand the extent to which it reached those audiences and whether the message achieved the intended effect.

For example, an airport--perhaps working in conjunction with a local resort or attraction--may decide that it would like to boost inbound traffic, which could be either business or leisure traffic. One market may focus on inbound leisure travel (e.g., Moab, Utah), while another may target luxury leisure travelers in conjunction with local resorts. For example, Hailey, Idaho, allocated $175,000 for marketing, including direct sales, direct mail, print advertising, Internet marketing, and radio advertising. The marketing was to be targeted at people living in the Los Angeles area who may be interested in visiting nearby Sun Valley and at residents of the Sun Valley area who may be interested in traveling to Los Angeles for business or personal reasons.

Sustained Changes

The ASD team may have succeeded in attracting new service, but that may not mean that the ASD program was an unqualified success. Did the service become self-sustaining without financial assistance?
The U.S.DOT Office of Inspector General and the GAO generally regard a program as successful if the benefits (whether increased enplanements or new service) were sustained for 12 months. If the benefits were sustained for less than that, U.S.DOT regards the program as a "partial success."
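The 12-month benchmark can be expressed as a simple classification rule. A sketch in that spirit--note that the precise definition of "sustained" (used here as the number of months the benefit persisted after financial assistance ended) is an assumption, since the text does not spell out the exact test:

```python
# Classify an ASD outcome against the 12-month sustainment benchmark
# described above. The input--months the benefit persisted after assistance
# ended--and the zero-month "failure" boundary are simplifying assumptions.

def classify_outcome(months_benefit_sustained: int) -> str:
    if months_benefit_sustained >= 12:
        return "success"
    elif months_benefit_sustained > 0:
        return "partial success"
    return "failure"

# e.g., service that remained viable for only 8 months after assistance
# ended would count as a partial success under this rule.
```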

[Figure 10.3. U.S.DOT example of using enplanement data to evaluate ASD effort. Data from U.S.DOT (11).]

U.S.DOT provided an example of what it considers to be a "successful" case study, in which a community attracted an additional carrier to serve a different hub, and illustrated the effects of the new service with passenger enplanement and flight activity data (11). Figures 10.3 and 10.4 show enplanement and operations data from the U.S.DOT report that illustrate successful ASD efforts.

In cases where the benefits disappeared at the end of the financial assistance, U.S.DOT considers those programs to be "failures." It is difficult to disagree with that characterization. The service may have existed for some months, and some in the community may have benefited from the service while it operated. But the bottom line is that the airport and the community invested great time and effort into attracting service that, for whatever reason, was not commercially viable.

Several of the airports that participated in this project lost service during 2008. However, most small community airports lost service during the year. Some of that service had been in place for several years. Compared to November 2007, nearly every airport in the United States lost some service by fall 2008, according to schedules filed with the Official Airline Guide (OAG). Given

[Figure 10.4. U.S.DOT example of using operations data to evaluate ASD effort. Data from U.S.DOT (11).]