SUMMARY

Traffic Forecasting Accuracy Assessment Research

Introduction

Traffic forecasts are projections of future traffic conditions on existing or proposed roadways. Traffic forecasts are used to inform important decisions about transportation projects, including the selection of which projects to build and certain design elements of those projects. It is in the public interest to ensure that those decisions are based on the most accurate and objective forecasts possible.

However, it is also important to recognize that forecasts will always be influenced by some factors that are unexpected, unpredictable, and difficult to anticipate. It is therefore prudent to quantify the expected inaccuracy of traffic forecasts and to consider that uncertainty when making decisions. Together, more accurate traffic forecasts and a better understanding of the uncertainty around them can lead to a more efficient allocation of resources and build public confidence in the agencies that produce those forecasts.

Although forecast accuracy has long been a topic of interest to scholars and to critics of transportation planners and policy makers, the evidence on forecast accuracy remains limited. Those scholars and critics have offered possible reasons for forecast inaccuracy, including poor data on which forecasts are based, incorrect assumptions about future conditions, limitations of the forecasting methods used, and political motivations that sometimes cause people to distort forecasts intentionally. Much of the existing work considers toll roads, rail transit, or mega-projects, and often that work is more speculative about possible causes of inaccuracy than definitive about actual causes. Only a small set of empirical studies examine non-toll traffic forecast accuracy in the United States. According to David Hartgen (2013), "the greatest knowledge gap in U.S. travel demand modeling is the unknown accuracy of U.S.
urban road traffic forecasts." For several reasons, few studies of the topic exist: Assembling data to study forecast accuracy can be cumbersome. It involves sorting through forecast documents and files created years earlier, and requires collecting data on the project (ideally, both before and after it opens). In their review of forecast accuracy studies, Nicolaisen and Driscoll (2014) state, "The lack of availability for necessary data items is a general problem and probably the biggest limitation to advances in the field."

This research aims to fill that gap, focusing specifically on project-level traffic forecasts of public roads in the United States. It assembles the largest known database of traffic forecast accuracy. It reports empirical evidence on the accuracy of these forecasts and factors related to accuracy. It goes on to consider a series of case studies aimed at providing a better understanding of the sources of forecast inaccuracy. Together, the products of this research provide empirical evidence about the accuracy of past traffic forecasts.

Most past studies have been analytical rather than prescriptive. They report what was observed but offer little advice to planners, engineers, or policy makers as to how they may
improve forecasting practice. In contrast, this study makes specific recommendations by which state departments of transportation (DOTs), metropolitan planning organizations (MPOs), and others can improve the accuracy of traffic forecasts going forward. Other fields have demonstrated the effectiveness of reviews that lead to improved forecasting practice; the National Oceanic and Atmospheric Administration (NOAA), for example, adopted a highly successful Hurricane Forecast Improvement Program. While attentive to insights and explanations in the assessment of past forecasts, this study emphasizes improving practice. It addresses what agencies can do to improve the accuracy of traffic forecasts and how to consider the uncertainty inherent in forecasts when making decisions about the transportation system.

Research Approach

A review of the literature revealed that most prior studies of traffic forecasting accuracy had adopted one of two approaches to assess the accuracy of the forecasts they studied. The two approaches are complementary, and the project team employed both in this study.

The first approach relies on gathering a large sample of forecasts made sufficiently long ago that the horizon year of the forecasts has already come. This approach makes it possible to compare the forecast traffic against measured traffic flows on the facilities for which the forecasts were made. With a large sample of such forecasts, the project team used statistical analysis to examine correlations between forecast accuracy and data inputs, facility types, methods used to conduct the forecasts, and factors exogenous to the forecasts that influenced their accuracy. This analysis was based on a large sample of cases and is referred to throughout this report as the Large-N analysis.
The Large-N analysis required compiling a database of forecast and counted traffic for 1,291 projects from six states and four European countries. These projects comprise 3,912 individual segments. Segments include sections of roadway between major intersections, opposing directions of a freeway, or ramps in an interchange. The forecast accuracy database developed in NCHRP Project 08-110 contains forecast and actual traffic volumes, as well as information such as the type of improvement, the facility type, the forecast method, and the project location. It is the largest known database for assessing traffic forecast accuracy, and it allowed development of distributions of forecast errors and analysis of relationships between measured traffic volumes, forecast traffic volumes, and a variety of potentially descriptive variables.

The second type of study identified in the literature consisted of case studies of particular facilities in which forecasts were made at some date in the past, the projects were planned in detail and built, and resulting traffic flows were observed. Most were case studies of a single project or of a small number of projects, using customized data collection that included review of historical documents, before-and-after surveys of travelers, and interviews of those who participated in project decision making. For example, the Federal Transit Administration (FTA) conducts such before-and-after analyses of patronage and cost forecasts for major capital investments in public transit. The depth of the analysis may lead researchers to identify sources of forecast error, such as errors in inputs, incorrect assumptions, model specifications, and changes in the project definition. It is, however, difficult to generalize from particular case studies. This study included six case studies that are referred to throughout this report as deep dives.
The Large-N analysis and deep dives complement one another by shining different lights on the same problem. The former addresses the question, "How accurate are traffic forecasts?" while the latter address the question, "What are the sources of forecast error?"
A third question addressed in this research is, "How can forecasting practice be improved?" To address this question, the project team conducted a workshop with traffic forecasting practitioners in which the lessons learned from this research could be reviewed and considered. Figure S-1 shows how these three questions relate to the methods and outputs of this research.

This research is intended to help agencies such as state DOTs and MPOs improve their future forecasts. Accordingly, this report focuses not only on analysis of past projects but also on establishing a process of continual improvement. The proposed process is informed by the lessons learned conducting the empirical analysis and recognizes the capacity of, and challenges facing, organizations that collect the data, calibrate and operate the models, and report findings in highly politicized environments. To be sure that the recommendations are useful in practice, the project team made every effort to learn from the agencies that had produced the forecasts, and drew from the experience of the team members as practicing traffic forecasters. The team tried to replicate what agency forecasters had done, including running the travel demand models whenever possible. A workshop was conducted with practitioners from state DOTs and MPOs to present findings. The resulting recommendations focus on the process of collecting data about forecasts, learning from comparisons to actual outcomes, and using the insights gained to improve future forecasts and to understand the uncertainty around those forecasts.

How Accurate Are Traffic Forecasts?

To decision makers, the most important question about forecast accuracy may be, "Given a forecast, what range of likely outcomes should be expected?" In this study, the Large-N analysis aimed to answer that question by comparing traffic forecasts made at the time projects were planned with flows measured after the projects were completed.
The comparisons made in the Large-N analysis were for average daily traffic (ADT) in the first post-opening year for which traffic counts are available. The accuracy of the forecast for a project was reported as the percent difference from forecast (PDFF), expressed as follows for project i:

    PDFF_i = (Counted Volume_i - Forecast Volume_i) / Forecast Volume_i × 100%    (S-1)

Figure S-1. Research questions and methods. The figure pairs each research question with its method and output:

• Question: How accurate are traffic forecasts? Method: Statistical analysis of actual versus forecast traffic for a large sample of projects after they open (Large-N analysis). Output: Distribution of expected traffic volume as a function of forecast volume.

• Question: What are the sources of forecast error? Method: Deep dives examining the forecasts made for six substantial projects after the project had opened and actual data were available. Output: Estimated effects of known errors, and remaining unknown error.

• Question: How can forecasting practice be improved? Method: Derive lessons from this research and review with practitioners. Output: Recommendations for how to learn from past traffic forecasts.
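As a quick illustration of Equation S-1, the short function below computes the PDFF for a hypothetical project; the volumes are invented for illustration, not drawn from the NCHRP database:

```python
def pdff(counted, forecast):
    """Percent difference from forecast (Equation S-1).

    Negative values mean the counted volume came in below the forecast.
    """
    return (counted - forecast) / forecast * 100.0

# Hypothetical project: 30,000 ADT forecast, 27,000 ADT counted after opening.
print(round(pdff(27_000, 30_000), 1))  # -10.0
```

A positive result would indicate traffic higher than forecast, matching the sign convention used throughout the report.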
Using this formulation, negative values indicate outcomes lower than forecast, and positive values indicate outcomes higher than forecast. The formulation appealed to the research team because it expresses the error as a function of the forecast, which is always known earlier than the traffic counts. The distribution of the PDFF measured this way over the dataset can be used to portray the systematic performance of traffic forecasts.

On average among the cases examined, the counted traffic volume was about 6% lower than the forecast volume, showing some bias. The mean of the absolute PDFF, a measure of the spread, was about 17%. In this study, 9 in 10 project outcomes fell within the range of -38% to +37% of the forecast. When presented as a function of forecast volumes, the PDFF also showed that percentage errors decreased as traffic volumes increased; in other words, traffic forecasts were more accurate, in percentage terms, for higher-volume roads.

Quantile regression was used to explore the uncertainties inherent in forecasting traffic. Quantile regression is similar to standard regression, but rather than estimating a line of best fit through the center of a cloud of points, it estimates the lines along the edges, corresponding to specific percentiles. Using the data for projects included in this study, the project team developed several quantile regression models of the actual traffic as a function of the forecast traffic for the 5th, 20th, 50th (median), 80th, and 95th percentiles. Figure S-2 shows the simplest of these models, with the percentiles representing the uncertainty in outcomes expected around a forecast. Additional models were estimated in this research to test the effects of various descriptive variables. Plots such as Figure S-2 provide a means of estimating the range of uncertainty around a forecast at the time the forecast is made.
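To make the mechanics concrete, the sketch below fits a through-origin quantile line by minimizing the pinball (check) loss, the objective that quantile regression minimizes, over a grid of candidate slopes. This is a deliberately simplified stand-in for the report's quantile regression models, which included additional variables and were estimated with standard statistical software; the data, slope grid, and function names here are illustrative assumptions:

```python
def pinball_loss(actual, predicted, q):
    # The check loss whose minimizer is the q-th conditional quantile.
    total = 0.0
    for a, p in zip(actual, predicted):
        total += q * (a - p) if a >= p else (1 - q) * (p - a)
    return total / len(actual)

def fit_quantile_slope(forecast, counted, q, candidate_slopes):
    # Grid search for the slope b in counted ~ b * forecast that
    # minimizes the pinball loss at quantile q.
    return min(candidate_slopes,
               key=lambda b: pinball_loss(counted, [b * f for f in forecast], q))

# Invented sample: counted ADT scattered around forecast ADT.
forecast = [10_000, 20_000, 30_000, 40_000, 50_000]
counted  = [ 9_000, 17_000, 31_000, 36_000, 46_000]
slopes = [0.5 + 0.01 * i for i in range(101)]  # candidate slopes 0.50 .. 1.50

median_slope = fit_quantile_slope(forecast, counted, 0.50, slopes)  # 0.92
upper_slope  = fit_quantile_slope(forecast, counted, 0.95, slopes)  # 1.04
```

For this invented sample, the fitted median slope (0.92) and 95th percentile slope (1.04) bracket expected outcomes around a forecast in the same spirit as the percentile lines of Figure S-2.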
The lines in the graph depict various percentile values and can be interpreted as the expected ranges of measured traffic in relation to a forecast volume.

Figure S-2. Expected ranges of actual traffic (base model). The figure plots expected ADT (y-axis) against forecast ADT (x-axis), each from 0 to 60,000, showing a perfect-forecast reference line and the 5th percentile, 20th percentile, median, 80th percentile, and 95th percentile quantile regression lines.

For example, it can be expected that 95% of all projects with a forecast ADT of 30,000 will have counted traffic below 46,578. Only 5% of the projects will experience
Summary S-5 counted traffic below 17,898. Discounting other variables, for a forecast volume of 30,000 the range 45,578 to 17,898 will capture the likely volumes experienced by 90% of the projects examined. In this research, additional models were estimated to test the effects of various descriptive variables. As a communication tool, plots can be used to graphically depict the uncertainties based on the calculated percentiles, essentially showing the estimated range of uncertainty around a given forecast at the time the forecast is made. In this research, additional models were estimated to test the effects of various descriptive variables. Part II provides a more detailed discussion of this process. Several observations emerged from the Large-N analysis. Summarized here, these observations and the supporting data are described in more detail in Part II of the report. 1. Traffic forecasts show a modest bias, with measured ADT about 6% lower than forecast ADT. At a project level, the mean PDFF is â5.6% and the median is â7.5%. The difference between the mean and median values occurs because the distribution is asymmetric; that is, counted traffic is more likely to be lower than forecast values, but the long right-hand tail of the distribution indicates that a small number of projects experienced traffic much higher than the forecast. 2. Traffic forecasts show a significant spread, with a mean absolute PDFF of 25% at the segment level and 17% at a project level. Some 90% of segment forecasts fall within the range â45% to +66%, and 90% of project-level forecasts fall within the range of â38% to +37%. 3. Traffic forecasts are more accurate for higher-volume roads. This relationship was observed in the deep dives, and echoes the maximum desirable deviation guidance from NCHRP Report 765: Analytical Travel Forecasting Approaches for Project-Level Planning and Design (CDM Smith et al. 
2014), which advises tighter targets for calibrating a travel model on higher-volume links.

4. Traffic forecasts are more accurate for higher functional classes, over and above the volume effect already described. The results of the research team's quantile regressions show narrower forecast windows for freeways than for arterials, and for arterials than for collectors and locals. The counted volumes on lower-class roads are more likely to be lower than the forecast volumes. These results may be due to limitations of zone size and network detail.

5. The unemployment rate in the opening year is an important determinant of forecast accuracy. Traffic occurs where there is economic activity, and unemployment rates reflect this relationship. For each percentage-point increase (e.g., from 5% to 6%) in the unemployment rate in the project's opening year, the median estimated traffic decreases by 3%. For example, consider two roads, each with the same forecast, but with one road scheduled to open in 2005 with an unemployment rate of 4.5% and the other scheduled to open in 2010 with an unemployment rate of 9.5%. The opening-year ADT would be expected to be 15% lower for the project that opens in 2010 ((9.5 - 4.5) × 3% = 15%).

6. Forecasts implicitly assume that economic conditions present in the year the forecast is made will continue. A high unemployment rate in the year the forecast is produced is more likely to result in ADT in the horizon year that is higher than the forecast, whereas a low unemployment rate in the year the forecast is produced has the opposite effect.

7. Traffic forecasts become less accurate as the forecast horizon increases, but the result is asymmetric, with actual ADT more likely to be higher than the forecast ADT as the forecast horizon increases. The forecast horizon is the length of time into the future for which forecasts are prepared, measured as the number
S-6 Traffic Forecasting Accuracy Assessment Research of years between when the forecast is made and when the project opens. In this study, the quantile regression results showed that the median, 80th percentile, and 95th percentile estimates increase with an increase in this variable, but that the 5th and 20th percentile estimates either stay flat or increase by a smaller amount. 8. Regional travel models produce more accurate forecasts than traffic count trends. The mean absolute PDFF for regional travel models is 16.9% compared to 22.2% for traffic count trends. In addition, the quantile regression models showed that using a travel model narrows the uncertainty window. 9. Some agencies have made more accurate forecasts than others have. In this research, the agencies with the most accurate forecasts (with more than a handful of projects) had a mean absolute PDFF (MAPDFF) of 13.7%, compared to 32% for the least accurate forecasts. A portion of these differences was significant in the quantile regression models. 10. Traffic forecasts have improved over time. This observation was evident both in the research teamâs assessment of the year the forecast was produced and in the opening year. Forecasts for projects that opened during the 1990s were especially poor, exhibit- ing mean volumes 15% higher than forecast, with a MAPDFF of 28.1%. The quantile regression models for forecasting showed that, whereas older forecasts do not show a significant bias relative to newer forecasts, they do have a broader uncertainty windowâalthough this result may be confounded by the types of projects recorded since 2000 in the database, which tended to be more routine. 11. Of the forecasts reviewed, 95% were accurate to within half of a lane. The project team found that, for 1% of cases, the actual traffic was higher than forecast and addi- tional lanes would be needed to maintain the forecast level of service. 
Conversely, for 4% of cases, actual traffic was lower than forecast, and the same level of service could be maintained with fewer lanes.

All these observations were based on a large sample of projects, but, because it was not a random sample of all highway projects, the project team's ability to generalize from the analysis was limited. The years in which projects in the database opened to traffic ranged from 1970 to 2017, with about 90% of the projects opening to traffic in 2003 or later. Earlier projects were more likely to be major infrastructure capital investments, and more recent projects were more often routine resurfacing projects on existing roadways. If a forecaster has an interest in a specific type of project, there is value in repeating this analysis using a sample of projects more similar to the type for which the forecast is to be made. Part I of this report describes how to do this for an agency's own forecasts, and the data used in this research are being made publicly available to support future research.

What Are the Sources of Forecast Error?

The statistical analysis provided useful indications of the magnitudes of errors and suggested factors predictive of more or less accurate forecasts, but it was limited in its ability to determine the causes of forecast error. To better understand these causes, the project team conducted six deep dives: case studies of traffic forecasts in different states for highways having differing contexts and forecast results. The projects chosen for the deep dives included a new bridge, the expansion and extension of an arterial on the fringe of an urban area, a major new expressway built as a toll road, the rebuilding and expansion of an urban freeway, and a state highway bypass around a small town. The projects and their locations were:

• Eastown Road Extension, Lima, Ohio;
• Indian Street Bridge, Palm City, Florida;
• Central Artery Tunnel, Boston, Massachusetts;
• Cynthiana Bypass, Cynthiana, Kentucky;
• South Bay Expressway, San Diego, California; and
• US-41 (later renamed I-41), Brown County, Wisconsin.

For each deep dive, the project team identified factors that could contribute to forecast inaccuracy and quantified the contribution of each where it was possible to do so. For example, although population and regional employment forecasts are used to inform forecasts of traffic growth, the population may not grow as forecast, or an economic downturn may cause shortfalls in expected job growth. The cases examined in the deep dives afforded the project team the benefits of hindsight: Because the actual population and employment data in the projects' opening years were available, the project team could calculate how the forecast traffic volumes would have changed had they been based on the true population and employment values. In some cases, the project team had access to the data and models that had been used to create the original traffic forecasts and could re-run those models with corrected inputs. In other cases, the team relied on published elasticities to adjust the traffic volume forecasts. Part II of this report provides a summary of the analysis conducted for each of the six deep dive projects.

Known sources of potential error in the forecasts for each project were listed and assessed as factors. The specific factors examined in each case varied somewhat based on the available data, models, and documentation for the project. For example, if a given forecast report did not document fuel price assumptions, that factor was not included in the analysis for that case study. For most of the deep dives, population, employment, and fuel price were listed as factors. Additional factors listed for individual cases were car ownership, travel time/speed, and external trips.
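Where only published elasticities were available, the adjustment logic can be sketched as a constant-elasticity correction of the forecast for the gap between an assumed input and its actual value. The function and the fuel-price numbers below are hypothetical illustrations of the approach, not values taken from the deep dives:

```python
def elasticity_adjustment(forecast_volume, assumed_input, actual_input, elasticity):
    """Scale a forecast for the gap between an assumed and an actual input value.

    Uses a constant-elasticity form: V_adj = V * (actual / assumed) ** elasticity.
    """
    return forecast_volume * (actual_input / assumed_input) ** elasticity

# Hypothetical example: the forecast assumed fuel at $2.00/gal, but the road
# opened with fuel at $3.00/gal. With an assumed fuel-price elasticity of
# demand of -0.1, the 30,000 ADT forecast is revised downward.
adjusted = elasticity_adjustment(30_000, 2.00, 3.00, -0.1)
print(round(adjusted))  # 28808
```

Repeating this for each documented factor, and comparing the adjusted forecast with the counted volume, yields the remaining (unexplained) PDFF discussed below.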
For one case, three notable factors were listed (socioeconomic growth, border crossing, and toll rates), but the available documentation did not allow the effect of these factors on traffic volume to be quantified.

For each case, the analysis provided for calculation of the PDFF remaining after adjustments had been made for all factors considered. Expressed as a percentage, the remaining PDFF represents for each case the error that remains for unspecified reasons (e.g., limitations in the forecast method, inaccurate assumptions that the project team could not test, or other unknown factors).

Several observations can be made based on the deep dives:

1. The reasons for forecast inaccuracy are diverse. It was clear from the limited sample that the reasons for forecast inaccuracy are diverse; nonetheless, the need to reconcile varying external forecasts, variable travel speeds, uncertainties in population and employment forecasts, and short-term variations from long-term trends were all identified as contributing factors in one or more of the deep dives.

2. For all six projects considered, the traffic forecasts show optimism bias. For each project, the observed traffic was less than the forecast traffic, and for all projects except US-41, correcting for the factors listed reduced the differences between forecast and observed traffic volumes.

3. Employment, population, and fuel price forecasts frequently contribute to traffic forecast inaccuracy. Adjustments to the forecasts using elasticities and model re-runs confirmed that significant errors in opening-year forecasts of employment, fuel prices, and travel speed played a major role in the overestimation of traffic volumes. In addition, macro-economic conditions in the opening year were observed to influence forecast accuracy, particularly for projects that opened during or after an economic downturn.
Although certain known factors, such as population, employment, and fuel price forecasts, are important contributors, the reasons for forecast inaccuracy are diverse and do not lend themselves to easy solutions.
4. Assumptions about external traffic and travel speed also affect traffic forecasts. For the Cynthiana Bypass project in Kentucky, the estimated growth rate for external trips was the largest source of forecast error. Travel speed was an important factor for the Eastown Road Extension in Ohio because inaccurate speeds led to too much diversion from competing roads.

The limited number of projects examined and the diversity of reasons for forecast inaccuracy make it difficult to generalize the findings of this study into a simple way of improving forecasts. The project team could not determine why optimism bias occurs, nor why it remains after adjusting for known errors. The best the project team could do from these case studies was to observe the presence of this bias. Nonetheless, the findings can help forecasters anticipate the likely sources of problems, such as through traffic for a bypass project (Cynthiana Bypass), traffic diverted from a parallel facility (Eastown Road Extension), or cross-border traffic (South Bay Expressway), and give extra scrutiny in forecasting to those issues. The deep dive cases that provided access to the original traffic forecasting model runs and data were the sources of the most insights, and these can provide future opportunities to study the effects of forecasting methods and data in greater depth.

How Can Traffic Forecasting Practice Be Improved?

One of the most important and overarching conclusions of this study is that agencies should take far more seriously the analysis of their past forecasting errors so that they can learn from the cumulative record. Forecasts are essential elements in the creation of effective highway plans and project designs, and because forecasts are always subject to error, agencies should document their forecasts and revisit them in order to identify assumptions and other factors that lead to errors.
As illustrated by the quantile regression of the data in this Large-N analysis, systematically tracking forecast accuracy provides insight into the range of uncertainty surrounding traffic forecasts. Especially for complex and expensive capital investment projects, the most efficacious type of forecasting could well involve the development of ranges of future traffic. Instead of dismissing forecasts as inherently subject to error, agencies could make forecasts more useful and more believable to the public if they embrace uncertainty as an element of all forecasting.

Building on this conclusion, the authors of this study offer four recommendations for improving the practice of traffic forecasting. These recommendations are directed to technical staff at state DOTs, MPOs, and other similar organizations.

Recommendation 1: Use a range of forecasts to communicate uncertainty.

Consistent with past research, the results of this study show distributions of experienced traffic volumes around the forecast volumes. These distributions provide a basic understanding of the uncertainty in outcomes surrounding forecasts. Forecasting seeks both to minimize the bias in this distribution and to reduce the variance so that forecasts more closely align with counted traffic, but it is not realistic to expect perfection. Instead, the goal should be to achieve forecasts that are "good enough" to make an informed decision about a project. In determining what "good enough" means, one threshold might be that the forecasts are close enough to the actual outcomes that the decisions would remain the same had they been made with perfect knowledge.

To evaluate whether a forecast is sufficient to inform the decision at hand, the project team recommends that forecasters explicitly acknowledge the uncertainty inherent in
forecasting by reporting the forecast in terms of a range. If an actual future traffic count at the low or high end of the range would not change the decision, the decision makers can safely proceed with little worry about the risk of an inaccurate forecast. On the other hand, if an actual future traffic count at the low or high end of the range would change the decision, that should be considered a warning flag. Further study may be warranted to better understand the risks involved, or decision makers may choose to select an alternative with lower risk. The main body of this report describes in more detail the quantile regression models that provide a means of estimating the range of uncertainty around a traffic forecast and presents guidance on how to use these models.

Recommendation 2: Systematically archive traffic forecasts and collect observed data before and after the project opens.

Understanding the historical accuracy of forecasts has value in part because the historical data provide empirical evidence of the uncertainty in outcomes surrounding the forecasts. Recommendation 1 is predicated on collecting and maintaining an archive of the data needed to analyze the distributions of experienced traffic around the forecast traffic. The six deep dives provided a base of information; however, to continue to refine the accuracy of forecasts, the project team recommends that agencies responsible for traffic forecasts systematically archive both traffic forecasts and observed data on project outcomes before and after the project opens. Because it is much more difficult to assemble the necessary data after the fact, the project team recommends that agencies archive their forecasts at the time they are made. More can be learned from projects for which more information has been collected and retained; however, collecting and storing the information requires effort.
Accordingly, the team recommends that agencies employ three tiers of archiving (bronze, silver, and gold), with each tier building upon the previous. Assigning a project to one of the three tiers enables the agency to balance the importance of the project against the effort required to compile and store the data:

• Bronze. The first tier archives basic information, recording the type of project, the forecast, and the method of forecasting. After the project opens, data about the measured traffic should be added. Bronze-level archiving is recommended for all project-level traffic forecasts.

• Silver. The second tier archives additional information, documenting specific details about the project and assumptions made in creating the forecast. Silver-level archiving is recommended for large projects and projects that represent new or innovative solutions. It also is recommended for a sample of typical projects so that the agency can monitor forecast accuracy for the kinds of projects that comprise the largest number of forecasts.

• Gold. The highest tier archives details that focus on making the traffic forecast reproducible after project opening. Gold-level archiving collects and stores the information needed to more clearly identify the sources of forecasting error. It is recommended for unique projects and innovative projects that have not been previously forecast. As with the second tier, gold-level archiving also is recommended for a sample of typical projects.

For each of the three tiers, Part I of this report provides specific recommendations regarding what information to archive and how to archive it efficiently.
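As one way to picture a bronze-tier record, the minimal schema below is our own illustrative sketch; the report does not prescribe a file format, and every field name here is a hypothetical choice:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BronzeForecastRecord:
    # Minimal bronze-tier fields: project type, the forecast, and the method,
    # with the measured traffic added once the project opens.
    project_id: str
    project_type: str          # e.g., "resurfacing", "new bridge"
    forecast_adt: float
    forecast_method: str       # e.g., "regional travel model", "count trend"
    forecast_year: int
    opening_year: Optional[int] = None
    counted_adt: Optional[float] = None   # added after opening

    def pdff(self) -> Optional[float]:
        # Percent difference from forecast, once a count is available.
        if self.counted_adt is None:
            return None
        return (self.counted_adt - self.forecast_adt) / self.forecast_adt * 100.0

# Archive the forecast when it is made; fill in outcomes after opening.
record = BronzeForecastRecord("P-001", "arterial widening", 30_000,
                              "regional travel model", 2015)
record.opening_year = 2020
record.counted_adt = 27_000
```

Recording the forecast at the time it is made, as above, avoids the after-the-fact data assembly problem the report describes; silver and gold tiers would extend the record with assumptions, inputs, and model archives.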
Recommendation 3: Periodically report the accuracy of forecasts relative to observed data.

The project team recommends that agencies responsible for producing traffic forecasts periodically report the accuracy of their forecasts relative to the outcomes measured when the roads are in service. Doing so will accomplish several things:

• Such reporting reveals any bias in the traffic forecasts, such as the observation in this research that observed traffic is, on average, 6% lower than forecast. Even if that bias cannot be attributed to a particular source, understanding its presence and magnitude provides more information to the decision-making process.

• It also provides the empirical information necessary to estimate the uncertainty surrounding an agency's traffic forecasts, as described in Recommendation 1.

• For agencies with a history of producing accurate forecasts, the reporting also provides an opportunity to demonstrate their good work. Agencies that produce highly accurate forecasts also could be justified in using a narrower range when estimating the uncertainty around similar forecasts in the future.

Part I of this report discusses the recommended content of forecast accuracy reports and suggests specific methods by which to report forecast accuracy.

Recommendation 4: Consider the results of past accuracy assessments in improving traffic forecasting methods.

The project team is not aware of efforts to consider how well travel models perform in forecasting as a means to improve the next generation of travel models. That should change. The team therefore recommends that, when agencies set out to improve their traffic forecasting methods or to update their travel demand models, they consider the results of past forecast accuracy assessments in doing so.
Agencies may approach this task in several ways:

• If deep dives reveal specific sources of error in past forecasts, then those sources should be given extra scrutiny when developing new methods. Conversely, if deep dives reveal that a particular process is not a major source of error, then additional resources need not be allocated to further refining that process.

• Data collected on counted traffic volumes (Recommendation 2) can be used as a benchmark against which to test a new travel model. Rather than focusing the validation on the model's fit against base-year data, this approach tests whether the new model can replicate the change that occurs when a new project opens. The model is tested in the way it will be used, and this approach offers a much more rigorous means of testing.

• To the extent that Large-N analyses can demonstrate better accuracy for one method over another, that information should inform the selection of methods for future use. The project team was not able to demonstrate such differences in this research, largely because of challenges in isolating the effect of the method on accuracy from the effects of the project type and other factors. A more rigorous research design would control for these factors by testing multiple methods on the same project, or by more carefully recording the details of all projects so that they can be more fully considered in the analysis.

Part I of this report describes the specific ways in which forecast accuracy data can be used to improve traffic forecasting methods. The project team recommends that these approaches be integrated into travel model development and improvement projects.
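The bias statistic referenced in Recommendation 3 (observed traffic averaging roughly 6% below forecast in this research) is, at its simplest, a mean percent difference from forecast across archived projects. An agency could compute it from its own archive along these lines; the numbers below are made up for illustration.

```python
# Hedged sketch: mean percent difference from forecast across projects.
# The ADT values are invented for illustration only.
forecast = [12_000, 30_000, 8_500, 45_000]   # forecast ADT per project
observed = [11_100, 27_600, 8_700, 41_400]   # observed ADT per project

pct_diff = [100 * (o - f) / f for f, o in zip(forecast, observed)]
mean_bias = sum(pct_diff) / len(pct_diff)

# A negative mean indicates observed traffic running below forecast.
print(f"mean percent difference from forecast: {mean_bias:.1f}%")
```

Tracking this statistic over time, and by project type, is one concrete way to connect the periodic reporting of Recommendation 3 to the model-improvement work of Recommendation 4.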
Why Should Transportation Agencies Implement These Recommendations?

Conscientious forecasters strive for objectivity, but this effort does not necessarily ensure that their forecasts are accurate, nor does it ensure that their forecasts are viewed as credible in the eyes of decision makers or the public. Accuracy and credibility are important and related. An inaccurate forecast may lead to a sub-optimal decision for a project, but it may also undermine trust in forecasts made for other projects. To meet this challenge, the project team recommends that agencies apply the four recommendations as part of a strategy of deliberate transparency. As an agency builds a track record of increasingly accurate forecasts, that track record provides evidence with which the agency can build trust in its abilities and establish the credibility of future forecasts. In this context, reporting inaccurate results also plays an important role: it demonstrates a willingness to learn and improve, in the same way that scientists report data that contradict their initial hypotheses.

In addition, acknowledging the uncertainty inherent in forecasting and reporting forecasts as a range offers a way for agencies to protect their credibility. A single-point traffic forecast that differs from the actual traffic by 15% might easily be criticized as inaccurate. If the same forecast were reported with a range of accuracy of ±20%, however, the forecast might be considered accurate because, even with a difference of 15%, the actual traffic falls within the reported range.

Several transportation agencies provided the data to make this research possible. All the participating agencies have agreed to allow their data to be publicly shared, making it available for future researchers who may wish to continue this work.
The agencies that shared data for this study are a model of transparency and should be celebrated for their efforts to learn from past forecasts and engage in a process of continued improvement.

Structure of This Report

This report is structured in three parts:

• Part I, the Guidance Document, provides more detailed guidance on implementing the recommendations for improving traffic forecasting practice. In Part I, Chapter 1 provides an overview of the research and describes the recommendations. Chapters 2 through 5 correspond to each of the four recommendations, providing additional details on how to implement them.

• Part II, the Technical Report, presents the methods of analysis used in the research and documents the results on which the guidance is based.

• Part III, the Appendices, provides resources to facilitate the implementation of the recommendations and provides additional information and technical details about the literature review, the Large-N analysis, and the deep dives that were conducted for this research.

The report also is accompanied by several downloadable and electronically accessible resources. These resources are explained further in the chapter text, with summary descriptions and links provided in Appendix A. The downloadable resources can be accessed via the NCHRP Research Report 934 project page (at www.trb.org) and include:

• An Excel file containing a spreadsheet implementation of the quantile regression models that agencies can use to run their own Large-N analyses;

• Two Word files that offer customizable versions of the annotated outlines provided in Appendices B and C to this report;
• An Excel file containing working versions of the deep dive assessment tables referenced in Appendix C; and

• A PowerPoint presentation file that summarizes key findings from this research.

Using the links provided in Part III, Appendix A, agencies also can access online software (called Forecast Cards) and the accompanying data repository (called Forecast Cards Data). The data repository contains the datasets that were used in this research, and it has been designed so that more data can be added as new projects are planned or opened. The Forecast Cards system can be used with the existing data repository or with local data that have been formatted to work with the system. The Forecast Cards system can thus be used by agencies that wish to track forecast accuracy in accordance with the Bronze archiving standard, and agencies that wish to do so can voluntarily share their archived data for future research by uploading it to the data repository.