
Traffic Forecasting Accuracy Assessment Research (2019)

Chapter: Part II: Technical Report

Page 116
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 251
Page 252
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 252
Page 253
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 253
Page 254
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 254
Page 255
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 255
Page 256
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 256
Page 257
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 257
Page 258
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 258
Page 259
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 259
Page 260
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 260
Page 261
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 261
Page 262
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 262
Page 263
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 263
Page 264
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 264
Page 265
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 265
Page 266
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 266
Page 267
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 267
Page 268
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 268
Page 269
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 269
Page 270
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 270
Page 271
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 271
Page 272
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 272
Page 273
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 273
Page 274
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 274
Page 275
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 275
Page 276
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 276
Page 277
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 277
Page 278
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 278
Page 279
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 279
Page 280
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 280
Page 281
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 281
Page 282
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 282
Page 283
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 283
Page 284
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 284
Page 285
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 285
Page 286
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 286
Page 287
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 287
Page 288
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 288
Page 289
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 289
Page 290
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 290
Page 291
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 291
Page 292
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 292
Page 293
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 293
Page 294
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 294
Page 295
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 295
Page 296
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 296
Page 297
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 297
Page 298
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 298
Page 299
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 299
Page 300
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 300
Page 301
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 301
Page 302
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 302
Page 303
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 303
Page 304
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 304
Page 305
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 305
Page 306
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 306
Page 307
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 307
Page 308
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 308
Page 309
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 309
Page 310
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 310
Page 311
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 311
Page 312
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 312
Page 313
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 313
Page 314
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 314
Page 315
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 315
Page 316
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 316
Page 317
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 317
Page 318
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 318
Page 319
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 319
Page 320
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 320
Page 321
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 321
Page 322
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 322
Page 323
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 323
Page 324
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 324
Page 325
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 325
Page 326
Suggested Citation:"Part II: Technical Report." National Academies of Sciences, Engineering, and Medicine. 2019. Traffic Forecasting Accuracy Assessment Research. Washington, DC: The National Academies Press. doi: 10.17226/25637.
×
Page 326


NCHRP Research Report 934
Traffic Forecasting Accuracy Assessment Research
Part II: Technical Report

Part II Contents

Introduction
  Research Objective
  Overall Approach
    Analysis Questions
    Process Questions
  Report Contents
Large-N Analysis
  Introduction
  Data and Methodology
    Data
    Database Structure
    Methodology
  Results
    Overall Distribution
    Forecast Volume
  Quantile Regression Results
Deep Dives
  Introduction
  Methodology
    Sources of Error as Cited in Existing Literature
    Procedure for Analysis
  Results
    Eastown Road Extension Project, Lima, Ohio
    Indian River Bridge, Palm City, Florida
    Central Artery Tunnel, Boston, Massachusetts
    Cynthiana Bypass, Cynthiana, Kentucky
    South Bay Expressway, San Diego, California
    US 41, Brown County, Wisconsin
Conclusions
  Research Questions
  Large-N Findings
  Deep Dive Findings
  Process Findings
References
Appendix A: Literature Review
Appendix B: Large-N Analysis
Appendix C: Deep Dives

List of Figures

Figure 1: Distribution of Percent Difference from Forecast (Project Level)
Figure 2: Percent Difference from Forecast as a Function of Forecast Volume (Project Level)
Figure 3: Expected Ranges of Actual Traffic (Base Model)
Figure 4: Example of Post-Opening Project Evaluation Count Comparisons (Highways England 2015)
Figure 5: Method for Testing Effects of Errors in Forecast Inputs
Figure 6: Project Corridor for Eastown Road Extension
Figure 7: Project Corridor for Indian River Bridge Project
Figure 8: Martin County Unemployment Rate Chart
Figure 9: Median Age (in Years) in Southeast Florida Counties
Figure 10: Central Artery/Tunnel Projects
Figure 11: Project Corridor (Cynthiana Bypass)
Figure 12: Project Study Area (South Bay Expressway)
Figure 13: Project Study Area (US 41 Brown County)
Figure 14: Distribution of Percent Difference from Forecast (Project Level)

List of Tables

Table 2: Summary of Available Data
Table 3: Key Fields in NCHRP 08-110 Database
Table 4: Overall Percent Difference from Forecast
Table 5: Forecast Inaccuracy by Forecast Volume Group (Project Level)
Table 6: Descriptive Variables for Regression Models
Table 7: Quantile Regression Results [Actual Count = f(Forecast Volume)]
Table 8: Range of Actual Traffic Volume over Forecast Volume [Actual Count = f(Forecast Volume)]
Table 9: Range of Percent Difference from Forecast as a Function of Forecast Volume [Actual Count = f(Forecast Volume)]
Table 10: Projects Selected for Deep Dive Analysis
Table 11: Sources of Forecast Error Cited in Existing Literature
Table 12: Decomposition of Forecast Errors from Andersson et al. (2016)
Table 13: Deep Dive Count Worksheet
Table 14: Sources of Forecast Error to Be Considered by Deep Dives
Table 15: Deep Dive Worksheet
Table 16: External Trip Distribution Using Both Competing Bridges
Table 17: Known Sources of Forecast Inaccuracy for Deep Dives

Introduction

Research Objective

The Fixing America's Surface Transportation Act ("FAST Act"), signed by President Obama in December 2015, provides $41.5 billion each year in roadway and bridge funding (U.S. Department of Transportation, Federal Highway Administration n.d.). Traffic forecasts are used, in part, to decide how these public dollars are invested, through environmental studies, capital cost estimates, and cost-benefit analyses. However, "the greatest knowledge gap in US travel demand modeling is the unknown accuracy of US urban road traffic forecasts" (Hartgen 2013). Only a relatively small set of empirical studies has examined non-tolled traffic forecasting accuracy in the United States. Research is needed to expand the assessment and documentation of traffic forecasting experiences around the country to improve future modeling and forecasting applications, with the goal of ensuring that transportation funding dollars are invested wisely.

The objective of this study is to develop a process to analyze and improve the accuracy, reliability, and utility of project-level traffic forecasts.

Overall Approach

A review of past studies of forecast accuracy reveals two main methods of evaluating forecasts: Deep Dives and Large-N studies. Deep Dives analyze a single project in detail to determine what went right and what went wrong in the forecast. Individual before-and-after studies from the FTA Capital Investment Grant Program are classic examples of Deep Dives. These studies often involve custom data collection before and after the project opens, such as onboard transit surveys. The sources of forecast error, such as errors in inputs, model issues, or changes in the project definition, are considered and identified. The advantage of Deep Dives is that they allow a complex set of issues to be thoroughly investigated.
They also reveal the importance of assumptions made by modelers in relation to data and the particular models that were used. The disadvantage is that it is often unclear whether the lessons from one project can be generalized to others. In contrast, Large N studies consider a larger sample of projects in less depth. Flyvbjerg (2005) extols the virtues of Large N studies as the necessary means of coming to general conclusions. Often, Large N studies include a statistical analysis of the error and bias observed in forecasts compared to actual data. Flyvbjerg et al. (2006) consider a Large N analysis of 183 road and 27 rail projects, and Standard and Poor’s conducts a Large N analysis with a sample of 150 toll road forecasts (Bain and

Polakovic 2005). Other examples of Large-N studies are the Minnesota, Wisconsin and Ohio analyses (Buck and Sillence 2014; Giaimo and Byram 2013; Parthasarathi and Levinson 2010). The two approaches are not mutually exclusive, and this research uses both in a complementary manner. The Large-N analysis includes compiling a database of forecast and actual traffic for about 1,300 projects from six states and four European countries. It is the largest known database for assessing forecast accuracy and allows us to statistically analyze the relationship between the actual traffic volumes, the forecast traffic volumes, and a variety of potentially descriptive variables. We also conduct a series of five Deep Dives in which we attempt to understand the reasons for forecast inaccuracy for specific projects. For several of those Deep Dives, we are able to reproduce the original travel model runs, which allows us to test the effect of improving specific aspects of the forecasts. The focus on process requires that this research make recommendations about how to go about the analysis. Our approach to doing so involves three related components:

1. Learn from others. Past efforts at evaluating forecast accuracy have seen mixed success, and there are important lessons to be learned from those efforts. For example, the lack of data availability has been identified as the biggest obstacle to progress in the field (Nicolaisen and Driscoll 2014), and when conducting Large-N analyses, the analysis of outliers is very important, with large errors commonly due to a mismatch between the forecast location and the count location (Byram 2015). The literature review included in this report therefore focuses on methods of evaluation, identifying approaches used by others to conduct such analyses for the purpose of providing a menu of options to this study.

2. Try it ourselves. One thing we have learned as travel forecasters is that the details matter. Therefore, there is no substitute for trying a proposed approach as a means of working through those details. This study involves just that: analyzing a set of five Deep Dives, and analyzing the Large-N data we have compiled.

3. Ask stakeholders what works for them. For the process to be effective, it must be implemented by the actors involved in generating traffic forecasts, specifically the DOTs, MPOs and other (non-federal) agencies responsible for those forecasts. Therefore, it is important that the recommended process fit with their needs and priorities.

The research is divided into two phases, with Phase 1 encompassing steps 1 and 2 above, and Phase 2 encompassing step 3. Phase 1 involves analyzing existing data, focusing on what can be learned by conducting the analysis ourselves. This phase includes both the statistical analysis of the Large-N data and Deep Dives into specific case studies. Phase 2 involves establishing the process for future data, and focuses on engaging stakeholders to find out what works for them. It included a stakeholder workshop to review the Phase 1 findings and solicit input prior to making the final set of recommendations. It also included developing a working traffic forecast archive and information system to support the data collection necessary for the recommendations to be implemented. This research recognizes that there is value not only in establishing the process, but also in the findings of the analysis itself. These findings establish a baseline understanding of forecast accuracy that can later be updated as the process is applied. The following sections describe key questions we aimed to answer with respect to both the analysis and the process.

Analysis Questions

The analysis sought to answer several specific and complementary questions. The issues addressed by the Large-N analysis are largely descriptive in nature:

1. What is the distribution of forecast errors across the sample as a whole?
2. Can we detect statistically significant bias in the forecasts? If so, is that bias a function of specific factors, such as the type of project, the time between the forecast and the opening year, or the methods used?
3. After adjusting for any bias, how precise are the forecasts? Is the precision a function of specific factors, such as the type of project, the time between the forecast and the opening year, or the methods used?

Taken together, these analyses provide a means of describing the historic range of forecast errors that have been observed for certain types of projects, albeit with some limitations on the types of data collected. An important caveat to such a result is similar to the axiom "past returns do not guarantee future performance". While it is useful to describe the historic performance of traffic forecasts for the purpose of establishing a track record, there remain legitimate reasons why future accuracy may differ. These could be in a positive way (better models and data may produce more accurate forecasts) or in a negative way (if all cars start driving themselves, the transportation system could change to a degree that overwhelms other sources of inaccuracy). Since there is little we can say empirically about events that have not yet happened, we must settle in this case for understanding past performance. While these descriptive measures are useful and can shed light on certain factors associated with forecast errors, they do not shed light on why the forecasts may be in error. The Deep Dives focus on addressing the following questions:

1. What aspects of the forecasts (such as population forecasts, project scope, etc.) can we clearly identify as being accurate or inaccurate?
2. If we had gotten those aspects right, how much would it change the traffic forecast?

The goal is to attribute as much of the error as possible to known factors. The remaining error will be for "unknown reasons", and we will be able to say little about it beyond the fact that it is not due to the aspects we identified and quantified.

Process Questions

Conducting these analyses also provides the opportunity to evaluate the process itself. The second set of research questions focuses on establishing an effective process. There are four questions of particular interest:

1. What information should be archived from a forecast?
2. What data should be collected about actual project outcomes?
3. Which measures should be reported in future Large-N studies?
4. Can we define a template for future Deep Dives?

The nature of our analysis is that we are compiling data from a number of different sources, and there are differences in the data that are available from those sources, as well as in how those data are structured. These differences limit what we can do with the data, but they also provide an opportunity to demonstrate what can and cannot be done with different amounts of data. Starting with the Large-N analysis, we went through a significant effort to define a common set of fields in our forecast accuracy database, with definitions as consistent as possible. For example, different agencies may code the project type using different codes, and we were required to interpret what each means and establish a common definition. In the final database, we observe that each agency has about two-thirds of the fields complete, but it is not the same two-thirds across all agencies. For model estimation, this necessitates that we include dummy terms for missing data on each field to avoid having to drop the records altogether. If we find that a particular term is useful or insightful when available, we may recommend that all agencies begin to collect information about that field. The same applies to data on the actual project outcomes. For the six Deep Dives, we have different levels of detail available for each. We have full model runs available for three, allowing us to test changes in inputs or in certain dimensions. We have detailed traffic forecasting reports for another two, and we rely on publicly available documents, such as Environmental Impact Statements (EISs), for the last. The output is a recommended Deep Dive process that can be applied for future updates.

Report Contents

This technical report documents our findings based on the analysis of existing data, and the process we used to reach those findings. The issues identified in the analysis pave the way for a set of recommendations for forecasters to adopt in their practice in addition to evaluating their results. These recommendations are presented in Part I: Guidance Document. The remainder of this report is structured as follows:

• Chapter 2: Large-N Analysis. In this section, we analyze the overall accuracy using a database of about 1,300 projects that have forecast and counted Average Daily Traffic (ADT).
• Chapter 3: Deep Dives. In Chapter 3, we analyze in detail what was right and wrong with a set of five specific forecasts.
• Chapter 4: Conclusions. In Chapter 4, we summarize the key findings from both portions of the analysis and revisit the analysis questions and process questions discussed here.

Along with the technical report come several attachments: the detailed literature review; data exploration with several categorical variables as part of the Large-N analysis; a Python script for conducting the analysis; an Excel sheet with the results of the quantile regression, along with the R script; and the individual Deep Dive reports. These are included in the appendices.

Large-N Analysis

Introduction

The current assessment of traffic forecasting accuracy in NCHRP 08-110 builds upon past efforts. Several studies have assessed the accuracy of traffic forecasts, although most of them have focused on toll roads. The apparent reason is that toll road forecasts have a direct bearing on investor expectations, which makes their accuracy especially important. As evidence of this, the Australian Government (2012) cited "inaccurate and over-optimistic" traffic forecasts as a threat to investor confidence. Three lawsuits now underway challenge the forecasts for toll road traffic that subsequently came in significantly under projections (Bain 2013). The inaccuracy of toll-road traffic forecasts has been investigated in both international (Bain 2011a; Bain and Polakovic 2005; Flyvbjerg et al. 2006b; Gomez et al. 2016; Kriger et al. 2006; Odeck and Welde 2017a) and US contexts (Kriger et al. 2006). These studies found that in most cases actual traffic has been less than predicted. They attributed the forecast error to less toll road capacity when opened than forecast, elapsed time of operation (roads opened longer had higher traffic levels), time of construction (longer construction times delayed traffic growth and increased the error), toll road length (shorter roads attracted less traffic), cash payment (modern cashless payment increased traffic), and fixed versus distance-based tolling (fixed tolls reduced traffic). Bain (2011b) identified the toll culture (prior existence of toll roads, toll acceptance, etc.) and errors in data collection, as well as unforeseen micro-economic growth in the locality, as sources of inaccuracy. Flyvbjerg et al. (2006) attributed the errors to uncertainties in trip generation and land-use patterns. From 2002 to 2005, Standard & Poor's publicly released annual reports on the accuracy of forecasts for toll road, bridge and tunnel projects worldwide. The 2005 report (Bain and Polakovic 2005), the most recent available publicly, analyzed 104 projects. It found that the demand forecasts for those projects suffered from optimism bias, and that this bias persisted into the first five years of operation. Despite this body of work on toll roads, there has been little comparable research for non-tolled roads. A few recent studies have examined the accuracy of non-tolled roadway forecasts. Buck and Sillence (2014) demonstrated the value of using travel demand models in Wisconsin to improve traffic forecast accuracy and provided a framework for future accuracy studies. Parthasarathi and Levinson (2010) examined the accuracy of traffic forecasts for one city in Minnesota. Giaimo and Byram (2013) examined the accuracy of over 2,000 traffic forecasts in Ohio produced between 2000 and 2012. They found the traffic forecasts slightly high, but within the standard error of the traffic count data. In a study of 39 road projects in Virginia, Miller et al. (2016) reported that the median absolute percent error across all studies was about 40%. (For a detailed review of past assessments of traffic forecast accuracy, please refer to Appendix A: Literature Review.)

The first phase of NCHRP 08-110 conducts a similar analysis using data on forecast and actual traffic for a combined data set of about 1,300 projects from six states and four European countries. Section 2.2 details the data and methodology adopted for this portion of the research, and the following section presents a top-level description of forecast inaccuracy from our analysis. More detailed analysis results (forecast inaccuracy as a function of several descriptive variables) are given in Appendix B: Detailed Large-N Analysis. The quantile regression model is presented in Section 2.4, along with the general findings from the analysis in the next section of the chapter.

Data and Methodology

This analysis uses the database compiled as part of the NCHRP 08-110 project, which contains traffic forecast and actual traffic information for road projects in several states. The records are compiled from existing databases maintained by the DOTs, traffic forecasting reports, project reports, traffic/environmental impact statements, and databases from similar research efforts. The database contains information on the project itself (unique project ID, improvement type, facility type, location, length), the forecast (year forecast produced, forecast year, methodology, etc.) and the actual traffic count. The forecast traffic and actual traffic for a project in a target year are compared, and several metrics are calculated to ascertain the level of inaccuracy in the traffic forecast.

Data

Data are included from six states: Florida, Massachusetts (one project), Michigan, Minnesota, Ohio and Wisconsin, as well as from four European countries: Denmark, Norway, Sweden and the United Kingdom. Additional data are available from Virginia and Kentucky that can be incorporated at a future date.
A short summary of the available information, with the state names replaced by agency codes to protect anonymity, is presented in Table 1:

Table 1: Summary of Available Data

                       All Projects                  Opened Projects
Agency          Number of    Number of Unique   Number of    Number of Unique
                Segments     Projects           Segments     Projects
Agency A            1123          385                425          381
Agency B              12            1                 12            1
Agency C              38            7                  5            3
Agency D            2176          103               1292           99
Agency E           12413         1863               1242          562
Agency F             463          132                463          132
Agency G             472          120                472          113
Total              16697         2611               3911         1291

A segment is a distinct portion of roadway for which a forecast is provided. For example, forecasts for an interchange improvement project may contain segment-level estimates for both

directions of the freeway, for both directions of the crossing arterial, and for each of the ramps. Some of these projects have not yet opened, some of the segments do not have actual count data associated with them, and others do not pass our quality control checks for inclusion in the statistical analysis (the filtering process is described below). While all records are retained for future use, the Large-N analysis is based on a filtered subset of 1,291 projects and 3,911 segments. A range of projects is included. The opening year varies from 1970 to 2017, with about 90% of the projects opening in 2003 or later. While the exact nature and scale of each project is not always known, inspection reveals that the older projects are more likely to be major infrastructure projects, and the newer projects are more likely to be routine work for the DOT, e.g. resurfacing of existing roadway. For example, almost half of the projects are design forecasts for repaving. Such differences are driven largely by data availability. Some state agencies have begun tracking all forecasts as a matter of course, but such records rarely go back more than 10 to 15 years. The older projects were entered retrospectively from paper reports or scans of paper reports, and both the availability of documentation and the interest in spending the effort to examine it are higher for bigger projects. Thus, the database is not a random sample of projects, and there are notable differences not only in the methods used across agencies, but also in the mix of projects included. This is an important limitation that readers should bear in mind as they interpret our results.

Database Structure

The Traffic Forecast Database accumulated as part of the project provides a starting point for the Large-N analysis. The data are available in the form of Forecast Cards, as described in Section 3.2 of Part I: Guidance Document. The primary fields in the Forecast Database can be classified into three types:

1. Project Information,
2. Forecast Information, and
3. Actual Traffic Count Information.

The Project Information table has all the information specific to the project characteristics. This includes the Project/Report ID unique to a project, the project description, the year when the project/report was completed, the type of project, the city or location where the project took place, the state, the construction cost, etc. Forecast Information includes the data related to the traffic forecast: the forecast itself, along with who made it, the year it was made, and the year it applies to. It also includes the type of forecast year (opening, mid-design or design year), the methodology used to forecast, whether any post-processing was done, and similar information. Information regarding the actual traffic includes the actual traffic volume on a particular segment, the year of observation and the project opening year. The key fields in the database are given in the table below.

Table 1: Key Fields in NCHRP 08-110 Database

Name                           Description
Brief Description              Brief written description of the project
Project Year                   Year of the project, construction year, or year the forecast report was produced
Length                         Project length in miles
Functional Class               Type of facility (Interstate, ramp, major/minor arterial, etc.)
Improvement Type               Type of project (resurfacing, adding lanes, new construction, etc.)
Area Type Functional Class     Area type where the facility lies (rural, urban, etc.)
Construction Cost              Project construction cost
State                          State code
Internal Project ID            Project ID, report ID or request ID
County                         County in which the facility lies
Toll Type                      Kind of tolls applied on the facility (no tolls, static, dynamic, etc.)
Year of Observation            Year the actual traffic count was collected
Count                          Actual traffic count
Count Units                    Units used to collect count information (AADT, AWDT)
Station Identifier             Count station ID or other identifier for the count station
Traffic Forecast               Forecast traffic volume
Forecast Units                 Units used to forecast traffic (AADT, AWDT)
Forecast Year                  Year of forecast
Forecast Year Type             Period of forecast: opening, mid-design or design period
Year Forecast Produced         Year the forecast was produced
Forecasting Agency             Organization responsible for the forecast
Forecast Methodology           Method used to forecast traffic (traffic count trend, regional travel demand model, project-specific model, etc.)
Post Processing Methodology    Any post-processing or alternative methodology used
Post Processing Explanation    Explanation, as warranted, in case a post-processing methodology is used
Segment Description            Description of the segment for which the forecast was done

For a fair comparison, the count units and the forecast units should be the same.
Common units include:

• Average Daily Traffic (ADT): the average number of vehicles that travel through a specific point of a road over a short duration time period (Federal Highway Administration 2018).
• Annual Average Daily Traffic (AADT): the mean traffic volume across all days of a year at a given location along a roadway (Federal Highway Administration 2018).
• Average Weekday Traffic (AWDT): the mean traffic volume specifically on weekdays at a given location along a roadway. Whether it is an annual average is not always clearly defined.

• Typical Weekday Traffic: the traffic volume on a typical weekday, often defined as non-holiday weeks when school is in session. This is commonly output from travel models.

Because reporting practices differ by state, the data used in this study vary by source. Therefore, we use the more general term ADT throughout this report.

Methodology

Briefly, the goal of the Large-N analysis is to answer the question: How close were the forecasts to observed volumes? (Miller et al. 2016). To facilitate that, researchers have generally compared two sets of similar data: one from the base year and one from the forecast year. From the database and project reports, we see that traffic forecasts are usually done for three analysis years:

1. Opening Year,
2. Mid-Design or Interim Year (usually 10 years after opening), and
3. Design Year (usually 20 years after opening).

This research evaluates the accuracy of opening year forecasts for the practical reason that the interim and design years have not yet been reached for the vast majority of projects. In cases where opening year traffic counts were unavailable, we took the earliest traffic count available after a reasonable estimated year of completion and compared it with the scaled-up forecast volume. We scaled the forecast to the year of the first post-opening count so that both data points are in the same year, linearly interpolating the forecast traffic between the forecast opening year and the design year. (The European projects are taken from Nicolaisen's PhD thesis (Nicolaisen 2012) and have already been scaled to match the count year using a 1.5% annual growth rate. We maintain this logic for the European projects, but do the interpolation between opening and design year for US projects.)

One of the differences in methodology among previous Large-N studies is how they define errors. Miller et al. (2016a), CDM Smith et al. (2014), and Tsai, Mulley, and Clifton (2014) define error as the predicted volume minus the actual volume, such that a positive result is an overprediction. Odeck and Welde (2017), Welde and Odeck (2011), and Flyvbjerg, Holm, and Buhl (2005) define error the other way, such that a positive value represents underprediction. A popular metric used to determine the accuracy of traffic forecasts is the "half a lane" criterion. This criterion specifies that a forecast is accurate if the forecast volume differs from the measured volume by less than half a lane's worth of capacity. If the forecast volume exceeds the actual volume by more than half a lane of capacity, the facility could have been constructed with one fewer lane in each direction; if it falls short of the actual volume by more than half a lane, the facility needs one additional lane in each direction. Calculating whether a forecast is within half a lane requires several assumptions, such as the share of daily traffic that occurs in the peak hour. Researchers have evaluated the accuracy of project-level traffic forecasts by comparing them with actual traffic counts. There are also two schools of thought on presenting the error as a percentage: over the actual traffic (Tsai, Mulley, and Clifton 2014; Miller et al. 2016) versus over the forecast traffic (Flyvbjerg, Holm, and Buhl 2005; Nicolaisen and Næss 2015; Odeck and Welde 2017).
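The scaling of a forecast to the count year described above (linear interpolation between the opening-year and design-year forecasts) can be sketched as follows. This is a minimal illustration; the function name and inputs are ours, not the report's:

```python
def scale_forecast_to_count_year(opening_year, opening_forecast,
                                 design_year, design_forecast, count_year):
    """Linearly interpolate a forecast ADT to the year of the first
    post-opening count, so forecast and count are compared in the same year."""
    if design_year == opening_year:
        return float(opening_forecast)
    annual_change = (design_forecast - opening_forecast) / (design_year - opening_year)
    return opening_forecast + annual_change * (count_year - opening_year)

# A project forecast at 10,000 ADT for a 2010 opening and 14,000 ADT for a
# 2030 design year, first counted in 2015, is compared against 11,000 ADT.
scaled = scale_forecast_to_count_year(2010, 10000, 2030, 14000, 2015)
```

For the European projects, the equivalent adjustment was already applied in the source data using a fixed 1.5% annual growth rate rather than interpolation.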

An advantage of the former is that the percentage is expressed in terms of a real quantity (observed traffic); an advantage of the latter is that when the forecast is made, uncertainty can be expressed in terms of the forecast value, since the observed value is unknown (Miller et al. 2016). Besides these two methods, Bain (2009a) and Parthasarathi and Levinson (2010) evaluated forecast performance by taking the ratio of actual to forecast traffic. In this study, we follow the convention described in Odeck and Welde (2017a), in which the percent error is expressed as the actual count minus the forecast volume, divided by the forecast volume. We recognize that the Odeck and Welde approach differs from the standard convention of expressing percent error with the actual observation in the denominator. We find it more useful to express the error as a function of the forecast volume because the forecast volume is known at the time the project decision is made, while the actual volume is not. This means that if we know we might expect a 10% difference, then that 10% can be applied to the forecast volume. To make this distinction clear, we express this as the percent difference from forecast (PDFF):

    PDFF_i = (Actual Count_i - Forecast Volume_i) / Forecast Volume_i * 100%    (1)

where PDFF_i is the percent difference from forecast for project i. Negative values indicate that the actual outcome is lower than the forecast (over-prediction), and positive values indicate that the actual outcome is higher than the forecast (under-prediction). The appeal of this expression is that it expresses the error as a function of the forecast, which is known first. The distribution of the PDFF over the dataset can then describe the systematic performance of traffic forecasts.
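Equation 1 can be computed directly; this minimal sketch (function and variable names are ours, for illustration) makes the sign convention explicit:

```python
def pdff(actual_count, forecast_volume):
    """Percent difference from forecast (Equation 1).

    Negative: actual below forecast (over-prediction).
    Positive: actual above forecast (under-prediction).
    """
    return (actual_count - forecast_volume) / forecast_volume * 100.0

# A segment forecast at 10,000 ADT that carries only 9,000 ADT was
# over-predicted, giving a PDFF of about -10%.
error = pdff(9000, 10000)
```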
As for expressing the error over a dataset, use of the mean percent error and the mean absolute percent error has varied across studies. The mean absolute percentage error has been acknowledged to "allow [researchers] to better understand the absolute size of inaccuracies across projects" (Odeck and Welde 2017), since positive and negative errors tend to offset each other when calculating the mean percent error. We continue in this tradition, but again translate it into the language of percent difference from forecast:

    MAPDFF = (1/n) * Σ |PDFF_i|    (2)

where n is the total number of projects and the sum is over projects i. While assessing project forecast accuracy, one question arises: what constitutes an observation? A typical road project is usually divided into several links or segments within the project boundary. The links are usually on different alignments or carry traffic in different directions. To uniquely identify each project in the database, a column titled "Internal Project ID" was specified. This column typically contains the unique financial ID of the project, report number, control number, etc. Under the same Internal Project ID, forecast and traffic count information for the different segments are recorded with unique segment IDs. Analysis can thus be done on two levels:

1. Segment level: assessing the accuracy of the forecast for each different segment or link.
2. Project level: assessing the total accuracy of the forecast for each individual project, identified by its unique Internal Project ID.
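The offsetting behavior that motivates Equation 2 is easy to demonstrate on a toy set of PDFF values (names and numbers here are illustrative):

```python
def mean_pdff(pdff_values):
    """Mean percent difference from forecast: signed errors can cancel."""
    return sum(pdff_values) / len(pdff_values)

def mapdff(pdff_values):
    """Mean absolute percent difference from forecast (Equation 2)."""
    return sum(abs(p) for p in pdff_values) / len(pdff_values)

errors = [-10.0, 20.0, -30.0, 20.0]   # four projects' PDFF values
bias = mean_pdff(errors)              # 0.0: the errors cancel exactly
spread = mapdff(errors)               # 20.0: the typical miss is 20%
```

The mean PDFF of zero would suggest perfect forecasts, while the MAPDFF of 20% reveals the actual size of the misses.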

The limitation of presenting accuracy metrics at the segment level is that the observations are not independent. Consider, for example, a project with three segments connected end-to-end. It is reasonable to expect that the PDFFs on these segments are correlated, perhaps uniformly high or low. Whether we treat these as one combined observation or three independent observations, we would expect the average PDFF to be roughly the same. There would be a difference, however, in the measured t-statistics, where the larger sample size from a segment-level analysis could suggest significance where a project-level analysis would not. Segment-level analysis is not without merit, however, since a few measures of inaccuracy are better represented at the segment level. For example, when assessing inaccuracy across roadways of different functional classes, segment-level results are more representative than results aggregated over the entire project. Project-level analysis is free of the correlation across observations described above, but the question remains of how to assess the accuracy for a project. In the Virginia study (Miller et al. 2016), where each project consisted of between 1 and 2,493 links, the researchers took the median absolute percent error over the segments or links of individual projects and then used the mean to express the level of accuracy. Nicolaisen (2012) measured accuracy by taking the sum of forecast and actual traffic volumes on the segments in a project. Another method is the length-weighted traffic volume described in Miller et al. (2016):

    Weighted Traffic Volume = Σ (Volume_i * Length_i) / Σ Length_i    (3)

where Volume_i and Length_i are the volume and length of segment i. The issue with using the weighted traffic volume (forecast and actual) is the absence of length data in most of the records.
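Equation 3 is a length-weighted average over a project's segments; a minimal sketch (names are illustrative):

```python
def weighted_traffic_volume(volumes, lengths_mi):
    """Length-weighted average volume over a project's segments (Equation 3)."""
    return sum(v * l for v, l in zip(volumes, lengths_mi)) / sum(lengths_mi)

# Two segments: 1 mile at 10,000 ADT and 3 miles at 20,000 ADT.
# The longer segment dominates, pulling the project average to 17,500.
w = weighted_traffic_volume([10000, 20000], [1.0, 3.0])
```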
In addition, taking the total traffic, as Nicolaisen (2012) did, cannot show the relation between forecast accuracy and project type by vehicles serviced. Taking these considerations into account, in this study we measure inaccuracy at the project level using average traffic volumes, where each segment within a project is given equal weight. We report the distribution of PDFF at the project level and, where relevant, at the segment level. For the project-level analysis, we took the average of the traffic volumes and measured the error statistics by comparing the average forecast and average actual traffic. Counts and forecasts were aggregated across segments/links using the unique identifier in the column "Internal Project ID". The variables for analysis were also aggregated by the same unique identifier, albeit with different measures to maintain uniformity. The Improvement Type, Area Type and Functional Class of a project were taken to be the most prevalent values among its segments. For example, if most of the segments in a project are of Improvement Type 1 (resurfacing/reconstruction/no major improvement), the project is considered to be of Improvement Type 1. The Forecast Methodology is the same across the segments of a project, as are the unemployment rates and the years of forecast and observation; the means of these values were taken for the project-level analysis. Based on the nature of the NCHRP 08-110 database, we can select some variables that might dictate future adjustments in the forecasts. These variables are: the type of project (Improvement Type), the methodology used (Forecast Methodology), roadway type (Functional Class), area type (Area Type Functional Class) and the forecast horizon (the difference between the year the forecast was produced and the year of opening).
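The equal-weight project-level aggregation can be sketched with pandas. The column names and figures here are illustrative, not the database's exact field names:

```python
import pandas as pd

segments = pd.DataFrame({
    "internal_project_id": ["P1", "P1", "P1", "P2"],
    "forecast_volume":     [10000, 12000, 8000, 5000],
    "actual_count":        [9000, 12600, 7400, 5500],
})

# Equal-weight average of segment volumes within each project, then
# PDFF (Equation 1) computed on the project-level averages.
projects = segments.groupby("internal_project_id").agg(
    forecast=("forecast_volume", "mean"),
    actual=("actual_count", "mean"),
)
projects["pdff"] = (projects["actual"] - projects["forecast"]) / projects["forecast"] * 100
# P1 comes out slightly over-predicted; P2 under-predicted by 10%.
```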

Odeck and Welde (2017) employed an econometric approach to determine the bias and efficiency of the estimates by regressing the actual value on the forecast value:

yᵢ = α + βŷᵢ + εᵢ     (4)

where yᵢ is the actual traffic on project i, ŷᵢ is the forecast traffic on project i, and εᵢ is a random error term. α and β are estimated terms in the regression. Here α = 0 and β = 1 implies unbiasedness. Li and Hensher (2010) estimated ordinary least squares and random-effect linear regression models to explain the variation in the forecast error, as a percentage, over explanatory variables (year open, elapsed time since opening, etc.). Miller et al. (2016a) performed an ANOVA (analysis of variance) test on the median absolute percentage error over a limited number of explanatory variables (difference between forecast year and opening year, forecast method, duration of forecast, and number of recessions between base year and forecast year). Both studies found their models to be a good fit for explaining the errors. The end goal of such analysis is to present the range of forecast errors as a function of several variables, such as when the project was opened, the difference between the forecast year and the existing year, etc. This research does so by following the Odeck and Welde (2017) structure but introducing additional descriptive variables:

yᵢ = α + βŷᵢ + γXᵢ + εᵢ     (5)

where Xᵢ is a vector of descriptive variables associated with project i, and γ is a vector of estimated model coefficients associated with those descriptive variables. To consider multiplicative effects as opposed to additive effects, we can scale the regressors by the forecast value:

yᵢ = α + βŷᵢ + γXᵢŷᵢ + εᵢ     (6)

In such a formulation, γ = 0 indicates no effect of that particular term, while positive values scale up the forecast by that amount and negative values scale it down.
In addition to estimating biases, we are also interested in how the distribution of errors relates to different descriptive variables. For example, it may be that forecasts with longer time horizons remain unbiased but have a higher spread, as measured by the MAPDFF. To examine this, we extend the above framework to use quantile regression instead of ordinary least squares (OLS) regression. Whereas OLS predicts the mean value, quantile regression predicts the values of specific percentiles of the distribution (Cade and Noon 2003). Quantile regression has been used in transportation in the past for applications such as quantifying the effect of weather on travel time and travel time reliability (Zhang and Chen 2017), where an event may have a limited effect on the mean value but increase the likelihood of a long delay. It has also been used to estimate error bounds for real-time traffic predictions (Pereira et al. 2014), an application more analogous to this project.
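The distinction between OLS and quantile regression can be illustrated with the quantile ("pinball") loss: the constant prediction that minimizes it is the empirical quantile rather than the mean. A toy sketch (the data are invented; a real quantile regression with covariates would use a library such as statsmodels):

```python
def pinball_loss(q, ys, tau):
    """Quantile ("pinball") loss of a constant prediction q at quantile tau."""
    return sum(tau * (y - q) if y >= q else (1 - tau) * (q - y) for y in ys)

def best_constant(ys, tau):
    """Constant prediction minimizing pinball loss; equals the tau-quantile."""
    return min(sorted(ys), key=lambda q: pinball_loss(q, ys, tau))

data = [1, 2, 3, 4, 100]             # one large outlier, like a long-tailed error
mean_value = sum(data) / len(data)   # what OLS targets: 22.0, pulled by the tail
median_fit = best_constant(data, 0.5)  # what median regression targets: 3
```

Choosing tau = 0.05, 0.20, 0.50, 0.80 or 0.95 in the full regression setting yields the five percentile models used in this analysis.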

In our case, we estimate quantile regression models of the actual count as a function of the forecast and other descriptive variables. We do so for the 5th percentile, the 20th percentile, the median, the 80th percentile and the 95th percentile. This establishes an uncertainty window in which the median value provides our expected value, or an "adjusted forecast"; the 5th and 20th percentiles provide lower bounds on the expected value; and the 80th and 95th percentiles provide upper bounds.

Results

Overall Distribution

Generally speaking, traffic forecasts have been found to over-predict: actual traffic volumes after a project has been completed are lower than what was forecast, as shown in Figure 1, which shows a right-skewed distribution of the percent difference from forecast (PDFF):

PDFF = (Actual − Forecast) / Forecast × 100

The 3,911 unique records/segments are part of 1,291 unique projects. We notice a general over-estimation of traffic across the projects. The distribution of percent difference from forecast shown in Figure 1 is heavier on the negative side, i.e., actual volumes are generally lower than forecast. The mean of the absolute percent difference from forecast is 17.29% with a standard deviation of 24.81. The kernel density estimate displays an almost normal distribution, albeit with long tails. On average, the traffic forecast for a project is off by 3,500 vpd.

Figure 1: Distribution of Percent Difference from Forecast (Project Level)

We should expect over-predictions because, in many cases, these forecasts are used in design engineering. A design based on over-predicted traffic will be over-built and will not see the extra capacity utilized. On the other hand, if under-predicted traffic is used as the basis for design, capacity must be added at a later time, at greater cost, to meet the demand.
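The PDFF and its two summary statistics (mean PDFF for bias, mean absolute PDFF for spread) can be computed directly from the definition above; a small sketch with invented volumes:

```python
from statistics import mean

def pdff(actual, forecast):
    """Percent difference from forecast: negative when the forecast over-predicts."""
    return (actual - forecast) / forecast * 100

# Invented (actual, forecast) pairs: the forecast over-predicts on two of three projects.
pairs = [(9000, 10000), (12000, 10000), (8500, 10000)]
errors = [pdff(a, f) for a, f in pairs]   # [-10.0, 20.0, -15.0]
mean_pdff = mean(errors)                  # bias: -5/3 percent
mapdff = mean(abs(e) for e in errors)     # spread (MAPDFF): 15.0 percent
```

Note how the signed mean nets out offsetting errors while the absolute mean does not, which is why both are reported in the tables that follow.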

Table 2: Overall Percent Difference from Forecast

                Observations  MAPDFF   Mean   Median  Std. Dev.  5th Pctl.  95th Pctl.
Project Level           1291   17.29  -5.62    -7.49      24.81     -37.56       36.96

Forecast Volume

Figure 2 reports the PDFF as a function of forecast volume at the project level. An interesting observation from the figure is that the percentage values decline as traffic volumes increase. This is understandable, since the percentages are taken as a ratio over the forecast volume: unless the actual traffic differs by a large margin, the PDFF will not grow large. When using the "half a lane" criterion, we find that 95% of the forecasts reviewed are accurate to within half a lane of capacity.

Figure 2: Percent Difference from Forecast as a Function of Forecast Volume (Project Level)

Table 3 shows descriptive measures of the percent difference from forecast by volume group. The measures represent the spread of the percent difference from forecast: the mean absolute percent difference from forecast (MAPDFF) for each category indicates how much the actual traffic deviates from the forecast value; the mean is the central tendency of the data; and the standard deviation and the 5th and 95th percentile values represent the spread of the distribution, with 90% of the data points falling between the 5th and 95th percentile values.

Table 3: Forecast Inaccuracy by Forecast Volume Group (Project Level)

Traffic Forecast Range (ADT)  Observations  MAPDFF   Mean   Median  Std. Dev.  5th Pctl.  95th Pctl.
(0, 3000]                              133   24.59  -1.85    -5.75      42.15     -45.01       75.17
(3000, 6000]                           142   20.53  -0.37    -4.64      29.74     -36.50       50.33
(6000, 9000]                           125   16.75  -5.68    -8.80      21.94     -35.29       36.67
(9000, 13000]                          145   15.59  -4.66    -7.29      19.99     -31.34       34.45
(13000, 17000]                         143   17.41  -6.20    -6.53      21.61     -37.76       30.65
(17000, 22000]                         113   17.98  -5.65    -8.31      25.47     -41.62       37.85
(22000, 30000]                         133   19.54  -5.65    -8.47      25.36     -40.31       41.75
(30000, 40000]                         115   15.56  -9.78   -10.26      18.23     -39.54       12.26
(40000, 60000]                         137   13.18  -8.95    -7.68      16.01     -34.44        7.49
60000+                                 105   10.20  -8.96    -7.90       9.90     -24.50        3.68

One observation from Table 3 is that as the forecast volume increases, the distribution of the percent difference from forecast has a smaller spread, in addition to the MAPDFF value getting smaller. For example, for forecast volumes between 30,000 and 40,000 ADT, the percent difference from forecast for 90% of the projects lies between -39.54% and 12.26%, with an absolute deviation of 16.17% on average.

Quantile Regression Results

The uncertainties involved in forecasting traffic call for assessing the risks and subsequently developing a range of traffic that can be expected on a project. Considering the current dataset to be representative, i.e., a "national average", we developed several quantile regression models to assess the biases in the forecasts with respect to the variables described in the previous chapter. The models were developed for the 5th, 20th, 50th (median), 80th and 95th percentile values. Apart from detecting bias in the traffic forecasts, another goal of such econometric analyses is to obtain the range of actual traffic as a function of the forecast traffic and other project-specific criteria. The variables in the analysis are explained in Table 4.

Table 4: Descriptive Variables for Regression Models

AdjustedForecast: Forecast ADT value for a segment/link or project.
AdjustedForecast_over30k: Variable to account for links with an ADT value greater than 30,000. Defined as: if Forecast > 30,000, then value = Forecast − 30,000.
Scale_UnemploymentRate_OpeningYear: Unemployment rate in the project opening year.
Scale_UnemploymentRate_YearProduced: Unemployment rate in the year the forecast was produced.
Scale_YP_Missing: Binary variable to account for missing information in the Year Forecast Produced column of the NCHRP database.
Scale_DiffYear: Difference between the year the forecast was produced and the forecast year, i.e., the forecast horizon.
Scale_IT_AddCapacity: Binary variable for projects that add capacity to an existing roadway. The reference class is resurfacing/repaving/minor improvement projects.
Scale_IT_NewRoad: Binary variable for new construction projects.
Scale_IT_Unknown: Binary variable for projects of unknown improvement type.
Scale_FM_TravelModel: Binary variable for forecasts made using a travel model. The reference class is forecasts made using traffic count trends.
Scale_FM_Unknown: Binary variable for forecasts made using an unknown methodology.
Scale_FA_Consultant: Binary variable for the forecasting agency, with state DOTs as the reference class.
Scale_Agency_BCF: Binary variable for projects under the jurisdiction of Agency B, C or F, with Agency A as the reference class.
Scale_Europe_AD: Binary variable for European projects.
Scale_OY_1960_1990: Binary variable for projects opened to traffic before 1990. The reference class for opening year is 2013 and later.
Scale_OY_1991_2002: Binary variable for projects opened to traffic from 1991 to 2002.
Scale_OY_2003_2008: Binary variable for projects opened to traffic from 2003 to 2008.
Scale_OY_2009_2012: Binary variable for projects opened to traffic from 2009 to 2012.
Scale_FC_Arterial: Binary variable for forecasts on major or minor arterials. Interstates and limited-access facilities are the reference class.
Scale_FC_CollectorLocal: Binary variable for forecasts on collectors and local roads.
Scale_FC_Unknown: Binary variable for forecasts on roadways of unknown functional class.

In the first model, we regressed the actual count on the forecast traffic volume. The structure follows Equation 4 reported previously:

yᵢ = α + βŷᵢ + εᵢ

where yᵢ is the actual traffic on project i, ŷᵢ is the forecast traffic on project i, and εᵢ is a random error term. α and β are estimated terms in the regression. Here α = 0 and β = 1 implies unbiasedness. A quantile regression parameter estimates the change in a specified quantile of the response variable produced by a one-unit change in the predictor variable. This allows comparing how some percentiles of the actual traffic may be more affected by the forecast volume than others, reflected in changes in the size of the regression coefficient. Table 5 presents the regression statistics (the α and β coefficients, with t-values to assess significance). For the median, we observe that the intercept is not significantly different from zero, but the slope (the forecast volume coefficient) is significantly different from one, which we can interpret as a detectable bias.

Table 5: Quantile Regression Results [Actual Count = f(Forecast Volume)]

                     5th Pctl.        20th Pctl.       50th Pctl.       80th Pctl.       95th Pctl.
Pseudo R-Squared     0.433            0.619            0.723            0.750            0.748
                     Coef.   t value  Coef.   t value  Coef.   t value  Coef.   t value  Coef.   t value
Intercept          -826.73   -10.55  -434.03    -5.06    37.15    0.54  1395.74    6.59  2940.45    6.50
Forecast Volume       0.62    30.68     0.81    89.56     0.94  148.10     1.05   76.12     1.42   42.26

In addition to detecting bias, these quantile regression models can be applied to obtain an uncertainty window around a forecast as follows:

5th Percentile Estimate = -827 + 0.62 * Forecast
20th Percentile Estimate = -434 + 0.81 * Forecast
Median Estimate = 37 + 0.94 * Forecast
80th Percentile Estimate = 1396 + 1.05 * Forecast
95th Percentile Estimate = 2940 + 1.42 * Forecast

So if the forecast for a road is 10,000 ADT, we would expect the median number of vehicles actually showing up on the facility to be 9,437 ADT (37 + 0.94 * 10,000), which we can refer to as our median estimate, or alternatively an expected value or adjusted forecast. We would expect that for 5% of the forecasts we make, the actual traffic will be less than 5,373 ADT, and that for 5% of the forecasts we make, the actual traffic will be more than 17,140 ADT. The 20th and 80th percentile values can be calculated

similarly. Table 6 and Table 7 give the ranges of actual traffic and percent difference from forecast over the forecast traffic volume, respectively.

Table 6: Range of Actual Traffic Volume over Forecast Volume [Actual Count = f(Forecast Volume)]

Forecast   Forecast Window (Estimate)
           5th Pctl.  20th Pctl.  50th Pctl.  80th Pctl.  95th Pctl.
0               -827        -434          37       1,396       2,940
5000           2,294       3,612       4,742       6,670      10,047
10000          5,415       7,658       9,448      11,944      17,153
15000          8,536      11,705      14,153      17,218      24,259
20000         11,656      15,751      18,859      22,492      31,365
25000         14,777      19,797      23,564      27,766      38,471
30000         17,898      23,843      28,269      33,040      45,578
35000         21,019      27,890      32,975      38,314      52,684
40000         24,139      31,936      37,680      43,588      59,790
45000         27,260      35,982      42,385      48,862      66,896
50000         30,381      40,028      47,091      54,136      74,002
55000         33,502      44,075      51,796      59,410      81,109
60000         36,622      48,121      56,501      64,684      88,215

Table 7: Range of Percent Difference from Forecast as a Function of Forecast Volume [Actual Count = f(Forecast Volume)]

Forecast   Forecast Window: Percent Difference from Forecast
           5th Pctl.  20th Pctl.  50th Pctl.  80th Pctl.  95th Pctl.
0
5000            -54%        -28%         -5%         33%        101%
10000           -46%        -23%         -6%         19%         72%
15000           -43%        -22%         -6%         15%         62%
20000           -42%        -21%         -6%         12%         57%
25000           -41%        -21%         -6%         11%         54%
30000           -40%        -21%         -6%         10%         52%
35000           -40%        -20%         -6%          9%         51%
40000           -40%        -20%         -6%          9%         49%
45000           -39%        -20%         -6%          9%         49%
50000           -39%        -20%         -6%          8%         48%
55000           -39%        -20%         -6%          8%         47%
60000           -39%        -20%         -6%          8%         47%

Applying the coefficients as an equation, we constructed ranges of actual traffic and percent difference from forecast for different forecast volumes (Figure 3).
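The rounded percentile equations can be applied directly to produce an uncertainty window for any forecast; a small sketch using the coefficients reported above (the function and dictionary names are illustrative):

```python
# Rounded (intercept, slope) pairs from the quantile regression results.
QUANTILE_MODELS = {
    "p05": (-827, 0.62),
    "p20": (-434, 0.81),
    "p50": (37, 0.94),
    "p80": (1396, 1.05),
    "p95": (2940, 1.42),
}

def uncertainty_window(forecast_adt):
    """Expected range of actual traffic (ADT) for a given forecast ADT."""
    return {name: a + b * forecast_adt for name, (a, b) in QUANTILE_MODELS.items()}

window = uncertainty_window(10_000)
# Median ("adjusted forecast") is about 9,437 ADT; the 90% band runs
# from roughly 5,373 (5th percentile) to 17,140 (95th percentile).
```

These values differ slightly from Table 6 at the same forecast volume, presumably because the table was generated with unrounded coefficients.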

Figure 3: Expected Ranges of Actual Traffic (Base Model)
[Figure: lines for the 5th, 20th, 50th (median), 80th and 95th percentile estimates and a "perfect forecast" reference line, plotting expected ADT against forecast ADT from 0 to 60,000.]

The lines depicting the various percentile values can be interpreted as the range of actual traffic for a given forecast volume. For example, it can be expected that 95% of all projects with a forecast ADT of 30,000 will have actual traffic below 45,578, and only 5% of such projects will experience actual traffic less than 17,898. Not considering other variables, this range (17,898 to 45,578 for a forecast volume of 30,000) holds 90% of the projects.

Deep Dives

Introduction

The previously described Large-N analysis measures the error and can shed light on certain factors associated with forecast errors, but it does not explain why the forecasts may be in error. The Deep Dives fill that gap to the extent possible. This analysis focuses on addressing the following questions:

• What aspects of the forecasts (such as population forecasts, project scope, etc.) can we clearly identify as being accurate or inaccurate?
• If we had gotten those aspects right, how much would it change the traffic forecast?

The goal here is to attribute as much of the error as possible to known factors. The remaining error will be for "unknown reasons", and we will be able to say little about it beyond the fact that it is not due to the aspects we identified and quantified. The Deep Dives guide efforts to identify the reasons behind forecast errors. The specific methods for answering these questions vary across the Deep Dives, depending both on the process options being considered and on the data available for each project. This research conducted five Deep Dives, plus a sixth that was only partially completed due to a lack of clarity in the forecast documents, chosen to provide a range of project types and a range of available data for analysis. We aimed to find projects where:

1. The project is already open, and we expect to be able to find post-opening data.
2. The project is big enough to have a meaningful impact.
3. We have detailed information available about the forecasts. Ideally, this would be in the form of archived model runs. Lacking that, detailed forecast reports would be beneficial, and if those are unavailable, we would rely on environmental impact statements or other public documents.
4. The projects as a set show some diversity of types.

We found it to be surprisingly difficult to find suitable case studies.
We found that points one and three were in direct conflict. We did find a few agencies doing a commendable job of archiving forecasts, but even in the best cases the archives get thin more than about 10 years back, and projects forecast less than 10 years ago are often not yet open. In addition, over longer timeframes staff have often turned over, and institutional memory can be lost. Here, the most promise came

Traffic Forecasting Accuracy Assessment Research Technical Report II-24 from finding long-time staff who happened to be good at keeping their own records or saw the value in saving the information. In our search, we aimed for big projects, with the idea that they were more important to start with, they would be better documented, and they would show more meaningful impacts. What we found, though, was that many major projects opened over the last decade have been tolled. This is natural given current funding constraints, but given that toll forecasts have been studied more extensively elsewhere, we wanted them to be a part of, but not the dominant part of, our study. The resulting Deep Dives that have been selected (Table 8) provide for a reasonable diversity of project types and available data. They include a new bridge, the expansion and extension of an arterial on the fringe of an urban area, a major new expressway built as a toll road, the rebuild and expansion of an urban freeway, and a state highway bypass around a small town. Table 8: Projects selected for Deep Dive Analysis Project Name Brief Description Eastown Road Extension Project, Lima, Ohio Widened a 2.5-mile segment of the arterial from 2 lanes to 5 lanes and extended the arterial an additional mile Indian River Bridge, Palm City, Florida This 0.6 mile long bridge with four travel lanes in total. runs along CR 714 (Martin Highway), connecting with the Indian River Street and goes across the St. Lucie River. Central Artery Tunnel, Boston, Massachusetts Reconstruction of Interstate Highway 93 (I-93) in downtown Boston, the extension of I-90 to Logan International Airport, the construction of two new bridges over the Charles River, six interchanges and the Rose Kennedy Greenway in the space vacated by the previous elevated I-93 Central Artery in Boston, Massachusetts. 
Cynthiana Bypass, Cynthiana, Kentucky A 2-lane, state highway bypass project, to the west of the City from the southern terminus where US 62S and US27S meet. South Bay Expressway, San Diego, California A 9.2-mile tolled highway segment of SR 125 in eastern San Diego, CA. This project was funded as a Public Private Partnership (P3). It opened in 2007 and the P3 filed for bankruptcy in 2010. US 41 (later renamed as I-41), Brown County, Wisconsin A project of capacity addition, reconstruction of nine interchanges, constructing 24 roundabouts, adding collector- distributer lanes, and building two system interchanges located in Brown County, Wisconsin. Section 3.2 describes the methodology adopted in conducting the deep dives. Section 3.3 gives a short description of each of the projects selected for Deep Dives and a discussion of the findings. The detailed reports for each of the project is given in the Appendix. The last section, Section 3.4 presents the generalized findings from the Deep Dive analysis alongside a discussion.

Methodology

Sources of Error as Cited in Existing Literature

A considerable body of research has examined traffic forecast inaccuracy. Consulting the existing literature (journal articles, DOT reports, etc.), we have identified several key reasons for forecast inaccuracy, summarized in Table 9 (for the detailed literature review, please see Appendix A: Literature Review). The issues are listed in order of importance, that is, how many times they have been cited, with the citations count indicating how many papers mention the topic as a source of forecast error. Some of the errors have been quantified by measuring their elasticity with travel demand, and others have not. In most cases, the literature simply identifies that the topic could be a source of error from a logical standpoint, rather than clearly showing the amount of error due to that cause. Several observations can be made here. First, there is significant overlap among the identified sources of error. For example, GDP and economic conditions are clearly related to employment, just as land use and housing projections are related to population projections. This is not necessarily a problem, but it means there is room to consolidate. Second, the list focuses largely on the assumptions and inputs fed into the models. This is useful, in that these factors can generally be observed independently of the count, allowing us to better evaluate their effect. Things like the model structure and the stability of travel behavior over time are not often cited. These factors could still be sources of error, but they are more difficult to evaluate as such, which may be why they are less often identified. The exception here is trip generation/traveler characteristics, which concerns the limitations of a model component and the associated data.

Table 9: Sources of Forecast Error Cited in Existing Literature

GDP / GDP growth / GDP per capita / economic conditions (8 citations; quantified): GDP, particularly GDP per capita for the traffic analysis zone as well as for the entire state, is particularly prominent since it has a bearing on car ownership, employment and even toll culture. From Andersson et al. (2017): "In forecasts since 2005, GDP has been the largest source of error due to the sluggish economic growth since 2008".

Employment (7 citations; quantified): E.g., Kain (1990) found changes in employment and population growth to account for most of the changes in transit ridership.

Recession / short-term economic fluctuation (7 citations; quantified): Miller et al. (2016) quantified the effect of the number of economic recessions on forecast accuracy. It can be argued that this belongs in the same category as GDP growth.

Trip generation / travel characteristics (5 citations; not quantified): The availability of appropriate data and their quality, in particular traffic counts, network characteristics, travel costs, etc.

Land use changes / housing prediction / location of the project (5 citations; not quantified): Refers to changes in the built environment that are not specific to the project (Andersson et al. 2017). Flyvbjerg et al. (2006b) found that 26% of projects experience problems regarding changes in land use.

Population projection / household survey (4 citations; quantified): Also the projection of the population distribution. Relevant for the TAZ; can be linked with employment and car ownership.

Fuel price/efficiency (4 citations; quantified): Base price, tax and fuel economy.

Car ownership (3 citations; quantified): Car ownership of each household in the TAZ. Affects VMT and travel characteristics. Andersson et al. (2017) terms errors in car ownership calculations model-specification error.

Time savings on the proposed route, value of time, or willingness to pay (3 citations; not quantified): The monetary value given by travelers to travel time. Alternative route choices may have an effect.

Toll culture (3 citations; not quantified): Better performance for countries that had a "history" of toll roads, compared with those for which road tolling was new (Bain 2011b).

Forecast duration (3 citations; quantified): The number of years between the forecast year and the base year. According to Miller et al. (2016), as the difference decreases, accuracy increases.

In our Deep Dives, we investigated these issues for each project. The goal of the Deep Dives was to quantify each of the errors and find out how they influence the forecast/travel demand. By doing so, we aim to quantify the relative importance of these factors, at least for a small sample of projects. This provides readers with information about where they should focus their efforts to improve forecasts.

Procedure for Analysis

The UK Post-Opening Project Evaluations (POPE) take the approach of evaluating both a do-minimum and a do-something scenario, as illustrated by their assessment of M6 improvements (Highways England 2015). Figure 4 shows the count comparisons included in that report. They evaluate the do-minimum forecast scenario against a count year prior to construction, and the do-something scenario against a count year after construction. As shown in the second of the tables, this helps to decipher whether any differences are due to the net project effect or due to differences in background growth.

Figure 4: Example of Post-Opening Project Evaluation Count Comparisons (Highways England 2015)

As for how we can quantify the sources of error, Andersson et al. (2017) is a good starting point. In that study, the actual and forecast passenger traffic, in vehicle kilometers traveled (VKT), were compared for 8 national reference forecasts spanning several decades. From the forecast reports, the input assumptions (GDP, fuel price, etc.) were obtained, and their changes were found from national statistics. The forecasts were then adjusted for the errors in input assumptions. The primary approach was to calculate elasticities of traffic with respect to the input variables (income, fuel price, car ownership, population). Cross-sectional elasticities were calculated by increasing variables by 10% at a time and calculating the resulting change in VKT (δ):

ε = ln(1 + δ) / ln(1 + x)     (Equation 13)

where ε is the elasticity, x is the change applied to the variable of interest (10% here), and δ is the resulting change in VKT. A reference model was used as a comparison forecast to see how much better the actual forecasts did relative to pre-existing trends. The reference model was estimated as a time series model regressing VKT on GDP per capita and fuel price:
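Under this reading of Equation 13, the elasticity calculation is a one-liner; a small sketch (the 10% increase follows the text, while the VKT responses below are illustrative):

```python
import math

def cross_sectional_elasticity(delta_vkt, x=0.10):
    """Arc elasticity of VKT with respect to an input increased by x (default 10%)."""
    return math.log(1 + delta_vkt) / math.log(1 + x)

# A 10% input increase producing a 10% VKT increase is unit elasticity.
unit = cross_sectional_elasticity(0.10)    # ~1.0
# A 21% VKT increase (1.1^2 - 1) implies an elasticity of about 2.
double = cross_sectional_elasticity(0.21)  # ~2.0
```

The log-ratio form makes the measure symmetric in percentage terms, so it behaves sensibly even for fairly large perturbations.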

ln(VKT) = α + β ln(GDP per capita) + γ ln(fuel price) + ε     (Equation 14)

where α, β and γ are estimated model parameters, and ε is a random error term. The interesting observations here are that the researchers used elasticities from the existing transportation model (older models were not available) to adjust forecasts, and that the elasticities (cross-sectional and time-series) have remained remarkably close to each other over time. Elasticities could alternatively be taken from the literature or from another source. The changes in the forecast were documented for cumulative adjustments in the input variables, i.e., each adjustment builds on the preceding one and adjusts one more variable. The overall accuracy was assessed by comparing the root mean squared errors (RMSE) of the original and adjusted forecasts against the actual outcomes. The final table from their research (taken directly from the paper) is reproduced here as a reference point, shown as Table 10. The numbers are the percent growth in VKT in the region from the base year. Each row in the table shows a different forecast. The first column shows the expected percent growth based on the trend line. The second shows the forecast percent growth in VKT, and the last column shows the actual percent growth. The intermediate columns show what the forecast would be if it had used the correct population growth, fuel price, fuel economy, car ownership and GDP, with each adjustment building incrementally upon the others. The analysis shows that correcting for these five inputs would reduce the RMSE of the forecasts from 0.38 to 0.12.

Table 10: Decomposition of Forecast Errors from Andersson et al. (2017)

Forecast (period)             Trend  Forecast  & adj.  & adj.   & adj.    & adj. car  & adj.  Actual
                                               pop.    fuel     fuel      ownership   GDP
                                               growth  price    economy
TPR 1980 (1980–1990) (%)         46         5       6      11        11          17      16      25
TPR 1990 (1990–2000) (%)         25        20      20      16        15           9       9       8
TPR 1990 (1990–2010) (%)         50        31      31      26        25          18      16      19
VTI 1992 (1991–2005) (%)         38        26      26      17        18          16      17      15
VTI 1992 (1991–2013) (%)         59        41      41      26        32          30      24      20
Samplan 1996 (1993–2010) (%)     38        30      30      21        24          20      24      20
Samplan 1999 (1997–2010) (%)     19        20      24      12        11          10      12      16
SIKA 2005 LU (2001–2013) (%)     10        18      20      11        12          10       9      11
Trp Adm 2009 (2006–2013) (%)     10         6       8       3         6           7       5       3
Root mean square error         0.64      0.38    0.39    0.18      0.21        0.14    0.12

A modification to the aforementioned analysis may be warranted, as the continuous adjustment and the subsequent changes in traffic growth do not quantify the effect of each variable on traffic forecast accuracy, and do not address the issue of uncertainty and inherent variation in each of the components. This approach is similar to the US federal government guidance for road traffic forecasting (FHWA 2010, p. 23), which recommends an "incremental buildup" of the forecast variables. The effect on the forecast can be quantified as follows. First, the change in value, the relative difference between the opening-year forecast value and the actual observed value in the opening year, is calculated:

Change in Value = (Actual Value − Forecast Value) / Forecast Value     (Equation 15)

Second, an effect-on-forecast factor is calculated by exponentiating the product of the elasticity of the common source of error and the natural log of the change rate:

Effect on Forecast = exp(ε × ln(1 + Change in Value)) − 1     (Equation 16)

This factor is then applied to the original forecast volume to generate an adjusted forecast:

Adjusted Forecast = (1 + Effect on Forecast) × Actual Forecast Volume     (Equation 17)

To quantify the effects of individual variables on forecast accuracy, a sensitivity analysis can also be performed. Kain (1990) evaluated the effect of various explanatory variables by comparing their elasticities with transit ridership across several scenarios (traditional, moderate, aggressive and conservative) relative to base-year statistics. The conservative scenario was devised by the author, while the other scenarios were used in the model to project transit ridership to 2010 from the base year of 1986. The modus operandi for the sensitivity analysis is shown in Figure 5.
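A minimal sketch of this adjustment under one plausible reading of Equations 15–17, in which the change is measured on a model input and the elasticity translates it into a traffic effect (the elasticity value and volumes below are invented):

```python
import math

def change_in_value(actual, forecast):
    """Equation 15: relative difference between actual and forecast values."""
    return (actual - forecast) / forecast

def effect_on_forecast(elasticity, change):
    """Equation 16: traffic effect implied by an input error, via its elasticity."""
    return math.exp(elasticity * math.log(1 + change)) - 1

def adjusted_forecast(forecast_volume, elasticity, change):
    """Equation 17: forecast corrected for the observed input error."""
    return (1 + effect_on_forecast(elasticity, change)) * forecast_volume

# Invented example: an input came in 10% above its assumed value, with an
# assumed elasticity of 2.0, so a 10,000 ADT forecast adjusts upward ~21%.
adj = adjusted_forecast(10_000, 2.0, 0.10)
```

Note that Equation 16 reduces to (1 + Change)^ε − 1, which is exactly the multiplicative correction implied by a constant-elasticity relationship.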
Lemp and Kockelman (2009) suggest that while sensitivity tests allow for a greater understanding of the magnitude of uncertainty in the model, they do not provide the probability of a particular outcome occurring. The authors recommend Monte Carlo simulation to generate possible scenarios, although such an approach goes beyond the analysis conducted in this project.

Figure 5: Method for Testing Effects of Errors in Forecast Inputs

The Deep Dives begin with a comparison of the actual and forecast ADT, similar to what is shown in Table 11. If pre-construction counts and forecasts are available, they can be added to the table, as is done for the POPE. For each cell, the values and the percent differences from forecast are reported.

Table 11: Deep Dive Count Worksheet
Columns: Project; Segment & Direction; Base Year Count; Base Year Forecast (if different); Opening Year Count; Opening Year Forecast; % Growth in Count; % Growth in Forecast

Given the review, and our own assessment of the important factors associated with forecast error, the Deep Dives focus on evaluating each of the items listed in Table 12. Each Deep Dive follows a similar structure, working through the list of factors and attempting to identify whether an item is an important source of error for the forecast and, if so, attempting to quantify how much it would change the forecast if the forecasters had gotten it right. The last column in Table 12 identifies whether we expect to be able to quantify the effect of each item on the resulting forecast. The top 7 factors are generally model inputs, and it is reasonable to expect that we can observe the actual outcomes and apply an elasticity or an updated model run to evaluate the effect of having the correct input on the forecast. We expect the remaining factors to be more difficult to quantify, and expect to address them qualitatively if they are identified as being important.

Table 12 Sources of Forecast Error to be Considered by Deep Dives

Employment (Quantifiable: Yes). The actual employment (or GDP) differs from what was projected.

Population/Household (Quantifiable: Yes). The actual population or households differ from what was projected.

Car Ownership (Quantifiable: Yes). Actual car ownership differs from the projection. Should note whether car ownership is endogenous or exogenous to the forecast.

Fuel Price/Efficiency (Quantifiable: Yes). The average fuel price or fuel efficiency differs from expectations.

Travel Time/Speed (Quantifiable: Yes). Travel time comparison of the facility itself and alternative routes.

Toll Sensitivity/Value of Time (Quantifiable: Yes). The sensitivity to tolls, or the value of the tolls themselves, is in error. For example, Anam's (2016) study of the Coleman Bridge found that the project considered two toll amounts ($1 and $0.75); by the opening/horizon year, the actual tolls were $0.85 and $2.

Project Scope (Quantifiable: Yes). The project was built to different specifications than were assumed at the time of the forecast. For example, budget constraints meant that only 4 lanes were built instead of 6.

Rest of Network Assumptions (Quantifiable: Yes). There were assumptions about related projects that would be constructed that differed from what was actually built.

Model Deficiency/Issues (Quantifiable: No). Limitations of the model itself. This could include possible errors, or limitations of the method. For example, the project was built in a tourist area, but the model was not able to account for tourism.

Data Deficiency/Issues (Quantifiable: No). Limitations of the data available at the time of the forecast. For example, erroneous or outdated counts were used as the basis for pivoting.

Unexpected Changes (Quantifiable: No). In the latter portion of the 20th century, this could include the rise of 2-worker households or other broad social trends. In the 21st century, this could include technology changes, such as self-driving cars.

Other (Quantifiable: No). Other issues that are not articulated above.

For those inputs that can be quantified, the Deep Dives are structured around a worksheet as outlined in Table 13. The first row shows the original traffic forecast, the actual count, and the forecast error. The subsequent rows show the effect of different inputs. For example, the second row shows the forecast and actual employment. An elasticity is shown, as well as the effect of correcting for errors in that input on the forecast itself. Then the remaining percent difference from forecast is shown if the forecast is adjusted to account for errors in that input. This is similar to the structure used by Andersson et al. (2017), with the exception that the method for filling in the table is left open-ended. This is deliberate, and allows us to adapt based on what is available for each Deep Dive. For example, when only public documents, such as an EIS, are available, we may look at the total regional employment and apply an elasticity to adjust the forecast. If we have the model run available, we may scale the employment in the TAZ file and re-run the model. In some cases, we may not be able to evaluate the effect of a certain factor, so we will note that and identify what information we would need in order to do so. This diversity of Deep Dives allows us to comment both on what we find and on the effectiveness of attempting such an evaluation with different levels of data.

Table 13 Deep Dive Worksheet (columns: Items; Forecast Value; Actual Value; Elasticity; Effect on Forecast; Remaining Percent Difference from Forecast for Adjusted Forecast. Rows: Original Traffic Forecast; Employment; Population/Household; Car Ownership; Fuel Price/Efficiency; Travel Time/Speed; Toll Sensitivity/Value of Time; Project Scope; Rest of Network Assumptions; Adjusted Traffic Forecast.)

Each Deep Dive is documented as a case study following a semi-standard format. The sections include:
1. Project Description. A brief description of the project itself.
2. Forecasts. Notes on the forecast itself, including the method, the year the forecast was developed, the expected opening year, the design year, and the sources of information available about the forecast.
3. Comparison to Actual Outcomes. Notes on the actual project outcomes and associated data sources. Comparison to forecast, equivalent to Table 11.
4. Evaluation of Sources of Forecast Error. Evaluation of the factors contributing to forecast error. This can be either a quantitative or a qualitative assessment of each. It may not be possible to evaluate all factors, but they should be considered. Includes a table similar to Table 13.
5. Discussion.
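The worksheet logic above can be sketched in code. This is a rough illustration only: the factor names echo Table 13, but the numeric inputs and elasticities are invented, and chaining the adjustments multiplicatively is one plausible reading of applying Equations 15 through 17 factor by factor.

```python
import math

def fill_worksheet(forecast_adt, actual_adt, factors):
    """Sequentially adjust a forecast for each quantifiable input error,
    as in the Deep Dive worksheet, reporting the adjusted forecast and
    the remaining percent difference from forecast after each step.

    `factors` maps item name -> (forecast_value, actual_value, elasticity).
    """
    rows = []
    adjusted = forecast_adt
    for item, (fv, av, elasticity) in factors.items():
        change = (av - fv) / fv                                 # Equation 15
        effect = math.exp(elasticity * math.log(1 + change)) - 1  # Equation 16
        adjusted *= (1 + effect)                                # Equation 17
        remaining = (actual_adt - adjusted) / adjusted * 100
        rows.append((item, round(adjusted), round(remaining, 1)))
    return rows

# Hypothetical inputs: a 20,000 ADT forecast vs. 15,000 observed, with
# employment and fuel-price errors and assumed elasticities.
rows = fill_worksheet(20_000, 15_000, {
    "Employment": (100_000, 88_000, 0.6),
    "Fuel Price": (2.50, 3.40, -0.2),
})
```

Each row corresponds to one worksheet line; the remaining percent difference shrinks as each corrected input explains part of the original error.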

Results

Eastown Road Extension Project, Lima, Ohio

The Eastown Road expansion is a project in the city of Lima, Ohio that widened a 2.5-mile segment of the arterial from 2 lanes to 5 lanes and extended the arterial an additional mile. This north-south arterial is located on the western edge of the city of Lima in Allen County, Ohio. The project extended Eastown Road from just north of Elida Road in the north to Spencerville Road in the south. The project included a 2.5-mile widening from 2 lanes to 5 lanes on the segment between Elida Road and West Elm Street and a 1-mile extension further south to Spencerville Road.

The traffic forecasts for Eastown Road were generally over-estimated by about 25%, with the extension segment over-estimated by 75%. It should be noted that there is a possible error in the observed counts on the extension segment. The project opened in 2009, at the peak of the economic recession and during a period of high national gas prices. As a result, over-estimation of employment and under-estimation of fuel price in the opening year were two key contributors to the forecasting errors in this project. Additionally, the modeled travel speeds on certain segments of the project were over-estimated by up to 15%; this was the third key contributor to the forecasting error. Population and car ownership forecasts were very similar to the observed values and contributed little to the forecasting error.

Figure 6: Project Corridor for Eastown Road Extension

Adjustments to the forecasts using elasticities and model re-runs confirmed that significant errors in opening year forecasts of employment, fuel price, and travel speed played a major role in the over-estimation of traffic volumes on Eastown Road. The traffic forecasts on the project segments that were widened from 2 lanes to 5 lanes improved from an average over-estimation of 25% to 3% after accounting for the corrected exogenous forecasts and project assumptions. The forecasts on the extension segment improved from 75% to 39% over-estimation.

Overall, the prevailing macro-economic conditions around the opening year played a major part in the accuracy of the forecasts for the Eastown Road expansion project. This is a major uncertainty that is extremely difficult to consider directly at the time traffic forecasts are prepared, given the various modeling parameters that could change in an economic downturn. One way to account for this is to evaluate and document the change in traffic forecasts under reduced employment and higher fuel prices. It is unknown whether risk and uncertainty were considered in the traffic forecasts, due to the absence of project documentation. For future forecasting efforts, it is suggested that a copy of the project and traffic forecasting documentation be saved along with the actual model used to generate the forecasts.

Indian River Bridge, Palm City, Florida

The Indian River Street Bridge is a new bridge construction project located in Palm City, Florida (Martin County). The bridge is 0.6 miles long with four travel lanes in total (two lanes in each direction). The bridge carries CR 714 (Martin Highway), connecting with Indian River Street and crossing the St. Lucie River. The Indian River Street Bridge acts as a reliever for the Palm City Bridge (the old bridge), which is approximately one mile north of the new bridge.
It is also expected to provide relief to the existing SR 714 corridor, which connects with the Palm City Bridge. The study area boundaries extend from Florida's Turnpike to the west, Federal Highway (US 1) to the east, the I-95 crossing of the St. Lucie Canal to the south, and the Martin/St. Lucie county line to the north. Figure 7 shows the study area for this project.

Figure 7: Project Corridor for Indian River Bridge Project

The project study evaluated multiple alternatives and ultimately settled on construction of a new four-lane bridge. The updated study was reported in 2003. Construction started in 2009 and was completed in 2014. The estimated construction cost of the project is $63.9 million. This project is interesting because it provides an opportunity to examine a new bridge crossing a river, with clear diversion effects, and because detailed modeling information is available. The model was built using the TRANPLAN (Transportation Planning) software. FDOT District 4 provided archived model runs and detailed project reports to support this Deep Dive analysis.

The model forecasts on Indian River Street (the new construction) were generally over-estimated by about 60%, and those on the Palm City Bridge (the competing route) were over-estimated by 36%. After applying corrections through elasticities, the forecast error on the new bridge was reduced to 56%, and on the competing bridge to 29%. Model alterations resulted in new forecast volumes that were 59% off for the new bridge and 34% off for the old bridge. Adjustments were made to the model forecast based on both the elasticities and the model re-runs; the elasticity approach showed more promising results than the model adjustments. Fuel price was an influential factor in the elasticity corrections, and including a fuel price effect in the model could have been beneficial in reducing error. However, both methods could explain only part of

the forecasting error. Clearly, there are other factors not accounted for in the model that caused the overall over-estimation of traffic in the study area, especially on Indian River Street.

One possible source of error is the forecasting methodology. The opening year traffic was forecast by scaling the design year model volumes in accordance with existing counts. Since the new bridge had no existing count information, this procedure may give rise to inaccurate forecasts. However, it is challenging to develop a more robust forecasting methodology for projects where no existing count is available. In addition, a new bridge is an intense change to the infrastructure: it connects two land areas through a single link, leaving few comparable alternative paths.

The effects of an economic downturn can influence the travel behavior of a region for years following a recession. For example, Figure 8 shows the clear impact of the 2008 recession on Martin County unemployment. Unemployment peaked from 2010 to 2012, suppressing traffic, and those effects likely carried over to 2014. Job losses reduce not only work trips but also leisure trips. A recession is also assumed to change the value of time, which would imply updated coefficients for highway assignment. A change in job location while maintaining the same housing location would alter an individual's route selection. These shifts would change travel patterns for the following years. This effect could be better studied by comparing trips from "Big Data" sources (e.g., StreetLight or AirSage data) before and after the recession years.

Figure 8 Martin County Unemployment Rate Chart

External trips account for 9% of the traffic on the new bridge and only 2% of the traffic on the Palm City Bridge (see Table 14).
This supports the assumption that both the new and the old bridge are used mainly by the internal population. Further analysis comparing the modeled trip patterns to "Big Data" sources might reveal travel patterns that were insufficiently represented in the model.

Table 14 External Trip Distribution Using Both Competing Bridges

                      2025 Original Run                 2025 New Run
External Trips    New Bridge  Old Bridge   Total    New Bridge  Old Bridge   Total
I-95                   1,563           -   1,563         1,523           6   1,529
Turnpike               1,227         936   2,163         1,298         968   2,266
US 1                   1,095         174   1,269         1,080         286   1,366
Total                  3,885       1,110   4,995         3,901       1,260   5,161

Another possible explanation is demographic. Martin, St. Lucie, and Indian River Counties show the steepest increases in the median age of the population (see Figure 9), which suggests that many retirees moved into this region. Retirees tend to travel less than working families. This may explain why the population of St. Lucie County was underestimated by 22%, yet the traffic forecasts for all links in the study area were overestimated. The travel model did not have a component that adjusted travel rates based on the number of workers in the household, which may have contributed to the over-estimation.

Figure 9 Median Age (in years) in Southeast Florida Counties, 1970-2010 (Source: BEBR; series shown for the U.S., Florida, Indian River, Martin, Miami-Dade, Palm Beach, and St. Lucie)

Overall, the prevailing macro-economic conditions around the opening year played a major part in the accuracy of the forecasts for this project. Other exogenous factors contributing to the over-estimate may be the increase in fuel prices and the increase in retirees. Neither factor could be replicated precisely in the travel model used for the Indian River Street Bridge. Further analysis using "Big Data" sources could add more insight into the over-estimation of traffic. This study highlights the importance of archiving not only the model runs and forecast reports, but also the validation approach used during model development.

Central Artery Tunnel, Boston, Massachusetts

The I-93 Central Artery/Tunnel Project (CA/T), popularly known as the Big Dig, is a megaproject that included the reconstruction of Interstate 93 (I-93) in downtown Boston, the extension of I-90 to Logan International Airport, the construction of two new bridges over the Charles River, six interchanges, and the Rose Kennedy Greenway in the space vacated by the previous elevated I-93 Central Artery in Boston, Massachusetts. The project included 7.8 miles of highway construction, about half of it in tunnels. The study area for this Deep Dive consists of I-93 in downtown Boston and I-90 near the Ted Williams Tunnel, which connects to Logan Airport under Boston Harbor. A highlight of the CA/T Project was the replacement of the elevated I-93 Central Artery with an underground expressway. It was built to reduce traffic congestion, improve mobility and the environment in one of the most congested parts of Boston and the U.S., and establish the groundwork for economic growth.

Figure 10: Central Artery/Tunnel Projects

The CTPS backcasting report showed that roadways in the CA/T Project were generally overestimated, ranging from 1 to 22 percent, with one roadway segment underestimated by 6 percent. Overall, traffic forecasting accuracy improved after correcting the exogenous forecasts and project assumptions; nine of twelve roadway segments showed reduced forecast error as a result.

It should be noted that there is abundant documentation on the CA/T Project, but virtually all of it is associated with project management, construction, project finance, and economic impacts. It is unknown whether risk and uncertainty were considered during the project, due to the absence of documentation on the subject. For future forecasting efforts, it is suggested that a copy of the forecasting documentation and assumptions be archived along with the travel model files used to generate the forecasts.

Cynthiana Bypass, Cynthiana, Kentucky

The Cynthiana Bypass is a 2-lane, state highway bypass project located in Cynthiana, Kentucky. The study area included the Cynthiana city limits and immediate environs in Harrison County, Kentucky. The project created a bypass to the west of the city, starting at a southern terminus where US 62S and US 27S meet, and extending northwards to a point north of the city along Main Street/US 27N. The length of the bypass is 3.6 miles, and it includes a new bridge across the South Fork of the Licking River, north of the city.

Figure 11: Project Corridor (Cynthiana Bypass)

The traffic forecasts on the Cynthiana Bypass were generally over-estimated by about 45%, with the notable exception of the northernmost section, which was estimated to within 4% of observed values. As would be expected for a bypass project, the biggest source of error in the model forecast was the overestimated growth factor (2.5% per year) in external counts. Three out of four segments of the project showed a significant improvement after accounting for the corrected external forecasts.

The project opened in 2012, shortly after the peak of the economic recession and during a time of high gas prices. As a result, over-estimation of employment in the opening year was a contributor to the forecasting errors in this project. Population forecasts were very similar to the observed values and did not contribute to the forecasting error (in fact, correcting for actual population alone made the forecasts a bit worse).

Risk and uncertainty were not explicitly considered in the traffic forecasts. Project documentation was not archived by the project owners. Fortunately, a copy of the documentation was obtained from the consultant, who happened to keep a paper copy in her personal files (she had long since left employment at the consulting company that was contracted to do the study). For future forecasting efforts, it is suggested that copies of the project and traffic forecasting documentation be saved by the project owners (in this case, the state highway authority) along with the actual models used to generate the forecasts.

South Bay Expressway, San Diego, California

South Bay Expressway (SBX) is a 9.2-mile tolled highway segment of SR 125 in eastern San Diego, CA. SBX generally runs north-south from SR 54 near Sweetwater Reservoir to SR 905/SR 11 in Otay Mesa, CA, near the US-Mexico border. A 3.2-mile untolled link to the existing freeway network at the northern end was publicly funded and built along with the construction of the private toll road. Originally developed as a public-private partnership, SBX opened in November 2007. Initial traffic and revenue were below expectations, and the company was involved in ongoing litigation with contractors. In March 2010 the operator filed for bankruptcy. In July 2011, SANDAG agreed to purchase the lease from the operator, taking control of the remainder of the 35-year lease in November 2011.¹
The original study area boundary was essentially the entire San Diego region. The South Bay Expressway is the easternmost north-south expressway in San Diego. SBX was originally developed to accommodate the rapidly growing residential and industrial South Bay area and to provide improved access to the US-Mexico border crossing facility at Otay Mesa. The original South Bay Expressway analysis was for the first toll facility in San Diego. South Bay Expressway was developed as permitted by California AB 680, passed by the California legislature in 1989. Under the agreement, the concessionaire developed the project and constructed the road in return for operating and maintaining the facility and collecting toll revenue for 35 years, until 2042. As per the agreement, the State of California owns the facility but leases it to the concessionaire. After the original concessionaire declared bankruptcy, SANDAG purchased the concession in December 2011 and will retain tolling control until the facility reverts to Caltrans in 2042. Rather than maximizing revenue on the facility, SANDAG sets the toll prices to relieve congestion on I-5 and I-805. A map of the corridor and current toll rates are shown in Figure 12.

1 https://www.transportation.gov/tifia/financed-projects/south-bay-expressway

Figure 12: Project Study Area (South Bay Expressway)

This Deep Dive is not meant as a criticism of the forecasts developed in 2003. While hindsight allows us to understand the warning signs, very few in the world saw the Global Financial Crisis and housing bubble coming. The TIFIA Risk Analysis Report showed that the early-year project development forecasts had a probability of less than 5%, although that analysis included only risks associated with toll revenues, holding projections of construction and operating costs constant; the USDOT certainly did not forecast the impending Global Financial Crisis. The report shows the importance of detailed risk assessments, and of understanding the major drivers of the forecasts and how changes in modeling and growth assumptions impact traffic and revenue forecasts. If a more conservative approach had been taken in the development of the project, it is unlikely that a P3 would have found this an appropriate project; at the least, the concessionaire would have structured the deal differently.

Based on our review of the forecasts and comparable data, one recommendation is for every project to develop clear model performance metrics for the forecast period that can be checked against observed data. Much like the data collected for transit before-and-after studies, these data would provide clear insight into the forecasting process and could be used in each region (and collectively in the U.S.) to understand common forecasting errors. These metrics may include:
- Socio-economic variables such as population and employment at subregional levels (focusing on the project corridors);
- Regional VMT and VHT values;
- Consistent ADT measures at specific points in the corridor (a plan to collect annual traffic counts on the facility for the first 5-10 years after opening); and
- Consistent definitions of other measures to be collected and maintained.
For toll facilities this could be annual or daily transactions, revenue miles traveled, daily or annual revenue, average toll rates, etc.
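As an illustration of what such a monitoring record might look like, the sketch below defines a per-year observed-versus-forecast record; the class, field names, and values are hypothetical, not from the report.

```python
from dataclasses import dataclass

@dataclass
class CorridorMetrics:
    """One year's observed vs. forecast metrics for a project corridor.
    Illustrative only; an agency would add the other measures listed above
    (employment, VMT/VHT, revenue, etc.)."""
    year: int
    forecast_adt: float
    observed_adt: float

    def percent_difference(self) -> float:
        """Percent difference from forecast: positive when observed
        traffic exceeds the forecast."""
        return (self.observed_adt - self.forecast_adt) / self.forecast_adt * 100

# Hypothetical post-opening monitoring for the first years of operation.
history = [
    CorridorMetrics(2008, 40_000, 31_000),
    CorridorMetrics(2009, 42_000, 29_500),
]
worst = min(m.percent_difference() for m in history)
```

Maintaining such records annually is what would make the before-and-after comparisons recommended here routine rather than a one-off research exercise.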

US 41, Brown County, Wisconsin

The US 41 Project in Brown County, Wisconsin involved adding capacity, reconstructing nine interchanges, constructing 24 roundabouts, adding collector-distributor lanes, and building two system interchanges. The project aimed to improve safety and road capacity by replacing old and deteriorating pavement and outdated design infrastructure with new standards. The project area is an approximately 14-mile portion of US 41 in Brown County, from Orange Lane near the County Road F interchange to the County Road M interchange. Figure 13 shows the project area with five roadway segments. The study area for this Deep Dive, however, is only the 3.3-mile roadway covered by the FEIS for the Memorial Drive to County M segment (segment 5 in Figure 13). Out of the five segments, Memorial Drive to County M is the only segment that required an EIS, because of the potential environmental impacts of building two system interchanges at WIS 29 and I-43, with tall flyover-type ramps constructed on top of swamp land. The other four segments in Figure 13 were widened from four lanes to six or eight, some with auxiliary lanes, which resulted in re-evaluating the original Environmental Assessment (EA), completed in 2002, or completing a new EA.

Figure 13: Project Study Area (US 41 Brown County)

The US 41 Project was part of the 31-mile US 41 highway reconstruction project in Winnebago and Brown counties. The project areas in the two counties are not connected along US 41, as the figure

shows, but are adjacent to the two major cities in those counties, Green Bay and Oshkosh. The US 41 Project was the largest reconstruction project in the history of the Northeast Region in Wisconsin. The project mainly increased highway lanes from four to six or eight, some with auxiliary lanes. It replaced old and deteriorating pavement and outdated design infrastructure, which resulted in the reconstruction of nine interchanges, construction of 24 roundabouts, addition of collector-distributor lanes, and construction of two system interchanges. The project was intended to improve safety and upgrade a transportation link that supports important economic vitality between southeastern and northeastern Wisconsin.

The original traffic forecasts were slightly overestimated, by 3 to 10 percent, for the three study sites, but they were generally close. It should be noted that the traffic count for site 3 was the preliminary ADT, not the final ADT; the larger delta between the traffic forecast and the opening year count for site 3 may derive from the use of this preliminary estimate. Traffic forecasting accuracy improved after correcting the exogenous population forecast. However, the fuel price adjustment increased the percent difference from forecast. One explanation is that the change in fuel price had little effect on traffic volumes in a study area where public transportation is not a reasonable alternative mode. However, this interpretation may be wrong, given the uncertainty in how the fuel price impact was implemented in the traffic forecast model. Availability of the archived model and its inputs would have provided a deeper understanding of the parameters and methodology used to forecast traffic for the US 41 Project. Only a small number of documents and data are available for the US 41 Project.
It is unknown whether risk and uncertainty were considered during the project, due to the inaccessibility of documentation on this project. For future forecasting efforts, it is suggested that a copy of the forecasting documentation and assumptions be archived along with the travel model files used to generate the forecasts.

Conclusions

Research Questions

At the start of this report, we identified several research questions, each of which contributes to the project objective of developing a process to analyze and improve the accuracy, reliability, and utility of project-level traffic forecasts. Those research questions relate both to the analysis of existing data and to the process to be followed to continue such analyses in the future. The first set of questions, addressed by the Large-N analysis, provides a means of describing the historic range of forecast errors that have been observed for certain project types. A second set of questions, addressed by the Deep Dives, sheds light on why the forecasts may be in error. The third set of research questions focuses on establishing an effective process. This final chapter of the report revisits each of these questions, summarizing what we have and have not learned about each.

Large-N Findings

We can make a number of observations from the Large-N Analysis:

1. Traffic forecasts show a modest bias, with actual ADT about 6% lower than forecast ADT. The precise number depends upon which metric is used, but the results are in a similar range. The mean percent difference from forecast is +0.65% at a segment level and -5.6% at a project level. The median percent difference from forecast is -5.5% at a segment level and -7.5% at a project level. The difference between the mean and median values occurs because the distribution is asymmetric: actual values are more likely to be lower than forecast, but there is a long right-hand tail of the distribution where a small number of projects have actual traffic much higher than forecast (Section 2.3.1). When we consider the bias in an econometric framework, our median quantile regression model (Section 2.4) has an intercept close to zero, but a slope of 0.94, which is significantly lower than 1.

2.
Traffic forecasts show a significant spread, with a mean absolute percent difference from forecast of 25% at the segment level and 17% at a project level. 90% of segment forecasts fall within the range -45% to +66%, and 90% of project level forecasts fall within the range of -38% to +37%. (Appendix B, section 4.1) 3. Traffic forecasts are more accurate for higher volume roads. This can be observed in the figures and data presented in Appendix B, section 4.2. For example, for segments with 60,000 ADT or more, the MAPDFF is 12.4% compared to 24.74% overall. The result is confirmed by our quantile regression models, which have a slope closer to one for volumes greater than 30,000 ADT (Appendix B, section 5.3). This result echoes the maximum desirable deviation guidance from NCHRP 255 and NCHRP 765 where there are tighter targets for calibrating a travel model for higher volume links.

4. Traffic forecasts are more accurate for higher functional classes, over and above the volume effect described above. Our quantile regression results show narrower forecast windows for freeways than for arterials, and for arterials than for collectors and locals. The actual volumes on lower-class roads are more likely to be lower than the forecasts. These challenges may be due to limitations of zone size and network detail, as well as less opportunity for inaccuracies to average out on larger facilities.

5. The unemployment rate in the opening year is an important determinant of forecast accuracy. According to the models in Appendix B, section 5.2, for each point of increase in the unemployment rate in the opening year (such as from 5% to 6%), the median estimate decreases by 3%. For example, consider two roads, each with the same forecast, but one scheduled to open in 2005 with an unemployment rate of 4.5% and one scheduled to open in 2010 with an unemployment rate of 9.5%. We would expect the actual opening year ADT to be 15% lower for the project that opens in 2010 ((9.5 - 4.5) × 0.03 = 0.15).

6. Forecasts appear to implicitly assume that the economic conditions present in the year the forecast is made will persist. This can be observed in the same models in Appendix B, section 5.2, based on the coefficient on the unemployment rate in the year produced, which is positive. The positive coefficient means that a high unemployment rate in the year the forecast is produced is more likely to result in an actual ADT higher than the forecast, while a low unemployment rate in the year the forecast is produced has the opposite effect.

7. Traffic forecasts become less accurate as the forecast horizon increases, but the result is asymmetric, with actual ADT more likely to be higher than forecast as the forecast horizon increases.
The forecast horizon is the length of time into the future for which forecasts are prepared, which we measure as the number of years between when the forecast is made, and the project opens. The quantile regression results (Appendix B, section 5.2 and 5.3) show that the median, 80th percentile and 95th percentile estimates increase with an increase in the DiffYear variable, but that the 5th and 20th percentile estimates either stay flat or increase by a smaller amount. 8. Regional travel models produce more accurate forecasts than traffic count trends. The mean absolute percent difference from forecast for regional travel models is 16.9% compared to 22.2% for traffic count trends (Appendix B, section 4.12). In addition, the quantile regression models show that using a travel model narrows the uncertainty window. 9. Some agencies have more accurate forecasts than others. The best agencies (with more than a handful of projects) have a MAPDFF of 13.7%, compared to 32% for the worst. A portion of these differences show up as significant in the quantile regression models (Appendix B, section 5.2). 10. Traffic forecasts have improved over time. This can be observed both in our assessment of the year the forecast was produced and in the opening year. Forecasts for projects that opened in the 1990s were especially poor, exhibiting mean volumes 15% higher than forecast, with a MAPDFF of 28.1%. The quantile regression models for forecasting (Appendix B, section 5.3) show that while older forecasts do not show a significant bias relative to newer forecasts, they do have a broader uncertainty window. 11. We find that 95% of forecasts reviewed are “accurate to within half of a lane”. We find (Appendix B, section 4.14) that for 1% of cases, the actual traffic is higher than forecast and additional lanes would be needed to maintain the forecast level-of-service. 
Conversely, for 4% of cases, actual traffic is lower than forecast, and the same level-of-service could be maintained with fewer lanes.
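The opening-year unemployment effect in point 5 is a simple linear shift of the median estimate. The following sketch illustrates that arithmetic only; the 3%-per-point coefficient comes from the models in Appendix B, section 5.2, while the function name, its inputs, and the 20,000 ADT example are hypothetical:

```python
def median_expected_adt(forecast_adt, opening_unemployment, baseline_unemployment,
                        pct_per_point=0.03):
    """Shift the median expected ADT down 3% for each percentage point of
    opening-year unemployment above a baseline (illustrative coefficient)."""
    shift = pct_per_point * (opening_unemployment - baseline_unemployment)
    return forecast_adt * (1.0 - shift)

# Two roads with the same 20,000 ADT forecast: one opening in 2005 at 4.5%
# unemployment, one in 2010 at 9.5%. The 2010 project's median expected ADT
# is 15% lower: (9.5 - 4.5) * 0.03 = 0.15.
print(median_expected_adt(20000, 4.5, 4.5))         # 20000.0
print(round(median_expected_adt(20000, 9.5, 4.5)))  # 17000
```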

Revisiting our original research questions, we offer the following conclusions:

What is the distribution of forecast errors across the sample as a whole?

The forecast errors are best summarized by the distribution shown in Figure 14.

Figure 14: Distribution of Percent Difference from Forecast (Project Level)

where the percent difference from forecast is defined as:

PDFF = (Actual - Forecast) / Forecast * 100

Can we detect statistically significant bias in the forecasts? If so, is that bias a function of specific factors, such as the type of project, the time between the forecast and the opening year, or the methods used?

Yes. Actual ADT is about 6% lower than forecast ADT, and this difference is statistically significant. Several factors are found to affect this bias, including economic conditions, forecast horizon, and facility type. We can use the sign of the coefficients in the 50th percentile estimates of the quantile regression models as a measure of this bias. Using the inclusive model, we find that economic conditions affect the bias. For each percentage-point increase in the unemployment rate in the opening year (such as from 5% to 6%), the median expected ADT is 3% lower than forecast, all else being equal. For each percentage-point increase in the unemployment rate in the forecast year, the median expected ADT is 1% higher. Facility type also affects the bias. Relative to freeways, the median expected ADT on arterials is 8% lower than forecast, and the median expected ADT on collectors and locals is 14% lower than forecast, all else being equal. For each additional year added to the forecast horizon, the median expected ADT increases by 1%.
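The percent difference from forecast, and the mean absolute percent difference from forecast (MAPDFF) used throughout this report, follow directly from that definition. A minimal sketch with hypothetical project data:

```python
def pdff(actual, forecast):
    """Percent difference from forecast: (actual - forecast) / forecast * 100."""
    return (actual - forecast) / forecast * 100.0

def mapdff(pairs):
    """Mean absolute percent difference from forecast over (actual, forecast) pairs."""
    return sum(abs(pdff(actual, forecast)) for actual, forecast in pairs) / len(pairs)

# Three hypothetical projects as (actual ADT, forecast ADT) pairs:
projects = [(9400, 10000), (21500, 20000), (4600, 5000)]
print([round(pdff(a, f), 1) for a, f in projects])  # [-6.0, 7.5, -8.0]
print(round(mapdff(projects), 2))                   # 7.17
```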

After adjusting for any bias, how accurate are the forecasts? Is the accuracy a function of specific factors, such as the type of project, the time between the forecast and the opening year, or the methods used?

The mean absolute percent difference from forecast of traffic forecasts is 17% at a project level, with 90% of project-level forecasts falling within the range of -39% to +37%. This is affected by several factors, as listed above, including the functional class, the forecast horizon, economic conditions, and the forecasting method, agency and year. One of the more important factors is that forecasts tend to be more accurate for higher volume roads.

It is important that we note a limitation of this study: the data used here are not necessarily a random or representative sample of all traffic forecasts. They were assembled based on availability and shared by different agencies and past researchers examining the topic. Therefore, the sample may contain some selection bias. For example, Agency A may have compiled data on the largest projects to have opened since 1990, while Agency B may have compiled data on all forecasts prepared since 2000. Agency B's sample will naturally contain more routine projects and those with a shorter planning horizon, and these variables will be correlated with the agency itself and the methods that agency uses. In addition, the fields recorded by these two agencies may be different, meaning that they each contain missing data in a subset of data fields.

The issue of how to treat missing data becomes important as we estimate the quantile regression models, and we take a different approach in the model for inference than in the model for forecasting. In the inference model, we estimate a separate coefficient on any attribute that is missing, such that we do not bias the relative estimates of the other, non-missing values.
In the forecasting model, however, we take the view that someone developing a forecast will always know attributes such as the facility type, the forecasting method and whether or not the facility is a new road. Therefore, it is not logical to put forward a forecasting model that includes unknown facility type as an option, and we want any uncertainty associated with missing data to be reflected in the base model.

Data limitations also arise when interpreting the model estimation results. This is most relevant for conclusions 9 and 10 above: that some agencies have more accurate forecasts than others and that traffic forecasts have improved over time. We know that the data provided by different agencies come from different time periods, with different mixes of projects. From examining the data, we know that routine projects such as repaving and minor improvements are more likely to be recorded in more recent years, as records of those projects are less likely to be maintained over a span of decades. While we might think that forecasts get better over time because we now have access to better data, more computational power and better models, it may also be that the forecasting task has become easier over time. Infrastructure budgets are constrained, and states today build fewer big projects. The span between the 1970s and the 1990s was one of growing auto ownership and an increasing share of women in the workforce, which logically would lead to more VMT per capita and measured volumes higher than forecast, whereas both trends had largely played out by the 2000s. It is difficult to disentangle these factors, and we are left to speculate: if we are interested in drawing an uncertainty window around our present-day forecasts, how much credit should we take for recent improvement in forecasts?
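The inference-model treatment of missing attributes amounts to giving "missing" its own category when the data are encoded, so that records with unknown values receive their own coefficient rather than distorting the coefficients estimated for known values. A minimal sketch of that encoding step; the field name and category levels are invented for illustration:

```python
def encode_with_missing(records, field, levels):
    """One-hot encode `field`, routing absent or unrecognized values to an
    explicit 'missing' column that can carry its own model coefficient."""
    columns = levels + ["missing"]
    encoded = []
    for record in records:
        value = record.get(field)
        if value not in levels:
            value = "missing"
        encoded.append([1 if value == column else 0 for column in columns])
    return columns, encoded

columns, rows = encode_with_missing(
    [{"method": "travel model"}, {"method": "count trend"}, {}],
    "method",
    ["travel model", "count trend"],
)
print(columns)  # ['travel model', 'count trend', 'missing']
print(rows)     # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```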

Deep Dive Findings

The detailed Deep Dive reports for each of the projects are given in Appendix C. We can summarize the key findings as follows:

• On the Eastown Road Expansion, actual volumes were 20% lower than forecast for the existing portion of the road and 43% lower than forecast for the extension. Correcting for errors in input values (employment, population/households, car ownership, fuel price and travel time) improved these forecast values to 25% and 3%. The travel speeds appear to be of particular importance in this case, with the actual speed lower than forecast on Eastown Road.

• On the Indian River Bridge, actual volumes on the new bridge were 60% lower than forecast even though the base year validation was reasonable. Correcting errors in the inputs (employment, population and fuel price) improved the forecasts only slightly. It is not clear why the discrepancy occurred.

• For the Central Artery Tunnel project, actual traffic on modified links was 4% lower than forecast, and actual traffic on new links was 16% lower than forecast. This represents a strong forecast for a massive project with a long time horizon. Correcting input errors (for employment, population and fuel price) would improve the forecast error to +3% for existing links and -10% for new links.

• On the Cynthiana Bypass, actual traffic was about 30% lower than forecast for three of four bypass segments, and 4% lower than forecast for the fourth bypass segment. The major source of error on this project was the external traffic forecasts: the actual traffic at external stations was 43% lower than forecast. Correcting this issue reduces the absolute error to less than 4% for three of four segments, although with this correction actual traffic on the fourth segment is higher than the adjusted forecast.
• On the Southbay Expressway, the long-term forecasts appear to be reasonably accurate, but a straight-line interpolation to the short term creates large deviations. There appear to be three major contributors to this outcome. First, the project opened as a privately financed toll road in November 2007, just before the recession caused a decrease in demand. Second, an important travel market for the road is border crossings from Mexico, particularly for truck traffic, and border crossings declined from their long-term trend about the time the toll road opened. Third, the operator responded by increasing tolls, further reducing demand. The operator was unable to survive these challenges and went bankrupt in 2010. SANDAG bought the road and reduced tolls, while border crossings and economic conditions recovered.

• For US 41 in Brown County, the original traffic forecasts slightly overestimated traffic, by 3 to 10 percent at three study sites, but were generally close. The forecast accuracy improved after correcting the exogenous population forecast. However, the fuel price adjustment increased the forecast error. This may be because the change in fuel price had little effect on traffic volumes in a study area where public transportation is not a reasonable alternative mode.

Similar to our findings from the Large N analysis, the traffic for the six projects chosen for Deep Dive Analysis was more likely to be over-predicted than under-predicted. Deep Dives expand

our knowledge regarding this over-prediction by identifying the contributing sources of the inaccuracy. The key takeaways from the Deep Dive Analysis are presented below:

1. The reasons for forecast inaccuracy are diverse. While the above points list some of the factors that contribute to forecast inaccuracy, it is clear from our limited sample that the reasons for inaccuracies are diverse: external forecasts, travel speeds, population and employment forecasts, and short-term variations from a long-term trend have all been identified as contributing factors in one or more of the Deep Dives.

2. The forecasts for all six projects considered show "optimism bias". For each project, the observed traffic is less than forecast, and for all except US 41, correcting for the factors listed reduces the difference between the forecast and observed traffic.

3. Employment, population and fuel price forecasts frequently contribute to forecast inaccuracy. Adjustments to the forecasts using elasticities and model re-runs confirmed that significant errors in opening-year forecasts of employment, fuel price and travel speed had a major role in the over-estimation of traffic volumes. In addition, we observe that macroeconomic conditions in the opening year influence forecast accuracy, particularly for projects which opened during or after an economic downturn.

4. External traffic and travel speed assumptions also affect traffic forecasts. For the bypass extension project in Cynthiana, the estimated growth rate for external trips was the largest source of forecast error. Travel speed was an important factor for the Eastown Road Extension because inaccurate speeds led to too much diversion from competing roads.

Using the above observations, we can answer our original research questions as follows:

What aspects of the forecasts (such as population forecasts, project scope, etc.)
can we clearly identify as being accurate or inaccurate?

We find that population, employment and fuel price are common factors contributing to forecast inaccuracy, and that external traffic and travel speed can be important in some cases. However, we also find that even within our small sample of Deep Dives, the reasons for forecast inaccuracy are diverse.

If we had gotten those aspects right, how much would it change the traffic forecast?

For each Deep Dive, we calculate how much the forecast would improve if we corrected for input errors that we could identify and quantify. In 4 of 5 cases, correcting the inputs eliminated most of the forecast error. For the Indian River Bridge project, correcting the inputs improved the forecasts only slightly. The summary of the deep dive findings is presented in Table 15.
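The input corrections applied in the Deep Dives can be approximated with constant elasticities: the original forecast is scaled by the ratio of actual to assumed input values, raised to an elasticity. The sketch below illustrates only the form of that calculation; the function, the elasticity values, and the 30,000 ADT example are invented for illustration and are not the elasticities used in the Deep Dives:

```python
def adjust_forecast(forecast_adt, input_ratios, elasticities):
    """Correct a forecast for input errors: each input scales the forecast by
    (actual input / assumed input) ** elasticity (constant-elasticity form)."""
    adjusted = forecast_adt
    for name, ratio in input_ratios.items():
        adjusted *= ratio ** elasticities[name]
    return adjusted

# A road forecast at 30,000 ADT where employment came in 10% below the
# assumed level and fuel prices 20% above it (illustrative elasticities):
adjusted = adjust_forecast(
    30000,
    {"employment": 0.9, "fuel_price": 1.2},
    {"employment": 0.7, "fuel_price": -0.2},
)
print(round(adjusted))  # the corrected expectation, roughly 26,900
```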

Table 15: Known sources of forecast inaccuracy for deep dives

Eastown Road Extension Project, Lima, Ohio
  Original percent difference from forecast: -43%
  After adjusting for errors in: Employment, -39%; Population/Household, -38%; Car Ownership, -37%; Fuel Price/Efficiency, -34%; Travel Time/Speed, -28%
  After all adjustments: -28%

Indian River Bridge, Palm City, Florida
  Original percent difference from forecast: -60%
  After adjusting for errors in: Employment, -59%; Population, -61%; Fuel Price, -56%
  After all adjustments: -56%

Central Artery Tunnel, Boston, Massachusetts
  Original percent difference from forecast: -16%
  After adjusting for errors in: Employment, -10%; Population, -14%; Fuel Price, -10%
  After all adjustments: -10%

Cynthiana Bypass, Cynthiana, Kentucky
  Original percent difference from forecast: -27%
  After adjusting for errors in: Employment, -25%; Population, -25%; External Trips Only, 7%
  After all adjustments: -8%

US 41 (later renamed I-41), Brown County, Wisconsin
  Original percent difference from forecast: -5%
  After adjusting for errors in: Population, -4%; Fuel Price, -6%
  After all adjustments: -6%

Southbay Expressway, San Diego, California
  Revenue was less than projected, leading to bankruptcy of the P3 concessionaire. The available documentation did not allow the effect of the contributing factors (socioeconomic growth, border crossings, toll rates) on traffic volume to be quantified.

Process Findings

Conducting these analyses also provides the opportunity to evaluate the process of learning from traffic forecast accuracy assessments. Similar to other researchers, we found that the biggest challenge in completing the research was acquiring the data itself. We can make several observations about the effectiveness of the process used to conduct this research:

1. While it remains rare, several transportation agencies have started archiving their forecasts in recent years, and we are beginning to see the benefits of that foresight. The data used in the Large N analysis were provided by state DOTs and researchers who had studied traffic forecasts previously. Some of the efforts involved creating databases of all project-level traffic forecasts in recent years. Many of the projects in these databases have not yet opened.
While our analysis was based on about 1,300 projects that have opened, thousands more are available in the databases, waiting to be evaluated after they open.

2. Inconsistency among multiple sources of data limited our ability to draw certain conclusions. The data from different sources were generally provided in a tabular format,

but the fields included were different depending on the source. For example, some included peak hour forecasts while others included only ADT. Some included details on the forecast method, while others did not. Reports used different categories for the type of improvement. We were able to normalize much of the data to a common format, but were left with missing values in a number of important fields. Because missing values are not random, it is difficult to reliably draw conclusions related to specific fields.

3. The available data are not a random sample of projects. Our analysis was based on available data, which are not a random sample of all transportation projects. The data provided by different sources included many different types of projects. For example, we have a large number of projects from some agencies for which the forecasts were made in 2005 or later and the projects are fairly routine. Conversely, for other sources, the forecasts were done 20 or more years ago and tend to be for larger projects. While the data show that more recent forecasts tend to be more accurate, it is difficult to determine whether this is because the methods have improved, or because the projects are more routine and thus less challenging to forecast accurately.

4. Project documentation is often insufficient to evaluate the sources of forecast error. In the Deep Dives, we found that the forecasting accuracy improved after accounting for several exogenous variables like employment rate and population. However, the effect of changes in other potentially important variables could not be ascertained for some of the projects. Improved documentation of the forecast methodology would make such assessments more informative, particularly with respect to the definition of the variables used in the model.

5. Forecast evaluation is most effective when archived model runs are available.
Our most successful Deep Dives were those where we had archived model runs and associated inputs available. This provided a deeper understanding of the parameters and methodology used for forecasting traffic and allowed us to test the effect of changes. While discussing this approach with colleagues early in our research, several expressed skepticism that it would be practical to learn from archived model runs when the software and technology have changed in the years since they were performed. However, we were able to successfully run and learn from all three archived model runs that were provided. These included models 15 or more years old, developed using software several versions older than current releases.

6. The best way to compare the accuracy of forecasting methods is by comparing competing forecasts for the same project. While our data do show that forecasts made with a travel demand model tend to be more accurate than those made by extrapolating traffic count trends, we are unable to draw conclusions about the accuracy of different types of models. This is in part because the details of the models and their features are not typically recorded in our data. Also, the model differences vary by agency, but so do other factors, such as the characteristics of the state or metro area and the type of project. This makes it more difficult to distinguish the effect of each of those factors individually. The best way to compare types of models would be to produce competing forecasts for the same set of projects and compare their accuracy. This would be equivalent to a controlled experiment accounting for all relevant factors.

Starting from these observations, we consider these process questions:

What information should be archived from a forecast?

What data should be collected about actual project outcomes?

Which measures should be reported in future Large-N studies?

Can we define an example structure for future Deep Dives?

The answers to these questions are in the recommendations documented in Part I: Guidance Document.

References

Andersson, M., Brundell-Freij, K., and Eliasson, J. (2017). "Validation of aggregate reference forecasts for passenger transport." Transportation Research Part A: Policy and Practice, 96(Supplement C), 101–118.

Australia Government. (2012). Addressing Issues in Patronage Forecasting for PPP/Toll Roads. Department of Infrastructure, Regional Development and Cities, Canberra, Australia.

Bain, R. (2011a). "On the reasonableness of traffic forecasts." TEC Magazine.

Bain, R. (2011b). "The Reasonableness of Traffic Forecasts: Findings from a Small Survey." Traffic Engineering and Control (TEC) Magazine.

Bain, R. (2013). "Toll Roads: Big Trouble Down Under." Infrastructure Journal.

Bain, R., and Polakovic, L. (2005). "Traffic forecasting risk study update 2005: through ramp-up and beyond." Standard & Poor's, London.

Buck, K., and Sillence, M. (2014). "A Review of the Accuracy of Wisconsin's Traffic Forecasting Tools."

Byram, M. (2015). "Forecasts Accuracy of Certified Traffic for Design."

Cade, B. S., and Noon, B. R. (2003). "A gentle introduction to quantile regression for ecologists." Frontiers in Ecology and the Environment, 1(8), 412–420.

Federal Highway Administration. (2018). Traffic Data Computation Method Pocket Guide. Washington, D.C.

Flyvbjerg, B. (2005). "Measuring inaccuracy in travel demand forecasting: methodological considerations regarding ramp up and sampling." Transportation Research Part A: Policy and Practice, 39(6), 522–530.

Flyvbjerg, B., Skamris Holm, M. K., and Buhl, S. L. (2006a). "Inaccuracy in Traffic Forecasts." Transport Reviews, 26(1).

Flyvbjerg, B., Skamris Holm, M. K., and Buhl, S. L. (2006b). "Inaccuracy in Traffic Forecasts." Transport Reviews, 26(1).

Giaimo, G., and Byram, M. (2013). "Improving Project Level Traffic Forecasts by Attacking the Problem from all Sides." Columbus, OH.

Gomez, J., Vassallo, J. M., and Herraiz, I. (2016).
"Explaining light vehicle demand evolution in interurban toll roads: a dynamic panel data analysis in Spain." Transportation, 43(4), 677–703.

Hartgen, D. T. (2013). "Hubris or humility? Accuracy issues for the next 50 years of travel demand modeling." Transportation, 40(6), 1133–1157.

Highways England. (2015). Post Opening Project Evaluation: M6 Carlisle to Guards Mill Improvement.

Kain, J. F. (1990). "Deception in Dallas: Strategic misrepresentation in rail transit promotion and evaluation." Journal of the American Planning Association, 56(2), 184–196.

Kriger, D., Shiu, S., and Naylor, S. (2006). Estimating Toll Road Demand and Revenue. Synthesis of Highway Practice, Transportation Research Board.

Lemp, J., and Kockelman, K. (2009). "Understanding and Accommodating Risk and Uncertainty in Toll Road Projects." Transportation Research Record: Journal of the Transportation Research Board, 2132, 106–112.

Li, Z., and Hensher, D. A. (2010). "Toll Roads in Australia: An Overview of Characteristics and Accuracy of Demand Forecasts." Transport Reviews, 30(5), 541–569.

Miller, J. S., Anam, S., Amanin, J. W., and Matteo, R. A. (2016). A Retrospective Evaluation of Traffic Forecasting Techniques. Virginia Transportation Research Council.

Nicolaisen, M. S., and Driscoll, P. A. (2014). "Ex-Post Evaluations of Demand Forecast Accuracy: A Literature Review." Transport Reviews, 34(4), 540–557.

Odeck, J., and Welde, M. (2017a). "The accuracy of toll road traffic forecasts: An econometric evaluation." Transportation Research Part A: Policy and Practice, 101, 73–85.

Odeck, J., and Welde, M. (2017b). "The accuracy of toll road traffic forecasts: An econometric evaluation." Transportation Research Part A: Policy and Practice, 101, 73–85.

Parthasarathi, P., and Levinson, D. (2010). "Post-construction evaluation of traffic forecast accuracy." Transport Policy, 17(6), 428–443.

Pereira, F. C., Antoniou, C., Fargas, J. A., and Ben-Akiva, M. (2014). "A Metamodel for Estimating Error Bounds in Real-Time Traffic Prediction Systems." IEEE Transactions on Intelligent Transportation Systems, 15(3), 1310–1322.

U.S. Department of Transportation, Federal Highway Administration. (n.d.). "Public information about the 'FAST Act.'" <https://www.fhwa.dot.gov/fastact/funding.cfm> (Oct. 6, 2016).

Zhang, X., and Chen, M. (2017). "Quantifying Effects from Weather on Travel Time and Reliability." Washington, D.C., 14.

Appendix A: Literature Review

Appendix A Contents

1. INTRODUCTION .............................................. II-59
2. A HISTORY OF FORECAST EVALUATIONS ........................ II-59
3. EXISTING SYSTEMATIC REVIEW PROGRAMS ...................... II-63
4. SUMMARY OF EXISTING OUTCOMES ............................. II-64
5. METHODS OF EVALUATION .................................... II-67
6. IDENTIFIED PROBLEMS WITH FORECAST ACCURACY ............... II-72
7. GAPS IN KNOWLEDGE ........................................ II-75
REFERENCES .................................................. II-76

1. Introduction

The current assessment of traffic forecasting accuracy in NCHRP 08-110 builds upon past efforts. This document summarizes those efforts and what can be learned from them for the current study. It begins by reviewing a history of important past forecast evaluations, and then considers several existing systematic review programs. Beyond these formal programs, there has been an increased interest in the topic among the research community over the past several years. Next, the best existing evidence on the accuracy of travel forecasts is reviewed, summarized from a meta-analysis by Nicolaisen and Driscoll (2014). A selection of studies is then reviewed in further detail for the purpose of considering 1) the methods used to analyze forecast accuracy, and 2) the issues cited as causes of inaccuracy.

2. A History of Forecast Evaluations

Table 1 summarizes key aspects of previous studies evaluating forecast accuracy, providing a survey of the history of forecast evaluations. One of the first examples of an in-depth analysis of predictions for a major large-scale infrastructure investment was Webber's 1976 study of San Francisco's construction of the Bay Area Rapid Transit (BART) system (Webber 1976). BART was the first rail system constructed in a United States city whose dominant mode was the automobile. Webber analyzed virtually all forecast assumptions and predicted benefits. Other studies sponsored by research and government agencies also provided insights. Similar, but smaller scale, comparisons were made on other projects in the 1980s. A British study in 1981 examined the forecasts of 44 projects constructed between 1962 and 1971 (MacKinder and Evans 1981). The authors found no evidence that more recent or sophisticated modeling methods produced more accurate forecasts than earlier or more straightforward methods.
In North America, the United States Department of Transportation produced a report in 1989 that examined the accuracy of 10 major transit investments funded by the federal government. This report (Pickrell 1989), which came to be known as the "Pickrell Report", caused a public stir with its findings: most projects underachieved their projected ridership, while simultaneously accruing capital and operating costs larger than expected. While the Pickrell Report and a number of other accuracy evaluations are focused on transit projects, the resulting criticism often extends to travel forecasting in general. An aim of this research is to fully analyze roadway traffic forecast accuracy in its own right.

The first examination of the reasons for travel forecast inaccuracy was a 1977 study of the psychological biases in decision making under uncertainty. Kahneman and Tversky (1977) proposed the concept of the "inside view", where intimate involvement with a project's details during its planning and development phases leads to systematic over-estimates of its benefits and under-estimates of its costs. This was the first recognition of a systematic flaw in planning that is called "optimism bias" in today's literature. Kahneman and Tversky suggested the use of reference classes to correct these biases. Reference Class Forecasting is the use of the base-rate and distribution results from similar situations in the past to improve forecast accuracy.

Table 1: Summary of historic studies

The benefits of reference class

forecasting were suggested in subsequent work by Bent Flyvbjerg (2007) and Schmitt (2016) to correct for biases in demand and cost forecasts. Because highways are a separate reference class from transit, it is necessary to build the body of observed project outcomes in the highway realm, as can be done through this research.

The number of forecasting accuracy assessments has increased since the year 2000. Bent Flyvbjerg released his seminal work on forecasts for multiple modes (Flyvbjerg, Holm, and Buhl 2005). His article noted that demand forecasts were generally inaccurate and not becoming more accurate over time. His conclusions were based on over 210 transportation projects from across the world. He identified potential causes for this inaccuracy, including inaccurate assumptions and exogenous forecasts (tied to the concept of optimism bias), deliberately slanted forecasts, issues with the analytical tools, and issues with construction or operation. Flyvbjerg suggested one way to improve forecast accuracy is to develop and apply reference classes to projects with large uncertainties.

From 2002 to 2005, Standard & Poor's publicly released annual reports on the accuracy of toll road, bridge and tunnel projects worldwide. The 2005 report (Bain and Polakovic 2005), the most recent report available publicly, analyzed 104 projects. They found that the demand forecasts for those projects were optimistically biased, and this bias persisted into the first five years of operation. They also found that the variability of truck forecasts was much higher than for lighter vehicles. The authors noted that their sample "undoubtedly reflects an over-representation of toll facilities with higher credit quality" and that actual demand accuracy for these types of projects is probably lower than documented in their report.
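The reference class idea introduced above is, mechanically, just the application of an empirical distribution of past outcomes to a new forecast. A minimal sketch of that mechanic, using a nearest-rank quantile and an invented reference class of actual/forecast ratios (none of these numbers come from the studies cited):

```python
def reference_class_window(ratios, forecast, quantiles=(0.1, 0.5, 0.9)):
    """Apply the empirical distribution of actual/forecast ratios from past,
    similar projects (the reference class) to a new point forecast, producing
    an uncertainty window instead of a single estimate."""
    ordered = sorted(ratios)
    window = {}
    for q in quantiles:
        # Nearest-rank empirical quantile (a simple, illustrative choice).
        index = min(int(q * len(ordered)), len(ordered) - 1)
        window[q] = forecast * ordered[index]
    return window

# Hypothetical reference class of ten observed actual/forecast ratios:
past_ratios = [0.55, 0.70, 0.78, 0.85, 0.90, 0.94, 1.00, 1.05, 1.12, 1.30]
print(reference_class_window(past_ratios, 10000))
```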
In their 2004 report (Bain and Plantagie 2004), Standard & Poor's noted optimism bias in forecasts for toll facilities versus non-tolled roadways. The accuracy of non-tolled roads, based on a sample of over 150 projects from Flyvbjerg's database, was much better than that of toll road forecasts, with some projects heavily under-forecasting demand. They found generally a 20-30 percentage point skew (optimism bias) between the two sets of forecasts and noted this was consistent with their previous studies.

The National Cooperative Highway Research Program (NCHRP) released a synthesis on estimated toll road demand and revenue in 2006 (Kriger, Shiu, and Naylor 2006). This study reported the accuracy of 26 toll road revenue forecasts, finding that forecast accuracy does not improve over time. It noted that "many of the problems that had been identified with the performance of traffic and revenue forecasts were related to the application of the model, less so to methods and algorithms". More specifically, this finding is related to the assumptions needed to operationalize the models and not to the data or methods. It recommended analyzing the forecasting inputs and exogenous forecasts, and the improved treatment of uncertainties and risks.

There have been a few recent studies examining the accuracy of non-tolled roadway forecasts. Buck and Sillence (2014) demonstrated the value of using travel demand models in Wisconsin to improve traffic forecast accuracy and provided a framework for future accuracy studies. Parthasarathi and Levinson (2010) examined the accuracy of traffic forecasts for one city in Minnesota. Giaimo and Byram (2013) examined the accuracy of over 2,000 traffic forecasts in Ohio produced between 2000 and 2012. They found the traffic forecasts slightly high, but within the standard error of the traffic count data. They did not find any systematic problems with erroneous forecasts.
The presentation also described an automated forecasting tool for “low risk” projects that relies on trendlines of

historical traffic counts and adjustments following procedures outlined in NCHRP Report 255 (Pedersen and Samdahl 1982) and updated in NCHRP Report 765 (CDM Smith et al. 2014).

The Federal Transit Administration (FTA) has conducted two additional studies analyzing predicted and actual outcomes of large-scale federally funded transit projects, one in 2003 (U.S. Department of Transportation: Federal Transit Administration 2003) and another in 2007 (Federal Transit Administration and Vanasse Hangen Brustlin 2008). The FTA is finding that transit forecasts are becoming more accurate over time, and attributes that improvement to better scrutiny of travel forecasts and the analytical tools used to produce the forecasts.

Schmitt (2016) presented the results of his analysis of all forecasts for New Starts projects built in the United States through 2011. The forecasts were incorporated into the Transit Forecasting Accuracy Database (TFAD). The database contained 65 large-scale transit infrastructure projects from around the country. The research found that transit project forecasts have a historical bias towards over-forecasting ridership. Using these data, Schmitt statistically identified three reference classes for transit forecasting. The research also investigated three commonly held beliefs regarding forecasting accuracy:

• More recent projects are more accurate than older ones (i.e., we are getting more accurate as tools become more advanced),
• Forecasts are more accurate in later stages of project development than in earlier stages (i.e., the more we know about the details of a project, the more accurately we can forecast demand), and
• Forecasts of smaller changes to the transit system are more accurate than those of larger changes (i.e., smaller changes are easier to predict than larger changes).

It found that only the first commonly held belief had merit.
Transit forecasts, on average, are biased but have been, slowly and non-monotonically, becoming more accurate over time. It is important to note, though, that this research has focused on transit; this project will extend the research to highway projects. By the mid-2000s, some studies attempted to identify issues with forecasting practice or the associated analytical methods. NCHRP Special Report 288 (Transportation Research Board 2007) noted that “current practice in travel forecasting has several deficiencies that often diminish the value of these forecasts.” SR 288 identified four areas of deficiency: inherent weaknesses of the models themselves, errors introduced by modeling and forecasting practice, the lack or questionable reliability of data, and biases arising from the institutional climate in which models are used. The Travel Model Improvement Program released two reports to assist with these areas in 2013: “Improving Existing Travel Models and Forecasting Processes: A White Paper” (RSG 2013b) and “Managing Uncertainty and Risk in Travel Forecasting: A White Paper” (RSG 2013a).

3 Existing Systematic Review Programs

Although individual studies analyzing the accuracy of travel forecasts are becoming increasingly prevalent, programs of forecast reviews remain rare. Only three well-known recurring programs dedicated to reviewing predicted and actual outcomes are in practice. The Post-Opening Project Evaluation (POPE) program (Highways England 2015) of the UK’s Highways England, part of the Department for Transport, is the only known regular analytical review of non-tolled roadway forecasts in North America and Europe. It is by far the most impressive review of roadway forecasts. Highways England conducts a regular review of roadway forecasts, assessing the accuracy of demand, cost, accident, and travel time benefit forecasts. Over the past 11 years, Highways England has reviewed smaller roadway projects (i.e., less than £10 million). Highways England also reviews large projects (i.e., greater than £10 million) one and five years after each project’s opening, and a meta-analysis across all recent large projects occurs every two years. The FTA’s Capital Investment Grant program, commonly known as the “New Starts” program, requires Before and After Studies for every major project funded through the program (Federal Transit Administration 2016). Project sponsors are directed to archive the predictions, and the details supporting them, at two planning stages and at the federal funding decision stage. Approximately two years after project opening, project sponsors are required to gather information about the actual outcomes of five major aspects of the project: physical scope, capital cost, transit service levels, operating and maintenance costs, and ridership.
Project sponsors analyze the predictions and actual outcomes, and prepare a report summarizing the differences between them, documenting the reasons for those differences, and highlighting lessons learned that would inform FTA or other project sponsors on how methodologies or circumstances helped or hindered the predictions. FTA’s New Starts program allows project sponsors to enumerate the uncertainties inherent in their travel forecasts and provide information on how those uncertainties may affect the project forecast. FTA has presented a method of “building up” uncertainties, with separate forecasts produced for individual sources of uncertainty, to help identify the key drivers of uncertainty from the travel model’s perspective. Similar approaches could be considered for highway projects. The National Oceanic and Atmospheric Administration’s Hurricane Forecast Improvement Program (HFIP) is the only program that combines forecast accuracy evaluation with improved analytical methods, public communication of forecast uncertainty, and societal benefits (National Oceanic and Atmospheric Administration 2010). The HFIP’s stated accuracy goals were hypothesized to require increased precision in data and analytical methods. The HFIP developed a process to justify and evaluate these investments by placing analytical methods into three streams:

- Stream 1 consists of existing analytical methods and is used for official, real-time forecasts;
- Stream 2 consists of advanced analytical methods that take advantage of increased computing power and increased data precision, but whose forecasts are made offline; and
- Stream 1.5 consists of the elements of Streams 1 and 2 that seem to hold the most promise; its forecasts are made in real time but are not official.

The same input data are fed to all three streams. Efforts that demonstrate increased accuracy and skill are elevated to Stream 1.5 and eventually Stream 1; in this way, empirically proven methods are implemented very quickly. In five years, the HFIP demonstrated a 10% improvement in tropical storm track and intensity forecasts (Toepfer 2015). The HFIP is the only known program that uses a forecast skill metric in addition to traditional accuracy metrics. Advanced analytical methods must not only be accurate, but must also provide better accuracy than simpler, less expensive methods. In this way, analytical methods proven to be better than simpler (termed “naïve”) methods are recommended for immediate implementation, while shortfalls in accuracy and skill are noted and used to prioritize future research efforts. The HFIP directly tied improvement goals in forecast accuracy to societal benefits: “Forecasts of higher accuracy and greater reliability are expected to lead to higher user confidence and improved public response, resulting in savings of life and property” (National Oceanic and Atmospheric Administration 2010). As the first years of the program produced many successes, the accuracy goals were increased, aiming eventually to provide residents a reliable 7 days’ advance warning of an impending storm. The estimated benefit of avoiding an unnecessary evacuation is $1,000 per person, and has been estimated at $225-380 million for larger storms (Toepfer 2015). In this way, the HFIP sponsors are able to justify the cost of implementing more complex and expensive methods.

4 Summary of Existing Outcomes

Nicolaisen and Driscoll (2014) provide a recent meta-analysis of the demand forecast accuracy literature. That meta-analysis is not repeated here, but it is summarized to provide an existing baseline estimate of expected forecast accuracy.
Their analysis considers 12 studies that have a sizable database of completed road and/or rail projects, that provide distributions based on those projects, and that specify the sources of information considered. Table 2 shows the studies included, and Table 3 summarizes the results. Both tables are reproduced directly from their paper.

Table 2: Summary of studies included in Nicolaisen and Driscoll (2014) meta-analysis

Their main finding is that the observed inaccuracy of forecasts varies with the type of project:

- For rail projects, the mean inaccuracy is negative, meaning that actual demand is less than the demand that was predicted. The general range is that actual demand is 16-44% less than forecast demand.
- For toll road projects, the mean inaccuracy is also negative, indicating that actual demand is less than forecast. The meta-analysis considered two studies of toll roads, with Bain (2009a) showing a mean of -23% for a global sample of toll roads, and Welde and Odeck (2011) showing a mean of -3% in Norway.
- For untolled road projects, the mean inaccuracy is positive, with most results showing 3-11% more traffic in reality than was forecast.

Table 3: Summary of results included in Nicolaisen and Driscoll (2014) meta-analysis

They also note that for all types of projects there is considerable variation in the results, regardless of the mean. It should be noted that the studies available here are limited, particularly of untolled roads in the United States, so these results should be considered with a degree of caution. Nonetheless, it is interesting to note the difference in direction for untolled road projects relative to rail and toll road projects, with the forecasts predicting too little demand for untolled roads and too much demand for rail and toll roads. One can hypothesize possible explanations for this difference:

- There could be a methodological difference such that transit and rail are more difficult to predict for technical reasons having to do with their being lower-share alternatives, the difficulty of estimating good values-of-time, or the challenges associated with identifying transit markets or transit users.
- It may be that rail and toll road projects only get built when the forecasts show strong demand, whereas untolled road projects tend to get funded regardless. This could lead to optimism bias in the forecasts, as suggested by Flyvbjerg (2007), or it could lead to self-selection bias, as suggested by Eliasson and Fosgerau (2013), where projects with forecasts that happen to be too low don’t get built and therefore don’t end up in the sample.
- It could also be that the long-term trends of the past 40 years, growing auto ownership, the entry of women into the workforce, and high levels of suburbanization, combined to create a future that was not anticipated at the time the forecasts were made and that systematically pushed people toward using roads and away from transit.

While it is easy to speculate on the possible sources of errors, it is difficult to know for certain what the issue is.
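The sign convention used above can be stated precisely. Following the definition used by Flyvbjerg et al. (2006), inaccuracy is actual minus forecast traffic as a percentage of forecast traffic, so a negative value indicates over-forecasting. A minimal sketch in Python, with hypothetical volumes:

```python
def percent_inaccuracy(actual, forecast):
    """Inaccuracy as (actual - forecast) / forecast, in percent.

    Negative values mean actual demand fell short of the forecast
    (over-forecasting); positive values mean the forecast was too low.
    """
    return 100.0 * (actual - forecast) / forecast

# Hypothetical toll road: forecast 40,000 vehicles/day, actual 30,800.
print(percent_inaccuracy(30_800, 40_000))   # -23.0 -> actual is 23% below forecast

# Hypothetical untolled road: forecast 40,000 vehicles/day, actual 44,000.
print(percent_inaccuracy(44_000, 40_000))   # 10.0 -> forecast was 10% too low
```

Under this convention, the -23% mean reported by Bain (2009a) corresponds to a mean actual-to-forecast ratio of 0.77.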
As Nicolaisen and Driscoll note: “The studies that make the greatest effort to address this aspect are rarely able to provide more than rough indications of causal mechanisms.” They go on to point out that a key challenge is the lack of the data necessary to conduct such studies, in particular the infrequent availability of archived forecasts. Specifically: “The lack of availability for necessary data items is a general problem and probably the biggest limitation to advances in the field.” It is from this starting point that NCHRP 08-110 begins: limited studies on untolled roads in the US, little information on the sources of forecast errors, and a general lack of data to conduct such studies. We consider how best to improve upon this situation. To do so, we review a selection of additional studies for two purposes: first, to consider methods for conducting accuracy evaluations, and second, to identify factors that may be sources of forecasting error.

5 Methods of Evaluation

The next question of particular relevance to this study is how to go about assessing forecast accuracy. For this question, we consider a selection of studies, summarized in Table 4, covering their research data, analysis procedures, key results, and suggested or identified sources of error. These studies span different types of projects, including untolled roads, toll roads, and rail or transit projects. These studies, as well as those identified in Table 1, reveal two main methods of evaluating the accuracy of forecasts: Deep Dives and Large N studies. Deep Dives are examples in which a single project is analyzed in detail to determine what went right and what went wrong in the forecast. Individual before-and-after studies from the FTA Capital Investment Grant Program are classic examples of Deep Dives. These studies often involve custom data collection before and after the project, such as onboard transit surveys. The sources of forecast errors—such as errors in inputs, model issues, or changes in the project definition—are considered and identified. The advantage of Deep Dives is that they allow a complex set of issues to be thoroughly investigated. They also reveal the importance of assumptions made by modelers in relation to the data and the particular models that were used. The disadvantage is that it is often unclear whether the lessons from one project can be generalized to others. In contrast, Large N studies consider a larger sample of projects in less depth. Flyvbjerg (2005) extols the virtues of Large N studies as the necessary means of coming to general conclusions. Often, Large N studies include a statistical analysis of the error and bias observed in forecasts compared to actual data. Flyvbjerg et al. (
2006) consider a Large N analysis of 183 road and 27 rail projects, and Standard and Poor’s conducted a Large N analysis with a sample of 150 toll road forecasts (Bain and Plantagie 2004). Other examples of Large N studies are the Minnesota, Wisconsin, and Ohio analyses (Parthasarathi and Levinson 2010; Buck and Sillence 2014; Giaimo and Byram 2013). The two approaches are not mutually exclusive. For example, if enough Deep Dives are conducted, they can become the basis for a Large N analysis. Schmitt provides an example of this with his analysis of FTA-funded rail projects (Schmitt 2016). This project will apply both Deep Dives and Large N analysis as complementary evaluation tools. Specifically, it will use Large N analysis to measure the amount and distribution of forecast errors, including errors segmented by variables such as project type and various risk factors. It will use Deep Dives to explore the sources of forecast error—if we got the wrong answer, why are we wrong? Two recent studies provide the most complete current thinking on how to approach each, and will serve as a framework for this study to follow. Whereas most studies focus on reporting descriptive statistics of forecast errors, Odeck and Welde (2017) define and apply a formal econometric framework for evaluating traffic forecast accuracy. The descriptive statistics, typically the percentage error (PE), mean percentage error (MPE), and mean absolute percentage error (MAPE), are useful and will continue to be used for descriptive purposes. The econometric framework is advantageous because it provides a simple but statistically robust method for estimating the bias. It does so by estimating the following regression:

yᵢ = α + β·ŷᵢ + εᵢ

Where yᵢ is the actual traffic on project i, ŷᵢ is the forecast traffic on project i, and εᵢ is a random error term. α and β are parameters estimated in the regression. The null hypothesis is that the forecasts are unbiased, in which case the estimated value of α will be 0 and that of β will be 1. Odeck and Welde (2017) provide some minor variations on this approach, including a method to determine the efficiency of the forecasts, which are not repeated here. It is easy to see how this econometric framework can be extended to test additional segmentation, or additional terms in the regression. For example, α and β can be segmented by the type of project, the agency conducting the forecast, or the number of years between the forecast and the opening year. This provides a framework from which a wealth of factors can be explored, with different levels of segmentation depending on the number of observations in each segment.
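As a sketch of how this framework operates, the snippet below computes the descriptive statistics (MPE and MAPE) alongside an ordinary least squares fit of actual on forecast traffic. The six project volumes are made up for illustration, and a full application of Odeck and Welde’s framework would also compute standard errors and formally test α = 0, β = 1:

```python
# Made-up opening-year AADT for six hypothetical projects.
actual   = [21_000, 18_500, 33_000, 12_200, 27_500, 9_800]
forecast = [25_000, 20_000, 31_000, 15_000, 30_000, 11_000]

n = len(actual)

# Descriptive statistics: percentage error (PE) per project, mean
# percentage error (MPE), and mean absolute percentage error (MAPE),
# all relative to the forecast.
pe   = [100 * (a - f) / f for a, f in zip(actual, forecast)]
mpe  = sum(pe) / n
mape = sum(abs(e) for e in pe) / n

# OLS fit of actual on forecast: y_i = alpha + beta * yhat_i + eps_i.
# Under the null hypothesis of unbiased forecasts, alpha = 0 and beta = 1.
mean_y, mean_f = sum(actual) / n, sum(forecast) / n
beta = (sum((f - mean_f) * (y - mean_y) for f, y in zip(forecast, actual))
        / sum((f - mean_f) ** 2 for f in forecast))
alpha = mean_y - beta * mean_f

print(f"MPE  = {mpe:.1f}%")    # about -9.2% here: actual below forecast on average
print(f"MAPE = {mape:.1f}%")
print(f"alpha = {alpha:.1f}, beta = {beta:.2f}")
```

Segmenting α and β by project type or forecast horizon amounts to fitting the same regression separately on each subset of projects.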

Table 4: Summary of select studies and methods

Odeck and Welde (2017), “The accuracy of toll road traffic forecasts: An econometric evaluation”
- Research data: 68 toll road projects in Norway.
- Analysis procedure: Percentage error and mean absolute percentage error over the 68 projects; formally defines the econometric structure for analyzing accuracy.
- Results: Toll road traffic forecasts are underestimated but close to accurate, because the mean percentage error is a mere 4%. This result sharply contrasts with international studies that found large overestimations of more than 20%. The accuracy of forecasts has not improved since transport models became mandatory.
- Suggestions/sources of error: “It can be argued that one major advantage of the Norwegian system that other countries can learn from is this accumulated experience with forecasting where only one organization is responsible for overseeing the forecasting and where a standard software / framework is used, combined with little or no incentives to exaggerate forecasts.”

Gomez, Vassallo, and Herraiz (2016), “Explaining light vehicle demand evolution in interurban toll roads: a dynamic panel data analysis in Spain”
- Research data: Spanish toll road network.
- Analysis procedure: Dynamic panel to estimate demand (essentially OLS regression); relative change in travel demand induced by a relative change in each explanatory variable. Explanatory variables are GDP (provincial and national), employment (provincial and national), GDP per capita (provincial and national), previous-year demand, toll rates, fuel price and fuel cost (efficiency), and location (coast or interior).
- Results: Employment and GDP per capita are the more consistent explanatory variables for travel demand elasticity; location also matters.

Li and Hensher (2010), “Toll Roads in Australia: An Overview of Characteristics and Accuracy of Demand Forecasts”
- Research data: Australian toll road network.
- Analysis procedure: OLS and panel random-effects regression models.
- Results: Actual traffic was about 45% lower than predicted during the first year of operation. All other factors remaining unchanged, the percentage error in the forecast is reduced by 2.44 percentage points for every additional year since opening.
- Suggestions/sources of error: Less toll road capacity when opened than forecast; elapsed time of operation (roads open longer had higher traffic levels); time of construction (longer construction delayed traffic growth and increased the error); toll road length (shorter roads attracted less traffic); cash payment (modern no-cash payment increased traffic); and fixed versus distance-based tolling (fixed tolls reduced traffic).

Flyvbjerg et al. (2006), “Inaccuracy in traffic forecasts”
- Research data: 183 real projects around the world.
- Analysis procedure: Actual minus forecast traffic as a percentage of forecast traffic, in the opening year.
- Results: About half of the road projects have a forecasting error of more than ±20%, and 25% of them an error of more than ±40%.
- Suggestions/sources of error: Uncertainties about trip generation and land-use development.

Bain (2011b), “On the reasonableness of traffic forecasts”
- Research data: Survey of forecasters.
- Analysis procedure: Surveyed forecasters to identify how accurate they expect forecasts to be.
- Results: Expected accuracy for an existing road is ±15% 5 years out and ±32.5% 20 years out. Expected accuracy for a new road is ±25% 5 years out and ±42.5% 20 years out.
- Suggestions/sources of error: Projections of population, GDP, car ownership, households, employment, and fuel price (and/or efficiency).

Bain (2009a), “Error and optimism bias in toll road traffic forecasts”
- Research data: 100 toll road projects.
- Analysis procedure: Ratio of actual to forecast traffic, by years from opening.
- Results: On average, the ratio of actual to forecast traffic has a mean of 0.77 and a standard deviation of 0.26; that is, traffic is over-predicted by 23% on average.
- Suggestions/sources of error: Experience with toll roads; tariff escalation; forecast horizon; toll facility details; surveys/data collection; private users; commercial users; micro-economics; traffic growth.

European Court of Auditors (2013), “Are EU cohesion policy funds well spent on roads?” Special Report No. 5, Luxembourg
- Research data: 24 road investment projects in Germany, Greece, Poland, and Spain.
- Analysis procedure: Comparison of forecast versus actual Average Annual Daily Traffic (AADT).
- Results: On average, actual traffic was 15% below forecast traffic, but the projects clearly improved safety and saved travel time.
- Suggestions/sources of error: Consider travel time savings, safety, etc. as performance measures; make improvements on the cost side.

Federal Transit Administration (2016), “Guidance on Before-and-After Studies of New Starts Projects”
- Research data: Guidance for transit before-and-after studies.
- Analysis procedure: Focus on trips-on-project, plus transit dependents and other measures.
- Suggestions/sources of error: Population and employment forecasts, housing trends and costs, global and local economic conditions, other planned transportation improvements, time-of-day assumptions, parking prices, fuel prices, and long-term changes in vehicle technology.

Anam, Miller, and Amanin (2017), “A Retrospective Evaluation of Traffic Forecasting Accuracy: Lessons Learned from Virginia”
- Research data: 39 studies from Virginia.
- Analysis procedure: 1. Obtain forecast volumes from the Virginia studies. 2. Obtain observed volumes corresponding to the forecast year and location. 3. Measure accuracy by comparing forecast volumes to observed volumes. 4. Document assumptions in assessing accuracy. 5. Identify explanatory factors of forecast accuracy.
- Results: The average value of the median absolute percent error of all studies was about 40%.
- Suggestions/sources of error: Forecast method (trend-based forecasts are more accurate than activity-based forecasts when the number of economic recessions between the base and forecast year is 2 or more and for long durations; long-term trend-based studies are more accurate than short-term ones); accuracy increases as forecast duration decreases.

Kriger, Shiu, and Naylor (2006), “Estimating toll road demand and revenue”
- Research data: 15 US toll roads opened between 1986 and 1999.
- Analysis procedure: Survey of practitioners; reports forecast versus actual revenues by year after opening.
- Results: On average, actual traffic was 35% below predicted traffic.
- Suggestions/sources of error: Long-range demographic and socioeconomic forecasts, including land use (job and household growth rates according to a variety of national, state, and regional third-party sources); short-term economic fluctuations (local oil price and a subsequent sharp regional economic downturn); inaccuracy in travel demand inputs; value of time and willingness to pay.

Nunez (2007), “Sources of Errors and Biases in Traffic Forecasts for Toll Road Concessions”
- Research data: 49 worldwide toll road concessions.
- Analysis procedure: Considers the behavior of forecasters, promoters, and users, focusing on strategic decisions and possible overconfidence; estimates decreasing marginal utility of transport and value of time.
- Results: There is a strong “winner’s curse” in toll road concessions. Using a single average value of time can lead to over-estimation.
- Suggestions/sources of error: Further disaggregate values of time; restructure the concession process to minimize the winner’s curse, especially under high uncertainty.

Andersson, Brundell-Freij, and Eliasson (2017a), “Validation of aggregate reference forecasts for passenger transport”
- Research data: Eight Swedish national forecasts for passenger traffic made between 1975 and 2009.
- Analysis procedure: Forecasts compared against a simple trendline as a reference; the research adjusts forecasts to correct for input errors in population growth, fuel price, fuel economy, car ownership, and GDP by applying elasticities.
- Results: Since the early 1990s, forecasts for car traffic have generally predicted growth rates of around 1.5% per year on average, whereas actual growth rates have been around 0.8% per year. The model-based forecasts still out-perform trendlines, and the models with corrected inputs out-perform trendlines by a greater margin.
- Suggestions/sources of error: Errors in input assumptions: average income (usually taken to be equal to GDP per capita), GDP growth (average absolute error over all forecasts is 3 percentage points), population, fuel price, car ownership (average absolute error of 3 percentage points), and vehicle fuel economy; how population growth will be distributed among different types of municipalities; license holding (it explains two-thirds of the error in the Samplan 1999 forecast).

A second recent study provides a strong framework for how to approach Deep Dives. Andersson, Brundell-Freij, and Eliasson (2017a) examine aggregate (not project-level) forecasts of car traffic in Sweden. There are two elements of interest in their approach. First, they compare the forecasts to a reference forecast: a simple trendline as it would have been projected at the time the forecast was made. They argue that a forecast adds value when it is able to out-perform this simple trendline, and they find that the forecasts examined generally do out-perform it. They also find that if errors in the input data are corrected, the forecasts out-perform the trendline to a greater degree. This approach provides a useful point of comparison, although it is more limiting in the case of project-level forecasts because it cannot be applied to evaluating forecasts of new facilities. Second, Andersson, Brundell-Freij, and Eliasson (2017a) consider the forecast versus actual values of five important input variables: fuel price, fuel economy, car ownership per person, growth in GDP per capita, and population growth. Using elasticities for each, they estimate how having the correct input value for each of these terms would affect the forecast. Table 5, taken directly from their paper, summarizes the results of their analysis. It shows that correcting for errors in these five inputs would reduce the root mean square error of the forecasts from 0.64 to 0.12, with the biggest benefit associated with getting the fuel price correct. This type of analysis is useful because it provides insight into why the forecasts are wrong, and where we should focus our efforts if we wish to improve them. The Deep Dives in NCHRP 08-110 will aim to provide a similar analysis for each project considered in detail—if we got this right, how much would we improve the forecast?
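The input-correction step can be sketched compactly. A common way to apply such elasticity corrections (an assumption for illustration here, not necessarily the exact formulation used by Andersson, Brundell-Freij, and Eliasson) is to scale the forecast by the ratio of actual to assumed input values, raised to the corresponding demand elasticity:

```python
# Sketch of an elasticity-based input correction: scale the original
# forecast by (actual input / assumed input) raised to that input's
# demand elasticity. All numbers below are illustrative assumptions,
# not values from Andersson, Brundell-Freij, and Eliasson (2017a).
def correct_forecast(forecast, inputs):
    """inputs maps a variable name to (assumed, actual, elasticity)."""
    corrected = forecast
    for assumed, actual, elasticity in inputs.values():
        corrected *= (actual / assumed) ** elasticity
    return corrected

traffic_forecast = 50_000  # vehicles/day, hypothetical project forecast
inputs = {
    # fuel price rose 40% more than assumed; demand elasticity -0.25
    "fuel_price":     (1.00, 1.40, -0.25),
    # GDP per capita grew 5% less than assumed; demand elasticity +0.80
    "gdp_per_capita": (1.00, 0.95, 0.80),
}
corrected = correct_forecast(traffic_forecast, inputs)
print(round(corrected))  # roughly 44,100 vehicles/day
```

Comparing the corrected forecast to the actual outcome then indicates how much of the error is attributable to input errors rather than to the model itself.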
Table 5: Effect of correcting for input errors in forecasts of Swedish car traffic (Andersson, Brundell-Freij, and Eliasson 2017a)

6 Identified Problems with Forecast Accuracy

A component of the Deep Dives will be an effort to assess the sources of forecast error. A number of authors have proposed a range of hypotheses for what those sources may be. Generally, these can be grouped into three categories: technical problems, optimism bias, and selection bias (Nicolaisen and Driscoll 2014; Flyvbjerg 2007; Eliasson and Fosgerau 2013). Technical problems include limitations of the data, the methods, and the assumptions made during the process. It has been noted that in some cases the impact of the assumptions on the forecast is greater than that of the forecasting model or method used. Optimism bias can be the result of the “insider’s view” or of political pressure to achieve certain forecasts to justify the project. Selection bias could occur because projects with high forecasts are more likely to get built, even if the underlying forecasts for all projects considered are unbiased. These latter two issues may explain the discrepancy between untolled highway projects and rail transit and toll roads—because the forecasts can play a larger role in whether the latter get built, there is more potential for both optimism bias and selection bias. The importance of core assumptions to the accuracy, or inaccuracy, of forecasts is consistent with William Ascher’s examination of forecasts in five areas: population, the economy (current-dollar and real GNP), energy (electricity, petroleum consumption), transportation, and technology (Ascher 1979). He found improvements in forecasting methods to be only a secondary precursor to achieving greater accuracy; according to Ascher, when a forecast fails to capture the reality of the future context, little is left for the methodology to contribute. He also found that the more distant the forecast target date, the less accurate the forecast becomes. He further identified systematic biases associated with the institutional sites of forecasts. In examining the possible sources of error, we consider the possible explanations offered by a selection of studies, as summarized in Table 6. The cells in the table indicate which of these studies cited each of the issues in the column headers as a possible source of error. The most commonly cited sources all relate to the economy: employment, GDP, and recession/economic conditions. Land use, population projections, and housing are also commonly cited.
It is important to note here that these studies generally hypothesize possible explanations rather than clearly demonstrate sources of error. Nonetheless, the table is useful in that it provides an enumeration of factors that can be considered in detail in the Deep Dives.

Table 6: Summary of select studies and factors cited as contributing to accuracy issues

Studies included: Odeck and Welde 2017; Gomez, Vassallo, and Herraiz 2016; Li and Hensher 2010; Flyvbjerg et al. 2006; Bain 2011; Bain 2009; European Court of Auditors 2013; Chatterjee et al. 1997; Spielberg et al. 2007; FTA 2013; Anam et al. 2016; NCHRP Synthesis 364 (2006); Nunez 2007; Andersson, Brundell-Freij, and Eliasson 2017; Yang, Li, and Wu 2017.

Number of studies citing each factor as an issue: employment (5); GDP (6); recession/economic conditions (5); trip generation/travel characteristics (4); land use (4); population projection (2); housing prediction (3); car ownership (2); fuel price (4); fuel efficiency (2); time savings (2); location (1); time of operation (1); toll road capacity (1); length of road (1); cash payment/value of time (3); ramp-up period (1); tolling culture (1); time-of-day (0); traffic calculations (1); forecast duration (1).

7 Gaps in Knowledge

The research reviewed here provides a starting point for understanding the existing evidence on forecast accuracy, as well as a strong foundation for how to approach such studies and what factors may contribute to inaccuracy. A limitation is that the projects considered are not necessarily representative of forecasts in general. There is a strong representation of rail projects (U.S. Department of Transportation: Federal Transit Administration 2003; Federal Transit Administration and Vanasse Hangen Brustlin 2008), toll roads (Bain 2009; Odeck and Welde 2017; Kriger, Shiu, and Naylor 2006), and road projects in Europe (Andersson, Brundell-Freij, and Eliasson 2017; Welde and Odeck 2011; Highways England 2015), but limited study of untolled traffic forecasts in the United States (Anam, Miller, and Amanin 2017; Buck and Sillence 2014; Parthasarathi and Levinson 2010). In fact, Hartgen (2013a) has called the unknown accuracy of US urban road forecasts “the greatest knowledge gap in US travel demand modeling.” NCHRP 08-110 seeks to close that gap.

References

Andersson, Matts, Karin Brundell-Freij, and Jonas Eliasson. 2017. “Validation of Aggregate Reference Forecasts for Passenger Transport.” Transportation Research Part A: Policy and Practice 96 (Supplement C): 101–18. https://doi.org/10.1016/j.tra.2016.12.008.

Ascher, William. 1979. Forecasting: An Appraisal for Policy-Makers and Planners.

Bain, R., and J. Plantagie. 2004. “Traffic Forecasting Risk: Study Update 2004.” In Proceedings of the European Transport Conference. Strasbourg, France. https://trid.trb.org/view.aspx?id=841273.

Bain, Robert. 2009. “Error and Optimism Bias in Toll Road Traffic Forecasts.” Transportation 36 (5): 469–82. https://doi.org/10.1007/s11116-009-9199-7.

Bain, Robert. 2011. “The Reasonableness of Traffic Forecasts: Findings from a Small Survey.” Traffic Engineering and Control (TEC) Magazine, May 2011.

Bain, Robert, and Lidia Polakovic. 2005. “Traffic Forecasting Risk Study Update 2005: Through Ramp-up and Beyond.” Standard & Poor’s, London. http://toolkit.pppinindia.com/pdf/standard-poors.pdf.

Buck, Karl, and Mike Sillence. 2014. “A Review of the Accuracy of Wisconsin’s Traffic Forecasting Tools.” https://trid.trb.org/view/2014/C/1287942.

CDM Smith, Alan Horowitz, Tom Creasy, Ram M. Pendyala, Mei Chen, National Research Council (U.S.), Transportation Research Board, et al. 2014. Analytical Travel Forecasting Approaches for Project-Level Planning and Design. Washington, D.C.: Transportation Research Board.

Eliasson, Jonas, and Mogens Fosgerau. 2013. “Cost Overruns and Demand Shortfalls – Deception or Selection?” Transportation Research Part B: Methodological 57: 105–13. https://doi.org/10.1016/j.trb.2013.09.005.

European Court of Auditors. 2013. “Are EU Cohesion Policy Funds Well Spent on Roads?” Luxembourg: Publ. Office of the Europ. Union. https://www.eca.europa.eu/Lists/ECADocuments/SR13_05/SR13_05_EN.PDF.

Federal Transit Administration. 2016.
“Guidance on Before-and-After Studies of New Starts Projects.” Text. FTA. April 4, 2016. https://www.transit.dot.gov/funding/grant-programs/capital- investments/guidance-and-after-studies-new-starts-projects. Federal Transit Administration, and Vanasse Hangen Brustlin. 2008. “The Predicted and Actual Impacts of New Starts Projects -- 2007: Capital Cost and Ridership.” Flyvbjerg, B., M. K. S. Holm, and S. L. Buhl. 2005. “How (In)Accurate Are Demand Forecasts in Public Works Projects?: The Case of Transportation.” Journal of the American Planning Association 71 (2). https://trid.trb.org/view.aspx?id=755586. Flyvbjerg, B., Skamris Holm, M. K, and S. L. Buhl. 2006. “Inaccuracy in Traffic Forecasts.” Transport Reviews 26 (1). https://trid.trb.org/view/2006/C/781962. Flyvbjerg, Bent. 2005. “Measuring Inaccuracy in Travel Demand Forecasting: Methodological Considerations Regarding Ramp up and Sampling.” Transportation Research Part A: Policy and Practice 39 (6): 522–30. https://doi.org/10.1016/j.tra.2005.02.003. Flyvbjerg, Bent. 2007. “Policy and Planning for Large-Infrastructure Projects: Problems, Causes, Cures.” Environment and Planning B: Planning and Design 34 (4): 578 – 597. https://doi.org/10.1068/b32111. Giaimo, Greg, and Mark Byram. 2013. “Improving Project Level Traffic Forecasts by Attacking the Problem from All Sides.” presented at the The 14th Transportation Planning Applications Conference, Columbus, OH. Gomez, Juan, José Manuel Vassallo, and Israel Herraiz. 2016. “Explaining Light Vehicle Demand Evolution in Interurban Toll Roads: A Dynamic Panel Data Analysis in Spain.” Transportation 43 (4): 677–703. https://doi.org/10.1007/s11116-015-9612-3. Hartgen, David T. 2013. “Hubris or Humility? Accuracy Issues for the next 50 Years of Travel Demand Modeling.” Transportation 40 (6): 1133–57. https://doi.org/10.1007/s11116-013-9497-y. Highways England. 2015. “Post Opening Project Evaluation (POPE) of Major Schemes: Main Report.” Kahneman, Daniel, and Amos Tversky. 
1977. “Intuitive Prediction: Biases and Corrective Procedures.” DTIC Document. http://oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=ADA047747.

Kriger, David, Suzette Shiu, and Sasha Naylor. 2006. "Estimating Toll Road Demand and Revenue." NCHRP Synthesis of Highway Practice 364. Transportation Research Board. https://trid.trb.org/view/2006/M/805554.
Li, Zheng, and David A. Hensher. 2010. "Toll Roads in Australia: An Overview of Characteristics and Accuracy of Demand Forecasts." Transport Reviews 30 (5): 541–69. https://doi.org/10.1080/01441640903211173.
MacKinder, I. H., and S. E. Evans. 1981. "The Predictive Accuracy of British Transport Studies in Urban Areas." Transport and Road Research Laboratory. https://trid.trb.org/view.aspx?id=179881.
National Oceanic and Atmospheric Administration. 2010. "Hurricane Forecast Improvement Program: Five-Year Strategic Plan." http://www.hfip.org/documents/hfip_strategic_plan_yrs1-5_2010.pdf.
Nicolaisen, Morten Skou, and Patrick Arthur Driscoll. 2014. "Ex-Post Evaluations of Demand Forecast Accuracy: A Literature Review." Transport Reviews 34 (4): 540–57. https://doi.org/10.1080/01441647.2014.926428.
Nunez, Antonio. 2007. "Sources of Errors and Biases in Traffic Forecasts for Toll Road Concessions." PhD Thesis, Université Lumière Lyon 2.
Odeck, James, and Morten Welde. 2017. "The Accuracy of Toll Road Traffic Forecasts: An Econometric Evaluation." Transportation Research Part A: Policy and Practice 101 (July): 73–85. https://doi.org/10.1016/j.tra.2017.05.001.
Parthasarathi, Pavithra, and David Levinson. 2010. "Post-Construction Evaluation of Traffic Forecast Accuracy." Transport Policy 17 (6): 428–43. https://doi.org/10.1016/j.tranpol.2010.04.010.
Pedersen, N. J., and D. R. Samdahl. 1982. "Highway Traffic Data for Urbanized Area Project Planning and Design." NCHRP Report 255. Washington, D.C.: Transportation Research Board.
Pickrell, D. H. 1989. "Urban Rail Transit Projects: Forecast Versus Actual Ridership and Costs. Final Report." https://trid.trb.org/view.aspx?id=299240.
RSG. 2013a. "Managing Uncertainty and Risk in Travel Forecasting: A White Paper." FHWA-HEP-14-030. Travel Model Improvement Program (TMIP). Washington, D.C.: Federal Highway Administration.
RSG. 2013b. "Improving Existing Travel Models and Forecasting Processes: A White Paper." FHWA-HEP-14-019. Travel Model Improvement Program (TMIP). Washington, D.C.: Federal Highway Administration.
Anam, Salwa, John S. Miller, and Jasmine Amanin. 2017. "A Retrospective Evaluation of Traffic Forecasting Accuracy: Lessons Learned from Virginia." Washington, D.C.
Schmitt, David. 2016. "A Transit Forecasting Accuracy Database: Beginning to Enjoy the 'Outside View.'" In Transportation Research Board 95th Annual Meeting. Washington, D.C.
Toepfer, Fred. 2015. "Presentation to TRB Travel Demand Forecasting Committee Meeting." January 13.
Transportation Research Board. 2007. "Metropolitan Travel Forecasting: Current Practice and Future Direction." Special Report 288.
U.S. Department of Transportation: Federal Transit Administration. 2003. "Predicted and Actual Impacts of New Starts Projects: Capital Cost, Operating Cost and Ridership Data."
Webber, Melvin W. 1976. "The BART Experience—What Have We Learned?" Institute of Urban & Regional Development, October. http://escholarship.org/uc/item/7pd9k5g0.
Welde, Morten, and James Odeck. 2011. "Do Planners Get It Right? The Accuracy of Travel Demand Forecasting in Norway." EJTIR 1 (11): 80–95.

II-78 Appendix B: Large N Analysis

Contents

1. INTRODUCTION .................................................................. II-80
2. AVAILABLE DATA AND KEY CHALLENGES ............................................ II-81
   2.1 DATA ...................................................................... II-81
   2.2 DATABASE STRUCTURE ........................................................ II-83
   2.3 DECISION VARIABLES ........................................................ II-85
3. METHODOLOGY ................................................................... II-87
   3.1 METHODOLOGIES USED IN EXISTING LITERATURE ................................. II-87
   3.2 EVALUATION YEAR ........................................................... II-88
   3.3 DEFINITION OF ERRORS ...................................................... II-89
   3.4 DISTRIBUTION OF ERRORS .................................................... II-90
   3.4 BIAS DETECTION ............................................................ II-91
   3.5 LEVEL OF ANALYSIS: BY SEGMENT OR BY PROJECT ............................... II-92
   3.6 DATA CLEANING AND FILTERING ............................................... II-93
   3.7 OUTLIERS .................................................................. II-94
   3.8 CALCULATING THE NUMBER OF LANES REQUIRED .................................. II-95
4. DATA EXPLORATION .............................................................. II-97
   4.1 OVERALL DISTRIBUTION ...................................................... II-99
   4.2 FORECAST VOLUME ........................................................... II-101
   4.3 FUNCTIONAL CLASS .......................................................... II-104
   4.4 AREA TYPE ................................................................. II-105
   4.5 TYPE OF PROJECT ........................................................... II-106
   4.6 TOLLS ..................................................................... II-107
   4.7 YEAR FORECAST PRODUCED .................................................... II-108
   4.8 OPENING YEAR .............................................................. II-110
   4.9 FORECAST HORIZON .......................................................... II-112
   4.10 UNEMPLOYMENT RATE IN OPENING YEAR ........................................ II-113
   4.11 CHANGE IN UNEMPLOYMENT RATE .............................................. II-114
   4.12 FORECAST METHOD .......................................................... II-115
   4.13 TYPE OF FORECASTER ....................................................... II-116
   4.14 EFFECT ON NUMBER OF LANES ................................................ II-117
5. ECONOMETRIC ANALYSIS .......................................................... II-118
   5.1 BASE MODEL ................................................................ II-119
   5.2 INCLUSIVE MODEL FOR INFERENCE ............................................. II-122
   5.3 FORECASTING MODEL ......................................................... II-125
REFERENCES ....................................................................... II-130

1. Introduction

The current assessment of traffic forecasting accuracy in NCHRP 08-110 builds upon past efforts. Several studies have assessed the accuracy of traffic forecasts, although most have focused on toll roads. The emphasis appears to stem from the fact that toll road forecasts bear directly on investor expectations, which raises the stakes for their accuracy. As evidence of this, the Australian Government (2012) cited "inaccurate and over-optimistic" traffic forecasts as a threat to investor confidence, and three lawsuits underway at the time challenged forecasts for toll roads whose traffic subsequently came in significantly under projections (Bain 2013).

Li and Hensher (2010) evaluated the accuracy of traffic forecasts for Australian toll roads and found a general over-prediction of traffic: actual traffic was, on average, about 45% lower than predicted in the first year of operation. Accuracy improved only slowly over time, with the percentage error shrinking by just 2.44% for each year after opening. They attributed the errors to lower toll road capacity at opening than forecast, elapsed time of operation (roads open longer had higher traffic levels), construction time (longer construction delayed traffic growth and increased the error), toll road length (shorter roads attracted less traffic), payment method (modern cashless payment increased traffic), and fixed versus distance-based tolling (fixed tolls reduced traffic). Bain (2011), on the other hand, took the ratio of actual to forecast traffic for 100 toll road projects and found under-prediction of 23% on average. The factors he identified were mostly toll culture (prior existence of toll roads, toll acceptance, etc.), errors in data collection, and unforeseen micro-economic growth in the locality.
This observation is supported by research into the accuracy of forecasts on the Spanish toll road network (Gomez, Vassallo, and Herraiz 2016), which found economic factors (employment and GDP per capita) to be consistent variables for travel demand estimation. Odeck and Welde (2017) looked into 68 Norwegian toll roads and found that while toll-road traffic is underestimated, the forecasts are close to accurate, with a mean percentage error of a mere 4%. Flyvbjerg et al. (2006) conducted a before-and-after study of 183 road projects around the world, comparing actual and forecast traffic in the opening year; about half had forecast errors of more than ±20%, and a quarter were off by about 40% in either direction. They attributed the errors to uncertainties in trip generation and land-use patterns. Similar results were reported by Kriger, Shiu, and Naylor (2006), who reviewed the forecasts for 15 US toll roads and found that, on average, actual traffic was 35% below predicted traffic.

From 2002 to 2005, Standard & Poor's publicly released annual reports on the accuracy of forecasts for toll road, bridge and tunnel projects worldwide. The 2005 report (Bain and Polakovic 2005), the most recent available publicly, analyzed 104 projects. It found that the demand forecasts for those projects were optimistically biased, and that this bias persisted into the first five years of operation. It also found that the variability of truck forecasts was much higher than that of forecasts for lighter vehicles. The authors noted that their sample "undoubtedly reflects an over-representation of toll facilities with higher credit quality" and that actual demand accuracy for these types of projects is probably lower than documented in their report. Despite this body of work on toll road forecast errors, there has been little comparable research on non-tolled roads.
There have been a few recent studies examining the accuracy of non-tolled roadway forecasts. Buck and Sillence (2014) demonstrated the value of using travel demand models in Wisconsin to improve traffic forecast accuracy and provided

a framework for future accuracy studies. Parthasarathi and Levinson (2010) examined the accuracy of traffic forecasts for one city in Minnesota. Giaimo and Byram (2013) examined the accuracy of over 2,000 traffic forecasts produced in Ohio between 2000 and 2012. They found the traffic forecasts to be slightly high, but within the standard error of the traffic count data, and did not find any systematic problems with erroneous forecasts. Their presentation also described an automated forecasting tool for "low risk" projects that relies on trendlines of historical traffic counts and adjustments following procedures outlined in NCHRP Report 255 (Pedersen and Samdahl 1982) and updated in NCHRP Report 765 (CDM Smith et al. 2014). In a study of 39 road projects in Virginia, Miller et al. (2016) reported a median absolute percent error across all studies of about 40%. This portion of NCHRP 08-110 aims to conduct a similar analysis using data on forecast and actual traffic for a combined data set of about 1,300 projects from six states and four European countries.

2. Available Data and Key Challenges

This analysis uses the database compiled as part of the NCHRP 08-110 project. The database contains traffic forecasts and actual traffic information for road projects in several states. The records are compiled from existing databases maintained by the DOTs, ESAL reports, project reports, traffic/environmental impact statements, and databases from similar research efforts. The database contains information on the project itself (unique project ID, improvement type, facility type, location, length), the forecast (year forecast produced, forecast year, methodology, etc.), and the actual traffic counts. The primary metric used in comparing forecast accuracy is Average Daily Traffic (ADT).
2.1 Data

Data are included from six states: Florida, Massachusetts (one project), Michigan, Minnesota, Ohio and Wisconsin, as well as from four European countries: Denmark, Norway, Sweden and the United Kingdom. In addition, we have acquired data from Virginia and Kentucky, but the format of these data is different, and they will require additional effort to enter into the database. For example, in Kentucky we have traffic forecast reports, but limited context about the projects and when/if they opened, making it challenging to match the forecasts to opening year counts without adequate local knowledge. This is left as a future exercise.

Because the database has been compiled from different state DOTs, it contains inconsistencies as well as missing information. For example, Florida District 4 (D4) and District 5 (D5) data were provided in different formats: D4 in Excel format, while D5 was extracted from scanned PDF reports. Actual count information for Florida District 5 was obtained by matching the count station ID in each report with the Florida Historical Traffic Count Database. The Michigan dataset was provided by Michigan DOT in the form of both PDF reports and Excel tables. The Minnesota dataset was gathered from previous studies in the form of an Excel table; the raw data, scanned PDF reports produced by Minnesota DOT, are held at the Minnesota Historical Society Archives. Count maps, also provided as scanned PDFs, were used to obtain the actual ADT information. These data were collected between 2007 and 2009, and forecasts beyond this timeframe are not included in the dataset. Where clear information was unavailable, assumptions have been made; these assumptions are specific to individual states and are based on the data provided by the agencies.

For example, the Minnesota reports contain little information on forecast methodology. Since these are old forecasts, it is assumed that they were made using traffic count trends. For a detailed list of assumptions and a description of the dataset, please see Appendix A.

For several DOTs, actual counts were given on the same roadway, but there was no mention of when the project was completed. Missing key information, such as project type/type of improvement, roadway functional class, and forecast methodology, was even more common. Because we are comparing the forecast traffic to the actual traffic after the project has opened, it is imperative that the actual count be collected after opening. In most cases, however, the databases maintained by the state DOTs do not clarify whether the actual traffic counts were taken after the project was completed, or whether the project was completed in the year it was forecast to open.

So far, our database includes project and forecast information from Ohio, Wisconsin, Florida (Districts 4 and 5), Minnesota and Michigan, and from Denmark, Sweden, Norway and the UK (European projects from the Nicolaisen (2012) database). The Wisconsin and Minnesota datasets come from two published studies of forecast accuracy: Buck and Sillence (2014) for Wisconsin and Parthasarathi and Levinson (2010) for Minnesota. The Florida D4 data were also obtained from a published study (Traffic Forecasting Sensitivity Analysis, 2015), which compares the actual count in the forecast year with the forecast traffic. We therefore assume that the actual traffic counts listed in these datasets were taken after the project was completed. For the Ohio dataset, the actual year of completion was given for a few projects/segments.
For the others, there was no indication whether the counts were taken after the project opened. Similarly, the Florida District 5 datasets were compiled from ESAL reports, which again give no indication of the actual opening year of the projects. A short summary of the available information, with state names replaced by agency codes to protect anonymity, is presented in Table 1:

Table 1: Summary of Available Data

                    All Projects                  Opened Projects
Agency          Segments   Unique Projects    Segments   Unique Projects
Agency A           1123         385               425         381
Agency B             12           1                12           1
Agency C             38           7                 5           3
Agency D           2176         103              1292          99
Agency E          12413        1863              1242         562
Agency F            463         132               463         132
Agency G            472         120               472         113
Total             16697        2611              3911        1291

In total, our database contains reports for 2,611 unique projects, with 16,697 segments associated with those projects. A segment is a distinct portion of roadway for which a forecast is provided. For example, forecasts for an interchange improvement project may contain segment-level estimates for both directions of the freeway, for both directions of the crossing arterial, and for each of the ramps. Some of these projects have not yet opened, some segments do not have actual count data associated with them, and others do not pass our quality control checks for inclusion in the statistical analysis (the filtering process is described below). While all records are retained for future use, the Large-N analysis is based on a filtered subset of 1,291 projects and 3,911 segments.

A range of projects is included. The opening year varies from 1970 to 2017, with about 90% of the projects opening in 2003 or later. While the exact nature and scale of each project is not always known, inspection reveals that the older projects are more likely to be major infrastructure projects, and the newer projects are more likely to be routine work for the DOT, e.g., resurfacing of an existing roadway. For example, almost half of the projects are design forecasts for repaving. Such differences are driven largely by data availability. Some state agencies have begun tracking all forecasts as a matter of course, but the records rarely go back more than 10-15 years.
The older projects were entered from paper reports, or scans of paper reports, and both the availability of documentation and the willingness to spend the effort examining it are higher for bigger projects. Thus, this is not a random sample of projects, and there are notable differences not only in the methods used across agencies, but also in the mix of projects included in the database. This is an important limitation that readers should bear in mind as they interpret our results.

2.2 Database Structure

The Traffic Forecast Database accumulated as part of the project provides the starting point for the Large-N analysis. The data are available in the form of a Microsoft Access database, whose structure and use are documented in the previously provided report "NCHRP Database – User's Guide". The primary fields in the Forecast Database can be classified into three types:

1. Project Information
2. Forecast Information
3. Actual Traffic Count Information

The Project Information table holds information specific to the project characteristics: a Project/Report ID unique to each project, project description, the year the project/report was completed, type of project, city or location, state, construction cost, etc. Forecast Information includes the data related to the traffic forecast: the forecast itself, who made it, in what year it was produced, and for what year. It also includes the type of forecast year (opening, mid-design or design year), the methodology used, whether any post-processing was done, and similar information. The Actual Traffic Count Information includes the actual traffic volume on a particular segment, the year of observation, and the project opening year. The key fields in the database are given in Table 2.

Table 2: Key Fields in NCHRP 08-110 Database

Name                             Description
Brief Description                Brief written description of the project
Project Year                     Year of the project, construction year, or year the forecast report was produced
Length                           Project length in miles
Functional Class                 Type of facility (Interstate, Ramp, Major/Minor Arterial, etc.)
Improvement Type                 Type of project (Resurfacing, Adding lanes, New construction, etc.)
Area Type                        Area type where the facility lies (Rural, Urban, etc.)
Construction Cost                Project construction cost
State                            State code
Internal Project ID              Project ID, Report ID or Request ID
County                           County in which the facility lies
Toll Type                        Kind of tolls applied on the facility (No tolls, Static, Dynamic, etc.)
Year of Observation              Year the actual traffic count was collected
Count                            Actual traffic count
Count Units                      Units used to collect count information
Station Identifier               Count station ID or other identifier for the count station
Traffic Forecast                 Forecast traffic volume
Forecast Units                   Units used to forecast traffic (AADT, AAWT)
Forecast Year                    Year of forecast
Forecast Year Type               Period of forecast: opening, mid-design or design period
Year Forecast Produced           Year the forecast was produced/generated
Forecasting Agency               Organization responsible for the forecast
Forecast Methodology             Method used to forecast traffic (Traffic Count Trend, Regional Travel Demand Model, Project-Specific Model, etc.)
Post Processing Methodology      Any post-processing or alternative methodology used
Post Processing Explanation      Explanation, as warranted, where a post-processing methodology is used
Segment Description              Description of the segment for which the forecast was made
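The three groups of fields can be pictured as one flat record per forecast segment. The sketch below is illustrative only: the field names mirror the key fields in Table 2 but are not the actual column names in the Access database, and the `ForecastRecord` type is our own construction.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ForecastRecord:
    """One segment-level forecast, optionally paired with an observed count.

    Field names are hypothetical; they follow Table 2, not the real schema.
    """
    project_id: str
    segment_description: str
    functional_class: int              # 1 = Interstate ... 8 = Unknown
    improvement_type: int              # 1-12, per Table 3
    forecast_volume: float             # forecast ADT
    forecast_year: int
    year_forecast_produced: int
    actual_count: Optional[float] = None       # observed ADT, if available
    year_of_observation: Optional[int] = None  # year the count was collected

    @property
    def forecast_horizon(self) -> int:
        # Years between producing the forecast and the forecast year,
        # one of the decision variables discussed in Section 2.3.
        return self.forecast_year - self.year_forecast_produced
```

A record with no `actual_count` corresponds to a project that has not yet opened or has no matched count station, and would be excluded by the filtering described above.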

2.3 Decision Variables

Based on the nature of the NCHRP 08-110 database, we can select some variables that might dictate future adjustments to forecasts. These variables are: the type of project, the methodology used, the roadway type, the area type, and the forecast horizon (the difference between the year the forecast was produced and the year of opening).

Project types are coded into the database as Improvement Type. Along with unknown improvement types, improvements are categorized into 12 types, which are further grouped into Projects on Existing Roadway, New Construction Projects and Unknown Project Types (Table 3).

Table 3: Description of Project Types in the NCHRP Database

ID in Database   Improvement Type                                                   Unified Improvement Type
1                Resurfacing/Replacement/no or minor improvements                   Project on Existing Roadway
2                In existing facility, add intersection capacity                    Project on Existing Roadway
3                In existing facility, add mainline/mid-block capacity in
                 general purpose lane(s)                                            Project on Existing Roadway
4                In existing facility, add new dedicated lane(s)                    Project on Existing Roadway
5                In existing facility, add new managed lane(s)                      Project on Existing Roadway
6                In existing facility, add new reversible lane(s)                   Project on Existing Roadway
7                New general-purpose lane(s) facility                               New Construction Project
8                New dedicated lane(s) facility                                     New Construction Project
9                New managed lane(s) facility                                       New Construction Project
10               New reversible lane(s) facility                                    New Construction Project
11               Other New Facility                                                 New Construction Project
12               Unknown Improvement                                                Unknown Project Type

The Functional Class column in the database is coded according to the FHWA functional classification. For a few datasets, the functional classes were provided in an older format and were converted into the new format (Table 4).

Table 4: Description of Functional Class in the NCHRP Database

ID in Database   Functional Class
1                Interstate or Limited-access facility
2                Ramp
3                Principal Arterial
4                Minor Arterial
5                Major Collector
6                Minor Collector
7                Local
8                Unknown Functional Class
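The grouping of the 12 improvement-type codes into three unified categories in Table 3 can be expressed as a small lookup helper. This is a minimal sketch; the function name is our own, and it assumes the integer codes are exactly as listed in Table 3.

```python
# Unified categories from Table 3.
EXISTING = "Project on Existing Roadway"
NEW = "New Construction Project"
UNKNOWN = "Unknown Project Type"

def unified_improvement_type(code: int) -> str:
    """Collapse an improvement-type code (1-12) into its unified category.

    Codes 1-6 are work on an existing facility, 7-11 are new
    construction, and 12 (or anything unrecognized) is unknown.
    """
    if 1 <= code <= 6:
        return EXISTING
    if 7 <= code <= 11:
        return NEW
    return UNKNOWN
```

Treating unrecognized codes as unknown (rather than raising an error) mirrors how missing improvement-type information is handled elsewhere in the database.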

The area type where the facility lies is coded in four categories: Rural, Mostly Rural, Urban and Unknown (Table 5). The definitions of these categories are consistent with the US Census Bureau's definitions of urban and rural areas. The Bureau defines an urban area as a territory with at least 2,500 people. The percentage of people living in rural areas in a county determines whether the county is rural (100%), mostly rural (50-99%) or urban (<50%).

Table 5: Description of Area Type in the NCHRP Database

ID in Database   Area Type
1                Rural
2                Mostly Rural
3                Urban
4                Unknown Area Type

Forecast methodologies were identified from the project reports or the datasets provided by the state DOTs. For example, for the Florida D4 dataset, the methodology was derived from the Method column and then reassigned to the NCHRP categories (Table 6). For much of the database, where the methodology is not clearly described, several assumptions have been made (see the previous section) to sort the records into the NCHRP codes.

Table 6: Description of Forecast Methodology in the NCHRP Database

ID in Database   Forecast Methodology            Explanation
1                Traffic Count Trend             Compound and linear growth rates, linear interpolation, regression models, etc., using historical ADT or traffic counts at a specific count station
2                Population Growth Rates         Forecasts based on socio-economic data or population forecasts for a TAZ or project catchment area
3                Project-Specific Travel Model   Travel demand model created specifically for a project
4                Regional Travel Demand Model    Travel demand model for a region, e.g., the Central Florida Regional Planning Model (CFRPM) or the Florida Standard Urban Transportation Model Structure (FSUTMS)
5                Professional Judgement          Usually a combination of a traffic count trend and travel demand model volumes
6                Unknown Methodology             No record of methodology used
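The Census-based area-type rule described above is a simple threshold test on the county's rural population share. A minimal sketch, assuming `percent_rural` is the percentage (0-100) of the county's population living in rural areas; the function name is our own.

```python
def census_area_type(percent_rural: float) -> str:
    """Classify a county by its rural population share, following the
    Census Bureau thresholds cited in the text:
    rural (100%), mostly rural (50-99%), urban (<50%)."""
    if percent_rural >= 100.0:
        return "Rural"
    if percent_rural >= 50.0:
        return "Mostly Rural"
    return "Urban"
```

Counties with no population data would fall into the fourth database category, Unknown Area Type, before this rule is applied.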
Several assumptions have also been made to code the Forecasting Agency in the NCHRP format (Table 7). For example, for the Florida District 4, Minnesota and Wisconsin projects, the forecasting agency is assumed to be the state DOT. Consultants under contract to state DOTs (as for the Florida District 5 projects) are categorized separately.

Table 7: Description of Forecasting Agency in the NCHRP Database

ID in Database   Forecast Agency
1                State DOT
2                Metropolitan Planning Organization
3                City/County agency
4                Other public agency
5                Consultant

3. Methodology

This study uses a Large-N analysis to measure the amount and distribution of forecast errors, including errors segmented by variables such as project type and various risk factors. Large-N studies consider a larger sample of projects in less depth. Flyvbjerg (2005) extols the virtues of Large-N studies as the necessary means of coming to general conclusions. Often, Large-N studies include a statistical analysis of the error and bias observed in forecasts compared to actual data. Flyvbjerg et al. (2006) conducted a Large-N analysis of 183 road and 27 rail projects, and Standard & Poor's conducted a Large-N analysis with a sample of 150 toll road forecasts (Bain and Plantagie 2004). Other examples of Large-N studies are the Minnesota, Wisconsin and Ohio analyses (Parthasarathi and Levinson 2010; Buck and Sillence 2014; Giaimo and Byram 2013). This section presents a brief overview of the methodologies used in the existing literature and explains the methodology used in the current research.

3.1 Methodologies Used in Existing Literature

Briefly, the goal of a Large-N analysis is to answer the question: How close were the forecasts to observed volumes? (Miller et al. 2016). To answer it, researchers have generally compared two sets of data: forecast and actual traffic in the opening year, and again in the design year. Several authors have evaluated the accuracy of project-level traffic forecasts by comparing them with actual traffic counts. Odeck and Welde (2017) looked at 68 Norwegian toll road projects implemented between 1975 and 2013 and calculated the mean percentage error against the forecast value.
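The mean-percentage-error measure, and the absolute-error variants used in other studies, can be sketched in a few lines. This is an illustrative sketch only: the studies summarized in Table 8 differ in sign convention and in whether the error is measured against the forecast or the observed value; here the error is measured against the forecast value, and the function names are our own.

```python
from statistics import mean, median

def percent_error(actual: float, forecast: float) -> float:
    # Signed percentage error, measured against the forecast value.
    # Negative values mean actual traffic fell short of the forecast.
    return 100.0 * (actual - forecast) / forecast

def mean_percent_error(pairs) -> float:
    # Average signed error over (actual, forecast) pairs; errors of
    # opposite sign cancel, so this measures bias rather than spread.
    return mean(percent_error(a, f) for a, f in pairs)

def median_absolute_percent_error(pairs) -> float:
    # Median of unsigned errors; robust to a few extreme segments.
    return median(abs(percent_error(a, f)) for a, f in pairs)
```

The contrast between the two summary statistics matters: a mean signed error near zero with a large median absolute error indicates noisy but unbiased forecasts, while a large signed mean indicates systematic optimism or pessimism.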
A similar procedure is used by Li and Hensher (2010) in their study of 14 toll roads in Australia and by Flyvbjerg et al. (2006) for 183 toll projects from around the world. A summary of the existing research and the methodologies used is given in Table 8.

Table 8: Summary of Existing Large-N Methodologies

Paper | Research Data | Analysis Procedure
Odeck and Welde (2017) | 68 Norwegian toll road projects implemented between 1975 and 2013 | Mean percentage error relative to the forecast value; bias and efficiency of estimates examined in an econometric framework
Li and Hensher (2010) | 14 toll roads in Australia | Mean percentage error; ordinary least squares and random-effects regression models with percentage error as the dependent variable
Flyvbjerg et al. (2006) | 183 projects around the world | Percentage error
Bain (2009) | 104 international toll road, bridge, and tunnel case studies | Actual/forecast traffic
Miller, Anam, Amanin, and Matteo (2016) | 39 studies from Virginia | Mean absolute percentage error for each segment and median absolute percentage error for individual projects (both relative to the observed value)
Parthasarathi and Levinson (2010) | 108 project reports obtained from MnDOT | Actual/forecast traffic

3.2 Evaluation Year

From the database and project reports, we see that traffic forecasts are usually made for three years:

1. Opening year
2. Mid-design or interim year (usually 10 years after opening)
3. Design year (usually 20 years after opening)

The actual traffic counts are obtained from the DOTs' count stations. For example, Florida District 5 has detailed traffic counts from its count stations from 1972 to 2016. By matching the count stations with the traffic forecast report, we can obtain the actual traffic count for a given year. Three calculations of error, or percent difference from forecast, can be performed:

1. Percent difference from the opening year forecast
2. Percent difference from the interim/mid-design year forecast
3. Percent difference in a year between the opening and mid-design years, in which case the forecast traffic value can be interpolated.
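The interpolation in item 3 can be sketched as follows; the years and volumes are hypothetical, and the forecast is simply interpolated linearly between whichever forecast years are available for a project.

```python
import numpy as np

# Hypothetical forecasts for one project: opening year, mid-design year
# (opening + 10), and design year (opening + 20).
forecast_years = [2010, 2020, 2030]
forecast_adt = [10_000, 12_000, 14_000]

# A count taken in 2015 falls between the opening and mid-design years,
# so the forecast is interpolated linearly to that year for comparison.
count_year = 2015
interpolated = np.interp(count_year, forecast_years, forecast_adt)
print(interpolated)  # -> 11000.0
```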
The purpose of computing errors for different years is to evaluate whether forecast performance improves over time. Li and Hensher (2010) report that, all other factors remaining unchanged, the error in forecasts is reduced by 2.54 percentage points for every additional year since opening; that is, we see annual improvements, on average, in the accuracy of forecasts as we move away from the start date. This finding is supported by Vassallo and Baeza (2007) with evidence that traffic forecasting

effectiveness for Spanish toll roads tends to improve over time; in particular, that research found an average first-year error of -35.18%, -31.14% in the second year, and -27.06% in the third year. This research focuses on the evaluation of opening year forecasts for the practical reason that the interim and design years have not yet been reached for the vast majority of projects.

3.3 Definition of Errors

One of the differences in methodology among previous Large-N studies is how they define errors. Miller et al. (2016a), CDM Smith et al. (2014), and Tsai, Mulley, and Clifton (2014) define error as the predicted volume minus the actual volume, such that a positive result is an over-prediction. Odeck and Welde (2017), Welde and Odeck (2011), and Flyvbjerg, Holm, and Buhl (2005) define error the other way, such that a positive value represents under-prediction. There are also two schools of thought on presenting the error as a percentage: over the actual traffic (Tsai, Mulley, and Clifton 2014; Miller et al. 2016) versus over the forecast traffic (Flyvbjerg, Holm, and Buhl 2005; Nicolaisen and Næss 2015; Odeck and Welde 2017). An advantage of the former is that the percentage is expressed in terms of a real quantity (observed traffic); an advantage of the latter is that, when the forecast is made, uncertainty can be expressed in terms of the forecast value, since the observed value is unknown (Miller et al. 2016). Besides these two methods, Bain (2009a) and Parthasarathi and Levinson (2010) evaluated forecast performance by taking the ratio of actual to forecast traffic. From the discussion above and the summary in Table 8, we see essentially two schemes for evaluating forecast performance: as a percentage error and as a ratio.
Within those schemes, there is some disagreement as to whether the percentage error should be taken relative to the observed count or to the forecast value, and as to the direction of the sign. In this study, we follow the convention described in Odeck and Welde (2017a), who express the percent error as the actual count minus the forecast volume, divided by the forecast volume. We recognize that the Odeck and Welde approach differs from the standard convention of expressing percent error with the actual observation in the denominator. We find it more useful to express the error as a function of the forecast volume because the forecast volume is known at the time the project decision is made, while the actual volume is not. This means that if we expect a 10% error, that 10% can be applied directly to the forecast volume. To make this distinction clear, we express this as the percent difference from forecast (PDFF):

PDFF_i = (Actual Count_i - Forecast Volume_i) / Forecast Volume_i × 100%    (1)

where PDFF_i is the percent difference from forecast for project i. Negative values indicate that the actual outcome is lower than the forecast (over-prediction), and positive values indicate that the actual outcome is higher than the forecast (under-prediction). The appeal of this expression is that it expresses the error as a function of the forecast, which is known first. The distribution of the PDFF over the dataset can then reveal whether traffic forecasts are systematically biased. As for summarizing the percent difference from forecast over the dataset, the use of the mean percent difference from forecast and the mean absolute percent difference from forecast has varied

across studies. The mean absolute percent difference from forecast has been acknowledged to "allow [researchers] to better understand the absolute size of inaccuracies across project" (Odeck and Welde 2017), since positive and negative values tend to offset each other when the mean percent difference from forecast is calculated. We continue in this tradition, but again translate it into the language of percent difference from forecast:

MAPDFF = (1/n) × Σ_{i=1}^{n} |PDFF_i|    (2)

where n is the total number of projects.

3.4 Distribution of Errors

Researchers have mostly presented the results of their Large-N studies as histograms of percentage error, as shown in Figure 1. Bain (2009a) further fitted the distribution using distribution-fitting software, which suggested a normal distribution with mean 0.77 and standard deviation 0.26. Goodness of fit was measured with a chi-squared statistic, and a t-test was performed to test whether the bias was significant.

Figure 1: Example Histograms of Forecast Accuracy (Sources: Flyvbjerg, Holm, and Buhl 2005; Bain 2009a)

This research reports distributions of the percent difference from forecast in terms of the PDFF defined in Equation 1. The mean gives the central tendency of the dataset, the median the 50th percentile value, and the standard deviation the spread. For categorical variables, this research employs violin plots (Figure 2). Violin plots are similar to histograms and box plots in that they show an abstract representation of the probability distribution of the sample. Rather than showing counts of data points that fall into bins, or order statistics, violin plots use kernel density estimation (KDE) to compute an empirical distribution of the sample.
In this research, we used the 5th and 95th percentile values, rather than the conventional interquartile range, as depicted in Figure 2. A percentile value indicates the percentage of data points that fall below it. In effect, this range depicts the 90% probability range of the percent difference from forecast for any categorical variable.
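Equations 1 and 2, and the 5th/95th percentile band used in the violin plots, can be computed directly; the segment volumes below are illustrative only.

```python
import numpy as np

def pdff(actual, forecast):
    """Equation 1: percent difference from forecast. Negative values
    indicate over-prediction (actual below forecast); note that the
    forecast, not the actual count, is in the denominator."""
    actual = np.asarray(actual, float)
    forecast = np.asarray(forecast, float)
    return (actual - forecast) / forecast * 100.0

def mapdff(errors):
    """Equation 2: mean absolute percent difference from forecast."""
    return np.mean(np.abs(errors))

# Illustrative data: five segments, mostly over-predicted.
errors = pdff([9_000, 11_000, 8_000, 10_000, 7_000],
              [10_000, 10_000, 10_000, 10_000, 10_000])
print(mapdff(errors))                  # -> 14.0
print(np.percentile(errors, [5, 95]))  # 90% range of the PDFF values
```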

Figure 2: Anatomy of a Violin Plot

3.4 Bias Detection

Odeck and Welde (2017) employed an econometric approach to determine the bias and the efficiency of the estimates by regressing the actual value on the forecast value using the equation:

y_i = α + β·ŷ_i + ε_i    (Equation 3)

where y_i is the actual traffic on project i, ŷ_i is the forecast traffic on project i, and ε_i is a random error term. α and β are terms estimated in the regression; α = 0 and β = 1 together imply unbiasedness. Li and Hensher (2010) estimated ordinary least squares and random-effects linear regression models to explain the variation in the percentage forecast error with explanatory variables (year open, elapsed time since opening, etc.). Miller et al. (2016a) performed an ANOVA (analysis of variance) test on the median absolute percentage error with a limited number of explanatory variables (difference between forecast year and opening year, forecast method, duration of forecast, and number of recessions between base year and forecast year). Both studies found their models to be a good fit for explaining the errors. The end goal of such analysis is to present the range of forecast errors as a function of several variables, such as when the project was opened or the difference between the forecast year and the existing year. This research will do so by following the Odeck and Welde (2017) structure, but introducing additional terms as descriptive variables:

y_i = α + β·ŷ_i + γ·X_i + ε_i    (Equation 4)

where X_i is a vector of descriptive variables associated with project i, and γ is a vector of estimated model coefficients associated with those descriptive variables. To consider multiplicative effects as opposed to additive effects, we will also consider a log-transformed model:

ln(y_i) = α + β·ln(ŷ_i) + γ·ln(X_i) + ε_i    (Equation 5)

which is equivalent to:

y_i = e^α · ŷ_i^β · X_i^γ · e^(ε_i)    (Equation 6)

In this formulation, α = 0 and β = 1 still imply unbiasedness, ignoring the other terms. In addition to estimating biases, we are also interested in how the distribution of PDFF relates to different descriptive variables. For example, it may be that forecasts with longer time horizons remain unbiased but have a higher spread, as measured by the MAPDFF. To examine this, we extend the above framework to use quantile regression instead of ordinary least squares (OLS) regression. Whereas OLS predicts the mean value, quantile regression predicts the values for specific percentiles of the distribution (Cade and Noon 2003). Quantile regression has been used in transportation in the past for applications such as quantifying the effect of weather on travel time and travel time reliability (Zhang and Chen 2017), where an event may have a limited effect on the mean value but increase the likelihood of a long delay. It has also been used to estimate error bounds for real-time traffic predictions (Pereira et al. 2014), an application more analogous to this project. In our case, we estimate quantile regression models of the actual count as a function of the forecast and other descriptive variables. We do so for the 5th percentile, the 20th percentile, the median, the 80th percentile, and the 95th percentile. This establishes an uncertainty window in which the median provides our expected value, or an "adjusted forecast", the 5th or 20th percentiles provide lower bounds on the expected value, and the 80th and 95th percentiles provide upper bounds.

3.5 Level of Analysis: by Segment or by Project

In assessing project forecast accuracy, one question arises: what constitutes an observation? A typical road project is usually divided into several links or segments within the project boundary.
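The bias test of Equation 3 can be sketched on synthetic data with a numpy-only least-squares fit; the data below are fabricated for illustration, with actual outcomes deliberately set about 10% below forecast. (Equation 4 would add columns for the descriptive variables, and the quantile-regression variant would swap the squared loss for a pinball loss, e.g., via statsmodels' QuantReg.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: 200 forecasts whose actual outcomes run about 10%
# below the forecast, i.e. a deliberately biased set of forecasts.
forecast = rng.uniform(5_000, 50_000, 200)
actual = 0.9 * forecast + rng.normal(0, 500, 200)

# Equation 3: actual_i = alpha + beta * forecast_i + eps_i.
# alpha = 0 and beta = 1 together would indicate unbiased forecasts.
X = np.column_stack([np.ones_like(forecast), forecast])
alpha, beta = np.linalg.lstsq(X, actual, rcond=None)[0]
print(round(beta, 2))  # well below 1, flagging systematic over-prediction
```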
The links are usually on different alignments or carry traffic in different directions. To uniquely identify each project in the database, a column titled "Internal Project ID" was specified. This column typically contains the project's unique financial ID, report number, control number, etc. Under the same Internal Project ID, forecast and traffic count information for the different segments is recorded with a unique Segment ID. Analysis can thus be done at two levels:

1. Segment level: assessing the accuracy of the forecast for each segment or link.
2. Project level: assessing the total accuracy of the forecast for each individual project, identified by its unique Internal Project ID.

The limitation of presenting accuracy metrics at the segment level is that the observations are not independent. Consider, for example, a project with three segments connected end-to-end. It is reasonable to expect that the PDFF on these segments is correlated, perhaps uniformly high or low. Whether we treat these as one combined observation or three independent observations, we would expect the average PDFF to be roughly the same. There would be a difference, however, in the measured t-statistics: the larger sample size of a segment-level analysis could suggest significance where a project-level analysis would not.

Project-level analysis is free of the correlation across observations described above, but the question remains how to assess the accuracy of a project. In the Virginia study (Miller et al. 2016a), where each project consisted of between 1 and 2,493 links, the researchers took the median absolute percent error over the segments or links of each project and then used the mean to express the level of accuracy. Nicolaisen (2012) measured accuracy by taking the sum of forecast and actual traffic volumes over the segments in a project. Another method is to take the length-weighted traffic volume described in Miller et al. (2016):

Weighted Traffic Volume = Σ_i (Volume on link i × Length of link i) / Σ_i (Length of link i)    (Equation 8)

The issue with using the weighted traffic volume (forecast and actual) is the absence of length data in most of the records. In addition, taking the total traffic as in Nicolaisen (2012) cannot show the relation between forecast accuracy and project type by vehicles serviced. Taking these considerations into account, in this study we measure inaccuracy at the project level using average traffic volumes, where each segment within a project is given equal weight. We report the distribution of percent difference from forecast at both the project level and the segment level. The results, presented later in this chapter, show that averaging to the project level appears to average out some of the errors observed at the segment level. We report a number of one-dimensional metrics at the project level but estimate the econometric models at the segment level.

3.6 Data Cleaning and Filtering

As mentioned previously, our primary objective is to compare the forecast traffic with the actual post-opening traffic.
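The equal-weight project-level aggregation can be sketched with pandas; the column names and values below are illustrative, not the actual NCHRP database schema.

```python
import pandas as pd

# Hypothetical segment records for two projects.
segments = pd.DataFrame({
    "internal_project_id": ["A", "A", "A", "B"],
    "forecast_volume": [10_000, 12_000, 8_000, 20_000],
    "actual_count": [9_000, 11_000, 9_500, 21_000],
})

# Equal-weight averaging of segments within each project, then PDFF on
# the project-level averages.
projects = segments.groupby("internal_project_id")[
    ["forecast_volume", "actual_count"]].mean()
projects["pdff"] = ((projects["actual_count"] - projects["forecast_volume"])
                    / projects["forecast_volume"] * 100)
print(projects)
```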
The NCHRP 08-110 database presents challenges for analysis because of differences in the record-keeping practices of the contributing states (explained in Section 3.2.1). We arrived at a uniform scheme to clean up the missing information and prepare the flat data for analysis. First, we filtered out the records in the database that have no actual traffic count data, as well as those for projects that have not yet been completed. The second filter may seem redundant, but the database contains records with actual traffic counts even though the project was forecast to be completed at a later date. This discrepancy occurred mostly for projects on existing roadways with traffic count stations that produce regular count data. The second step was to select the appropriate actual traffic count for the records retained by the first step. This was necessary because, in many cases, traffic counts were collected regularly on the same segments over several years. Selecting the earliest traffic count after project completion is often not straightforward, because several states' data do not record the actual project completion date. For such projects, we employed the following reasoning:

a. Categorize the segments by schedule risk. Based on the improvement types, we created low-risk and high-risk categories. Resurfacing, slips, slides, safety improvements, and similar projects, which are usually completed on or within one year of the forecast opening year, are the low-risk ones. Complex projects such as adding lanes, new construction, or

increasing capacity are typically completed within two to three years of the planned opening date (Mark Byram, personal communication, April 3, 2018).

b. Create a one-year buffer for low-risk projects and a two-year buffer for high-risk projects, and keep the first traffic count outside the buffer. For example, if a project to add lanes has a forecast opening year of 2010, we keep the first count available in 2012 or later. We do this because we do not know whether construction was delayed relative to the original plan, and we want to avoid evaluating a project against a traffic count taken before the project opened.

Next, we scaled the forecast to the year of the first post-opening count so that both data points refer to the same year. We did this by linearly interpolating the forecast traffic between the forecast opening year and the design year, usually 20 years later. (The European projects are taken from Nicolaisen's PhD thesis (Nicolaisen 2012) and have already been scaled to match the count year using a 1.5% annual growth rate. We maintain this logic for the European projects but interpolate between the opening and design years for the US projects.)

For project-level analysis, we took the average of the traffic volumes and measured the error statistics by comparing the average forecast and the average actual traffic. Counts and forecasts were aggregated across segments/links by the unique identifier in the column "Internal Project ID". The variables for analysis were aggregated by the same unique identifier, albeit with different measures to maintain uniformity. The improvement type, area type, and functional class of a project were taken to be the most prevalent ones among its segments. For example, if most of the segments in a project are of Improvement Type 1, the project is considered to be of Improvement Type 1.
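The scaling step for US projects can be sketched as follows, with hypothetical years and volumes:

```python
def scale_forecast_to_count_year(opening_forecast, design_forecast,
                                 opening_year, design_year, count_year):
    """Linearly interpolate the forecast between the opening and design
    years so that the forecast and the first post-opening count refer to
    the same year."""
    annual_growth = ((design_forecast - opening_forecast)
                     / (design_year - opening_year))
    return opening_forecast + annual_growth * (count_year - opening_year)

# Hypothetical project: 10,000 ADT forecast for a 2010 opening, 14,000
# for the 2030 design year, with the first usable count taken in 2012.
print(scale_forecast_to_count_year(10_000, 14_000, 2010, 2030, 2012))
# -> 10400.0
```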
The forecast methodology is the same across the segments of a project, as are the unemployment rates and the years of forecast and observation; the means of these values were used for the project-level analysis.

3.7 Outliers

As part of the data cleaning process, we conducted an outlier analysis in which we examined specific records with a high deviation between forecast and counts. To identify outliers, the first step was to flag the links or segments showing a significantly large percentage difference. As a trial, we selected the rows, or links, with an absolute percent difference from forecast greater than 75%. In all, there were 399 segments with an absolute PDFF over 75%, of which 242 exceeded 100% and 88 exceeded 200%. We then manually inspected each of these segments, consulting the original database provided by the DOTs and, where available, the forecast reports, to identify potential sources of the difference. Except for 18 segments, all of the outliers (absolute PDFF > 75%) appeared to be due to input errors in the database. This was in part due to a bug (which we have since corrected) in importing the data into our database, in which a macro attached the actual traffic count belonging to a different segment under the same Project ID to a segment that had no actual count information of its own. The remaining outliers (other than the 18 segments) also appeared to result from input errors in the data provided by the DOTs. For example, in one segment the forecast traffic volume was only 10,780 while the actual count was 129,013. The specific project was a resurfacing of a

ramp on an Interstate, which led us to infer that the error was in the reported actual count. In another case, an incorrect station ID assigned a freeway count station to a project that was on a ramp. Employing similar reasoning, we examined the outliers and decided whether to keep or discard each one. This filtering step reduced the number of available segments from 4,112 to 3,912.

3.8 Calculating the Number of Lanes Required

One implication of an inaccurate forecast is its influence on project decisions. The number of lanes required for a roadway to operate at a certain level of service (LOS) depends on the anticipated traffic. Miller et al. (2016a), in the Virginia study, explored a variant of this in decisions concerning the LOS. One of the projects (or studies, as that research termed them) operated at LOS E instead of the target LOS C because of forecast errors. The research identified two distinct factors that affect the impact of error on decision making:

1. The magnitude of the error, and
2. The location of the error relative to the performance criterion.

Replicating the methodology of the Virginia study in our analysis is problematic because several critical pieces of information needed to calculate the LOS are absent. The existing and forecast numbers of lanes and the K-factor used were not specified for most of the projects, so we would be dealing with a very small sample size. Moreover, other factors influencing the LOS, e.g., lane width, traffic composition, grade, and speed, were not coded into the database. Another way to assess the impact of forecast error is to calculate the number of lanes required for a given traffic volume. Project traffic forecasts are ultimately used to determine how many lanes a corridor or project may require.
Using the best available current-year data and projecting future values of the directional design hourly volume (DDHV), the service flow rate for LOS i (SF_i), and the peak hour factor (PHF), the number of lanes can be estimated. Using the methodology described in the Highway Capacity Manual 2010 (HCM 2010) to calculate the service flow rate per lane for a required LOS and PHF, the number of lanes can be determined. The simplified equation for estimating the capacity of a roadway section is:

Capacity = Base Capacity × N × PHF × f_HV × f_p    (Equation 9)

where
N = number of lanes
PHF = peak hour factor
f_HV = adjustment factor for heavy vehicles
f_p = adjustment factor for driver population

Rearranging the equation to determine the number of lanes for a given traffic flow in a given direction, we get:

N = Traffic Volume in a given direction / (Base Capacity × PHF × f_HV × f_p)    (Equation 10)

The traffic volume in a given direction is also known as the directional design hourly volume, which can be determined using:

DDHV = AADT × Design Hour Factor (K) × Directional Distribution Factor (D)    (Equation 11)

The K-factors represent typical conditions found around the state for relatively free-flow conditions and are considered to represent typical traffic demand on similar roads. The magnitude of the K-factor is directly related to the variability of traffic over time. Rural and recreational travel routes, which are subject to occasional extreme traffic volumes, generally exhibit the highest K-factors. The millions of tourists traveling on Interstate highways during a holiday are a typical example of the effect of recreational travel periods. Urban highways, with their repeating pattern of home-to-work trips, generally show less variability and thus have lower K-factors. Similarly, the directional distribution factor, D30, is based on the 200th Highest Hour Traffic Count Report. The problem remains, however, of the availability of K and D information for projects. The Florida Department of Transportation (FDOT) recommends values for the K- and D-factors when that information is unavailable during project forecasting. Table 9 is obtained from the Project Traffic Forecasting Handbook prepared by FDOT.

Table 9: Recommended K30 and D30 Factors for Traffic Forecasting

Road Type | K30 Low | K30 Average | K30 High | D30 Low | D30 Average | D30 High
Rural Freeway | 9.6 | 11.8 | 14.6 | 52.3 | 54.8 | 57.3
Rural Arterial | 9.4 | 11 | 15.6 | 51.1 | 58.1 | 49.6
Urban Freeway | 9.4 | 9.7 | 10 | 50.4 | 55.8 | 61.2
Urban Arterial | 9.2 | 10.2 | 11.5 | 50.8 | 57.9 | 67.1

The HCM-recommended ranges of values for selecting appropriate K and D factors for project forecasts are also given in the following figures.
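Equations 10 and 11 can be combined into a short sketch. All default values are assumptions chosen for illustration: the FDOT urban-freeway averages for K30 and D30 from Table 9 (expressed as fractions), the HCM upper-bound freeway lane capacity of 2,400, the HCM 2000 urban PHF of 0.92, and adjustment factors of 1.0.

```python
import math

def lanes_required(aadt, k=0.097, d=0.558,
                   base_capacity=2400, phf=0.92, f_hv=1.0, f_p=1.0):
    """Equations 10 and 11 combined. Defaults are illustrative
    assumptions, not values from the NCHRP database."""
    ddhv = aadt * k * d                            # Equation 11
    per_lane = base_capacity * phf * f_hv * f_p    # denominator of Eq. 10
    return math.ceil(ddhv / per_lane)              # round up to whole lanes

# Under these assumptions, a 100,000 ADT urban freeway implies 3 lanes
# per direction, and a 20,000 ADT forecast implies 1.
print(lanes_required(100_000), lanes_required(20_000))  # -> 3 1
```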

Figure 3: HCM Recommended K Factor Range
Figure 4: HCM Recommended D Factor Range

For a simple analysis, we chose the average values in each category, as recommended by FDOT. The equations for determining the base capacity of each roadway type are also recommended in HCM 2010 and are presented in Table 10. In the absence of information on free-flow speed, our analysis assumes the maximum lane capacities by default.

Table 10: Equations to Determine Service Flow Rate or Maximum Capacity

Roadway Type | Equation
Freeway (Interstate) | 1700 + 10 × Free Flow Speed (FFS), up to 2400
Multilane Highway | 1000 + 20 × FFS, up to 2200
Rural 2-lane Highway | Up to 1600
Signal Controlled Facility | 1900 × green ratio

The peak hour factors (PHF) are taken as the default values given in the Highway Capacity Manual 2000 (Transportation Research Board 2000): 0.92 for urban facilities and 0.88 for rural ones. Assuming a similar LOS for the forecast and actual traffic and using Equations 10 and 11, we first calculated the number of lanes required in each case and then compared the two (details in Section 4.15). Note that we used the upper bounds for the N values, as specified in the HCM.

4. Data Exploration

This section presents the key findings from the Large-N analysis, building on the methodology prescribed in Section 3. To reiterate, the key points our analysis hinges on are:

1. Typical road projects are divided into one or more segments.
2. Traffic volume is generally predicted for the opening year, the mid-design year (typically 10 years from opening), and the design year (usually 20 years into the future).
3. The actual traffic volume to compare against the forecast volume is taken from the year after the project was completed.
For records in the database that lack a project completion date, a buffer of at least one year was created based on the type of project.

4. Error is calculated as the difference between the actual volume and the forecast volume, expressed as the percent difference from forecast, so that a negative value means over-prediction and a positive value means under-prediction.
5. For aggregation, the mean of the absolute percent difference from forecast is used as the metric, since positive and negative values would cancel each other if the mean of the percent differences from forecast were taken. The distributions, however, are presented for the percent difference from forecast itself.

Bearing these points in mind, the Large-N analysis was done in two ways: by segment, for the general distribution of PDFF, and by project, for the effect of PDFF at an aggregated level. As described in Section 3.2.1, the NCHRP 08-110 database contains about 16,360 unique records. The records contain forecast information by segment, forecast year type (opening, mid-design, or design year), and actual count information, if applicable. For analysis purposes, the filters described above were applied, leaving 4,278 unique records. All of these 4,278 records have a traffic forecast and an actual count in a year after the project has, or is presumed to have, opened. Rerunning the analysis with the outliers and duplicates removed left 3,912 unique records. The data frame to be analyzed contains project information (unique project ID, type of project, segment ID, roadway functional classification, area type), forecast information (year the forecast was produced, forecast year, forecast and adjusted traffic), and actual count information (year of observation, count, station ID). Based on the nature of the NCHRP 08-110 database, we can select some variables that might dictate future adjustments to the forecasts.
These variables are: the type of project (Improvement Type), the methodology used (Forecast Methodology), the roadway type (Functional Class), the area type (Area Type), and the forecast horizon (the difference between the year the forecast was produced and the year of opening). Table 11 tabulates the descriptive variables used in our analysis.

Table 11: Descriptive Variables for Analysis

Variable | Explanation
Forecast Volume | We expect the percent difference from forecast to be larger for lower-volume roads because there are fewer opportunities for PDFFs to average out.
Functional Class | To test whether accuracy differs across functional classes of roads. The distribution uses the FHWA-defined functional classes.
Area Type | To test whether urban or rural location influences forecast accuracy.
Type of Project | Distribution of PDFFs across different types of improvement, i.e., resurfacing, adding lanes, new construction, etc. Can be simplified to forecasts on existing roads versus new construction.
Tolls | Relation between toll road forecasts and untolled road forecasts.
Opening Year | Projects affected by a recession may have uniformly low forecasts. The opening year is taken to be the year the actual traffic count was taken in our database. The years 2001 and 2008-2009 were identified as recession years, and the years affected by recession were categorized based on the unemployment rate.
Year Forecast Produced | To evaluate whether forecast accuracy has improved over the years.
Forecast Horizon | Derived from the difference between the forecast year and the year the forecast was produced. Tests the hypothesis that forecasts are better when the opening year is closer to the year the forecast was produced.
Unemployment Rate in Opening Year | To evaluate the effect of recessions on forecast accuracy.
Change in Unemployment Rate | Measured as the difference between the unemployment rate in the opening year and the unemployment rate in the year the forecast was produced.
Forecast Type | To evaluate the relative accuracy of trend-based forecasts, model-based forecasts, etc.
Type of Forecaster | To examine differences between forecasts made by DOTs, MPOs, consultants, or others.
Agency | To test whether some agencies produce more accurate forecasts than others. Agencies are identified as Agency A, Agency B, etc.
Review | Indicates whether forecasts have gone through a review process.

In the remainder of this chapter, we examine the overall distribution of the percent difference from forecast, as well as the percent difference from forecast segmented by each of these factors.

4.1 Overall Distribution

Generally speaking, traffic forecasts have been found to over-predict: actual traffic volumes after project completion are lower than forecast, as shown in Figure 5 and Figure 6, which show a right-skewed distribution. The mean of the absolute percent difference from forecast (MAPDFF) is 24.67%, with a median of 16.69%, but these statistics are biased in the sense that multiple segments make up a single project, so a particular error or shortcoming of the adopted method accumulates over a project. At the segment level, traffic volumes are off by about 5,150 vehicles per day on average.

Figure 5: Distribution of Percent Difference from Forecast (Segment Level)

PDFF = (Actual - Forecast) / Forecast * 100

The 3,911 unique records/segments are part of 1,291 unique projects. Similar to our segment-level analysis, we notice a general over-estimation of traffic across the projects. The distribution of percent difference from forecast shown in Figure 5 is heavier on the negative side, i.e., actual volumes are generally lower than traffic forecasts. The mean of the absolute percent difference from forecast is 17.29% with a standard deviation of 24.81. The kernel density estimate is close to a normal distribution, albeit with long tails. On average, the traffic forecasts for a project are off by 3,500 vpd.
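The PDFF and MAPDFF statistics above can be sketched in a few lines of pandas. The column names are illustrative assumptions, not the NCHRP 08-110 database schema, and the project-level aggregation (averaging segment PDFFs within a project before taking absolute values) is one plausible reading of the segment-versus-project distinction:

```python
# Minimal sketch of the PDFF and MAPDFF calculations used in this chapter.
import pandas as pd

def pdff(actual, forecast):
    # PDFF = (Actual - Forecast) / Forecast * 100
    return (actual - forecast) / forecast * 100.0

# Toy data: project 1 has two segments, project 2 has one.
segments = pd.DataFrame({
    "project_id":   [1, 1, 2],
    "forecast_adt": [10000.0, 20000.0, 10000.0],
    "actual_adt":   [7500.0, 25000.0, 15000.0],
})
segments["pdff"] = pdff(segments["actual_adt"], segments["forecast_adt"])

# Segment-level MAPDFF: mean of the absolute segment PDFFs.
mapdff_segment = segments["pdff"].abs().mean()

# Project-level: average segment PDFFs within each project first, so a
# project with many segments does not dominate the statistic.
project_pdff = segments.groupby("project_id")["pdff"].mean()
mapdff_project = project_pdff.abs().mean()
```

Errors of opposite sign cancel within a project (project 1 nets to zero here), which is why the project-level MAPDFF reported above is smaller than the segment-level value.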

Figure 6: Distribution of Percent Difference from Forecast (Project Level)

We should expect over-prediction because, in many cases, these forecasts are used in design engineering. A design based on over-predicted traffic will be over-built, and the extra capacity will go unused. On the other hand, if under-predicted traffic is used as the basis for design, capacity will have to be added later, at greater cost, to meet demand.

Table 12: Overall Percent Difference from Forecast

Level | Observations | MAPDFF | Mean | Median | Std. Dev. | 5th Pctl. | 95th Pctl.
Segment Level | 3911 | 24.67 | 0.65 | -5.49 | 41.92 | -44.89 | 66.34
Project Level | 1291 | 17.29 | -5.62 | -7.49 | 24.81 | -37.56 | 36.96

4.2 Forecast Volume

Figure 7 reports the difference from forecast as a function of forecast volume at the segment level; Figure 8 shows the same at the project level. They are reported separately because traffic volume can differ widely among segments within a project, as in the case of a freeway interchange where the mainline freeway volume is much higher than the ramp volumes. An interesting observation from Figure 7 is that percent differences shrink as traffic volumes increase. This is understandable, since the percentages are taken as a ratio over the forecast volume: unless the actual traffic differs by a large margin, the percentage difference remains small.

Figure 7: Percent Difference from Forecast as a Function of Forecast Volume (Segment Level)

Figure 8: Percent Difference from Forecast as a Function of Forecast Volume (Project Level)

Tables 13 and 14 show descriptive measures of percent difference from forecast by volume group for segments and projects, respectively. The measures characterize the distribution of the percent difference from forecast: the MAPDFF value for each category indicates how much the actual traffic deviates from the forecast value on average; the mean is the central tendency of the data; and the standard deviation and the 5th and 95th percentile values represent the spread, with 90% of the data points falling between the 5th and 95th percentile values.

Table 13: Forecast Inaccuracy by Forecast Volume Group (Segment Level)

Traffic Forecast Range (ADT) | Observations | MAPDFF | Mean | Median | Std. Dev. | 5th Pctl. | 95th Pctl.
(0, 3000] | 359 | 36.17 | 14.04 | -2.22 | 91.63 | -44.78 | 106.91
(3000, 6000] | 419 | 26.64 | 3.90 | -3.33 | 38.91 | -40.03 | 83.78
(6000, 9000] | 394 | 24.83 | -2.78 | -8.93 | 33.06 | -47.90 | 57.47
(9000, 13000] | 465 | 23.17 | -2.54 | -6.03 | 30.11 | -44.49 | 54.98
(13000, 17000] | 353 | 25.31 | -0.20 | -3.34 | 34.49 | -49.56 | 76.88
(17000, 22000] | 360 | 25.02 | -5.21 | -10.40 | 34.67 | -51.54 | 65.85
(22000, 30000] | 415 | 28.01 | 3.87 | -3.57 | 37.20 | -47.40 | 77.78
(30000, 40000] | 386 | 25.71 | -0.17 | -7.92 | 35.23 | -44.64 | 72.84
(40000, 60000] | 410 | 19.37 | 2.56 | -0.89 | 26.34 | -32.56 | 53.47
60000+ | 350 | 12.38 | -7.14 | -6.40 | 14.98 | -28.42 | 17.50

Table 14: Forecast Inaccuracy by Forecast Volume Group (Project Level)

Traffic Forecast Range (ADT) | Observations | MAPDFF | Mean | Median | Std. Dev. | 5th Pctl. | 95th Pctl.
(0, 3000] | 133 | 24.59 | -1.85 | -5.75 | 42.15 | -45.01 | 75.17
(3000, 6000] | 142 | 20.53 | -0.37 | -4.64 | 29.74 | -36.50 | 50.33
(6000, 9000] | 125 | 16.75 | -5.68 | -8.80 | 21.94 | -35.29 | 36.67
(9000, 13000] | 145 | 15.59 | -4.66 | -7.29 | 19.99 | -31.34 | 34.45
(13000, 17000] | 143 | 17.41 | -6.20 | -6.53 | 21.61 | -37.76 | 30.65
(17000, 22000] | 113 | 17.98 | -5.65 | -8.31 | 25.47 | -41.62 | 37.85
(22000, 30000] | 133 | 19.54 | -5.65 | -8.47 | 25.36 | -40.31 | 41.75
(30000, 40000] | 115 | 15.56 | -9.78 | -10.26 | 18.23 | -39.54 | 12.26
(40000, 60000] | 137 | 13.18 | -8.95 | -7.68 | 16.01 | -34.44 | 7.49
60000+ | 105 | 10.20 | -8.96 | -7.90 | 9.90 | -24.50 | 3.68

One observation from Table 14 is that as the forecast volume increases, the distribution of the percent difference from forecast has a smaller spread, in addition to a smaller MAPDFF value. For example, for forecast volumes between 30,000 and 40,000 ADT, the percent difference from forecast for 90% of the projects lies between -39.54% and 12.26%, with an absolute deviation of 15.56% on average.
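The volume-group breakdown in Tables 13 and 14 amounts to binning records by forecast ADT and summarizing the PDFF within each bin. A sketch with pandas, using illustrative column names rather than the report's actual schema:

```python
# Bin records by forecast ADT into the Table 13/14 volume groups and
# summarize the PDFF distribution within each bin.
import pandas as pd

bins = [0, 3000, 6000, 9000, 13000, 17000, 22000, 30000, 40000, 60000, float("inf")]

df = pd.DataFrame({
    "forecast_adt": [2500, 8000, 35000, 70000],
    "pdff":         [40.0, -10.0, -5.0, -8.0],
})
df["volume_group"] = pd.cut(df["forecast_adt"], bins)

summary = df.groupby("volume_group", observed=True)["pdff"].agg(
    observations="size",
    mapdff=lambda s: s.abs().mean(),  # mean absolute PDFF per bin
    mean="mean",
    median="median",
)
```

The same groupby-and-aggregate pattern, with different grouping columns, would reproduce the later breakdowns by functional class, area type, project type, and so on.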

4.3 Functional Class

The distribution of percent difference from forecast by functional class (Figure 9 and Table 15) is taken at the segment level, since a project may span roadways of different functional classes. Violin plots, as depicted in the figure, show quantitative data with a kernel density estimate of the underlying distribution. The thick black bars represent the 25th and 75th percentile values, in effect depicting the range within which 50% of the data points fall. These reiterate the point made about over-prediction in forecasts: about 75% of the links have negative percent difference from forecast values for Interstates, Major Arterials, and Collectors. About 70% of the Minor Arterial links have been over-predicted.

Figure 9: Distribution of Percent Difference from Forecast by Functional Class (Segment Level Analysis)

Comparing among the classes, forecasts for Interstates or Limited Access Facilities appear to fare better than those for other classes of roadway, in terms of both absolute deviation and spread (Table 15). 90% of the records of this functional class fall between -27.81% and 10.44%. The spread is greater for the other functional classes (as represented by the 5th and 95th percentile values).

Table 15: Forecast Inaccuracy by Functional Class (Segment Level Analysis)

Functional Class | Observations | MAPDFF | Mean | Median | Std. Dev. | 5th Pctl. | 95th Pctl.
Interstate or Limited Access Facility | 434 | 12.32 | -9.21 | -8.48 | 13.58 | -27.81 | 10.44
Principal Arterial | 837 | 16.95 | -9.63 | -10.89 | 19.38 | -37.51 | 23.95
Minor Arterial | 404 | 18.92 | -8.26 | -10.24 | 24.54 | -41.50 | 29.26
Major Collector | 258 | 20.67 | -10.81 | -11.10 | 26.92 | -51.11 | 23.85
Minor Collector | 19 | 22.53 | -12.74 | -8.66 | 24.30 | -41.43 | 28.58
Local | 1 | 46.67 | 46.67 | 46.67 | n/a | 46.67 | 46.67
Unknown Functional Class | 1958 | 32.42 | 10.69 | 2.68 | 53.67 | -48.75 | 86.21

4.4 Area Type

The distribution and spread of forecast differences as a function of area type are presented in Figure 10 and Table 16. Forecasts for both rural and urban areas are mostly over-predictions, i.e., actual traffic is less than forecast (65% of the links in rural areas and 72% of the links in urban areas).

Figure 10: Distribution of Percent Difference from Forecast by Project Area Type (Segment Level Analysis)

The spread for urban areas (-39.37% to 27.14%) is greater than that for rural areas (-27.93% to 24.72%). The MAPDFF values for rural and urban areas (14.09% and 17.66%, respectively) indicate that traffic in rural or mostly rural areas deviates less from the prediction.

Table 16: Forecast Inaccuracy by Area Type (Segment Level Analysis)

Area | Observations | MAPDFF | Mean | Median | Std. Dev. | 5th Pctl. | 95th Pctl.
Rural or Mostly Rural | 210 | 14.09 | -4.02 | -5.56 | 18.22 | -27.93 | 24.72
Urban | 543 | 17.66 | -8.05 | -9.58 | 22.32 | -39.37 | 27.14
Unknown Area Type | 3047 | 23.86 | -0.12 | -5.00 | 33.89 | -47.31 | 68.05

4.5 Type of Project

As described in Section 2.2.1, the NCHRP 08-110 database has the improvement type of the project as a required field. Many segments/projects have no improvement type assigned, but we can still unify the types coded in the database in three ways:

1. Improvement on Existing Facility: resurfacing, replacement, and adding capacity to an existing roadway;
2. New Construction: new general-purpose, dedicated, managed, or reversible lane(s) facility; and
3. Unknown Project Type.

Among the 1,291 projects, our database contains forecast and actual count information on only 28 new construction projects, while projects on existing roadways number 788. About 75% of the projects on existing roadways in the database have percentage differences below 0%, i.e., over-predicted traffic. Similar proportions are obtained for new constructions as well (Figure 11 and Table 17). Compared with the aggregated differences over all types of projects (MAPDFF of 17.29%), forecasts for existing roadways have slightly smaller percentage differences on average (MAPDFF of 16.26%). Forecasts for new constructions are even more accurate, with an MAPDFF of 10.57%.

Figure 11: Distribution of Percent Difference from Forecast by Project Type (Project Level Analysis)

The difference in sample sizes makes commenting on the relative accuracy of forecasts by project type difficult. But as the percentile values indicate, forecasts for new construction projects have a lower spread than those for existing roadways.

Table 17: Forecast Inaccuracy by Project Type (Project Level)

Project Type | Observations | MAPDFF | Mean | Median | Std. Dev. | 5th Pctl. | 95th Pctl.
Existing Road | 899 | 16.26 | -5.90 | -7.43 | 23.55 | -36.20 | 29.93
New Facility | 28 | 10.57 | -9.22 | -8.76 | 9.54 | -19.34 | 3.83
Unknown Type | 364 | 20.36 | -4.64 | -7.64 | 28.38 | -43.96 | 45.95

4.6 Tolls

Our database contains little information about toll roads. In all, there is forecast information on only 7 roads/links with static tolls on one or more lanes. The MAPDFF for the tolled roads is 20.41%, with a maximum of 93.38%. The distribution in Figure 12 is not scaled by the number of observations. Table 18 presents the breakdown of the distribution by toll type on links.

Figure 12: Distribution of Percent Difference from Forecast by Toll Types (Segment-Level Analysis)

Table 18: Forecast Inaccuracy by Toll Type (Segment Level)

Toll Type | Observations | MAPDFF | Mean | Median | Std. Dev. | 5th Pctl. | 95th Pctl.
No Tolls on 1+ lane | 3432 | 23.66 | -1.53 | -6.55 | 32.87 | -45.9 | 64.66
Static Tolls on 1+ lane | 7 | 20.41 | 16.16 | 8.60 | 34.96 | -7.97 | 68.85

4.7 Year Forecast Produced

The NCHRP 08-110 database contains projects dating from the 1970s; forecasts for those projects go back even earlier. In Figure 13 and Table 19 we compare the percent difference from forecast for forecasts produced in each year. The MAPDFF has steadily gone down, and the spread of the distribution has gotten smaller. Also noticeable is the overall under-prediction of traffic for projects forecast between 1981 and 1990, i.e., actual traffic exceeded the forecast volume. During the next decade (1991-2000), about 55% of the projects had more traffic than forecast. After 2000, however, almost 75% of the projects have seen less traffic than forecast, with an average absolute deviation of 15.7%. The improvement over time may suggest that the availability of better data, together with the refinement and growing sophistication of forecasting methodology, has resulted in better forecast performance over the years. However, it could also reflect the mix of projects and broader socioeconomic trends. Many of the earlier projects were larger in scale, and the 1970s through 1990s were a time of growing auto ownership, the entry of women into the workforce, and higher VMT per capita. Our database in the 2000s, in contrast, includes more routine projects at a time of slower economic growth and slower growth in VMT per capita.

Figure 13: Distribution of Percent Difference from Forecast by the Year Forecast Produced

Table 19: Forecast Inaccuracy by Year Forecast Produced

Year Forecast Produced | Observations | MAPDFF | Mean | Median | Std. Dev. | 5th Pctl. | 95th Pctl.
Before 1980 | 94 | 30.76 | 11.25 | 8.98 | 39.89 | -47.12 | 83.27
1981-1990 | 45 | 34.83 | 28.21 | 28.53 | 34.18 | -19.96 | 86.28
1991-2000 | 51 | 23.17 | 11.13 | -1.87 | 48.07 | -24.79 | 53.56
2001-2010 | 924 | 15.79 | -9.96 | -10.32 | 18.23 | -38.36 | 15.95
After 2010 | 177 | 11.83 | -5.36 | -2.65 | 18.81 | -38.65 | 15.62

Analyzing the forecast accuracy for projects on existing roadways, we see similar trends, with the MAPDFF after 2010 down from 15.79% in the previous decade to 11.83%. Figure 14 and Table 20 present the distribution of inaccuracy for projects on existing roads.

Figure 14: Distribution of Percent Difference from Forecast for Projects on Existing Roadways by the Year Forecast Produced

Table 20: Forecast Inaccuracy for Projects on Existing Roadways by Year Forecast Produced

Year Forecast Produced | Observations | MAPDFF | Mean | Median | Std. Dev. | 5th Pctl. | 95th Pctl.
Before 1980 | 26 | 25.59 | 21.13 | 21.22 | 25.87 | -14.21 | 60.72
1981-1990 | 14 | 44.76 | 44.76 | 42.17 | 31.76 | 4.70 | 96.30
1991-2000 | 49 | 23.58 | 12.12 | -1.87 | 48.74 | -23.82 | 54.21
2001-2010 | 680 | 15.78 | -9.54 | -9.78 | 18.37 | -38.59 | 18.50
After 2010 | 130 | 11.08 | -4.51 | -1.98 | 18.68 | -32.53 | 16.39

4.8 Opening Year

The distribution of percent difference from forecast by project opening year, presented in Figure 15 and Table 21, is a useful indicator of forecast performance over the years. As can be seen, forecast performance has generally improved after 2000, with significantly lower MAPDFF values than in previous decades as well as smaller spreads. Most of the projects (about 78%) that opened to traffic between 1991 and 2002 had more traffic than forecast. Percent differences from forecast for 2003 to 2008 are more evenly spread (90% of the data points between -36.82% and 33.46%), while after 2012 the actual count has generally been less than the forecast value (78% of the projects).

Figure 15: Distribution of Percent Difference from Forecast by Opening Year of Project

The opening years have been categorized to assess the effect of recessions (the recession of 2001 and the Great Recession of 2008-09) on forecast performance. Based on the unemployment rates for those years, it is assumed that the 2001 recession affected the unemployment rate until 2002 and the Great Recession until 2012. One thing to notice here is that during and after the recession years, actual traffic has been lower than usual. The median values (corresponding to the 50th percentile) give a good

approximation: 50% of the projects opened since 2012 have traffic at least 5.78% less than the forecast value.

Table 21: Forecast Inaccuracy by Project Opening Year

Opening Year | Observations | MAPDFF | Mean | Median | Std. Dev. | 5th Pctl. | 95th Pctl.
Before 1990 | 92 | 30.14 | 12.98 | 9.64 | 38.24 | -43.71 | 89.49
1991-2000 | 72 | 28.09 | 15.83 | 3.74 | 45.17 | -28.66 | 62.88
2001-2002 | 15 | 15.65 | 6.69 | 3.74 | 22.50 | -22.86 | 51.82
2003-2008 | 351 | 18.92 | -7.98 | -11.52 | 23.76 | -36.82 | 33.46
2009-2012 | 512 | 14.22 | -9.21 | -8.46 | 17.08 | -35.07 | 12.25
After 2012 | 249 | 13.56 | -8.73 | -5.78 | 18.41 | -42.71 | 13.45

Again, it is not clear to what degree the differences observed here are a function of different forecasting methods, events in the real world, or a mix of the two. Looking strictly at projects on existing roadways, a similar distribution is observed. The ranges are tighter, with a lower MAPDFF value (except for projects opening between 1991 and 2000). The distribution and statistical results are given in Figure 16 and Table 22.

Figure 16: Distribution of Percent Difference from Forecast for Projects on Existing Roadways by Opening Year of Project

Table 22: Forecast Inaccuracy in Projects on Existing Roadways by Opening Year

Opening Year | Observations | MAPDFF | Mean | Median | Std. Dev. | 5th Pctl. | 95th Pctl.
Before 1990 | 40 | 32.30 | 29.40 | 25.32 | 29.93 | -11.69 | 90.59
1991-2000 | 49 | 23.58 | 12.12 | -1.87 | 48.74 | -23.82 | 54.21
2001-2002 | 11 | 13.88 | 3.47 | -0.75 | 20.88 | -24.81 | 34.60
2003-2008 | 247 | 17.69 | -9.21 | -11.94 | 20.24 | -35.99 | 20.27
2009-2012 | 373 | 13.95 | -8.82 | -8.44 | 16.82 | -35.23 | 13.72
After 2012 | 179 | 13.68 | -8.65 | -5.78 | 19.08 | -42.45 | 14.12

4.9 Forecast Horizon

Another question that arises while evaluating accuracy is whether the number of years elapsed between the time the forecast was produced and the year the project opened has a bearing on accuracy. As evident from Figure 17 and Table 23, the average absolute percent difference from forecast increases as the number of years elapsed increases, except for same-year projections. The passage of years introduces other variables, such as micro- and macroeconomic conditions, changes in land use, and fuel prices, that can directly affect traffic. These are all variables that are difficult to predict, and their effect is evident. This finding is consistent with Bain (2009), who identified the critical dependence of longer-term forecasts on macro-economic projections. According to Standard and Poor's studies (2002-2005): "A number of comments were recorded about the relationship between economic growth and traffic growth; concerns being raised about traffic forecasts—particularly over longer horizons—relying on strong and sustained economic growth assumptions that resembled policy targets rather than unbiased assessments of future economic performance."

Figure 17: Distribution of Percentage Differences by Forecast Horizon

Forecasts that go beyond 5 years into the future tend to have a wider spread and a higher percent difference from forecast (90% of the data points fall within -44.73% to 72.07%, with an MAPDFF of 29.55%).

Table 23: Forecast Inaccuracy by Forecast Horizon

Forecast Horizon (Years) | Observations | MAPDFF | Mean | Median | Std. Dev. | 5th Pctl. | 95th Pctl.
0 | 165 | 20.10 | 8.08 | 0.00 | 34.77 | -25.18 | 57.71
1 | 206 | 12.88 | -9.20 | -8.12 | 14.64 | -36.32 | 11.38
2 | 340 | 15.23 | -7.79 | -7.64 | 19.93 | -40.26 | 20.38
3 | 251 | 16.25 | -10.36 | -10.74 | 18.49 | -37.02 | 17.29
4 | 131 | 16.05 | -10.36 | -12.16 | 16.87 | -35.43 | 20.19
5 | 67 | 16.82 | -10.44 | -13.82 | 22.23 | -43.99 | 13.40
5+ | 131 | 29.55 | 4.71 | -3.13 | 39.47 | -44.73 | 72.07

A point of concern in this analysis is why the MAPDFF value, as well as the range of forecast differences, is higher for a forecast horizon of 0 years; 50% of those observations fall on either side of 0% difference.

4.10 Unemployment Rate in Opening Year

The unemployment rate data were pulled from the Bureau of Labor Statistics at the state level and then matched with the year the actual traffic count was taken. For European projects, the rate is measured at the national level. The rates were categorized into 7 ranges, and the distribution of percent difference from forecast is presented in Figure 18. Except for unemployment rates below 3, the percent difference from forecast hovers on the negative side, i.e., over-prediction, for all other ranges. For unemployment rates between 1 and 3, actual traffic exceeds the forecast volume in most cases, but this statistic should be taken with a grain of salt because of the small sample size.

Figure 18: Distribution of Percent Difference from Forecast by Unemployment Rate in Opening Year
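The grouping used in this section amounts to binning the opening-year unemployment rate into the seven ranges reported in Table 24. A sketch with pandas, using illustrative column names rather than the report's actual schema:

```python
# Bin opening-year unemployment rates into the Table 24 ranges.
import pandas as pd

rates = pd.Series([2.5, 4.1, 7.5, 12.0], name="unemp_rate_opening_year")
bins = [0, 3, 5, 7, 8, 9, 10, float("inf")]
labels = ["Up to 3", "3-5", "5-7", "7-8", "8-9", "9-10", "10+"]
rate_group = pd.cut(rates, bins=bins, labels=labels)
```

Summarizing PDFF within each `rate_group` (as in the earlier volume-group breakdown) would reproduce the Table 24 statistics.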

72% of the projects in the 7-8% unemployment-rate range over-predict traffic, with an average absolute deviation of 17.3%. Comparing across ranges, unemployment rates between 3 and 5 seem to produce the largest absolute deviation from the forecast volume; the other ranges hover close to the overall average. The breakdown of the statistics is given in Table 24.

Table 24: Forecast Inaccuracy by Unemployment Rate in the Opening Year

Unemployment Rate | Observations | MAPDFF | Mean | Median | Std. Dev. | 5th Pctl. | 95th Pctl.
Up to 3 | 4 | 19.44 | 16.73 | 13.08 | 21.70 | -3.21 | 41.78
3-5 | 229 | 22.95 | 2.13 | -2.84 | 36.05 | -40.20 | 55.83
5-7 | 371 | 16.10 | -7.35 | -7.68 | 21.30 | -39.70 | 26.86
7-8 | 128 | 17.30 | -7.05 | -6.45 | 24.00 | -43.19 | 26.12
8-9 | 168 | 17.07 | -5.41 | -7.51 | 24.68 | -33.34 | 35.09
9-10 | 35 | 18.17 | -5.15 | -11.22 | 22.33 | -28.14 | 39.05
10+ | 356 | 14.90 | -8.68 | -9.64 | 18.08 | -34.43 | 19.60

4.11 Change in Unemployment Rate

To assess the impact of changes in the unemployment rate on forecast inaccuracy, we took the difference in unemployment rate between the project opening year and the year the forecast was produced. At least 70% of the projects for which the unemployment rate changed by at least ±4 points exhibited actual traffic less than the forecast value. The distribution of percent difference from forecast is presented in Figure 19 and Table 25.

Figure 19: Distribution of Percent Difference from Forecast by Change in Unemployment Rate from Forecast Year to Opening Year

An interesting, but not quite unexpected, observation is the spread of the distribution for cases where the unemployment rate in the opening year increased by at least 2 points over the year the forecast was produced. 90% of the projects fall between -36.1% and 26.67% for a change of 2-4 points, and -35.26%

to 18.78% for a change of 4-6 points. With an increase in the unemployment rate, it stands to reason that actual traffic would be lower, so the possibility of under-prediction becomes even smaller.

Table 25: Forecast Inaccuracy by Change in Unemployment Rate

Change in Unemployment Rate | Observations | MAPDFF | Mean | Median | Std. Dev. | 5th Pctl. | 95th Pctl.
(-8, -6] | 8 | 15.01 | -8.69 | -2.02 | 19.29 | -32.69 | 15.29
(-6, -4] | 93 | 14.91 | -5.63 | -7.18 | 20.30 | -31.30 | 31.45
(-4, -2] | 136 | 19.21 | 4.45 | -0.67 | 31.39 | -30.61 | 54.60
(-2, 0] | 367 | 17.64 | -4.27 | -6.16 | 27.88 | -38.82 | 36.58
(0, 2] | 263 | 16.8 | -6.00 | -6.32 | 23.27 | -40.58 | 30.62
(2, 4] | 217 | 17.05 | -8.01 | -8.63 | 22.12 | -36.09 | 26.67
(4, 6] | 166 | 17.54 | -11.75 | -13.94 | 17.80 | -35.26 | 18.78

4.12 Forecast Method

One derivative of the Large-N analysis is an assessment of the performance of the tools at the disposal of state DOTs and MPOs. For project-level traffic forecasting, NCHRP Report 765 examines the different methods in use and presents guidelines for employing them. But one question arises: does forecast performance depend on the method used? As follow-up questions: is a certain type of forecast methodology better for a certain type of project, or even a certain type of roadway? In the NCHRP 08-110 database, a field is specified to record the method used to forecast the traffic for a project. The coded methodologies were: Traffic Count Trend, Population Growth Rate, Regional Travel Demand Model, Project-Specific Travel Demand Model, Professional Judgment, and Unknown Methodology. Among the 1,291 projects selected for our Large-N analysis, traffic for 252 was forecast using Traffic Count Trend, 179 by Regional 4-Step Travel Demand Model, and 177 by Professional Judgment. Professional Judgment refers to the use of a combination of count trend and volume from a demand model, as the forecaster saw fit.
We have run into the problem of missing data here again, as 676 of the projects in our database have no data regarding the method used to forecast the traffic. The distribution of inaccuracy is presented in Figure 20 and Table 26.

Figure 20: Distribution of Percent Difference from Forecast by Forecast Methodology

Table 26: Forecast Inaccuracy by Forecast Methodology

Forecast Methodology | Observations | MAPDFF | Mean | Median | Std. Dev. | 5th Pctl. | 95th Pctl.
Traffic Count Trend | 252 | 22.21 | -0.10 | -5.22 | 31.24 | -39.34 | 55.06
Population Growth Rate | 7 | 11.32 | -2.18 | -0.35 | 13.56 | -16.43 | 13.89
Regional Travel Demand Model | 179 | 16.88 | -8.42 | -9.75 | 21.76 | -44.91 | 27.16
Professional Judgement | 177 | 17.84 | -11.77 | -11.94 | 19.87 | -43.11 | 18.52
Unknown Methodology | 676 | 15.49 | -5.36 | -6.45 | 23.67 | -34.39 | 29.49

Glancing at the distribution of percentage differences by forecast methodology (Table 26), we can perhaps say that forecasts produced by travel demand models are more accurate, comparing MAPDFF values (16.88 for the Regional Travel Demand Model versus 22.21 for the Traffic Count Trend). But this does not portray the full picture: trend analysis cannot be used on all types of projects, while models can be used on virtually any type of project.

4.13 Type of Forecaster

The distribution of forecast inaccuracy by forecaster is presented in Figure 21 and Table 27. As can be seen, 90% of the projects forecast by state DOTs fall in the range of -44.94% to 54.32%, and 50% of these projects are over-predicted. The spread for forecasts done by consultants is lower (90% of the projects lie between -35.85% and 31.42%), as is the mean absolute deviation (MAPDFF of 17.36% compared with 21.47% for state DOT forecasts).

Figure 21: Distribution of Percent Difference from Forecast by Type of Forecaster

Table 27: Forecast Differences by Type of Forecaster

Forecasting Agency | Observations | MAPDFF | Mean | Median | Std. Dev. | 5th Pctl. | 95th Pctl.
State DOT | 489 | 21.47 | -0.89 | -5.58 | 32.34 | -44.94 | 54.32
Metropolitan Planning Organization | 2 | 6.86 | -6.86 | -6.86 | 0.90 | -7.43 | -6.29
Consultant | 237 | 17.36 | -6.36 | -8.20 | 22.13 | -35.85 | 31.42

4.14 Effect on Number of Lanes

There is an old axiom that traffic forecasts only need to be accurate to within half a lane. To test the extent to which we meet this standard, we calculated the number of lanes required for the forecast traffic and for the actual traffic, assuming the same level of service (LOS). The method for this calculation is described in Section 3.8. Comparing the two numbers, we found 37 links out of the 3,912 (1.0%) that required an additional lane to allow the traffic to flow at the forecast LOS. This small number reinforces our interpretation of over-prediction in traffic forecasts. For these 37 links, if the assumptions regarding the number of lanes hold true, the LOS would worsen. Five of the 37 are Minor Arterials; the rest are Interstates and Major Arterials (16 each). Conversely, analyzing the links that over-estimate traffic by enough that they could do with fewer lanes per direction, we find 158 links (4.2%): 92 are Interstates, 64 are Principal Arterials, and the rest are Minor Arterials.
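The half-lane comparison can be sketched as below. Note that the report's actual procedure (Section 3.8) is LOS-based; this sketch instead assumes a hypothetical fixed service volume per lane per direction, so it illustrates only the structure of the comparison, not the report's numbers:

```python
# Simplified sketch of the lanes-needed comparison of Section 4.14,
# under an assumed (hypothetical) fixed per-lane service volume.
import math

SERVICE_VOLUME_PER_LANE = 10000  # veh/day per lane per direction; illustrative only

def lanes_needed(adt):
    """Lanes needed to carry a daily volume at the assumed service volume."""
    return max(1, math.ceil(adt / SERVICE_VOLUME_PER_LANE))

def lane_difference(actual_adt, forecast_adt):
    """Positive: actual traffic needs more lanes than the forecast implied
    (an under-prediction that matters); negative: the design could have
    used fewer lanes (an over-prediction that matters)."""
    return lanes_needed(actual_adt) - lanes_needed(forecast_adt)
```

Links with `lane_difference > 0` correspond to the 37 under-built links discussed above, and links with `lane_difference < 0` to the 158 over-built ones.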

5. Econometric Analysis

The uncertainties involved in forecasting traffic call for assessing the risks and subsequently developing a range of traffic that can be expected on a project. Considering the current dataset to be representative, i.e., a "national average," we developed several quantile regression models to assess the biases in the forecasts with respect to the variables described in the previous chapter. The models were developed for the 5th, 20th, 50th (median), 80th, and 95th percentile values. Apart from detecting bias in the traffic forecasts, another goal of such econometric analyses is to obtain the range of actual traffic as a function of the forecast traffic and other project-specific criteria. The variables in the analysis are explained in Table 28.

Table 28: Descriptive Variables for Regression Models

- AdjustedForecast: Forecast ADT value for a segment/link or project.
- AdjustedForecast_over30k: Variable to account for links with an ADT value greater than 30,000. Defined as: if Forecast > 30,000, then value = Forecast - 30,000.
- Scale_UnemploymentRate_OpeningYear: Unemployment rate in the project opening year.
- Scale_UnemploymentRate_YearProduced: Unemployment rate in the year the forecast was produced.
- Scale_YP_Missing: Binary variable to account for missing information in the Year Forecast Produced column of the NCHRP database.
- Scale_DiffYear: Difference between the year the forecast was produced and the forecast year, i.e., the forecast horizon.
- Scale_IT_AddCapacity: Binary variable for projects that add capacity to an existing roadway. The reference class is resurfacing/repaving/minor improvement projects.
- Scale_IT_NewRoad: Binary variable for new construction projects.
- Scale_IT_Unknown: Binary variable for projects of unknown improvement type.
- Scale_FM_TravelModel: Binary variable for forecasts done using a travel model. The reference class is forecasts done using a traffic count trend.
- Scale_FM_Unknown: Binary variable for forecasts done using an unknown methodology.
- Scale_FA_Consultant: Binary variable for the forecaster, the reference class being state DOTs.
- Scale_Agency_BCF: Binary variable for projects under the jurisdiction of Agency B, C, or F, the reference class being Agency A.
- Scale_Europe_AD: Binary variable for European projects.
- Scale_OY_1960_1990: Binary variable for projects opened to traffic before 1990. The reference class for opening year is 2013 and later.
- Scale_OY_1991_2002: Binary variable for projects opened to traffic from 1991 to 2002.
- Scale_OY_2003_2008: Binary variable for projects opened to traffic from 2003 to 2008.
- Scale_OY_2009_2012: Binary variable for projects opened to traffic from 2009 to 2012.
- Scale_FC_Arterial: Binary variable for forecasts on Major or Minor Arterials. Interstate or Limited Access Facility is kept as the reference class.
- Scale_FC_CollectorLocal: Binary variable for forecasts on Collectors and Local Roads.
- Scale_FC_Unknown: Binary variable for forecasts on roadways of unknown functional class.

5.1 Base Model

In the first model, we regressed the actual count on the forecast traffic volume. The structure follows Equation 3 reported previously:

y_i = alpha + beta * y_hat_i + epsilon_i

where y_i is the actual traffic on project i, y_hat_i is the forecast traffic on project i, and epsilon_i is a random error term; alpha and beta are estimated in the regression. Here alpha = 0 and beta = 1 implies unbiasedness. The quantile regression parameter estimates the change in a specified quantile of the response variable produced by a one-unit change in the predictor variable. This allows comparing how some percentiles of the actual traffic may be more affected by the forecast volume than other percentiles, reflected in the changing size of the regression coefficient. Table 29 presents the regression statistics (the alpha and beta coefficients and the t-values used to assess significance). The highlighted cells are those where -1.96 < t-value < 1.96, i.e., variables that are statistically insignificant at the 95% confidence level. For the median, we observe that the intercept is not significantly different from zero, but the slope (the forecast volume coefficient) is significantly different from one, which we can interpret as a detectable bias.

Table 29: Quantile Regression Results [Actual Count = f(Forecast Volume)]

Statistic | 5th Pctl. | 20th Pctl. | 50th Pctl. | 80th Pctl. | 95th Pctl.
Pseudo R-Squared | 0.433 | 0.619 | 0.723 | 0.750 | 0.748
Intercept (Coef.) | -826.73 | -434.03 | 37.15 | 1395.74 | 2940.45
Intercept (t-value) | -10.55 | -5.06 | 0.54 | 6.59 | 6.50
Forecast Volume (Coef.) | 0.62 | 0.81 | 0.94 | 1.05 | 1.42
Forecast Volume (t-value) | 30.68 | 89.56 | 148.10 | 76.12 | 42.26

In addition to detecting bias, these quantile regression models can be applied to obtain an uncertainty window around a forecast as follows:

5th Percentile Estimate = -827 + 0.62 * Forecast
20th Percentile Estimate = -434 + 0.81 * Forecast
Median Estimate = 37 + 0.94 * Forecast
80th Percentile Estimate = 1396 + 1.05 * Forecast
95th Percentile Estimate = 2940 + 1.42 * Forecast

So if I produce a forecast of 10,000 ADT on a road, I would expect the median number of vehicles actually showing up on the facility to be 9,437 ADT (37 + 0.94 * 10,000), which we can refer to as our median estimate, or alternatively an expected value or adjusted forecast. (We would appreciate input on the terminology.) I would expect that for 5% of the forecasts I do, the actual traffic will be less than 5,415 ADT, and that for 5% of the forecasts I do, the actual traffic will be more than 17,153 ADT. The 20th and 80th percentile values can be calculated similarly. Table 30 and Table 31 give the

ranges of actual traffic and percent difference from forecast over the forecast traffic volume, respectively.

Table 30: Range of Actual Traffic Volume over Forecast Volume [Actual Count = f(Forecast Volume)]

Forecast   5th Pctl   20th Pctl   50th Pctl   80th Pctl   95th Pctl
0          -827       -434        37          1,396       2,940
5,000      2,294      3,612       4,742       6,670       10,047
10,000     5,415      7,658       9,448       11,944      17,153
15,000     8,536      11,705      14,153      17,218      24,259
20,000     11,656     15,751      18,859      22,492      31,365
25,000     14,777     19,797      23,564      27,766      38,471
30,000     17,898     23,843      28,269      33,040      45,578
35,000     21,019     27,890      32,975      38,314      52,684
40,000     24,139     31,936      37,680      43,588      59,790
45,000     27,260     35,982      42,385      48,862      66,896
50,000     30,381     40,028      47,091      54,136      74,002
55,000     33,502     44,075      51,796      59,410      81,109
60,000     36,622     48,121      56,501      64,684      88,215
(All forecast window values are estimates.)

Table 31: Range of Percent Difference from Forecast as a Function of Forecast Volume [Actual Count = f(Forecast Volume)]

Forecast   5th Pctl   20th Pctl   50th Pctl   80th Pctl   95th Pctl
0          -          -           -           -           -
5,000      -54%       -28%        -5%         33%         101%
10,000     -46%       -23%        -6%         19%         72%
15,000     -43%       -22%        -6%         15%         62%
20,000     -42%       -21%        -6%         12%         57%
25,000     -41%       -21%        -6%         11%         54%
30,000     -40%       -21%        -6%         10%         52%
35,000     -40%       -20%        -6%         9%          51%
40,000     -40%       -20%        -6%         9%          49%
45,000     -39%       -20%        -6%         9%          49%
50,000     -39%       -20%        -6%         8%          48%
55,000     -39%       -20%        -6%         8%          47%
60,000     -39%       -20%        -6%         8%          47%

Applying the coefficients as an equation, we constructed ranges of actual traffic and percent difference from forecast for different forecast volumes (Figure 22).
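Applied as code, the percentile equations above turn any point forecast into an uncertainty window. A minimal sketch follows; the function name is ours, and the rounded Table 29 coefficients reproduce Tables 30 and 31 only approximately, since the published tables use the unrounded estimates:

```python
# Rounded quantile-regression coefficients from Table 29:
# percentile -> (intercept, slope on the forecast volume)
BASE_MODEL = {
    5: (-827.0, 0.62),
    20: (-434.0, 0.81),
    50: (37.0, 0.94),
    80: (1396.0, 1.05),
    95: (2940.0, 1.42),
}

def uncertainty_window(forecast_adt):
    """Map a point forecast (ADT) to the expected actual ADT at each percentile."""
    return {p: round(a + b * forecast_adt) for p, (a, b) in BASE_MODEL.items()}

# Median estimate for a 10,000 ADT forecast: 37 + 0.94 * 10,000 = 9,437 ADT
window = uncertainty_window(10_000)
```

With the rounded coefficients, the 5th percentile value at 10,000 ADT comes out near 5,400, close to the 5,415 reported in Table 30; the small gap comes from coefficient rounding.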

Figure 22: Expected Ranges of Actual Traffic (Base Model) [plot of Forecast ADT versus Expected ADT, with lines for the Perfect Forecast and the 5th, 20th, 50th, 80th, and 95th percentiles]

The lines depicting the various percentile values can be interpreted as the range of actual traffic for a given forecast volume. For example, 95% of all projects with a forecast ADT of 30,000 can be expected to have actual traffic below 45,578, and only 5% of such projects experience actual traffic below 17,898. Not considering other variables, this range (17,898 to 45,578 for a forecast volume of 30,000) holds 90% of the projects; i.e., there is a 90% probability of the actual traffic falling in this range.

5.2 Inclusive Model for Inference

For our second model, we adopted the structure of Equation 4, regressing the actual volume on the forecast volume and several other descriptive variables:

yᵢ = α + β ŷᵢ + γ₁ X₁,ᵢ ŷᵢ + γ₂ X₂,ᵢ ŷᵢ + ... + γₖ Xₖ,ᵢ ŷᵢ + εᵢ

where X₁,ᵢ through Xₖ,ᵢ are descriptive variables associated with project i, and γ₁ through γₖ are estimated model coefficients associated with those descriptive variables. Each is multiplied by ŷᵢ, which makes the effect of that variable scale with the forecast volume (i.e., change the slope of the line) rather than be additive (i.e., shift the line up or down). For example, consider a median model where α is 0, β is 1, and there is a single descriptive variable, X₁,ᵢ, a binary flag that is 1 if the forecast is for a new road and 0 otherwise. If γ₁ has a value of -0.1, then it means that the median

actual value would be 10% lower than the forecast. If γ₁ has a value of +0.1, then it means that the median actual value would be 10% higher than the forecast.

The variables chosen for this analysis are given in Table 28. Distributions of forecast inaccuracy as a function of each of these variables are presented in Section 4. For the analysis, the reference class is a forecast made for a resurfacing project on an interstate or limited access facility using a traffic count trend, for a project opened to traffic in 2013 or later.

Looking at the results of the quantile regression (Table 32), we see that the variables with the highest bearing on the actual count, apart from the forecast volume, are the opening year and the functional class of the roadway. Positive coefficients signify an increase in actual count relative to the reference class, and negative coefficients a decrease. For example, according to our analysis, the actual traffic count decreases as the unemployment rate in the opening year increases. This is reasonable, since a higher unemployment rate suppresses traffic. In contrast, as the unemployment rate in the year the forecast was produced increases, the actual traffic increases. This direct relationship can be attributed to the assumption of an unchanged socio-economic state between the base year and the future forecast year.

Statistically significant coefficients on the binary variables compare the actual count with the reference class. For example, the coefficient for Travel Model is 0.017 (rounded to 0.02 in the table) for the 80th percentile against the reference class of Traffic Count Trend. This means that using a travel demand model shifts the 80th percentile estimate up by 1.7% of the forecast traffic, compared to a traffic count trend.

Accurate forecasts would have intercepts of zero and slopes of one. Therefore, coefficients that shift the slopes closer to one are associated with better forecasts.
For the median forecasts, this is a measure of the degree of bias, but for the outlying percentiles it is a measure of the spread in the forecasts. In general, variables with positive coefficients in the 5th percentile model and negative coefficients in the 95th percentile model will be associated with more precise forecasts (a narrower uncertainty window), although it must be considered how they interact with the other terms in the models.

An interesting observation from Table 32 is how strongly the actual traffic varies with the opening year. For example, 95% of the projects opened between 1991 and 2002 have seen at least 31.2% more traffic than those opened after 2012. Similarly, arterials, collectors, and local roads have less traffic than interstates if other variables remain the same. Figure 23 plots the actual traffic versus the forecast traffic for the 80th percentile using the coefficients in Table 32. The interpretation of the graph is that 80% of the projects on arterials or interstates have actual traffic that falls below their respective lines.
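The slope-scaling mechanics described above can be sketched in a few lines. This illustrates only the Equation 4 structure: the function name is ours, and the γ value is the hypothetical -0.1 from the text, not an estimate from Table 32:

```python
def scaled_quantile_estimate(forecast, descriptors, gammas, alpha=0.0, beta=1.0):
    """Equation 4 structure: each descriptive variable multiplies the forecast,
    so its coefficient shifts the slope of the line rather than its intercept."""
    slope = beta + sum(g * x for g, x in zip(gammas, descriptors))
    return alpha + slope * forecast

# Single binary descriptor: 1 = new road, gamma = -0.1 (illustrative value).
# A new-road forecast of 10,000 then has a median estimate 10% lower, i.e. 9,000.
new_road = scaled_quantile_estimate(10_000, [1], [-0.1])
existing = scaled_quantile_estimate(10_000, [0], [-0.1])
```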

Table 32: Quantile Regression Results [Inclusive Model]

Pseudo R-Squared: 0.513 (5th), 0.662 (20th), 0.762 (50th), 0.827 (80th), 0.853 (95th)

Coefficients, with t values in parentheses:

Variable                          5th Pctl        20th Pctl       50th Pctl       80th Pctl       95th Pctl
(Intercept)                       -75.44 (-0.59)  145.11 (2.70)   331.44 (11.70)  535.37 (6.89)   1616.95 (6.90)
Adjusted Forecast                 0.80 (8.49)     0.71 (14.84)    1.05 (24.26)    1.08 (19.47)    0.92 (12.45)
Forecast_over30k                  0.04 (1.45)     0.06 (3.42)     -0.01 (-0.34)   -0.17 (-7.37)   -0.26 (-6.25)
Unemployment Rate Opening Year    -0.03 (-2.89)   -0.01 (-2.42)   -0.03 (-5.88)   -0.01 (-2.48)   0.00 (-0.03)
Unemployment Rate Year Produced   0.00 (0.84)     0.01 (4.70)     0.01 (3.89)     0.01 (2.51)     0.02 (18.04)
Scale_YP_Missing                  0.03 (0.64)     0.10 (3.31)     0.06 (2.55)     0.02 (0.14)     0.34 (7.35)
Scale_DiffYear                    0.00 (-0.96)    0.00 (1.41)     0.01 (5.65)     0.01 (10.05)    0.02 (10.71)
Scale_IT_AddCapacity              0.01 (0.37)     0.02 (0.99)     0.04 (2.54)     0.04 (1.40)     0.13 (5.13)
Scale_IT_NewRoad                  0.05 (2.43)     0.04 (3.51)     0.03 (1.80)     -0.01 (-0.74)   -0.02 (-3.56)
Scale_IT_Unknown                  -0.07 (-1.64)   0.01 (0.51)     0.06 (4.28)     0.13 (6.23)     0.14 (6.35)
Scale_FM_TravelModel              0.04 (2.19)     0.07 (2.60)     0.00 (0.06)     0.02 (1.13)     0.03 (1.01)
Scale_FM_Unknown                  -0.05 (-1.39)   -0.04 (-1.35)   -0.03 (-1.40)   -0.02 (-0.50)   0.02 (0.35)
Scale_FA_Consultant               -0.02 (-0.55)   0.02 (0.46)     0.02 (0.75)     0.02 (0.64)     0.04 (0.85)
Scale_Agency_BCF                  0.01 (0.20)     -0.05 (-1.75)   -0.13 (-5.85)   -0.16 (-4.03)   -0.11 (-1.84)
Scale_Europe_AD                   0.11 (2.55)     0.05 (1.76)     0.04 (1.91)     0.01 (0.28)     -0.01 (-0.28)
Scale_OY_1960_1990                0.01 (0.18)     -0.09 (-2.62)   -0.05 (-1.36)   -0.02 (-0.55)   0.02 (0.83)
Scale_OY_1991_2002                0.31 (9.88)     0.25 (8.63)     0.27 (10.86)    0.39 (9.93)     0.48 (10.73)
Scale_OY_2003_2008                0.12 (3.88)     0.12 (4.73)     0.05 (2.72)     0.07 (4.79)     0.12 (5.42)
Scale_OY_2009_2012                0.21 (8.23)     0.09 (3.44)     0.10 (4.45)     0.07 (3.42)     0.06 (2.60)
Scale_FC_Arterial                 -0.16 (-10.38)  -0.08 (-4.79)   -0.08 (-5.13)   -0.09 (-5.00)   -0.06 (-1.73)
Scale_FC_CollectorLocal           -0.34 (-3.65)   -0.13 (-3.63)   -0.14 (-5.03)   -0.24 (-12.46)  -0.33 (-2.19)
Scale_FC_Unknown                  -0.15 (-2.91)   -0.08 (-3.08)   -0.14 (-5.34)   -0.16 (-4.41)   -0.13 (-2.60)

Figure 23: Comparison of Actual Traffic for Arterials and Interstates for the 80th Percentile using the Inclusive Model [plot of Forecast Traffic versus Actual Traffic, with lines for Arterial and Interstate]

While these models are useful for understanding which factors may bias forecasts and which factors may be associated with broader or narrower uncertainty windows, they are not useful at the time the forecast is made, because not all of the variables are known then. For example, while it is interesting to know how the unemployment rate in the opening year affects forecast accuracy, that information will obviously not be known until the project actually opens. Therefore, we estimated another, more limited set of models that can be applied at the time of forecasting, as presented in the next section.

5.3 Forecasting Model

The uncertainty inherent in traffic forecasting is difficult to eliminate, even with advances in modeling procedures. Hartgen (2013) suggests "converting traffic forecasts from a single point-based estimates to range based with probability of outcome." Such a range of forecasts is also useful for the risk assessment of a project. The goal of this analysis is to quantify that range: how much should we expect the actual traffic to vary from the forecast volume for a specific type of project, roadway, etc.? Again considering the NCHRP database to be representative of the national average, we can create a confidence interval for traffic as a function of several variables, similar to our analysis in Sections 3.5.1 and 3.5.2. We employ the same quantile regression as above, but for the descriptive variables we chose the ones that would be known at the time of producing the forecast:

1. Forecast traffic,
2. Unemployment rate in the current year,

3. Years before (or after) 2010 in which the forecast was produced, a control variable to account for forecasts getting better over the years,
4. Forecast horizon, or how many years into the future the traffic is being forecast,
5. Improvement type, i.e., whether the project is on an existing road or constructs a new one, with the former as the reference class,
6. Forecast methodology, with traffic count trend as the reference class, and
7. Functional class of the roadway, with interstate as the reference.

The results of the quantile regression for the 5th, 20th, 50th, 80th, and 95th percentiles are given in Table 33. The highlighted cells are those where -1.96 < t value < 1.96, i.e., variables that are statistically insignificant at the 95% confidence level. Interpretation of the coefficients is simple: positive coefficients mean a positive slope on actual traffic, and the opposite for negative coefficients. For the binary variables, positive coefficients also mean an increase in actual traffic compared to the reference class.

Table 33: Quantile Regression Results [Forecasting Model]

Pseudo R-Squared: 0.475 (5th), 0.631 (20th), 0.739 (50th), 0.804 (80th), 0.830 (95th)

Coefficients, with t values in parentheses:

Variable                               5th Pctl           20th Pctl          50th Pctl          80th Pctl          95th Pctl
(Intercept)                            -182.267 (-1.769)  154.578 (3.082)    255.551 (4.667)    287.909 (3.943)    976.786 (4.787)
Adjusted Forecast                      0.705 (15.972)     0.732 (36.186)     0.891 (45.198)     1.027 (44.195)     1.254 (23.880)
Adjusted Forecast_over30k              0.024 (0.568)      0.057 (3.053)      -0.004 (-0.219)    -0.190 (-8.296)    -0.413 (-9.887)
Scale_UnemploymentRate_YearProduced    -0.006 (-1.411)    0.005 (2.770)      0.002 (0.871)      0.007 (2.762)      0.010 (1.865)
Scale_YearForecastProduced_before2010  -0.007 (-5.639)    -0.005 (-5.225)    0.0002 (0.270)     0.004 (3.913)      0.003 (2.359)
Scale_DiffYear                         0.006 (2.809)      0.009 (6.682)      0.008 (5.620)      0.014 (8.234)      0.020 (10.501)
Scale_IT_NewRoad                       0.093 (4.336)      0.009 (1.096)      -0.008 (-0.901)    -0.036 (-1.932)    -0.090 (-4.288)
Scale_FM_TravelModel                   0.068 (3.307)      0.014 (1.631)      -0.008 (-0.516)    -0.018 (-1.252)    -0.101 (-7.356)
Scale_FC_Arterial                      -0.150 (-5.237)    -0.061 (-4.855)    -0.062 (-5.171)    -0.084 (-5.964)    -0.116 (-5.881)
Scale_FC_CollectorLocal                -0.212 (-4.027)    -0.111 (-4.794)    -0.126 (-5.212)    -0.201 (-5.780)    -0.321 (-2.362)

We can apply these coefficients in the form of an equation to create a forecast window for actual traffic as a function of the descriptive variables. A simple example is demonstrated here for a project with the following specification:

- Forecast produced in the year 2018
- Unemployment rate at the state level in 2017 is 4%
- Forecasting the traffic for 2020, i.e., a forecast horizon of 2 years

- The project is a new construction project on a minor arterial
- The forecast is made using a travel demand model

The contribution of each variable's specific value to the equation is given in Table 34, and the forecast window for actual traffic at different forecast volumes is given in Table 35.

Table 34: Contribution to Equation

Variable                               Value   5th Pctl   20th Pctl   50th Pctl   80th Pctl   95th Pctl
(Intercept)                            -       -182.27    154.58      255.55      287.91      976.79
Adjusted Forecast                      -       0.70       0.73        0.89        1.03        1.25
Adjusted Forecast_over30k              -       -          -           -           -           -
Scale_UnemploymentRate_YearProduced    4       -0.02      0.02        0.01        0.03        0.04
Scale_YearForecastProduced_before2010  -       -          -           -           -           -
Scale_DiffYear                         2       0.01       0.02        0.02        0.03        0.04
Scale_IT_NewRoad                       1       0.09       0.01        -0.01       -0.04       -0.09
Scale_FM_TravelModel                   1       0.07       0.01        -0.01       -0.02       -0.10
Scale_FC_Arterial                      -       -          -           -           -           -
Scale_FC_CollectorLocal                -       -          -           -           -           -

Table 35: Forecast Window for Forecast Model on Specified Values

Forecast Volume   5th Pctl   20th Pctl   50th Pctl   80th Pctl   95th Pctl
0                 -182       155         256         288         977
5,000             4,087      4,116       4,740       5,429       6,687
10,000            8,357      8,078       9,225       10,571      12,398
15,000            12,626     12,040      13,709      15,712      18,109
20,000            16,896     16,001      18,194      20,854      23,819
25,000            21,165     19,963      22,678      25,995      29,530
30,000            25,435     23,924      27,163      31,137      35,240
35,000            29,823     28,173      31,626      35,327      38,885
40,000            34,211     32,421      36,090      39,518      42,530
45,000            38,599     36,670      40,553      43,709      46,175
50,000            42,988     40,918      45,017      47,899      49,819
55,000            47,376     45,166      49,480      52,090      53,464
60,000            51,764     49,415      53,944      56,281      57,109

Table 36: Percent Difference from Forecast Window for Forecast Model on Specified Values

Forecast Volume   5th Pctl   20th Pctl   50th Pctl   80th Pctl   95th Pctl
0                 -          -           -           -           -
5,000             -18%       -18%        -5%         9%          34%
10,000            -16%       -19%        -8%         6%          24%
15,000            -16%       -20%        -9%         5%          21%
20,000            -16%       -20%        -9%         4%          19%
25,000            -15%       -20%        -9%         4%          18%
30,000            -15%       -20%        -9%         4%          17%
35,000            -15%       -20%        -10%        1%          11%
40,000            -14%       -19%        -10%        -1%         6%
45,000            -14%       -19%        -10%        -3%         3%
50,000            -14%       -18%        -10%        -4%         0%
55,000            -14%       -18%        -10%        -5%         -3%
60,000            -14%       -18%        -10%        -6%         -5%

Suppose that, for the specification described above, the traffic volume forecast for 2020 is 45,000. From Table 35, we get the 20th and 80th percentile values of 36,670 and 43,709. These values mean that the probability of the actual traffic falling between them is 60%. The forecast window is presented graphically in Figure 24.
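The Table 34/35 arithmetic for this example can be sketched as follows. The function name and structure are ours; the coefficients are the rounded 5th and 50th percentile values from Table 33, the over-30,000 spline term is omitted for brevity (so the sketch applies only to forecasts of 30,000 ADT or less), and, following Table 34, the arterial flag is left inactive:

```python
# Rounded Table 33 coefficients: percentile -> (intercept, forecast slope,
# unemployment rate, forecast horizon, new-road flag, travel-model flag)
COEFS = {
    5: (-182.267, 0.705, -0.006, 0.006, 0.093, 0.068),
    50: (255.551, 0.891, 0.002, 0.008, -0.008, -0.008),
}

def forecast_window(forecast, unemp_rate=4.0, horizon=2, new_road=1, travel_model=1):
    """Forecast window for forecasts <= 30,000 ADT (over-30k spline omitted)."""
    out = {}
    for p, (a, b, g_u, g_h, g_new, g_tm) in COEFS.items():
        # Descriptive variables scale the slope, as in Equation 4.
        slope = b + g_u * unemp_rate + g_h * horizon + g_new * new_road + g_tm * travel_model
        out[p] = a + slope * forecast
    return out

# For a 5,000 ADT forecast this lands near the Table 35 row:
# roughly 4,090 (5th percentile) and 4,750 (50th percentile).
window = forecast_window(5_000)
```

The small differences from Table 35 reflect rounding of the published coefficients.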

Figure 24: Range of Actual Traffic as a Function of Forecast Traffic [plot of Forecast ADT versus Expected ADT, with lines for the Perfect Forecast and the 5th, 20th, 50th, 80th, and 95th percentiles]

References

Australia Government. 2012. "Addressing Issues in Patronage Forecasting for PPP/Toll Roads." Department of Infrastructure, Regional Development and Cities, Canberra, Australia. https://infrastructure.gov.au/infrastructure/public_consultations/files/patronage_consultation_paper.pdf.
Bain, Robert. 2009. "Error and Optimism Bias in Toll Road Traffic Forecasts." Transportation 36 (5): 469–82. https://doi.org/10.1007/s11116-009-9199-7.
Bain, Robert. 2011. "The Reasonableness of Traffic Forecasts: Findings from a Small Survey." Traffic Engineering and Control (TEC) Magazine, May 2011.
Bain, Robert. 2013. "Toll Roads: Big Trouble Down Under." Infrastructure Journal, January 17, 2013.
Bain, Robert, and Jan Willem Plantagie. 2004. "Traffic Forecasting Risk: Study Update 2004." http://www.people.hbs.edu/besty/projfinportal/s&p_traffic_risk_2004.pdf.
Bain, Robert, and Lidia Polakovic. 2005. "Traffic Forecasting Risk Study Update 2005: Through Ramp-up and Beyond." Standard & Poor's, London. http://toolkit.pppinindia.com/pdf/standard-poors.pdf.
Buck, Karl, and Mike Sillence. 2014. "A Review of the Accuracy of Wisconsin's Traffic Forecasting Tools." In . https://trid.trb.org/view/2014/C/1287942.
Cade, Brian S., and Barry R. Noon. 2003. "A Gentle Introduction to Quantile Regression for Ecologists." Frontiers in Ecology and the Environment 1 (8): 412–20. https://doi.org/10.1890/1540-9295(2003)001[0412:AGITQR]2.0.CO;2.
CDM Smith, Alan Horowitz, Tom Creasy, Ram M. Pendyala, Mei Chen, National Research Council (U.S.), Transportation Research Board, et al. 2014. Analytical Travel Forecasting Approaches for Project-Level Planning and Design. Washington, D.C.: Transportation Research Board.
Flyvbjerg, B., M. K. S. Holm, and S. L. Buhl. 2005. "How (In)Accurate Are Demand Forecasts in Public Works Projects?: The Case of Transportation." Journal of the American Planning Association 71 (2).
https://trid.trb.org/view.aspx?id=755586.
Flyvbjerg, B., M. K. Skamris Holm, and S. L. Buhl. 2006. "Inaccuracy in Traffic Forecasts." Transport Reviews 26 (1). https://trid.trb.org/view/2006/C/781962.
Flyvbjerg, Bent. 2005. "Measuring Inaccuracy in Travel Demand Forecasting: Methodological Considerations Regarding Ramp up and Sampling." Transportation Research Part A: Policy and Practice 39 (6): 522–30. https://doi.org/10.1016/j.tra.2005.02.003.
Giaimo, Greg, and Mark Byram. 2013. "Improving Project Level Traffic Forecasts by Attacking the Problem from All Sides." Presented at the 14th Transportation Planning Applications Conference, Columbus, OH.
Gomez, Juan, José Manuel Vassallo, and Israel Herraiz. 2016. "Explaining Light Vehicle Demand Evolution in Interurban Toll Roads: A Dynamic Panel Data Analysis in Spain." Transportation 43 (4): 677–703. https://doi.org/10.1007/s11116-015-9612-3.
Hartgen, David T. 2013. "Hubris or Humility? Accuracy Issues for the Next 50 Years of Travel Demand Modeling." Transportation 40 (6): 1133–57. https://doi.org/10.1007/s11116-013-9497-y.
Kriger, David, Suzette Shiu, and Sasha Naylor. 2006. "Estimating Toll Road Demand and Revenue." NCHRP Synthesis of Highway Practice 364. Transportation Research Board. https://trid.trb.org/view/2006/M/805554.
Li, Zheng, and David A. Hensher. 2010. "Toll Roads in Australia: An Overview of Characteristics and Accuracy of Demand Forecasts." Transport Reviews 30 (5): 541–69. https://doi.org/10.1080/01441640903211173.

Marlin Engineering. 2015. "Traffic Forecasting Sensitivity Analysis." TWO#13.
Miller, John S., Salwa Anam, Jasmine W. Amanin, and Raleigh A. Matteo. 2016. "A Retrospective Evaluation of Traffic Forecasting Techniques." FHWA/VTRC 17-R1. Virginia Transportation Research Council. http://ntl.bts.gov/lib/37000/37800/37804/10-r24.pdf.
Nicolaisen, Morten Skou, and Petter Næss. 2015. "Roads to Nowhere: The Accuracy of Travel Demand Forecasts for Do-Nothing Alternatives." Transport Policy 37 (0). https://trid.trb.org/view/2015/C/1334458.
Odeck, James, and Morten Welde. 2017. "The Accuracy of Toll Road Traffic Forecasts: An Econometric Evaluation." Transportation Research Part A: Policy and Practice 101 (July): 73–85. https://doi.org/10.1016/j.tra.2017.05.001.
Parthasarathi, Pavithra, and David Levinson. 2010. "Post-Construction Evaluation of Traffic Forecast Accuracy." Transport Policy 17 (6): 428–43. https://doi.org/10.1016/j.tranpol.2010.04.010.
Pedersen, N. J., and D. R. Samdahl. 1982. "Highway Traffic Data for Urbanized Area Project Planning and Design." NCHRP Report 255. Washington, D.C.: Transportation Research Board.
Pereira, Francisco C., Constantinos Antoniou, Joan Aguilar Fargas, and Moshe Ben-Akiva. 2014. "A Metamodel for Estimating Error Bounds in Real-Time Traffic Prediction Systems." IEEE Transactions on Intelligent Transportation Systems 15 (3): 1310–22. https://doi.org/10.1109/TITS.2014.2300103.
Transportation Research Board. 2000. Highway Capacity Manual 2000. Book & CD-ROM edition. Transportation Research Board.
Tsai, Chi-Hong Patrick, Corinne Mulley, and Geoffrey Clifton. 2014. "Forecasting Public Transport Demand for the Sydney Greater Metropolitan Area: A Comparison of Univariate and Multivariate Methods." Road & Transport Research: A Journal of Australian and New Zealand Research and Practice 23 (1): 51.
Vassallo, J. M., and M. Baeza. 2007. "Why Traffic Forecasts in PPP Contracts Are Often Overestimated?" EIB University Research Sponsorship Programme.
Welde, Morten, and James Odeck. 2011. "Do Planners Get It Right? The Accuracy of Travel Demand Forecasting in Norway." EJTIR 1 (11): 80–95.
Zhang, Xu, and Mei Chen. 2017. "Quantifying Effects from Weather on Travel Time and Reliability." In , 14. Washington, D.C.

APPENDIX C

Deep Dives

Appendix C Contents

1 INTRODUCTION ..... II-136
2 METHODOLOGY ..... II-137
3 EASTOWN ROAD EXPANSION, LIMA, OHIO ..... II-139
  3.1 INTRODUCTION ..... II-139
  3.2 PROJECT DESCRIPTION ..... II-139
  3.3 PREDICTED-ACTUAL COMPARISON OF TRAFFIC FORECASTS ..... II-141
  3.4 POTENTIAL SOURCES OF FORECAST ERROR ..... II-142
  3.5 CONTRIBUTING SOURCES TO FORECAST ERROR ..... II-144
  3.6 DISCUSSION ..... II-152
4 INDIAN RIVER BRIDGE, PALM CITY, FLORIDA ..... II-153
  4.1 INTRODUCTION ..... II-153
  4.2 PROJECT DESCRIPTION ..... II-153
  4.3 PREDICTED-ACTUAL COMPARISON OF TRAFFIC FORECASTS ..... II-154
  4.4 POTENTIAL SOURCES OF FORECAST ERROR ..... II-157
  4.5 CONTRIBUTING SOURCES TO FORECAST ERROR ..... II-159
  4.6 DISCUSSION ..... II-165
5 CENTRAL ARTERY TUNNEL, BOSTON, MASSACHUSETTS ..... II-168
  5.1 INTRODUCTION ..... II-168
  5.2 PROJECT DESCRIPTION ..... II-168
  5.3 PREDICTED-ACTUAL COMPARISON OF TRAFFIC FORECASTS ..... II-169
  5.4 POTENTIAL SOURCES OF FORECAST ERROR ..... II-171
  5.5 CONTRIBUTING SOURCES TO FORECAST ERROR ..... II-173
  5.6 DISCUSSION ..... II-177
6 CYNTHIANA BYPASS, CYNTHIANA, KENTUCKY ..... II-178
  6.1 INTRODUCTION ..... II-178
  6.2 PROJECT DESCRIPTION ..... II-178
  6.3 PREDICTED-ACTUAL COMPARISON OF TRAFFIC FORECASTS ..... II-180
  6.4 POTENTIAL SOURCES OF FORECAST ERROR ..... II-184
  6.5 CONTRIBUTING SOURCES TO FORECAST ERROR ..... II-185
  6.6 DISCUSSION ..... II-190
7 SOUTH BAY EXPRESSWAY, SAN DIEGO, CALIFORNIA ..... II-191
  7.1 INTRODUCTION ..... II-191
  7.2 PROJECT DESCRIPTION ..... II-191
  7.3 TRAFFIC FORECASTS METHODOLOGY ..... II-192
  7.4 POTENTIAL SOURCES OF FORECAST ERROR ..... II-193
  7.5 DISCUSSION ..... II-199
8 US 41, BROWN COUNTY, WISCONSIN ..... II-200
  8.1 INTRODUCTION ..... II-200
  8.2 PROJECT DESCRIPTION ..... II-200
  8.3 PREDICTED-ACTUAL COMPARISON OF TRAFFIC FORECASTS ..... II-203

  8.4 POTENTIAL SOURCES OF FORECAST ERROR ..... II-205
  8.5 CONTRIBUTING SOURCES TO FORECAST ERROR ..... II-206
  8.6 DISCUSSION ..... II-207
9 DISCUSSION ..... II-208
REFERENCES ..... II-211

List of Tables

Table 1: Projects selected for Deep Dive Analysis .......... 137
Table 2: Sources of Forecast Error to be Considered by Deep Dives .......... 138
Table 3: Traffic Forecast Accuracy – Eastown Road Extension, Lima, Ohio .......... 142
Table 4: Input Accuracy Assessment Table (Eastown Road Extension) .......... 143
Table 5: Forecast Adjustment Table based on Elasticities for all Segments (Eastown Road Extension) .......... 146
Table 6: Adjusted Forecast Table using the Model (Eastown Road Extension) .......... 151
Table 7: Comparison of Base Year and Opening Year Traffic Count and Opening Year Traffic Forecast (Indian River Bridge Project) .......... 156
Table 8: Input Accuracy Assessment Table (Indian River Street Project) .......... 158
Table 9: Forecast Adjustment Table based on Elasticities for all Segments (Indian River Bridge Project) .......... 161
Table 10: 2025 Original Model Run Socio-Economic Inputs by County .......... 164
Table 11: Adjusted Forecast Table using the Model (Indian River Bridge Project) .......... 165
Table 12: External Trip Distribution using Both Competing Bridges .......... 167
Table 13: Comparison of Base Year and Mid-Year Traffic Count and Mid-Year Traffic Forecast (CA/T Project) .......... 171
Table 14: List of Exogenous Forecasts and Project Assumptions (CA/T Project) .......... 172
Table 15: Forecast Adjustment Table based on Elasticities (CA/T Project) .......... 174
Table 16: Availability of Data for Cynthiana Bypass Project .......... 181
Table 17: External Forecasts and Errors .......... 182
Table 18: Traffic Volume Accuracy Assessment (Cynthiana Bypass Project) .......... 184
Table 19: Input Accuracy Assessment Table (Cynthiana Bypass Project) .......... 185
Table 20: Forecast Adjustment Table based on Elasticities for all Segments (Cynthiana Bypass Project) .......... 187
Table 21: Forecast Adjustment by Model (Cynthiana Bypass Project) .......... 189
Table 22: Existing and Forecast Traffic (2005-2035), from USH 41 Traffic Study & EIS .......... 204
Table 23: List of Exogenous Forecasts and Project Assumptions (US 41 Project) .......... 205
Table 24: Forecast Adjustment Table based on Elasticities .......... 207

List of Figures

Figure 1: Project Corridor for Eastown Road Extension .......... 140
Figure 2: Project Corridor for Indian River Bridge Project .......... 154
Figure 3: Project Corridors and important links (Indian River Street Project) .......... 155
Figure 4: Distribution of Traffic from Old and New Bridges .......... 157
Figure 5: Martin County Unemployment Rate Chart .......... 166
Figure 6: Median Age (in years) in Southeast Florida Counties .......... 167
Figure 7: Central Artery/Tunnel Projects .......... 169
Figure 8: Traffic Count Links in the Study Area (CA/T Project) .......... 170
Figure 9: Project Corridor (Cynthiana Bypass) .......... 179
Figure 10: Cynthiana Study Area Link Volumes .......... 183
Figure 11: Project Study Area (South Bay Expressway) .......... 192
Figure 12: Model and Actual Full Length ETC Tolls on SBX .......... 194
Figure 13: Comparison of Observed and Projected Population and Household Growth in San Diego County .......... 195
Figure 14: Map of Change in Owner-Occupied Housing Units in San Diego County .......... 196
Figure 15: San Diego Home Price Index, 1987-2018 .......... 197
Figure 16: US-Mexico Historical Border Crossings at Otay Mesa (Passenger Cars) .......... 198
Figure 17: US-Mexico Historical Border Crossings at Otay Mesa (Trucks) .......... 198
Figure 18: Annual Revenue Forecasts on SBX .......... 199
Figure 19: Project Study Area (US 41 Brown County) .......... 201
Figure 20: Areas of US 41 Project in Brown and Winnebago Counties .......... 202
Figure 21: US 41 Project by Numbers .......... 202
Figure 22: A Map of Wisconsin DOT regions & Fox Valley Area (within a red boundary) .......... 203
Figure 23: Traffic Count Locations in the study area .......... 204

1 Introduction

The previously described Large-N analysis measures error as the Percent Difference from Forecast (PDFF) and can shed light on certain factors associated with forecast errors, but it does not shed light on why the forecasts may be in error. The Deep Dives fill that gap to the extent possible. This analysis focuses on addressing the following questions:

- What aspects of the forecasts (such as population forecasts, project scope, etc.) can we clearly identify as being accurate or inaccurate?
- If we had gotten those aspects right, how much would it change the traffic forecast?

The goal here is to attribute as much of the error as possible to known factors. The remaining error will be for "unknown reasons," and we will be able to say little about it beyond the fact that it is not due to the aspects we identified and quantified. The Deep Dives guide the effort of identifying the reasons behind forecast errors. The specific methods for answering these questions vary across the Deep Dives, depending both on the process options being considered and on the data available for each project.

This research conducted five Deep Dives. Six projects were initially selected to provide a range of project types and of available data for analysis; one was later discarded due to lack of clarity in its forecast documents. We aimed to find projects where:

1. The project is already open, and we expect to be able to find post-opening data.
2. The project is big enough to have a meaningful impact.
3. We have detailed information available about the forecasts. Ideally, this would be in the form of archived model runs. Lacking that, detailed forecast reports would be beneficial, and if those are unavailable, we would rely on the environmental impact statements or other public documents.
4. The projects as a set show some diversity of types.
We found it surprisingly difficult to find suitable case studies, and points one and three proved to be in direct conflict. We did find a few agencies doing a commendable job of archiving forecasts, but even in the best cases the archives get thin more than about 10 years back, and projects forecast less than 10 years ago are often not yet open. In addition, over longer timeframes staff have often turned over, and institutional memory can be lost. Here, the most promise came from finding long-time staff who happened to be good at keeping their own records or who saw the value in saving the information. In our search, we aimed for big projects, with the idea that they were more important to start with, that they would be better documented, and that they would show more meaningful impacts. What we found, though, was that many major projects opened over the last decade have been tolled. This is

natural given current funding constraints, but because toll forecasts have been studied more extensively elsewhere, we wanted them to be a part of, but not the dominant part of, our study.

The resulting Deep Dives (Table 1) provide a reasonable diversity of project types and available data. They include a new bridge, the expansion and extension of an arterial on the fringe of an urban area, a major new expressway built as a toll road, the rebuild and expansion of an urban freeway, and a state highway bypass around a small town.

Table 1: Projects selected for Deep Dive Analysis

Project Name | Brief Description
Eastown Road Extension Project, Lima, Ohio | Widened a 2.5-mile segment of the arterial from 2 lanes to 5 lanes and extended the arterial an additional mile.
Indian River Bridge, Palm City, Florida | A 0.6-mile-long bridge with four travel lanes in total that runs along CR 714 (Martin Highway), connecting with Indian River Street and crossing the St. Lucie River.
Central Artery/Tunnel, Boston, Massachusetts | Reconstruction of Interstate 93 (I-93) in downtown Boston, the extension of I-90 to Logan International Airport, the construction of two new bridges over the Charles River, six interchanges, and the Rose Kennedy Greenway in the space vacated by the previous elevated I-93 Central Artery.
Cynthiana Bypass, Cynthiana, Kentucky | A 2-lane state highway bypass to the west of the city, running from the southern terminus where US 62S and US 27S meet.
South Bay Expressway, San Diego, California | A 9.2-mile tolled highway segment of SR 125 in eastern San Diego, CA.
US 41 (later renamed I-41), Brown County, Wisconsin | Capacity additions, reconstruction of nine interchanges, construction of 24 roundabouts, addition of collector-distributor lanes, and construction of two system interchanges in Brown County, Wisconsin.
Section 2 describes the methodology adopted in conducting the Deep Dives. Sections 3 to 8 describe the findings from each of the Deep Dives.

2 Methodology

The Deep Dives begin with a comparison of the actual and forecast ADT. Given the review and our own assessment of the important factors associated with forecast error, the Deep Dives focus on evaluating each of the items listed in Table 2. Each Deep Dive follows a similar structure, working through the list of factors, attempting to identify whether each item is an important source of error for the forecast, and, if so, attempting to quantify how much the forecast would change if the forecasters had gotten that item right. The last column in Table 2 identifies whether we expect to be able to quantify the effect of that item on the resulting forecast. The top 7 factors are generally model inputs, and it is reasonable to expect that we could observe the actual outcomes and apply an elasticity or an updated model run to evaluate the effect of the correct input on the forecast. We expect the remaining factors to be more difficult to quantify and plan to address them qualitatively if they are identified as being important.

Table 2: Sources of Forecast Error to be Considered by Deep Dives

Item | Definition | Quantifiable
Employment | The actual employment (or GDP) differs from what was projected. | Yes
Population/Household | The actual population or households differ from what was projected. | Yes
Car Ownership | Actual car ownership differs from the projection. Should note whether car ownership is endogenous or exogenous to the forecast. | Yes
Fuel Price/Efficiency | The average fuel price or fuel efficiency differs from expectations. | Yes
Travel Time/Speed | Travel time comparison of the facility itself and alternative routes. | Yes
Toll Sensitivity/Value of Time | The sensitivity to tolls, or the value of the tolls themselves, is in error. For example, Anam (2016), studying the Coleman Bridge, found that the project considered two toll amounts ($1.00 and $0.75), but by the opening/horizon year the actual tolls were $0.85 and $2.00. | Yes
Project Scope | The project was built to different specifications than were assumed at the time of the forecast. For example, budget constraints meant that only 4 lanes were built instead of 6. | Yes
Rest of Network Assumptions | Assumptions about related projects to be constructed differed from what was actually built. | Yes
Model Deficiency/Issues | Limitations of the model itself, including possible errors or limitations of the method. For example, the project was built in a tourist area, but the model was not able to account for tourism. | No
Data Deficiency/Issues | Limitations of the data available at the time of the forecast. For example, erroneous or outdated counts were used as the basis for pivoting. | No
Unexpected Changes | In the latter portion of the 20th century, this could include the rise of two-worker households or other broad social trends. In the 21st century, this could include technology changes, such as self-driving cars. | No
Other | Other issues not articulated above. | No

3 Eastown Road Expansion, Lima, Ohio

3.1 Introduction

The Eastown Road expansion is a project in the city of Lima, Ohio, that widened a 2.5-mile segment of the arterial from 2 lanes to 5 lanes and extended the arterial an additional mile. This north-south arterial is located on the western edge of the city of Lima in Allen County, Ohio. This chapter, written in June 2018, assesses the reliability and accuracy of the traffic forecasts for the Eastown Road expansion project. Traffic forecasts for the project were prepared around 2000 for a 2009 opening year, and the project opened around 2009. Traffic counts are available for the years 2010-2017, all post-opening.

Section 3.2 describes the project. Section 3.3 compares the predicted and actual traffic volumes for all roadways in the study area where post-opening traffic counts are available. Section 3.4 enumerates the exogenous forecasts and sources of forecast error for the project; it also includes an assessment of the accuracy of the exogenous forecasts. Section 3.5 attempts to identify which items discussed in Section 3.4 are important sources of forecast error and, if so, to quantify how much the forecast would change if the forecasters had accurate information about those items. Section 3.6 summarizes the findings.

3.2 Project Description

The project extended Eastown Road from just north of Elida Road in the north to Spencerville Road in the south. It included a 2.5-mile expansion from 2 lanes to 5 lanes on the segment between Elida Road and West Elm Street and a 1-mile extension further south to Spencerville Road. Documentation of the project forecasts was unavailable at the time of writing; hence aspects such as project costs, the exact opening year, and the importance of the project to the local community could not be determined from project records. Using historical Google Earth imagery, the opening year was identified as 2009.
Figure 1 shows the project corridor.

Figure 1: Project Corridor for Eastown Road Extension (map showing the 11 numbered segments identified for traffic volume accuracy assessment, including W Elm Street)

3.3 Predicted-Actual Comparison of Traffic Forecasts

The Ohio Department of Transportation (ODOT) made travel demand model runs available for this effort, and these model runs have been used to report the predicted traffic on the project. The travel demand model area covers the entire Lima metropolitan planning organization (MPO) region. The model is a traditional four-step model implemented in the CUBE-Voyager software, with trip generation, trip distribution, mode choice, and traffic assignment steps. Additional components include a household model, which develops the distributions required by trip generation, and separate truck and external models. The model has a base year of 2000, and the modeled opening year for the Eastown project is 2009. The model runs included loaded highway networks for both the base and opening years.

At the time of writing, CUBE v6.4.3 was the latest available version, and the model runs were presumably made using earlier versions of CUBE. For the opening year forecasts to be consistent with the additional model runs made to quantify sources of forecasting error (described in Section 3.5), the opening year scenario was re-run using CUBE v6.4.2. The loaded network generated from this new model run was used to report the link-level opening year forecasts. It should be noted that the differences in model volumes between this new run and the original run provided by ODOT were very small (less than 2%).

A total of 11 links with an Average Daily Traffic (ADT) count available were identified in the project corridor. Table 3 lists each of these links with its forecast and observed ADT. The table includes an inaccuracy measure estimated as:

Percent Difference from Forecast = (Opening Year Count - Opening Year Forecast) / Opening Year Forecast

The first six segments constitute the Eastown Road project.
The links are also identified in Figure 1.

Table 3: Traffic Forecast Accuracy – Eastown Road Extension, Lima, Ohio

Seg# | Project Segment and Direction | Opening Year Count | Opening Year Forecast (2009) | Percent Difference from Forecast | Count Year Used
1 | North Eastown Road: North of Elida Road | 8,474 | 10,262 | -17% | 2010
2 | North Eastown Road: South of Elida Road | 15,071 | 19,435 | -22% | 2017
3 | North Eastown Road: North of Allentown Road | 12,169 | 16,755 | -27% | 2010
4 | North Eastown Road: South of Allentown Road | 15,404 | 19,099 | -19% | 2010
5 | South Eastown Road: North of Elm Street | 15,219 | 17,181 | -11% | 2017
6 | South Eastown Road: South of Elm Street | 8,515 | 14,907 | -43% | 2010
7 | Allentown Rd: West of Eastown Road | 9,740 | 8,773 | 11% | 2011
8 | Elm St: West of Eastown Road | 6,314 | 5,021 | 26% | 2010
9 | Elm St: East of Eastown Road | 7,793 | 9,084 | -14% | 2010
10 | Spencerville Rd: Far West of Eastown Rd | 8,346 | 8,882 | -6% | 2011
11 | Spencerville Rd: Far East of Eastown Rd | 8,210 | 10,604 | -23% | 2011

The opening year count data were available from ODOT's traffic website. Counts were available for either 2010 or 2011 for most segments, except for segments 2 and 5, where only a 2017 count was available. Overall, actual volumes were 20% lower than forecast on the segments of Eastown Road that were expanded from 2 lanes to 5 lanes. On the new extension segment south of Elm Street, the actual volume was 43% lower than forecast. This could reflect an error in the count data at that location: observed counts on Eastown Road drop 44% between north of Elm Street and south of Elm Street, which cannot be explained by the housing and commercial activity near the intersection. Further, the traffic forecasts on three of the four legs of this intersection (segments 5, 8, and 9) differ by only about 10-26% from the traffic counts.
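The PDFF measure used in this comparison can be expressed as a one-line function. The sketch below is illustrative (the function name is ours), with segment values taken from Table 3:

```python
# Percent Difference from Forecast (PDFF), as defined in Section 3.3:
# (opening year count - opening year forecast) / opening year forecast.
def pdff(count: float, forecast: float) -> float:
    return (count - forecast) / forecast

# Segment 1 of Table 3: 8,474 observed vs. 10,262 forecast -> about -17%.
seg1 = pdff(8474, 10262)

# Segment 6 (the new extension): 8,515 observed vs. 14,907 forecast -> about -43%.
seg6 = pdff(8515, 14907)
```

Negative values indicate overprediction (actual traffic below forecast), which is the case for most segments in Table 3.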
3.4 Potential Sources of Forecast Error

This section identifies the exogenous forecasts and project assumptions used in developing the traffic forecasts. Exogenous forecasts are made outside of the immediate traffic forecasting process. Project assumptions are established during project development and serve as the basis for the traffic forecast. Exogenous forecasts and project assumptions are leading sources of forecast error. An example of an exogenous forecast is the set of population and employment forecasts, which are commonly identified as a major source of traffic forecasting error. These forecasts are usually made by outside planning agencies on a regular basis; that is, they are not prepared for any individual project. During project development, these forecasts are revised to match assumptions documented by the project team. In this example, the population and employment forecasts are both an exogenous forecast and a project assumption.

Past forecasting research has identified several exogenous forecasts and project assumptions as common sources of forecast error, including:

- Macro-economic conditions (of the region or study area),
- Population and employment forecasts,
- Significant changes in land use,
- Auto fuel prices,
- Tolling pricing, sensitivity, and price levels,
- Auto ownership,
- Changes in technology,
- Travel times within the study area, and
- Duration between the year the forecast was produced and the opening year.

Table 4 lists all exogenous forecasts and project assumptions for which observed data are available.

Table 4: Input Accuracy Assessment Table (Eastown Road Extension)

Items | Quantifiable | Observed Opening Year Values | Estimated Opening Year Values | % Difference
Employment* | Yes | 38,801 | 48,312 | -20%
Population** | Yes | 78,576 | 80,854 | -3%
Car Ownership** | Yes | 54,603 | 56,084 | -3%
Fuel Price*** | Yes | $2.34 | $1.82 | 29%
Travel Speed† | Yes | 47 mph | 54 mph | -13%
Macro-economic Conditions | No | | |

Data sources: * https://www.bls.gov/; ** American Community Survey; *** https://www.eia.gov/. † The travel speed in this table is specifically for the off-peak period; the observed value is obtained from https://www.dot.state.oh.us/Divisions/Planning/TechServ/traffic/Pages/TMMS.aspx

The table lists the items that are potential sources of forecasting error and specifically identifies those that are important to the Eastown Road expansion project. Observed values for all factors in the table are for the year 2009, consistent with the observed opening year. The 2009 project opening year coincided with one of the worst economic downturns in the country's history, which produced significant unemployment nationwide; as the table shows, actual employment around that period was 20% lower than the employment forecast used for this project.
Fuel prices also peaked around 2008-2009. After adjusting fuel prices for inflation to the 2009 opening year, the actual fuel price was 29% higher than the fuel price estimated for the opening year. Travel speed on certain segments of Eastown Road was another key input in error, with observed travel speeds 13% lower than the modeled speeds. The modeled travel speed

mentioned in this table is specifically for the off-peak period; the modeled peak-period travel speeds were similar to the observed speeds.

Population and car-ownership forecasts were also quantified, but these were very close to the observed data for 2009. It should be noted that car ownership was exogenous to the forecast. None of the other potential sources of forecasting error identified in the table were deemed important for this project.

3.5 Contributing Sources to Forecast Error

Building upon the items discussed in Section 3.4, this section attempts to identify the items that are important sources of forecast error and, where possible, to quantify how much the forecast would change if the forecasters had accurate information about each item. Adjusted forecasts for the critical roadways are computed by applying an elasticity to the relative difference between the actual and predicted values of each item in Section 3.4; the effect on the forecast can be quantified in this way. Only items that could be quantified and that were deemed important for this project were adjusted.

First, the change in value, the relative difference between the actual (observed) value of an input and its forecast value, is calculated:

Change in Value = (Actual Value - Forecast Value) / Forecast Value

Second, an effect-on-forecast factor is calculated by exponentiating the product of the item's elasticity and the natural log of one plus the change in value. This factor is applied to the forecast volume to generate an adjusted forecast. The forecasts are adjusted sequentially for each variable, with the adjusted forecast from one row carrying over to the next row as its input, thereby presenting a cumulative effect of adjustment.
Effect on Forecast = exp(Elasticity × ln(1 + Change in Value)) - 1    (Equation 1)

Adjusted Forecast = (1 + Effect on Forecast) × Starting Forecast Volume    (Equation 2)

This Deep Dive analysis adopts the best available elasticity values, identified by Ewing et al. (2014) through their cross-sectional and longitudinal models, together with other transportation literature (Dong et al. 2012; Dunkerley, Rohr, and Daly 2014). It is important to note that the elasticity values from Ewing et al. (2014) relate to vehicle miles traveled (VMT), not traffic volumes. We were not able to find elasticity values specifically for traffic volumes with respect to employment, population, and fuel price, nor an elasticity of VMT or traffic volume with respect to employment. To this end, the elasticity analysis rests on two assumptions. First, the elasticity values of VMT

with respect to population and fuel price are close to the corresponding elasticities of traffic volumes, given the high correlation between VMT and traffic volumes. Second, the elasticity with respect to employment is close to that with respect to per capita income, given their high correlation. The results of quantifying the effects on the forecast are shown in Table 5.
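The sequential adjustment in Equations 1 and 2 can be sketched as follows. This is a minimal illustration using the segment 1 inputs from Tables 4 and 5; the function name is ours, not from the report:

```python
import math

# Apply Equations 1 and 2 for a single input item: scale the starting
# forecast volume by (1 + change in the input) raised to the elasticity.
def adjust(starting_volume: float, actual: float, forecast: float,
           elasticity: float) -> float:
    change = (actual - forecast) / forecast                    # input error
    effect = math.exp(elasticity * math.log(1 + change)) - 1   # Equation 1
    return (1 + effect) * starting_volume                      # Equation 2

# Segment 1: start from the original forecast of 10,262 and adjust
# cumulatively, each adjusted volume feeding the next row (as in Table 5).
volume = 10262.0
for actual, forecast, elasticity in [
    (38801, 48312, 0.30),   # employment
    (78576, 80854, 0.75),   # population/households
    (54603, 56084, 0.30),   # car ownership
    (2.34, 1.82, -0.20),    # fuel price ($/gallon)
]:
    volume = adjust(volume, actual, forecast, elasticity)
# volume ends near the adjusted forecast of 8,872 reported in Table 5
```

Because each step multiplies the running volume by a factor, the order of the adjustments does not affect the final adjusted forecast, only the intermediate values shown in the table.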

Table 5: Forecast Adjustment Table based on Elasticities for all Segments (Eastown Road Extension)

Note: negative elasticities are shown in parentheses, e.g. (0.20) = -0.20.

Seg# | Item | Actual Value | Forecast Value | Change in Value | Elasticity | Effect on Forecast (Eq. 1) | Starting Forecast Volume | Adjusted Forecast Volume (Eq. 2) | Remaining % Difference
1 | Employment | 38,801 | 48,312 | -20% | 0.30 | -6% | 10,262 | 9,609 | -12%
1 | Population/Household | 78,576 | 80,854 | -3% | 0.75 | -2% | 9,609 | 9,405 | -10%
1 | Car Ownership | 54,603 | 56,084 | -3% | 0.30 | -1% | 9,405 | 9,330 | -9%
1 | Fuel Price/Efficiency | $2.34 | $1.82 | 29% | (0.20) | -5% | 9,330 | 8,872 | -4%
1 | Travel Time/Speed | - | - | 0% | (0.60) | 0% | 8,872 | 8,872 | -4%
1 | Original Traffic Forecast | 8,474 | 10,262 | -17% | N/A | N/A | | |
1 | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 10,262 | 8,872 | -4%
2 | Employment | 38,801 | 48,312 | -20% | 0.30 | -6% | 19,435 | 18,198 | -17%
2 | Population/Household | 78,576 | 80,854 | -3% | 0.75 | -2% | 18,198 | 17,812 | -15%
2 | Car Ownership | 54,603 | 56,084 | -3% | 0.30 | -1% | 17,812 | 17,670 | -15%
2 | Fuel Price/Efficiency | $2.34 | $1.82 | 29% | (0.20) | -5% | 17,670 | 16,803 | -10%
2 | Travel Time/Speed | - | - | 0% | (0.60) | 0% | 16,803 | 16,803 | -10%
2 | Original Traffic Forecast | 15,071 | 19,435 | -22% | N/A | N/A | | |
2 | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 19,435 | 16,803 | -10%
3 | Employment | 38,801 | 48,312 | -20% | 0.30 | -6% | 16,755 | 15,688 | -22%
3 | Population/Household | 78,576 | 80,854 | -3% | 0.75 | -2% | 15,688 | 15,356 | -21%
3 | Car Ownership | 54,603 | 56,084 | -3% | 0.30 | -1% | 15,356 | 15,233 | -20%
3 | Fuel Price/Efficiency | $2.34 | $1.82 | 29% | (0.20) | -5% | 15,233 | 14,486 | -16%
3 | Travel Time/Speed | - | - | 0% | (0.60) | 0% | 14,486 | 14,486 | -16%
3 | Original Traffic Forecast | 12,169 | 16,755 | -27% | N/A | N/A | | |
3 | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 16,755 | 14,486 | -16%
4 | Employment | 38,801 | 48,312 | -20% | 0.30 | -6% | 19,099 | 17,883 | -14%
4 | Population/Household | 78,576 | 80,854 | -3% | 0.75 | -2% | 17,883 | 17,504 | -12%

Table 5 (continued)

Seg# | Item | Actual Value | Forecast Value | Change in Value | Elasticity | Effect on Forecast | Starting Forecast Volume | Adjusted Forecast Volume | Remaining % Difference
4 | Car Ownership | 54,603 | 56,084 | -3% | 0.30 | -1% | 17,504 | 17,364 | -11%
4 | Fuel Price/Efficiency | $2.34 | $1.82 | 29% | (0.20) | -5% | 17,364 | 16,513 | -7%
4 | Travel Time/Speed | 0.59 | 0.49 | 20% | (0.60) | -11% | 16,513 | 14,772 | 4%
4 | Original Traffic Forecast | 15,404 | 19,099 | -19% | N/A | N/A | | |
4 | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 19,099 | 14,772 | 4%
5 | Employment | 38,801 | 48,312 | -20% | 0.30 | -6% | 17,181 | 16,087 | -5%
5 | Population/Household | 78,576 | 80,854 | -3% | 0.75 | -2% | 16,087 | 15,746 | -3%
5 | Car Ownership | 54,603 | 56,084 | -3% | 0.30 | -1% | 15,746 | 15,620 | -3%
5 | Fuel Price/Efficiency | $2.34 | $1.82 | 29% | (0.20) | -5% | 15,620 | 14,855 | 2%
5 | Travel Time/Speed | 0.83 | 0.72 | 15% | (0.60) | -8% | 14,855 | 13,640 | 12%
5 | Original Traffic Forecast | 15,219 | 17,181 | -11% | N/A | N/A | | |
5 | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 17,181 | 13,640 | 12%
6 | Employment | 38,801 | 48,312 | -20% | 0.30 | -6% | 14,907 | 13,958 | -39%
6 | Population/Household | 78,576 | 80,854 | -3% | 0.75 | -2% | 13,958 | 13,662 | -38%
6 | Car Ownership | 54,603 | 56,084 | -3% | 0.30 | -1% | 13,662 | 13,553 | -37%
6 | Fuel Price/Efficiency | $2.34 | $1.82 | 29% | (0.20) | -5% | 13,553 | 12,888 | -34%
6 | Travel Time/Speed | 1.28 | 1.11 | 15% | (0.60) | -8% | 12,888 | 11,832 | -28%
6 | Original Traffic Forecast | 8,515 | 14,907 | -43% | N/A | N/A | | |
6 | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 14,907 | 11,832 | -28%
7 | Employment | 38,801 | 48,312 | -20% | 0.30 | -6% | 8,773 | 8,215 | 19%
7 | Population/Household | 78,576 | 80,854 | -3% | 0.75 | -2% | 8,215 | 8,040 | 21%
7 | Car Ownership | 54,603 | 56,084 | -3% | 0.30 | -1% | 8,040 | 7,976 | 22%
7 | Fuel Price/Efficiency | $2.34 | $1.82 | 29% | (0.20) | -5% | 7,976 | 7,585 | 28%
7 | Travel Time/Speed | - | - | 0% | (0.60) | 0% | 7,585 | 7,585 | 28%

Table 5 (continued)

Seg# | Item | Actual Value | Forecast Value | Change in Value | Elasticity | Effect on Forecast | Starting Forecast Volume | Adjusted Forecast Volume | Remaining % Difference
7 | Original Traffic Forecast | 9,740 | 8,773 | 11% | N/A | N/A | | |
7 | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 8,773 | 7,585 | 28%
8 | Employment | 38,801 | 48,312 | -20% | 0.30 | -6% | 5,021 | 4,701 | 34%
8 | Population/Household | 78,576 | 80,854 | -3% | 0.75 | -2% | 4,701 | 4,602 | 37%
8 | Car Ownership | 54,603 | 56,084 | -3% | 0.30 | -1% | 4,602 | 4,565 | 38%
8 | Fuel Price/Efficiency | $2.34 | $1.82 | 29% | (0.20) | -5% | 4,565 | 4,341 | 45%
8 | Travel Time/Speed | - | - | 0% | (0.60) | 0% | 4,341 | 4,341 | 45%
8 | Original Traffic Forecast | 6,314 | 5,021 | 26% | N/A | N/A | | |
8 | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 5,021 | 4,341 | 45%
9 | Employment | 38,801 | 48,312 | -20% | 0.30 | -6% | 9,084 | 8,506 | -8%
9 | Population/Household | 78,576 | 80,854 | -3% | 0.75 | -2% | 8,506 | 8,325 | -6%
9 | Car Ownership | 54,603 | 56,084 | -3% | 0.30 | -1% | 8,325 | 8,259 | -6%
9 | Fuel Price/Efficiency | $2.34 | $1.82 | 29% | (0.20) | -5% | 8,259 | 7,854 | -1%
9 | Travel Time/Speed | - | - | 0% | (0.60) | 0% | 7,854 | 7,854 | -1%
9 | Original Traffic Forecast | 7,793 | 9,084 | -14% | N/A | N/A | | |
9 | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 9,084 | 7,854 | -1%
10 | Employment | 38,801 | 48,312 | -20% | 0.30 | -6% | 8,882 | 8,317 | 0.3%
10 | Population/Household | 78,576 | 80,854 | -3% | 0.75 | -2% | 8,317 | 8,140 | 3%
10 | Car Ownership | 54,603 | 56,084 | -3% | 0.30 | -1% | 8,140 | 8,075 | 3%
10 | Fuel Price/Efficiency | $2.34 | $1.82 | 29% | (0.20) | -5% | 8,075 | 7,679 | 9%
10 | Travel Time/Speed | - | - | 0% | (0.60) | 0% | 7,679 | 7,679 | 9%
10 | Original Traffic Forecast | 8,346 | 8,882 | -6% | N/A | N/A | | |
10 | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 8,882 | 7,679 | 9%

Table 5 (continued)

Seg# | Item | Actual Value | Forecast Value | Change in Value | Elasticity | Effect on Forecast | Starting Forecast Volume | Adjusted Forecast Volume | Remaining % Difference
11 | Employment | 38,801 | 48,312 | -20% | 0.30 | -6% | 10,604 | 9,929 | -17%
11 | Population/Household | 78,576 | 80,854 | -3% | 0.75 | -2% | 9,929 | 9,718 | -16%
11 | Car Ownership | 54,603 | 56,084 | -3% | 0.30 | -1% | 9,718 | 9,641 | -15%
11 | Fuel Price/Efficiency | $2.34 | $1.82 | 29% | (0.20) | -5% | 9,641 | 9,168 | -10%
11 | Travel Time/Speed | - | - | 0% | (0.60) | 0% | 9,168 | 9,168 | -10%
11 | Original Traffic Forecast | 8,210 | 10,604 | -23% | N/A | N/A | | |
11 | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 10,604 | 9,168 | -10%
New Extension | Employment | 38,801 | 48,312 | -20% | 0.30 | | 14,907 | 13,958 | -39%
New Extension | Population/Household | 78,576 | 80,854 | -3% | 0.75 | | 13,958 | 13,662 | -38%
New Extension | Car Ownership | 54,603 | 56,084 | -3% | 0.30 | | 13,662 | 13,553 | -37%
New Extension | Fuel Price/Efficiency | $2.34 | $1.82 | 29% | (0.20) | | 13,553 | 12,888 | -34%
New Extension | Travel Time/Speed | 1.28 | 1.11 | 15% | (0.60) | | 12,888 | 11,832 | -28%
New Extension | Original Traffic Forecast | 8,515 | 14,907 | -43% | N/A | N/A | | |
New Extension | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 14,907 | 11,832 | -28%
Modified Existing Links¹ | Employment | 38,801 | 48,312 | -20% | 0.30 | | 82,732 | 77,466 | -14%
Modified Existing Links¹ | Population/Household | 78,576 | 80,854 | -3% | 0.75 | | 77,466 | 75,823 | -13%
Modified Existing Links¹ | Car Ownership | 54,603 | 56,084 | -3% | 0.30 | | 75,823 | 75,217 | -12%
Modified Existing Links¹ | Fuel Price/Efficiency | $2.34 | $1.82 | 29% | (0.20) | | 75,217 | 71,530 | -7%
Modified Existing Links¹ | Travel Time/Speed | - | - | 0% | (0.60) | | 71,530 | 68,574 | -3%
Modified Existing Links¹ | Original Traffic Forecast | 66,337 | 82,732 | -20% | N/A | N/A | | |
Modified Existing Links¹ | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 82,732 | 68,574 | -3%

¹ Links that existed before and were modified as part of the project.

Other links² | Employment | 38,801 | 48,312 | -20% | 0.30 | | 42,364 | 39,667 | 2%
Other links² | Population/Household | 78,576 | 80,854 | -3% | 0.75 | | 39,667 | 38,826 | 4%
Other links² | Car Ownership | 54,603 | 56,084 | -3% | 0.30 | | 38,826 | 38,516 | 5%
Other links² | Fuel Price/Efficiency | $2.340 | $1.820 | 29% | (0.20) | | 38,516 | 36,628 | 10%
Other links² | Travel Time/Speed | - | - | 0% | (0.60) | | 36,628 | 36,628 |
Other links² | Original Traffic Forecast | 40,403 | 42,364 | -5% | N/A | N/A | | |
Other links² | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 42,364 | 36,628 | 10%
² Links which were not part of the project, but for which the traffic was forecasted.

The original forecast value was successively adjusted for each of the items identified as contributing sources of forecasting error; the final remaining percentage difference from forecast after all adjustments is shown in the table. Table 5 shows the detailed elasticity-based adjustments made for all the segments. The most significant impacts on traffic volumes were due to the employment and travel time corrections. Using these elasticity-based adjustments, the percent difference from forecast improved on all segments of Eastown Road (segments 1-6), though the actual volume on the extended portion of Eastown Road (segment 6) remains 28% below the adjusted forecast. Segment 6 has a potential error in the count data, as described in Section 3.3. In addition to the elasticity-based adjustment, the travel model used to produce the traffic forecasts was re-run using corrected exogenous forecasts and project assumptions. The same items identified in Section 3.4 were adjusted sequentially in the model. Employment, population and car ownership were uniformly scaled down at the traffic analysis zone (TAZ) level in the model to match the observed values. For fuel price, the auto operating cost was changed in the model. According to a 2013 report on driving costs from AAA (American Automobile Association 2013), approximately 20% of the auto operating cost is due to fuel price, so only this 20% portion of the auto operating cost in the model was adjusted to reflect the observed fuel price. Travel speeds on the corridor-specific segments were changed wherever they differed from the observed values. The model adjustments were performed sequentially to obtain new model volumes. The results of this process for all segments are shown in Table 6.
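The fuel-price correction described above, in which only the fuel share of the auto operating cost is rescaled, can be sketched as follows. This is an illustrative sketch, not code from the study: the function name is ours and the 10 cents/mile operating cost in the example is an assumed placeholder, while the 20% fuel share (AAA 2013) and the forecast ($1.82) and observed ($2.34) fuel prices come from the report.

```python
def adjusted_auto_operating_cost(model_cost, forecast_fuel, observed_fuel, fuel_share=0.2):
    """Rescale only the fuel-dependent share of the auto operating cost.

    Per the AAA (2013) driving-cost report cited in the text, roughly 20% of
    auto operating cost is fuel, so only that share is scaled by the ratio of
    observed to forecast fuel price; the remaining 80% is left unchanged.
    """
    return model_cost * ((1.0 - fuel_share) + fuel_share * observed_fuel / forecast_fuel)

# Example with an assumed operating cost of 10.0 cents/mile and the report's
# forecast ($1.82) and observed ($2.34) fuel prices:
corrected = adjusted_auto_operating_cost(10.0, forecast_fuel=1.82, observed_fuel=2.34)
```

Because the fuel share is small, even a large fuel-price error moves the total operating cost only modestly, which is consistent with the modest volume changes from this adjustment.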
Table 6: Adjusted Forecast Table using the Model (Eastown Road Extension)

Seg# | Items | Old Model Volume | New Model Volume | Observed Volume | Difference (Observed - New) | % Difference from Observed Volume | % Difference from Old Model Volume
1 | All Adjustments | 10,262 | 9,375 | 8,474 | -901 | -10% | -9%
2 | | 19,435 | 16,810 | 15,071 | -1,739 | -10% | -14%
3 | | 16,755 | 14,148 | 12,169 | -1,979 | -14% | -16%
4 | | 19,099 | 15,337 | 15,404 | 67 | 0% | -20%
5 | | 17,181 | 13,679 | 15,219 | 1,540 | 11% | -20%
6 | | 14,907 | 12,486 | 8,515 | -3,971 | -32% | -16%
7 | | 8,773 | 8,249 | 9,740 | 1,491 | 18% | -6%
8 | | 5,021 | 4,601 | 6,314 | 1,713 | 37% | -8%
9 | | 9,084 | 8,999 | 7,793 | -1,206 | -13% | -1%
10 | | 8,882 | 8,413 | 8,346 | -67 | -1% | -5%
11 | | 10,604 | 9,491 | 8,210 | -1,281 | -13% | -10%
New Extension | | 14,907 | 12,486 | 8,515 | -3,971 | -32% | -16%
Modified Existing Links | | 82,732 | 69,349 | 66,337 | -3,012 | -4% | -16%
Other links | | 42,364 | 39,753 | 40,403 | 650 | 2% | -6%

Overall, the final adjusted forecasts from the model re-runs were very similar to those obtained from the elasticity-based adjustments, especially on Eastown Road (segments 1-6).

3.6 Discussion

The actual volumes on Eastown Road were 20% lower than forecast for the existing portion of the road and 43% lower than forecast for the extension. Note that there is a possible error in the observed counts on the extension segment. The project opened in 2009, at the depth of the economic recession and during a period of high gas prices. As a result, over-estimation of employment and under-estimation of fuel price in the opening year were two key contributors to the forecasting errors in this project. Additionally, the observed travel speeds on certain segments of the project were 13% lower than the modeled speeds; this was the third key contributor to the forecasting error. Population and car ownership forecasts were very close to the observed values and contributed only a small portion of the forecasting error. Adjustments to the forecasts using elasticities and model re-runs confirmed that significant errors in opening year forecasts of employment, fuel price and travel speed played a major role in the over-estimation of traffic volumes on Eastown Road. The forecasts for the project segments that were widened from 2 lanes to 5 lanes improved, with actual volumes only 3% lower than the adjusted forecasts after accounting for the corrected exogenous forecasts and project assumptions. The forecasts on the extension segment improved as well, with actual volumes 28% lower than the adjusted forecasts. Overall, the prevailing macro-economic conditions around the opening year played a major part in the accuracy of the forecasts for the Eastown Road expansion project.
This is a major source of uncertainty that is extremely difficult to consider directly when traffic forecasts are prepared, given the many modeling parameters that could change in an economic downturn. One way to account for it is to evaluate and document the change in the traffic forecasts under reduced employment and higher fuel prices. Whether risk and uncertainty were considered in the original traffic forecasts is unknown because of the absence of project documentation. For future forecasting efforts, it is suggested that a copy of the project and traffic forecasting documentation be archived along with the actual model used to generate the forecasts.

4 Indian River Bridge, Palm City, Florida

4.1 Introduction

The Indian River Street Bridge is a new bridge construction project located in Palm City, Florida (Martin County). The bridge is 0.6 miles long with four travel lanes in total (two lanes in each direction). It runs along CR 714 (Martin Highway), connecting with Indian River Street, and crosses the St. Lucie River. This report, written in June 2018, assesses the reliability and accuracy of the traffic forecasts for the Indian River Street Bridge project. Traffic forecasts for the project were reported in 2003 for the 2011, 2021 and 2031 forecast years. The project was scheduled to open in 2011 but actually opened in 2014. Annual daily traffic (ADT) counts are available for 2014. Section 4.2 describes the project. Section 4.3 compares the predicted and actual traffic volumes for all roadways in the study area where post-opening traffic counts are available. Section 4.4 enumerates the exogenous forecasts and sources of forecast error for the project, and includes an assessment of the accuracy of the exogenous forecasts. Section 4.5 identifies which of the items discussed in Section 4.4 are important sources of forecast error and attempts to quantify how much the forecast would change had the forecasters had accurate information about each item. Section 4.6 summarizes the findings.

4.2 Project Description

The Indian River Street Bridge acts as a reliever bridge for the Palm City Bridge (the old bridge), which is approximately one mile north of the new bridge. It is also expected to provide relief to the existing SR 714 corridor, which connects with the Palm City Bridge. The study area boundaries extend from Florida's Turnpike to the west, Federal Highway (US 1) to the east, the I-95 crossing of the St. Lucie Canal to the south and the Martin/St. Lucie county line to the north.
Figure 2 shows the study area for this project. The project study evaluated multiple alternatives before selecting construction of a new four-lane bridge. The updated study was reported in 2003. Construction started in 2009 and was completed in 2014. The estimated construction cost of the project is $63.9 million. This project is interesting because it provides an opportunity to examine a new bridge crossing over a river, with clear diversion effects, and because detailed modeling information is available. The model was built using the TRANPLAN (Transportation Planning) software. FDOT District 4 provided archived model runs and detailed project reports to support this Deep Dive analysis.

Figure 2: Project Corridor for Indian River Bridge Project

4.3 Predicted-Actual Comparison of Traffic Forecasts

The New Bridge Crossing Alternative Corridor Alignment Report (Corridor Report) was completed in March 2001 and later updated in 2003. The study involves two major corridors: SR 714, referred to as the North Corridor (which includes the Palm City Bridge), and CR 714, referred to as the South Corridor (which includes the Indian River Street Bridge). In addition to the new bridge construction, the project includes upgrading CR 714 from a three-lane rural section with a center two-way left-turn lane to a four-lane arterial, along with other minor improvements to the side streets at signalized intersections. There are 47 links in the study area, and traffic forecasts were obtained for 25 of them. The Treasure Coast Regional Planning Model (TCRPM II) 2025 Cost Feasible Model (A25) was used in the evaluation of this project. The base year for this model was 1996 and the horizon year was 2025. The 2001, 2011, 2021 and 2031 traffic volumes for the corridors in this study area were calculated using a combination of linear regression, turning movement procedures, four-step travel demand model forecasts and professional judgment, incorporating model estimates and historical traffic information where appropriate. Historical traffic count data were obtained for 1992 through 2001 for selected county and FDOT count stations in the project study area. On a few corridors where the estimated traffic could not be explained, professional judgment was used to adjust the growth rate and to reassign the design year traffic volumes. The detailed approach can be found in the Revised

Traffic Projection and Turning Movement Report for SR-714 and Martin Highway/Indian Street (Indian Street Bridge Crossing). There are 25 links with an average daily traffic (ADT) count. For this analysis, we concentrated on the 11 links near the two bridges. Links on the CR 714 corridor are identified as the South Corridor and links on the SR 714 corridor are identified as the North Corridor. The rest of the links are cross-sectional links connecting the north and south corridors. Figure 3 shows the locations of the 11 links analyzed in this report. Segment 1 is the new Indian River Street Bridge and Segment 5 is the existing Palm City Bridge.

Figure 3: Project Corridors and important links (Indian River Street Project)

Table 7 lists each of these links with its base year traffic count and its forecast and observed ADT in the opening year. The table adds an inaccuracy index for the traffic forecasts that was estimated as:

Percent Difference from Forecast = (Opening Year Count - Opening Year Forecast) / Opening Year Forecast

The new bridge was proposed to open in 2011 but actually opened in December 2013. Therefore, the 2014 counts are compared with the 2011 forecasted volumes in this exercise. The 2001 counts and 2011 forecasted volumes were obtained from the Indian Street Bridge PD&E (Table 4-4), and opening year counts in terms of annual daily traffic (ADT) were assembled from the Martin County 2014 Roadway Level of Service Inventory Report.
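The inaccuracy index above is straightforward to compute; a minimal sketch follows (the function name is ours; the example values are the new-bridge count and forecast from Table 7):

```python
def percent_difference_from_forecast(opening_year_count, opening_year_forecast):
    # (Opening Year Count - Opening Year Forecast) / Opening Year Forecast.
    # Negative values mean the forecast over-estimated the observed traffic.
    return (opening_year_count - opening_year_forecast) / opening_year_forecast

# New Indian River Street Bridge: 17,129 counted vs. 42,900 forecast -> about -60%
pdff = percent_difference_from_forecast(17_129, 42_900)
```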

Table 7 Comparison of Base Year and Opening Year Traffic Count and Opening Year Traffic Forecast (Indian River Bridge Project)

Site ID | Site Segment | Report | Base Year Count (2001) | Opening Year Count (2014) | Opening Year Forecast (2011) | Percent Difference from Forecast
1 | South Corridor: CR 714 from St Lucie River to SR 76 [new bridge] | Build Project | - | 17,129 | 42,900 | -60%
2 | South Corridor: Indian Street from SR 76 to Willoughby Blvd. | | 14,500 | 21,866 | 27,600 | -21%
3 | South Corridor: CR 714 from West of Mapp Road | | 9,900 | 18,213 | 22,300 | -18%
4 | North Corridor: SR 714 from SR 76 to Willoughby Blvd. | | 29,900 | 23,370 | 33,000 | -29%
5 | North Corridor: SR 714 from Mapp Road to Palm City Ave [old bridge] | | 48,000 | 33,675 | 52,800 | -36%
6 | North Corridor: SR 714 from Palm City Ave. to SR 76 | | 43,300 | 33,675 | 46,400 | -27%
7 | North Corridor: SR 714 West of Mapp Road | | 32,300 | 28,678 | 34,000 | -16%
8 | Palm City Ave - north of SR 714 | | 8,800 | 7,010 | 9,700 | -28%
9 | SR 76 - north of Indian Street | | 22,200 | 21,883 | 23,900 | -8%
10 | Willoughby Blvd. from south of SR 714 | | 9,000 | 9,565 | 17,800 | -46%
11 | Mapp Road from North of CR 714 | | 14,600 | 11,835 | 17,000 | -30%

In general, the volumes for all links in the study area were overestimated by the model (Table 7). The percent difference is very high for the main corridors: the new bridge (speed limit of 45 mph) volume estimate is 60% over the actual count, while the old bridge (speed limit of 40 mph) volume is 36% higher than observed. The forecasted volume is also almost double the actual count on Willoughby Blvd., which connects the SR 714 and CR 714 corridors. Surprisingly, the SR 76 corridor has reasonable opening year volumes compared to the actual counts, even though all the surrounding corridors show high inaccuracy in the opening year volume estimates.
Further analysis of the distribution of volumes from the old and new bridges across the nearby area showed that SR 76 south of SR 714 hardly receives any flow coming from the old bridge. The same is true for Willoughby Blvd.: it does not receive much flow from either bridge, and yet its traffic is overestimated by 46%. This might suggest that either the overall traffic projected by the model is overestimated or the distribution of modeled trips does not match the actual trip distribution. Base year counts and modeled volumes were compared to verify that the latter was not the case. The analysis also suggests that most of the trips using the new and old bridges stay within the Martin County boundary. Figure 4 shows the flow of traffic from the old and new bridges. The red and black lines show the amount of traffic on neighboring links coming from the new and old bridges, respectively. Thicker lines

indicate higher traffic volumes; the thinnest line represents 1,000 trips. Hence, the two bridges (shown in yellow and blue) have the thickest lines, which become thinner as the traffic dissipates away from the bridges.

Figure 4: Distribution of Traffic from Old and New Bridges

4.4 Potential Sources of Forecast Error

This section identifies the exogenous forecasts and project assumptions used in the development of the traffic forecasts. Exogenous forecasts are made outside of the immediate traffic forecasting process. Project assumptions are established during project development and serve as the basis for the traffic forecast. Exogenous forecasts and project assumptions are leading sources of forecast error. An example of exogenous forecasts are the population and employment forecasts, which are commonly identified as a major source of traffic forecasting error. These forecasts are usually made by outside planning agencies on a regular basis; that is, they are not prepared for any individual project. During project development, these forecasts are revised to match assumptions documented by the project team. In this example, the population, car ownership and employment forecasts are each both an exogenous forecast and a project assumption.

Past forecasting research has identified several exogenous forecasts and project assumptions as common sources of forecast error, including:

• Macro-economic conditions (of the region or study area),
• Population and employment forecasts,
• Significant changes in land use,
• Auto fuel prices,
• Tolling pricing, sensitivity and price levels,
• Auto ownership,
• Changes in technology,
• Travel times within the study area, and
• Duration between the year the forecast was produced and the opening year.

Table 8 lists all the items that are potential sources of forecasting error and specifically identifies those sources which are important to the Indian River Street Bridge project. Observed values for all the factors in the table are for the year 2014, to be consistent with the observed opening year. The population, employment and car ownership values cover all three counties in the model: Indian River, St. Lucie and Martin. The 2011 estimates are calculated by interpolating the 1996 and 2025 socio-economic data from the model. In terms of population, the St. Lucie County region is currently developing fast. Therefore, even though the population was overestimated for Indian River and Martin Counties (by 1-3%), the total regional population was underestimated by 6% because the population of St. Lucie County was underestimated by 22%. The major economic recession of 2008 occurred between the base year and the opening year of the project. This downturn resulted in significant unemployment throughout the country, and as a result the travel demand model's opening year employment estimate was 10% higher than the actual employment.
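The 2011 interpolation of the socio-economic inputs can be sketched as simple linear interpolation between the model's 1996 base year and 2025 horizon year. This is an illustration with made-up values; the function name and the example employment figures are ours, not data from the model.

```python
def interpolate_input(base_year, base_value, horizon_year, horizon_value, target_year):
    # Linearly interpolate a socio-economic input (population, employment,
    # car ownership) between the model's base year and horizon year.
    frac = (target_year - base_year) / (horizon_year - base_year)
    return base_value + frac * (horizon_value - base_value)

# e.g., a hypothetical county employment of 100,000 in 1996 and 229,000 in 2025
est_2011 = interpolate_input(1996, 100_000.0, 2025, 229_000.0, 2011)
```

Linear interpolation assumes steady growth between base and horizon years, which is exactly why a sharp shock like the 2008 recession produces large errors in the interpolated opening-year estimates.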
Table 8: Input Accuracy Assessment Table (Indian River Street Project)

Items | Quantifiable | Observed Opening Year Value (2014) | Estimated Opening Year Value (2011) | % Difference
Employment* | Yes | 177,966 | 198,138 | -10%
Population** | Yes | 574,564 | 542,395 | 6%
Car Ownership** | Yes | 364,503 | 337,742 | 8%
Fuel Price*** | Yes | $3.40 | $1.91 | 78%
Macro-economic Conditions | No | | |

Data sources for observed values: * https://beta.bls.gov/; ** 2014 American Community Survey data; *** https://www.eia.gov/

The estimated opening year fuel price is a proxy fuel price after adjusting for inflation between 1996 and 2014 and is specific to the Lower Atlantic region of the USA. Note that 2014 was one of the years in which a fuel price spike was observed. Although fuel prices are

not used directly in this model, it should be noted that even after adjusting for inflation, the fuel prices in the opening year were under-estimated by 78%. None of the other potential sources of forecasting error identified in the table were deemed important in the forecasts for this project.

4.5 Contributing Sources to Forecast Error

Building upon the items discussed in Section 4.4, this section identifies items that are important sources of forecast error and attempts to quantify how much the forecast would change had the forecasters had accurate information about each item. Adjusted forecasts for the critical roadways are computed by applying an elasticity to the relative change between the actual and predicted values for each item in Section 4.4. Only those items which could be quantified and were deemed important for this project were adjusted. The effect on the forecast is quantified as follows. First, the change in forecast value, the relative difference between the forecast input value and the actual observed value, is calculated:

Change in Forecast Value = (Actual Value - Forecast Value) / Forecast Value

Second, the effect on the forecast is calculated by exponentiating the product of the elasticity and the natural log of one plus the change in forecast value:

Effect on Forecast = exp(Elasticity × ln(1 + Change in Forecast Value)) - 1

This factor is then applied to the forecast volume to generate an adjusted forecast:

Adjusted Forecast = (1 + Effect on Forecast) × Starting Forecast Volume

This deep dive analysis adopted the elasticity values identified by Ewing et al. (2014) from their cross-sectional and longitudinal models together, along with other transportation literature (Dong et al. 2012; Dunkerley, Rohr, and Daly 2014). It is important to note that the elasticity values from Ewing et al. (2014) relate to vehicle miles traveled (VMT), not traffic volumes.
We were not able to find elasticity values defined specifically for traffic volumes with respect to employment, population, or fuel price, nor an elasticity of VMT or traffic volume with respect to employment. This elasticity analysis therefore rests on two assumptions. First, the elasticities of VMT with respect to population and fuel price are assumed to be close to the corresponding elasticities of traffic volumes, given the high correlation between VMT and traffic volumes. Second, the elasticity with respect to employment is assumed to be close to the elasticity with respect to per capita income, because of their high correlation. The elasticity values used in this study are:

• 0.75 for population
• 0.3 for per capita income (applied to employment)
• -0.2 for fuel price
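Putting the formulas and elasticities together, the sequential adjustment for one segment can be sketched as follows. This is illustrative code, not the study's own; the inputs are the new-bridge (Segment 1) values from Table 9, and small differences from the table's reported volumes are due to rounding of the intermediate steps.

```python
import math

def elasticity_adjust(volume, actual, forecast, elasticity):
    change = (actual - forecast) / forecast                       # Change in Forecast Value
    effect = math.exp(elasticity * math.log(1.0 + change)) - 1.0  # Effect on Forecast
    return volume * (1.0 + effect)                                # Adjusted Forecast

# Segment 1 (new Indian River Street Bridge), original forecast 42,900 ADT
v = 42_900.0
v = elasticity_adjust(v, actual=177_966, forecast=198_138, elasticity=0.30)  # employment
v = elasticity_adjust(v, actual=574_564, forecast=542_395, elasticity=0.75)  # population
v = elasticity_adjust(v, actual=3.40, forecast=1.91, elasticity=-0.20)       # fuel price
# v is now roughly 38,650, matching Table 9's adjusted forecast of 38,651
```

Note that the adjustments chain: each step starts from the previous step's adjusted volume, so the order of application does not change the final result (the effects multiply).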

The results of quantifying the effect on the forecast are shown in Table 9.

Table 9 Forecast Adjustment Table based on Elasticities for all Segments (Indian River Bridge Project)

Seg# | Items | Actual Value | Forecast Value | Change required in Forecast Value | Elasticity | Effect on Forecast | Starting Forecast Volume | Adj Forecast Volume | Remaining % Difference for Adj Forecast
1 | Employment | 177,966 | 198,138 | -10% | 0.30 | -3% | 42,900 | 41,540 | -59%
1 | Population | 574,564 | 542,395 | 6% | 0.75 | 4% | 41,540 | 43,375 | -61%
1 | Fuel Price | 3.40 | 1.91 | 78% | (0.20) | -11% | 43,375 | 38,651 | -56%
1 | Original Traffic Forecast | 17,129 | 42,900 | 150% | N/A | N/A | | |
1 | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 42,900 | 38,651 | -56%
2 | Employment | 177,966 | 198,138 | -10% | 0.30 | -3% | 27,600 | 26,725 | -18%
2 | Population | 574,564 | 542,395 | 6% | 0.75 | 4% | 26,725 | 27,905 | -22%
2 | Fuel Price | 3.40 | 1.91 | 78% | (0.20) | -11% | 27,905 | 24,867 | -12%
2 | Original Traffic Forecast | 21,866 | 27,600 | 26% | N/A | N/A | | |
2 | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 27,600 | 24,867 | -12%
3 | Employment | 177,966 | 198,138 | -10% | 0.30 | -3% | 22,300 | 21,593 | -16%
3 | Population | 574,564 | 542,395 | 6% | 0.75 | 4% | 21,593 | 22,547 | -19%
3 | Fuel Price | 3.40 | 1.91 | 78% | (0.20) | -11% | 22,547 | 20,092 | -9%
3 | Original Traffic Forecast | 18,213 | 22,300 | 22% | N/A | N/A | | |
3 | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 22,300 | 20,092 | -9%
4 | Employment | 177,966 | 198,138 | -10% | 0.30 | -3% | 33,000 | 31,954 | -27%
4 | Population | 574,564 | 542,395 | 6% | 0.75 | 4% | 31,954 | 33,365 | -30%
4 | Fuel Price | 3.40 | 1.91 | 78% | (0.20) | -11% | 33,365 | 29,732 | -21%
4 | Original Traffic Forecast | 23,370 | 33,000 | 41% | N/A | N/A | | |
4 | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 33,000 | 29,732 | -21%
5 | Employment | 177,966 | 198,138 | -10% | 0.30 | -3% | 52,800 | 51,126 | -34%
5 | Population | 574,564 | 542,395 | 6% | 0.75 | 4% | 51,126 | 53,384 | -37%
5 | Fuel Price | 3.40 | 1.91 | 78% | (0.20) | -11% | 53,384 | 47,571 | -29%
5 | Original Traffic Forecast | 33,675 | 52,800 | 57% | N/A | N/A | | |
5 | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 52,800 | 47,571 | -29%
6 | Employment | 177,966 | 198,138 | -10% | 0.30 | -3% | 46,400 | 44,929 | -25%

Seg# | Items | Actual Value | Forecast Value | Change required in Forecast Value | Elasticity | Effect on Forecast | Starting Forecast Volume | Adj Forecast Volume | Remaining % Difference for Adj Forecast
6 | Population | 574,564 | 542,395 | 6% | 0.75 | 4% | 44,929 | 46,913 | -28%
6 | Fuel Price | 3.40 | 1.91 | 78% | (0.20) | -11% | 46,913 | 41,805 | -19%
6 | Original Traffic Forecast | 33,675 | 46,400 | 38% | N/A | N/A | | |
6 | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 46,400 | 41,805 | -19%
7 | Employment | 177,966 | 198,138 | -10% | 0.30 | -3% | 34,000 | 32,922 | -13%
7 | Population | 574,564 | 542,395 | 6% | 0.75 | 4% | 32,922 | 34,376 | -17%
7 | Fuel Price | 3.40 | 1.91 | 78% | (0.20) | -11% | 34,376 | 30,633 | -6%
7 | Original Traffic Forecast | 28,678 | 34,000 | 19% | N/A | N/A | | |
7 | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 34,000 | 30,633 | -6%
8 | Employment | 177,966 | 198,138 | -10% | 0.30 | -3% | 9,700 | 9,393 | -25%
8 | Population | 574,564 | 542,395 | 6% | 0.75 | 4% | 9,393 | 9,807 | -29%
8 | Fuel Price | 3.40 | 1.91 | 78% | (0.20) | -11% | 9,807 | 8,739 | -20%
8 | Original Traffic Forecast | 7,010 | 9,700 | 38% | N/A | N/A | | |
8 | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 9,700 | 8,739 | -20%
9 | Employment | 177,966 | 198,138 | -10% | 0.30 | -3% | 23,900 | 23,142 | -5%
9 | Population | 574,564 | 542,395 | 6% | 0.75 | 4% | 23,142 | 24,164 | -9%
9 | Fuel Price | 3.40 | 1.91 | 78% | (0.20) | -11% | 24,164 | 21,533 | 2%
9 | Original Traffic Forecast | 21,883 | 23,900 | 9% | N/A | N/A | | |
9 | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 23,900 | 21,533 | 2%
10 | Employment | 177,966 | 198,138 | -10% | 0.30 | -3% | 17,800 | 17,236 | -44.5%
10 | Population | 574,564 | 542,395 | 6% | 0.75 | 4% | 17,236 | 17,997 | -47%
10 | Fuel Price | 3.40 | 1.91 | 78% | (0.20) | -11% | 17,997 | 16,037 | -40%
10 | Original Traffic Forecast | 9,565 | 17,800 | 86% | N/A | N/A | | |
10 | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 17,800 | 16,037 | -40%
11 | Employment | 177,966 | 198,138 | -10% | 0.30 | -3% | 17,000 | 16,461 | -28%
11 | Population | 574,564 | 542,395 | 6% | 0.75 | 4% | 16,461 | 17,188 | -31%
11 | Fuel Price | 3.40 | 1.91 | 78% | (0.20) | -11% | 17,188 | 15,316 | -23%
11 | Original Traffic Forecast | 11,835 | 17,000 | 44% | N/A | N/A | | |
11 | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 17,000 | 15,316 | -23%
New Bridge | Employment | 177,966 | 198,138 | | 0.30 | | 42,900 | 41,540 | -59%

New Bridge | Population | 574,564 | 542,395 | | 0.75 | | 41,540 | 43,375 | -61%
New Bridge | Fuel Price | 3.40 | 1.91 | | (0.20) | | 43,375 | 38,651 | -56%
New Bridge | Original Traffic Forecast | 17,129 | 42,900 | 150% | N/A | N/A | | |
New Bridge | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 42,900 | 38,651 | -56%
Parallel Bridge | Employment | 177,966 | 198,138 | | 0.30 | | 52,800 | 51,126 | -34%
Parallel Bridge | Population | 574,564 | 542,395 | | 0.75 | | 51,126 | 53,384 | -37%
Parallel Bridge | Fuel Price | 3.40 | 1.91 | | (0.20) | | 53,384 | 47,571 | -29%
Parallel Bridge | Original Traffic Forecast | 33,675 | 52,800 | 57% | N/A | N/A | | |
Parallel Bridge | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 52,800 | 47,571 | -29%
All other links | Employment | 177,966 | 198,138 | | 0.30 | | 231,700 | 224,355 | -22%
All other links | Population | 574,564 | 542,395 | | 0.75 | | 224,355 | 234,263 | -25%
All other links | Fuel Price | 3.40 | 1.91 | | (0.20) | | 234,263 | 208,754 | -16%
All other links | Original Traffic Forecast | 176,095 | 231,700 | 32% | N/A | N/A | | |
All other links | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 231,700 | 208,754 | -16%

The original forecast value was successively adjusted for each of the items (except car ownership) identified as contributing sources of forecasting error for all the segments. Table 9 shows that the employment and fuel price corrections reduced the forecast volumes on most of the links, with fuel price having the largest impact of the three factors. After adjustments for all three factors, the percentage difference between the adjusted forecast and observed volume is reduced by 4 points for the Indian River Street Bridge, by 8 points for the Palm City Bridge, and by 6 points for Willoughby Blvd. The rest of the links showed a 4-10 point improvement in the forecast. Even after these elasticity-based adjustments, most of the links (especially the two competing bridges) still show significantly higher volumes than the observed ADT: the percent difference for the adjusted forecast is -56% for the Indian River Street Bridge and -29% for the Palm City Bridge. In addition to the elasticity-based adjustments, the travel demand model was re-run using corrected exogenous forecasts and project assumptions. The TCRPM II (A25) model, provided by the FDOT District 4 office, is a traditional four-step model developed in TRANPLAN that includes trip generation, trip distribution, mode choice and traffic assignment. However, for this bridge study the mode choice step was disabled, so fuel prices could not be adjusted in the new model run. Generally, fuel prices enter a model through the vehicle operating cost, which is used in the mode choice step; for this model, operating cost was therefore not part of the forecasting methodology. Fuel price might still be partially reflected in the model through global model parameters, so the fuel price correction was applied only through the elasticity-based adjustment (Table 9).
The TCRPM II model was converted to the latest available TRANPLAN version. As a result, the base 2025 forecasts were not exactly the same as the 2025 forecasts reported in the technical memorandum. The population and employment were then adjusted sequentially to obtain the new model volumes. This analysis assumes that car ownership and persons per household did not change from the original estimates: the number of autos and housing units were adjusted in the new run such that the average autos per household and average persons per household were kept the same as the original 2025 estimates.

Table 10: 2025 Original Model Run Socio-Economic Inputs by County

County | Persons/Household | Autos/Household | Persons | Autos
Martin | 2.20 | 1.71 | 227,829 | 176,753
Indian River | 2.40 | 1.58 | 183,736 | 121,163
St Lucie | 2.43 | 1.61 | 321,931 | 212,733

The new 2011 link volumes were calculated by scaling down the link volumes from the newly generated 2025 loaded network using the originally reported 2011-to-2025 volume ratio (from the Technical Memorandum). The resulting 2011 estimates are summarized in Table 11. Overall, the socio-economic corrections had very little impact on the link volumes on any corridor. The differences in the forecasts for the new and competing bridges are still high, at 59% and 34% respectively; compared to the original forecasts, these have decreased by only 3 points each.
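The scaling of the new 2025 loaded-network volumes down to 2011 can be sketched as follows. This is a hypothetical illustration: the function name and the example volumes are ours, not values from the memorandum.

```python
def scale_to_opening_year(new_horizon_volume, orig_opening_volume, orig_horizon_volume):
    # Apply a link's originally reported 2011/2025 volume ratio to the newly
    # generated 2025 volume to estimate the new 2011 (opening year) volume.
    return new_horizon_volume * (orig_opening_volume / orig_horizon_volume)

# e.g., a link originally forecast at 40,000 ADT (2011) and 50,000 ADT (2025),
# with a new 2025 model volume of 48,000 ADT:
new_2011 = scale_to_opening_year(48_000, orig_opening_volume=40_000, orig_horizon_volume=50_000)
```

This approach preserves each link's original growth trajectory, which is why the socio-economic corrections translate almost proportionally into the 2011 estimates.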

Table 11 Adjusted Forecast Table using the Model (Indian River Bridge Project)

Seg# | Items | Old Model Volume | New Model Volume | Observed Volume | Difference (Observed - New) | % Difference from Observed Volume | % Difference from Old Model Volume
1 | All Adjustments | 42,900 | 41,767 | 17,129 | -24,638 | -59% | -3%
2 | | 27,600 | 26,770 | 21,866 | -4,904 | -18% | -3%
3 | | 22,300 | 21,479 | 18,213 | -3,266 | -15% | -4%
4 | | 33,000 | 32,466 | 23,370 | -9,096 | -28% | -2%
5 | | 52,800 | 51,324 | 33,675 | -17,649 | -34% | -3%
6 | | 46,400 | 43,959 | 33,675 | -10,284 | -23% | -5%
7 | | 34,000 | 32,481 | 28,678 | -3,803 | -12% | -4%
8 | | 9,700 | 8,338 | 7,010 | -1,328 | -16% | -14%
9 | | 23,900 | 24,022 | 21,883 | -2,139 | -9% | 1%
10 | | 17,800 | 17,388 | 9,565 | -7,823 | -45% | -2%
11 | | 17,000 | 17,093 | 11,835 | -5,258 | -31% | 1%
New Bridge | | 42,900 | 41,767 | 17,129 | -24,638 | -59% | -3%
Parallel Bridge | | 52,800 | 51,324 | 33,675 | -17,649 | -34% | -3%
Other links | | 231,700 | 223,996 | 176,095 | -47,901 | -21% | -3%

4.6 Discussion

The model forecast for the Indian River Street Bridge (new construction) was over-estimated by about 60%, and the forecast for the Palm City Bridge (the competing route) was over-estimated by 36%. After applying elasticity-based corrections, the percent difference from forecast was reduced to 56% on the new bridge and 29% on the competing bridge. The model re-runs resulted in new forecast volumes that were 59% off for the new bridge and 34% off for the old bridge. Of the two adjustment approaches, the elasticity-based corrections produced somewhat better results than the model re-runs. Fuel price was an influential factor in the elasticity corrections, and including a fuel price effect in the model could have helped reduce the error. However, both methods could only explain part of

the forecasting error. Clearly, there are other factors not accounted for in the model that caused the overall over-estimation of traffic in the study area, especially on Indian River Street. One possible source of error is the forecasting methodology: the opening year traffic is forecasted by scaling the design year model volumes in accordance with the existing counts. Since the new bridge has no existing count information, this procedure may give rise to inaccurate forecasts. However, it is challenging to develop a more robust forecasting methodology for projects where no existing count is available. In addition, a new bridge is an intense change to the infrastructure, since it connects two separate land areas through a single link, leaving few comparable alternative paths. The effects of an economic downturn can influence the travel behavior of a region for years following the recession. For example, Figure 5 shows the clear impact of the 2008 recession on Martin County unemployment. Unemployment peaked from 2010 to 2012, suppressing traffic, and those effects likely carried into 2014. Job losses affect not only work trips but also leisure trips. The recession is also assumed to have changed travelers' value of time, which would imply updated coefficients for highway assignment. Furthermore, a change in job location while maintaining the same housing location alters an individual's route selection, changing travel patterns in the following years. These effects could be studied further by comparing trips from "big data" sources (e.g., Streetlight or AirSage data) before and after the recession years.

Figure 5 Martin County Unemployment Rate Chart

External trips account for 9% of the traffic on the new bridge and only 2% of the traffic on the Palm City Bridge (see Table 12).
This supports the assumption that the new and old bridges are used mainly by the internal population. Further analysis comparing the modeled trip patterns to "Big Data" sources might reveal travel patterns that were insufficiently represented in the model.
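The count-based scaling methodology discussed above (scaling design-year model volumes to match existing counts on each link) can be sketched as below. This is a minimal illustration, not the project's actual procedure; the function name and inputs are hypothetical, and it simply shows why the method breaks down for a new bridge, where no base-year count exists for the ratio.

```python
def opening_year_forecast(design_year_model_volume: float,
                          base_year_count: float,
                          base_year_model_volume: float) -> float:
    """One common form of count-based scaling: factor the design-year
    model volume by the observed/modeled ratio on the same link."""
    return design_year_model_volume * (base_year_count / base_year_model_volume)

# A link where the base-year model over-assigns by ~11% gets scaled down:
adjusted = opening_year_forecast(40_000, 18_000, 20_000)  # 36,000 vehicles/day

# For a brand-new facility, base_year_count does not exist, so the ratio
# must be borrowed from nearby links or assumed, introducing error.
```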

Traffic Forecasting Accuracy Assessment Research Technical Report II-167

Table 12: External Trip Distribution using Both Competing Bridges

                2025 Original Run                 2025 New Run
External Trips  New Bridge  Old Bridge  Total     New Bridge  Old Bridge  Total
I-95            1,563       -           1,563     1,523       6           1,529
Turnpike        1,227       936         2,163     1,298       968         2,266
US 1            1,095       174         1,269     1,080       286         1,366
Total           3,885       1,110       4,995     3,901       1,260       5,161

Another possibility is demographic: Martin, St. Lucie and Indian River Counties show the steepest increases in the median age of the population (see Figure 6), which suggests that many retirees moved into the region. Retirees tend to travel less than working families. This may explain why the traffic forecasts for all links in the study area were overestimated even though the population of St. Lucie County was underestimated by 22%. The travel model did not have a component that adjusted travel rates based on the number of workers in the household, which may have contributed to the over-estimation.

Figure 6: Median Age (in years) in Southeast Florida Counties (Source: BEBR)

Overall, the prevailing macro-economic conditions around the opening year played a major part in the accuracy of the forecasts for this project. Other exogenous factors contributing to the over-estimate may be the increase in fuel prices and the increase in retirees. Neither factor could be replicated precisely in the travel model used for the Indian River Street Bridge. Further analysis using "Big Data" sources could add more insight into the over-estimation of traffic. This study highlights the importance of archiving not only the model runs and forecast reports, but also the validation approach used during model development.

Traffic Forecasting Accuracy Assessment Research Technical Report II-168

5 Central Artery Tunnel, Boston, Massachusetts

5.1 Introduction

The I-93 Central Artery/Tunnel Project (CA/T), popularly known as the Big Dig, is a megaproject that includes the reconstruction of Interstate Highway 93 (I-93) in downtown Boston, the extension of I-90 to Logan International Airport, the construction of two new bridges over the Charles River, six interchanges, and the Rose Kennedy Greenway in the space vacated by the previous elevated I-93 Central Artery in Boston, Massachusetts. The project comprises 7.8 miles of highway construction, about half of it in tunnels.

This report, written in May 2018, assesses the accuracy of traffic forecasts for the CA/T Project in downtown Boston. This deep dive analysis was prepared from the best available resources, including publicly available documents and phone and email correspondence with local staff. The travel demand model data was not available.

This report focuses on ten roadway links along the Central Artery corridor and two roadway links of the tunnel to Logan International Airport, totaling twelve links. Traffic forecasts were prepared in 1987 for the forecast year 2010. All roadways opened in 2005. Traffic counts are available for 1977, 1987, 1999, 2005 and 2010.

This report consists of six sections. Section 5.2 describes the project. Section 5.3 compares the predicted and actual traffic volumes for all roadways in the study area where post-opening traffic counts are available. Section 5.4 enumerates the exogenous forecasts and sources of forecast error for the project, and includes an assessment of the accuracy of the exogenous forecasts. Section 5.5 attempts to identify items discussed in Section 5.4 that are important sources of forecast error and, where possible, to quantify how much the forecast would have changed if the forecasters had had accurate information about each item. Section 5.6 summarizes the findings.
5.2 Project Description

The study area for this deep dive consists of I-93 in downtown Boston and I-90 near the Ted Williams Tunnel, which connects to Logan Airport under Boston Harbor. The 7.8-mile CA/T Project includes:

• Replacement of the deteriorating elevated I-93 Central Artery with an eight-to-ten-lane underground expressway, highlighted by a pair of 1.5-mile tunnels,
• Construction of the new 1.6-mile Ted Williams Tunnel to Logan International Airport,
• A 3.5-mile extension of I-90 to the Ted Williams Tunnel,
• Construction of the Leonard P. Zakim Bunker Hill Memorial Bridge and the Leverett Circle Connector Bridge over the Charles River,
• Construction of six new interchanges, and
• The Rose Kennedy Greenway in the space vacated by the previous elevated I-93 Central Artery.

A highlight of the CA/T Project was the replacement of the elevated I-93 Central Artery with the underground expressway. It was built to reduce traffic congestion and improve mobility and

Traffic Forecasting Accuracy Assessment Research Technical Report II-169

environment in one of the most congested parts of Boston and the U.S., and to establish the groundwork for economic growth.

Figure 7: Central Artery/Tunnel Projects

5.3 Predicted-Actual Comparison of Traffic Forecasts

There are twelve roadway links in the study area: ten roadway segments along the Central Artery corridor (five northbound and five southbound) and two links at the Ted Williams Tunnel (at the same location, one for each direction). Figure 8 shows the locations of all links.

A few sources, including the original Final Environmental Impact Statement (FEIS) in 1985, a Final Supplemental Environmental Impact Statement (FSEIS) in 1991, and a FEIS for the Charles River Crossing in 1993, provided the mid-year 2010 traffic forecasts (no traffic forecasts for the 2005 opening year were available). Traffic forecasts in the documentation, however, were not consistent. The inconsistency was due in part to the change of the base year from 1982 in the 1985 FEIS to 1987 in the 1991 FSEIS.

The Central Transportation Planning Staff (CTPS), the planning staff to the Boston Region Metropolitan Planning Organization (MPO), conducted a backcasting study in October 2014. The study provided traffic forecasts for the year 2010, which were retrieved from the 1991 Final Supplemental

Traffic Forecasting Accuracy Assessment Research Technical Report II-170

Environmental Impact Report (FSEIR)², and 2010 traffic count data. Base year traffic counts were retrieved from the CTPS highway traffic volumes report³. The traffic forecasts were outputs of the traffic-only TranPlan model that the CA/T Project team developed for the CA/T study area in the 1980s.

Figure 8: Traffic Count Links in the Study Area (CA/T Project)

Table 13 lists each of these links with its base year traffic count and its forecast and observed ADT in the forecast year. The table also includes an inaccuracy index for the traffic forecasts, estimated as:

² The EIR is the state document while the EIS is the federal document. The FSEIR and FSEIS were the same for the CA/T Project, but due to different environmental priorities the order of the documents differed; confirmed by email correspondence with Bill Kuttner at CTPS on May 15, 2018.
³ CTPS express highway volumes, I-93/Central Artery Between Columbia Road, Dorchester, and Route 1, Charlestown, ftp://ctps.org/pub/Express_Highway_Volumes/20_I93_Central_Artery.pdf

Traffic Forecasting Accuracy Assessment Research Technical Report II-171 𝑃𝑒𝑟𝑐𝑒𝑛𝑡 𝐷𝑖𝑓𝑓𝑒𝑟𝑒𝑛𝑐𝑒 𝑓𝑟𝑜𝑚 𝐹𝑜𝑟𝑒𝑐𝑎𝑠𝑡 𝑀𝑖𝑑 𝑌𝑒𝑎𝑟 𝐶𝑜𝑢𝑛𝑡 𝑀𝑖𝑑 𝑌𝑒𝑎𝑟 𝐹𝑜𝑟𝑒𝑐𝑎𝑠𝑡𝑀𝑖𝑑 𝑌𝑒𝑎𝑟 𝐹𝑜𝑟𝑒𝑐𝑎𝑠𝑡 Table 13: Comparison of Base Year and Mid-Year Traffic Count and Mid-Year Traffic Forecast (CA/T Project) Site ID Site Segment Base Year Mid-Year Count Mid-Year Forecast Percent Difference from Forecast 1987 2010 2010 1 (M) I-93 Northbound - I-90 On-Ramp to Government Center Off-Ramp 69,000 99,000 100,300 -1% 2 (M) I-93 Northbound - Frontage On-Ramp to I-90 On-Ramp 72,000 77,500 84,600 -8% 3 (M) I-93 Northbound - I-90 Off-Ramp to Mass. Avenue On-Ramp 64,000 52,000 54,000 -4% 4 (M) I-93 Northbound - Southampton to Mass. Avenue 90,000 103,000 113,900 -10% 5 (O) I-93 Northbound - North of Columbia Road 93,000 111,500 124,700 -11% 6 (M) I-93 Southbound - Dewey Square Off-Ramp to barrel split 91,000 91,500 86,300 6% 7 (M) I-93 Southbound - barrel converge to I-90 On-Ramp 89,000 74,500 82,300 -9% 8 (M) I-93 Southbound - Albany On-Ramp to Mass. Avenue Off-Ramp 83,000 115,000 119,300 -4% 9 (O) I-93 Southbound - Southampton to project limit 96,000 114,000 121,600 -6% 10 (O) I-93 Southbound - South of Columbia Road 90,000 108,000 111,300 -3% 11 (N) I-90 Westbound - Ted Williams Tunnel N/A 40,500 47,300 -14% 12 (N) I-90 Eastbound - Ted Williams Tunnel N/A 42,000 51,200 -18% New Links (average traffic) 41,250 49,250 -16% Modified Links (average traffic) 87,500 91,529 -4% Other Links (average traffic) 111,167 119,200 -7% Note: Site 12 in Figure 4 reflects both Site 11 and 12 (M) Modified; (N) New; (O) Other Source: CTPS backcasting report, 2014 Table 13 shows the traffic forecasts were generally accurate, with forecasting error ranging from -11 to +6% except for the two segments at the Ted William Tunnel (#11 and 12). The Ted William Tunnel is the only tolled roadway segment in the CA/T Project and a completely new segment unlike the other segments, which may explain its higher percent difference. 
5.4 Potential Sources of Forecast Error

This section identifies the exogenous forecasts and project assumptions used in the development of the traffic forecasts. Exogenous forecasts are made outside of the immediate traffic forecasting process. Project assumptions are established during project development and serve as the basis for the traffic forecast. Exogenous forecasts and project assumptions are leading sources of forecast error. An example is population and employment forecasts, which are commonly identified as a major source of traffic forecasting error. These forecasts are usually made by outside planning agencies on a regular basis; that is, they are not prepared for any individual project. During project development, these forecasts are revised to match assumptions documented by the project team.

Traffic Forecasting Accuracy Assessment Research Technical Report II-172

Past forecasting research has identified several exogenous forecasts and project assumptions as common sources of forecast error, including:

• Macro-economic conditions (of the region or study area),
• Population and employment forecasts,
• Significant changes in land use,
• Auto fuel prices,
• Tolling (pricing, sensitivity and price levels),
• Auto ownership,
• Changes in technology,
• Travel times within the study area, and
• Duration between the year the forecast was produced and the opening year.

For the CA/T Project, Table 14 lists all exogenous forecasts and project assumptions for which observed data is available. It also includes an assessment of the accuracy of each item.

Table 14: List of Exogenous Forecasts and Project Assumptions (CA/T Project)

Items                                 Quantifiable  Observed Year 2010 Values  Estimated Year 2010 Values  % Difference
Employment*                           Yes           424,000                    538,000                     -21%
Population*                           Yes           210,000                    198,000                     6%
Auto Fuel Price (price per gallon)**  Yes           $2.86                      $2.31                       24%
Macro-economic Conditions             No

Data Sources for Observed Values: * CTPS report; ** BLS, Office of Energy Efficiency & Renewable Energy, and EPA

Only a few exogenous forecasts and project assumptions could be evaluated as potential sources of forecast error for the CA/T Project due to the absence of available data. For the CA/T Project area, the team performed its own population and employment forecasts. For the rest of the Boston Region MPO area, the CA/T team adopted the MPO's socio-demographic forecasts, which used trends in fertility, mortality and other standard demographic metrics.

Table 14 shows that employment forecasts were over-estimated, population forecasts were generally accurate, and auto fuel prices were under-estimated. Information on other typical exogenous forecasts, such as macro-economic conditions, car ownership, travel time and value of time, is unavailable.
For fuel price, a proxy fuel price forecast was estimated by growing the 1991 average gasoline price at the annual inflation rate for each year between 1991 and 2010.
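The proxy construction is simple compounding. The sketch below illustrates the arithmetic only; the base price and inflation rate shown are hypothetical placeholders, not the report's actual inputs.

```python
def proxy_fuel_price(base_price: float, annual_inflation: float, years: int) -> float:
    """Grow a base-year price by a constant annual inflation rate."""
    return base_price * (1.0 + annual_inflation) ** years

# Hypothetical inputs for illustration: a $1.20/gallon 1991 price
# inflated at 3.5% per year over the 19 years from 1991 to 2010.
estimate = proxy_fuel_price(1.20, 0.035, 2010 - 1991)  # about $2.31
```

In practice one would use the observed CPI change for each year rather than a single constant rate.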

Traffic Forecasting Accuracy Assessment Research Technical Report II-173

5.5 Contributing Sources to Forecast Error

Building upon the items discussed in Section 5.4, this section attempts to identify items that are important sources of forecast error (Percent Difference from Forecast) and, where possible, to quantify how much the forecast would have changed if the forecasters had had accurate information about each item. Adjusted forecasts for the critical roadways are computed by applying an elasticity to the relative change between the actual and predicted values for each item in Section 5.4. Only those items which could be quantified and were deemed important for this project were adjusted.

The effect on the forecast is quantified as follows. First, the change in forecast value, the relative difference between the forecast value of the input item and its actual observed value in the opening year, is calculated:

Change in Forecast Value = (Actual Value - Forecast Value) / Forecast Value

Second, the effect on the forecast is calculated by exponentiating the product of the elasticity for the source of error and the natural log of one plus the change in forecast value. The resulting factor is applied to the actual forecast volume to generate an adjusted forecast:

Effect on Forecast = exp(Elasticity × ln(1 + Change in Forecast Value)) - 1

Adjusted Forecast = (1 + Effect on Forecast) × Actual Forecast Volume

This deep dive analysis adopted the best available elasticity values, identified by Ewing et al. (2014) via their cross-sectional and longitudinal models together, and from other transportation literature (Dong et al. 2012; Dunkerley, Rohr, and Daly 2014). It is important to note that the elasticity values by Ewing et al. (2014) relate to Vehicle Miles Traveled (VMT), not traffic volumes. We were not able to find elasticity values specifically for traffic volumes with respect to employment, population and fuel price, nor an elasticity of VMT or traffic volume with respect to employment.
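The successive adjustment chain can be sketched as below, using the elasticity values adopted in this study and the inputs for segment 1 of Table 15. Small rounding differences against the published table (on the order of tens of vehicles) are expected, since the table was produced from slightly more precise inputs.

```python
import math

def change_in_forecast_value(actual: float, forecast: float) -> float:
    """(Actual Value - Forecast Value) / Forecast Value."""
    return (actual - forecast) / forecast

def effect_on_forecast(elasticity: float, change: float) -> float:
    """exp(elasticity * ln(1 + change)) - 1."""
    return math.exp(elasticity * math.log(1.0 + change)) - 1.0

def adjust(forecast_volume: float, elasticity: float,
           actual: float, forecast: float) -> float:
    """Apply one exogenous-input correction to a forecast volume."""
    change = change_in_forecast_value(actual, forecast)
    return (1.0 + effect_on_forecast(elasticity, change)) * forecast_volume

# Segment 1: successively correct the 100,300 ADT forecast for
# employment, population, and fuel price (negative elasticity).
v1 = adjust(100_300, 0.30, 424_000, 538_000)  # employment: ~93,385
v2 = adjust(v1, 0.75, 210_000, 198_000)       # population: ~97,598
v3 = adjust(v2, -0.20, 2.86, 2.31)            # fuel price: ~93,500
```

Each step treats the previous adjusted volume as the new starting point, which is why the order of corrections does not change the final result (the log-space effects simply add).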
To this end, this elasticity study makes two assumptions. First, the elasticity values of VMT with respect to population and fuel price are assumed to be close to the corresponding elasticity values of traffic volumes, given the high correlation between VMT and traffic volumes. Second, the elasticity value for employment is assumed to be close to the one for per capita income, again because of their high correlation. The elasticity values used in this study are:

• 0.75 for population,
• 0.3 for per capita income (employment), and
• 0.2 for fuel price (applied with a negative sign).

The results of quantifying the effect on the forecast are shown in Table 15. The original forecast value was successively adjusted for each of the items identified as contributing sources of forecasting error for all the segments. The final remaining percentage difference after all adjustments is also shown in Table 15, which is sorted from largest to smallest "remaining percent difference from forecast". The segment IDs reflect the sites shown in Figure 8.

Traffic Forecasting Accuracy Assessment Research Technical Report II-174

Table 15: Forecast Adjustment Table based on Elasticities (CA/T Project)

Seg  Items                      Actual Value  Forecast Value  Change in Forecast Value  Elasticity  Effect on Forecast  Actual Forecast Volume  Adj Forecast Volume  Remaining % Difference for Adj Forecast
1    Employment                 424,000       538,000         -21%                      0.30        -7%                 100,300                 93,385               6.01%
1    Population                 210,000       198,000         6%                        0.75        5%                  93,385                  97,598               1.44%
1    Fuel Price                 2.86          2.31            24%                       (0.20)      -4%                 97,598                  93,504               5.88%
1    Original Traffic Forecast  99,000        100,300         1%                        N/A         N/A
1    Adjusted Traffic Forecast  N/A           N/A             N/A                                                       100,300                 93,504               5.88%
2    Employment                 424,000       538,000         -21%                      0.30        -7%                 84,600                  78,767               -1.61%
2    Population                 210,000       198,000         6%                        0.75        5%                  78,767                  82,321               -5.86%
2    Fuel Price                 2.86          2.31            24%                       (0.20)      -4%                 82,321                  78,868               -1.73%
2    Original Traffic Forecast  77,500        84,600          9%                        N/A         N/A
2    Adjusted Traffic Forecast  N/A           N/A             N/A                                                       84,600                  78,868               -1.73%
3    Employment                 424,000       538,000         -21%                      0.30        -7%                 54,000                  50,277               3.43%
3    Population                 210,000       198,000         6%                        0.75        5%                  50,277                  52,545               -1.04%
3    Fuel Price                 2.86          2.31            24%                       (0.20)      -4%                 52,545                  50,341               3.30%
3    Original Traffic Forecast  52,000        54,000          4%                        N/A         N/A
3    Adjusted Traffic Forecast  N/A           N/A             N/A                                                       54,000                  50,341               3.30%
4    Employment                 424,000       538,000         -21%                      0.30        -7%                 113,900                 106,047              -2.87%
4    Population                 210,000       198,000         6%                        0.75        5%                  106,047                 110,832              -7.07%
4    Fuel Price                 2.86          2.31            24%                       (0.20)      -4%                 110,832                 106,182              -3.00%
4    Original Traffic Forecast  103,000       113,900         11%                       N/A         N/A
4    Adjusted Traffic Forecast  N/A           N/A             N/A                                                       113,900                 106,182              -3.00%
5    Employment                 424,000       538,000         -21%                      0.30        -7%                 124,700                 116,102              -3.96%
5    Population                 210,000       198,000         6%                        0.75        5%                  116,102                 121,341              -8.11%
5    Fuel Price                 2.86          2.31            24%                       (0.20)      -4%                 121,341                 116,251              -4.09%
5    Original Traffic Forecast  111,500       124,700         12%                       N/A         N/A
5    Adjusted Traffic Forecast  N/A           N/A             N/A                                                       124,700                 116,251              -4.09%
6    Employment                 424,000       538,000         -21%                      0.30        -7%                 86,300                  80,350               13.88%

Traffic Forecasting Accuracy Assessment Research Technical Report II-175

Table 15 (continued)

6    Population                 210,000  198,000  6%    0.75    5%   80,350   83,975   8.96%
6    Fuel Price                 2.86     2.31     24%   (0.20)  -4%  83,975   80,452   13.73%
6    Original Traffic Forecast  91,500   86,300   -6%   N/A     N/A
6    Adjusted Traffic Forecast  N/A      N/A      N/A                86,300   80,452   13.73%
7    Employment                 424,000  538,000  -21%  0.30    -7%  82,300   76,626   -2.77%
7    Population                 210,000  198,000  6%    0.75    5%   76,626   80,083   -6.97%
7    Fuel Price                 2.86     2.31     24%   (0.20)  -4%  80,083   76,724   -2.90%
7    Original Traffic Forecast  74,500   82,300   10%   N/A     N/A
7    Adjusted Traffic Forecast  N/A      N/A      N/A                82,300   76,724   -2.90%
8    Employment                 424,000  538,000  -21%  0.30    -7%  119,300  111,075  3.53%
8    Population                 210,000  198,000  6%    0.75    5%   111,075  116,086  -0.94%
8    Fuel Price                 2.86     2.31     24%   (0.20)  -4%  116,086  111,216  3.40%
8    Original Traffic Forecast  115,000  119,300  4%    N/A     N/A
8    Adjusted Traffic Forecast  N/A      N/A      N/A                119,300  111,216  3.40%
9    Employment                 424,000  538,000  -21%  0.30    -7%  121,600  113,216  0.69%
9    Population                 210,000  198,000  6%    0.75    5%   113,216  118,324  -3.65%
9    Fuel Price                 2.86     2.31     24%   (0.20)  -4%  118,324  113,361  0.56%
9    Original Traffic Forecast  114,000  121,600  7%    N/A     N/A
9    Adjusted Traffic Forecast  N/A      N/A      N/A                121,600  113,361  0.56%
10   Employment                 424,000  538,000  -21%  0.30    -7%  111,300  103,626  4.22%
10   Population                 210,000  198,000  6%    0.75    5%   103,626  108,302  -0.28%
10   Fuel Price                 2.86     2.31     24%   (0.20)  -4%  108,302  103,759  4.09%
10   Original Traffic Forecast  108,000  111,300  3%    N/A     N/A
10   Adjusted Traffic Forecast  N/A      N/A      N/A                111,300  103,759  4.09%
11   Employment                 424,000  538,000  -21%  0.30    -7%  47,300   44,039   -8.04%
11   Population                 210,000  198,000  6%    0.75    5%   44,039   46,026   -12.01%
11   Fuel Price                 2.86     2.31     24%   (0.20)  -4%  46,026   44,095   -8.15%
11   Original Traffic Forecast  40,500   47,300   17%   N/A     N/A
11   Adjusted Traffic Forecast  N/A      N/A      N/A                47,300   44,095   -8.15%

Traffic Forecasting Accuracy Assessment Research Technical Report II-176

Table 15 (continued)

12   Employment                 424,000  538,000  -21%  0.30    -7%  51,200   47,670   -11.89%
12   Population                 210,000  198,000  6%    0.75    5%   47,670   49,821   -15.70%
12   Fuel Price                 2.86     2.31     24%   (0.20)  -4%  49,821   47,731   -12.01%
12   Original Traffic Forecast  42,000   51,200   22%   N/A     N/A
12   Adjusted Traffic Forecast  N/A      N/A      N/A                51,200   47,731   -12.01%

New Extension
     Employment                 424,000  538,000  -21%  0.30    -7%  98,500   91,709   -10.04%
     Population                 210,000  198,000  6%    0.75    5%   91,709   95,847   -13.93%
     Fuel Price                 2.86     2.31     24%   (0.20)  -4%  95,847   91,826   -10.16%
     Original Traffic Forecast  82,500   98,500   19%   N/A     N/A
     Adjusted Traffic Forecast  N/A      N/A      N/A                98,500   91,826   -10.16%

Modified Links
     Employment                 424,000  538,000  -21%  0.30    -7%  640,700  596,527  2.68%
     Population                 210,000  198,000  6%    0.75    5%   596,527  623,441  -1.75%
     Fuel Price                 2.86     2.31     24%   (0.20)  -4%  623,441  597,287  2.55%
     Original Traffic Forecast  612,500  640,700  5%    N/A     N/A
     Adjusted Traffic Forecast  N/A      N/A      N/A                640,700  597,287  2.55%

Other Links
     Employment                 424,000  538,000  -21%  0.30    -7%  357,600  332,945  0.17%
     Population                 210,000  198,000  6%    0.75    5%   332,945  347,967  -4.16%
     Fuel Price                 2.86     2.31     24%   (0.20)  -4%  347,967  333,370  0.04%
     Original Traffic Forecast  333,500  357,600  7%    N/A     N/A
     Adjusted Traffic Forecast  N/A      N/A      N/A                357,600  333,370  0.04%

Traffic Forecasting Accuracy Assessment Research Technical Report II-177

In general, the adjustments resulted in improved traffic forecast accuracy. Nine of the twelve study roadways experienced a decrease in the percent difference from forecast; that is, the traffic forecasts would have been more accurate if the exogenous factors had been forecast accurately.

5.6 Discussion

The CA/T Project replaced the deteriorating elevated I-93 Central Artery with a pair of 1.5-mile underground expressway tunnels, built the new 1.6-mile Ted Williams Tunnel to Logan International Airport, extended I-90 to the Ted Williams Tunnel, and built two new bridges over the Charles River, six interchanges, and the Rose Kennedy Greenway in the space vacated by the previous elevated Central Artery.

The CTPS backcasting report showed that traffic on roadways in the CA/T Project was generally overestimated, by 1 to 22 percent, with one roadway segment under-estimated by 6 percent. Overall, traffic forecasting accuracy improved after correcting the exogenous forecasts and project assumptions: nine of twelve roadway segments experienced a reduced percent difference from forecast as a result.

It should be noted that there is abundant documentation on the CA/T Project, but virtually all of it is associated with project management, construction, project finance and economic impacts. It is unknown whether risk and uncertainty were considered during the project, due to the absence of documentation on the subject. For future forecasting efforts, it is suggested that a copy of the forecasting documentation and assumptions be archived along with the travel model files used to generate the forecasts.

Traffic Forecasting Accuracy Assessment Research Technical Report II-178

6 Cynthiana Bypass, Cynthiana, Kentucky

6.1 Introduction

The Cynthiana Bypass is a two-lane state highway bypass located in Cynthiana, Kentucky. This report, written in June 2018, assesses the reliability and accuracy of traffic forecasts for the Cynthiana Bypass. Traffic forecasts for the project were prepared in 1994 for a 2010 opening year. (Traffic forecasts were also provided for 2025 in 2003 using growth rates and diversion assumptions – ref G.) The project opened in about 2012. Traffic counts are available for 2014 and later, post-opening.

Section 6.2 describes the project. Section 6.3 compares the predicted and actual traffic volumes for all roadways in the study area where post-opening traffic counts are available. Section 6.4 enumerates the exogenous forecasts and sources of forecast error for the project, and includes an assessment of the accuracy of the exogenous forecasts. Section 6.5 attempts to identify items discussed in Section 6.4 that are important sources of forecast error and, where possible, to quantify how much the forecast would have changed if the forecasters had had accurate information about each item. Section 6.6 summarizes the findings.

6.2 Project Description

The study area included the Cynthiana city limits and immediate environs in Harrison County, Kentucky. The project created a bypass to the west of the city, starting at a southern terminus where US 62S and US 27S meet and extending northward along Main Street/US 27N to a point north of the city. The bypass is 3.6 miles long and includes a new bridge across the South Fork of the Licking River, north of the city.

Traffic Forecasting Accuracy Assessment Research Technical Report II-179 Figure 9: Project Corridor (Cynthiana Bypass)

Traffic Forecasting Accuracy Assessment Research Technical Report II-180

6.3 Predicted-Actual Comparison of Traffic Forecasts

The Kentucky Transportation Cabinet (KYTC) and their consultants provided travel demand model files and some documentation for this effort. These model runs and scattered memos and documentation have been used to analyze and report the predicted traffic on the project.

An urban area transportation study was first conducted for Cynthiana in 1970. The study area included an area roughly 3.5 miles in diameter centered on downtown, including incorporated Cynthiana and some adjacent areas. In the late 1980s, the 1970 study and plans were updated in view of "the dynamic nature of industrial, commercial and residential development which (is) occurring in the Cynthiana area …". A 1989 Cynthiana Urban Area Transportation Study Technical Document describes the development of a travel demand model with base year 1988 and future year 2010, but does not address the proposed bypass. However, a loaded TransCAD network, derived from an original MINUTP model, along with various MINUTP data input files for a 1994 base and 2020 future year (including the bypass), was made available to the project team. Documentation of the project forecasts, by contrast, was unavailable at the time of writing, so aspects related to project costs, the exact opening year, the importance of the project to the local community and other characteristics could not be determined. Using historical Google Earth images, the opening year was identified to be between 2010, when construction had not yet begun, and 2014, when the project was constructed and open to the public.

Figure 9 shows the project corridor and the 10 segments chosen for model assessment. Traffic growth factors for interim years were made available and used to estimate opening year (2014) model traffic (backed down from 2020).
Socioeconomic data for the opening year were available in the 1989 document for the 1988 base and 2010 forecast, and the 2010 forecast socioeconomic data can be compared to actual census data from 2010 for the pre-bypass case.

Several years are of interest in evaluating the performance of the Cynthiana travel model(s) (see Table 16):

• 1988 – base year of the original model; calibration/validation documentation available; used to forecast 2010 traffic without the bypass; actual counts and documented actual and forecast SE data available
• 1994 – base year of the updated model; used to forecast 2020 traffic with the bypass; actual counts available; loaded model files available; limited SE data available in model files; no documentation available
• 2010 – Census year and original forecast year without the bypass; actual counts available
• 2014 – first full year with the bypass constructed; first year with actual counts available for the network including the bypass
• 2020 – forecast year for the updated model; estimates can be growth-factored back to 2014 for comparison

Traffic Forecasting Accuracy Assessment Research Technical Report II-181

Table 16: Availability of Data for Cynthiana Bypass Project

Year              Base Model  Model Format         Calib./Valid. Docs?  Forecast Model   Counts (Network)  Counts (Bypass)  Pop/SE Data (Actual)  Pop/SE Data (Forecast)  Pop/SE Docs?
1988              YES         MINUTP               YES                  2010             YES               -                YES                   YES (2010)              YES
1994              YES         MINUTP               NO                   2020             YES               -                NO                    NO                      NO
2010 (1988 base)  1988        MINUTP               -                    YES (no bypass)  YES               -                YES                   -                       -
2014              -           -                    -                    YES*             YES               YES              NO                    NO                      -
2020 (1994 base)  1994        MINUTP and TransCAD  -                    YES (w/ bypass)  -                 -                -                     -                       -
                              (loaded networks)
* Estimated by backing down 2020 model forecasts

Accuracy of Employment Forecast

Employment in the study area was estimated at 4,410 in 1988. An inspection of aerial photography around Cynthiana suggests that 95 percent or more of area employment is within the city limits. Using this 95% figure, 1988 city-limit employment was estimated at about 4,190. The actual census employment within the city limits in 2010 was 3,905. Therefore, the average annual growth rate in employment over the 22-year period was -0.32 percent. The original modelers, however, assumed employment in the study area would grow to 4,850 by 2010, a growth rate of +0.43 percent. The model therefore over-predicted employment by some 18 percent.

Accuracy of Population Forecast

The 1980 Census population for Cynthiana proper was 5,881. In 1988, the population of Cynthiana proper was 6,016 and the study area population was estimated to be 7,685; therefore, approximately 78.3% of the study area population was inside the city limits in 1988. When the future year 2010 model was built, the population of the Cynthiana study area was projected to grow to 8,455, a 10% increase over 1988 levels. The actual 2010 Census population for Cynthiana proper was 6,402, a 6.4% increase over 1988 levels. Population was therefore overestimated by approximately 3%.
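The growth-rate arithmetic above can be reproduced with a compound average annual growth rate. A minimal sketch (function name is ours), using the employment figures from the text:

```python
def annual_growth_rate(start: float, end: float, years: int) -> float:
    """Compound average annual growth rate between two observations."""
    return (end / start) ** (1.0 / years) - 1.0

# City-limit employment, 1988 estimate to 2010 census (22 years):
actual_rate = annual_growth_rate(4190, 3905, 22)    # about -0.32% per year

# Study-area employment assumed by the original modelers, 4,410 -> 4,850:
assumed_rate = annual_growth_rate(4410, 4850, 22)   # about +0.43% per year

# Over-prediction of 2010 employment: 4,850 assumed vs. 4,111 observed
# (3,905 / 0.95 grossed back up to the study area):
overprediction = 4850 / 4111 - 1                    # about 18%
```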

Traffic Forecasting Accuracy Assessment Research Technical Report II-182

Accuracy of External Traffic Forecast

The original model documentation indicated that a growth factor of 2.5% per year was to be used for future external traffic. A reconstructed TransCAD model was run for the 2020 forecast year. Traffic volumes from the 2020 models (original and recreated) were reduced by 2.5% per year for six years and compared to 2014 (or 2014 estimated) ground counts. Table 17 shows the comparison of external trips at each cordon station between the 2014 factored forecasts and the 2014 counts. In general, the model overestimated external trips at most cordon stations.

Table 17: External Forecasts and Percent Differences

Highway  EE 1988  EE 2014 (2.5% growth)  2014 EI Prod.  2014 Tot. Vol.  2014 Count  Diff    %diff
356 W    185      352                    899            1,597           612         -985    -62%
36 N     440      836                    2,816          4,490           3,173       -1,317  -29%
27 N     902      1,714                  3,235          6,663           3,927       -2,736  -41%
62 N     372      707                    2,096          3,510           3,476       -34     -1%
392 N    147      279                    1,198          1,759           1,250       -509    -29%
32 E     253      481                    1,897          2,856           3,188       332     12%
982 S    77       146                    799            1,089           1,925       836     77%
27 S     850      1,615                  2,816          6,038           4,740       -1,298  -21%
62 S     714      1,357                  3,095          5,802           7,449       1,647   28%
32 W     158      300                    1,118          1,718           1,666       -52     -3%

Accuracy of External-External Estimates

The original model forecast total EE trips in 2010 to be 9,031. By applying the same percentage of EE trips at each cordon station used in the original model to 2014 ground counts, an actual figure of 5,123 EE trips was estimated. Therefore, the original model overestimated external-external trips by some 76 percent ((9,031 - 5,123) / 5,123).
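Both calculations above are simple arithmetic and can be checked directly. The sketch below reproduces the 2.5%-per-year growth of the 1988 EE volume at station 356 W (26 years from 1988 to 2014, matching the Table 17 value) and the EE over-estimation percentage:

```python
def grow(volume: float, annual_rate: float, years: int) -> float:
    """Apply a compound annual growth factor to a traffic volume."""
    return volume * (1.0 + annual_rate) ** years

# Station 356 W: 185 EE trips in 1988, grown at 2.5%/year for 26 years.
ee_2014 = grow(185, 0.025, 2014 - 1988)   # rounds to 352, as in Table 17

# EE over-estimation: 9,031 forecast vs. 5,123 count-based estimate.
overestimate = (9031 - 5123) / 5123       # about 0.76, i.e. 76 percent
```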

Traffic Forecasting Accuracy Assessment Research Technical Report II-183

Figure 10: Cynthiana Study Area Link Volumes

Model Runs

The model runs included loaded highway networks for both the base and opening years. As the original models were written in MINUTP and that program was not available to the team, the original model data were used to recreate the model in TransCAD. A 2020 TransCAD version of the model was provided to the research team by KYTC (they had already converted it). However, only a loaded network was available; no TAZ map or trip generation data were included in the TransCAD version provided. Therefore, the model team used the original data from the MINUTP text files and the limited model documentation to recreate 1988 (base year) and 2010 (opening year) variants of the model.

For the opening year forecasts to be consistent with the additional model runs made to quantify sources of forecasting error, as described in Section 6.5, the opening year scenario run was remade

Traffic Forecasting Accuracy Assessment Research Technical Report II-184

using TransCAD. As no base year loaded network files or data were available, a new 2020 model was created using the original forecast data for 2010, growth-factored up to 2020 using the original assumption of 2.5% growth. The loaded network generated from this new model run was used to report the link-level opening year forecasts. Also, as the bypass did not open until about 2012, all model forecasts and ground counts were adjusted to 2014 for comparison. It should be noted that there was very little difference (around 3%) in model volumes between the new 2020 model run and the original model run provided by KYTC.

A total of 4 links with an Average Daily Traffic (ADT) count available were identified in the project corridor. Table 18 lists each of these links with its forecast and observed ADT. The table includes an inaccuracy index for the traffic forecasts, estimated as:

Percent Difference from Forecast = (Opening Year Count - Opening Year Forecast) / Opening Year Forecast

These four segments constitute the Cynthiana Bypass project. The links are also identified in Figure 9.

Table 18: Traffic Volume Accuracy Assessment (Cynthiana Bypass Project)

Seg#  Project Segment and Direction                           Opening Year Count (2014)  Opening Year Forecast (factored to 2014)  Percent Difference from Forecast
A     US 62 to KY 32, 2 Lanes, Item No. 6-119.02, Source: V   2,851                      4,372                                     -34.79%
B     KY 32 to KY 356, 2 Lanes, Item No. 6-119.02             3,630                      5,152                                     -29.54%
C     KY 356 to KY 36, 2 Lanes, Item No. 6-119.02             3,039                      4,466                                     -31.95%
D     KY 36 to US 27                                          2,975                      3,091                                     -3.75%

6.4 Potential Sources of Forecast Error

This section identifies the exogenous forecasts and project assumptions used in the development of the traffic forecasts. Exogenous forecasts are made outside of the immediate traffic forecasting process. Project assumptions are established during project development and serve as the basis for the traffic forecast.
Exogenous forecasts and project assumptions are leading sources of forecast error. One example is population and employment forecasts, which are commonly identified as a major source of traffic forecasting error. These forecasts are usually made by outside planning agencies on a regular basis; that is, they are not prepared for any individual project. During project development, these forecasts are revised to match assumptions documented by the project team. In this example, population and employment forecasts are both an exogenous forecast and a project assumption. Table 19 lists all exogenous forecasts and project assumptions for which observed data are available. It also includes an assessment of the accuracy of each item.

Table 19: Input Accuracy Assessment Table (Cynthiana Bypass Project)

Items | Quantifiable | Observed Opening Year Values* | Estimated Opening Year Values** | % Difference
Employment* | Yes | 4,111 | 4,850 | -15%
Population** | Yes | 8,179 | 8,455 | -3%
External Traffic | Yes | 5,123 | 9,031 | -43%

Data Sources: * American FactFinder plus assumptions of city/area split (78.3% pop, 95% empl); ** Cynthiana UATS Technical Document

The model documentation, in particular the Cynthiana Urban Area Transportation Study Technical Document, forecasts the traffic for the year 2010. The population and employment statistics also are estimated for the year 2010. Although the project was not completed and opened to traffic until June 12, 2013, the demographic information was taken for 2010 in order to be consistent with the model assumptions.

Reviewing the model specification, one of the key absences noticed is the assignment of friction factors. According to the technical report, "because an internal origin-destination survey was not made in Cynthiana, definite trip table frequency information for base year internal trips was not available. Therefore travel time factors (friction factors) could not be calculated. For the Cynthiana study, the factor for each trip length interval was initially given a value of one (1). This means that the trip length of travel time does not affect the trip making decision." In the Future Year (2010) Model Development, the travel demand is considered directly related to the same factors that influence existing travel demand, i.e., population and employment.

6.5 Contributing Sources to Forecast Error

Building upon the items discussed in Section 6.4, this section attempts to identify items that are important sources of forecast error and, if so, to quantify how much the forecast would have changed if the forecasters had accurate information about each item.
Adjusted forecasts for the critical roadways are computed by applying an elasticity to the relative change between the actual and predicted values for each item in Section 6.4. The effect on the forecast can be quantified in this way. First, the change in forecast value, a delta between the forecast value of the item and its actual observed value in the opening year, is calculated:

Change in Forecast Value = (Actual Value - Forecast Value) / Forecast Value

4 https://www.cynthianademocrat.com/content/cynthiana-bypass-round-about-open-wednesday

Second, the effect on the forecast is calculated by exponentiating the product of an elasticity for the error source and the natural log of one plus the change in forecast value. This factor is applied to the actual forecast volume to generate an adjusted forecast:

Effect on Forecast = exp(Elasticity x ln(1 + Change in Forecast Value)) - 1

Adjusted Forecast = (1 + Effect on Forecast) x Actual Forecast Volume

This deep dive analysis adopted the best available elasticity values, identified by Ewing et al. (2014) via their cross-sectional and longitudinal models together, and from other transportation literature (Dong et al. 2012; Dunkerley, Rohr, and Daly 2014). It is important to note that the elasticity values from Ewing et al. (2014) relate to Vehicle Miles Traveled (VMT), not traffic volumes. To the best of our knowledge and literature review, there is no literature investigating elasticity values for traffic volumes with respect to employment, population, and fuel price. Nor does any literature discuss the elasticity of VMT or traffic volume with respect to employment. To this end, this elasticity study makes two assumptions. First, the elasticity of VMT with respect to population is close to the elasticity of traffic volumes, given the high correlation between VMT and traffic volumes. Second, the elasticity value regarding employment is close to the one for per capita income because of their high correlation. The results of quantifying the effect on the forecast are shown in Table 20.
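The two-step adjustment above can be sketched as follows. The values used in the example come from Tables 19 and 20 (segment A, employment, elasticity 1.20); the function names are ours.

```python
import math

def effect_on_forecast(actual, forecast, elasticity):
    """exp(elasticity * ln(1 + change)) - 1, where change is the relative input error."""
    change = (actual - forecast) / forecast  # Change in Forecast Value
    return math.exp(elasticity * math.log(1.0 + change)) - 1.0

def adjusted_forecast(volume, actual, forecast, elasticity):
    """(1 + Effect on Forecast) * Actual Forecast Volume."""
    return (1.0 + effect_on_forecast(actual, forecast, elasticity)) * volume

# Segment A employment adjustment: observed 4,111 vs. estimated 4,850, elasticity 1.20
effect = effect_on_forecast(4111, 4850, 1.20)    # about -18%, as in Table 20
adj = adjusted_forecast(4372, 4111, 4850, 1.20)  # about 3,587, vs. 3,585 in Table 20
```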

Table 20: Forecast Adjustment Table based on Elasticities for all Segments (Cynthiana Bypass Project)

Seg# | Items | Actual Value | Forecast Value | Change in Forecast Value | Elasticity | Effect on Forecast | Actual Forecast Volume | Adj Forecast Volume | Remaining % Difference for Adj Forecast
A | Employment | 4,850 | 4,111 | -15% | 1.20 | -18% | 4,372 | 3,585 | 15%
A | Population/Household | 8,455 | 8,179 | -3% | (0.10) | 0% | 3,585 | 3,597 | 127%
A | Original Traffic Forecast | 4,372 | 2,857 | -35% | N/A | N/A | | |
A | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 4,372 | 3,597 | -21%
B | Employment | 4,850 | 4,111 | -15% | (1.40) | 26% | 5,152 | 6,494 | -37%
B | Population/Household | 8,455 | 8,179 | -3% | (2.70) | 9% | 6,494 | 7,102 | 15%
B | Original Traffic Forecast | 5,152 | 3,630 | -30% | N/A | N/A | | |
B | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 5,152 | 7,102 | -49%
C | Employment | 4,850 | 4,111 | -15% | (4.00) | 94% | 4,466 | 8,652 | -52%
C | Population/Household | 8,455 | 8,179 | -3% | (5.30) | 19% | 8,652 | 10,315 | -21%
C | Original Traffic Forecast | 4,466 | 3,039 | -32% | N/A | N/A | | |
C | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 4,466 | 10,315 | -71%
D | Employment | 4,850 | 4,111 | -15% | (6.60) | 198% | 3,091 | 9,203 | -55%
D | Population/Household | 8,455 | 8,179 | -3% | (7.90) | 30% | 9,203 | 11,962 | -32%
D | Original Traffic Forecast | 3,091 | 2,975 | -4% | N/A | N/A | | |
D | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 3,091 | 11,962 | -75%

The original forecast value was successively adjusted for each of the items identified as contributing sources of forecasting error, and the final remaining percentage difference after all adjustments is shown in the table. Table 20 shows the detailed elasticity-based adjustments made for all the segments. The most significant impact on traffic volumes was due to overestimated external traffic. Overoptimistic employment projections also contributed to model error. Using elasticity-based adjustments, the forecasting percent difference improved on all segments of the bypass, though not significantly so.

In addition to the elasticity-based adjustment, the travel model used to produce the traffic forecasts was re-run using corrected exogenous forecasts and project assumptions. The same items identified in Section 6.4 were adjusted sequentially to obtain new model volumes. The results of this process for all segments are shown in Table 21.

Correcting for employment

Adjusting for the employment overestimate, the model was re-run with an employment correction factor of 0.8201 (1-(4608-3905)/3905). With this correction, the overall model RMSE decreased from 42 to 40, and from 41 to 37 on the bypass segments. Thus, improving the employment forecast provided a modest improvement to the model.

Correcting for population

Adjusting for the population overestimate, the model was re-run with a correction factor of 0.9662 (1-(6618-6402)/6402). However, making this correction actually increased the overall model RMSE from 42 to 44, and the RMSE on the bypass segments would have increased from 41 to 44 had the population forecast been accurate. Clearly, population forecasting was not the problem.
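The correction factors quoted above scale each erroneous input down to its observed total before the model is re-run. A minimal sketch, using the values given in the text (the function name is ours):

```python
def correction_factor(forecast_total, observed_total):
    """Scale factor of the form 1 - (forecast - observed) / observed."""
    return 1.0 - (forecast_total - observed_total) / observed_total

employment_factor = correction_factor(4608, 3905)  # ~0.820, applied to employment
population_factor = correction_factor(6618, 6402)  # ~0.966, applied to population
external_factor = 5123 / 9031                      # ~0.567, applied to the EE matrix
```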
Correcting for population and employment

Improving both population and employment data had the same effect as improving the employment forecast alone: the overall model RMSE decreased from 42 to 40, and the RMSE on the bypass segments decreased from 41 to 37. Overall, the employment and employment/population adjusted forecasts using the model were very similar to those obtained from the elasticity-based adjustments, especially on the bypass.

Correcting for external trips

The original model overestimated external-external (EE) trips by some 43 percent ((5123-9031)/9031), so a correction factor of 0.5672 (5123/9031) was applied to the original EE matrix to re-run the model. Furthermore, the original model held internal attractions constant in the production-attraction (PA) balancing for external-internal (EI) trips. Because we usually have more confidence in cordon counts than in the sum of calculated internal attractions for internal-external (IE) trips, productions at the external stations were held constant in the revised model. The model was re-run with the improved external forecasts. This improvement alone resulted in

decreasing the overall model RMSE from 42 to 39, and impressively decreased the bypass link RMSE from 41 to 17.

Correcting for population, employment and external trips

Lastly, the model was re-run with improved population, employment and external forecasts. Together, these improvements resulted in decreases similar to the improvement observed in improving the external estimates alone: the overall model RMSE decreased from 42 to 37, and the bypass links RMSE again decreased from 41 to 17.

Table 21: Forecast Adjustment by Model (Cynthiana Bypass Project)

Seg# | Items | Old Model Volume | Elasticity Adjusted Value | New Model Volume | Observed Volume | Difference (Observed-New) | Elasticity % Diff from Observed | Model % Diff from Observed Volume | Model % Diff from Old Model
A | Employment Adjustments Only | 4372 | 3585 | 4147 | 2851 | -1296 | -20.47% | -31.25% | 5.4%
B | | 5152 | 6494 | 4965 | 3630 | -1335 | -44.10% | -26.89% | 3.8%
C | | 4466 | 8652 | 4440 | 3039 | -1401 | -64.88% | -31.55% | 0.6%
D | | 3091 | 9203 | 3091 | 2975 | -116 | -67.67% | -3.75% | 0.0%
Project Total | | 4270.25 | 6983.5 | 4160.75 | 3123.75 | -1037 | -55.27% | -24.92% | 2.6%
A | Population and Employment Adjustments | 4372 | 3597 | 4140 | 2851 | -1289 | -20.74% | -31.14% | 5.6%
B | | 5152 | 7102 | 4951 | 3630 | -1321 | -48.89% | -26.68% | 4.1%
C | | 4466 | 10315 | 4429 | 3039 | -1390 | -70.54% | -31.38% | 0.8%
D | | 3091 | 11962 | 3087 | 2975 | -112 | -75.13% | -3.63% | 0.1%
Project Total | | 4270.25 | 8244 | 4151.75 | 3123.75 | -1028 | -62.11% | -24.76% | 2.9%
A | External Adjustments Only | 4372 | Na | 2963 | 2851 | -112 | | -3.78% | 47.6%
B | | 5152 | Na | 3714 | 3630 | -84 | | -2.26% | 38.7%
C | | 4466 | Na | 3090 | 3039 | -51 | | -1.65% | 44.5%
D | | 3091 | Na | 1952 | 2975 | 1023 | | 52.41% | 58.4%
Project Total | | 4270.25 | | 2929.75 | 3123.75 | 194 | | 6.62% | 45.8%
A | All Adjustments | 4372 | Na | 2823 | 2851 | 28 | | 0.99% | 54.9%
B | | 5152 | Na | 3640 | 3630 | -10 | | -0.27% | 41.5%
C | | 4466 | Na | 3150 | 3039 | -111 | | -3.52% | 41.8%
D | | 3091 | Na | 1988 | 2975 | 987 | | 49.65% | 55.5%
Project Total | | 4270.25 | | 2900.25 | 3123.75 | 223.5 | | 7.71% | 47.2%
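The RMSE statistics quoted in this section compare model link volumes with counts. The report does not state its exact formula, so the sketch below assumes the percent RMSE commonly used in travel model validation; under that assumption it reproduces the quoted bypass value of 41 for the original model.

```python
import math

def percent_rmse(model_volumes, counts):
    """Root-mean-square error of model volumes vs. counts, as a percent of the mean count."""
    n = len(counts)
    rmse = math.sqrt(sum((m - c) ** 2 for m, c in zip(model_volumes, counts)) / n)
    return 100.0 * rmse / (sum(counts) / n)

# Original model volumes vs. 2014 counts on the four bypass segments (Table 18):
bypass = percent_rmse([4372, 5152, 4466, 3091], [2851, 3630, 3039, 2975])  # ~41
```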

6.6 Discussion

The traffic forecasts on the Cynthiana Bypass were generally over-estimated by about 45%, with the notable exception of the northernmost section, which was estimated to within 4% of observed values. As would be expected for a bypass project, the biggest source of error in the model forecast was the overestimated growth factor (2.5% per year) applied to external counts. Three out of four segments of the project showed a significant improvement after accounting for the corrected external forecasts.

The project opened in 2012, shortly after the peak of the economic recession and during a time of high gas prices. As a result, over-estimation of employment in the opening year was a contributor to the forecasting errors in this project. Population forecasts were very similar to the observed values and did not contribute to the forecasting error (in fact, correcting for actual population alone made the forecasts a bit worse). Risk and uncertainty were not explicitly considered in the traffic forecasts.

Project documentation was not archived by the project owners. Fortunately, a copy of the documentation was obtained from the consultant, who happened to keep a paper copy in her personal files (she had long since left the consulting company that was contracted to do the study). For future forecasting efforts, it is suggested that copies of the project and traffic forecasting documentation be saved by the project owners (in this case, the state highway authority) along with the actual models used to generate the forecasts.

7 South Bay Expressway, San Diego, California

7.1 Introduction

South Bay Expressway (SBX) is a 9.2-mile tolled highway segment of SR 125 in eastern San Diego, CA. SBX generally runs north-south from SR 54 near Sweetwater Reservoir to SR 905/SR 11 in Otay Mesa, CA, near the US-Mexico border. A 3.2-mile untolled link to the existing freeway network at the northern end was publicly funded and built with the construction of the private toll road. Originally developed as a public-private partnership, SBX opened in November 2007. Initial traffic and revenue were below expectations, and the company was involved in ongoing litigation with contractors. In March 2010 the operator filed for bankruptcy. In July 2011, SANDAG agreed to purchase the lease from the operator, taking control of the remainder of the 35-year lease in November 2011.

This report, written in July 2018, assesses the utility, reliability and accuracy of traffic forecasts for the South Bay Expressway. Traffic forecasts for the project were prepared in 2002 for the 2006, 2010 and 2020 forecast years. The project opened in 2007, although observed traffic count data are unavailable for the early years of the project. Where observed traffic data are available, they are in various formats, forms, and locations, prohibiting reasonable comparisons. The general narrative of the early years of the SBX is that traffic and revenue were well below forecasts, which, in addition to ongoing litigation with contractors, caused the operator to file for bankruptcy.

Section 7.2 of this report describes the project. Section 7.3 describes the traffic forecast methodology. Section 7.4 enumerates the exogenous forecasts and sources of forecast error for the project.

7.2 Project Description

The original study area boundary was essentially the entire San Diego region. The South Bay Expressway is the easternmost north-south expressway in San Diego.
SBX was originally developed to accommodate the rapidly growing residential and industrial South Bay area and to provide improved access to the US-Mexico border crossing facility at Otay Mesa. The original South Bay Expressway analysis was for the first toll facility in San Diego. South Bay Expressway was developed as permitted by California AB 680, passed by the California legislature in 1989. Under the agreement, the concessionaire developed the project and constructed the road in return for operating and maintaining the facility and collecting toll revenue for 35 years, until 2042. As per the agreement, the State of California owns the facility but leases it to the concessionaire. After the original concessionaire declared bankruptcy, SANDAG purchased the concession in December 2011 and will retain tolling control until the facility reverts to Caltrans in 2042. Rather than maximizing revenue on the facility, SANDAG sets the toll prices to relieve congestion on I-5 and I-805.

6 https://www.transportation.gov/tifia/financed-projects/south-bay-expressway

A map of the corridor and current toll rates are shown in Figure 11.

Figure 11: Project Study Area (South Bay Expressway)

7.3 Traffic Forecast Methodology

The original traffic and revenue models were not available for detailed investigation and comparison. The comparison is based on reviews of the traffic and revenue forecasting report and a technical due diligence analysis of that report. The general process of the forecasting is consistent with established practice, although, as discussed in the next section, the model inputs and assumptions used to develop the forecasts were not appropriate.

The project development traffic and revenue analysis models were based on the "Series 9" SANDAG Regional Travel Demand Forecasting Model, which was also not available. Like most corridor planning and forecasting analyses, the traffic and revenue forecasting process utilized the elements of the model to develop forecasts for future years, in this case for 2005, 2010, 2015 and 2020. Trip tables from the base SANDAG process were modified using information from new surveys and border crossing information. The traffic forecasts were developed with the TRANPLAN equilibrium highway assignment procedure. The highway assignment developed a free path and a tolled path (where appropriate) between all origins and destinations and calculated the percentage of trips using the tolled path based on the value of time of the travelers. This remains accepted forecasting methodology for individual toll facilities. The model process followed accepted methodology of modeling the AM and PM peak periods as well as the interpeak period to develop traffic profiles in congested and uncongested periods. The Technical Due Diligence report concluded that the "analyses are based on generally accepted industry standards and use reasonable modeling techniques" (Louis Berger Group).
The report cautioned about socio-economic data growth levels and competing free network improvements.

7.4 Potential Sources of Forecast Error

This section identifies the exogenous forecasts and project assumptions used in the development of the traffic forecasts which contributed to the forecast error. Exogenous forecasts are made outside of the immediate traffic forecasting process. Project assumptions are established during project development and serve as the basis for the traffic and revenue forecast. Exogenous forecasts and project assumptions are potential sources of forecast error. In this case there is no ability to test the sensitivity of the input exogenous forecasts or assumptions, or even to compare them to actual traffic and revenue values. However, analysis of several rounds of forecasting reports and limited observed traffic and revenue characteristics, and comparison of input assumptions to the observed values, leads us to surmise the sources of forecast error.

Toll Rates

The actual toll rates applied to SBX were higher than projected in the project development stage. Unplanned toll increases in the early years of operation further distanced the SBX full-length toll from the project development toll rate. After SANDAG acquired the concession in 2011 and took over the operation of the facility, it lowered the tolls in 2012 and has since kept tolls the same in nominal terms, meaning the toll is lower in real terms year over year. Figure 12 shows the project development, actual, and projected toll rates in real (2000$) and nominal terms. SANDAG has no plans to raise or lower the toll, although it is plausible that SANDAG will adjust the toll rate for inflation in the future (this is not reflected in Figure 12; the projected toll assumes the $2.75 toll continues indefinitely). Note that future conversions between real and nominal tolls are based on an assumed 2.0% per year CPI growth rate.
The project development input toll estimates assumed ETC (FasTrak) tolls would increase with inflation. Real tolls (in 2000$) were assumed to be $2.25 for all offpeak trips and $2.50 from 2007 through 2014, increasing to $2.75 in 2015. The actual ETC tolls applied on SBX were $3.50 in 2007, rising to $3.85 in 2011. The tolls were reduced by SANDAG to the current level of $2.75 in 2012. The tolls remain unchanged (not adjusted for inflation) at $2.75 for FasTrak and $3.50 for cash. Current tolls are one-third lower than the tolls assumed in the project development forecasts. This analysis focuses on the full-length ETC toll. Cash tolls were set to be as high as or higher than the maximum ETC toll rate at each gate, which makes some cash toll rates much higher than the ETC rate.
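The real-versus-nominal comparison underlying Figure 12 can be sketched as below, using the report's assumed 2.0% per year CPI growth rate. For simplicity the sketch applies that rate across all years, whereas the report would use historical CPI for past years; the function name is ours.

```python
def nominal_to_real(nominal_toll, year, base_year=2000, cpi_growth=0.02):
    """Deflate a nominal toll to base-year (2000$) terms at a constant CPI growth rate."""
    return nominal_toll / (1.0 + cpi_growth) ** (year - base_year)

# A $2.75 toll held flat in nominal terms declines in real (2000$) terms:
real_2012 = nominal_to_real(2.75, 2012)  # ~2.17
real_2020 = nominal_to_real(2.75, 2020)  # lower still
```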

Figure 12: Model and Actual Full-Length ETC Tolls on SBX

Socioeconomic Growth

Direct comparison of the socioeconomic variables used in the forecasting is impossible due to the lack of model files and the limited data presented in the reports. The projected socioeconomic data used to develop the traffic and revenue forecasts were households and acres of non-residential development (retail development, industrial development, acres of office land use, etc.). It goes without saying that the Global Financial Crisis and housing bubble had an impact on the forecasts. Although the forecasts developed in the early 2000s were probably not as bullish as the forecasts developed just a couple of years later, the forecasts certainly do not reflect the economic impact felt from the economic highs in early 2006 through the recessionary impacts to 2012.

Land use model inputs for the traffic and revenue models were based on the SANDAG Series 9 forecasts, adjusted in the forecasting process to meet the needs of the corridor. Comprehensive comparison of the base Series 9 forecasts to the current SANDAG Series 13 forecasts is not possible due to lack of data availability. However, data were available to compare observed (and estimated 2018-2020) San Diego County population estimates to the San Diego County household SANDAG model inputs. Figure 13 compares the San Diego County total number of households from the project planning study to actual population values for San Diego County, indexed so that 100 represents the value of each variable in 2000. The chart shows that from 1990-2000, which is observed data in both data sets, the indexed households and population track very well, as you would expect with consistent household sizes.
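The indexing used in Figure 13 rescales each series so that its value in the base year (2000) equals 100, allowing household and population counts of different magnitudes to be compared on one axis. A minimal sketch; the data values here are illustrative, not taken from the report.

```python
def index_series(series, base_year):
    """Rescale a {year: value} series so that series[base_year] maps to 100."""
    base = series[base_year]
    return {year: 100.0 * value / base for year, value in series.items()}

# Hypothetical county household totals, for illustration only
households = {1990: 880_000, 2000: 995_000, 2010: 1_090_000}
indexed = index_series(households, 2000)  # indexed[2000] == 100.0 by construction
```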
The observed population data show that the slowdown in population growth from 2002-2006 was not

7 Note the number of households is shown in 5-year increments and interpolated for the intermediate years, whereas the population data (from the St. Louis Federal Reserve FRED database) are annual.

reflected in the model input household forecasts. The slowdown abated slightly, but growth rates remained below the forecast annual growth rates until 2011 and never exceeded the projected household growth rates, indicating that the model input household forecasts were too high. This difference in household growth rates impacts the trip generation component of the traffic model and overall trip rates.

Figure 13: Comparison of Observed and Projected Population and Household Growth in San Diego County

Housing Price

In addition to total population and households, housing prices can have an impact on the perception of the localized economy and values of time. The housing crisis impacted San Diego and the South Bay and Chula Vista areas both in terms of foreclosures and housing prices. Foreclosure, measured as change in homeownership, was particularly devastating in the South Bay area, with the number of owner-occupied houses dropping significantly in the SBX corridor, as seen in Figure 14 (Giuliano). Median home prices in the San Diego region peaked at $517,500 in November 2005 and then fell to $280,000 in January of 2009 (Giuliano). Figure 15 shows the San Diego Home Price Index (S&P), which hit its maximum in March 2006 at 251.7 and fell to 145.4, even with 2002 levels, by May 2009. Since the low point of the recession the home price index has grown, returning to pre-recession levels in early 2018.

Since 1970 there have been five recessions (1973-1975, 1980-1982, 1990-1991, 2001, 2008-2009). No forecast can accurately account for specific economic slowdowns or recessions. Exogenous socioeconomic forecasts are developed with economic cycles in mind, accounting for the impact of the cycle over the long term. Forecasters need to be aware of the potential short-term impacts of the economic cycle on traffic, ridership, and revenue forecasts for transportation projects.

Figure 14: Map of Change in Owner-Occupied Housing Units in San Diego County

8 Source: Public Private Partnerships in California, Phase II Report, Section VII: California Political Environment, July 2012

Figure 15: San Diego Home Price Index, 1987-2018

Border Crossings

A driver of traffic on the South Bay Expressway is the number of border crossings, due to the proximity of the SBX to the Otay Mesa border crossing. The SBX T&R model used forecasts developed in the San Diego Region - Baja California Cross-Border Transportation Study from November 2000. This study projected a 25-year compound annual growth rate (1995 to 2020) of 3.2% per year for passenger cars and 2.8% per year for trucks. This compares reasonably well to the observed US-Mexico border crossings reported by the Bureau of Transportation Statistics, which show a compound annual growth rate of 4.4% for passenger cars and 2.7% for trucks.

While the overall border crossing forecasts have been close to observed growth for trucks and slightly underestimated for autos, the report only presents the 1995 base and 2020 forecasts. The traffic and revenue models interpolated the intermediate years. Figure 16 and Figure 17 show the observed versus modeled auto and truck border crossing statistics and demonstrate the danger of interpreting long-term growth rates through intermediate forecast years. The long-term growth has been reasonable (actually underestimated for autos), but in the short term it varies considerably, posing risk to intermediate traffic forecasts.

9 Source: (S&P) Economic Research Division, Federal Reserve Bank of St. Louis
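The border-crossing forecasts rest on a single compound annual growth rate (CAGR) fitted between a base and a horizon year; as the text notes, such a rate says nothing about the path through intermediate years. A quick sketch of the arithmetic (function name is ours):

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two endpoint values."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# A 3.2%/yr rate over 25 years (1995-2020) implies roughly 2.2x total growth,
# regardless of how the intermediate years actually unfolded:
growth_multiple = (1.0 + 0.032) ** 25  # ~2.20
```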

Figure 16: US-Mexico Historical Border Crossings at Otay Mesa (Passenger Cars)

Figure 17: US-Mexico Historical Border Crossings at Otay Mesa (Trucks)

The hypothesis that the traffic and revenue forecasts were overestimated due to the inaccuracy of the border crossing forecasts could have validity for autos in 2010, which were 32% below the forecast value. Since that low point in border crossings, the number of passenger car border crossings has grown and now exceeds the forecast by over 20%. Truck crossings, while reflecting the changing economic conditions, have remained within 10% of the forecast value since 2000.

Traffic and Revenue Forecasts

There is no consistent format of forecast or observed traffic on the SBX. Some reports refer to daily or annual transactions, others to Average Daily Traffic (ADT), while others use Average Weekday Traffic (AWDT). There is an overall lack of observed transactional or traffic count data from opening to current conditions. Revenue data are also in different forms and vary greatly based on assumed toll rates, which vary from the project development forecasts to the several rounds of reforecasts with different

toll policies tested. Figure 18 shows the annual toll revenue forecasts for the different analyses and the observed toll revenue from the two fiscal years publicly available. The chart clearly shows that the more recent toll revenue forecasts are much less aggressive than the project development and 2008 forecasts, which, in addition to assuming higher toll rates, did not project or fully appreciate the impact of the Global Financial Crisis and housing bubble on transportation.

Figure 18: Annual Revenue Forecasts on SBX

7.5 Discussion

This deep dive is not meant to be a criticism of the forecasts developed in 2003. While hindsight allows us to understand the warning signs, very few in the world saw the Global Financial Crisis and housing bubble coming. The TIFIA Risk Analysis Report showed that the early year project development forecasts had a probability of less than 5%, although this analysis included only risks associated with toll revenues (projections of construction costs and operating costs were held constant); even so, the USDOT certainly did not forecast the impending Global Financial Crisis. The report shows the importance of detailed risk assessments and of understanding the major drivers of the forecasts and how changes in modeling and growth assumptions impact traffic and revenue forecasts. If a more conservative approach had been taken in the development of the project, it is unlikely that a P3 would have found this an appropriate project; at the least, the concessionaire would have structured the deal differently.

Through researching forecasts and comparable data, one recommendation is for every project to develop clear model performance metrics during the forecast period that can be checked against observed data.
Much like data collected for transit before-and-after studies, these data would provide clear insight into the forecasting process and could be used in each region (and collectively in the US) to understand common forecasting errors. These metrics may include:

- Socio-economic variables such as population and employment at subregional levels (focusing on the project corridors);
- Regional VMT and VHT values;
- Consistent ADT measures at specific points in the corridor (a plan to collect annual traffic counts on the facility for the first 5-10 years after opening); and
- Consistent definitions of other measures to be collected and maintained. For toll facilities this could be annual or daily transactions, revenue miles traveled, daily or annual revenue, average toll rates, etc.

8 US 41, Brown County, Wisconsin

8.1 Introduction

The US 41 Project in Brown County, Wisconsin is a project of capacity addition, reconstruction of nine interchanges, construction of 24 roundabouts, addition of collector-distributor lanes, and construction of two system interchanges. This report, written in April 2018, assesses the accuracy of traffic forecasts for the US 41 Project in Brown County. The analysis focuses on an approximately 3.3-mile segment with observed pre/post-construction traffic counts and traffic forecasts, presented in the Final Environmental Impact Statement (FEIS) for US 41 Memorial Drive to County M, Brown County, Wisconsin. The FEIS provided traffic forecasts for four sites. Traffic forecasts were prepared in 2011 for the 2015 (construction year) and 2035 (design year) horizons. The segment of the US 41 Project covered by this report opened in spring 2017. Traffic counts are available for the years 2009-2017. Compared to the other deep dive analysis cases, the US 41 Project is, to some extent, an imperfect deep dive due to limited availability of traffic forecasts, counts, and exogenous forecast data such as employment forecasts. More details are discussed in the following sections.

This report consists of seven sections. Section 8.2 describes the project.
Section 8.3 compares the predicted and actual traffic volumes for all roadways in the study area where post-opening traffic counts are available. Section 8.4 enumerates the exogenous forecasts and sources of forecast error for the project. It also includes an assessment of the accuracy of the exogenous forecasts. Section 8.5 attempts to identify which of the items discussed in Section 8.4 are important sources of forecast error and, if so, to quantify how much the forecast would have changed if the forecasters had had accurate information about the item. Section 8.6 summarizes the findings.

8.2 Project Description

The US 41 Project in Brown County aimed to improve safety and road capacity by replacing old and deteriorating pavement and outdated design infrastructure to new standards. The project area is an approximately 14-mile portion of US 41 in Brown County, Wisconsin, covering US 41 from Orange Lane near the County Road F interchange to the County Road M interchange. Figure 19 shows the area of the project with five roadway segments. The study area for this deep dive, however, is only the 3.3-mile roadway covered by the FEIS of the Memorial Drive to County M segment (segment 5 in Figure 19).

Of the five segments, Memorial Drive to County M is the only one that required an EIS, because of the potential environmental impacts of building two system interchanges at WIS 29 and I-43, with tall flyover-type ramps constructed over swampland. The other four segments in Figure 19 were widened from four lanes to six or eight, some with auxiliary lanes, which required either re-evaluating the original Environmental Assessment (EA), completed in 2002, or completing a new EA.

Figure 19: Project Study Area (US 41 Brown County)

The US 41 Project in Brown County was part of the 31-mile US 41 highway reconstruction project in Winnebago and Brown Counties. As Figure 20 shows, the project areas in the two counties are not connected along US 41; each is adjacent to a major city: Green Bay in Brown County and Oshkosh in Winnebago County. The US 41 Project was the largest reconstruction project in the history of WisDOT's Northeast Region.

Figure 20: Areas of US 41 Project in Brown and Winnebago Counties

Construction of the 17-mile Winnebago County portion was completed in 2014 with a $450 million budget. Unlike its Brown County counterpart, it did not involve heavy construction work for new system interchanges such as those at WIS 29 and I-43. The Brown County portion took six years to complete, from 2011 to 2017, costing approximately $1 billion. Figure 21 compares key numbers for the two projects. The new traffic lanes and bridges are expected to last 50 to 75 years.

Figure 21: US 41 Project by the Numbers

Figure 22: Map of Wisconsin DOT Regions and the Fox Valley Area (within the red boundary)

The US 41 Project is important to Wisconsin and the region because it upgrades a transportation link that supports the economic vitality of the Fox River Valley, connecting southeastern and northeastern Wisconsin, two areas that together contain more than half the state's population and most of its workforce and manufacturing facilities (see Figure 22, which maps WisDOT's planning regions and the Fox Valley area).

8.3 Predicted-Actual Comparison of Traffic Forecasts

There are four links/roadways in this deep dive study area (Figure 23 displays their locations). The FEIS for US 41 Memorial Drive to County M in Brown County10 provides existing-year (2005) traffic and two future-year forecasts (2015 and 2035) for the four links. The traffic forecasts were outputs from the regional travel demand model maintained by the Brown County Planning Commission (BCPC), the region's Metropolitan Planning Organization (MPO). Traffic volumes were expressed as Average Daily Traffic (ADT), which reflects average travel conditions rather than daily or seasonal fluctuations.

10 Final EIS - US 41 Memorial Drive to County M, Brown County, Wisconsin (WisDOT Project I.D. 1133-10-01), ftp://ftp.dot.wi.gov/dtsd/bts/environment/library/1133-10-01-F.pdf

Figure 23: Traffic Count Locations in the Study Area

Of the four links, three have an Average Daily Traffic (ADT) count publicly available as of June 2018. The following table lists each of these links with its forecast and observed ADT. The table includes an inaccuracy measure for the traffic forecasts, estimated as:

Percent Difference from Forecast = (Opening Year Count - Opening Year Forecast) / Opening Year Forecast

Table 22: Existing and Forecast Traffic (2005-2035), from USH 41 Traffic Study & EIS

| Site ID | Site Segment                          | Base Year (2005) | Opening Year Count (2017) | Opening Year Forecast (2017) | Percent Difference from Forecast |
|---------|---------------------------------------|------------------|---------------------------|------------------------------|----------------------------------|
| 1       | US 41 Mainline, STH 29 to Velp Ave    | 61,200           | 71,547                    | 73,400                       | -2.52%                           |
| 2       | US 41 Mainline, Velp Ave to I-43      | 56,800           | --                        | 69,300                       | --                               |
| 3       | US 41 Mainline, I-43 to Lineville Rd  | 50,200           | 54,300 *                  | 60,300                       | -9.95%                           |
| 4       | I-43, Atkinson Drive to US 41         | 38,400           | 42,881                    | 44,200                       | -2.98%                           |

Note: * Preliminary ADT, 06/06/2017, from the Roadrunner website: https://trust.dot.state.wi.us/roadrunner/. According to WisDOT, a preliminary ADT is generated when the raw count is first processed, using factors based on continuous data from the previous year. Part of the annual processing of all traffic count data is the generation of new factors based on current-year continuous data. These current-year factors are then applied to all of the short-term counts taken during the year to compute a final ADT for each site. (http://wisconsindot.gov/Documents/projects/by-region/ne/23exp/23ls-a.pdf)
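The percent difference metric above can be computed directly from the opening-year counts and forecasts in Table 22. The following minimal sketch (values copied from the table; site labels abbreviated) illustrates the calculation:

```python
def percent_difference(count: float, forecast: float) -> float:
    """Percent difference from forecast: (count - forecast) / forecast.

    Negative values mean the forecast over-predicted traffic.
    """
    return (count - forecast) / forecast

# Opening-year (2017) count and forecast pairs from Table 22
sites = {
    "Site 1: STH 29 to Velp Ave":   (71_547, 73_400),
    "Site 3: I-43 to Lineville Rd": (54_300, 60_300),
    "Site 4: I-43, Atkinson Dr":    (42_881, 44_200),
}

for label, (count, forecast) in sites.items():
    print(f"{label}: {percent_difference(count, forecast):+.2%}")
```

Running this reproduces the -2.52%, -9.95%, and -2.98% differences reported in Table 22.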

Table 22 shows that the traffic forecasts for the study sites were generally accurate, with percent differences from forecast ranging from -10% to -3% for the three study sites with counts.

8.4 Potential Sources of Forecast Error

This section identifies the exogenous forecasts and project assumptions used in the development of the traffic forecasts. Exogenous forecasts are made outside of the immediate traffic forecasting process. Project assumptions are established during project development and serve as the basis for the traffic forecast. Exogenous forecasts and project assumptions are leading sources of forecast error. Examples are population and employment forecasts, which are commonly identified as major sources of traffic forecasting error. These forecasts are usually made by outside planning agencies on a regular basis; that is, they are not prepared for any individual project. During project development, these forecasts are revised to match assumptions documented by the project team. Past forecasting research has identified several exogenous forecasts and project assumptions as common sources of forecast error, including:

• Macro-economic conditions (of the region or study area),
• Population and employment forecasts,
• Significant changes in land use,
• Auto fuel prices,
• Toll pricing, sensitivity, and price levels,
• Auto ownership,
• Changes in technology,
• Travel times within the study area, and
• Duration between the year the forecast was produced and the opening year.

The following table lists all exogenous forecasts and project assumptions for which observed data are available. It also includes an assessment of the accuracy of each item.
Table 23: List of Exogenous Forecasts and Project Assumptions (US 41 Project)

| Item                                  | Quantifiable | Observed Year 2010 Value | Estimated Year 2010 Value | % Difference |
|---------------------------------------|--------------|--------------------------|---------------------------|--------------|
| Population *                          | Yes          | 135,897                  | 138,775                   | -2%          |
| Auto Fuel Price (price per gallon) ** | Yes          | $2.41                    | $2.73                     | -12%         |
| Study-Forecast Duration (years)       | Yes          | 12                       | 10                        | 20%          |

Data sources for observed values:
* Sum of population in the City of Green Bay, Village of Howard, and Town of Suamico; FEIS, US 41 Memorial Drive to County M in Brown County (ftp://ftp.dot.wi.gov/dtsd/bts/environment/library/1133-10-01-F.pdf)
** BLS (CPI-All Urban Consumers (Current Series), All items in Boston-Cambridge-Newton, MA-NH, all urban consumers, not seasonally adjusted, https://www.bls.gov/data/) and EIA (New England (PADD 1A) All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon) for year 2010, https://www.eia.gov/dnav/pet/pet_pri_gnd_dcus_YBOS_w.htm)
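The estimated fuel price in Table 23 is a proxy, built by compounding a base-year average gasoline price at an annual inflation rate, as described in the text below. A minimal sketch of that compounding, using hypothetical inputs (the actual base price and rate used by the project team are not published in the available documentation):

```python
def proxy_fuel_price(base_price: float, annual_inflation: float, years: int) -> float:
    """Extrapolate a base-year gasoline price forward by compound inflation.

    base_price and annual_inflation are hypothetical placeholders, not
    the values actually used for the US 41 forecast.
    """
    return base_price * (1.0 + annual_inflation) ** years

# Hypothetical example: $2.25/gal in 2005, 2% annual inflation, 5 years to 2010
price_2010 = proxy_fuel_price(2.25, 0.02, 5)
print(f"${price_2010:.2f} per gallon")  # $2.48 per gallon
```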

The project team had difficulty finding available data sources for the US 41 deep dive analysis. Only a handful of exogenous forecasts and project assumptions could be identified as potential sources of forecast error for the US 41 Project. Table 23 shows that the population forecast was close to the observed population, that auto fuel prices were slightly over-estimated, and that the opening year was delayed by two years. Information on other typical exogenous forecasts, such as macro-economic conditions, car ownership, travel time, and value of time, is not publicly available. For fuel price, a proxy forecast was estimated by multiplying the 2005 average gasoline price by the annual inflation rate between 2005 and 2017.

8.5 Contributing Sources to Forecast Error

Building upon the items discussed in Section 8.4, this section attempts to identify which items are important sources of forecast error and to quantify how much the forecast would have changed if the forecasters had had accurate information about each item. Adjusted forecasts for the critical roadways are computed by applying an elasticity to the relative change between the actual and predicted values of each item in Section 8.4. Only those items that could be quantified and were deemed important for this project were adjusted. The effect on the forecast can be quantified as follows. First, the change in forecast value, the relative difference between the actual observed value of an item and the value assumed in the forecast, is calculated:

Change in Forecast Value = (Actual Value - Forecast Value) / Forecast Value
Second, the effect on the forecast is calculated by exponentiating the product of the item's elasticity and the natural log of one plus the change in forecast value. The resulting factor is applied to the forecast volume to generate an adjusted forecast:

Effect on Forecast = exp(Elasticity × ln(1 + Change in Forecast Value)) - 1

Adjusted Forecast = (1 + Effect on Forecast) × Actual Forecast Volume

This deep dive analysis adopted the elasticity values identified by Ewing et al. (2014) from their combined cross-sectional and longitudinal models, supplemented by other transportation literature (Dong et al. 2012; Dunkerley, Rohr, and Daly 2014). It is important to note that the elasticity values from Ewing et al. (2014) relate to Vehicle Miles Traveled (VMT), not traffic volumes. To the best of our knowledge and literature review, no published work investigates elasticities of traffic volumes with respect to employment, population, or fuel price, and none discusses the elasticity of VMT or traffic volume with respect to employment. This deep dive analysis therefore made two assumptions: first, that the elasticities of VMT with respect to population and fuel price are close to the corresponding elasticities of traffic volumes, given the high correlation between VMT and traffic volumes; and second, that the elasticity with respect to employment is close to that for per capita income, again because of their high correlation. The elasticity values used in this study are:

• 0.75 for population
• -0.2 for fuel price

The results of quantifying the effect on the forecast are shown in the following table. The original forecast value was successively adjusted for each of the items identified as contributing sources of forecast error for each segment. The final remaining percent difference after all adjustments is shown in Table 24, which is sorted from largest to smallest remaining percent difference from forecast. The segment IDs correspond to the sites shown in Figure 23.

Table 24: Forecast Adjustment Table Based on Elasticities

| Seg# | Item | Actual Value | Forecast Value | Change in Forecast Value | Elasticity | Effect on Forecast | Forecast Volume | Adjusted Forecast Volume | Remaining % Difference |
|------|------|--------------|----------------|--------------------------|------------|--------------------|-----------------|--------------------------|------------------------|
| 3 | Population | 135,897 | 138,775 | -2% | 0.75 | -2% | 60,300 | 59,360 | 9% |
| 3 | Fuel Price | 2.41 | 2.73 | -12% | -0.2 | 3% | 59,360 | 60,893 | 12% |
| 3 | Original Traffic Forecast | 54,300 | 60,300 | -10% | N/A | N/A | | | |
| 3 | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 60,300 | 60,893 | -11% |
| 1 | Population | 135,897 | 138,775 | -2% | 0.75 | -2% | 73,400 | 72,255 | 1% |
| 1 | Fuel Price | 2.41 | 2.73 | -12% | -0.2 | 3% | 72,255 | 74,122 | 4% |
| 1 | Original Traffic Forecast | 71,547 | 73,400 | -3% | N/A | N/A | | | |
| 1 | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 73,400 | 74,122 | -3% |
| 4 | Population | 135,897 | 138,775 | -2% | 0.75 | -2% | 44,200 | 43,511 | 1% |
| 4 | Fuel Price | 2.41 | 2.73 | -12% | -0.2 | 3% | 43,511 | 44,635 | 4% |
| 4 | Original Traffic Forecast | 42,881 | 44,200 | -3% | N/A | N/A | | | |
| 4 | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 44,200 | 44,635 | -4% |
| Total | Population | 135,897 | 138,775 | -2% | 0.75 | -2% | 177,900 | 175,126 | 4% |
| Total | Fuel Price | 2.41 | 2.73 | -12% | -0.2 | 3% | 175,126 | 179,650 | 6% |
| Total | Original Traffic Forecast | 168,728 | 177,900 | -5% | N/A | N/A | | | |
| Total | Adjusted Traffic Forecast | N/A | N/A | N/A | | | 177,900 | 179,650 | -6% |

In general, the adjustments had a slightly negative impact on traffic forecast accuracy: all three study sites saw their percent difference from forecast grow by one to two percentage points.
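The adjustment mechanics in Table 24 can be reproduced with a short script. The sketch below applies the population and fuel price adjustments to segment 3, using the elasticities and values from Tables 23 and 24; the arithmetic follows the Effect on Forecast formula in Section 8.5, and small differences from the table's final value reflect rounding in the original calculations:

```python
import math

def effect_on_forecast(elasticity: float, change_in_value: float) -> float:
    """Effect on Forecast = exp(elasticity * ln(1 + change)) - 1."""
    return math.exp(elasticity * math.log(1.0 + change_in_value)) - 1.0

def adjusted_forecast(volume: float, elasticity: float,
                      actual: float, forecast: float) -> float:
    """Apply one item's adjustment to a forecast volume."""
    change = (actual - forecast) / forecast  # Change in Forecast Value
    return (1.0 + effect_on_forecast(elasticity, change)) * volume

# Segment 3 (US 41, I-43 to Lineville Rd): original opening-year forecast
volume = 60_300
# Population adjustment (elasticity 0.75)
volume = adjusted_forecast(volume, 0.75, actual=135_897, forecast=138_775)
print(round(volume))  # 59360, matching Table 24
# Fuel price adjustment (elasticity -0.2)
volume = adjusted_forecast(volume, -0.2, actual=2.41, forecast=2.73)
print(round(volume))  # 60858, vs. 60,893 in Table 24 (rounding differences)
```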
Examined factor by factor, however, the population adjustment alone improved traffic forecast accuracy; the overall negative impact came from the fuel price adjustment.

8.6 Discussion

The US 41 Project mainly widened the highway from four lanes to six or eight, some segments with auxiliary lanes. It replaced old, deteriorating pavement and outdated design infrastructure, which entailed reconstructing nine interchanges, constructing 24 roundabouts, adding collector-distributor lanes, and building two system interchanges. The Project was intended to improve safety and upgrade a transportation link that supports the economic vitality of the region between southeastern

and northeastern Wisconsin. The original traffic forecasts were slightly overestimated, by 3 to 10 percent, for the three study sites with counts, but they were generally close. It should be noted that the traffic count for site 3 was a preliminary ADT, not a final ADT. The larger gap between the traffic forecast and the opening-year count for site 3 may derive from the use of this preliminary estimate. The traffic forecasting accuracy improved after correcting the exogenous population forecast. However, the fuel price adjustment increased the percent difference from forecast. This may be because the change in fuel price had little effect on traffic volumes in a study area where public transportation is not a reasonable alternative mode. However, this interpretation could be wrong, given the uncertainty in how the fuel price impact was implemented in the traffic forecast model. Availability of the archived model and its inputs would have provided a deeper understanding of the parameters and methodology used to forecast traffic for the US 41 Project. Only a small number of documents and data sources are available for the US 41 Project. It is unknown whether risk and uncertainty were considered during the project, due to the inaccessibility of its documentation. For future forecasting efforts, it is suggested that a copy of the forecasting documentation and assumptions be archived along with the travel model files used to generate the forecasts.

9 Discussion

We can summarize the key findings about the forecasts from each Deep Dive as follows:

• On the Eastown Road Expansion, actual volumes were 20% lower than forecast for the existing portion of the road and 43% lower than forecast for the extension. Correcting for errors in input values (employment, population/households, car ownership, fuel price, and travel time) improved these differences to 3% and 39%.
Travel speeds appear to be of particular importance in this case, with actual speeds lower than forecast on Eastown Road.

• On the Indian River Bridge, actual volumes on the new bridge were 60% lower than forecast even though the base-year validation was reasonable. Correcting errors in the inputs (employment, population, and fuel price) only improved the forecasts slightly. It is not clear why the discrepancy occurs.

• For the Central Artery Tunnel project, actual traffic on modified links was 4% lower than forecast, and actual traffic on new links was 16% lower than forecast. This represents a strong forecast for a massive project with a long time horizon. Correcting input errors (for employment, population, and fuel price) would improve the forecast difference to +3% for existing links and -10% for new links.

• On the Cynthiana Bypass, actual traffic was about 30% lower than forecast for three of four bypass segments, and 4% lower than forecast for the fourth bypass segment. The major source of error on this project was the external traffic forecasts: actual traffic at external stations was 43% lower than forecast. Correcting this issue reduces the absolute difference to less than 4% for three of four segments, although with this correction actual traffic on the fourth segment is higher than the adjusted forecast.

• On the Southbay Expressway, the long-term forecasts appear to be reasonably good, but a straight-line interpolation to the short term creates large deviations. There appear

to be three major contributors to this outcome. First, the project opened as a privately financed toll road in November 2007, just before the recession caused a decrease in demand. Second, an important travel market for the road is border crossings from Mexico, particularly for truck traffic, and border crossings decreased from their long-term trend around the time the toll road opened. Third, the operator responded by increasing tolls, further reducing demand. The operator was unable to survive these factors and went bankrupt in 2010. SANDAG bought the road and reduced tolls, while border crossings and economic conditions recovered.

• For US 41 in Brown County, the original traffic forecasts were slightly overestimated, by 3 to 10 percent, for the three study sites with counts, but they were generally close. The traffic forecasting accuracy improved after correcting the exogenous population forecast. However, the fuel price adjustment increased the forecast error. This may be because the change in fuel price had little effect on traffic volumes in a study area where public transportation is not a reasonable alternative mode.

Similar to our findings from the Large N analysis, the traffic forecasts for the six projects chosen for Deep Dive analysis are more likely to be over-predicted than under-predicted. The Deep Dives expand our knowledge of this over-prediction by identifying the contributing sources of the inaccuracy. The key takeaways from the Deep Dive analysis are presented below:

1. Employment, population, and fuel price forecasts are common contributors to forecast inaccuracy: Adjustments to the forecasts using elasticities and model re-runs confirmed that significant errors in opening-year employment, fuel price, and travel speed forecasts played a major role in the over-estimation of traffic volumes.
In addition, we observe that macro-economic conditions in the opening year influence forecast accuracy; understandably, this has been observed for projects that opened during or after an economic downturn. It can be correlated with the over-estimation of employment and fuel price as well. With job losses, not only work trips but also leisure trips are reduced. A recession is assumed to change the value of time, which would also change the coefficients used for highway assignment. A change in job location while housing location stays the same would alter an individual's route selection. These shifts would clearly change travel patterns in the following years. This is a major uncertainty that is extremely difficult to consider directly at the time traffic forecasts are prepared, given the various modeling parameters that could change in an economic downturn. One way to account for it is to evaluate and document the change in traffic forecasts under reduced employment and higher fuel prices.

2. External traffic and travel speed assumptions also affect traffic forecasts: For the bypass extension project in Cynthiana, the estimated growth rate for external trips was found to have the largest bearing on forecast error for that specific project. This needs further investigation for other types of projects as well. The effect of travel speed on forecast volume is intuitive: with a lower actual average speed, a roadway serves fewer vehicles, resulting in over-estimation of traffic. But evaluating the effect of these factors requires archiving the model, which brings us to our next point.

3. The reasons for forecast inaccuracy are diverse. While the points above list some of the factors that contribute to forecast inaccuracy, it is clear from our limited sample

that the reasons for inaccuracies are diverse: external forecasts, travel speeds, population and employment forecasts, and short-term variations from a long-term trend have all been identified as contributing factors in one or more of the Deep Dives. This makes it difficult to generalize our findings to the broader population of forecasts, and it makes it hard to identify a simple way of improving forecasts. Nonetheless, the lessons here can help forecasters anticipate the types of problems that may occur. For example, one lesson may be to think through the travel markets that a project is likely to serve, such as through traffic for a bypass project (Cynthiana Bypass), traffic diverted from a parallel facility (Newtown Road Extension), or cross-border traffic (Southbay Expressway), and give extra scrutiny in forecasting to those markets.

4. Better archiving of models, forecast documentation, and validation approaches: While forecasting accuracy improved after accounting for several exogenous variables, such as employment and population, the effect of changes in other variables could not be ascertained for some of the projects. Improved documentation of the forecast methodology would make such assessments more informative, particularly regarding the definitions of the variables used in the model. Availability of the archived model and its inputs would have provided a deeper understanding of the parameters and methodology used for forecasting traffic.

References

American Automobile Association. 2013. "Your Driving Costs: How Much Are You Really Paying to Drive." http://exchange.aaa.com/wp-content/uploads/2013/04/Your-Driving-Costs-2013.pdf.

Dong, Jing, Diane Davidson, Frank Southworth, and Tim Reuscher. 2012. "Analysis of Automobile Travel Demand Elasticities with Respect to Travel Cost." Prepared for the Federal Highway Administration. Oak Ridge National Laboratory.

Dunkerley, Fay, Charlene Rohr, and Andrew Daly. 2014. "Road Traffic Demand Elasticities: A Rapid Evidence Assessment."

Ewing, Reid, Shima Hamidi, Frank Gallivan, Arthur C. Nelson, and James B. Grace. 2014. "Structural Equation Models of VMT Growth in US Urbanised Areas." Urban Studies 51 (14): 3079-96. https://doi.org/10.1177/0042098013516521.

Accurate traffic forecasts for highway planning and design help ensure that public dollars are spent wisely. Forecasts inform discussions about whether, when, how, and where to invest public resources to manage traffic flow, to widen and remodel existing facilities, and to locate, align, and size new ones.

The TRB National Cooperative Highway Research Program's NCHRP Report 934: Traffic Forecasting Accuracy Assessment Research seeks to develop a process and methods by which to analyze and improve the accuracy, reliability, and utility of project-level traffic forecasts.

The report also includes tools for engineers and planners who are involved in generating traffic forecasts: Quantile Regression Models and a Traffic Forecast Accuracy Assessment.
