· Truck intercept survey ($175,000).
· Oregon Travel Behavior Survey ($125,000).
· Recreation/Tourism Activity Survey ($150,000).
· Household Activity and Travel Survey ($1,000,000).

Wisconsin paid $2.5 million for an NHTS add-on; however, the data are also usable by MPOs within the state. Kentucky paid just $176,000 for its NHTS add-on. Louisiana spent $100,000 on commodity flow data, which is typical. Indiana spent $60,000 on D&B employment data. Some other states reported negligible data acquisition costs.

Staffing and Maintenance

All states reported having the help of consultants when building their models. In some cases, teams of consulting firms contributed to model development. The dependence on consultants for maintenance varied considerably across states; however, most states reported that routine maintenance was done in-house.

As with costs, staffing levels varied widely across states, ranging from one-half of a full-time equivalent (FTE) in Florida, Indiana, and Kentucky to approximately three FTEs in Connecticut, Oregon, and Wisconsin. A little more than half of the states reported roughly one FTE. A few states noted that modeling responsibilities were spread across multiple staff members, each spending only a fraction of their time on the project. Some of the states with lower staffing levels reported having a larger amount of consultant help. Several states needed to add personnel as they increased their modeling activities.

Model maintenance is required to keep a model up to date in terms of network structure, demographic data, link data, and calibration data. Models that are not maintained become obsolete and useless. However, maintenance should not be so burdensome that staff does not have sufficient time for applications. States with new models find that there is little need for maintenance, but states with mature models experience a more constant effort. A 50/50 split between maintenance and applications is typical among those states that were able to make an estimate.

Time Frame

Because situations vary significantly across states, there is no consensus as to how long it takes to build a model. Models in most states have evolved over many years; therefore, no time estimate is possible. A reasonable range for states that recently built their models from scratch is 1 year (Delaware) to 4 years (Florida and Texas). Ohio, with an unusually ambitious model, is taking 8 years (see chapter three).

Maintenance is largely a continuous process or occurs on a very frequent cycle (1 to 2 years). Update cycles tend to coincide with statewide plan updates, with most states using a 5-year update cycle. Two states (Connecticut and Ohio) indicated using a 10-year cycle for major model revisions. Massachusetts noted that its next update would likely be driven by air quality conformity needs, whereas Indiana performs updates as needed for specific projects.

User Support

Training is considered an essential element of model deployment. A training session is often provided by the consultant on model delivery; thereafter, training happens sporadically. Only a few states have regular training cycles, with details differing from state to state. For example, Oregon has an arrangement with a local university to supply training, Connecticut sends employees to FHWA urban modeling courses, and Kentucky has an in-house annual training program lasting 2 days.

Users of the model tended to be confined to the state DOT and its consultants. States with organized urban model user groups (e.g., Florida and Iowa) can call on them for assistance with the statewide model, even though their members are not primary users of the model. Web pages tended to be located in those states with user groups.

A little more than half of the states with models have made provisions for distributing them to consultants and MPOs. Selected states will deliver their models to outside agencies or universities on request, although with conditions. For example, Texas asks borrowers to sign a confidentiality agreement, Kentucky requests that borrowers sign an agreement as to acceptable use of the model, Wisconsin has procedures by which it will allow the use of its model and the modeling software by outside parties, and Michigan distributes only trip tables and networks. The remaining states have had no experience with model distribution, and it is not clear whether these states have a policy against distribution or simply no need for it.

All states with models will make the results of model runs available on request. Requests are handled on a case-by-case basis. Some states will do custom model runs on request; however, those requests are fulfilled only for internal needs. Often the format of the requested data must be negotiated. For example, Vermont asks outside recipients of model results to sign a binding nondisclosure agreement to protect sensitive employment information.

INSTITUTIONAL BARRIERS TO MODELS

Given that only about half of the states have active models, there would appear to be reluctance on the part of some states to proceed with model development.

Agency Roadblocks

Only a few states reported having institutional barriers that needed to be overcome, and these barriers were not critical.