SECTION VI--NCHRP PROJECT 17-18(3) GUIDANCE FOR IMPLEMENTATION OF THE AASHTO STRATEGIC HIGHWAY SAFETY PLAN

Implementation Step 11: Assess and Transition the Program

General Description

The AASHTO Strategic Highway Safety Plan includes improvement in highway safety management. A key element of that improvement is the conduct of properly designed program evaluations. The program evaluation will have first been designed in Step 8, which occurs prior to any field implementation; for details on designing an evaluation, refer to Step 8. For an example of how the New Zealand Transport Authority treats this step as an important part of the process, see Appendix N.

The program will usually have a specified operational period. An evaluation of both process and performance will have begun before implementation starts. It may continue during the course of the implementation, and it will be completed after the operational period of the program. The overall effectiveness of the effort should be measured to determine whether the investment was worthwhile and to guide top management on how to proceed into the post-program period. This often means that program effectiveness must be measured quickly to provide a preliminary indication of success or of the need for immediate modification. Quick measurement will be particularly important early in the development of the AASHTO Strategic Highway Safety Plan, as agencies learn what works best. Therefore, surrogates for safety impact, usually behavioral measures, may have to be used to reach early or interim conclusions. This need for interim surrogate measures should be addressed when the evaluation is designed, back in Step 8. However, a certain period, usually a minimum of a couple of years, will be required to measure effectiveness properly and draw valid conclusions about programs designed to reduce highway fatalities when direct safety performance measures are used.
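The report does not prescribe a particular analysis method, but the kind of direct safety performance measurement described above is often carried out as a simple before/after comparison against untreated sites. The following is a minimal illustrative sketch of that technique; the function name and all crash counts are hypothetical and are not taken from the report.

```python
# Illustrative sketch: naive before/after evaluation with a comparison
# group, one common way to estimate a program's safety effect.
# All counts below are hypothetical, not figures from any program.

def effectiveness_estimate(treat_before, treat_after,
                           comp_before, comp_after):
    """Return the estimated percent change in crashes attributable
    to the program, adjusted for the background trend observed at
    untreated comparison sites."""
    # Expected "after" count at treated sites had nothing been done,
    # scaled by the trend seen at the comparison sites.
    expected_after = treat_before * (comp_after / comp_before)
    # Ratio below 1.0 means crashes fell more than the background trend.
    ratio = treat_after / expected_after
    return (ratio - 1.0) * 100.0  # percent change

# Hypothetical counts: 120 -> 80 crashes at treated sites,
# 200 -> 190 crashes at comparison sites over the same periods.
change = effectiveness_estimate(120, 80, 200, 190)
print(f"Estimated change in crashes: {change:+.1f}%")
```

A sketch like this also makes concrete why the report calls for a multi-year operational period: with only a year or two of counts, random year-to-year variation in crashes can easily swamp an effect of this size.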
The results of the work are usually reported back to those who authorized it, to the stakeholders, and to any others in management who will be involved in determining the future of the program. Decisions must be made on whether and how to continue or expand the effort. If a program is to be continued or expanded (as in the case of a pilot study), the results of its assessment may suggest modifications. In some cases, a decision may be needed to remove what was placed in the highway environment as part of the program because a negative impact has been measured. Even a "permanent" installation (e.g., rumble strips) requires a decision about investment in future maintenance if it is to remain effective. Finally, the results of the evaluation using performance measures should be fed back into a knowledge base to improve future estimates of effectiveness.

Specific Elements

1. Analysis
1.1. Summarize assessment data reported during the course of the program
1.2. Analyze both process and performance measures (both quantitative and qualitative)
1.3. Evaluate the degree to which goals and objectives were achieved (using performance measures)
1.4. Estimate costs (especially vis-à-vis pre-implementation estimates)
1.5. Document anecdotal material that may provide insight for improving future programs and implementation efforts
1.6. Conduct and document debriefing sessions with persons involved in the program (including anecdotal evidence of effectiveness and recommended revisions)
2. Report results
3. Decide how to transition the program
3.1. Stop
3.2. Continue as is
3.3. Continue with revisions
3.4. Expand as is
3.5. Expand with revisions
3.6. Reverse some actions
4. Document data for creating or updating a database of effectiveness estimates