Chapter 2: Research Approach

The work for this project was roughly divided into four stages, as shown in Figure 2.1. The initial task was to develop and describe a full-range assessment process for the TIM training program that would address what would be assessed, how the assessment would be done, and when each assessment step would take place. A parallel and related task was to perform a literature review, including an assessment of other relevant training initiatives, and to develop a business model that specified how best to develop, implement, and sustain the TIM assessment tool. The research team then developed a set of use cases that served as a framework for the development of functional requirements; that framework guided subsequent system architectural design, development, and testing. Finally, the research team demonstrated the TIM assessment tool to SHRP 2 program staff, Federal Highway Administration (FHWA) personnel, and the Technical Expert Task Group (TETG). Key aspects of the research approach are discussed in the following sections.

Figure 2.1. SHRP 2 L32C project stages.

Collaborative Nature of Project L32C

The L32C project is tightly coupled with Projects L12, L32A, and L32B and with other TIM training programs nationwide, and it requires support from many federal, state, and local agencies. That made L32C a highly collaborative effort. Figure 2.2 shows the interactions and communications between L32C and other projects and organizations.

Figure 2.2. Collaboration between the SHRP 2 L32C project and other projects and organizations.

During the course of the work, the L32C team

• Attended project meetings with L32B, SHRP 2, and FHWA staff to review tasks, milestones, and timelines for Projects L32B and L32C and to discuss dependencies between the two projects;
• Engaged with the National Highway Institute (NHI) regarding its e-learning platform and mechanisms for integrating with it;
• Reviewed the L12 and L32A projects' final reports and course materials;
• Reviewed Project L32A course evaluation questionnaires and student exam;
• Reviewed Project L32B results and initial system requirements and design approaches;
• Reviewed NHI's registration survey and questions for course evaluation;
• Attended an FHWA-run TIM train-the-trainer class held in Rhode Island; and
• Interviewed responder agency managers and training professionals.

Conceptual Model for Training Evaluation

Early in the project the research team decided to use the widely used and popular "Kirkpatrick Model" as a conceptual reference for the TIM assessment tool. Donald L. Kirkpatrick's four-level evaluation model first appeared in a series of articles published in 1959 and became popular with his 1994 book, Evaluating Training Programs (Kirkpatrick 1959; Kirkpatrick 1994). The idea behind the Kirkpatrick Model is to provide organizations with meaningful ways to evaluate training programs or learning in the organization. The four levels of evaluation described by the model are depicted in Figure 2.3.

Figure 2.3. Kirkpatrick's four levels of learning evaluation.

Level 1. This level tries to ascertain how students feel about the training; it is a measure of student motivation and satisfaction. Students are typically asked to fill out evaluation or feedback forms immediately after the training ends. These forms usually include questions to evaluate instructors, training materials, and training logistics.

Level 2. This level measures how much the students have learned by attending the training. The measurements aim to find out what knowledge was learned, what skills were developed or improved, and what attitudes were changed. Students typically need to complete evaluation forms or perform some type of test both before and after the training.

Level 3. This level measures whether on-the-job behavioral changes have occurred as a result of students attending the training, and if so, to what extent. Trainees, their immediate supervisors, and their subordinates or peers who often observe their behaviors may be asked to participate in this level of evaluation. The degree of assessment difficulty increases at this level because behavioral changes often take time, and the right environment must be provided for students to implement their behavioral changes. Additionally, those who participate in this evaluation need to be observant to note the behavioral changes that took place.

Level 4. This level measures the impact on the business as a result of students attending the training and their subsequent on-the-job behavioral changes. The impact may be determined in terms of improved safety, increased productivity and efficiency, and reduced staff turnover. This level of assessment is usually the most difficult because results take time to achieve; measurements are needed both before and after the training. The evaluation also needs to determine what business results have been achieved as a result of student participation in the training, as opposed to other organizational initiatives.

The research team applied this four-level evaluation model when designing the TIM assessment process and tool.

System Development Methodology and Approach

The research team followed a typical systems development life cycle (SDLC) approach to designing and developing the TIM assessment tool. The analysis stage of the project provided an understanding of the business requirements, an essential first step in any SDLC methodology. From there the team

• Documented a set of use cases;
• Developed a concept of operations and initial meta-architecture for the system;
• Translated this foundational information into functional requirements;
• Established the system architecture, developed functional specifications, and documented test cases; and
• Developed the software and performed unit testing.

The high-level concept for the system that the team envisioned is shown in Figure 2.4. It consists of three major blocks of functionality:

• Survey Management: functionality to execute a particular kind of assessment, from a Level 1 (Reaction) survey at the conclusion of a training event to a Level 4 (Results) survey long after a training event or series of events.
• Constituent Management: functionality to manage relationships and communications with all key constituents (e.g., individual students, agency training officers and/or management, and trainers).
• Analysis and Reporting: functionality that enables program staff as well as participating agencies to analyze and report on training participation, needs, effectiveness, and so on.

Figure 2.4. High-level system concept. Survey Management System: survey design, survey execution, real-time results, basic reports and cross-tabs. Constituent (Customer) Relationship Management: contact management, automated workflows, scheduled follow-ups, communications management. Analysis & Reporting Tools: advanced reports, multidimensional analytics, historical performance.

Custom software development is almost always the least desirable approach to system implementation, and that was certainly the case for this project. Controlling costs, ensuring sustainability, and preserving long-term flexibility are always important considerations, particularly in a research project with limited scope and funding. All of these factors pointed to the use of off-the-shelf technology, with a focus on integration and customization via configuration, as opposed to writing code from scratch. Fortunately, the major functional elements of the envisioned system were all available in various cost-effective forms:

• Highly capable and popular survey management and constituent relationship management packages (which evolved from customer relationship management, or CRM) are available as software-as-a-service (SaaS) subscriptions. In this model, an organization pays a monthly or annual usage fee, and the vendor hosts and manages the entire application environment in the cloud.
• Cloud computing services allow an organization to subscribe to a cloud-based, virtual computing, storage, and network environment and pay for usage on a time-, capacity-, and bandwidth-used basis. In this case the organization is responsible for licensing, installing, and maintaining the application that runs in this virtual environment.

As described in more detail in Chapter 3 of this report, the team's approach was to use SaaS subscriptions for survey management and CRM functions and to base the analysis and reporting functions on desktop products that integrate with a cloud computing platform.
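The four Kirkpatrick levels described in this chapter map naturally onto a schedule of what to measure and when. The following is a compact, hypothetical sketch of that mapping as a lookup table; it is an illustration only, not part of the TIM assessment tool itself:

```python
# Hypothetical summary of the four Kirkpatrick evaluation levels.
# Level names follow the standard model (Reaction, Learning,
# Behavior, Results); timings paraphrase the chapter's descriptions.
KIRKPATRICK_LEVELS = {
    1: {"name": "Reaction",
        "question": "How did students feel about the training?",
        "timing": "immediately after the training event"},
    2: {"name": "Learning",
        "question": "What knowledge, skills, and attitudes were gained?",
        "timing": "tests before and after the training"},
    3: {"name": "Behavior",
        "question": "Did on-the-job behavior change, and to what extent?",
        "timing": "some time after the training, via observers"},
    4: {"name": "Results",
        "question": "What business impact resulted from the training?",
        "timing": "long after the training, with before/after baselines"},
}


def assessment_plan(level: int) -> str:
    """Return a one-line description of a level's assessment timing."""
    info = KIRKPATRICK_LEVELS[level]
    return f"Level {level} ({info['name']}): {info['timing']}"
```

A tool built around the model can use such a structure to decide, for example, that a Level 1 survey should be dispatched at the close of a training event while a Level 4 survey is scheduled much later.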
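The three functional blocks of the high-level system concept (Figure 2.4) could be outlined in code as cooperating modules. The sketch below is illustrative only; every class and method name is an assumption for the sketch, not the actual TIM tool's API, which was assembled from off-the-shelf SaaS components rather than custom code:

```python
from dataclasses import dataclass, field


@dataclass
class SurveyManagement:
    """Executes assessments, from a Level 1 (Reaction) survey at the end
    of a training event to a Level 4 (Results) survey long afterward."""
    surveys: list = field(default_factory=list)

    def launch_survey(self, level: int, audience: str) -> dict:
        # Create a survey record for one kind of assessment.
        survey = {"level": level, "audience": audience, "responses": []}
        self.surveys.append(survey)
        return survey


@dataclass
class ConstituentManagement:
    """Manages relationships and communications with key constituents:
    students, agency training officers/management, and trainers."""
    contacts: dict = field(default_factory=dict)

    def register(self, name: str, role: str) -> None:
        self.contacts[name] = role


class AnalysisAndReporting:
    """Enables program staff and participating agencies to report on
    training participation and effectiveness."""

    @staticmethod
    def participation(surveys: list) -> int:
        # Total responses collected across all surveys.
        return sum(len(s["responses"]) for s in surveys)
```

The separation mirrors the chapter's design reasoning: survey execution, constituent relationships, and analysis/reporting are independent concerns, which is what made it possible to satisfy each one with a different off-the-shelf product.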