
Naturalistic Driving Study: Field Data Collection (2014)

Chapter: Chapter 3 - Summary of Key Tasks and Performance

Suggested Citation:"Chapter 3 - Summary of Key Tasks and Performance." National Academies of Sciences, Engineering, and Medicine. 2014. Naturalistic Driving Study: Field Data Collection. Washington, DC: The National Academies Press. doi: 10.17226/22367.


CHAPTER 3

Summary of Key Tasks and Performance

This chapter summarizes the major tasks and activities conducted by the six study centers during the NDS. The topics covered in the following sections include

1. IRB activities
2. Recruiting
3. Consent and assessment
4. Installations
5. Participant management and fleet maintenance (including crash investigations)
6. Deinstallations

In addition to describing the process and issues encountered as volunteer drivers were enrolled and monitored during this 3-year study, lessons learned during each step are summarized.

IRB Activities

Since the NDS involved human subjects, oversight by an Institutional Review Board (IRB) was required. The role of the IRB is to review all research protocols involving human subjects before the project start, and then periodically while the project is under way, to ensure that the rights of the participants are protected, that participants are not subject to unreasonable harm (either physical or emotional), and that data and information about the participants are kept confidential. Depending on the nature of the project and an individual institution's protocols, either a full review by the entire IRB committee or an expedited review by a single qualified member of the committee is required. For multiyear projects like the NDS, an annual continuing review is also conducted. If there are any changes to the protocol during the course of the study, a formal modification, or amendment, describing the change must be submitted and reviewed by the IRB staff and possibly by the full IRB committee as well.

VTTI prepared the initial IRB application for the NDS. This application was given a full review by the IRB at Virginia Tech (VT) and by the IRB for NAS. The six study centers were also required to have IRB oversight by their own (local) institution or, alternatively, be subject to oversight from an "IRB of Record." [The latter occurs when an IRB at one institution (in this case, Virginia Tech) assumes responsibility for human subject research being performed at another institution.] The NDS study protocol also acquired a Certificate of Confidentiality from the National Institutes of Health (NIH). This certificate, obtained by VTTI, covered all six study centers. A copy is provided in Appendix C.

All staff at each of the study centers who interacted with the participants were required to receive Human Subjects Training in a manner approved by the cognizant IRB. In preparing their local IRB applications, staff at the six study center sites incorporated information provided by VTTI into the application at their own institution. This included the research protocol, consent forms, compensation details, and safeguards for protection of study participants. One study center (Seattle) found it necessary to provide additional background about the study protocol and about naturalistic data collection in general (i.e., as conducted in other studies) to help its local IRB properly understand how participant confidentiality and privacy would be protected.

Table 3.1 summarizes the IRB type, the institution, and the specific IRB committee utilized at each NDS study center. Comments specific to each site are also provided. Note that four of the six sites (Bloomington, Seattle, State College, and Tampa) used their local IRB for oversight. Of these four, all but Tampa were required to undergo a full IRB review. The Tampa IRB did not require a full review because the study could be classified under Tampa's expedited review Category 6 (research involves the collection of data from voice, video, digital, or image recordings made for research purposes) and Category 7 [the research is performed on individual or group characteristics or behavior (including, but not limited to, research on perception, cognition, motivation, identity, language, communication, cultural beliefs or practices, and social behavior) or will employ a survey, interview, oral history, focus group, program evaluation, human factors evaluation, or quality assurance methodologies].

The remaining two sites (Buffalo and Durham) each had an initial full review by their local IRB before the start of the study, but they subsequently adopted Virginia Tech's IRB as the IRB of Record during the execution of the study. Durham planned for and requested the IRB of Record (with concurrence from the local IRB) because the study team believed it would be more efficient given the multisite nature of the NDS and the likelihood that there would be periodic amendments to the study protocol. Buffalo requested the IRB of Record because the local IRB at the University at Buffalo had concerns about the NDS intent to retain data in the study from participants who were consented and started in the study but then withdrew early. The NAS and VTTI IRBs believed these data could be kept in the study based on precedents established for other research studies at the NIH. VTTI and the University at Buffalo ultimately signed an agreement allowing the VTTI IRB to assume responsibility for oversight of the study in Buffalo.

Table 3.1 also shows the elapsed time (days) from application submission to IRB approval, to provide realistic timelines for the process as it transpired across all sites. Note, however, that delays in obtaining IRB approval (as occurred in Bloomington) were not due to inaction on the part of the IRB or the study center applicants, but rather to delays in obtaining the Certificate of Confidentiality for the entire SHRP 2 program.

IRB project approval was valid for 1 year. For each additional year of the study, a continuing review was conducted. Study centers for which a full IRB review was required for the initial application usually had full reviews during the annual continuing reviews.

Table 3.1. Summary of IRB Oversight Responsibility at Six Study Centers
(Columns: Study Center | IRB Type | Institution Providing IRB Oversight | IRB Committee Name | IRB Review Level | Elapsed Time from Application to Approval | Comment)

Bloomington, Indiana | Local IRB | Indiana University (IU) | IU IRB-B (Bloomington); IU IRB-1 | Full | 161 days(a) | IU has five IRBs on two campuses. IRB-1 on the Indianapolis campus conducted one continuing review; the rest were done by IRB-B. Information was shared between the two IRBs.
Buffalo, New York | IRB of Record | VT | VT IRB for Protection of Human Subjects | Full (VTTI) | NA | Initial full review by the University at Buffalo's Social and Behavioral Sciences IRB Committee before adoption of the VT IRB of Record.
Durham, North Carolina | IRB of Record | VT | VT IRB for Protection of Human Subjects | Full (VTTI) | NA | Initial full review by the Westat IRB before planned adoption of VTTI's IRB as primary for all subsequent reviews and amendments.
Seattle, Washington | Local IRB | Battelle | Battelle Internal IRB | Full | 115 days(a) | After IRB approval was obtained, most amendment submissions were handled without need for full-board review.
State College, Pennsylvania | Local IRB | Penn State | Office of Research Protection IRB | Full | 58 days | Penn State has two IRBs in its Office of Research Protections (ORP), which meet monthly. The ORP conducts a site visit after approval.
Tampa, Florida | Local IRB | USF | USF Social and Behavioral IRB | Expedited | 85 days | Application for IRB approval, as well as communication with the IRB, was via a secure website. The IRB chair determined that a full review was not needed; an expedited review was conducted.

Note: NA = not available.
(a) Bloomington submitted its application earlier than all other sites. Seattle and Bloomington experienced a delay of 2 months waiting for the Certificate of Confidentiality from NIH.

IRB Process

At a large institution, it is not unusual for an IRB staff member to be assigned as a point person to a new research project to aid the researcher in navigating the IRB process, to explain how the IRB interprets different provisions, and to ensure consistency from study to study. In particular, this point person is usually called upon to assist if there are adverse incidents with the study.

Detailed documentation is required for each project, and each institution has its own forms and requirements. Some study center IRBs were more form-driven than VTTI. For example, in Bloomington, a typical set of forms included Documentation of Approvals, Study Protocol, Summary of Safeguards, a Children in Research form (required because minor drivers were involved), and an Investigators list. In addition, all consent, assent, and owner permission forms, as well as recruitment materials and surveys, had to be included as part of the documentation.

As required by the IRB, all those who were going to have direct contact with participants (crash investigators, assessment personnel, recruiters, schedulers, and installers) were required to have training in human subject research, either an investigators training course or certification from the Collaborative Institutional Training Initiative (CITI) or a similar organization. One institution also required personnel to sign a statement annually that they had no conflicts of interest with the research work.

The items and dates listed in Table 3.2 illustrate the IRB approval timelines for VTTI, NAS, and the six study centers. Study center site certification occurred only after IRB approval was obtained, the Certificate of Confidentiality was in place, and a site visit was conducted. The initial installation date is provided for each site (for reference), as are the annual IRB Continuing Review dates.

Table 3.2. Timeline for IRB Approvals and Certificate of Confidentiality
(Columns: Item | VT IRB (Buffalo and Durham) | NAS IRB | Bloomington | Seattle | State College | Tampa)

IRB tracking number | 09-953 | TRBX-P-05-01-A: Field Protocol | 1005001386 | 0434 | 34363 | Pro00001238
Interim VT approval (to start IRB application process) | 11/6/2009 | na | na | na | na | na
Initial VT/NAS/local site IRB submissions | 2/23/2010 | 8/30/2010 | 6/21/2010 | 10/29/2010 | 7/23/2010 | 8/9/2010
Initial VT/NAS/local site IRB approvals | 5/6/2010 | 8/30/2010 | 11/29/2010 | 2/21/2011 | 9/18/2010 | 11/2/2010
VT IRB of Record executed at local site | NY 10/22/2010; NC 8/24/2010 | na | na | na | na | na
Certificate of Confidentiality submission (NIH) | 7/21/2010 | na | na | na | na | na
Certificate of Confidentiality approval (NIH) | 10/14/2010 | 10/14/2010 | 10/14/2010 | 10/14/2010 | 10/14/2010 | 10/14/2010
Initial site certification (after IRB approval and site visit) | NY 9/14/2010; NC 9/09/2010 | na | 2/28/2011 | 2/22/2011 | 3/29/2011 | 11/11/2010
Initial install date | NY 10/25/2010; NC 11/08/2010 | na | 1/27/2011 | 2/23/2011 | 2/15/2011 | 11/16/2010
IRB Continuing Review 1 | 5/6/2011 | 6/21/2011 | 10/13/2011 | 7/26/2011 | 8/25/2011 | 10/7/2011
IRB Continuing Review 2 | 4/20/2012 | 6/21/2012 | 8/21/2012 | 5/23/2013 | 8/17/2012 | 11/1/2012
IRB Continuing Review 3 | 4/9/2013 | 6/18/2013 | 7/18/2013 | Pending | 7/5/2013 | Pending
IRB Continuing Review 4 | na | na | 7/19/2013 | na | na | na
Continuing Review expires | 5/3/2014 | 6/18/2014 | 7/17/2014 | 5/31/2014 | 7/4/2014 | 11/1/2013

Source: Based on initial data provided by S. Lee, VTTI (Aug. 21, 2013). Note: na = not applicable.

IRB Amendments

After initial IRB approval was obtained, study centers maintained regular communication with the IRB office; a variety of amendments were needed to address issues that came up during the course of the 3-year study. Most of the 18 amendment submissions were handled via an expedited review process (i.e., without needing a full-board review). Many were addressed in a few days, but several required 5 to 6 weeks. For example, changes to human subject participation and compensation of human subjects typically took more time and usually required a full review.

The first four amendments related to secondary drivers. The fourth amendment also addressed modifications needed after the NIH review for the Certificate of Confidentiality. Amendment 5 clarified the participant withdrawal and dismissal protocol.

Amendment 6 expanded recruitment, addressed the use of leased vehicles, and added a semiannual drawing to encourage recruits to become participants. Nonowned vehicles were added in Amendment 7, which also increased participant compensation. A NAS IRB request to revise the owner permission letter was the subject of Amendment 8. Amendment 9 revised the exit survey and revised the letter and e-mail to update participant payment. A call center at Battelle was approved in Amendment 9a. A slogan devised by an advertising agency was added to recruiting materials in Amendment 10. Materials to extend participation and address variable enrollment times were the topics of Amendments 11, 12, and 13. Amendment 14 enabled vehicles with persistent issues related to the tire pressure monitoring system (TPMS) to be removed from the study. This was the most challenging modification to approve because the participants had signed up to participate in the study for specific durations, and altering the duration affected the financial compensation for those participants. While this situation was not ideal, it had to be balanced with the risks of having a false positive warning light illuminated on the participants' instrument cluster.

Promotional materials were added to the recruitment section of the protocol in Amendment 15. Amendment 16 allowed 4- to 7-month enrollees; it also authorized a $25 gift card for providing pictures of secondary drivers so that trips driven by these secondary (consented) drivers could be identified and the data used. The total number of participants was raised to 3,300 in Amendment 17; and Amendment 18 allowed contact with secondary drivers for a photo and to ask if they were interested in follow-on studies. This brief summary of IRB amendments illustrates the variety of issues that were addressed during the conduct of the study.

Table 3.3 lists the amendments along with the date that each was approved at VTTI and the study centers. As shown, amendments began in June 2010 and continued throughout the project up until August 2013.

Issues Encountered and Lessons Learned

The issues encountered and lessons learned described in this section represent an amalgam of inputs from all six study centers. One general observation at all six study centers was that for a study as large and as complex as the NDS, it is important not to underestimate the amount of time required to interact with the IRB.

Using the Virginia Tech IRB as the IRB of Record for two of the study centers (Buffalo and Durham) simplified the IRB process, resulting in significant time savings for those two centers, since all 18 IRB amendments were prepared and submitted by VTTI staff. Although this approach may not be permitted at all institutions conducting this type of research, it was found to provide administrative and schedule benefits.

Communication with the IRB before an application is prepared and submitted is important to help the researchers prepare for the level of IRB review (full or expedited) that is most appropriate. In addition, maintaining good communication throughout the course of the study can help a complex project go relatively smoothly. One site found it beneficial to contact the IRB with even small problems (i.e., adverse events that did not rise to the level that required immediate notification), since the IRB was often able to help resolve these issues.

IRBs typically require that changes to the study protocol be documented by an amendment. However, to avoid unnecessary administrative delays, amendment submittals should be closely reviewed to reduce the number of submittals. For this reason, the strategy adopted was to bundle amendments together.

With long-duration studies like the NDS, researchers need to anticipate that there might be staff turnover at the local IRB office. At one site, over the course of this 3-year study, five different point people were assigned to the project, with the most recent person assigned for about 20 months. Although each point person was clearly committed to protecting human subjects, each wanted to see things presented in slightly different ways; that necessitated adjustments on the part of the research staff. Regardless of personnel changes and style differences, establishing a good rapport with the IRB office staff who were managing the IRB documentation was extremely helpful in processing modifications and understanding the details that were most important to include in that documentation. Detailed document headers and footers (including document name and version number) are recommended.

Recruiting

Recruiting participants for the NDS turned out to be the most challenging part of the study. The study design was intended to provide a balanced sample of drivers by age and gender. However, obtaining the desired number of participants in the specified age groups in a timely way proved to be more difficult than expected. As a result, changes were made to the recruitment approach, to participant compensation, and to the list of acceptable vehicles. This section describes the recruiting activities and how they evolved over the course of the 3-year study. Lessons learned that can benefit future researchers are presented at the end of this section.

Requirements and Approach

Initially, participants were recruited for the NDS using a centralized recruiting approach. The Center for Survey Research (CSR) at Virginia Tech effectively assumed the role of national call center for all six study centers. CSR staff, using purchased lists, initiated cold calling of residents in each of the six study areas. Registration information for interested drivers identified by CSR was entered into Virginia Tech's mission control software (MCS), which was accessible online by each of the study centers.
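The study design called for a driver sample balanced by age group and gender, and recruiting had to track which demographic cells still needed participants. The following is a hypothetical sketch of such quota tracking; the cell names, targets, and function are invented for illustration and are not the study's actual design or MCS's actual behavior.

```python
# Hypothetical quota-cell tracker for an age-by-gender balanced sample.
# All names and targets here are invented for illustration.
from collections import Counter

targets = {("16-20", "F"): 5, ("16-20", "M"): 5,
           ("21-25", "F"): 5, ("21-25", "M"): 5}

# Counter returns 0 for cells with no enrollments yet.
enrolled = Counter({("16-20", "F"): 5, ("21-25", "M"): 3})

def cell_is_open(age_group: str, gender: str) -> bool:
    """True if this age/gender cell still needs participants."""
    cell = (age_group, gender)
    return enrolled[cell] < targets.get(cell, 0)

print(cell_is_open("16-20", "F"))  # False: this cell is already full
print(cell_is_open("21-25", "M"))  # True: two slots remain
```

A tracker like this makes the recruiting difficulty described above concrete: once the easy cells fill, every remaining call must land in one of the still-open (and typically hard-to-reach) cells.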

Table 3.3. Amendments to IRB Documentation and Dates Approved
(Columns: Item | VT IRB (Buffalo and Durham) | NAS IRB | Bloomington | Seattle | State College | Tampa)

Amendment 1. Secondary driver options; various other | 6/8/2010 | na | na | na | na | na
Amendment 2. Further secondary driver clarifications | 7/21/2010 | na | na | na | na | na
Amendment 3. Further secondary driver clarifications; various other | 9/23/2010 | 10/7/2010 | na | na | na | na
Amendment 4. Mods required per NIH Certificate of Confidentiality review; further clarification of secondary drivers | 10/15/2010 | 10/15/2010 | na | na | na | na
Amendment 5. Clarify withdraw/dismissal protocol | 10/22/2010 | 10/22/2010 | na | na | na | 11/10/2010
Amendment 6. Expand recruitment; use leased vehicles; add semiannual drawings | 2/16/2011 | 3/16/2011 | na | na | 6/14/2011 | 5/31/2011
Amendment 6a. Recruitment | na | na | 3/25/2011 | 5/13/2011 | na | na
Amendment 6b. Drawings | na | na | 6/2/2011 | na | na | na
Amendment 6c. Leased vehicle | na | na | 6/2/2011 | 5/13/2011 | na | na
Amendment 7. Add nonowned vehicles, increase payment to $500 per year | 5/10/2011 | 5/26/2011 | 6/2/2011 | 5/26/2011 | 6/14/2011 | 6/29/2011
Amendment 8. Revise owner permission letter per NAS IRB request; may also include consistency review items | 6/8/2011 | na | 7/22/2011 | 5/31/2011 | 8/25/2011 | na
Amendment 9. Revise exit survey, mention Battelle call center, revise e-mail/letter to update payment | 8/18/2011 | 9/2/2011 | 2/8/2012 | 10/3/2011 | 12/21/2011 | 9/26/2011
Amendment 9a. Battelle only, call center approval | na | na | na | 7/26/2011 | na | na
Amendment 10. Add Crowley Webb slogan to recruiting materials(a) | 10/3/2011 | 10/25/2011 | 2/8/2012 | 2/2/2012 | na | na
Amendment 11. Materials to extend participation up to 12 months | 2/20/2012 | 4/6/2012 | 5/28/2012 | 5/24/2012 | 7/12/2012 | 6/10/2012
Amendment 12. Materials for variable enrollment of 8–24 months | 3/27/2012 | 4/6/2012 | 5/28/2012 | 5/24/2012 | 7/12/2012 | 6/10/2012
Amendment 13. Add a phrase to the variable enrollment consent forms | 4/18/2012 | 4/6/2012 | 5/28/2012 | 5/24/2012 | 7/12/2012 | 6/10/2012
Amendment 14. Remove vehicles with persistent TPMS issues | 5/31/2012 | 8/1/2012 | 11/14/2012 | 8/29/2012 | 12/21/2012 | 8/15/2012
Amendment 15. Add promotional materials to recruitment section of protocol | 10/10/2012 | 10/12/2012 | 11/14/2012 | na | na | 10/16/2012
Amendment 16. Allow 4–7 month enrollees; remove blanket process references; allow $25 gift card for providing pictures of secondary driver(s) | 2/12/2013 | 3/11/2013 | 6/25/2013 | 3/6/2013 | 6/7/2013 | 4/1/2013
Amendment 17. Increase number of participants to 3,300 | 4/19/2013 | 5/1/2013 | 6/25/2013 | 5/29/2013 | na | Pending
Amendment 18. Contact secondary drivers for a photo and for follow-on studies | 8/15/2013 | Pending | Pending | na | na | Pending

Note: Early amendments are shown as not applicable (na) for sites that started later, as these amendments were addressed in the site's original IRB submission.
(a) Crowley Webb (advertising agency) provided the slogan, "Give a little time to safety research, we'll all get a lot in return."

To be accepted into the NDS study, the driver and vehicle had to meet the following criteria. The driver must

• Have a valid driver’s license;
• Own the vehicle (or have the owner’s permission);
• Drive a minimum of three times per week;
• Live within the study area—county or zip code (this requirement was relaxed for drivers close to the boundary); and
• Drive at least 3,000 miles a year (a criterion originally aimed at older drivers but later eliminated).

The vehicle must

• Be on the eligible vehicle list, meaning the parameter identification (PID) code must be available from the vehicle manufacturer (the PID code allowed the DAS to read data from the vehicle bus);
• Be covered by liability insurance;
• Be currently registered; and
• Not be driven where cameras are not allowed (i.e., military bases and U.S. border crossings).

Recruiting activities formally began at the CSR call center in September 2010. After a few months it became apparent that the call center was not supplying sufficient participants in the required age and gender categories to support the planned participant enrollment and installation rates. In November 2010, the study centers were directed to initiate local recruiting to supplement the national call center efforts. Since the option to conduct local recruiting was included in the original study plans and protocols, nominal IRB approval had already been obtained for this activity. However, the sites were typically required to submit specific materials for local IRB approval throughout the study. Each new recruiting method and its associated recruiting materials usually needed to be submitted and approved as they were developed. The Bloomington IRB, in particular, was very concerned with how compensation was presented. Use of materials from a local ad agency in Buffalo and establishment of a local call center in Seattle required IRB amendments (as noted in Table 3.3).

Figure 3.1 summarizes the sequence of steps going from recruit to participant and lists (on the left side) some of the recruiting methods used. Information on each recruit was entered into the study’s MCS software, which was accessible online by each of the study centers. Once entered into MCS, the recruit would be called by a scheduler from the local study center, who would further explain the study and answer questions about it. If the recruit agreed to participate, the scheduler would set up an appointment at the study center facility to review (and sign) the consent form, complete the driver survey and assessment tests, and have technicians install the equipment. Once installation was complete, the “recruit” officially became an NDS “participant.”

Figure 3.1. SHRP 2 recruit-to-participant sequence, including recruiting approaches for the Buffalo site.

As a representative illustration, statistics for the Buffalo site are provided at each stage of the process. These numbers show that, as a result of both centralized and local recruiting, a total of 3,444 drivers in the Buffalo area expressed interest, registered, and were entered into the SHRP 2 MCS. Of these 3,444 recruits, 2,211 were contacted by local study center schedulers. (The remaining recruits were kept on waiting lists to be available should someone in their age group drop out.) Of the 2,211 recruits contacted, 1,471 declined to participate for a variety of reasons, some of which are also listed in Figure 3.1 (on the right side). However, 740 recruits did agree to become participants.

Success rates for converting recruits to participants for each of the six study centers are summarized in Table 3.4. It is important to distinguish the total number of recruits from the number who were actually contacted. For example, some recruits were put on a waiting list because their age/gender group was already filled or because a check had to be made to confirm vehicle eligibility. The names of these recruits were retained, however, in case other participants terminated early. Table 3.4 lists for each site the number of recruits in MCS, the number contacted, and the percentage who became participants.

Table 3.4. Percentage of Recruits Contacted Who Became Participants

| Study Center | Total Recruits in MCS (a) | Total Recruits Contacted | Total Participants (b) | Percentage Contacted Who Became Participants |
|---|---|---|---|---|
| Bloomington | 967 | 480 | 254 | 52.9% |
| Buffalo | 3,444 | 2,211 | 740 | 33.5% |
| Durham | 2,885 | 2,885 | 529 | 18.3% |
| Seattle | 3,629 | 2,451 | 715 | 29.2% |
| State College | 1,166 | 717 | 275 | 38.4% |
| Tampa | 4,267 | 2,948 | 734 | 24.9% |
| Total | 16,358 | 11,692 | 3,247 | 27.8% |

(a) Not counting duplicate entries for drivers entered more than once in MCS because they switched vehicles.
(b) Totals include primary, additional primary, and AVT participants who were in the study at least 1 day (VTTI 2014).

Incentives and Methods

A number of recruiting incentives were employed at all study centers. These included

• Compensation of $300/year (increased to $500/year beginning in summer 2011). This level of compensation was provided to primary and additional primary drivers but not to secondary drivers.
• $50 in gas cards issued for keeping the first scheduled appointment for assessment and DAS installation. This policy was instituted to reduce the no-show rate and began in summer 2012.
• $50 battery reimbursement if the vehicle battery failed the voltage test before installation. (One site offered up to a $100 gas card for battery replacement.) If the participant agreed to replace the battery, the installation proceeded.
• Occasional $10 or $25 gas cards if the participant was specifically requested to visit the facility for a maintenance or an SSD swap appointment.

Additional recruiting incentives approved by the SHRP 2 program and employed at the discretion of each study center included

• A $1,000 drawing held every 6 months for every 150 drivers enrolled at the time;
• $25 gas cards issued throughout the study to participants who lived 20 to 30 miles (the distance threshold was site-specific) or farther from the study center facility; and
• Promotional items, such as water bottles and T-shirts, given away at exhibits or presentations.

Table 3.5 provides a list of recruiting methods and the number of recruits acquired at each site using each method (as summarized in MCS). Although some caution must be used in drawing conclusions from this table (see the following discussion), it does provide an indication of which approaches were most viable. Excluding the category “Other” (which contains multiple methods), the most successful recruiting method appears to be “TV/radio ad” for three of the sites (Buffalo, Durham, and Seattle) and “Craigslist” for the other three sites (Bloomington, State College, and Tampa). Additional recruiting approaches which showed some success included hearing about the study from another participant, receiving a phone call from a local call center (Seattle), exposure to newspaper or magazine ads, and viewing flyers or posters at a variety of venues. With most of these approaches, interested individuals were directed to a website (www.drivingstudy.org) or to a toll-free phone number for more information.
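As an informal arithmetic check (a sketch, not part of the original report), the percentages in Table 3.4 can be recomputed from the contacted and participant counts transcribed from the table:

```python
# Recompute the Table 3.4 conversion rates (participants / recruits contacted).
# The (contacted, participants) counts are transcribed directly from the table.
sites = {
    "Bloomington":   (480,  254),
    "Buffalo":       (2211, 740),
    "Durham":        (2885, 529),
    "Seattle":       (2451, 715),
    "State College": (717,  275),
    "Tampa":         (2948, 734),
}

for name, (contacted, participants) in sites.items():
    print(f"{name}: {100 * participants / contacted:.1f}%")

total_contacted = sum(c for c, _ in sites.values())
total_participants = sum(p for _, p in sites.values())
print(f"Total: {100 * total_participants / total_contacted:.1f}%")
```

Each recomputed rate agrees with the published column to one decimal place (e.g., Buffalo: 740/2,211 ≈ 33.5%; overall: 3,247/11,692 ≈ 27.8%).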

The category of “Unknown” in Table 3.5 provided about 8% of the total recruits. It is likely that recruits whose initial exposure to the study occurred through the national call center are listed in the “Unknown” (or “Blank”) category because the recruiting method (source) field was not included in MCS at the beginning of the project. This field was added belatedly because, initially, no recruiting sources other than the national call center were planned. This would account for the lower than expected number of recruits listed as coming into the study via the CSR call center.

The category of “Other” in Table 3.5 included a variety of approaches, some of which were common across all sites while others were unique to a particular site. A selection of these methods is listed in Table 3.6, along with the venue and the study center(s) which utilized the method.

Once it became apparent that the study centers had to take responsibility for recruiting their own participants, a number of approaches were pursued which targeted specific age categories, especially the hard-to-recruit younger (16–17) and older (76+) age groups, for which shortfalls were most significant. One study center (Bloomington) convened a focus group of younger participants, which led to that site’s decision to hire a young recruiter to present a more youthful face for the project. This young recruiter (formally and informally) attended university and community events to distribute project materials (flyers and promotional items) and sign up potential recruits. (Similar peer-to-peer recruiting was used in Buffalo and Tampa.) Another recruiter hired at the Bloomington site was a lifelong local community resident. This tactic helped root the project in the community, thus providing more local legitimacy. Having the assistance of these specialized recruiters also enabled the study center to stay in frequent contact with potential candidates, which led to converting 70% of those locally recruited (in the 16–17 and 76+ age groups) to actual participants.

The Buffalo study center hired a local marketing firm with experience recruiting participants for pharmacological studies. The firm was familiar with following study protocols, including recruiting under the control of an IRB. After consultation with the SHRP 2 NAS management (which included discussion of marketing concepts such as study branding and media outlets for the message), the Buffalo site was authorized to proceed. A logo and message content for posters, flyers, and billboards were developed, as well as radio ads. All received IRB approval (see Amendments 10 and 15 in Table 3.3). Figure 3.2 shows a sample logo and message (left) and its use in a mall recruiting kiosk (right).

Table 3.5. Number of Recruits by Method as Reported in MCS

| Recruiting Method | Bloomington | Buffalo | Durham | Seattle | State College | Tampa | Total |
|---|---|---|---|---|---|---|---|
| Another participant | 122 | 321 | 294 | 103 | 47 | 159 | 1,046 |
| Craigslist | *217* | 493 | 493 | 404 | *231* | *357* | 2,195 |
| CSR national call center | 5 | 31 | 24 | 8 | 14 | 25 | 107 |
| E-mail | 37 | 34 | 47 | 57 | 140 | 123 | 438 |
| Facebook, Twitter, LinkedIn ad | 7 | 81 | 25 | 63 | 12 | 121 | 309 |
| Family or friend | 33 | 107 | 122 | 199 | 17 | 111 | 589 |
| Flyer/poster | 71 | 191 | 187 | 199 | 96 | 301 | 1,045 |
| Movie theater ad | 1 | 3 | 3 | 2 | 14 | 15 | 38 |
| Newspaper or magazine ad | 65 | 123 | 260 | 45 | 74 | 242 | 809 |
| WA call center (Seattle) | 0 | 9 | 5 | 420 | 3 | 5 | 442 |
| Press/media exposure | 7 | 50 | 124 | 56 | 15 | 55 | 307 |
| TV/radio ad | 31 | *1,266* | *529* | *580* | 73 | 221 | 2,700 |
| Vehicle-based ad | 3 | 15 | 24 | 5 | 0 | 12 | 59 |
| Website-based ad | 40 | 50 | 28 | 238 | 22 | 76 | 454 |
| Other (a) | 267 | 463 | 454 | 810 | 339 | 2,095 | 4,427 |
| Unknown | 59 | 193 | 257 | 430 | 69 | 340 | 1,348 |
| Blank | 2 | 14 | 9 | 10 | 0 | 9 | 44 |
| Total | 967 | 3,444 | 2,885 | 3,629 | 1,166 | 4,267 | 16,358 |

Notes: CSR = Center for Survey Research. Recruits contacting the driving study via an 800 number and website are combined for each source. Values in italics indicate the most successful method for that study center (excluding “Other,” which contains multiple methods).
(a) “Other” contains multiple methods (see Table 3.6 for further information).

Table 3.6. Examples of Recruiting Methods Included in “Other” Category

| “Other” Recruiting Methods | Venue | Study Center |
|---|---|---|
| Exhibits/display tables (with DAS or possibly with DAS-equipped vehicle) | Colleges (student union, library, dining halls, campus expositions, new student orientations) | Tampa, Seattle, Bloomington |
| | Auto shows; auto plants | Buffalo, Seattle |
| | Open air markets | Tampa |
| | Shopping malls | Buffalo |
| | Fairs, festivals | Buffalo, Durham, Bloomington |
| | College parking operations | Bloomington |
| | Movie theater exhibit (a) | Tampa |
| Visits by NDS personnel; presentations | High school classes; driver ed; high school traffic safety fair | Seattle, Tampa, Buffalo |
| | Senior centers, assisted living facilities; senior wellness fair | Buffalo, Durham, Tampa, Seattle, Bloomington |
| | College sporting events | Durham |
| | American Automobile Association (AAA) driver safety class; American Association of Retired Persons (AARP) mature driver class | Durham, Seattle, Buffalo |
| | Farmers market | Durham |
| | Neighborhood blogs | Seattle, Durham |
| Postcards with NDS information | Distributed at sporting events; put on cars at shopping centers; direct mailings to participants regarding other drivers in family; cold mailings to citizens in the correct county and age group | Durham, Seattle, Buffalo |
| | Letters from vehicle manufacturer or National Academies | Seattle, Buffalo, Tampa |
| Advertisements at major college sporting events | College basketball games (Duke and University of North Carolina) | Durham |

(a) During blockbuster movies. Not to be confused with movie theater screen ads.

Figure 3.2. NDS logo and message used in posters, billboards, and flyers.

The Seattle site utilized a local call center to conduct cold calling for that area. This center (listed as WA call center in Table 3.5) was effective for recruiting across all age groups but was most effective for recruiting participants ages 76+. Phone calls from a locally based center with name recognition (rather than from the national call center at VTTI) appear to have been better received.
It is important to note that much of the early recruiting was done under conditions which were subsequently changed. For example, 9 months into the recruiting effort, compensation levels were increased to provide more incentive for recruits to become participants. The acceptable vehicle types were also expanded from the original “prime” category to include “legacy” vehicles (June 2011) and eventually “basic” vehicles (Nov. 2, 2011), largely because many younger drivers and some older drivers did not own prime vehicles. In effect, the low recruiting rates early in the program drove changes in the study ground rules. It is likely that some recruiting methods or venues that were minimally effective early in the program (and were discontinued) might have performed better later in the program.

However, it is clear that the flexibility and willingness of the study center staff to adjust as the program evolved were instrumental in ensuring the eventual success of the data collection effort.

Effectiveness of Methods

The following paragraphs step through the major recruiting approaches and provide additional detail regarding their implementation and effectiveness.

Cold Calling

The objective of cold calling was to generate a representative sample. However, the number of recruits obtained from CSR cold calling was insufficient to support the program schedule. For example, in Bloomington, the cold calling approach generated only 20% of the total participants. It was particularly unsuccessful in reaching younger age groups, most likely because younger groups rely more on cell phones than landlines; the call center generated few participants under age 25 for Bloomington.

A number of factors led to this lack of success. The CSR call center at VTTI had little name recognition at the six study centers. Often, recruits contacted by the local scheduler (after recruit information was posted in MCS) did not remember speaking with the call center. It is believed that the centralized call center approach was too anonymous and impersonal. In addition, the process itself was too lengthy: the call center collected registration information from the recruit and then subsequently provided that information to the study centers, often meaning that the first contact by the local scheduler came too late to be productive. This was in contrast to the experience at the WA call center in Seattle, which placed calls to residents in the Seattle recruitment area. That call center was local, had name recognition, and proved to be effective across all age groups, especially for the 76+ age group.

Online Presence

A variety of avenues were pursued by all sites to publicize the study online. These included banner ads on university and (some) local DOT websites, as well as ads on Craigslist, Facebook, Twitter, and LinkedIn. NAS created a SHRP 2 NDS website which provided information on the study (www.shrp2nds.us) at all six study centers. (This link was later changed to www.drivingstudy.org to make it easier to remember.) The marketing firm used by the Buffalo study center subsequently created a website that was customized for New York, more streamlined (since it focused only on Buffalo), and easier to navigate.

At the Bloomington site, free advertisements were posted on university and local community colleges’ online bulletin boards at 2-month intervals, with the message tailored to 18–25-year-olds. These university ads were also very successful with individuals in the 36–65 age groups who were well acquainted with research and the university and were willing to contribute to a project deemed scientifically worthy. Online ads were also posted on high school websites to reach parents as well as students in Bloomington. However, access to high school students via online methods was restricted in other areas (e.g., Buffalo), where e-mails could only go to administration officials at the schools.

Attempts at broadening the scope of the online presence via official government websites at some sites were administratively or organizationally impeded. For example, transportation-related websites like those of the Indiana DOT or the Bureau of Motor Vehicles cater to very specific temporary communications, such as construction closings and branch operations, and provided no place for announcements. However, this was not the case in Seattle, where the Washington State DOT was very cooperative, allowing the Seattle site to place a recruiting announcement on its website. That advertising effort was very effective at attracting recruits, primarily for ages 26–50.

Advertisements on Craigslist reached a different community of individuals looking for small economic opportunities within their vicinity. Such advertising provided a moderate return, although perhaps selecting for participants with financial motivations.

At Durham, a Facebook ad targeting 16–17-year-olds ran for 10 days, generating 249 clicks. Duke University released a Facebook post along with an e-mail blast to students. Radio ads also ran on Sports Network Radio during Duke football games. A banner ad guaranteed for 250,000 impressions ran on goduke.com.

Exhibits and Presentations

A recruiting approach used by some of the study centers involved setting up NDS exhibits (or tables) at community events or at university venues. At these events, potential recruits could learn about the driving study, see the DAS instrumentation, speak with knowledgeable study center personnel near their own age, and complete the registration forms immediately available at the exhibit. In Bloomington, notable opportunities for recruiting included university orientations for entering students, which targeted young participants (ages 17 and 18) who were new to the city and potentially seeking opportunities to engage in community projects.

For the Tampa study center, exhibits were the most efficient type of recruitment. Flyers, promotional items, and a DAS unit were part of every exhibit. For exhibits on the university campus, undergraduate recruiters worked the booths, providing peer-to-peer interactions. Particularly successful was an exhibit at the USF Bulls (open air) Market, a well-established event held every Wednesday from 9 a.m. to 3 p.m. with approximately 10,000 visitors a day. Promotional items were successfully used to attract attention (although they may have increased the number of false recruits who were not really interested). Promotional items included 512 MB flash drives, pens, water bottles, slap bracelets, sunglasses, backpacks, bears, koozies (can coolers), tumblers, and T-shirts. A total of 34 Bulls Market NDS exhibits were held over a 17-month period, with an average of 14 people registered per exhibit, for a total of 478 recruits. Exhibits at the library and at the student union on Tuesdays and Thursdays reached students not on campus on Wednesdays. At all on-campus exhibits, iPads were used to increase “likes” on the Facebook page. Exhibits were also held at dining halls.

Sixteen lobby exhibits at movie theater blockbuster premieres were held in Tampa, netting 737 recruits (one-third in age group 36–50), of which 23 became participants. Shopping mall exhibits showed some success; however, they were much more costly.

Unlike Tampa, exhibit tables at college campuses were not an effective recruitment method in Seattle or Buffalo. The success seen in Tampa with college-age recruits was likely because the Tampa study center was operated by the university, whereas the Seattle and Buffalo study center staff were effectively outsiders on the college campus. Tampa also had the benefit of being able to leverage the well-established Bulls Market, which was known to the student population and had high student traffic.

Another venue for a table-type exhibit was used in Durham, where recruiting tables were set up outside the entrance to University of North Carolina (UNC) home football games to distribute postcards. Similar display tables were set up at UNC women’s basketball games. Some limited success was achieved at exhibits at local art and music festivals but less success at county fairs; this was not because of a lack of interest but because of the inability early in the program to accommodate the older vehicles owned by many of the fairgoers. As noted earlier, the prime vehicle requirement was subsequently relaxed to include subprime, legacy, and basic classes of vehicles.

Presentations directed at the 66-year-old and above age groups were given at assisted and independent living communities for senior citizens and at senior centers. Each of these reached a different demographic of senior citizen, distinguished primarily by education and income. In Bloomington, early recruiting efforts at the senior center were dramatically less successful than those at the assisted and independent living communities because many senior center clients did not own the newer prime vehicles. When subprime, legacy, and basic vehicles later became acceptable, people at the senior center were not interested in hearing about the relaxed restrictions. The lesson learned was that there is only one chance to make a first impression.

Presentations given at high schools in Tampa were successful in recruiting participants 16–17 years old. Undergraduate recruiters e-mailed teachers individually throughout the study center area and, with the permission of various teachers, schedules were set for the recruiters to visit particular classrooms at the high schools. Presentations at senior homes were more difficult to schedule, as many senior homes were either not allowed to have presentations or not interested. However, exhibits at Senior Funfest events enabled seniors to come together, meet different vendors, play games, and eat food. Six participants were installed through these senior events in Tampa, four of whom were 76+.

Word of Mouth

Word of mouth can be a powerful tool. As already noted, one of Bloomington’s more effective recruiting efforts came through hiring a lifelong local resident as a recruiter. The decision for a recruit to participate is, in many respects, a nonbinding social contract, and it is more difficult to renege on an agreement with a friend than it is with a stranger. The local resident recruiter gave a consistent persona to the project, which led to increased follow-through by recruits. Even when the study center staff knew that certain recruits would not be eligible because their age cell was filled, they still called those recruits back to thank them for their interest. Often this would stimulate a conversation about other family members (particularly 16–17-year-olds) who might be interested.

Driving study T-shirts, distributed free as promotional items to anyone who stopped at exhibits in Tampa, served as word-of-mouth advertising, especially around the university campus. At all study sites, a number of recruits signed up because their parents or spouse were already in the study and they had seen the equipment in operation, which helped them overcome any hesitation about becoming a recruit. Handing out flyers and telling people to tell their friends and family was vital for the project in Tampa, as 116 participants were installed through this type of recruitment. In the 16–17 age cells, almost half of the total participants installed had signed up through word-of-mouth recruitment. Buffalo and Durham also had a number of participants recommended to the study by other participants.

For word of mouth to be effective, it is important that participants have a good experience. However, word of mouth could also become a disadvantage. For example, when some participants were lost because of battery discharge problems, a number of other participants dropped out even though their vehicles were not affected.

Newspaper and Magazine Ads

Traditional newspaper ads were effective in recruiting middle-aged and older participants at some sites (Buffalo, State College) and also younger people if the ad was specifically targeted at that group (as in Durham). Besides running ads in multiple newspapers and weekly publications, sites also ran ads in college and even high school newspapers (e.g., Durham).

Flyers/Posters/Banners

Flyers were a key recruiting tool for a variety of venues. Flyers were mailed, e-mailed, or posted on bulletin boards at colleges and universities, high schools, driving schools, senior centers, and collision shops. Flyers at freshman dorms were effective in capturing recruits between ages 17 and 19. A large banner (a variation on a poster) that hung on the State College campus was very helpful in recruiting college-age (17–24) subjects. Other recipients of flyers or posters included associations (Visiting Nurses Association, Auto Dealers Association), Meals on Wheels, volunteer fire stations, local supermarket chains, and unemployment offices. Posters were generally not permitted at the DMV. Appendix D provides examples of the recruiting materials that were used at the various sites.

Radio/TV Ads

In Buffalo, a series of radio ads targeting specific age groups was found to be very effective. These ads, developed by a local ad agency, were broadcast over radio stations carefully selected for each age group. One of these scripts is provided as an example:

Radio Spot
(SOUND): Street ambiance
*(WOMAN 1, 30s) “Every day on my way to work. . . .”
*(MAN 1, 20s) “When I go to the gym. . . .”
*(WOMAN 2, 20s) “When I’m driving around with my friends. . . .”
(MUSIC BEGINS)
(ANNOUNCER) “As a participant in a major driving study, you could contribute to important scientific research every time you use your car. This national program has the potential to make driving, roads, and even cars themselves safer for everyone. By taking part, you can help—just by doing the driving you’d be doing anyway.”
*(WOMAN 1, 30s) “Like when we go to the movies, when I take the kids shopping, anywhere . . .”
(ANNOUNCER) “Plus, participants will be compensated for taking part. So if you’re over 16 and have a valid driver’s license, visit [website] to find out if you might qualify to make a difference.”
(MAN 2, 30s IN CAR) “Right now . . .”
(ANNOUNCER) “Give a little to this important project—we’ll all get a lot in return. Detailed study information and enrollment applications are available at [website].”

Variations on this script were also used in Buffalo to successfully recruit the hard-to-reach age groups (16–17 and 76+). For example, for the 16–17-year-old age group, the four lines of script above (marked with an asterisk) would be replaced with “On my way to school . . . ,” “Every Friday when we go to the football game . . . ,” “When I am driving to my friend’s house . . . ,” and “When we go shopping or when I’m running errands for my parents, anywhere. . . .”

The success with radio ads seen in Buffalo was not replicated in the more rural State College study area, where TV and radio ads were expensive and not very effective. As another predominantly rural site, the Bloomington study area was situated in the middle of four medium to large media markets: Indianapolis, Terre Haute, Louisville, and Evansville. Other than the local National Public Radio (NPR) station, there were no high-power radio stations centrally situated in the study area. The Indianapolis market reached approximately 2.2 million people, but only 250,000 were in the primary study area. Thus, these methods of advertisement were not considered to have an adequate return on investment. An advertising plan to use a local low-power radio station to target younger drivers was considered; that station’s signal reaches approximately half of the study area. However, the conclusion was that this would not be as productive as another round of newspaper ads and would cost about twice as much.

In Durham, radio ads ran twice a day (over two work weeks) during drive time on eight stations that are part of the Triangle Radio Network, for a total of 160 commercials. The Seattle site ran several radio campaigns on a variety of radio stations, ranging from 1 to 4 weeks; most campaigns included an online streaming component. Radio ads generally targeted the 16–17, 18–20, and 21–25 age groups, with 1 week of advertising on an FM talk radio station targeting drivers age 65 and older.

Television News Stories/Press

Having a TV news crew do a story on the project was beneficial since it gave credibility to the project. In Tampa, a local news channel visited the center and did a short story about the study. The study center saw relatively high pickup after the story aired. A similarly positive response was noted in Buffalo after a TV news crew interviewed study center staff and filmed the equipment and installation facility. Durham likewise saw a positive response in the fall of 2011, when two local stations ran stories on the SHRP 2 project in the local news.

35 hard-to-reach groups were the radio ads targeted at those age groups. Across all ages, the best recruiting methods were the radio ads, Craigslist (used late 2010 to early 2011), and recruits acquired through another participant. The least effective recruiting method in Buffalo was the movie theater screen ad. Durham In Durham, posting flyers at senior centers and retirement facilities was helpful in recruiting older participants, while TV/ radio and newspaper ads targeted at younger people were most effective for those groups. Durham also advertised on one of its maintenance vehicles with a van wrap (a wrap is a graphic applied to the vehicle as a form of mobile advertising). The easiest group to recruit was middle aged men and women via ads in the newspaper, radio, and Craigslist. The hardest group to recruit was high schoolers. Younger recruits who became participants were most interested in the compensation, while the adults were more interested in supporting research. Seattle In Seattle, recruits ages 26–35, 36–50, and 51–65 were readily recruited. The remaining age groups were more challeng- ing to recruit, especially those under 26 and over 76. Radio ads were particularly effective for recruiting participants between the ages of 18 and 35, and Craigslist ads were partic- ularly effective for recruiting participants between the ages of 36 and 75. The WA call center was effective for recruiting across all age groups, but was most effective for recruiting participants ages 76+. (It is believed that this call center was more effective than the CSR at VTTI because it was local.) Another effective recruiting mechanism was the Washington State DOT weather traffic web page. Movie theater ads were not an effective recruitment method across all age groups, and press/media websites were only somewhat effective for ages 26–65. 
State College

At the State College study center in central Pennsylvania, the youngest (16–17) and oldest age groups were the most difficult to attract; college-age recruits (18–24) were the easiest. The best methods for recruiting included newspapers (for middle-aged and older recruits), flyers (including the large banner ad on campus), Craigslist, and word of mouth. The worst methods were movie theater and TV ads, which were not very successful and were expensive.

Tampa

For the Tampa study center, the age groups that were readily recruited (either because of a desire to support research or an

Movie Theater Ads

The Durham site ran animated recruitment advertisements on movie theater screens before the showing of the featured film. Ads were also run on movie theater screens at State College. As can be seen in Table 3.5, movie theater screen ads were not very effective (and could be expensive). This is in contrast to the success experienced by the Tampa site using exhibits in the movie theater lobby, staffed by NDS personnel.

IRB Amendments Related to Changes in Recruiting

Some of the changes to recruitment practices that were not included in the original IRB document required amendments to both the VTTI IRB and to the local study center IRB (see Table 3.3). For example, items that had to be approved by one or more IRBs included radio ad scripts, movie theater advertisements, TV ad scripts, print advertisements, a banner advertisement, and a listserv advertisement. Some IRBs also wanted to downplay the compensation aspects in any advertisement (in particular the dollar amounts) in favor of emphasizing that participating in the study would support research aimed at improving safety for the driving public.
In these instances, after all other information on the study was provided, the ad might only say “participants will be compensated.”

Site-Specific Recruiting Summary

The previous section described various recruiting methods and provided examples from individual study centers. This section briefly summarizes the recruiting activities and challenges by study center location.

Bloomington

Presentations or exhibits and ads on the university’s online bulletin board were the most effective tools for the 18–25 (college) age demographic and also for individuals in the 36–65 age groups. Presentations at assisted living and independent living centers were most successful for recruiting individuals in the 66-and-above age brackets. The national call center generated no participants under age 25, in part because most young people do not have landlines. Newspaper ads also generated a lot of interest. Rapid contact with all potential participants, even those whom staff knew would be ineligible, helped maintain a word-of-mouth network with other highly sought-after participants, particularly 16–17-year-olds and those 76+.

Buffalo

In Buffalo, the hardest ages to recruit were the 16–17-year-olds and drivers over age 76. The easiest were ages 51–65 and 66–75. The best recruiting method for the

• Other adult drivers of an installed vehicle are considered “secondary” drivers (with dated consent and a photo obtained) but are not counted in participant age group totals.
• All three types of drivers (primary, additional primary, and secondary) are assigned a unique driver ID number and are included in MCS.

Lessons Learned

The recruitment of participants proved to be the most difficult task for the study centers, in large part because they had not expected to do recruiting and thus had no plans in place to perform this task. Each study center therefore experimented with and applied slightly different strategies to recruit participants. All the various strategies were shared with the other centers. Some of the lessons learned during this process are as follows:

• Recruitment of participants should start early, as early as 3 months before the start of data collection. Most traditional recruitment activities (radio, newspaper, flyers, etc.) take time to produce recruits; that is, there is a delay between the ad and when people start calling and are able to schedule an appointment.
• Recruiting at the local study center is more efficient (and effective) than recruiting through a national call center. Knowledge of the local population can help in selecting the media with the best potential for attracting recruits.
• Recruits can lose interest and grow stale. Once a recruit registers, he or she should receive a follow-up call from the study center as soon as possible (preferably within a week). Based on the NDS experience, the half-life of a recruit is 2–3 weeks.
• Using recruiters in the same age group as the targeted individuals was useful.
• Once potential participants determined they were ineligible or that the compensation was inadequate, they rarely reconsidered participation. (You only get one chance to make a first impression.)
• It is important to keep good records of interactions with recruits.
Call logs linking recruit identification numbers to dates of attempted calls and notes on response helped maintain a history of contact with recruits and enabled tailored follow-up with hundreds of recruits while ensuring confidentiality. These methods also helped cultivate relationships with interested candidates from the beginning of their involvement in the study through their deinstallation (and satisfied participants garnered additional recruits by word of mouth).
• The time lags built into the initial process using the national call center and MCS posting were too long to be effective. Several 17-year-old participants aged out of the 16–17 bracket

interest in the money) were between 26 and 75. Both traditional recruitment (flyers, ads) and exhibits were effective. The hard-to-reach age groups included participants aged 16–25 and seniors over 76. These youngest and oldest age groups were more concerned about privacy issues (relative to the middle age groups). The use of exhibits, word of mouth, and Facebook really helped recruit the youngest and oldest age groups. However, it is important to note that traditional recruitment and exhibits worked hand in hand. The majority of recruits aged 16–25 who became participants stated they heard about the driving study through flyers or word of mouth. These flyers were most likely distributed to them (or their friends) during an exhibit, so, effectively, exhibits enabled more word-of-mouth recruitment. The plots in Figure 3.3 illustrate the timeline of advertising activities in Tampa relative to the number of participants installed from December 2010 to July 2013.

Reasons “Recruits” Did Not Become “Participants”

Besides documenting methods that were successful in obtaining recruits, it is also useful to look at reasons that recruits did not become participants. Table 3.7 lists some of the more common reasons cited by study center schedulers or by recruits (during conversations with the schedulers).
The data in this table are presented in order from the most highly cited reason to the least cited reason (based on rankings observed in Buffalo and Tampa). The top five reasons shown in the table make up over 80% of the total. It was observed that recruits who had information about the driving study from an ad, the website, or other source, and had become educated about the study before talking with a program representative and becoming a recruit, were less likely to decline to participate than someone contacted via cold call.

Final Participant Distributions by Age and Gender

The number of required participants in each age group and gender cell at each study center changed several times during the course of the 3-year study per guidance from VTTI and NAS. Table 3.8 summarizes the final distribution of participants by age group and gender for the entire program. (Appendix E provides the same table for individual study centers.) Age is defined as age at time of recruitment. Note that AVT participants are captured in one category at the bottom of the table. In Table 3.8, the following qualifiers hold:

• A participant is a “primary” driver or “additional primary” driver of a vehicle in which equipment was installed. All participants who were in the study at least 1 day are included in the table.

Figure 3.3. Timeline of recruitment activities in Tampa relative to participants installed from Dec. 2010 to July 2013. (The figure plots, by month from December 2010 through July 2013, the periods during which each recruiting channel was active: cold calls, newspaper, TV ads, Facebook ads, exhibits, radio ads, USF newspaper, and HS newspaper.)

Table 3.7. Reasons Recruits Did Not Become Participants

• Recruit could not be contacted: Schedulers made six attempts for both Buffalo and Tampa recruits (varying the times of day and days of the week that attempts were made). If the recruit had voicemail or an answering machine, a message was left with study center hours. After six attempts, an e-mail or letter was sent (with the center phone number) asking if the recruit was in fact still interested.
• Changed mind about participating: Recruit agreed to installation but changed mind before installation.
• Resident of noneligible county: Moderate issue. If recruits lived near a county border and did the majority of their driving in the eligible county, they were accepted. In Buffalo, the influx of residents from outside the county became a larger issue once radio ads started.
• Issues with outward-looking cameras in restricted areas: Cameras were an issue at border crossings and on active military bases. Use of cameras on military bases without permission was prohibited.
• Border-crossing issues: This was an issue mostly in Buffalo and Seattle given their proximity to the Canadian border.
• Noneligible vehicle: Early in the study this was a concern for younger participants with older cars.
• Objected to cameras and monitoring: The term “big brother” was used often by the older participants. Some thought cameras too invasive (i.e., those with children or who often carry passengers). Younger people (18–25) were most suspicious of cameras, of data being reported, and of the alcohol sensor.
• Moving away: Recruit would not be in the study area long enough to complete the study.
• Too much time for process: Some recruits could not take 4 hours off work or out of personal time to complete the installation.
• Did not want holes drilled in bumpers for front bracket: This was an issue in states that did not have front license plate holders that could be used to secure the radar unit. These states were Florida, Indiana, North Carolina, and Pennsylvania. In Bloomington there was some success in combating this problem by purchasing color-matched bumper plugs (www.bumperplugs.com) to fill the holes in a professional-looking way.
• Health issues/concerns: Recruits felt they could not drive enough due to existing illness.
• Multiple no-shows (for appointment): This was a site-specific decision to not pursue the recruit.
• Concerns about car warranty and insurance; equipment might cause car problems: This was a big concern. Sometimes sites were able to overcome it. However, all warranties were different. Staff advised recruits to check with the dealer (who usually said it would void the warranty).
• Could not obtain vehicle owner’s permission: Typically this was an issue with younger drivers whose parents would not provide permission.
• Insufficient compensation: When compensation was increased, this became less of an issue (1 year increased from $300 to $500 and 2 years from $600 to $1,000).
• Concerns about privacy of the data: Older people were wary about the study reporting data to the DMV or insurance company and having their license taken away.
• Check-engine light was on: This excluded vehicles with apparent problems.
• Equipment interfered with the vehicle’s TPMS: This was a fairly large issue (especially at State College and Seattle) and resulted in the exclusion of many recruits with late-model (2007 and later) vehicles; a fair number of participants (several dozen) were dropped from the study because of TPMS concerns.
• High-wattage sound system: This was primarily an issue with younger participants who had after-market sound systems installed. It was also difficult to screen these participants before their appointment, since they didn’t always know what wattage their sound system was.
• Project wants too much information: Recruits felt they needed to give up too much personal information to participate.
• Too far to drive for appointment: Not an issue in general, though it did happen a few times in Tampa because that study area was larger than Buffalo (for example).
• Not a good time (e.g., illness, other life issues): No explanation needed.
• Low mileage: Recruit did not drive enough. Common comment with older recruits.
• Felt too old to participate: Staff assured them the study accepted all ages above 16 as long as they had a valid license. If they still declined, it was for health reasons or because they didn’t think they’d be driving for much longer.
• Spouse did not want equipment in car: No explanation needed.
• Equipment too bulky; conspicuous: No explanation needed.
• Unhygienic vehicles: These were rejected by the study center.
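The contact protocol described in the first row of Table 3.7 (up to six call attempts at varied times, a follow-up within about a week, then a letter or e-mail) lends itself to simple record keeping. The sketch below is purely illustrative; the class and field names are hypothetical and not taken from the study's MCS tooling:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

MAX_ATTEMPTS = 6               # per Table 3.7: six call attempts before a letter/e-mail
FOLLOW_UP = timedelta(days=7)  # lessons learned: follow up within about a week

@dataclass
class RecruitLog:
    recruit_id: str                                # de-identified ID, never a name
    attempts: list = field(default_factory=list)   # datetimes of attempted calls

    def record_attempt(self, when: datetime) -> None:
        self.attempts.append(when)

    def next_action(self, now: datetime) -> str:
        """Suggest the scheduler's next action for this recruit."""
        if not self.attempts:
            return "call"                          # no contact attempted yet
        if len(self.attempts) >= MAX_ATTEMPTS:
            return "send letter or e-mail"         # call attempts exhausted
        if now - self.attempts[-1] >= FOLLOW_UP:
            return "call (follow-up overdue)"
        return "wait"

log = RecruitLog("R-0001")
log.record_attempt(datetime(2011, 3, 1, 10, 0))
print(log.next_action(datetime(2011, 3, 10, 9, 0)))  # call (follow-up overdue)
```

A log of this kind also supports the confidentiality goal noted in the lessons learned: only the recruit ID and contact dates need to be stored.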

• Focus advertising toward larger newspapers, which have more readers, as opposed to multiple smaller avenues (such as playbills at arts centers).
• The use of the local evening news proved very beneficial to the recruitment process. The evening news reaches thousands of people and also legitimizes the project. (Some recruits initially thought the project was a scam.) This type of publicity early on might have enabled the call center to have a better success rate.
• For the highest recruiting returns, it is important to understand who you are targeting with each recruiting method so that areas of exposure are selected based on the interests and underlying motivations of that population. Craigslist ads and posted flyers are more likely to recruit individuals already seeking opportunities to participate in research or earn money. Groups not actively seeking these opportunities must be sold the project on other merits.

by the time they were contacted. Recruits may also forget their initial interest or have looked for (and found) other opportunities that fit their interests.

• If a call center is used, adjust calling hours if a high number of no-answer and/or answering machine responses occur. In current times, call screening creates problems when trying to contact anyone over the phone.
• Once the hard-to-recruit groups are identified, focus efforts primarily on those groups, as the other groups will accumulate over the course of the project.
• Consider keeping information on all potential recruits in case criteria change and previously ineligible individuals become eligible.
• The distribution of promotional items attracted attention, bringing people to the exhibit where they could be told about the study. Promotional items (especially T-shirts worn around campus and the community) further promoted the driving study by word of mouth.

Table 3.8.
Participants by Age Group and Gender for All Test Sites

Age Group      Gender   Primary    Additional       Total by   Total by    Secondary
(years)                 Driver(a)  Primary Driver   Gender     Age Group   Drivers(b)
16–17          Male     109        10               119        262         0
               Female   140        3                143                    0
18–20          Male     233        4                237        526         4
               Female   284        5                289                    4
21–25          Male     241        4                245        593         8
               Female   345        3                348                    8
26–35          Male     156        2                158        308         12
               Female   148        2                150                    17
36–50          Male     153        3                156        321         16
               Female   161        4                165                    15
51–65          Male     154        3                157        339         19
               Female   181        1                182                    23
66–75          Male     166        0                166        314         19
               Female   148        0                148                    10
76+            Male     248        1                249        448         4
               Female   197        2                199                    6
AVT            Both     135        0                135        135         0
Not specified           1          0                1          1           44
Total                   3,200      47               3,247      3,247       209

(a) Primary participants and secondary drivers with at least 1 day in the study are included.
(b) Secondary drivers are not counted toward “participant” age group totals. Only secondary drivers with a consent date and reference image are included in secondary driver totals. Note that age and gender are available for 79% of secondary drivers (if designations are unavailable, drivers are included in “not specified”).
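As a quick consistency check, the arithmetic in Table 3.8 can be verified programmatically. The figures below are transcribed from the table; this is an illustrative check only, not part of the study's tooling:

```python
# Rows transcribed from Table 3.8:
# (age_group, gender, primary, additional_primary, secondary)
rows = [
    ("16-17", "M", 109, 10, 0), ("16-17", "F", 140, 3, 0),
    ("18-20", "M", 233, 4, 4),  ("18-20", "F", 284, 5, 4),
    ("21-25", "M", 241, 4, 8),  ("21-25", "F", 345, 3, 8),
    ("26-35", "M", 156, 2, 12), ("26-35", "F", 148, 2, 17),
    ("36-50", "M", 153, 3, 16), ("36-50", "F", 161, 4, 15),
    ("51-65", "M", 154, 3, 19), ("51-65", "F", 181, 1, 23),
    ("66-75", "M", 166, 0, 19), ("66-75", "F", 148, 0, 10),
    ("76+", "M", 248, 1, 4),    ("76+", "F", 197, 2, 6),
    ("AVT", "Both", 135, 0, 0), ("n/s", "n/s", 1, 0, 44),
]

# Grand totals: participants = primary + additional primary
total_participants = sum(p + a for _, _, p, a, _ in rows)
total_secondary = sum(s for *_, s in rows)
print(total_participants, total_secondary)  # 3247 209

# Footnote (b): share of secondary drivers with age/gender designations
# (44 of them fall in "not specified")
print(round((total_secondary - 44) / total_secondary * 100))  # 79
```

Running the check reproduces the table's grand totals (3,247 participants, 209 secondary drivers) and the 79% figure cited in footnote (b).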

Participant Intake Process

This section describes the sequence of steps that made up the participant intake process. When the participant arrived at the study center, the assessor greeted the individual, verified his or her identity (via driver’s license photograph), confirmed that the driver had a valid (unexpired) license and vehicle registration, and requested proof of liability insurance. If these documents were present, the assessor began the “informed consent” process. This included providing a one-page information sheet describing the project and playing the informed consent DVD, which contained a 10-minute video reenacting the intake process. Actors posing as recruit and assessor demonstrated the typical intake process and addressed any questions or concerns a participant might have. (This video was skipped if the participant previously viewed the video online or reviewed the consent form before the appointment.) Once any questions were answered and the full consent form was reviewed, dated, and signed, the assessor and the participant walked to the installation area, where the vehicle condition (existing scratches or other minor damage) was documented before work began on the vehicle. Explanations were provided regarding how equipment was to be attached (e.g., Velcro was used to attach the NextGen computer in the trunk, and existing screw holes in the license plate holder were used to attach the forward radar whenever possible). (Holes did have to be drilled in the vehicle bumper in Tampa, Durham, Bloomington, and State College, since vehicles in these states did not have front license plates.) Once the vehicle review was completed, installers began work, and the assessor and participant returned to the assessment area. Back in the assessment area, information was collected to enable processing of compensation checks. This information included bank routing number, checking account number, bank name and address, and so on.
All such private information was stored in a secure location. The participant was also asked for the names and phone numbers of any secondary drivers who might drive the vehicle (at least three times a week). If such drivers were named, the participant was provided a copy of the Secondary Driver Consent Form to take home and share with the secondary driver(s), along with instructions for completion and return of the forms. Table 3.9 summarizes the participant intake process.

Participant Assessment Tests and Surveys

The next step in the intake process was the administration of assessment tests and surveys by the assessor. The objective of the assessment tests was to establish a baseline in functional capabilities of the driver with regard to perception, cognition, and psychomotor and physical abilities. The surveys or questionnaires enabled psychological testing and documentation of health, medical conditions, and medications as well

• Other notes on the use of media in recruiting include:
  – In rural areas, radio and TV ads were expensive and not very effective, especially if only broadcast on public access stations. Newspapers, flyers, posters/banners, Craigslist, and word of mouth worked better in rural areas.
  – In urban areas, radio ads targeted at specific age groups, on radio stations selected for that demographic, were quite effective in recruiting the hard-to-reach age groups (Buffalo, Durham, Seattle).

Consent and Assessment Process

Consent Forms

Once the study center scheduler set up an installation appointment for a recruit, the appropriate consent form was sent to the recruit for review. There were different versions of the consent form for adult, minor, and secondary drivers. There were also additional versions in each of these categories depending on the length of time the participant was to be in the study.
(Time in study was initially limited to 1 or 2 years, but later in the program shorter and longer periods of participation were accepted.) Finally, there were versions of the consent form for a parent of a minor to sign, as well as for the vehicle owner, if different from the participant driver. Regardless, in all of the versions, the salient points were the same. Appendix F provides an example of the consent form for an adult driver participating for 2 years.

Initially, the consent form was reviewed at the start of the installation appointment. However, it quickly became apparent that it took too long (about 45 minutes) for participants to read the 14-page document (and the equipment installation could not start until the consent was signed). NAS subsequently allowed the study centers to send the consent form in advance either by e-mail or U.S. mail. However, this did not guarantee that the recipient read it or reviewed the pictures of the equipment. When recruits arrived for the installation and did review the information, a few declined to participate. The most common reasons for this last-minute change were that they didn’t realize they couldn’t go into Canada (applies to the Buffalo and Seattle sites), the equipment was bigger than they expected, they didn’t realize there were cameras, or they didn’t want holes drilled into their bumper (applies to the four states with no front license plate holder). Although all of these issues were previously explained on the phone, some recruits did not fully absorb them. Sending the consent form in advance may have also acted as a deterrent. Some recruits were scared off by the legality of the consent language and the length of the document. Occasionally, recruits would call and cancel their installation appointment once they received and read the consent form.

Figure 3.4 shows an illustration of the clock drawing test: the drawing on the right is appropriate, while the drawing on the left suggests a possible Alzheimer’s or dementia-related issue. Figure 3.5 shows the instrumentation used for the vision and grip strength tests. Illustrations in both figures are adapted from the VTTI assessment test training briefing.

Note that the CPT-II test and three of the four DHI tests were computer-based. Results of these tests were automatically uploaded to MCS on the VTTI server via the SHRP 2 NDS Participant Portal. Results of the Optec Vision testing, the Jamar Grip Strength test, and the Rapid Pace Walk test were scored locally and entered into MCS by the assessor. The hand-drawn clock test was scanned into the computer and also uploaded to MCS by the assessor.

Table 3.11 lists the names of the various surveys that each participant completed during the intake process. Some of the surveys were confidential, as indicated. Examples of the type of questions asked are included in the table to provide an indication of the nature and content of each survey. The full list of survey questions is not provided in this report, but sample surveys are available in the appendix of the VTTI S06 report (Dingus et al. 2014). The last two surveys were administered at the end of the study period for each participant, when the SHRP 2 equipment was removed from the vehicle.

The surveys completed on the computer were also automatically uploaded to MCS via the Participant Portal. If for some reason the surveys could not be completed at the study center facility, participants were given a paper copy of the surveys along with a facility-addressed, postage-paid envelope in which to return the completed survey. Alternatively, a website link (with username and password) could be provided to the participant. If secondary drivers were identified, the
The tests and surveys were selected by the SHRP 2 program and were administered uniformly (using the same protocols) at all the NDS study centers. Participants were informed that they could decline to do any of the tests or surveys. The assessment test process was essentially the same at each site, with perhaps some slight differences in the order that tests were administered. The lead assessor at each site was either an experienced employee of the company (or university) operat- ing the study center or was a consultant with the credentials to serve in this role. At two sites (Buffalo, Tampa), the lead asses- sor was a registered occupational therapist, who was also a certified driver rehabilitation specialist. At other sites, the lead assessor either held a degree in a related field (e.g., sociology) or was a trained interviewer. After receiving SHRP 2 training at VTTI, the lead assessors trained other assessors who might be study center staff with a degree in psychology or market- ing and/or undergraduate or graduate students in a health- related field, transportation, or engineering. In particular, assessors needed to have good interpersonal and communica- tion skills with an ability to adapt to changing circumstances. All received specialized training to conduct SHRP 2 partici- pant enrollment and driver assessments. This included verbal training on obtaining participant consent and administering tests, as well as hands-on training using other trainees to per- form mock enrollments. Additionally, all of the assessors com- pleted Human Subjects Training. The assessment tests and surveys were usually completed by participants in about 2 hours. The tests were conducted in a comfortable setting while the NDS instrumentation was being installed in the participant’s vehicle. Table 3.10 lists the various assessment tests with a brief description of each test. Table 3.9. 
Participant Intake Process

Arrival: Participant arrived at the NDS facility
• Assessor greeted participant
• Assessor and participant proceeded to driver assessment facility

Driver and vehicle documents: Assessor reviewed documents brought by participant
• Valid driver’s license; note birth date for those under age 18
• Valid vehicle registration
• Proof of vehicle liability insurance

Consent:
• Assessor reviewed information about project and vehicle instrumentation with participant
• Participant viewed informed consent video and reviewed consent form
• Assessor answered any questions—called senior staff as necessary
• Assessor obtained two signed copies of consent form (one for participant, one for project file)

Vehicle review:
• Participant and assessor reviewed vehicle condition; discussed equipment placement/attachment
• Assessor instructed installers to perform installation

Compensation: Assessor obtained participant auxiliary data
• Voided check for direct deposit (selected sites)
• Participant auxiliary data was stored in a secure location at all times during intake process

Assessment tests: Assessor proceeded with participant assessment tests and surveys

Table 3.10. Participant Assessment Tests

• Clock drawing test: Indicator of Alzheimer’s or other dementia. Participant draws a clock face and indicates the time “ten minutes after eleven.”
• Conners’ Continuous Performance Test (CPT-II V.5) (computer-based): Measures of Attention Deficit Hyperactivity Disorder (ADHD) and impulsivity; measures of attentional lapses; ~12 minutes. Interpretation of results requires clinical training; results should be used only with other sources of information.
• Optec 6500 Vision Testing: Acuity (far/near, binocular); contrast sensitivity (night, no glare, right and left eye); stereo depth perception (binocular); contrast sensitivity (day, no glare, right and left eye); color perception (binocular); contrast sensitivity (night, glare, right and left eye); peripheral vision (right and left eye). Participant should wear the vision correction they typically wear while driving, including any specific glasses for night driving. No sunglasses.
• Jamar Grip Strength Test: Used to approximate upper body strength. Measured twice with each hand using the second-to-narrowest grip distance; 15 seconds between each trial.
• Driving Health Inventory (DHI):
  – Rapid Pace Walk Test (tests walking mobility and lower body strength): timed walking task; 10 ft back and forth.
  – Visualizing Missing Information (tests visual perception, ability to complete a shape) (computer-based): several trials, varying difficulty.
  – Useful Field of View (UFOV) (tests visual information processing speed) (computer-based): image flashed on screen, duration varies; two questions asked.
  – Trail Making (timed connect-the-dots; two tests) (computer-based): numeric values in ascending order; alpha and numeric values in ascending/alternating order.

Source: Adapted from VTTI (2010).

Figure 3.4. Clock drawing test. Source: Adapted from VTTI (2010).
Figure 3.5. Optec 6500 Vision Testing machine and Jamar Grip Strength tool.
participant was also given paper copies of the demographics and driver history surveys (numbers 2 and 3 in Table 3.11) for the secondary drivers to complete and return.

Once the assessment and survey process was completed, the assessor provided the participant with an exit letter and a glove box letter. The latter was provided so that if the participant was stopped by a police officer and questioned about the equipment, an official explanation could be provided. (This letter could also be used if a participant was questioned at a U.S. border crossing.) A copy of this letter is provided in Appendix G.

When the vehicle was ready, the assessor and participant met the installation technician and together performed an inspection of the vehicle and noted the location of the installed equipment. The participant then sat in the driver’s seat, where three digital photographs were taken using the camera mounted near the rearview mirror. These photographs were stored and subsequently used to confirm (when video data was utilized) that the driver was the consented participant. The installer then had the participant sign a postinstallation inspection form and moved the vehicle to the reception area while the assessor accompanied the participant to the facility entrance. With the intake and installation activities completed, the participant then departed.

Table 3.11. Participant Surveys

1. Barkley’s Quick Screen (6 questions): Assess ADHD tendency in past 6 months. Sample content: if easily distracted, if difficulty organizing tasks, if loses things, if often restless.
2. Demographics Questionnaire (45 questions): Characterize driver via census-type descriptive data: gender, date of birth, ethnicity, work status, household income, miles driven per year, number in household, whether they drive, etc.
3. Driving History (17 questions; confidential): Survey about driving education, experience, violations, and crashes: annual mileage, years driving, number and type of tickets, number and severity of crashes, etc.
4. Driving Knowledge (20 questions): Assess general knowledge of rules of the road: meaning of traffic signs/control devices, roadway markings, speed limits, rules on right of way, etc.
5. Frequency of Risky Driving Behavior Questionnaire (32 questions; confidential): Assess risk-taking tendencies: frequency of running red lights, driving drowsy, speeding through curves or catching air, illegal turns, drinking and driving, using a cell phone, eating, reading while driving, not wearing a seat belt, etc.
6. Vehicle’s Integrated Systems Feature Identification (12 questions): Document cell phone, navigation, and music technology in the vehicle: integrated vehicle cell phone or Bluetooth, voice recognition, OnStar/Sync or Safety Connect, navigation system or vehicle MP3 system, how music is controlled, etc.
7. Medical Conditions and Medications (37 questions): Document driver medical status and number and type of medications: age, gender, weight; whether medical conditions relate to vision, hearing, heart, stroke/brain, vascular, nervous system, respiratory, kidney, bone, etc.; list of current medications and whether they affect ability to drive.
8. Modified Manchester Driver Behavior Questionnaire (24 questions; confidential): Driver judges own driving errors and traffic violations: frequency that driver will pass on the right, tailgate, forget where the car is parked, run a light as yellow goes red, speed, fail to note a pedestrian or bicyclist, show hostility toward other drivers, hit something backing up, etc.
9. Perception of Risk Questionnaire (32 questions; confidential): Driver assesses own crash risk if engaged in various actions: running a red light or stop sign, changing lanes suddenly, following emergency vehicles with siren on, drinking and driving, using a cell phone, racing, driving with worn tires, turning without signaling, etc.
10. Sensation-Seeking Scale (40 questions; confidential): Types of activities, people, or things the driver likes, or how he/she feels about an activity: likes wild versus quiet parties, likes (or dislikes) seeing a movie twice, would try (or never try) marijuana, prefers friends who are unpredictable or reliable, would/would not try new foods, etc.
11. Sleep Quality (60 questions): Assess driver work schedule, sleep requirements, sleep patterns, sleep problems: work hours, shift work, amount of sleep, naps, frequency of nodding off, use of caffeine, alcohol, tobacco, sleeping aids, etc.
12. Exit Survey (10 questions): Driver assesses own stress level during the study, any impact on driving, and rates the study experience: level of life stress during study, rating of driving ability/safety, whether driving was altered or restricted during the study, whether the incident button was pushed during the study.
13. Medical Conditions and Medications Exit Survey (40 questions): Document whether driver medical conditions or medications are the same or changed since the start of the study. Similar questions as no. 7.

Feedback on Consent and Assessment Process

VTTI's informed consent video explaining the main points of the consent form was considered by the assessors to be very useful. It answered the majority of participant questions in a concise manner.

As might be expected over the course of a 3-year program, there were a few technical issues associated with administering multiple assessment tests remotely. For example, there were times when the VTTI computer connection was down, or individual tests or logins would not work. A suggestion was made that in future studies, a local computer system should also be made available so that tests could be administered even if the remote server was down or the connection was having problems. There were also times a test's software license (not accessible at the local site) would expire, which prevented the software from opening. This type of issue is more likely to arise in multiyear studies.

The study center assessors noted that it would be convenient to be able to repeat sections of the Driving Health Inventory suite without repeating the entire suite. For some administrations, one part of the testing failed while the rest was successful. It would also be convenient if the tests administered by the assessor were grouped together and kept separate from the tests that required participant interaction with the touch screen. This would expedite the process by not requiring the frequent switching of seats.

Quality control of the test results is extremely important. The study centers were able to upload documents to MCS, but it would be helpful if those documents could be reopened at the local site for verification purposes.

The length of the surveys should be reviewed. Feedback from many participants indicated that the surveys were too long. In particular, some of the elderly found the length of the vision tests to be frustrating. The UFOV test was also frustrating for older drivers who could not see the small car and truck appear on the screen.

Several questions on the surveys should be reviewed before being used again, as they appeared ambiguous, confusing, or did not have relevant answer options. For example, the question "How much coffee do you drink?" did not offer "none" as an option. Similarly, the demographic survey asked for household status with the answer options of "two parent," "one parent," "live alone," or "other." This was confusing for participants who lived with a significant other or were married without children. Older participants with a living spouse and grown, independent children were also unsure how to answer this question. These participants chose "other" for lack of a relevant option. The sleep survey also contained a question that some participants could not accurately answer. When asked for their work schedules, the option "retired" was not listed. Some participants were unsure if volunteering within their community should be considered work. Some survey questions contained obvious bias regarding alternative lifestyles that many participants found ridiculous or even offensive, and again, some did not contain relevant options.

Finally, there was feedback related to operational use. Each survey opened in a new window, which needed to be saved and closed before navigating back to the main screen. Participants who were not comfortable on a computer had some difficulty with this process. They often forgot to save their data before closing the window or became confused by the number of open windows. Hard copies were available for any participant who preferred them, and it is recommended this be an option for future studies utilizing surveys.

Installations

Installation Facility Characteristics

Installation facilities were established at each of the six study centers. These facilities were situated near the areas used for participant assessments so equipment could be installed in the consented participant's vehicle while the participant was completing the assessment tests and surveys. Study centers were required to provide one installation bay per 150 DAS units assigned to their site. Thus, at a minimum, the Buffalo, Seattle, and Tampa sites were required to provide three installation bays; Durham was required to provide two installation bays; and the State College and Bloomington sites were required to provide one installation bay each. The capability to perform two installations per day per bay was also established. Table 3.12 summarizes the locations and selected characteristics of each of the installation facilities.

The initial DAS installation training for operations managers and senior technicians from each site was performed at a workshop conducted by VTTI in Blacksburg, Virginia. This training provided the technicians with experience working with the DAS hardware and the InstallWare software tool. The InstallWare software communicated installation data such as component serial numbers and vehicle information to the VTTI database. The software also permitted installers to calibrate the accelerometers in the head unit and check the alignment of cameras. The initially trained individuals then trained additional staff at their own study center facilities.

Most of the installation technicians employed at the study center sites had prior experience working on automotive systems. This included installation of automotive stereo and alarm systems as well as general automotive repair work. At the Seattle study center, Battelle subcontracted with Roush Industries to provide installation technicians. Roush was able

to provide the study center with highly trained technicians who held the appropriate Automotive Service Excellence (ASE) and state certifications. Similarly, at the Buffalo study center, CUBRC contracted with Calspan Corporation to provide installation technicians and facilities. At the Bloomington center, the installation technicians were full-time university employees at the Motor Pool who had multiple ASE certifications. Technicians at all study center sites received training on the protection of human subjects in research.

Table 3.12. Facility Characteristics

Bloomington (2 vehicle bays; maximum 150 DAS units assigned to site): The Indiana University Motor Pool garage and mechanics were used to perform the installations. The Motor Pool also provided up to four bays (if needed) and office space to conduct intake and assessments. The Motor Pool was staffed with project-related mechanics from 6 a.m. to midnight to handle the occasional unscheduled vehicle maintenance task.

Buffalo (3 bays; 450 units): Installations were performed by a subcontractor (Calspan Corporation) at its facility, which is colocated with CUBRC (same building). The facility is located across the street from the Buffalo Niagara Airport, which provided an easy reference point for participants.

Durham (2 bays; 300 units): Installations were performed at a Westat facility by salaried employees. The facility was centrally located in the NC Triangle area near the Raleigh-Durham airport.

Seattle (3 bays; 420 units*): Installations were performed by a subcontractor (Roush Industries) at a Battelle-leased facility in Tukwila, Washington, a suburb south of Seattle. The location was centrally placed within the three-county recruitment area, with easy access to I-5.

State College (1 bay; 150 units): Installations were performed in a high-bay, unheated space belonging to the Larson Institute and College of Engineering in a laboratory building about 5 miles from the main campus. The installation site was in the same building as the participant testing and intake site, about a 3-minute walk from the participant office/waiting area.

Tampa (3 bays; 450 units): Installations were performed by USF Center for Urban Transportation Research (CUTR)-employed technicians in a research park facility on the USF campus. The facility was located on the outer perimeter of the campus adjacent to a major road (Fowler Ave.) and was within walking distance of the main CUTR facilities.

* Due to DAS equipment shortages, the Seattle allotment of DAS units was reduced to 420.

Installation Process

The DAS components are shown in the schematic provided in Figure 3.6.

Figure 3.6. Schematic showing main DAS components.

The major DAS components were

• Head unit to be mounted on the front windshield near the rearview mirror. (The head unit contained four cameras, accelerometers, an illuminance sensor, an infrared illuminator, a passive alcohol sensor, and a GPS sensor.)

• Radar unit to be mounted on the vehicle front bumper.
• Radar interface box to be mounted in the engine compartment.
• Rear-looking camera to be mounted on the rear window.
• GPS and cell phone antennae to be mounted on the rear window.
• DAS main unit to be installed in the vehicle trunk.
• Cabling to connect the various DAS components and the vehicle systems.

Installation of the DAS in a participant's vehicle was a multistep process which varied slightly among vehicle types and from study center to study center. The general steps employed in the process are as follows:

• The participant was greeted at the facility entrance lobby by a study site representative (e.g., receptionist, assessor, installation technician, scheduler).
• The participant was asked to provide his/her driver's license, registration, insurance card, and keys to the vehicle. Once ownership and insurance status were verified, the participant was asked if there was anything in the vehicle they needed to get and if the vehicle was free of guns, medications, or dangerous items.
• The participant was escorted to an assessment room to complete the consent process.
• The vehicle was driven to the installation area by a technician.
• The technicians inspected the vehicle to determine if it was eligible for DAS installation. If the vehicle was found to be ineligible, it was driven back to the reception area and reasons for rejection were explained to the participant.
• Once accepted, the technicians recorded the odometer reading, battery voltage level and condition, tire pressure, and tire tread condition; the technician also took pictures of the vehicle, documenting any preexisting damage, and created a vehicle checklist document for the participant.
• After completing the consent process, the participant was escorted to the installation area, and the technician showed the participant the DAS equipment, described where it would be installed in the vehicle, and answered any questions. The participant was asked to sign the checklist document noting that everything was in order.
• The assessor escorted the participant to the area used for assessment testing and completion of the survey questionnaires.
• The technicians started installation of equipment by scanning the participant ID number into the InstallWare software and edited any missing/wrong information about the vehicle.
• All necessary components were scanned into the InstallWare software.
• The radar was installed on the front bumper using a license plate bracket. Only two study centers had state laws that required front license plates (Buffalo and Seattle). At the other centers, license plate brackets were purchased from automobile dealers to permit installation of the radar.
• The radar interface box (RIB) was secured in the engine bay and connected to constant power and ignition switch power on the fuse box. The headlight and turn signal cables were connected if needed. [Some vehicles did not need these connections implemented because the information was available from the vehicle Controller Area Network (CAN) bus.]
• The plastic panels under the steering wheel were removed to access the internal fuse block, brake switch, and OBD port. The network box was installed and connected to constant and ignition power. If needed, a connection was made to the brake signal cable from the brake switch.
• The head unit was aligned, its attachment location on the windshield was cleaned, and primer was applied to the windshield. The head unit was attached to the windshield, and the power cable was connected and routed down the A-pillar to the rear of the vehicle.
• The door sill panels on the driver side were opened, and cables were run from the fuse box, vehicle network connection, and head unit to the car trunk. The door sill panels were then reinstalled.
• The camera on the rear windshield and the General Packet Radio Service (GPRS) antenna were installed, and cables were routed down the C-pillar into the vehicle trunk.
• Once all cables were routed, the NextGen (DAS) was installed using cable ties or industrial Velcro in the trunk under the deck or under the trunk bed; in some cases, it was installed on the driver-side quarter panel. Cables were connected on the NextGen, and all panels and carpet were put in place.
• The NextGen was connected to the installer laptop via Ethernet cable, and InstallWare software was run to initialize the NextGen software.
• The alignment board was centered in front of the vehicle using two laser alignment tools attached to the left and right windows. The board was then used to align the front radar and forward-looking cameras.
• The offset from vehicle centerline and the height of the head unit from ground level were measured and entered into the InstallWare software.
• InstallWare software was run, and checks were made to ensure that all turn signal, brake, headlight, and speed data were obtained. Measurements of vehicle width were entered into the software.
• The rear camera was aligned to ensure video coverage of the vehicle's rear blind spots. The image was checked in the InstallWare software.
• The participant was invited to sit in the driver seat and adjust the seat and mirrors; then the following three photos were taken using InstallWare software and the driver camera in the head unit: one looking forward, one glancing at the rearview mirror, and one looking at the mirror directly. The participant was shown where components were installed and where the "incident" button was located.
• The participant was escorted back to the waiting area until the installation was completed.
• Photos were taken of the vehicle after installation for the vehicle checklist document.
• The last check was the "shakedown" software check, which was an on-the-road test to ensure all sensors worked as expected. If any abnormalities were detected, the component was swapped with a new one.
• The vehicle was driven to the reserved parking spot, and the participant was asked to sign the after-installation checklist document.
• The participant was given a letter to keep in the glove box in case of questioning from the police about the equipment and was free to go.

The next two figures illustrate typical installation operations. Figure 3.7 illustrates wires being routed from the head unit in the passenger compartment to the DAS NextGen computer in the vehicle trunk. Figure 3.8 shows the installation of the radar unit on the vehicle front bumper and the RIB in the engine compartment.

Figure 3.7. Installation of cabling from head unit to DAS NextGen computer in trunk: (a) kick panel on driver side removed and (b) connection of cabling to NextGen.

Figure 3.8. Installation of radar unit components: (a) installation of radar unit on front bumper and (b) installed RIB.

Installation Scheduling Strategies

During the course of the program, the actual number of installations completed per day ultimately depended on the success in recruiting participants in the required age groups and on DAS equipment availability. Additional factors affecting the number of installations performed per day were the number of last-minute participant cancellations, participants failing to show up for their scheduled appointment (i.e., no-shows), and rejected vehicles.

To minimize no-shows, schedulers at the study centers instituted procedures to provide the participant with reminders of their appointments through phone calls, e-mails, or text messages. Usually a confirmation was sent out at the time the appointment was made, and a reminder was sent out 1 or 2 days before the appointment. In addition, as the program progressed, to encourage participants to show up for their installation appointments, they were given a $50 gas card at the time of the installation. Occasionally, bad weather or especially good weather led to a large number of no-shows and cancellations. For example, during January and February 2012, Buffalo had a 40% to 50% no-show rate, largely due to adverse weather conditions. Attempts were made to reschedule cancellations and no-shows whenever possible.

Overbooking was considered as a strategy to overcome no-shows and increase the rate of installations (i.e., the study center would schedule four installations and hope only three participants would show up). However, this strategy was only used at one site during periods of peak installations, and backup personnel were on call in the event that all of the scheduled participants showed up.

Installation Rates and History

Table 3.13 shows the actual number of installations performed each month at the six study centers. The numbers include all installations performed at the study site each month. These include installations of equipment into new participants' vehicles as well as reinstallation of equipment into new vehicles of existing participants.

The sizeable month-to-month variations shown in the table at all of the study center sites were not anticipated at the beginning of the program. As discussed earlier, the installation rates were strongly affected by the availability of recruits and equipment. For example, the spikes in installations shown for the Buffalo study center in March 2012 and January 2013 resulted from relatively large numbers of recruits attracted to the program by radio advertisements that aired in the preceding months. Figure 3.9 provides a graph of the total installations per month summed over all six study centers.

Table 3.14 provides a summary of installation appointment statistics for all six study center sites. Of 5,142 appointments

Table 3.13. Number of Installations by Month at Each Study Center

Year  Month      Bloomington  Buffalo  Durham  Seattle  State College  Tampa  Total
2010  October          0         4       0        0          0           0       4
      November         0        10       8        0          0           6      24
      December         0        13       5        0          0           8      26
2011  January          1        13      20        0          0           2      36
      February         8         6      21        3          6           7      51
      March           10        15      20       10          9          16      80
      April           14        26      15       24         15          26     120
      May             14        12      13       22         11          21      93
      June            12         9      29       39         17          17     123
      July            15        20      28       46         16          27     152
      August          16        31      30       36         18          55     186
      September       19        46      30       32         10          52     189
      October         13        35       9       28         12          33     130
      November        16        38      23       26          7          31     141
      December        11        45      35       25          9          40     165
2012  January          5        34      29       44         10          42     164
      February         8        26      27       35          8          34     138
(continued on next page)

Table 3.13. Number of Installations by Month at Each Study Center (continued)

Year  Month      Bloomington  Buffalo  Durham  Seattle  State College  Tampa  Total
2012  March           15        70       9       22          7          59     182
      April            4        14      16       34          5          19      92
      May              9        14      17       29          6          19      94
      June             5        16       7       30          4          21      83
      July             8        17      15       29          8          14      91
      August           4        27       8       27          9          16      91
      September        8        21       6       32          4          36     107
      October          3        26      15       37         13          28     122
      November         6        32      11       21         10          28     108
      December         4        26      26       12          8          14      90
2013  January          3        51      27       21          8          33     143
      February         2        24      18       21         10          23      98
      March            1        16      12       17         10          19      75
      April            5         6       4        7          3           7      32
      May              9        11       4        6          7          14      51
      June             0        13       5       13          5           7      43
      July             6         2       3       10          6           6      33
      August           0         0       0        1          1           1       3
      September        1         0       0        0          1           0       2
      October          0         0       0        0          0           0       0
      November         0         0       0        0          0           0       0
      December         0         0       0        0          0           0       0
Total                255       769     545      739        273         781   3,362

Figure 3.9. Total installations by month summed over all six study centers.
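As a quick arithmetic cross-check, the per-center totals in the last row of Table 3.13 sum to the program-wide total of 3,362 installations quoted in that table. The values below are taken directly from the table's Total row:

```python
# Per-center installation totals from the Total row of Table 3.13.
center_totals = {
    "Bloomington": 255,
    "Buffalo": 769,
    "Durham": 545,
    "Seattle": 739,
    "State College": 273,
    "Tampa": 781,
}

# Summing across centers reproduces the program-wide total.
program_total = sum(center_totals.values())
print(program_total)  # 3362
```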

Table 3.14. Installation Appointment Statistics—Totals for All Study Centers

Columns: appointments scheduled; number of cancellations and no-shows; number of vehicles rejected at site; percentage of cancellations and no-shows (by month); percentage of appointments with vehicles rejected; percentage of cancellations and no-shows (program to date).

Year  Month      Sched.  Canc./No-Show  Rejected  % Canc. (Month)  % Rejected  % Canc. (To Date)
2010  October        7         2            1          28.6%          14.3%          28.6%
      November      33         5            4          20.0%          16.0%          21.9%
      December      36         3            7           6.8%          15.9%          13.2%
2011  January       64        14            9          21.9%          14.1%          17.1%
      February      98        35            4          35.7%           4.1%          24.8%
      March        137        46            7          33.6%           5.1%          28.0%
      April        169        45            9          26.6%           5.3%          27.6%
      May          160        47           10          29.4%           6.3%          28.0%
      June         179        56            3          31.3%           1.7%          28.7%
      July         221        56            4          25.3%           1.8%          28.0%
      August       277        89            5          32.1%           1.8%          28.8%
      September    241        52            4          21.6%           1.7%          27.7%
      October      190        48            1          25.3%           0.5%          27.5%
      November     214        57            3          26.6%           1.4%          27.4%
      December     236        66           10          28.0%           4.2%          27.5%
2012  January      258        80            9          31.0%           3.5%          27.8%
      February     214        68           10          31.1%           4.6%          28.1%
      March        260        64           11          24.6%           4.2%          27.8%
      April        131        38            8          29.0%           6.1%          27.8%
      May          139        36            4          25.9%           2.9%          27.7%
      June         130        35            9          28.5%           6.9%          27.8%
      July         152        53            7          34.9%           4.6%          28.1%
      August       164        58           10          35.4%           6.1%          28.4%
      September    182        52           18          28.6%           9.9%          28.4%
      October      230        77           23          33.5%          10.0%          28.7%
      November     202        58           29          28.7%          14.4%          28.7%
      December     135        37           11          27.4%           8.1%          28.7%
2013  January      201        48           16          23.9%           8.0%          28.4%
      February     163        52           19          31.9%          11.7%          28.6%
      March         95        31           10          32.6%          10.5%          28.6%
      April         40        17            8          42.5%          20.0%          28.8%
      May           73        10            9          13.7%          12.3%          28.5%
      June          49         7            6          14.3%          12.2%          28.4%
      July          55        12           10          21.8%          18.2%          28.3%
      August         2         0            0           0.0%           0.0%          28.3%
      September     na        na           na            na             na             na
Total            5,142     1,456          308          28.3%           6.0%          28.3%

Note: na = not applicable.

scheduled across all sites, 28.3% were no-shows or cancellations (who did not reschedule), and 6% of the appointments resulted in vehicles being rejected at the installation site. However, 3,378 appointments (65.7%) resulted in a successful installation. Detailed data for the individual study centers are provided in Appendix H.

Figure 3.10 shows the number of installation appointments per month for the six study center sites. Early in the program, the number of installation appointments was restricted by the availability of DAS equipment and the number of available recruits in the hard-to-recruit demographic cells. This graph also shows the temporal variability in scheduled appointments experienced at all the sites. This variability created challenges with regard to efficiently scheduling manpower to perform the installations.

Figure 3.10. Number of installation appointments per month.

Figure 3.11 shows the cumulative percentage of no-shows and cancellations for four of the six study center sites. As noted, by the end of the study, the cumulative percentage of no-shows ranged from a high of about 32% (Seattle study center) to a low of 22% to 24% (Buffalo and State College study centers, respectively).

Reasons for Rejected Recruit Vehicles

Participant vehicles were occasionally rejected at the time of installation due to problems with vehicle condition. Figure 3.12 provides a graph showing the number of vehicles rejected per month for five of the six study center areas (i.e., Buffalo, Durham, Seattle, State College, and Tampa). Also shown is the percentage of monthly appointments resulting in vehicles rejected for installation. The graph indicates that higher vehicle rejection rates occurred later in the program (i.e., September 2012 through February 2013). During this period, recruitment activities focused on younger and older drivers. The fact that younger drivers typically had older vehicles may be a possible explanation for the high vehicle rejection rates experienced during this period.

In general, reasons for vehicle rejections included

• Check-engine or other dashboard warning light(s) on;
• Alternator voltage below threshold;
• Incorrect vehicle description (convertible versus hard top);
• Damaged, missing, or noncompatible front bumper;
• Generally poor (or unhygienic) vehicle condition;
• Insufficient documentation (insurance, etc.);
• Installed after-market electronics (e.g., a large speaker system that draws a lot of energy);
• Tire pressure monitoring system (TPMS) issues;
• Cracked windshield;
• Leaking trunk;
• Inaccessible OBD port location;
• Window tint;
• Vehicle fluid leaks (radiator, fuel, oil, water); and
• Head liner attached to the roof (no place for cables).
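The headline appointment statistics quoted earlier (28.3% cancellations and no-shows, 6.0% vehicle rejections, and 3,378 successful installations, or 65.7%) follow directly from the totals row of Table 3.14. A minimal sketch of the arithmetic, using only values stated in that table:

```python
# Totals row of Table 3.14.
scheduled = 5142       # appointments scheduled across all sites
cancellations = 1456   # cancellations and no-shows (did not reschedule)
rejected = 308         # vehicles rejected at the installation site

# Every appointment is either a cancellation/no-show, a rejection,
# or a successful installation.
successful = scheduled - cancellations - rejected
print(successful)                                 # 3378
print(round(100 * cancellations / scheduled, 1))  # 28.3
print(round(100 * rejected / scheduled, 1))       # 6.0
print(round(100 * successful / scheduled, 1))     # 65.7
```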

Figure 3.11. Cumulative percentage of no-shows and cancellations at four study centers.

Figure 3.12. Vehicle rejection rates by month for five study centers.

Time Required for Installations

Figure 3.13 is a scatter plot showing the reported times required for DAS installations at the Buffalo and Seattle test sites over the entire program. The installations have been numbered in the order in which they occurred, with "1" being the first installation. Also shown on the plot is a linear least squares fit to the data for each of the two study centers.

Figure 3.13. DAS equipment installation times at Seattle and Buffalo study centers.

As shown on the graph, the installation times at the Buffalo study center were longer than the corresponding times at the Seattle site. For example, the average installation time in Buffalo was 2 hours and 22 minutes compared with 1 hour and 22 minutes in Seattle. Furthermore, approximately 83% of the installations in Buffalo were completed in 3 hours versus 99% in Seattle. One explanation for the difference in installation times between the two sites is how the sites defined installation time. In Buffalo, installation time included the time for equipment installation and check-out as well as the time for preinstallation activities such as inspecting and photographing the vehicle, reporting the inspection to the participant, and answering any questions the participant might have about the equipment to be installed. In Seattle, installation time was defined as including only the time to install and check out the DAS equipment. The inclusion of inspection and sign-off times in the Buffalo data could easily account for 30–40 minutes of the difference in site installation times.

A second factor contributing to the difference in installation times between the two sites is that Buffalo routinely used two technicians per installation, while Seattle allocated its four technicians across whatever vehicles were in the shop at any given moment. In this way, the Seattle technicians could parse their time between vehicles and activities (e.g., installs, maintenance, SSD swaps). The exception occurred when two technicians traveled for off-site maintenance appointments and SSD swaps; in these situations, only two technicians were available for activities in the shop.

Also of note are the least squares fits to the data. These fits indicate similar reductions in installation times as technicians at the sites gained experience with installations.

As noted earlier, an important activity during the participant intake process was the participant assessment. Figure 3.14 provides a comparison of the installation and assessment times for each of the Buffalo study center participants. As in the previous graphs, the installation and assessment times have been numbered in the order in which they occurred, with "1" being the first participant enrolled in the program. Also shown on the graph is the total participant intake time. The total intake time is the elapsed time from when the participant walked in the door to when he or she walked out the door. This included the times required for participant consent and assessment as well as equipment installation. At the beginning of the program it was assumed that the total participant intake process could be completed in 4 hours or less. Data shown in Figure 3.14 confirm this assumption; approximately 82% of Buffalo study center participants had total intake times of 4 hours or less. The State College study center site had a similar experience, with the majority of its participants completing the intake process in less than 3 hours.

Figure 3.14. Total intake process times for participants at Buffalo site.

Participants Switching Vehicles by Site

Some participants required more than one installation of DAS equipment. This occurred most frequently when participants changed their vehicle while they were enrolled in the study and opted to continue in the study with their new vehicle. Table 3.15 summarizes the percentage of primary participants at each study center who switched vehicles, had

DAS equipment installed in their new vehicle, and continued their time in the study.

Number of Vehicles by Make and Year at Each Study Center

The installation process had to accommodate a large variety of participant vehicle makes and model years. Table 3.16 provides a summary of the participant vehicle makes enrolled in the study for the six study center sites. Figure 3.15 shows the total number of vehicles in each vehicle model year for all study centers. The model years of vehicles enrolled in the program ranged from 1987 to 2013. A large proportion (72.4%) of the vehicle fleet comprised vehicles in the 2005–2013 model years.

Finally, participant vehicles were characterized according to the nature and quality of data that could be obtained from the vehicle. Four vehicle designations were employed during the program, namely prime, subprime, legacy, and basic. The definitions of these vehicle categories were provided previously. Figure 3.16 summarizes the available data on the categories of vehicles in the participant fleets at four of the six test sites. As is evident, the majority of vehicles in the fleets at the four sites were prime vehicles.

Lessons Learned on Installations

The following list summarizes installation-related observations and lessons learned from the six study center sites.

• Keep participants informed about any delays that might occur during the installation process. In general, participants were cooperative and accommodating when problems arose.
• It is important to record as accurately as possible the condition of the vehicle (as determined during the preinstallation vehicle inspection and the installation process). It was easier to address participant complaints and comments regarding vehicle problems that arose during the program by referring to the installation and inspection records.
• When planning an installation, careful thought should go into how wires associated with the equipment are routed and secured, especially when hiding them behind interior trim panels and under seats and carpet. The deinstallation process can be simplified and expedited by strategically routing the cables and securing them to objects that are easily accessible or that do not require substantial disassembly of interior trim parts. This can reduce how much of the vehicle must be dismantled during the removal of the wires.
• Reference sources are of great assistance during the installation process. Commercially available maintenance manuals that encompass all vehicles proved very useful in determining the routing of wires and locating power lines on vehicles.
• The age of vehicles must be considered when installing the DAS equipment. While the power taps provided by the program worked well with the mini-style fuses in most newer vehicles, they did not work well in vehicles with the older, larger-style fuses. Using these power taps in a fuse box that has the larger-style fuses can result in a loose power connection that will affect DAS or RIB performance.
• Some vehicles seem to be very sensitive to power taps into circuits that are tied to the vehicle computer. Even though the circuits being used only operated a solenoid for a vent valve, for example, several vehicles were encountered that had issues with this process, while many others did not. As more and more circuits on new vehicles are controlled by the vehicle computer, more thought should go into how to provide power to the equipment.

Table 3.15. Percentage of Participants Who Switched Vehicles During Study

Study Center     No. of Primary Participants(a)   No. Who Switched Vehicles   Percentage Who Switched
Bloomington      246                              10                          4.1%
Buffalo          725                              52                          7.2%
Durham           528                              14                          2.7%
Seattle          704                              36                          5.1%
State College    269                              2                           0.7%
Tampa            728                              52                          7.1%
Total            3,200                            166                         5.2%

(a) Only primary participants were considered since they owned or otherwise had responsibility for the vehicle (i.e., additional primary and secondary drivers were not included).
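The switch percentages in Table 3.15 are simple ratios of the two count columns. The short check below is illustrative only (it is not part of the study's tooling); the counts are taken directly from the table.

```python
# Recompute the Table 3.15 switch percentages from the raw counts.
# Values: study center -> (primary participants, participants who switched).
switch_counts = {
    "Bloomington": (246, 10),
    "Buffalo": (725, 52),
    "Durham": (528, 14),
    "Seattle": (704, 36),
    "State College": (269, 2),
    "Tampa": (728, 52),
}

for center, (primary, switched) in switch_counts.items():
    print(f"{center}: {100 * switched / primary:.1f}%")

total_primary = sum(p for p, _ in switch_counts.values())    # 3,200
total_switched = sum(s for _, s in switch_counts.values())   # 166
overall_pct = round(100 * total_switched / total_primary, 1)  # 5.2
print(f"Total: {overall_pct}%")
```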

Table 3.16. Number of Participant Vehicles by Make at Each Study Center

Vehicle Make   Bloomington   Buffalo   Durham   Seattle   State College   Tampa   Total
Acura          0             1         4        8         0               4       17
Audi           0             0         0        1         1               0       2
BMW            0             3         3        1         0               1       8
Buick          7             17        5        7         4               5       45
Cadillac       1             0         3        9         1               4       18
Chevrolet      18            117       27       30        33              40      265
Chrysler       2             7         5        9         1               4       28
Dodge          2             19        4        10        3               16      54
Ford           25            125       92       105       36              116     499
Geo            1             0         2        1         2               0       6
GMC            5             5         5        4         0               6       25
Honda          35            63        91       110       39              103     441
Hyundai        19            38        22       51        13              77      220
Infiniti       1             2         1        4         0               4       12
Jeep           5             10        9        14        6               9       53
Kia            13            26        12       24        8               22      105
Lexus          0             3         4        6         0               7       20
Lincoln        0             3         4        0         0               3       10
Mazda          6             9         14       18        4               13      64
Mercedes       0             0         1        0         1               2       4
Mercury        4             18        6        13        3               9       53
Mini           0             0         2        4         0               0       6
Mitsubishi     1             4         4        2         0               10      21
Nissan         13            60        46       63        19              84      285
Oldsmobile     0             4         0        3         0               1       8
Plymouth       0             0         1        0         0               1       2
Pontiac        11            35        6        10        5               12      79
Saab           0             0         0        2         1               0       3
Saturn         1             18        7        9         5               19      59
Scion          1             5         4        5         0               4       19
Subaru         4             11        5        29        4               4       57
Suzuki         0             2         5        1         0               2       10
Toyota         72            147       139      159       79              186     782
Volkswagen     6             7         7        16        4               7       47
Volvo          0             1         4        6         1               4       16
Unknown        2             9         1        5         0               2       19
Total          255           769       545      739       273             781     3,362
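Make-by-site counts like those in Table 3.16 amount to a cross-tabulation of per-vehicle enrollment records. The sketch below shows the tally pattern; the record format and sample data are hypothetical, not the study's actual schema.

```python
from collections import Counter

# Hypothetical enrollment records: (study center, vehicle make).
records = [
    ("Bloomington", "Toyota"),
    ("Buffalo", "Ford"),
    ("Buffalo", "Toyota"),
    ("Tampa", "Honda"),
]

# Cross-tabulate make by study center (cell counts) and
# total each make across all centers (the "Total" column).
by_site_make = Counter(records)                   # (center, make) -> count
by_make = Counter(make for _, make in records)    # make -> fleet-wide count

print(by_site_make[("Buffalo", "Toyota")])  # 1
print(by_make["Toyota"])                    # 2
```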

Figure 3.15. Count of vehicle population by model year for all centers.

Figure 3.16. Number of study center vehicles by vehicle designation for four sites.

Participant Management and Fleet Maintenance

This section discusses the tasks and strategies associated with management of the participants and maintenance of the fleet of NDS vehicles. Participant management tasks included providing general support to the participants, such as answering questions about incentive payments, scheduling vehicle maintenance appointments, and answering participant or garage mechanics' questions that arose during routine non-NDS maintenance and servicing activities. Fleet maintenance–related tasks included care and upkeep of the instrumentation in the participants' vehicles, replacing an SSD that had reached its storage capacity, and repairing or replacing damaged NDS equipment. These tasks could be performed either at the study center facility or at an off-site location more convenient to the participant. In the latter case, it was necessary to send one or more technicians to the off-site location. When maintaining the vehicle, every attempt was made to accommodate the participant's schedule and preferences.

To appreciate the size and scope of this task, it is helpful to consider the total number of vehicles in the field over the 3 years of the study. Figure 3.17 provides the total vehicles—summed over all six study centers—which were in the field each month from October 2010 through October 2013. Plots for individual study centers are provided in Appendix I.

Figure 3.17. Total instrumented vehicles in field each month (all six study centers) over 3-year period.

It is of interest to consider the number of participant-months that drivers spent in the field. Figure 3.18 summarizes these data for all sites. At the beginning of the study it was anticipated that most drivers would participate for 12 months but some would participate for 24 months, providing a total of 3,102 participants and 46,800 participant-months (or 3,900 data-years) in the field (Campbell 2010). As noted in Figure 3.18, the actual distribution of participant time in the study differed from the initial plan, as some drivers participated for less than 12 months and some for more than 24 months. The deviation from the original plan was due primarily to difficulty obtaining recruits and the availability of DAS equipment. However, the study centers exhibited considerable flexibility in adapting to the program requirements and were able to provide the program a total of 3,247 participants (104.7% of plan) and 46,866 participant-months in the study (100.1% of plan). The latter represents 3,905 data-years. (Note that the 3,247 participants include 131 participants who were in the study for less than 4 months.) The figures just cited do not include the additional data that will be available from the 209 (verified and consented) secondary drivers. Data for individual sites are presented in a table in Appendix J.

Maintenance of Vehicle Fleet

A Request Tracker (RT) system was implemented by VTTI to help monitor the fleet of NDS vehicles. This system enabled the communication of messages, known as RT tickets, between VTTI and the study centers. The RT tickets were used to track issues in several areas, including problems related to vehicles in the field (e.g., issues with instrumentation, alerts that the SSD was almost full), notices that participant survey data or photos were missing in MCS, or technician-identified problems with inventory or equipment at the installation site.
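The plan-versus-actual percentages quoted above follow directly from the participant and participant-month totals; the arithmetic below is illustrative only, using the figures stated in the text.

```python
# Planned vs. actual enrollment totals from the text.
planned_participants, planned_months = 3_102, 46_800
actual_participants, actual_months = 3_247, 46_866

participants_pct = round(100 * actual_participants / planned_participants, 1)  # 104.7
months_pct = round(100 * actual_months / planned_months, 1)                    # 100.1
data_years = actual_months // 12  # 3,905 full data-years (46,866 / 12 = 3,905.5)

print(participants_pct, months_pct, data_years)
```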

These tickets were vehicle centric (identifying problems with specific vehicles) as opposed to issue centric (identifying all vehicles with a specific problem). However, not all maintenance requests had an associated RT ticket. Some maintenance issues reported by participants to the test facilities were easily corrected without contacting VTTI or creating an RT ticket. (For example, radio interference could often be addressed by simply moving the computer in the trunk.)

Table 3.17 summarizes the number of RT tickets issued at each site over the 3 years of the study. Some of these RT tickets were duplicate warnings for the same item (e.g., a drive was 70% full, 80% full, 90% full). As might be expected, the number of RT tickets at each site was roughly proportional to the size of the fleet at that site.

Table 3.17. RT Tickets Used to Track Participant Maintenance Issues

Study Center    No. of RT Tickets   Notes
Bloomington     748                 Used own in-house system for managing maintenance. RT system generated inputs into the in-house system.
Buffalo         1,979               RT tickets were used to generate participant appointments for vehicle maintenance or SSD swaps.
Durham          1,350
Seattle         1,719               Of these, 1,542 were vehicle-related. The tickets from the RT system were imported into the internal participant management database for scheduling and tracking.
State College   1,096
Tampa           2,684
Total           9,576

The NextGen SSD in each vehicle had a data capacity of 128 GB. The SSD was expected to reach its storage capacity after 4 to 6 months, depending on frequency and length of driving trips. Whenever the driver started the car, an automated "health check" would be performed to determine if all DAS components were functioning properly and communicating with the NextGen computer in the vehicle. The health check also determined the currently used capacity of the SSD. The DAS would assemble a health check message containing the component and SSD status and send the message to the VTTI server through the cellular link in the DAS. When an SSD was 70% full, the RT ticket system would issue a warning to the appropriate study center that an SSD swap would soon be required for the vehicle.

Figure 3.18. Number of participants versus months in study.

The large number of SSD swaps that needed to be performed required a strategy to efficiently accomplish the task. The initial notice of the SSD exceeding 70% capacity was usually followed up at most centers with a search for the specific vehicle on the MCS. The MCS would provide a date at which the system predicted the drive would reach 100% capacity. Due to differences in the amount of driving by each participant, the predicted date could be as little as 30 days or as much as 8 months away. These data provided additional information to each center to permit efficient scheduling of maintenance actions on a month-to-month basis.

Differing maintenance scheduling strategies were employed by the six centers. For example, the Durham study center started contacting the participant when an RT ticket indicated the SSD had reached 70% capacity. At the Tampa study center, when the SSD reached 75%, the participant was called with a request to schedule an appointment for an SSD swap. If it was difficult to reach the participant, repeat calls were made every other day and notes about the call (e.g., if voicemail was left) were recorded. After three attempts to call the participant, an e-mail was sent requesting that he or she schedule an appointment within five business days. If contact was not made within five business days of the e-mail, a letter was sent notifying the participant that if contact was not made in a timely manner, participation in the study—and payments—were in jeopardy. The Buffalo study center followed a similar process with regard to contacting participants.

The Buffalo study center evaluated the amount of time left until the drive was expected to be filled and planned appointments so that the drive level would not exceed 90%. If other maintenance actions were required, these appointments would be combined to increase the efficiency of the maintenance activities. The State College study center also tried to integrate tasks to swap SSDs with other maintenance calls. The strategy was successful because they found that participants with SSDs in the 80% full range would take a while before hitting the 90% mark.

At the Bloomington study center, RT tickets (for both maintenance and SSD swaps) were prioritized based on impact on the collection of data. SSDs greater than 90% full, dangling head units, or obscured camera views were high-priority maintenance activities. Maintenance activities in the next priority level were SSDs less than 80% full, vehicles with communication issues, and, last, misaligned cameras. Maintenance priorities also depended on VTTI's requests and the availability of parts. In general, the Bloomington study center took advantage of all maintenance visits to perform an SSD swap.

The Seattle study center began calling drivers to schedule SSD swaps when their drives were 70% full. It was their experience that, in some cases, once the drives reached 70% of storage capacity they would reach full capacity within a couple of weeks. The Seattle study center also performed SSD swaps at all maintenance appointments, regardless of how full the drive was. Furthermore, if a driver had not been seen in 6 months or more, an SSD swap was scheduled regardless of how full the drive was, to avoid data loss in case of crash, theft, or the like.

As might be expected, it was much easier to get participants to return phone calls and schedule appointments when they were owed a payment for participation. Therefore, even if their drive still had storage capacity, if a payment was due (and it was thought that the drive would fill up before another payment was due), an SSD swap would be scheduled when the payment was made.

Figure 3.19 summarizes the available data on the distribution of days between SSD swaps in the Buffalo study center fleet. The data set includes 993 SSD swaps. The time between swaps includes the number of days between the installation date and the first drive swap and then the number of days between subsequent drive swaps. Some participants had only one drive swap while others had as many as six or seven swaps during their time in the study. The vertical axis in the graph shows the number of SSD swaps for each of the bins. Also shown is the percentage of the total number of SSD swaps for each bin. About 54% of the drives were swapped after being on the road between 100 and 200 days. About 2% needed to be swapped in less than 50 days, and 2% were swapped after 500 days.

Figure 3.19. Days between SSD swaps at Buffalo study center.

Table 3.18 summarizes data on the number and type of service calls at each of the study centers. Some calls were dual purpose (equipment maintenance and SSD swap). Available data from five of the six centers indicate that, on average, 65% of the service calls were SSD swap only. The remaining 35% were a mix of equipment-maintenance-only and dual-purpose calls. Over 7,500 service calls (on-site and off-site) were performed across all six study centers.

Table 3.18. Number and Type of Service Calls Required to Maintain Vehicle Fleet

Service Call Type                                   Bloomington   Buffalo   Durham(a)   Seattle(b)   State College   Tampa    Row Total (six sites)
SSD swap only                                       470           993       ---(c)      1,199        236             1,246    ---
Percentage of total calls                           72%           60%       ---         69%          59%             67%      ---
Equipment maintenance only                          20            443       ---         0            0               522      ---
Dual purpose: SSD swap and equipment maintenance    166           207       ---         540          163             88       ---
Total all calls                                     656           1,643     1,213       1,739        399             1,856    7,506

(a) Durham tracked total service calls only. Thus, row totals for six sites are not provided except for "Total all calls."
(b) Seattle performed SSD swaps at all maintenance appointments (whether the drive was full or not). Thus, theoretically, Seattle did not perform any exclusive maintenance appointments. The data are categorized by the main purpose of the appointment (1,199 primarily SSD swap and 540 primarily maintenance). The majority of Seattle appointments were conducted at the study center facility, not in the field.
(c) Dashes indicate data not collected or total not available for all six sites.

The retrieved drives were returned to the study center, and the encrypted data were transferred from the SSD to a local VTTI-provided staging server and then to VTTI via a secure high-speed network (either Internet2 IP or National LambdaRail). The network was required to be capable of sustained 100-Mb/sec throughput with capacity for bursts of 200–300 Mb/sec. At VTTI, processing and quality control checks were performed before addition of the data to the NDS database. The final SHRP 2 NDS database is expected to approach two petabytes (2,000 terabytes) in size (Campbell 2012).

Top Maintenance Issues

At the Bloomington study center, the top maintenance issues could be divided into two categories: those occurring regularly with a low impact and those with a low frequency but a high impact. In the first (regularly occurring) category, the primary issues included NextGen DAS communication issues, misaligned cameras, and dangling head units (typically because of glue failure in hot conditions). These happened often but required relatively straightforward fixes. In the less frequent but high-impact group, radio interference, hybrid vehicle battery draining, and DAS system interference with the TPMS happened with lower frequency but could take up to a day and multiple appointments to remedy.

The Seattle, Tampa, Buffalo, and Durham study centers also experienced communication difficulties, rear camera misalignment, and problems with radar units. The Durham study center typically received notification of damage to the radar unit from participants, rather than through the RT ticket system. The Tampa study center had problems with the RIB because it overheated on occasion and failed to receive radar unit signals. These and other examples of participant vehicle maintenance issues (not including routine SSD swaps) which appeared in the RT ticket system are listed in Table 3.19 in the major issue categories defined by VTTI (Dingus et al. 2014). These categories (column 1) are listed in order of highest occurrence (communications) to lowest occurrence (radio-frequency interference). However, not all maintenance problems generated an RT ticket. Many issues identified in the list occurred infrequently but are included to show the variety of real-world maintenance issues which NDS technicians had to troubleshoot and resolve to maintain the vehicle fleet.

Table 3.19. Examples of Participant Vehicle Maintenance Issues Addressed by NDS Technicians

Communication/telemetry issue
• Communication issues, errors, or no communication for some time
• Failure to transmit advanced health checks
• NextGen computer not showing in database

Camera/video issue
• Front, face, and/or hands video not available or blurry
• Hands camera obscured; hands video upside down
• Front or face camera obscured
• Rear, hands, or face camera misaligned
• All four video views unavailable

Administrative request
• Object hanging from mirror or other requests not requiring visit to vehicle

General maintenance
• DAS computer (NextGen) needed replacing
• Return of defective DAS, radar, or head unit
• RIB missing all signals (overheating)
• RIB needs updated firmware
• SSD swap needed due to crash
• Battery drainage; grounding issues
• NextGen on continuously
• Break-in and theft of equipment

Dangling head unit
• Head unit dangling or fell off (glue failed in hot conditions)

Synchronization
• NextGen swap, real-time clock issue

Radio-frequency interference
• Interference with TPMS
• Interference with radio

Mobile Maintenance

The study centers leased or purchased one or two vehicles to use for off-site maintenance and SSD replacement tasks. Cabinets were installed in the vehicles to store basic tools (including metric and standard wrench and socket sets, screwdrivers, electric drills, and a heat gun) and spare DAS equipment needed to maintain the DAS units in the field. The vehicles were equipped with DC-to-AC inverters to provide power to hand tools and computers. The vehicles were also equipped with navigation systems and electronic toll transponders to permit efficient travel to and from maintenance locations. Figure 3.20 shows the exterior and the interior of one such maintenance vehicle (a 2010 Ford Transit Connect).

Figure 3.20. SHRP 2 van supporting off-site fleet maintenance and SSD swaps.

In general, the off-site maintenance worked quite well and was particularly effective when performing SSD swaps. The rate of participants not keeping these appointments was low (e.g., 3.9% at the Buffalo study center). To set up an off-site maintenance appointment, the scheduler would contact the participant to arrange a convenient time and place. The participant needed to be present at the appointment both to provide access to the vehicle and occasionally to accept a payment. Instructions on how to reach the participant at the off-site location were noted by the schedulers during the initial contact with the participant. Examples of these instructions are "use the doorbell" or "call on arrival" to gain access to an apartment or office building garage. On a few occasions, maintenance work had to be rescheduled because there was not a safe (off the street) place for the technician to work on the vehicle. In Pennsylvania's mountainous terrain, there were instances when the technician could not fully complete a maintenance call because of poor cell phone coverage (e.g., if the scheduler had to be contacted from the field, or if the technician wanted to update vehicle equipment status with VTTI). Hilly terrain in Seattle made it difficult at times to find a level surface to recalibrate head units, so most maintenance activities requiring calibrations were performed at the Seattle study center. In urban environments, carrying cash to pay participants at off-site appointments was, at times, a safety concern.

Driver-Related Support

Each center had a main phone number which was provided to participants should questions or problems arise. Calls from participants were received for a variety of reasons, some of which are listed in Table 3.20.
The data in this table are presented in order from the most highly cited reason to the least cited reason (based on rankings observed in Buffalo and Tampa).

Table 3.20. Reasons Participants Called Study Center Number

• Returning study center call: Scheduling an appointment for installation, maintenance, or SSD swap.
• Question about payments: When was the next payment due? Did they miss one?
• Update address or contact information: Update participant contact information in file.
• Believed SHRP 2 equipment was causing vehicle issues: Study center would schedule a maintenance appointment to have the vehicle and equipment inspected. If the vehicle was inoperable, the study center had it towed to a mechanic for further inspection by the mechanic and study center technicians.
• Could not pass inspection or get routine dealer maintenance because of NDS equipment: Study center scheduled a maintenance appointment to temporarily disconnect the equipment.
• Had appointment and could not remember where the facility was: Schedulers would provide directions to the facility.
• Could not remember how long their term was: Schedulers would provide the information to the participant.
• Called regarding a crash: If minor, a maintenance appointment was scheduled to inspect equipment and replace the SSD. If the vehicle was totaled, a deinstallation was scheduled to be performed in the field, at a mechanic's facility, or at a tow yard. If the participant wanted the video data after a crash, VTTI was informed and investigated further as needed.
• Knew an interested candidate; was the study center still accepting recruits?: Participant would be provided instructions to allow the recruit to sign up on the website or 1-800 number.
• Called about the cell phone study: Participant directed to a VTTI number.(a)
• Forgot a personal item at the facility: Schedulers would arrange for the participant to pick up the item.
• Technicians left tools in their vehicle: Participants would voluntarily drop off the tool at the facility.
• Would they get a discount on their insurance because they were in the study?: Participation in the NDS would not provide an insurance discount.
• Question about border crossing or driving on a military base: If a participant needed to cross the border into Canada, the Buffalo site scheduled a maintenance appointment and temporarily removed the head unit camera. Note that a number of Buffalo participants regularly traveled into Canada with the DAS fully functional and did not have any issues. Tampa provided a similar service if a participant intended to visit a military base. The Seattle site did not offer to remove head units; they simply screened out those who intended to cross the border and discouraged participants from crossing. (Seattle received one call from the border regarding the equipment. It is believed, based on conversations with participants, that other participants crossed, but the site heard of no other incidents.) Bloomington refused those who required occasional access to a military base unless they promised to use different vehicles when visiting the base.
• Did the NDS count as community service?: Participation in the NDS would not count as community service.
• Did the study center see footage of so-and-so cutting them off in traffic?: Participants mistakenly believed that the centers could observe all video coming from the vehicles.

(a) The cell phone study was a National Highway Traffic Safety Administration (NHTSA) initiative to determine the use of cell phones during driving activities.

Crash Investigations

During the course of the study a number of participant vehicles were involved in crashes. A process to investigate these crashes was established. This process included investigations by trained crash investigators collecting and reporting event information. The data acquired during the investigations were uploaded to VTTI for inclusion in the SHRP 2 NDS database.

There were three methods of notification when a participant was involved in a crash. The first and most common notification occurred when participants called the study center number (or sent an e-mail) to indicate that they were in a crash, as per instructions in the consent agreement. The second was incidental notifications from participants, either when they spoke with research staff during appointment scheduling or when SSDs were swapped and equipment was repaired. (These were usually lower-level crashes, and the participant assumed it was not necessary to contact the study center.) The third and least common was notification by VTTI. The Bloomington and Buffalo study centers learned of only one crash from VTTI. The Durham study center learned of none that way.

Once notified of a crash event, the study center staff went through the crash investigation rubric provided by VTTI to determine if an investigation was warranted. Crashes which were property damage only (PDO) with no injuries and/or

minor vehicle damage were not investigated. Seventy-eight such PDO crash events were reported to the sites by participants. The other more serious crash events were assigned to a particular crash investigation level depending on factors such as severity of the crash, injuries sustained, air bag deployment, and crash type. The study centers were permitted to upgrade the investigation level if they believed information of use to the NDS could be obtained. The two crash investigation levels were defined:

• Level 1. These crashes involved a higher level of vehicle damage, and vehicle occupants may have incurred minor injuries (e.g., bruises). These crashes were investigated by trained crash investigators who conducted an interview with the participant and obtained the police accident report (PAR), if one was issued. In addition, photographs of the crash site were acquired from a Google Earth application, and photographs documenting the damage to the vehicle were acquired.

• Level 2. These crashes were characterized by a higher level of vehicle damage and vehicle occupant injuries. Level 2 crashes were also investigated by trained crash investigators, who obtained measurements of physical evidence (e.g., roadway grade, surface type, roadway curvature, vehicle heading angles) in relation to a reference point and reference line. These measurements were used to make a scaled diagram of the crash site. The investigators also created a sketch of the crash site which included travel lanes, lane markings, traffic controls, physical evidence, and sidewalks. As with Level 1 crashes, photographs of the crash site and the vehicle were taken, and the investigator conducted an interview with the participant and acquired the PAR if one was issued.

Over the 3-year study, the six study sites were made aware of a total of 188 crash events. Crash investigations were performed and documented for 110 of these events. Table 3.21 shows the number of crash events which were reported by participants in each of the study areas. The numbers include events that were PDO events and not investigated, as well as the number of crashes assigned to each of the two investigation levels.

Table 3.21. Number of Crash Events Reported to Centers and Investigations in Each Level by Center

Event/Investigation     Bloomington   Buffalo   Durham   Seattle   State College   Tampa   Total
No investigation        11            26        6        20        8               7       78
Level 1 investigation   5             12        17       30        6               9       79
Level 2 investigation   4             3         0        18        3               3       31
Total events            20            41        23       68        17              19      188
Total investigations    9             15        17       48        9               12      110

So far, by reviewing the vehicle data, VTTI has confirmed the occurrence of 372 crashes. As the review of the vehicle data continues, additional crashes can be expected to be identified. Thus, according to the data, participants did not report a significant number of crashes. This may be because the VTTI-confirmed crashes include low-risk/low-severity events, such as curb strikes, which the participants did not believe to be of interest and therefore did not report to the study centers.

Fortunately, no fatal crashes occurred during the study. Although there were some injury crashes, none of the injuries were life-threatening. Figure 3.21a shows a crashed vehicle which sustained Level 1 damage (bumper fascia, quarter panel, and hood damage only; no structural damage and no air bag deployment). Figure 3.21b shows another vehicle which sustained Level 2 damage (extensive driver-side damage with air bag deployment).

In general, the cooperation of the participants in the crash investigations was very good. Only one participant refused to discuss the incident with the investigator. Some of the challenges that crash investigators encountered during the program are as follows:

• Some crash investigations were conducted under time and situational constraints, which may have compromised the quality of the investigations and retrieval of equipment. For example, at the Bloomington study center, one deinstallation was done at a salvage yard 15 minutes before the participant's vehicle was due to be compacted. Often mechanics at the repair shop where the crashed vehicle was taken had already started to remove equipment before the study center technicians arrived.

• Because virtually all crashes which the sites became aware of were reported by participants, the study center sometimes did not receive notification until months (or even a year) after the crash. Vehicles were often already repaired, and often there were no pictures available to be included in the crash report. However, in some cases, the investigator was able to get pictures of crash damage from the body shops or from the participant.

• With a large, rural geographical study area, crashes happening in other regions of the state were difficult to investigate. Finding the crash site, collecting crash reports from local police, and finding the salvage yard to which the vehicle was taken provided challenges for investigators.

• It was noted by a number of the crash investigators that the investigations (particularly Level 2) could have benefited from information acquired by the in-vehicle instrumentation. For example, a driver's poor memory of the crash location was problematic to the investigation, and the police reports were not always available in a manner timely to the investigation. Furthermore, crashes that occurred in remote rural areas were often at a nonspecified location on a roadway or freeway with no cross streets. In these cases the vehicle GPS position at the time of the crash would have been a valuable input to the crash investigation.

• At the Seattle study center, the procedures for the Level 2 site investigations had to be adjusted due to Washington State Patrol restrictions. Investigator access to freeway crash sites was not allowed for safety reasons. Since there was often only one investigator traveling to the crash site and it was unsafe to stop and photograph a location on the freeway, the investigator would use images from Google Earth as a stand-in for site documentation for freeway crashes. For verification, the investigator subsequently drove by the crash site to visually confirm that the Google Earth images accurately depicted the crash site.

• Google Earth images were used to draw a site diagram in these circumstances; these drawings were not to scale. Notes were included in the case files when access to the site was restricted by local authorities.

• Generally, PARs were not available until 2 weeks after the crash, but in some cases it was several weeks or months before they became available. Although PARs can be ordered from the state within 5 days of the crash using the electronic PAR (EPAR) number or the driver's first and last name, due to the usual delay in notification this was often not an option. In addition, drivers often provided only the investigating officer's case number and not the state EPAR number. If the crash date was not accurately reported by the driver, ordering the PAR was more difficult and time consuming.

Figure 3.21. Examples of Level 1 and Level 2 crash damage sustained by NDS vehicles: (a) Level 1 crash with damage to vehicle, no air bag deployment; (b) Level 2 crash with damage to vehicle, air bag deployed.

Lessons Learned on Participant Management and Fleet Maintenance

During the program, each site developed its own set of lessons learned when working with the participants. A number of these lessons were implemented during the course of the program to improve the efficiency of working with the public. Some of these lessons were:

• Ensure that participants are aware of the schedule for payments and when the payments will be made. A large number of phone calls to the centers during the program were from participants asking about the status of their payments. In the participants' waiting area of the Buffalo study center, a poster was put up to notify the participants that their first payment would be made in the first week of the month following their installation. This notice reduced, but did not eliminate, the number of calls regarding payment. (In the future, a simpler payment schedule might be beneficial.)

• Once equipment was installed, participants were generally very good at keeping their maintenance appointments. The percentage of participants not making their maintenance

65 unreported events may have been simple curb strikes which were discounted by participants. Deinstallations DAS deinstallation activities during the NDS could be divided into the following two categories: • Deinstallations that routinely occurred during the course of the program. These deinstallations occurred when par- ticipants either completed their time in the study or left the study before their planned completion time. An example of the former was 1-year participants who completed their time in the study. Other reasons for deinstallations were participants changing vehicles, participant death or illness, or leaving the study because they were moving. • Deinstallations of the participant fleet at the end of the program. These deinstallation activities were scheduled to begin at all study centers on September 1, 2013, and were expected to be completed by November 30, 2013. However, some deinstallations were performed in August as some participants completed their time in the study and were not extended for an additional short period (i.e., 1 or 2 months). Most of the planned deinstallations were completed on schedule by the end of November with only a few extending into December 2013. Figure 3.22 illustrates the number of deinstallations per month for two of the larger study centers (i.e., Buffalo and appointments (including SSD swaps) was low (3.9% at the Buffalo study site). One of the strategies developed to cope with no-shows was to have cross-trained staff that could perform another function if the participant didn’t show up. • Scheduling maintenance trips by considering the number of participants in neighboring towns increased the effi- ciency and number of maintenance actions that could be scheduled in a given day. This strategy reduced technician travel times between maintenance visits. 
• Having an internal (local) database to track information including participant call records, consent form versions, appointment history, and participant contact information proved to be very valuable. • Participants moved and changed phone numbers with sur- prising frequency. It was helpful to check on their current address and phone number whenever the need arose to contact them or to schedule an appointment with them. • Text messaging and e-mailing were very effective for con- tacting and scheduling younger participants. • Due to the participants’ work schedules, many sites adopted alternate work hours. Many sites had evening and Saturday hours to accommodate the participants. • It appears that participant reports alone should not be counted on to determine the number of crash events. This conclusion was reached based on the apparent difference between the number of crash events that the study centers were made aware of (by the participants) and the num- ber of crash events detected by the vehicle accelerometers. However, it is yet to be determined what fraction of these Figure 3.22. Deinstallations per month at Buffalo and Tampa study centers. 200 N um be r o f D e- In st al la tio ns 150 50 0 100 Month 201220112010 2013

66 connected). The SHRP 2 equipment fuses were replaced with conventional fuses in the fuse box under the hood. • The head unit was removed from the windshield, and any remaining residue from the adhesive was cleaned off. • The network box, OBD connector, and brake signal were removed, and SHRP 2 fuses were replaced in the fuse box under the dash board. • All cables were removed from door sills, and panels were replaced. • The rear camera and antenna were removed from the rear windshield, paying attention to the window tint if any. The rear window was cleaned of any adhesive residue. • All cables were removed from the vehicle C-pillar and all panels replaced. • The NextGen was removed from the trunk and all cables and components accounted for. • A final quality check was made to confirm that all inter- nal panels, carpet or felt, and weather stripping was put in place. • Vehicle odometer reading was entered into the Deinstall- Ware software, and the participant status was changed to “complete” or “dropped out” depending on reason for deinstallation. • Photographs of the vehicle were taken again to document the (postdeinstallation) vehicle condition for the vehicle condition checklist. • The vehicle and keys were returned to the participant. The vehicle condition checklist was signed by the participant, and the participant was thanked for his or her participa- tion in the program. Deinstallation Scheduling and Activities Deinstallations were scheduled in the same manner as instal- lations. Because a large number of deinstallations had to be performed in a 3-month period at the end of the program, no-shows were a concern before the process was undertaken. However, the no-show rate for deinstallations turned out to be considerably lower than the no-show rate for scheduled instal- lations. 
Table 3.22 provides information on the deinstallation no-show rates experienced at the study centers, times to com- plete the deinstallations, and numbers of off-site deinstallations performed. Only a few problems were experienced when performing off-site deinstallations. These problems included inclement weather, lack of safe or convenient off-street parking for the car during the deinstallation, fewer tools available than for on-site deinstallations, need to perform the deinstallation with the participant watching, and difficulty synchronizing laptops in the field. The latter problem required synchro- nizing the laptop when the technician returned to the study center. Tampa). The graph shows the relatively low deinstallation rate at the beginning of the study (October 2010 to March 2011), followed by a period in which deinstallations were per- formed on the 1-year participants (April 2012 to February 2013), followed by the fleet end-of-program deinstallation activities (September 2013 to December 2013). The procedures were similar in both sets of deinstallation activities and involved removal of equipment from partici- pant vehicles as well as the completion of exit surveys by the participant. This final opportunity to interact with the par- ticipant was used to request that any missing or incomplete assessment tests be completed. In general, most of the deinstallations were performed at the study center installation sites. Some deinstallations were performed in the field when necessary. Some of the rea- sons for performing the deinstallations in the field included (1) participant could not drive to the study center, (2) the vehicle was not functioning, and (3) the vehicle had been sold and the deinstallation took place where the vehicle was located. The general process used for on-site deinstallations at all of the study centers is described as follows: • The participant was greeted at study center entrance, lobby, or reception area. 
• The participant provided the keys to the vehicle and was directed to the assessment area to complete the exit sur- veys. If the participant was missing any of the assessment tests or surveys, they were asked to complete them at this time. • The participant was provided with the login information and access to a computer to take the exit survey and medi- cal condition survey. • The vehicle was driven to the deinstallation area by a technician. • The technician photographed the vehicle and noted any damage. The condition of the vehicle was recorded on the vehicle condition checklist. Once the technician completed the vehicle inspection, the technician reviewed the results with the participant. The participant was requested to sign the completed vehicle condition checklist to document the condition of the vehicle before deinstallation. • The technician utilized the VTTI-supplied DeinstallWare software to scan the vehicle ID bar code and confirm the vehicle information before starting the deinstallation. • The technician(s) removed equipment from the vehicle and used the DeinstallWare software to inventory all com- ponents removed from the vehicle. The radar was removed but the license plate bracket was left on the bumper if one was used during installation. • The radar interface box was deinstalled and all connections to headlights and turn signals were removed (if previously

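The internal participant-tracking database that the study centers found so valuable could be sketched as follows. This is a hypothetical design using SQLite; the report does not describe the actual schema or software the centers used, so the table and column names here are illustrative only.

```python
import sqlite3

# Hypothetical schema for a local participant-tracking database.
# The report notes that tracking call records, consent form versions,
# appointment history, and contact information proved very valuable;
# the real schema used by the study centers is not documented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE participants (
    participant_id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    phone TEXT,              -- re-verified at every contact
    address TEXT,            -- participants moved surprisingly often
    consent_version TEXT     -- consent form version on file
);
CREATE TABLE appointments (
    appt_id INTEGER PRIMARY KEY,
    participant_id INTEGER REFERENCES participants(participant_id),
    appt_date TEXT,          -- ISO date, e.g. '2013-09-15'
    appt_type TEXT,          -- 'install', 'maintenance', 'SSD swap', 'deinstall'
    status TEXT              -- 'scheduled', 'completed', 'no-show'
);
""")

# Record a (fictitious) participant and a scheduled deinstallation.
conn.execute("INSERT INTO participants VALUES (1, 'J. Doe', '555-0100', 'Buffalo, NY', 'v3')")
conn.execute("INSERT INTO appointments VALUES (1, 1, '2013-09-15', 'deinstall', 'scheduled')")

# Before calling a participant, pull current contact info and
# appointment history together in one query.
row = conn.execute("""
    SELECT p.name, p.phone, a.appt_date, a.appt_type, a.status
    FROM participants p JOIN appointments a USING (participant_id)
    ORDER BY a.appt_date
""").fetchone()
print(row)  # ('J. Doe', '555-0100', '2013-09-15', 'deinstall', 'scheduled')
```

Keeping contact details and appointment history in one place directly supports two of the lessons above: re-checking a participant's phone number and address at every contact, and grouping appointments for participants in neighboring towns.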
Table 3.22. Study Center Deinstallation and No-Show Summary

| Study Center | Deinstall No-Show Rate | Times for Deinstallation Activities | Comments |
| Bloomington | 14% average | 75 minutes average | Six deinstallations were performed in the field (in one case in a corn field). |
| Buffalo | 6.1% | 30 to 60 minutes | 30 deinstallations needed to be performed in the field during the course of the program due to vehicles that could not be driven. The vehicle problems were not related to DAS equipment. |
| Durham | 1% | 30 to 40 minutes | Only 4 to 6 deinstallations were performed in the field. All others were performed at the study center site. |
| Seattle | 2% | 15 to 20 minutes when deinstalling one vehicle at a time; 20 to 30 minutes when deinstalling two vehicles simultaneously | About 10 deinstallations were performed in the field. |
| State College | 2% | 75 minutes (mean time) | Only a small number of deinstallations were performed in the field. |
| Tampa | 6.4% | 30 to 60 minutes | |

DAS Equipment Condition

Some issues concerning the DAS equipment and vehicle condition were discovered by study center technicians during the end-of-program deinstallations:
• The Durham study center reported that two participants had added aftermarket stereo equipment to their vehicles after the installation of the DAS equipment. The stereo equipment installer had spliced into the power line going to the DAS. This did not result in any reported problems with the DAS equipment.
• All study centers reported evidence of wear and tear on the DAS equipment, ranging from signs of water presence on (and in) the NextGen to damage to the NextGen cover and wiring from impact by heavy objects. However, none of the affected vehicles appeared to suffer DAS equipment malfunctions as a result of the observed wear and tear.
• Several study centers noted that a number of participants arrived for the deinstallation with a damaged radar. An example of the radar damage observed is shown in Figure 3.23.

Figure 3.23. Illustration of undamaged and damaged radar unit: (a) undamaged radar installed next to license plate and (b) damaged radar (missing cover) next to license plate holder.

Problems Reported by Participants at the Deinstallation

During the deinstallation process, a number of participants at the study centers reported issues they had encountered during their time in the program. Primary among these issues was loss of radio reception. Generally, the problem was with low-power stations on the AM radio band. Many times the participant did not report this to the study center and simply put up with the problem. Another relatively common issue concerned illuminated TPMS warning lights. Participants at a number of sites had their TPMS warning lights illuminated when they arrived at the study center for their deinstallation. In many cases the TPMS system was reset by the technician at the study center during the deinstallation process, or the warning light went off by itself shortly after the deinstallation was completed.

Problems Noted by the Study Centers During the Deinstallation Process

There were several lessons learned and observations made during the deinstallation process:
• The GPS/cellular antenna mounted in the rear window was destroyed in a high percentage of deinstallations. This was primarily due to the soft, flexible base of the antenna being damaged in the process of removing it from the window.
• Older car models were particularly difficult to deinstall. The plastic components, such as kick panels and moldings, were very brittle, and removing some of the panels broke their fastening elements. In many cases, replacement parts for these vehicles were difficult to acquire because they were no longer available.
• Some study centers (Buffalo, Seattle, State College, and Tampa) noted issues with corrosion between the provided license plate screws and the brass inserts in the license plate frame on the radar unit. This corrosion problem also extended to the radar alignment screws.
• Several study centers developed a protocol to check features of the vehicle before returning it to the participant. These included checks to ensure proper postdeinstallation functioning of features such as the instrument lights, electronic and mechanical latches, lights, turn signals, radio, and air conditioner.

Driver-Related Activities

While the DAS equipment was being deinstalled from the participant's vehicle, the participant was escorted to the assessment area to complete two exit surveys. The first exit survey asked participants to assess their stress level during the study, whether it affected their driving, and whether their driving was altered or restricted. They were also asked to rate their experience as a participant in the study. The second survey revisited the medical conditions and medications survey to document any changes since the start of the study.

Participants were provided with access to a computer and instructions to log on to the website to complete the surveys online. They were also given the option to complete the surveys at a later time. This latter method proved problematic, as many participants would not complete the surveys after leaving the study center.

Participant Exit Survey Experiences

Study centers reported the following observations regarding the participant exit survey experience:
• The Seattle study center noted that almost all drivers over the age of 76 could not use a computer well enough to complete the surveys online. To alleviate this problem, the Seattle study center created a more user-friendly paper version of the exit surveys with a large font to make them easier to read.
• A few drivers refused to complete the medical exit survey, possibly because of its length and the fact that they had completed a similar survey at the beginning of the study.
• The medical conditions and medications exit survey was not completed as reliably as the exit survey. A number of participants remarked that they forgot their medication information and wanted to finish the survey at home.
• Participants could not be counted on to complete the exit surveys on their own. Initially the Tampa study center provided the participant login information to facilitate completing the survey at home. This method was not effective; a large number of participants who did not complete the survey at the study center did not complete it at a later time.

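The no-show rates reported in Table 3.22 are simple ratios of missed to scheduled appointments per study center. A minimal sketch of that tally from an appointment log (the data, field names, and log format here are hypothetical; they are not the actual DeinstallWare records or study center software):

```python
from collections import Counter

# Hypothetical appointment log: (study_center, status) pairs.
# Real tallies would come from each center's scheduling records.
log = [
    ("Buffalo", "completed"), ("Buffalo", "no-show"),
    ("Buffalo", "completed"), ("Buffalo", "completed"),
    ("Durham", "completed"), ("Durham", "completed"),
]

scheduled = Counter(center for center, _ in log)
missed = Counter(center for center, status in log if status == "no-show")

# No-show rate per center, as a percentage of scheduled appointments.
rates = {c: 100.0 * missed[c] / scheduled[c] for c in scheduled}
print(rates)  # {'Buffalo': 25.0, 'Durham': 0.0}
```

With each center's full log in this form, the same two-line tally reproduces a Table 3.22-style summary (e.g., Buffalo 6.1%, Durham 1%).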
TRB's second Strategic Highway Research Program (SHRP 2) Report S2-S07-RW-1, Naturalistic Driving Study: Field Data Collection, summarizes the compilation of a comprehensive naturalistic driving database. This database, together with associated roadway, driver, and environmental data, provides a resource from which to study the role of driver performance and behavior in traffic safety and how driver behavior affects the risk of crashes.