
Measuring the Effectiveness of Public Involvement in Transportation Planning and Project Development (2019)

Chapter: Chapter 4 - Testing and Revising the Measurement Tools

Suggested Citation:"Chapter 4 - Testing and Revising the Measurement Tools." National Academies of Sciences, Engineering, and Medicine. 2019. Measuring the Effectiveness of Public Involvement in Transportation Planning and Project Development. Washington, DC: The National Academies Press. doi: 10.17226/25447.


The next step was to test the survey on the three transportation projects to determine its ability to validly measure the effectiveness of public involvement, as well as the feasibility of administering the survey and the usefulness of the reporting.

Testing the Survey with the Public

Both the paper and the online versions of the survey were tested on the transportation projects as described below:

1. Puget Sound Gateway Program. The survey was tested with members of the public who had attended an in-person open house in fall 2017 and provided their email addresses. The email addresses were used to send an invitation to complete the online version of the survey. The online version of the survey was also tested with people who attended the online open house in spring 2018. Online open house participants who completed the online open house comment form were then prompted to complete the online version of the survey. A total of 33 people from the Puget Sound Gateway Program completed the survey.

2. I-405 Renton to Bellevue Widening and Express Toll Lanes Project. The paper survey was tested at an open house in spring 2018. The online version of the survey was also tested with members of the public who had attended the open house and provided their email addresses. A total of 27 people from the I-405 Renton to Bellevue Widening and Express Toll Lanes project completed the survey.

3. Washington State Ferries’ Long-Range Plan. The paper survey was tested with people who had attended one of the nine open houses in spring 2018. The online version of the survey was also tested with those who had attended the online open house in spring 2018. Online open house participants who had completed the online open house comment form were then prompted to complete the online version of the survey.
Finally, an invitation to complete the online version of the survey was sent to members of the public who had provided their email addresses at either the in-person open houses or the online open house. A total of 80 people from the Washington State Ferries’ Long-Range Plan project completed the survey.

Lessons Learned from Testing the Survey with the Public

The testing process with the public was very useful in highlighting areas for improvement, as follows:

• The survey looked “too long,” even though it took less than 10 minutes to complete. Shortening the survey was therefore important so that potential participants do not assume at first glance that completing it will be burdensome.

• The survey should preferably fit on one side of one sheet of paper so that participants do not ignore or miss questions printed on the back. Short of this, very clear words and graphics are needed to alert participants that the questions continue on the back of the page.

• Instructions need to explain how to use the response category scale, including the “don’t know” and “not applicable” categories. In one case, a participant started circling the bullet points he/she agreed with instead of using the response scale. The team suspects that others may have left items unanswered instead of using either the “don’t know” or “not applicable” categories.

• The survey needs to be offered in both paper and online versions, not only because of the increased use of online open houses and other online public engagement formats, but also because a number of people, when handing in the paper version of the survey, asked why it was not also available online (which was their preferred format).

• Instructions need to make clear that the survey should be completed based on the respondents’ experience with the public involvement completed thus far, not just the specific public involvement activity or event at which they received the survey.

The feasibility of the survey administration process also needed to be addressed, as confirmed by the following issues the team encountered during testing:

• Members of the public expressed some confusion about why they should take this survey and what the agency intended to do with the results. Transportation agencies need to partner with the consultants conducting the public involvement to communicate to the public the importance of completing the survey and how the results will benefit the public.
• Public involvement staff need to intercept attendees and urge them to complete the survey, instead of hanging back and hoping participants notice the survey placed at event comment tables. This is especially important for participants who may leave the public involvement activity without seeing the area where the survey is available.

• Signage at public involvement activities is needed to direct attendees to where they can complete the survey. This signage should also clearly communicate the benefits of doing so.

• The survey should be presented by the transportation agency as equally important to other public involvement activity surveys and comment forms, so that the survey is not ignored. For example, the transportation agencies that conducted online open houses placed the prompt to complete the survey after participants had completed the online open house comment form. As a consequence, of the 3,663 unique visitors to one of the online open houses, only 20 completed the project comment form, and of those, only 8 went on to complete the survey. A potential solution would be to show the survey invitation at whatever point the participant leaves the online open house.

• When the project has a community advisory committee, transportation agencies should consider having members of the advisory committee invite attendees to complete the survey. Advisory committee members may be more trusted messengers and may have a bigger impact on the percentage of participants who complete the survey.

• Transportation agencies or those conducting the public involvement should consider offering incentives for completing the survey to increase the response rate. Such incentives do not have to be costly; something as simple as a chance to win one of three $50 gift cards can significantly increase the response rate.
A process in which the sweepstakes entry form is provided to participants after they hand in their paper version of the survey will maintain their anonymity regarding their survey answers. For those completing the survey online, a simple redirect can be programmed at the end of the survey so that their sweepstakes entry information appears in a form separate from the survey itself.

Reducing the Length of the Survey

The length of the survey was reduced for the following reasons:

• To make completing the survey less burdensome for public involvement participants.
• To increase the validity of the survey as a measure of public involvement effectiveness.
• To increase the reliability of the items that would form an overall index, as well as sub-indices for each indicator. (In this case, reliability refers to the degree to which the items in the overall index and in the sub-indices are related to each other.)

The length of the survey was systematically reduced through several quantitative and qualitative methods.

Quantitative Methods

Factor Analysis. The team conducted a factor analysis to determine if correlations could be used to describe the different patterns in how people answered the indicator items. These patterns were counted, and an estimate was made of how many groups of items it was reasonable to expect. Next, the team explored how correlated each item was with all the other items. Then, the team made a list of all the items that did not appear in any group. [Appendix F contains the factor analysis results, and Appendix G contains more details on the factor analysis processes. These appendices can be found on the TRB website (www.trb.org) by searching for “NCHRP Research Report 905”.]

Principal Components Analysis. The team conducted a principal components analysis to determine which items were so similar to others that they could be cut from the survey without losing pertinent information. First, the team explored how correlated each item was with other groups of items. Two kinds of items to drop were identified: items that were outliers and items that were redundant. Outlier items were not highly correlated with other groups of items. They added little value to the final index.
The team also identified potential items to be dropped that were highly correlated with other items but did not individually add much to the index. These items were too similar to one another, so removing one of them could shorten the survey without changing the results. [Appendix H contains the principal components analysis results, and Appendix G contains more detail on the principal components analysis processes. These appendices can be found on the TRB website (www.trb.org) by searching for “NCHRP Research Report 905”.]

Test of Convergent Validity. In this case, the test of convergent validity was based on the assumption that ratings on the individual items should theoretically be related to the rating on the item that measured overall satisfaction with the public involvement. Items less related to overall satisfaction were considered candidates to be dropped from the survey. The results indicated a high degree of convergent validity (significant correlation coefficients of .3 or higher). [Appendix I contains the convergent validity results. This appendix can be found on the TRB website (www.trb.org) by searching for “NCHRP Research Report 905”.]

Reliability Analysis. The team conducted a reliability analysis to identify items within each indicator sub-index that were potentially less related to the other items in that sub-index. Reliability analysis allows one to study the properties of measurement indices and their component items by providing information about the relationships between individual items. The alpha model of reliability (based on the average inter-item correlations) was used, and the Cronbach alpha coefficient cutoff for reliability was set at the generally accepted level of .70 or higher. The reliability analysis produced a Cronbach alpha coefficient of .98, indicating that the items in the index were highly related to each other.
However, several items were not related and were subsequently dropped from the survey. [Appendix J contains the reliability analysis results. This appendix can be found on the TRB website (www.trb.org) by searching for “NCHRP Research Report 905”.]

Qualitative Methods

Mini-Focus Groups. Two mini-focus group sessions with public involvement participants were conducted to obtain their ideas on potential items for deletion. In these focus groups (with three participants in each group), participants individually identified the top 10 items that they thought could be deleted from the index. From among the items most often identified, participants then rank ordered them from most eligible for deletion to least eligible; all the while, the moderator probed the reasons for their choices.

Analysis of Patterns. To identify possible items to delete, the team conducted an analysis of patterns in non-numeric responses (“don’t know,” “not applicable,” and “skipped”). Patterns were compared within and across projects to determine if certain questions were harder to answer than others, and whether respondents used these non-numeric responses interchangeably. [Appendix K contains the skipped item analysis results. This appendix can be found on the TRB website (www.trb.org) by searching for “NCHRP Research Report 905”.]

Team’s Assessment. To identify items for deletion, the team conducted its own assessment of items that appeared duplicative or less critical to measuring the effectiveness of public involvement. This also served as a test of face validity.

The length of the survey was reduced from 47 questions to 38 questions. Items dropped can be found in Appendix L. The results of the principal components analysis also led to the identification of the following indicators and a slight reassignment of the items for each indicator:

• Influence and Impact. The goal of this indicator is to measure the extent to which public feedback has an impact on project decisions and to ensure that agencies are not just eliciting feedback from the public as part of a “checklist.”

• Transparency and Clarity. The goal of this indicator is to measure whether trust in government agencies has increased or improved as a result of the public involvement processes, and whether agencies were appropriately transparent about the project.

• Timing. The goal of this indicator is to evaluate whether public involvement started early enough and was of sufficient length and frequency to be valuable.

• Inclusion. The goal of this indicator is to measure the extent to which the public involvement was inclusive and representative of all targeted and affected populations.

• Targeted Engagement. The goal of this indicator is to measure the extent to which the public involvement included locations relevant to the targeted and affected populations.

• Accessibility. The goal of this indicator is to measure the extent to which the public involvement activities used multiple methods for participation.

Figure 1 presents a diagram of the entire survey testing and revision process.

Testing the Agency Version of the Survey

Another aspect of the testing involved having the staff who conducted the public involvement complete the agency version of the survey. The lead public involvement staff from each of the three transportation projects completed the survey and provided objective documentation or evidence for each of their ratings (see Figure 2 for an example of the agency version of the survey).
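The reliability and convergent validity checks described under Quantitative Methods can be illustrated with a short sketch. The report does not publish its analysis code, so this is a minimal example on synthetic ratings, not the team's actual procedure: it computes a Cronbach alpha and each item's correlation with an overall-satisfaction rating, applying the .70 and .3 cutoffs the chapter cites.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items matrix of numeric ratings."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_total_correlations(items, overall):
    """Correlation of each survey item with the overall-satisfaction rating."""
    return np.array([np.corrcoef(items[:, j], overall)[0, 1]
                     for j in range(items.shape[1])])

# Synthetic data: 200 respondents, 6 items driven by one shared attitude.
rng = np.random.default_rng(0)
attitude = rng.normal(size=200)
items = attitude[:, None] + rng.normal(scale=0.5, size=(200, 6))
overall = attitude + rng.normal(scale=0.5, size=200)

alpha = cronbach_alpha(items)      # should clear the .70 reliability cutoff
corrs = item_total_correlations(items, overall)
keep = corrs >= 0.3                # the convergent-validity cutoff from the chapter
```

Items failing either check would become candidates for deletion, mirroring the item-reduction logic described above.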

Figure 1. Survey testing and revision process. (Source: PRR, Inc.)

Figure 2. Online agency survey example. (Source: PRR, Inc.)
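The principal components step in the survey testing and revision process can likewise be sketched. Again, this is a hypothetical illustration on synthetic ratings rather than the published analysis: it builds an item correlation matrix, flags an outlier item (one that loads weakly on the first principal component) and a redundant pair (two near-duplicate items), the two kinds of drop candidates the chapter describes.

```python
import numpy as np

# Synthetic ratings: items 0 and 1 are near-duplicates, item 3 is unrelated.
rng = np.random.default_rng(1)
n = 150
signal = rng.normal(size=n)
responses = np.column_stack([
    signal + rng.normal(scale=0.3, size=n),   # item 0
    signal + rng.normal(scale=0.3, size=n),   # item 1: redundant with item 0
    signal + rng.normal(scale=0.9, size=n),   # item 2
    rng.normal(size=n),                       # item 3: outlier
])

corr = np.corrcoef(responses, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)       # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Loadings on the first principal component; weakly loading items are outliers.
loadings = eigvecs[:, 0] * np.sqrt(eigvals[0])
outliers = np.where(np.abs(loadings) < 0.5)[0]

# Pairs correlated so highly that dropping one loses little information.
i, j = np.triu_indices_from(corr, k=1)
redundant = [(int(a), int(b)) for a, b, r in zip(i, j, corr[i, j]) if r > 0.85]
```

With this synthetic data, item 3 should be flagged as an outlier and the (0, 1) pair as redundant; either kind of finding would shorten the survey without changing the final index.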

Public involvement staff who completed the agency survey were interviewed about their experience. The key findings from these interviews were as follows:

• Completing the survey was intuitive, straightforward, and easy to move through.
• It needs to be made clear that the ratings are based on the public involvement conducted thus far and not relative to any one specific public involvement activity or event.
• It was sometimes difficult to come to conclusive answers for some questions, because the project was still in the midst of public involvement activities.
• It needs to be made clear that the “not applicable” category can be used for aspects of the project that are yet to occur.
• It was sometimes difficult to recall documentation or evidence for substantiating a rating, given that the project had been going on for more than 3 years.
• The act of completing the survey planted ideas about issues the agency should be considering for future public involvement.

As a result of the public involvement agencies’ experience, the following changes were made in the instructions for completing the agency version of the survey:

• When completing the survey, the respondent should consider the public involvement completed thus far.
• The “not applicable” response category should be used for aspects of public involvement that will or might occur in the future.
• Documentation or evidence should be collected throughout the project, starting from the beginning.

Testing the Scoring Tool with Public Involvement Staff

The Excel scoring tool was also tested with public involvement staff from two of the transportation projects. Each test lasted about 1 hour and covered the scoring tool instruction manual, data entry into the scoring tool, calculation of index scores, and interpretation of scoring results.
• Testers were provided with a copy of the scoring tool instruction manual, the scoring tool itself, and a sample dataset to use for data entry.
• Testers reviewed the scoring tool instruction manual with a member of the research team. They pointed out anything they found unclear and provided suggestions for how to clarify the content. They also pointed out where anything was missing and made suggestions for revisions.
• Next, testers copied data from a sample dataset into the scoring tool and gave feedback on what would make this process easier. They reviewed the final index scores and discussed how they would interpret the results, as well as what additional information might be useful to them.
• Based on the first tester’s feedback, visuals describing the structure of the indices and the public involvement effectiveness survey were added before the second person began testing.

Based on the results of this testing, the following changes were made to the instruction manual and scoring tool.

Instruction Manual

• Added background information explaining the survey and the tool
• Used simple, conversational language
• Clearly defined all terms
• Used visuals to show the overall structure of the scoring tool and individual data entry tabs
• Provided detailed instructions on how to work in Excel and move data between spreadsheets

Scoring Tool

• Added a “Notes” field to data entry tabs
• Shaded cells to indicate weak, moderate, and strong performance, rather than using colored text
• Moved the Codebook tab to the first tab position

Assessing the Agency Experience with the Survey Process and Summary Report

Finally, the project manager from each of the transportation projects was interviewed. All three project managers reported that the testing process did not interfere with their public involvement activities. However, one project manager mentioned that some participants were confused by being presented with three surveys (the transportation agency’s survey, the transportation agency’s comment form, and the public involvement effectiveness survey) and believed this contributed to some survey burnout.

All three project managers found the summary report for their projects easy to read and very useful (see Appendix M). They mentioned that the demographic comparison section and the table showing the public and agency ratings on each of the survey questions were useful in evaluating how they were doing and in informing adjustments to the public involvement process. One project manager reported that interpreting the table that includes the public and agency scoring was not intuitive. This manager recommended moving the scale legend to the top of the table and including an example of how to read the table (see Appendix M, page M-2).

Project managers also mentioned that the organization of the report and its brief length (four pages of results and a two-page appendix with the survey questions) worked well for their and their staff’s busy schedules. One project manager shared the report with staff, who also had positive feedback. Another mentioned that the items on which the agency scored the lowest and the highest mirrored their sense of how the public involvement had been going.
They found the key findings from the survey useful in providing them with data-driven information beyond their own impressions.

Also interviewed was the primary person who managed the public involvement process for each of the transportation projects. These managers echoed the sentiments of the transportation agency project managers regarding the testing process not interfering with the public involvement activities, as well as the usefulness of the summary report. They appreciated that the report was well organized and concise, and said they used the section on demographics to identify potentially underrepresented population segments. They also found the ratings by the public, which were lower than the agency ratings, to be surprising but useful for pinpointing areas for improvement, especially where there were large discrepancies between public and agency ratings. For example, two of them mentioned that the report provided feedback that will likely lead them to change future open house events from a series of boards describing the project, with staff available to answer questions, to an actual presentation by agency staff in addition to the boards.
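The scoring tool itself is an Excel workbook and is not reproduced in this chapter, but the calculation it performs (averaging each indicator's items into a sub-index and combining sub-indices into an overall effectiveness index) can be sketched in Python. The indicator names below come from the chapter; the item assignments, the 1-5 rating scale, and the 0-100 rescaling are illustrative assumptions, not the published tool's specification.

```python
# Hypothetical sketch of an index calculation like the scoring tool's.
# Non-numeric responses ("don't know", "not applicable", skipped) are excluded,
# matching the survey's response categories.

INDICATORS = {                      # item-to-indicator assignments are invented
    "Influence and Impact": ["q1", "q2"],
    "Timing": ["q3"],
    "Accessibility": ["q4", "q5"],
}
NON_NUMERIC = {"don't know", "not applicable", None}

def sub_index(responses, item_ids):
    """Mean 1-5 rating across respondents and items, rescaled to 0-100."""
    ratings = [r[q] for r in responses for q in item_ids
               if r.get(q) not in NON_NUMERIC]
    return 100 * (sum(ratings) / len(ratings) - 1) / 4

responses = [
    {"q1": 4, "q2": 5, "q3": "don't know", "q4": 3, "q5": 4},
    {"q1": 2, "q2": 3, "q3": 4, "q4": "not applicable", "q5": 5},
]

scores = {name: sub_index(responses, ids) for name, ids in INDICATORS.items()}
overall_index = sum(scores.values()) / len(scores)
```

An agency could run the same calculation separately on public and agency-staff responses and compare the resulting sub-index scores, which is the kind of public-versus-agency comparison the summary report presents.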


TRB’s National Cooperative Highway Research Program (NCHRP) Research Report 905: Measuring the Effectiveness of Public Involvement in Transportation Planning and Project Development provides a field-validated and practitioner-ready toolkit to measure the effectiveness of a transportation agency’s public involvement activities.

The toolkit is designed to collect feedback from the public on several indicators of effectiveness and to compare that feedback with the agency’s own perceptions. The combined responses can then be used to calculate scores for each indicator and an overall effectiveness index. This allows for systematic comparison of the effectiveness of different public involvement strategies over time.

Public involvement programs provide transportation agencies and the public with a means for exchanging information about planning and project development activities. When effective, public involvement activities enable the public to participate in transportation decision making. Transportation professionals need to measure the impact of public involvement activities to ensure that they are successful and an efficient use of public resources. In addition, repeated measurement can track an agency’s performance over time, demonstrating ongoing commitment to public involvement and increasing overall accountability in the transportation decision-making process.

The toolkit includes a series of online resources, including a survey instrument for use with the public (suitable for distribution in printed form or online), an electronic survey for transportation agency staff to enable the agency to score itself, a spreadsheet-based scoring tool for converting survey response data into an effectiveness index, and guidelines for using and scoring the survey. A set of presentation slides with speaker notes describing the project are also available.

The following appendices to NCHRP 905 are also available online:

Appendix E: Survey Used for Testing

Appendix F: Factor Analysis Results

Appendix G: Description of Factor Analysis and Principal Components Analysis

Appendix H: Principal Components Analysis Results

Appendix I: Convergent Validity Test Results

Appendix J: Reliability Analysis Results

Appendix K: Skipped Item Analysis Results

