Guide for Customer-Driven Benchmarking of Maintenance Activities (2004)


CHAPTER 4: STEPS OF CUSTOMER-DRIVEN BENCHMARKING

AN OVERVIEW OF THE STEPS

Chapter 4 sets out a five-step process for customer-driven benchmarking of road maintenance. It provides a detailed description of each step and includes worksheets to help you develop measures, organize your measurement activities, record the results, analyze improvement opportunities and best practices, and implement improvements. The main steps are illustrated in Figure 8.

[Figure 8. Steps in the Benchmarking Process]

The five main steps are as follows.

1. Select Partners. The first step involves assembling a benchmarking partnership. Partners are agencies that also want to improve performance through sharing information. They have the authority to allocate internal resources and to commit to changing internal practices to conform to decisions made by the partners and governed by a partnership agreement. The process of selecting partners consists of the following:
♦ Determine the partners you will commit to work with for at least 2 years,
♦ Determine the organizational level at which you will benchmark,
♦ Determine the number of benchmarking units you want, and
♦ Develop a benchmarking partnership agreement.

2. Establish Measures. The second step involves identifying benchmarking measures that relate directly to the attributes of the products and services that a maintenance organization provides its customers. Instead of thinking about maintenance activities, you will have to reorient your thinking to what the customer is "buying." This second step is composed of the following smaller steps:
♦ Identify the role of the customer in the vision and the mission of the maintenance organization,
♦ Identify products and services that the customer is buying and the corresponding attributes and maintenance activities,
♦ Identify candidate customer-oriented outcome measures that correspond to each attribute,
♦ Identify measures of resource usage,
♦ Identify measures pertaining to hardship factors,
♦ Identify output measures, and
♦ Assess the value of using the various customer-oriented measures and select the ones you will use.

3. Measure Performance. The third step involves measuring performance and reducing the measurements into summary results that will be used to assess best performances. This third step is composed of these smaller steps:
♦ Plan and schedule measurement activities,
♦ Develop a database,
♦ Take measurements and record the results, and
♦ Share results.

4. Identify Best Performances and Practices. The fourth step involves analyzing best performers to unearth best practices and improvement opportunities. This step is composed of the following activities:
♦ Determine best performers,
♦ Identify improvement opportunities,
♦ Identify best practices of best performers,
♦ Document your own practices and best practices, and
♦ Determine the value of adopting best practices.

5. Implement and Continuously Improve. The fifth step involves implementing best practices, or making other improvements that exceed best practices, and then continuing to improve by repeating the benchmarking cycle. This step consists of the following:
♦ Identify improvement options,
♦ Prepare the organization for improvements, and
♦ Implement improvements.
Then start the benchmarking cycle again.

The remainder of this chapter takes you through each of these steps in detail.

STEP 1. SELECT PARTNERS

The first step of the benchmarking program is to select the group of benchmarking partners you will be working with. Some preliminary activities are needed to become internally organized. Indeed, one of your options is to perform customer-driven benchmarking on internal units; in this guide, however, benchmarking partners means external organizations.

You must first assemble a team that will guide the internal organization and coordinate with the benchmarking partners that you select. Review the material on selecting a team in Chapter 2, and establish your team. Once assembled, your team should review the Primer document (included with this guide) and discuss what your organization hopes to gain from customer-driven benchmarking. Make the first assignment, which is to have each team member thoroughly study Chapters 1 through 3 of this guide and review Steps 1 and 2 of Chapter 4 before a second meeting.

At the second meeting, make sure that each team member is clear about what your agency needs to do to lead effectively or to participate with other partners. All questions from team members need to be answered before continuing. At this meeting, you should also establish preliminary goals for customer-driven benchmarking in your agency. Discuss potential partners and, after reading about selecting partners, make assignments to contact targeted agencies as potential partners.

At this point, your agency may not have previously defined maintenance work in terms of customer products or services, and you may not be certain what organizational level you will want to benchmark. However, you should have a general idea of the primary maintenance elements or assets that you are interested in improving. The same will be true for agencies that you contact regarding forming a partnership. Each agency will not necessarily know, at this time, at what organizational level it desires to or can benchmark; however, it should have a preliminary idea of the level and the number of benchmarking units that it can offer to the partnership (see Worksheet 1).

Once you have established a group of potential partners and a prospective lead partner among the group, you should create a more formal agreement before proceeding. Review Appendix A and the content for a partnership agreement in Chapter 2, and then establish an agreement with the partners. Each partner will need to complete Worksheet 1 and circulate it to the lead agency, which will share it with each of the partners. This worksheet identifies the potential benchmarking units for each partner. The list may be altered later, after the partners have determined what measures to use and what products or services they wish to benchmark first.

Each potential benchmarking partner should use Worksheet 1 to identify its potential benchmarking units and their characteristics.
♦ At the top of the page, enter the following:
   – The name of the organization that is a benchmarking partner,
   – An identification code for the benchmarking partner,
   – The number of benchmarking subunits,
   – The organizational level of the subunits that will be participating in the benchmarking activity,
   – Whether this partner has entered into an oral or written benchmarking agreement, and
   – The benchmarking agreement number.
♦ In the left two columns, enter the number and the name of each subunit.
♦ In the remaining columns to the right, provide the following information for each subunit:
   – Lane miles;
   – Number of employees;
   – Maintenance budget in thousands of dollars;
   – Terrain (F = flat, H = hilly, M = mountainous); and
   – Weather/environmental region.
USE MORE THAN ONE WORKSHEET IF NECESSARY.

WORKSHEET 1. BENCHMARKING UNITS OF EACH PARTNER

Name of Benchmarking Partner: Department of Transportation     Identification Code: 00031
Number of Benchmarking Units: 13     Organizational Level of Benchmarking Units: County
Benchmarking Agreement #: B1234567

No.  Name of Benchmarking Unit   Lane Miles   No. of Employees   Budget ($000s)   Terrain (F,H,M)   Weather/Env. Region
1.   Jefferson                        325            40               900               F                 Wet
2.   Polk                             567            62              1500               F                 Wet
3.   Washington                      1789           167              4500               F                 Wet
4.   Hamilton                         456            50              1200               F                 Wet
5.   Adams                            234            30               600               H                 Snow
6.   Roosevelt                        748            80              2100               F                 Wet
7.   Truman                          2788           201              6200               F                 Wet
8.   Clinton                          980            89              3100               H                 Wet
9.   Jackson                          654            56              1800               F                 Wet
10.  Eisenhower                       401            44              1200               F                 Wet
11.  Lincoln                          777            68              2100               M                 Snow
12.  Nixon                            903            88              2600               H                 Wet
13.  Buchanan                        1123           103              3300               F                 Wet
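Agencies that keep this inventory electronically rather than on paper can represent each Worksheet 1 row as a simple record. The sketch below (in Python) is one illustrative way to do so; the class and field names are placeholders chosen for this example, not part of the worksheet itself. Grouping units by terrain and weather region, as shown, is a quick way to see which units across the partnership are likely to be comparable.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class BenchmarkingUnit:
    # One row of Worksheet 1 (field names are illustrative, not prescribed by the guide)
    partner_code: str          # identification code of the benchmarking partner
    name: str                  # name of the benchmarking unit (e.g., a county)
    lane_miles: float
    employees: int
    budget_thousands: float    # maintenance budget in $000s
    terrain: str               # "F" = flat, "H" = hilly, "M" = mountainous
    region: str                # weather/environmental region, e.g., "Wet" or "Snow"

units = [
    BenchmarkingUnit("00031", "Jefferson", 325, 40, 900, "F", "Wet"),
    BenchmarkingUnit("00031", "Adams", 234, 30, 600, "H", "Snow"),
    BenchmarkingUnit("00031", "Lincoln", 777, 68, 2100, "M", "Snow"),
    # ... remaining units from Worksheet 1, plus units supplied by other partners
]

# Group units by (terrain, region) to see which units are likely to be comparable.
groups = defaultdict(list)
for u in units:
    groups[(u.terrain, u.region)].append(u.name)

for key, names in groups.items():
    print(key, names)
```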

STEP 2. ESTABLISH MEASURES

Once the benchmarking agreement is completed, you need to return to internal activities for customer-driven benchmarking. The next step is to make sure that your maintenance organization is focused on providing customer-oriented products and services. A place to start is with the maintenance organization's or agency's vision and mission statements.

A vision statement describes what the agency wants to become in the future. It usually attempts to depict a desirable future end-state for the agency and therefore provides direction for the agency. The vision statement is also likely to address customers, attributes of key products and services, and quality. Sometimes the vision statement addresses both external and internal customers. You should carefully distinguish between the two, because the focus of customer-driven benchmarking is on external customers. The vision statement may also stress a commitment to quality, continuous improvement, or both. Examining the vision statement of your maintenance organization and of the overall agency will help provide direction for benchmarking. Below is the vision statement for Caltrans.

California will have the safest, best-managed seamless transportation system in the world.
♦ Every Caltrans employee contributes to improving mobility.
♦ Our workforce will be a diverse, professional, and effective team whose members value each other's contributions.
♦ We will be responsive and accountable.
♦ We will be well managed and serve as a model for others.
♦ We will work in partnership with other agencies and the public to ensure that our work is done in a way that is sensitive to the needs of the environment and communities.
♦ We will use the latest research and technology to improve mobility for people, goods, and information.
♦ We anticipate and plan for changes.
♦ The public will appreciate the quality of our products and services and the participation that it has had in our decisionmaking.

Use Worksheet 2 to analyze the role of the customer in the vision of your agency:
♦ Write out your current vision statement,
♦ Identify key phrases in your vision statement,
♦ Identify how each phrase relates to the customer,
♦ Assess the degree to which your vision statement relates to the customer by checking off the appropriate answer to each question,
♦ Write a revised vision statement if you feel it will benefit your benchmarking activities, and
♦ Verify that key phrases of your revised vision statement have a relationship to the customer by completing the last part of the worksheet.
USE MORE THAN ONE WORKSHEET IF NECESSARY.

WORKSHEET 2. ROLE OF CUSTOMER IN VISION

YOUR VISION STATEMENT
The department will meet the needs of its citizens, visitors, and commerce for mobility and accessibility in a manner that enables the people to prosper in a rapidly changing global economy and to enjoy a high quality of life in an environmentally sustainable manner.

KEY PHRASES AND RELATIONSHIP TO CUSTOMER
1. Will meet needs of citizens, visitors, and commerce
   Relationship: Identifies three customer segments
2. For mobility and accessibility
   Relationship: Key transportation attributes important to customers
3. That enables the state to prosper in a rapidly changing global economy
   Relationship: Addresses economic prosperity of customers and need for continuous change
4. To enjoy high quality of life in an environmentally sustainable manner
   Relationship: Addresses environmentally sustainable quality of life of customers

ASSESSMENT OF VISION STATEMENT
❑ Customer(s) directly addressed? Yes ✓
❑ Key transportation attributes explicitly addressed? Yes ✓
❑ Addresses quality/continuous improvement? Yes ✓
❑ Others:

REVISED VISION STATEMENT (for Agency or Road Maintenance)
Vision statement is OK (no revision needed).

Mission

While the vision statement of an organization describes what the agency wants to become in the future, its mission statement describes what the agency is supposed to do that justifies its existence. In most cases, the customer is prominent in the mission of the overall agency and in the mission of the maintenance organization. A common mission statement says the agency is responsible for providing safe, efficient, aesthetically pleasing transport of people and goods in a manner that is sensitive to the environment. Below is the mission statement of the Maryland State Highway Administration.

"To provide mobility for our customers on a safe, well-maintained and attractive highway system that supports Maryland's economy in an environmentally responsible manner"

Note the following characteristics of this mission statement:
1. It addresses external customers, the people who use the highway system.
2. The mission identifies in broad terms the main product or service the agency provides, namely mobility.
3. The mission stresses the importance of certain attributes of the products and services and lists them in an order that may reflect the agency's priorities: safe, well maintained, attractive, supportive of Maryland's economy, and environmentally responsible.

This mission statement, like many others, provides strong clues regarding how to begin thinking about a benchmarking program from the standpoint of the customer.

Use Worksheet 3 to analyze the role of the customer in the mission:
♦ Write out the current mission statement,
♦ Identify key phrases in your mission statement,
♦ Identify how each phrase relates to the customer,
♦ Assess the degree to which your mission statement relates to the customer by checking off the appropriate answer to each question,
♦ Write a revised mission statement if you feel it will benefit your benchmarking activities, and
♦ Verify that each key phrase of your revised mission statement has a relationship to the customer by completing the last part of the worksheet.
USE MORE THAN ONE WORKSHEET IF NECESSARY.

WORKSHEET 3. ROLE OF CUSTOMER IN MISSION

YOUR MISSION STATEMENT
The mission of the department is to provide safe, efficient, pleasing transportation that protects or enhances the environment.

KEY PHRASES AND RELATIONSHIP TO CUSTOMER
1. Provide safe, efficient, pleasing transportation
   Relationship: These are three attributes important to the road user
2. That protects or enhances the environment
   Relationship: This is an attribute important to road users, the general public, and adjacent property owners

ASSESSMENT OF MISSION STATEMENT
❑ Customer(s) directly addressed? No ✓
❑ Key transportation attributes explicitly addressed? Yes ✓
❑ Addresses quality/continuous improvement? No ✓
❑ Others:

REVISED MISSION STATEMENT (for Agency or Road Maintenance)
Our mission is to continually improve and exceed the customer's expectations by delivering safe, efficient, pleasing road transport in a manner that promotes economic growth and protects and enhances the environment.

KEY PHRASES AND RELATIONSHIP TO CUSTOMER
1. Continually improve and exceed customer expectations
   Relationship: Customer can expect continuous quality improvement and expectations to be exceeded
2. In delivering safe, efficient, pleasing road transport
   Relationship: These are highway attributes important to the customer
3. Promotes economic growth and protects and enhances the environment
   Relationship: Goals important to road users and those affected by highway activity

Attributes of Products or Services and Activities

In the past, maintenance management has been organized around activities. Managers and crews thought of themselves as performing certain types of activities, ranging from pothole repair to trimming vegetation to snow and ice control. However, these activities were not described in a way that made the relationship to the organization's customer apparent. The connection between the activities and customer satisfaction, customer-oriented outcomes, or the value customers received was weak or not evident. An increasing number of agencies have taken a step back from always thinking in terms of activities and have asked more fundamental questions:
♦ What business are we in?
♦ Who are our customers?
♦ What products and services do we deliver?
♦ What attributes of the products and services are customers buying?
♦ How do we increase or create value for our customers?
Customer-driven benchmarking begins by answering these questions.

Approach

Determining what your customers are buying will require fresh thinking. If people in your maintenance organization are accustomed to thinking in terms of maintenance activities rather than being in the business of delivering products and services to various groups of customers, you might have difficulty at first. You will need to assemble a group of key maintenance managers and charge them with determining what customers are fundamentally buying. Your challenge will be to reach consensus. Suppose you begin with winter maintenance operations. What are customers buying?
♦ Snow and ice control?
♦ Anti-icing or deicing?
♦ The ability to drive the speed limit, unrestricted by snow and ice?
♦ Safe passage to destination on roads free of snow and ice (in other words, on roads whose pavements are returned to bare condition as quickly as possible after snow or ice begins to accumulate)?

Market Research

To determine what customers are buying, your agency should conduct market research. You will need to enlist people with expertise in market research to help you; they can be found inside your organization or in market research and consulting firms. Four types of market research inputs can provide insight into what customers are buying:
1. Market research literature regarding road maintenance. See the References section.
2. Surveys previously conducted by various agencies. See Appendix E. Both the questions and the responses can be revealing in terms of what customers are buying.
3. Focus groups. Focus groups should represent different segments of customers, so you may have to conduct a number of them. See Appendix C for further guidance regarding focus groups.
4. Surveys of your own customers. Design, administer, and summarize responses to surveys of your maintenance organization's customers. See Appendix C for guidance on developing and administering surveys.

Example

The Minnesota DOT (MnDOT) undertook a major effort to rethink its approach to maintenance in business terms and defined seven products and services:
1. Clear roadways
   – Clear of debris, and
   – Roadway clear of ice and snow.

2. Smooth and reliable pavements
   – Availability of roadway for year-round use,
   – Road ride comfort, and
   – Road reliability.
3. Available bridges
4. Attractive roadsides
   – Amount of roadside litter,
   – Noxious weed control, and
   – Vegetation height control.
5. Safety features
   – Guardrail and bridge rail condition,
   – Pavement markings,
   – Roadway lighting,
   – Signing, and
   – Traffic signals functioning as designed.
6. Highway permits/regulations
   – Encroachments on the right-of-way,
   – Accessibility of permit office,
   – Consistency of permit requirements, and
   – Time required to issue permits.
7. Motorist services
   – Motorist information on unplanned conditions, and
   – Attractive rest areas.

In the process of identifying products and services, MnDOT also identified the products' and services' important attributes. The list above shows the attributes the department initially associated with each product and service. Over time, MnDOT has become increasingly sophisticated in its understanding of the attributes of its products and services, partly as a result of carrying out an extensive program of market research. Table 3 presents an expanded set of attributes that MnDOT has identified. These attributes become the basis for developing customer-oriented outcome measures.

Table 3. Attributes MnDOT Has Identified or Addressed in Market Research

Clear Roadways
   – Clear of unplanned obstructions
   – Roadway clear of ice and snow
   – Trucks plowing as soon as snow appears
   – Plowing frequency during average snowfall
   – Ability to see shoulder striping during snowfall
   – Ability to see road edge during snowfall
   – Ability to make turns at crossovers/intersections
   – Driving speed during snowfall
   – Day versus night snow removal expectations
   – Weekend versus weekday snow removal expectations
   – Radio channels listed for weather/road information
   – Bare wheel paths
   – Scattered slippery spots
   – Only right lane plowed to bare pavement
   – All driving lanes plowed to bare pavement
   – All lanes plowed full width
   – Fully cleared intersections/crossovers

Smooth and Reliable Pavements
   – Availability of roadway for year-round use
   – Road ride comfort
   – Road reliability

Available Bridges
   – Availability of bridges

Safety Features
   – Guardrail and bridge rail condition
   – Pavement markings
   – Roadway lighting
   – Signing
   – Traffic signals functioning as designed
   – Attractive woods by road and lack of clear space to woods
   – Vegetation on shoulders blocking sight distance
   – Vegetation blocking sight distance at corners
   – Vegetation blocking signs

Attractive Roadsides
   – Amount of roadside litter
   – Noxious weed presence
   – Vegetation height
   – Neatness of vegetation

Highway Permits/Regulations
   – Encroachments on right-of-way
   – Accessibility of permit office
   – Consistency of permit requirements
   – Time to issue permits

Motorist Services
   – Motorist information on unplanned conditions
   – Rest area attractiveness

Attributes of Products and Services Important to Your Customers

You will now use the inputs you have obtained from the market research literature, surveys conducted by other organizations, focus groups, and additional customer surveys your organization has undertaken to begin characterizing what customers of maintenance are buying. If no research information is available, you can use your internal team for ideas on what customers want, desire, or are buying. These are the attributes of a product or service.

Brainstorm or extract from research a list of what your customers desire. These are the outcome attributes of maintenance work that are important to your customers. Reorganize the list of attributes into categories, deriving each category by grouping attributes that relate to a specific aspect of a driver's experience. Finally, give each category a name that summarizes what the customer is receiving from the collection of attributes. The completed Worksheet 4 presents an example of how to proceed.

Use Worksheet 4 to define your products and services.
♦ In the left column, list all of the attributes (what the customer is buying, wants, or desires) from the available research or your implementation team's ideas. This is an exercise to generate a list. Then edit the list: eliminate items that are redundant or not really important.
♦ In the center column, group the attributes into categories that relate to a similar aspect of driving experiences. There will likely be 5 to 10 categories.
♦ In the right column, establish a name for each category that captures the essence of what the driving customer desires, wants, or is buying, as represented by the group of attributes. These names then become the names of the maintenance products or services.
USE MORE THAN ONE WORKSHEET IF NECESSARY.

WORKSHEET 4. FIGURING OUT YOUR PRODUCTS AND SERVICES

Attributes (edited brainstorm list)
1. Legibility of signs
2. Guardrail and bridge rail condition
3. Posted loads
4. Signpost condition
5. Plowing frequency during snowfall
6. Clear of unplanned obstructions
7. Nighttime visibility of signs and markings
8. Condition of bridge components
9. Traffic detoured × detour length
10. Bridge open and closed
11. Clear intersections and crossovers
12. Clear of ice and snow
13. Obstruction of safety features
14. Ride comfort

Attributes by Category / Product or Service Name

Clear Roadways
   a. Clear of unplanned obstructions
   b. Clear of ice and snow
   c. Plowing frequency during snowfall
   d. Clear intersections and crossovers

Smooth Pavements
   a. Ride comfort

Available Bridges
   a. Bridge open and closed
   b. Posted loads
   c. Traffic detoured × detour length
   d. Condition of bridge components

Safe Guidance
   a. Guardrail and bridge rail condition
   b. Nighttime visibility of signs and markings
   c. Legibility of signs
   d. Signpost condition
   e. Obstruction of safety features
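If the team works through this exercise in a spreadsheet or a small script rather than on paper, the grouping in Worksheet 4 amounts to a mapping from each product or service name to its list of attributes. The Python sketch below uses the example above; the dictionary layout and the consistency check are illustrative only, not a prescribed format.

```python
# Worksheet 4 example: attributes grouped under the product/service names.
products = {
    "Clear Roadways": [
        "Clear of unplanned obstructions",
        "Clear of ice and snow",
        "Plowing frequency during snowfall",
        "Clear intersections and crossovers",
    ],
    "Smooth Pavements": ["Ride comfort"],
    "Available Bridges": [
        "Bridge open and closed",
        "Posted loads",
        "Traffic detoured x detour length",
        "Condition of bridge components",
    ],
    "Safe Guidance": [
        "Guardrail and bridge rail condition",
        "Nighttime visibility of signs and markings",
        "Legibility of signs",
        "Signpost condition",
        "Obstruction of safety features",
    ],
}

# The flat, edited attribute list from the left column of Worksheet 4.
brainstormed = {
    "Legibility of signs", "Guardrail and bridge rail condition", "Posted loads",
    "Signpost condition", "Plowing frequency during snowfall",
    "Clear of unplanned obstructions", "Nighttime visibility of signs and markings",
    "Condition of bridge components", "Traffic detoured x detour length",
    "Bridge open and closed", "Clear intersections and crossovers",
    "Clear of ice and snow", "Obstruction of safety features", "Ride comfort",
}

# Check that every brainstormed attribute ended up in exactly one category.
assigned = [a for attrs in products.values() for a in attrs]
assert set(assigned) == brainstormed and len(assigned) == len(set(assigned))
```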

Mapping Maintenance Activities to the Products or Services

Maintenance management systems typically organize work according to maintenance activities. It is critical to reorganize the maintenance activities to match the products and services that the maintenance department delivers to its customers. This is significant because performance means performance of a maintenance product or service. Performance can be understood only when the level of outcomes (results) from delivering these products or services, the level of output (production), and the level of resources expended are known.

Use Worksheet 5 to map maintenance activities to the maintenance products or services.
♦ In the left column, list the products or services (probably 5–10).
♦ In the center column, list the maintenance activities that affect the attributes of the product or service.
♦ In the right column, write the maintenance code from your maintenance management system that accompanies the maintenance activity in the center column.

WORKSHEET 5. MAPPING MAINTENANCE ACTIVITIES TO PRODUCTS AND SERVICES

Name & Code of Partner: Department of Transportation, 0031
Benchmarking Agreement #: 1234567
Organizational Level of Benchmarking Unit: County
Number of Benchmarking Units: 13

Product/Service                    Maintenance Activity Description      Activity Code
1. Clear Roadway (Ice and Snow)    a. Deicing                            101
                                   b. Anti-icing                         102
                                   c. Plowing and sanding                103
                                   d. Removal of ice and snow            104
2. Smooth Pavements                a. Micro-surfacing                    150
                                   b. Fog seal                           151
                                   c. Chip and seal                      152
                                   d. Pothole repair                     153
                                   e. Deep patching                      154
3. Available Bridges               a. Deck repair                        45
                                   b. Deck replacement                   46
                                   c. Strengthening                      47
                                   d. Repair of bridge component         48
                                   e. Maintenance of bridge component    49
4. Safe Guidance                   a. Guardrail repair                   70
                                   b. Bridge rail repair                 71
                                   c. Sign repair                        72
                                   d. Sign replacement                   73
                                   e. Signpost replacement               74
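Because maintenance management systems record work by activity code, the mapping in Worksheet 5 can be kept as a lookup table from activity code to product or service. The sketch below is one illustrative way to do this; the codes come from the example worksheet, while the list of work records and the roll-up of labor hours by product are hypothetical.

```python
# Worksheet 5 example: activity codes mapped to the product or service they support.
activity_to_product = {
    101: "Clear Roadway (Ice and Snow)",  # Deicing
    102: "Clear Roadway (Ice and Snow)",  # Anti-icing
    103: "Clear Roadway (Ice and Snow)",  # Plowing and sanding
    104: "Clear Roadway (Ice and Snow)",  # Removal of ice and snow
    150: "Smooth Pavements",              # Micro-surfacing
    153: "Smooth Pavements",              # Pothole repair
    45: "Available Bridges",              # Deck repair
    70: "Safe Guidance",                  # Guardrail repair
    # ... remaining codes from Worksheet 5
}

# Hypothetical work records from a maintenance management system:
# (activity code, labor hours charged).
work_records = [(103, 120.0), (153, 35.5), (70, 16.0), (103, 64.0)]

# Roll labor hours up from individual activities to products/services.
hours_by_product = {}
for code, hours in work_records:
    product = activity_to_product[code]
    hours_by_product[product] = hours_by_product.get(product, 0.0) + hours

print(hours_by_product)
# {'Clear Roadway (Ice and Snow)': 184.0, 'Smooth Pavements': 35.5, 'Safe Guidance': 16.0}
```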

Thus far, you have determined the attributes important to the customer that are associated with different products and services, and you have listed the combination of maintenance activities that produce those products and services. The next step is to identify possible outcome measures that correspond to each product or service attribute important to the customer. You need to prioritize these candidate measures and identify the two to four most important measures from your agency's perspective. You should also determine, for each candidate measure, whether your agency currently has the data or whether the data exists at all. Refer to the list of commonly recognized measures in Chapter 3, Table 1, and to Appendix E for ideas regarding outcome measures to consider.

For each product or service attribute, fill out Worksheet 6 to identify candidate customer-driven outcome measures. These should be measures that you are currently using or that you think should be used for benchmarking this product or service.
♦ At the top of the worksheet, fill in the product or service name and the attribute(s) for which you are identifying candidate measures.
♦ In the top half of the worksheet, labeled "Outcome, Condition Measures," list candidate measures for assessing the condition of a product or service attribute resulting primarily from maintenance activities (e.g., "time to return to bare pavement" after a snowfall).
♦ In the bottom half of the worksheet, labeled "Outcome, Customer Survey Questions," list customer survey questions that you use and that you believe give a good indication of customers' satisfaction with the level of performance of the product or service attribute (e.g., "ride comfort" of pavement). Also list potential customer survey questions that you believe would give a good indication of customers' satisfaction.
♦ In the second column, indicate (yes or no) whether the measure is available, meaning that the data exists, that the agency already uses the measure, or both.
♦ In the last column, place a check if you believe this is a high-priority measure for the benchmarking partners to consider.
USE MORE THAN ONE WORKSHEET IF NECESSARY.

WORKSHEET 6. IDENTIFYING MEASURES FOR ATTRIBUTES

Product/Service: Smooth Pavement
Attribute(s): Ride Comfort

OUTCOME, CONDITION MEASURES (each rated Available? Yes/No and Priority)
1. International Roughness Index
2. Maintenance Ride Quality Index
3. Longitudinal Profile
4. Number of potholes per lane mile

OUTCOME, CUSTOMER SURVEY QUESTIONS (each rated Available? Yes/No and Priority)
1. Satisfaction with pavement smoothness (1 = very unsatisfied; 5 = very satisfied) (Priority: ✓)
2. Satisfaction with ride comfort

Calculation, Source, Network Coverage, and Quality of Outcome Measures

Before you decide which outcome measures to use, you will need to compile information on the data necessary for calculating each measure and on the availability of that data. If data is not available or does not exist for measures that you need for benchmarking, then you will need to document the data that does not currently exist.

Use Worksheet 7 to identify the calculation, availability, source, network coverage, and quality of data for each outcome measure (i.e., condition or survey measure) for each product and service attribute.
♦ At the top of the page, write the name of the product or service.
♦ Enter the attribute(s) of the product or service.
♦ In the left column, labeled "Name of Measure," list the name of each candidate benchmarking measure corresponding to the product or service attribute.
♦ In the column labeled "How Measure Calculated/Scale,"
  – Write the formula and/or description of how the measure is calculated, and
  – Write the scale for the measure.
♦ In the column labeled "Month Data Available," write the dates or months in each year that data is available, or needs to be available, to calculate the measure.
♦ In the column labeled "Where Data Stored," write the name of the system, database, or location where the data is or should be maintained.
♦ In the column labeled "Roadway Network Coverage,"
  – Describe the types or classes of roads for which data exists.
  – Indicate whether the data is 100% coverage or a sample.
  – If the data is sample data, then indicate the lowest geographical or organizational level for which the data is statistically valid.
♦ In the column labeled "Data Quality H, M, L, N," write the letter for high (H), medium (M), or low (L) that best describes your team's assessment of the data quality; if the data does not exist, then write "N."
USE MORE THAN ONE WORKSHEET IF NECESSARY.

WORKSHEET 7. OUTCOME MEASURES

Product/Service: Smooth Pavement
Attribute(s): Ride Smoothness

Name of Measure: IRI
  How Measure Calculated/Scale: Instrumentation; the number of inches of deviation in elevation from a fixed horizontal plane per mile
  Month Data Available: September
  Where Data Stored: PMS
  Roadway Network Coverage: National Highway System, 100% coverage
  Data Quality: H

Name of Measure: Survey Question
  How Measure Calculated/Scale: Semi-annual driver survey rating smoothness of pavement on a 1–5 scale
  Month Data Available: April & October
  Where Data Stored: Customer Survey Database
  Roadway Network Coverage: County
  Data Quality: M

Resources Associated with Maintenance Activities That Produce Outcomes

Historically, maintenance organizations may not have defined outcome measures for individual products or services; however, they likely have data for measuring the amount of resources used in specific time periods to deliver a product or service. In Worksheet 5, you identified the maintenance activities performed to deliver a desired product or service. In Worksheet 8, you want to identify the measures that will be used to indicate the amount of resources expended to deliver the product or service.

Resources are the labor, material, and equipment used by the maintenance agency and by other service providers with whom the agency contracts to perform maintenance activities for the product or service. It would be most useful if contract costs were divided into labor, materials, and equipment usage; however, total dollars may be the only measure available for contracts. Other measures will likely include hours for labor, pounds or gallons for material, and hours of usage or miles driven for equipment. In most cases, dollars spent for each of these resources can serve as a surrogate measure of resource usage. Also, some agencies use the quantity of a resource used (e.g., gallons of material) as a measure of the amount of work completed (output or production). For customer-driven benchmarking purposes, the quantity of a resource is a resource measure, not a production or output measure.

The purpose of Worksheet 8 is to identify the measures of resource usage of the maintenance activities for a specific product or service. At the top of the worksheet, enter the product or service that you plan to benchmark.
♦ In the first and second columns, enter the activity code and the name of each maintenance activity that will affect a measured outcome of this product or service.
♦ In the third column, labeled "Labor (UOM)," enter the units of measure (other than $) for labor resources.
♦ In the column labeled "Equipment Type," enter each primary type of equipment that is typically used to carry out the maintenance activity, and enter the equipment units of measure in the fifth column, labeled "Equipment (UOM)."
♦ In the sixth column, enter the type of material that is typically used to carry out the maintenance activity, and enter the units of measure for the material in the seventh column.
♦ In the eighth column, provide your judgment of the average quality of the resource data (H = high, M = medium, L = low).
♦ In the ninth column, labeled "Cost Data Available," indicate whether dollar expenditures are available for these resources; write L = labor, E = equipment, or M = material for each resource for which dollar expenditures are available. Write T = total cost if only the total dollar amount is available for the activity (this may be the situation for contracted activities). Also, write OH if overhead cost data is available for the activity.
♦ In the tenth and last column, for each maintenance activity, identify the lowest organizational level for which resource data is maintained and enter the number of these organizational units that have complete resource data—labor, equipment, material, and related costs.
USE MORE THAN ONE WORKSHEET IF NECESSARY.

WORKSHEET 8. RESOURCE MEASURES

Product/Service: Clear Roadways (Ice & Snow)

Activity Code 102, Anti-icing
  Labor (UOM): Hrs
  Equipment Type: Truck, Spreader
  Equipment (UOM): Hrs
  Material Type: Brine, CMA
  Material (UOM): Gallons, Pounds
  Quality of Data: H
  Cost Data Available: L, E, M, T, & OH
  Lowest Org. Level & #: Areas, 43

Activity Code 103, Plowing and Sanding
  Labor (UOM): Hrs
  Equipment Type: Truck, Plow
  Equipment (UOM): Hrs
  Material Type: Sand
  Material (UOM): Tons
  Quality of Data: H
  Cost Data Available: L, E, M, T, & OH
  Lowest Org. Level & #: Areas, 43

Hardship Factors

Roadways exist in many different environmental settings that create varying degrees of hardship. Hardship in this context means that the greater the hardship, the greater the quantity of resources required for maintenance activities to deliver a given level of a product or service. A specific correlation may not exist or be known between hardship factors and the ease of delivering maintenance products or services. However, it is generally understood that the greater the hardship, the greater the difficulty in delivering a maintenance product or service. High population density, severe weather, and difficult terrain are examples of hardship factors. Data regarding several hardship factors should be collected and measures should be calculated before a judgment is made as to which hardship measures to use in evaluating performance.

Use Worksheet 9 to identify the potential or candidate hardship measures for a product or service and the data required to calculate each measure. At the top of the page, enter the name of the product or service being benchmarked.
♦ In column one, list the factors that are believed to affect the level of resources (e.g., weather, traffic, population density).
♦ In column two, list possible measures for each respective factor and a description of how the measure is calculated.
♦ In column three, labeled "Specific Data & Source," identify the source of the data for each measure.
♦ In column four, identify the lowest organizational level for which the data and the measure are or could be available.
♦ In column five, labeled "Time Period," identify the time period covered by the measure (e.g., monthly, quarterly, or annually).
♦ In column six, the last column, rate the quality of the data: H = high, M = medium, L = low, or N = not available.

WORKSHEET 9. HARDSHIP FACTORS

Product/Service: Clear Roadways (Ice & Snow)

Factor: Weather
  Measure: Inches of freezing precipitation
  Specific Data & Source: National Weather Service
  Lowest Org. Level: District
  Time Period: Nov–Apr
  Data Quality: M

Factor: Weather
  Measure: Number of storms that require crews to treat or clear roads
  Specific Data & Source: RWIS & Maintenance Management Information System (MMIS)
  Lowest Org. Level: County
  Time Period: Nov–Apr
  Data Quality: H

Factor: Traffic
  Measure: Average daily traffic
  Lowest Org. Level: County
  Time Period: Annual
  Data Quality: M

Factor: Terrain
  Measure: Elevation change per mile in feet
  Specific Data & Source: Topographical Map
  Lowest Org. Level: Area
  Time Period: Continuous
  Data Quality: H

Outputs

Good performance reflects both quality and quantity of work; therefore, for each product or service, you will need measures of the amount of work that was accomplished—that is, production (also called output). Each primary maintenance activity of a product or service will likely have its own measures of production; however, the ones to examine are measures for the entire product or service.

Use Worksheet 10 to identify the candidate measures of production for each product or service.
♦ At the top of the worksheet, enter the product or service name.
♦ In column one, enter a list of potential measures for this product or service that would give an indication of how much total work was done in a time period.
♦ In column two, describe how each measure is or would be calculated.
♦ In column three, list the data required for the measure and the timing of the data's availability (monthly, annually, in September, etc.). If the data is not collected, indicate so by stating "NC."
♦ In column four, identify the lowest organizational level for which this data exists and the number of these organizations that have this data.
♦ In column five, rate the quality of the data for the measure: H = high, M = medium, or L = low.
USE MORE THAN ONE WORKSHEET IF NECESSARY.

WORKSHEET 10. OUTPUT MEASURES

Product/Service: Clear Roadways (Ice & Snow)

Name of Measure: Total Miles of Anti-icing, Plowing & Sanding
  How the Measure Is Calculated: Sum of miles traveled for all trucks for Activities 150, 151, & 152 for the season
  Data Required & Timing of Availability: Truck log miles from first event to last event of the season; data available daily and at end of the season (May 10)
  Organization Level & # of Orgs: Garage, 57
  Data Quality: M

Benefits and Costs of Measures

By completing the first 10 worksheets, you may well have determined that your agency should create additional measures, possibly in any of the four categories of measures: outcomes, outputs, resources, or hardship factors. As you work with benchmarking partners to determine common measures for the products or services that you wish to benchmark, some of the partners will likely need to develop new measures. Your agency may need to collect data that has not previously been collected in order to calculate measures.

Rather than indiscriminately launching activities to collect data, you and your partners should assess the cost of collecting the new data and creating a measure against the benefits that would come from having the new data and measure. This assessment could result from a thorough investigation with detailed calculations; however, a more general and subjective assessment is appropriate to ensure that there likely is a benefit to the agency and partnership in having a new measure. Also, if there are several candidate new measures for which data needs to be collected, then the cost-benefit relationships of the candidate measures should be compared. This comparison will help to ensure that the agency and partnership collect data and create the measures that are of the highest priority and do not spend unnecessary time and money collecting data that has little value.

Costs can be estimated by considering the equipment required for data collection; whether you need sample data or complete coverage (census) data for a benchmarking unit; the staff and training required for data collection; and the systems required for maintaining and/or manipulating the data to create a measure. Benefits resulting from using the measure can be estimated by determining a feasible range of cost reduction in delivering the product or service or by determining the importance to customers of improving the outcome of the product or service. The estimate can be subjective—for example, high, medium, or low—for both benefits and costs.

Use Worksheet 11 to profile your estimates of the cost-benefit relationship of candidate measures that would require new data collection and cannot currently be calculated.
♦ For each measure, rate the cost as high, medium, or low.
♦ Rate the benefit of each measure based on its usefulness in measuring the value of the product or service to the customer; also consider the feasible savings in delivering the product or service. The net benefit should be rated as high, medium, or low.
♦ Place the name of each measure in one of the nine cells that corresponds to the measure's rating on both the cost to collect and calculate the measure and the benefit from using the measure.
♦ Choose which measures to create based on which will give the best combination of high benefit and low cost. These are the measures closest to the upper right corner of the grid.

WORKSHEET 11. BENEFITS VERSUS COST OF MEASURES

Product/Service: Smooth Pavement

Candidate measures placed on the benefit (high/medium/low) versus cost (high/medium/low) grid:
♦ Survey question rating drivers' satisfaction with pavement smoothness
♦ IRI on the National Highway System and all primary roads annually, all others every other year
♦ Contractor breakout of costs of labor, equipment, and material
♦ Condition rating, surface rating inspections
♦ Potholes per lane mile
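The grid can also be assembled from the subjective ratings themselves. The following Python sketch is illustrative only: the benefit and cost ratings assigned to each candidate measure are placeholder assumptions, not values from the worked example, and the scoring rule simply favors high benefit and low cost as described above.

```python
# Sketch: rank candidate measures by subjective benefit/cost ratings.
# The ratings below are placeholders for illustration only.
BENEFIT_RANK = {"high": 3, "medium": 2, "low": 1}
COST_RANK = {"low": 3, "medium": 2, "high": 1}   # low cost is preferred

candidates = [
    # (measure name, benefit rating, cost rating) -- hypothetical ratings
    ("Survey question on satisfaction with pavement smoothness", "high", "medium"),
    ("IRI on National Highway System and primary roads", "high", "high"),
    ("Potholes per lane mile", "medium", "low"),
    ("Contractor breakout of labor/equipment/material costs", "medium", "high"),
]

def score(item):
    """Higher scores mean a better combination of high benefit and low cost."""
    _, benefit, cost = item
    return BENEFIT_RANK[benefit] + COST_RANK[cost]

for name, benefit, cost in sorted(candidates, key=score, reverse=True):
    print(f"{name}: benefit={benefit}, cost={cost}")
```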

Summary of Performance Measures for Each Product or Service That a Partner Would Like to Benchmark

Each partner will need to complete a description of the measures that it has available or believes are appropriate for each product or service to be benchmarked. This set of measures will later be reviewed by each partner, and a commitment will be reached on the common data and measures that each member of the partnership will use. From its own Worksheets 6 through 10, each partner should aggregate the outcome, resource, hardship, and output measures and use them in Worksheet 12.

Use Worksheet 12 to summarize the recommended measures that your organization uses or would like to use for benchmarking a desired product or service:
♦ At the top of the page, enter
  – The product or service being benchmarked,
  – The name and identification code of the partner,
  – The benchmarking agreement number,
  – The organizational level of the benchmarking units, and
  – The number of benchmarking units.
♦ In the left two columns, number and list the code and the name of each recommended measure. For coding, use "OC" to indicate an outcome measure, "OP" to indicate an output measure, "R" to indicate a resource measure, and "H" to indicate a hardship measure. Number each measure of each type consecutively (e.g., R1, R2, . . . RN).
♦ In the remaining columns to the right, for each outcome, output, resource, or hardship measure, provide the following information:
  – In column three, a description of the measure (e.g., mean of total segment samples of edge drop-off of more than 2″ extrapolated to the number of lane miles);
  – In column four, "UOM," the unit of measure (e.g., the number of linear ft. of edge drop-off >2″ per 1/4-mile segment);
  – In column five, "Scale," the measurement scale (e.g., linear feet/lane mile);
  – In column six, "Summary Statistic," the summary statistic for the measure (total, mean, median, etc.); and
  – In column seven, "Protocol," the measurement protocol, that is, the name or code of a document that defines the measure.
USE MORE THAN ONE WORKSHEET IF NECESSARY.

WORKSHEET 12. SUMMARY OF RECOMMENDED MEASURES

Product/Service: Smooth Pavements
Name & Code of Partner: Department of Transportation, Code 00031
Benchmarking Agreement #: B1234567
Organizational Level of Benchmarking Unit: County
Number of Benchmarking Units: 13

OC 1, IRI
  Description: Deviation in the elevation of a pavement from a fixed horizontal plane
  UOM: Inch per Mile    Scale: 50–210    Summary Statistic: Section Mean in County    Protocol: FHWA

OC 2, Survey Q on Smoothness
  Description: Semi-annual driver survey rating satisfaction with the smoothness of the pavement
  UOM: Rating    Scale: 1–5    Summary Statistic: Mean County Response    Protocol: Survey Design & Interview Instruct.

R 1, Labor
  Description: Total hours of labor for activities 150–165
  UOM: Hrs    Summary Statistic: Total Hrs    Protocol: Maint. Manual

R 2, Equipment
  Description: Total hours of equipment usage for activities 150–165
  UOM: Hrs    Summary Statistic: Total Hrs    Protocol: Maint. Manual

H 1, Degree Days
  Description: Number of degrees below freezing summed for the year
  UOM: Degrees    Scale: 0–50    Summary Statistic: Sum    Protocol: Maint. Manual Section 4.2

OP 1, Lane Miles Treated
  Description: Number of lane miles treated with activities 150–165 for the season
  UOM: Lane Miles    Scale: 0–500    Summary Statistic: Sum    Protocol: Maint. Manual Section 5.6

Availability of Performance Data and Measures

Benchmarking requires performance evaluations to be made and shared among the benchmarking units of all benchmarking partners. Performance is calculated for a specific time period—for example, monthly, semi-annually, or annually (for most customer-driven benchmarking, the time period will initially be annual). Therefore, the time of year at which each measure is available for calculating performance is important to the partners.

Suppose the product or service is "Clear Roadways" (clear of ice and snow). If "customer satisfaction with this service" is an outcome measure that partners agree to use, then it is important to know when the data (in this case, the customer research data) and the corresponding measure are available to the agency and all of the benchmarking partners. One agency might conduct a customer phone survey on a continuing basis throughout the winter season, with complete data available at the end of the season in May. Another partner might conduct a single survey in July and not have data available until October. Unless the latter partner is willing to change the timing, the type of survey, or both, the benchmarking could not take place until sometime after October. Data availability will therefore significantly affect the time of year at which the partnership can conduct benchmarking for a specific product or service. Knowing when data and corresponding measures are available is very important information to consider and share with partners.

Use Worksheet 13 to document when each candidate measure from Worksheet 12 is available. At the top of this worksheet, repeat the information from the top of Worksheet 12.
♦ Repeat the code and name of the measure from the first two columns of Worksheet 12 (e.g., OC1, Customer Satisfaction with Sign Visibility).
♦ In the third column, write the specific data that is collected for the measure (e.g., Response to Semi-annual Customer Survey).
♦ In the columns representing the months of the year, place an "x" in each month in which the data is collected or needs to be collected for the measure.
♦ Place an M (for measure) in the months in which the measure is calculated and available or should be available. If data is collected and the measure becomes available in the same month, just place an M in that month.

WORKSHEET 13. AVAILABILITY OF DATA AND MEASURE

Product/Service: Smooth Pavement
Name & Code of Partner: Department of Transportation, 00031
Benchmarking Agreement #: 1234567
Organizational Level of Benchmarking Unit: County
Number of Benchmarking Units: 13

OC 1, IRI: Roughness ratings, mean for primary roads for each county
OC 2, Survey Pavement Smoothness: Summary mean of responses to question rating smoothness of roads
R 1, Labor Hours: All labor hours logged in MMIS for activities 150–159
R 2, Equipment Hours: All equipment hours logged for activities 150–159
H 1, Degree Days: Number of degrees below freezing each day of the year
H 2, Precipitation: Amount of rain, ice, and snow fallen in a year; annual data
OP 1, Lane Miles Treated: Total # of lane miles treated by activities 150–159
For each measure, the January–December columns mark data-collection months with an "x" and the month in which the measure becomes available with an "M."
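Because a comparison can be made only after every partner's measure is available, it can be useful to compute the earliest point at which the whole partnership could benchmark a given measure. A minimal sketch follows, assuming each partner reports the month (1 = January through 12 = December) in which the measure becomes available; the partner names and months are hypothetical, echoing the May and October example above.

```python
# Sketch: find the earliest point at which all partners can supply a measure.
# Month numbers are assumed inputs; partner names and values are hypothetical.
availability = {
    "Partner A": 5,   # season-long phone survey, complete in May
    "Partner B": 10,  # single July survey, results not available until October
}

earliest_common_month = max(availability.values())
print(f"Benchmarking for this measure can begin after month {earliest_common_month}")
```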

Now that you have determined what your organization uses or would like to use as measures of performance, the lead partner organization must coordinate with each of the partners to reach agreement and to ensure that each partner is committed to a common set of measures for each product or service to be benchmarked. The lead partner will need to ensure that the completed Worksheets 5, 12, and 13 from each partner are shared with all other partners for this purpose. It is likely that individual partners will need to be flexible in three primary areas:

1. The partnership will want each of the partners to aggregate a similar (as much as is possible) set of activities to define a product or service, even though the product or service may have different names. For example, one agency may call its winter services "clear roadways," while another agency refers to the same service as "snow and ice control." The focus is not on the name, but rather on the activities that make up the product or service.

2. Any partner may need to include activities that are performed by another organization or organizational unit. For instance, in providing a smooth ride, a substantial portion of the activities that affect ride quality may be performed by construction units or by contractors. Therefore, the partnership will have to make a commitment regarding which activities of the maintenance organization and other organizations are included in the customer-oriented product or service that they want to benchmark.

3. Data collection for measurements may need to change for any given partner. For example, many maintenance organizations have instituted a "level of service" measure to determine the actual quality of highways or of specific aspects of highways and other maintenance assets. If the measurements used to determine level of service differ from one partner (and its benchmarking units) to the next, then partners cannot very well compare the performance of benchmarking units. Another example is that some partners may need to institute customer satisfaction

measures for their benchmarking units. Such measures need to be the same for all benchmarking units of all partners.

Once the partners have reached agreement and have made a commitment to the activities to be included in a product or service to be benchmarked, and to the measures and their timing of availability, a single set of Worksheets 5, 12, and 13 will be completed by the lead partner and circulated to all partners, thereby clarifying the commitment that each partner has made. The performance comparisons that will take place depend upon this commitment.

At this point, the lead partner will need to establish the time frame for the benchmarking activities and to receive a commitment from each partner for completing activities according to this time frame. For each product or service to be benchmarked, this includes the following:
♦ The beginning time for collection of performance measurement data (this assumes that data is not already available and that you are not benchmarking from past performance).
♦ A time at which the completed measures will be available to all partners.
♦ A time when the partner who will perform the performance comparisons will provide the results to all partners.
♦ A time frame for each of the "best" or better-performing benchmarking units to document their practices and make them available to each of the other partners.
♦ A time frame for partners and their respective benchmarking units to assess the practices of better-performing benchmarking units and to make decisions regarding any practices that they wish to implement.

Because the partnership will likely compare performances at some point in the future (e.g., the fall of next year), each partner will need to ensure that it has the capability and procedures for collecting the agreed-upon data for outcomes, outputs, resources, and hardship factors within the agreed-upon time periods. This may mean that there is a gap between the time that the partnership shares final information from Worksheets 5, 12, and

13 and the time when data is collected for the first benchmarking performance comparison. During this period, each partner should begin documenting the business processes of each of its benchmarking units. If there is no gap, then each partner will need to document business processes during the period of data collection.

Documenting Existing Business Processes

Part of your preparation for benchmarking should involve documenting your existing business processes, particularly those you plan to benchmark. You will need this documentation as a basis for making comparisons to the business processes associated with best practices.

Examining Existing Business Processes

You should take a preliminary look at the business processes you are most likely to benchmark and make sure you have a solid understanding of them. Many maintenance organizations have performance standards or maintenance handbooks that describe what complements of labor, equipment, and material are normally used to carry out each activity. Performance standards may also describe the steps of the business process in broad terms. If the steps are exceedingly broad, you may wish to prepare a more detailed set of steps. Also, rapidly advancing technology may have affected how you do your work, and you should understand how current and evolving technology contributes to your business process. Environmental and occupational safety regulations may pertain to a certain type of activity, and you should understand how procedures for complying with them fit into your work flow. How scheduling and daily work reporting fit into the business process can also affect productivity and outcomes. For example, organizations use different strategies to minimize the amount of time that crew leaders spend filling out daily work reports. Some methods are very effective in certain circumstances and completely free crews and their leaders to focus on maintenance work.

Business Process Diagramming

An effective way to thoroughly understand a business process is to diagram it using standard business process flow diagrams. A few simple conventions should be observed when you prepare a business process flow diagram:

1. Make a list of each step of the overall business process. Each step should be described at roughly the same level of detail.
2. Identify the personnel who carry out each step.
3. Diagram the business process using the conventions shown in Figure 9.
4. Begin every step of the business process with a verb (e.g., set up work zone, remove litter, clean spreader).
5. Connect the boxes with arrows in the sequence in which the steps of the business process occur. There may be parallel processes.
6. Some business processes involve one or more decision points. Diagram each decision point and show the business processes that follow from each branch of the decision.
7. If the gathering, storage, retrieval, and transfer of information are part of the business process, use the convention in Figure 9 to show databases that are sources or destinations of information.

Figure 10 shows an example of a business process flow diagram. Note that the actors involved in each step are identified at the top of the diagram. You could use a different convention for identifying the actors, but this one is as good as any.

Figure 9. Business Process Diagramming Conventions (symbols for a process step, a decision point with Yes/No branches, and stored data; each process step begins with a verb)

Figure 10. Example Business Process Flow Diagram (actors across the top: Team Leader, Supervisor, Clerk/Office Secretary, Team Members; steps include completing work, completing the Team Activity Card (TAC) on paper or in a remote data-entry device, reviewing/correcting/approving the TAC, entering the TAC into the computer and uploading it to the financial, equipment, payroll, and expenditure-tracking systems, and reviewing and signing timesheets)
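Before drawing the diagram, it can help to capture the steps, the actors, and the decision branches in a simple structure from which the diagram follows mechanically. The Python sketch below is one possible representation, not a standard notation; the step and actor names echo the example in Figure 10.

```python
# Sketch: represent business process steps, actors, and decision branches
# before diagramming. Field names are illustrative, not a standard notation.
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str                                       # begin with a verb
    actor: str                                      # who carries out the step
    next_steps: list = field(default_factory=list)  # names of following steps
    decision: bool = False                          # True if this is a decision point

process = [
    Step("Complete work", "Team leader and team", ["Remote data entry?"]),
    Step("Remote data entry?", "Team leader",
         ["Input TAC in remote device", "Complete paper TAC"], decision=True),
    Step("Input TAC in remote device", "Team leader",
         ["Review/correct/approve on computer"]),
    Step("Complete paper TAC", "Team leader",
         ["Review/correct/approve paper TAC"]),
]

for step in process:
    kind = "DECISION" if step.decision else "PROCESS"
    print(f"[{kind}] {step.actor}: {step.name} -> {step.next_steps}")
```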

Once you have prepared the diagram, you should also write out the corresponding steps in the manner shown below, both to make the diagram fully understandable and to check its accuracy. Frequently, writing out the steps will reveal errors or ways to draw the diagram so that it more accurately reflects the business process it depicts. The steps of the example business process shown in Figure 10 are as follows:

1. The team leader (and the rest of the team) completes work.
2a. If remote data entry occurs, the team leader inputs the Team Activity Card (TAC, or daily work report) into a remote data-entry device.
2b. A supervisor reviews, corrects, and/or approves the daily work report on a computer, and the work report is uploaded to various systems (e.g., financial, equipment, payroll, or expenditure tracking).
2c. Each team member reviews and signs a timesheet with labor hours printed out by computer.
3a. If remote data entry does not occur, the team leader fills out a paper TAC.
3b. The supervisor reviews, corrects, and/or approves the paper TAC.
3c. The clerk or office secretary enters the information on the paper TAC into a computer, and it is uploaded to various systems (e.g., financial, equipment, payroll, or expenditure tracking).
3d. Team members review and sign the timesheet, with the labor hours printed out by computer.
4. The supervisor approves the timesheets.

Developing a Repository

You should develop a repository of business process flow diagrams. You could place them in a file folder, but it is better to store them electronically in a computer: you can easily retrieve

them, place them in electronic documents, and exchange them with your benchmarking partners when you are analyzing best practices.

Usually the diagrams you will need for benchmarking are simple enough to draw, and there is no reason to use special software. You can prepare them using any standard drawing tool, including the one found in your computer office suite software. However, there are a large number of Computer-Aided Software Engineering (CASE) tools that include software for business process flow diagramming, so you could use a CASE tool instead. CASE tools typically include an electronic repository for business process flow diagrams.

Database Design

As soon as you take various outcome measurements and collect other relevant data, you will need to store them. Therefore, before you collect performance data, it is necessary to design a database. One of the benchmarking partners or a third party will need to develop the database. It is recommended that you pay careful attention to the details of database design because you may have to store a considerable amount of data; since benchmarking is a continuous activity, you will be collecting data year after year. You may be able to get by with the database that is part of the suite of software on your desktop or laptop computer. Nonetheless, consider getting the assistance of a person experienced in developing databases.

Database design includes selecting the database software you will use and establishing each of the fields, their location in the record, and their type and length. You should use standard database software that supports Structured Query Language (SQL) operations and Open Database Connectivity (ODBC). For certain applications, it may be important to store the data in a manner that easily permits standard database operations such as "joins" and "selects." In such a case, formal database design procedures may be warranted (i.e., preparing an entity-relationship diagram).
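As a rough illustration of this point, a desktop database such as SQLite can hold benchmarking measurements in a single table keyed by partner, benchmarking unit, measure code, and period. The sketch below is an assumption-laden example, not a prescribed schema: the table and column names are invented for illustration, and the sample row echoes the Worksheet 14 example (Jefferson County, IRI of 75).

```python
# Sketch: a minimal benchmarking measurement store using SQLite.
# Table and column names are illustrative, not a prescribed schema.
import sqlite3

conn = sqlite3.connect("benchmarking.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS measurement (
        partner_code   TEXT NOT NULL,   -- e.g., '00031'
        unit_name      TEXT NOT NULL,   -- benchmarking unit, e.g., county name
        measure_code   TEXT NOT NULL,   -- OC1, R1, H1, OP1, ...
        period_start   TEXT NOT NULL,   -- ISO date, start of performance period
        period_end     TEXT NOT NULL,
        value          REAL,
        PRIMARY KEY (partner_code, unit_name, measure_code, period_start)
    )
""")
conn.execute(
    "INSERT OR REPLACE INTO measurement VALUES (?, ?, ?, ?, ?, ?)",
    ("00031", "Jefferson", "OC1", "2001-11-01", "2002-10-15", 75.0),
)
conn.commit()

for row in conn.execute("SELECT unit_name, measure_code, value FROM measurement"):
    print(row)
conn.close()
```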

Other Software Design and Development

If the data you require for benchmarking comes from a variety of sources and databases, you may wish to develop interfaces to transfer data into a benchmarking data repository. Among the interfaces you might need to develop are the following:
♦ Maintenance management system interface,
♦ Roadway feature inventory database interface,
♦ GIS database interface,
♦ RWIS data interface,
♦ Pavement management system interface, and
♦ Bridge management system interface.

Data Entry and Communications Technology

With the rapid growth of wireless technology, you might want to support remote data entry into pen-based computers or laptops. Linkages between the database and remote data-entry devices will need to be established. If you decide to use field data collection devices and software—pen-based computers, voice recognition, bar coding, global positioning system receivers, or digital maps—you will need to design and program the user interface, the data entry procedures, and the data transfer procedures accordingly.

STEP 3. MEASURE PERFORMANCE

The third step of customer-driven benchmarking involves measuring performance. This entails collecting data on outcomes, resources, hardship factors, and outputs.

Collecting and Recording Data

You will measure performance at the appropriate level of the organization in accordance with your data collection plan. Collecting and recording data entails the following:

♦ Transferring related data needed for benchmarking into the database,
♦ Taking various types of measurements and entering them into the benchmarking database,
♦ Calculating any measures that are a function of the related data, and
♦ Performing quality checks on the measurement and related data.

Data collection procedures may involve surveying customers, sampling roadway sections, conducting condition assessments, and retrieving data from management systems. Regardless of whether the partnership is using electronic databases or sharing data electronically, the information needs to be verified, checked, and shared among partners. Each of the benchmarking partners will need to complete Worksheets 14, 15, 16, and 17 and submit them to each of the other partners in the partnership within the agreed-upon time frame. These worksheets contain the measures for the outcomes, resources, hardship factors, and outputs. These measures will be used for the performance comparisons.
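Quality checks can be partly automated with simple range tests run before the worksheets are shared. A minimal sketch follows; the plausible ranges shown are placeholders that a partnership would set for itself, and the measure codes follow the OC coding used in the worksheets.

```python
# Sketch: flag measurements that fall outside an agreed plausible range.
# The ranges below are placeholders; a partnership would set its own.
PLAUSIBLE_RANGE = {
    "OC1": (30, 300),   # IRI, inches per mile
    "OC2": (1, 5),      # customer satisfaction rating
}

def quality_check(measure_code: str, unit_name: str, value: float) -> None:
    """Print a warning when a value lies outside the agreed plausible range."""
    low, high = PLAUSIBLE_RANGE[measure_code]
    if not (low <= value <= high):
        print(f"CHECK: {unit_name} {measure_code}={value} outside [{low}, {high}]")

quality_check("OC2", "Jackson", 2.8)   # within range, no message
quality_check("OC1", "Jackson", 1810)  # flagged: likely a data entry error
```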

Outcomes

The purpose of Worksheet 14 is to record, for each outcome measure, the observed outcomes for each subunit of the benchmarking partner.
♦ At the top of the page, enter
  – The name of the product or service being benchmarked,
  – The name of the benchmarking partner organization,
  – An identification code for the benchmarking partner,
  – The organizational level of the benchmarking units that participated in the benchmarking activity,
  – The number of benchmarking units,
  – The benchmarking agreement number, and
  – The time period over which performance is measured.
♦ In the left two columns, number and list the name of each benchmarking unit of the benchmarking partner.
♦ Place the code and name of each outcome measure in each of the column headings to the right. Code the outcome measures as OC1, OC2, OC3, etc.
♦ For each subunit, fill in the measurement that was taken for each outcome measure (e.g., for OC1, OC2, OC3, etc.).
USE MORE THAN ONE WORKSHEET IF NECESSARY.

WORKSHEET 14. BENCHMARKING RESULTS—OUTCOME MEASURES

Product/Service: Smooth Pavement
Name of Partner: Department of Transportation    Identification Code: 00031
Organizational Level of Benchmarking Units: County    No. of Units: 13
Benchmarking Agreement #: 1234567
Period of Performance: From 11-01-01 To 10-15-02

No.  Name of Benchmarking Unit    OC 1 (IRI)    OC 2 (Customer Satisfaction Rating)
1.   Jefferson                     75            4.1
2.   Polk                          83            4.0
3.   Washington                   160            2.9
4.   Hamilton                     139            3.1
5.   Adams                        129            3.2
6.   Roosevelt                    112            3.5
7.   Truman                        82            4.0
8.   Clinton                       98            3.8
9.   Jackson                      181            2.8
10.  Eisenhower                    70            4.2
11.  Lincoln                      126            3.3
12.  Nixon                        141            3.0
13.  Buchanan                     110            3.7

Resources

The purpose of Worksheet 15 is to record, for each resource measure, the observed resource usage of each subunit of the benchmarking partner.
♦ At the top of the page, enter
  – The name of the product or service being benchmarked,
  – The name of the organization that is a benchmarking partner,
  – The identification code for the benchmarking partner,
  – The organizational level of the subunits that participated in the benchmarking activity,
  – The number of benchmarking subunits,
  – The benchmarking agreement number, and
  – The time period over which performance is measured.
♦ In the left two columns, number and list the name of each subunit of the benchmarking partner.
♦ Put the code and name of each resource measure in each of the column headings to the right. Code the resource measures as R1, R2, R3, etc.
♦ The benchmarking partner will need to fill out the remainder of the worksheet or provide the data.
♦ For each subunit, fill in the measurement that was taken for each resource measure (e.g., for R1, R2, R3, etc.). The measurement should be consistent with the relevant summary statistic (e.g., total cost for each county over the time period from January through December).
USE MORE THAN ONE WORKSHEET IF NECESSARY.

WORKSHEET 15. BENCHMARKING RESULTS—RESOURCE MEASURES

Product/Service: Smooth Pavement
Name of Partner: Department of Transportation    Identification Code: 00031
Organizational Level of Benchmarking Units: County    No. of Units: 13
Benchmarking Agreement #: 1234567
Period of Performance: From 11-01-01 To 10-15-02

Resource measures (cost in thousands of $)

No.  Name of Benchmarking Unit    R1 (Maint.)    R2 (Contract)    Total
1.   Jefferson                       456           33,700          34,156
2.   Polk                            691           25,350          26,041
3.   Washington                    1,210           28,740          29,950
4.   Hamilton                        631           24,796          25,427
5.   Adams                         1,100           22,330          23,430
6.   Roosevelt                       490           20,790          21,280
7.   Truman                        3,475          131,600         135,075
8.   Clinton                         675           12,260          12,935
9.   Jackson                       1,517           29,000          30,517
10.  Eisenhower                      897           13,100          13,997
11.  Lincoln                       1,400            9,473          10,873
12.  Nixon                           859           20,600          21,459
13.  Buchanan                      1,263           18,429          19,692

Hardship Factors

The purpose of Worksheet 16 is to record, for each hardship measure, the observed hardship level of each subunit of the benchmarking partner.
♦ At the top of the page, enter
  – The name of the organization that is a benchmarking partner,
  – The identification code for the benchmarking partner,
  – The organizational level of the subunits that participated in the benchmarking activity,
  – The number of benchmarking subunits,
  – The benchmarking agreement number, and
  – The time period over which performance is measured.
♦ In the left two columns, number and list the name of each subunit of the benchmarking partner.
♦ Put the code and name of each hardship measure in each of the column headings to the right. Code the hardship measures as H1, H2, H3, etc.
♦ The benchmarking partner will need to fill out the remainder of the worksheet or provide the data.
♦ For each subunit, fill in the measurement that was taken for each hardship measure (e.g., for H1, H2, H3, etc.). The measurement should be consistent with the relevant summary statistic (e.g., mean daily high temperature for each county over the time period from January through December).
USE MORE THAN ONE WORKSHEET IF NECESSARY.

WORKSHEET 16. BENCHMARKING RESULTS—HARDSHIP (UNCONTROLLABLE) FACTORS

Product/Service: Smooth Pavement
Name of Partner: Department of Transportation    Identification Code: 00031
Organizational Level of Benchmarking Units: County    No. of Units: 13
Benchmarking Agreement #: 1234567
Period of Performance: From 11-01-01 To 10-15-02

No.  Name of Benchmarking Unit    H 1 (ADT)    H 2 (Degree Days)    H 3 (Liquid Equiv. Precip.)
1.   Jefferson                       401          1,539                29.1
2.   Polk                            275          1,819                28.2
3.   Washington                      159          1,654                26.3
4.   Hamilton                        310          1,679                31.1
5.   Adams                           950          1,455                27.9
6.   Roosevelt                       600          1,500                23.5
7.   Truman                        1,817          1,009                26.7
8.   Clinton                         851          1,103                34.2
9.   Jackson                       1,310            731                36.7
10.  Eisenhower                      729            761                31.0
11.  Lincoln                         557          1,216                29.8
12.  Nixon                           392          1,310                24.0
13.  Buchanan                        992            712                21.1

Outputs

The purpose of Worksheet 17 is to record, for each output measure, the observed output of each subunit of the benchmarking partner.
♦ At the top of the page, enter
  – The name of the organization that is a benchmarking partner,
  – The identification code for the benchmarking partner,
  – The organizational level of the subunits that participated in the benchmarking activity,
  – The number of benchmarking subunits,
  – The benchmarking agreement number, and
  – The time period over which performance is measured.
♦ In the left two columns, number and list the name of each subunit of the benchmarking partner.
♦ Put the code and name of each output measure in each of the column headings to the right. Code the output measures as OP1, OP2, OP3, etc.
♦ The benchmarking partner should complete the remainder of the worksheet.
♦ For each subunit, fill in the measurement that was taken for each output measure (e.g., for OP1, OP2, OP3, etc.). The measurement should be consistent with the relevant summary statistic (e.g., the total for each county over the time period from January through December).
USE MORE THAN ONE WORKSHEET IF NECESSARY.

WORKSHEET 17. BENCHMARKING RESULTS—OUTPUT MEASURES

Product/Service: Smooth Pavement
Name of Partner: Department of Transportation    Identification Code: 00031
Organizational Level of Benchmarking Units: County    No. of Units: 13
Benchmarking Agreement #: 1234567
Period of Performance: From 11-01-01 To 10-15-02

No.  Name of Benchmarking Unit    OP 1 (Maint. Lane Miles)    OP 2 (Contract Miles)    OP 3 (Total Miles)
1.   Jefferson                      190                         371                      561
2.   Polk                            57                         709                      428
3.   Washington                     130                         250                      380
4.   Hamilton                       199                         679                      878
5.   Adams                          410                         412                      822
6.   Roosevelt                      165                         810                      975
7.   Truman                       1,400                         390                    1,790
8.   Clinton                        390                         401                      791
9.   Jackson                        195                         318                      513
10.  Eisenhower                     410                         527                      937
11.  Lincoln                        851                         755                    1,606
12.  Nixon                          417                         498                      915
13.  Buchanan                       537                         611                    1,148

STEP 4. IDENTIFY BEST PERFORMANCES AND PRACTICES

All the preparation described above leads to the heart of the matter—evaluating the outcomes and resources used by each benchmarking partner to identify best performers and improvement opportunities for each organizational unit. There are many possible approaches to evaluating performance, and this guide describes a few that are useful to maintenance organizations. The guide describes a simple approach to assessing performance and then presents a rigorous procedure capable of simultaneously handling outcomes, inputs, and external factors for large numbers of benchmarking units. But first, some important definitions are given:

♦ Best performance: a performance such that no other performance could produce higher customer-oriented outcomes in one or more dimensions of measurement with the same resources and under similar conditions or, equivalently, a performance such that no other performance could produce the same customer-oriented outcomes with fewer resources or under worse conditions. There is no single best performance because it depends on the outcomes, inputs, and levels of hardship factors being examined.
♦ Best performer: a performer that produces a best performance.
♦ Frontier of best performances: the boundary represented by the lines through the points connecting the best performances (see Figure 11).
♦ Improvement opportunity: the gap, in one or more measurement dimensions, between the frontier connecting best performances and a performance inside (i.e., below) the frontier.
♦ Best practice: a business practice associated with a best performance.
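The definition of best performance is, in effect, a Pareto-dominance test: a unit is on the frontier if no other unit is at least as good on every dimension (once each measure is oriented so that larger values are better) and strictly better on at least one. The following Python sketch illustrates that test with hypothetical two-dimensional data (an outcome to be maximized and a cost to be minimized); it is not the data envelopment analysis procedure mentioned later.

```python
# Sketch: identify best performers (the frontier) by Pareto dominance.
# Each unit is scored on dimensions oriented so that larger is always better,
# e.g., (customer satisfaction, -cost per lane mile). Data is hypothetical.
units = {
    "Unit 1": (8.1, -1070),
    "Unit 2": (7.9, -388),
    "Unit 3": (7.5, -735),
}

def dominates(a, b):
    """True if performance a is at least as good as b everywhere and better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

best_performers = [
    name for name, perf in units.items()
    if not any(dominates(other, perf)
               for other_name, other in units.items() if other_name != name)
]
print(best_performers)  # units on the frontier of best performances
```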

Figure 11. Best Performance

Simplified Benchmarking Procedure

The overriding philosophy of customer-driven benchmarking is that best performers have the highest customer-driven outcomes relative to the resources used while taking into account significant differences in production requirements (outputs) and hardship factors (i.e., factors outside their control). If you are working with just a few benchmarking units—between 7 and 20—it is possible to use a process of visual inspection to obtain enough insight to identify benchmarking units that are best performers and, therefore, sources of best practices. If you have more than 20 units, visual inspection becomes difficult; if you have benchmarking units numbering higher than 30—for example, in the hundreds—you will need to use mathematical and statistical analysis tools such as the data envelopment analysis discussed below.

Assuming you have just a small number of benchmarking units, you can analyze their benchmarking data by going through the following steps:

1. Prepare spreadsheet: present the data in a spreadsheet for each outcome, resource, output, and hardship measure for each benchmarking unit.

2. Determine value: examine each measure and establish whether increasing or decreasing values of the measure are better or worse from the standpoint of performance. For example, higher customer satisfaction ratings are better, but higher resource usage is worse.

3. Plot bar graphs: plot a bar graph for each measure so that you can see which are the three or four best-performing benchmarking units when judged according to that measure of performance. The best performers will vary depending upon the selection of the measure. You can obtain this information from the spreadsheet, but the bar graphs help you see more clearly which are the best performers for each measure.

4. Consolidate measures: attempt to consolidate the measures in the spreadsheet you developed under the first step so there are as few as possible—for example, five. Do not exceed seven, because it is well established in psychological research that individuals have difficulty simultaneously weighing more than seven factors at once. When you consolidate measures, try to do it in such a way that the reduced set of measures provides more insight into the performance of each of the benchmarking units. Also, establish for each new measure whether increasing or decreasing values represent better performance.

5. Prepare a new spreadsheet: build a new spreadsheet that shows, for the reduced set of measures, the outcomes, resource usage, outputs, and hardship factors combined in new ways for each benchmarking unit. Now you can determine the best performers by visual inspection.

6. Identify best performers: for each measure, highlight the three or four best performers. You can do this highlighting using the "cell color fill" feature of the spreadsheet software. Then go down the list of benchmarking units and see which ones have the most important cells highlighted or the most cells highlighted. Since you are concerned with customer-driven benchmarking, you want to identify units that do well in serving their customers as reflected by customer survey information, by a technical measure of performance related to the attributes of roads

that customers care about, or both. Furthermore, in the best of all worlds, it is desirable that the organizations with the highest customer-oriented outcomes also have the lowest resource usage and the highest production, and achieve this regardless of the level of hardship. Usually you will find that no benchmarking unit satisfies all these criteria simultaneously and that several could be identified as best performers and therefore are potential sources of best-practices information.

Let's go through an example using the data obtained from the field test used to validate the procedures in this guide.

Prepare Spreadsheet

The first step is to put all the measurement data for each benchmarking unit in a spreadsheet. Table 4 shows a spreadsheet with groups of outcome, resource, output, and hardship measures.

Table 4. Performance Measures for 12 Districts

Outcomes: Customer Satisfaction Rating; Regain Time. Resources: Labor Cost; Equipment Cost; Material Cost. Output: Total Miles Covered for Season. Hardship: Actual Lane Miles; Number of Snow and Ice Events; Average Daily VMT.

District  Cust. Sat.  Regain Time  Labor Cost   Equipment Cost  Material Cost  Total Miles Covered  Lane Miles  Snow/Ice Events  Avg. Daily VMT
A         8.1         12.2         $536,568     $661,478        $899,520       242,060              1,960       95               4,262,352
B         8.1         34.7         $420,765     $437,788        $666,665       214,819              1,809       95               2,315,384
C         7.9         6.4          $422,308     $847,359        $254,430       490,051              3,933       89               3,280,673
D         7.5         6.2          $238,392     $551,179        $669,172       139,991              1,984       72               3,445,186
E         7.5         4.9          $686,286     $862,725        $527,519       141,725              2,072       72               7,908,242
F         7.5         10.9         $580,406     $1,278,141      $632,392       277,679              3,673       63               4,850,026
G         8.2         3.4          $3,426,774   $6,108,419      $3,107,224     398,279              3,751       56               41,892,999
H         7.7         5.6          $519,652     $487,406        $775,949       164,425              1,931       65               4,049,412
I         7.7         5.4          $645,410     $786,760        $477,106       109,395              1,700       65               4,964,813
J         7.7         8.2          $514,695     $851,307        $480,502       251,281              1,931       91               2,914,743
K         7.7         5.7          $457,553     $449,117        $389,594       193,980              1,579       91               2,173,749
L         7.5         43.8         $261,447     $386,734        $203,525       267,262              3,035       74               3,601,587

Determine Value

The second step in the example is to determine whether increasing or decreasing values of each measure are better.

♦ Outcomes
  – Customer satisfaction rating—higher values are better.
  – Regain time (time required to restore bare pavement after a snow storm)—lower values are better.
♦ Resources
  – Labor—lower values are better.
  – Equipment—lower values are better.
  – Material—lower values are better.
♦ Output
  – Total miles covered per season—higher values are better, given a certain amount of snow and ice.
♦ Hardship factors
  – Lane miles—fewer are better.
  – Number of snow and ice events—fewer are better.
  – Average daily vehicle-miles traveled (VMT)—more is better because more customers are being served.

Plot Bar Graphs

By graphing how each benchmarking unit performs with regard to each measure, one can obtain a clear picture of which benchmarking units are the best performers when examined from the standpoint of a single dimension of performance. The following is a series of bar graphs providing different views of the performance of the benchmarking units depending on the measure of interest.

Figure 12a. Outcome: Customer Satisfaction

Figure 12a shows that District G achieved the highest level of customer satisfaction. Districts A, B, and C also did well in this regard.

Figure 12b. Outcome: Regain Time

Figure 12b shows that Districts E, G, H, I, and K regained bare pavement in the shortest average time.

Figure 12c. Resource: Labor

Figure 12c shows each district's labor costs. The districts with the lowest labor costs were D, L, B, and C. District G is an aberration—its labor costs are many times the costs of the other districts.

Figure 12d. Resource: Equipment

Figure 12d shows the equipment costs for each district. The districts with the lowest equipment costs were B, K, L, H, and D. Again, District G is an aberration—its equipment costs are many times the costs of the other districts.

Figure 12e. Resource: Material Costs

Figure 12e shows that Districts C, L, and K have the lowest material costs.

Figure 12f. Output: Total Miles Covered for Season

Figure 12f shows that Districts C, G, and F accomplished the most snow and ice control during the year, measured in terms of miles. "Total Miles Covered for the Season" equals the total lane miles times the average percent of lane miles covered per storm event, which is then multiplied by the number of events or

storms for the season. Some storms may require going over all the roads numerous times.

Figure 12g. Hardship: Actual Lane Miles

Figure 12g presents the number of lane miles in each district that require attention when ice or snow accumulates. Districts C, G, and F have the most lane miles to address.

Figure 12h. Hardship Factor: Number of Snow and Ice Events

Figure 12h shows the number of snow and ice events that occurred in each district. The more events, the greater the challenge, everything else being equal. Districts A, B, C, J, and K experienced the most snow and ice events.

Figure 12i. Hardship Factor: Average Daily VMT

Figure 12i presents the level of traffic in each district expressed in terms of average daily VMT. District G has a far greater challenge in serving traffic and operating in traffic than does any other district. Districts E, A, and I are faced with more daily VMT than are the remaining districts.

These bar graphs provide some clarity regarding how well each district performs with regard to each variable and the hardships each faces in delivering winter services to its customers.

Consolidate Measures

The original table (Table 4) presents nine measures, which are too many to absorb and to use to identify best performers. By judiciously combining these measures, it is possible to obtain a clear picture of how well each district is able to serve its customers while managing its resources effectively and contending with hardship factors. The original set of measures can be reduced to five that are useful for identifying best performers and searching for best practices:

1. Customer satisfaction rating (outcome measure);
2. Regain time (outcome measure);

3. Daily VMT per total dollar expended (a combination of hardship and resource measures, with emphasis on customers served per dollar of expenditure);
4. Cost per lane mile (a combination of resource and hardship measures that addresses the cost efficiency of serving a lane mile of highway); and
5. Number of snow and ice events (hardship factor).

How was the reduced set of measures determined? A straightforward step is to combine the three cost measures (labor, equipment, and material) into "total resource cost." Furthermore, by dividing the total resource cost in a district by the number of lane miles in the district, a new measure is obtained—cost per lane mile—that simplifies the cost comparison between districts. One more step can be taken to reduce the number of measures: divide the average daily VMT by the total resource cost. This allows you to see how many miles are driven for each dollar spent in this area of maintenance; the more miles driven per dollar spent, the better. This new measure eliminates the need for the single measure of average daily VMT. As a result of these actions, four measures were eliminated, and the total number of measures to view was reduced from nine to five, thereby simplifying the task of identifying the districts that perform the best. Figures 13 and 14 are the bar graphs for the two new performance measures.
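The consolidation is simple arithmetic that can be reproduced directly from the Table 4 spreadsheet. The sketch below uses District C's figures from Table 4; small rounding differences against Table 5 are expected because the published tables do not carry all digits forward.

```python
# Sketch: derive the consolidated measures for one district from Table 4.
# Figures are District C's values from Table 4 above.
labor, equipment, material = 422_308, 847_359, 254_430
lane_miles = 3_933
avg_daily_vmt = 3_280_673

total_resource_cost = labor + equipment + material            # combine the three cost measures
cost_per_lane_mile = total_resource_cost / lane_miles         # cost efficiency per lane mile
daily_vmt_per_dollar = avg_daily_vmt / total_resource_cost    # customers served per dollar spent

print(f"Total resource cost:  ${total_resource_cost:,.0f}")
print(f"Cost per lane mile:   ${cost_per_lane_mile:,.0f}")    # about $388 (Table 5)
print(f"Daily VMT per dollar: {daily_vmt_per_dollar:.1f}")    # about 2.2 (Table 5)
```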

Figure 13. Cost per Lane Mile by District

Figure 14. Daily VMT/$ by District

Table 5 presents a summary table of the new set of five performance measures and the values for each district. Note that visual inspection of the data can begin to provide some insight regarding which districts are the better performers, but this is difficult to do.

Table 5. Comparison of District Performance*

| District | Customer Satisfaction Rating | Regain Time, All Service Levels (hours) | Daily VMT/$ | Cost/Lane Mile ($) | Number of Snow & Ice Events |
|----------|------------------------------|-----------------------------------------|-------------|--------------------|-----------------------------|
| A | 8.1 | 12.2 | 2   | 1,070 | 95 |
| B | 8.1 | 34.7 | 1.5 | 843   | 95 |
| C | 7.9 | 6.4  | 2.2 | 388   | 89 |
| D | 7.5 | 6.2  | 2.4 | 735   | 72 |
| E | 7.5 | 4.9  | 3.8 | 1,002 | 72 |
| F | 7.5 | 10.9 | 1.9 | 678   | 63 |
| G | 8.2 | 3.4  | 3.3 | 3,370 | 56 |
| H | 7.7 | 5.6  | 2.3 | 923   | 65 |
| I | 7.7 | 5.4  | 2.6 | 1,123 | 65 |
| J | 7.7 | 8.2  | 1.6 | 956   | 91 |
| K | 7.7 | 5.7  | 1.7 | 821   | 91 |
| L | 7.5 | 43.8 | 4.2 | 281   | 74 |

*Tables 5 and 6 do not carry forward the number of digits after the calculations were performed. Numbers had to be expressed in different orders of magnitude to simplify calculations and facilitate interpretation. It is sufficient to provide the results of calculations for purposes of ranking.

Much more clarity can be achieved by highlighting the districts that are the best performers along each dimension of performance (see Figures 13 and 14). The next table shows the best-performing districts for each measure screened in gray. It is a relatively trivial exercise to use visual inspection to see whether there are any districts that stand out clearly as best performers across all or most of the performance measures.

In this example, it can be argued that District C is the best performer. It is able to achieve a high customer satisfaction score (7.9) at the second-lowest cost per lane mile ($388) while facing a relatively high number of snow and ice events (89). District G does quite well also. It does the best job of any district in producing high customer-oriented outcomes. It has the highest customer satisfaction rating (8.2) and the lowest regain time (3.4 hours). While District G has the highest cost per lane mile ($3,370) and thus spends more on labor, equipment, and material per lane mile than any other district, the number of customers served daily (VMT) per dollar expended is the third highest of any district (3.3).

Depending upon your perspective, you could argue that Districts A and B do fairly well. They both have the second-highest customer satisfaction score (8.1) and contend with the most snow and ice events (95). The reason for the high regain times might be attributed to various factors, including low customer expectations for restoration of bare pavement.

Further insight into the relative performance of the districts can be achieved by dividing the districts into various groups and by then selecting the best performers from each group (see Table 6). For example, one could break the districts into two groups, one facing more than a certain number of snow and ice events and the other facing fewer.

Table 6. Comparison of District Performance with Better Performances Highlighted*

| District | Customer Satisfaction Rating | Regain Time, All Service Levels (hours) | Daily VMT/$ | Cost/Lane Mile ($) | Number of Snow & Ice Events |
|----------|------------------------------|-----------------------------------------|-------------|--------------------|-----------------------------|
| A | 8.1 | 12.2 | 2   | 1,070 | 95 |
| B | 8.1 | 34.7 | 1.5 | 843   | 95 |
| C | 7.9 | 6.4  | 2.2 | 388   | 89 |
| D | 7.5 | 6.2  | 2.4 | 735   | 72 |
| E | 7.5 | 4.9  | 3.8 | 1,002 | 72 |
| F | 7.5 | 10.9 | 1.9 | 678   | 63 |
| G | 8.2 | 3.4  | 3.3 | 3,370 | 56 |
| H | 7.7 | 5.6  | 2.3 | 923   | 65 |
| I | 7.7 | 5.4  | 2.6 | 1,123 | 65 |
| J | 7.7 | 8.2  | 1.6 | 956   | 91 |
| K | 7.7 | 5.7  | 1.7 | 821   | 91 |
| L | 7.5 | 43.8 | 4.2 | 281   | 74 |

*Tables 5 and 6 do not carry forward the number of digits after the calculations were performed. Numbers had to be expressed in different orders of magnitude to simplify calculations and facilitate interpretation. It is sufficient to provide the results of calculations for purposes of ranking.
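The "highlighting" step can also be scripted. The sketch below flags the best-performing district(s) on each of the four performance measures in Table 6 (the number of snow and ice events is a hardship factor rather than a performance measure, so it is not screened); the values come from the table, and the better-is-higher or better-is-lower directions are assumptions stated in the code.

```python
# Table 6 values per district:
# (satisfaction, regain time in hours, daily VMT/$, cost per lane mile in $)
data = {
    "A": (8.1, 12.2, 2.0, 1070),
    "B": (8.1, 34.7, 1.5,  843),
    "C": (7.9,  6.4, 2.2,  388),
    "D": (7.5,  6.2, 2.4,  735),
    "E": (7.5,  4.9, 3.8, 1002),
    "F": (7.5, 10.9, 1.9,  678),
    "G": (8.2,  3.4, 3.3, 3370),
    "H": (7.7,  5.6, 2.3,  923),
    "I": (7.7,  5.4, 2.6, 1123),
    "J": (7.7,  8.2, 1.6,  956),
    "K": (7.7,  5.7, 1.7,  821),
    "L": (7.5, 43.8, 4.2,  281),
}

# Assumed directions: higher satisfaction and VMT/$ are better;
# lower regain time and cost per lane mile are better.
measures = [
    ("customer satisfaction",  0, max),
    ("regain time (hours)",    1, min),
    ("daily VMT/$",            2, max),
    ("cost per lane mile ($)", 3, min),
]

for name, idx, best in measures:
    best_value = best(values[idx] for values in data.values())
    leaders = [d for d, values in data.items() if values[idx] == best_value]
    print(f"{name:22s} best = {best_value:>7}  district(s): {', '.join(leaders)}")
```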

Note that by using visual inspection, you avoid developing weights for each factor and an overall index of performance. In this guide the project team has encouraged the use of analytic techniques that do not involve the development of a composite performance measure. There are several reasons why. First, a composite index disguises the factors contributing to performance, so analysts and maintenance managers lose substantial insight in working with the numbers. Second, which benchmarking unit is the best performer depends upon the combination of measures under consideration and the emphasis given to each measure. In the visual inspection process described, it is sufficient for the analyst to peruse the data and weigh in his or her mind which are the best performers under various assumptions. This task becomes exceedingly complex if more than a few benchmarking units are involved. Data envelopment analysis is a mathematical technique for performing similar analysis—without giving weights to the measures—that can be used to identify best performers when there is a large number of benchmarking units.

Data Envelopment Analysis

Introduction to Production Analysis

Production is a process whereby inputs (labor, buildings, plants, equipment, and raw materials) are converted into outputs (goods and services), given the existing technology. Consider highway maintenance. The production process is characterized by decision-making units that employ labor, equipment, and material to provide maintenance activities (e.g., sign repair and snow and ice control) to achieve desirable outcomes (e.g., customer satisfaction, high-quality infrastructure, low accident rates, and minimal travel or delay time).

Business firms and other decision-making units must make important decisions that are either directly or indirectly related to production. For example,

♦ How should resources be allocated to specific units?
♦ Should key processes be outsourced?

♦ Are decision-making units providing services with the fewest resources necessary, or are they spending above minimum costs?

These and other key issues can be analyzed by understanding concepts of production economics. First, define technical efficiency similar to Koopmans (1951): a decision-making unit (DMU) is technically efficient if it is technologically impossible to increase any outcome, reduce any input, or both without simultaneously reducing another outcome, increasing another input, or both. Second, define the production frontier as the maximum possible outcomes that are obtainable for all input levels. As a result, note that a technically efficient DMU is operating along the production frontier. Furthermore, performance of individual DMUs is evaluated relative to the production frontier.

Figure 15 provides an example production frontier. For simplicity, assume that there is only one outcome (customer satisfaction) and one input (labor). This assumption is made for simplicity only: it allows the illustration of the concepts introduced previously.

Figure 15. Example Production Frontier

A number of important points are illustrated in Figure 15. First, the production frontier allows a distinction between attainable and unattainable production combinations defined by the available technology. The production possibilities A through C are on or below the production frontier, which is consistent with the "maximum outcomes" that are possible. Points above the frontier are not feasible. Second, DMUs A and B are providing the maximum outcomes possible given their resources and are, therefore, technically efficient. These firms would be ideal candidates to serve as benchmarks for DMUs looking to improve performance. Third, the intuitive notion that increased inputs will lead to increased outcomes along a frontier can be observed by comparing DMUs A and B. While both DMUs A and B are efficient, B achieves a higher outcome level because it employs more inputs. Hence, the frontier provides a conceptual understanding of scale economies. Finally, DMU C is not efficient because it is not on the frontier. DMU C could decrease its input and still provide the same outcome by operating like DMU A. Alternatively, DMU C can increase its outcome without any additional input by operating like DMU B.

One common measure of performance, technical efficiency, is defined as TE = (efficient input level)/(observed input level). Efficiency is measured by comparing minimum resources consistent with observed production with observed input usage. This is highlighted in Figure 16, which shows the input and output levels of the DMUs from Figure 15. DMU A—represented by the point (4, 10)—uses 4 units of input to produce an outcome of 10. DMU B (9, 15) uses 9 units of input to provide an outcome of 15. Finally, DMU C (9, 10) uses 9 units of input to provide an outcome of 10.

Figure 16. DMU Efficiency

Note that the minimum level of resources consistent with providing an outcome of 10 units is 4 units of input (i.e., DMU A). Also, the minimum level of resources that is sufficient to produce an outcome of 15 is 9 units (i.e., DMU B). DMUs A and B are on the frontier and are efficient according to the definition provided previously.

Now consider DMU C, which is not on the frontier. The measure of technical efficiency for DMU C is TE_C = 4/9 = 0.4444. In words, DMU C, which provides an outcome of 10, uses 9 units of the input. However, DMU C could have provided the same outcome with only 4 units of input, which implies that DMU C is only 44.44 percent efficient.

Technical efficiency can also be defined according to an output orientation: TE = (observed output level)/(efficient output level). In this case, efficiency measures the degree to which a DMU's observed outcome level falls below the frontier output level given the resources used. Referring back to Figure 16, by comparing DMU C with DMU B, it can be seen that DMU C could have provided an outcome level of 15 given the resource usage. Since DMU C only provided an outcome of 10, the output-oriented measure of technical efficiency for DMU C is TE_C = 10/15 = 0.6667. The output-oriented measure implies that DMU C is 66.67 percent efficient. Note that the input-oriented and output-oriented measures are not equal. Also note that DMU C is benchmarked against DMU A in the input-oriented model and against DMU B in the output-oriented model. This suggests that benchmarking capabilities for performance evaluation are arbitrary and depend on the model orientation. Example 1 provides an algebraic analysis of production and efficiency.

Example 1: Production and Efficiency

For convenience, the project team chose a highway maintenance example where one outcome, customer satisfaction (CS), is provided. Labor (L) is assumed to be the only input. Further assume that the production frontier can be represented algebraically by CS = 10L. A DMU using 25 units of labor (i.e., L = 25) should be able to provide an outcome CS = 10(25) = 250. Likewise, a DMU using 20 units of labor should be able to provide an outcome of 200.

Now consider a DMU that uses 25 units of labor and only provides CS = 200. This DMU is technically inefficient: it should have provided a higher outcome given the labor usage; alternatively, it could have provided the same level of customer satisfaction with less labor. Using technical efficiency as the measure to evaluate performance, we find that the technical efficiency of this DMU is (20/25)(100) = 80 percent (input orientation) or (200/250)(100) = 80 percent (output orientation). This DMU is only 80 percent efficient and, hence, could be targeted for performance gains. This DMU could reduce expenditures by eliminating excess labor usage.

This example serves as a conceptual basis for the definition and measurement of efficiency. Measurement is based on prior knowledge of the production function Y = 10X, which is, in general, problematic for real-world applications. Recent advances in economics and operations research, however, do not require this knowledge. In particular, the technique of DEA measures relative performance based solely on the observed inputs and outcomes of DMUs.

Uncontrollable Inputs

Production analysis in the public sector often ignores hardship factors. These are uncontrollable factors that affect how inputs are transformed into desirable outcomes. The relationship is shown in Figure 17, where the provision of outcome Y is determined by the levels of input X (i.e., resource X) and the uncontrollable input Z. Here, three technically efficient DMUs—A, B, and C—are shown. It is assumed that DMUs A and C face the same environment because they both have the same level of the uncontrollable input: Z = Z_0. Given the same level of hardship factors, note that DMUs A and C are on the same frontier. DMU C produces more output than DMU A because it uses more of input X.

Now consider DMU B, which faces a more favorable external environment than DMUs A and C. Here, DMU B faces Z = Z_1. To see the impact that the uncontrollable factors have on production, note that DMU B is able to provide a higher outcome than DMU A even though they both use the same amount of input X (i.e., resource X). Also note that the impact that the environment has on production can be gauged by comparing Y_C with Y_A. Also, DMUs B and C provide the same level of outcome even though both are efficient and DMU C uses more of the resource. Now directly gauge the harshness of the environment by determining the extra resources required to provide an equivalent outcome in a harsher environment. Note that this could be used to measure costs of functioning within different levels of hardship factors.

The discussion about uncontrollable inputs is especially important for highway maintenance. In particular, weather, terrain, road structure, traffic volumes, and even the average distance from the garage to the job site will have an impact on the conversion of discretionary inputs into outcomes. Typically, uncontrollable inputs can be identified as cost factors that are not explicitly paid for.

Figure 17. Uncontrollable Inputs and Production

This guide presents some basic principles of production analysis. In the next section, the method known as Data Envelopment Analysis (DEA) is described. DEA is an applied approach to production analysis that provides the useful information described previously.

Key Concepts of Data Envelopment Analysis

DEA is a mathematical programming approach for evaluating the performance of DMUs. This applied approach, developed in the late 1970s, builds on the economic theory of production by measuring performance relative to an empirically identified frontier. The frontier is constructed as the outer envelope of data points (i.e., the observed producers that are achieving the most output or highest outcomes for a given level of resources). Recent advances in the technique allow measurement of scale economies useful for resource reallocation, cost efficiency (by comparing expenditures relative to the outputs that are produced), and environmental harshness. In order to develop an understanding at an introductory level, DEA will be presented using an example. Information that can be obtained from DEA—including performance evaluation, benchmarking, economies of scale, cost efficiency, and environmental harshness—can be found in the literature on DEA listed in the references at the end of this guide.

The DEA approach has become popular in evaluating the technical efficiency of local governmental authorities in the public sector because it easily handles the multiple-outcome characteristic of public-sector production; is non-parametric (i.e., does not require the estimation of parameters); and does not require input price data, which are often difficult to measure accurately in the public sector.

Each DMU is evaluated relative to the frontier. Unlike Example 1, DEA does not require knowledge about the production function. Rather, information about the production process is inferred from the observed data. In addition to the performance measure of each DMU, DEA provides DMUs with useful benchmarks that can serve as guides to better performance. In particular, the frontier producers can serve as role models for continuous improvement. Further, identification of efficient and inefficient DMUs allows further qualitative and statistical analysis that can help identify sources of poor performance. In addition, proper identification of scale economies (i.e., a disproportionate increase or decrease in outcomes relative to a change in inputs) would allow reallocation of resources to improve overall outcomes collectively. Finally, recent advances in DEA allow construction of an environmental harshness index that would prove useful for overcoming adverse conditions due to noncontrollable inputs faced by the DMUs. Example 2 illustrates DEA in the context of highway maintenance.

Example 2: DEA

This example problem provides a relatively nontechnical discussion of the development of production frontiers using DEA, which handles multiple outputs and inputs. For this example, a basic understanding of algebra and geometry is assumed. Note that DEA models using linear programming can be solved using various software packages. It is also assumed that seven DMUs (A through G) provide one outcome—customer service—using one variable input—labor. (The observed production data are shown in Table 7.)

Table 7. Observed Production Data

Example 2 Data

| DMU | L    | CS |
|-----|------|----|
| A   | 4    | 2  |
| B   | 6    | 8  |
| C   | 10.5 | 14 |
| D   | 15   | 16 |
| E   | 18   | 16 |
| F   | 10.5 | 8  |
| G   | 8    | 5  |

These data will be used throughout the example.

The technique of DEA and the measurement of efficiency will be discussed in three steps. Note that the steps cannot be applied to all problems, particularly because graphs will be used. This is the only reason that a simple production framework of one input and one outcome is assumed. Extension to multiple outcomes and inputs is straightforward, but requires an understanding of more complex economics and mathematics.

Step One: Plotting the Data for Visual Representation

In the first step, the data are plotted. This will help visualize efficiency measurement and frontier construction. Following standard economics, labor (L) is represented on the horizontal axis and the outcome (CS) will be measured on the vertical axis. Each DMU can be represented by the ordered pair (x, y), which shows the input usage and outcome provision of each DMU. For example, DMU A is represented by (4, 2), implying that 4 units of labor are used to provide a customer satisfaction level of 2 units. The data plot is shown in Figure 18.

Figure 18. Data Plot

Step Two: Construction of the Frontier

In DEA, the production frontier is constructed with piecewise linear segments connecting the outermost data points subject to two conditions:

1. All DMUs are on or below the frontier; and
2. Along the frontier, an increase in inputs cannot lead to a decrease in the provision of outcomes.

The production frontier is shown in Figure 19.

Figure 19. Example 2 Production Frontier

In this case, the frontier consists of line segments AB, BC, CD, and DE. DMUs A through E are on the frontier. Given construction of the frontier, one can evaluate the performance of the DMUs.

Step Three: Measuring Input-Oriented Technical Efficiency

For this example, the graph of the production frontier can serve as the basis for performance measurement. Recall that input-oriented models seek to find the minimum level of inputs necessary to provide the observed level of outcome. The minimum level of inputs necessary can be inferred from the production frontier. The graph of the production frontier is reproduced in Figure 20. The graph is modified, however, to show the projection to the frontier via the input-oriented DEA model.

Figure 20. DEA Input-Oriented Projection

Each DMU is projected to the frontier by holding its level of customer satisfaction constant and contracting the level of the labor as far as possible. DMUs A, B, C, and D are efficient because it is not possible to contract their inputs without realizing a loss in customer satisfaction. DMUs E, F, and G, however, are technically inefficient because they could reduce their labor usage and still provide the same level of customer satisfaction.

Consider DMU F. It is currently using 10.5 units of labor to provide a customer satisfaction rating of 8. This is clearly inefficient because DMU B also provides a customer satisfaction rating of 8 while using only 6 units of labor. Consequently, the input-oriented measure of efficiency for DMU F is TE_F = 6/10.5 = 0.5714. Note that the numerator is the minimum required labor consistent with the observed customer satisfaction. It is also the observed level of input for DMU B (which is on the frontier). For this reason, DEA has useful benchmarking capabilities. Also, one could try to find reasons for the inefficiency of DMU F by comparing F with B in a secondary analysis. This would lead to causes of inefficiency.

DMU E is also inefficient because it provides a satisfaction rating of 16 using 18 units of labor. It would be possible, however, to produce this level of output using 15 units of labor. In other words, DMU E could replicate DMU D. Therefore, the input-oriented efficiency measure for DMU E is TE_E = 15/18 = 0.8333.

Finally, consider DMU G, which is using 8 units of labor while achieving a customer satisfaction of 5. As shown in the figure, DMU G is not on the frontier; it could provide the same outcome level using less labor. This case, however, is not as straightforward as the previous two. After projecting DMU G onto the frontier, note that the referent frontier point G′ is not an actual DMU. Instead, it is a convex combination of DMUs A and B. Since the outcome of this referent point is known (CS = 5 is the same level that G produces), solve for the associated labor for this referent point. Note that the slope of AB is ∆CS/∆L = (8 − 2)/(6 − 4) = 3. Since the referent point G′ is on this line, line AG′ must have the same slope; therefore, 3 = (5 − 2)/(x − 4). Solving for x results in x = 5. Thus G′ ≡ (5, 5). Note that the convex combination G′ is obtained from an equal weighting of A and B (because it is the midpoint of line AB). The technical efficiency of DMU G is TE_G = (5/8)(100) = 62.50 percent.
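For readers who want to reproduce Example 2 numerically, the sketch below solves a standard input-oriented, variable-returns-to-scale DEA linear program for each DMU in Table 7 using SciPy. This is an illustrative implementation, not software recommended by the guide; it assumes the conventional formulation (minimize θ subject to a convex combination of peers that uses no more than θ times the DMU's input while providing at least its outcome).

```python
import numpy as np
from scipy.optimize import linprog

# Table 7: one input (labor L) and one outcome (customer satisfaction CS).
dmus = ["A", "B", "C", "D", "E", "F", "G"]
L  = np.array([4.0, 6.0, 10.5, 15.0, 18.0, 10.5, 8.0])
CS = np.array([2.0, 8.0, 14.0, 16.0, 16.0,  8.0, 5.0])
n = len(dmus)

for o in range(n):
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.zeros(n + 1)
    c[0] = 1.0

    # Composite input no greater than theta times the DMU's own input:
    #   sum_j lambda_j * L_j - theta * L_o <= 0
    row_input = np.concatenate(([-L[o]], L))
    # Composite outcome at least the DMU's own outcome:
    #   -sum_j lambda_j * CS_j <= -CS_o
    row_outcome = np.concatenate(([0.0], -CS))
    A_ub = np.vstack([row_input, row_outcome])
    b_ub = np.array([0.0, -CS[o]])

    # Convexity constraint (variable returns to scale): sum_j lambda_j = 1.
    A_eq = np.concatenate(([0.0], np.ones(n))).reshape(1, -1)
    b_eq = np.array([1.0])

    bounds = [(None, None)] + [(0.0, None)] * n
    result = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print(f"DMU {dmus[o]}: input-oriented efficiency = {result.x[0]:.4f}")
```

Run as written, this sketch reproduces the efficiencies worked out above: DMUs A through D score 1.0, while E, F, and G score roughly 0.8333, 0.5714, and 0.6250, respectively.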

Using DEA in Customer-Driven Benchmarking of Maintenance

To apply DEA to customer-driven benchmarking of road maintenance, you will need specialized software. Commercially available software packages and proprietary benchmarking services, including software, provide suitable analytical tools. Applying DEA requires both technical expertise and practical experience in order to identify best performances, evaluate improvement opportunities, and uncover performers likely to have practices most pertinent to a specific situation. Among the specific reasons you will require expertise in applying DEA are the following:

♦ To address situations where there is ambiguity as to whether a factor is an input or an output. For example, regarding lead paint removal of steel structures, it is not clear whether pollution concentrations inside and outside a containment structure should be treated as an outcome or an input. Impact on health and safety, as reflected in airborne lead concentrations, is clearly an outcome—as are lead concentrations in soil and water. However, air and water can also be considered inputs to the lead paint–removal process.
♦ To take into account the presence of economies or diseconomies of scale. For some production processes, a proportionate increase in inputs results in a greater-than-proportionate increase in outcomes. Sometimes the reverse is true: a proportionate increase in inputs results in a less-than-proportionate increase in outcomes. Special analytic techniques are required to take into account economies and diseconomies of scale in DEA.
♦ To take outputs into account properly. Although customer-oriented outcomes are the focus of the benchmarking method discussed herein, the ability to identify best performers relevant to a particular organization or unit is enhanced by properly taking outputs into account.

Prototype Decision Support System of the Minnesota DOT

There are additional methods for customer-oriented benchmarking of maintenance activities that focus on the value added to customers, measured in monetary terms. The project team does not recommend that agencies just beginning to benchmark pursue such a process. However, as customer-oriented benchmarking evolves, it is likely to move in the direction pioneered by MnDOT. Appendix D contains a brief overview of MnDOT's prototype decision support system that was developed for purposes of customer-driven benchmarking of maintenance activities. The ultimate goal of this prototype was to allocate resources in accordance with the increase in value to customers (measured in monetary terms) due to an increase in input levels (also measured in monetary terms).

Identifying Best Performances and Best Practices

This section addresses the primary reason for benchmarking: to identify practices that will help to improve performance. The process of benchmarking is based on the premise that organizational units with the best performances have business practices that are different from those of most other organizational units. Practices may include the types of resources, the mix of resources, the procedures for implementing resources, the timing or sequencing of applying the resources, and/or the quality of the execution.

Identifying Peers Who Are Best Performers

To compare maintenance performance of specific products, services, or maintenance activities, numerous organizational units will be cooperating and sharing performance information. Actually, the more organizational units that are cooperating and sharing, the greater the opportunity for any one organizational unit to find practices of other units that will help improve its own performance. The evaluation of the performances of all the units will indicate that many units have, relatively speaking, best performances. This generally means that given their conditions, level of resources, or both, the units are producing the highest customer-oriented outcomes.

When using DEA to evaluate and compare performances, many units may be on the "frontier" of best performances. The frontier may include 10 to 40 percent of the total number of units. If there are 50 organizational units comparing performances, then as many as 20 units could be determined to be best performers. Practically, 20 is too many units with which to compare processes or business procedures. To start comparing practices, it is best to select a small number of organizations, approximately 2 to 5 of the best-performing units. The issue is then for each organizational unit to determine which of the best-performing units are best for comparing practices.

A simple method usually works well to begin selecting peers with whom to compare practices. For maintenance organizations, this means selecting the peers with best performances who also meet one of the following criteria:

♦ Represent the largest improvement opportunities,
♦ Operate in environments that are most similar,
♦ Have a similar amount or type of roadway feature inventory, or
♦ Have a similar total resource budget.

The initial selection is not necessarily a final decision. Additional units or alternative units may be selected at any time. Begin peer comparisons with those products, services, or maintenance areas that are most important to your customers and that have the greatest opportunity to impact customer-oriented outcomes. Select one product, service, or maintenance area at a time to begin to develop a set of peers whose "best" practices you may investigate. Note that the peer set will vary as the product, service, or maintenance area changes.

For each outcome or resource measure, given a particular environmental setting, there will be a gap between the best performer and the others. If you are not a best performer, this gap is your improvement opportunity. The gap will represent the potential increase in the outcome you can achieve relative to a best performer.
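As a small illustration of the gap idea, the sketch below computes each district's improvement opportunity on a single outcome measure, customer satisfaction, relative to the best performer. The satisfaction scores are those from Table 6; treating the simple difference as the gap is an assumption made for illustration.

```python
# Customer satisfaction ratings from Table 6.
satisfaction = {
    "A": 8.1, "B": 8.1, "C": 7.9, "D": 7.5, "E": 7.5, "F": 7.5,
    "G": 8.2, "H": 7.7, "I": 7.7, "J": 7.7, "K": 7.7, "L": 7.5,
}

best_score = max(satisfaction.values())
for district, score in sorted(satisfaction.items()):
    gap = best_score - score  # improvement opportunity relative to the best performer
    print(f"District {district}: satisfaction {score:.1f}, gap to best {gap:.1f}")
```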

After you have investigated all outcome measures, turn your attention to the resources and compare performances for labor, equipment, materials, costs, and so forth. If you are looking at a type of resource usage, the improvement opportunity and corresponding gap will represent the potential savings in the resource you can achieve relative to the best performer.

To obtain your initial set of peers for purposes of investigating best practices, select the organizational units with the greatest improvement opportunities based on the performance evaluations of all of the products, services, or maintenance areas that you and your partners have evaluated. You can refine your initial set of peers by screening based on other criteria listed above—for example, by identifying which of the peer set have inventory quantity and budget levels similar to yours.

Geographical proximity and the same political structure are not the best reasons for picking peers. Maintenance organizations typically already know the most about others that are geographically close and that operate under the same type of political jurisdiction or administrative unit. Benchmarking is an opportunity to reach out beyond the typical regional or state relationships and to learn what others do. However, the project team is not suggesting that just because a unit is in geographical proximity, it should be eliminated from the peer group. Also, the intent should not be to eliminate from the comparison peer group all organizational units that are different from yours in size and operating characteristics. Human nature too easily allows one to justify why an organizational unit cannot be compared with your own. Instead, you want to establish why units that have better performance can be a basis for comparison.

Identifying Best Practices

Once you have settled on a peer set for each product, service, or maintenance area, then you are ready to investigate best practices of the best performers. Investigation of best practices is a critical part of benchmarking. A number of different approaches have been found to be effective; frequently, benchmarking involves all of them. Examples are as follows:

1. Background research: often there is published information available that illuminates the practices of the best performers. This published information includes research reports, journal articles, conference proceedings, procedural manuals, specifications, regulations, Internet sources, and information from equipment and material vendors. Specific practices of organizations that are known to be top performers, both in the public and private sectors, often have been published and can be found among these sources.

2. Questionnaires: many benchmarking efforts involve the development of a questionnaire that is used to explore in detail the partners' practices. To some extent, the worksheets for recording measurements of outcomes, resources, hardship factors, outputs, and other information serve the function of a questionnaire. However, you should also develop a detailed set of supplementary questions whose answers will shed light on the nature of the best practices of the best performers you wish to investigate. As soon as you know what business processes will be the focus of the best practice investigation, you should prepare the questionnaire and share it with the partners with whom you plan to exchange information. The questionnaire should address the following types of issues:

♦ Work methods—including the type of labor (skills and training levels); equipment (type, age, reliability); and materials (type, methods of application) and how these are combined in productive activity.
♦ Nature and impact of related processes on outcomes and resource usage—for example, setting up and removing work zones, material and equipment requisition, scheduling, daily work reporting, timesheet reporting, budgeting, and resource allocation.
♦ Policies, procedures, or operating constraints—including regulatory requirements, specifications, or other policies and procedures that affect work methods and results. Are there operating circumstances that require or limit the practices?
♦ Roles and responsibilities of different levels of management—how do they affect outcomes and resource usage?

♦ Hardship factors—including weather, terrain, and population density—that are favorable or unfavorable for the practices.
♦ Cost structures—the costs associated with each resource needed for the practice(s).
♦ Difficulties in transferring the practice—including major investments in equipment, material, and skill training.
♦ Critical success factors—that is, the most important procedures or requirements to achieve successful implementation of the practices, including customer requirements.

Figure 21 is an example of part of a questionnaire completed by one of the participants in the field test used to validate the procedures of this guide.

Figure 21. Sample Questionnaire

Business Process Flow Documentation

If you have followed the sequence of steps in this guide, you will have already documented the business processes associated with your practices. Once you have identified best performers whose "best" practices you wish to evaluate, however, you will need to obtain similar documentation from them. Documentation of practices of best performers should include results from background research, business process flow charts, answers to questionnaires, and results of site visits. It is critically important to understand how each level of each organization that is a best performer contributes to the outcomes and resource usage. Management actions at different levels of the organization will have varying effects on customer-driven outcomes and resource usage and costs.

Conference Calls, Electronic Information Exchanges, and Video Conferences

It is possible that the background research, initial documentation, and answers to questionnaires are adequate for deciding to adopt a different practice; however, more frequently, additional data and understanding of peer practices will be necessary. The best-performing peers you have selected need to be contacted to gain a more complete understanding of their practices. Communication can occur using conference calls; electronic information exchanges such as e-mail, groupware, and chat rooms; and video conferences. The investigation should include the details of the practices, the circumstances under which the peer uses the practices, how long or how much experience the peer has had with the practices under investigation, the key requirements for implementation success, and any recommendations for other organizations considering the practices. Before such communications begin, the initiating organization should establish objectives for the interchange and describe the questions to be answered.

Site Visits

Many organizations that do benchmarking find that site visits are valuable for understanding a practice of a best performer. Avoid industrial tourism—making site visits simply for the sake of visiting other organizations. Site visits should only occur if there is strong reason to believe that they will add value and both parties are well prepared. Generally, a pair of visitors is desirable to conduct the site visit because two pairs of eyes and ears help capture accurately what is observed. More visitors are usually unnecessary. Here are some guidelines for conducting site visits:

♦ Work through a specific point of contact to schedule the meeting and line up participants.
♦ Develop an interview protocol and agenda in advance and share it with the host. Presumably, a questionnaire will have been distributed earlier.
♦ Have the authority to share information and make sure your host does, too.
♦ Be courteous and professional.
♦ Offer a reciprocal visit.
♦ Keep to your meeting schedule and finish on time.
♦ Be sure to thank your host.
♦ Write up the practices you encountered during or immediately after your visit.

Example of Site Visit in Maintenance Benchmarking

The Kansas City Department of Public Works participated in a municipal public works department benchmarking program with several other cities in North America in order to achieve the following three goals:

1. Improve the quality of service,
2. Reduce the cost of operations, and
3. Improve the satisfaction of customers.

In a structured program facilitated by a consultant, the group of public works departments chose benchmarking partners based upon performance comparisons and documented work processes. Then the benchmarking partners arranged on-site visits to compare practices and seek ideas for improvement opportunities. The visits were a commitment of time consisting of 2 days of on-site visits and documentation of work flow and work processes. Individuals participating in visits to other departments were trained in benchmarking concepts. Priorities were set for the processes each participant wished to pursue. The total benchmarking activity uncovered 32 specific work process improvements to be included in the Kansas City Department of Public Works operating plan. Some of the changes were implemented immediately, such as instituting quick service bays in all fleet maintenance facilities, while other changes were implemented over a much longer period.

Analyzing the Causes of Superior Performance

Before adopting a best practice, you may wish to understand in more detail the causes of superior performance. You can use a variety of techniques. The following three are explained in turn:

1. Root cause analysis;
2. Correlation, regression, analysis of variance, and other statistical methods; and
3. Design of experiments.

Root Cause Analysis

A straightforward and often helpful method of understanding the underlying reasons for performance, root cause analysis employs a diagram such as Figure 22 to identify the main and deeper root causes contributing to an outcome.

Figure 22. Root Cause Analysis Using Fishbone Diagram

To apply root cause analysis, a group of people knowledgeable about the business process identifies main categories of potential causes leading to an outcome and then dissects the causes further. The fishbone diagram is well suited for organizing the discussion and displaying the results.

Correlation, Regression, Analysis of Variance, and Other Statistical Methods

There are a wide variety of statistical techniques one can apply to identify statistically significant factors associated with an outcome. By using correlation, regression, analysis of variance, and other statistical methods, often you can identify factors that correlate with or explain the variation in outcomes and resource usage. You can then make important strides in determining the likelihood that an attribute of a practice will contribute positively to an outcome or to a reduction in resource costs. Commonly applied statistical techniques include the following:

♦ Correlation coefficients provide measures of the degree to which various variables or factors are correlated.
♦ Regression involves estimating an equation that involves many variables and that best fits a set of data points.

♦ Analysis of variance determines the degree to which different variables contribute to the variance of another variable. Analysis of variance allows you to analyze the variance within and among groups.
♦ Factor analysis helps to reduce a set of possible causal factors to a smaller set that explains most of the variation caused by the original set.

To perform various types of statistical analysis, you will need to assemble a data set for all the variables or factors of interest. Depending upon the properties of the data set, different types of statistical analysis will be appropriate. For example, you could make a list of factors contributing to pavement smoothness. If the factor is at play in a particular organization or unit, you would give it a value of 1; otherwise, you would give it a value of 0. Thus, if there were 40 organizational units constituting a benchmarking partnership and 20 different factors potentially contributing to pavement roughness, then the data set would be a 40 × 20 matrix composed of 1s and 0s. Pavement roughness could then be regressed against each of the 20 factors to determine the significance of each factor.

Before doing such an analysis, you should develop a hypothesis regarding which variables are most likely to be significant. The statistical analysis will allow you to accept or reject your hypothesis. Such analysis provides a great deal of objectivity and helps overcome the use of hunches and educated guesses regarding what attributes of a process are contributing to an outcome. You will end up with more insight and have a stronger foundation for deciding whether to implement a practice.

You will require a person knowledgeable about statistical methods to apply these techniques. Most larger agencies have individuals who can perform correlation analysis and do regression, and many also have people with advanced degrees in statistics or related fields. Individual consultants and firms that specialize in statistical analysis are additional sources of expertise.
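The 40 × 20 example above can be prototyped in a few lines. The sketch below uses synthetic 0/1 factor data (the numbers are invented for illustration, not field data) and regresses a roughness outcome on each factor one at a time, as the text describes; in practice you would substitute your partnership's measured values and follow up with proper significance testing.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_factors = 40, 20

# Synthetic 0/1 matrix: row = organizational unit, column = candidate factor.
X = rng.integers(0, 2, size=(n_units, n_factors)).astype(float)

# Synthetic roughness outcome in which factors 2 and 7 have real effects;
# replace with measured IRI (or another outcome) in a real analysis.
iri = 1.5 + 0.40 * X[:, 2] + 0.25 * X[:, 7] + rng.normal(0.0, 0.05, n_units)

# Regress the outcome on each factor separately, as described in the text.
for j in range(n_factors):
    x = X[:, j]
    if np.var(x) == 0:
        continue  # factor is identical across all units; nothing to estimate
    slope = np.cov(x, iri, bias=True)[0, 1] / np.var(x)
    r = np.corrcoef(x, iri)[0, 1]
    print(f"factor {j:2d}: slope {slope:+.3f}, correlation {r:+.2f}")
```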

Design of Experiments

The types of statistical analysis described above use historical data—that is, data concerning results that have already occurred from applying resources in various settings. However, additional insights regarding variables that contribute to outcomes can be achieved by designing experiments and by carefully controlling for different factors of interest, whether they are main effects or interactions among factors. There is a large body of literature on the design of experiments to achieve quality improvements. Design of experiments plays an important role in diagnosing the causes of complex manufacturing problems and other processes.[4] You will need expert help to design experiments in an efficient manner in order to root out the factors contributing to outcomes.

[4] Keki R. Bhote and Adi K. Bhote, World Class Quality: Using Design of Experiments to Make It Happen, Second Edition, American Management Association, New York, 2000.

MnDOT used an experimental design in constructing a survey instrument to assess the strength of different factors contributing to the value motorists receive from different attributes of roadside vegetation. These attributes are affected by maintenance activities associated with the delivery of MnDOT's "Attractive Roadside" product. Appendix D briefly describes how the experimental design was used to better understand the underlying factors affecting customer preferences for roadside aesthetic features.

Considerations for Changing Practices

Matching best practices to the goals of the initiating organization is critical because some best practices may be excellent, but they may not be consistent with an organization's priorities. The first determination is whether the identified best practices of peer organizations are aimed at reducing resource usage and costs or whether they are designed to increase customer outcomes. If the practices are aimed at reducing resource costs and if your organization is primarily concerned with increasing the level of customer outcomes, then this might not be the first practice to spend time implementing. Also, if you are satisfied with the level of outcomes that are being produced, then you will likely be seeking to implement practices that will lower resources and costs.

Estimating the Near-Term Impact of Changes

For a selected practice or set of practices, the originating organization needs to calculate the estimated costs of implementation. This will require estimating the amount of this practice to be performed in the next cycle of maintenance activity for the particular product or service that was benchmarked. The resource costs can be estimated based upon local conditions for the initiating organization. Estimating the change in outcome levels will be more difficult because the functional relationship between resource levels and outcome levels is unknown and is not easily estimated. This is especially true for customer satisfaction levels and may be true for some outcome measures of technical quality such as IRI, the number of inches of shoulder edge drop-off, and the reflectivity of signs.

STEP 5. IMPLEMENT AND CONTINUOUSLY IMPROVE

Setting Targets for Improvements

Experiences of the peer organizations will be helpful in estimating a rate of change in the outcome measures. Targets can be set for improved performance. They could be set at the level of the best performances or in accordance with an estimate of the improvement potential for a unit, which may even be at a higher level than the best-performing unit. It is usually best to set a reasonable target—a level that management believes can be accomplished in the next maintenance cycle.

Making Improvement Plans

After investigating best practices of peers and setting targets for improved performance, an implementation plan for carrying out the improvement must be established. The implementation plan should address the questions of what, how, who, and when.

What? What business processes will be changed and what outcomes and resources will be affected?

How? How will the business processes be changed—through improved scheduling, training, new technology and equipment, better materials, improved management and information systems, more efficient work reporting, or a combination of the above?

Who? What managers and staff need to be involved? What levels of the organization need to participate? How broadly the changes will be implemented is an important part of the plan. Will implementation include all possible work units or will the changes in practices be implemented as a pilot project that affects just one unit?

When? What is the schedule for improvements? Which improvements will occur first? Do some improvements depend upon the implementation of others?

Implementing New Practices

It is easy to maintain the status quo. Some organizations hesitate to embrace change, especially changes in practices that were developed elsewhere. Management must support the planned improvements and emphasize and reward improved performance. Managers at appropriate levels should be given the responsibility to manage the changes and to give visibility to changed performance. Moreover, management will want to prepare the organization for the next cycle of continuous improvement. This should include gauging how the next round of improvements will affect customer-driven outcomes and resource usage.

Starting Again

Customer-driven benchmarking is a continuous five-step cycle:

1. Select partners,
2. Establish measures,
3. Measure performance,
4. Identify best performances and practices, and
5. Implement and continuously improve.

In fact, benchmarking is generally regarded as a continuous improvement process. Once the last step is completed, you start over again with the first step, as shown in Figure 23.

Figure 23. Steps in the Benchmarking Process

In organizations committed to benchmarking, there is an attitude of continually striving to produce the best possible results. Starting again is routine for organizations committed to benchmarking. There is an atmosphere of creativity, an enthusiasm for trying new work practices, and a genuine desire to better serve the customer. Each cycle of the benchmarking process will result in a different set of best performers. There will continually be changes in the peer group with which an organization can compare practices. Each organization or unit that embraces customer-driven benchmarking can be confident that, from time to time and perhaps frequently, it will be among the best performers. And even if it is not, it will be able to identify practices that will allow it to improve, year in and year out.
