The MEP involves a mix of federal, state, organizational, and private actors with varying goals and interests. At the federal level, NIST seeks national impacts from the MEP on the competitiveness and capabilities of SMEs across the country, including improvements in productivity, innovativeness, and exporting. States, by contrast, are naturally most interested in effects within their own jurisdictions, especially in ensuring that their firms are served and in job retention and creation. Individual firms typically focus on gains they can capture, such as added sales and improved profitability, while industry organizations seek improvements in performance across sectors and supply chains.
Reliance on Self-Reported Data
NIST’s own metrics of center impacts on firms seek to capture many of these goals, including efficiency improvements, added sales, and jobs retained or created. While NIST does sponsor studies using a range of methods, its principal ongoing data-collection mechanism is the MEP survey tool (discussed in Chapter 4). The data collected are undoubtedly valuable, but they must also be regarded with caution. A number of center directors have observed that the very high survey response rate that MEP headquarters considers acceptable obliges local MEP centers to expend considerable effort and resources on ensuring that clients complete the survey. The current average response rate is 85 percent, and the staff time that centers devote to ensuring compliance is reported to range from 5 to 35 percent of the total.1
Incentives and Performance Indicators
In addition, several center directors have noted that reliance on survey data as a primary metric of center performance creates related incentives for centers (whose performance evaluations rest largely on survey data) and for individual staff (whose compensation in at least some cases depends both on the level of client compliance with the survey and on the degree of positive outcomes recorded). Staff thus have incentives to encourage clients to respond and to report the maximum possible outcome. This situation differs from relying on self-reported data from a nonincentivized population.
If clients must undertake fairly complex calculations in order to complete the survey—which they do—and if there is minimal guidance within
1Information provided by center directors in response to NRC request, June 2012.