Appendix B

An Evaluation of the MEP: A Cross Study Analysis

Jan Youtie
Georgia Institute of Technology

The Manufacturing Extension Partnership (MEP) has used a diversity of evaluation methods and metrics over its more than two-decade history. Many of these studies are sponsored by the program itself. State economic development agencies and federal oversight bodies also engage in evaluation efforts. These evaluations have been carried out by various private consultants, university researchers, policy foundations, government examiners, and MEP itself. The aim of this paper is to look across these evaluation reports to identify the methods used, the focus of these evaluations in terms of geography and unit of analysis, and their findings. Not all of these reports have recommendations, but for those that do, this paper will present these recommendations so that similarities and trends may be discerned. This work updates a previous effort by the author, along with Philip Shapira, to catalog and analyze MEP evaluations (Youtie and Shapira 1998; Shapira 2003). The results will show several ongoing themes in MEP assessments, including efforts to present and validate its mission, balance multiple program objectives, and adapt program direction to the need for innovative technologies and products.

1. OVERVIEW OF THE MEP EVALUATION SYSTEM

Discussions of the MEP evaluation system typically focus on three primary program objectives: (1) delivering service to a broad range of manufacturing enterprises (market penetration), (2) maintaining optimal revenue levels for program operation (revenue generation), and (3) having an impact on clients that are served (client impact).1 These objectives are not always in alignment, which can lead to conflicts (Figure APP-B-1).

________________

1 See, for example, Oldsman (2004), Sears and Blackerby (1998), and Voytek et al. (2004).



For example, the need for market penetration emphasizes widespread service through training or short-term engagements, which can be associated with less revenue generation and lower levels of impact on clients. Efforts to raise revenue have the potential to be associated with fewer clients served. Efforts to raise impact can also mean larger projects with a smaller set of clients. Maximizing service impact can even affect revenue generation, as some types of projects with the potential to yield high impact are stressed over other types of projects that have short-term cost-cutting appeal to potential customers. Given these potentially diverging objectives, the MEP uses multiple evaluation methods to address them.

FIGURE APP-B-1 Evaluation objectives.

The core performance measurement activity at MEP is the client survey. This survey, which has been administered since 1996, is conducted approximately one year after an engagement with a client company. The survey draws on a standard nationwide reporting system. Centers populate this system on a quarterly basis with reporting information about center, client, and project and event attributes. The survey is administered by a third-party company once a quarter, primarily using telephone and web-based methods. The questionnaire has included various questions over the years, but the core questions ask clients to report changes in quantitative outcomes such as sales, employment, cost savings, and capital investment. In addition, the current survey asks about factors in using the MEP and strategic challenges the company faces. Prior questionnaires included items about customer satisfaction, but the MEP has reduced these questions because of the lack of variability in overwhelmingly positive responses. The current questionnaire asks clients about the likelihood of recommending the MEP to other companies.

Survey results for fiscal year 2010 indicated that the MEP served 34,299 companies. Of the 7,786 clients participating in the survey (out of 9,654 qualified to do so given their receipt of more intensive services), the aggregate results attributed to MEP assistance were $8.4 billion in sales, $1.3 billion in cost savings, $1.9 billion in investments, and 72,075 jobs created or retained. An economic impact model applied to the results indicated that for every $1 of federal investment, $32 of economic growth is returned (MEP 2011; see also MEP 1994, 1997, 1998). Client responses associated with high impact numbers are authenticated and, if found valid, are profiled as success stories using a mini-logic-based case study method.
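The arithmetic behind a return figure of this kind is straightforward even though the underlying economic impact model is not reproduced here. The minimal sketch below uses entirely hypothetical numbers rather than MEP data to show how client-reported impacts might be aggregated and expressed per federal dollar; the actual MEP model applies further adjustments (such as value-added and attribution adjustments) that are omitted.

    # Minimal sketch with hypothetical numbers (not MEP data): sum client-reported
    # impacts and express them relative to a hypothetical federal appropriation.
    client_reports = [
        # (new/retained sales, cost savings, capital investment) in dollars
        (1_200_000, 150_000, 300_000),
        (0,          40_000,       0),
        (350_000,         0,  75_000),
    ]
    federal_appropriation = 1_000_000   # hypothetical appropriation, dollars

    total_impact = sum(sales + savings + investment
                       for sales, savings, investment in client_reports)
    print(f"Reported impact per federal dollar: "
          f"{total_impact / federal_appropriation:.1f}")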

The survey is designed to fulfill Government Performance and Results Act (GPRA) requirements for metrics on program impacts. Program metrics are reported relative to measurable goals which the program sets as part of GPRA. The survey also forms the basis for tracking center-level performance, since 2003, through a system of Minimally Acceptable Impact Measures (MAIM).2 MAIM is comprised of five measures: (1) Bottom-line Client Impact, (2) Cost Per Impacted Client, (3) Investment Leverage Ratio, (4) Percent Quantified Impact, and (5) Survey Response Rate.

________________

2 NIST MEP Reporting Guidelines, Center Operations, version 6.1. Effective Quarter 2, 2011.

In early 2012, the MEP instituted a new center performance methodology using a balanced scorecard approach, termed MEP sCOREcard. Roughly half of the score comes from seven center metrics (new sales, retained sales, jobs, investment, cost savings, clients served, and new clients served) and the other half from diagnostics of how the center performs on six dimensions: innovation practice, next generation strategy, market understanding, business model, partnerships, and financial viability.
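As a rough illustration of how a balanced-scorecard rating of this kind could be assembled, the sketch below averages normalized quantitative metrics and panel-style diagnostic ratings and weights the two halves equally, as described above. The metric and dimension names echo the text, but the benchmarks, normalization, and weights are illustrative assumptions, not the actual sCOREcard formula.

    # Illustrative balanced scorecard: half quantitative metrics, half diagnostics.
    # Benchmarks, normalization, and weights are assumptions, not the MEP formula.
    def center_score(metrics, benchmarks, diagnostics):
        metric_part = sum(min(metrics[k] / benchmarks[k], 1.0)
                          for k in metrics) / len(metrics)        # 0-1 scale
        diagnostic_part = sum(diagnostics.values()) / (10.0 * len(diagnostics))
        return 0.5 * metric_part + 0.5 * diagnostic_part           # 0-1 scale

    metrics = {"new_sales": 6.0, "retained_sales": 9.0, "jobs": 180,
               "investment": 4.0, "cost_savings": 2.5,
               "clients_served": 310, "new_clients_served": 90}
    benchmarks = {"new_sales": 8.0, "retained_sales": 10.0, "jobs": 250,
                  "investment": 5.0, "cost_savings": 3.0,
                  "clients_served": 400, "new_clients_served": 120}
    diagnostics = {"innovation_practice": 7, "next_generation_strategy": 6,
                   "market_understanding": 8, "business_model": 7,
                   "partnerships": 6, "financial_viability": 8}
    print(round(center_score(metrics, benchmarks, diagnostics), 3))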

MEP's enabling legislation calls for external panel reviews of centers. These reviews are typically conducted every other year for most centers, with panel composition including several directors from other MEP centers. Reviews focus on center operating plans, annual reports, and progress toward goals and follow established criteria based on the Baldrige quality framework. This model enables learning across centers in addition to the primary goal of providing feedback for the center under review. If the reporting and review processes continue to indicate poor performance, panel reviews are conducted more frequently and the MEP may re-compete center contracts.

MEP also has invested in other types of special evaluation studies. MEP has used logic-based case studies to examine clients with exceptionally high impacts from MEP services (Cosmos Corporation 1997, 1998, 1999) and to understand how new services are being implemented (SRI and Georgia Tech 2009a). The logic models portray the functioning of MEP services, showing inputs, work processes, intermediate outcomes, and impacts on technology adoption, business performance, and broader impacts on the economy. Comparison group studies of clients and non-clients have been widely used to control for selection bias, in that MEP is likely to attract clients already on the path to higher productivity (Jarmin 1997, 1999; Nexus Associates 1996; Oldsman 1996; Nexus Associates 1999; Youtie et al. 2008, 2010), and to account for observable and unobservable factors (Cheney et al. 2009). The 1990s saw evaluations of several service delivery pilots, including inter-firm networking (Welch et al. 1997) and SBDC-MEP partnership programs (Yin et al. 1998). Experimentation in evaluation approaches was evidenced in several workshops and evaluation working group sessions in the 1990s; this experimentation period has yielded to a more standardized evaluation process at the national level in the 2000s.

In addition to evaluations of the national program, several states have conducted evaluations of their particular state centers or systems of centers. New York invested in independent evaluations of its centers in the 1990s (Oldsman et al. 1996; Nexus Associates 1996). Pennsylvania conducted two highly regarded assessments of the centers in the Industrial Resource Center system (Nexus Associates 1999; Deloitte 2004). The state of Ohio included MEP program analysis as part of its assessment of the Third Frontier and other technology-based economic development programs (SRI and Georgia Tech 2009). Many states' requirements of centers focus on activity reporting metrics, but the presence of these large-scale evaluations brings forth the possibility of differences between national program and state-level evaluation and reporting requirements.

Most centers do not have additional evaluation programs beyond what is required of the national or state sponsor. However, a few centers have developed and maintained special capabilities to support their evaluation efforts. The Michigan Manufacturing Technology Center created a longitudinal data set of survey-based metrics for full-scale comparison, the Performance Benchmarking Service (Luria 1997; Luria and Wiarda 1996). The Georgia MEP has administered a survey of non-clients and clients which has been used for evaluation as well as for conducting studies to address particular needs of the center from time to time (Shapira and Youtie 1998; Youtie et al. 2008, 2010).

Trade associations have been responsible for several evaluations of aspects of the program. The National Association of Manufacturers included a question about service use as part of its survey of members' technology adoption practices (Swamidass 1994). The Modernization Forum (a former association of MEP centers) sponsored several studies to provide information in support of the program's rationale as well as to aid in system set-up in the early years of the MEP. In the 2000s, the American Small Manufacturers Coalition or ASMC (the current association of MEP centers), along with other non-profit organizations, has turned the emphasis of MEP evaluations towards strategic redirection of the program (NAPA 2003, 2004; ASMC 2009; Stone & Associates and CREC 2010; MEP Advisory Board, Yakimov and Woolsey 2010) and international benchmarking efforts (Ezell and Atkinson 2011).

This evaluation system has resulted in a substantial body of evaluation studies related to the MEP program. The table in the Appendix represents 39 evaluative studies in the 1990s and another 26 in the 2000s. Thirty percent of these studies are published in academic journals or books, including in two journal special issues published in the 1990s (a 1996 issue of Research Policy and a 1998 issue of the Journal of Technology Transfer). More than 10 percent are official federal government publications, with the remainder comprised of state government publications, conference proceedings, "gray" literature reports and white papers, dissertations, and informal memoranda.

The most common method used in these evaluations is the customer survey, which was utilized by roughly one-third of the studies represented in this paper. Six of the works in this table used case study methodology (although a few others had case examples within primarily quantitative papers), while five linked client data to administrative databases at the state or national level. Sixteen of the studies utilized comparison groups, which signifies the sophisticated nature of the evaluations in terms of controlling for factors besides extension services that could affect client outcomes. Seven of the evaluations involve benefit-cost and/or fiscal impact analysis to represent public and private returns from the program.

This paper will show that the characteristics of the evaluations reflect the nature of the program's evaluation system (which in turn reflects the nature of the program itself). The 1990s was a period of system build-up and exploration in both the program and the evaluation system, whereas greater standardization occurred in the 2000s. Hence, this paper divides the literature into these two groupings to represent trends in evaluation methods and results over the 20-year period.

2. EVALUATION STUDY RESULTS: 1990s

Evaluation studies in the 1990s used diverse and sometimes novel methods to understand program processes and effects. This mix of studies was heavily influenced by the MEP's setting up of an evaluation working group in the 1993-1999 time period, producing a formal evaluation framework, and sponsoring four workshops on the evaluation of industrial modernization from 1993 to 1997.3 Feller et al. (1996), Shapira et al. (1996), and Sears and Blackerby (1998) address this early evaluation system as a whole, discussing issues in performance measurement and program improvement amidst conflicting goals, such as addressing important program goals while avoiding over-burdening of the client.

________________

3 Atlanta Workshops on the Evaluation of Industrial Modernization, Aberdeen Woods, GA, 1993, 1994, 1996, 1997.

As a result, a range of studies was produced with attention given to measurement of penetration, intermediate effects, and short- and longer-term outcomes on clients and the broader economy.

Swamidass (1994) conducted a survey of members of the National Association of Manufacturers (NAM) to assess their use of modern technologies and techniques. The survey found that only 1 percent of manufacturers say government is an important source of assistance in technology investment decisions; however, many MEP centers are known through their university or center name rather than as a source of government assistance. This diversity makes efforts to measure penetration of the program, outside of program counts of manufacturers served, difficult.

How MEP fits in with other service providers has been an important dimension of evaluation efforts related to the market penetration objective of the program. The high cost of private firm service to small manufacturers has long been considered a major barrier to these operations' productivity. Whether MEP competes with or complements private sector consulting was the subject of a major study sponsored by the Modernization Forum and carried out by Nexus Associates through surveys of MEP clients, a comparison group of manufacturers, and private consultants (Modernization Forum and Nexus Associates 1997; Oldsman 1997). Seventy-three percent of manufacturer responses suggested that MEP complemented consultants' work, while only 7 percent of MEP clients reported that the MEP offered the same services as private consultants. Moreover, MEP clients were more likely to experience substantial benefits, in that the probability of a typical MEP customer improving its performance was 5.4 times higher than that of a manufacturer that acquired consulting services on its own. In another study, manufacturers in the Appalachian region were found to be tradition-bound in their ratings of various types of information sources, preferring their internal staff, customers, and suppliers. This study suggested that MEP service efforts would need time to build credibility and trust in their clientele base (Glasmeier et al. 1998).

The MEP engaged in several formal efforts to collaborate with other service providing organizations. Shapira and Youtie (1997) found that MEP sponsorship led to greater service coordination than individual center efforts alone or state government demands would have provided, which, in turn, generally improved the services to MEP clients, albeit at a significant expenditure of resources for validating and coordinating with these providers. On the other hand, Yin et al. (1998) found that SBDC centers in a special MEP-SBDC pilot program did not have substantially better service delivery than a comparison group of MEP centers with their own SBDC collaboration initiatives.

Partnership with state governments has been an important element of MEP's funding formula. This formula has assumed an equal contribution of state and federal funding, with remaining revenue coming from client fees or other sources. A simulation of the federal-state relationship concluded that two-thirds of the states would not provide state funds if federal funding were discontinued (MEP 1998).
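Read literally, the formula described above implies a simple budget identity: the state contribution mirrors the federal contribution, and client fees or other sources cover whatever remains. The sketch below works through it with hypothetical figures; the actual match requirements have varied over the program's history.

    # Stylized center budget under the cost-share arrangement described above
    # (hypothetical figures; actual match requirements have varied over time).
    total_budget = 3_000_000
    federal_share = 1_000_000
    state_share = federal_share                        # assumed equal contribution
    client_fees = total_budget - federal_share - state_share
    print(f"Fees and other sources must cover ${client_fees:,} "
          f"({client_fees / total_budget:.0%} of the budget)")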

Center-to-center comparisons were the subject of a few evaluation studies. Chapman (1998) conducted a distinctive data envelopment analysis of MEP centers. This work showed that different centers were at the frontier of different service areas, with no one center consistently in the lead. Wilkins (1998) also performed center comparisons involving 14 centers, similarly finding that no one center excelled on all measures.
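To make the data envelopment analysis approach used by Chapman (1998) concrete, the sketch below computes input-oriented CCR efficiency scores for a handful of hypothetical centers using SciPy's linear programming routine. The input and output measures chosen here are illustrative assumptions, not the variables Chapman actually analyzed; a score of 1.0 places a center on the efficiency frontier for that mix of measures.

    import numpy as np
    from scipy.optimize import linprog

    def dea_efficiency(inputs, outputs):
        """Input-oriented CCR DEA scores. inputs: (m, n); outputs: (s, n) for n centers."""
        m, n = inputs.shape
        s = outputs.shape[0]
        scores = []
        for o in range(n):                       # solve one LP per center
            c = np.r_[1.0, np.zeros(n)]          # minimize theta
            A_in = np.hstack([-inputs[:, [o]], inputs])      # sum lam*x <= theta*x_o
            A_out = np.hstack([np.zeros((s, 1)), -outputs])  # sum lam*y >= y_o
            res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.r_[np.zeros(m), -outputs[:, o]],
                          bounds=[(0, None)] * (n + 1))
            scores.append(res.x[0])
        return np.array(scores)

    # Hypothetical centers: inputs = federal $ (millions), staff FTEs;
    # outputs = clients served, reported impact ($ millions).
    X = np.array([[1.2, 2.0, 1.5, 2.4],
                  [10., 18., 12., 20.]])
    Y = np.array([[300., 420., 280., 500.],
                  [4.0, 6.5, 3.1, 7.2]])
    print(dea_efficiency(X, Y).round(3))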

The mix of services and delivery methods was the subject of various evaluations in the 1990s. GAO's seminal 1991 review of the manufacturing centers found there was a misalignment between the legislation establishing the centers, which emphasized technology transfer from the federal laboratories, and the needs of small manufacturers for assistance with proven technologies (GAO 1991). A 1993 National Academies study reiterated that although the program's enabling legislation focuses on technology upgrading, center specialists emphasize that a broader range of management and training services, as well as technology services, is required (National Academy of Sciences 1993). Youtie and Shapira (1997) observed that the type of outcome is associated with service mix; marketing and product development services were 60 percent more likely to lead to sales outcomes, energy projects were more likely to lead to cost savings, plant layout and environmental projects were more likely to lead to capital expenditure avoidance, and quality projects were not strongly associated with any type of outcome. Oldsman and Heye (1998) performed a simulation which showed that services that enable a manufacturer to raise piece prices generate more profit than services that enable a reduction in scrap rate. Luria (1997) maintained that the program's service mix attracts cost-minimization/lean companies that are not on the path to increasing productivity.

Cosmos Corporation led case studies of high impact projects with 25 manufacturing clients (1997), six manufacturing clients (1998), and seven highly transformed manufacturers (1999). The results indicated the importance of integration of services and of making discontinuous changes across multiple systems with leadership by top management.

Although most MEP technical assistance services are delivered on a one-on-one basis to a single manufacturing client at a time, MEP invested in a networking service delivery pilot from 1996 to 1998. This pilot had an extensive evaluation component, capped by a survey of 99 members of 13 separate business networks. The results indicated that the median net benefit of network participation to the firm was $10,000, while some members experienced significantly higher benefits, raising the mean to $224,000 (Welch et al. 1997). Kingsley and Klein (1998) further found, in a meta-analysis of 123 case studies of networks, that networks with private sector leadership and funding were more likely to be associated with new business outcomes.

Intermediate outcomes were a major source of examination in 1990s-era evaluations. Several client survey-based studies indicated that a high percentage of companies engage in implementation following MEP assistance. Two-thirds of Georgia MEP customers took action on center recommendations (Youtie and Shapira 1997). Nearly 30 percent of Massachusetts center customers would not have carried out changes without MEP assistance (Ellis 1998). Many client surveys also suggested positive views of performance improvement, with the GAO (1995) finding that 73 percent of manufacturing respondents across the nation had better business performance and Ellis (1998) indicating that 71 percent of Massachusetts manufacturers improved their competitiveness as a result of center assistance.

Technology adoption was an important focus of several evaluation studies. Shapira and Rephann (1996) observed that manufacturing technology assistance program participants in West Virginia were more likely to adopt individual technologies and be amenable to technological investments than non-participants, but did not have significantly higher aggregate adoption across a range of technologies. The Luria and Wiarda (1996) Performance Benchmarking database analysis indicated that MEP customers adopted most technologies (with the exception of information technologies) more quickly than non-MEP customers. Evidence from case studies of centers in northern Pennsylvania, Michigan, and Minnesota conducted by Kelly (1997) led to the conclusion that the use of one-on-one services militates against widespread diffusion of knowledge and skills important for advanced technology usage.

Most of the evaluations try to get at business outcomes such as productivity as measured by value added. Several challenges in doing so are observed in these studies. The lion's share of impacts was found to come from a small number of manufacturing clients, with many reporting small or no impacts (Oldsman 1996). Most manufacturers had difficulty calculating impacts, and the timing of measurement was found to be an issue, in that customers overestimate benefits, especially sales impacts, and underestimate costs close to the point of survey, except for a small number of high impact projects (Youtie and Shapira 1997).

Customer surveys tended to present positive outcomes. Quantitative business outcomes tended to present a more moderate picture, however, particularly when comparison groups were used to control for other factors and explanations besides program assistance. Some comparison group studies surveyed all manufacturers in a particular region (as in Youtie et al. 1998) or in a national sample (as in Luria 1997 and Luria and Wiarda 1996). Others linked MEP customer information to administrative datasets at the Census Bureau or Department of Labor and selected enterprises from these datasets to match client profiles (Jarmin 1997, 1999; Oldsman 1996; Nexus Associates 1996). Most of these studies focused on productivity as measured by value-added per employee, although other outcome metrics were used as well. Jarmin (1997, 1999), Shapira and Youtie (1998), and Nexus Associates (1999) found clients to have higher growth in value-added per employee than non-clients. These analyses tended to focus on a few centers or networks of centers (Georgia in the case of Shapira and Youtie, and Pennsylvania in the case of Nexus Associates).
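A schematic version of this comparison-group approach is sketched below: a Cobb-Douglas-style regression of log value added on log labor, log capital, and a client indicator, estimated on simulated plant data. The data are synthetic and the specification omits the matching, additional service-provider controls, and panel structure that the actual studies employed.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    log_emp = rng.normal(3.5, 0.8, n)       # log employment (synthetic)
    log_cap = rng.normal(14.0, 1.0, n)      # log capital stock (synthetic)
    client = rng.integers(0, 2, n)          # 1 = extension client (synthetic)
    # Simulated log value added with a small client effect built in.
    log_va = (1.0 + 0.6 * log_emp + 0.3 * log_cap + 0.05 * client
              + rng.normal(0, 0.4, n))

    X = sm.add_constant(np.column_stack([log_emp, log_cap, client]))
    fit = sm.OLS(log_va, X).fit()
    print(fit.params)   # last coefficient: approximate client value-added premium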

Jarmin's analysis of eight MEP centers from 1987 to 1992 found productivity increases in clients over non-clients ranging from 3.4 to 16 percent. Nexus Associates' analysis of Pennsylvania centers reported higher labor productivity of 3.6-5 percent in clients as compared with non-clients. The average Georgia client had $366,000 to $440,000 higher value-added than non-clients.

Other comparison group-based evaluations found fewer differences between served and un-served manufacturers. Analysis of the Performance Benchmarking dataset showed that MEP clients do better than non-clients in sales growth, job growth, and adoption of some process improvements, but clients are not significantly better than non-clients in growth in wages, profits, and productivity (Luria 1997). Evaluation of the New York program indicated that participating manufacturers added 5.7 percent fewer workers than similar, non-participating companies (Oldsman 1996). Because the MEP seeks to enhance productivity, implementation of efficiency measures may result in a diminishment of some factory worker positions. This reduction is not automatically a drawback, as the program's aim to promote long-term manufacturing competitiveness can lead to some declines along other dimensions.

The costs and benefits of manufacturing extension beyond those of the individual clients served were the subject of another set of studies. The results of these studies were reasonably positive. Cost-benefit analyses by Shapira and Youtie (1995), Nexus Associates (1996), and the Michigan Manufacturing Technology Center (1996) demonstrate that the net public and private benefits of MEP assistance outpace costs by a ratio in the 1:1 to 3:1 range. A Pennsylvania study (Nexus Associates 1999) reported much more positive net returns to the state investment of 22:1. Thompson (1998) found the taxpayer payback to Wisconsin varied from slightly below break-even to positive. Several of these studies put forth methods to address various issues in cost-benefit analysis, such as accounting for the full range of private and public costs and benefits, addressing returns and investments over time, and giving consideration to zero-sum re-distribution of benefits and to value-added capture through downward adjustment of sales impacts for export sales and value-added (Shapira and Youtie 1995).
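The benefit-cost calculations cited here reduce, schematically, to a ratio of discounted benefit and cost streams. The sketch below uses hypothetical streams and a 7 percent discount rate, and omits the value-added and export-sales adjustments that Shapira and Youtie (1995) describe.

    # Hypothetical benefit and cost streams (dollars per year), illustrative only.
    def benefit_cost_ratio(benefits, costs, rate=0.07):
        present_value = lambda xs: sum(x / (1 + rate) ** t for t, x in enumerate(xs))
        return present_value(benefits) / present_value(costs)

    costs = [1_000_000, 1_000_000, 1_000_000]               # program costs, years 0-2
    benefits = [400_000, 1_200_000, 1_600_000, 1_200_000]   # client benefits, years 0-3
    print(round(benefit_cost_ratio(benefits, costs), 2))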

3. EVALUATION STUDY RESULTS: 2000s

The studies conducted in the 2000s reflected a different climate than was seen in the previous decade. Whereas the 1990s was a period of program expansion and experimentation, the 2000s saw substantial fluctuations in the program's budget, a systematizing of services, and consolidation of the number of centers as certain centers were combined into statewide or regional programs. The MEP evaluation system itself became more standardized as the evaluation working group of the 1990s was ended, center-level personnel became reporting rather than evaluation specialists, and metrics decisions were raised to the level of the center director rather than the center-level evaluator. This systematization is reflected in the MEP evaluation plan and metrics published by Voytek and colleagues in 2004 (Voytek et al. 2004). Evaluations in the 2000s presented in this paper were distinctive in their greater use of international comparisons and of program assessment using expert panelists and document review. The table in the Appendix shows that six of the nineteen assessments published in the 2000s involved expert panelists or document review. Five used survey methods and three used comparisons with services or manufacturers in other countries.

Market penetration was addressed in several of these studies. Stone & Associates and CREC (2010) found penetration to be a concern in that the MEP serves only 10 percent of manufacturers, 2 percent with in-depth assistance. Although this level of service could be argued to reflect cherry picking of clients, Deloitte (2004) reported that Pennsylvania manufacturing extension centers did not engage in "creaming": a comparison of the credit ratings of Pennsylvania manufacturing clients and a matched group indicated that the differences were not statistically significant. On the other hand, GAO (2011) examined the relationship between fees charged and penetration, finding that 80 percent of MEP centers were very or somewhat likely to prioritize revenue generation projects with larger clients.

Concerns about mission, in terms of the program's relationship with the private sector, were raised to higher policy levels in the 2000s. Four governmental assessments were devoted to this issue: OMB (2002), the National Commission on Fiscal Responsibility and Reform (2010), Schacht (2011), and GAO (2011). OMB's Program Assessment Rating Tool (PART) evaluated the MEP program purpose and design, strategic planning, management, and results and accountability. Rated moderately effective, the assessment determined that "It is not evident that similar services could not be provided by private entities." The National Commission on Fiscal Responsibility and Reform concluded that MEP provides services that exist in the private sector. GAO's review of the cost share match requirements for centers reported that rural areas, which often are too costly for private consultants to serve, also were harder for the MEP to serve as centers increasingly found it necessary to develop cost share. Schacht's Congressional Research Service report also addressed the issue of the appropriate level of federal investment for the program. To address the concerns raised in the OMB assessment, the National Academy of Public Administration (NAPA 2003) used a panel, document review, and interview process to conclude that barriers to productivity improvement continue for small and medium-sized manufacturers and that these firms are underserved by the private market.

Management of the program is another major substantive area of study in 2000s-era evaluations. The 2003 NAPA review concluded that "MEP is effective in its core mission of helping small manufacturers reduce the barriers to productivity improvement" (p. 44). The OMB (2002) PART review of the program also gave MEP top ratings along many program management dimensions, including program purpose, need, program and performance goals, strategic planning, collaboration, quality evaluations, budgetary goal alignment, financial management, proposal and grantee oversight, and achievement of performance goals and cost efficiencies and effectiveness.

Schrank and Whitford (2009) found the program to advance experimentation, diversity, and access to local knowledge.

Although center-to-center variability continued to be observed, as it was in the Chapman and Wilkins studies, center-level evaluations also indicated the existence of stronger and weaker performing centers. An analysis of groups of large and small center MAIM scores in 2001, 2003, and 2005 observed that there were no consistent top performing centers from period to period, although a few centers landed near the top in many of the periods under analysis (Youtie 2005). NAPA (2004) found strong performance differences between centers; a substantial association was evidenced between high performing centers and the number of clients served, years in operation, number of full-time equivalent (FTE) employees for the center per million dollars of federal investment, and ratio of state dollars. A study of the manufacturing extension center in Arkansas found that it and its partners complied with MEP's implementation resource criteria and program goals (Russell 2011).

Program impacts continued to be a focal point of a set of evaluations. MEP metrics became systematized and focused primarily on the results of the client survey. As Luria noted, centers are accustomed to the survey; however, it is marked by issues such as large numbers of clients that cannot monetize the effects of program assistance, the significant role of outliers, attribution concerns, and the importance of focusing on value-added.4 MEP has sought to respond to some of these issues, for example, by applying value-added adjustments to sales results from the MEP survey in its bottom-line client impact MAIM metric (Voytek et al. 2004).

________________

4 See Luria (2011).

Several states conducted evaluations of their centers' outcomes, and assessments of individual centers or regions of centers were also published during this period. The Deloitte assessment of Pennsylvania extension service impacts concluded that the productivity and fiscal impact results from the Nexus Associates 1999 evaluation persisted into the 2000-2003 time period, based on findings that the client mix in the more recent period was the same as it was in the earlier study, customer dissatisfaction had not increased, and MEP customer survey results showed the Pennsylvania centers to be high performers in terms of impacts. In contrast, Davila's evaluation of the Chicago center found that in an earlier period, clients were more likely than non-clients to have adopted new machinery and equipment, but by the next year, clients were similar to the general population in this regard (Davila 2004). Survey-based analyses of Georgia clients and non-clients in 2008 and 2010 maintained prior findings of higher increases in value-added per employee for clients. The Georgia results also differed sharply from those in comparable regions in the UK and Spain in that manufacturing extension customers in Georgia were more apt to engage in product and process innovation than similar non-customers (Roper et al. 2010).

Author/Year: (Entry continued from a preceding page of the table)
Main Findings (continued): ...growth, and adoption of certain process improvements and technologies. However, center customer growth in wage rates, profitability, and labor productivity were not significantly different from that of non-customers. The author attributes the results to the center's service mix, which attracts companies that are not on a rising productivity path, combined with intense customer price pressures.

Author/Year: MEP (1997)
Method: Telephone survey of MEP customers by the U.S. Census Bureau
Focus: Nationwide, MEP customers
Main Findings: MEP customers report $110 million in increased sales, $16 million in reduced inventory levels, $14 million in labor and material savings, 1,576 net jobs created, and 1,639 total jobs retained as a direct result of MEP services. Information provided 9-10 months after project close.

Author/Year: Modernization Forum and Nexus Associates (1997), Oldsman (1997)
Method: Survey, comparison group
Focus: 750 MEP clients, 800 comparison companies, 202 private consultants nationwide
Main Findings: Only 7 percent of MEP clients report that the MEP offers the same services as private consultants. The probability of a typical MEP customer improving its performance is 5.4 times greater than that of a manufacturer that secured consulting services on its own.

Author/Year: Shapira and Youtie (1997)
Method: Case studies and analysis of reporting data
Focus: 6 MEP centers and their partnerships
Main Findings: MEP sponsorship has led to increased service coordination not readily obtained through individual center efforts alone or through demands of state governmental funders. Increased service coordination, in turn, has mostly improved the assistance delivered to firms, though significant expenditure of resources was required to achieve these benefits.

Author/Year: Welch, Oldsman, Shapira, Youtie, and Lee (1997)
Method: Survey of manufacturing network customers
Focus: 99 members of 13 separate business networks
Main Findings: The median net benefit of network participation to the firm is $10,000 (the average was $224,000).

Author/Year: Youtie and Shapira (1997)
Method: Customer survey, longitudinal tracking study
Focus: Georgia, MEP customers
Main Findings: 68 percent of assisted firms took action, with more than 40 percent reporting reduced costs, 32 percent improved quality, and 28 percent capital investment. Customers overestimate benefits and underestimate costs close to the point of survey, except for a small number of high impact projects.

Author/Year: Youtie and Shapira (1997)
Method: Customer survey; project-impact analysis
Focus: Georgia, MEP customers
Main Findings: Product development and marketing projects are 60 percent more likely to lead to sales increases; energy projects are most likely to lead to cost savings; plant layout and environmental projects help companies avoid capital spending. Quality projects do not rate highly anywhere, although they require the largest MEP customer time commitment.

Author/Year: Chapman (1998)
Method: Data envelopment analysis of MEP reporting data
Focus: Compares 51 MEP centers using second half of 1996 data
Main Findings: Centers excel in different areas (specifically, MEPs on the frontier in one area may move out of, or not be on, the frontier in another area).

Author/Year: Cosmos Corporation (1998)
Method: Logic-based case studies
Focus: 6 case studies from 6 centers
Main Findings: Integration of certain interventions leads to substantial outcomes.

Author/Year: Ellis (1998)
Method: Surveys of MEP customers
Focus: Massachusetts MEP customers
Main Findings: 29 percent of MMP customers may not have undertaken changes without MMP assistance. 71 percent of MMP customers reported some improvement in competitiveness.

Author/Year: Glasmeier, Fuellhart, Feller, and Mark (1998)
Method: Survey of 51 manufacturers
Focus: Information requirements of plastics industries in the Appalachian Regional Commission's counties in Ohio, Pennsylvania, and West Virginia
Main Findings: Firms most often use traditional information sources because of their credibility and reliability, so MTCs need time to establish a history to demonstrate their effectiveness to firms.

Author/Year: Kingsley and Klein (1998)
Method: Meta-analysis of 123 case studies
Focus: Cases of industrial networks in Europe, North America, and Asia
Main Findings: Network membership can be built with the sponsorship of parent organizations and with public funding, but the networks that generate new business are associated with private sector leadership and at least some private sector funding.

Author/Year: MEP (1998)
Method: Telephone survey of MEP customers by the U.S. Census Bureau
Focus: Nationwide, MEP customers
Main Findings: MEP customers report increased sales of nearly $214 million, $31 million in inventory savings, $27 million in labor and material savings, and a $156 million increase in capital investment as a direct result of MEP services. Information provided 9-10 months after project close.

Author/Year: MEP (1998) (with Nexus Associates)
Method: Simulation model
Focus: MEP centers nationally
Main Findings: Two-thirds of states would end state funding if federal funding were ended; 60-70 percent of centers would not be able to maintain a focus on affordable, balanced service.

Author/Year: Oldsman and Heye (1998)
Method: Simulation
Focus: Hypothetical metal fabrication firm
Main Findings: Reducing scrap by 2 percent raises profit margins by 1.2 percent, but increasing piece price by 2 percent adds $200,000 a year.

Author/Year: Sears and Blackerby (1998)
Method: MEP evaluation plan and metrics
Focus: National system
Main Findings: MEP's evaluation is designed to contribute to center-level and system-level performance through attention to customer information, data quality, data analysis, and producing results.

Author/Year: Shapira and Youtie (1998)
Method: Survey of manufacturers, comparison group; Cobb-Douglas production function; controls include use of other public and private sector service providers
Focus: Georgia manufacturers with 10+ employees
Main Findings: The average client plant had a value-added increase of $366k-$440k over non-clients.

Author/Year: Thompson (1998)
Method: Benefit-cost study, simulation
Focus: Wisconsin taxpayers
Main Findings: Taxpayer payback ratios of 0.9:1.0 to 3.5:1 from the point of view of the state taxpayer who receives a federal subsidy. However, there is considerable variation in payback ratios by industry and by service type. Increasing sales shows the greatest taxpayer payback.

Author/Year: Wilkins (1998)
Method: Center management benchmarking
Focus: 14 MEP centers
Main Findings: No single measure designates a high or low performing center. A costing rate of $200-$400 per hour resulted. Field staff tend to develop more projects than they close. 75 percent of centers have moved from subsidizing services to generating positive cash flow.

Author/Year: Yin, Merchlinsky, Adams-Kennedy (1998)
Method: Survey and case studies, comparison group
Focus: 7 pilot centers (receiving $750,000 over 3 years to establish a manufacturing SBDC) and 7 comparison centers with SBDC relationships but no special funding
Main Findings: Pilot and comparison centers did not differ markedly either in the nature of their partner relationships with SBDC or in the seamlessness of their service delivery.

Author/Year: Cosmos Corporation (1999)
Method: Logic-based case studies
Focus: 7 case studies from 5 MEP centers
Main Findings: A transformed firm has made significant changes in four of five systems; many paths to transformation were observed.

Author/Year: Nexus Associates (1999)
Method: Quasi-experimental comparison group study, fiscal impact analysis
Focus: SME client cohorts from 1988/1989-1998/1999, longitudinal research dataset
Main Findings: On an annualized basis, IRC clients increased labor productivity by 3.6-5.0 percentage points and output by 1.9-4.0 percentage points more than they would have done without assistance. Productivity gains resulted in an inflation-adjusted $1.9 billion increase in gross state product between 1988-1997. A benefit-cost analysis finds returns to state investment of 22:1.

Author/Year: Jarmin (1999)
Method: Panel, longitudinal study
Focus: Longitudinal Research Database, Annual Survey of Manufacturers 1987-1993, MEP customer data from 9 centers
Main Findings: The timing of observed productivity improvements at client plants is consistent with a positive impact of manufacturing extension.

Author/Year: OMB (2002)
Method: Program assessment
Focus: National MEP program
Main Findings: MEP is rated "moderately effective," with concerns raised that the program serves only 7 percent of manufacturers, that competition with the private sector exists, and that efforts to pursue self-sufficiency without federal funds have not been pursued.

Author/Year: NAPA (2003)
Method: Panel review
Focus: National MEP program
Main Findings: SME barriers to productivity improvement persist and these firms are underserved by the market.

Author/Year: NAPA (2004)
Method: Panel review
Focus: National MEP program
Main Findings: MEP has performed well but its basic service model, funding formula, and role of the national office need to evolve.

Author/Year: Davila (2004)
Method: Comparison of Chicago Manufacturing Center clients and Performance Benchmarking Service firms
Focus: Chicago area
Main Findings: Clients were more likely than non-clients to have adopted machinery and equipment, but in the next year this difference was insignificant.

Author/Year: Deloitte (2004)
Method: Detailed review of NAICS growth, company credit scores, comparison group
Focus: SME clients of Industrial Resource Centers (IRCs) in Pennsylvania
Main Findings: IRCs are not creaming; the mean credit rating of 2.92 for clients versus 2.81 for non-clients is not significantly different in multinomial regressions. Impacts reported in the Nexus 1999 evaluation continue to hold true.

Author/Year: Voytek, Lellock, and Schmit (2004)
Method: MEP evaluation plan and metrics
Focus: National system
Main Findings: Center-level metrics can be used to diagnose performance.

Author/Year: Youtie (2005)
Method: MEP system metrics
Focus: MEP centers in the national system
Main Findings: No consistent "top center" exists in terms of key Minimally Acceptable Impact Measures (MAIM) metrics.

Author/Year: Youtie et al. (2008)
Method: Survey of clients and non-clients
Focus: Georgia MEP
Main Findings: Georgia Tech clients experienced a 12 percent increase in value-added per employee over non-clients.

Author/Year: SRI and Georgia Tech (2008)
Method: Logic-based case studies
Focus: 8 case studies from 4 MEP centers
Main Findings: Companies in industries with job losses, private family-owned firms, a concentrated structure for product development, and a history with technological implementation were more apt to experience benefits from E!WW. 3 of 8 case study firms gained tangible benefits and 4 of 8 took steps toward growth.

Author/Year: ASMC (2009)
Method: Survey
Focus: Manufacturers in 18 states
Main Findings: One-quarter of manufacturers do not engage in world-class practices in six strategic areas.

Author/Year: SRI and Georgia Tech (2009)
Method: Longitudinal study, comparison group, unemployment security data
Focus: Ohio MEP served and unserved firms in Ohio
Main Findings: Comparing 443 MEP clients to 14,062 non-client manufacturers, the average manufacturing establishment shrank by 6.28 employees between 1998 and 2002. The average firm that received treatment during this period shrank by just 0.53 employees. This difference is statistically significant at the 0.1 level.

Author/Year: Cheney et al. (2009)
Method: Longitudinal study, comparison group
Focus: Census of Manufacturers, MEP customer data, all 59 centers
Main Findings: Manufacturing extension clients had growth in value-added per worker versus non-clients ranging from -5.8 percent to 1.7 percent for 713,330 observations, depending on whether a difference-in-differences or lagged dependent variable model is estimated.

Author/Year: Schrank and Whitford (2009)
Method: Review of policy
Focus: SME US policy
Main Findings: MEP promotes experimentation, diversity, and access to local knowledge.

Author/Year: Helper and Wial (2010)
Method: Policy review
Focus: Great Lakes region manufacturing
Main Findings: MEP serves individual manufacturers willing to pay for assistance rather than what may be needed to implement national policy goals.

Author/Year: Roper et al. (2010)
Method: International survey
Focus: Manufacturers in Georgia USA and three regions in Europe
Main Findings: University intervention models are significant in Georgia product and process innovation, but complementarities are not well captured.

Author/Year: Stone & Associates and CREC (2010)
Method: Interviews, document review
Focus: National, MEP clients
Main Findings: MEP serves 10 percent of manufacturers, 2 percent with in-depth assistance.

Author/Year: MEP Advisory Board, Yakimov, Woolsey (2010)
Method: Advisory board review
Focus: National MEP program
Main Findings: Streamline innovation and growth services, target green services, emphasize exporting, and develop talent.

Author/Year: National Commission on Fiscal Responsibility and Reform (2010)
Method: Budget analysis, working groups
Focus: National MEP program
Main Findings: MEP provides services that exist in the private sector and helps firms that should close.

Author/Year: Youtie et al. (2010)
Method: Survey of clients and non-clients
Focus: Georgia MEP
Main Findings: Georgia Tech clients had $11,000 higher value-added per employee than non-clients.

Author/Year: Ezell and Atkinson (2011)
Method: International comparison
Focus: MEP and similar programs in 10 other countries
Main Findings: Countries' manufacturing support programs have made a transition from continuous improvement to growth, but the US program is under-funded.

Author/Year: GAO (2011)
Method: Program assessment, surveys
Focus: National MEP program
Main Findings: Client fees comprise more than half of non-federal funds for 26 centers; 80 percent of MEP centers are very or somewhat likely to prioritize revenue generation projects with larger clients.

Author/Year: MEP (2011)
Method: Client and fiscal impact
Focus: Nationwide, MEP customers
Main Findings: $1 of federal investment in MEP generates $32 of return in economic growth, $3.6 billion in new sales nationally.

Author/Year: Russell (2011)
Method: Historical documents, interviews, survey
Focus: Little Rock, Arkansas
Main Findings: The center and its partners comply with MEP's implementation resource criteria and program goals.

Author/Year: Schacht (2011)
Method: Legislative history review
Focus: National MEP program
Main Findings: Issues around federal funding for the program and the match level of this funding remain.

Author/Year: Shapira et al. (2011)
Method: International comparison
Focus: 7 technology extension services in 6 countries
Main Findings: Countries' technology extension services reflect their distinctive national innovation systems.