

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




5 Application of Good Practices in Prioritization and Decision-Making Processes Relevant to OST

The first part of this chapter discusses the general themes introduced in Chapter 3 in the context of the decision-making environment described in Chapter 4. This treatment presents ways in which these themes can find application in OST, organized by headings that highlight some of the relevant best practices.1 Following these comments of a general nature, more specific evaluations are provided in the second part of this chapter on OST prioritization and decision process steps. These are evaluative comments on the ways in which the functional process steps of Figure 4.1 are conducted within the current OST program. Findings and recommendations arising from each of these parts are contained in Chapter 6.

The committee has reviewed the prioritization and decision-making practices of OST (Chapter 4) in the context of the best practices introduced in Chapter 3. The results of this comparison are presented in this chapter and form the basis for many of the findings and recommendations in Chapter 6. Discussion of OST's decision-making process is organized by the sections that follow, which highlight some of the relevant best practices. Following these comments of a general nature, more specific evaluations are provided of OST process steps using the model of Figure 4.1.

AGREE ON CLEAR AND MEASURABLE GOALS

A most important requirement for good decision making is the establishment by the top levels of management of an organization's strategic goals (see Simon, 1997). The goals must be broad and clear enough to provide guidance to all parts of the enterprise, yet specific enough to provide a basis for effective decision making at all levels of the organization. This requires definition of a hierarchy of goals and responsibilities for each organizational component and program. In addition, it should be possible to measure how well these goals are met.
Although former EM Assistant Secretary Thomas Grumbly established consideration of "risk" as a major strategic goal for EM, and former Assistant Secretary Alvin Alm established "reducing the mortgage" as his major strategic goal, neither of these goals was specific enough to allow OST (or other EM decision makers) to differentiate effectively between short-term and longer-term priorities. As a result, very important but long lead-time problems could be relegated to priorities so low that they were severely cut in funding, even to the point of not appearing above the

1The only one of the 10 practices highlighted in Chapter 3 that is not discussed here is "hire the best people possible and maintain expertise," because OST is constrained by DOE and federal government hiring practices that are largely outside OST's direct control. Some program managers interviewed for this study (see Appendixes C and D) were no longer in the OST program by the time this report was published. Committee thoughts on how to apply this idea to OST are mentioned in Chapter 3.

budget-constrained cut-off line for funding. Publication of DOE-EM's strategic plan (DOE, 1997j) and Accelerating Cleanup: Paths to Closure (DOE, 1998a; 1998b) helps to address this prior lack of specificity.

It does not appear to the committee that OST has established a uniform set of quantifiable, measurable goals and criteria needed for high-quality decision making. These top-down goals for OST can be derived from user input such as "strategic goals" (DOE, 1997j), which target the sum of the work to be achieved at all sites in a bottom-up approach. These strategic goals are obtained through a user-driven and, therefore, bottom-up decision-making process of other EM offices in which most remediation decisions are made at individual sites. Top-level goals derived from these and other user inputs (e.g., DOE, 1998a; 1998b) could be used in a centralized, coordinated, top-down process within OST to drive the technology development program. Of course, any top-level strategic goals developed by OST must be consistent with the EM mission and derived in coordination with user plans and needs.

UNDERSTAND AND FOCUS ON CUSTOMER NEEDS AND REQUIREMENTS

As OST learned by hard experience, the technologies it has developed are of value in cleanup only if the intended technology users accept and deploy them. The fact that a technology may be used by industry for tasks other than site cleanup, although important to the developer, is not an essential criterion of the user of a technology in DOE-EM. As the level of acceptance of OST technologies by the sites came under increasing criticism, OST employed several different approaches in its attempts to get new technologies deployed at the sites.
The most recent attempts are embodied in the ASTD approach, which provides cofunding to the sites as an incentive to deploy technologies, and in the TAC and other headquarters integration activities intended to facilitate deployment using upper DOE management interactions. Whether or not these approaches will be successful has yet to be determined. In any event, OST has attempted to overcome the technology deployment obstacles it faces.

COMMUNICATE ACROSS ORGANIZATIONAL BOUNDARIES

Interaction, with frequent feedback, between OST managers at the sites (i.e., managers of Focus Areas and Crosscutting Programs) and site contractors and their subcontractors is highly desirable. By maintaining a continuing awareness of the sites' evolving technology needs and an understanding of the many constraints and pressures that site operators face, OST may increase its effectiveness in substituting cheaper and/or more effective technologies in the baseline functional flowsheets.

THINK STRATEGICALLY: HEDGE AGAINST TECHNICAL UNCERTAINTY AND INSIST ON ALTERNATIVES

A much more difficult problem than technology deployment is obtaining broad recognition that alternatives to the baseline functional flowsheet are an essential part of sound systems engineering and that good decision-making practices dictate that all well-planned and executed projects allow for the possibility of failure and provide for contingencies in this event. The constrained budgets for cleanup, combined with pressures to move more quickly in cleaning up sites, have forced decision makers to eliminate any significant consideration of alternatives. Yet experience shows repeatedly that failure in RD&D efforts does occur. The good decision-making practices discussed in Chapter 3 also highlight the need to consider alternatives (see also Simon, 1997), which is the responsibility of the community of users of technology at DOE-EM sites. Recognition of the need for alternative solutions to cleanup problems in the event of technical failure or funding shortfall is essential. Alternative functional flowsheets or overall problem solutions often require that alternative technologies be developed.

In a similar vein, it is appropriate for OST to have input in the initial establishment of baseline functional flowsheets, even though the problem owner at the site bears the responsibility for cleanup. Because some of the most intractable of DOE's cleanup problems are the first of a kind in many respects, it is unlikely that any site contractor will be totally prepared for all of the problems to be faced. OST and its established contracted technology experts are major potential resources of technical expertise for baseline functional flowsheet discussion and evaluation to the winning bidders of any privatization contract. Through the use of its available expertise, OST could provide needed assurance to DOE-EM that the contracted cleanup has a better likelihood of success. It may not be sufficient for DOE to take the position that the entire cleanup responsibility lies with the site contractor, even though this may be what the contract calls for. Such a course puts the taxpayer at risk; the taxpayer must pay for failure in terms of lost time, continued risk, and additional cost.

Application: Balance Innovation and Incremental RD&D

The process of structured, thorough screening and decision "gates" is appropriate for most development work on behalf of DOE-EM (this applies to work conducted by OST and other EM offices). Similar structured processes are generally used by industry counterparts, with emphasis on cost-benefit evaluation of the expected results of development work.
Although this report strongly endorses more structured evaluations, too rigid an application of screening criteria, at least in the early stages of some types of RD&D, can lead to a portfolio of RD&D efforts that becomes unbalanced by an overemphasis on projects and tasks that fit into short time horizons (i.e., with measurable payoffs in one to three years). This practice, particularly of funding only short-term, late-stage projects, would tend to exclude any project involving substantial innovation, that is, any project with technical risks, particularly if the proposed project is not fully understood or based on well-known science and technology. Innovation may be seriously discouraged if there is not some policy of acceptance of reasonable technical risk. Success rates for industrial RD&D depend on the mix of more risky RD&D projects with less risky (but often necessary) design and development projects.

The committee believes that an essential need for the DOE-EM RD&D function is to maintain a portion of its development work in research projects that are not primarily refinements of existing technology. This means exploring significantly different concepts. The intent should be to maintain a hedge against the possibility that important parts of the baseline technologies may not work well enough or may prove to be much more costly than originally expected. This also recognizes that most truly innovative concepts do not pay off, yet many must be tried in order to find the occasional big winner. Therefore, decisions on what technology to fund should be made in the context of exploring new concepts having direct relevance to cleanup, however difficult such judgments may be. In the United States, researchers in university or other independent settings are considered the traditional sources of innovative ideas. However, on-site technical people are most familiar with process limitations, and they should also be canvassed regularly for ideas.
The industrial RD&D programs, as well as those of nonprofit organizations such as EPRI and GRI, generally recognize the need to maintain at least a small "exploratory" or "strategic" RD&D category. This is commonly budgeted in the range of 10 to 12 percent of overall RD&D expenditures. This "hedging by exploring innovative ideas" is widely recognized as a good practice for the prudent management of RD&D portfolios.

CONTINUALLY IMPROVE THE RD&D MANAGEMENT PROCESS

In cases where the need is great and the outcome of a technology development project to meet the need is uncertain, it is justifiable to fund some overlapping or duplicative technology development projects, especially at the early, less expensive stages of development. However, in the past, OST has supported the demonstration of already proven and very similar technologies (GAO, 1996; see also Appendix C, Boxes C.1 and C.3). Partly as a result of budget cuts, partly as a result of increased sophistication on the part of OST managers, and partly as a result of the maturation of cleanup activities and concomitant focusing on the most important needs, this duplication of technology development activities by OST has likely been reduced.

Another area of duplication of effort by OST is in the acquisition of commercially mature technologies from industry. This activity appears to be carried out by the Industry Program at the Federal Energy Technology Center (FETC), by the DDFA (via the LSDPs; see Appendix C), and by site operators. (Focus Areas often use their own budgets for internal-to-DOE-system procurements, for example, to national labs.) Although there is some coordination of procurement based on needs formulated by Focus Areas, the practice of independent funding for procurement tends to undercut effective coordination. The most common OST practice is to put a statement of technology need in the Commerce Business Daily and wait for responses from industry. A preferred approach would not only consider the availability of technologies in industry, but also provide an assessment of the strengths and shortcomings of these technologies and their effective integration into site functional flowsheets before any technology development solicitation is made. OST has recently begun to use the U.S.
Army Corps of Engineers to carry out cost studies of technologies developed, or proposed for development, by OST and to compare these costs with those of the baseline functional flowsheet technologies they are intended to replace. This is another attempt by OST to show top-level managers of DOE-EM, members of Congress, and "watchdog" groups that OST is performing a needed and responsible job. The quantification of problems by this approach is a way to generate informational inputs to a decision process. However, there is the ever-present problem that the cost of the baseline flowsheet is often not known to an acceptable degree of certainty. For credible cost comparisons, calculations of baseline and alternative technologies should be done in a consistent way (NRC, 1998a).

A further addition to OST's attempts to be responsive to criticisms of its productivity, as measured by deployment of the results of its efforts, is the introduction of a comprehensive database that includes technologies it has developed, their costs, and whether or not they have been deployed. This database allows OST to be responsive to various watchdog groups in a timely way and provides a measure of OST's deployment successes. It also provides an effective tool for use by OST in cost-benefit determinations, which provides valuable decision-making information. An inherent weakness in any attempt to assess the value of RD&D is the fact that it takes time for technologies to be demonstrated and deployed. Consequently, many of OST's technologies not yet credited with being "successes" may be successes in the future.

Application: Document Bases for Retrospective Analysis, Especially in Headquarters' Discretionary Allocations

The OST program management at headquarters in the past has funded some projects directly, without using established review procedures as part of its decision making.
Flexibility to act in this way is appropriate because it provides a way to respond quickly to a late-breaking development or opportunity. An example of this method of decision making was the OST headquarters decision to fund the Hanford Tank Initiative (HTI), which the Hanford STCG had endorsed. As pointed out in Chapter 3, it is good practice to document the basis for making decisions, especially those that are made outside the established process. If a structured decision-making process is not followed, documentation to allow retrospective examination of discretionary decisions seems prudent so that lessons can be learned from past decisions and adverse reactions can be avoided.

MEASURE AND EVALUATE TO GUIDE RESOURCE ALLOCATION

The STCGs and Focus Areas use criteria for prioritization of needs and selection of projects. These criteria were formulated in an attempt to provide a rational basis on which to select and prioritize the technology needs forwarded to OST by problem owners at the sites. However, there has not been a concerted effort to make these criteria uniform (to the extent practicable) from one STCG or Focus Area to another (see Appendixes B and C). Furthermore, many of the criteria that do exist are not well formulated. They contain both obvious and underlying redundancies and in some cases are cast in such a way as to permit important, and even vital, needs to be downgraded so far that they are dropped from consideration. Guidance is needed from upper levels of DOE management for the development of formal, consistent, and rational criteria across all of OST and for "threshold" levels for each criterion to ensure that vital needs are met.

Application: Return-on-Investment (ROI) Evaluations for OST

ROI calculations were done in the 1997 ASTD program of OST.
The use of ROI calculations to evaluate ASTD proposals is a practice that could be extended by applying it to all technology development project proposals prior to their final selection for funding, both within each Focus Area (and Crosscutting Program) and across these program units, to identify the areas of greatest return on technology development investments. Returns could include benefits in many areas, as measured by estimated improvements in cost, schedule, safety, risk, and/or cleanup levels. ROI-type evaluations would also provide information by which to compare projects of separate program units (i.e., the relative worthiness of a Tanks Focus Area project versus one from the DDFA, or from the Robotics Crosscutting Program), thereby helping to inform decisions of how much money should be allocated to each program unit.

In 1997, OST used the Army Corps of Engineers to provide an ROI calculation for the entire OST program (DOE, 1997h; U.S. Army Corps of Engineers, 1997). The calculation estimated financial savings in the form of a "cost avoidance" to the federal EM budget. The Corps used a subset of OST technology development projects and extrapolated the deployments that could be achieved from these projects to derive an overall ROI for the $2.6 billion spent by OST since 1989, if the OST-developed technologies were deployed to replace some of the currently conceived baseline remediation technologies. Deployment assumptions introduced uncertainty into the calculations that was treated by calculating a range of financial returns, based on a range of assumptions about how aggressively these technologies would be deployed at sites. Unfortunately, many of these calculations suffer from the limited credibility of the baseline cost estimates.2 These cost comparisons cannot be done at all if a baseline remediation plan is lacking (because there is no baseline cost and technology to compare against).
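A cost-avoidance ROI calculation of the general kind described above can be sketched as follows. All project figures, deployment counts, and deployment-fraction scenarios below are hypothetical illustrations, not data from the 1997 Corps study; the sketch only shows how a range of deployment assumptions translates into a range of ROI values.

```python
# Sketch of a cost-avoidance ROI range of the kind described above.
# All project data and deployment fractions are hypothetical.

def roi_range(projects, investment, deployment_fractions):
    """For each deployment-aggressiveness scenario, sum the avoided costs
    (baseline cost minus new-technology cost per deployment, scaled by the
    number of potential deployments and the fraction assumed to occur),
    then divide by the RD&D investment to get an ROI ratio."""
    results = {}
    for label, fraction in deployment_fractions.items():
        avoided = sum(
            (p["baseline_cost"] - p["new_cost"]) * p["potential_deployments"] * fraction
            for p in projects
        )
        results[label] = avoided / investment
    return results

# Hypothetical portfolio: costs per deployment, in millions of dollars.
projects = [
    {"baseline_cost": 12.0, "new_cost": 7.0, "potential_deployments": 40},
    {"baseline_cost": 30.0, "new_cost": 22.0, "potential_deployments": 15},
    {"baseline_cost": 5.0, "new_cost": 4.0, "potential_deployments": 100},
]

# Range of assumptions about how aggressively sites deploy the technologies.
scenarios = {"conservative": 0.25, "moderate": 0.50, "aggressive": 0.90}

for label, roi in roi_range(projects, investment=260.0,
                            deployment_fractions=scenarios).items():
    print(f"{label}: ROI = {roi:.2f}")
```

The spread between the conservative and aggressive scenarios is the treatment of deployment uncertainty described in the text; the credibility caveat still applies, since every avoided-cost term depends on a baseline cost estimate.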
Another problem concerns the credibility of the estimated baseline cost savings when a baseline remediation technology is excessively costly; if the baseline remediation approach costs too much to be a realistic approach, then savings relative to the baseline cost figure are not true, credible savings. However, despite these problems, such estimations are worth considering, because a baseline plan must be developed and credible costs must be generated or else the cleanup may never move forward.

2More discussion of baseline cost methodology is found in NRC (1998a).

Application: Measures to Capture Technical, "Market," and Cost Information

Evaluation of an RD&D project can be done against many measures, such as the projected (1) benefits of using the technology for a particular application; (2) opportunities (i.e., other potential applications) that the technology offers; (3) costs incurred in its adaptation and deployment; and (4) risks of this deployment. A decision to deploy a newly developed technology could be made based on positive projections in any one (or more) of these four areas. Hence, one possible composite measure might be the sum of the four, as measured in a common unit (e.g., an avoided cost).

Alternatively, another measure can be constructed based on the concept that success at technology development requires success at both technical development and deployment. Thus, evaluation of a technology development proposal should include the combined probabilities of technical and deployment success. The potential benefit of a technology development initiative could be expressed as the product of these probabilities with the cost savings of performing a job using a new (versus an already established) technology. This benefit should be weighed against the development costs.
This concept can be expressed mathematically by the following equation: Figure of Merit = (PT PD \05~/(CD), Where PT = probability of technical success (e.g., GRI makes this proportional to the gate number in the stage-and gate model); PD = probability of market penetration (i.e., potential for deployment on site applications); I\Os,= difference in cost of doing a job in an old versus a new way (i.e., the old cost minus the new cost); and CD = cost of developing the technology (i.e., performing the RD&D). This formula expresses a benefit-cost ratio that could be a useful measure for evaluating the worminess of technology development proposals and ranking them. This formula illustrates the basic factors in a simplified way. in the prioritization and selection of technologies for development, other factors may also be treated probabilistically. These include the probability of solving significant DOE-EM problems (which may be, but are not necessarily, proportional to the number of sites having problems addressed by the technology under consideration) and the probability of funding (which may be related to the benefits as measured against risk, cost, and mortgage reduction). Another probabilistic factor is timeliness, insofar as the probability of sustaining adequate funding for a technology diminishes as the need to solve the problem addressed by the technology becomes more distant in time. These probabilities might be obtained initially from "back-of-the-envelope"-type estimations or judgments of knowledgeable people, to be supplanted over time by more refined figures as warranted, as data become available, and as the maturity of a technology development project increases. Despite all of the problems associated with these quantitative estimations, they are recommended to provide guidance because the alternative (of not doing them) provides less guidance to decision making. 
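The figure-of-merit formula lends itself directly to a ranking calculation. The sketch below applies it to a few hypothetical proposals; the proposal names, probabilities, and costs are illustrative assumptions, not program data.

```python
# Rank hypothetical technology development proposals by the figure of
# merit FoM = (PT * PD * dS) / CD described above.
# Every number and proposal name here is an illustrative assumption.

def figure_of_merit(pt, pd, ds, cd):
    """pt: probability of technical success; pd: probability of
    deployment ("market penetration"); ds: old cost minus new cost;
    cd: cost of developing the technology (same units as ds)."""
    return (pt * pd * ds) / cd

# Hypothetical proposals: (name, PT, PD, cost savings, development cost),
# costs in millions of dollars.
proposals = [
    ("in-tank characterization sensor", 0.7, 0.5, 40.0, 8.0),
    ("robotic decontamination arm",     0.5, 0.3, 90.0, 20.0),
    ("improved grout formulation",      0.9, 0.8, 10.0, 2.0),
]

ranked = sorted(
    ((name, figure_of_merit(pt, pd, ds, cd)) for name, pt, pd, ds, cd in proposals),
    key=lambda item: item[1],
    reverse=True,
)
for name, fom in ranked:
    print(f"{name}: FoM = {fom:.2f}")
```

Note how the low-risk, low-cost refinement can outrank a larger-savings but riskier project; this is the short-horizon bias discussed earlier, and in a fuller application the savings term would be the discounted present worth of future savings rather than a raw cost difference.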
The depth of any benefit-cost evaluation should be commensurate with both the proposed technology's development cost and the available knowledge. Although a simple analysis may suffice initially, a more refined, sophisticated analysis may be needed later on. In rigorous application, it is critical to determine the present worth of future savings using a well-reasoned and documented value for the discount factor or factors used. For such calculations, credible input data are needed for reliable estimates of cost, risk, and schedule. If credible data are not available, then it is not feasible to gauge the probability of project success with this approach.

Application: Review Resource Allocations and Inputs

In the present situation of technology and program reviews, a good case can be made that OST (and, indeed, DOE-EM) is subject to burdensome reviews. This does not mean that there should not be reviews. However, an excessive number of reviews consumes the time and energy of OST staff; the number should be reduced. This could be done by appropriately combining some reviews and deleting others, based on an analysis of the objectives to be served by these reviews. In addition, there does not appear to be a need for a large attendance at the reviews. Careful selection of key reviewers, both OST staff and outside consultants, could substantially reduce the cost in time and money spent on reviews.

Although the reviews depicted in Figure 5.1 serve a variety of purposes for OST, most of them are not independent, expert, external technical "peer reviews" as defined narrowly in NRC (1997b). For this reason, even though the committee has said that there is an excess of reviews of OST, the committee suggests that, in two very important places, additional external, independent reviews can benefit the OST decision process depicted in Figure 4.1. These reviews, discussed below, are to validate the following two inputs to the decisions made at Box 1 of Figure 4.1:

1. allocation of funds among problem areas at the policy level, made by headquarters management, and
2. process of identifying baseline processes, technology contained in the baseline functional flowsheet, and the derived technology needs, developed by the sites.

These reviews are considered below in more detail.
The second can be conducted in a way to satisfy the requirements of a true peer review (NRC, 1997b) and presents an opportunity for knowledgeable OST contractors to access important information if the reviews are structured to include their participation.

External, Independent Review at the Policy Level of the Allocation of Funds

The decision about how to allocate OST funds among broad "problem areas" (e.g., Focus Areas) is a major step in the strategy to commit funds to the greatest areas of technology need for DOE-EM site cleanups. A review of the extent to which this allocation is consistent with DOE-EM strategic direction, priorities, and program goals by an independent, expert group having no stake in the allocation would be of value to top program managers. A logical body to perform this review might be a group outside OST, but familiar with the program, such as the Environmental Management Advisory Board. This board includes representatives of the public, private, and regulatory sectors and could therefore provide a particularly valuable, broad perspective on OST's program.

Peer Review of Baseline Processes and Technology Needs Identified by the Sites

The technology needs for a DOE-EM site are derived from the baseline functional flowsheets and alternatives, as recommended herein. Thus, these flowsheets are of fundamental importance to cleanup success and to decisions OST makes on technology RD&D funding. A peer review of the baseline functional flowsheets could serve three very useful purposes:

[FIGURE 5.1 appears here; the diagram itself is not recoverable from the scanned text. Caption: FIGURE 5.1 Diagram of OST's review program showing different types of reviews (including technology gate reviews, site needs reviews by STCGs, procurement reviews, and technical peer reviews), offices to which reviews are submitted (including EM-1, EMAB, and GAO), and the level of the organization at which reviews occur. NOTE: EM-1 = Office of Assistant Secretary for Environmental Management; ER = Office of Energy Research. SOURCE: NRC, 1998b.]

1. It could help identify those needs that could be met by technologies that are already commercially available in the private sector, in the United States or abroad.

2. It could identify opportunities to develop technologies that could significantly improve on the baseline approaches or could provide a fallback option in case there is some risk that the baseline process will not work as expected.3 Availability of technical options that can provide adequate performance at substantially reduced cost could be particularly important if constrained funding threatens the ability to proceed with timely cleanup using the baseline technology (NRC, 1996b).

3. It could bring a broad range of technical expertise and experience to bear on difficult or presumed intractable technical problems.

OST's access to the results of such peer reviews could enhance its interaction with its customers. Indeed, OST's technical contractors at other sites might be considered as peer reviewers of a site baseline functional flowsheet, if their technical expertise, external viewpoint, independence from the work being reviewed, and objectivity can all be established.

SUMMARY AND CONCLUSIONS

GENERAL ISSUES

The OST program structure of FY 1997-1998, in principle, can perform all essential functions of RD&D decision making. However, improvements are needed to establish a formal, documented, and more quantifiable decision-making process.
Improvements to the current program organization can come by instituting

• structured and effective information flow between OST program units;
• documentation of the bases for discretionary allocations (e.g., funding decisions from headquarters that are outside the decision-making processes of the Focus Areas, Crosscutting Programs, and other OST program units described in Appendix E);
• appropriately targeted independent, external reviews (or "technical audits"), including a review of site baseline functional flowsheets generated by site problem owners and from which technology needs statements are derived; and
• support of significantly different concepts directly relevant to cleanup that may come from sources other than user-generated statements of technology needs. One common way to do this (as at EPRI and GRI) is by supporting exploratory research using a fixed percentage of the program budget.

3The EMAB Technology Development and Transfer Committee identified the need for improvement and expansion of the technology development process to include technologies that could improve on the baseline (Berkey, 1997). Along the same lines, the NRC Committee on Buried and Tank Waste, in its review of the Environmental Impact Statement for the Hanford Tank Waste Remediation System, concluded: "Backup approaches [to the baseline] are needed because the technologies projected to meet current requirements might not work or might cost far more than anticipated. . . . DOE should develop fallback options and promising alternatives that might achieve most of the projected benefits of current options at a substantially reduced cost" (NRC, 1996c, p. 48).

USE OF A FORMAL DECISION-MAKING PROCESS: EVALUATIONS OF DOE-EM'S IMPLEMENTATION OF THE PROCESS STEPS OF FIGURE 4.1

This section provides a step-by-step evaluation of the decision processes presented to the committee during its information-gathering meetings, following the box-by-box diagram of Figure 4.1. A summary of the information presented during these meetings is included in Appendixes B-E.

User Program Funding (Boxes 1 and 2)

Stated briefly and somewhat oversimplified, budget requests from each user program (e.g., programs at various DOE sites) are compiled based on general guidance from the upper levels of DOE management. The budget requests are then aggregated at DOE headquarters. Some further adjustments among the major components of DOE typically occur at this stage, based on judgments from the Office of the Secretary. Such adjustments are frequently required to meet overall budget goals. This compiled request goes to the OMB. The OMB may make or recommend further adjustments in respect to both the overall DOE budget request and the balance among major DOE organizations. The OMB does not usually recommend adjustment of the components of user programs, although new initiatives are an exception to this. The administration then submits the budget request for consideration by Congress via the applicable congressional appropriations committee. Congress may approve the budget as requested or, more commonly, make further changes based on the input of the appropriations committee. These adjustments are subject to some dialogue at each stage between the adjusting body and the relevant management levels in DOE. The budget eventually passed by Congress is then made available to DOE, which can issue authorizations for expenditures and make changes. Sometimes, portions are withheld at the discretion of the administration, subject to allocation later in the fiscal year.
Most of the budget request and approval process outlined above occurs two fiscal years before expenditures are authorized and implemented. If budgets are reasonably stable (i.e., predictable within an uncertainty of approximately 5 to 10 percent), the two-year advance pattern has the advantage that orderly planning and execution of programs is possible on a time scale of two or more years. However, this is often not the case. In particular, there are instances in which the budget for the next fiscal year is substantially different from the budget for the fiscal year after that. In addition, events occurring during a year can cause out-of-cycle budget changes (usually decreases). One example is the extensive flooding in the upper Midwest in the winter-spring of 1996-1997, which resulted in massive federal financial aid that, in turn, manifested itself as funding reductions (called rescissions) during the middle of the year. Although many natural events cannot be anticipated, budget changes shortly before or during the year of expenditure can be particularly destructive because they may redirect or terminate user projects that form the basis for the user program's technology needs. This leaves decision makers in the technology provider program with the difficult choice of either terminating a technology development project, with zero return on the investment to date, or continuing the project, with the expectation that it may be needed later. In addition, this reopens some of the decisions on the relative emphasis on different goals and measures of success of the agency, of EM, and of OST and therefore on the programs mounted to accomplish these goals. This budget process has the following important implications for the decision-making process:

- The OST budget must fit within the boundary conditions of higher-level budgets. In the present climate, this usually restricts the allowable budget to much less than what would be required to fund all of DOE-EM's science and technology needs.

Decision Making in the DOE-OST 63

- The DOE is left with a choice between two less-than-optimal alternatives: (1) plan a bottom-up budget based on Focus Area input two years before the program execution year in an environment that changes much more frequently, or (2) submit a generic budget and develop the detailed work plan in parallel with final approval of the budget. DOE has chosen the latter alternative. Given the boundary conditions and the importance of being relevant to user needs, this choice is reasonable. However, it is important to note that this means the OST budget negotiated with OMB and the U.S. Congress is independent of the detailed program formulation by the Focus Areas, which must necessarily proceed in parallel and be adjusted as the negotiations change their allocation.

- The decision-making process must allow for ongoing adjustments during program execution to accommodate both technological and budget changes.

Funding for the Technology Provider Program (Box 3)

The process for allocating funding to the technology provider programs (i.e., the program units of OST) is essentially the same as that described above for the user programs, primarily because both are part of the same DOE-EM budgetary process. Decisions on top-level OST budget requests and adjustments are apparently made by upper-level OST managers. The discretionary nature of OST program funding makes it particularly vulnerable to "directed funding" (also known as "earmarks"). These are directives communicated via congressional budget legislation and other less formal mechanisms to fund specific projects, often at specific organizations. These effectively bypass the decision-making process shown in Figure 4.1 by requiring that certain activities be funded irrespective of their merit.
Although Congress has the right and obligation to establish, terminate, or alter programs, the committee believes that its intervention in what are essentially decisions at the project level constitutes micromanagement that compromises the integrity of the decision-making process. Other high-level review committees (PCAST, 1997) have recently reached similar conclusions. The committee urges that the OST program be allowed to establish and execute a transparent, high-integrity, decision-making process that includes all of its activities at the project level, and that upper-level management and budget authorities exercise their responsibilities at the higher program and policy levels.

Funding Allocations to OST Program Units (Box 4)

In the past, the Board of Directors (BOD) considered information from the Focus Areas to determine their budgets. The committee's only information regarding BOD methodology came from discussion during a March 1997 committee meeting with Dr. Clyde Frank, then Deputy Assistant Secretary for Science and Technology. In principle, the information about alternative technology development projects to fund, and prioritizations within each Focus Area, should be available for determining each Focus Area's budget. With this information in hand, the BOD could gauge the trade-offs that have to be made when cutting or increasing funding levels in a given Focus Area. During the BOD allocation process, this information appears to have been either unavailable or unused at the level of detail necessary to make sufficiently informed choices. As a consequence, the BOD appears to have made allocation decisions

largely based on very general considerations and trade-offs, and not on specific priorities of projects or technology needs. Adjustments to these funding levels are made in the Program Execution Guidance (PEG) annual review meeting held at the end of each fiscal year. This is an internal review by OST headquarters management of all major OST program units (e.g., Focus Areas and Crosscutting Programs), based on presentations by the units' senior program managers. The value of the total OST budget for the next fiscal year that is received from the congressional appropriations process (Box 2) is considered against each program unit's funding request, to adjust each program unit's budget target for the upcoming fiscal year. The scopes of work are reviewed to explore cost-cutting opportunities and the relative worth of various projects. In FY 1998, this budget allocation step was subjected to an analysis provided by a decision support tool, the new prioritization process described in Chapter 4 and discussed next.

Evaluation of the OST Headquarters Prioritization Process

It was not possible for the committee to fully evaluate OST's 1998 headquarters process (Barainca, 1998; DOE, 1997i; Walker, 1997) for prioritizing work packages4 at the headquarters level because this process was not yet complete. However, based on the information presented in Chapter 4, the committee has a number of concerns about the proposed process that are offered below to provide guidance for the final stages of its development. There are inherent problems with prioritizing work packages composed of multiple needs or projects. In principle, the projects in a work package have differing cost, risk, and benefit attributes related to enabling or improving resolution of different DOE-EM problems.
As a consequence, low-priority projects could be carried along with others having a high priority, or high-priority projects might not be funded because of a preponderance of low-priority projects in a work package. The purpose of the process of prioritizing work packages was not stated and was not evident. The only potential purpose the committee could identify is to provide a basis for deciding which work should be terminated if the budget is cut. However, as noted immediately above, termination of a work package eliminates multiple projects, each of which has various priorities and levels of merit. For example, one work package involves developing waste retrieval technology for three sites, where the risks and priorities are likely to be different. Further, elimination of a work package could render an entire process infeasible. For example, if a retrieval technology is unavailable, this may preclude subsequent operations such as waste processing, immobilization, and disposal. Strict application of the prioritized list is likely to have unintended dysfunctional results that can be avoided only by discussing the priorities on a project-by-project basis with organizations at the sites (e.g., Focus Areas and problem owners). This leads the committee to question the value of the aggregated prioritization. The database required to support a valid prioritization is not yet in place, a fact that was clearly stated by DOE. The committee believes it likely that the uncertainties in risk and cost data, when coupled with instability of the site-driven needs, will result in the uncertainties in the database being so great that the utility of the results is highly questionable. These uncertainties result from both inaccurate information (e.g., inability to estimate cost savings in the distant future) and differing judgments on qualitative criteria such as "high visibility."

4Work packages are aggregates of one or more technology needs.
Insofar as ongoing projects for already solicited work are in progress in any given year, needs identified in a previous budget year are represented by technology development projects in progress. Any work package can therefore have a combination of new starts (representing new needs) and work in progress (relevant to previously identified needs).
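The aggregation concern raised above can be made concrete with a small sketch (the package contents and priority scores below are hypothetical): ranking at the work-package level, for example by each package's average project priority, can place a program's highest-priority project in the lowest-ranked package.

```python
# Hypothetical sketch: prioritizing at the work-package level can mask
# the priorities of the individual projects rolled up inside each package.

def package_priority(projects):
    """Score a package by the average priority of its projects (0-10 scale)."""
    return sum(p["priority"] for p in projects) / len(projects)

packages = {
    # One critical retrieval project bundled with two marginal ones.
    "tank-waste-retrieval": [
        {"name": "sluicing-demo", "priority": 9},
        {"name": "sensor-upgrade", "priority": 2},
        {"name": "data-archive", "priority": 1},
    ],
    # A package of uniformly moderate projects.
    "soil-characterization": [
        {"name": "cone-penetrometer", "priority": 5},
        {"name": "field-screening", "priority": 5},
    ],
}

ranked = sorted(packages, key=lambda k: package_priority(packages[k]), reverse=True)
# The package holding the priority-9 project ranks *below* the moderate
# package (average 4.0 vs. 5.0), so a budget cut applied in package rank
# order would terminate the program's highest-priority project.
```

This is exactly the situation the committee describes: a cut made in package rank order eliminates the high-priority sluicing demonstration along with the marginal work bundled around it.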

The committee was unable to determine how priorities imposed by OST would be reconciled with user-driven priorities established by site organizations such as STCGs and Focus Areas in the event that there are differences. The committee did not have enough information on this evolving prioritization process to reach firm conclusions about it. However, the committee believes that OST should evaluate its current approach in the context of the above concerns. A peer review of the proposed prioritization approach, once it is fully formulated, could be valuable.

User or Site Program Planning (Box 5)

As discussed earlier, the technology user programs develop baseline functional flowsheets (and, hopefully, alternatives), from which technology needs are derived. The lack of formal, substantive OST involvement in this planning activity implies that OST cannot now influence the technical decisions from which are derived the technology development needs that OST gathers at each major DOE-EM site. This is a substantial deficiency in overall program efficiency.

Identification and Prioritization of Technology Needs (Box 6)

OST accomplished the steps in Box 6 using Site Technology Coordination Groups (STCGs), which are discussed in Chapter 4 and Appendix B. In the 1994 EM-50 program reorganization, the STCGs were formed to strengthen the tie to the customer. The STCG's mission was to document and prioritize technology development needs at a DOE-EM site by using input from site managers and operating contractors for EM-30, 40, and 60. Each STCG forwarded a prioritized list of site needs annually to the Focus Areas, which gathered multiple site needs, established a national prioritization, and requested funding for work relevant to the highest-priority needs.
In principle, the Focus Areas owe the STCGs a response for how each need is addressed in the suite of technology development projects selected for funding, but the committee was told informally that this feedback was often missing. Although the STCG processes all differ, they have similar strengths and weaknesses. One strength is that the resulting lists of priority needs reflect a site's priorities both within a particular topical area (e.g., mixed waste, tank waste, subsurface contamination, or decontamination and decommissioning) and across topical areas (e.g., all site needs in EM technology development can be rank-ordered based on the STCG process). A possible weakness is that the technical needs of the site problem owners may not be accurately communicated, since the needs statements are often by design recast as more general descriptors than those received from the site contractors who own the problems (the user community). All STCGs use a structured and documented process to identify needs. Typically, this process involves a series of meetings that generate annotated lists of technology needs and involve discussions and prioritization of these needs. A clear strength of a structured and documented process is that it provides an accounting trail for subsequent decision making and external review. It also allows for both technical input (at the subgroup level) and stakeholder input (at the higher STCG management level). All STCGs attempt to use a formal prioritization methodology. Typically, these formal methodologies consist of defining sets of prioritization criteria, scoring technology needs on these criteria, weighting the criteria, and calculating some weighted average to indicate the overall priority of the technology needs.
The strengths of these formal methodologies are that they provide a consistent framework for prioritization and provide an accounting trail and defense for prioritized lists of needs to be reviewed in higher-level decisions. However, the committee observed several weaknesses in the use of

these formal methodologies. Many STCG criteria were poorly defined (with short phrases that were subject to different interpretations in the absence of more expanded descriptors) or were redundant (e.g., criteria of risk reduction and of safety benefit are somewhat redundant). Some methodologies used weights that could exclude essential needs. It is possible, for example, to exclude an essential need such as safety by giving it so little weight that it does not affect the selection of a technology. As a result, the committee concludes that many of the prioritization criteria fall short of being truly effective and rigorous. Further, they differ from site to site, making it difficult to ensure consistency of results across sites. In addition to being a resource to gather site needs, STCGs may suggest a demonstration site for a developed technology and provide nonbinding endorsements of technologies or proposals (e.g., the Hanford Tank Initiative was endorsed by the Hanford STCG) to OST management. In these efforts, the STCG facilitates the demonstration and implementation of technologies at its site, thereby serving as an advocate for that site in promoting constructive technology development activities, with both OST and the site perceived as winners credited with any successful venture. In the implementation of this approach, some STCGs were slow in developing structured operating procedures, with the result that early compilations of needs (see, for example, DOE, 1994b; 1995g) were sometimes less well prepared than desirable; later compilations were substantially improved. Over time, the needs template used by the STCGs was standardized across all sites (in the summer of 1996), which provided a more uniform basis upon which Focus Areas could base their technology selection decisions.
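The weighted-average methodology the STCGs use, and the weakness of weights that can swamp an essential criterion such as safety, can be sketched as follows. The criteria names and weights below are illustrative inventions, not those of any actual STCG:

```python
# Illustrative sketch of an STCG-style weighted-average prioritization.
# Each technology need is scored 0-10 on each criterion; overall priority
# is the weight-averaged score. Criteria and weights are hypothetical.

weights = {"cost_savings": 0.5, "schedule": 0.3, "mobility": 0.15, "safety": 0.05}

def priority(scores, weights):
    """Weighted average of criterion scores (weights sum to 1)."""
    return sum(weights[c] * scores[c] for c in weights)

# A need scoring 0 on safety but well elsewhere still ranks highly,
# because safety carries only 5 percent of the total weight.
unsafe_need = {"cost_savings": 9, "schedule": 8, "mobility": 7, "safety": 0}
safe_need   = {"cost_savings": 6, "schedule": 6, "mobility": 6, "safety": 10}

print(priority(unsafe_need, weights))  # ~7.95: outranks the safe need
print(priority(safe_need, weights))    # ~6.2
```

A simple remedy, consistent with the threshold-criteria idea raised elsewhere in this chapter, is to reject any need scoring below a floor on an essential criterion before the weighted average is computed at all.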
Some Focus Areas (e.g., the Mixed Waste Focus Area) used other available resource material (e.g., Mixed Waste Site Treatment Plans developed by EM-30) to establish their own assessment of site-wide and national needs. The planning efforts associated with Accelerating Cleanup: Paths to Closure (DOE, 1998a) also generated data relevant to the definition of site technology needs. OST has funded the STCGs in the past in order to collect and prioritize DOE-EM site needs. Whether or not this form of collecting site needs will persist, some mechanism to solicit technology development needs from users is necessary, because this function (as depicted by Box 6 of Figure 4.1) is necessary to learn the needs of users. If the STCGs are truly useful to the sites, as was the committee's impression at the Hanford site, then perhaps STCG activities can be cofunded by the sites, with the degree of cofunding serving as a way to measure OST's effectiveness in servicing the needs of that site.

Identifying Technology Needs by Characterizing Problem Boundaries and Opportunities

Another OST effort at identifying technology needs was to fund projects whose purposes were to define problem boundaries and technology opportunities. These activities include the Mixed Waste Focus Area's Waste Form Initiative, Thermal and Nonthermal Mixed Waste Treatment Studies, and the HTI. These activities, described below, have not been evaluated in detail by the committee, but are mentioned for completeness to exhibit another way for OST to identify technology needs other than by soliciting requests from the problem owners and technology users of other EM offices. The Waste Form Initiative of 1997-1998 was a scoping study of the use of existing waste forms and treatment options for the inventory of DOE-EM's mixed waste. The purpose of the study is to determine whether any new waste forms (and associated treatment processes) should be developed in order to properly dispose of the existing inventory.
The tentative conclusion of the study is that less than 100 cubic meters, a small fraction of the total inventory, lacks a treatment and disposal option, a result that argues against a large-scale effort to develop new waste forms and treatment processes. The thermal and nonthermal treatment studies were conducted to compare the total life-cycle costs of competing technical approaches to dealing with DOE mixed wastes. One of the conclusions is that nonthermal treatment facilities, although devoid of the pejorative connotations associated with incineration, cost at least $1 billion more in total life-cycle cost than thermal systems.

The 1997 HTI is an effort to gather technical information on two tanks to inform tank closure specifications. The HTI was endorsed by the Hanford STCG and funded directly from headquarters. It is jointly funded by OST and EM-30.

Aggregation and Prioritization of Technology Needs for Each OST Program (Box 7)

This step was accomplished by OST program units such as Focus Areas and Crosscutting Programs, as described in more detail in Chapter 4 and Appendixes C and D. These program units receive needs from the STCGs, reprioritize them at a national level, and compare the resulting priority needs with the technical products of ongoing technology development project work in order to identify the needs that merit further attention. A lack of adequate information or a lack of technical knowledge may permit individual Focus Areas to inappropriately link new needs defined by the STCGs with existing OST technology portfolios. The consequences are either that a need is misunderstood or that a technology's application to site needs is misrepresented. The needs statements that the STCGs create contain technical specifications to capture or represent site needs at a reasonable level of specificity. The Focus Areas and other OST program units combine several of these needs statements into work packages, each with one or more technology projects contained within it. This "roll-up" process can be advantageous if one technology can be developed to be flexible enough to meet more than one need. However, this process of representing the program by a suite of work packages can be disadvantageous if important information is lost in the conversion of the site need into more general representative language.
That is, the method has a potential weakness if a technology was developed and designed to meet a need that is generally stated, but would have limited applicability to the actual specific circumstances of the individual site needs on which the more general statement was based. OST faces these and other RD&D-related challenges. The solution to this problem is close interaction with the ultimate users, in a way that is reflected in decision making. Chapter 4 describes the process used by OST to collect and prioritize needs when the problem owner is the community of users within EM-30, EM-40, or EM-60 offices at DOE-EM sites and a Focus Area is the provider program, as shown in Figure 4.1. This figure can also be used to show the user-provider dual program office interaction that occurs within OST for "technology transfer" at more basic phases of RD&D. Focus Areas and Crosscutting Programs can be user programs, receiving technology development projects begun by separately funded OST programs such as the Industry Program, the University Program, the TISA-International Program, and the DOE-EM EMSP. This last program performs mission-directed science, for which the "needs" are appropriately stated at a much more general level. Table 5.1 depicts the relationships among EM programs, and Appendix E summarizes the means by which needs are identified and prioritized and projects developed in these other OST program units. Greater transparency of the needs identification and prioritization processes in these programs is desirable, and greater uniformity across FAs is needed.

Funding Decisions (Box 8), Soliciting RD&D Proposals (Box 9), and Performing Technology Development Projects (Box 10)

These steps are discussed in Appendixes C-E for each OST Focus Area and Crosscutting Program and for the other OST programs that fund project work, namely the Industry Program, the TISA International Program, and the ASTD program.

TABLE 5.1 Program Office Combinations That Relate as User-Provider Pairs Shown in Figure 4.1

User                                                Provider
Problem-holding site manager in EM-30, 40, or 60    Focus Area; Crosscutting Program
Focus Area                                          Industry Program; University Program; TISA International Program; EMSP
Crosscutting Program                                Industry Program; University Program; TISA International Program; EMSP

Pragmatic Aspects of Funding Decisions Based on Prioritized Needs (Box 8)

Although funding high-priority needs until resources are exhausted is highly desirable, there are factors that complicate this straightforward approach. Three of these complicating factors are described below. They support the general truth that allocation of resources to the highest-priority needs may not lead to optimum results (see, for example, Levary and Seitz, 1990). First, because most projects are multiyear, the technology provider program unit will be considering funding a mix of ongoing projects to meet previous needs and new needs. Thus, the decision-making process must include provisions to determine the relative value of the ongoing projects, based on the results of project monitoring and evaluation, and compare this to the value of meeting a new need. Termination of an ongoing project usually results in the total loss of the investment to date, and for this reason, ongoing projects are typically assigned a high priority and continued unless there is good reason for termination. Valid reasons for terminating an ongoing project include major revision in the importance of the underlying need, a review that indicates a project is unlikely to produce results that will eventually lead to deployment, and severe budget reductions that require terminating lower-priority ongoing projects to fund new, higher-priority needs. A second complicating factor is that distribution of funding to needs in priority order requires each aggregated need to have an associated budget.
For ongoing projects, this budget normally exists. However, for new needs the technology provider program unit must typically estimate a budget in anticipation of specific proposals. Although this frequently requires subsequent adjustments, the problems are generally not too severe because multiyear projects usually begin with a relatively small rate of expenditure, which allows adjustments to be made in future years based on fairly reliable budget estimates. The third complicating factor is the federal practice of essentially continuous review and revision of budgets and budget decisions. There is a legitimate need for higher-level management to review the performance of the technology provider program unit and ensure that its activities conform to policy goals. However, this has evolved into directives concerning what should or should not be funded at a level of specificity that constitutes micromanagement (PCAST, 1997), coupled with budgetary issues such as withholding promised budgets until late in the year or midyear budget adjustments (usually rescissions). These result in increased program overhead and decreased efficiency of RD&D projects that are abruptly started, suspended, terminated, or redirected. The intervention of headquarters management in program decision making should be reserved for policy issues in general. Further exacerbating this is recent

congressional pressure to reduce funding carried over into future fiscal years. Although there are apparently some good reasons for this restriction, the practice can be detrimental to the conduct of multiyear projects, especially those that are funded late in a fiscal year, when a federal budget does not exist early in a new fiscal year, or when a project must make major commitments early in a fiscal year. The committee believes that congressional concern could be largely addressed, while unburdening ongoing projects, by restricting carryover at the program level but not at the project level.

Pragmatic Aspects of Proposal Solicitation and Provider Selection (Boxes 9 and 10)

The committee believes that the solicitation of technology development proposals should be open and competitive to the extent that multiple proposals are realistically possible. At one end of the spectrum, science needs should be met by proposals obtained through a widely advertised, open solicitation, as is already being done in the EMSP. At the other end of the spectrum, demonstration projects are often tied so closely to specific site applications (e.g., tanks and waste burial sites) and processes that have been winnowed through the multistage RD&D process to a point at which only one choice is possible. In this case, an open solicitation is misleading and wasteful, and should be avoided. Intermediate situations may call for intermediate solutions. In evaluations such as those described in Chapter 4 and Appendixes B-E, in which a small team of individuals apply their collective judgment, the experience and knowledge of team members is important to the quality of their decision. Therefore, an effort to include individuals with this experience and knowledge base is a way to improve these processes.
Project Monitoring and Evaluation (Box 11)

OST has implemented (DOE, 1996o) a stage-and-gate system (Paladino and Fox, undated; Paladino and Longworth, 1995) to track technology development projects through various stages of development (see Figure 4.2). The purpose of this "stage-and-gate" tracking tool is to provide a disciplined process for assessing and managing the performance of evolving environmental cleanup projects. A project may go through as many as all seven of the stages listed below:

Stage 1: basic research
Stage 2: applied research
Stage 3: exploratory development
Stage 4: advanced development
Stage 5: engineering development
Stage 6: demonstration or pilot operations
Stage 7: full-scale implementation

Each stage has goals, objectives, and measures of effectiveness. For example, in stage 1 the goal is to generate new ideas, an objective is to identify new environmental technology, and measures of effectiveness include technology end user need, technical merit, costs, and safety. There are gates between adjacent stages. A project at stage x must pass through gate x before it can go to stage x + 1. There are three major actions that may be taken at a gate: (1) go forward, (2) hold for specific action, or (3) stop. Gates 2 and 4 are major decision points for OST technology development projects, because later-stage work is typically more expensive to fund than work at earlier stages. At these gates, the Focus Area or Crosscutting Program leadership convenes a review panel to rate (grade and score) specific technologies based on the requirements and criteria established for that gate. The resulting evaluation is forwarded to a review group for concurrence before the project can proceed to the next stage. The committee suggests the following four ways in which the stage-and-gate project managing and tracking process might be clarified and improved.

Suggestion 1. There should be an explicit "entry" or "validation" decision, made during project solicitation and selection, to confirm that a project is sufficiently meritorious to warrant development and tracking. This is important because once a project is in the system, it has a high probability of advancing through the process, at least to gate 4. Although the stage-and-gate process is part of a structured decision-making approach, it should be viewed primarily as a tracking tool. It does not provide information about the initial decision as to what technologies should enter the process for tracking during development. Awareness of the spectrum of technologies available, both external and internal to DOE, should be a requirement for a "make-or-buy" decision on whether a technology need statement should be converted into a technology development project. There did not appear to the committee to be a formal approach in place at all times (see examples in Appendix C) to review existing technologies or technologies already under development and to determine if there was value in developing similar and/or improved technologies. The Defense Nuclear Facilities Safety Board (DNFSB, 1998) similarly recognized that an increased awareness of technology state of the art and state of practice would improve these decisions.
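The gated progression described above can be sketched as a minimal state machine. This is a reading of the process description, not OST's actual tracking system, and the class and method names are hypothetical:

```python
# Minimal sketch of the seven-stage, gated tracking model. Gate decisions
# ("go", "hold", "stop") are applied between adjacent stages; the stage
# numbering follows the report, but the code itself is illustrative.

STAGES = {
    1: "basic research",
    2: "applied research",
    3: "exploratory development",
    4: "advanced development",
    5: "engineering development",
    6: "demonstration or pilot operations",
    7: "full-scale implementation",
}

class Project:
    def __init__(self, name, stage=1):
        self.name, self.stage, self.status = name, stage, "active"

    def gate_review(self, decision):
        """Apply a gate decision; a project at stage x passes gate x to reach x + 1."""
        if decision == "go" and self.stage < 7:
            self.stage += 1
        elif decision == "hold":
            self.status = "on hold"
        elif decision == "stop":
            self.status = "terminated"

p = Project("in-tank-retrieval-demo", stage=5)
p.gate_review("go")    # passes gate 5 into demonstration
assert STAGES[p.stage] == "demonstration or pilot operations"
p.gate_review("hold")  # a gate 6 review places the project on hold
assert p.status == "on hold"
```

Note that gate 6 and stage 7, the deployment end of this chain, are the steps the report identifies as lying outside OST's direct control.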
Tools such as existing available technology listings5 developed and demonstrated by industry and other countries (e.g., the Organization for Economic Cooperation and Development) can be helpful, although they are limited by the specific nature of the technology needs and by timely awareness of them by the potential user. The decision as to what projects are selected is aided by input from a motivated, skilled work force, as exemplified by the Tanks Focus Area's technical team evaluations and the Mixed Waste Focus Area's waste type manager teams.

Suggestion 2. The rating and scoring systems used at gates 2 and 4 have flaws that should be corrected. The pilot reviews (DOE, 1996o) at gate 4 done on two projects used criteria grouped into the following six broad categories:

1. technology need (170 points);
2. technology merit (170 points);
3. cost (190 points);
4. safety, environmental protection, and risk (150 points);
5. stakeholder issues (160 points); and
6. commercial viability (160 points).

The numbers in parentheses represent the maximum number of points out of 1,000 that a project can obtain in the given category. Reviewers of a technology project provide ratings on three to five relevant criteria within each category to derive an (unweighted) average rating. The score for the category equals this average rating multiplied by the maximum number of points for the category. The score for the project is the sum of its category scores.

5Two Focus Areas (the Subsurface Contaminants Focus Area and the Decontamination and Decommissioning Focus Area) have tasked contractors to develop such databases.
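The arithmetic of this scoring scheme can be sketched as follows. The 0-10 rating scale is taken from the pilot review, but normalizing the average rating against the category maximum is an assumption made here so that totals stay within 1,000 points:

```python
# Sketch of the gate-4 pilot scoring scheme: reviewers rate 3-5 criteria
# per category on a 0-10 scale; the category score is the (normalized)
# average rating times the category's maximum points, and the project
# score is the sum over categories. The /10 normalization is an assumption.

MAX_POINTS = {
    "technology need": 170,
    "technology merit": 170,
    "cost": 190,
    "safety/environment/risk": 150,
    "stakeholder issues": 160,
    "commercial viability": 160,
}  # category maxima sum to 1,000

def project_score(ratings):
    """ratings: category -> list of 0-10 criterion ratings."""
    return sum((sum(rs) / len(rs)) / 10 * MAX_POINTS[cat]
               for cat, rs in ratings.items())

# Zero on every need and safety criterion, perfect everywhere else:
ratings = {
    "technology need": [0, 0, 0],
    "technology merit": [10, 10, 10],
    "cost": [10, 10, 10],
    "safety/environment/risk": [0, 0, 0],
    "stakeholder issues": [10, 10, 10],
    "commercial viability": [10, 10, 10],
}
print(project_score(ratings))  # 680.0

# A minimum-score floor on key categories would catch such a project:
def passes_thresholds(ratings, floors=None):
    floors = floors or {"technology need": 5, "safety/environment/risk": 5}
    return all(sum(ratings[c]) / len(ratings[c]) >= f for c, f in floors.items())

print(passes_thresholds(ratings))  # False
```

Under these assumptions a project rated zero on every need and safety criterion still earns 680 of 1,000 points, while a simple threshold check on the same inputs rejects it.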

This scoring process seems both unnecessarily complicated and flawed in concept. For example, a project could have scores of 0 in both the need and the safety categories and still receive 680 points. One of the two projects in the pilot review had a score of 654 out of 1,000 and received a go recommendation. The second project had a score of 528 out of 1,000 and was put on hold. The "go" project received a rating of 2 of 10 on the criterion of "end user performance requirements incorporated and implementation issues defined." This low rating deducted only 28 points from the possible score of 1,000, yet it reveals a significant weakness of the project with respect to OST's major difficulty: deployment by the end user. These examples demonstrate that if the structure or model used for the quantification procedure is incomplete or inaccurate, the results can be misleading or in error. More emphasis must be placed on low ratings in important categories. Specifically, unless a project receives at least specified minimum scores in key categories (i.e., threshold criteria), it should not be given further consideration.

Suggestion 3. Gate 6 and stage 7 should be given much greater attention. OST controls all of the stages and gates except these. However, the overall goal (and implied performance measure for OST) is for its projects to reach stage 7 and be deployed. Perhaps one benefit from the formalization of a technology decision process is that it shows OST's need to exercise influence on gate 6 and stage 7.

Suggestion 4. OST's implementation of a stage-and-gate tracking tool should be simplified to the extent possible by reducing the number of gates at which projects are formally evaluated by personnel outside a particular OST program unit. For example, although the stage-and-gate model has two research stages, only one review might be needed at a project's research stage.
Similarly, the three stages and gates at development junctures could be combined into only one review point.

Deployment (Box 12)

Deployment of OST-developed technology is an important measure of performance and is necessary if OST is to contribute to the technical capability applied to DOE-EM cleanups. However, in the current DOE-EM organization, deployments are conducted by the other EM offices with cleanup responsibilities, as Figure 4.1 illustrates. This technology "hand off" to a different program office makes the deployment step susceptible to programmatic issues (e.g., budget shortfalls, changes in cleanup priorities or preferred technology, or changes in schedule), not just issues of technical viability. Any expanded OST role in deployment diverts its resources from the development of technologies. Deployment of an OST-developed technology is thus outside OST's direct control. These decisions are made by site managers at DOE sites within other EM offices who own the remediation or waste management problems to which the developed technology is intended to be applied. It is evident in retrospect that having representatives of other EM offices participate in OST programs has not been sufficient to result in an adequate level of deployment. However, progress has been made with the establishment of the Focus Areas and STCGs, and still more progress in stimulating deployment has been attempted with the recent establishment of the ASTD program and the Technology Acceleration Committee, composed of upper-level DOE-EM management. This area could be improved still further by closer, more frequent discussions among Focus Areas, STCGs, and technical staff at the sites. Deployment can be enhanced with (1) well-defined jurisdictional boundaries between OST and other DOE-EM offices and (2) improvements in the quality and/or frequency of interaction between OST technology developers and DOE-EM personnel knowledgeable about site activities and problems.
On the first point, the role of other EM offices should be to provide OST with technology needs and deployment

opportunities, while the role of OST should be to develop appropriate technology in view of these user inputs.

SUMMARY AND CONCLUSIONS

SPECIFIC ISSUES

The decision-making process diagrammed in Figure 4.1 is a useful way to describe OST's achievements and interactions with other organizations. The remarks below summarize the box-by-box description of how OST accomplished these process steps in FY 1997-1998.

The source of funds and the restrictions placed on some funds are beyond OST's direct control and are therefore constraints upon the program. Budget allocations to OST program units are made by OST headquarters management.

Site program planning is outside OST's purview, yet determines technology needs. OST has employed STCGs to gather and prioritize these user-specified site needs. OST has also funded studies to characterize problem boundaries as a way to identify technology needs.

Provider programs (such as Focus Areas, in instances where user programs are the site programs funded by other EM offices) are where key decisions are made using two inputs: (1) a prioritized list of needs and (2) available funds, each of which is subject to change over time.

OST's implementation of the stage-and-gate tracking tool can be improved.

Deployment of an OST-developed technology is outside OST's direct control. These decisions are made by site managers at DOE sites within other EM offices who own the remediation or waste management problems to which the developed technology is intended to be applied.

Chapter 6 presents findings and recommendations based on both the general considerations and the more specific evaluations in this chapter.