Evaluating Alternatives for Landside Transport of Ocean Containers (2015)

Chapter 4 - Proposed Evaluation Method

Overview

Objectives

The research team developed the following specific objectives for evaluation method development:

• The process must consider the goal being pursued and the scope and nature of the decision being made.
• The method should guide evaluations of potential alternative container transport systems—both in the abstract and in specific port and terminal applications. Different technologies may be evaluated, but the method should be system-oriented.
• The method should be usable by a wide variety of organizationally and geographically diverse project sponsors. It must specifically account for the goals or objectives of the decisionmaker.
• The process must enable users to balance economic, technical, environmental, and social factors.
• The method should make it easy to (1) identify and quantify the impact of inevitable project tradeoffs and (2) perform sensitivity analyses.
• The process should accommodate the widest reasonable range of alternatives.
• The evaluation method must consider the state of development and information available for those alternatives.
• The process should eliminate infeasible or unresponsive proposals rapidly and efficiently. The early identification of "fatal flaws" is very important.
• The process should be efficient, providing for the rapid and easy screening of alternatives to focus most of the effort and resources on the most promising proposals.
• The method should recognize that the no-project scenario is not static.
• The process must consider uncertainty and risk.
• The process must be transparent to the users.

All the approaches considered in this study share some steps:

• Setting goals. The purpose may be expressed as a problem to be solved, specific realistic objectives to be achieved, or an advantageous direction in which to progress. Clarity in this step is critical.
• Selecting criteria. The list of criteria can range from a conceptual "wish list" to a set of weighted performance criteria with technical metrics. Given the current (2014) state of technological readiness and system development, evaluation of advanced fixed-guideway systems is largely at the "wish list" stage.
• Analyzing proposals. The nature and depth of the analysis varies from pass/fail screening against minimum criteria to sophisticated monetization of otherwise disparate performance factors.
• Evaluating and choosing. The rigor and depth of the selection process should be matched to the decision being made, with comparative rankings being suitable for early-stage research support and extensive quantification being required for multi-billion-dollar construction commitments.

Potential Users

Many stakeholders are involved with landside transport of ocean containers. All could be involved in evaluation of emerging container transport alternatives. Important stakeholders include

• Container port authorities and planners, here and abroad.
• Motor carriers, ocean carriers, terminal operators, importers, exporters, and other container shipping industry participants.
• National, state, and regional transportation planners and officials.
• Local, state, and regional environmental agencies.
• Community interest and environmental organizations.
• Technology developers and advocates.
• Private investors.
• Researchers at national laboratories and higher-education institutions.

Scope and Purpose of Decisions

A key finding regarding evaluation methods for container transport systems is that "one size does not fit all." The resulting evaluation method should be adaptable to specific circumstances. Effective alignment of the project's purpose, scope, goals, and evaluation process is a critical success factor.

The research team identified four generic types of decisions that might be made regarding alternative container transport systems:

• Support for research and development.
• Readiness for incorporation or anticipation in other projects.
• Funding for demonstrations or pilot projects.
• Commitment to construction and operation.

The general scale of financial commitment rises roughly a thousand-fold, from thousands of dollars in research support to millions of dollars for demonstrations to billions of dollars for construction. The complexity and rigor of the evaluation method should rise accordingly.

Research and development efforts might be supported by government grants, port authority funds, a private-sector institution, or private investors. In each case, the evaluators are typically looking for promising concepts or ideas, rather than finished solutions or implementable systems. The goals and criteria are likely to be general. The evaluations are likely to focus on screening or qualification for funding rather than on demonstrated performance.

EIRs and regional planning efforts typically require examination of alternatives or consideration of new technologies. Analysts and planners must, therefore, evaluate the readiness and applicability of container transport alternatives, even if funds are not being committed at that time. The I-710 Alternatives Analysis was undertaken for this reason.

Although funding requirements for demonstration projects or pilot installations are usually much higher than early-stage research and development requirements, the same funding and investment sources may be involved. The expectation, however, is that candidate proposals for demonstration funding will have shown sufficient promise in the research and development phase to be considered. Criteria are likely to be more stringent and quantified and include factors such as the potential for commercial success and integration with existing facilities.

Capital investment in system construction and operations is likely to be a multi-billion-dollar decision with far-reaching operational and environmental consequences. In addition to technical, economic, and financial analysis, implementation of an alternative container transport system would almost certainly require a NEPA evaluation and a full-scale EIR. There would be an exhaustive list of formal, quantitative criteria. The evaluation, whether a choice between competing proposals or a go/no-go decision on a chosen system design, would require substantial time and resources.

There are other reasons to develop a flexible and scalable method:

• The time scale of the decisions described above also increases from months, to years, to decades.
• As the state of the art advances for candidate technologies, the evaluation method must advance in parallel.
• Circumstances, scale, and priorities will vary among port regions.

Method Steps

The proposed generic sequence of evaluation steps is shown in Figure 4-1. The basic method is not unique to container transport systems or even to transportation. These same steps apply to any instance in which proposals must be evaluated against given criteria, against a baseline scenario, or against each other.

Figure 4-1. Evaluation method structure: Define Goals → Select & Weight Criteria → Define Baseline → Locate Potential Candidates → Assemble Screening Data → Screen → Identify Evaluation Candidates → Assemble Evaluation Data → Analyze → Evaluate → Choose Best Candidate(s).

The content and approach within the steps, however, will vary considerably depending on

• The decision to be made.
• The state of proposal development.
• The timeline for development, implementation, and project life.
• The availability and precision of information.
• The existence of a baseline or "no-project" alternative.
• The resources and time available for the evaluation.

The steps in Figure 4-1 are shown in sequence, but, in practice, the process may include steps taken in parallel (e.g., criteria selection, baseline definition, and candidate documentation). There may also be feedback loops and iterations, such as selection criteria refinements based on initial screenings or preliminary analysis. The following report sections cover the major steps shown in Figure 4-1 in greater detail.

Defining Goals

The evaluation process begins with establishing goals: what does the user want the alternative container transport technology or system to achieve? The general long-term goal of alternative container transport initiatives is to advance a TBL agenda by improving economic, environmental, and social performance over existing or evolving "no-project" alternatives.

Every review of alternative container transport technologies to date has acknowledged something akin to the overall TBL goals, but has concluded that the available technologies are not ready to achieve that goal. That long-term overall goal, therefore, should not necessarily be the operative goal for every evaluation process. Beginning with conceptual technologies low on the TRL scale, there are intermediate purposes to be achieved that require evaluation processes of their own. Table 4-1 provides examples of typical operative evaluation goals corresponding to the Technology and System Readiness levels described in Chapter 3.

At low TRLs and SRLs, such as at present, the operative goals of the evaluation process are more likely to emphasize the selection of promising technologies for further research and development or investment. Such an evaluation might be undertaken by a private or public research funding source. As technologies and systems develop, the emphasis is likely to shift to pilot projects and demonstrations, where the candidates would be evaluated on research and development results to date and implications for long-term potential. Higher on the scale, candidates would be evaluated for larger scale pilot installations progressing to full-scale implementation.

Environmental and social factors are likely to receive high priority in any urban port region. For example, most alternative container transport ideas have been proposed and evaluated in the Southern California port context, where emissions and congestion impacts are of particularly high importance. The emphasis and the goal, however, may differ in new or developing ports outside non-attainment areas or congested cities. The recently constructed Yangshan port area at Shanghai (Shanghai Shengdong International Container Terminal) is on an artificial island connected by a causeway to Shanghai (Figure 4-2). A new container port of this configuration may have minimal concerns over air quality, traffic congestion, or community impacts, but a great need for a high-capacity, energy-efficient container transport system.
TBL goals for such a facility might be satisfied by an economically and technically efficient system that did no obvious environmental or social damage.

In a less dramatic example, a port with a single container terminal or a compact cluster of terminals may be seeking a system to connect an off-terminal rail intermodal facility. The Port of Baltimore is one example. Given the cost and difficulty of building new highways or conventional rail lines in metropolitan areas, the port may be seeking alternative means of achieving a specific container transport objective.

Both the overall TBL goal and the operative objectives for the specific evaluation process have implications for the selection and weighting of criteria.

Table 4-1. Readiness levels and evaluation goals.

Level | Technology Readiness | System Readiness
9 | Line-haul technology proven by successful operation. | Transport system proven by successful operation.
8 | Line-haul technology qualified through test and demo. | Transport system qualified through test and demo.
7 | Prototype line-haul technology demonstrated in operational environment. | Prototype transport system demonstrated in operational environment.
6 | Model or prototype line-haul technology demo in relevant environment. | Model or prototype transport system demo in relevant environment.
5 | Line-haul technology component validation in real environment. | Transport system component validation in real environment.
4 | Line-haul technology component validation in lab environment. | Transport system component validation in lab environment.
3 | Analytical/experimental proof of line-haul technology concept. | Analytical/experimental proof of transport system concept.
2 | Line-haul technology concept and/or application formulated. | Transport system concept formulated.
1 | Basic line-haul technology principles observed and reported. | Basic transport system principles observed and reported.

Evaluation goals, which the table associates with groups of readiness levels: research and development support and due diligence for investment; funding of demonstration, modeling, or testing and due diligence for investment; licensing, funding, or procurement of technology or system; and funding or permission for pilot installation and due diligence for investment.

Figure 4-2. Yangshan port area at Shanghai. (Source: Google Earth)

Selecting and Weighting Criteria

There is an important difference between evaluating line-haul transport technologies and evaluating complete container transport systems built around those technologies. An early-stage evaluation goal of funding further research would require criteria for conceptual technology comparisons. An implementation goal would require a far more complete and quantified set of criteria. Although it is relatively straightforward to create and obtain agreement on a "wish list" of broad criteria for TBL improvements, it will likely become progressively more difficult to compile and agree on detailed operational, economic, and financial criteria and metrics.

Minimum Performance Requirements

The initial step may be for the user to establish the minimum performance requirements necessary to verify applicability and feasibility. These requirements should encompass all three TBL categories and should be quantified to the extent possible. These requirements will differ based on the interests of the user as well as project location. Examples of minimum requirements include

• Minimum operational requirements. For example: "The system must move 100 containers per hour in both directions between port terminal X and industrial park Y. Transit time can be no more than 2 and a half hours. The system must be operational at least 16 hours per day, 6 days a week, throughout the year."
• Maximum level of public financial participation. For example: "The public will provide 40% of the investment up to $100 million and will not provide an operating subsidy."
• Minimum environmental goals. For example: "The system will produce no more than XX PPM of particulate matter in the port containment area." For the recent exercise in Southern California, the explicit project goal was zero "tailpipe" emissions.
• Minimum social goals. These would typically relate to jobs, economic impact, or mobility.
• Minimum TRL. An alternative would be to specify that the system be available for full deployment in X years, or that only "commercial off-the-shelf" (COTS) systems will be considered.

In effect, this stage addresses the question of whether the proposed system would meet the evaluator's goal or solve the evaluator's problem if implemented and operating as proposed. A system that cannot provide enough capacity, for example, is not a candidate, regardless of cost or social/environmental advantages.

The complexity of the container port operating environment and of the technologies and systems under consideration will make it difficult for sponsoring organizations to develop a comprehensive set of minimum requirements before receiving proposals. The method, therefore, must be flexible enough to deal with issues raised during the process itself.

Systems or technologies that cannot meet minimum requirements or that would not solve the problem if implemented would be eliminated from further consideration (a simple pass/fail check of this kind is sketched below). The more developed and precise the minimum requirements are, the easier the evaluation process will be to execute.

System Performance Criteria

Assuming prospective container transport systems meet the minimum requirements, a flexible, performance-based set of evaluation criteria such as those discussed in Chapter 3 is needed to make an evaluation.
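
To make the pass/fail character of minimum performance requirements concrete, the sketch below screens a hypothetical proposal against illustrative thresholds. The threshold values and the Proposal fields are assumptions echoing the examples above, not requirements drawn from the report.

    # A minimal pass/fail screen against hypothetical minimum requirements.
    # Threshold values echo the illustrative examples in the text and would be
    # set by the evaluating organization, not by this report.
    from dataclasses import dataclass

    @dataclass
    class Proposal:
        name: str
        containers_per_hour: float      # sustained two-way throughput
        transit_time_hours: float       # terminal-to-terminal transit time
        operating_hours_per_day: float
        public_share_of_capital: float  # fraction of capital requested from public sources

    MIN_THROUGHPUT = 100        # containers per hour, both directions
    MAX_TRANSIT_TIME = 2.5      # hours
    MIN_OPERATING_HOURS = 16    # hours per day
    MAX_PUBLIC_SHARE = 0.40     # maximum public participation in capital cost

    def passes_screen(p: Proposal) -> list[str]:
        """Return the list of failed requirements; an empty list means the proposal passes."""
        failures = []
        if p.containers_per_hour < MIN_THROUGHPUT:
            failures.append("insufficient throughput")
        if p.transit_time_hours > MAX_TRANSIT_TIME:
            failures.append("transit time too long")
        if p.operating_hours_per_day < MIN_OPERATING_HOURS:
            failures.append("insufficient operating hours")
        if p.public_share_of_capital > MAX_PUBLIC_SHARE:
            failures.append("excessive public financial participation")
        return failures

    candidate = Proposal("Concept A", containers_per_hour=120, transit_time_hours=2.0,
                         operating_hours_per_day=18, public_share_of_capital=0.35)
    print(passes_screen(candidate) or "passes minimum requirements")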

Weighting Criteria

As the scope increases, the criteria and weighting factors become less qualitative and more comprehensive and quantitative. For early-stage decisions, selecting a few simple, narrow, unweighted, general criteria is most appropriate. This approach proved to be suitable in the case studies. In these circumstances, the decision criteria are effectively the same as the minimum requirements, because failure to meet the minimum requirements would eliminate the proposal, regardless of any criteria weighting scheme.

As the decision size and complexity increase, criteria will likely increase in number and require weighting to reflect relative priorities. For major, long-term infrastructure decisions, the criteria and weighting should be as thorough and precise as possible, using the latest and best capital planning management methods. In comparison with the early-stage decision, this process should be rigorous, comprehensive, monetized, and risk-adjusted. A major implementation decision that affects a port, its terminals, ocean carriers, customers, labor, the environment, and the community will need to incorporate elaborate criteria weighting that reflects the input of all stakeholders.

Unless criteria weights are specified by some outside agency (e.g., a funding source or regulatory body), they will have to be developed or adapted for the purpose. Figure 3-3 diagrammed the generic progression from technical and economic data through objective technical judgment to the crucial incorporation of social values and criteria. Transparency will be essential in developing criteria weights, as it will be throughout the entire evaluation process. The selection criteria, the weights, and the scores will eventually come together to drive the decision, and the weights can have as great an influence on the outcome as the selection of criteria or the scoring.

Several weighting methods can be employed, similar to the choices for criteria selection or scoring:

• Stakeholder surveys or polls. Carefully constructed surveys, polls, or questionnaires can be used to obtain structured opinions and feedback.
• Expert or stakeholder panels. Development of criteria weights can be delegated to a group of experts or stakeholders. There is, however, the risk of real or perceived bias and of objections by stakeholders not represented. This function is often performed by a Technical Working Group (TWG) or a Stakeholders Advisory Committee (SAC).
• Sponsor or Steering Committee. A single evaluation sponsor or a steering committee of representatives of multiple sponsors can choose the criteria weights.

Defining the Baseline

As with so many other method factors, the applicability of a baseline for comparison depends on the evaluation purpose and the decision being made. The choice of baseline also depends on the time span being considered.

Innovative container transport technologies are usually proposed as alternatives to conventional over-the-road (OTR) drayage using diesel trucks. Drayage equipment and practices are evolving rapidly under economic and environmental pressures. The truck drayage baseline has thus become a moving target, especially for the longer time horizons relevant to alternative technologies.

Ports with on-dock or near-dock intermodal rail terminals have mixed container transport systems whose realities should be reflected in defining a baseline for comparisons. The following questions are relevant:

• What is the current and forecast mix of OTR truck, truck shuttle to near-dock rail, and on-dock rail transfer?
• For which of these trip segments would the proposed alternative systems compete?
• What are the salient TBL characteristics of the competing modes or combinations?

The scope of this study covers potential systems reaching up to 100 miles from the port. Under most circumstances, trips in that distance range are made only by truck. The Virginia Inland Port at Front Royal, VA, is about 215 truck miles and 265 rail miles from the port terminal at Portsmouth, VA. The rail shuttle between the two would not, therefore, be considered a competitive target within the current study scope.

In Southern California there have been conceptual proposals for "inland ports" at Mira Loma, Palmdale, and Victorville to be linked to the Ports of Long Beach and Los Angeles by rail shuttles. As Figure 4-3 shows, these locations are within 100 miles of the ports, and such trips would be within the study scope. A long-term evaluation in that case might reasonably consider evolving conventional drayage and rail intermodal shuttles as alternatives to innovative technologies.

Figure 4-3. Southern California inland port proposal sites.

A fundamental element of most method applications is comparing new alternative systems to a truck drayage baseline. "Conventional" port truck drayage is evolving rather than static, so it is critical to compare prospective alternate systems against what highway-based drayage will become during the life cycle of the proposed project, not what highway-based drayage may be at present. A preliminary review of positive and negative long- and short-term TBL factors influencing port drayage leads to the following observations relevant to the evaluation of inland container transport systems.

On the positive side:
• Incremental technological improvements in truck drayage propulsion produce consistent short- and long-term TBL benefits. Emissions from modern tractors are a small fraction of those from legacy fleets. The introduction of hybrid, electric, or hydrogen fuel cell tractors may make zero-emissions drayage possible.
• Truck drayage enjoys the benefits associated with use of legacy capital, organizational, and institutional investments in systems, terminals, highways, and vehicles. Proposed systems relying on dedicated guideways provided by private or public/private capital may require a market return on capital. Truck drayage relies on a jointly used highway provided by public capital, for which the "return" may be defined quite differently.
• Truck drayage is scalable and flexible. Average and marginal costs are essentially identical for an additional truck trip.
• Institutional, managerial, regulatory, and systems changes to truck drayage hold the promise of reducing delays and bottlenecks, and thereby reducing emissions and cost.

On the negative side:

• Short- and long-term labor, fuel, capital, insurance, and other drayage cost factors are increasing.
• Capacity expenditures for the highway system are politically driven and uncertain in the long term. This social and economic factor is a negative, at least in the short term, given that passenger mobility is declining nationwide with increased congestion. The congestion impacts of truck drayage are therefore becoming more onerous.

Table 4-2 suggests that forward-looking physical highway considerations are primarily negative while evolving drayage vehicle considerations are primarily positive. Terminal considerations appear neutral. Tables 4-3 and 4-4 list forward-looking short-term and long-term considerations for drayage baseline evaluators.

Table 4-2. TBL considerations and port drayage trends.

TBL Category | Vehicles | Terminals | Ways
Economics | More Expensive | Little Change | Underfunded
Environmental | Cleaner | Little Change | Little Change
Social | Little Change | Little Change | More Congestion

Table 4-3. Short-term drayage baseline factors.

Positive Short-Term Factors | Negative Short-Term Factors
Cleaner diesel engine technology (2007, 2010) | Contribute to increasing highway congestion
Increased diesel fuel efficiency standards | Driver shortage increases cost
Hybrid and electric tractors | Fuel cost increases
Early ITS improving efficiency | New tractors more costly

Table 4-4. Long-term drayage baseline factors.

Positive Long-Term Factors | Negative Long-Term Factors
ITS Technology--Efficiency | Underfunding for highways
ITS Technology--Capacity | Long-term congestion impact
ITS Technology--Safety | Long-term emissions impact
Cleaner tractors |
New terminal infrastructure not needed |
Redundancy |

Locating Potential Candidates

An evaluation of any kind would usually begin with publication of the minimum performance objectives or goals and an invitation to submit concepts or proposals. Alternatively, evaluators could conduct a search for candidates. Ideally, the evaluation should encompass the widest possible range of technologies and applications.

To be a candidate, a proposed technology or system needs to show potential application to the goal or problem statement. Although it may seem self-evident, this observation implies a focus on what the proposed system or technology would accomplish rather than how it operates. A technology for high-speed movement, for example, may not be applicable to a capacity problem. From this perspective, the method is concerned first with outcomes and then with how those outcomes are achieved.

Screening Candidates

An early screening step allows the evaluator to narrow the field, conserve analytic resources, and emerge with a short list of comparable technologies or systems that meet performance requirements and do not possess any "fatal flaws." Screening criteria are usually expressed as minimum requirements or eligibility factors and are a subset of the broader list of selection criteria, as discussed above.

Screening may be as simple as listing and verifying the salient facts. For example, the recent Roadmap project in Los Angeles and Long Beach determined that "a fixed-guideway system implementation timeline is significantly longer than the deployment of electric trucks." The project evaluated that finding against the criteria of "Emissions and Health Risk Reduction" and found that while both fixed-guideway and electric trucks were zero-emissions technologies, using electric trucks was a superior option because it resulted in the more rapid realization of project goals for emissions reduction.

The goal of this first level of documentation is simplicity and consistency. The evaluators need enough information to determine if the proposals will pass the screening criteria. Evaluation workload increases as the number and complexity of proposals increases. Evaluator resources may be especially strained where conceptual proposals must be "fleshed out" or otherwise brought to a common level of documentation. The information available on many current advanced-technology proposals is mostly descriptive and conceptual. Unless the evaluation itself is conceptual, many proposals are not developed far enough at present for quantitative evaluation.

The analytic resources available to evaluators are likewise usually limited. Here, too, the focus should be on goals and on information that allows decisionmakers to determine whether potential candidates can serve those goals. In a pragmatic sense, evaluators need to determine whether a technology would meet the goal or solve the problem if it performs as proposed. Data that inform that determination are relevant at the screening step.

Screening criteria would become more extensive and more stringent as consideration progressed from conceptual technologies toward implementation of an operable system at a specific port. The ability to integrate with legacy terminals and use specific existing rights-of-way, for example, is not important at the conceptual research and development level, but will probably be critical in the implementation phase.

The concept of a fatal flaw can be applied at any stage of the evaluation and would eliminate the proposal as a candidate. The fatal flaw concept is usually applied in technical evaluations, but can also find application in other instances. For example, proposals might be found to have fatal flaws if they

• Would not function under the range of operating conditions (e.g., weather extremes).
• Were found to be structurally unsound.
• Included features that were illegal, prohibited by regulation, or violated labor contracts.
• Assumed conditions or actions by others that could not be guaranteed.
• Could not fit or function in the site or right-of-way available.

Analyzing Candidates

Candidate analysis first involves assembling the information required to support an effective evaluation. Performance against minimum requirements is typically a pass/fail dichotomy and would likely have been checked during the screening process. Fatal flaws that affect feasibility may show up at this stage and would also lead evaluators to drop affected alternatives.

Proposals that pass initial screening would ordinarily constitute a "short list" of viable candidates for analysis. Although the evaluation method provides for a single screening step followed by a short list evaluation, there is no reason why a second narrowing of the field could not take place if useful. It may be possible, for example, that the assembly of screening criteria and data would reveal additional stakeholder concerns or requirements that could be addressed in a second screening step.

Assembling Evaluation Data

Following the screening process, the best candidates would be asked to provide the technical information necessary to support further evaluation. Depending on the purpose, this need might be satisfied through a formal Request for Proposals.

Obtaining all the data required to apply a rigorous, quantitative analysis is likely to be a major challenge to evaluators. The current (2014) state of the art for alternative inland container transport technologies is largely conceptual. There are few physical prototypes and no operating systems to provide empirical data. Most proposed technologies exist only on paper, as scale models, or as laboratory or field demonstrations of components. Under those circumstances, data are scarce and the available data are mostly rough estimates. Previous studies have gathered data via internet searches, literature reviews, technology sponsor presentations and promotional materials, and inquiries to the proposers. The 2009 LA/LB RFCS is the only request for concepts to date. (A formal proposal process was originally anticipated to follow the concepts and solutions phase if workable solutions had emerged.)

At the present conceptual level, the data available will be primarily descriptive, rather than empirical performance metrics. Reliable technology and system performance metrics are likely to become available only after extended demonstrations and pilot installations, which have yet to be undertaken. Once those have been funded, initiated, and completed, there should be objective data available on factors such as throughput capacity, velocity, reliability, energy consumption, and emissions. Objective data on congestion relief, market penetration, operating cost, construction cost, and so forth are unlikely to become available in advance of full-scale implementation.

Until these technologies and systems have been implemented in the port environment and have compiled a performance record, evaluators will have to rely on estimation, comparisons with similar systems in other applications, or simulation modeling.

These approaches to data compilation tend to be costly and time-consuming and blur the line between data assembly and data analysis.

Regardless of whether data are readily available, evaluators of inland container transport proposals will need to address issues common to all such efforts:

• Comparability. System and technology sponsors working in isolation may measure performance differently, estimate costs differently, and even define container transport requirements differently. Attaining comparability may require multiple feedback loops to identify and reconcile important differences.
• Assumptions. The research team's review of technology proposals suggests that different sponsors have made different explicit and implicit assumptions regarding container sizes and weights, loading and unloading capabilities, rights-of-way, terminal configurations, and other important factors. It would probably be necessary for evaluators to provide a comprehensive set of common technical assumptions in a solicitation process.
• Reliability and objectivity. There is a persistent risk of accepting aspirational or promotional information as objective fact. Technology and system sponsors are in the business of putting their proposals in the best possible light and putting the best possible face on available data. Evaluators may have to probe and verify promoter claims.
• Maximum versus routine performance. A distinction must be maintained between the maximum or optimal performance of which the technology or system may be capable and the routine performance that it can be expected to deliver on a reliable basis in daily operation. It is common to provide specifications such as minimum headway between containers, maximum speed, and maximum hourly throughput. Taking headway as an example, a minimum headway of 1 minute between containers does not translate into routine throughput of 60 containers per hour unless the loading system can insert the containers at precise 1-minute intervals and the unloading system can take containers out of the system that quickly (see the sketch below).

After the data and other information are received, they would be reviewed by the project sponsor, possibly with the assistance of a team of experts who can thoroughly evaluate the technical aspects of a proposal. As the technological readiness of alternate container movement systems increases, the amount and quality of specific technical information may be expected to increase, and with it the complexity of fact verification. For example, assuming energy use was an important performance criterion, the proposal review team would need to be competent to judge the practical accuracy of the claims included in competing proposals. Such expertise is likely outside the routine competence of the staff of most public funding agencies.

With a larger project scope it becomes increasingly desirable to use common metrics for evaluation and criteria weighting. This need becomes especially acute in confronting dissimilar TBL tradeoffs. New transport systems and the evolving drayage baseline will all offer different mixes of economics, capacity, service, emissions, safety, congestion relief, and risk. The distribution of costs and benefits will also vary over time, so a common metric that can accommodate time factors is needed.
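
Returning to the headway example above, the following is a minimal sketch of why a quoted minimum headway overstates routine throughput whenever loading or unloading is the binding constraint. All cycle times are hypothetical.

    # Effective throughput is limited by the slowest of line-haul headway,
    # loading rate, and unloading rate. The values below are hypothetical.

    def effective_throughput(min_headway_min: float,
                             load_cycle_min: float,
                             unload_cycle_min: float) -> float:
        """Containers per hour the system can sustain in routine operation."""
        binding_interval = max(min_headway_min, load_cycle_min, unload_cycle_min)
        return 60.0 / binding_interval

    # A 1-minute minimum headway implies 60 containers per hour only if loading
    # and unloading can also keep a 1-minute pace; a 1.5-minute load cycle cuts
    # the routine rate to 40 containers per hour.
    print(effective_throughput(1.0, 1.0, 1.0))   # 60.0
    print(effective_throughput(1.0, 1.5, 1.2))   # 40.0
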
Monetization—the use of dollar values as a common metric—is typically the preferred approach, particularly for projects of large scope, because it can accommodate both time and risk factors. By converting all criteria performance factors to dollar equivalents and adopting a life cycle net present value (NPV) approach, evaluators can develop a uniform scoring and evaluation process for complex, disparate systems proposals. The monetization approach requires

• Establishment of agreed dollar values for performance measurements associated with selection criteria, particularly social and environmental measures.
• Agreement on risk assessments, typically expressed as expected values for each outcome.
• Agreement on a discount rate (interest rate) for scaling the time dimension and calculating NPV.

There are well-established methods for each of these steps (e.g., TTI has done extensive analysis on the cost of congestion; the economic value of a human life lies within a generally accepted range), but incorporating equivalent monetary values for a diverse set of criteria (e.g., emissions benefits, safety improvements, congestion relief, or other non-financial criteria) into a single model will be inherently contentious and difficult. Monetization approaches include

• Determining the cost of achieving the same non-financial outcome (e.g., an emissions reduction or congestion improvement) by the best alternative means.
• Estimating the value of an improvement (e.g., reduced traffic accidents or reduced community health problems) as the unit cost of the current problem (e.g., average social cost of a traffic accident or the medical costs and lost earnings associated with illness).
• Using expert opinion.

Although expert opinion may be consulted, the final values associated with each criterion remain the responsibility of the user. Monetization of all the evaluation criteria has the following benefits:

• Many of the evaluation criteria are already monetized, and much research has already been done in the area of monetizing environmental and social criteria.
• Monetary values provide a transparent and easily understood means to weigh dissimilar comparison criteria.
• Although there will be discussion and disagreement over values assigned, that discussion is inevitable, healthy, and facilitated by using a commonly understood measure of value.
• Monetization facilitates the use of other common financial means for measuring and evaluating risk, while producing a quantified system comparison in the form of standard financial measures (including return on investment).

Unfortunately, the state of the practice as of 2014 in advanced-technology container transport systems does not support a significant degree of monetization. As the case studies illustrate, there are no usable estimates for emissions, safety improvements, security improvements, or other factors that would ideally be monetized.

Risk Distribution

Because users will have differing tolerances of risk and alternative container transport systems will have different risk profiles, it is critical to determine the risk characteristics of each alternative in addition to the NPV of costs and benefits. To perform this analysis, the risk characteristics of each criterion contributing to the flow of costs and benefits are estimated and considered. The result is a frequency distribution of NPV, which will permit the user to compare a system with a lower NPV and less risk with a system with a higher NPV and more risk.

There are multiple kinds of risk, of which the most critical for this application would be

• Technological risk—the risk that the technology will not work
• Performance risk—the risk that the technology or system will not provide the expected level of service
• Capital cost risk—the risk that the capital costs of the system will be more than estimated or more than the budgeted contingency reserve
• Operating cost risk—the risk that the operating costs will be higher than estimated
• Implementation risk—the risk that the system will take longer to implement than expected
• Commercial risk—the risk that the system will not be commercially viable
• Environmental risk—the risk that the system will have greater adverse environmental impacts than estimated
• Safety risk—the risk that the system would cause or increase the likelihood of harm to persons or property, or of terrorist or politically induced attack

Measurement and evaluation of risk is difficult at best. Ideally, there would be a large number of similar attempts from which a distribution of outcomes could be derived. In practice, however, assessments of risk tend to be subjective and relative rather than objective and absolute.

Risk itself is frequently defined as the consequences of an event multiplied by its probability, essentially the inverse of the expected value of benefits. In this application, it is probably easier to estimate the consequences of an event rather than its probability. For example, the consequences of a container loaded with furniture toppling off a vehicle on an elevated guideway and onto an arterial street below could include

• Loss of life or injury due to container impact or to subsequent traffic accidents
• Damage to property, including the furniture in the container, the container, vehicles on the street, the contents of those vehicles, and the street itself
• Disruption to traffic, costs of delay to individuals, and the costs of cleanup to the municipality or the system operator

The likelihood of this event, however, would be very hard to estimate. At best, it might be possible to rank potential technologies and systems as more or less likely to have such events based on system design features such as securement to vehicles and fences or walls on the guideway. Figure 4-4 shows a simplified method for categorizing risk and subsequent management actions.

Figure 4-4. A simplified risk management model. The figure is a matrix of consequence (1 = Catastrophic, 2 = Very Serious, 3 = Serious, 4 = Not Serious) against likelihood (A = Certain, B = Highly Probable, C = Probable, D = Improbable). Cells are grouped into three risk levels with associated action/management steps: unacceptable risks requiring action to eliminate or reduce (1A, 2A, 3A, 1B, 2B, 1C); risks that may be either unacceptable (requiring reduction or elimination) or acceptable (requiring management) (1D, 2C, 2D, 3B, 3C); and usually acceptable risks that may require management (3D, 4A, 4B, 4C, 4D).
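
As an illustration only, the sketch below encodes the kind of likelihood-consequence categorization shown in Figure 4-4. The assignment of cells to action bands follows the reading of the figure given above and should be treated as an interpretation, not a standard.

    # A minimal sketch of a Figure 4-4 style risk categorization. The banding
    # below follows the reconstruction of the figure in the text and is an
    # interpretation, not a normative standard.

    CONSEQUENCE = {1: "Catastrophic", 2: "Very Serious", 3: "Serious", 4: "Not Serious"}
    LIKELIHOOD = {"A": "Certain", "B": "Highly Probable", "C": "Probable", "D": "Improbable"}

    UNACCEPTABLE = {"1A", "2A", "3A", "1B", "2B", "1C"}
    CONDITIONAL = {"1D", "2C", "2D", "3B", "3C"}
    ACCEPTABLE = {"3D", "4A", "4B", "4C", "4D"}

    def risk_action(consequence: int, likelihood: str) -> str:
        """Return the management action band for a consequence/likelihood pair."""
        if consequence not in CONSEQUENCE or likelihood not in LIKELIHOOD:
            raise ValueError("consequence must be 1-4 and likelihood A-D")
        cell = f"{consequence}{likelihood}"
        if cell in UNACCEPTABLE:
            return "Unacceptable risk: action required to eliminate or reduce"
        if cell in CONDITIONAL:
            return "May be unacceptable or acceptable: reduce, eliminate, or manage"
        return "Usually acceptable: may require management"

    # Example: a catastrophic but improbable event (such as a container falling
    # from an elevated guideway) lands in the middle band.
    print(risk_action(1, "D"))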

Another way to approach risk assessment is to estimate the costs of reducing or averting risk. One source17 suggests the following pertinent questions:

• What can be done and what options are available?
• What are the associated tradeoffs in terms of costs, benefits, and risks?
• How do current management decisions [or, in this case, design decisions] affect risk?

17. Originally in Yacov Y. Haimes, "Total Risk Management," Risk Analysis 11, No. 2 (1991): 169–171.

Evaluating Candidates

The analysis step alone could be sufficient to locate the best candidate (or to determine that no proposal would succeed) without a formal evaluation process. The method includes this separate evaluation step for two reasons, however:

• With multiple criteria in a TBL assessment, there will likely be a need to evaluate tradeoffs between the advantages and disadvantages of proposed solutions.
• In a highly visible issue, the concerns of stakeholders and proposal advocates may be best met through a formal ranking or rating of options against established criteria.

There are numerous methods for evaluating proposals once analysis has been completed, and no one method is best for all applications. For example, there is an important distinction to be made between two kinds of choices:

• Choosing one or more candidates for dedicated funding (e.g., dividing dedicated research funds among qualifying proposals), demonstration projects, soliciting bids, and so forth.
• Deciding whether or not a given proposal will be funded or chosen (e.g., approving or disapproving a proposed demonstration or full-scale implementation).

In the first instance, the task is to pick one or more competing qualified proposals, and the evaluation tends to be relative rather than absolute. In the second instance, the objective is to determine whether or not the proposal is a good use of public or private funds, and the evaluation tends to be absolute. The evaluation method should be chosen accordingly.

As suggested in the descriptions of ranking, rating, scoring, and weighting approaches that follow, the more complex approaches are more likely to be suitable as the precision, reliability, and completeness of proposal information increase. At the conceptual stage corresponding to low levels of technology or system readiness, performance measures tend to be rough, descriptive, expressed in wide ranges, or unavailable. For most systems, this is the state of the art in 2014. Under such circumstances, relative rankings based on proponent claims might be the most that can be expected. Attempting a more definitive or detailed evaluation would likely be risky, open to question, and potentially misleading.

As TRLs and SRLs progress, it should be possible to advance from relative rankings to scalar ratings, at least for the more readily quantifiable criteria. A review of alternative container transport technology proposals and requests suggests that factors such as capacity, speed, and emissions may be easier to quantify and score than factors such as cost, congestion impacts, or safety.

Ranking

In the most basic form of ranking, the proposals would be evaluated against each criterion and listed in order, usually from best to worst. This approach has obvious limitations where there are multiple criteria.

One means of ranking proposals against multiple criteria is shown in Table 4-5. By ranking the six proposals in the example from 1 to 6 on five hypothetical criteria, the total score can be used to select the best proposal or to rank all six. In the example, Proposal C is ranked highest overall. In this case, no weights are assigned to criteria, so, for example, low cost is just as important as relieving congestion.

This approach would be most suitable when the consequences of the decision are smaller (e.g., a modest research grant or descriptions of the state of the art) or where detailed information on the various proposals is not readily available. The ability of rankings to distinguish between similar and disparate proposals is limited. In Table 4-5, for example, Proposal C, the apparent winner, may have a much higher cost than Proposal A, but only a slightly better safety outcome.

Table 4-5. Example of criteria ranking (1 = best, 6 = worst).

Proposal | Capacity | Cost | Emissions | Safety | Congestion | Total Score
A | 1 | 2 | 2 | 3 | 5 | 13
B | 3 | 5 | 5 | 1 | 4 | 18
C | 4 | 3 | 1 | 2 | 1 | 11
D | 2 | 6 | 3 | 5 | 3 | 19
E | 5 | 1 | 4 | 6 | 2 | 18
F | 6 | 4 | 6 | 4 | 6 | 26

Ratings

Criteria rating, an example of which is shown in Table 4-6, provides more flexibility. By rating each proposal on a scale of 1 to 5 it is possible to have ties and to distinguish either very good or very bad performance. The example in Table 4-6 shows Proposal L to be somewhat better than I or K, but Proposal G to be much worse than all the others. The 1 to 5 scale is commonly used in survey questions and can be assigned either numerical or text values. The average score can thus reflect technical scoring, opinion, judgment, or even emotional response.

Table 4-6. Example of criteria rating (1-5, 5 being best).

Proposal | Capacity | Cost | Emissions | Safety | Congestion | Avg. Score
G | 1 | 2 | 1 | 3 | 5 | 2.4
H | 4 | 5 | 1 | 1 | 4 | 3.0
I | 4 | 3 | 5 | 3 | 1 | 3.2
J | 1 | 3 | 3 | 5 | 3 | 3.0
K | 5 | 2 | 4 | 3 | 2 | 3.2
L | 2 | 4 | 4 | 4 | 3 | 3.4

In Table 4-7, the 1 to 5 ratings have been converted to the familiar "Consumer Reports" symbols. This approach to displaying results works best when differences between proposals are clear or where it is used to pick more than one leading candidate. In Table 4-7, Proposals I, K, and L cannot be distinguished in the overall rating, but all three are clearly better than G, H, or J. A similar approach was used in the 2011 LA/LB Roadmap, discussed later.
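
The arithmetic behind the ranking and rating examples is simple enough to express directly. The sketch below tallies the Table 4-5 rank totals (lower is better) and the Table 4-6 average ratings (higher is better), using the data from the tables above.

    # Tallying the unweighted ranking and rating examples.
    # Criteria order: Capacity, Cost, Emissions, Safety, Congestion.

    rankings = {   # Table 4-5: rank on each criterion, 1 = best
        "A": [1, 2, 2, 3, 5], "B": [3, 5, 5, 1, 4], "C": [4, 3, 1, 2, 1],
        "D": [2, 6, 3, 5, 3], "E": [5, 1, 4, 6, 2], "F": [6, 4, 6, 4, 6],
    }
    ratings = {    # Table 4-6: rating on each criterion, 5 = best
        "G": [1, 2, 1, 3, 5], "H": [4, 5, 1, 1, 4], "I": [4, 3, 5, 3, 1],
        "J": [1, 3, 3, 5, 3], "K": [5, 2, 4, 3, 2], "L": [2, 4, 4, 4, 3],
    }

    rank_totals = {p: sum(r) for p, r in rankings.items()}
    best_by_rank = min(rank_totals, key=rank_totals.get)     # Proposal C, total 11
    avg_ratings = {p: sum(r) / len(r) for p, r in ratings.items()}
    best_by_rating = max(avg_ratings, key=avg_ratings.get)   # Proposal L, average 3.4

    print(rank_totals, best_by_rank)
    print(avg_ratings, best_by_rating)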

Criteria Scoring

Using a broader scale (e.g., 1–100, as shown in Table 4-8) allows both more discrimination between similar proposals and a wide spread between disparate proposals. Without weighting the criteria, however, this approach still does not reflect the relative importance of cost and emissions reduction, for instance. In the example shown, Proposal P is scored highest. It could be argued, however, that Proposal R is more cost-effective at reducing emissions, because it scored higher than P on both cost and emissions criteria.

More fundamentally, the ability to score proposals accurately on a scale of 1 to 100 implies that enough is known about the proposals to make those distinctions. As of 2014, the proposals for alternative container transport systems are for the most part still highly conceptual. The ability to distinguish a score of 45 from a score of 44 or 46 would therefore be questionable at best. A detailed scoring approach such as illustrated in Table 4-8 would likely be more appropriate for some point well in the future when alternative container transport systems have advanced to where performance factors can be reliably quantified.

Table 4-9 illustrates another application of criteria scoring in support of container transport system evaluation. In this example, importance scores have been awarded to the criteria themselves by stakeholders. The scores have then been totaled and normalized to result in percentage weightings. In an evaluation, the Capacity scores would receive a 19% weight, the Cost scores a 20% weight, and so forth.

Table 4-7. Example of "Consumer Reports" criteria rating (symbol ratings of Proposals G through L on Capacity, Cost, Emissions, Safety, Congestion, and Overall; the symbols are not reproducible in text).

Table 4-8. Example of criteria scoring (0-100, 100 being best).

Proposal | Capacity | Cost | Emissions | Safety | Congestion | Total Score
M | 35 | 23 | 60 | 15 | 56 | 189
N | 20 | 45 | 30 | 26 | 40 | 161
O | 70 | 63 | 29 | 56 | 80 | 298
P | 55 | 48 | 67 | 73 | 90 | 333
Q | 60 | 36 | 40 | 55 | 19 | 210
R | 30 | 75 | 80 | 15 | 60 | 260

Weighted Criteria Scoring

The performance of proposals against a set of criteria and the relative importance of those criteria to the relevant decisionmakers can eventually be combined in weighted criteria scoring. As illustrated in Table 4-10, selection criteria are given weights (by whatever method) that are then applied to performance scores. In Table 4-10, Proposal T has the highest weighted score by a small margin over Proposals U and V.

Changing the weights can change the final rankings. Table 4-11 uses the same performance scores as Table 4-10, but the criteria weights from Table 4-9. Proposal V then receives the highest score.

Table 4-9. Example of criteria weighting (criteria weighting scores, 0-100).

Stakeholder | Capacity | Cost | Emissions | Safety | Congestion | Total Score
1 | 35 | 23 | 60 | 15 | 56 | 189
2 | 20 | 45 | 30 | 26 | 40 | 161
3 | 70 | 63 | 29 | 56 | 80 | 298
4 | 55 | 48 | 67 | 73 | 90 | 333
5 | 60 | 36 | 40 | 55 | 19 | 210
6 | 30 | 75 | 80 | 15 | 60 | 260
Total Score | 270 | 290 | 306 | 240 | 345 | 1451
Normalized Weight | 19% | 20% | 21% | 17% | 24% | 100%

Table 4-10. Example of weighted criteria scoring (0-10, 10 being best).

Criteria | Capacity | Cost | Emissions | Safety | Congestion | Weighted Score
Weight | 30% | 30% | 20% | 10% | 10% | 100%
Proposal S | 3.5 | 2.5 | 6.0 | 1.5 | 5.5 | 3.7
T | 8.0 | 5.5 | 4.5 | 4.5 | 8.5 | 6.3
U | 7.0 | 6.5 | 3.0 | 5.5 | 8.0 | 6.0
V | 5.5 | 5.0 | 6.5 | 7.5 | 9.0 | 6.1
W | 6.0 | 4.0 | 4.0 | 5.5 | 2.0 | 4.6
X | 3.0 | 7.5 | 8.0 | 1.5 | 6.0 | 5.5

Table 4-11. Example of weighted criteria scoring with alternative weights (0-10, 10 being best).

Criteria | Capacity | Cost | Emissions | Safety | Congestion | Weighted Score
Weight | 19% | 20% | 21% | 17% | 24% | 100%
Proposal S | 3.5 | 2.5 | 6.0 | 1.5 | 5.5 | 4.0
T | 8.0 | 5.5 | 4.5 | 4.5 | 8.5 | 6.3
U | 7.0 | 6.5 | 3.0 | 5.5 | 8.0 | 6.0
V | 5.5 | 5.0 | 6.5 | 7.5 | 9.0 | 6.8
W | 6.0 | 4.0 | 4.0 | 5.5 | 2.0 | 4.1
X | 3.0 | 7.5 | 8.0 | 1.5 | 6.0 | 5.4
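
The sketch below works through the arithmetic behind Tables 4-9 through 4-11: stakeholder importance scores are totaled and normalized into criteria weights, and the same proposal performance scores then produce different leaders under the Table 4-10 weights and the normalized Table 4-9 weights. The data are taken from the tables above.

    # Weighted criteria scoring as in Tables 4-9 to 4-11.
    # Criteria order: Capacity, Cost, Emissions, Safety, Congestion.

    stakeholder_scores = [  # Table 4-9: importance scores (0-100) from six stakeholders
        [35, 23, 60, 15, 56],
        [20, 45, 30, 26, 40],
        [70, 63, 29, 56, 80],
        [55, 48, 67, 73, 90],
        [60, 36, 40, 55, 19],
        [30, 75, 80, 15, 60],
    ]
    column_totals = [sum(col) for col in zip(*stakeholder_scores)]        # 270, 290, 306, 240, 345
    normalized_weights = [t / sum(column_totals) for t in column_totals]  # about 19%, 20%, 21%, 17%, 24%

    performance = {  # Tables 4-10/4-11: proposal scores on a 0-10 scale
        "S": [3.5, 2.5, 6.0, 1.5, 5.5], "T": [8.0, 5.5, 4.5, 4.5, 8.5],
        "U": [7.0, 6.5, 3.0, 5.5, 8.0], "V": [5.5, 5.0, 6.5, 7.5, 9.0],
        "W": [6.0, 4.0, 4.0, 5.5, 2.0], "X": [3.0, 7.5, 8.0, 1.5, 6.0],
    }

    def weighted_scores(weights):
        """Apply a weight vector to every proposal's criterion scores."""
        return {p: round(sum(w * s for w, s in zip(weights, scores)), 1)
                for p, scores in performance.items()}

    table_4_10 = weighted_scores([0.30, 0.30, 0.20, 0.10, 0.10])  # Proposal T scores highest
    table_4_11 = weighted_scores(normalized_weights)              # Proposal V scores highest
    print(table_4_10)
    print(table_4_11)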

Monetized Criteria Scoring

When criteria have been monetized in the analytical phase, the evaluation exercise is straightforward, as commonly used economic and financial investment techniques such as cost-benefit and cost-effectiveness analysis can be applied. During the analytical phase, TBL costs and benefits of alternative container transport systems are estimated from inception and throughout their useful lives. These risk-adjusted cash and non-cash costs and benefits are combined to produce a summation of benefits year by year. This flow of costs and benefits produces an NPV for each alternative system based on a full life cycle of service.

The monetization approach effectively embeds criteria metrics and weights in a single measurement: dollar equivalents. The monetization process itself, therefore, requires a significant investment of time and resources. If successfully completed, however, monetization provides an analytically rigorous approach to comparing proposals, analyzing tradeoffs, and optimizing TBL results. For some projects, the addition of the TBL benefits provides the critical rationale for advancing a project that may not be justifiable on financial merit alone.

Choosing Candidates

The final step in the process is the comparison of the candidate technologies or systems being evaluated with the evolving drayage baseline (assuming a baseline is applicable). Perhaps the most challenging aspect of this step will occur when the choice involves a high-risk, high-value alternative versus a less risky alternative with a lower value. This case would be judged based on the risk tolerance of the decisionmaker.

Ideally, a TBL NPV would be calculated using monetized evaluation criteria and the successful alternative(s) selected for comparison against the drayage baseline. The essential feature of this step is that the alternative technologies are compared with each other based on risk and the NPV of TBL benefits. The process allows for more than one successful system as circumstances may dictate. For example, the process may be used to find the three best systems for development grants.
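
As a minimal sketch of the monetized, risk-adjusted comparison described in this chapter, the following computes a probability-weighted life cycle NPV for two hypothetical alternatives. Every number in it (cash flows in millions of dollars, the discount rate, the scenario probabilities) is an illustrative assumption, not an estimate from the report.

    # A minimal sketch of a life cycle TBL NPV comparison with risk adjustment.
    # All values (annual monetized benefits, capital costs, discount rate, and
    # scenario probabilities) are hypothetical placeholders in $ millions.

    def npv(cash_flows, discount_rate):
        """Net present value of a list of annual net TBL benefits (year 0 first)."""
        return sum(cf / (1 + discount_rate) ** year for year, cf in enumerate(cash_flows))

    def expected_npv(scenarios, discount_rate):
        """Probability-weighted NPV across outcome scenarios: [(probability, cash_flows), ...]."""
        return sum(p * npv(cf, discount_rate) for p, cf in scenarios)

    DISCOUNT_RATE = 0.05
    YEARS = 30

    # Alternative 1: high capital cost, large monetized emissions/congestion
    # benefits, with a 30% chance the system underperforms.
    alt1 = [(0.7, [-900.0] + [95.0] * YEARS),
            (0.3, [-900.0] + [55.0] * YEARS)]

    # Alternative 2 (an evolving drayage baseline upgrade): low capital cost,
    # smaller but more certain benefits.
    alt2 = [(0.9, [-150.0] + [25.0] * YEARS),
            (0.1, [-150.0] + [15.0] * YEARS)]

    for name, scenarios in [("Alternative 1", alt1), ("Alternative 2", alt2)]:
        print(name, round(expected_npv(scenarios, DISCOUNT_RATE), 1))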
