Protection of Transportation Infrastructure from Cyber Attacks: A Primer (2016)

Chapter 2: Cybersecurity Risk Management, Risk Assessment and Asset Evaluation

Risk Management

Managing the risks associated with cyber, IT, and ICS assets can prove intractably challenging. Even for the most robust and up-to-date security systems, there is an ever-growing risk that the next exploitation methodology will be discovered by an attacker and introduced without detection. In truth, it takes the commitment of significant resources and the development of substantial expertise to establish and maintain an effective cybersecurity program or response capability.

For transportation agencies, the response to the IT and ICS security challenge lies in the formulation of a program that both balances and shares responsibility for critical infrastructure system protection among operators and employees, government agencies, industry stakeholders, technology manufacturers, and product vendors. Unlike physical security protection, where countermeasures can be deployed by an organization to harden a critical asset, "locking down" cyber systems demands that vulnerabilities be identified and eliminated, reduced, or mitigated along the entire technological supply chain. Overcoming the global threat posed by international attackers who can exploit from afar adds a dimension to the problem that requires participation by government and, by extension, the entire international community.

Although there are variations in application, the risk management process for transportation agencies in this cyber environment requires consideration and adoption of many of the same security principles used in the protection of physical assets. Transportation Cyber Risk Management is the process whereby transportation risk scenarios are analyzed and acted upon. This includes scenarios wherein accidents are deliberately caused, public transportation services are rendered unavailable, carrier systems lose location identifiers, and shipments are irretrievably lost.

Figure 1: Risk Management Program for Control System Security (source: conference presentation, October 27, 2010, David Sawin, Volpe Program Manager)

Optimally, significant inherent operational risk should be viewed in the context of transportation business and environmental control factors, resulting in recommendations for risk response options. Response options include risk avoidance, risk acceptance, risk reduction strategies (including assessment, and dependency and spreading), and risk transfer.

Avoidance, the simplest of all solutions for eliminating risk, consists of refraining from engaging in the risky activity in the first place. For example, where cyber risk is presented by the technological automation of an operational system, the alternative of a non-cyber ventilation system eliminates the cyber-related risk of automating fan mechanisms. In this rudimentary example, however, it becomes readily apparent that one or more employees will be required to manually turn each of the ventilation system's fans on and off whenever required in order to make the system function.

Similarly, acceptance of risk requires no real action to be taken by the organization, but acceptance should be based on a knowledgeable and responsible recognition of the probability and impact of perceived adverse cyber events. Because of the increased interfacing and integration of modern cyber assets, obtaining accurate information for this approach can be difficult. Typically, cost-benefit analysis is used to determine the tipping point at which the cost of fixing a problem exceeds the return on investment that the mitigation achieves. With cyber, however, the full measure of probable or likely losses is difficult to identify, and the potential for loss of life, now increasingly associated with integrated transportation ICS, further exacerbates the problem.

Most cyber risk, however, is dealt with using risk reduction techniques. Identifying and eliminating the vulnerabilities of IT and ICS systems is the main method of reducing or mitigating losses. Vulnerabilities are identified, catalogued, shared, and "patched," a process that is essential to the response methodology of cybersecurity professionals. Non-professionals are taught IT systems "awareness" as a means of keeping human interface (HMI) types of vulnerabilities from leading to breaches of IT or ICS. Of note, it is estimated that between 300,000 and 1 million cybersecurity positions are currently vacant, and demand is expected to rise as the public, private, and government sectors face unprecedented numbers of data breaches and cybersecurity threats. Today the lack of cybersecurity talent can be an organization's biggest vulnerability, exposing it to serious risk that can equate to unacceptable losses.

Figure 2: Risk Management/Risk Mitigation Strategies (risk assessment, threat assessment, vulnerability assessment, and consequence assessment). Adapted from NCHRP Report 525, Volume 14, Security 101: A Physical Security Primer for Transportation Agencies.
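The acceptance-versus-reduction tipping point described above is usually framed in terms of annualized loss expectancy (ALE): the expected yearly loss from a scenario compared with the yearly cost of the control that would reduce it. The short sketch below works through that arithmetic for a single, entirely hypothetical scenario; the dollar figures, occurrence rates, and the scenario itself are illustrative assumptions rather than data from this Primer.

    # Minimal sketch of the cost-benefit "tipping point" described above.
    # All numbers are hypothetical; an agency would substitute its own estimates.

    def annualized_loss_expectancy(single_loss_expectancy, annual_rate_of_occurrence):
        """ALE = SLE x ARO: expected loss per year from one risk scenario."""
        return single_loss_expectancy * annual_rate_of_occurrence

    # Scenario: ransomware disables a transit agency's fare-collection servers.
    sle = 250_000          # estimated cost of a single incident (recovery, lost fares)
    aro = 0.5              # expected incidents per year (one every two years)
    ale_before = annualized_loss_expectancy(sle, aro)

    # Candidate mitigation: offline backups plus endpoint hardening.
    control_cost_per_year = 60_000
    aro_after = 0.1        # estimated residual occurrence rate with the control in place
    ale_after = annualized_loss_expectancy(sle, aro_after)

    net_benefit = (ale_before - ale_after) - control_cost_per_year
    print(f"ALE before control: ${ale_before:,.0f}")
    print(f"ALE after control:  ${ale_after:,.0f}")
    print(f"Net annual benefit of the control: ${net_benefit:,.0f}")
    # A positive net benefit argues for risk reduction; a negative one suggests
    # acceptance (or a cheaper control), the "tipping point" discussed in the text.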

Figure 3: Risk Scenario Based Process. Adapted from COBIT 5 for Risk, the Information Systems Audit and Control Association (ISACA), www.isaca.com.

In addition to managing risk through vulnerability analysis, other reduction techniques can be deployed. Risk dependency and spreading takes into account that coordinated collaboration among cybersecurity stakeholders, including end-user operators, information security practitioners, designers, manufacturers and distributors, integrators, standards organizations, and government regulators, can result in the identification of defensive strategies that effectively reduce cyber risk. Maximizing the accountability of all stakeholders in the supply chain presents the opportunity for a strong and systematized approach to managing risk that is both highly efficient and cost effective.

The spreading of cyber risk is best illustrated through a discussion of control systems. Historically, control system security was a function of total isolation from external networks: operations commands, instructions, and data acquisition occurred in a closed environment. Today's systems are very different. There are now integrated architectures that connect to external sources, including the corporate LAN, peer sites, business partners and vendors, remote operations and facilities, and the Internet. Protecting what was formerly an isolated ICS, with little if any cybersecurity defenses, can be extremely challenging, particularly since the very nature of an open-architecture network demands the exchange of data from disparate information sources, of which an attacker can take advantage.
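In practice, much of the work of protecting a formerly isolated control network consists of strictly limiting which external hosts and services may reach it. The sketch below is a hypothetical illustration of that idea, not a method prescribed by this Primer: it audits observed network flows against an allow-list for an assumed ICS subnet. The subnet, the permitted flows, and the sample flow records are all invented for the example.

    # Hypothetical illustration: auditing traffic into an ICS subnet against an
    # allow-list, one small piece of defending a formerly isolated control network.
    import ipaddress

    ICS_SUBNET = ipaddress.ip_network("10.20.0.0/24")   # assumed control-system segment

    # (source subnet, destination port) pairs permitted to reach ICS hosts.
    ALLOWED_FLOWS = [
        (ipaddress.ip_network("10.10.5.0/28"), 443),     # engineering workstations -> HMI web UI
        (ipaddress.ip_network("10.10.9.7/32"), 4840),    # historian server -> OPC UA
    ]

    def flow_is_allowed(src_ip: str, dst_ip: str, dst_port: int) -> bool:
        """Return True if a flow into the ICS subnet matches the allow-list."""
        src = ipaddress.ip_address(src_ip)
        dst = ipaddress.ip_address(dst_ip)
        if dst not in ICS_SUBNET:
            return True                       # only police traffic destined for the ICS segment
        return any(src in net and dst_port == port for net, port in ALLOWED_FLOWS)

    # Example flow records as (source IP, destination IP, destination port).
    observed = [
        ("10.10.5.3", "10.20.0.15", 443),     # expected engineering access
        ("192.0.2.44", "10.20.0.15", 502),    # unexpected external host probing Modbus/TCP
    ]
    for src, dst, port in observed:
        status = "allowed" if flow_is_allowed(src, dst, port) else "REVIEW"
        print(f"{src} -> {dst}:{port}  {status}")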

Risk spreading recognizes that all parties and providers in the integrated network architecture, including vendors, suppliers, business partners, corporate and security departments, and the government, share responsibility for deploying mitigation strategies and countermeasures that will reduce the vulnerabilities of the system.

Risk transfer, the use of insurance to transfer all or part of a liability to another business or entity, is one of the traditional market mechanisms for estimating, pricing, and distributing risk. According to the International Risk Management Institute's annual survey of specialized insurance services, businesses spent as much as $2 billion on cyber insurance premiums in 2013, and some estimates suggest that the figure jumped to $5 billion in 2014. Cybersecurity is one of the fastest growing lines of insurance. Coverage can cost more for companies that hold customer personal data, or employee data for companies with large numbers of positions and staff, such as credit card numbers, medical information, and Social Security numbers.

Risk Assessment and Asset Evaluation

A mainstay of both physical and cyber systems security, risk reduction consists primarily of the assessment of threats, vulnerabilities, and consequences (TVC analysis) of an event or series of events in an effort to reduce or mitigate the losses associated with their occurrence. Risk assessments address the potential adverse impacts to organizational operations and assets, individuals, other organizations, and the economic and national security interests of the United States, arising from the operation and use of information systems and the information processed, stored, and transmitted by those systems. Organizations conduct risk assessments to determine risks that are common to the organization's core missions/business functions, mission/business processes, mission/business segments, common infrastructure/support services, or information systems.

NIST Special Publication 800-30 summarizes the steps associated with risk assessment as follows:

STEP 1: PREPARE FOR RISK ASSESSMENT

Task 1-1. Identify Purpose – Identify the purpose of the risk assessment in terms of the information that the assessment is intended to produce and the decisions the assessment is intended to support.

Task 1-2. Identify Scope – Identify the scope of the risk assessment in terms of organizational applicability, time frame supported, and architectural/technology considerations.

Task 1-3. Identify Assumptions and Constraints – Identify the specific assumptions and constraints under which the risk assessment is conducted.

Task 1-4. Identify Information Sources – Identify the sources of descriptive, threat, vulnerability, and impact information to be used in the risk assessment.

Task 1-5. Identify Risk Model and Analytic Approach – Identify the risk model and analytic approach to be used in the risk assessment.

STEP 2: CONDUCT RISK ASSESSMENT

Task 2-1. Identify Threat Sources – Identify and characterize threat sources of concern, including capability, intent, and targeting characteristics for adversarial threats and range of effects for non-adversarial threats.

Task 2-2. Identify Threat Events – Identify potential threat events, relevance of the events, and the threat sources that could initiate the events.

Task 2-3. Identify Vulnerabilities and Predisposing Conditions – Identify vulnerabilities and predisposing conditions that affect the likelihood that threat events of concern result in adverse impacts.

Task 2-4. Determine Likelihood – Determine the likelihood that threat events of concern result in adverse impacts, considering: 1) the characteristics of the threat sources that could initiate the events; 2) the vulnerabilities/predisposing conditions identified; and 3) the organizational susceptibility reflecting the safeguards/countermeasures planned or implemented to impede such events.

Task 2-5. Determine Impact – Determine the adverse impacts from threat events of concern, considering: 1) the characteristics of the threat sources that could initiate the events; 2) the vulnerabilities/predisposing conditions identified; and 3) the organizational susceptibility reflecting the safeguards/countermeasures planned or implemented to impede such events.

Task 2-6. Determine Risk – Determine the risk to the organization from threat events of concern, considering: 1) the impact that would result from the events; and 2) the likelihood of the events occurring.

STEP 3: COMMUNICATE AND SHARE RISK ASSESSMENT RESULTS

Task 3-1. Communicate Risk Assessment Results – Communicate risk assessment results to organizational decision makers to support risk responses.

Task 3-2. Share Risk-Related Information – Share risk-related information produced during the risk assessment with appropriate organizational personnel.

STEP 4: MAINTAIN RISK ASSESSMENT

Task 4-1. Monitor Risk Factors – Conduct ongoing monitoring of the risk factors that contribute to changes in risk to organizational operations and assets, individuals, other organizations, or the Nation.

Task 4-2. Update Risk Assessment – Update existing risk assessment using the results from ongoing monitoring of risk factors.
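In the simplest qualitative treatment, Tasks 2-4 through 2-6 come down to combining a likelihood rating and an impact rating into a risk level for each threat event of concern. The sketch below shows one common way of doing that with qualitative scales; the rating scale, scoring thresholds, and sample threat events are illustrative assumptions and are not prescribed by NIST Special Publication 800-30 or by this Primer.

    # Minimal qualitative risk determination (Tasks 2-4 to 2-6): risk = f(likelihood, impact).
    # Scales and thresholds below are illustrative, not prescribed by NIST SP 800-30.

    LEVELS = {"very low": 0, "low": 1, "moderate": 2, "high": 3, "very high": 4}

    def risk_level(likelihood: str, impact: str) -> str:
        """Combine qualitative likelihood and impact ratings into a qualitative risk rating."""
        score = LEVELS[likelihood] * LEVELS[impact]        # 0..16
        if score >= 12:
            return "very high"
        if score >= 8:
            return "high"
        if score >= 4:
            return "moderate"
        if score >= 1:
            return "low"
        return "very low"

    # Hypothetical threat events for a transit agency, rated by the assessment team.
    threat_events = [
        ("Spear phishing of operations staff",        "high",     "high"),
        ("Malware delivered via removable media",     "moderate", "high"),
        ("Flood at primary data/operations facility", "low",      "very high"),
    ]

    for event, likelihood, impact in sorted(
            threat_events,
            key=lambda e: LEVELS[e[1]] * LEVELS[e[2]],
            reverse=True):
        print(f"{event}: likelihood={likelihood}, impact={impact} -> risk={risk_level(likelihood, impact)}")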

APTA Recommended Practice, Securing Control and Communications Systems in Transit Environments, Part 1, lists the "Stages of the Risk-Assessment Process," describing the major steps in organizing for and conducting a risk assessment for a transit agency (a sketch of an asset register supporting Stages 3 and 6 follows the list):

1. Generate Management Support and Empowerment for the Risk-Assessment Process – Management support is necessary for the risk-assessment process. The process takes time and commitment, and empowerment and resources for the team are necessary.

2. Form the Risk-Assessment Team from Technical Experts and Stakeholders – The team that is formed should be a combination of the organizational "owners" of these areas, technical experts from these areas, and auxiliary groups. For instance, a team might include Engineering, Operations, Maintenance, HR, Safety, IT, and Security.

3. Identify Assets and Loss Impacts – Determine the critical assets that require protection. This may include lists of control and computing equipment, physical and network layouts, etc., and may include hard copy drawings, electronic network drawings, database printouts, etc. Keep this information in a secure central location for the team. Identify possible undesirable events and their impacts. Prioritize the assets based on consequence of loss.

4. Identify Threats to Assets – Identify sources of potential threats to critical assets. Common threat sources include: natural threats (floods, earthquakes, tornadoes, landslides, avalanches, electrical storms, and other such events); human threats (events that are either enabled by or caused by human beings, such as unintentional acts like inadvertent data entry, or deliberate actions such as network-based attacks, malicious software upload, and unauthorized access to confidential information); and environmental threats (long-term power failure, pollution, chemicals, liquid leakage).

5. Identify and Analyze Vulnerabilities – Identify potential vulnerabilities related to specific assets or undesirable events. Identify existing countermeasures and their level of effectiveness in reducing vulnerabilities. Estimate the degree of vulnerability relative to each asset.

6. Assess Risk and Determine Priorities for the Protection of Critical Assets – Estimate the degree of impact relative to each critical asset. Estimate the likelihood of an attack by a potential threat. Likelihood is the probability that a particular vulnerability may be exploited by a potential threat (derived from NIST Risk Management Guide 800-53). Estimate the likelihood that a specific vulnerability will be exploited. This can be based on factors such as prior history of attacks on similar assets, intelligence and warnings from law enforcement agencies, consultant advice, the company's own judgment, and additional factors. Prioritize risks based on an integrated assessment.

7. Identify Countermeasures, Their Costs, and Trade-Offs – Identify potential countermeasures to reduce the vulnerabilities. Estimate the cost of the countermeasures. Conduct a cost-benefit and trade-off analysis. Prioritize options and recommendations for senior management.
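As a concrete illustration of Stages 3 and 6, the sketch below assembles a small asset register and orders it by consequence of loss. Every asset, owner, rating, and undesirable event in it is hypothetical and serves only to show the mechanics of cataloguing and prioritizing critical assets.

    # Hypothetical asset register for APTA Stages 3 and 6: catalogue critical assets,
    # record consequence of loss, and rank them for protection priority.
    from dataclasses import dataclass

    CONSEQUENCE = {"low": 1, "moderate": 2, "high": 3, "severe": 4}

    @dataclass
    class Asset:
        name: str
        category: str             # e.g., control system, business IT, communications
        owner: str                # organizational "owner" from the risk-assessment team
        consequence_of_loss: str  # qualitative rating agreed by the team
        undesirable_event: str    # example loss event considered for the rating

    register = [
        Asset("Train control/SCADA servers", "control system", "Operations", "severe",
              "loss of supervisory control over revenue service"),
        Asset("Fare collection back office", "business IT", "Finance", "high",
              "multi-day outage of fare processing"),
        Asset("Employee HR database", "business IT", "HR", "moderate",
              "disclosure of personal information"),
        Asset("Public website", "communications", "Marketing", "low",
              "defacement or temporary unavailability"),
    ]

    # Stage 6: prioritize the assets based on consequence of loss.
    for asset in sorted(register, key=lambda a: CONSEQUENCE[a.consequence_of_loss], reverse=True):
        print(f"{asset.consequence_of_loss:>8}  {asset.name}  (owner: {asset.owner})")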

Although there are currently very few cybersecurity risk assessment models specifically tailored to surface transportation assets or organizations, there are workable models and methodologies available for use in establishing the parameters by which cybersecurity risk will be evaluated. For example, the DHS ICS-CERT Cybersecurity Evaluation Tool (CSET®) has been utilized by a number of transportation organizations to conduct assessments. Information about ICS-CERT assessments is readily available at https://ics-cert.us-cert.gov/Assessments. The ICS-CERT Assessment Program Overview, as stated on that website, reads:

A core component of ICS-CERT's risk management mission is conducting security assessments in partnership with ICS stakeholders, including critical infrastructure owners and operators, ICS vendors, integrators, Sector-Specific Agencies, other Federal departments and agencies, SLTT governments, and international partners. ICS-CERT works with these and other partners to assess various aspects of critical infrastructure (cybersecurity controls, control system architectures, and adherence to best practices supporting the resiliency, availability, and integrity of critical systems), and provides options for consideration to mitigate and manage risk. ICS-CERT's assessment products improve situational awareness and provide insight, data, and identification of control systems threats and vulnerabilities. ICS-CERT's core assessment products and services include self-assessments using ICS-CERT's Cybersecurity Evaluation Tool (CSET®), onsite field assessments, network design architecture reviews, and network traffic analysis and verification. The information gained from assessments also provides stakeholders with the understanding and context necessary to build effective defense-in-depth processes for enhancing cybersecurity.

Of course, the underlying objective of the risk assessment is ensuring that the organization understands the cybersecurity risk to operations (including mission, functions, image, or reputation), organizational assets, and individuals. A more detailed discussion of the three main areas of cybersecurity TVC analysis follows.

Threat Assessment

In the cyber world, threats are continually manifested, voluminous, and subject to variation. There are identified primary threats, such as "Stuxnet," a worm that attacks critical infrastructure, as well as broader characterizations of threat types, including malware (short for malicious software), defined as any software used to disrupt computer operation, gather sensitive information, or gain access to private computer systems.

The National Institute of Standards and Technology's Guide for Conducting Risk Assessments (NIST Special Publication 800-30 Revision 1, September 2012) identifies threat event types under the category of adversarial/intentional acts as follows:

1. Perform reconnaissance and gather information
   a. Perform perimeter network reconnaissance/scanning. Adversary uses commercial or free software to scan organizational perimeters to obtain a better understanding of the information technology infrastructure and improve the ability to launch successful attacks.
   b. Perform network sniffing of exposed networks. Adversary with access to exposed wired or wireless data channels used to transmit information uses network sniffing to identify components, resources, and protections.
   c. Gather information using open source discovery of organizational information. Adversary mines publicly accessible information to gather information about organizational information systems, business processes, users or personnel, or external relationships that the adversary can subsequently employ in support of an attack.
   d. Perform reconnaissance and surveillance of targeted organizations. Adversary uses various means (e.g., scanning, physical observation) over time to examine and assess organizations and ascertain points of vulnerability.
   e. Perform malware-directed internal reconnaissance. Adversary uses malware installed inside the organizational perimeter to identify targets of opportunity. Because the scanning, probing, or observation does not cross the perimeter, it is not detected by externally placed intrusion detection systems.

2. Craft or create attack tools
   a. Craft phishing attacks. Adversary counterfeits communications from a legitimate/trustworthy source to acquire sensitive information such as usernames, passwords, or SSNs. Typical attacks occur via email, instant messaging, or comparable means, commonly directing users to websites that appear to be legitimate sites, while actually stealing the entered information.
   b. Craft spear phishing attacks. Adversary employs phishing attacks targeted at high-value targets (e.g., senior leaders/executives).
   c. Craft attacks specifically based on deployed information technology environment. Adversary develops attacks (e.g., crafts targeted malware) that take advantage of adversary knowledge of the organizational information technology environment.
   d. Create counterfeit/spoof website. Adversary creates duplicates of legitimate websites; when users visit a counterfeit site, the site can gather information or download malware.
   e. Craft counterfeit certificates. Adversary counterfeits or compromises a certificate authority, so that malware or connections will appear legitimate.
   f. Create and operate false front organizations to inject malicious components into the supply chain. Adversary creates false front organizations with the appearance of legitimate suppliers in the critical life-cycle path that then inject corrupted/malicious information system components into the organizational supply chain.

3. Deliver/insert/install malicious capabilities
   a. Deliver known malware to internal organizational information systems (e.g., virus via email). Adversary uses common delivery mechanisms (e.g., email) to install/insert known malware (e.g., malware whose existence is known) into organizational information systems.
   b. Deliver modified malware to internal organizational information systems. Adversary uses more sophisticated delivery mechanisms than email (e.g., web traffic, instant messaging, FTP) to deliver malware and possibly modifications of known malware to gain access to internal organizational information systems.
   c. Deliver targeted malware for control of internal systems and exfiltration of data. Adversary installs malware that is specifically designed to take control of internal organizational information systems, identify sensitive information, exfiltrate the information back to adversary, and conceal these actions.
   d. Deliver malware by providing removable media. Adversary places removable media (e.g., flash drives) containing malware in locations external to organizational physical perimeters but where employees are likely to find the media (e.g., facilities parking lots, exhibits at conferences attended by employees) and use it on organizational information systems.
   e. Insert untargeted malware into downloadable software and/or into commercial information technology products. Adversary corrupts or inserts malware into common freeware, shareware, or commercial information technology products. Adversary is not targeting specific organizations, simply looking for entry points into internal organizational information systems. Note that this is particularly a concern for mobile applications.
   f. Insert targeted malware into organizational information systems and information system components. Adversary inserts malware into organizational information systems and information system components (e.g., commercial information technology products), specifically targeted to the hardware, software, and firmware used by organizations (based on knowledge gained via reconnaissance).
   g. Insert specialized malware into organizational information systems based on system configurations. Adversary inserts specialized, non-detectable malware into organizational information systems based on system configurations, specifically targeting critical information system components based on reconnaissance and placement within organizational information systems.
   h. Insert counterfeit or tampered hardware into the supply chain. Adversary intercepts hardware from legitimate suppliers. Adversary modifies the hardware or replaces it with faulty or otherwise modified hardware.
   i. Insert tampered critical components into organizational systems. Adversary replaces, through supply chain, subverted insider, or some combination thereof, critical information system components with modified or corrupted components.
   j. Install general-purpose sniffers on organization-controlled information systems or networks. Adversary installs sniffing software onto internal organizational information systems or networks.
   k. Install persistent and targeted sniffers on organizational information systems and networks. Adversary places within internal organizational information systems or networks software designed to (over a continuous period of time) collect (sniff) network traffic.
   l. Insert malicious scanning devices (e.g., wireless sniffers) inside facilities. Adversary uses postal service or other commercial delivery services to deliver to organizational mailrooms a device that is able to scan wireless communications accessible from within the mailrooms and then wirelessly transmit information back to adversary.
   m. Insert subverted individuals into organizations. Adversary places individuals within organizations who are willing and able to carry out actions to cause harm to organizational missions/business functions.
   n. Insert subverted individuals into privileged positions in organizations. Adversary places individuals in privileged positions within organizations who are willing and able to carry out actions to cause harm to organizational missions/business functions. Adversary may target privileged functions to gain access to sensitive information (e.g., user accounts, system files, etc.) and may leverage access to one privileged capability to get to another capability.

4. Exploit and compromise
   a. Exploit physical access of authorized staff to gain access to organizational facilities. Adversary follows ("tailgates") authorized individuals into secure/controlled locations with the goal of gaining access to facilities, circumventing physical security checks.
   b. Exploit poorly configured or unauthorized information systems exposed to the Internet. Adversary gains access through the Internet to information systems that are not authorized for Internet connectivity or that do not meet organizational configuration requirements.
   c. Exploit split tunneling. Adversary takes advantage of external organizational or personal information systems (e.g., laptop computers at remote locations) that are simultaneously connected securely to organizational information systems or networks and to non-secure remote connections.
   d. Exploit multi-tenancy in a cloud environment. Adversary, with processes running in an organizationally-used cloud environment, takes advantage of multi-tenancy to observe behavior of organizational processes, acquire organizational information, or interfere with the timely or correct functioning of organizational processes.
   e. Exploit known vulnerabilities in mobile systems (e.g., laptops, PDAs, smart phones). Adversary takes advantage of the fact that transportable information systems are outside the physical protection of organizations and the logical protection of corporate firewalls, and compromises the systems based on known vulnerabilities to gather information from those systems.
   f. Exploit recently discovered vulnerabilities. Adversary exploits recently discovered vulnerabilities in organizational information systems in an attempt to compromise the systems before mitigation measures are available or in place.
   g. Exploit vulnerabilities on internal organizational information systems. Adversary searches for known vulnerabilities in organizational internal information systems and exploits those vulnerabilities.
   h. Exploit vulnerabilities using zero-day attacks. Adversary employs attacks that exploit as yet unpublicized vulnerabilities. Zero-day attacks are based on adversary insight into the information systems and applications used by organizations as well as adversary reconnaissance of organizations.
   i. Exploit vulnerabilities in information systems timed with organizational mission/business operations tempo. Adversary launches attacks on organizations in a time and manner consistent with organizational needs to conduct mission/business operations.
   j. Exploit insecure or incomplete data deletion in multi-tenant environment. Adversary obtains unauthorized information due to insecure or incomplete data deletion in a multi-tenant environment (e.g., in a cloud computing environment).
   k. Violate isolation in multi-tenant environment. Adversary circumvents or defeats isolation mechanisms in a multi-tenant environment (e.g., in a cloud computing environment) to observe, corrupt, or deny service to hosted services and information/data.
   l. Compromise critical information systems via physical access. Adversary obtains physical access to organizational information systems and makes modifications.
   m. Compromise information systems or devices used externally and reintroduced into the enterprise. Adversary installs malware on information systems or devices while the systems/devices are external to organizations for purposes of subsequently infecting organizations when reconnected.
   n. Compromise software of organizational critical information systems. Adversary inserts malware or otherwise corrupts critical internal organizational information systems.
   o. Compromise organizational information systems to facilitate exfiltration of data/information. Adversary implants malware into internal organizational information systems, where the malware over time can identify and then exfiltrate valuable information.
   p. Compromise mission-critical information. Adversary compromises the integrity of mission-critical information, thus preventing or impeding the ability of the organizations to which the information is supplied from carrying out operations.
   q. Compromise design, manufacture, and/or distribution of information system components (including hardware, software, and firmware). Adversary compromises the design, manufacture, and/or distribution of critical information system components at selected suppliers.

5. Conduct an attack (i.e., direct/coordinate attack tools or activities)
   a. Conduct communications interception attacks. Adversary takes advantage of communications that are either unencrypted or use weak encryption (e.g., encryption containing publicly known flaws), targets those communications, and gains access to transmitted information and channels.
   b. Conduct wireless jamming attacks. Adversary takes measures to interfere with wireless communications so as to impede or prevent communications from reaching intended recipients.
   c. Conduct attacks using unauthorized ports, protocols, and services. Adversary conducts attacks using ports, protocols, and services for ingress and egress that are not authorized for use by organizations.
   d. Conduct attacks leveraging traffic/data movement allowed across perimeter. Adversary makes use of permitted information flows (e.g., email communication, removable storage) to compromise internal information systems, which allows adversary to obtain and exfiltrate sensitive information through perimeters.
   e. Conduct simple Denial of Service (DoS) attack. Adversary attempts to make an Internet-accessible resource unavailable to intended users, or prevent the resource from functioning efficiently or at all, temporarily or indefinitely.
   f. Conduct Distributed Denial of Service (DDoS) attacks. Adversary uses multiple compromised information systems to attack a single target, thereby causing denial of service for users of the targeted information systems.
   g. Conduct targeted Denial of Service (DoS) attacks. Adversary targets DoS attacks to critical information systems, components, or supporting infrastructures, based on adversary knowledge of dependencies.
   h. Conduct physical attacks on organizational facilities. Adversary conducts a physical attack on organizational facilities (e.g., sets a fire).
   i. Conduct physical attacks on infrastructures supporting organizational facilities. Adversary conducts a physical attack on one or more infrastructures supporting organizational facilities (e.g., breaks a water main, cuts a power line).
   j. Conduct cyber-physical attacks on organizational facilities. Adversary conducts a cyber-physical attack on organizational facilities (e.g., remotely changes HVAC settings).
   k. Conduct data scavenging attacks in a cloud environment. Adversary obtains data used and then deleted by organizational processes running in a cloud environment.
   l. Conduct brute force login attempts/password guessing attacks. Adversary attempts to gain access to organizational information systems by random or systematic guessing of passwords, possibly supported by password cracking utilities.
   m. Conduct non-targeted zero-day attacks. Adversary employs attacks that exploit as yet unpublicized vulnerabilities. Attacks are not based on any adversary insights into specific vulnerabilities of organizations.
   n. Conduct externally-based session hijacking. Adversary takes control of (hijacks) already established, legitimate information system sessions between organizations and external entities (e.g., users connecting from off-site locations).
   o. Conduct internally-based session hijacking. Adversary places an entity within organizations in order to gain access to organizational information systems or networks for the express purpose of taking control of (hijacking) an already established, legitimate session, either between organizations and external entities (e.g., users connecting from remote locations) or between two locations within internal networks.
   p. Conduct externally-based network traffic modification (man in the middle) attacks. Adversary, operating outside organizational systems, intercepts/eavesdrops on sessions between organizational and external systems. Adversary then relays messages between organizational and external systems, making them believe that they are talking directly to each other over a private connection, when in fact the entire communication is controlled by the adversary. Such attacks are of particular concern for organizational use of community, hybrid, and public clouds.
   q. Conduct internally-based network traffic modification (man in the middle) attacks. Adversary operating within the organizational infrastructure intercepts and corrupts data sessions.
   r. Conduct outsider-based social engineering to obtain information. Externally placed adversary takes actions (e.g., using email, phone) with the intent of persuading or otherwise tricking individuals within organizations into revealing critical/sensitive information (e.g., personally identifiable information).
   s. Conduct insider-based social engineering to obtain information. Internally placed adversary takes actions (e.g., using email, phone) so that individuals within organizations reveal critical/sensitive information (e.g., mission information).
   t. Conduct attacks targeting and compromising personal devices of critical employees. Adversary targets key organizational employees by placing malware on their personally owned information systems and devices (e.g., laptop/notebook computers, personal digital assistants, smart phones). The intent is to take advantage of any instances where employees use personal information systems or devices to handle critical/sensitive information.
   u. Conduct supply chain attacks targeting and exploiting critical hardware, software, or firmware. Adversary targets and compromises the operation of software (e.g., through malware injections), firmware, and hardware that performs critical functions for organizations. This is largely accomplished as supply chain attacks on both commercial off-the-shelf and custom information systems and components.

6. Achieve results (i.e., cause adverse impacts, obtain information)
   a. Obtain sensitive information through network sniffing of external networks. Adversary with access to exposed wired or wireless data channels that organizations (or organizational personnel) use to transmit information (e.g., kiosks, public wireless networks) intercepts communications.
   b. Obtain sensitive information via exfiltration. Adversary directs malware on organizational systems to locate and surreptitiously transmit sensitive information.
   c. Cause degradation or denial of attacker-selected services or capabilities. Adversary directs malware on organizational systems to impair the correct and timely support of organizational mission/business functions.
   d. Cause deterioration/destruction of critical information system components and functions. Adversary destroys or causes deterioration of critical information system components to impede or eliminate organizational ability to carry out missions or business functions. Detection of this action is not a concern.
   e. Cause integrity loss by creating, deleting, and/or modifying data on publicly accessible information systems (e.g., web defacement). Adversary vandalizes, or otherwise makes unauthorized changes to, organizational websites or data on websites.
   f. Cause integrity loss by polluting or corrupting critical data. Adversary implants corrupted and incorrect data in critical data, resulting in suboptimal actions or loss of confidence in organizational data/services.
   g. Cause integrity loss by injecting false but believable data into organizational information systems. Adversary injects false but believable data into organizational information systems, resulting in suboptimal actions or loss of confidence in organizational data/services.
   h. Cause disclosure of critical and/or sensitive information by authorized users. Adversary induces (e.g., via social engineering) authorized users to inadvertently expose, disclose, or mishandle critical/sensitive information.
   i. Cause unauthorized disclosure and/or unavailability by spilling sensitive information. Adversary contaminates organizational information systems (including devices and networks) by causing them to handle information of a classification/sensitivity for which they have not been authorized. The information is exposed to individuals who are not authorized access to such information, and the information system, device, or network is unavailable while the spill is investigated and mitigated.
   j. Obtain information by externally located interception of wireless network traffic. Adversary intercepts organizational communications over wireless networks. Examples include targeting public wireless access or hotel networking connections, and drive-by subversion of home or organizational wireless routers.
   k. Obtain unauthorized access. Adversary with authorized access to organizational information systems gains access to resources that exceed the authorization.
   l. Obtain sensitive data/information from publicly accessible information systems. Adversary scans or mines information on publicly accessible servers and web pages of organizations with the intent of finding sensitive information.
   m. Obtain information by opportunistically stealing or scavenging information systems/components. Adversary steals information systems or components (e.g., laptop computers or data storage media) that are left unattended outside of the physical perimeters of organizations, or scavenges discarded components.

7. Maintain a presence or set of capabilities
   a. Obfuscate adversary actions. Adversary takes actions to inhibit the effectiveness of the intrusion detection systems or auditing capabilities within organizations.
   b. Adapt cyber attacks based on detailed surveillance. Adversary adapts behavior in response to surveillance and organizational security measures.

8. Coordinate a campaign
   a. Coordinate a campaign of multi-staged attacks (e.g., hopping). Adversary moves the source of malicious commands or actions from one compromised information system to another, making analysis difficult.
   b. Coordinate a campaign that combines internal and external attacks across multiple information systems and information technologies. Adversary combines attacks that require both physical presence within organizational facilities and cyber methods to achieve success. Physical attack steps may be as simple as convincing maintenance personnel to leave doors or cabinets open.
   c. Coordinate campaigns across multiple organizations to acquire specific information or achieve desired outcome. Adversary does not limit planning to the targeting of one organization. Adversary observes multiple organizations to acquire necessary information on targets of interest.
   d. Coordinate a campaign that spreads attacks across organizational systems from existing presence. Adversary uses existing presence within organizational systems to extend the adversary's span of control to other organizational systems, including organizational infrastructure. Adversary thus is in position to further undermine organizational ability to carry out missions/business functions.
   e. Coordinate a campaign of continuous, adaptive, and changing cyber attacks based on detailed surveillance. Adversary attacks continually change in response to surveillance and organizational security measures.
   f. Coordinate cyber attacks using external (outsider), internal (insider), and supply chain (supplier) attack vectors. Adversary employs continuous, coordinated attacks, potentially using all three attack vectors for the purpose of impeding organizational operations.

NIST Special Publication 800-30 lists non-adversarial threat events as:

1. Spill sensitive information. Authorized user erroneously contaminates a device, information system, or network by placing on it or sending to it information of a classification/sensitivity which it has not been authorized to handle. The information is exposed to access by unauthorized individuals, and as a result, the device, system, or network is unavailable while the spill is investigated and mitigated.
2. Mishandling of critical and/or sensitive information by authorized users. Authorized privileged user inadvertently exposes critical/sensitive information.
3. Incorrect privilege settings. Authorized privileged user or administrator erroneously assigns a user exceptional privileges or sets privilege requirements on a resource too low.
4. Communications contention. Degraded communications performance due to contention.
5. Unreadable display. Display unreadable due to aging equipment.
6. Earthquake at primary facility. Earthquake of organization-defined magnitude at primary facility makes facility inoperable.
7. Fire at primary facility. Fire (not due to adversarial activity) at primary facility makes facility inoperable.
8. Fire at backup facility. Fire (not due to adversarial activity) at backup facility makes facility inoperable or destroys backups of software, configurations, data, and/or logs.
9. Flood at primary facility. Flood (not due to adversarial activity) at primary facility makes facility inoperable.
10. Flood at backup facility. Flood (not due to adversarial activity) at backup facility makes facility inoperable or destroys backups of software, configurations, data, and/or logs.
11. Hurricane at primary facility. Hurricane of organization-defined strength at primary facility makes facility inoperable.
12. Hurricane at backup facility. Hurricane of organization-defined strength at backup facility makes facility inoperable or destroys backups of software, configurations, data, and/or logs.
13. Resource depletion. Degraded processing performance due to resource depletion.
14. Introduction of vulnerabilities into software products. Due to inherent weaknesses in programming languages and software development environments, errors and vulnerabilities are introduced into commonly used software products.
15. Disk error. Corrupted storage due to a disk error.
16. Pervasive disk error. Multiple disk errors due to aging of a set of devices all acquired at the same time, from the same supplier.
17. Windstorm/tornado at primary facility. Windstorm/tornado of organization-defined strength at primary facility makes facility inoperable.
18. Windstorm/tornado at backup facility. Windstorm/tornado of organization-defined strength at backup facility makes facility inoperable or destroys backups of software, configurations, data, and/or logs.

Vulnerability Assessment

In the strictest sense, a vulnerability is a weakness in an information system, or in the procedures, controls, or implementation processes surrounding the system, that can be exploited by an intentional actor or compromised by non-adversarial error, natural events, or accident. Generally, information system vulnerabilities result from lapses in security controls; however, the exploitation of vulnerabilities has been increasingly enabled by rapidly emerging changes in technology or changes in organizational operations or mission. Successful exploitation of a vulnerability is a function of three inter-related elements: the susceptibility of the information system itself to attack; an available means of accessing the system's specific security control lapse or vulnerability; and the capability of an adversary to carry out the actions necessary to exploit the information system.

However, as NIST Special Publication 800-30 points out, "vulnerabilities are not identified only within information systems...vulnerabilities can be found in organizational governance structures (e.g., the lack of effective risk management strategies and adequate risk framing, poor intra-agency communications, inconsistent decisions about relative priorities of missions/business functions, or misalignment of enterprise architecture to support mission/business activities)...or in external relationships (e.g., dependencies on particular energy sources, supply chains, information technologies, and telecommunications providers), mission/business processes (e.g., poorly defined processes or processes that are not risk-aware), and enterprise/information security architectures (e.g., poor architectural decisions resulting in lack of diversity or resiliency in organizational information systems)."

Whether vulnerabilities are caused by flaws internal to information systems or, more broadly, by inadequate business practices or supply chain weaknesses, it is essential that transportation organizations understand the extent of their current and future reliance on information systems, the vulnerabilities of those systems, and how to mitigate the vulnerabilities associated with their utilization.

Common Vulnerabilities of Information Systems

The list of vulnerabilities for IT systems is far too voluminous and fluid to be included in this research; however, the information is readily available. The National Vulnerability Database (https://nvd.nist.gov) currently contains a listing of more than 71,429 CVEs (Common Vulnerabilities and Exposures). The NVD is the U.S. government repository of standards-based vulnerability management data. The CVE list is a dictionary of standardized identifiers for common computer vulnerabilities and exposures; it is free and publicly available. Information in the CVE list is organized by year, beginning with 1999, and is available for download in numerous formats, including CVRF, HTML, XML, and text.
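Because the NVD and the CVE list grow daily, organizations typically query them programmatically rather than browsing them. The sketch below illustrates such a lookup against the NVD's public REST interface; the endpoint, parameters, and response fields shown reflect the publicly documented NVD API at the time of writing and should be verified against current NVD documentation before use. The search keyword is only an example.

    # Hypothetical NVD keyword lookup; verify endpoint and fields against current NVD API docs.
    import requests

    NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

    def search_cves(keyword: str, limit: int = 5):
        """Return (CVE ID, first English description) pairs for a keyword search."""
        resp = requests.get(NVD_API,
                            params={"keywordSearch": keyword, "resultsPerPage": limit},
                            timeout=30)
        resp.raise_for_status()
        results = []
        for item in resp.json().get("vulnerabilities", []):
            cve = item["cve"]
            desc = next((d["value"] for d in cve.get("descriptions", [])
                         if d.get("lang") == "en"), "")
            results.append((cve["id"], desc))
        return results

    if __name__ == "__main__":
        # Example: look for published vulnerabilities mentioning SCADA products.
        for cve_id, description in search_cves("SCADA"):
            print(cve_id, "-", description[:100])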

operation of the program. A careful and successful memory overwrite can cause the program to begin execution of actual code submitted by the attacker. Most exploit code allows the attacker to create an interactive session and send commands with the privileges of the program with the buffer overflow. When network protocols have been implemented without validating the input values, these protocols can be vulnerable to buffer overflow attacks. Buffer overflows are the most common type of vulnerability identified in ICS products.

ii. Lack of Bounds Checking. The lack of input validation for values that are expected to be in a certain range, such as array index values, can cause unexpected behavior. For instance, unvalidated input, such as negative or too-large numbers, can be used for array access and cause essential services to crash. ICS applications frequently suffer from coding practices that allow attackers to supply unexpected data and thus modify program execution. Even though ICS applications pass valid data values during normal operation, a common vulnerability discovery approach is to alter or input unexpected values. Types of exploitation can include DoS caused by out-of-range index values, crashing an ICS communications service by altering an input value to a negative number, and crashing a proprietary fault-tolerant network equipment protocol.

iii. Command Injection. Command injection allows for the execution of arbitrary commands and code by the attacker. If a malicious user injects a character (such as a semicolon) that delimits the end of one command and the beginning of another, it may be possible to insert an entirely new and unrelated command that was not intended to be executed.

iv. SQL Injection. SQL command injection has become a common issue with database-driven websites. The flaw is easily detected and easily exploited, and as such, any site or software package with even a minimal user base is likely to be subject to an attempted attack of this kind. The flaw depends on the fact that SQL makes no real distinction between the control and data planes.

v. Cross-Site Scripting. Cross-site scripting vulnerabilities allow attackers to inject code into the web pages generated by the vulnerable web application. Attack code is executed on the client with the privileges of the web server. The root cause of a cross-site scripting (XSS) vulnerability is the same as that of SQL injection: poorly sanitized data. However, an XSS attack is unique in that the web application itself unwittingly sends the malicious code to the user. The most common attack performed with cross-site scripting involves the disclosure of information stored in user cookies. Because the site requesting to run the script has access to the cookies in question, the malicious script does also. Some cross-site scripting vulnerabilities can be exploited to manipulate or steal cookies, create requests that can be mistaken for those of a valid user, compromise confidential information, or execute malicious code on end-user systems.

vi. Improper Limitation of a Pathname to a Restricted Directory (Path
Traversal). Directory traversal vulnerabilities occur when file paths are not validated. Directory traversals occur when the software uses external input to construct a pathname that is intended to identify a file or directory located underneath a restricted parent directory, but the software does not properly neutralize special elements within the pathname that can cause it to resolve to a location outside of the restricted directory. The attacker may be able to read, overwrite, or create critical files such as programs, libraries, or important data. This may allow an attacker to execute unauthorized code or commands; read or modify files or directories; or crash, exit, or restart critical programs, potentially causing a DoS.

b. Poor Code Quality. Poor code quality refers to code issues that are not necessarily vulnerabilities, but indicate that the code was not carefully developed or maintained. Such products are more likely to contain vulnerabilities than those developed using secure development concepts and other good programming practices.

i. Use of Potentially Dangerous Functions. Otherwise known as unsafe function calls, the application calls a potentially dangerous function that could introduce a vulnerability if used incorrectly.

ii. NULL Pointer Dereference. A NULL pointer dereference occurs when the application dereferences a pointer that it expects to be valid but is NULL, typically causing a crash or exit. NULL pointer dereference issues can occur through a number of flaws, including race conditions and simple programming omissions.

c. Permissions, Privileges, and Access Controls. Permissions, privileges, and other security features are used to perform access controls on computer systems. Missing or weak access controls can be exploited by attackers to gain unauthorized access to ICS functions.

i. Improper Access Control (Authorization). If ICS software does not perform access control checks, or performs them incorrectly, across all potential execution paths, users are able to access data or perform actions that they should not be allowed to perform. Specific security control lapses include: 1) access is not restricted to the objects that require it; 2) the ICS protocol allows ICS hosts to read or overwrite files on other hosts, without any logging; 3) documentation and configuration information is shared freely (read only); 4) common shares are available on multiple systems; 5) lack of role-based authentication for ICS component communication; 6) a remote user can upload a file to any location on the targeted computer; 7) arbitrary file download is allowed on ICS hosts; 8) arbitrary file upload is allowed on ICS hosts; 9) a remote client is allowed to launch any process; 10) an ICS service allows anonymous access; and 11) undisclosed "back door" administrative accounts.

ii. Execution with Unnecessary Privileges. Services are restricted to the user rights granted through the user account associated with them. Exploitation of any service could allow an attacker a foothold on the ICS network with the exploited service's permissions. Privilege escalation can be accomplished by
exploiting a vulnerable service running with more privileges than the attacker has currently obtained.

d. Improper Authentication. Many vulnerabilities identified in ICS products are due to the ICS software failing to sufficiently verify a claim of identity.

i. Authentication Bypass Issues. The software does not properly perform authentication, allowing it to be bypassed through various methods. Web services developed for the ICS tend to be vulnerable to attacks that can exploit the ICS web server to gain unauthorized access. System architectures often use network DMZs to protect critical systems and to limit exposure of network components. Vulnerabilities in ICS DMZ web servers may provide the first step in the attack path by allowing access within the ICS exterior boundary. Vulnerabilities in the web servers of lower-level components can provide further steps in the attack path.

ii. Missing Authentication for Critical Function. The software does not perform any authentication for functionality that requires a provable user identity or that consumes a significant amount of resources. Many critical ICS functions do not require authentication. Exposing critical functionality essentially provides an attacker with the privilege level of that functionality. The consequences depend on the associated functionality, but they can range from reading or modifying sensitive data to accessing administrative or other privileged functionality or executing arbitrary code.

iii. Client-Side Enforcement of Server-Side Security. Applications that authenticate users locally trust the client that is connecting to a server to perform the authentication. Because the information needed to authenticate is stored on the client side, a moderately skilled hacker may easily extract that information or modify the client so that it does not require authentication. Attackers can bypass the client-side checks by modifying values after the checks have been performed, or by changing the client to remove the client-side checks entirely, and then submitting the modified values to the server.

iv. Channel Accessible by Non-endpoint (Man-in-the-Middle). Commands from the HMI cause actions in the ICS. Alarms are sent to the HMI to notify operators of triggered events. The integrity and timely delivery of alarms and commands are critical in an ICS. MitM attacks are possible if the ICS does not adequately verify the identity of actors at both ends of a communication channel, or does not adequately ensure the integrity of the channel, in a way that allows the channel to be accessed or influenced by an actor that is not an endpoint. Inadequate or inconsistent verification may result in insufficient or incorrect identification of either communicating entity. This can have negative consequences such as misplaced trust in the entity at the other end of the channel. An attacker can leverage this by interposing between the communicating entities and masquerading as the original entity. In the absence of sufficient verification of identity, such an attacker can eavesdrop on and potentially modify the communication between the original entities. Weak authentication in ICS protocols allows replay or spoofing attacks to send unauthorized messages, and the possibility of messages that falsely update the HMI or remote terminal unit must be
considered. The attacker may be able to cause invalid data to be displayed on a console or to create invalid commands or alarm messages. Clear-text authentication credentials can be sniffed and used by an attacker to authenticate to the system.

e. Insufficient Verification of Data Authenticity. If ICS protocols and software do not sufficiently verify the origin or authenticity of data, they may accept invalid data. This is a serious risk for systems that rely on data integrity.

i. Cross-Site Request Forgery. When a web server is designed to receive a request from a client without any mechanism for verifying that it was intentionally sent, it might be possible for an attacker to trick a client into making an unintentional request to the web server that will be treated as an authentic request. If the web interface offers a way to change ICS settings, hijacking credentials using cross-site request forgery (CSRF) could give an attacker the ability to perform any task that a legitimate user could perform through the web interface.

ii. Missing Support for Integrity Check. Many ICS transmission protocols do not include a mechanism for verifying the integrity of data during transmission. If integrity check values or "checksums" are omitted from a protocol, there is no way of determining whether data have been corrupted in transmission. The lack of checksum functionality in a protocol removes the first application-level check of data that can be used. The end-to-end philosophy of checks states that integrity checks should be performed at the lowest level at which they can be completely implemented. Excluding further sanity checks and input validation performed by applications, the protocol's checksum is the most important level of checksum, because it can be performed more completely than at any previous level and takes into account entire messages, as opposed to single packets.

iii. Download of Code without Integrity Check. If an ICS component downloads source code or an executable from the network and executes the code without sufficiently verifying its origin and integrity, an attacker may be able to execute malicious code by compromising the host server, spoofing an authorized server, or modifying the code in transit.

f. Cryptographic Issues

i. Missing Encryption of Sensitive Data. Credentials sent across the network in clear text leave the system at risk of the unauthorized use of a legitimate user's credentials. If attackers are able to capture usernames and passwords, they will be able to log onto the system with that user's privileges. Any unencrypted information concerning the ICS source code, topology, or devices is a potential benefit to an attacker and should be limited. One of the greatest security issues identified in conjunction with ICSs is the widespread use of unencrypted, plain-text network communications protocols. Many applications and services use protocols that include human-readable characters and strings. Network sniffing tools, many of
which are freely downloadable, can be used to view this type of network traffic. As a result, the content of ICS communication packets can be intercepted, read, and manipulated. Vulnerable data in this scenario include usernames, passwords, and ICS commands.

g. Credentials Management

i. Insufficiently Protected Credentials. Credentials sent across the network in clear text leave the system at risk of the unauthorized use of a legitimate user's credentials. Network sniffing tools, many of which are freely downloadable, can be used to view this type of network traffic. If attackers are able to capture usernames and passwords, they will be able to log onto the system with that user's privileges. Unsecure services developed for IT systems have been adopted in ICS for common IT functionality. Although more secure alternatives exist for most of these services, some ICSs have these services integrated into their applications.

ii. Use of Hard-Coded Credentials. Hard-coded credentials have been found in ICS code and configuration scripts for authentication between ICS components. In such cases, authentication may not be required to read the system configuration file, which contains user account details, including passwords.

h. ICS Software Security Configuration and Maintenance (Development)

i. Poor Patch Management During ICS Software Development. Vulnerabilities in ICS can occur because of flaws, misconfigurations, or poor maintenance of their platforms, including hardware, operating systems, and ICS applications. These vulnerabilities can be mitigated through various security controls, such as operating system and application patching, physical access control, and security software (e.g., antivirus software). A computer system is vulnerable to attack from the time a vulnerability is discovered and publicly disclosed to when a patch is generated, disseminated, and finally applied. The number of publicly announced vulnerabilities has been steadily increasing over the past decade, to the point where patch management is a necessary part of maintaining a computer system. Although patching may be difficult in high-availability environments, unpatched systems are often trivial to exploit because of the ease of recognizing product versions and the ready availability of exploit code.

ii. Unpatched or Old Versions of Third-Party Applications Incorporated into ICS Software. These applications possess vulnerabilities that may provide an attack path into the system. The software is well known, and available exploit code makes it an easy target.

iii. Improper Security Configuration. Many weaknesses identified in ICS software result from available security options not being used or enabled.

2. Vulnerabilities Caused During Installation/Configuration/Maintenance of ICS

a. Permissions, Privileges, and Access Controls

i. Poor System Access Controls. Within access controls, the following common vulnerabilities have been identified: 1) lack of separation of duties
through assigned access authorization, 2) lack of lockout enforcement for failed login attempts, and 3) failure to terminate remote access sessions after a defined time period.

ii. Open Network Shares on ICS Hosts. The storage of ICS artifacts, such as source code and system configuration, on a shared file system provides significant potential for information mining by an attacker. The design of many ICS requires open network shares on ICS hosts.

b. Improper Authentication

i. Poor System Identification/Authentication Controls. Lack of developed policies or procedures to facilitate the implementation of identification and authentication controls, and absence of unique identification and authentication for users and specific devices before establishing connections.

c. Credentials Management

i. Insufficiently Protected Credentials. User credentials should be vigorously protected and made inaccessible to an attacker. Whenever credentials are passed in clear text, they are susceptible to being captured and then cracked, if necessary, by the attacker. If stored password hashes are not properly protected, they may be accessed by an attacker and cracked. In every case, the lack of protection of user credentials may lead to the attacker gaining increased privileges on the ICS and thus being able to more effectively advance the attack.

ii. Weak Passwords. ICSs have been configured without passwords, which means that anyone able to access these applications is guaranteed to be able to authenticate and interact with them.

d. ICS Security Configuration and Maintenance

i. Weak Testing Environments. Patch management is paramount to maintaining the integrity of both IT and ICS. Unpatched software represents one of the greatest vulnerabilities to a system. Software updates on IT systems, including security patches, are typically applied in a timely fashion based on appropriate security policy and procedures, and these procedures are often automated using server-based tools. Software updates on ICS cannot always be implemented on a timely basis, because updates need to be thoroughly tested by the vendor of the industrial control application and by the end user of the application before being implemented, and ICS outages often must be planned and scheduled days or weeks in advance. The ICS may also require revalidation as part of the update process. Another issue is that many ICS use older versions of operating systems that are no longer supported by the vendor, so available patches may not be applicable. Change management also applies to hardware and firmware. The change management process, when applied to ICS, requires careful assessment by ICS experts (e.g., control engineers) working in conjunction with security and IT personnel. Vulnerabilities that have had patches available for a long time are still being found on ICS. Unpatched operating systems leave ICS open to attack through known operating system service vulnerabilities.

ii. Limited Patch Management Abilities. Many ICS facilities, especially
smaller facilities, have no test facilities, so security changes must be implemented on the live operational systems.

iii. Weak Backup and Restore Abilities. Backups, restores, and testing environments have been identified as a common industry-wide issue for continuity of operations in the event of an incident. Backups are usually made, but they are often not stored offsite and are rarely exercised or tested.

e. Planning/Policy/Procedures

i. Insufficient Security Documentation. A common security gap is the failure of an organization to establish a formal business case for ICS security or to develop, implement, disseminate, and periodically review/update policy and procedures that facilitate implementation of security planning controls.

f. Audit and Accountability

i. Lack of Security Audits/Assessments. Security audits should be performed regularly to determine the adequacy of the security controls within an organization's systems.

ii. Lack of Logging or Poor Logging Practices. Event logging (applications, events, login activities, security attributes, etc.) is not turned on or is not monitored for identification of security issues. Where logs and other security sensors are installed, they may not be monitored on a real-time basis, and therefore security incidents may not be rapidly detected and countered.

3. Vulnerabilities Caused by Lack of Adequate Protection Because of Poor Network Design or Configuration

a. Common ICS Network Design Weaknesses. The network infrastructure environment within the ICS has often been developed and modified based on business and operational requirements, with little consideration for the potential security impacts of the changes. Over time, security gaps may have been inadvertently introduced within particular portions of the infrastructure. Without remediation, these gaps may represent backdoors into the ICS.

i. No Security Perimeter Defined. If the control network does not have a clearly defined security perimeter, then it is not possible to ensure that the necessary security controls are deployed and configured properly. This can lead to unauthorized access to systems and data, as well as other problems.

ii. Lack of Network Segmentation. Minimal or no security zones allow vulnerabilities and exploitations to yield immediate full control of the systems, which could cause high-level consequences.

iii. Lack of Functional DMZs. The use of several DMZs provides the added capability to separate functionalities and access privileges, and it has proved very effective in protecting large architectures composed of networks with different operational mandates.

iv. Firewalls Nonexistent or Improperly Configured. A lack of properly configured firewalls could permit unnecessary data to pass between networks, such as control and corporate networks. This could cause several problems, including allowing attacks and malware to spread
between networks, making sensitive data susceptible to monitoring/eavesdropping on the other network, and providing individuals with unauthorized access to systems.

v. Firewall Bypassed. Backdoor network access is not recommended, because it could give attackers direct access to the ICS to exploit and take full control of the system. All connections to the ICS LAN should be routed through the firewall; no hardwired connections should circumvent it.

vi. Weak Firewall Rules. Firewall rules are the implementation of the network design. Enforcement of network access permissions and of allowed message types and content is executed by firewall rules, which determine which network packets are allowed in and out of a network. Packets can be filtered based on IP address, port number, direction, and content. The protection provided by a firewall depends on the rules it is configured to use. Firewall and router filtering deficiencies allow access to ICS components through external and internal networks. The lack of incoming access restrictions creates access paths into critical networks, and the lack of outgoing access restrictions allows access from internal components that may have been compromised. For an attacker to remotely control exploit code running on a user's computer, a return connection must be established from the victim network; if outbound filtering is implemented correctly, the attacker will not receive this return connection and cannot control the exploited machine. Firewall rules should restrict traffic flow as much as possible, and connections should normally not be initiated from less-trusted networks.

vii. Access to Specific Ports on Host Not Restricted to Required IP Addresses. This common vulnerability involves firewall rules that restrict access to specific ports but not to IP addresses. Network device access control lists should restrict access to the required IP addresses. Allowing access to unused IP addresses, often traceable to legacy firewall configuration, opens an attack path: an attacker can use such an IP address to be allowed through the firewall.

viii. Firewall Rules Are Not Tailored to ICS Traffic. ICS network administrators should restrict communications to only those necessary for system functionality. System traffic should be monitored, and rules should be developed that allow only necessary access. Any exceptions created in the firewall rule set should be as specific as possible, including host, protocol, and port information.

b. ICS Network Component Configuration (Implementation) Vulnerabilities

i. Network Devices Not Securely Configured. Network device access control lists should restrict access to the required IP addresses. Network devices configured to allow remote management over clear-text authentication protocols can result in an attacker gaining control by changing the network device configurations.

ii. Port Security Not Implemented on Network Equipment. Unauthorized
network access through physical access to network equipment results from the lack of physical access control to the equipment, including the lack of security configuration functions that limit functionality even if physical access is obtained. A malicious user who has physical access to an unsecured port on a network switch could plug into the network behind the firewall to defeat its incoming filtering protection.

c. Audit and Accountability

i. Network Architecture Not Well Understood. The current network diagram does not match the current state of the ICS network.

ii. Weak Enforcement of Remote Login Policies. Any connection into the ICS LAN is considered part of the perimeter. Often these perimeters are not well documented, and some connections are neglected.

iii. Weak Control of Incoming and Outgoing Media. Media protections for ICS lack written and approved policies and procedures, lack control of incoming and outgoing media, and lack verification scans of all media allowed into the ICS environment.

iv. Lack of or Poor Monitoring of IDSs. Intrusion detection deployments apply different rule sets and signatures unique to each domain being monitored.

Consequence or Impact Assessment

Consequence analysis is an assessment of the perceived impact of an adverse event or series of events on critical infrastructure or processes. With regard to information systems, the level of impact is attributable to the magnitude of harm that can be expected to result from the consequences of unauthorized disclosure of information, unauthorized modification of information, unauthorized destruction of information, or loss of information or information system availability. Unfortunately, the transportation environment also involves a potential for loss of life or serious injury resulting from the adverse effects of compromised, agency-controlled or agency-operated SCADA or ICS systems. Indeed, transportation system operators are faced with a "duty of care" for system users that extends beyond the typical cyber breach.

The APTA Leadership Class 2013 undertook a project to examine issues associated with cybersecurity in transit. With regard to impact, the authors described the extent of the concerns as follows:

Politically motivated attacks against a transit agency can generally be expected to have an impact anywhere along a spectrum of casualty, depending on the motivation for the attack, from minor disruption to complete destruction. The worst case scenario is, of course, a politically motivated attack intended to terrorize and that disables or destroys a transit agency's systems in such a way that there is loss of life and injury to employees, passengers and the general public. In a classic case of 'insult to injury', on top of the loss of human life that cannot be replaced and physical assets that must be rebuilt, the transit agency and its surrounding community are likely to suffer long-term psychological and economic damage as a direct result. Other
political cyber attacks may result in disruption of major systems without loss of life, but with consequent financial damage, or in disruption of minor systems that serve mainly to annoy or cause public relations damage. The political attacks against transportation systems described in this report resulted in defaced web sites, compromised user credentials and some disruption to operations. One attack, whose motivation is not known, did have the potential to result in loss of life and destruction of major infrastructure.

Financially motivated attacks can result in a transit agency losing cash resources, but perhaps more likely, a particular kind of data is the asset sought by the criminal hacker - data that is marketable as an asset on the black market. This data, commonly referred to as personally identifiable information or PII, belongs to the transit agency's employees and customers, not the transit agency itself. The damage resulting from this sort of breach can include liability for violation of federal and state confidentiality laws, civil suits for identity theft resulting from a failure to reasonably safeguard PII, and a loss of confidence in the transit agency on the part of its customers resulting in their refusal to utilize the very types of technologies that transit agencies increasingly depend upon for operational efficiencies, such as electronic ticketing, automatic renewal of passes and social media tools.

Every transit executive should be aware whether his or her agency's assets can be destroyed or disabled if its IT systems are subject to a cyber-based terror attack and should be kept informed of the agency's planned response to any such attack. Additionally, a transit agency executive should know whether his or her agency obtains and keeps the type of data that criminals have stolen from other state and local government entities and how the agency ensures that such data is kept secure from a cyber breach.

Traditional consequence analysis begins with the delineation of the full complement of organizational assets into those that are considered critical to business operations. In the case of information system critical infrastructure, this has spawned the designation "CIIP" (Critical Information Infrastructure Protection) as a subset of the more widely known concept of Critical Infrastructure Protection (CIP) (Peter Burnett, Meridian Coordinator; CiviPol Consultant, Quarter House Ltd). Irrespective of what it is called, critical asset identification is related to the protection of the energy, telecommunications, water supply, transport, finance, health and other infrastructures that allow a society to function. "These critical infrastructures need to be protected against accidental and deliberate events that would stop them operating correctly and would severely impact the economic and social well-being." (Burnett)

Unfortunately, at present there is no fully developed listing of foundational cyber critical assets for surface transportation organizations. The Volpe Center, in collaboration with DHS, is currently working on such a designation; however, the effort remains a work in progress. APTA
Cybersecurity Considerations for Public Transit does provide a very useful grouping of critical assets in transit into three main categories. The transit IT "ecosystem" and definitions for each of the categories follow:

Figure 4: Transportation Information Ecosystem. From APTA Cybersecurity Considerations for Public Transit

Operational systems: These systems integrate supervisory control and data acquisition (SCADA), original equipment manufacturer (OEM) and other critical component technologies responsible for the control, movement and monitoring of transportation equipment and services (i.e., train, track and signal control). Often such systems are interrelated into multimodal systems such as buses, ferries and metro modes.

Enterprise information systems: This category describes the transit agency's information system, which consists of integrated layers of the operating system, applications system and business system. Holistically, enterprise information systems encompass the entire range of internal and external information exchange and management.

Subscribed systems: These consist of "managed" systems outside the transportation agency. Such systems may include Internet service providers (ISPs), hosted networks, the agency website, data storage, cloud services, etc.

Examples include: control systems that support operational systems, such as SCADA, traction power control, emergency ventilation control, alarms and indications, fire/intrusion detection systems, train control/signaling, fare collection, automatic vehicle location (AVL), physical security feeds (CCTV, access control), public information systems, public address systems, and radio/wireless/related communication; networks for traffic management, yard management, crew management, vehicle management, vehicle maintenance, positive train control, traffic control, remote railway switch control, main line work orders, wayside maintenance, on-track maintenance, intermodal operations, threat management, and passenger services; and business management systems that support administrative processes, including transaction processing systems, management information systems, decision support, executive support, financial pay systems, HR, training, and knowledge management.

Figure 5: Transportation Enterprise Information Systems. From APTA Cybersecurity Considerations for Public Transit

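As an illustration of how the three APTA categories above might serve as the starting point of a consequence assessment, the following is a minimal sketch of an asset inventory tagged by category and by an assumed criticality rating. The data structure, example asset names, and the criticality scale are illustrative assumptions only; they are not an APTA-defined schema.

    # Illustrative sketch only: a minimal asset inventory keyed to the three APTA categories.
    from dataclasses import dataclass

    CATEGORIES = ("operational", "enterprise", "subscribed")

    @dataclass
    class Asset:
        name: str
        category: str     # one of CATEGORIES
        criticality: str  # e.g., "high", "moderate", "low" (assumed scale)

        def __post_init__(self):
            if self.category not in CATEGORIES:
                raise ValueError(f"unknown category: {self.category}")

    inventory = [
        Asset("Train control/signaling SCADA", "operational", "high"),
        Asset("Fare collection back office", "enterprise", "moderate"),
        Asset("Agency website (hosted)", "subscribed", "moderate"),
    ]

    # Group assets by category so each can be reviewed against the harms listed below.
    by_category = {c: [a.name for a in inventory if a.category == c] for c in CATEGORIES}
    print(by_category)
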
NIST Special Publication 800-30 guidelines recommend identifying information system critical assets based on an assessment of perceived or potential:

• Harm to Operations
  o Inability to perform current missions/business functions
    - In a sufficiently timely manner
    - With sufficient confidence and/or correctness
    - Within planned resource constraints
  o Inability, or limited ability, to perform missions/business functions in the future
  o Inability to restore missions/business functions
    - In a sufficiently timely manner
    - With sufficient confidence and/or correctness
    - Within planned resource constraints
  o Harms (e.g., financial costs, sanctions) due to noncompliance
    - With applicable laws or regulations
    - With contractual requirements or other requirements in other binding agreements (e.g., liability)
    - Direct financial costs
  o Relational harms
    - Damage to trust relationships
    - Damage to image or reputation (and hence future or potential trust relationships)
• Harm to Assets
  o Damage to or loss of physical facilities
  o Damage to or loss of information systems or networks
  o Damage to or loss of information technology or equipment
  o Damage to or loss of component parts or supplies
  o Damage to or loss of information assets
  o Loss of intellectual property
• Harm to Individuals
  o Injury or loss of life
  o Physical or psychological mistreatment
  o Identity theft
  o Loss of Personally Identifiable Information
  o Damage to image or reputation
• Harm to Other Organizations
  o Harms (e.g., financial costs, sanctions) due to noncompliance
    - With applicable laws or regulations
    - With contractual requirements or other requirements in other binding agreements
  o Direct financial costs
  o Relational harms
    - Damage to trust relationships
    - Damage to reputation (and hence future or potential trust relationships)
• Harm to the Nation
  o Damage to or incapacitation of a critical infrastructure sector
  o Loss of government continuity of operations
  o Relational harms
    - Damage to trust relationships with other governments or with nongovernmental entities
    - Damage to national reputation (and hence future or potential trust relationships)
    - Damage to current or future ability to achieve national objectives
    - Harm to national security

Finally, NERC CIP-002-3 provides a classification approach that designates assets based on the criticality of information compromise - public, restricted, confidential, or personally identifying information - suggesting that the level of security protection and controls can be managed by an assignment commensurate with the risk of release.

Public - This information is in the public domain and does not require any special protection. For instance, the address and phone number of the headquarters of your electric cooperative is likely to be public information.

Restricted - This information is generally restricted to all or only some employees in your organization, and its release has the potential of having negative consequences on your organization's business mission or security posture. Examples of this information may include:
• Operational procedures
• Network topology or similar diagrams
• Equipment layouts of critical cyber assets
• Floor plans of computing centers that contain critical cyber assets

Confidential - Disclosure of this information carries a strong possibility of undermining your organization's business mission or security posture. Examples of this information may include:
• Security configuration information
• Authentication and authorization information
• Private encryption keys
• Disaster recovery plans
• Incident response plans

Personally Identifying Information (PII) - PII is a subset of confidential information that uniquely identifies the private information of a person. This information may include a combination of the person's name and social security number, the person's name and credit card number, and so on. PII can identify or locate a living person, and such data has the potential to harm the person if it is lost or inappropriately disclosed. It is essential to safeguard PII against loss, unauthorized destruction, or unauthorized access.

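To illustrate the idea that protection can be assigned commensurate with the risk of release, the sketch below maps the four classifications above to example handling controls. The specific controls shown are illustrative assumptions; they are not prescribed by NERC CIP-002-3 or by this primer.

    # Illustrative sketch only: example handling controls per information classification.
    HANDLING = {
        "public":       {"encrypt_at_rest": False, "encrypt_in_transit": False, "access": "anyone"},
        "restricted":   {"encrypt_at_rest": False, "encrypt_in_transit": True,  "access": "employees"},
        "confidential": {"encrypt_at_rest": True,  "encrypt_in_transit": True,  "access": "need-to-know"},
        "pii":          {"encrypt_at_rest": True,  "encrypt_in_transit": True,  "access": "need-to-know"},
    }

    def required_controls(classification: str) -> dict:
        """Look up the handling controls for an asset's information classification."""
        return HANDLING[classification.lower()]

    print(required_controls("Confidential"))
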
Cybersecurity Challenges: Protecting Your Transportation Management Center (Fok, ITE Journal, February 2015) posed the following questions: What would happen if the United States could not…

1. Safely operate the transportation infrastructure for all modes?
2. Efficiently operate the systems to facilitate movement of people, goods, and services?
3. Communicate with the public for the public's interest and safety?

These three questions represent the ultimate risk question for today's surface transportation organizations. The purposeful inclusion of information technology assets in the already extensive list of what must be protected becomes a vital aspect of ensuring that the nation's transportation infrastructure can accomplish its mission and objectives.
