Appendix F Glossary
Access
A subject's right to use an object. Examples include read and write access for data objects, execute access for programs, or create and delete access for directory objects.
Access control
The granting or denying to a subject (principal) of certain permissions to access an object, usually done according to a particular security model.
Access control list
A list of the subjects that are permitted to access an object, and the access rights of each subject.
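As an illustrative sketch (not from this report), an access control list for a single object can be modeled as a mapping from subjects to their sets of access rights; the subject names and rights below are hypothetical.

```python
# Sketch of an access control list (ACL): for one object, a mapping
# from each subject to the set of access rights that subject holds.
# Subject names and rights are hypothetical examples.
acl = {
    "alice": {"read", "write"},
    "bob": {"read"},
}

def check_access(acl, subject, right):
    """Grant the request only if the subject appears on the list
    with the requested right; deny by default."""
    return right in acl.get(subject, set())

print(check_access(acl, "alice", "write"))  # True
print(check_access(acl, "bob", "write"))    # False
print(check_access(acl, "carol", "read"))   # False: not on the list
```

The deny-by-default behavior (an absent subject gets no rights) reflects the usual reading of an ACL: only listed subjects are permitted access.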
Access level
A level associated with a subject (e.g., a clearance level) or with an object (e.g., a classification level).
Accountability
The concept that individual subjects can be held responsible for actions that occur within a system.
Accreditation
1. The administrative act of approving a computer system for use in a particular application. See Certification. 2. The act of approving an organization as, for example, an evaluation facility.
Administratively directed access control (ADAC)
Access control in which administrators control who can access which objects. Contrast with user-directed access control (UDAC). See Mandatory access control.
Assurance
Confidence that a system design meets its requirements, or that its implementation meets its specification, or that some specific property is satisfied.
Auditing
The process of making and keeping the records necessary to support accountability. See Audit trail analysis.
Audit trail
The results of monitoring each operation of subjects on objects; for example, an audit trail might be a record of all actions taken on a particularly sensitive file.
Audit trail analysis
Examination of an audit trail, either manually or automatically, possibly in real time (Lunt, 1988).
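A minimal sketch of the ideas above: an audit trail as an append-only list of records, and a trivial analysis that extracts all actions taken on one sensitive file. The record fields and values are hypothetical.

```python
# Sketch: an audit trail as an append-only list of records of subjects'
# operations on objects, and a trivial analysis pass over it.
audit_trail = [
    {"subject": "alice", "op": "read",  "object": "payroll.db"},
    {"subject": "bob",   "op": "write", "object": "readme.txt"},
    {"subject": "bob",   "op": "read",  "object": "payroll.db"},
]

def actions_on(trail, obj):
    """Return every recorded operation on the given object."""
    return [r for r in trail if r["object"] == obj]

for record in actions_on(audit_trail, "payroll.db"):
    print(record["subject"], record["op"])
```

Real analysis tools operate on much richer records (timestamps, outcomes, terminals) and may run in real time, as noted in the definition.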
Authentication
Providing assurance regarding the identity of a subject or object, for example, ensuring that a particular user is who he claims to be.
Authenticator
A sequence used to authenticate the identity of a subject or object.
Authorization
Determining whether a subject (a user or system) is trusted to act for a given purpose, for example, allowed to read a particular file.
Availability
The property that a given resource will be usable during a given time period.
Bell and La Padula model
An information-flow security model couched in terms of subjects and objects and based on the concept that information shall not flow to an object of lesser or noncomparable classification (Bell and La Padula, 1976).
Beta test
Use of a product by selected users before formal release.
Biba model
An integrity model in which no subject may depend on a less trusted object (including another subject) (Biba, 1975).
Capability
An authenticating entity acceptable as evidence of the right to perform some operation on some object.
Certification
The administrative act of approving a computer system for use in a particular application. See Accreditation.
CESG
The Communications-Electronics Security Group of the U.K. Government Communications Headquarters (GCHQ).
Challenge-response
An authentication procedure that requires calculating a correct response to an unpredictable challenge.
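A hedged sketch of such an exchange: the verifier issues an unpredictable challenge, and the claimant proves knowledge of a shared secret by returning a keyed hash of the challenge. HMAC-SHA-256 stands in for the "calculation" here; it is an illustrative choice, not one prescribed by the definition.

```python
import hmac, hashlib, secrets

shared_secret = b"example shared secret"      # known to both parties

challenge = secrets.token_bytes(16)           # unpredictable challenge

def respond(secret, challenge):
    """Compute the response: a keyed hash of the challenge."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

response = respond(shared_secret, challenge)  # computed by the claimant
expected = respond(shared_secret, challenge)  # recomputed by the verifier

print(hmac.compare_digest(response, expected))                         # True
print(hmac.compare_digest(respond(b"wrong guess", challenge), expected))  # False
```

Because the challenge is fresh each time, a recorded response cannot simply be replayed later, unlike a static password.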
Checksum
Digits or bits summed according to arbitrary rules and used to verify the integrity of data.
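One of many possible "arbitrary rules" is to sum the byte values modulo 256 and store the result alongside the data; the sketch below is illustrative only.

```python
# Sketch: a simple additive checksum over the bytes of a message.
def checksum(data: bytes) -> int:
    return sum(data) % 256

message = b"transfer 100"
stored = checksum(message)

# Later, verify integrity by recomputing and comparing.
print(checksum(b"transfer 100") == stored)  # True: data unchanged
print(checksum(b"transfer 900") == stored)  # False: data was altered
```

Such a simple sum detects many accidental changes but not all (e.g., swapping two bytes leaves the sum unchanged), which is why stronger functions are used when deliberate tampering is a concern.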
Ciphertext
The result of transforming plaintext with an encryption algorithm. Also known as cryptotext.
Claims language
In the ITSEC, the language that describes the desired security features of a "target of evaluation" (a product or system), and against which the product or system can be evaluated.
Clark-Wilson integrity model
An approach to providing data integrity for common commercial activities, including software engineering concepts of abstract data types, separation of privilege, allocation of least privilege, and nondiscretionary access control (Clark and Wilson, 1987).
Classification
The security level of an object. See Sensitivity label.
Cleanroom
A software development process designed to reduce errors and increase productivity (Poore and Mills, 1989).
Clear text
Unencrypted text. Also known as plaintext. Contrast with ciphertext, cryptotext.
Clearance
The security level of a subject.
CLEF
In the ITSEC, a Commercial Licensed Evaluation Facility.
CoCom
Coordinating Committee for Multilateral Export Controls, which began operations in 1950 to control export of strategic materials and technology to communist countries; participants include Australia, Belgium, Canada, Denmark, France, Germany, Greece, Italy, Japan, Luxembourg, the Netherlands, Norway, Portugal, Spain, Turkey, the United Kingdom, and the United States.
Confidentiality
Ensuring that data is disclosed only to authorized subjects.
Correctness
1. The property of being consistent with a correctness criterion, such as a program being correct with respect to its system specification, or a specification being consistent with its requirements. 2. In ITSEC, a component of assurance (together with effectiveness).
Countermeasure
A mechanism that reduces a system's vulnerability to a threat.
Covert channel
A communications channel that allows two cooperating processes to transfer information in a manner that violates a security policy, but without violating the access controls.
Criteria
Definitions of properties and constraints to be met by system functionality and assurance. See TCSEC, ITSEC.
Criticality
The condition in which nonsatisfaction of a critical requirement can result in serious consequences, such as damage to national security or loss of life. A system is critical if any of its requirements are critical.
An input to an encryption device that results in cryptotext.
Data
A sequence of symbols to which meaning may be assigned. Uninterpreted information. Data can be interpreted as representing numerical bits, literal characters, programs, and so on. (The term is used often throughout this report as a collective, singular noun.) See Information.
Data Encryption Standard (DES)
A popular secret-key encryption algorithm originally released in 1977 by the National Bureau of Standards.
Delegate
To authorize one subject to exercise some of the authority of another.
Denial of service
Reducing the availability of an object below the level needed to support critical processing or communication, as can happen, for example, in a system crash.
Dependability
The facet of reliability that relates to the degree of certainty that a system will operate correctly.
Dependency
The existence of a relationship in which the subject may not work properly unless the object (possibly another subject) behaves properly. One system may depend on another system.
Digital signature
Data that can be generated only by an agent that knows some secret, and hence is evidence that such an agent must have generated it.
Discretionary access control (DAC)
An access-control mechanism that permits subjects to specify the access controls, subject to constraints such as changes permitted to the owner of an object. (DAC is usually equivalent to IBAC and UDAC, although hybrid DAC policies might be IBAC and ADAC.)
DTI
Department of Trade and Industry, U.K.
Dual-use system
A system with both military and civilian applications.
Effectiveness
1. The extent to which a system satisfies its criteria. 2. In ITSEC, a component of assurance (together with correctness).
Emanation
A signal emitted by a system that is not explicitly allowed by its specification.
Evaluation
1. The process of examining a computer product or system with respect to certain criteria. 2. The results of that process.
Feature
1. An advantage attributed to a system. 2. A euphemism for a fundamental flaw that cannot or will not be fixed.
Firmware
The programmable information used to control the low-level operations of hardware. Firmware is commonly stored in Read-Only Memories (ROMs), which are initially installed in the factory and may be replaced in the field to fix mistakes or to improve system capabilities.
Formal
Having a rigorous respect for form, that is, a mathematical or logical basis.
FTLS
Formal top-level specification. (See "Security Characteristics" in Chapter 5.)
Functionality
As distinct from assurance, the functional behavior of a system. Functionality requirements include, for example, confidentiality, integrity, availability, authentication, and safety.
Gateway
A system connected to different computer networks that mediates transfer of information between them.
GCHQ
Government Communications Headquarters, U.K.
Group
A set of subjects.
Identity-based access control (IBAC)
An access control mechanism based only on the identity of the subject and object. Contrast with rule-based access control. See Discretionary access control.
Implementation
The mechanism that (supposedly) realizes a specified design.
Information
Data to which meaning is assigned, according to context and assumed conventions.
Information flow control
Access control based on restricting the flow of information into an object. See, for example, Bell and La Padula model.
INFOSEC
Information security. See also COMPUSEC and COMSEC.
Integrity
The property that an object is changed only in a specified and authorized manner. Data integrity, program integrity, system integrity, and network integrity are all relevant to consideration of computer and system security.
Integrity level
A level of trustworthiness associated with a subject or object.
ITAR
International Traffic in Arms Regulations (Office of the Federal Register, 1990).
ITSEC
The Information Technology Security Evaluation Criteria, the harmonized criteria of France, Germany, the Netherlands, and the United Kingdom (Federal Republic of Germany, 1990).
Kernel
A most trusted portion of a system that enforces a fundamental property, and on which the other portions of the system depend.
Key
An input that controls the transformation of data by an encryption algorithm.
Label
A level associated with a subject or object and defining its clearance or classification, respectively. In TCSEC usage, the security label consists of a hierarchical security level and a nonhierarchical security category. An integrity label may also exist, consisting of a hierarchical integrity level and a nonhierarchical integrity category (Biba, 1975).
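The usual comparison on such labels is dominance: label A dominates label B when A's hierarchical level is at least B's and A's categories include all of B's. A hedged sketch, with a hypothetical level ordering and category names:

```python
# Sketch of TCSEC-style labels: a hierarchical level plus a set of
# nonhierarchical categories. Levels and categories are hypothetical.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def dominates(a, b):
    """True if label a's level is >= label b's and a's categories
    are a superset of b's."""
    a_level, a_cats = a
    b_level, b_cats = b
    return LEVELS[a_level] >= LEVELS[b_level] and a_cats >= b_cats

doc  = ("secret", {"crypto"})
user = ("top secret", {"crypto", "nato"})
print(dominates(user, doc))  # True: the user's label dominates the document's
print(dominates(doc, user))  # False: dominance is not symmetric
```

Two labels can also be noncomparable (neither dominates the other), for instance when each carries a category the other lacks.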
Letter bomb
A logic bomb, contained in electronic mail, that is triggered when the mail is read.
Level
1. The combination of hierarchical and nonhierarchical components (TCSEC usage). See Security level, Integrity level. 2. The hierarchical component of a label, more precisely referred to as "hierarchical level" to avoid confusion. In the absence of nonhierarchical categories, the two definitions are identical.
Logic bomb
A Trojan horse set to trigger upon the occurrence of a particular logical event.
Mandatory access control (MAC)
1. Access controls that cannot be made more permissive by users or subjects (general usage, roughly ADAC). 2. Access controls based on information sensitivity represented, for example, by security labels for clearance and classification (TCSEC usage, roughly RBAC and ADAC). Often based on information flow rules.
Model
An expression of a policy in a form that a system can enforce, or that analysis can use for reasoning about the policy and its enforcement.
Monitoring
Recording of relevant information about each operation by a subject on an object, maintained in an audit trail for subsequent analysis.
Mutual authentication
Providing mutual assurance regarding the identity of subjects and/or objects. For example, a system needs to authenticate a user, and the user needs to authenticate that the system is genuine.
NCSC
The National Computer Security Center, part of the National Security Agency, which is part of the Department of Defense.
Node
A computer system that is connected to a communications network and participates in the routing of messages within that network. Networks are usually described as a collection of nodes that are connected by communications links.
Nondiscretionary access control
Equivalent to mandatory access control in TCSEC usage; otherwise equivalent to administratively directed access control.
Nonrepudiation
An authentication that with high assurance can be asserted to be genuine, and that cannot subsequently be refuted.
Object
Something to which access is controlled. An object may be, for example, a system, subsystem, resource, or another subject.
Operating system
A collection of software programs intended to directly control the hardware of a computer (e.g., input/output requests, resource allocation, data management), and on which all the other programs running on the computer generally depend. UNIX, VAX/VMS, and DOS are all examples of operating systems.
Orange Book
Common name for the Department of Defense document that is the basic definition of the TCSEC, derived from the color of its cover (U.S. DOD, 1985d). The Orange Book provides criteria for the evaluation of different classes of trusted systems and is supplemented by many documents relating to its extension and interpretation. See Red Book, Yellow Book.
OSI
Open Systems Interconnection. A seven-layer networking model.
Outsourcing
The practice of procuring from external sources rather than producing within an organization.
Password
A sequence that a subject presents to a system for purposes of authentication.
Patch
A section of software code that is inserted into a program to correct mistakes or to alter the program.
Perimeter
A boundary within which security controls are applied to protect assets. A security perimeter typically includes a security kernel, some trusted-code facilities, hardware, and possibly some communications channels.
PIN
Personal identification number. Typically used in connection with automated teller machines to authenticate a user.
Plaintext
See Clear text.
Policy
An informal, generally natural-language description of desired system behavior. Policies may be defined for particular requirements, such as security, integrity, and availability.
Principal
A person or system that can be authorized to access objects or can make statements affecting access control decisions. See the equivalent, Subject.
Private key
See Secret key.
Process
A program or subsystem that can act as a subject.
Public key
A key that is made available without concern for secrecy. Contrast with private key, secret key.
Public-key encryption
An encryption algorithm that uses a public key to encrypt data and a corresponding secret key to decrypt data.
RAMP
Rating Maintenance Phase. Part of the National Computer Security Center's product evaluation process.
Receiver
A subject reading from a communication channel.
Red Book
The Trusted Network Interpretation of the Trusted Computer System Evaluation Criteria, or TNI (U.S. DOD, 1987).
Reference monitor
A system component that enforces access controls on an object.
Requirement
A statement of the system behavior needed to enforce a given policy. Requirements are used to derive the technical specification of a system.
Risk
The likelihood that a vulnerability may be exploited, or that a threat may become harmful.
RSA
The Rivest-Shamir-Adleman public key encryption algorithm (Rivest et al., 1978).
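A toy illustration of the RSA mechanics with tiny textbook parameters (far too small to be secure; real moduli are hundreds of digits long):

```python
# Toy RSA: encrypt with the public key (e, n), decrypt with the
# private exponent d. Parameters are standard textbook values.
p, q = 61, 53
n = p * q          # 3233, the public modulus
e = 17             # public exponent
d = 2753           # private exponent: (e * d) % ((p-1)*(q-1)) == 1

m = 65             # a message, encoded as a number < n
c = pow(m, e, n)   # encryption: c = m^e mod n
print(pow(c, d, n))  # decryption: c^d mod n recovers 65
```

The asymmetry in the definition is visible here: anyone holding (e, n) can encrypt, but only the holder of d can decrypt.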
Rule-based access control (RBAC)
Access control based on specific rules relating to the nature of the subject and object, beyond just their identities—such as security labels. Contrast with identity-based access control. See Mandatory access control.
Safety
The property that a system will satisfy certain criteria related to the preservation of personal and collective safety.
Secret
Known at most to an authorized set of subjects. (A real secret is possible only when the size of the set is one or less.)
Secret key
A key that is kept secret. Also known as a private key.
Secret-key encryption
An encryption algorithm that uses only secret keys. Also known as private-key encryption.
Secure channel
An information path in which the set of all possible senders can be known to the receivers, or the set of all possible receivers can be known to the senders, or both.
Security
1. Freedom from danger; safety. 2. Computer security is the protection of data in a system against disclosure, modification, or destruction, and the protection of the computer systems themselves; safeguards can be both technical and administrative. 3. The property that a particular security policy is enforced, with some degree of assurance. 4. Often used in a restricted sense to signify confidentiality, particularly in the case of multilevel security.
Security level
A clearance level associated with a subject, or a classification level (or sensitivity label) associated with an object.
Sender
A subject writing to a channel.
Sensitivity label
A security level (i.e., a classification level) associated with an object.
Separation of duty
A principle of design that separates functions with differing requirements for security or integrity into separate protection domains. Separation of duty is sometimes implemented as an authorization rule specifying that two or more subjects are required to authorize an operation.
Shareware
Software offered publicly and shared rather than sold.
Signature
See Digital signature.
Simple security property
An information-flow rule stating that a subject at a given security level can read only from an object with a security label that is the same or lower (Bell and La Padula, 1976).
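A minimal sketch of this "no read up" rule, with a hypothetical level ordering and no nonhierarchical categories:

```python
# Sketch of the simple security property: a subject may read an
# object only if the subject's level is at least the object's.
LEVELS = {"unclassified": 0, "secret": 1, "top secret": 2}

def may_read(subject_level, object_level):
    return LEVELS[subject_level] >= LEVELS[object_level]

print(may_read("secret", "unclassified"))  # True: reading down is allowed
print(may_read("secret", "top secret"))    # False: no read up
```

With categories present, the comparison becomes the dominance relation on full labels rather than a comparison of hierarchical levels alone.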
Smart card
A small computer in the shape of a credit card. Typically used to identify and authenticate its bearer, although it may have other computational functions.
Source code
The textual form in which a program is entered into a computer (e.g., FORTRAN).
Specification
A technical description of the desired behavior of a system, as derived from its requirements. A specification is used to develop and test an implementation of a system.
Spoofing
Assuming the characteristics of another computer system or user, for purposes of deception.
State
An abstraction of the total history of a system, usually in terms of state variables. The representation can be explicit or implicit.
State machine
In the classical model of a state machine, the outputs and the next state of the machine are functionally dependent on the inputs and the present state. This model is the basis for all computer systems.
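The classical model can be sketched as a single transition function from (state, input) to (next state, output); the toy machine below, which tracks the parity of its inputs, is purely illustrative.

```python
# Sketch of a state machine: next state and output are a function of
# the present state and the input. This toy machine reports whether
# the running sum of its inputs is even or odd.
def step(state, inp):
    next_state = (state + inp) % 2
    output = "even" if next_state == 0 else "odd"
    return next_state, output

state = 0                      # initial state: sum so far is even
for inp in [1, 1, 1]:
    state, out = step(state, inp)
print(out)                     # "odd": three 1-inputs leave the parity odd
```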
STU-III
A secure telephone system using end-to-end private-key encryption.
Stub
An artifact, usually software, that can be used to simulate the behavior of parts of a system. It is usually used in testing software that relies on those parts of the system simulated by the stub. Stubs make it possible to test a system before all parts of it have been completed.
Subject
An active entity—e.g., a process or device acting on behalf of a user, or in some cases the actual user—that can make a request to perform an operation on an object. See the equivalent, Principal.
System
1. A state machine, that is, a device that, given the current state and inputs, yields a set of outputs and a new state (see State machine). 2. An interdependent collection of components that can be considered as a unified whole, for example, a networked collection of computer systems, a distributed system, a compiler or editor, a memory unit, and so on.
TCB
See Trusted computing base.
TCSEC
The Department of Defense Trusted Computer System Evaluation Criteria (U.S. DOD, 1985d). See Orange Book.
TEMPEST
U.S. government rules for limiting compromising signals (emanations) from electrical equipment.
Threat
The potential for exploitation of a vulnerability.
Time bomb
A Trojan horse set to trigger at a particular time.
Token
When used in the context of authentication, a physical device necessary for user identification.
A pocket-sized computer that can participate in a challenge-response authentication scheme. The authentication sequences are called tokens.
Trapdoor
A hidden flaw in a system mechanism that can be triggered to circumvent the system's security.
Trojan horse
A computer program whose execution would result in undesired side effects, generally unanticipated by the user. A Trojan horse program may otherwise give the appearance of providing normal functionality.
Trust
Belief that a system meets its specifications.
Trusted computing base (TCB)
A portion of a system that enforces a particular policy. The TCB must be resistant to tampering and circumvention. Under the TCSEC, it must also be small enough to be analyzed systematically. A TCB for security is part of the security perimeter.
Trusted system
A system believed to enforce a given set of attributes to a stated degree of assurance (confidence).
Trustworthiness
Assurance that a system deserves to be trusted.
Tunneling
An attack that attempts to exploit a weakness in a system at a low level of abstraction.
User authentication
Assuring the identity of a user. See Authorization.
User-directed access control (UDAC)
Access control in which users (or subjects generally) may alter the access rights. Such alterations may themselves be restricted by the access controls, for example, limited to the owner of an object. Contrast with administratively directed access control. See Discretionary access control.
Vaccine
A program that attempts to detect and disable viruses.
Virus
A program, typically hidden, that attaches itself to other programs and has the ability to replicate. In personal computers, "viruses" are generally Trojan horse programs that are replicated by inadvertent human action. In general computer usage, viruses are more likely to be self-replicating Trojan horses.
Vulnerability
A weakness in a system that can be exploited to violate the system's intended behavior. There may be security, integrity, availability, and other vulnerabilities. The act of exploiting a vulnerability represents a threat, which has an associated risk of being exploited.
Worm
A worm is a program that distributes itself in multiple copies within a system or across a distributed system. A worm attack is a worm that may act beyond normally permitted behavior, perhaps exploiting security vulnerabilities or causing denial of service.
Yellow Book
The Department of Defense Technical Rationale Behind CSC-STD-003-85 (U.S. DOD, 1985b). Guidance for applying the TCSEC to specific environments.
ZSI
Zentralstelle für Sicherheit in der Informationstechnik. The German Information Security Agency (GISA).