Assistance Phase (VAP), (3) Design Analysis Phase, (4) Formal Evaluation Phase, and (5) Rating Maintenance Phase (RAMP).

In the Pre-review Phase vendors present the NCSC with a proposal defining the goals they expect to achieve and the basic technical approach being used. The pre-review proposal is used to determine the amount of NCSC resources needed to perform any subsequent evaluation. The Vendor Assistance Phase, which can begin at any stage of product development, consists primarily of monitoring and providing comments. During this phase, the NCSC makes a conscious effort not to "advise" the vendors (for legal reasons and because it is interested in evaluation, not research and development). The Vendor Assistance Phase usually ends six to eight months before a product is released. The Design Analysis Phase takes an in-depth look at the design and implementation of a product using analytic tools. During this phase the Initial Product Analysis Report (IPAR) is produced, and the product is usually released for beta testing. The Formal Evaluation Phase includes both performance and penetration testing of the actual product being produced. Products that pass these tests are added to the Evaluated Products List (EPL) at the appropriate level. Vendors usually begin shipping their product to regular customers during this phase. The Rating Maintenance Phase (RAMP), which takes place after products are shipped and pertains to enhancements (e.g., movement from one version of a product to another), is intended for C2 and B1 systems; it enables vendors to improve their products without undergoing a complete recertification.

3.  

The NCSC has argued that it is premature to adopt criteria that address security features supporting Clark-Wilson integrity because formal models for such security policies do not yet exist, and on this basis it justifies the present bundled structure of the TCSEC (committee briefing by NSA). The NCSC continues to view integrity and assured service as research topics, citing a lack of formal policy models for these security services. However, it is worth noting that the Orange Book does not require a system to demonstrate correspondence to a formal security policy model until class B2, and the preponderance of rated systems in use in the commercial sector are rated below this level, for example, at the C2 level. Thus the NCSC argument against unbundling the TCSEC to include integrity and availability requirements, at least at these lower levels of assurance, does not appear to be consistent.
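The Clark-Wilson model referred to above is built around certified access triples that bind users, transformation procedures (TPs), and constrained data items (CDIs): a user may modify a CDI only through a TP certified for that user and that data item. A minimal sketch of this enforcement rule follows; all names and data here are illustrative assumptions, not drawn from the report.

```python
# Sketch of Clark-Wilson enforcement: a user may act on a constrained
# data item (CDI) only through a certified transformation procedure (TP)
# recorded as an access triple. Triples below are illustrative only.
ACCESS_TRIPLES = {
    ("alice", "post_payment", "accounts_ledger"),
    ("bob", "audit_ledger", "accounts_ledger"),
}

def may_execute(user: str, tp: str, cdi: str) -> bool:
    """Return True iff the (user, TP, CDI) triple has been certified."""
    return (user, tp, cdi) in ACCESS_TRIPLES

# Alice may post payments, but she may not run the audit procedure.
assert may_execute("alice", "post_payment", "accounts_ledger")
assert not may_execute("alice", "audit_ledger", "accounts_ledger")
```

The point of the triple, as opposed to a simple (user, object) permission, is that it enforces well-formed transactions: no user touches a CDI except through a certified procedure, which is precisely the kind of integrity policy the TCSEC's bundled structure does not address.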

4.  

In the future, software tools that capture key development steps may facilitate evaluation and cross-checks on evaluations by others.

5.  

In the DOD environment the term "accreditation" refers to formal approval to use a system in a specified environment as granted by a designated approval authority. The term "certification" refers to the technical process that underlies the formal accreditation.

6.  

The claims language of the ITSEC may be more amenable to system security specification. However, product evaluation and system certification are still different processes and should not be confused, even if the ratings terminology can be shared between the two.

7.  

Proposals for an A2 class have been made before without result, but LOCK and other projects suggest that it may now be time to extend the criteria to provide a higher assurance class. This class could apply formal specification and verification technology to a greater degree, require more stringent control over the development process (compare the ITSEC E6 and E7 levels), and/or call for stronger security mechanisms (e.g., the LOCK SIDEARM and BED technology, described in Appendix B of this report). The choice of which additional assurance features might be included in A2 requires further study.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.