APPENDIX D

Development of the Final List of Eight Issues

At its first meeting, the committee identified and considered a number of issues and facets of issues, as shown in this appendix. These initial deliberations, as well as those of later committee meetings, were analyzed and organized, and they eventually led to the final list of six technical and two strategic issues. This appendix provides insight into some of these deliberations by listing some of the earlier, more specific issues and topics tied to each of the final list of eight.

SYSTEMS ASPECTS OF DIGITAL INSTRUMENTATION AND CONTROL TECHNOLOGY

How can potential safety augmentation at the system level by the use of computers (e.g., diagnosis and accident management) be balanced and evaluated against potential safety decreases (e.g., owing to overreliance, or to poor design/implementation that does not achieve assumed benefits or makes things worse)?

Performance during transients, anticipated transient without scram (ATWS) issues, fail-safe design (e.g., can failures be detected as easily as with analog devices?). Does the use of computers make any difference in these areas?

Are there new environmental concerns (e.g., electromagnetic interference, climate control)?

What behaviors or features are of concern, and how do we provide confidence (assessment) for them (e.g., unintended function, performance issues, capacity and overload, fail-safe design, networking)?

Communications system distractions.
System capacity.
Response time of the system.
Network reliability, especially in advanced plants.
Recognition/detection of failure modes.
Architecture performance during transients.
Integration issues with analog components.

SOFTWARE QUALITY ASSURANCE

How can confidence be obtained in the safety and/or reliability of software? How should software be assessed?
What methods are appropriate and effective (e.g., verification and validation techniques, formal methods, quantification, hazard analysis, failure mode analysis and design)?

Do some software design techniques present special problems in assessment (e.g., artificial intelligence techniques)?

How can it be assured that changes and fixes do not degrade reliability and safety? What changes should require U.S. Nuclear Regulatory Commission (USNRC) approval, and which should not? What procedures should be instituted for change control? (E.g., should patching be allowed?) How can it be assured that required changes are made?

Confidence level (quality, verification and validation, formal methods, lack of meaningful standards).
Certification basis (process vs. product).
Fear of unintended function(s).
Configuration control (maintenance/upgrading).
Security considerations.

COMMON-MODE SOFTWARE FAILURE POTENTIAL

Are changes needed in the procedures for evaluating common-mode failures?

Reliability vs. safety: Do the enhanced capabilities of software allow new means of protection against computer failures or failure modes?
Quality vs. diversity: How much relative attention should be paid to each?

Diversity achievement.
Progressive approach to failure (defense-in-depth).

SAFETY AND RELIABILITY ASSESSMENT METHODS

Are there any implications for design basis accidents and the procedures for certifying against them?

What are the implications of using computers with respect to probabilistic risk assessment (PRA) procedures and use?

Are we taking solutions for old technology and inappropriately applying them to new technology (e.g., emphasis on diversity and redundancy, bottom-up component reliability approaches vs. risk-based or hazard analysis approaches)? Are there new approaches that may be more appropriate?

Assessment technology.
Added complexity of digital technology compared to analog.
Definition of safety margin with digital technology.
Loss of margin of safety by consolidation of data.
PRA or mathematical assessment method validity with digital technology.

HUMAN FACTORS AND HUMAN-MACHINE INTERFACES

Should restrictions be imposed on the safety or safety-related functions that can be allocated to computers as opposed to operators or analog devices?

Other operator aids such as alarm analysis, valve sequencing, and decision analysis.

Task allocation (computer vs. human).
Level of automation.
Human interface (role, display, information, nuances).
Use of "intelligence" aids (e.g., neural nets, artificial intelligence).
Operations and maintenance impacts (pluses and minuses).

DEDICATION OF COMMERCIAL OFF-THE-SHELF HARDWARE AND SOFTWARE

Are special procedures required for software tools (e.g., compilers, code generators)?

What assessment procedures are appropriate for commercial off-the-shelf (COTS) software?

How should dedication procedures differ from those used to certify (handle) specially constructed software?

IEEE-STD-279 compliance.
Use of standard software tools/compilers.

CASE-BY-CASE LICENSING PROCESS

Types of software complexity: Should the assessment basis and procedures differ?
Are there fundamental differences in functionality between analog and digital devices (e.g., between their failure modes), and do they affect certification or licensing?

Use of computers in safety compared to nonsafety systems.

Does the use of computers change the basis for certification procedures at the system level?

What should be the limits of the USNRC regulatory activities? How does the USNRC determine whether safety value has been added or reduced?

Should the certification basis for computers and software be different from that for the analog devices they replace?

How can the USNRC determine whether safety or reliability has been degraded when computers are retrofitted into existing designs?

How should version control be managed? Is this a USNRC concern?

Safety/control systems separation in digital as opposed to analog systems.
Lack of understanding of design basis.
Digital value added (e.g., accident diagnosis and management).
Regulatory constraints.
Short half-life of the technology.

ADEQUACY OF TECHNICAL INFRASTRUCTURE

How should the USNRC deal with the rapid changes in technology?

Lack of a strategic plan for the USNRC research program.
Other industry experience as part of the USNRC technical basis.