Summary of a Workshop on Software Certification and Dependability

Appendixes
A
Workshop Agenda

MONDAY, APRIL 19, 2004

Welcome
Charles Brownstein, Director, Computer Science and Telecommunications Board
Daniel Jackson, Chair, Committee on Building Certifiably Dependable Systems

Panel A: The Strengths and Limitations of Process
Isaac Levendel, Independent Consultant
Gary McGraw, Cigital
Peter Neumann, SRI International
Moderator: Martyn Thomas

The focus of this panel is the contribution of particular processes and process characteristics to the successful development and effective certification of dependable systems.

What are the important characteristics of the processes that you believe should be followed when developing certifiably dependable systems? What evidence exists to support your opinion? How would it be possible to gain stronger evidence?
How important is evidence of the development process (or the absence of such evidence) to certification that a system meets its dependability objectives? Does your answer depend on the nature of the system under consideration? If so, in what way?
What specific processes, if carried out effectively, could provide sufficient evidence that a system meets its functional requirements? Does your answer change if the system is (a) preparing customer bills for a major utility company; (b) controlling a radiotherapy system; (c) providing flight control for a fly-by-wire civil airliner; (d) protecting military secrets in a system accessible to staff with lower-level security clearances? How would your answers change for the same question, applied to nonfunctional requirements, such as performance and usability?
How do you measure or demonstrate the correlation between process metrics and product metrics for attributes such as reliability and security?
Can it ever be reasonable to argue that a system is more dependable than the evidence available can demonstrate scientifically? What should be the role of engineering judgment in certifying systems?
What do you consider to be the strengths and limitations of process metrics in assessing the dependability of a computer-based system?

Panel B: Looking Forward: New Challenges, New Opportunities
Robert Harper, Carnegie Mellon University
Shriram Krishnamurthi, Brown University
James Larus, Microsoft Research
André van Tilborg, Office of the Secretary of Defense
Moderators: John Rushby, Lui Sha

The focus of this panel is what has changed in the last 30 years with respect to certification.

How have the development of new technology and the spread of computing changed both the problems we face in certifying software and the potential solutions to the certification problem?
How does the increasingly pervasive use of software in infrastructural systems affect the need for certification? Does the greater sophistication of today's users affect the problem?
What challenges and opportunities are presented by the widespread use of COTS software and outsourcing? How can we build and certify systems in which critical and noncritical components work together?
Should we move certification from a process-centric to a product-centric approach over time? If so, how?
What technologies are promising for aiding certification efforts? What role will there be for static methods such as static analysis, proof systems, and model checking? And for dynamic approaches involving, for example, runtime assertions and fault detection, masking, and recovery?
Is incremental certification in traditional safety-critical systems such as flight control an important goal to work toward? What is the technology barrier to success?
Panel C: Certification and Regulation: Experience to Date
Brent Goldfarb, University of Maryland
Mats Heimdahl, University of Minnesota
Charles Howell, MITRE Corporation
Robert Noel, MITRE Corporation
Moderators: Michael DeWalt, Scott Wallsten

The focus of this panel is to understand how certification and regulation affect software development.

How do regulation and certification affect current mission-critical software development?
What are the differences and similarities between industry-standard, self-imposed regulations and government- or policy-imposed regulations and standards?
How do developers consider trade-offs between improved safety/reliability and lower costs associated with less dependable (but perhaps more available) systems? How do regulations and certification affect innovation?
Within your field of expertise, what are the top three issues in regulation or government oversight that hamper system dependability? What are the top three regulatory approaches that have provided significant improvements in system dependability?
How are regulations and guidance within your organization promulgated and approved?
What are the differences and similarities between developing regulations and guidance material for hardware dependability and developing them for software dependability?
What are possible future challenges (economic, technological, or otherwise) with respect to current regulatory and certification approaches?
In your answers to these questions, what supporting data are available and what supporting data are needed to buttress analyses?

Panel D: Organizational Context, Incentives, Safety Culture, and Management
Richard Cook, University of Chicago
Gene Rochlin, University of California, Berkeley
William Scherlis, Carnegie Mellon University
Moderators: Charles Perrow, David Woods

The focus of this panel is to explore the implications of certification within the organizational context.

How are software development organizations responsible for failures? How can software development organizations learn from failure?
How can software development as a model of operations be integrated with operations? How can software development better anticipate the reverberations of technology change?
Do we need to highlight particular problems with organizational performance in the certification area, distinct from dependability in general?
Are the “mental models” of organizational routines more vulnerable in this area, thus requiring more demanding safeguards or personnel?
How might this be achieved? By outsourcing, special training, incentives?
What role might insurance and liability play in achieving higher levels of certification? Might liability threats promote a better safety culture? Would the availability of insurance help (or make matters worse—the moral hazard problem)? Might insurers require evidence of reliability practices to make insurance available or reduce high premiums? Are there precedents for this in other areas of safety in low-probability/high-risk endeavors, and is there evidence the effort is successful?
To what extent should certification be left entirely to the producer? When should a firm hire specialists? When should a consumer require that an independent agency do the certification? Are there trade-secret issues with outside involvement?
Products are sold on the basis of performance and features. How can we make the promise of dependability attractive to consumers given its added cost?

Reactions to Panels
TUESDAY, APRIL 20, 2004

Panel E: Cost-Effectiveness of Software Engineering Techniques
Kent Beck, Three Rivers Institute
Matthias Felleisen, Northeastern University
Anthony Hall, Praxis Critical Systems
Moderators: Peter Lee, Jon Pincus

The focus of this panel is to understand the cost-effectiveness of current software engineering techniques as they relate to dependability and certification.

What is the evidence for the cost-effectiveness of various software engineering techniques, either today or looking toward the future? Ideally, this would focus on the techniques’ roles in producing dependable software; however, strong evidence for cost-effectiveness in other domains is also interesting.
To the extent that evidence is currently limited, what kind of investigation could lead to strengthening it in the future?
Are there particularly promising directions that can lead to particular software engineering techniques becoming more cost-effective for creating dependable software?

Panel F: Case Study: Electronic Voting
David Dill, Stanford University
Douglas Jones, University of Iowa
Avi Rubin, Johns Hopkins University
Ted Selker, Massachusetts Institute of Technology
Moderators: Reed Gardner, Daniel Jackson

The focus of this panel is to explore a particular application domain within the context of certification, dependability, and regulation.

What role does software play in voting? How crucial is it? Does it make things worse or better?
What properties of the software might be certified? What current approaches might help?
What would the certification process, if any, be? Who would do it? What credibility would it have? Who has to be trusted? What ulterior motives are at play?
With respect to issues of dependability and certification, is this case study typical, or unique in some ways?
Group Brainstorm
Moderator: Daniel Jackson

What are the important questions that have come out of this workshop that the committee should address in the rest of its study?