Software For Dependable Systems: Sufficient Evidence?
for human behavior or larger organizational effects; how to handle normal and malicious users; or how to express crucial properties.
Reasoning about fail-stop systems. The critical dependability properties of most systems will take the form “X should never happen, but if it does, then Y must happen.” For example, the essential property of a radiotherapy machine is that it not overdose the patient; yet some degree of overdose occurs in many systems, and any overdose that does occur must be reported. Similarly, any fail-stop system is built in the hope that certain failures will never occur but is designed to fail in a safe way should they occur. It therefore seems likely that multiple dependability cases are needed, at different levels of assurance, each making different assumptions about which adverse events in the environment and which failures in the system itself might occur. The structuring of these cases and their relationship to one another is an important topic of investigation.
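The “X should never happen, but if it does, then Y must happen” pattern can be sketched as a runtime check with a mandatory fallback obligation. The following Python fragment is illustrative only; the dose limit, names, and logging scheme are assumptions, not part of any real radiotherapy system.

```python
# Illustrative sketch of the fail-stop pattern from the text:
# X ("an overdose is delivered") should never happen, but if it is
# attempted, Y ("stop and record the event for reporting") must happen.
# MAX_DOSE_GY and all identifiers here are hypothetical.

MAX_DOSE_GY = 2.0  # assumed per-treatment safety limit (illustrative)


class DoseLimitExceeded(Exception):
    """Raised when the 'should never happen' condition X is observed."""


def deliver_dose(requested_gy, event_log):
    # Primary obligation: prevent X -- never deliver an overdose.
    if requested_gy > MAX_DOSE_GY:
        # Secondary obligation Y: fail stop and record the event so
        # that the mandated report can be made.
        event_log.append(("OVERDOSE_ATTEMPT", requested_gy))
        raise DoseLimitExceeded(
            f"requested {requested_gy} Gy exceeds limit {MAX_DOSE_GY} Gy"
        )
    event_log.append(("DELIVERED", requested_gy))
    return requested_gy


events = []
deliver_dose(1.5, events)          # within the limit: delivered and logged
try:
    deliver_dose(3.0, events)      # over the limit: refused and logged
except DoseLimitExceeded:
    pass                           # the system has stopped safely
```

The two obligations correspond to two different dependability cases: a strong case that the guard itself is correct, and a weaker case, under broader failure assumptions, that the stop-and-report path is always taken when the guard fires.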
Making stronger arguments from weaker ones. A chain can be stronger than even its strongest link if the links are joined in parallel rather than in series. Similarly, weaker arguments can be combined to form a single stronger argument. A dependability case will typically involve evidence of different sorts, each contributing some degree of confidence to the overall dependability claim. It would be valuable to investigate such combinations, to determine what additional credibility each argument brings, and under what conditions of independence such credibility can be maximized.
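The parallel-links analogy can be made concrete with a small calculation: if the arguments fail independently, the combined case fails only when every argument fails. The confidence values below are hypothetical, and the independence assumption is exactly the condition the text says must be investigated.

```python
# Illustrative sketch: combining independent "parallel" arguments.
# Each confidence is the assumed probability that one argument is sound;
# under independence, the combined case fails only if all arguments fail.

def combined_confidence(confidences):
    """Probability that at least one of several independent arguments holds."""
    p_all_fail = 1.0
    for c in confidences:
        p_all_fail *= (1.0 - c)
    return 1.0 - p_all_fail


# Two moderately convincing arguments -- say, testing evidence at 0.90 and
# static analysis at 0.80 (both figures hypothetical) -- combine into a
# stronger claim: approximately 0.98 under the independence assumption.
print(combined_confidence([0.90, 0.80]))
```

If the arguments share a common weakness (for instance, both rely on the same requirements document), the independence assumption fails and the combined confidence is lower, which is why the conditions of independence matter.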