8 Testing Software-Intensive Systems
Pages 127-136

From page 127...
... The focus of our efforts has been on how the services conduct operational testing and evaluation of software-intensive systems, what the special procedures are for such systems (noting wide variation in the operational testing and evaluation techniques used across the services), and what special problems arise.
From page 128...
... We believe this is not an efficient use of analysts' time: the sample of lines of code examined should never be based on a fixed percentage of total lines of code; furthermore, it is the software architecture that should be examined, not the code. We do applaud AFOTEC's efforts to communicate early in the process with software developers, and we concur that the use of software metrics, measures based on code characteristics such as code complexity, is useful for producing estimates for support budgets.
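One widely used code-characteristic metric of this kind is McCabe's cyclomatic complexity, which counts linearly independent paths through a routine. The following is a minimal sketch, not AFOTEC's actual tooling; it approximates the McCabe count as one plus the number of decision points, using Python's standard ast module purely for illustration:

import ast

# Node types that add a decision point (an approximation of the
# McCabe definition; production metric tools refine this list).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                ast.ExceptHandler, ast.And, ast.Or)

def cyclomatic_complexity(source: str) -> int:
    """McCabe-style count: 1 plus the number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))

# Hypothetical routine used only to exercise the metric.
sample = """
def dispatch(msg, retries):
    for attempt in range(retries):
        if msg.urgent and attempt == 0:
            return send_now(msg)
        elif msg.expired():
            break
    return queue(msg)
"""
print(cyclomatic_complexity(sample))  # for +1, if +1, and +1, elif +1 -> 5

Scores like this, aggregated over a code base, are one input to the kind of support-budget estimation the excerpt describes.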
From page 129...
... By their nature, evolutionary procurements will repeat the operational testing and evaluation cycle, creating the opportunity to reuse test infrastructure developed in earlier cycles, as well as existing operational test and field data. These facts should be taken as a mandate for investing in the creation of test infrastructure that can be used and enhanced in later cycles of an evolutionary procurement.
From page 130...
... For example, the operational profile can be calculated as the long-run probability distribution of the states of the chain, which corresponds to the proportion of time the system will spend in each state of use in expected operational field use; the expected sequence length corresponds to the average number of events in a test case or scenario of use; and mean first passage times correspond to the expected amount of random testing required to reach a given state of use or transition. These and other statistics of the chain are used to validate the model and support test planning.
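As a minimal sketch of how these statistics fall out of the chain, consider the hypothetical four-state usage model below (the states and transition probabilities are illustrative assumptions, not data from any fielded system). The operational profile is the stationary distribution pi satisfying pi P = pi, and mean first passage times follow from the standard fundamental-matrix formula:

import numpy as np

# Hypothetical usage model: states of use and estimated transitions.
states = ["idle", "query", "update", "report"]
P = np.array([
    [0.2, 0.5, 0.2, 0.1],   # from "idle"
    [0.3, 0.3, 0.3, 0.1],   # from "query"
    [0.4, 0.2, 0.2, 0.2],   # from "update"
    [0.6, 0.2, 0.1, 0.1],   # from "report"
])
n = len(states)

# Operational profile: solve pi P = pi with sum(pi) = 1, posed as the
# least-squares system (P^T - I) pi = 0 plus the normalization row.
A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Mean first passage times M[i, j]: expected steps to first reach state
# j from state i, via the fundamental matrix Z = (I - P + 1 pi^T)^-1
# and the identity M[i, j] = (Z[j, j] - Z[i, j]) / pi[j].
Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
M = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            M[i, j] = (Z[j, j] - Z[i, j]) / pi[j]

# (In a usage model with explicit start and end states, the expected
# test-case length is the mean recurrence time 1/pi of the end state.)
for s, p in zip(states, pi):
    print(f"long-run proportion of use in {s!r}: {p:.3f}")
print(f"mean steps from 'idle' until first 'report': {M[0, 3]:.1f}")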
From page 131...
... Finally, usage model analysis supports test planning and estimation of the amount of testing required to achieve specific objectives, such as exercising every possible state of use and every possible transition, experiencing various scenarios of use, and meeting reliability targets and other quantitative criteria for stopping testing.

EXPERIMENTAL DESIGN AND TEST AUTOMATION

The complexity of efficient selection of test cases is beyond human intuition, because the combinatorial choices are astronomical and the relation ...
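To make the combinatorial explosion concrete, here is a minimal sketch of one experimental-design technique, greedy pairwise (2-way) covering, applied to hypothetical test factors (the factor names and levels are invented for illustration; production covering-array generators are far more sophisticated):

from itertools import combinations, product

# Hypothetical test factors and levels for a system under test.
factors = {
    "radio":   ["UHF", "VHF", "SATCOM"],
    "terrain": ["desert", "urban", "jungle"],
    "load":    ["light", "heavy"],
    "weather": ["clear", "storm"],
}
names = list(factors)

# Exhaustive testing multiplies out every combination of levels.
full = 1
for levels in factors.values():
    full *= len(levels)

# Every two-factor interaction a pairwise suite must cover.
uncovered = {((f1, v1), (f2, v2))
             for f1, f2 in combinations(names, 2)
             for v1 in factors[f1] for v2 in factors[f2]}

# Greedy construction: repeatedly take the candidate test case that
# covers the most still-uncovered pairs. Not optimal, but compact.
candidates = [dict(zip(names, combo)) for combo in product(*factors.values())]

def covered_by(tc):
    return {pair for pair in uncovered
            if all(tc[f] == v for f, v in pair)}

suite = []
while uncovered:
    best = max(candidates, key=lambda tc: len(covered_by(tc)))
    uncovered -= covered_by(best)
    suite.append(best)

print(f"exhaustive: {full} test cases; pairwise suite: {len(suite)}")

Even in this toy example the pairwise suite covers every two-factor interaction with roughly a quarter of the 36 exhaustive test cases, and the gap widens rapidly as factors are added.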
From page 132...
... In the case of automated testing, the scripts are commands to the testing system. Reliability and other quality measures are defined directly in terms of the source chain and testing experience, without additional assumptions: for example, there is no assumption that failures are exponentially distributed, which permits quality measures and stopping criteria to be monitored sequentially as each test case is run and evaluated.
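One minimal sketch of such sequential monitoring follows. It is a simple Bayesian pass/fail bound assumed for illustration, not the specific estimator of any service test agency: each randomly generated test case is treated as an independent trial, a uniform Beta(1, 1) prior is placed on single-use reliability, and a lower credible bound is recomputed after every test case and compared against a stopping target:

def beta_lower_bound(successes, failures, conf=0.95, grid=50_000):
    """Lower credible bound on reliability: the point below which the
    Beta(successes + 1, failures + 1) posterior leaves mass 1 - conf,
    found by numerically integrating the posterior density."""
    a, b = successes + 1, failures + 1
    step = 1.0 / grid
    xs = [(i + 0.5) * step for i in range(grid)]
    dens = [x ** (a - 1) * (1.0 - x) ** (b - 1) for x in xs]
    total = sum(dens)
    acc = 0.0
    for x, d in zip(reversed(xs), reversed(dens)):
        acc += d / total
        if acc >= conf:
            return x
    return 0.0

target = 0.65   # illustrative reliability target, not a DoD threshold
outcomes = [1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]   # 1 = pass
s = f = 0
for i, ok in enumerate(outcomes, start=1):
    s += ok
    f += 1 - ok
    lb = beta_lower_bound(s, f)
    print(f"test {i}: {s} passes, {f} failures, 95% lower bound = {lb:.3f}")
    if lb >= target:
        print("stopping criterion met; testing can end")
        break

Because the bound is recomputed as each result arrives, no parametric failure-time distribution is needed; only the pass/fail record of the test cases drawn from the usage model.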
From page 133...
... Recommendation 8.2: Service operational test agencies should use experimental design methods to select or generate test cases for operational testing of software-intensive systems. Service test agencies should make the institution of test automation a priority for operational testing and evaluation.
From page 134...
... And, at any point during testing, what-if assumptions can be made regarding the success or failure of prospective testing, to evaluate the range of expected outcomes.

Recommendation 8.3: In operational testing of software-intensive systems, service test agencies should be required to evaluate the software architecture and design principles used in developing the system.
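Continuing the illustrative sketch from the page 132 excerpt (and reusing its beta_lower_bound helper, which is an assumption of that sketch rather than any agency's method), such what-if projections amount to recomputing the bound under best- and worst-case outcomes for the next block of prospective tests:

# What-if projection: bracket the post-test reliability bound by
# assuming the next k prospective test cases all pass or all fail.
def what_if(successes, failures, k, conf=0.95):
    best = beta_lower_bound(successes + k, failures, conf)    # all k pass
    worst = beta_lower_bound(successes, failures + k, conf)   # all k fail
    return worst, best

s, f, k = 40, 1, 10   # illustrative record so far and planned tests
worst, best = what_if(s, f, k)
print(f"after {k} more tests, the 95% lower bound will lie between "
      f"{worst:.3f} (all fail) and {best:.3f} (all pass)")

If even the all-pass projection falls short of the target, the planned block of testing cannot demonstrate the required reliability and the test plan should be revisited before resources are spent.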
From page 135...
... Field failure data are key to the most meaningful experimental controls and to evaluation of the software engineering methods used throughout the life cycle of defense systems, including the operational testing and evaluation phase. Information on the number of systems deployed and hours (or other units) ...
From page 136...
... This recommendation will probably be easiest to apply and most useful for software failures, but it should be viewed as appropriate and valuable for all defense systems when feasible.

