autonomy that correspond to different degrees and kinds of direct human involvement in guiding system behavior.
The overarching rationale for deploying such systems is that they might replace humans performing militarily important tasks that are dangerous or tedious or that require higher reliability or precision than is humanly possible. If such replacement is possible, two consequences follow: (1) humans can be better protected and suffer fewer casualties as these important military tasks are performed, and (2) these tasks will be performed more efficiently and effectively than if humans were directly involved.
Computer systems (without the sensors and actuators) have always had a certain kind of “autonomous” capability—the term “computer” once referred to a person who performed computations. Today, many computer systems perform computational tasks on large amounts of data and generate solutions to problems that would take humans many years to solve.
For purposes of this report, an autonomous system (without further qualification) refers to a standalone computer-based system that interacts directly with the physical world. Sensors and actuators are the enabling devices for such interaction, and they can be regarded as devices for input and output. Instead of a keyboard or a scanner for entering information into a computer for processing, a camera or radar provides the relevant input, and instead of a printer or a screen for providing output, the appropriate movement of a servomotor conveys the result of the computer's labors.
Autonomous systems are fundamentally dependent on two technologies—information technology and the technology of sensors and actuators. Both of these technologies have developed rapidly. On the hardware side, the costs of processor power and storage have dropped exponentially for several decades, with doubling times on the order of 1 to 2 years. Sensors and actuators have also become much smaller and less expensive. On the software side, the technologies of artificial intelligence, statistical learning techniques, and information fusion have advanced considerably as well, although at the cost of decreased transparency of operation in the software that controls the system.
Software that controls the operation of autonomous systems is subject to all of the usual problems regarding software safety and reliability—programming errors and bugs, design flaws, and so on. Such flaws include errors of programming (a correct performance requirement implemented incorrectly) and errors of design (a performance requirement that was itself formulated or stated incorrectly).
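The distinction between the two flaw classes can be made concrete with a small, hypothetical example: a braking rule for an autonomous vehicle. The functions and the 10-meter threshold below are invented purely for illustration.

```python
# Intended requirement (assumed for illustration): brake when the distance
# to an obstacle falls below 10 meters.

def brake_decision_correct(distance_m: float) -> bool:
    """Correct requirement, correctly implemented."""
    return distance_m < 10.0


def brake_decision_programming_error(distance_m: float) -> bool:
    """Error of programming: the requirement is right, but the code
    implements it incorrectly (the comparison is inverted)."""
    return distance_m > 10.0  # bug: brakes only when the obstacle is far away


def brake_decision_design_error(distance_m: float) -> bool:
    """Error of design: the code faithfully implements its specification,
    but the specification itself was misstated (a 2 m threshold leaves no
    room to stop at speed)."""
    return distance_m < 2.0  # implemented as specified, yet unsafe by design
```

The first flaw is caught by testing the code against the requirement; the second survives such testing, because the code and the (wrong) requirement agree—which is one reason design errors are the harder class to eliminate.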