E-Mail Commentary on Workshops One and Two
Written by Richard P. Hallion, Senior Adviser, Science and Technology Policy Institute; Deborah Westphal, Chairman of the Board, Toffler Associates; and Michael Yarymovych, President, Sarasota Space Associates on September 30, 2020, and highlighted in the opening remarks and discussion sections of the workshop on October 1, 2020 (see Chapter 5).
Richard Hallion:
First, I think Speedy’s idea was excellent. Second, I have been reviewing many of the briefings we have received, and, for what it is worth, I think I have spotted something we need to concern ourselves with or at least keep in the back of our minds. The Air Force/Space Force is looking at reducing time cycles, creating a world of pervasive knowledge (information) and application of that knowledge, and building a vast network of interconnectivity between all of its many systems—actual weapon systems; system architectures; organizational structures; and the Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance enterprise—cognizant of the importance of artificial intelligence and cyber as the “new stealths” of this era.
In effect, it is producing a sphere with a centroid point (national command authority) and web strings that, seen in three dimensions, run vertically, laterally, longitudinally, and transversely throughout the sphere, connecting every node within. And each node, of course, has its own mini-web as well. Within each node are multiple organizations and sub-elements (e.g., at the lowest level, an individual microsat, aircraft, remotely piloted aircraft, Special Operations Forces special tactics airman, or hospital technician). All of these elements have their own time cycles, and the cumulative sum of inefficiencies in each one’s execution cycle across their own operations and their interactions with others across their micro-sphere and the larger sphere constitutes the Clausewitzian “friction” afflicting the entire time-cycle enterprise. Perhaps a graver challenge is that each of these connecting strings, and each node as well, constitutes—in the artificial intelligence and cyber age—a dangerous point of entry for hostile penetration and exploitation—think “Son of Stuxnet.” It only takes one individual’s failure to follow IT protocols and communications security to introduce a potentially catastrophic cyber weapon, one that might not necessarily be visible at first glance, and which might, indeed, sit for a very long time—even years—until needed. Again, this illustrates that technology offers both benefits and vulnerabilities.
One way to think of this is to consider the structure of ships. Classic sailing vessels worked pretty well, but they faced a serious problem from both systemic stresses—imposed by the pitch and heave of the seas—and from wood-eating marine worms. The ships had robust structures—keels, ribs, reinforcing timbers, etc.—and so weathered the obvious threat—the pitch and heave of the seas—pretty well. But the worms were a major problem, for, to them, the structure was not only a means of attaching to the ship; it was a food source, a highway, and a home from which they could replicate and ultimately destroy the ship—which in our terms was an entire system. It was for this reason that hulls were clad in copper, and various marine paints and coatings are used today on the vulnerable wooden hulls of smaller and older boats. Metal, of course, changed this dramatically but still had its own vulnerabilities to corrosion, fatigue, etc., proving that no solution is completely satisfactory.
In our rush to exploit the new age of human–machine and machine–machine interfacing, we are actually entering a new age of cyber vulnerability that could, in a crisis, reveal itself in weapons and sensors that do not work, in “false pictures” being generated, in widespread shutdowns and breakdowns in systems and connectivity, etc. Now, this is not all bad—our prospective opponents are rapidly trying to do the same thing, and I would suggest that we have some great opportunities to exploit their efforts for our own ends by planning to capitalize on the entry points, human failings (e.g., the individual who in a rush fails to properly safeguard an IT system), etc. My thoughts here are imperfectly formed, but I throw them out for at least some consideration, and I copied Mike Yarymovych, who is far more knowledgeable about such things than I. Cheers to all, and looking forward to tomorrow’s discussion!
Deborah Westphal:
Thank you, Dick. I need to reflect more on what you have, but I’m following you. A couple of things stood out. You described a very complicated system of systems that may or may not be understood—or even be understandable. I am reminded of the F-22 avionics software. We don’t even really know what code we have in the avionics, and it is approximately 40 percent of the cost of the plane (maybe more now). Over 10 years ago, I had a conversation with Paul Nielsen about how we have gotten to the point that software, when it gets up close and personal with other software, can start to “mate” without coders’ direction—basically, code creating code. So, inside this system of systems with lots of algorithms, do we even know what is going on? Second thing: there is a human system of systems that interfaces with all the technical systems of systems. Humans are flawed; they have brains, motivation, and purpose. This is good, but it is also random and uncertain. How do we manage this? Old command authorities are not going to cut it. The last thing that came to mind is the Star Wars strategy of trying to bankrupt Russia. How do we know for sure that China isn’t doing the same thing to us? Do we REALLY know what they are investing in and what they have? How do we know this for sure?
Richard Hallion:
All good points you raise! My thoughts on this are very imperfectly formed, but you added an important dimension to it. Years ago, when Maj. Gen. Peter W. “Peet” Odgers was brought in to run the B-1 SPO after Maj. Gen. Bill Thurman pinned on his third star and became head of ASD (1986), he was stunned to discover that a significant portion of the B-1’s code was being written under contract with a supplier in Mexico—not exactly the most secure arrangement.
The other issue you raise—code writing code—takes on great significance in the machine-to-machine era. Frankly, we get very far down in development cycles before we fully realize exactly what those codes can do. I am reminded of Tom Morgenfeld’s landing accident (fortunately not fatal) in the YF-22A at Edwards, when the code decided he shouldn’t flare and land and then generated a divergent pilot-induced oscillation (PIO)—the film is quite dramatic. Or Airbus’s experience with an early A320 that flew into trees when the flight control system decided the pilot wanted to land when he was simply making a low fly-by at a public demo—ugly.
Humans are extraordinarily flexible and broadly effective integrators, but their decision rates are bounded by the speed of nerve impulses across synapses and of chemical transfer within brain cells—wish we had a brain doc to help us get some actual transfer-rate numbers!—and thus are limited when dealing with electrical connectivity rates or “Mach a Million” light signaling. Integrating the two is likely necessary for a whole lot of ethical and law-of-war reasons but very imperfect in terms of efficient time cycles. Just as the fighter pilot’s 9 g physiological limit constrains aircraft design, making more maneuverable remotely piloted aircraft our likely future, the physiological rate of human decision making makes machine–machine decision making much more “attractive” but also more troubling.
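[The rate gap described above can be sketched with rough, order-of-magnitude numbers. The physiological and signal-speed values below are textbook approximations supplied for illustration, not figures from the workshop; the threat speed echoes the “1 mi/sec+” mentioned later in this exchange.]

```python
# Order-of-magnitude comparison of human versus electronic decision latencies.
# All constants are rough textbook values, assumed for illustration only.

NERVE_CONDUCTION_M_S = 100.0   # fast myelinated axon (~1-120 m/s range)
SYNAPTIC_DELAY_S = 1e-3        # ~0.5-1 ms per chemical synapse
HUMAN_REACTION_S = 0.25        # simple stimulus-response time, ~200-250 ms
SIGNAL_SPEED_M_S = 2e8         # electrical/optical signal, roughly 0.7c in cable or fiber

def electronic_latency(distance_m: float) -> float:
    """Propagation delay for an electronic signal over distance_m meters."""
    return distance_m / SIGNAL_SPEED_M_S

# A threat closing at roughly 1 mi/sec (~1,609 m/s): how far it travels during
# one human reaction time, versus the electronic delay across 1,000 km.
threat_speed_m_s = 1609.0
travel_m = threat_speed_m_s * HUMAN_REACTION_S
print(f"Threat travel during one human reaction ({HUMAN_REACTION_S} s): {travel_m:.0f} m")
print(f"Electronic signal latency across 1,000 km: {electronic_latency(1e6) * 1e3:.1f} ms")
```

On these assumed numbers, a single human reaction time lets such a threat close about 400 m, while an electronic signal spans 1,000 km in about 5 ms—a difference of several orders of magnitude, which is the crux of the “attractive but troubling” trade-off noted above.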
Finally, one of the major developments in German military thinking during the First World War was the notion of “Auftragstaktik” or “Mission orders”: Rather than spell everything out in minute detail, commanders were taught to empower subordinates by giving them orders that spelled out the final objective but then let them figure out how to undertake the mission. The result was pretty overwhelming success in the early Spring Offensives of 1918, which required a hugely disproportionate Allied response to defeat. And, of course, carried forward to the Blitzkrieg of 1939–1940, it ensured that the Wehrmacht overran Western Europe.
Today, with the unfolding advance of artificial intelligence and machine-to-machine decision making, might we not think of both as “subordinate humans,” so that we can “write” mission orders spelling out what needs to be done and “empower” them to do it, consistent with guidelines such as those offered by Isaac Asimov, relaxed to permit considered and discrete use of lethal force? Very complex, but if you are trying to respond to a threat moving at 1 mi/sec+ or Mach a Million, telephone calls aren’t going to do it.
Michael Yarymovych:
Dick, very thoughtful remarks. Some of the time difficulties can be overcome with training, where pilots act intuitively without lengthy deliberations and without the need for radio communications. This brings me back to my original question on the speed of light. Is the speed of thought even faster than the speed of light? Einstein, don’t look! There are personal experiences of simultaneous thought expressions between friends. Is there any scientific literature on that subject? Also, I believe that artificial intelligence built into new fighting systems is very much upon us. In a perverse way, such artificial intelligence fighting machines could be impervious to the otherwise misleading information provided by hybrid warfare, already in operation as practiced by Putin’s Russia.