David Isaacson, Office of the Director of National Intelligence
Rama Chellappa, University of Maryland, College Park
George Coyle, National Academies of Sciences, Engineering, and Medicine
After welcoming participants and expressing his gratitude to the workshop planning committee, David Isaacson, Office of the Director of National Intelligence, explained that this 2-day workshop would present technical advances relating to adversarial attacks and anomalies, which have high-level national security implications. He said that Congress has taken notice of these technical advances, especially in light of the prevalence of fake videos and images created using machine learning techniques. Deepfake technology blurs the lines of fact and fiction and could undermine public trust in recorded videos and images, Isaacson explained. This increased presence of false information is accompanied by a volume of available digital information that exceeds the Intelligence Community’s (IC’s) current capabilities for human vetting and processing. Isaacson hoped that this workshop would present the current state of the art as well as provide insight into forthcoming innovations so that the IC and the nation are prepared to retool and be better equipped in both the current environment and future scenarios. Rama Chellappa, University of Maryland, College Park, and George Coyle, National Academies of Sciences, Engineering, and Medicine, noted that, to help guide the IC’s future technology investments, workshop speakers would identify both near- and long-term enabling technology capabilities (see Appendix D).
Michael I. Jordan, University of California, Berkeley
The rapid growth in the size and scope of data sets in science and technology has created a need for novel foundational perspectives on data analysis that blend the inferential and computational sciences. That classical perspectives from these fields are not adequate to address emerging problems in data
science is apparent from their sharply divergent nature at an elementary level—in computer science, the growth of the number of data points is a source of “complexity” that must be tamed via algorithms or hardware, whereas in statistics, the growth of the number of data points is a source of “simplicity” in that inferences are generally stronger and asymptotic results can be invoked. On a formal level, the gap is made evident by the lack of a role for computational concepts such as “runtime” in core statistical theory and the lack of a role for statistical concepts such as “risk” in core computational theory. I present several research vignettes aimed at bridging computation and statistics, discussing the problem of inference under privacy and communication constraints, the problem of the control of error rates in multiple decision-making, and the notion of the “optimal way to optimize.”
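Jordan's vignette on controlling error rates in multiple decision-making is commonly illustrated by false discovery rate control. The sketch below implements the classical Benjamini–Hochberg procedure as a generic illustration of that problem; it is not code from the talk.

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean mask of rejected hypotheses under
    Benjamini-Hochberg false discovery rate control at level alpha."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # Step-up thresholds: alpha * k / m for the k-th smallest p-value.
    thresholds = alpha * (np.arange(1, m + 1) / m)
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        # Reject all hypotheses up to the largest k meeting its threshold.
        k = np.max(np.nonzero(below)[0])
        rejected[order[: k + 1]] = True
    return rejected
```

With many simultaneous tests, this trades the strict familywise error control of a Bonferroni correction for substantially higher power while still bounding the expected fraction of false discoveries.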
Ruzena Bajcsy, University of California, Berkeley
Ruzena Bajcsy, University of California, Berkeley, discussed model- versus data-driven analysis and prediction in the context of human movement. She first provided an overview of the Human-Assistive Robotic Technologies Laboratory’s1 human modeling and presented a series of important definitions.
- Accuracy. The difference between the measurement and the actual value.
- Precision. The variation observed when measuring the same part repeatedly.
- Reliability. The degree to which a test or an instrument measures consistently.
- Reproducibility. A component of the precision of a measurement or test method.
- Repeatability. The degree of agreement between repeated tests.
- Robustness. The ability of a system to tolerate perturbations that might affect the system’s function.
- Measurement. The mapping of qualitative empirical relations to relations among numbers, which is only a reflection of reality.
- Data. Processed measurement.
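As a concrete illustration of the first two definitions, accuracy and precision can be estimated from repeated measurements of a known quantity. The sketch below uses mean absolute error and sample standard deviation as the summary statistics; these are common conventions, not necessarily the ones used in Bajcsy's laboratory.

```python
import numpy as np

def accuracy_and_precision(measurements, true_value):
    """Summarize accuracy as mean absolute error against the true value,
    and precision as the sample standard deviation of the repeated
    measurements of the same quantity."""
    m = np.asarray(measurements, dtype=float)
    accuracy_error = float(np.mean(np.abs(m - true_value)))
    precision = float(np.std(m, ddof=1))  # ddof=1: sample, not population
    return accuracy_error, precision
```

Note that a sensor can be precise but inaccurate (a consistent bias) or accurate on average but imprecise (high scatter); both limits matter when the measurements feed safety-critical decisions.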
Bajcsy explained that understanding limits—accuracy, precision, sensitivity, detectability—of measurements is important, especially when analytics are used to make life or death decisions and in the design and implementation of safety-critical systems. It is essential that systems are safe and that their behavior is repeatable, reliable, stable, and robust. It is also important to provide a system user with guarantees and limits of system performance. To do this, one must know the limits of the sensors that provide measurement.
Bajcsy provided examples of the use of visual sensors in research. The first example was a collaboration with orthopedic surgeons on a study of postural stability in post-surgical sit-to-stand transitions. There is a high rate of failure in subjects undergoing spinal fusion, and this project aimed to shed light on when the surgery would not work based on biomechanical changes. Her team sought to identify kinematic, dynamic, and muscular changes pre- and post-spinal fusion surgery, as well as effects on balance and standing strategies. By understanding the limits of measuring devices, it was possible to detect the individual sit-to-stand transitions (Matthew et al., 2018). However, she noted that simplifying assumptions were needed, including that sit-to-stand could be modeled solely in the sagittal plane and that an allometrically scaled musculoskeletal model could represent the subject.
This research utilized three-dimensional cameras (Kinect 1, Kinect 2) and several types of modeling methods, including inverse kinematics, inverse dynamics (i.e., estimates based on a patient’s overall height and weight), and a muscular model. The results of the research were used to inform decisions about surgical interventions and have shown improvement. However, Bajcsy emphasized that decisions about medical diagnostics are inherently personal, and individual situations may vary. She also stressed that differences in models can dramatically affect the results of data analysis; accuracy is particularly important in healthcare decision-making. While this example focused on diagnostic devices, the future goal is to develop techniques for intervention.
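The sagittal-plane simplification described above can be illustrated with a toy inverse dynamics computation for a single rigid trunk segment rotating about the hip. The allometric scaling constants below are hypothetical placeholders, not values from the study.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def hip_torque(theta, theta_ddot, height_m, mass_kg):
    """Toy sagittal-plane inverse dynamics: torque (N*m) needed to drive a
    single rigid trunk segment at angle theta (rad from vertical) with
    angular acceleration theta_ddot (rad/s^2). Segment length and mass are
    allometrically scaled from total height and weight; the 0.3 and 0.5
    scaling factors are illustrative assumptions only."""
    l = 0.3 * height_m       # assumed segment length
    m = 0.5 * mass_kg        # assumed segment mass
    I = m * l ** 2 / 3.0     # thin-rod moment of inertia about the hip
    # Rigid-body torque balance: inertial term plus gravity at the
    # segment's center of mass (l/2 from the joint).
    return I * theta_ddot + m * G * (l / 2.0) * np.sin(theta)
```

Real sit-to-stand analyses use multi-segment musculoskeletal models, but even this caricature shows why parameter estimates from height and weight propagate directly into the computed joint torques.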
The next study that Bajcsy discussed, a work in progress, examined gross and fine motor synergies in reach-to-grasp movements. The objective of the work was to incorporate biomechanical models of function and impairment into individualized, assistive control schemes. To do this, Bajcsy’s team built a robotic system for assessment and assistance of the planar reach-to-grasp motion. The system controls wrist position with elbow support and an assisted pinch grasp, measures the position and force at the wrist, and operates under admittance (i.e., force-velocity) and position control modes. This system illustrates that smoothness of motion varies between healthy subjects and those who have had strokes, which relates to the capacity for coordinated human–machine movement. Preliminary results from control subjects and subjects with upper limb impairment (i.e., stroke patients) showed differences in linearity and smoothness of trajectory, velocity profile, and ability to stabilize.
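The admittance (i.e., force-velocity) control mode mentioned above can be sketched as a simple mapping from measured interaction force to commanded velocity. The damping and deadband values below are illustrative placeholders, not parameters of Bajcsy's system.

```python
def admittance_velocity(force, damping=20.0, deadband=0.5):
    """Map a measured interaction force (N) to a commanded velocity (m/s)
    for one axis of an admittance-controlled robot. Forces inside the
    deadband produce no motion, so sensor noise does not drive the robot;
    beyond it, velocity grows linearly with force (higher damping means
    the robot feels 'heavier' to the user)."""
    if abs(force) <= deadband:
        return 0.0
    sign = 1.0 if force > 0 else -1.0
    return sign * (abs(force) - deadband) / damping
```

In a full controller this mapping runs inside a high-rate loop, with the commanded velocity handed to a low-level position/velocity controller; tuning the damping per subject is one way such a system can be individualized.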
Another study considered enabling multi-degrees-of-freedom musculoskeletal dynamics models via ultrasound and acoustic myography. Unlike the other studies, which addressed full-body analysis, this study began to address human dynamics more locally to better understand the strength of the muscles. Ultrasound helps researchers better understand the mechanical properties of muscle because it gives reasonable spatial and temporal resolution and shows cross-sections of muscles, Bajcsy explained. The objective of the study was to construct low-dimensional models mapping noninvasive sensor data to the in vivo muscle force of the arm. Bajcsy’s team observed substantial deformation under changes in muscle load and kinematic configuration and is currently employing statistical shape analysis to extract low-dimensional deformation models.
Bajcsy commented that model-based approaches for human modeling research are useful because interpretability is critical for medical and human-interfacing applications, many of which are safety-critical and in need of guarantees. Thus, it is essential that the medical community understands, or at least trusts, the system. Domain knowledge is plentiful and important for scoping models appropriately; although models are abstractions, they serve as a guide to what to measure. She said that biology obeys constraints (e.g., laws of physics, musculoskeletal attachments and interactions) but that there are many niche applications with their own interests and constraints, including specific tasks (e.g., sit-to-stand, sports) and specific pathologies (e.g., stroke, spinal cord injury). It is critical to map the results to the existing body of literature because model-free approaches alone do not allow for easy incorporation of constraints and offer only limited interpretability. Big data are useful in exploratory analysis because they can be used to establish a healthy baseline for a particular task or measurement to classify pathology, activity, or configuration. They can also be used to perform high-level analysis (e.g., extracting features through principal component analysis) to inform the development of future models, she continued, particularly for non-critical but time-consuming tasks. Data-driven approaches work best when the scope is limited and the training data sufficiently cover the space of the inputs.
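The idea of using data from healthy subjects to establish a baseline and flag deviations can be sketched with principal component analysis. This is a generic illustration of the approach, not code from the studies described.

```python
import numpy as np

def pca_baseline(healthy, k=2):
    """Fit a k-component PCA baseline from rows of healthy-subject
    feature vectors. Returns the feature mean and the top k principal
    directions (an orthonormal basis for the 'healthy' subspace)."""
    mean = healthy.mean(axis=0)
    centered = healthy - mean
    # SVD yields principal directions without forming the covariance matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def reconstruction_error(sample, mean, components):
    """Distance from one sample to the healthy subspace. A large value
    flags a movement pattern outside the healthy baseline."""
    centered = sample - mean
    projection = components.T @ (components @ centered)
    return float(np.linalg.norm(centered - projection))
```

In practice the feature rows might be joint-angle trajectories or summary kinematic measures, and the error threshold for flagging pathology would itself need validation against clinical labels.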
In conclusion, Bajcsy said that model- and data-driven approaches are complementary for modeling human movement, although it is critical to determine the right tool for the right job. She reiterated that no two humans are alike and highlighted the promise of precision medicine that could provide more customized diagnostics and treatments. She highlighted several important questions to consider when thinking about human modeling, including the following:
- Are we interested in behavior/performance, or in the system structure needed to predict behavior at different scales?
- What simplifying assumptions can be made?
- What trade-offs must be made between accuracy/performance and generalizability?
- How can we provide guarantees for robustness and safety? Even if we consider techniques such as reachable sets in control systems, the boundaries depend on the accuracy and precision of the estimated parameters of the dynamical system, so it is important to pay attention to the estimators.
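Bajcsy's point that reachable-set boundaries depend on the accuracy and precision of estimated parameters can be made concrete with a toy scalar system. The first-order decay model below is hypothetical, chosen only to show how parameter uncertainty widens the reachable set.

```python
import numpy as np

def reachable_interval(x0, a_hat, a_err, t):
    """Reachable set [x_min, x_max] at time t for the scalar system
    x' = -a * x with x(0) = x0 > 0, when the decay rate is only known
    to lie in [a_hat - a_err, a_hat + a_err]. For this monotone system
    the extreme parameter values give the extreme trajectories."""
    a_lo, a_hi = a_hat - a_err, a_hat + a_err
    return x0 * np.exp(-a_hi * t), x0 * np.exp(-a_lo * t)
```

A perfect parameter estimate collapses the interval to a single trajectory; any estimation error inflates it, so the safety guarantee one can certify is only as tight as the estimator allows.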
She closed by explaining that deep, fundamental results take time to achieve. In response to a question from an audience participant, Bajcsy said that her planar reach-to-grasp systems are cost-conscious as well as robust and easy to use. As an example, she explained that children with muscular dystrophy have decreased ability to reach as the disease progresses (e.g., children may become unable to comb their hair or feed themselves). She gave her reachability software to colleagues to use. Although it takes time to get approval from the Food and Drug Administration, she and her team have been fairly successful in transferring technology. Devices for estimating kinematics can be bought off the shelf (e.g., motion capture systems), and, in the case of the “sit-to-stand” experiment, such a device/system is now being used in at least two orthopedic departments. She emphasized that the role of academia is to develop ideas and demonstrate feasibility, and industry can take it from there.