1. Prioritize forecasted technologies. System operators must assess the potential impact of the forecast on society, resources, and other domains, as well as the lead time from warning to event, to determine appropriate signals to track, threshold levels, and optimal resource allocation methods.

  2. Optimize processing, monitoring, and reporting tools. Processing and monitoring tools should be optimized to look for outliers and to find weak signals and signposts in noisy information environments. System users (decision makers, experts, and the public) should be able to access and analyze the real-time status of critical potential disruptions and the progress of a critical disruption relative to historical trends and breakthrough points, as well as to develop an overall picture of the range of possible disruptions. In addition, the following tools should be included at a minimum:

    • Search/query/standing query. On ideas, text, images, and other media, linkages, signals, and the like.

    • Interactive interface. User ability to control and manipulate time, scope, scale, and other variables.

    • Signal threshold control. Signals and/or alerts should be generated when certain thresholds are met or events occur. In general, low thresholds should be used for high-impact signals, and high thresholds for low-impact signals.

    • Analytical tools. The system should incorporate a rich set of tools, including link analytics, pattern recognition, extrapolation, S-curves, and diffusion rates.

    • User-controlled visualization, presentation, and dashboard tools. Data should be presented using multiple visualization methods and formats. Dashboards should be designed to engage decision makers.

    • Standard and special reports. The system should generate standardized as well as user-defined reports. Templates should be developed to enhance ease of use and to support comparison and analysis across reporting periods.

  3. Develop resource allocation and decision support tools. Decision makers will need tools to constantly track and optimize their resource portfolios and decisions in response to changes in the probabilities of potential disruptions.

  4. Assess, audit, provide feedback, and improve forecasts and forecasting methodologies. Process and system improvement should be ongoing. Operators should consider reviewing why previous disruptions were missed (bias, lack of information, lack of vision, poor processes, lack of resources, and the like) and what could be done to overcome these shortcomings. Operators of the system should seek feedback from users and decision makers about the usefulness of the forecasts derived from the site and the impact the forecasts had on decision making. An understanding of how users apply the forecasts in day-to-day decision making would help operators to refine the system. Finally, audit tracking and system protection processes must be put in place to ensure that system results are not purposefully hidden, manipulated, or lost.
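The signal-threshold-control principle in step 2 (low alert thresholds for high-impact signals, high thresholds for low-impact ones) can be sketched in code. This is a minimal illustration, not the committee's design: the normalized impact and strength scores, the linear scaling rule, and the example signal names are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    name: str
    impact: float    # estimated impact score in [0, 1] (hypothetical scale)
    strength: float  # current observed signal strength in [0, 1]

def alert_threshold(impact: float, base: float = 0.9, floor: float = 0.1) -> float:
    """Scale the alert threshold inversely with estimated impact:
    impact 0 -> base (a high bar), impact 1 -> floor (a low bar)."""
    return base - (base - floor) * impact

def should_alert(sig: Signal) -> bool:
    # A high-impact signal alerts at a lower observed strength than a
    # low-impact signal of the same strength.
    return sig.strength >= alert_threshold(sig.impact)

signals = [
    Signal("lab-scale battery breakthrough", impact=0.9, strength=0.25),
    Signal("incremental process tweak", impact=0.2, strength=0.25),
]
for s in signals:
    print(s.name, "->", "ALERT" if should_alert(s) else "watch")
```

With these illustrative parameters, the same weak signal (strength 0.25) triggers an alert when tied to a high-impact disruption but is merely watched when tied to a low-impact one, matching the asymmetry the text recommends.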


Postmortem analysis of disruptive events often reveals that all the information necessary to forecast a disruptive event was available but missed for a variety of reasons, including the following:

  • Not knowing enough to ask a question,

  • Asking the right question at the wrong time,

  • Assuming that future developments will resemble past developments,

  • Assuming one’s beliefs are held by everyone,

  • Fragmentation of the information,

  • Information overload,

  • Bias (institutional, communal, personal), and

  • Lack of vision.

The committee believes a well-designed persistent forecasting system focused on continual self-improvement and bias mitigation can address many of these issues by reducing uncertainty and the likelihood of surprise, leading to improved decision making and resource allocation.

The National Academies of Sciences, Engineering, and Medicine

Copyright © National Academy of Sciences. All rights reserved.