7
Conclusion
Technology forecasting is strategically both a defensive and offensive activity. It can assist in resource allocation and minimize the adverse impacts or maximize the favorable impacts of game-changing technology trends. In a general sense, it is wise to be circumspect by analyzing the state of trend-setting technologies, their future outlook, and their potential disruptive impact on industries, society, security, and the economy. To paraphrase Winston Churchill, the goal of forecasting is not to predict the future but to tell you what you need to know to take meaningful action in the present.
This final chapter summarizes and condenses key points from throughout the report, presented first as a checklist of important system attributes and second as a series of steps to build a persistent forecasting system for disruptive technologies.
BENCHMARKING A PERSISTENT FORECASTING SYSTEM
Table 7-1 describes the attributes of a well-designed, persistent forecasting system by component of the system.
STEPS TO BUILD A PERSISTENT FORECASTING SYSTEM FOR DISRUPTIVE TECHNOLOGIES
An open and persistent system offers the opportunity to use a richer set of data inputs, forecasting methods, assessments, and analytical capabilities to produce more useful forecasts. A poorly designed system, by contrast, could be overwhelmed by information overload, miss correlations because of poor data organization, or never achieve a critical mass of expert or public participation. The committee believes that an open and persistent forecasting system requires substantially greater investment in both planning and implementation than traditional forecasting approaches. Eight steps to building such a system are outlined next:
1. Define the mission. The designers of the system should conduct in-depth interviews with key system stakeholders to understand their objectives. The mission or goals of the stakeholders are likely to change and expand over time. Therefore, regular meetings should be held to identify new priorities and methods to improve the existing system (feedback loop).
TABLE 7-1 Attributes of an Ideal Forecasting System

| Category | Attributes | Description |
| --- | --- | --- |
| Data sources | Diversity of people and methods | Data should come from a broad range of experts and participants from diverse countries, cultures, ages, levels of wealth, education, expertise, etc. |
| | Diversity of sources | Data should be from a broad range of sources and formats, with particular attention to non-U.S. and non-English-speaking areas. |
| | Metadata | Key metadata should be captured, such as where, when, and how the data were sourced, as well as quality, measurements of interest, and resolution. Patterns can then be distinguished by region, contributor age, quality, etc. |
| | Data liquidity, credibility, accuracy, frequency, source reliability | Should use multiple methods to ensure data accuracy, reliability, relevancy, timeliness, and frequency. Data should be characterized and stored in a way that makes them interchangeable/interoperable regardless of format or source. |
| | Baseline data | Collect historical, trend, and key reference data that can be used for comparison and analysis of new collections. |
| | Diversity of qualitative data sources | Gather data using a variety of qualitative methods such as workshops, games, simulations, opinions, text mining, or results from other technology forecasts. |
| | Diversity of quantitative data sources | Data should be sourced from a variety of data sets and types, including commercial and proprietary sources. |
| Forecasting methods | Multiple forecasting methodologies | System should utilize multiple forecasting methodologies as inputs to reduce bias and to capture the widest range of possible futures. Backcasting should be one of the processes used, starting from a handful of initial future scenarios to identify key enablers, inhibitors, and drivers of potential disruptions, with particular attention to measurements of interest, signposts, and tipping points. Vision-widening techniques (brainstorming, interviews, workshops, and open-source contributions) should be key components of the forecasting process. |
| | Novel methods | System should consider incorporating novel methods such as ARGs, virtual worlds, social networks, prediction markets, and simulations. |
| | Qualitative | System utilizes qualitative forecasting methodologies. |
| | Quantitative | System utilizes quantitative forecasting methodologies. |
| Forecasting team | Expert diversity | Team should be diversified by country, culture, age, technology discipline, etc. Use culturally appropriate incentives to maintain required levels of participation. |
| | Ongoing recruitment | Renew personnel and continually recruit new team members to ensure freshness and diversity of perspectives. |
| | Public participation | Broad and diverse public participation is critical for capturing a wide range of views, signals, and forecasts. Apply culturally appropriate incentives and viral techniques to reach and maintain a critical mass of public participation. |
| Data output | Readily available | Data should be readily available, exportable, and easily disseminated beyond the system in commonly used formats. |
| | Intuitive presentation | Output should be presented in a way that is informative and intuitive, using dashboards and advanced visualization tools. |
| | Quantitative and qualitative | Raw quantitative and qualitative data and interpretive elements are readily available for further analysis. |
| Processing tools and methods | Enablers/inhibitors | Facilitate methods to identify and monitor key enablers, inhibitors, measurements of interest, signals, signposts, and tipping points that contribute to or serve as a warning of a pending disruption. |
| | Multiple perspectives (qualitative/human) | Humans with varying backgrounds, of diverse cultures, ages, and expertise analyze data employing multiple tools and methods. |
| | Outlier events/weak signal detection | Tools and methods for finding weak signals or extreme outliers in large data sets. Tools and processes to track and monitor changes, and rates of change, in linkages between data are essential. |
| | Impact assessment processes | Employ methods to assess the impact of a potential disruptive technology and recommend ways to mitigate or capitalize on the disruption. |
| | Threshold levels and escalation processes | Employ methods to set and modify warning-signal threshold levels and to escalate potentially high-impact signals or developments to other analytical perspectives or decision makers. |
| | Forecast data object flexibility | Store data using object-oriented structures so that forecast data objects are flexible in how they are stored. Objects can be categorized in several ways, including but not limited to disruptive research, disruptive technologies, and disruptive events; relationships and structures between these objects can be restructured and analyzed. |
| | Visualization | Data should be represented intuitively and with interactive controls. System should support geospatial and temporal visualizations. |
| System attributes | Bias mitigation processes | Robust ongoing internal and external bias mitigation processes are in place. |
| | Review and self-improvement | Processes in place to review and assess why prior disruptions were either accurately predicted or missed by the platform. |
| | Persistence | Forecasts are ongoing and in real time. |
| | Availability | System should be continuously accessible and globally available. |
| | Openness | System should be open and accessible to all to contribute data, provide forecasts, analyze data, and foster community participation. The data, forecasts, and signals generated by the system are publicly available. |
| | Scalability/flexibility (hardware and software) | System should scale to accommodate large numbers of users and large datasets utilizing standardized data and interchange formats. |
| | Controlled vocabulary | Use a standard vocabulary for system benchmarks (watch, warning, signal, etc.), language, and tagging. |
| | Multiple native language support | Data should be gathered, processed, exchanged, translated, and disseminated in a broad range of languages. |
| | Incentives | Use reputation, knowledge, recognition, and other methods to incentivize participation. Monetary incentives could be considered to attract certain expert sources and research initiatives. |
| | Ease of use (accessibility, communication tools, intuitive) | Make the site easily accessible. Navigation should be intuitive, with communication tools that facilitate usability and community development. |
| Environmental considerations | Financial support | The system must be underpinned by long-term and substantial financial support to ensure that the platform can achieve its mission. |
| | Data protection | Data must be protected from outages, malicious attack, and intentional manipulation. Robust backup and recovery processes are essential. |
| | Auditing and review processes | Put processes in place to regularly review platform strengths, weaknesses, and biases and why disruptions were missed, and to audit changes to data, system architecture, hardware, or software components. |
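The "forecast data object flexibility" attribute in Table 7-1 can be sketched in code. The classes and field names below are illustrative assumptions, not part of the report; they show how disruptive research, technologies, and events might be stored as linked objects whose relationships can be restructured independently of the objects themselves.

```python
from dataclasses import dataclass, field

@dataclass
class ForecastObject:
    """Base class for forecast data objects; all names here are illustrative."""
    object_id: str
    title: str
    tags: list = field(default_factory=list)   # controlled-vocabulary tags
    links: list = field(default_factory=list)  # ids of related objects

@dataclass
class DisruptiveResearch(ForecastObject):
    maturity: str = "early"  # e.g., "early", "applied", "commercialized"

@dataclass
class DisruptiveTechnology(ForecastObject):
    enablers: list = field(default_factory=list)
    inhibitors: list = field(default_factory=list)

@dataclass
class DisruptiveEvent(ForecastObject):
    signposts: list = field(default_factory=list)

# Relationships live in the `links` lists, so they can be restructured and
# re-analyzed without changing the objects themselves (example data invented):
research = DisruptiveResearch("r1", "solid-state electrolytes")
tech = DisruptiveTechnology("t1", "fast-charging grid storage", links=["r1"])
event = DisruptiveEvent("e1", "battery cost below key threshold", links=["t1"])
```

Keeping relationships as plain id lists, rather than hard-coded object references, is one way to satisfy the table's requirement that object structures remain restructurable.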
2. Scope the mission. Define which people and resources are required to successfully build the system and meet mission objectives:
   - Secure substantial and sufficient long-term financial support.
   - Establish a small team with strong leadership for initial analysis and synthesis. This team will target methods and sources for the forecast, as well as synthesize results. To provide continuity, this team should produce regular updates along with the overall forecast. It should also learn over time from its successes and failures and adjust accordingly. Experience suggests that such teams can improve over time. A key success factor for this group is diversity of skills, expertise, culture, and demographics.
   - Identify, design, and build the necessary systems and processes required to support a highly scalable, persistent forecasting system.
   - Identify the best way to organize disparate sets of structured and unstructured data.
3. Select forecasting methodologies. The requirements of the mission and the availability of data and resources will determine the appropriate methodologies for recognizing key precursors to disruptions and for identifying as many potential disruptive events as possible.
   - Backcasting. Identify potential future disruptions and work backwards to reveal the key enablers, inhibitors, risks, uncertainties, and driving forces necessary for each disruption to occur. Distinguish key measurements of interest that can be tracked and used for signaling. Particular attention should be focused on identifying potentially important signals, signposts, and tipping points for that disruption. The backcasting process should help to crystallize the minimum data feeds and experts needed to warn of potential disruptions.
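As a rough illustration of the backcasting step, a hypothetical enabler graph (every node name below is invented) can be walked backwards from a posited disruption to the leaf-level signposts a system would actually track:

```python
# Hypothetical backcast: each disruption maps to its enablers, and each
# enabler to measurable signposts; a reverse walk yields the minimum set
# of signals to monitor for that disruption.
backcast = {
    "ubiquitous autonomous delivery": ["cheap lidar", "regulatory approval"],
    "cheap lidar": ["solid-state lidar under $100"],
    "regulatory approval": ["first national UAV corridor rules"],
}

def signposts(disruption: str) -> set:
    """Collect leaf nodes (trackable signposts) reachable from a disruption."""
    frontier, leaves = [disruption], set()
    while frontier:
        node = frontier.pop()
        children = backcast.get(node, [])
        if children:
            frontier.extend(children)
        else:
            leaves.add(node)  # no further enablers: a measurable signpost
    return leaves

print(signposts("ubiquitous autonomous delivery"))
```

The leaves of the graph are exactly the "minimum data feeds" the text describes: concrete, observable quantities rather than the disruption itself.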
   - Vision-widening techniques. Utilize traditional means (brainstorming, workshops, trend analysis, the Delphi method) as well as novel vision-widening techniques (open source, ARGs, prediction markets, social networks) to identify other potentially disruptive outcomes. The vision-widening process should reveal additional information sources and expertise required by system operators.
4. Gather information from key experts and information sources. The process of gathering information from people and other sources will need to be ongoing.
   - Assign metadata. As data are gathered, they should be tagged. Key tags include the source, when and where the data were gathered, and appropriate quality ratings (reliability, completeness, consistency, and trust).
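A minimal sketch of such tagging, assuming illustrative field names for the quality ratings (the report does not prescribe a schema):

```python
from dataclasses import dataclass
import datetime

@dataclass
class TaggedDatum:
    # Field names are illustrative, not taken from the report.
    content: str
    source: str                       # where the datum came from
    collected_at: datetime.datetime   # when it was gathered
    location: str                     # where it was gathered
    reliability: float                # quality ratings, 0.0 .. 1.0
    completeness: float
    consistency: float

datum = TaggedDatum(
    content="Prototype demonstrates 2x energy density",
    source="conference abstract",
    collected_at=datetime.datetime(2010, 3, 14, tzinfo=datetime.timezone.utc),
    location="non-U.S.",
    reliability=0.6, completeness=0.8, consistency=0.7,
)
```

Capturing these tags at ingest time is what later enables the pattern analysis by region, contributor age, and quality described in Table 7-1.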
   - Assess data sources. Select data sources that are relevant to the forecasting exercise. Do not "boil the ocean" and attempt to process all available data; instead, process the data that are relevant or potentially relevant to the goals of the forecast. Determine whether the data are readily available, semiavailable (proprietary or periodically available data), or unavailable. Relevant data feeds should be integrated into the system to support automated processing, and proxies should be developed where data are critical but unavailable. Where proprietary data sets are important, negotiating access should be explored. Information gathering from human sources should be continuous, utilizing both traditional means (workshops, the Delphi method, interviews) and novel methods (gaming, prediction markets, ARGs).
   - Normalize data. Reduce semantic inconsistency by developing domain-specific ontologies; by employing unstructured-data processing methods such as data mining, text analytics, and link analysis to create structured data from unstructured data; by using semantic web technologies; and by utilizing modern extract, transform, and load (ETL) tools to normalize dissimilar datasets.
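One hedged sketch of the normalization idea, using an invented synonym table and field mapping rather than a real ETL tool:

```python
# Minimal normalization sketch: map heterogeneous source records onto a
# common schema and a controlled vocabulary (all names are illustrative).
SYNONYMS = {
    "AI": "artificial intelligence",
    "ML": "machine learning",
    "machine-learning": "machine learning",
}

def normalize(record: dict, field_map: dict) -> dict:
    """Rename source-specific fields and collapse vocabulary variants."""
    out = {}
    for src_field, std_field in field_map.items():
        value = record.get(src_field)
        if isinstance(value, str):
            value = SYNONYMS.get(value.strip(), value.strip().lower())
        out[std_field] = value
    return out

# Two sources with different field names collapse to one schema:
a = normalize({"topic": "ML", "src": "patent db"},
              {"topic": "subject", "src": "source"})
b = normalize({"subj": "machine-learning"}, {"subj": "subject"})
assert a["subject"] == b["subject"] == "machine learning"
```

In a production system this mapping would be driven by the domain ontology and an ETL pipeline; the point here is only that dissimilar records become comparable after normalization.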
   - Where possible, gather historical reference data. Breaks in long-running trends are often signals of major disruptions and can be observed in the historical data. Historical reference data are useful for pattern recognition and trend analysis.
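A simple way to operationalize trend-break detection against baseline data is to flag points that deviate sharply from a trailing window; the window size, threshold, and series below are arbitrary illustrations, not a method endorsed by the report:

```python
import statistics

def trend_breaks(series, window=5, k=3.0):
    """Flag indices deviating from the trailing mean by more than k sigmas."""
    breaks = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu = statistics.mean(hist)
        sigma = statistics.pstdev(hist) or 1e-9  # guard against zero variance
        if abs(series[i] - mu) > k * sigma:
            breaks.append(i)
    return breaks

# A steady trend followed by an abrupt jump at index 10:
data = [10, 10.1, 10.2, 10.1, 10.3, 10.2, 10.4, 10.3, 10.5, 10.4, 25.0]
print(trend_breaks(data))
```

Real systems would use more robust detectors, but even this sketch shows why a historical baseline is a precondition: without the trailing window, there is nothing to break from.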
   - Assess and mitigate biases in data gathering. Ensure that the data being gathered are from multiple regions and cultures and that the human sources are diversified by age, language, region, culture, education, religion, and so on. Assess whether the incentives are attracting diverse, highly qualified participants; if not, determine which tools and incentives would attract and retain such participants.
5. Prioritize forecasted technologies. System operators must assess the potential impact of the forecast on society, resources, etc., and the lead time from warning to event, to determine appropriate signals to track, threshold levels, and optimal resource allocation methods.
6. Optimize processing, monitoring, and reporting tools. Processing and monitoring tools should be optimized to look for outliers and to find weak signals and signposts in noisy information environments. System users (decision makers, experts, and the public) should be able to access and analyze the real-time status of critical potential disruptions and the progress of a critical disruption relative to historical trends and breakthrough points, as well as develop an overall picture of the range of possible disruptions. At a minimum, the following tools should be included:
   - Search/query/standing query. On ideas, text, images and other media, linkages, signals, and the like.
   - Interactive interface. User ability to control and manipulate time, scope, scale, and other variables.
   - Signal threshold control. Signals and/or alerts should be generated when certain thresholds are met or events occur. In general, low thresholds should be used for high-impact signals, and high thresholds for low-impact signals.
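The impact-dependent threshold rule can be sketched directly; the signal classes and numeric values below are illustrative, not from the report:

```python
# Higher-impact signal classes trigger alerts at lower evidence levels.
THRESHOLDS = {"high_impact": 0.2, "medium_impact": 0.5, "low_impact": 0.8}

def should_alert(impact_class: str, signal_strength: float) -> bool:
    """Return True when a signal's strength crosses its class threshold."""
    return signal_strength >= THRESHOLDS[impact_class]

# The same weak signal (0.3) alerts only when the stakes are high:
assert should_alert("high_impact", 0.3)
assert not should_alert("low_impact", 0.3)
```

The asymmetry encodes the cost trade-off in the text: a missed high-impact disruption is costlier than a false alarm, so its threshold is deliberately low.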
   - Analytical tools. The system should incorporate a rich set of tools, including link analytics, pattern recognition, extrapolation, S-curves, and diffusion rates.
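Among the analytical tools listed, S-curve (logistic) models are a standard way to describe technology diffusion. A minimal sketch with assumed, unfitted parameters:

```python
import math

# Logistic (S-curve) diffusion model commonly used for technology adoption:
#   f(t) = L / (1 + exp(-k * (t - t0)))
# L is the saturation level, k the growth rate, t0 the inflection point.
# The parameter values below are illustrative, not fitted to real data.
def s_curve(t, L=1.0, k=0.8, t0=10.0):
    return L / (1.0 + math.exp(-k * (t - t0)))

def year_reaching(fraction, L=1.0, k=0.8, t0=10.0):
    """Invert the logistic to find when adoption reaches a given fraction."""
    return t0 - math.log(L / fraction - 1.0) / k

print(round(year_reaching(0.9), 1))
```

Fitting k and t0 to early adoption data and extrapolating along the curve is one concrete form of the "extrapolation" and "diffusion rates" analysis the step calls for.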
   - User-controlled visualization, presentation, and dashboard tools. Data should be presented using multiple visualization methods and formats. Dashboards should be designed to engage decision makers.
   - Standard and special reports. The system should generate standardized as well as user-defined reports. Templates should be developed to enhance ease of use and to support comparison and analysis across reporting periods.
7. Develop resource allocation and decision support tools. Decision makers will need tools to constantly track and optimize their resource portfolios and decisions in response to changes in the probabilities of potential disruptions.
8. Assess, audit, provide feedback, and improve forecasts and forecasting methodologies. Process and system improvement should be ongoing. Operators should review why previous disruptions were missed (bias, lack of information, lack of vision, poor processes, lack of resources, and the like) and what could be done to overcome these shortcomings. Operators should also seek feedback from users and decision makers about the usefulness of the forecasts derived from the system and the impact the forecasts had on decision making. An understanding of how users apply the forecasts in day-to-day decision making would help operators refine the system. Finally, audit tracking and system protection processes must be put in place to ensure that system results are not purposefully hidden, manipulated, or lost.
CONCLUSION
Postmortem analysis of disruptive events often reveals that all the information necessary to forecast a disruptive event was available but missed for a variety of reasons, including the following:
- Not knowing enough to ask a question,
- Asking the right question at the wrong time,
- Assuming that future developments will resemble past developments,
- Assuming one's beliefs are held by everyone,
- Fragmentation of the information,
- Information overload,
- Bias (institutional, communal, personal), and
- Lack of vision.
The committee believes that a well-designed persistent forecasting system focused on continual self-improvement and bias mitigation can address many of these issues, reducing uncertainty and the likelihood of surprise and leading to improved decision making and resource allocation.
The construction and operation of a persistent forecasting system is a large and complex task. Creating an ideal system is iterative and may take several years. System operators and sponsors must improve the system by installing processes to continually assess, audit, and evaluate its strengths and weaknesses. These assessments should be performed both by internal stakeholders and by unaffiliated outsiders.
Persistent systems require continuing sponsorship and organizational support. Building and maintaining an ideal, open, and persistent forecasting platform will not be inexpensive. A professional staff is needed to build and operate it, and it requires a robust infrastructure, access to quality data, enabling technologies, and marketing to attract a broad range of participants. Consistent and reliable funding is critical to the successful development, implementation, and operation of the system.