
Persistent Forecasting of Disruptive Technologies—Report 2 (2010)

Chapter: Transcript of the Workshop for Appendix D

Suggested Citation:"Transcript of the Workshop for Appendix D." National Research Council. 2010. Persistent Forecasting of Disruptive Technologies—Report 2. Washington, DC: The National Academies Press. doi: 10.17226/12834.

Appendix D
Transcript of the Workshop

8:30 A.M. – 10:30 A.M.


Due to a recording issue at the start of the morning, the introductions of some attendees are not shown here. Transcripts were not edited.

SCHWARTZ:

…Forbes, a student of Paul Saffo, a member of the committee, and I spend most of my time in GBN, helping organizations think about the future in a great variety of ways. And what excites me about this day, frankly, is we’ve been at this now for a couple of years pulling all the pieces together and making sense of it.

MANSFIELD:

I’m Carolyn Mansfield. I’m a new addition to the Monitor 360 team so I’m excited to be here because Derek’s had me doing a crash course on everything you guys have put together so far and I’m just excited to see the ideas that crystallize out of the day.

McCORMICK:

I’m Mike McCormick. I’m with McLiera Partners and we basically help companies use disruptive technologies in the marketplace to gain market share. I’m excited today about basically seeing different perspectives. I think one is a good friend of mine who’s got a great definition of wisdom, which is being able to see the same situation from multiple perspectives simultaneously at a time and I think this is kind of an interesting opportunity to be involved in.

DREW:

I’m Steve Drew. I’m a member of the committee and a consultant of the pharmaceutical and biotech industries. What excites me is the role that biology will play in every aspect of our lives, all of the technologies, all of the directions that we go. And I’m seeing ways in which that’s coming together.

LONG:

I’m Darrell Long. I’m a professor of computer science at University of California. That’s my day job. I also spend a lot of time working with the government Department of Defense and intelligence communities in particular. And I used to be a member of the TIGER committee involved here. What excites me about this is looking at technologies coming not just from my discipline but from other disciplines, physics, engineering, biology and seeing how they can come together and try to understand what might happen when these things come together.

VELOSA:

Hi. I’m Al Velosa from Gartner, another market analyst trying to, and I think failing, as somebody else said, to forecast technologies markets and all sorts of good things like that. But it’s a really fun activity to do. And what I’m really excited about actually is to
look at somebody else’s, actually just looking at markets with this set of talent because I always learn something from talking to folks like you.

TWOHEY:

Hi. My name’s Paul Twohey. I’m a recovering academic so I’m now an entrepreneur and so I used to work at Palantir and now I’ve got a startup that we’re hoping is going to disrupt some markets ourselves. And I’m kind of excited about getting a glimpse into the future with some really smart people and making sure it turns out right.

LYBRAND:

Hi. I’m Fred Lybrand. I’m on the committee. I run the U.S. operations for an advanced textiles company that’s headquartered out of Europe and have started a company around food safety and nutrition using IT perspectives. And similar to Peter, I’m enthused about the opportunity for synthesis in a lot of the ideas that we’ve been talking about for almost two years now.

ZYDA:

Hi. I’m Mike Zyda. I’m the founder of the Games Program at USC, the Director of the USC GamePipe Laboratory. I’m also advisor to five startups, probably the most, two most exciting are Emsense, which is a brain sensing, human emotion modeling company, which now has offices in San Francisco, New York, Chicago, London. We started this in 2004. It's growing real quick. And also Happynin Games, which we founded in September. My brother is involved in that. And I hired 15 of my own students from my own program, which is pretty fun. How does my professional work link to this topic? I’m just kind of a disruptive kind of guy and maybe you need – [General laughter] – someone like that, and I just -- So what I typically do is I go and do what makes perfect sense to me and I just go make it happen. And this is, you know, I tried this in a military school. I was at the Naval Post-Graduate School for 21 years and founded the largest cross-disciplinary degree program there at the MOVES Institute. Built a hit game inside of the school, America’s Army, through its almost four million registered players. No one told me you’re not supposed to build a hit game, build an operating hit game inside of a university but what the heck, I just do what I feel compelled to do. I’ve also helped found a nonprofit in the last year called The Fight Against Obesity Foundation and it is sponsored by Steve Harvey, the comedian, if you know that. We’re just about to buy a building in Inglewood, California, to support a group that encourages proper diet choices and fitness. Anyway, what excites me about this meeting? A lot of interesting people, San Francisco’s fun, Gilman Louie, of course. You know, I always like to come to his meetings and listen to, know what he has to say and so I think it’s lots of fun to talk about the future. I think it’s really hard to predict the future. I think it’s, the future just happens and I think sometimes you have to just jump from what you’re doing and go to the next thing. So I got to do that. I quit my tenured full professor job on my 50th birthday and took a new position at USC and founded a game program. So that’s the kind of guy I am and that’s why I’m here.

GOLDHAMMER:

Thanks, Michael. Philip?

WONG:

I’m Philip Wong. I’m with Walt Disney Parks and Resorts. I’m the director of Business Planning and Development. I have a small team that basically looks at any sort of strategic issues and population actually has, so these can range from issues around technology, they can also range from capital restructuring. So basically – and also forecasting and planning. So we cover a whole range of issues all across the company. The reason I’m interested – I’m going to do this the other way around. Before I actually joined Disney I was in technology for about close to a decade, started off my career at NASA, worked at actually Hughes Communications, Inc for a while, designed a satellite
system for the ICO Global Communications, which was a mobile satellite communication system. Didn't fare so well. Realized the business implications in that.

SCHWARTZ:

Nor did Iridium.

WONG:

Nor did Iridium but it was a great technology. And then joined a couple startups and we actually took one startup that I worked on, which is an IP company, CallWave public, a number of years ago, and so really sort of enjoyed working in that environment, which was very disruptive in terms of the technology that we were looking at. And what I thought was fascinating about that was that the disruption in the technology field came from sort of the down market and not necessarily the up market, which is the performance, sort of the performance aspects of the technology. And so I’ve always been, even what I do now I think we’re all constantly looking and being careful about what could disrupt our company’s business. And so I’m a firm believer in a Christensen sort of framework for disruptive innovation and so just very excited to be participating in sort of a forum where we can actually discuss disruptive technology.

GOLDHAMMER:

Great, thank you. Rich?

GENIK:

I’m Rich Genik from Wayne State University School of Medicine. I’m the Director of Emergent Technology Research there. We mainly are dealing with neuroscience and neuroimaging, looking at, trying to do two things at once, which I was reading and talking so I didn't do too well there, like talking on a cell phone and driving a car. Being from Detroit, we got a lot of support from the auto industry, used to have a lot of support from the auto industry. [General laughter] What I’m excited to be about, be here today is looking at approaches to predicting future disruptive technologies that are non-Delphic models and also the difference between forecasting and predicting and to be with a group and participate in looking at those specific items.

GOLDHAMMER:

Great.

WINARSKY:

I’m Norman Winarsky. I’m on the panel as well. At SRI I am responsible for launching ventures in licenses from SRI, disruptive technology opportunities. I’m excited because I’m going to learn from bright people.

GOLDHAMMER:

Good. Jim?

O’CONNOR:

Hi. My name is Jim O’Connor. I think the most relevant experience from my past is the fact that I was at Yahoo! Finance for seven years as the Director of Product Management, spending most of my time figuring out how to manage large sets of data, translate them and display them in a very easy to consume fashion, not necessarily to finance professionals but to the average retail investor, as well as working on communities, trying to figure out what kind of intent and how to mine that data so that it would be more helpful to the retail investor. My current position, I’m a partner at a small company called Mondia down in Mountain View, where we’re a startup incubator/accelerator, helping small startups move from the idea stage to reality as quickly as possible. I think I’m most excited here – when I went through the bios I realized I’m the least educated person in the room probably, which is really exciting for me 'cause I enjoy being, you know, not the dumb guy but the least educated person. [General laughter] Because I know I’m going to walk out of here tomorrow, you know, or today, smarter than I was when I walked in this morning. And going through all the papers last night, I think the most
interesting thing for me really is taking this really massive boil-the-ocean project and try and figure out how does it go from where it is, kind of a concept stage, into reality and in particular, kind of what the interfaces look like because there’s, you know, a wide breadth of ideas in what we all went through last night. And then also I think in one of the papers there was a comment that said that it’s very difficult to predict the future but the more you know about the probabilities and the possibilities and discuss them, the more ready you are to react to them if they actually become a possibility. I think that’s something that’s very exciting.

GOLDHAMMER:

Great, thank you.

DOLAN:

I’m Phil Dolan with Monitor 360. Apologies for showing up a few minutes late. I do most of my work with Herrick, Feinstein LLP, the national security establishment. What I’m most excited about is not disruptive technology per se but disruptions that cut across domains and how technologies that are small improvements in one domain in fact can be dramatically disruptive in another and vice versa.

GOLDHAMMER:

Great. Did I miss anyone?

UNKNOWN:

Gilman.

GOLDHAMMER:

No. We got Gilman. But now what I’d like to do is pass the baton to Gilman, who is going to set some context for us about sort of what the committee has been doing for the last, I think it’s a year and a half now.

UNKNOWN:

Two years.

GOLDHAMMER:

Almost two years? Almost two years, setting some context for what the committee’s been up to, what we’re going to be doing today and what success would look like at the end of the day.

LOUIE:

Thank you. So as I said earlier, my part-time job is being a venture capitalist so basically what I do is I kind of sit on my butt in my conference room and listen to startups pitch us, usually slide ware. You know, they come on in and they say they’re going to change the world, have a great idea and they throw up a bunch of slides. So one of the things I learned from that exercise is, you know, it’s a very effective system of going through lots of, lots of ideas of which the entrepreneur has done very little work. That’s the key. The entrepreneur has done very little work. And so one of the goals of this exercise that we’re going to be going through today is think of ourselves as a startup and can we come up with our own pitch deck to be able to say before we build anything, before the government goes off and invests whatever large sums of money they usually invest in big systems, you know, think about what are the possibilities and what could this thing look like before we actually build it. So that’s kind of one of the objectives. I mean, another objective is, you know, as in any 1.0 startup, you know, a guy comes on in or a woman comes on in and says, “I’ve got the billion dollar idea. Please give me $100 million.” Sometimes they come in, “Please give me $2 billion to give me the $1 billion idea.” Whatever. The point is, I usually come back and say, “Well, you know, I’ve only got this little bit of money. I’ll give you a little bit of money if you can prove out the concept.” And so one of the exercises on any kind of 1.0 activity, and we consider this kind of a 1.0 activity, is what is the least amount of money, the least amount of energy we could expend to even prove out that the idea has any traction. So this is not about an exercise
about building, you know, the system to end all systems in the next 12 months. It’s not the Manhattan Project. But, you know, can we come up with some sort of a framework to think about what the problems are. And so this is airplane ware, which is kind of traditional for any startup. Airplane ware is when you’ve got a meeting with a venture capitalist and you’re flying across the country, as I was last night, you know, I need to put a pitch deck together. I start working on my slides. So what’s good about slide ware, airplane ware, is the latest thinking, good and bad, all consolidated into a single pitch deck, okay? So there’s very little thought but a lot of feelings that have gone into the slide deck, which is kind of what we started off with when we started off this committee, which is we had a lot of hunches, we had a lot of ideas. We wrote a first report kind of looking at the history of forecasting, put some of the concepts together as hey, somebody should think about these kinds of concepts. Most of it is what I call feeling based rather than fact based, which is okay. Any new endeavor, particularly disruptive technologies, starts off with a feeling, hardly ever starts off with real fact and data because there fundamentally are not facts or data to start with. So one of the things we started thinking about maybe was, you know, before we jumped into technology, just think why we have disruptive events. And on the [..?..] of these kind of disruptive activities -- it could be a piece of technology, it could be, you know, not seeing 911, Pearl Harbor, whatever it is that is disruptive -- why didn't we catch it? And then of course whenever you look backwards it’s immediately obvious that you should have seen it. So we came up with kind of our laundry list of what causes these kinds of surprises. So the first thing is not knowing enough to even ask a question, right? When you kind of get smacked up on the side of the head it’s usually because you weren't looking at where the punch was coming from. So not knowing enough to ask a question or you could have asked a really good question but you didn't ask that at the right time. You know, the environment wasn't right for somebody to get good signals or responses or answers out of it. This is my favorite. This is the problem of experts. They assume what has happened in the past is going to happen again, right? I’ve done this 20 years ago. It was a total failure. This young kid is dumber than I am. She will totally fail as well. A lot about mirroring, this idea that somebody else is going to tackle the problem, look at the situation the same way I’m going to do it. They’ll never go down that path. That makes no logical sense. That is totally crazy. A rational person would never do this. One of the things interesting about disruptive tech is rational people don’t make disruptive technologists. Highly irrational, highly focused, somewhat crazy, definitely not normal people. If you were normal you’d probably have a day job and you’d go home, put the kids to bed and enjoy life. If you’re abnormal, you create companies like Oracle, Apple, Google. Information fragmentation. Lots of information around. There’s lots of noise and it’s all over the place and you can’t figure out which is the good information from the bad information, information overload, way too much stuff coming in. I can’t figure out what’s going on. Biased institutions, bias, your own personal bias, bias of the community, dismissed, potential outcomes. 
And finally, the most important one, is my favorite, came out of the 911 Commission on why we were able, not able to predict it: a lack of vision. There’s also another one I didn't put on here, is dismissing visionaries as crazy, uneducated or not experienced enough to understand what the real world is all about. So we had to wrestle with what is a disruptive technology, you know? Is that something that just suddenly appears on the scene and changes the world overnight or is it something that slow brews for 20 years and is something that changes that has sudden impact? So we came up with these kind of four concepts around disruptive tech and everybody has their own version but this is our committee’s definition. It’s innovative technology which triggers sudden and unexpected effects. It doesn't mean a new technology which triggers sudden and unexpected effects, just saying innovative technology. It could have just appeared on the scene or it could
have been around for a long time and somebody figured out how to use it in a different way. It refers to that type of technology that incurs a sudden change of established technologies in markets. These have real impact, right? It’s really hard to have something that is disruptive and has no impact so impact is really, really key. It can include technologies that really change the balance of global power. There’s this kind of hats off to our DOD government friends but in many cases technologies have broad impact. They just don’t impact a particular region. They may start off impacting a particular region or a particular market segment but it quickly begins to spread and has global impact pretty quickly, especially these days. Then of course they’re hard to predict, they’re highly infrequent and, you know, there’s lots of factors that make it hard to see it coming. Huge difference between evolving tech and disruptive tech. So Al Shaffer, who was director of plans and programs inside the DOD in 2005, said from the DOD’s perspective there are three reasons why we’ve really got to understand disruptive tech. One is just to be competitive, right? It doesn't matter whether you’re in a corporate environment or whether you’re in a nation state environment from a military point of view. If you don’t stay current on technologies and begin to try to think about how technologies can impact you, you’re no longer going to be competitive in the marketplace. This is kind of obvious to all of us in this room. The U.S. is not the sole keeper, creator and distributor of high quality technologies that have disruptive impact. Pretty important for kind of policy issues, which was in the old days that we’re going to solve the problem by not letting any of the good stuff out. Doesn't make a whole lot of sense because – but now we have a problem, is does the good stuff even get in. And then quite frankly, we need to stay engaged with the rest of the world. Now I’m not just talking about the rest of the world from the defense, military point of view. I think DOD does a pretty good job – you know, nobody’s perfect – but does a pretty good job of understanding what I call, you know, what the big systems are that they may run into done by big nation states that require billions of dollars of investments. We have whole organizations who think about what those platforms might look like. We have whole organizations that go out to listen what other people are doing, and some organizations that go out and steal what other people are doing. Okay, that’s not what you’re talking about. What we’re talking about is kind of disruptive technologies in plain view. What are the kind of technologies out there that we all take for granted, they don’t have obvious military applications and we wake up, we go into a country and this surprises us in a very fundamental, profound disruptive way. IEDs are kind of a good example of that, right? But there are, you know, many more kinds of technologies. The Internet, mobile phones, well next generation wireless toys, all could have an impact to Department of Defense. And so what they asked us to do is don’t think like us because we already know how to think like us. Think like the market. Can you encourage a group of thought leaders from around the world to participate in a system that has value well beyond the Department of Defense of the United States that thinks about disruptive technology and it’s okay if it’s shareable by everybody. 
You know, we can figure out what we want to do with it and use it our way. Chinese can figure out what they want to do with it and use it their way, the Russians, Israelis, you know, GM, Nokia. If you have a valuable system it should be valuable to everybody. So is there a way to kind of come up with, for lack of a better term, kind of the Wikipedia of disruptive technologies. So what makes a good forecast? Many people here are forecasters. A few of you are actually people who think they, they do predictions. But a good forecast is not necessarily an accurate forecast, right, because it’s really hard to know when you make a forecast whether or not you’re going to be accurate, right? Really hard to do. You can go well, you know, this person has a good batting average but at that moment at the plate that person could strike out, right? So what makes a good forecast? So first of all, in some ways it’s more important to understand the impact of potential disruptive technologies
than actually understanding the technologies themselves. What is the world or what could the world look like, right? Hey, we might have gotten it wrong. It might not have been an electric car or a hybrid car. It might be some other kind of car, another kind of vehicle. What’s important to realize is, in this particular view of the future, that we may not be using cars that consume petroleum. In some ways that is more important than figuring out this specific technology this week, which we think is going to cause that to happen. You should increase the lead time for stakeholders to plan and address potential disruptions. In the range of potential impacts that are out there a good forecast gives a person a view to help them prepare and increase the time in which they begin to think about how they plan and how are they going to react to potential futures. This is also very important. A good forecast should allow somebody to slightly change the odds from 100% random to slightly better than random. So should think of it as card-counting in Black Jack. Doesn't guarantee at any moment in time that you’re going to have a winning hand but over the long term of playing the game out, you beat the house odds by just changing it just a little bit. A good forecast is like counting cards. Doesn't guarantee a win, it just begins to subtly shift the odds in your favor. And most importantly and a lot of forecasters forget this, is at the end of all the forecasts is what do we look for to see whether or not a forecast is coming true or not coming true? What are the signals, what are the signposts, what are the thresholds, what are the tipping points that we should be out there listening and monitoring for to say oh, my God, it’s happening? So think of it as a chess game. You’re sitting there and you’re playing a Grand Master and the Grand Master looks at the chessboard, in about ten seconds says, “Oh, I see a pattern here. It just kind of looks like that game. I know my next eight moves.” To a novice, you look at the board and go, “I don't know what the heck to do next.” So an early warning system is kind of having what I call that opening book in a chess program, right? Now how can we fill that opening book, those pattern recognitions that allows somebody to say, “Hey, this might be coming true, this may not be coming true”? So when you would see me down whining in the TIGER Committee – so the TIGER Committee is this standing committee for the National Academies of Science in which they put really, really smart people and a few not so smart people, like myself, in a room to think about these problems. And we were just sitting around whining about how poorly we have done in forecasts. The Department of Defense, the intelligence community has effort after effort after effort to produce what is fundamentally the same list of stuff. So the general process is we go out, we might use the Delphi method, we might go out and do a survey or we might have some analytical exercise and you always come out with the same list. And we kind of say why is this list always the same? There’s always bio, nano, you know, computation. Recently we’ve added neural, you know. There might be two more layers of depth in there but it’s always the same list. And if you go back twenty years and kind of look at historical forecasts, there’s always the same list. But what was amazing is the list, how inaccurate and how wrong it is. 
In fact the greater level of experts participating in the forecast increases the likelihood that that forecast is going to be more inaccurate, which is kind of weird, right? You stand a better chance of looking into the future by asking people who read science fiction with no education than asking people who are highly educated in the particular subject matter, expert, and say, “Can you predict the future?” So we said, you know, one of the causes could be because we always go to the same group of experts. You all speak English, all cleared, which, you know, to be cleared it automatically takes even a population from here down to five people so, you know, it’s highly Western oriented, highly American bias. Particularly on the technology side it is high tech bias. We like shiny objects. We like really expensive shiny objects. We like really expensive shiny objects that nobody else can see, right? That’s our bias, you know? If it has like bolts hanging off and a big airplane, right, and if it has vacuum tubes on the inside, we
immediately dismiss it as so yesterday and sometimes that could cause some of the lack of understanding of what the possibilities are. We’ve very tech focused and we’re very list focused. We’re not impact focused and we don’t explore the secondary effects, which is if you had all these technologies what will you do with them beyond the obvious use of those technologies? Because the most impactful disruptive technologies typically aren’t new technologies but aggregations in a system of existing technologies used in a new and profoundly different way that nobody ever anticipated before, right? So you have these four secondary effects, not just look at wow, you know, it’s nano, it’s really small. Well that’s kind of interesting but what impact, how could that be used to create something else? This is really important for the next 15 or 20 years because there is this gut feeling that we’re kind of like once again, just like the Einsteinian revolution, on that brink where you’re going to have this convergence of technologies, science, quite frankly the human condition coming together to create these really unbelievable opportunities for great disruptions and we just quite don’t know where it’s going to come from or from any one particular field of science.

 

Forecasts typically are going to provide snapshots that are increasingly obsolete. The moment you forecast it it’s over. There is this overwhelming tendency, particularly for people who use the Delphi approach, to go for a consensus view. That’s pretty good when you try and forecast technologies but disruptive technologies you’re really more interested in the tails. So you’re more interested in many cases in the stuff that people dismiss than the stuff that they agreed upon. And so one hunch we have is you should do the consensus view, use that as the mask and ignore it, right, and then get to the tails. And finally, these forecasts are very, very difficult to make actionable. So we spend a lot of time talking to ourselves, talking to other folks who participate in creations of these kinds of forecasts, we talk to technologists, we talk to folks from the Department of Defense, we talk to some people from other countries. We went out to different countries and explore around. And so here’s our hunches. Again these are hunches. There’s no foundation in fact or proof. Our hunches. A good forecasting system should be persistent, right, should be kind of hey, gee, you know, what’s the current thinking, you know, pull it up on your website, be able to kind of go through it and scan it and have it try to be as up to date as you possibly can. So it should be living rather than a moment in time. It has to be not focused on DOD needs because if you focus on DOD you start focusing on things that go boom. Things that go boom takes a certain logical way of kind of building down the path of things that go boom. War may not be about in the future of things that go boom, right? It might be -- remember, war is the final stage of making somebody else do something that they don’t want to do and you’ve exhausted all other possibilities. That is the military’s application of force. There are many other kinds of force and potential force that we may not be considering which may be the definition of war in the future that is not the definition of war today.
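
To make one of those hunches concrete, the idea of taking the consensus view, treating it as a mask to set aside, and then studying the tails, a minimal sketch in Python might look like the following. The function name, the one-half cutoff, and the sample topics are assumptions made for illustration, not anything the committee specified.

    from collections import Counter

    def tail_forecasts(panel_responses, consensus_fraction=0.5):
        # panel_responses: one list of forecast topics per panelist.
        # Topics named by at least consensus_fraction of panelists form the
        # consensus "mask"; everything else is the tail worth a closer look.
        counts = Counter(topic for response in panel_responses for topic in set(response))
        cutoff = consensus_fraction * len(panel_responses)
        consensus = {topic for topic, count in counts.items() if count >= cutoff}
        tails = set(counts) - consensus
        return consensus, tails

    responses = [
        ["nano", "bio", "computation", "ubiquitous sensing"],
        ["nano", "bio", "neural", "computation"],
        ["bio", "computation", "nano", "cheap satellite swarms"],
    ]
    consensus, tails = tail_forecasts(responses)
    print("mask (the usual list):", sorted(consensus))
    print("tails (worth studying):", sorted(tails))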

 

Third point. Don’t ask the experts. Ask the people who are most likely who are going to be affected by the great disruptive changes. Ask the people who most likely are going to create those disruptive technologies and it’s partly not going to be, though there may be a few in this room, that many who look like us. They’re probably people today who are just kids. And what we call kids, anybody under the age of 30. The second thing of that is we said besides go young, look at what they’re betting their lives on. After you finish your post-doc program, what would be the great program for you to work on next that you want to do, not what your professor wants you to do and what your department head wants you to do. What is it, as an entrepreneur, that you’re willing to risk the next four years of your career and life to go pursue? Ask those kinds of questions. Go abroad and
don’t ask them in English. You know, it’s kind of fun. I grew up in a household, I’m the only English speaker in my household. My 4-year-old and 6-year-old speak three languages, English, Mandarin and Cantonese and learning Japanese now, right? They give me a completely different answer in English than they give mommy in Chinese, right? So a hunch is if you ask somebody in English they may give you what they think you want to hear versus listening to what they would normally be talking about in their own language that naturally occurs. And the subtlety of language is really, really important. Assume the world is lumpy. I know everybody read “The World is Flat”. I know we think this is a global world. Technologies impact people differently. Different countries, different technology clusters have different priorities. So if you’re sitting there in the Middle East and you worry about what’s going to be life like when oil kind of is no longer important in the world, and that may be a completely set of priorities than somebody in India trying to figure out how do I deal with billions and billions of people who are starving and get them into the modern world, versus somebody who’s sitting off in Europe thinking about, you know, the next Collider project, right? The world, while maybe relatively flat, I suspect, we suspect it’s very lumpy along the way and understanding the lumpiness is important. One methodology doesn't fit all. You know, we don’t believe, after kind of looking at all these approaches that we can create one approach that will obsolete all other forecasting approaches and our gut hunch is that we should consolidate lots of different approaches into kind of this grant repository, a multiple repository, that …..[Mic noise] This was highly debated, particularly because we are the National Academy of Sciences. Our committee thinks there’s value of engaging the crowd as well as experts. So crowd sourcing we think has a role in this as well as expert sourcing. We’re not a subscriber to either camp that believes one replaces the other. We actually think both are important. How to use the crowds and how to use the experts is something that we kind of wrestle with and try to figure out. Web technologies we think will be very useful. Don’t boil the ocean. We said that already. Don’t launch a Manhattan Project. Any forecasting should have more than one future being prognosticated and we think backcasting may be very useful as a tool to kind of figure out how to develop a signals pattern that can actually be in the monitor and it needs to be impact focused rather than ….[Mike noise].

 

So forecasting disruptive technologies. Four key things that we think that any particular forecast or any particular technology or impact should include. One, it should include a vision. Forecast a reality describing the vague way. Trying to be too specific is actually a bad thing in many cases. It should include a measurement of interest or measurements of interest. You know, what’s the thing that will change if you change the tipping point could be the cost of energy stored in a unit, a mass, that once that number crosses a threshold, that is the key thing that starts everything flowing. There should be some signpost. Hey, you know, these things happen. You know, there’s an indication that this either will happen or might happen or can’t happen, and then the actual signals themselves. Report 1. You guys, I don't know if you guys had a chance to read our lovely Report 1 but it’s long and boring and will put you to sleep. But it did have these six major sections, which is, you know, basically like just looking at the past, looking at the forecasting approaches, some things that we talked about and discussed a lot of issues around bias because we think that was a really, really critical thing that basically handicaps most forecasting approaches. And then we looked at some persistent systems.
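
Read literally, those four elements (a vision, measurements of interest with their tipping points, signposts, and signals) suggest a simple record structure for each forecast. The sketch below is only one possible reading; the class names, field names, and example values are illustrative assumptions.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Measurement:
        name: str             # e.g., cost of energy stored per unit mass
        unit: str
        tipping_point: float  # the threshold that "starts everything flowing"

    @dataclass
    class ForecastEntry:
        vision: str                      # the future, described in a deliberately vague way
        measurements: List[Measurement]  # what to measure and where the tipping points sit
        signposts: List[str]             # indications that it will, might, or cannot happen
        signals: List[str]               # raw observables to monitor persistently

    entry = ForecastEntry(
        vision="Personal transport no longer depends on petroleum.",
        measurements=[Measurement("cost of stored energy", "USD per kWh", 100.0)],
        signposts=["A major automaker stops new internal-combustion development"],
        signals=["Battery pack price announcements", "Charging-station permit filings"],
    )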

 

So why are we here? We want a lot of new ideas and some old ones. We want to learn from the experience of folks, we want to explore some new methodologies as well as figuring out what existing methodologies could be used in a unique way that could add
to the answers. And we want to develop a framework for Version 1.0. So when we thought about Version 1.0 and somebody said, “So what is Version 1.0? I mean, what do you guys really want?”, first is start with the output, right? Whenever you design a video game, start with what the screen shots will look like, right, because if the screen shots aren’t that exciting and the kids don’t want to play it, then it doesn't matter all the great algorithms you have in the back side, doesn't matter how good the input was, right? Start with the output. Then think about what are the sources that you have, what are the resources that are out there that you can actually provide for useful input once you figure out what people would actually use on the output. Then define the methodology. Come up with a simple block diagram of both the human process as well as the machine process. It’s not just a machine, you know, of computer science mapping. It’s that you are going to go through a persistent forecasting system where there are humans, there are computers, there’s information sources. Can you define kind of a high level block diagram of what that would look like? Could you come up with a way of tracking signals and tipping points and then at the end of this all the reasons why this is going to fail, won’t work or some of the challenges that we’re going to run into. So somebody asked me what’s the ideal output look like. Now this is my gut. I don’t want to bias you to work on this list. But the thing that fascinated me most was, you know, in my prior role I ran In-Q-Tel for the Central Intelligence Agency, which is kind of a venturing organization to go out and get good ideas in Silicon Valley and other places in the United States. And so the CIA comes out with this book called the “World’s Fact Book.” Basically it’s this book and you go by country and it lists kind of all the key attributes of that country and some of the issues it has. So that kind of biased me in saying gee, you know, it would be great to have kind of the “World Fact Book” for forecasting. I can flip to a country, you know, I can kind of go and say well, Georgia, what are the issues in Georgia today? Now what’s your technical bets? What are your universities thinking about, what kinds of technologies are going to impact them? But most importantly, what are their big knotty problems? I suspect that if you try to figure out people’s problems, you put resources into solving problems or creating opportunities. So if you had an output that basically didn't say, you know, here’s the world and here’s the ten technologies that impact the world, I think it would be more useful to say, you know, by country or region or by technical cluster, here’s the problems and opportunities they’re going to work on. Here’s how they’re beginning to think about the problems. These are interesting sources of technologies and uses of technologies.
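
One way to picture a single page of that kind of "World Factbook for forecasting" output, keyed by country, region, or technology cluster, is the minimal sketch below. The field names and placeholder content are assumptions for illustration only.

    # One illustrative "page", keyed by country, region, or technology cluster.
    region_entry = {
        "region": "Example country or technology cluster",
        "knotty_problems": ["Problems the region is putting resources against"],
        "opportunities": ["Opportunities it is organizing around"],
        "technical_bets": ["What its universities, startups, and funders are betting on"],
        "interesting_sources": ["Labs, companies, and communities worth monitoring"],
        "signals_to_watch": ["Events that would indicate the bets are paying off"],
    }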

 

So there’s a bunch of questions that we’ve got to ask ourselves if we actually build this system. A), would anybody use it? A good question that we get repeatedly is since this is kind of sponsored by the Department of Defense, why would anybody else in the world participate? Why should they even trust the system given the history of U.S.-based technologists? One argument says well, you know, the Internet was kind of created by the Department of Defense. How about this thing? We don’t know the answer to that but it’s a key question because we can build a great system but we’re not sure, if you believe it can be a great system, we’re not sure anybody would actually use it. If nobody uses it, it’s not a great system. So figuring that out is important. There were some arguments by National Academies' members whether or not this was technically feasible. I’m less concerned about that. It’s just an issue of people not being able to see beyond their noses. And as I said, you know, what’s the minimum level of effort to test the viability? What’s the least amount that we can do to see if this is going to have traction? So that’s my airplane ware and, you know, airplane ware, typically about 80% of it is raw but is a good starting position to begin to think through what the problems are.


GOLDHAMMER:

Gilman, would you also just walk us through, as a way to start here, this?

SCHWARTZ:

Jessie, three other people came into the room.

GOLDHAMMER:

Yes. Let’s do quick introductions for folks who just arrived. Thank you, Peter.

VONOG:

Hello. I’m Stan Vonog. I founded two startups here, currently second and Gilman is an investor, his venture firm is an investor in our startup. And I come from Ukraine and I was educated in Russia, mostly at the Institute of Physics and Technology. And I won a couple of software design competitions worldwide and presented to many people like Bill Gates, etc. in Russia, all kinds of cool technology. So I’m very interested to be here. It’s all very interesting and I’m excited.

GOLDHAMMER:

Great. Who else joined us? Yes?

CULPEPPER:

I’m Mark Culpepper, Chief Technology Officer at SunEdison. We basically do distributed generation of photovoltaics (PV) systems on commercial, government, and utility rooftops. My background, my agency degree is international economics. I’ve been in technology ever since college primarily in infrastructure, what I would call lightweight infrastructure, so data communications, telecommunications and then transitioned from that into distributed generation and PV about four years ago.

GOLDHAMMER:

Great. And one more? Lynn?

CARRUTHERS:

Oh, hi.

GOLDHAMMER:

We’ll introduce Lynn in a moment. Hang on, Lynn. Gilman, would you mind --We’re going to use this as a basis for some of the discussions. If you could just give us a high level description of what this is, perhaps answering some of those questions, that would be very helpful.

LOUIE:

So let me kind of back up. One of the challenges we had when we did our first draft report from some of the monitors – monitors are kind of like people who review your paper to see whether or not it’s publishable, every much like submitting your work as a Ph.D. candidate and we do have peer reviews. It’s a very important process. And so one of the concerns that the reviewers had is, you know, this is really interesting, provides a great background but, you know, you haven't given the readers enough of a framework to think through how you would even go about building out a system to accomplish the goals that the committee wants to accomplish. So one of the recommendations is to put together a flow diagram or traditional block diagram, you know, one or the other or some hybrid approach, that basically describes what you’re talking about. So let me start off by saying the following. This is raw, okay? So please don’t take it too seriously. We understand that it’s raw but – and let me explain why we think it’s raw and then why we think it’s still valuable. In the old days most of you -- about half of the room can remember. In the old days computer systems used to be batch operations. You used to take a pile of cards, used to submit it down in the basement of some building somewhere, a bunch of geeks would load it up and the next morning you’d get your report, right? And so when CRT started showing up, or even Teletype 33 started showing up – you’ve got to be really old to remember what the Teletype 33 is, but when those started showing up people started thinking about gee, you know, this computational environment, this real
time kind of persistent computing environment, it’s a lot different than the batch environment. So how do we think about that world? So one is to leap ahead and say well, you know, now that we have kind of real time computing we should be on the cloud, there should be all this stuff, it should be persistent, it should be everywhere, right? But if you were sitting in that basement, all that would be gibberish, right? You wouldn't understand what you’re talking about. So what we had to explain to people back in the batch days was real time systems was really batched but just done really, really fast. [General laughter] Something you don’t have to change too much about how you think about the problem. You just need to use the same methodology. You get that white and black book that IBM published about system, design your system, flow charts that they published in the 1970s, and it’s still the same thing but we’ll do it maybe a hundred times a day rather than one time a day. But don’t worry. Nothing will change. Your life won’t be too different. This is that version. Life will not change forecaster, Ms. Forecaster. We’re just going to do it many, many times really, really fast, okay? So we understand that that is a lie but it is a useful approach to think through the problem. So in the traditional approach of forecasting we said to ourselves, you know, you’ve got to really define the project. What’s the mission, why are you even doing this in the first place? Now we all know when you build a system like Wikipedia or, you know, any repository of information out there or you’re doing Google, you know, trying to build a search engine, right, you don’t say well I’m going to build a search engine that’s really good for scientific discovery. You know, those kids kind of start off in [mike noise] but for us old farts we like to think about Manhattan. We’ve got to figure out who your users are. So this is kind of like kind of the nod to a good forecaster would start off with the idea of who your users are. Now in a 1.0 version of the world it is useful to figure out a small sliver of a target market that may be representative of a bigger market to see if there’s any value even for that small market to begin with. So kind of think of this as vertical segmentation of a big idea for all you new farts and for all you old farts, not that much is different than the way you did forecasting in the past. Okay, so once you kind of define what the priorities of the mission, which is if you had this forecast what would you use it for, right? Understanding the use of the forecast, how they will apply that forecast, critically important because if the person is trying to allocate resources because it’s a financial decision, right, versus a human decision or I just don’t want to die tomorrow; I’m not trying to maximize my opportunity, I’m trying to minimize my likelihood of total, complete utter failure and produce a different kind of forecast. Second, you’ve got to go off and then figure out given those objectives, who has the data, where are the sources of data, where are interesting data that I can get in touch that might be useful to inform a forecast? There are data feeds and people feeds. We thought that was really important because any automated system needs to be able to ingest data in a way that computer systems can maximize and use, but there’s also people, kind of like this. This is a disruptive technology. [General laughter] [Comments] Can we start later, please?

UNKNOWN:

Actually it’s a destructive technology. [General laughter]

LOUIE:

As a feedback [Multiple Comments] It’ll come back in five minutes. It’ll be really short and then it’ll come back five minutes later and be really, really short. Okay. Once you’ve got to go through the data – you know, traditionally you have to go through a data hygiene process that’s either restructuring the data, cleaning out. Now we did have an insight that even bad data is useful in a forecast, right, because it begins to inform you of where kind of conventional wisdom is falling. So the big mistake about data hygiene in forecasting at this point is you start biasing it by saying this is useless data so I’m going to ignore it. So you want to be able to take any data, including bad, what you might
consider bad data, and still be able to present it in a way that will inform you what the world might become or what the world thinks it might become or why you may be wrong. Then a bunch of processes that you would go through to crunch through all of that data, including the employment of forecasting methodologies in the analytical phase. Once that is done you get to an output. The output should then allow somebody to look at the portfolio of possible futures, not necessarily probable futures, spread your bets against that environment, allocate that resource and then the feedback is track it, see it and then do it again and do it many, many, many times. So again, I apologize. This is a lie but it is a useful framework to think about, okay?
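
A minimal sketch of that flow, the mission, the data feeds and people feeds, hygiene that flags rather than discards bad data, the analytical methods, a portfolio of possible futures, and then the feedback loop run over and over, might look like the following in Python. Every name here is a placeholder assumption rather than a specified design.

    import time

    def hygiene(item):
        # Structure each item but keep "bad" data flagged rather than discarded;
        # even bad data shows where conventional wisdom is falling.
        text = str(item).strip()
        return {"text": text, "flagged": len(text) == 0}

    def persistent_forecast_loop(mission, data_feeds, people_feeds, methods,
                                 publish, track_signals, interval_seconds=86400):
        # data_feeds / people_feeds: callables returning lists of raw items.
        # methods: forecasting methodologies, each a callable(records, mission) -> futures.
        # publish: delivers the portfolio of possible futures to stakeholders.
        # track_signals: records which signals, signposts, and tipping points were observed.
        while True:
            raw = [item for feed in list(data_feeds) + list(people_feeds) for item in feed()]
            records = [hygiene(item) for item in raw]   # clean, do not censor
            portfolio = [f for m in methods for f in m(records, mission)]
            publish(portfolio)            # possible, not just probable, futures
            track_signals(portfolio)      # feedback: watch the signposts and tipping points
            time.sleep(interval_seconds)  # then do it again, many, many times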

GOLDHAMMER:

Thank you. So this is the Noble Lie, which we’re going to use as the basis for our discussion today and it’s actually important to provide us I think with some structure to be able to think about what product 1.0 is going to look like. What I want to do is just use that as a transition now to dive into some conversation about some of the key design challenges that we face in thinking about a persistent system to identify disruptive technologies. Now these design challenges that I want to talk about are ones that we have sort of observed in our own experience helping organizations and governments in particular design systems like this. And at different junctures in the system there’s some really thorny, tricky design challenges that are worth talking about before you actually get down with pen and paper or in this case, shapes and paper, and start figuring out what are the different elements of this system and how do they fit together. So we’re going to again use the Noble Lie here to process steps, identify a set of process steps which no matter what system we ultimately design we’re going to have some set of these steps. At some point you have to define what are we looking at or what are we not looking at. What’s in, what’s out. What are we actually, what kind of information are we actually gathering. Is it open source, closed source. These are things that at the high level we have to do. Now we also have to think about some key design challenges, defining the unknown. So if you have to define what’s in and what’s out, how do you do that when you don’t know exactly what you’re looking for? When you’re collecting information and there happens to – as I’m sure someone has probably on the tip of their tongue how much information is actually in existence on Earth today, exabytes of information, we’re talking about a lot of information about a lot of different kinds of not only technologies, new and old, but also information about the ways in which those technologies are being adapted in different parts of the Earth at different moments of the day. It’s a lot of information and one design challenge is how do you avoid data overload. You can’t have it all. How do you gather outside perspectives when you fundamentally are designing a system that’s going to be used by, in many, most likely by a U.S.-based organization? One key question here that we’ve run into in many cases with clients that we’ve worked with is how do you synthesize data into narrative. You’ve got lots of information about lots of different changes in disruptions that are happening around the world and as Gilman pointed out, you may have a normative set of changes which everyone basically says yeah, we need to look at nano, we need to look bio, these things are really important. They’re going to change the way in which we live. It’s five years, it’s ten years out. This is going to make a big difference. And then there’s this other stuff in the tail and no one wants to believe what’s happening in the tail. That’s why it’s in the tail. And yet you have to be able to find evidence of some sort in the world that what’s going on in the tail actually matters and you have to be able to tell it, you have to tell it in a story or in a narrative that is going to get the people to actually make decisions or take actions now to prepare for those changes in the future to do it today. It’s a really, really hard thing to do. The final design challenge is communicating to key stakeholders. 
Once you’ve actually figured out what you’re looking at, how you put it together and the story you’re going to
tell about it, how do you get people to believe it? How do you get someone to say, “You know what, you’re right. That 15 years, it’s going to be 15 years out, but we need to do something today,” when they have a million other competing priorities? And so a system that’s going to generate this kind of information and we also have to think through how is it generated in such a way that you can actually use it, that the person at the other end of the stream or at the other end of the narrative actually has a reason to believe that this is something that they need to care about. So let’s start with the defining of the unknown. And now I want to open it up to all of you. To the extent that this is a design challenge, how from your own experience have you dealt with this challenge? Can you share with us any ideas or thoughts about how we can define the unknown when we don’t know what the unknown is?

UNKNOWN:

So just before I answer your question directly, let me just capture a thought that I had in response to a specific question that probably comes a little bit later. But the question was about experts versus crowd. And I’m going to suggest that there’s another choice besides the experts and the crowd and let me call it the generalist as opposed to the expert. And what I found is that I don’t get good results by going just to experts. But if we add generalists, the results get better.

GOLDHAMMER:

Okay.

UNKNOWN:

Okay? So now in terms of defining the unknown, I would go after – again, there’s a technique, it’s a brainstorming technique. You need to seed it but it’s very important that it be pure brainstorming and invite extremes. So we typically ask things like what will things be like 100 years from now, 200 years from now, 500 years from now? I mean, we really push because otherwise when we ask the questions, whether it’s expert or not, we get what will things be like a year or two from now, okay? So that’s a piece.

GOLDHAMMER:

Great.

SCHWARTZ:

Stretching the timeframe, in other words.

UNKNOWN:

Yes.

GOLDHAMMER:

Yes.

UNKNOWN:

Well in going through this process in the past, technology, people don’t buy technology for technology’s sake. It’s always solving some fundamental problem and I think, you know, it was brought up a little earlier, it’s, you know, somebody, the problems here and the problems somewhere else in the world are different. And I think a big part of it is figuring out what are the fundamental problems that people are trying to solve? Because that’s where money gets allocated and that’s where technology gets implemented, and trying to figure out okay, what are the fundamental differences? And then it’s a matter of doing a scenario problem. What could the potential outcomes be? And then you get into the whole notion of okay, in that scenario, what are the viable technologies for that, and exploring what those options are.

GOLDHAMMER:

That’s great. So recognizing that people are solving different problems and they’re solving them in different ways in different parts of the world. I just want to just pause for a second and also I wanted to give Lynn a chance to kind of get onto the board what exactly she’s going to be doing for us over the course of the day, which is recording
our thoughts rapidly. And I also wanted to pause just to remind everyone that if you can speak slowly and also if you can at the end sort of sum up what your point is, this is very helpful for Lynn to be able to get your idea onto the page. And so let me just introduce Lynn Caruthers, who’s going to be recording for us for the rest of the day.

CARRUTHERS:

Good morning. And if on occasion, if you have a great long thought, if you breathe -- [general laughter] -- I would be grateful.

SCHWARTZ:

And if it doesn't get up there quite right, Lynn is quite capable of fixing it so go up and tell her, you know, “I didn't actually mean ‘fundamental’, I meant ‘irrelevant’,” you know.

CARRUTHERS:

All right. This is to be a reflection of your conversations today. It is not ours. It’s markers and tape.

UNKNOWN:

Does this turn into like a printout that we get later?

SCHWARTZ:

Yes. We take pictures of these and shrink them and you can actually read it and it’s a good summary of the day.

SCHWARTZ:

Yeah.

GOLDHAMMER:

Yeah. Phil?

NOLAN:

I wanted to pose a question about unknown to whom and if we have a set of, for example, analysts in the government, there may be a vast number of things that are unknown to them that are actually broadly known in another sector. And it makes me think of a few math classes I took around infinity and there were many different kinds of infinite sets and I think in a similar way there’s many different kinds of sets of unknowns. And spending a little time in advance to try to figure out what are the unknown sets that are of interest to us can be very helpful.

GOLDHAMMER:

Yeah, Jenny?

HWANG:

I just noticed, in the dialogue here, that the design challenges have one very fundamental thing: how to formulate the questions. I could see the question proposed, and I immediately saw how you responded to it. But if the question is formed a different way, you will respond differently. I think that’s probably a very formidable challenge, really what to ask, you know, how to formulate the question.

GOLDHAMMER:

Right. So Jenny’s point is it’s fundamental to think about how people, what questions people are asking and how people are asking those questions, which gets to this question of defining the unknown. Yes?

SCHWARTZ:

So I think that we have discussed this in the committee. Actually Steve was one of the ones who did it in the committee but if I could paraphrase what I remember, is that it’s not just the technology, it’s the application and the use. That to some extent defines the unknown. We don’t know how somebody else is going to use it. So for me that is the most important component, the usage paths.

GOLDHAMMER:

It’s the usage and adaptation of the technology that may already exist. Great. I think Stuart had a point and then Peter.

BRAND:

I’m most interested in catching positive feedbacks early and so what Gordon Moore did way back when it was Moore’s conjecture was that, you know, the number of transistors on a chip was going to double every couple years and that that was going to be important. It was really just a business plan. It was classic pitch deck stuff. But then other people, Negroponte at MIT said well if that’s the case then personal computers are going to defeat minis and all these other things will happen. We’ll try to get out in front of that. And he allocated a bunch of MIT resources to get ahead of that work. Metcalfe’s Law, that networks multiply their effects way more than just the number of nodes that you add. And so from that you could have predicted that when cell phones went into the developing world they would explode and change the world. The tricky point about positive feedbacks is it’s real easy to identify them once they’ve changed the world. What you want to do is identify them back when Gordon Moore did when -- doubling is not a very big event, when you go 2 to 4, you know, so what? You know, we’re looking at thousands. What’s going on here? And there’s lots of them that will get to 2 to 4 to 8 and then stop so you want to try to identify the ones that have a self-acceleration that keeps going and has impacts along the way that then feed it, which is what happened with cell phones. So that’s what I would be looking for, is where are these things just starting to just show the tips of their ears in taking off.
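[Editor’s note: Brand’s point about catching positive feedbacks while “doubling is not a very big event” can be made concrete with a simple check on whether a trend’s doubling time is shrinking. The sketch below is illustrative only; the adoption counts and the three-point windows are invented assumptions, not data from the workshop.]

```python
# Minimal sketch: flag a series whose growth rate is itself increasing,
# i.e. a candidate positive feedback, before the absolute numbers look large.
# The observation counts below are invented for illustration only.
from math import log

def doubling_times(series):
    """Return successive doubling times (in steps) for a positive, growing series."""
    times = []
    for i in range(1, len(series)):
        growth = log(series[i] / series[i - 1])
        if growth > 0:
            times.append(log(2) / growth)   # steps needed to double at this rate
    return times

def is_self_accelerating(series, window=3):
    """True if recent doubling times are shrinking relative to the earliest ones."""
    t = doubling_times(series)
    if len(t) < 2 * window:
        return False
    early = sum(t[:window]) / window
    late = sum(t[-window:]) / window
    return late < early          # doubling is speeding up, not just continuing

# Hypothetical adoption counts for two weak signals, both starting at the 2-4-8 scale.
steady  = [2, 4, 8, 16, 32, 64, 128, 256]   # constant doubling (Moore-like)
feeding = [2, 3, 5, 9, 17, 36, 80, 190]     # doubling time keeps shrinking

print(is_self_accelerating(steady))    # False: exponential, but not accelerating
print(is_self_accelerating(feeding))   # True: candidate positive feedback
```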

GOLDHAMMER:

That’s great. Peter?

SCHWARTZ:

I think another way to get at the question of unknowns is to think about the impact end of it and this goes to, you know, Gilman’s list, one of his several lists, but of what we want at the end. We want to know those things that are really going to make a difference. So the question is what makes a difference? In other words, what are the kinds of differences that matter, whether they’re balances of power, capabilities that people will have that we don’t have, abilities to see things we can’t see and so on? So what are the classes of impacts that would make a difference and then work back upstream to see how would you achieve those impacts and where would they be achieved? And that leads very directly to a point that I think Stuart has made several times in other contexts, that if you’re thinking about the impacts – and this also goes to the point about problem solving – then you start looking at different ways that people solve that problem. So for example, folks in favelas who don’t have resources and who have to innovate from the bottom up, as it were, to invent ways to get power or healthcare or communications or whatever it is that they’re in fact stealing, organizing, reorganizing and so on and reinventing. So it’s that sense of what’s that impact from the bottom up that enables these people to recombine technologies to create new capabilities. And I mean, the IED is a classic example of a favela solution to a problem.

GOLDHAMMER:

And a very effective one at that.

SCHWARTZ:

Yeah.

GOLDHAMMER:

Yeah. Any other last comments on this one? Yes?

TWOHEY:

I think with a disruptive technology, shouldn’t it be an output rather than an input? Why are you trying to define that node? Why don’t you wait for it to emerge in the [mic noise].

GOLDHAMMER:

Okay. Say more about that. So if it’s an output then what’s the input.

TWOHEY:

Oh, the input is the situation. I mean, it can be like a collective knowledge of different groups of people, right? And then the output would be some sort of refinement of that.

GOLDHAMMER:

Okay, great. Great. And then we’ll come over to Jenny.

WINARSKY[?]:

Just following up on that, turn it around, and also following up on what Peter said: take the impacts, take the visions with their potential impact, and talk about how much we know about those. Measure not just the impact but the amount of knowledge, or the relative amount of knowledge. So do we know very little about how we might achieve that impact, or do we know a lot about how we might achieve that impact? We’re not going at the unknown directly. We’re saying the knowledge base is one way we should be measuring, systematically, what we are dealing with.

GOLDHAMMER:

Great. Last comment on this challenge and we’ll push on.

UNKNOWN:

I think something that was in the initial presentation that we want to look at is identify and track the heretics. Find those people that are on the lunatic fringe. I mean, you could make a great argument --

UNKNOWN:

I’m right here. [Laughter]

UNKNOWN:

Well most of them are in this room, yeah. But I mean, you could make a real good argument. For example, Bill Gates. You know, the entire hacker community hated Bill Gates when Bill Gates wrote a treatise and said, “We have to be able to either patent or license software.” Now everybody hated the idea but it has had an impact on the way software has been developed ever since. And it was a radical idea at the time but…

GOLDHAMMER:

Identify the heretics. Great. So avoiding data overload. There’s a lot of data out there, a lot of information. We just, even in defining unknowns, a lot of information that you would want to collect, identifying heretics, for example. How do you manage this problem in a system like this?

SCHWARTZ:

Well I think part of the design challenge is avoiding the issue of the light, the key under the streetlamp problem. That is, you know, we all know the old joke of the drunk looking for the keys under the streetlamp because that’s where the light is, and it is the case that we have a tendency to focus where the light is. And so I think part of the challenge for us is – and I don’t have a good answer, okay -- This is a problem and the problem is that we have a tendency to look where it’s easy to look as opposed to being able -- And that’s how we avoid the overload problem. We say all right, let’s just focus on those things we can just get at. If we actually looked in all the places where the keys could be, we really would have a data overload problem. So getting that right is I think one of the hard things.

GOLDHAMMER:

Yeah. Gentleman over here, go ahead.

DREW:

Steve Drew. Just a thought on that comment that you just made. It’s not just looking under the light. It’s deciding where in the architecture you’re going to look. Earlier today we talked about problems and issues, and I infer from that that problems and issues give rise to visions, to seeing that problem as not a problem. So do you start looking at the top of the
mountain of data and you say I think the visions are, I think the problems are? Or at the other extreme, do you start looking at the fundamental disciplines at the base of the mountain and then ask each discipline what –

[TAPE INTERRUPTED]

LONG:

…method that can determine what’s good information and what’s nonsense, I think is going to be very, very difficult. You really need a smart person and even smart people can be deluded into believing things that are just wrong. Zero point energy, for example --

SCHWARTZ:

Good example.

LONG:

-- brings a lot of money in from naïve people by unscrupulous scientists and you’re not going to have an automatic method that can tell good science from bad science.

GOLDHAMMER:

Yeah, good point. Yes?

WINARSKY:

So this has been said in different ways but I’ll just try and summarize. First of all, there’s two kinds of data we’re talking about. One is disruptive technologies and the other one is what could be done with it and you could almost start from the other way around. What are the disruptive events that could occur and what are the solutions to creating that? And if you do it wrong, you have massive data overload. So if somebody invented a passive imaging system that detected the vibration frequencies of a surface at more than a hundred meters, that’s potentially a disruptive innovation. Why? Because somebody could be imaging this window and hearing everything we have to say in this room, okay? The relationship between a technology and what it might enable is extremely difficult. That’s what venture capitalists are always trying to figure out as well, you know, what does this venture do? I mean, don’t tell me about his technology with these many gigahertz, and this kind of storage. Tell me what it can do. So let me just complete the thought by saying I would avoid data overload by starting with the futures that we predict could be the difficult or possibly problematic futures and then engage in what are the solutions to those futures.

GOLDHAMMER:

Great. Last comment over here and then we’re going to push on.

McCORMICK:

I just want to add to what you’re saying. I think there are two paradigms at work. One of the things that is a little bit disconcerting in listening to some of the things we’re talking about here is that first you start talking about disruptive technologies and all of a sudden it implies that we’re trying to solve a problem. The reality is most of the change in the world comes from an opportunity, you know, people looking for some kind of economic gain, some kind of difference, which always implies a context. What’s economically viable for you isn’t necessarily what’s economically viable for me, which isn’t going to be what’s economically viable for another country in some way, shape or form. So I think we have to be really careful with the context that we choose and how we’re looking at it, because in reality most things don’t happen unless there’s something to gain, you know.

GOLDHAMMER:

Great. “Gathering Outside Perspectives,” another interesting design challenge. Outside of what, whose perspectives, how are they gathered, how do you make sense of them? I mean, right now we’re in the process of gathering outside perspectives. Is there any coherence that just is intrinsic to the perspectives that are being articulated in this
particular meeting that one could easily make sense of and insert into a system that identifies disruptive technologies? Not an easy equation to solve. You want to pick up on that, Al?

SAFFO:

Sure. Just move it back one slide. I want to make one observation.

GOLDHAMMER:

One observation.

SAFFO:

How many of us know people whose offices look like that who are absolutely brilliant at forecasting? I think this is a photo of Esther Dyson’s office. [General laughter]

GOLDHAMMER:

I’m going to Photoshop Esther into this. [General laughter]

SAFFO:

She’s under the second pile. [General chuckles] It fits with -- a model I follow as a forecaster is strong opinions, weakly held, come to a conclusion as quickly as possible and then attempt to destroy it yourself before somebody else does. And I think the lesson is make models early and then kill your children.

TWOHEY:

I have a question. So what – for outside perspectives, was anybody, did anybody have the perspective that Napster would come along and totally tank the music industry? Did anybody around say, “Yeah, some guy’s going to put a bomb in a shoe box” and that’s going to go kill a lot of our soldiers, right? Was there anybody around on earth that had this thing before it happened just because there is -- have we done any analysis going backwards? I mean, clearly Mr. Brand wants to go. [General chuckles]

BRAND:

I did -- Yeah. In 1972 I went to the Stanford AI lab and saw that they were passing information around and put it in a book called Two Cybernetic Frontiers and in there actually said goodbye to the music store.

GOLDHAMMER:

So I guess the answer’s yes at least on that particular issue but the point is, very well taken. Yes?

ZYDA:

I think there’s a fundamental flaw in what we’re trying to do here today and the fundamental flaw is you’re trying to come up with a process that helps you predict disruptive innovations, whereas the truth of the matter is the stuff that happens happens and then we all go “Wow, that’s really cool.” But if you go find those people, they didn't go through regular VCs and regular government funding sponsors. You know, when they went to the government funding sponsor at DARPA they probably got kicked out the door, you know. So I think there’s a flaw and so, you know, when -- and the flaw being that if you took some of the crazy ideas -- if you backed up and said -- Let’s take the music example. “I'm going to -- music is now going to be digitally distributed.” You know, we started seeing that with the first iPod in 2001 but did people really believe that was going to close all the music stores? Probably not, until much later.

GOLDHAMMER:

Harry.

BLOUNT:

Yeah, I think one of the key things with this outside perspectives – and if we look at the financial markets it’s a perfect example. Wall Street spent billions of dollars building tight algorithms to correlate all asset classes because they had forty years of financial data and they basically started with the assumption that if you have enough data, enough
history, you can hedge away all risk. And that worked really well until it didn't, which was two years ago. [General laughter]

LOUIE:

Then it really didn't.

BLOUNT:

Two years ago, but if you asked somebody in the mortgage business, “Hey, is this a good idea to give loans to people that have no way of repaying?” they would have told you that, you know, we’re setting ourselves up for a large disaster. So that’s probably one of the largest examples looking for outside perspective. Now the question I think boils down to the experts versus the crowd, and on that front I guess one of the things I’d like to offer up is I think given the search technologies, the social networks out there, it seems like you can ask really basic questions among a lot of different perspectives and a lot of different groups and get feedback relatively quickly.

GOLDHAMMER:

Mark?

CULPEPPER:

Yeah, you know, the thing that I think about is kind of historical elements that at one time were considered disruptive but now we just completely take for granted. And they’re so taken for granted that they’re literally invisible to us. I had the opportunity to drive across the country about a year ago, my family and I, we took about ten days and, you know, never once did we run out of gasoline, never once was there even a question of whether or not the roads were going to be good. Everything was just there and it was literally to the point where everybody assumed it’s going to be there. And I think one of the ways to get good outside perspective is to look at things today that we take for granted that are just there and say, “When did that happen?” Because that didn't happen just overnight, that happened literally, in some cases, over decades. And the way I look at this and the perspective of something that’s truly disruptive is take a look at things like how did the dams get built and why, right? I mean, a great book, Cadillac Desert, that talks about that whole dimension, you know. A lot of these things I think that have great historical context for where we’re at today, give us a view on how disruption occurred in the past and it occurred and then it became invisible.

GOLDHAMMER:

Great. Bill?

MARK:

So my bias, which has been expressed several times by other people, is that the disruption occurs in the use of the technology.

SCHWARTZ:

In the use?

MARK:

Use of the technology. So that makes me think that there’s – in this recent conversation there’s been too much emphasis on opinion and gathering opinion, whereas, in fact, there’s experiments going on all the time in various uses of technology. So Napster came up --

SCHWARTZ:

Perfect example.

MARK:

And this came, this also comes back to the, you know, unknown to whom. So there were a bunch of people, mostly college kids, who knew all about Napster very, very early. So if people were tuned into that experiment going on, perhaps they would have been able to tell. People are using their cell phone minutes to buy things and trade things, right? That’s an experiment that’s been going on for a number of years now and that’s leading
to some pretty spectacular stuff. Early identification of interesting experiments going on I think would help us understand the disruption.

GOLDHAMMER:

I think that’s great. One last point. Peter?

SCHWARTZ:

Yeah, and I’m just following up on precisely that point and it goes to one of the biases we argued earlier, which -- we said go young but I would say go to those people who are most likely to be the users of things that we – not necessarily – we don’t know what they’re going to do but the people who for one reason or another are not able to use or don’t want to use the conventional approaches to solving the problems that they have like getting access to music, for example. And the interesting thing that one sees in say the Napster/iTunes case is that the real surprise in it was that it was Apple Computer, not Sony, that -- And it ought to have been. In fact if you looked at the case you’d say, “Ah, Japan’s going to win this war, Sony’s going to win this war. They’re going to reinvent the music business because they got the music, they got the MP3 player already, they’ve got the systems, the distribution. They should create this.” And of course what you saw was a conservative bias that said, “No, no, we don’t want to let go of the old business.” It took essentially new players out at the edge of the business that weren't involved to reinvent it. So that sense of where is that reinvention going to come from is I think the great challenge in how do you get that external perspective, how do you find the right people to be asking, in a sense, who are going to use the technology in new ways, because Apple didn't invent anything there.

BLOUNT:

Yeah, so maybe go broad instead of go young, is maybe what you’re saying.

SCHWARTZ:

That’s another way to put it, yes.

UNKNOWN:

I mean, this is a classic paradigm shift.

GOLDHAMMER:

Harry, can you repeat that so we can capture that?

BLOUNT:

Oh, I said so what I think I heard Peter say is go broad instead of go young.

GOLDHAMMER:

Go broad instead of go young. Great! Just finish here with Phil and then we’re going to push on.

NOLAN:

Just want to call out something I’ve heard that seems implicit in a lot of our discussions about getting outside perspectives. They’re costly, they’re costly the old-fashioned way, which is, you know, we’re talking about putting people on planes and making sure you have good translators and so on when you’re in a different country. They’re also either socially costly, you’re going to the heretic, you’re going to an outcast person or you’re going to hang out with the 18-year-old. It can be emotionally costly, the person who has that perspective and idea is a flaming asshole who you don’t want to be around. So in some ways –

GOLDHAMMER:

Did we capture that, by the way? [General laughter]

SCHWARTZ:

I get a lot of letters from those guys. [Laughter]

LOUIE:

It’s called the asshole theory, and says you can take all the billionaires who made a billion dollars or more from the very beginning and made it all the way through the other
end, almost everyone’s an asshole, you know, Gates, you know. You can go through all the list, you know, Ellison, you just go boom, boom, boom, who made it all the way through, most of them are assholes.

GOLDHAMMER:

A lot of energy around that. [General laughter] All of this is going to be searchable on the Internet. I’m just going to warn all of you. You’re on the record.

GRAY:

You can call it a personality type.

CARRUTHERS:

Somebody tell me what the name of that book was?

CULPEPPER:

Cadillac Desert.

CARRUTHERS:

Cadillac Desert. Thank you.

GOLDHAMMER:

Cadillac Desert. Great. Synthesizing data into narrative. All of you, my guess, in your different roles, different professional capacities, have been in a position of having to tell a story to someone who either controlled power or resources about a disruption that was coming or perhaps that you were bringing and were trying to get them to see the world in a different way. It is not an easy thing to do, especially when you’re blindsided by it. You’ve got a particular worldview and all of a sudden someone walks in your office and says, “Everything that you believe is wrong, everything that you believe is wrong, and the world is in fact going to look this way.” This starts to get at that issue, which is now synthesizing data into narrative. Any thoughts about how you take expert opinions, data that gets collected through search engines, different kinds of pieces of information about either how people are using or adapting technologies or new technological developments, how do you pull that together into something that is coherent? Yes?

SAFFO:

Never let the facts get in the way of a good story. Yeah, it’s the engagement of strategic misdirection, being intentionally misleading in the service of provoking creative thought.

GOLDHAMMER:

Great. Al, yeah.

VELOSA:

I think also, wherever possible, avoid decimal points. [General laughter] One of the things that is actually a very big initiative for us generally is to get rid of publishing a lot of numbers and just publish the assumptions. Because at the end of the day there’s a story behind it – I’m going to quote you there, Paul – that’s a much more interesting proposal. Because they’re going to believe or not believe your numbers anyway, but they want to know how you got there. So the narrative of how you got there is the most important thing in any forecast.

GOLDHAMMER:

Great. Let’s do Ray, Mike and then Stewart. Go ahead, Ray.

STRONG:

There’s a very standard technique that comes under a lot of different names; I’m thinking implication wheel. And it’s something that can be facilitated by a system so it can be done systematically and it is just that. You know, you start out with some central vision and theme and then you say, “Okay, what are the implications?” and you ask in various aspects, you know, speed or social, political, economic and so forth – you ask all those aspects. You run around the wheel, you get new centers, you expand out and the system can actually keep you balanced so that you don’t go too far in one direction nor the
others. And that’s directly answering the question: How do I synthesize data? Because you can have a system asking the questions and filling them in.
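[Editor’s note: a minimal sketch of the implication-wheel expansion described here, assuming a breadth-first traversal so the wheel stays balanced across aspects and depth. The aspect list and the placeholder ask() step are invented for illustration; in a real exercise the implications would come from facilitated discussion rather than a function.]

```python
# Implication wheel, sketched: start from a central theme, ask for implications
# across several aspects, then treat each implication as a new center.
# Breadth-first expansion keeps any one branch from running ahead of the others.
from collections import deque

ASPECTS = ["social", "political", "economic", "technological"]

def ask(center, aspect):
    """Placeholder elicitation step: in practice a facilitator, survey, or model."""
    return [f"{aspect} implication of '{center}'"]

def implication_wheel(theme, max_depth=2):
    """Build a dict mapping each center to its implications, level by level."""
    tree = {theme: []}
    queue = deque([(theme, 0)])
    while queue:
        center, depth = queue.popleft()
        if depth >= max_depth:
            continue
        for aspect in ASPECTS:                          # run around the wheel
            for implication in ask(center, aspect):
                tree[center].append(implication)
                tree.setdefault(implication, [])
                queue.append((implication, depth + 1))  # new center, same rules
    return tree

wheel = implication_wheel("cheap passive acoustic imaging", max_depth=1)
for center, implications in wheel.items():
    print(center, "->", implications)
```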

GOLDHAMMER:

Great. Yeah, Mike?

McCORMICK:

I always say there are four fundamental questions: What’s the value? Where are you going? How are you going to get there? Why are you going to be successful? That’s what the story’s got to answer.

GOLDHAMMER:

Okay, great. Stewart? Can we actually just – can you repeat that one more time for Lynn to capture, perhaps?

McCORMICK:

Sure. What’s the value --

CARRUTHERS:

I’ll put the key words and I’ll fill in the rest in a bit.

McCORMICK:

Okay. Where are you going? How are you going to get there? Why are you going to be successful?

CARRUTHERS:

Why you’re going to be what?

McCORMICK:

Successful.

CARRUTHERS:

Thank you.

GOLDHAMMER:

Thank you. Stewart?

BRAND:

I think that every wrong theory is based on a number of wrong pillars. And so when you’re going to go into your narrative if you start with one pillar that’s really wrong that the person is standing on -- In my case I’m trying to convince environmentalists that nuclear’s good for them, not bad for them. I can start with radiation. Radiation is not as bad for you as you think it is and here’s why and Chernobyl, etc. So if you can get them unsure about one of the pillars they’ve been standing on all this time and let them know you’re going to deal with every single one of the pillars, not right in this meeting but soon enough, that’s a chance to get in the door.

GOLDHAMMER:

Gilman?

LOUIE:

I always tell my associates that “Don’t invest in any technology unless you saw it on Star Trek first.”

GOLDHAMMER:

Unless what?

LOUIE:

Unless you saw it on “Star Trek” first.

LOUIE:

Pick an episode, and if you can’t find it somewhere in there, maybe it’s not worth the investment. [General chuckles] And the reason for that is, you know, if you think about writing fiction or writing a screenplay or a television show, you know, they’ve got to tell a story about the human condition. That’s really, really key. It’s a human condition issue. I’m trying to tell a story about the human condition. If you don’t have a human condition in your narrative it’s kind of like useless. What’s the point? The second is in science
fiction particularly people create technologies to solve problems in the story. In other words, I’ve got to go from here to here and I can’t do it just with human beings and existing technology. I have to make something up. You know, 24, you know, I’ve got fifteen minutes to tell the story. I need to make up this grand machine that can go through all the fingerprints in the world and do facial recognition in ten seconds or less because that’s what I’ve got to do on the show. And what’s interesting about that is as they are trying to solve their story using technology to take these leaps to tell the human condition, it informs you on what you have to do. So my view is something slightly different, don’t believe in any technology where you can’t turn it into a story. If you can’t make a story out of there that this is an interesting story about the human condition, that’ll probably never come true.

GOLDHAMMER:

Last comment right here.

WINARSKY:

I would also -- and this is almost repeating what we’ve said before, I’d start with the narrative. I’d start with – and then derive evidence, then move my narrative as the evidence tends to persuade me or others in different directions. And I’d start with somebody who is helping the decision-makers define the problem and I’d find a narrative that worried them the most.

GOLDHAMMER:

Great bridge comment. Peter?

SCHWARTZ:

Just one quick comment that – well this is in fact a comment on the next topic.

GOLDHAMMER:

Yes.

SCHWARTZ:

Okay, so it is the next topic literally, and that is that in fact you really have to understand your audience; it makes a huge difference. When I was head of scenario planning at Shell I spent an enormous amount of time trying to understand the language, culture and context of the people who had to actually use the information I gave them. And if I didn’t understand it, I was speaking in the abstract. So the story and the issue that we’re trying to get across have to speak specifically to the concerns, fears, aspirations, context, mental maps of the people that you’re actually trying to influence with it.

GOLDHAMMER:

Yep. Mark?

CULPEPPER:

Yeah, just to follow on that, I think that it’s, you know, I like to say every sale is ultimately an emotional sale, right, there’s got to be an emotional hook. And I don’t care if you’re selling computers, routers, network switches, PV systems, whatever. You know, so if you don’t have an emotional hook when you’re communicating, your odds of success go way, way down.

GOLDHAMMER:

Yeah. I mean, one thing that I think is sort of amazing is that this conversation we’re having around narrative and stakeholders is, to me, at least in my humble opinion, the most profound thing we’ve been talking about this morning, in part because the capabilities required to do that well are actually quite different from the capabilities for gathering information, collecting information, synthesizing information, all the kinds of technical requirements that go into understanding what’s going on out in the world from a technology and adaptation perspective. And then the ability to actually pull that together into a compelling, emotional narrative and to do that in a language that stakeholders are going
to understand so they actually take action and make decisions differently than they might otherwise, that’s quite profound.

LYBRAND:

So would that mean that you would want to recruit from your stakeholders as you build this platform so that you’re building out people who already speak the language?

GOLDHAMMER:

Quite possibly.

SCHWARTZ:

That’s interesting.

GOLDHAMMER:

Ken?

PAYNE:

And I think this – as a person who is non-technically oriented, having worked with scientists and engineers on a regular basis for like the past twelve years or so, that’s one of the hardest things to do when we go up to seniors. And I see their slide deck of 50 slides and they’ve got an hour and I start pulling out about 45 of their slides saying that they don’t care about this, you know, especially in our world, the intel world, they’re mostly – you know, as they’re called, soft science majors or fuzzy science or whatever they call it. They don’t care. They want to know why they’re important, they want to know what’s the consequences if you’re right and I don’t listen to you because they’re either political appointees or work with political appointees, you know. Because that’s like “As long as I’m good during this administration era, okay, I’m good to go.” [General laughter] “But if something’s going to happen to affect me before the end of that administration and I’m going to end up looking bad, then I’m concerned about it.” And it goes back to knowing your audience and stakeholders, which is difficult. You know, and when folks go up there and they have to talk to seniors and they have an hour, you know, I can barely try to convince them and it’s hard because they put all this work into it and they have all this great data and a bunch of decimal points – [General laughter] – they have all this stuff, and I say, “You need to cut your presentation down to 40 or 45 minutes.” And they say, “Why?” I say, “Because somebody’s going to brief that person after you and the questions that they have are going to be answered by somebody else or through somebody else. And so if you leave that 15 to 20 minutes at the end, you can answer the questions with the technical data if they ask for it or with the emotion that they need.” And that’s the difficulty in the government when you do that because you’ve got some different -- People aren’t in it for the money, necessarily, they’re in it for, “Okay, I’m going to say I serve my country or I want this political connection” or “Hey, I probably wouldn't have this level of importance if I were outside the government.”

GOLDHAMMER:

Thank you. Great point. Mike?

McCORMICK:

I actually want to come back to what Mark was talking about earlier. I’m sorry. I agree. I mean, people buy wants, not needs. But what worries me about this discussion a little bit is if we’re talking about pitching this to, shall we say, higher ups, it’s not the wants of the higher ups that matter, it’s the wants of the individuals that are creating that disruption. So the context of what you’re talking about, the selling – okay, this is the life of what they live, you know, that’s going to create this dynamic change. And having it so it’s relevant is I think a harder point of the actual discussion. Because let’s face it, if you live in the myopic world – excuse me on this one – of Washington, D.C. at times, you know, you don’t understand the fact of what’s going on in the rest of the world and the implications of that.

GOLDHAMMER:

That’s right. Steve, Steve, did you have a comment?

DREW:

Yeah. This may be a weird thing to say but maybe we’re headed at this the wrong way or maybe there is an alternate way to think about it. I think it was Norman who said do the narrative first and then fill in from behind. Maybe what we should be looking for are the long-term successful narrators, not the --

SCHWARTZ:

Narrators?

DREW:

Narrators.

SCHWARTZ:

The storytellers.

DREW:

The sources. Who weaves successful stories, time after time after time, that are built on technology, and how did they weave them? What was it about their thinking that captured that future to come in the basic technology?

GOLDHAMMER:

Peter?

SCHWARTZ:

Yeah, this is going to –

DREW:

A little strange but…

SCHWARTZ:

No, I think your point is well taken and it goes back to actually Gilman’s comment about Star Trek. Several of us in this room had the experience of helping to create the world for the film Minority Report and some of you have seen it. And one of the most significant things about the film is how many times clips from that film are now used to communicate new products, whether it’s Microsoft’s Table or a new advertising system or new scanning recognition systems and so on. I see the clip all the time. Why did it work? It works because it’s Steven Spielberg, not -- I mean, frankly, we had a group of remarkable experts in the room, Neil Gershenfeld, a whole bunch of really smart people trying to tell him what the technologies were and that worked brilliantly. But what made it actually work was Spielberg was the director and he put it in a human context and, you know, everyone remembers the moment in the film, you know, their own particular moment – the electronic newspaper or Tom Cruise chasing his eyeball down the ramp for the optical recognition system and so on. The point being that it is the story in context that actually makes those technologies come to life and has actually probably accelerated the rate of development of the technologies.

GOLDHAMMER:

Yes, Mike?

CULPEPPER:

Yeah, one thing that I’ve done in the companies that I’ve worked with is particularly for startups is create a glossary that everybody can work off of because it’s a common language and a framework and everybody then is immediately, [snapping fingers] like you can say something and everybody gets it, right? I don't know if that works in this context but – because you’re really taking raw ideas and putting it out there, but if you look at it and you can filter it through some sort of common language – and this goes to what everybody here’s really been saying – it makes the communication much faster, much more seamless, just dramatically easier.

GOLDHAMMER:

Darrell?

LONG:

So I have some concern that we’re very Western in our approach here. I had a very interesting dinner with a guy that used to be a CTO of Microsoft in China. And he pointed out to me that these guys work very differently than we do. The senior leadership in China is all Ph.D.s and engineers.

SCHWARTZ:

Six out of seven.

LONG:

Yeah. They’re not run by lawyers. We’re run by lawyers and people that don’t – they like notional graphs and they don’t want labels on their axes here in the U.S.; over there they do. So I’m just concerned that we may be going down a rabbit hole here and talking about, you know, to ourselves when we need to be thinking about how other parts of the world work.

GOLDHAMMER:

I agree. Yeah?

TWOHEY:

One of the things, I guess – the thing is, you know, we’re building this technology, but part of the process is to put the right people in the right places. So maybe the fundamental assumption here is that, you know, everyone’s talking as if the current key stakeholders are not going to change. Maybe that needs to change, right? If you have non-technical people who are fundamentally supposed to be interpreting very technical things, maybe that’s just broken, and you need to fix that and incentivize that.

GOLDHAMMER:

Great. Danny.

GRAY:

I think maybe something that we need to look at is identifying the definition of stakeholder and identifying the idea of, you know, who’s to gain or what’s to be gained. Because you look at a lot of what’s going on, you know, in improvised munitions, you know, the military has nothing necessarily to be gained. It’s really an application of guerilla warfare that’s been around for thousands of years. It’s just the guerilla mindset taking what’s around you and making something out of it. And so you look at somebody who’s in the open source software world, for example. What do they have to gain? They don’t care whether they can sell their product to the corporate world and package it to the corporate world because maybe they’re in it to impress a girl that’s online that says, “Wow, look at this software that this guy wrote.” I mean, I know this sounds really absurd to people in this room but at the same time this guy can come up with this huge innovation simply because he wants a date next Thursday night but it gets picked up halfway around the world and it turns into something else. And so maybe we shouldn't put quite so much emphasis on communicating to the stakeholders because not always is the money driving the “What’s in it for me?” for the innovator.

GOLDHAMMER:

So we just covered – in the last hour we covered a number of different design challenges. Any final reflections now looking at all the challenges together, things that as either a member of the committee or as an invited guest – and I’m speaking to the committee, any sort of higher level thoughts now that we think we should keep in mind as we push forward in the day?

SCHWARTZ:

One thing we didn't really talk much about is feedback and adaptation. In other words, this is a system that has to kind of constantly improve itself. The participants and the – both the customers and the active participants need to get better as it moves forward.

GOLDHAMMER:

It’s a learning system.

SCHWARTZ:

A learning system. Better way to put it, yes.

GOLDHAMMER:

Great. Paul?

SAFFO:

Visually instead of a column, it should be a loop or a cycle.

GOLDHAMMER:

Loop or a cycle, great. Gilman?

LOUIE:

Another thing is if you have a persistent system that’s used globally, the notion of users, forecasters, stakeholders, needs to be all the same. In other words, somebody who contributes to the system has to be able to get something from the system.

SCHWARTZ:

What do you mean by “persistent”?

LOUIE:

Persistent is something that I can always touch and get to when I need it, not when somebody else either produces it or wants it, where my primary value is answered by the system itself, not the secondary value. And I go back to the concept around Wikipedia, right? There are lots of people who put stuff in there, and they are also consumers of other people putting stuff in there. So this notional linear triangle concept, where there’s just some general with a lot of stars, or somebody sitting in a funny white-looking building in the middle of the Beltway, right, who says, “I’m more important than everybody else” – I think if we use that as a mind framework, we’re going to have a very unsuccessful system. We’re not going to get the participation and get the usefulness out of it.

GOLDHAMMER:

We have three more comments. We have Harry, Stewart, Daniel.

BLOUNT:

I think one overarching question that we haven't addressed is the sustainability of the system – to Gilman’s question about persistence. Because one of the things that we’re going to have to think about from a framework perspective is at some point it is going to come down to the question of money and how you sustain this system based on a user environment. So I’m not sure how we work that into the discussion but I think to take it from the theoretical to practical, if we’re really going to quote/unquote build the system you have to think about what is the most efficient way to do it and sustain it long term.

GOLDHAMMER:

Great. Stewart?

BRAND:

I'm thinking about something I guess I think of as a good tip audit trail or maybe a bad tip audit trail, you know, who are people who have changed my mind, that I'm really glad they did that. Freeman Dyson does that, Manny Nolis does that. Who are people who started to change my mind and then I realized I was going down a primrose path and keep track of who those are, just some kind of way to, you know – This is sort of how judgment’s supposed to get better as you get older because you have more and more of those experiences that you know how to co-weight against each other. Is that a way to – can that be formalized? Well maybe.

GOLDHAMMER:

Danny?

GRAY:

One thing I haven’t heard discussed is the transparency and openness of the data. When people, as Gilman said, are involved in this, they should also be able to get something back, and that’s where having the data be transparent, as well as open so other people can get to it, is important.

GOLDHAMMER:

It’s true.

VELOSA:

You know, there are sources of outside perspectives that are free. I mean, if you mine blogs, they’re free. And the people that are creating the blogs, they’re not involved in the system at all so you don’t have to even worry about giving results back to them. And there’s many things besides blogs that have such properties.

LOUIE:

I think mining is not as free though.

REED:

The mine – it’s pretty cheap actually. [chuckle]
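[Editor’s note: a minimal sketch of the kind of cheap, persistent blog or feed mining discussed here. The watchlist terms and post texts are invented for illustration; a real pipeline would pull text from RSS feeds or a crawl, which is where the modest but non-zero cost mentioned above comes in.]

```python
# Sketch: score incoming public posts against a watchlist of weak-signal terms
# and surface the highest-scoring ones for a human reader. Everything below is
# illustrative; the posts would normally arrive from a feed reader or crawler.
import re
from collections import Counter

WATCHLIST = {"favela", "jailbreak", "diy", "retrofit", "workaround"}

def signal_score(text):
    """Count watchlist terms in a post; crude, but cheap enough to run persistently."""
    words = re.findall(r"[a-z]+", text.lower())
    hits = Counter(w for w in words if w in WATCHLIST)
    return sum(hits.values()), dict(hits)

posts = [
    "New DIY retrofit lets prepaid handsets act as point-of-sale terminals",
    "Quarterly earnings call transcript for a large incumbent vendor",
    "Favela workshop shows a workaround for metering household power",
]

for post in sorted(posts, key=lambda p: signal_score(p)[0], reverse=True):
    score, hits = signal_score(post)
    print(score, hits, "-", post)
```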

GOLDHAMMER:

So it’s just after 10:30. Thank you for what I thought was a very provocative conversation. Why don’t we take a 15-minute, no, slightly less than 15-minute break. If we could all be back here at 10:45, we will continue at 10:45. There’s food, coffee in the back. The bathrooms are out the doors and any questions, feel free to ask me. Thank you.

TRANSCRIPT OF SECOND MORNING SESSION


10:45 A.M. – 12:00 P.M.

GOLDHAMMER:

So in this last morning segment before lunch, we’re going to dive more deeply into design criteria. This is going to be a conversation that will set up our afternoon activity, where we’ll actually be applying these criteria to the creation of an actual version 1 system. And so I just want to say a few words about why design criteria matter, and then we’re going to do a quick group exercise. We’re going to reconfigure the room a little bit, do a quick group exercise, have some conversations at our tables and then have a report out in about an hour.

So first, design criteria. Most of you, I think, based on introductions, have spent time at one point or another designing things, so I doubt that these criteria will be much of a surprise to you. But as a reminder, design criteria matter because they help to focus a system design; having in mind what you’re actually trying to achieve before you put it down on paper will help you get to the Promised Land. They also help you to figure out clear tradeoffs and to force prioritization, because you can’t have everything. And I think we heard very clearly from Gilman this morning that the goal here is not to create the perfect system; in fact, don’t let the perfect get in the way of the good. We want a good system that actually works, without spending a ton of money and without taking a ton of time. And then finally, design criteria are helpful because they ensure that the final system can be quality-checked against clear metrics. You can actually go back and say, “Did I actually meet the criteria that I set up? Is it doing what I want it to do?”

And there are lots of design criteria that we could choose from. What I want to do is just give you one example that comes from work that 360 did with the government of Singapore. They asked us to design a system to identify disruptive technologies and emerging issues
and threats. And there were three, I just want to focus on three criteria just for idea generation. So there were different parts of the system. I just want to focus on one here. And Singapore, you have to remember, is of course in a unique context, a very, very small country right next to China, and focused on a set of issues that are sort of unique to their particular geopolitical context.

GOLDHAMMER:

Well, not next to China physically but next to China in the sense that that’s mainly what they think about. Thank you. And so for idea generation there were three design criteria that they felt were really important. The first was casting a really wide net, because Singapore, given how small it is as a country, was, at least from their perspective, a little bit of a fishbowl from the standpoint of ideas. And so they wanted to cast as wide a net as possible in the design of this part of their emerging-issues system, in order to hedge against the fact that there may be sort of, you know, an island groupthink. Second, they wanted to focus on creative ideas that could have a major impact on the future of Singapore, so they really wanted to stretch as much as possible into what Gilman described earlier as the tail. They didn’t want to get stuck in the middle. And then also important was that they wanted to vary their methods of idea collection, using different techniques, formats, media, timing and targets. Now the details aren’t really important. What matters is that they spent some time thinking through, given our unique context, given our needs for persistent scanning, how do we want to collect ideas in such a way that we will be able to identify emerging issues that are going to be important for us. This is the way they did it.

Now these are the design criteria that are taken out of the first report that the committee generated. They are openness, persistence, bias mitigation, robust and dynamic structure, anomaly detection, ease of use, strong visualization tools and GUI, controlled vocabulary, incentives to participate, and reliable data construction and maintenance. Now what’s interesting about this is we obviously haven’t shown this to you yet, and yet just about all of these have come up either in conversation or in the presentation that Gilman gave earlier. So we’ve already started to internalize these design criteria. Now again, you can’t have everything. The world is not perfect.

So what we’re going to do is, step number one, we’re actually going to take Peter’s table and disperse you among the other three tables. Step number two, you’ll see at your table – please don’t go anywhere yet – that you have some dots. These dots are in the center. Everyone gets five dots. Now there are more dots than five per person, but you can’t cheat: you get five dots. And then you’ll see that we’ve given you these design criteria on a template for each of these tables. What we want each of you to do is to allocate your dots across these criteria. You can allocate all five, if you want, to one criterion because you think it’s the most important, or you can distribute your dots across the different criteria. All we’re trying to do is get a sense, on a table-by-table basis, of where the group is with respect to these design criteria. You may discover at the end of your dot-loading exercise that you actually have dots across all the different criteria, but what we’re interested in primarily is the weight. What design elements do you think are the most important as a table? Because those are the elements that are going to guide the construction of the system later this afternoon.
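[Editor’s note: the dot-loading exercise reduces to a simple tally: five dots per participant, summed by criterion per table. The sketch below uses invented participants and allocations; the criterion names are abbreviated from the committee’s list above.]

```python
# Sketch of the dot-loading tally: each participant places exactly five dots
# across the design criteria, and a table's weighting is the column sums.
from collections import Counter

CRITERIA = [
    "openness", "persistence", "bias mitigation", "robust and dynamic structure",
    "anomaly detection", "ease of use", "visualization tools and GUI",
    "controlled vocabulary", "incentives to participate",
    "reliable data construction and maintenance",
]

# Hypothetical allocations for one table: criterion -> dots, five per person.
table_votes = {
    "participant_1": {"persistence": 3, "anomaly detection": 2},
    "participant_2": {"openness": 1, "incentives to participate": 4},
    "participant_3": {"ease of use": 2, "bias mitigation": 3},
}

def tally(votes):
    totals = Counter()
    for person, allocation in votes.items():
        assert sum(allocation.values()) == 5, f"{person} must place exactly 5 dots"
        assert all(c in CRITERIA for c in allocation), f"unknown criterion for {person}"
        totals.update(allocation)
    return totals.most_common()          # criteria ranked by table weight

for criterion, dots in tally(table_votes):
    print(f"{dots:2d}  {criterion}")
```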
Now my guess is that after you vote to determine sort of where the group is with respect to these design criteria, you may identify additional criteria that you think are critical, or you may disagree with some of these criteria. Gilman already told us this morning that we should take, as a kind of Noble Lie, the conceptual model that was developed in a previous report. That’s okay too. But the table has to agree upon either what criteria they’re going to use for the afternoon or what criteria they’re going to add. So once we break you into groups and you do the dot-loading exercise, your facilitators
will help you have a conversation about sort of where the table is and then why don’t you discuss those criteria and start to think about how would you actually use those criteria as a way of building a system, what is the linkage between the criteria themselves and the actual system that you want to create. Are there any questions about what we’re going to do just until noon? Okay. So first step is -- yes, Peter?

SCHWARTZ:

One thing that isn’t there, and I turn to our sponsor for a question, cost isn’t up there.

DREW:

Yeah.

SCHWARTZ:

So money as a constraint is not there. How do we think about that as a design criteria?

GOLDHAMMER:

Great question.

[Simultaneous comments]

PAYNE:

I don’t think it really should be up there.

SCHWARTZ:

Pardon me?

PAYNE:

I don’t think it really should be up there, because importance doesn’t always define that – given that in the government, I mean, we do have a lot of assets we could put towards something if we feel it is important.

SCHWARTZ:

So in other words, we should think about this in a sense unconstrained by the financial limits -- you know, recognizing, you know, within plausible -- yeah, okay.

GOLDHAMMER:

Steve?

DREW:

I think that might be a mistake, because the reality we all live in is balancing impact against cost. I mean, almost Gilman’s opening statement said something to that effect. You know, we’re not – perfect would be wonderful, but we’re really looking for maximum impact at some minimum cost value. And so to take that off the table runs the risk of having us not realize the realities of life. Frankly, I’d like to see some place on the list, maybe last, but some place on the list, “impact/cost or outlay” still be part of the discussion.

LOUIE:

Let me just respond to that, which is -- because understand my bias. I’m an early stage venture guy, right? So my bias is the more money you give somebody, the higher the probability they will screw up. By constraining the resources you have to make tough decisions up front and making those tough decisions up front gets you a lot further down the path than if you do it the traditional way we do it in government, which is "Oh, you know, we’ve got unlimited resources. If it’s under a billion dollars it’s fair game." I think we just need to have our own mental discipline. Cash is just a good stand-in for that, you know, that constraint. Time is actually more important in some respects.

GOLDHAMMER:

Okay. I’ll take more comments here. We’re going to do Ray and in the back.

STRONG:

Okay. I think we already saw, a few of us, that partly from what Gilman said earlier, that persistence has an implication of sustainable cost. So it is there already, just hidden.

GOLDHAMMER:

Great. Next comment?

STRONG:

Buried.

WONG:

Sorry. I just wanted to clarify. So are we thinking about this in terms of a system as in a piece of technology or are we thinking about this as a system with people or as an organization?

GOLDHAMMER:

The answer is yes to both.

WONG:

Because some of those priorities there – specifically strong visualization tools, GUIs – pertain specifically to technology.

GOLDHAMMER:

That’s right. And part of what we’re asking you to do is to think about what elements of this system require technologies and technologies that may require strong visualization tools and GUIs; what parts of the system require human beings. And for example, the conversation we just finished, we ended talking about narrative and stakeholders. Those are not things that can be accomplished with GUIs and visualization tools. That requires people. So this system actually has both elements in it.

LOUIE:

And you should not, when you’re thinking about GUI, right, don’t think of it as being in Silicon Valley. You know, think of it as visualization tools could be a movie, right? So visualization tools, what I’m saying, and GUIs is a human interface. Could be a human to a human or a human to a computer or a computer to a computer. It might kind of free us up from our own natural biases here in the Valley, which is thinking oh, GUI immediately means it’s some sort of, you know, next generation Windows or OS X kind of environment. And I think GUI – it’s in a very broad sense.

GOLDHAMMER:

Go ahead, Ken.

PAYNE:

I just wanted to ask how will we identify cost in a criteria if you lay it down there, if we want to identify it? And I’m looking at, you know, okay, if you think something’s going to be like a trillion dollars, it’s not going to work. But how do we do it on that criteria as we assess them?

LOUIE:

I think from a developer’s point of view the reason why you want to do it for as cheap, with as few resources or as little money as possible, to get to whether or not you’ve got traction on the problem, is not only to force our discipline but also to reduce the probability of the “no”.

SCHWARTZ:

Of the “no”?

LOUIE:

Yeah, which is, you know, in government there are professional GS14 Dr. Nos. Their job is to say “no”, right? So the lower you can go below that radar where it’s "Hey, you know, I’m just putting it on my kitty fund here," the higher the probability, you get to build your experiment.

GOLDHAMMER:

Yeah. And I think just to clarify what I think I’m hearing is that it’s not that we should be building – we should be thinking about these criteria and the system itself as being unconstrained by dollars. But rather, at this point in our process, at this point in the exercise that we’re working on together, whether we’re at, you know, $100,000, a

million dollars, five million dollars, like that is not actually going to be helpful for what we’re trying to do at this particular moment. Okay?

PAYNE:

I mean, don’t start a new agency.

[Simultaneous comments]

GOLDHAMMER:

We would've invited Congress if we wanted to start a new agency.

PAYNE:

Just invite the lobbyists. [Laughter]

GOLDHAMMER:

Yeah, the lobbyists.

TALMAGE:

If I can do the breakout on that table, Peter and Paul, you’re coming to this table, Darrell, Al and Mark Culpepper, you’re going to that table over here. And that leaves who? You and Drew. You’re coming to Gilman’s table.

GOLDHAMMER:

Great. So if you’re receiving guests at your table, please make room. If you’re emigrating from that table, please bring your chair. And then what I’ll do is gather everyone back together in about an hour to report.

SCHWARTZ:

Do we come back to this table?

TALMAGE:

Yes, you will be going back. You just need to take a chair and your dots.

GOLDHAMMER:

Feel free to start voting and allocating your five votes.

[TEAM ACTIVITY: DESIGNING A SCANNING SYSTEM]


During this session, three breakout groups were assigned. The groups each met with a different moderator and completed the activity described in the previous section as a means of discussing and forming team consensus on the prioritization of system design elements. The results of the exercise are detailed in the following section.

GOLDHAMMER:

…And I know that I’m interrupting conversations at the other tables. I think we’re at 11:45. We have about 15 minutes to report out. If your conversation was anything like our conversation, it’s not going, it was not a kind of coherent, synthetic, easily reportable kind of conversation. And so with that as a preface, I’m wondering if we could start with, this is table number two here?

SCHWARTZ:

No, table one.

GOLDHAMMER:

Table one, could you give us a sense of the kind of conversation that you had? I’m not going to put the burden on you of summarizing the conversation but what kind of issues did you talk about, were there specific criteria that you focused on?

SCHWARTZ:

All right. Well, we actually came up with a fairly clear consensus on a number of things. So we had a high degree of concentration on our voting. So there were a lot of things that

we were in strong agreement on, that I would say. And then we added a few things, kind of novel ideas that I think were quite important. One of the important points we put up there is the importance of talent in the people and the quality of the people and it was very important, and the idea of how we communicate. And part of the incentive – one of the interesting ideas that came out was that if you try and provide direct incentives, i.e., money or things like that and maybe this insight -- Norm came up with the idea if everybody believed that their idea was going to show up in a movie script, for example, or even a short film or having a group of directors say, “We’re watching this to see what’s going to be in the next Spielberg film,” and so on, qualitatively different kind of incentive and so on. And so that kind of idea I think was a fairly powerful one. Is there anything else anybody would wish to add that I haven't said?

GENIK:

Well we also thought that there should be a concentration on the narrative.

SCHWARTZ:

Yeah, the narrative was very important.

GENIK:

Plus the output and maybe use the output as an input. There could be some way to get that in there.

SCHWARTZ:

So the real value, we used the example of the IED. It wasn't knowing about cell phones and RPGs and so on, it was the narrative of people assembling them into a new class of weapon.

GOLDHAMMER:

Great.

BLOUNT:

Are you going to tease out the differences?

GOLDHAMMER:

Yes. Could you give us a sense of what your top three vote getters were?

SCHWARTZ:

Oh, yeah. Openness was number one, persistence and incentives to participate and ease of use were in the next category.

GOLDHAMMER:

Great.

BLOUNT:

Nobody had reliable data. I think that’s also important.

SCHWARTZ:

Yeah, we don’t care about reliable data.

GOLDHAMMER:

We had no votes on reliable data.

SCHWARTZ:

Yeah, we believe in, you know, just random information.

GOLDHAMMER:

That was the narrative crowd over there. Facts don’t matter.

UNKNOWN:

Facts get in the way of the…

[Simultaneous comments]

TALMAGE:

The recorder can’t get this so let’s get the roar down.

GENIK:

Our consensus was that was going to be assumed to be part of the underlying technology of the system and we shouldn't spend a lot of time talking about it. But that has to be there.

GOLDHAMMER:

Great. Would you start, Phil, just telling us what your top vote getters were, whether you had any zero vote getters?

NOLAN:

I was going to say that although that table clearly listened in on some of the rich discussion of this table – [laughter] – they didn't have the craftiness to steal our votes, which are very, very different. Our top vote getters: anomaly detection, persistence and incentives to participate. There were no zeros but we had a whole lot of ones, such as openness, which apparently is a low vote getter. I’m worried, Peter. I normally look to you for guidance but on this --

UNKNOWN:

Nine on that one.

NOLAN:

So the discussion at this table in many ways ended up being, I think, eminently pragmatic and it might be that Gilman here was helping guide us a little bit without even trying. Lots of discussion of what are the criteria that you need for Version 1.0, the version where you’re just proving your concept. It doesn't really have to work that much better but it’s proving your concept. And then when you get more money, more approval, more love from some senior manager or Congress, whoever it might be, then you go back with Version 2, Version 3.0 and you get some of these other criteria in here. So that was one part of our discussion. Another one that I wanted to mention was a theme and it just kind of came in at 90 degrees but it was a good one. It was about some of these are static criteria – I think Steve, you brought this up – and a number of others were dynamic criteria. And mixing the two together can in fact – it may create some strengths but you actually may confuse yourself by saying are we creating a system which does one – where we have the structure – or are we actually creating a system where things flow through? So those were a couple big ideas. Anybody else want to throw out other things that they would -- remarkably interesting.

LOUIE:

…wouldn't go that far, I don’t think.

GOLDHAMMER:

Phil, would you like to summarize our conversation as well? [General laughter]

NOLAN:

…seem to be very good, at least on number two.

GOLDHAMMER:

So in our group our top vote getter was persistence, second was bias mitigation, third was anomaly detection. Controlled vocabulary, which may give you an indication of the kind of group that we had, that got zero votes. And I would say that our conversation kind of clustered around our top vote getters in terms of the design criteria and I would say our conversation was fairly wide ranging within those criteria, sort of talking about both what those criteria meant and then talking about some of the implications of those criteria for actually designing a Version 1 of the system. A couple of key ideas that came up, one was talking about it’s not about technology, it’s about use. It’s not about technology, it’s about use. And so there was it seemed like a general consensus that what you’re fundamentally interested in is the way in which technology gets used and adapted, not the actual invention of the technology itself. Another idea that came up at the end of our conversation was recognizing that given some of these criteria, and here I think we’re taking criteria in a slightly different context, but some of them we understand – I’m sort

of paraphrasing here – but some of them we understand and we know that they’re hard to do and they take time, perhaps they take time, they take money, they take effort, but we basically know how to do them. They’re just hard. Others of these criteria, perhaps like anomaly detection we don’t know how to do so it’s a different class of, in some ways a different class of criteria because solving for that criteria would require a different kind of solution than the things we actually know about. Let me stop there and ask if the table would like to add anything that I undoubtedly forgot.

BLOUNT:

I think the only other thing I’d add is just the notion that a lot of the conversation initially steered towards this being for harm prevention rather than opportunity upside.

GOLDHAMMER:

Great. Yeah, so we did have a conversation about sort of how to think about what the system is ultimately trying to identify, whether it’s about harm, whether it’s about opportunity. A little bit of discussion about what we meant by harm, harm to who, how much harm. Any other comments?

So we have just a few minutes before lunch. I think now that we’ve had, at these different tables, conversations about these criteria, I want to try to get – just insert a little bit of kind of practicality right before lunch to sort of orient our brains to what we’re going to be doing after lunch. So you’ll remember from our agenda this morning, after lunch we’re actually going to be in the teams that you’re in now clustered around these very long tables with a lot of material that you’ll be able to use to actually create a process for this V.1 product. Now we’ve heard a lot about sort of the basic criteria for this V.1 product. It’s not going to cost a trillion dollars, you’re not going to spend five years building it, it has to meet some of your key criteria here as well as some of the issues that – some of the thoughts that Gilman shared earlier this morning. You want it to be able – It’s a proof of concept, it’s a pilot. You want to be able to demonstrate to your venture capitalist that you can make this idea work not on a shoestring but in a relatively short amount of time with a relatively limited amount of funds. And so what we need to do now is start translating some of these more general ideas around criteria and design challenges into the specific ways in which we’re going to move from this conceptual model to an actual system. And so let me just kind of, just to get ourselves thinking about this in the back of our brains over lunch, does anyone want to just take a first salvo or first shot at either ways to apply some of the criteria that we’ve talked about to actually designing the system, where you think you might want to start? Just any thoughts about –

SCHWARTZ:

I have a question.

GOLDHAMMER:

Yes?

SCHWARTZ:

And it goes back in part to Gilman and to our sponsor. This notion of Version 1.0 versus the system you would like to build. My question is this: In thinking about Version 1.0, do you think of this as just simply a more modest version of what is likely to be built in a sense subsequently once you’ve proven it out or is this really just, you know, okay, we’re going to try a proof of concept and then you have to redesign to really build something more substantial. How do you think about this? Because the truth is I think if we target just the pilot system, I’m not sure we’re getting full value out of the group. And if, on the other hand, we target an ultimate system, we may not have something we can build to get started. So the question, how do you get that balance right?

GOLDHAMMER:

Gilman, do you want to address that before we take other comments?

LOUIE:

So, I mean, it’s a good point and is an argument, should we be doing 1.0 or 3.0. So let me just kind of get people on a common vocabulary.

[Simultaneous comments]

SCHWARTZ:

That’s what I’m aiming for. Thank you.

LOUIE:

First of all, difference between -- first of all, software design in my view is nothing ever really works the way it’s supposed to until the third version comes out, right? And Version 3.0 usually is after Version 2, which is kind of an add-on to 1 and patching in any new features. You throw everything away and you rewrite to get Version 3, incorporating the concepts but maybe not the exact code base and process base that you – just kind of understand it as a ranking. What’s really important in 1.0 is what are the things that you have to do not only to produce a result but produce a result that distinguishes you differently and makes you special versus the old system and the older version. If you can’t demonstrate uniqueness of value or application or way of thinking about the problem with 1.0, you’ll never get to the 3.0. So it’s not just good enough to come up with a system that gives you a forecast if what you get is not that different than what all other forecasts give you. On the other hand, it doesn't necessarily need to be that robust, be that reliable, built to last or any of those things. But what are the minimum things you need to say there’s something really, really special here. We should continue to pull on that thread.

GOLDHAMMER:

Darrell, do you have something?

LONG:

So I’m going to deliberately be a bit provocative here and Gilman will be able to chime in on this. We already have 1.0 of this, right? It’s called the intelligence community and Sand Hill Road, right? And so what are we trying -- what I think we’re trying to do is we’re trying to get beyond what we have now, right? And so, you know, that’s what -- what I want to understand is, you know, we’ve got who knows how many analysts in DIA and CIA and in the State Department whose job it is to do most of these things on the list, right? And then we’ve got Sand Hill Road, right, who’s also doing some of these things on the list but for a different motivation. So how do we get beyond that from the existing system that we have?

GOLDHAMMER:

Good question. Jennie?

HWANG:

You are particularly kind to Bill Gates, you know, who’s on the conversion cell block --Windows. But I think another way to look at this with the final outcome is there’s a lot of, most things are done in such a way, including the documents and reports, you really want to have a maximum angle with the picture, you know, 360 degrees rather than 200 degrees at the beginning, then you come down with some of the constraints and availability and quality of information or even people so you turn out to be perhaps less than that. That’s one thing. The other is, you know, instead of looking at the basement of the walls of the Federal Reserve, we should really look at, you know, reach for the sky. So therefore then you come out with the best things. With that kind of goal, it seems a lot of things work out that way. Even if you have that, you’re not able to reach that. But you could say the limit there, then chances are you’re able to reach, you know, an outcome with even less than what you set out for. So that is another perspective, you know, to see how you really look at things.

GOLDHAMMER:

Great. Harry?

BLOUNT:

I want to pop back to Darrell’s point a second because I think if we are looking at this as persistent and trying to get from the 1.0 version of what we already have out in the real world, one of the biggest challenges in the intelligence community, venture capital community, is the idea that you have a systematic feedback loop that can be captured and monitored. And I’m not sure that if we don’t walk out of here with at least some conceptualization of what a feedback loop is and how to measure it on a persistent basis to really improve the system on an ongoing basis, I think that will be a huge missed opportunity.

GOLDHAMMER:

Okay. Stewart?

BRAND:

We’ve got a couple biologists at this table. I think of these things as larval and adult form and taking Darrell’s approach you can say the intelligence community and venture capital community is sort of the larval form of what? And what we’re trying to think about is what’s the metamorphosis that would take it to this next stage so it can fly around and be beautiful and so on, or you can say 1.0 has got to be the larval, which means it has to feed itself and metabolize and have the capability of being something even more interesting later. So those are two approaches to take. Are we doing, creating a larva from an egg or are we metamorphosizing what already exists in something else? Probably two different techniques, both interesting.

GOLDHAMMER:

I think one other point to add is I don’t think the goal here is to reinvent the wheel and to the extent that there are, you know, there are in the world, in the intelligence community or in the VC community, there are elements of the system that can be a source for that system. That seems like something you want to think about and incorporate at either some practical or some kind of computational way into the system that you ultimately develop. You don’t have to reinvent it from scratch. You can assume that it exists.

WINARSKY:

So another differentiation from the 1.0 – I mean, you’ve very clearly articulated two communities that develop these types of forecasts, that is, the venture community and the intelligence community. One of the important points this table had was openness, so we want forecasts from everybody. We want crowd-sourced forecasts. We want forecasts from somebody in a tribe in Afghanistan. So that would be very different than the two communities that we talked about.

GOLDHAMMER:

Okay. Other comments? Yes, Stan?

VONOG:

So it’s been very interesting for me to hear and kind of understand now what we’re trying to design. So I heard these problems like you have report one, it was one report and interesting. Build system ten times, same results. And then, so some of the things here isn’t native with me, actually, and one of the things it was -- I went to Walt Disney Family Museum, which was just opened in the Presidio. And, you know, when you were talking about movie stuff, so that’s one of the things -- So I saw one thing in the Walt Disney Family Museum which was amazing, and it amazed me so much that I and my colleagues are trying to build our company a similar way. It was a Walt Disney org chart. And it was not like a tree chart or something, like the whole process, it was like a whole circle and in the upper side there was a story which comes from Walt Disney and then it

goes to the middle and then on the one edge of a circle is like animators and on the other edge technology innovators which are building all these moving cameras and all those special effects things. So I was thinking like – I had this idea, and I see that many people have this idea too, so it may be like a radical idea for the system. Maybe you could just separate like Walt Disney, producing -- like Walt Disney produces movies. So you could produce like reports or something and story comes straight to the center. And on one side is all these data analytical tools, like all this crowd sourcing, social networks, data filtering, whatever, so it’s not based on like reality. But on the other side is scientists and experts who kind of validate this story thing. So this crazy, you know, movie writer invents a story based on doomsday scenario or opportunity scenario and then you kind of get data on one side and then validate scientists, so they say, "Oh, it’s not going to happen in five years or it’s not going to happen in three years." And you have like many, many story writers. You a have Russian story writer who writes their own story, like U.S. story writer, and you, just your product is like movies or whatever, like mockups. So all very --

GOLDHAMMER:

Good. Interesting idea. Very good. I just want to take an opportunity, for anyone who has not spoken yet today, I want to give you an opportunity, if you have any thoughts you’d like to share with the group at this point.

NOLAN:

100% participation is pretty good.

[Simultaneous comments] [Laughter]

GOLDHAMMER:

Yeah, I’m not a cold call caller. I’ll open up the opportunity space but I’m not a cold caller. All right. So I don’t think we’ve reached consensus on exactly what it is that we’re trying to develop in the afternoon. I think the one thing that I want to leave you with though before we head to lunch is we need to be practical and pragmatic. We need to come out -- for those of you who’ve written books, you probably had the same experience I did, which is that when you finished the book you realized what it is you actually wanted to write. [General laughter] [Simultaneous comments] Exactly. And so I think, I would expect, I’m just going to signal now that that’s probably the way you’re going to feel sometime around four o’clock this afternoon, which is that by the time you get to the end of this process you’ll realize exactly what you wanted to do. But I think fundamentally what we’re trying to do using spatial relationships and causal arrows is to develop an outline of a V.1 system that may borrow from things that already exist in the world, that may combine them in new or novel ways, I think as Stan was describing, borrowing, riffing off of Walt Disney. And I would suggest that after lunch -- So we’ll break ‘til one o’clock. After lunch let me suggest that we all come back to our groups, that we spend about ten minutes just kind of getting some alignment within the group around what exactly is this product producing. Let’s get sort of a storyboard image of what the result is supposed to be, some alignment around the key criteria that you’re going to optimize for. And then when you’re done with that conversation, group number one, you’re going to be at this table over here. Group number two, you’re going to be at the table back here and we’re group number three and we’re going to be at the table over there, all right?

BLOUNT:

Hidden away.

GOLDHAMMER:

Hidden away in the corner. Any questions about what we’re doing now and what we’re doing after lunch? Daniel?

TALMAGE:

I have instructions. So for our accounting department, we need you guys to sign in. So there’s the sign-in sheets over here. There’s a guest, a guest sheet and a committee one. Once you’ve signed in you can loop on around and lunch is served. [General laughter]

SCHWARTZ:

No lunch ‘til you sign.

Lunch Break


INTRODUCTION TO TEAM ACTIVITY: DESIGNING A SCANNING SYSTEM

GOLDHAMMER:

Why don’t we get started. Have a seat, please. Finish your conversations. iPhones in pockets.

TALMAGE:

[Laughter] Jesse’s good. He spotted the phone.

GOLDHAMMER:

The thing I love about these kinds of meetings, especially when you’re bringing the outside guests in and you want the alternative perspectives, is anyone -- how many Apple devices are there in this room? [hands raised] How many Windows based devices are in this room? Or Microsoft OS devices? [hands raised]

[Simultaneous conversation]

GOLDHAMMER:

Okay, can I get your attention? So we’re going to dive back in. I’m sure that over your Cobb salad you had time to reflect on the 45 different elements of this system we’re going to be building this afternoon. That’s great, it’s wonderful, I’m glad it’s top of mind. I want to give you -- as you probably have noticed, and it’s quite intentional, that we’ve been moving from high concept in the morning and increasingly more and more granular. And now I want to take you even one step further. So the first activity we’re going to be doing in our teams is designing a scanning system and I want to give you an idea – and this is literally just an idea of what that might look like and sort of, and just kind of hitting a level of kind of granularity that I think is appropriate for this kind of exercise. So if you have still in front of you this conceptual map, you’ll see that one part of it is people feeds, all right? So at some point the committee said you know what, people feeds is a really important input into this system. Now that may be the wrong term to describe it. There may be other things you’d want to add to people feeds that are not there. There may be text in that page that is the wrong text. All that being said, if you were to actually build that out and say how does this actually look, you could do something like this. So you could imagine having as one of your process elements some kind of DOD analytic unit, which is coming up with strategic questions. Questions around disruptive technologies, questions that they want to put to people elsewhere in the world, questions that are driving research, questions that may be driving hypotheses that may get answered with data later on. But some group of people has to figure out what are the questions we’re asking here, all right? So there’s some kind of analytic unit. That analytic unit might be driving survey research and you might be doing twelve global surveys with incentivized participation from experts. You might want to be doing surveys on a global basis and not just in a local basis. You are building a V.1 system. You’re not doing a thousand surveys. You’re doing something that’s manageable. And those surveys, that

survey content is being driven by those strategic questions. You might also do a prediction market and there’s a relationship between the two. The survey data is telling you what’s going on now; the prediction markets are telling you what’s going to happen in the future. Some of the questions that you might be asking with the survey data may actually end up helping you to determine whether these predictions, depending upon how far out they are, are actually the right predictions. And this is something, for example, that Monitor 360 is doing now in Pakistan and Afghanistan, which is that we’re using Gallup, who’s doing polling in places like Afghanistan, asking people on the ground questions about what’s going on, we’re running a prediction market and we’re using those polls as data that will help us to determine whether predictions about whether there’ll be a certain level of support for the U.S. military at some point in the future are actually right. Right? So you can imagine a relationship between these two things, and those are both again people feeds. And then that feeds into five global synthesis workshops. So you’ve got a bunch of data that you’re collecting and survey data, prediction market data, and then you have to figure out what it means. And you want to figure it out using a bunch of diverse outside perspectives. You want people from different, speaking different languages, different customs, different use, use technology in different ways, and you can do it in five different places. So again, this is purely hypothetical. This is just a stake in the ground. It’s just an effort to show you that if you were to take this little conceptual piece from that process model and build it out and show the relationships and then show feedback, that there’s iteration here, that it could look something like this. And this might be one piece of what you end up having on your boards. At a conceptual level, are there any questions about this, as a goal? Wow, really? Good. So here’s, again, a good example about how -- I was expecting something.
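
The survey/prediction-market pairing Goldhammer sketches here is essentially a cross-check: the polls say what is happening now, the market prices say what participants expect, and large gaps between the two get flagged for the synthesis workshops. A minimal Python sketch of that comparison follows; the questions, the numbers, and the 0.15 review threshold are hypothetical illustrations, not anything Monitor 360 or Gallup actually runs.

def divergence(survey_share, market_price):
    """Absolute gap between a survey estimate and a market-implied probability."""
    return abs(survey_share - market_price)

# Hypothetical questions: (share of survey respondents agreeing, market price 0..1).
questions = {
    "Support for coalition forces exceeds 40% next quarter": (0.31, 0.55),
    "Mobile money adoption doubles within two years": (0.48, 0.52),
}

THRESHOLD = 0.15  # flag anything this far apart for human review at a workshop
for q, (survey, market) in questions.items():
    flag = "REVIEW" if divergence(survey, market) > THRESHOLD else "ok"
    print(f"{flag:6s} survey={survey:.2f} market={market:.2f}  {q}")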

SAFFO:

I’m just kind of stuck on the part of the local survey of Pakistan, but…

GOLDHAMMER:

So there are five people in Pakistan that you’re asking the questions and then you just re-circulate those questions with those five people and that’s what you need. [General laughter] And they’re all in a café.

NOLAN:

Only five that matter in Pakistan. [Laughter]

GOLDHAMMER:

That’s right.

NOLAN:

Which five, is the question.

GOLDHAMMER:

So here’s what we’re going to be doing. In your team you’re going to be developing a process diagram that illustrates the essential steps in your scanning system and meets your design criteria. And remember, each team came up with a set of design criteria. Spend the first ten minutes in your teams at your tables discussing how – put a stake in the ground on how are we going to do this. Are we going to start with the output, are we going to start with the collection system? What is the process that you as a team are going to use to try to elaborate on this process diagram? And then in the remaining time that you have – and I’m going to give you about an hour and a half total – you’re going to start to lay down the process using the materials that we’ve provided. Teresa, are you here? Let me give you an example of what these materials look like. So we have precutout shapes, all right? This is very tactile. So you have different shapes. You can assign a meaning to these shapes if you want to. They don’t come prepackaged. The triangle doesn't mean anything but you can decide that it means something in your teams. We have pens, we have of course the re-stickable glue stick which allows you to put your

shape onto the table and have it not move because someone pushes the table and it slides over. We also have pushpins and these are your causal arrows right here. You have to be able to show how one thing connects to the other things. There are inputs and outputs in a system. So you can use this as a way of showing how, for example, if we went back here, how the [coughing] analytic unit was driving survey research, which was driving prediction markets back and forth iteratively and then down to a global synthesis workshop. So we want you to show us what are the elements of your system and we want you to show us how they’re connected, not necessarily causally but in a process sense. Does that make sense?

UNKNOWN:

Yes.
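
A table's finished diagram, cut-out shapes joined by pushpin arrows, is in effect a small directed graph, and recording it that way is one convenient option once the exercise is over. The sketch below stores the hypothetical people-feeds example as an adjacency map; the element names simply echo Goldhammer's illustration and are not a prescribed design.

# Nodes are the cut-out shapes; directed edges are the pushpin arrows.
diagram = {
    "DOD analytic unit":   ["global surveys", "prediction market"],
    "global surveys":      ["prediction market", "synthesis workshops"],
    "prediction market":   ["global surveys", "synthesis workshops"],
    "synthesis workshops": ["DOD analytic unit"],  # feedback, closing the loop
}

def edges(graph):
    """Flatten the adjacency map into (source, target) pairs for review."""
    return [(src, dst) for src, targets in graph.items() for dst in targets]

for src, dst in edges(diagram):
    print(f"{src} -> {dst}")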

TALMAGE:

Now can I put in another word? For this event we’re actually going to record each table. We’re going to have somebody transcribing so if you could say your name every so often as you start to say something, it will help them to keep track of that. Because we have some of our staff doing the tracking today.

GOLDHAMMER:

Can we just meta tag everyone? Is that possible?

TALMAGE:

It’d be nice but we can’t do it.

[Simultaneous comments]

GOLDHAMMER:

So now are there any questions about what we’re going to be doing for the next hour and a half? Any questions first before I give you the next set of instructions?

SCHWARTZ:

Yeah. We are not trying to produce convergent results, I assume. Is that correct?

GOLDHAMMER:

In fact, it is precisely the opposite. We are hoping for divergent results. So each team will be producing their own process map. At the end of the hour and a half we’re going to ask each team to give a short report out, just walking us through what you’ve created in that 90 minutes, and it’s just a little bit of a kind of pressure test report out where other parts of the group, committee or guests, can say, “Well explain to me the connection between this and this. Why did prediction markets then lead to having people who use crystal balls and then why did the crystal balls lead to, you know, briefing Admiral Mullen, and I don’t understand that connection.” So we just, just a little bit of a pressure test. And then remember the final activity, which we’ll come to later, will be to layer in another level of detail in your process map looking at how do you actually do it. Does it require ten people, fifteen people, twenty people? Are these partnerships, are these inside the government, outside the government, what kind of technologies do you need? All those questions as well. Peter?

SCHWARTZ:

Another question. When you say scanning system, is this going all the way from top to bottom of this chart or only part of the way down this?

GOLDHAMMER:

I believe -- I’ll actually maybe let Gilman answer that question.

LOUIE:

So if you take that as a linear chart and assume that it was originally a circle, just kind of unrolled straight – a natural feedback loop – the question is can you design a complete loop, one iteration of the loop.

SCHWARTZ:

But involves all these elements, is the question. That’s the question.

LOUIE:

If you believe those are the right elements. I don’t want to bound it by –

SCHWARTZ:

But it could.

LOUIE:

It could but it could be a bunch of other stuff or different stuff that’s in there.

SCHWARTZ:

But we’re not limited to, your example, like people feed, it is the whole story here.

LOUIE:

No, that’s correct. It should drive it through the cycle.

SCHWARTZ:

Through the full cycle.

LOUIE:

Right.

SCHWARTZ:

Thank you.

GOLDHAMMER:

Now there are a lot of ways to skin this cat. I imagine that every team is going to do something a little bit different. What I would encourage everyone to do is to be as engaged as, to sort of as engaged as possible in sort of what the team is focused on. ‘Cause I think the risk with an exercise like this is there are going to be two people over here who are saying, “Oh, what if we did it this way?” and then two people over here saying, “What if we do it this way?” Having the conversation as a team I think will be the most, you’ll get the most effective results at the end.

WINARSKY:

Are we designing in this stage the system as we imagine it complete or are we designing the alpha version that might be implemented in a short period of time and energy?

GOLDHAMMER:

Gilman?

LOUIE:

I think, let me give you the kind of, in the perfect world but I will leave flexibility up in the teams. You might have what I would call – let’s say that this bar or this unraveled loop is called “stack," right? So in the stack are the things that you think are all the important parts in the stack and then put an actual bold box around the things you want to be in 1.0, right, which means these are the things that we have to do and then put maybe a brighter colored box around the stuff that we, what we want to do really, really well, and we just assume all of the rest of the stuff will, you know, marginally do 1.0 just to get the stack or not do at all, right. So I want to know what you think is a requirement for 1.0 and what you think is going to be the spectacular jump out things that you put a lot of resources and energy behind. But it’s okay to include all the other pieces that’s in there so at least we understand conceptually where you’re driving to.

SCHWARTZ:

So for example, going back to Jesse’s diagram – for example you might say we cannot possibly do without a minimum of 12 global surveys. That’s got to be part of the, even a system 1.0. And the final one might have 25 but we’ve got to have at least 12 in Version 1.0.

LOUIE:

That’s right. And that’s a great example.
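Louie's "stack" with its bold and brightly colored boxes, and Schwartz's minimum of 12 global surveys, can be captured as a simple annotated list: every element of the unrolled loop is kept, but the ones required for Version 1.0 and the few meant to be spectacular are flagged. The short sketch below illustrates the bookkeeping; the element names, the flags, and the survey figure are hypothetical and only echo the examples in the discussion.

stack = [
    {"element": "global surveys (at least 12)", "in_v1": True,  "emphasis": True},
    {"element": "prediction market",            "in_v1": True,  "emphasis": False},
    {"element": "narrative generation",         "in_v1": False, "emphasis": False},
    {"element": "synthesis workshops",          "in_v1": True,  "emphasis": True},
]

# The bold boxes: what Version 1.0 must include.
print("Version 1.0 scope:", [e["element"] for e in stack if e["in_v1"]])
# The brighter boxes: the few things to do really, really well.
print("Do spectacularly: ", [e["element"] for e in stack if e["emphasis"]])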

GOLDHAMMER:

These are designed so you can write on them. This paper you can write on. And we’ve given you a lot of space because we’re assuming that we, we want to get as much detail as possible. And when we’re done we’ll be taking pictures of these, sort of sequential pictures, so this will all be captured.

TALMAGE:

And it’ll be captured on the tapes.

GOLDHAMMER:

So Daniel before the exercise that we finished right before lunch had sort of assigned this table to the other tables. Keep those assignments. You go back to the table you were with before lunch. Questions?

WINARSKY:

Can we have a round table? [General laughter]

TALMAGE:

You can use – I’m sure there’s some design innovation you guys could use to take the rectangular table and make it round.

GOLDHAMMER:

All right, any other questions before I let you loose?

GOLDHAMMER:

All right. So again, my recommendation is in your teams have a ten to fifteen minute conversation first just to figure out what you’re going to do and then go do it. Okay?

UNKNOWN:

Can we have our recruits back?

GOLDHAMMER:

Yeah.

[TEAM ACTIVITY: DESIGNING A SCANNING SYSTEM]

1:00 P.M. – 2:30 P.M. (SEE APPENDIX E)


SCANNING SYSTEM GROUP BRIEFINGS

2:30 P.M. – 3:15 P.M.


Group 3 Briefing (Option 1)

GOLDHAMMER:

Okay, why doesn’t everyone gather around this table, please. So having walked around the room myself, it is clear that each team approached this problem in very different ways. This team – you can kind of take a look and sort of read what is on the different post-its. Darrell is going to kind of walk us through at a high level, and other team members who want to add in or expand, please do so. Let’s take just about 5 minutes to explain. If there are any questions that the rest of the group may have, we can talk about it.

LONG:

So you shouldn’t think about this literally, but we’ll do it literally anyway. So up here we have stakeholders. These are the spheres of interest of whoever the client is, what they’re interested in. From this, one thing we’ll come out with is big picture questions. And one way to think of this is forcing functions. If you are interested in climate, for example, that would be a forcing function. Then you have data collections, this is passive data

collections, constant persistent bringing in all the data, and whatever the data here is, publications, grants, financials, whatever. Here we have active (?). Maybe actually, not just actively looking for things but probing the system and looking for reaction in the system to see how the system responds. Here these are creative people, these are hypothesis generators; think of these as science fiction writer types, people that dream happy dreams and have nightmares. (Laughter) But are not necessarily constrained. The importance here is that these people are not constrained. These are not people that are tied into policymakers or stakeholders; these are guys whose job is to dream of stuff. This is going to generate a set of hypotheses. Here we evaluate hypotheses. A very nice dream, however, that violates causality, that can’t happen, and that feeds back up here. That’s going to feed back into both the collections process, because this is a very interesting hypothesis that you have. This could be a threat, or this could be an opportunity. We need to know more about it so we will increase collection in those cases. Or it can be that was a really good hypothesis but it needs to be refined, or it could be a bad hypothesis and it can’t happen. So the evaluation process may be – there is a broad spectrum of evaluation of what happens here. There are scientific experts, which we are biased towards. But there are financial issues – is it practical financially – social, political, all this stuff, and then there is the gaming industry and this sort of crowdsourcing everyman coming in. So this is going to evaluate the hypotheses, and that is going to produce a raw output. Whether this is a good hypothesis… let’s back up here to these guys.

GOLDHAMMER:

Those guys are?

LONG:

These are the hypothesis generators.

GOLDHAMMER:

Back to the hypothesis generators?

LONG:

Back to the hypothesis generators. Now down here,

GOLDHAMMER:

Just before you get to things down here, things that come in here are new questions. Did we miss something? Ranking analysis. These guys come together, and within their area of expertise or in their area of, or not necessarily their area of expertise, come up with “are we right”, are we on base, are we off base, and then scoring, and then that then feeds down to –

LONG:

Right, right. This is sort of raw, goes back up to the hypothesis generators, this goes down to the storytellers, who take all this stuff that we have heard of a couple of times and remove the decimal points. This goes out to policymakers, things like this. This feeds back to (we need a line here guys, feeds back here, put a staple in this).

GOLDHAMMER:

They’re the policymakers, is that right?

LONG:

No, these are like an agency kind of thing, right? Down here, this is the president or Congress.

GOLDHAMMER:

So who are these guys?

LONG:

Some agency that is charged with doing some kind of forecast (simultaneously speaking with members of audience), maybe we need three dimensions here.

GOLDHAMMER:

Questions? Questions?

SCHWARTZ:

How many people do you need?

LONG:

How many can you afford? This is a scalable parallel architecture. (simultaneous audience talking) You’ve got collectors already in the system. So you’re going to leverage existing collections, so here’s all collections. How many people is that, I don’t know. How many hypothesis generators? Probably a fairly small number.

SCHWARTZ:

What a dozen, half a dozen?

LONG:

Yeah, not hundreds.

GOLDHAMMER:

Where would you put in the country factors? In other words, I’ve got eased into the traditional … hypotheses engine should have your regional panelists, that part of the engine which would increase the rate.

LONG:

I don’t want analysts there, I want analysts down here.

UNKNOWN:

And actually Perry and I were talking about this, this gaming, crowd sourcing, this step we have our subject matter experts here but then here we bring in that cross leverage.

LONG:

Here is where you do the impact analysis. These are the guys that are informed by the big question. These guys say “we really need for you to pay attention to the Middle East now”. And these guys will make hypotheses appropriately for that, but analysts in the traditional sense that you and I would talk about analysts, they’re not here. They are not creative, but panelists.

GOLDHAMMER:

I have a question. Are you able to forecast a disruption?

LONG:

You’re asking me to forecast,

GOLDHAMMER:

For this very systematic model.

LONG:

We think so, that is why it is designed this way. These guys here, we want to be ahead of the curve, if we didn’t have these guys here and just analyzing data, then we can only pick up things that have either happened or very close to happening. What we want here is for people to say, I’m seeing signs of things, and they go huh, maybe if just a couple more things happened, I could do this crazy thing. These are the guys that are trying to project us way out on the curve.

SCHWARTZ:

But you connect these guys this way, they feedback.

LONG:

They are getting feedback. These guys are getting information, this is the state of the world. We were just suddenly able to increase communication by a factor of 100. Oh, okay, that does something.

UNKNOWN:

That’s scary.

GOLDHAMMER:

How do you get the feedback to get the right type of people?

LONG:

There are actually two components to this feature. Part of the narrative engine is creating a plausible human condition scenario that is somewhat tied to that hypothesis. So there is a feedback loop, there is one through here and one through here, right? If people are being evaluated on their performance, right, and these guys are consistently generating crackpot things – antigravity weapons – you get a different job.

LONG:

If you think about Procter and Gamble, everybody that goes to Procter and Gamble to work starts out in sales. No matter where they end up they are all required to start in sales. Imagine an analyst structure where everybody, in their first job, has to go on walkabout, so that they are going out to odd spots chasing up things generated out of this. So everybody spends some time way out in the field before they come back. Some kind of glossed over quantitative person. Coming in, coming through here we have a ranking system, we did bring that down, and when we called in the ranking, unlike [mike noise] just seeing the consensus, we want to see the outliers, and then this feeds back up so we find out which group was more accurate, the consensus or the outliers, based on the hypotheses.
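
The ranking step Long describes can be read as a running scorecard: evaluators score each hypothesis, the consensus view is separated from the outliers, and once a hypothesis resolves the system records which camp came closer, so the feedback loop has something concrete to learn from. The Python sketch below illustrates that bookkeeping under stated assumptions; the evaluator names, the scores, and the one-standard-deviation cutoff are illustrative, not part of the group's design.

from statistics import mean, stdev

def split_consensus_outliers(scores, k=1.0):
    """Scores within k standard deviations of the mean count as 'consensus'."""
    mu, sigma = mean(scores.values()), stdev(scores.values())
    consensus = {r: s for r, s in scores.items() if abs(s - mu) <= k * sigma}
    outliers = {r: s for r, s in scores.items() if abs(s - mu) > k * sigma}
    return consensus, outliers

def update_track_record(record, camp, error):
    """Accumulate absolute error per camp so accuracy can be compared over time."""
    record.setdefault(camp, []).append(error)
    return record

# Hypothetical evaluator scores: probability each assigns to the hypothesis.
scores = {"eval_a": 0.2, "eval_b": 0.25, "eval_c": 0.3, "eval_d": 0.85}
consensus, outliers = split_consensus_outliers(scores)

outcome = 1.0  # the hypothesis turned out to be true
record = {}
record = update_track_record(record, "consensus", abs(mean(consensus.values()) - outcome))
record = update_track_record(record, "outliers", abs(mean(outliers.values()) - outcome))
print(record)  # here the outlier camp scored better than the consensus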

GOLDHAMMER:

The two big drivers of the consensus – this is really great. But this is really critical: what is in here is really critical to your system.

LONG:

How it is really used, that is the newest.

GOLDHAMMER:

How does this system improve over time? Is there a group of people that get smarter?

LONG:

Hopefully everybody gets smarter. If you just rotate people back and forth, you’ll find some people are good at some things, some people are very good at evaluation, people that are good at checking arithmetic and not good at creating things. You are not going to recycle those people. But some people you are going to recycle through.

UNKNOWN:

My biggest concern [mike noise] that wrote the Black Swan, his book is about [mike noise] do Black Swans occur. And they may not occur for days, months, even some years – how do you evaluate someone?

UNKNOWN:

Roubini was wrong, wrong, wrong!

LONG:

So let me just close our conversation on this one, and put the team over there on the hot seat.

GOLDHAMMER:

This was great, thank you. Okay, gather around this table!

Group 2 Briefing (Option 2)

[Short beginning of brief not recorded]

REED:

So and then we also use kind of classic techniques such as brainstorming, market surveys, mobile collection – with, like, mobile phones – data collection. We can look at games, tweets, you know, what’s going on in the news, current articles, and we can extract predictions, a little bit more that we’ll get to in just a second. Let’s see. What is this one? Oh, yeah, so once we start, especially when we start identifying communities, we really want to identify communities that are doing something, that have resources behind it.

Either a lot of people are working on it or they have funding or there’s some sort of resource that makes that community interesting. We have the -- can’t remember how to use this.

LOUIE:

That’s the comparative engine. That’s the --

REED:

The comparative engine. Thank you.

LOUIE:

Fundamentally that engine’s supposed to look at, you know, a survey of lots of different media and find out what’s new that’s happening.

UNKNOWN:

Right. Is that a computer or is that a person?

REED:

That’s automated. It’s a computer. But we have this cool legend here. If you look at the A’s those are automated by computers. “H” just means there’s a human involved.

McCORMICK:

S’s. “S” means semi-automated.

REED:

Yes. Okay. As well as “A-H” also means something.

SCHWARTZ:

And “S-H” is semi-human? Is that…?

REED:

Yeah. [General laughter]

[Simultaneous comments]

REED:

And so we get, we do involve, this involves some experts here, right? Then we get narratives from -- Is that right, Jennie?

TALMAGE:

Jennie’s in the back here.

HWANG:

Oh, yes. [General laughter]

REED:

And the multi-narratives, where do the narratives come from? Do those come from the experts?

MARK:

They might be regenerated by narrative generators.

LOUIE:

Yeah. So they could be regional experts or technical, right?

REED:

Right, right. But those are...

[Simultaneous comments]

HWANG:

The narrative, you know, to be really valuable, has to come from integrators in the real world, rather than one type of expert.

SCHWARTZ:

Well, you know, you can actually, and it has been done – I mean, some of the movie studios have done it – you can actually create a narrative computer engine that basically combines -- You know, every movie is either a comedy, an adventure or a love story, a

mystery, that’s it. You know, you just combine all the variations and that’s it. And they all have three acts.

GOLDHAMMER:

So since we’ve got another one to go, I want to pick up the speed on this one and kind of hit a whole bunch of ideas as fast as we can. So Ben, just go, go, go.

REED:

Okay, okay. So up to here we just have, we’re collecting ideas, so we’re going to have a bunch of ideas. And then the idea here is to filter them out, to extract conditions for a prediction. So basically we have a bunch of predictions. We start extracting conditions for the predictions, we start correlating them with current events that are happening in the news, right, so this could be automated. And then once we have these predictions then we start the humans looking at them, mapping them to current trends, current models. Then we start doing things like backcasting, second order analysis, and then -- I wasn't involved in this at all down here. [Chuckles]

[Simultaneous comments]

GOLDHAMMER:

Gilman, why don’t you tell us about it.

LOUIE:

Yeah, one thing to note here is that there are specific models of technical evolution that we see that should be mapped against what we’re seeing on the signals and idea creation and then actually use both human and automated systems and do that computation. Then, once you’re going to get that data, really begin to extract out both the signals and signposts. And this is the potential future. Can you backcast and create the signposts as well as look at what’s out there and match what is out there signaling that’s matching up against those signposts. And then report in a way that is useful, whether it be, you know, dashboards and maps and lots of diagrams that people can understand, kind of what is the potential possibilities, or even doing things like storytelling, kind of a day in the life of the future. It could be in the context of a game, it could be in the context of a movie, a short, a piece of fiction, but to be able to tell the story in the context of somebody’s everyday life from their point of view based on where they live in the world and how technology’s going to affect them and then feed it back then to create new ideas.

TWOHEY:

So it’s just disruptive technology then?

LOUIE:

They’re very focused on disruptive tech.

GOLDHAMMER:

Any other questions, one or two questions?

UNKNOWN:

That was just brilliant.

MARK:

You left out the prediction markets.

LOUIE:

Oh, yes, and that we can also use prediction markets in a different way, yes. So the problem with prediction markets as we learned in the committee is that it’s really bad for long term forecasting because you make a prediction and you don’t know if you’ll collect any money 15 years from now. So the idea is don’t use prediction markets to predict the predictions but use it to predict the signposts. When do you think a signpost is going to hit, what’s the probability of the signpost becoming real and use that to evaluate signpost analysis.
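
Pointing the prediction market at signposts rather than at the long-range forecast itself, as Louie suggests, needs only modest bookkeeping: each forecast is backcast into nearer-term signposts, each signpost carries a market price, and the system watches how many actually get hit. The Python sketch below is a minimal illustration; the forecast, the signposts, and their probabilities are hypothetical examples, not committee results.

from dataclasses import dataclass, field

@dataclass
class Signpost:
    description: str
    market_probability: float  # price from the prediction market, 0..1
    hit: bool = False          # updated as real-world signals come in

@dataclass
class Forecast:
    description: str
    signposts: list = field(default_factory=list)

    def hit_rate(self):
        """Fraction of this forecast's signposts observed so far."""
        if not self.signposts:
            return 0.0
        return sum(s.hit for s in self.signposts) / len(self.signposts)

f = Forecast("Cheap launch to orbit becomes routine within a decade")
f.signposts = [
    Signpost("Reusable first stage demonstrated", 0.7, hit=True),
    Signpost("Launch cost falls below $2,000/kg", 0.4, hit=False),
    Signpost("Two or more commercial providers competing", 0.6, hit=True),
]
print(f"{f.description}: {f.hit_rate():.0%} of signposts hit so far")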


SCHWARTZ:

And how does this -- how does the system get smarter?

LOUIE:

That's one of the ways. So as you begin to see whether the signposts are being hit or not being hit, you may say, you know what, we need more work or resolution on the signposts because you're not matching. Or you may be having problems in the natural collection of the signals.

SCHWARTZ:

But who’s making that statement? Who’s the “we” in that case? We --

REED:

This is a continuous flow. This isn’t something that stops.

SCHWARTZ:

Right. But who’s the “we”? Is there a committee, is there a group?

LOUIE:

Yeah, there’s a team. There’s a group, there’s a team running the system so they’re the --

SCHWARTZ:

And they’re the ones getting smarter.

LOUIE:

Hopefully.

REED:

But also, since we’re collecting predictions from the crowd, like media, and –

SCHWARTZ:

And do they get smarter somehow?

REED:

Yeah.

SCHWARTZ:

When we give feedback?

REED:

Yes.

LOUIE:

So there's this concept of getting smarter versus having better maps and more maps. So the concept – and where it's really important for us – is not to filter too early. So try even the bad ideas, but map them, right? The more maps you have – you know, the more good and bad ideas you begin to track as they become real – the more it will hopefully enlighten you, or the users, about what is more likely versus less likely to happen in the future.

SCHWARTZ:

Are these what you mean by maps?

LOUIE:

Part of this is the analysis of the maps, but it's also here in the backcast, in the signpost generation.

REED:

There's a map there with signposts and visions. The axes are impact and uncertainty.

SCHWARTZ:

So in this model it’s more is better and don’t filter too early.

GOLDHAMMER:

Let’s see how the third team did.

GOLDHAMMER:

Let’s take a look at the last table, please.

[General Conversation]


GROUP 1 BRIEFING (option 3)

GOLDHAMMER:

Okay, last report out. We'll do a few questions and then we're going to go to a break.

SCHWARTZ:

So this is organized mainly in three groups. This is mostly input, analytical approaches and things that we're doing with the stuff, and then the outputs. So we have many different sources of input. You can see all the variety here, blogs, forecasters, journalists, people – Oh, yeah, this is an important point, people on the ground all over the world – Aid workers, NGOs, soldiers, etc., the people who are actually seeing things out there. So there’s a whole bunch of sources of input. A variety of ways to analyze. We also came up with a number of incentives. For example, the idea of movie scripts, reputation enhancement, the idea that there will be interesting people listening to them. So that’s another incentive. Crowd sourced analysis. Each of these represents different approaches to analysis. Now back upstream here – and this all goes back to the feedback – are the various forms of output. Part of it is actually specific queries. So a policymaker says, “I want to know is cheap launch on the horizon for space vehicles?” A typical query. Systematic reports regularly – quarterly, annual, whatever it is. The analyst in the system who’s a member of the Committee on --

WINARSKY:

To assess disruptive technologies.

SCHWARTZ:

Yeah, the Committee to Assess says, “You know, we've been looking at something, something interesting is coming along. Policymaker, you need to know about this.” So it’s driven by the analyst. Then finally the Disruptipedia. The Disruptipedia is the public face of this where all this information gets both displayed and people can input and participate. So part of the incentive is they get to see their stuff on the Disruptipedia and the Disruptipedia is part of the interface that they get to interact with. That, in turn, feeds back down here, improving the questions – that’s one of the ways it gets – and also crowd sourced analysis off the Disruptipedia. Team, have I missed anything important?

GOLDHAMMER:

That’s great.

GENIK:

I would just add that all of our inputs here are inputs to all of the decisions so that we didn't draw every little line. It’s like an application programming interface (API), everything can feed in and everybody can draw from back here and this system will generate output without any inquiry to it.
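
A rough sketch of the API-like hub Genik describes, where any source can feed in and any consumer can draw out, and output is pushed even when no inquiry is pending; ForecastBus and its method names are hypothetical:

```python
class ForecastBus:
    """Toy hub where any input source can publish signals and any consumer can
    draw outputs back out, without every input needing a direct line to every output."""
    def __init__(self):
        self.signals = []
        self.subscribers = []

    def publish(self, source, signal):
        # Record the signal and push it to every subscriber, even with no inquiry pending.
        self.signals.append({"source": source, "signal": signal})
        for callback in self.subscribers:
            callback(source, signal)

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def query(self, keyword):
        # A policymaker-style pull query against everything collected so far.
        return [s for s in self.signals if keyword.lower() in s["signal"].lower()]

bus = ForecastBus()
bus.subscribe(lambda src, sig: print(f"alert from {src}: {sig}"))
bus.publish("aid worker", "Cheap launch providers advertising in-region")
print(bus.query("cheap launch"))
```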

TWOHEY:

The other thing that we didn't really say here is that there's a fundamental philosophy that it's okay to try a bunch of different things and have a bunch of predictions that just fail utterly. Because the whole point is that it's going to be disruptive – most of the time it's going to be wrong. So I think in a lot of other places there was no inherent tolerance for failure, and that was one of the key design criteria that we had: you're going to have these creative people feeding stuff in, and if most of them are wrong most of the time, our system still has to work, because that's actually how most people are when they look at the future.


GENIK:

I guess the last thing I would add here is that there’s – we have sort of one system designed here but we were thinking along the lines of several different systems that would possibly be aggregated, there would be an aggregation step as well and then it wouldn't all be done by the same groups in the same…

GOLDHAMMER:

Questions or comment? Yes, Stewart.

BRAND:

I think the output array here is really dead on and as a participant in the first table over there I would replace our output array or add to it this stuff but keep the hypothesis engine and analysis that we have over there.

SAFFO:

With our front end and your back end, Peter –

[laughter]

 

SCHWARTZ:

I've sort of always thought of us that way, Paul.

[laughter]

 

BLOUNT:

Paul, you get the line of the day.

[laughter]

 

HWANG:

Well, the input from graduate students, would you think that would be based on the job market?

TWOHEY:

The idea we were thinking of is that – so a lot of disruptive technology innovation comes from U.S.-funded research by graduate students at our universities. And so you'd have every student, every semester, as a contingency of their funding, write three paragraphs: one on what they think is the coolest thing happening in their very specific area, their own subfield; one on the coolest thing happening in their field overall, like computer science; and one on the coolest thing happening in engineering as a whole. What's going to change the world? So – there's this whole thing about experts versus crowds, right? Well, how do you make an expert? Today's graduate students are probably going to be tomorrow's experts in these things, so you can get them younger, fresher, less jaded and you get this --
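
A minimal sketch of the per-semester record such a scheme might capture; the StudentForecastEntry fields are assumptions drawn from the three paragraphs Twohey describes:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class StudentForecastEntry:
    """One semester's three-paragraph submission tied to a funding award,
    so tomorrow's experts are captured while they're still fresh."""
    student_id: str
    grant_id: str
    semester: date
    coolest_in_specialty: str     # their own narrow subfield
    coolest_in_discipline: str    # their field overall, e.g. computer science
    coolest_in_engineering: str   # broadest "what will change the world" view

entry = StudentForecastEntry(
    student_id="s-001", grant_id="nsf-12345", semester=date(2010, 1, 15),
    coolest_in_specialty="...", coolest_in_discipline="...", coolest_in_engineering="...",
)
print(entry.grant_id)
```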

SAFFO:

There’s only one qualification to this, speaking as someone who has a Ph.D. student who’s four years late on his paper, getting engineering students to write three paragraphs often takes an entire quarter. [laughter]

TWOHEY:

It’s a contingency of their money. They don't get that –

[Simultaneous comments]

SCHWARTZ:

A requirement of the grant.

GOLDHAMMER:

We were talking about incentive systems, right?

[Simultaneous comments]


TWOHEY:

And that was actually the thing that we thought was really important that is it --

ZYDA:

If you put that homework on Facebook it would be no problem at all.

[Simultaneous comments]

GOLDHAMMER:

Go ahead. Finish your thought, please.

TWOHEY:

The way that you want to design incentives is that everybody locally – like, you get this global optimum, but their local choice makes their life a little bit better. And if there's some little reward – if somebody publishes, says something at the start of their tenure on an NSF grant and it turns out to be right five years later, and they get a little bennie for that, you know, maybe they get to go to Davos or whatever it is – then people are much more likely to take this seriously.

GOLDHAMMER:

Great. So, listen, I want to thank all the teams for your creative work over the past hour and a half, just amazing what the teams –

[applause]

 

GOLDHAMMER:

Here's what the rest of the day looks like. Why don't we take about a ten-minute break now just to get a refresh on coffee, restroom, whatever you need. We'll come back here just a little bit after 3:30. When we come back we're going to come back into our groups, and – let me just explain it now rather than try to gather everyone back in tables – when we come back into our groups we're going to do two things. First, if you learned anything from other groups, you have an opportunity to revise your model and add it in. For example, if there's an output system you think you could really benefit from – front end, back end – you can put that in. The second and more important thing we'd like you to do is to go back through the model and start layering in one more level of detail across three dimensions. People: what people do you need to do the things that you say you're going to do? Technologies: what kinds of technologies do you think you need in order to do what you said you're going to do? And finally, partnerships: what kinds of organizations or companies or institutions do you believe you need to be working with in order to do what you think you need to do? So those three categories: people, technologies, and partnerships.

GRAY:

Have we captured these in their raw state before we go back and start potentially modifying them?

TALMAGE:

Yeah, I really don't want to change what we have already done.

SAFFO:

Why don't you take pictures before we come back --

[Simultaneous comments]

SCHWARTZ:

We do. We can take pictures of this.

[Simultaneous comments]


GOLDHAMMER:

This doesn't have to be very complicated. You can add that layer of information in either with a pen or with a post-it note.

GRAY:

What I was concerned about was the comment – if you’ve learned something from the other groups coming back and adding it to yours. Let’s leave it where it was and we can consolidate that tomorrow.

[Simultaneous comments]

GOLDHAMMER:

Time out, hold on.

SCHWARTZ:

We need a clear set here.

GOLDHAMMER:

Yeah, a clear set of instructions. There’s nothing – I think we're on the same page with this, Peter, but let me just double-check. There’s nothing wrong with innovating as we're sort of going along here.

LOUIE:

I'm okay with that. So if people want to innovate – I'm going to follow Monitor’s approach on this, I'm going to let you guys drive.

GOLDHAMMER:

Thank you.

SCHWARTZ:

Hey, listen, folks.

GOLDHAMMER:

Listen, folks. Just so everyone’s clear.

SCHWARTZ:

Can we have one conversation, please?

GOLDHAMMER:

This is an innovative crowd. When you come back after the break if you believe you want to do some tweaks, go ahead and do some tweaks because that’s how we make the process better. Then the fundamental activity after we come back from the break is layering in one more set of data, which is people, technology and partnerships. You can use either your pens or your post-it notes to do that. We'll do that for about probably 40 minutes, 45 minutes and then we're going to come back to plenary to have a final report out. Are there any questions?

SCHWARTZ:

Paul is going to take pictures of all of these so we'll have a version of this version already.

[Simultaneous comments]

GRAY:

That’s what I wanted to make sure is that I didn't lose it before we started hybrid…

[Simultaneous comments]

GRAY:

I just think there’s an inherent value of having this before we go back and –

GOLDHAMMER:

Fine. Great. No problem.


[BREAK]

3:15 P.M. – 3:30 P.M.


[TEAM ACTIVITY: IDENTIFYING THE HUMAN AND TECHNICAL REQUIREMENTS]

3:30 P.M. – 4:30 P.M. (SEE APPENDIX E)


OPEN QUESTIONS FROM THE COMMITTEE AND NEXT STEPS

4:30 P.M. – 5:15 P.M.

GOLDHAMMER:

Okay, so what we want to do, there are a couple concluding activities we want to do. Lynn is going to be recording this conversation.

CARRUTHERS:

Thank you.

GOLDHAMMER:

And so if you would speak slowly and articulate so that everyone can hear, that'd be fantastic. A couple of concluding points. First – and I don't want a formal report out from the teams – any observations from the different teams about the human, technical or partnership requirements for the system that you put together. And I'm interested both in observations about the decisions that were easy to make – like, was it easy to figure out that this was 6 people and not 25 people – and in the decisions that were really hard to make in looking at these systems: what are the things that will need to be thought about? Yeah, Al?

VELOSA:

So I think one of the things that we just didn't have time to really address was the really hard question of dealing with some of those soft signals, the fuzzy early warning things. That was just something that was, you know, I think it’s the crux of the matter but we just didn't have enough time to deal with that I think. But I think for us one of the things that actually – for me was surprisingly easy towards the end was I think Paul threw out the number of a dozen and we all said yeah, that passes the BS test.

GOLDHAMMER:

Yeah. And ultimately for this table the total number of people running the system was actually quite small. It was something on the order of maybe 20 total across all the different elements of the system. Peter?

SCHWARTZ:

Yeah, I would only add that that’s similar to what we, I think what we would've said. But I would say three things that I observed. One came out of our second order of conversation. In terms of an input source using a lot of people around the world that we already have access to, NGOs, soldiers in the field, contractors, aid workers and so on, finding ways for them to be part of creating these narratives that we’re talking about. Second point is that on the whole, with one important exception, this isn’t going to cost a lot of money. This is not a very big expensive enterprise. The one place where you could get into some significant money is any automated processing of general input data, you know, whether it’s text data from newspapers or print and so on. That’s where you could end up spending ridiculous amounts of money but almost everything else in this is fairly cheap.


GOLDHAMMER:

Okay. Other thoughts, reactions from the tables? Jennie?

HWANG:

I am still struggling with the time horizon. I mean, in this kind of output we don't want to define exactly the time. However, you know, are we looking at infinite time, or looking for a reasonable way to define the timeframe? Because everything is [..?..]. If we give infinite time, that perhaps goes too far – very extreme, you know, coming out more toward very wild imaginations. But if we make it too short term, then we really limit ourselves. So how are we trying to balance that? What is realistic? You know, because it's a different approach when you look at it. If you give infinite time, that's an entirely different world.

LOUIE:

I think I have two parts to the answer. The first is from a customer's point of view – the customer who happens to be our sponsor, the Department of Defense. It doesn't mean that that should be the driver, but it is a point of view. So given the traditional planning horizon of the Department of Defense, what they're looking for is multiple valid forecasts of what they may encounter somewhere between 10 and 20 years out.

HWANG:

Okay.

LOUIE:

Because it kind of takes that long to build things, deploy things. It isn't that they're not interested in the shorter term, but they have lots of different systems in existence that they can use there. You know, the longer term affects the big bets, the multimillion dollar systems that they're buying, and the world in which they think they're going to operate – what's that going to look like? That's one person's point of view. And at least for the three approaches, all three assume that there's value to more than just the Department of Defense, and so the time horizon in some ways is a little bit more murky. What is useful is: can you provide a way of looking at the future that people can begin to track, monitor and bet on, and change those bets as new information gets discovered. And so, you know, it'd be different for an IT company – the time it takes for software to be developed is a different time horizon than maybe something that takes serious infrastructure. I mean, if it's energy, we're looking at 20-, 25-year infrastructure; things aren't going to change, you're not going to unplug the grid tomorrow, you know, even if there is a better solution this week. So it really depends, and in some ways that's why it's left a little bit fuzzy, because if we locked it down to the Department of Defense needs, that would help us on one end but would hurt our ability to get others to play with us and make this thing be a really useful [..?..].

HWANG:

Well, even if it's a fuzzy horizon, we still have some kind of scale when we talk about 10 or 20 years, maybe 30 years. And we're not talking about 200 years, you know.

LOUIE:

No, no. We’re thinking about more like two or three product cycles, whatever that product cycle [..?..].

[Simultaneous comments]

GOLDHAMMER:

We can do Ray and then come over here and then here.

STRONG:

So just in answer to that though, one of the useful things to do is to go out and look for ideas where people are, where you tell people to think 200 years out and then backcast from those. So you're backcasting into that range that's, say, beyond five years but not – maybe, probably – we can't do any kind of predicting about what – when you say 20 years or 30 years, there's no difference. We have no difference between 20 and 30.

LOUIE:

Ten to 20 does.

STRONG:

Yes, but when you’re just outside the [coughing] you can say a little more.

GOLDHAMMER:

Michael?

McCORMICK:

This is a personal observation that I have: for all three of these systems, the management of the system is actually one of the weakest links.

GOLDHAMMER:

Yep.

McCORMICK:

There’s a lot of shall we say bottlenecks where you’re only as good as the weakest link in it and, you know, making sure that you’re keeping high quality people, you know, keeping the feedback loops working, etc. So it takes a lot of active management for all these to actually work well.

GOLDHAMMER:

Harry, then Peter.

BLOUNT:

I have a question for the people who are not on the committee, which is in your experience, now that you’ve had a full day to kind of assess and hear the problem statement and some of the thoughts that go into it, have any of you guys seen within your domain of experience any organizations out there globally or tools or processes or companies out there globally that have a lot of the key elements in place?

GOLDHAMMER:

Yes.

McCORMICK:

Yeah, I mean, almost all the major Fortune 500 companies have these in place. Now the thing I would say is again, one of the biggest issues that comes into play is that information isn’t communicated particularly well and they’re not particularly managed well.

BLOUNT:

They have processes internally to forecast disruptions?

McCORMICK:

Yeah, especially in high tech. I mean, IBM’s a really good example. I actually helped develop a [..?..] business at IBM and one thing that’s really interesting about it is there’s a lot of people and a lot of process that go into it and it’s surprising how poor the output actually is.

GOLDHAMMER:

Yeah, I’d say yeah.

CULPEPPER:

In our business we have a team that specifically evaluates inbound technology in the [..?..] world. And because we're a relatively large player – we're a big fish in a small pond, right – a lot of players come to us. I think I mentioned this, yes, this whole idea of a gravity well. Like, people come to you because you're relatively large in a given segment. If you've got that, then you've got a unique position, because people are going to bring things to you just by virtue of the fact that you can make a difference in their particular application area, and it gives you a front end view of the entire market. I wouldn't say necessarily we do a great job of then taking that and applying it even inside our own business yet, but it does put us in a unique position. And I think this has the potential to be that same kind of structure, but you've got to identify how you generate that gravity well on the front end. There are a lot of other tools out there too that do things like tracking word analysis, just seeing what's going on in the media anywhere you're mentioned. I mean, some of it's as simple as Google. There are other, more sophisticated media tools that really strip down content and give you digests: this is a good article, this is a bad article, you know, what happened here, by market, by segment, by state. You can really tear down a lot of information very quickly. So there are a lot of tools out there that can be tapped, I think.

GENIK:

I'd quickly say that I just completed a report, "Opportunities in Neuroscience for Future Army Applications", that got published this year, and one of our overarching conclusions was that we need this kind of a capability in neuroscience research for military use. We didn't use the term "disruptive" and we didn't say that, but it is the kind of group and the kind of system that you're designing here. Actually, when I was asked to be here last Friday, I thought it was part of finally forming that group, starting it off. But I would say – so my answer is no, in neuroscience there is not currently a group like this.

SCHWARTZ:

I want to go back to the question of management that you raised, Mark, which is in our group we talked about the need probably to have at least more than one parallel structure, one that is very visible and public and then probably one that is either classified or private. We talked about the differences between the two and without getting into all the nuances there, but something that is not as visible where things can take place and conversations and inputs and outputs can take place where you have to be at least discreet, you know, at the level of corporate NDAs if not, you know, top secret.

GOLDHAMMER:

Yep, Gilman?

LOUIE:

You know, we’ve played around with this concept in other agencies, that we actually formed a Disruptive Committee. There was a notion that there would be kind of two separate organizations, that we thought that maybe there would be a need for a nonprofit, multi-nationally sponsored, countries, corporations, individuals, whose job it is, is just to produce really interesting forecasts, regardless of how people are going to use it, which is kind of the honest broker, individual, and that there may be a separate group inside the Department of Defense as well as in corporate entities in other places, who will learn to leverage this, kind of bridge open source collection of forecasts and disruptive technologies and apply it to whatever their needs are. But the danger, if you tried to put both those groups into one, is you immediately re-bias both the questions and the outputs that you have. There was a notion – we have no proof that that’s a good idea, but there was a notion that [..?..].

GOLDHAMMER:

Yes?

WINARSKY:

So this conversation’s really interesting to me because what we’ve just heard -- what’s your name again?

CULPEPPER:

Mark.

WINARSKY:

-- Mark pointed out that Fortune 500 companies all have similar needs. Let's put it that way. And what that means is, if we do our job right, this is going to be extremely interesting to them too. So, you know, the analyst's question is perhaps private to us – you know, when's the next time a micro satellite could be launched by, you know, another nation state for other purposes; that's our question. But the issue of satellite launchers, and when they will be commercial, and who manufactures them – that's going to be interesting to them, venture capitalists, everybody. So what this means is, if we keep it open like this, it will become an important element of everyone's tracking of innovative and disruptive technologies. But what you've got to keep, potentially, is your own questions to yourself.

CULPEPPER:

Yeah. And sort of the corollary there too is that it also becomes a target for deception. Because anything that's interesting to that many people will have implications – financial implications on markets, on governments, and on what militaries may do in different parts of the world. It becomes very interesting.

McCORMICK:

Can I add to that?

CULPEPPER:

Please.

McCORMICK:

Most of the major corporations that I’ve been involved in doing this stuff with have exactly that, they’ve got a marketing arm on the side on okay, what are we going to publish, what are we not going to publish, how are we going to do this. There’s a hell of a lot of positioning that goes on.

WINARSKY:

Right. But given that we’re open, if we can open source this, you know, crowd sourcing this information, then you have a self-correcting mechanism against deception because whoever is on the one hand saying, "This isn’t going to happen for 15 years," while they really think it’ll be three, then the people that think it’ll be in three might actually come back and correct that.

UNKNOWN:

There’s still a lot of game playing.

NOLAN:

So it seems that a lot of our thinking assumes that the creation of disruptive technology is a process that kind of happens independently and then we, the system, are able to monitor it. I’m wondering if there’s any thought as to if this system goes open source, how that affects the process of creating disruptive technologies.

LOUIE:

To kind of further that thought – I mean, one thing we found in the open software environment is that once you start opening up ideas, what happens is it motivates itself. Good ideas have a way of amplifying themselves. And so in some ways creating such a system might actually get people to focus around certain very interesting, potentially highly disruptive technologies, to solve really, really knotty problems or to explore big market opportunities. And, you know, it's sort of a twist on the venturing side of the world when you look at signposts – venture capitalists, you know, "What are Sequoia and Kleiner betting on this week?" Maybe we should pay attention to what those guys are doing. That's another kind of system. Another comment on the deception. Deception is highly valuable. In other words, for somebody to want to deceive you into believing something, knowing that there's a lot of activity around that – even if there are a lot of false signals – it's telling you something: somebody's trying to steer you in a different direction. That's also very, very useful. So in some ways, you know, the harder thing is when you have no signals at all. But, you know, deception can be used very positively in the analysis world.


GOLDHAMMER:

Mark?

McCORMICK:

I just want to add one thing to what you're saying. I think one of the things that's interesting about the open software environment is that the ecosystem around a particular technology develops twice as fast as it ordinarily would, and adoption often has more to do not with the raw technology but with the ecosystem that goes around it.

BLOUNT:

I think very quickly the open source model, you also see where converging ideas come in. So you’re able to track that because you can see, even online you can track people that are kind of pairing up with this idea, this idea, this idea. "Let’s go offline and do this together."

GOLDHAMMER:

Stan, did you have a comment?

VONOG:

Yeah, I thought one of the areas was international participation, probably motivate people to perhaps participate in this thing. And Gilman’s was, sounds like a viable proposal, like [mike noise]works. They have like 27 countries they are building thermonuclear whatever could blow up whole thing into black hole. [Laughter] But it works so maybe it’s the right [mike noise].

[Simultaneous comments]

SAFFO:

Well it doesn't work yet but [mike noise] perfect.

VONOG:

Yeah, no, I mean, the system of scientists from different countries working on like some venturists.

GOLDHAMMER:

One sort of uneven opinion, I think, that I'd like to surface a little bit is – if you look at this thing in total, you know, and we've got three different systems here, so based on your team's perspective – how much of this, and I'm going to oversimplify here, but how much of this set of activities is done by people inside the government or associated with the government in some meaningful way? And how much of it is done by people outside the government, or not affiliated with the government in any meaningful way – they're part of a private enterprise or nonprofit organization that is not connected to the government?

SCHWARTZ:

99 to 1.

GOLDHAMMER:

Good, thank you. 99 to 1, meaning 99 out, one in? Do people agree with that?

SCHWARTZ:

Well, we’ve got some guys inside, we’ve got some staff. We’ve got people working inside, some analysts and --

GOLDHAMMER:

I’d like to know where the agreement or disagreement falls. Mark, I see your head nodding.

McCORMICK:

Yeah, I think the majority of it’s got to be external.

GOLDHAMMER:

It’s out.


McCORMICK:

There’s too much -- and the thing about it, the more variety you have in the more locations, the less likely you’re going to have the bias issues that are built in.

UNKNOWN:

Yep.

TWOHEY:

I think you probably get the people inside the government participating 'cause they see this as a channel to get around what they perceive as bureaucratic roadblocks to getting their ideas published. So you’ll get, you know, you’ll get exactly the dissenters, right, because if they’re being listened to already you don’t need this.

GOLDHAMMER:

Anyone who significantly disagrees with the 99 to 1 rule?

LOUIE:

Well I can look at recent history, right, to know that if it catches on commercially or in an open source world people inside government will take a version of that, morph it and use it for their own internal use. So I look at, you know – Intellipedia is a good example. You had to have Wikipedia prove itself out before government would say, "Oh, we’ve got to go off and use this 'cause we’re behind this." If you’re leading the charge, the problem inside government is you’ve got all of these inhibitors who say, “Don’t do this, don’t mess up the system, don’t change the organization, don’t threaten the status quo” and more importantly, in order to do a lot of these things you have to change your complete security protocols to allow this to happen. So in some ways, if you really want to motivate change inside the government, you always have to do it by example on the outside. It’s not always the case but just recent history it seems to be more the case.

GOLDHAMMER:

Yes, Stan?

VONOG:

I could say my impression about Russia and I don’t know for sure but my impression is that 99 inside the government, one outside. [General laughter] And it’s a kind of simple process. They decided it’s going to be nanotechnology and putting like 15 billion, whatever, so there is, like they probably listen to what’s going on in the world and then we’re going to pick one. That’s going to be the next sort of space or science [..?..].

[Simultaneous comments]

McCORMICK:

There's just one other thing I was thinking about along those lines. Somewhere in the process I think there needs to be a level of anonymity involved, which takes away a lot of the politics – okay, here's somebody with, you know, "I'm a general and you're a lowly private" type thing – whereas, you know, some of the best ideas we've talked about often come from the least obvious person. So it allows the best ideas to surface, not the titles of the people who surface them.

GOLDHAMMER:

That's a good point. Just another striking-a-balance question. So we talked a little bit about deep government, inside or outside. So this is a system for identifying disruptive technologies for the Department of Defense. One could imagine that even if you were identifying these technologies in open source, they may have capabilities – one could imagine an organization that's 99% outside the government imagining capabilities which are pretty interesting, the kind of capabilities that most people inside the security and defense establishment would consider to be classified, or certainly wouldn't want anyone to talk about. Can you guys give me now a sense of how much of the work of this entity, this organization, this system – how much of it is cleared, how much of it is classified – excuse me – how much of it is unclassified?


LOUIE:

And after the first accusation of a leak or second [Chuckles]

NOLAN:

Let me tell you a completely fictional narrative. It’s 99% private sector or not-for-profit, 1% government. A couple ideas show up creatively which happen to be the same as some highly classified U.S. government program. I can easily tell you the story about how a lot of that stuff gets dragged behind the firewall.

GOLDHAMMER:

You don't mean the GBN Climate Report? [Chuckles]

NOLAN:

I don’t mean anything in specific. I’m very cynical about the ability of our national leaders to tolerate the ideas that they consider secret also showing up on the outside.

LOUIE:

There's a different twist to this, and it's always delicate. Anybody who's cleared today and puts anything on this network is going to get shot, right – at least on the external side. On the internal, well, you do whatever you want. And anybody who's stupid enough to put something out there probably should be shot. The flip side of that, for any other country, including our own: knowing that other people are having this same line of thought is extremely useful in preparing against surprise, right? So if you thought you were the only guy who thought about a particular problem, and you thought "Oh, wow, let's classify it and make it secret," and you find out ten other people who were just kind of chatting about this on the side also shared that idea, you'd better know about that. We had the same problem when we were thinking about Google Earth. Here's the CIA making an investment in a startup company that made satellite data available to anybody on the Internet, [..?..], a CIA investment, right? And then we sold it to Google, right? We went through the same issues. "Oh, my God, what would happen if the bad guys get this?" Then we said, "But the bad guys probably already have this."

McCORMICK:

But wait a minute. But the Russian data was already available on Microsoft's TerraServer site.

LOUIE:

Exactly. I mean, so sometimes we worry about things that we shouldn't be worried about, and there are other things we should be worried about, but you should also know that other people are thinking along those same lines. So I would just put the DOD hat on to say, in this kind of open source world it's really important to understand other people, particularly non-nation states. Because, you know, a Chinese analyst is not going to put her secrets up on the system either, right? So if other people, outside the normal nation states that we worry about, are discussing this, I think we want to know about it.

TALMAGE:

So another question would be if you wanted this to be running without the DoD piece in it but just out in the open

SCHWARTZ:

Without the DoD what?

TALMAGE:

If you didn't want the DoD piece to be in this, but wanted it to be an open product, what would be needed? Would it be a foundation, would it be an X Prize group? Who would run a data collection unit like this?

SCHWARTZ:

Well, the truth is I actually think the National Academy is not a bad organization to run it, in the following sense. Part of what we recognize is that one of the things that participants want is a good audience, i.e., people that matter or are interesting or important, to listen to them. And the National Academy has sufficient stature to be such an entity. It could be others, but it has to be something that is in itself an incentive for people who wish to participate.

WINARSKY:

Well and so we, I mean, that’s what our group was talking about, right, such as even something so simple as movie productions, a 5-minute clip, if you’re, you know, if you’ve reached a level that people recognize that really looks like a disruptive opportunity, we’re going to put it out there and Spielberg is going to, you know, base a script on this or something. So I mean, there’s lots of ways to incent people to contribute. You know, that’s very, very different than a scientific contribution.

GOLDHAMMER:

Mark, Phil and then Danny.

McCORMICK:

I actually really like your idea of the X Prize.

DANIEL:

I heard it in one of the groups. That’s why I asked.

McCORMICK:

Actually, I think that’s a phenomenal idea 'cause it gets to the heart of motivation of competition, you know.

GOLDHAMMER:

Yeah.

NOLAN:

Well let me just throw out another, a very different sort of general observation about the three systems and we talked a little bit about it in our group. The efficiency, the throughput, the speed of these systems, one imagines that that’s going to be a very important characteristic and also that might be where the lever of, the resource lever is going to help you determine whether – and I don't know whether we talked in our group whether it’s a dozen people or a thousand people running this organization, but one imagines a play on that lever can increase the speed and the throughput and I imagine within some pretty broad ranges. So that’s a place where the DOD client may be very helpful in thinking through where to target.

GOLDHAMMER:

Danny?

GRAY:

So, and this is a question to Daniel and that is what is the likelihood of contacting other nations' equivalent of the NAS and say we would like to do this initiative and, you know, and invite participation in bringing something together so that it’s not seen as an American science initiative to introduce or to identify disruptive technologies. And then that brings your buy-in where you can say we’ve gotten rid of the American bias, you know, at least to some extent, because we’ve invited participation. And basically you could kind of put this into a – I guess a national cooperation race where nobody, no nation wants to be seen as not contributing to this because this is a group science mind think system.

TWOHEY:

So people start talking about incentive systems, like --

GOLDHAMMER:

A little louder.

TWOHEY:

People are talking about incentive systems like there's only one, and I think that that's flawed thinking, right? Like, why have just one, you know – why not take a diversified portfolio: have a prize, have an organization, have movie credits, have all these things and, you know, try them all. The things that work, keep them; the things that don't work, throw them out. And then you have this committee, right, and part of the thing about being on the committee is you should come up with new, exciting ways to incentivize people.

SCHWARTZ:

like a Nobel --

GOLDHAMMER:

Great. Paul and then Jennie.

SAFFO:

You know, the Office, the Congressional Office of Technology Assessment, was not -- [General laughter]

[Simultaneous comments]

SAFFO:

-- it was simply defunded. They never eliminated it. And the shell is sitting there.

SCHWARTZ:

Does it still formally exist?

SAFFO:

It still formally exists. It just doesn't have any money. And so it sounds kind of like a Frankensteinian portmanteau kind of construct, but, you know, if you stuck that in with the NAS it would give you the kind of political cover that would protect you against members of Congress. So next time Peter doesn't get beat – I'm sorry; I won't talk about that – but, I know it sounds impractical in many ways but --

SCHWARTZ:

Brilliant. Do it.

SCHWARTZ:

If there were an OTA we might not be having this meeting.

UNKNOWN:

Yeah, yeah.

HWANG:

I just wanted to add on a couple topics discussed here. One is about open source and outside/inside the government and the DOD; the other is international involvement. In my experience, the last twenty years or so is kind of a break point. Before that, the DOD really had different principles than it has had for the last twenty years. I think the early 1990s was really the break point; the principle became that they really wanted the outside to test out whatever the technology concept was and really, you know, set the criteria. You know, they were not going to adopt anything until the commercial sector was really able to prove it, you know, that it is really viable. So, you know, that's one thing. [..?..] break point, about twenty years. I would say about the 1990s. Ken, you know, you can make a comment on that. You were directly in that. My involvement is really, you know, kind of like that -- [..?..]. Okay.

PAYNE:

I think -- I’m sorry. Go ahead.

HWANG:

Okay. I just wanted [..?..] get off on this. And on international involvement: almost all National Academies committees deliberately invite international members' participation if it's feasible. Feasible means we can identify the people. So there is always, you know, international involvement – at least on the committees I've been with there's always some international participation. So that part is almost standard practice. So for anything that comes out of the National Academies, I think I can say we always have some international element.


GOLDHAMMER:

Great. Ken, did you want to say something?

PAYNE:

No, just Jennie was asking me – it was in the nineties they kind of went to, you know, "Oh, wow, we want to go to the commercial off-the-shelf as much as possible" and, you know. Now they did it wrong. They get something – they get a system like SAP, which, you know, makes you change your business process, but then they’d hire somebody to build like bridge software so they could still do it the way they did it before. [General laughter]

[Simultaneous comments] [Laughter]

PAYNE:

I was in there. I saw it happen. I mean, very important, made a lot of money off of [..?..] on that. But, you know, I was like, “Why’d you get SAP in the first place? Why’d you get a software [..?..]?” You know, but commercial off-the-shelf was one of those things that – and, you know, good reason. I mean, it’s proven, your timeline is not that long, you know, as long as you use it right it’s not that bad. But as typically happens in government a lot is that they go overboard with it. And so like Gilman says, they wait ‘til it’s proven outside then they say, "Okay, yeah, we can use it," because people don’t want to be culpable for anything. And for a group of people whose jobs is pretty secure, are pretty secure, it’s amazing how risk averse they are. [Chuckle] But –

SCHWARTZ:

That’s how they protect their security.

PAYNE:

Yeah, I guess.

[Simultaneous comments]

McCORMICK:

This one might be kind of controversial and it’s a little bit out of scope in some respects but it just strikes me -- I had a long conversation with a VC this past week and I thought it was a pretty interesting conversation from the perspective of we educate some of the – most of the Ph.D.s around the world in the United States and then our current process and our current thinking is, you know, we make it almost impossible for them to basically get a green card and a visa to stay in the United States. In some respects, if we give somebody a Ph.D., we should be giving them a visa to be able to stay and keep the innovation here and make it easier to monitor at the end of the day.

SCHWARTZ:

That’s what we used to do.

McCORMICK:

Yeah, I know and we don’t now.

TWOHEY:

I actually have this issue right now in a startup I'm doing like today, I mean, like this is a real issue.

UNKNOWN:

[..?..] has it too.

VONOG:

Well, I would say it’s much better than many other countries, the visa system. So in places like Russia it’s just like tourist visas, working for [..?..], [..?..] Russian citizens, and I studied there for eight years so it's a bit of a pain. And if you want to work it’s just like… So it’s not that bad. And in a way if you’re a great Ph.D. you can always find work here if you want to stay. It’s not a problem really.


McCORMICK:

It’s getting much harder.

[Simultaneous comments]

VONOG:

I mean, all my friends who are Ph.D. wanted to stay in the United States.

GOLDHAMMER:

Do any of the committee members have any specific questions or issue that they’d like to raise in the time that’s remaining to us here? Yes, Harry?

BLOUNT:

One of the things that I’m not sure I heard from any of the tables in detail was – yet we ranked it highly I think on everybody’s sheet, was this concept of anomaly processing. And I think we only superficially touched on it and I guess the question is if we’re going to successfully run a platform with very few people, that means you’ve got to have some very effective anomaly processing tools over time to do this. So did anybody hear during the process or have some background in seeing tools that are very, very good at processing the edges?

STRONG:

Well, part of the process that’s there on table two is a process of, number one, identifying measures of interest and then, number two, doing general monitoring of those for statistical anomalies. And that is a – it’s a built-in part and it is pretty much automatable.
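
A minimal sketch of that kind of automatable monitoring – a trailing-window z-score over a measure-of-interest time series, with the window and threshold values chosen arbitrarily for illustration:

```python
import statistics

def anomalies(series, window=12, threshold=3.0):
    """Flag points in a measure-of-interest time series that sit more than
    `threshold` standard deviations from the trailing window's mean."""
    flagged = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1e-9   # avoid divide-by-zero on a flat history
        z = (series[i] - mean) / stdev
        if abs(z) > threshold:
            flagged.append((i, series[i], round(z, 1)))
    return flagged

# e.g. monthly counts of patent filings mentioning a watched phrase
counts = [4, 5, 4, 6, 5, 4, 5, 6, 5, 4, 5, 6, 5, 4, 31]
print(anomalies(counts))
```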

BLOUNT:

And is there something out there already?

McCORMICK:

There’s a startup ten blocks from here called Twine that –

BLOUNT:

Twine?

McCORMICK:

Twine, and they’re working on some really interesting stuff. If you want me to, I’ll introduce you to the CEO.

STRONG:

Yeah, there are a lot of people who are doing this kind of thing.

McCORMICK:

There's a massive amount of research going on at Google, Twine, Bing, I know Yahoo!, and a couple of others like that.

TWOHEY:

There's a guy, Ron Conway, he's a venture capitalist – his whole thing is, you know, real time. If you hit the real-time local trifecta you're going to get a bunch of seed money from him. I mean, not necessarily for sure, but –

[Simultaneous comments]

TWOHEY:

What I’m saying is that there’s a lot of investment in, you know, different kinds of search things. It’s not just -- there’s another search engine that just raised I think a couple million dollars in funding this week. I mean, like people are still actively spending money, private money, trying to make a better search.

McCORMICK:

Bottom line, there isn’t a pre-packaged tool you can go out and buy today but there’s a lot of stuff out there you can put together.

VONOG:

I’ll bet you can tune up Google engine so it finds like lists, you know.


LOUIE:

Let's not forget the power of humans. You know, one of the things about the Internet is that it enables mechanical [..?..] to really be a very effective system. And so, you know, as good as the algorithms are – and there's some really great stuff out there – I looked at table 3 and I said to myself, you know, the brilliance of that is that, unlike our table, table 2, which said "Oh, we'll rely on technologies," they let the people out there use their eyeballs and their minds to raise things up to the top. There's huge value in that. You don't have to do everything by traditional automated processes if you use the network appropriately and use those kinds of systems in a very effective way to serve up useful information.

McCORMICK:

Like humans are still the best engine for figuring out anomalies.

UNKNOWN:

Yeah, and if you’ve got, you know, a million of them, it’s a pretty good engine.

UNKNOWN:

Yes.

GOLDHAMMER:

Harry, did your question get answered or…

BLOUNT:

I think so. I mean, I agree wholeheartedly with Gilman that we haven't come up with a machine or algorithm that can do a better job of pattern recognition than we can. I think that's part of it, and I think how it's displayed – how lots of information is displayed – is part of it too, which we really didn't touch on either.

McCORMICK:

Well, actually, to add to that, I think one of the biggest issues that exists today is not the analytical tools or the people and stuff like that. It's actually the display – you know, the UI to be able to filter through vast amounts of information in an efficient, economical way. That's probably one of the biggest [..?..] right now.

GOLDHAMMER:

Norman, did you have a comment?

WINARSKY:

Not a comment but another question to the group.

GOLDHAMMER:

Great. A little louder.

WINARSKY:

The question is, one of the issues that I see is measures of success. I mean, people have been talking about giving prizes and things like that. On the other hand, we’re looking at ten to twenty-year horizons so how do you -- what does the group think about how you decide if you’re doing a good job?

GOLDHAMMER:

Please, Ray(?)?

STRONG:

I have a comment on that. Rather than talk about making accurate predictions, which will take ten or twenty years to measure and isn't what we're all about anyway, I look at the measure of success as the breadth of preparedness: the number of different things you've considered, the breadth of what that covers, and whether you have plans to act on them if X happens. So if there were somebody who were to generate a question – "Do you have a plan for if a meteor strikes?" or "Do you have a plan for…?" – you know, generate lots of those questions and have those questions come in, and the measure is what percentage of those questions we actually already have covered by the system, and what percentage of the questions will get covered by the system as the system matures. So it's not: are we successfully going to predict a meteor strike. That's not the point.
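
A small sketch of the breadth-of-preparedness measure Strong describes – the fraction of incoming "do you have a plan for X?" questions already covered by the system; the function name and sample data are illustrative only:

```python
def preparedness_coverage(incoming_questions, covered_topics):
    """Fraction of 'do you have a plan for X?' probes already covered by the system,
    used as a breadth measure instead of waiting decades to score accuracy."""
    if not incoming_questions:
        return 0.0
    covered = [q for q in incoming_questions
               if any(topic.lower() in q.lower() for topic in covered_topics)]
    return len(covered) / len(incoming_questions)

questions = ["Do you have a plan for a meteor strike?",
             "Do you have a plan for cheap orbital launch?",
             "Do you have a plan for synthetic pandemics?"]
covered_topics = ["meteor strike", "cheap orbital launch"]
print(f"{preparedness_coverage(questions, covered_topics):.0%}")   # -> 67%
```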

GOLDHAMMER:

Paul, did you have a comment?

SAFFO:

No.

GOLDHAMMER:

Okay, Mark.

McCORMICK:

I’d actually like to add something to that. If you think about it, as soon as you figure something out that you didn't know you didn't know, it changes your thinking, right? And just by having a constant stream of that emerging, you know, it actually makes you more aware. It changes how you make decisions, it changes how you view the world.

WINARSKY:

So I agree with that. So how do you quantify that?

McCORMICK:

Well it’s, you know, to a large degree it’s the efficiency and effectiveness of what we’re talking about, is the ability to be able to identify what you didn't know you didn't know. That’s the fundamental problem I think we’re talking about here. It’s not what you know you don’t know 'cause let’s face it, there’s tons of that stuff that’s out there.

LOUIE:

You can measure it by – 'cause, you know, at the end of the day a forecast has to communicate, it has to drive some [mike noise]

SCHWARTZ:

See, I think that's the measure of success. Do people respond – is the quality sufficient that it motivates a response? And if the answer is that it's too abstract or unclear or too far out, you know, then nobody takes it seriously. It's not a success. It's a success if it motivates an appropriate response.

NOLAN:

Well wait. I’m worried about that because --

GOLDHAMMER:

Yeah.

NOLAN:

-- a very plausible sounding narrative can motivate action when –

SCHWARTZ:

It could be wrong.

GOLDHAMMER:

Yeah.

SCHWARTZ:

It could be wrong. There’s no guarantee that you’re right. That’s why it’s not the accuracy 'cause you don’t know that. The question is does anybody take it serious enough to do anything about it and you may be wrong. That’s a risk you take.

NOLAN:

It feels like there’s, in my mind, maybe quite a step change between what Ray’s talking about, which is identifying something clearly enough that you could think through the response and motivating people to actually take more action against it.

SCHWARTZ:

Yeah, I’m going one step further. It isn’t simply the understanding. It’s actually, it motivates – okay, we’re going to go launch an R&D program in X; we’re going to focus attention on monitoring Y more deeply.

UNKNOWN:

Yes.

SCHWARTZ:

If it produces a response, then it’s a success.

NOLAN:

That sounds like a [mike noise] incentive program.

UNKNOWN:

It incents hype.

SCHWARTZ:

No, I -- we can talk about this offline but I don’t think so.

[Simultaneous comments]

WINARSKY:

My worry about that approach is, you know, if we had predicted two years ago that the banking industry would mostly collapse, persuading people to act on it is not a good measure, because they still might have thought it was very unlikely, that you're crazy. So in some sense we want --

ZYDA:

Had you announced that, you would've had the Secretary – the Fed making sure it collapsed bigger than it did. [Chuckles]

LOUIE:

Yeah, but it isn’t that you’re just taking action. You know, I want to blend these two concepts together.

SCHWARTZ:

I completely disagree with what you said, Norm. Let me just -- I think you’re completely wrong. The Federal Reserve should have had and did have until 2004 a group whose job it was to anticipate financial crises. And in fact Greenspan eliminated the group in 2004 'cause no financial crises are any longer possible. This is not a possible future event. We now understand markets so he eliminated it.

WINARSKY:

In the Greenspan world this would not have been taken seriously.

SCHWARTZ:

Well my point is simply that in fact that entity ought to have had a group who said "What would happen if?" Now they don’t go out and publicly say, “Hey, we’re worried about Lehman Brothers.” That’s a different question. But they’d take appropriate action in anticipation, monitor it closely and if they see it beginning to emerge they act in a timely way. So in fact I think it is quite plausible.

LOUIE:

It is, it’s --

GOLDHAMMER:

Gilman, go ahead.

LOUIE:

Let me just finish that out, which is I want to blend these two concepts together 'cause I think the value is in the blending of these concepts and not one by itself. That is successful forecasts should provide a roadmap of potential future outcomes that is actionable and trackable.

SCHWARTZ:

Right. Bingo.

LOUIE:

If the event never takes place, you should see it in the signals because you’re not hitting those signposts.

SCHWARTZ:

That’s right.

LOUIE:

If you’re hitting signposts or there are new signposts being hit that you didn't even think about, that’s going to tell you something about the quality of the forecast.

SCHWARTZ:

Exactly.

LOUIE:

Okay, so it is not that you actually spend money, and it's not necessarily just the hype cycle. The hype is fine as long as you can map it out so you can say, "You know what, based on what we're tracking, it's hype," or "Based on what we're tracking, oh, my God, the Chinese are on to something," or the Americans are on to something or Google is on to something, because these measures of interest are beginning to click off. And then ultimately to have the plans in place to say, "We have hit enough of these milestones, these signposts, and these units of measure that we'd better start putting together plans," right? So that's what I consider a successful forecast.

SCHWARTZ:

Exactly.
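
[Illustration: the "actionable and trackable" roadmap described above can be read as a simple signpost-tracking loop: record which expected signposts of a forecast have been hit, note signals that were not anticipated, and trigger planning once enough milestones have clicked off. The Python sketch below is a hypothetical rendering of that idea; the class, thresholds, and signpost names are assumptions, not workshop output.]

    # Minimal signpost tracker in the spirit of the "actionable and trackable"
    # roadmap discussed above. All names and thresholds are illustrative.

    from dataclasses import dataclass, field

    @dataclass
    class TrackedForecast:
        name: str
        signposts: set                                 # milestones expected if the forecast is on track
        observed: set = field(default_factory=set)     # expected signposts actually seen
        unexpected: set = field(default_factory=set)   # signals not anticipated in the roadmap

        def record(self, signal: str) -> None:
            """Log an observed signal as an expected signpost or an unanticipated one."""
            (self.observed if signal in self.signposts else self.unexpected).add(signal)

        def progress(self) -> float:
            """Fraction of expected signposts hit; a low value after long tracking suggests hype."""
            return len(self.observed) / len(self.signposts) if self.signposts else 0.0

        def should_plan(self, threshold: float = 0.5) -> bool:
            """Flag that enough milestones have clicked off to start putting plans in place."""
            return self.progress() >= threshold

    # Hypothetical usage
    forecast = TrackedForecast("cheap genome synthesis",
                               {"cost under $1k", "desktop device", "retail kits"})
    forecast.record("cost under $1k")
    forecast.record("hobbyist forum traffic spike")    # not on the roadmap -> unexpected signal
    print(forecast.progress(), forecast.should_plan(), forecast.unexpected)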

GOLDHAMMER:

Comment over here?

TWOHEY:

I just want to know, so what were people -- so it’s twenty years after 1989, right, so there’s this – and in the late Eighties there’s another financial crisis. You know, people were always worried about these scenarios. So do we have any data to backtrack what we did twenty years ago and whether it worked or not? I mean, because it seems like we’re making this entire discussion here --

UNKNOWN:

We repealed the [mike noise].

[Chuckles]

 

TWOHEY:

Wait. What I’m saying is this entire discussion we’ve had here, right, has been divorced of feedback mechanisms for things that might have happened a while ago. So maybe we should list those and like see what worked and what didn't before we just guess.

BLOUNT:

So, spending twenty years in the financial markets, I spent a lot of time looking at this, and you get into the risk and the opportunity. It's useful to look at history as long as you don't try to tightly model history.

LOUIE:

Right. A good one is Saddam Hussein's weapons of mass destruction. If you think about it from an intelligence analyst's point of view, the number one failure prior to that was our inability to track nuclear weapons testing in places like Pakistan and India, because we were susceptible to deception. So here comes this guy who used to have nuclear weapons, who we know is a chronic liar, saying he doesn't have them, all right? So the pattern is, okay, "Last time we screwed up because we didn't catch the liar. We've got a known liar and he's saying he doesn't have it. Therefore it must be a lie. Therefore he must have it." You can construct a model to prove that he potentially could have it based on the same set of facts that would have proven that he didn't have it. So there is a danger, and this is a danger that experts run into all the time: they assume the past is a predictor of the future. It's something you should look at. You shouldn't be ignorant of it, but it's not necessarily a predictor.

TWOHEY:

I’m not saying we’re – like it’s important that [..?..] were successful or failed in the past but just what incentive systems did we use and what outcomes did they produce. Because if we used incentive systems and the outcomes all sucked then we shouldn't go use the same incentive system again.

McCORMICK:

Well, so we're basing it on behavioral, what’s the behavioral --

TWOHEY:

That’s all I’m saying, is just maybe like looking at that might be worthwhile.

GOLDHAMMER:

We are almost out of time here. Gilman, did you have any final questions that you wanted to ask to the group?

LOUIE:

So one of the things that we'll get tested on, you know, when we write all this stuff up and we go through it tomorrow, is that we get evaluated by groups of experts who will come back and say, "Now this is all really interesting, but it doesn't seem possible." So my first question to you, going through all these interesting approaches, is this: is there anything in your mind that says any one of these activities or potential approaches is simply impossible to do, right, just to be able to construct the organization, whether it produces good forecasts or bad forecasts? Is there anything in here that you worry about that is a showstopper, something we're kind of assuming could easily happen, where an expert would come back and say, "You can never produce this forecast because this is an impossibility"?

REED:

I think the time horizon is way too long. I mean –

SCHWARTZ:

I’m sorry. We can’t hear you.

REED:

Oh, sorry. The time horizon, the incubation, the period between the creation of the technology and the time at which the technology actually causes a disruption can be less than ten years. And so you’re not going to -- how can you forecast out ten years?

McCORMICK:

I think it’s also going to depend on the vertical that you’re talking about or the generation you’re talking about.

UNKNOWN:

Yeah, yeah, agreed, agreed.

McCORMICK:

So some, yeah, it’s a six month horizon, some of them it’s a twenty year horizon.

REED:

Right, I agree, I agree. But I think a lot of the big ones that we’ve seen lately, like the Twitter in Iran --

McCORMICK:

Oh, social media.

REED:

Yeah. How could -- I mean, granted, you could have predicted it maybe a year or two ago but I mean, ten years ago there wasn't even Twitter.

McCORMICK:

I think actually one of the big dangers with it is not so much that this stuff is not possible; it's that there are things like that which people outside would say you should have been able to predict, that you couldn't predict, and therefore your system is fundamentally flawed. There's almost a political thing that's got to go around this to say, "Hey, this is not perfect."

UNKNOWN:

Yeah.

GOLDHAMMER:

Stewart, question, comment?

BRAND:

Just wondering how you can – make sure you’ve got some early wins. If this thing doesn't have early wins and the promise of early wins, why the hell fund it?

LOUIE:

Well, the argument that we consistently run up against is, why is this approach better than talking to a group of experts? They're experts. We can trust them; we can look at their Ph.D.s, we can look at their prior success and expertise in the field. Why is a group of crowd-sourced experts, foreign nationals who are prone to deception, going to produce anything useful relative to the existing approach? That is a question we're just going to have to deal with. That's a question that we're going to be facing.

BRAND:

Well think about it. The early win there is you’ve got these incredible people, great experts. They’re in the room, they’re talking to us. Wow. So you’ve already got a kind of success story right there even though the product may be irrelevant. So I’m trying to figure out a way to – how do you make the --

GOLDHAMMER:

How do you redefine success?

BRAND:

-- how do you redefine success in a way so that you can go along for five years without any disruptions to report? I don't know, do you retro-predict stuff that already exists, or things like that? It's an interesting design problem. Well, how do you get a project, a long-term project like this, funded?

SCHWARTZ:

And sustained.

[Simultaneous comments]

McCORMICK:

I want to come back to your business model question. There is one thing I was just thinking about that's not the short term but the long term. Take the premise, which is proven out in history, that there'll be more technology introduced in the next ten years than in the previous one hundred, and that's been true for almost every previous ten-year period. At some point the scale of innovation, the scale of information that comes into this process, gets to be almost unmanageable, you know. Right now it is manageable, but at some point you do need to start looking at automating some of this stuff. I think that's probably the bigger issue: as an organization, you're behind the eight ball trying to keep up with the level of innovation that's taking place.

LOUIE:

I think that there's another inherent problem with disruptive technologies. Until they actually appear and become disruptive, they're fundamentally unbelievable, right? You can never predict that a hundred and – what's the Twitter limit, 142 characters? – can change the world. It's like a stupid idea. It's the dumbest idea out there, and all of us VCs who saw it thought it was a stupid idea.

McCORMICK:

Well eBay’s another one. It’s like, you know, what are you going to do, a yard sale online? All of a sudden boom, it becomes a new channel.

GOLDHAMMER:

One other thing I noticed in the conversation, which may in part answer your question: at least on the technology end of things, there are actually quite a few things that are quite doable, either algorithmically or through some other kind of technology. There are lots of different experts, configured in lots of different ways, who can answer a lot of these questions or evaluate hypotheses or generate narratives. But there's this sort of interstitial part that is very hard to talk about, and it's the coordinating function: how do you actually get all these pieces to interact with each other in ways that produce good results? This is fundamentally the problem with any organization. There's a cultural element to it as well, and it can be done, but it requires some attention to really understand how you weave these things together so that it doesn't look like a quilt where you've just patched together a couple of technology solutions, a bunch of experts, some predictive markets, boom, out come your forecasts.

GOLDHAMMER:

It’s the incentives.

[Simultaneous comments]

LOUIE:

You know, the thing about what you just said that strikes me is that probably the question we would not ask of the experts, but that we should ask ourselves, is that any one of these kinds of new endeavors requires leadership and a visionary to make a group of people think the impossible is possible. Absent that, or that small group of individuals who are going to go off and change the world by building one of these systems or some hybrid version of them, it is probably highly unlikely that a group of well-educated, pretty smart, good engineers and scientists can build a system if they fundamentally aren't driven to [mike noise]. And that's probably the biggest risk in any of this: if you assume that you can follow a Betty Crocker cookbook recipe out of a freshly published National Academies report and build one of these things, it probably will fail.

UNKNOWN:

Yeah.

McCORMICK:

To your point, software, it’s always what, Version 3 that actually works?

UNKNOWN:

Yeah.

McCORMICK:

You’ve got to have the staying power to get to Version 3 and then redo it and redo it and redo it as the market changes.

GOLDHAMMER:

Unless there are any other burning questions from the committee, apart from Ken --

[Simultaneous comments] [Laughter]

PAYNE:

I think it’s the last time the committee’s together, right?

TALMAGE:

Tomorrow.

PAYNE:

At least when your sponsor’s going to be around, right?

UNKNOWN:

Yeah.

PAYNE:

So with you here.

UNKNOWN:

Right.

[Laughter]

PAYNE:

And from all of us, from DDR&E and DIA, I'd like to thank everybody on the committee and folks who participated in the workshop.

[END OF RECORDING]
