
Transcript of the Workshop for Appendix D
Pages 83-156



From page 83...
... McCORMICK: I'm Mike McCormick. I'm with McLiera Partners and we basically help companies use disruptive technologies in the marketplace to gain market share.
From page 84...
... I'm with Walt Disney Parks and Resorts. I'm the director of Business Planning and Development.
From page 85...
... And so I'm a firm believer in a Christensen sort of framework for disruptive innovation and so just very excited to be participating in sort of a forum where we can actually discuss disruptive technology. GOLDHAMMER: Great, thank you.
From page 86...
... The point is, I usually come back and say, "Well, you know, I've only got this little bit of money. I'll give you a little bit of money if you can prove out the concept." And so one of the exercises on any kind of 1.0 activity, and we consider this kind of a 1.0 activity, is what is the least amount of money, the least amount of energy we could expend to even prove out that the idea has any traction.
From page 87...
... There's also another one I didn't put on here: dismissing visionaries as crazy, uneducated, or not experienced enough to understand what the real world is all about. So we had to wrestle with what a disruptive technology is, you know?
From page 88...
... Think like the market. Can you encourage a group of thought leaders from around the world to participate in a system that thinks about disruptive technology, has value well beyond the United States Department of Defense, and is okay to share with everybody?
From page 89...
... Doesn't guarantee at any moment in time that you're going to have a winning hand but over the long term of playing the game out, you beat the house odds by just changing it just a little bit. A good forecast is like counting cards.
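The card-counting analogy above can be made concrete with a small simulation (purely illustrative; nothing in it comes from the workshop). Flat unit bets at typical house odds grind the player down over many rounds, while the same bets with the win probability nudged just past 50% come out ahead:

```python
import random

def simulate_bets(win_prob, rounds, seed=0):
    """Place flat unit bets at a fixed per-round win probability and
    return the cumulative win/loss after all rounds."""
    rng = random.Random(seed)
    bankroll = 0
    for _ in range(rounds):
        bankroll += 1 if rng.random() < win_prob else -1
    return bankroll

# 49.5% models a typical house edge; 50.5% models the card counter's
# slightly shifted odds. Neither guarantees any single hand.
losing = simulate_bets(0.495, 1_000_000)
winning = simulate_bets(0.505, 1_000_000)
```

No single round is guaranteed, but over a million rounds the one-percentage-point shift reliably separates the two bankrolls, which is the sense in which a good forecast "beats the house odds."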
From page 90...
... We're not impact focused and we don't explore the secondary effects, which is if you had all these technologies what will you do with them beyond the obvious use of those technologies? Because the most impactful disruptive technologies typically aren't new technologies but aggregations in a system of existing technologies used in a new and profoundly different way that nobody ever anticipated before, right?
From page 91...
... So forecasting disruptive technologies.
From page 92...
... It's not just a machine, you know, of computer science mapping. It's that you are going to go through a persistent forecasting system where there are humans, there are computers, there's information sources.
From page 93...
... And I come from Ukraine and I was educated in Russia, mostly at the Institute of Physics and Technology. And I won a couple of software design competitions worldwide and presented to many people like Bill Gates, etc.
From page 94...
... This is a disruptive technology. [General laughter]
From page 95...
... What I want to do is just use that as a transition now to dive into some conversation about some of the key design challenges that we face in thinking about a persistent system to identify disruptive technologies. Now these design challenges that I want to talk about are ones that we have sort of observed in our own experience helping organizations and governments in particular design systems like this.
From page 96...
... UNKNOWN: So just before I answer your question directly, let me just capture a thought that I had in response to a specific question that probably comes a little bit later. But the question was about experts versus crowd.
From page 97...
... That is, the design challenges, I think, have one very fundamental thing, and that is how to formulate the questions.
From page 98...
... Yes? TWOHEY: I think with a disruptive technology, shouldn't it be an output rather than an input?
From page 99...
... If we actually looked in all the places where the keys could be, we really would have a data overload problem. So getting that right is I think one of the hard things.
From page 100...
... I think there are two paradigms at work. One of the things that is a little bit disconcerting in listening to some of the things we're talking about here: first you start talking about disruptive technologies, and all of a sudden it implies that we're trying to solve a problem.
From page 101...
... GOLDHAMMER: Harry. BLOUNT: Yeah, I think one of the key things is this outside perspective – and if we look at the financial markets, it's a perfect example.
From page 102...
... So that's probably one of the largest examples looking for outside perspective. Now the question I think boils down to the experts versus the crowd, and on that front I guess one of the things I'd like to offer up is I think given the search technologies, the social networks out there, it seems like you can ask really basic questions among a lot of different perspectives and a lot of different groups and get feedback relatively quickly.
From page 103...
... Just finish here with Phil and then we're going to push on. NOLAN: Just want to call out something I've heard that seems implicit in a lot of our discussions about getting outside perspectives.
From page 104...
... GOLDHAMMER: Cadillac Desert. Great.
From page 105...
... It's a human condition issue. I'm trying to tell a story about the human condition.
From page 106...
... I need to make up this grand machine that can go through all the fingerprints in the world and do facial recognition in ten seconds or less because that's what I've got to do on the show. And what's interesting about that is as they are trying to solve their story using technology to take these leaps to tell the human condition, it informs you on what you have to do.
From page 107...
... I mean, people buy wants, not needs. But what worries me about this discussion a little bit is if we're talking about pitching this to, shall we say, higher ups, it's not the wants of the higher ups that matter, it's the wants of the individuals that are creating that disruption.
From page 108...
... But what made it actually work was Spielberg was the director and he put it in a human context and, you know, everyone remembers the moment in the film, you know, their own particular moment – the electronic newspaper or Tom Cruise chasing his eyeball down the ramp for the optical recognition system and so on. The point being that it is the story in context that actually makes those technologies come to life and has actually probably accelerated the rate of development of the technologies.
From page 109...
... GOLDHAMMER: So we just covered – in the last hour we covered a number of different design challenges. Any final reflections now looking at all the challenges together, things that as either a member of the committee or as an invited guest – and I'm speaking to the committee, any sort of higher level thoughts now that we think we should keep in mind as we push forward in the day?
From page 110...
... Stewart? BRAND: I'm thinking about something I guess I think of as a good tip audit trail or maybe a bad tip audit trail, you know, who are people who have changed my mind, that I'm really glad they did that.
From page 111...
... We're going to reconfigure the room a little bit, do a quick group exercise, have some conversations at our tables and then have a report out in about an hour. So first design criteria.
From page 112...
... Now these are the design criteria that were taken out of the first report that the committee generated. They are openness, persistence, bias mitigation, robust and dynamic structure, anomaly detection, ease of use, strong visualization tools and GUI, controlled vocabulary, incentives to participate, and reliable data construction and maintenance.
From page 113...
... How do we think about that as a design criterion? GOLDHAMMER: Great question.
From page 114...
... GOLDHAMMER: That's right. And part of what we're asking you to do is to think about what elements of this system require technologies and technologies that may require strong visualization tools and GUIs; what parts of the system require human beings.
From page 115...
... You just need to take a chair and your dots. GOLDHAMMER: Feel free to start voting and allocating your five votes.
From page 116...
... GOLDHAMMER: Yes. Could you give us a sense of what your top three vote getters were?
From page 117...
... And I would say that our conversation clustered around our top vote getters in terms of the design criteria, and it was fairly wide-ranging within those criteria, talking about both what those criteria meant and some of their implications for actually designing a Version 1 of the system. A couple of key ideas came up; one was that it's not about technology, it's about use.
From page 118...
... I think now that we've had, at these different tables, conversations about these criteria, I want to try to get – just insert a little bit of kind of practicality right before lunch to sort of orient our brains to what we're going to be doing after lunch. So you'll remember from our agenda this morning, after lunch we're actually going to be in the teams that you're in now clustered around these very long tables with a lot of material that you'll be able to use to actually create a process for this V.1 product.
From page 119...
... We already have 1.0 of this, right? It's called the intelligence community on Sand Hill Road, right?
From page 120...
... BLOUNT: I want to pop back to Darrell's point a second because I think if we are looking at this as persistent and trying to get from the 1.0 version of what we already have out in the real world, one of the biggest challenges in the intelligence community, venture capital community, is the idea that you have a systematic feedback loop that can be captured and monitored. And I'm not sure that if we don't walk out of here with at least some conceptualization of what a feedback loop is and how to measure it on a persistent basis to really improve the system on an ongoing basis, I think that will be a huge missed opportunity.
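The systematic feedback loop described here can be sketched in a few lines of Python. This is a minimal illustration, assuming one standard scoring approach (the Brier score); the transcript itself does not specify a scoring rule, and the forecast name below is hypothetical:

```python
class ForecastLedger:
    """Minimal feedback loop: record probabilistic forecasts, resolve
    them against observed outcomes, and measure accuracy over time."""

    def __init__(self):
        self.records = []

    def record(self, forecast_id, probability):
        self.records.append({"id": forecast_id, "p": probability, "outcome": None})

    def resolve(self, forecast_id, occurred):
        for r in self.records:
            if r["id"] == forecast_id:
                r["outcome"] = 1.0 if occurred else 0.0

    def brier_score(self):
        """Mean squared error between forecast and outcome: 0 is perfect;
        0.25 is what always answering 50% would score."""
        done = [r for r in self.records if r["outcome"] is not None]
        if not done:
            return None
        return sum((r["p"] - r["outcome"]) ** 2 for r in done) / len(done)

# A hypothetical forecast, recorded, then later resolved as a miss.
ledger = ForecastLedger()
ledger.record("example-disruption-by-2020", 0.8)
ledger.resolve("example-disruption-by-2020", False)
```

Re-scoring the same ledger as new outcomes arrive is the "persistent basis" part: the system's accuracy is captured and monitored continuously rather than audited once.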
From page 121...
... But I think fundamentally what we're trying to do using spatial relationships and causal arrows is to develop an outline of a V.1 system that may borrow from things that already exist in the world, that may combine them in new or novel ways, I think as Stan was describing, borrowing, riffing off of Walt Disney. And I would suggest that after lunch -- So we'll break ‘til one o'clock.
From page 122...
... So there's some kind of analytic unit. That analytic unit might be driving survey research and you might be doing twelve global surveys with incentivized participation from experts.
From page 123...
... In your team you're going to be developing a process diagram that illustrates the essential steps in your scanning system and meets your design criteria. And remember, each team came up with a set of design criteria.
From page 124...
... At the end of the hour and a half we're going to ask each team to give a short report out, just walking us through what you've created in that 90 minutes, and it's just a little bit of a kind of pressure test report out where other parts of the group, committee or guest, can say, "Well explain to me the connection between this and this. Why did prediction markets then lead to having people who use crystal balls, and then why did the crystal balls lead to, you know, briefing Admiral Mullen? I don't understand that connection." So we just, just a little bit of a pressure test.
From page 125...
... I imagine that every team is going to do something a little bit different. What I would encourage everyone to do is to be as engaged as possible in what the team is focused on.
From page 126...
... Then you have data collections, this is passive data... [Transcripts were not edited.]
From page 127...
... This is sort of raw, goes back up to the hypotheses generators, this goes down to the storytellers, and take all this stuff that we have heard of a couple of times and remove the decimal points. This goes out to policymakers, things like this.
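The flow being described (passive data collection feeding hypothesis generators, then storytellers, then policymakers) can be sketched as a simple staged pipeline. The stage names come from the transcript; the function bodies below are placeholders, not the team's actual design:

```python
def collect_data(sources):
    """Passive collection: gather raw signals from each listed source."""
    return [f"signal from {s}" for s in sources]

def generate_hypotheses(signals):
    """Hypothesis generators turn raw signals into candidate forecasts."""
    return [f"hypothesis: {s} may indicate a disruption" for s in signals]

def tell_stories(hypotheses):
    """Storytellers wrap each hypothesis in a narrative for decision-makers."""
    return [f"story for policymakers: {h}" for h in hypotheses]

# Raw data goes up to the hypothesis generators, down to the
# storytellers, and out to policymakers.
briefings = tell_stories(generate_hypotheses(collect_data(["news", "patents"])))
```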
From page 128...
... How many people is that, I don't know. How many hypotheses generators?
From page 129...
... We can look at games, tweets, you know, what's going on in the news, current articles, and we can extract predictions a little bit more that we'll get to in just a second.
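Extracting predictions from games, tweets, and news articles could begin with something as crude as a cue-phrase filter. The pattern below is a hypothetical sketch, not a method discussed at the workshop:

```python
import re

# Cue phrases that often mark forward-looking claims (illustrative list).
PREDICTION_CUE = re.compile(r"\b(will|by (19|20)\d\d|expected to)\b", re.IGNORECASE)

def extract_predictions(sentences):
    """Keep only the sentences that look like forward-looking claims."""
    return [s for s in sentences if PREDICTION_CUE.search(s)]

sample = [
    "Quantum computers will break RSA by 2035.",
    "The conference was held in Washington.",
    "Battery costs are expected to halve.",
]
found = extract_predictions(sample)  # keeps the first and third sentences
```

A real system would follow such a filter with de-duplication and human review, since a cue phrase alone says nothing about a claim's quality.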
From page 130...
... SCHWARTZ: Well, you know, you can actually, and it has been done – I mean, some of the movie studios have done it – you can actually create a narrative computer engine that basically combines -- You know, every movie is either a comedy, an adventure or a love story, a...
From page 131...
... It could be in the context of a game, it could be in the context of a movie, a short, a piece of fiction, but to be able to tell the story in the context of somebody's everyday life from their point of view based on where they live in the world and how technology's going to affect them and then feed it back then to create new ideas. TWOHEY: So it's just disruptive technology then?
From page 132...
... LOUIE: Yeah, there's a team. There's a group, there's a team running the system so they're the - SCHWARTZ: And they're the ones getting smarter.
From page 133...
... Because the whole point is that it's going to be disruptive, most of the time it's going to be wrong. So I think in a lot of other places there was no inherent tolerance for failure, and that was one of the key design criteria we had: you're going to have these creative people feeding stuff in, and if most of them were wrong most of the time, our system still had to work, because that's actually how most people are when they look at the future.
From page 134...
... HWANG: Well, the input from graduate students, would you think that would be based on the job market? TWOHEY: The idea we were thinking of is that – So a lot of disruptive technology innovation comes from U.S.
From page 135...
... TWOHEY: The way that you want to design incentives is that everybody locally – like you get this global optimum but their local choice like makes their life a little bit better. And if there's some little reward or some little – if somebody publishes – says something at the start of their tenure at the NSF grant and it turns out to be right five years later and they get a little bennie for that, you know, maybe they get to go to Davos or whatever it is then people are much more likely to take this seriously.
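The incentive mechanism described, rewarding contributors whose early on-the-record predictions pan out years later, reduces to a small lookup once both predictions and outcomes are recorded. A minimal sketch, with every name and the reward purely illustrative:

```python
def award_bennies(predictions, outcomes, reward="trip to Davos"):
    """Grant the small reward to each contributor whose staked claim
    later resolved true.

    predictions: contributor name -> the claim staked at grant time
    outcomes:    claim -> True/False once enough time has passed
    """
    return {who: reward for who, claim in predictions.items()
            if outcomes.get(claim) is True}

staked = {"alice": "claim-1", "bob": "claim-2"}
resolved = {"claim-1": True, "claim-2": False}
winners = award_bennies(staked, resolved)  # only alice gets the bennie
```

The design point is the local/global alignment Twohey describes: each contributor's small personal payoff also improves the system's record of who forecasts well.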
From page 136...
... CD D-54 Persistent Forecasting of Disruptive Technologies – Report 2
GOLDHAMMER: This doesn't have to be very complicated. You can add that layer of information in either with a pen or with a post-it note.
From page 137...
... GOLDHAMMER: Yeah. And ultimately for this table the total number of people running the system was actually quite small.
From page 138...
... I think, at least of the three approaches, all three assume that there's value to more than just the Department of Defense, and so the time horizon in some ways is a little bit more murky. What is useful is: can you provide a way of looking at the future that people can begin to track, monitor and bet on, and change those bets as new information gets discovered?
From page 139...
... GOLDHAMMER: Yep. McCORMICK: There's a lot of shall we say bottlenecks where you're only as good as the weakest link in it and, you know, making sure that you're keeping high quality people, you know, keeping the feedback loops working, etc.
From page 140...
... There was a notion that there would be kind of two separate organizations, that we thought that maybe there would be a need for a nonprofit, multi-nationally sponsored, countries, corporations, individuals, whose job it is, is just to produce really interesting forecasts, regardless of how people are going to use it, which is kind of the honest broker, individual, and that there may be a separate group inside the Department of Defense as well as in corporate entities in other places, who will learn to leverage this, kind of bridge open source collection of forecasts and disruptive technologies and apply it to whatever their needs are. But the danger, if you tried to put both those groups into one, is you immediately re-bias both the questions and the outputs that you have.
From page 141...
... And so in some ways creating such a system might actually get people to focus around certain very interesting, potentially highly disruptive technologies to solve either really, really knotty problems or to really explore big market opportunities. And, you know, it's sort of a twist on kind of the venturing side of the world when you look at signpost venture capitalists and, you know, "What are Sequoia and Kleiner betting on this week?
From page 142...
... VONOG: Yeah, no, I mean, the system of scientists from different countries working on like some venturists. GOLDHAMMER: One, sort of one uneven I think opinion I'd like to surface a little bit is how, if you look at this thing in total, you know, and we've got three different systems here so sort of based on your team's perspective, how much of this – I'm going to oversimplify here but how much of this set of activities is done by people inside the government or associated with the government in some meaningful way?
From page 143...
... GOLDHAMMER: That's a good point. Just another sort of striking a balance question, so we talked a little bit about deep government, inside or outside.
From page 144...
... I mean, so sometimes we worry about things that we shouldn't be worried about and there are other things we should be worried about but it's also you should know that other people are thinking about those same lines. So I would just put the DOD hat on to say in this kind of open source world is it's really important to understand other people, particularly non-nation states.
From page 145...
... GOLDHAMMER: Yeah. NOLAN: Well let me just throw out another, a very different sort of general observation about the three systems and we talked a little bit about it in our group.
From page 146...
... So it always, you know, has international involvement; at least on the committees I've been with, there's always some international participation. So that part is almost like standard practice.
From page 147...
... McCORMICK: This one might be kind of controversial and it's a little bit out of scope in some respects but it just strikes me -- I had a long conversation with a VC this past week and I thought it was a pretty interesting conversation from the perspective of we educate some of the – most of the Ph.D.s around the world in the United States and then our current process and our current thinking is, you know, we make it almost impossible for them to basically get a green card and a visa to stay in the United States. In some respects, if we give somebody a Ph.D., we should be giving them a visa to be able to stay and keep the innovation here and make it easier to monitor at the end of the day.
From page 148...
... TWOHEY: There's a guy, Ron Conway, he's a venture capitalist, and his whole thing is, you know, real time. If you hit the real-time local trifecta you're going to get a bunch of seed money from him.
From page 149...
... McCORMICK: Well actually to add to that, I think one of the biggest issues that I think exists today is not the analytical tools or the people and stuff like that. It's actually the display.
From page 150...
... If you think about it, as soon as you figure something out that you didn't know you didn't know, it changes your thinking, right? And just by having a constant stream of that emerging, you know, it actually makes you more aware.
From page 151...
... LOUIE: Let me just finish that out, which is I want to blend these two concepts together 'cause I think the value is in the blending of these concepts and not one by itself. That is successful forecasts should provide a roadmap of potential future outcomes that is actionable and trackable.
From page 152...
... So maybe we should list those and see what worked and what didn't before we just guess. BLOUNT: So, spending twenty years in the financial markets, I spent a lot of time looking at this, and you get into the risk and the opportunity.
From page 153...
... REED: Oh, sorry. The time horizon, the incubation, the period between the creation of the technology and the time at which the technology actually causes a disruption can be less than ten years.
From page 154...
... LOUIE: I think that there's another inherent problem about disruptive technologies. Until they actually appear and become disruptive they're fundamentally unbelievable, right?
From page 155...
... UNKNOWN: Yeah. McCORMICK: To your point, software, it's always what, Version 3 that actually works?
From page 156...
... PAYNE: At least when your sponsor's going to be around, right?

