5 Session 4: Learning from Multi-Source Data
Pages 20-22

From page 20...
... With such a model, researchers can conduct social network analysis, create entity relationship diagrams, and produce indications and warnings, without worrying about modality-specific problems. Onyshkevych stated that the goal of the DEFT program, which is nearing conclusion, was to create an automated capability to transform large volumes of unstructured text into structured data for use by human analysts and downstream analytics.
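
As a rough sketch of the unstructured-to-structured pipeline described here, the snippet below extracts named entities from raw text and links entities that co-occur in the same sentence into a graph suitable for social network analysis. It is only an illustration of the general idea, not the DEFT program's actual tooling; the choice of spaCy and networkx, the PERSON/ORG filter, and every function name are assumptions made for this example.

```python
# Illustrative only: a minimal unstructured-text -> structured-graph pipeline,
# NOT the DEFT program's system. spaCy and networkx are stand-in libraries.
from itertools import combinations

import networkx as nx
import spacy

# Small English model; assumed to be installed (python -m spacy download en_core_web_sm).
nlp = spacy.load("en_core_web_sm")


def entities_per_sentence(text):
    """Yield the set of PERSON/ORG entity names found in each sentence."""
    doc = nlp(text)
    for sent in doc.sents:
        yield {ent.text for ent in sent.ents if ent.label_ in {"PERSON", "ORG"}}


def build_network(texts):
    """Structured output: a graph whose edge weights count entity co-mentions."""
    graph = nx.Graph()
    for text in texts:
        for ents in entities_per_sentence(text):
            for a, b in combinations(sorted(ents), 2):
                weight = graph.get_edge_data(a, b, {}).get("weight", 0)
                graph.add_edge(a, b, weight=weight + 1)
    return graph


if __name__ == "__main__":
    docs = ["Acme Corp hired Jane Doe. Jane Doe later met Bob Smith at Acme Corp."]
    g = build_network(docs)
    # Downstream analytics on the structured data, e.g. centrality scores.
    print(nx.degree_centrality(g))
```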
From page 21...
... The LORELEI Program, which is currently under way, uses the same model of converting unstructured input into structured, actionable output. However, this program specifically considers how to respond quickly to an emergent situation (e.g., humanitarian assistance and disaster relief missions) ...
From page 22...
... Peter Pirolli, Institute for Human and Machine Cognition, asked whether the hypotheses are representative of what is in the document collection or whether they are relevant to decisions that have utility or risk associated with them. Onyshkevych noted that those explanations are not mutually exclusive; AIDA will produce representations and hypotheses that explain the world from a particular data set.

