
Types of Massive Data Sets

Although "massive data sets" is the theme of this workshop, it would be a mistake to think, necessarily, that we are all talking about the same thing. A typology of the origin of massive data sets is relevant to the understanding of their analyses. Amongst others, consider: observational records and surveys (health care, census, environment); process studies (manufacturing control); science experiments (particle physics). Another "factor" to consider might be the questions asked of the data and whether the questions posed were explicitly part of the reason the data were acquired.

Statistical Data Analysis

As a preamble to the following remarks, we would like to state our belief that good data analysis, even in its most exploratory mode, is based on some more or less vague statistical model. It is curious, but we have observed that as data sets go from "small" to "medium," the statistical analyses and models used tend to become more complicated, whereas in going from "medium" to "large," the level of complication may even decrease! That would seem to suggest that as a data set becomes "massive," the statistical methodology might once again be very simple (e.g., look for a central tendency, a measure of variability, and measures of pairwise association between a number of variables). There are two reasons for this. First, it is often the simpler tools (and the models that imply them) that continue to work. Second, there is less temptation with large and massive data sets to "chase noise." Think of a study of forest health with, say, 2 × 10⁶ observations in Vermont: a statistician could keep him- or herself and a few graduate students busy for quite some time looking for complex structure in the data. Instead, suppose the study has a national perspective and that the 2 × 10⁶ observations are part of a much larger database of, say, 5 × 10⁸ observations. One now wishes to make statements about forest health at both the national and regional levels, and for all regions, but the resources to carry out the bigger study are not 250 times greater. The data analyst no longer has the luxury of looking for various nuances in the data and so declares them to be noise. Thus, what could be signal in a study involving only Vermont becomes noise in a study involving Vermont and all other forested regions in the country.
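
As a concrete illustration of those simple tools, the sketch below (not part of the original text) accumulates a mean, per-variable variances, and a matrix of pairwise correlations one chunk at a time, so that a data set far too large to hold in memory can still be summarized in a single pass. The chunk iterator and the simulated data are hypothetical stand-ins for whatever the real data source would be.

```python
import numpy as np

def streaming_summaries(chunks):
    """Accumulate count, sums, and cross-products over chunks of rows."""
    n = 0
    s = None      # per-variable sums
    sxx = None    # matrix of cross-products
    for x in chunks:                              # x: (rows, p) block of observations
        x = np.asarray(x, dtype=float)
        if s is None:
            s = np.zeros(x.shape[1])
            sxx = np.zeros((x.shape[1], x.shape[1]))
        n += x.shape[0]
        s += x.sum(axis=0)
        sxx += x.T @ x
    mean = s / n                                  # central tendency
    cov = (sxx - n * np.outer(mean, mean)) / (n - 1)
    sd = np.sqrt(np.diag(cov))
    corr = cov / np.outer(sd, sd)                 # pairwise association
    return mean, np.diag(cov), corr

# Simulated chunks standing in for a massive data source.
rng = np.random.default_rng(0)
mean, var, corr = streaming_summaries(rng.normal(size=(1000, 3)) for _ in range(50))
```

Accumulating raw sums and cross-products keeps the pass cheap; a numerically safer variant (e.g., Welford-style updates) would be preferable when variables have very large means relative to their spread.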

The massiveness of the data can be overwhelming and may reduce the non-statistician to asking over-simplified questions. But the statistician will almost certainly think of stratifying (subsetting) the data, allowing for a between-strata component of variance. Within strata, the analysis may proceed along classical lines, looking for replication in the errors. Or, using spatio-temporal analysis, one may invoke the principle that data or objects that are nearby (in space and time) tend to be more alike than those that are far apart, implying redundancies in the data.
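
A minimal sketch of that stratified view, under the assumption of a one-way random-effects model (the classical method-of-moments estimator, not a method prescribed by the text), separates the within-strata and between-strata components of variance; the strata labels and simulated data are illustrative only.

```python
import numpy as np

def variance_components(values, strata):
    """Within- and between-strata variance estimates for an unbalanced
    one-way random-effects layout (method of moments)."""
    values = np.asarray(values, dtype=float)
    labels, idx = np.unique(np.asarray(strata), return_inverse=True)
    k, N = labels.size, values.size
    n_i = np.bincount(idx)                           # stratum sizes
    mean_i = np.bincount(idx, weights=values) / n_i  # stratum means
    grand_mean = values.mean()
    msw = np.sum((values - mean_i[idx]) ** 2) / (N - k)       # within-strata mean square
    msb = np.sum(n_i * (mean_i - grand_mean) ** 2) / (k - 1)  # between-strata mean square
    n0 = (N - np.sum(n_i ** 2) / N) / (k - 1)        # effective stratum size
    return msw, max((msb - msw) / n0, 0.0)           # (within, between) components

# Illustrative use: 20 strata of 500 observations each, with stratum effects.
rng = np.random.default_rng(0)
strata = np.repeat(np.arange(20), 500)
y = rng.normal(size=strata.size) + rng.normal(scale=0.5, size=20)[strata]
within, between = variance_components(y, strata)
```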

Another important consideration is dimension reduction when the number of variables is large. Dimension reduction is more than linear projection to lower dimensions, as with principal components; non-linear techniques are needed that can extract the lower-dimensional structure present in a massive data set. These new dimension-reduction techniques in turn imply new methods of clustering. Where space and/or time coordinates are available, they should be included with the original observations.
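
The sketch below shows only the linear baseline that the passage contrasts against: principal components computed by SVD, with spatial coordinates appended to the measured variables as suggested above. The variable names and simulated data are illustrative assumptions; a non-linear method (e.g., a kernel or manifold embedding) would replace the projection step.

```python
import numpy as np

def pca_project(x, n_components=2):
    """Project rows of x onto the leading principal components."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean(axis=0)) / x.std(axis=0)      # standardize each column
    _, _, vt = np.linalg.svd(z, full_matrices=False)
    return z @ vt[:n_components].T                # scores in the reduced space

# Illustrative data: spatial coordinates appended to the observed variables.
rng = np.random.default_rng(1)
coords = rng.uniform(size=(500, 2))               # spatial coordinates (x, y)
measurements = rng.normal(size=(500, 6))          # observed variables
scores = pca_project(np.hstack([coords, measurements]), n_components=3)
```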


