7 The Data Flood: Analysis of Massive and Complex Genomic Data Sets
Pages 26-29

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text from each page of the chapter.


From page 26...
... As an example, Dan Roden, of Vanderbilt University, reported on research whose original goal was to use genetics to predict individual responses to drugs. The work, however, quickly evolved into the challenge of navigating a massive data set.
From page 27...
... Even if the statistical problem can be solved, basic economics makes this straightforward experiment infeasible because of the tremendous cost of recording 100,000 genotypes in each of a thousand people. (If the cost of determining a genotype were only 50 cents, the entire experiment would still cost $50 million.)
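As a check on the parenthetical, the dollar figure follows from a single multiplication (100,000 genotypes per person, 1,000 people, 50 cents per genotype):

$$
100{,}000 \times 1{,}000 \times \$0.50 = \$50{,}000{,}000
$$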
From page 28...
... can be extremely useful, as in the following experiment described by Speed, which identified genes with altered expression between two physiological zones (zone 1 and zone 4) of the olfactory epithelium in mice.
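The excerpt does not reproduce the analysis itself, so the following is only a minimal sketch of one common way to screen for differentially expressed genes between two groups of arrays: a per-gene two-sample test on log-scale expression values. The simulated matrices, the choice of Welch's t-test, and every name below are assumptions for illustration, not the method Speed described.

```python
# Minimal sketch: per-gene two-sample test between two groups of samples
# (e.g., zone 1 vs. zone 4 of the olfactory epithelium). All data here are
# simulated; the shapes, the log-scale assumption, and the use of Welch's
# t-test are illustrative choices, not the analysis from the report.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_genes, n_reps = 5000, 4
# Rows = genes, columns = replicate arrays, values = log-expression.
zone1 = rng.normal(loc=0.0, scale=1.0, size=(n_genes, n_reps))
zone4 = rng.normal(loc=0.0, scale=1.0, size=(n_genes, n_reps))
zone4[:50] += 2.0  # plant a few genuinely differential genes

# One Welch's t-test per gene, giving one p-value per gene.
t_stat, p_values = stats.ttest_ind(zone1, zone4, axis=1, equal_var=False)

print("Smallest p-values:", np.sort(p_values)[:5])
```

Testing thousands of genes at once is exactly the multiple comparisons problem raised on the next page, which is why such p-values are usually passed through a false-discovery-rate procedure rather than simply thresholded at 0.05.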
From page 29...
... (2001), which uses the false discovery rate method, an approach to the multiple comparisons problem that controls the expected proportion of false positives rather than attempting to minimize the absolute chance of false positives, to set cutoff points for these errors.)
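To make the cutoff-setting step concrete, here is a minimal sketch of the textbook Benjamini-Hochberg step-up rule for controlling the false discovery rate at a chosen level. The 2001 paper cited in the excerpt develops its own, more refined approach, so this function (its name, signature, and default level are assumptions) only illustrates the general idea of controlling the expected proportion of false positives rather than the chance of any false positive.

```python
# Minimal sketch of the Benjamini-Hochberg step-up procedure for controlling
# the false discovery rate (FDR): the expected proportion of false positives
# among the hypotheses declared significant. This is the textbook rule, not
# the specific method of the 2001 paper cited in the excerpt.
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean mask marking hypotheses rejected at FDR level alpha."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                          # ranks p-values smallest first
    thresholds = alpha * np.arange(1, m + 1) / m   # step-up thresholds i * alpha / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()             # largest rank i with p_(i) <= i*alpha/m
        reject[order[:k + 1]] = True               # reject everything up to that rank
    return reject

# Example: apply to the per-gene p-values from the sketch above.
# significant = benjamini_hochberg(p_values, alpha=0.05)
# print("genes called significant:", significant.sum())
```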


This material may be derived from roughly machine-read images, and so is provided only to facilitate research.