5 Innovations Emerging in the Clinical Data Utility
Pages 33-40

The Chapter Skim interface presents the most significant chunk of text, algorithmically identified, from each page of the chapter.


From page 33...
... • There are multiple approaches to data normalization, but a hybrid approach, in which new systems standardize from inception and legacy systems transform over time, is most feasible. • Clinical element models, together with value sets, present an opportunity for normalization in a way that maintains the context and provenance of the data.
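The normalization-with-provenance idea above can be sketched in a few lines. This is a minimal illustration, not any system described in the chapter: the local-to-LOINC mapping and field names are hypothetical.

```python
# Minimal sketch (all codes and field names hypothetical): normalize a
# legacy lab record to a standard terminology while preserving the
# original code and source system as provenance.

LOCAL_TO_LOINC = {"GLU": "2345-7"}  # hypothetical local-to-LOINC map

def normalize(record: dict) -> dict:
    """Return a normalized copy; the original code is kept for provenance."""
    std = LOCAL_TO_LOINC.get(record["code"])
    return {
        "code": std if std else record["code"],
        "normalized": std is not None,
        "value": record["value"],
        "provenance": {
            "source_system": record["source"],
            "original_code": record["code"],
        },
    }

legacy = {"code": "GLU", "value": 5.4, "source": "legacy-ehr"}
print(normalize(legacy))
```

Keeping the original code alongside the standardized one is what lets a legacy system transform gradually without losing the context the excerpt emphasizes.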
From page 34...
... Distributed queries allow querying of data from multiple partners without having to physically aggregate data in one central repository; a query is sent to all partners, and each participant runs this query internally and returns summary results individually. Some example use cases for distributed population queries include population measures related to disease outbreaks, postmarket surveillance, prevention, quality, and performance.
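The distributed-query pattern described above can be sketched as follows. The partner sites and records are hypothetical; the point is only that each site evaluates the query locally and returns a summary count, so raw data never leave the partner.

```python
# Minimal sketch (hypothetical sites and records): a distributed count
# query. The aggregator sends the same predicate to every partner; each
# partner runs it against its own data and returns only a summary.

def run_locally(records, predicate):
    """Evaluate the query inside one partner; return a count, not rows."""
    return sum(1 for r in records if predicate(r))

partners = {
    "site_a": [{"dx": "flu"}, {"dx": "asthma"}],
    "site_b": [{"dx": "flu"}],
}

query = lambda r: r["dx"] == "flu"

# Each participant answers individually; the aggregator sums summaries.
summaries = {site: run_locally(recs, query) for site, recs in partners.items()}
total = sum(summaries.values())
print(summaries, total)
```

A population measure (say, flu prevalence during an outbreak) is then computed from the per-site counts without ever aggregating patient-level data centrally.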
From page 35...
... Despite the obstacles inherent to such queries, several efforts across many domains are ongoing and have achieved considerable success. Platt described Mini-Sentinel, an FDA-sponsored pilot initiative that has created a distributed dataset that includes data on 126 million people at 17 data partners to support active safety surveillance of medical products.
From page 36...
... Along these lines, Query Health, an ONC-sponsored initiative, is working with many partners to develop standards for distributed data queries. As Elmore emphasized, the idea is to send questions to voluntary, collaborative networks, whose varied data sources may range from EHRs to health information exchanges (HIEs)
From page 37...
... However, it is likely that many value sets would have to be bound to these CEMs in order to truly have interoperability and a comparable and consistent representation of clinical data. Value-set management, therefore, is a major component of normalization, and terminology services together with a national repository of value sets managed by the National Library of Medicine are one suggested approach to handling this challenge.
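What "binding a value set to a CEM" means can be shown with a toy check. The value set and field names below are hypothetical, not drawn from any actual CEM or NLM repository content.

```python
# Minimal sketch (hypothetical codes): a value set bound to one field of
# a clinical element model, and validation of an instance against it.

DIABETES_VALUE_SET = {"E11.9", "E10.9"}  # hypothetical ICD-10-CM codes

def validate_element(element: dict, field: str, value_set: set) -> bool:
    """True when the bound field's code is drawn from the value set."""
    return element.get(field) in value_set

element = {"condition_code": "E11.9"}
print(validate_element(element, "condition_code", DIABETES_VALUE_SET))
```

Centrally managed value sets make this check give the same answer everywhere, which is the comparable, consistent representation the excerpt calls for.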
From page 38...
... Rather than simply digitizing the data contained in paper records, emphasis should be placed on improving data visualization and leveraging the power of large datasets for extrapolation. The strategy also must address health care-specific challenges, including false positives, lack of uniform identifiers, privacy regulations, dirty data, and the multitude of data sources.
From page 39...
... Research into the scale of overlap and missed-signal problems associated with systems that do not link records stratified across disease states will help to make the case for improved record linkage. Lastly, Kheterpal suggested that developing a stratification model that matches a proposed research question with the necessary data types could improve the accuracy and relevance of data linkage efforts.
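One crude way to picture the suggested stratification model is to score data sources by how well they cover the data types a research question requires. The source catalog and type names below are hypothetical stand-ins, not anything Kheterpal proposed in detail.

```python
# Minimal sketch (all names hypothetical): rank data sources by how many
# of a research question's required data types each one covers.

SOURCE_DATA_TYPES = {
    "ehr": {"diagnoses", "labs", "medications"},
    "claims": {"diagnoses", "procedures"},
}

def sources_for(required: set) -> list:
    """Return sources covering any required type, best coverage first."""
    ranked = sorted(SOURCE_DATA_TYPES.items(),
                    key=lambda kv: len(kv[1] & required), reverse=True)
    return [name for name, types in ranked if types & required]

print(sources_for({"diagnoses", "labs"}))
```

Matching the question to sources up front narrows which records actually need linking, which is how such a model could sharpen the accuracy and relevance of linkage efforts.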

