3
Improving Current Capabilities for Data Integration in Science

Any new direction or method of scientific inquiry starts out with a few visionary scientists blazing the path. All are focused on getting results from their research, and invariably they invent new data formats and semantics. This behavior leads to rapid innovation by each individual group but greater difficulty in sharing data across groups, or even across projects in a single group. In the early days of any domain, this state of affairs is a good thing because it maximizes the rate of early innovation, and a similar situation holds as new directions and innovative methods are explored even in mature disciplines.

However, there are drawbacks to this state, and these were noted by workshop participants. Usually, data are available only haphazardly from these early projects—that is, they are not well documented or curated and are not always easily accessible. Individual groups have little incentive to publish data, which slows the progress of the broader field. A new researcher in the domain is presented with a daunting data-discovery problem. And when the data are finally found, they may not be in a usable format. It is common, in this stage, for data to be transmitted to a requester as a bundle of code and data, such that the code is required in order to read the data. But getting code to run in a new environment can be far from trivial because of differences in operating systems, compilers, search paths for libraries, and so on, so that a researcher attempting to reuse the data might spend a good deal of time just getting to the point of being able to read the incoming data. Because most of the areas of scientific research discussed at the workshop are still in this stage with regard to data integration, the researchers share these challenges.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.





Further, there are multiple ways in which reuse might be hindered. The structure selected by the original researcher to organize the data might be inconvenient for a subsequent user—for example, the data might be stored as geographic images, one for each time step, whereas the new researcher needs a time series for each spatial location. Or an underlying choice that was not even explicitly considered by the original researcher—perhaps the projection that was used to map the data from Earth’s surface onto two dimensions—might not be suitable for the reuse context. (Even for research areas that have matured, such challenges can arise whenever data are applied in unanticipated ways.) The parameters that characterize the projection, or even the units, might not be clear because of incomplete metadata. Lastly, the second researcher’s software tools may not be able to handle the individual data elements. Massaging the data into the correct format and organization may pose a tedious data-manipulation problem. It can take weeks or more of effort to convert data into a form suitable for reuse. Many new researchers give up before they get to this stage. In short, it is often just too difficult to reuse data gathered by other researchers.

It is crucial to focus on this transformation problem. Several workshop participants noted that it is not difficult to write clear transforms if the relevant metadata are available. Most popular transforms have been written multiple times by multiple labs, which is, of course, inefficient. Workshop participants said it was rarely easy to locate existing transformation software of interest, and some suggested that an online service to share transforms could be established. Such a service would allow scientists to avoid having to reinvent tools, but it would require publishing and documenting transforms in a systematic way so that others could locate them.
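To make the image-versus-time-series mismatch concrete, here is a minimal sketch in Python with NumPy; the synthetic data and array shapes are invented stand-ins for the original researcher's files:

```python
import numpy as np

# Hypothetical stand-in for the original organization: one 2-D
# geographic image per time step, indexed as (time, lat, lon).
n_times, n_lat, n_lon = 10, 4, 5
snapshots = np.arange(n_times * n_lat * n_lon, dtype=float)
snapshots = snapshots.reshape(n_times, n_lat, n_lon)

# The reuse context needs a time series for each spatial location,
# i.e. an array indexed as (lat, lon, time): one axis permutation.
time_series = snapshots.transpose(1, 2, 0)

# The series for the grid cell at row 2, column 3 is now a single
# 1-D vector, identical to slicing the original stack at that cell.
assert time_series[2, 3].shape == (n_times,)
assert np.array_equal(time_series[2, 3], snapshots[:, 2, 3])
```

The permutation itself is one line; the weeks of effort reported by participants go into discovering, from incomplete metadata, that this is the permutation needed, and into parsing whatever container the snapshots arrived in.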
In our Internet-savvy world, one should be able to locate data sets and transforms of interest using the Web. At present this is a hopeless task. Workshop participants identified four steps that would make this task possible:

• Repositories. Several participants noted the need for domain-specific (as well as general) repositories where scientific data sets can be archived. Because data decay over time and require periodic maintenance, such repositories must be staffed with professionals who can do such maintenance as well as assist scientists trying to use data sets in the repository. Good search tools are needed so the contents of a repository can be easily browsed and objects of interest located. Lastly, curation facilities are also needed so that the precise semantics of data sets can be documented. Obviously, the curation cannot be such an onerous human task that the repository will not be used. Curation information must be easy to locate, browse, and understand. Dr. Stonebraker suggested that Genbank and the Sloan Digital Sky Survey are examples of data repositories with effective search tools and good curation, but he said that many more such facilities are needed.

• Web-based search. It is nearly impossible to locate structured data using current text-oriented search engines. Moreover, there seems to be little incentive on the part of the search engine companies to provide this capability. Thus, targeted research will be necessary to enable locating structured data on the Web. Ideas for doing this include a science-oriented tagging system—that is, a system that makes assumptions about the content of a file based on some knowledge of the field of science—and storing science data in hypertext markup language (HTML), which would make them visible to search engines. The latter idea is only feasible for small data and is not a general approach.

• Community-driven information extraction. Given how much information is now available on the Web, the ability to interpret and integrate relevant Web content can have huge benefits. However, search alone can be a tedious means of collecting data from disparate sources. Web-scale information extraction, assisted by an automated tool, represents a bottom-up complement to top-down approaches like the Semantic Web.1 Another approach is to provide a suite of extraction tools to enable communities of interest on the Web to collaborate in creating and curating integrated datasets in domains they care about. This seems particularly promising for scientific domains, given that scientists are technically sophisticated and willing to collaborate.

• Locating transforms.
As noted above, several workshop participants suspected that the data transforms they need at any given time have probably already been written at least once, but cannot be found, leaving individual researchers and groups to write their own. The same is true for all sorts of data manipulations, with similar kinds of code modules appearing over and over among different research groups. Effort is wasted in writing such transforms many times and in maintaining such code as circumstances change. Obviously, it would be best to have a system that allows for reuse of common transforms; such a system might also support the development of more robust transforms. Perhaps one or more repositories (something like SourceForge) could be established to store such code. Another option would be for science funding agencies to form their own code repositories.

1 The Semantic Web is an ambitious dream of deploying interlinked information via the resource description framework (RDF) throughout the Web. It encompasses a wide variety of philosophies, goals, and technologies. In general, it would rely on the establishment of ontologies and tools to help those who publish data to mark their content in terms that can be recognized semantically. Many of the Semantic Web technologies are proving to be useful, especially RDF, SPARQL, and OWL. Because the Semantic Web per se does not provide any particular set of standard entity names (URIs) or any particular approach to semantics, leaving these to particular application layers, any practical system for data integration must add these.

Workshop participants discussed some of the tools that have been produced by the database community that could help with data integration in the sciences, for both structured and semistructured data. The four subsections that follow provide a sampling of the approaches covered. The workshop was not designed to prioritize the potential value of database tools to scientific research data, and so this sample should not be construed as being more than just illustrative. Other critical techniques for data integration in some contexts—such as parallel processing and data indexing, which are very important when working with very large sets of data—are not covered here.

FEDERATORS

Dr. Haas’s presentation provided an overview of how federators can be used to integrate data. She covered technical federation techniques, not the use of federation as a management or governance concept. Federation engines present users with a virtual repository of information. The users can manipulate information as if it were stored together in a single place with a single interface whereas it may actually be stored in multiple, possibly heterogeneous places. Federation engines come in different flavors, each presenting a different interface to users.
The most common interface is that of a relational database management system (DBMS), effected through methods such as the Open Database Connectivity (ODBC) method, the Structured Query Language (SQL), and the relational data model. However, some federation engines present an extensible markup language (XML) interface (supporting some variant of XPath or XQuery, typically), and others might act like an object-oriented database or even a content repository. Besides having different interfaces, the capability of federation engines also varies, from “gateway” systems that allow simple queries against one source at a time while providing a common interface to all sources, to systems that allow users to leverage the full power of their query language to gather or correlate information from multiple diverse sources with a high level of query function.
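As a toy illustration of the federation idea (not of any particular engine), the following sketch presents a single query function over two heterogeneous sources: a relational table (in-memory SQLite) and a special-purpose store (here, just a dictionary) reachable only through its own access path. All names and data are invented:

```python
import sqlite3

# Source 1: a relational table of assay results.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE assays (compound TEXT, score REAL)")
db.executemany("INSERT INTO assays VALUES (?, ?)",
               [("c1", 0.91), ("c2", 0.42)])

# Source 2: a special-purpose store mapping compounds to disease
# associations, with its own (non-SQL) access path.
literature = {"c1": ["asthma"], "c2": ["arthritis"]}

def federated_query(disease, min_score):
    """Answer one cross-source question without the caller knowing
    where each piece of data lives or how each source is queried."""
    hits = []
    for compound, score in db.execute(
            "SELECT compound, score FROM assays WHERE score >= ?",
            (min_score,)):
        if disease in literature.get(compound, []):
            hits.append((compound, score))
    return hits

print(federated_query("asthma", 0.5))  # [('c1', 0.91)]
```

A real federation engine does this through a full query language, an optimizer, and per-source wrappers rather than hand-written loops, but the user-facing effect is the same: one interface, many sources.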

To illustrate the potential of federation, Dr. Haas described the example of a pharmaceutical company with four main research sites in different countries. Each site has many data sources, including these:

• A special-purpose store for chemical compound information, searchable by chemical structure;
• A relational database holding results from various assays; and
• A literature source linking drugs to diseases and symptoms.

Data sizes range from hundreds of thousands of compounds to billions of test results. The four sites focus on different diseases and, for the most part, different compounds, locally storing the information they produce and use. However, as a scientist forms hypotheses about a compound, he or she might need to ask a coworker to find a compound with a structure similar to the one he or she is working with that has been associated with asthma and that has assay scores on test X within range [A,B]. Such a query might need data from all of the sites. In this example, federation allows the scientist to pose the query without worrying about the geographic distribution of the data or about the different interfaces for the chemical stores, relational databases, and literature sources. The federation engine bridges this heterogeneity and drives the execution of the query across the different sources, reporting the results to the waiting scientist.

The architecture of IBM’s InfoSphere Federation Server (IFS) illustrates how federation works. IFS has two main components: a query engine that supports either SQL or SQL/XML and a set of wrappers that connect the engine to a wide variety of data sources. A wrapper is a code module that handles four main functions:

• It handles the connection to the data source and transaction management.
• In response to requests from the query engine, it drives the data source to produce the required result and retrieves the data.
• The wrapper also provides a mapping from the data model and functions in the underlying source into the relational model. If the underlying database is relational, this is straightforward. But in the case of the chemical store described earlier, the chemical similarity search and the chemical structure must be mapped to relational constructs.
• The wrapper participates in query planning, providing estimates of the costs of various operations to allow the query processor to identify a feasible and efficient plan for the query.
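The four wrapper responsibilities listed above can be summarized as an interface. The sketch below is hypothetical (IFS defines its own, much richer wrapper API); it only mirrors the structure of the description:

```python
from abc import ABC, abstractmethod

class Wrapper(ABC):
    """Hypothetical sketch of the four wrapper functions described above."""

    @abstractmethod
    def connect(self):
        """Handle the connection to the data source and, where the
        source supports it, transaction management."""

    @abstractmethod
    def execute(self, request):
        """Drive the source to produce the required result and
        retrieve the data, in response to the query engine."""

    @abstractmethod
    def to_relational(self, source_result):
        """Map the source's data model and functions into the
        relational model (trivial for relational sources, nontrivial
        for, say, a chemical-similarity store)."""

    @abstractmethod
    def cost_estimate(self, operation):
        """Provide cost estimates so the query processor can pick a
        feasible and efficient plan."""

# A trivial concrete wrapper over an in-memory list, for illustration.
class ListWrapper(Wrapper):
    def __init__(self, rows):
        self.rows = rows

    def connect(self):
        return True  # nothing to do for an in-memory source

    def execute(self, predicate):
        return [row for row in self.rows if predicate(row)]

    def to_relational(self, source_result):
        return list(source_result)  # rows are already tuples

    def cost_estimate(self, operation):
        return float(len(self.rows))  # naive full-scan cost

w = ListWrapper([("c1", 0.91), ("c2", 0.42)])
assert w.connect()
assert w.execute(lambda row: row[1] > 0.5) == [("c1", 0.91)]
```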

The query processor is an extended relational query processor. When a query arrives, it is parsed, the table and column names are resolved, the query is rewritten into a canonical form and optimized, and a run-time plan is produced and executed. Each phase of query processing after name resolution is modified to deal with wrappers and distributed data. In addition, a new phase analyzes the rewritten query and looks for opportunities to push work down to the remote data sources. Complex queries can be handled, and all improvements to the basic query processor—for example, new execution strategies or better optimizations—are immediately available for dealing with distributed, heterogeneous data.

Federation has been used for many purposes. It is often used to extend an existing database with heterogeneous, hard-to-convert data that are separately owned or that will rarely be used. This usage saves maintenance or creation costs for the warehouse. Federation is also frequently used to build a view across multiple organizational units, as in the four research labs in the example. This is an appealing use case, but the query workload must be watched carefully, as it is easy for complex queries to be generated that are challenging to optimize and may lead to unacceptable performance in some circumstances. Portals are more easily built on top of a federation engine than by hand-coding access to different data sources. Another common use of federation is as a prototyping environment for data-intensive applications. Even if a large materialized store must eventually be built, federation is easy to set up, and it allows testing of queries and early examination of the data.

Federation is a powerful tool for data integration, but it is not a panacea. Federation integrates data lazily, as it is needed.
It is appropriate when data sets are not too large or when the queries are selective enough that only a small fraction of the data will ever be returned. It works well when the data do not need too much preprocessing or cleansing, or when the data change frequently and up-to-date results are desired.

The extract, transform, and load (ETL) paradigm, which is commonly used in business, is an alternative approach for the integration of primarily structured data. Its first step is to extract data from various sources, which includes conversion into some common format. The collected data are then transformed through a series of rules to prepare them for use. Transformations might include filtering, sorting, cleaning, translating individual records for consistency, and other such operations. Finally, ETL loads the resulting data into the system where they will be warehoused and used. Generally speaking, ETL has strengths and weaknesses that are complementary to those of federation.

Dr. Haas suggested the following as potential steps for improving federation technology:

• Federation engines need continued work to minimize data movement, exploit multiprocessors, and leverage caching and even indexing to reduce response times for complex queries and large volumes of data.
• Other work is needed to extend the engines’ capabilities. Today, entity resolution (figuring out when two data elements refer to the same real-world object) and data cleansing (discovering and correcting errors in the data) are typically batch operations. When those steps are necessary, federation cannot be used. Dynamic algorithms for these tasks would enable federation.
• Most federation engines today work only on traditional structured or semistructured data, though they can also return some uninterpreted fields, such as images. As the ability to extract information from unstructured data is improved, federation engines will need to grow to handle these new types.
• Finally, understanding where data come from is critical to many scientific endeavors. Hence, mechanisms for tracking provenance must be extended to function in a federated environment.

RESOURCE DESCRIPTION FRAMEWORK

Orri Erling gave an overview of the resource description framework (RDF) and linked data principles for science data and metadata. Using RDF as the data model for these metadata has numerous advantages. Sometimes, especially in the life sciences, data themselves are also represented in the RDF model. For other domains, such as those involving large arrays of instrument data, RDF is not a convenient format for the bulk of the data but is still appropriate for annotation. From the viewpoint of processes, data and metadata should go hand in hand, but different sizes and modeling characteristics often necessitate different representations for data and metadata.

RDF has several advantages for science metadata.
To begin with, data are self-describing, and all entities and terms used have universal resource identifiers (URIs). The term “linked data” is used to mean a set of RDF triples where the URIs representing the entities, classes, and properties thereof are dereferenceable via HTTP. In addition, there is a constantly growing body of reusable ontologies, which provide the conceptual bases for RDF. Reusing terminology and modeling metadata has obvious advantages over reinventing the metadata schema for each application. Also, RDF is inherently schemaless—that is, not all entities of a class need have the same properties, and properties can be attached to data instance by instance without any database-wide schema alteration. This makes RDF less cumbersome than, say, relational database management systems (RDBMSs) for highly variable or sparse data. Further, there is a constantly developing set of tools for harvesting, exchanging, storing, and querying RDF. Finally, the RDF model has well-defined semantics for inferencing and many features for facilitating mapping between ontologies and instance data sets. Classes, properties, and instances can be declared to be the same for purposes of a query. Scalar values may be typed value by value—for example, denoting a unit of measure. Thus, both issues of different identifiers for the same entities and different units of measure can be made explicit value by value in RDF.

Scalability of RDF storage is no longer a major problem, with billions of RDF statements being stored per server and with scale-out clustering available from at least OpenLink, Systap, and Garlik for larger scales. Also, data compression for RDF continues to advance, leading to further improvement of scalability. With the next generation of RDF storage, the performance penalty that RDF suffers when compared to RDBMSs for the same workload is likely to be substantially reduced through use of techniques such as adaptive indexing and caching of intermediate results. Task-specific relational schemas will probably continue to have some performance advantage for applications where the schema and workload are stable and known in advance, according to Dr. Erling.

Relational databases can also be mapped into RDF without storing the data in RDF. This is possible with tools such as Virtuoso or D2RQ. Thus, if science metadata are already in relational form, the RDF conversion for data interchange and integration can be done declaratively and on demand. A World Wide Web Consortium (W3C) working group aimed at developing standards for such mapping was launched in October 2009. As an example, Dr.
Erling described a harvesting model used for media metadata, which could be easily adapted to science metadata. The site bbc.openlinksw.com publishes metadata about programs of the BBC. The bbc.openlinksw.com server periodically crawls this content and presents it for search and structured querying via SPARQL, the SQL equivalent for RDF. Additionally, this server, if used as a proxy for accessing other RDF content, caches this content and allows querying over the BBC data and other cached data. For example, one can combine data from the BBC, LastFM, Musicbrainz, and other sources, all of which contain information about a musical artist. For the content producer, publishing the metadata is as simple as exposing RDF files for HTTP access. These files can be generated within the pipeline for content production.

This harvesting example is low-cost, incremental integration that does not require a priori agreement on schema and can accommodate any future data without schema alteration by a database administrator. Query-time inference can be used for identifying different names for the same entity and presenting the union of properties associated with each identifier. More complex matching and inference can be done as an ELT transformation step without altering the source data.

The broader utilization of RDF might have positive impacts on the metadata publishing practices of scientific communities over time. Since every element has a URI, many of which can be dereferenced over HTTP, both schema and instance data identifiers point to their source, which provides a means of implicit attribution. Since data and their schema are thus objects of attribution and citation, there is an incentive for publishing data and schemas of high quality. If many RDF data sets are kept in a common repository, it is easy to see which identifiers, ontologies, or taxonomies are in the broadest use. This ease of discovery will drive convergence of terminology. While very complex, centrally administrated ontologies exist, the ones enjoying the fastest adoption are lightweight ones developed through a bottom-up community process.

MapReduce AND ITS CLONES

MapReduce2 and the accompanying Google File System3 were developed at Google to solve the problem of massive explosion in data by leveraging cheap hardware for both storage and processing. They are designed to scale to thousands of commodity servers, which means that failure is assumed to be not an exception but more of a rule. Hence, many design decisions within these systems are biased toward fault-tolerance, scalability, and agility as opposed to performance. Apache Hadoop4 is the open-source implementation of MapReduce, and it has the sister technology Hadoop Distributed File System (HDFS).
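The programming model that these systems implement can be sketched in a few lines of single-process Python (word count, the standard toy example; everything that makes the real systems valuable, namely distribution, scheduling, and fault tolerance, is omitted here):

```python
from collections import defaultdict
from itertools import chain

def mapper(record):
    # First stage of processing: emit (key, value) pairs per record.
    return [(word, 1) for word in record.split()]

def shuffle(pairs):
    # Group values by key, as the framework's hash-sorted shuffle
    # stage does before handing groups to the reducers.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reducer(key, values):
    # Postprocess one key's group of mapped values.
    return (key, sum(values))

records = ["data integration in science",
           "large scale data integration"]
mapped = list(chain.from_iterable(mapper(r) for r in records))
reduced = dict(reducer(k, vs) for k, vs in shuffle(mapped).items())

assert reduced["data"] == 2 and reduced["integration"] == 2
assert reduced["science"] == 1
```

In a real deployment, each mapper and reducer invocation runs as a task on some server in the cluster, and the shuffle moves data between machines; the developer still writes only the two functions.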
At the workshop, Amr Awadallah of Cloudera described a popular example that illustrates the scalability of Hadoop for economically storing large amounts of scientific data: the Large Hadron Collider Tier 2 site at the University of Nebraska-Lincoln, which currently stores 400 TB of data.5 As scientific data sets continue to grow at exponential rates, the need is paramount for scalable, fault-tolerant systems that can both store and process data economically. MapReduce and its clones represent an option for addressing that need for some types of scientific data.

The MapReduce model is a programming paradigm for processing large data sets; it makes it easy to scale execution linearly over a large number of servers. In its simplest form, the developer specifies a map function that does the first stage of processing. The output data from the mappers are consistently hash-sorted then pulled by the reducers in what is known as the shuffle stage. Finally, the reducers perform the postprocessing of the results from the mappers. The origins of the MapReduce programming model come from functional languages such as LISP. The MapReduce programming model is available in many shapes and forms. In fact, many of the traditional RDBMS vendors (for example, Teradata, Oracle, Greenplum) support MapReduce indirectly through user-defined functions (for mappers) and user-defined aggregates (for reducers).

The power of the overall MapReduce system (the distributed scheduling system that executes MapReduce jobs) comes from its ability to (1) automatically distribute/schedule the jobs and (2) transparently handle failures without requiring the jobs to be reexecuted from scratch (which would be very frustrating for multihour jobs processing large amounts of data). The system also allows the number of servers to be dynamically scaled up or down while jobs are running, so a number of additional servers can be thrown into the processing pool and jobs will begin using them transparently. The system is also designed to run a large number of data-processing jobs with various operating requirements. Some of these jobs can be operational jobs with high priority, so the system will automatically kill (preempt) the mapper or reducer tasks of lower-priority jobs to make room for the operational jobs. The jobs that have been preempted are resumed once the system has available resources for them. Furthermore, the system has optimizations to detect partial failure.

2 See http://labs.google.com/papers/mapreduce.html.
3 See http://labs.google.com/papers/gfs.html.
4 See http://hadoop.apache.org/core.
5 Details of this example may be found at http://www.cloudera.com/blog/2009/05/01/high-energy-hadoop.
For example, if one of the mappers executing a part of the job is running slowly compared with the rest of the mappers (maybe that node has unreliable disks), the system automatically starts a redundant mapper on a separate server, and whichever one finishes first wins. The MapReduce system is storage-system independent: It can read data from a normal file system, a distributed file system, an in-memory key-value store, or even a traditional RDBMS.

Dr. Awadallah presented a list of MapReduce scientific examples and presentations that was assembled by members of the NSF Cluster Exploratory program. The list includes the following:

• Florida International University’s Indexing Geospatial Data with MapReduce,
• University of Washington’s Scaling the Sky with MapReduce and Interactive Visualization of Large Data,
• University of Maryland’s Commodity Computing in Genomics Research,
• Carnegie Mellon University’s Cluster Computing for Statistical Machine Translation,
• University of California, Irvine, Large-Scale Automated Data Cleaning, and
• University of California, Santa Barbara, Scalable Graph Processing.

Dr. Awadallah believed that MapReduce is most suitable for batch data-processing jobs. This would include ETL jobs that process original raw data into their relational form (because MapReduce does not require a predefined schema to be able to process data) and complex data transformations that are difficult to express in SQL (e.g., optical correction algorithms for astronomical images). MapReduce also has the ability to process data from multiple heterogeneous systems, such as those that exist in federations, through simple reader and writer functions. For example, one can have a MapReduce job that fetches input data from the distributed file system and then joins them with data from an RDBMS. This allows the MapReduce system to run on top of data sources that range from unstructured (for example, collections of text, video streams, or satellite images), to semistructured (for example, XML, JSON, or RDF-like data), to relationally structured data (for example, tables with predefined column schemas).

DATA MANAGEMENT FOR SCIENTIFIC DATA

Dr. Maier’s workshop presentation covered data management concepts that are of use for scientific data. Most commercial data-integration solutions are based on the relational model, with a few using XML as a target model. Such offerings are not likely to be of great help for integrating scientific data sets because there is not much support for some data types common to science, such as sequences, time series, and multidimensional arrays. Commercial relational DBMSs offer support for some scientific data types, most often time series and spatial objects, such as are used in geographic information systems (GISs).
However, such DBMS support for scientific data types is supplied either by an encoding into the underlying relational model or through an abstract-data-type (ADT) extension. In either case, the data types are not part of the core model of the system, and there is limited understanding of the types in the query and storage-management layers.

Many scientific data types exhibit some form of order or, more generally, topology (a notion of adjacent elements and neighborhoods). This structure arises from the organization of the underlying physical world, such as chains of nucleotides or amino acids (ordered sequences) or discretized versions of continuous spaces arising from sensing or simulation (multidimensional arrays, finite-element meshes). The desired operations on these data types are often order- or neighborhood-sensitive: examples include pattern matching, image filtering, and regridding.

Dr. Maier said that it has long been recognized that relational models and languages lack support for ordered types. While it is possible to encode ordered structures into the relational model, the associated operations can be hard to express, and optimization opportunities are obscured. Over the years, query languages for array and mesh data types have been suggested, such as AQL (Libkin, Machlin, and Wong, 1996), the Array Manipulation Language (Marathe and Salem, 2002), and GridFields (Howe and Maier, 2005). However, no full-featured DBMS based on these languages is currently available.

Because of the limitations of relational DBMSs for supporting arrays, Maier reported that many scientific data sets end up in files using array data formats, such as NetCDF (see http://www.unidata.ucar.edu/software/netcdf/) and HDF (see http://www.hdfgroup.org/). While such formats directly support multidimensional arrays and appropriate access methods, they offer a file-per-dataset model and limited operations and hence are far from a full DBMS. They support interfaces to languages popular in scientific domains (C++, Fortran, Python) and to multiple data-analysis environments (R, Matlab, Octave). Libraries of utilities for common operations are available on some platforms, but there is no automatic optimization over groups of operators.

There are also approaches that layer support for scientific data types over existing storage managers, usually a DBMS. Maier stated that the following are the two main approaches:

• Array Model and Query Language.
This approach provides an array data model and query language and performs some optimization and evaluation natively in that model, with the underlying storage system managing persistent storage and possibly providing some degree of support for memory management, access methods, and query execution. Raster Data Manager (RasDaMan) (Baumann et al., 1998) is the most mature example of this approach. RasDaMan is an open-source system supporting an array data model and query language, with commercial support and extensions available. It provides its own query optimization, query evaluation, and main-memory management, using the underlying system (usually a relational DBMS) as a "tile store" for fragments of arrays. A more recent example is the RAM research project (van Ballegooij et al., 2003), which provides an array model and query facility that has been layered over various back ends, notably MonetDB. RAM performs query normalization, simplification, and optimization within its array model before translating into queries on the underlying relational engine. That layer can perform further optimization in the relational model before executing the queries.

• Secondary-Storage Extensions to Data-Analysis Environments. The second approach to layering uses a DBMS to provide relatively seamless access to secondary storage from a data-analysis environment. The type system of the environment thus effectively becomes the data model, usually providing vectors, matrices, and higher-dimensional arrays. There is no special query language in this approach: disk-resident data are manipulated with the same functions used for in-memory data. It is up to the underlying interface to the DBMS to determine when functions can be performed in the database and when data need to be retrieved for main-memory manipulation. Ohkawa (1993) used this approach with the New S statistical package and an object-oriented DBMS. The RIOT prototype (Zhang et al., 2009) supports the R data-analysis environment using a relational DBMS. To create optimization opportunities in the underlying DBMS, both systems use lazy evaluation techniques. An operation on a secondary-storage object merely creates an expression that represents the application of the operation. Repeated deferral allows accumulating operations into one or more expression trees. Such trees are evaluated only when their result is to be output to the user, at which point they may be optimized before processing.

According to Dr. Maier, the SciDB project (Cudré-Mauroux et al., 2009) has recently begun development of an open-source database with fully native support for an array model, including an array-aware storage manager.
In addition to a data model and algebra for multidimensional arrays, SciDB will support history and versioning of arrays, provenance, uncertainty annotations, and parallel execution of queries. If successful, it should provide a suitable platform for integrating extremely large scientific data sets.
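The deferred-evaluation technique attributed above to the New S and RIOT systems can be sketched in a few lines. This is a minimal illustration of the idea, not code from either system, and the class and function names are invented: operations on a proxy object build an expression tree instead of executing, and the tree is evaluated (the point at which a real system would optimize it and push work into the DBMS) only when a result must be produced.

```python
# Minimal sketch of lazy evaluation via expression trees: arithmetic on a
# proxy object is recorded, not performed, until a result is demanded.

class Deferred:
    """A proxy whose arithmetic builds an expression tree lazily."""
    def __init__(self, op, args):
        self.op, self.args = op, args

    def __add__(self, other):
        return Deferred("add", [self, other])   # record, don't compute

    def __mul__(self, other):
        return Deferred("mul", [self, other])   # record, don't compute

    def evaluate(self):
        # In a real system, this is where the accumulated tree would be
        # optimized and translated into queries on the underlying DBMS.
        if self.op == "leaf":
            return self.args[0]
        vals = [a.evaluate() if isinstance(a, Deferred) else a
                for a in self.args]
        return {"add": vals[0] + vals[1], "mul": vals[0] * vals[1]}[self.op]

def load(value):
    """Stand-in for opening a disk-resident vector or array."""
    return Deferred("leaf", [value])

x = load(2)
y = load(5)
expr = (x + y) * x        # nothing computed yet; just an expression tree
print(expr.evaluate())    # forces evaluation: (2 + 5) * 2 = 14
```

Because each operation only grows the tree, a long chain of array manipulations collapses into a single evaluation step, which is what creates the optimization opportunity the presentation describes.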