Data Warehouse

Data Lake Integration

A data lake is a data storage repository that stores data in full—the data files are kept in their native format—until needed for analysis. Data lake integration is the process of moving, preparing, and loading that data for analysis in a data warehouse. One benefit of a data lake is that it can cost-effectively hold vast amounts of raw data, including structured, semi-structured, and unstructured data, until it is ready to be used.

Why is Data Lake Integration Important?

Data lakes are useful storage repositories, and integration technology makes the data they hold valuable to the business by creating an automated path to an analytics system. The data lake provides a central location to collect data of all types for analysis when needed. A data lake differs from a data warehouse, which is ideally suited to analyzing structured data stored internally. With the right analytic database technology, queries can be extended to reach data, including unstructured data, stored externally in a data lake. In this case, the external file is registered to the database, and a connector sends the request to the external data source.
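The external-file pattern described above can be illustrated with a minimal sketch. Here, standard-library Python and an in-memory SQLite database stand in for a real analytic database, and an in-memory CSV stream stands in for a file in the data lake; the `load_external` helper is a hypothetical connector, not a real product API:

```python
import csv
import io
import sqlite3

# A file in the "data lake": raw CSV kept in its native format.
lake_file = io.StringIO("region,sales\nEMEA,120\nAPAC,95\n")

def load_external(conn, table, fileobj):
    """'Register' an external file: a connector reads it on demand
    and exposes the rows to the database for querying."""
    rows = list(csv.DictReader(fileobj))
    cols = list(rows[0].keys())
    conn.execute(f"CREATE TABLE {table} ({', '.join(cols)})")
    conn.executemany(
        f"INSERT INTO {table} VALUES ({', '.join('?' for _ in cols)})",
        [tuple(r.values()) for r in rows],
    )

conn = sqlite3.connect(":memory:")
load_external(conn, "sales", lake_file)

# The query runs in the database, but the data originated in the lake.
total = conn.execute("SELECT SUM(sales) FROM sales").fetchone()[0]
print(total)  # 215
```

A production connector would stream the request to the external source rather than copy the file, but the registration-then-query flow is the same.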

Integrating a Data Lake With an Analytics Platform

Below are some approaches businesses have taken to connect their data lakes to a data analytics solution:

Traditional Data Warehouse

Moving data to a traditional data warehouse from a data lake ideally uses a data integration solution such as Actian DataConnect, which manages the data movements, transformations, and filtering needed to get the data into a suitable form for meaningful analysis.
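The movement, transformation, and filtering steps that a tool such as DataConnect automates can be sketched in a few lines; the record fields and cleaning rules below are illustrative assumptions, not a real pipeline:

```python
# Records as extracted from the data lake, still in their raw string form.
raw_records = [
    {"id": "1", "amount": "19.99", "status": "ok"},
    {"id": "2", "amount": "", "status": "ok"},       # missing value
    {"id": "3", "amount": "5.00", "status": "err"},  # failed record
]

def transform(record):
    # Cast fields to the types the warehouse schema expects.
    return {"id": int(record["id"]), "amount": float(record["amount"])}

# Filter out records unsuitable for analysis, then transform the rest.
warehouse_rows = [
    transform(r) for r in raw_records
    if r["status"] == "ok" and r["amount"]
]
print(warehouse_rows)  # [{'id': 1, 'amount': 19.99}]
```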

Extended Data Warehouse

When the source data in the data lake is already in a form that is ready for analysis, as is the case for many Hadoop data formats, an analytics engine that can read those formats directly is beneficial. For example, the Actian Vector database can use its built-in Spark connector to access more than 50 data formats, including Hadoop-formatted file data. Likewise, the Actian Data Platform can host data warehouse projects and their required data integrations.

Data Lakehouse

The data lakehouse concept combines the data analysis capabilities of a data warehouse with the storage flexibility of a data lake, removing the need for a separate integration technology. In a lakehouse, structured data is stored in a database as tables, which can also hold semi-structured formats such as JSON strings. Unstructured data such as video, audio, and text streams is stored as flat files in one or more file systems. An integrated data catalog stores metadata that describes the data format, labels lineage, and more. Data connectors provide the means to access all the data types in the data lakehouse.
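The catalog's role can be sketched as a registry that records each asset's format and lineage so consumers know how to read it and where it came from. The field names, paths, and asset names below are assumptions for illustration, not a real catalog schema:

```python
# Hypothetical lakehouse data catalog: one metadata entry per asset.
catalog = {}

def register(name, path, fmt, derived_from=None):
    """Record an asset's location, format, and upstream lineage."""
    catalog[name] = {
        "path": path,
        "format": fmt,
        "lineage": derived_from or [],
    }

register("clicks_raw", "s3://lake/clicks.json", "json")
register("clicks_daily", "warehouse://clicks_daily", "table",
         derived_from=["clicks_raw"])

# A consumer can trace a derived table back to its raw source.
print(catalog["clicks_daily"]["lineage"])  # ['clicks_raw']
```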

Data Integration Functions

Below are essential capabilities of data integration technology:

Data Connectors

Data lakes store a multitude of data types and file formats, so the corresponding data integration solution needs connectors that cover all of the required formats. Open Database Connectivity (ODBC) provides an open application programming interface (API) for simple formats, while Spark connects to the more complex data formats used by Hadoop file systems. The ideal integration technology should also provide the ability to build custom connectors when needed. Actian DataConnect supports hundreds of connectors and provides a universal connector for building connections to home-grown applications.
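One common way to support many formats behind one interface is a connector abstraction: each format implements the same `read` contract, and new formats slot in as custom connectors. The class names below are illustrative, not a real product API:

```python
from abc import ABC, abstractmethod
import csv
import io
import json

class Connector(ABC):
    """Common contract: every connector yields records as dicts."""
    @abstractmethod
    def read(self, fileobj):
        ...

class CsvConnector(Connector):
    def read(self, fileobj):
        yield from csv.DictReader(fileobj)

class JsonLinesConnector(Connector):
    def read(self, fileobj):
        for line in fileobj:
            yield json.loads(line)

# A registry maps format names to connectors; a custom connector
# is just another Connector subclass added to this mapping.
connectors = {"csv": CsvConnector(), "jsonl": JsonLinesConnector()}

rows = list(connectors["csv"].read(io.StringIO("a,b\n1,2\n")))
print(rows)  # [{'a': '1', 'b': '2'}]
```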

Data Pipeline Orchestration

Actian DataConnect and KNIME offer visual workflow design tools for constructing data flows to move the data from the data lake to the target analytic system. Actian DataFlow plugs into KNIME to provide data transformation and analysis functions that can operate as multithreaded parallel operations to reduce execution times.
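The multithreaded parallel transforms mentioned above can be sketched with the standard library's thread pool: records are transformed concurrently while output order is preserved. The transform itself (a hypothetical 20% uplift) is purely illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def transform(record):
    # Illustrative per-record transformation step.
    return {"id": record["id"], "amount": round(record["amount"] * 1.2, 2)}

records = [{"id": i, "amount": float(i)} for i in range(4)]

# pool.map fans the transform out across worker threads but
# returns results in input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(transform, records))

print(results[1])  # {'id': 1, 'amount': 1.2}
```

Engines like DataFlow apply the same idea at much larger scale, across processes and nodes rather than threads.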

Scheduling

Integration solutions should provide a centralized view of all data pipelines, allowing IT to schedule and pause data movements.
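A centralized scheduling view can be reduced to a registry of pipelines plus pause/resume controls; the pipeline name and registry structure below are assumptions for illustration:

```python
# Central registry: one entry per data pipeline IT can see and control.
pipelines = {"lake_to_dw": {"paused": False, "runs": 0}}

def pause(name):
    pipelines[name]["paused"] = True

def run_due(name):
    """Run a scheduled pipeline unless an operator has paused it."""
    p = pipelines[name]
    if p["paused"]:
        return "skipped"
    p["runs"] += 1
    return "ran"

print(run_due("lake_to_dw"))  # ran
pause("lake_to_dw")
print(run_due("lake_to_dw"))  # skipped
```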

Central Management

Integration solutions can monitor integrations, log exceptions, handle retries, and alert IT about failures.
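The monitor-log-retry-alert loop can be sketched as a retry wrapper around an integration step: each failure is logged, and an alert fires only after the final attempt. The `flaky_step` function is a stand-in for a real integration task:

```python
import logging

logging.basicConfig(level=logging.WARNING)

def run_with_retries(step, attempts=3):
    """Run an integration step, logging each failure and
    escalating to an alert after the last attempt."""
    for attempt in range(1, attempts + 1):
        try:
            return step()
        except Exception as exc:
            logging.warning("attempt %d failed: %s", attempt, exc)
    logging.error("step failed after %d attempts; alerting IT", attempts)
    return None

calls = {"n": 0}
def flaky_step():
    # Illustrative step that succeeds on the second try.
    calls["n"] += 1
    if calls["n"] < 2:
        raise ConnectionError("source unavailable")
    return "loaded"

print(run_with_retries(flaky_step))  # loaded
```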

Flexible Deployment

Data lakes can reside on-premises and on cloud platforms. A hybrid integration solution provides the most deployment flexibility.

Benefits of Cloud-Based Data Integration

The benefits of using a data integration solution with a data lake include:

  • Makes data assets in the data lake easy to prepare for analysis.
  • Provides ready-built connectors to hundreds of file formats, application APIs, and streamed data managers.
  • Simplifies management of data pipelines through centralized monitoring and administration.
  • Reduces administration costs through reusable scripts and centralized visibility of data movements.

The data lakehouse architecture offers further benefits, such as providing a metadata catalog that describes formats, lineage, and how different data sets interrelate.

How Actian Enables Data Lake Integration

The Actian Data Platform makes it easier to create high-performance data lakes with data integration. The platform uses a built-in columnar, vectorized database that provides data warehouse capabilities with a fraction of the administration overhead.

The Actian Data Platform can use multiple cloud platforms, including AWS, Azure Cloud, and Google Cloud, along with deployments on-premises and in hybrid environments. The Actian Vector analytic database can access data stored in file systems using its Spark connector, which also supports Hadoop ORC and Parquet formats. Multiple distributed database instances can be accessed using a single distributed SQL query.

Built-in data integration based on Actian DataConnect can profile data, automate data preparation steps, and support streamed data sources. File systems supported by the Actian Data Platform include AWS S3 buckets, Google Drive folders, and Azure Blob storage.