Data Pipeline

A data pipeline is a set of processing steps that move data from a source to a destination system. The steps are sequential because the output of one step becomes the input of the next. The data processing within each step can be done in parallel to reduce processing time. The first step is typically ingestion, and the final step is usually an insert or load into a data analytics database.
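
To make this structure concrete, here is a minimal sketch in Python. The ingest, normalize, and load functions and the sample records are hypothetical stand-ins, not a specific product API: each step's output feeds the next step, while the per-record work inside the normalization step is fanned out across a thread pool.

    # Minimal sketch of a sequential pipeline with parallelism inside a step.
    # All functions and sample data below are illustrative placeholders.
    from concurrent.futures import ThreadPoolExecutor

    def ingest():
        # Hypothetical source: in practice this reads from an API, file, or queue.
        return [{"id": 2, "state": "new york"}, {"id": 1, "state": "california"}]

    def normalize(record):
        # Per-record work inside one step can run in parallel.
        record["state"] = record["state"].strip().title()
        return record

    def load(records):
        # Final step: insert into the analytics database (stubbed with print).
        for r in sorted(records, key=lambda r: r["id"]):
            print("LOAD", r)

    def run_pipeline():
        raw = ingest()                                   # step 1: ingestion
        with ThreadPoolExecutor(max_workers=4) as pool:  # parallelism within step 2
            cleaned = list(pool.map(normalize, raw))
        load(cleaned)                                    # step 3: load

    if __name__ == "__main__":
        run_pipeline()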

Data pipelines control the flow of data as a well-defined process that supports data governance. They also create opportunities for reuse when building future pipelines. Reusable components can be refined over time, resulting in faster deployment and improved reliability. Data pipelines allow the entire data flow to be instrumented and centrally monitored to reduce management overheads.

Data Pipeline Example

Data pipeline steps will vary based on the data type and the tools used. A representative sequence, from identifying suitable sources through loading the results, is listed below:

  • Data Identification – Data catalogs help identify potential data sources for the required analysis. In general, the pipeline is used to populate a specific data warehouse, such as a customer data platform for which the data sources are well known. Data catalogs also contain metadata about the data’s quality and trustworthiness, which can be used as selection criteria.
  • Profiling – Profiling helps to understand data formats and generate appropriate scripts for data ingestion. Raw data must sometimes be exported to a comma-delimited format when direct access to the source is challenging.
  • Data Ingestion – Data sources can include operational systems, web clicks, social media posts, and log files. Data integration technology can provide predefined connectors and batch and streaming APIs. Semi-structured sources may require special handling of streamed JSON or XML record formats. Ingestion can happen in batches, in micro-batches as records are created, or as continuous streams.
  • Normalization – Duplicates can be filtered out, and gaps filled with default or calculated values. Data can be sorted into primary key order, later becoming the natural key for a columnar database table. Outliers and null values can be addressed in this step.
  • Formatting – Data has to be made consistent by applying a uniform format. A common challenge is how U.S. states are written: spelled out in full or abbreviated to a two-letter code.
  • Merging – Multiple files may be needed to construct a single record. Any clashes must be managed during the data merge and reconciliation step; the first sketch after this list illustrates the normalization, formatting, and merging steps.
  • Loading – The analytics repository or database is the usual target for this final data pipeline step. Parallel loaders can be used to load data as multiple streams. The input file must be split before a parallel load to avoid a single file becoming a performance bottleneck; the second sketch after this list shows this chunked approach. Adequate CPU cores must be allocated to the load to maximize throughput and reduce the total elapsed time of the load operation.
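
The first sketch below illustrates the normalization, formatting, and merging steps in plain Python. The field names, the small U.S. state lookup, and the merge of customer records with order records are illustrative assumptions rather than a prescribed schema.

    # Normalization, formatting, and merging sketch (hypothetical fields).
    STATE_CODES = {"california": "CA", "new york": "NY", "texas": "TX"}

    def normalize(records):
        # Drop duplicate primary keys and fill gaps with default values.
        seen, out = set(), []
        for r in records:
            if r["id"] in seen:
                continue
            seen.add(r["id"])
            r.setdefault("country", "US")          # default value generation
            out.append(r)
        return sorted(out, key=lambda r: r["id"])  # sort into primary key order

    def format_record(record):
        # Make U.S. states consistent: spelled-out names become two-letter codes.
        value = str(record.get("state", "")).strip()
        record["state"] = STATE_CODES.get(value.lower(), value.upper())
        return record

    def merge(customers, orders):
        # Combine two sources into a single record per customer id;
        # clashes are resolved by keeping the customer record's values.
        merged = {c["id"]: dict(c) for c in customers}
        for o in orders:
            target = merged.setdefault(o["customer_id"], {"id": o["customer_id"]})
            target.setdefault("order_ids", []).append(o["order_id"])
        return list(merged.values())

Running normalize, then format_record on each row, then merge yields one deduplicated, consistently formatted record per customer, ready for the load step.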
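
The second sketch outlines the chunked, parallel load described in the loading step. The chunk size, worker count, and the placeholder load_chunk routine are assumptions; a real loader would issue bulk inserts or a database-specific bulk-load command.

    # Parallel load sketch: split the input so one file is not a bottleneck,
    # then load the chunks concurrently across worker processes.
    import csv
    from concurrent.futures import ProcessPoolExecutor
    from itertools import islice

    def read_chunks(path, chunk_size=10_000):
        # Yield the CSV file in fixed-size chunks of rows.
        with open(path, newline="") as f:
            reader = csv.DictReader(f)
            while True:
                chunk = list(islice(reader, chunk_size))
                if not chunk:
                    return
                yield chunk

    def load_chunk(rows):
        # Placeholder loader: returns the number of rows it would insert.
        return len(rows)

    def parallel_load(path, workers=4):
        # One worker per allocated CPU core keeps the load at full throughput.
        with ProcessPoolExecutor(max_workers=workers) as pool:
            total = sum(pool.map(load_chunk, read_chunks(path)))
        print(f"loaded {total} rows from {path}")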

Essentials for a Robust Data Pipeline

Below are some desirable characteristics of the technology platform that the data pipeline uses:

    1. Hybrid deployment spanning on-premises and cloud environments.
    2. Works with change data capture (CDC) tools to stay synchronized with data sources.
    3. Multiple cloud provider support.
    4. Support for legacy big data file formats, such as those used in Hadoop.
    5. Data integration technology includes connectors to popular data sources.
    6. Monitoring tools to view and execute data pipeline steps.
    7. Parallel processing within each step in the pipeline.
    8. Data profiling technology to construct pipeline workflows faster.
    9. ETL and ELT capabilities so data can be manipulated inside and outside the target data warehouse.
    10. Data transformation functions.
    11. Default value generation.
    12. Exception handling for failed processes.
    13. Data integrity checking to validate completeness at the end of each step; a sketch covering this and exception handling follows this list.
    14. Graphical tools to construct pipelines.
    15. Ease of maintenance.
    16. Encryption of data at rest and in flight.
    17. Data masking for compliance.
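
As a rough illustration of items 12 and 13, the sketch below wraps a pipeline step with retries, logging, and a simple row-count completeness check. The retry count, backoff, and the check itself are assumptions, not a specific product feature.

    # Exception handling and integrity checking between pipeline steps.
    import logging
    import time

    def run_step(step_fn, rows, retries=3):
        # Run one step with retry on failure and a completeness check.
        for attempt in range(1, retries + 1):
            try:
                out = step_fn(rows)
                # Integrity check: non-empty input should not silently produce
                # an empty output unless the step is meant to filter everything.
                if rows and not out:
                    raise ValueError(f"{step_fn.__name__} returned no rows")
                logging.info("%s: %d rows in, %d rows out",
                             step_fn.__name__, len(rows), len(out))
                return out
            except Exception:
                logging.exception("%s failed on attempt %d", step_fn.__name__, attempt)
                time.sleep(2 ** attempt)    # back off before retrying
        raise RuntimeError(f"{step_fn.__name__} failed after {retries} attempts")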

Benefits of Using Data Pipelines

Some of the benefits of using a data pipeline include the following:

  • Pipelines promote component reuse and stepwise refinement.
  • Pipelines allow the end-to-end process to be instrumented, monitored, and managed, so failed steps can be alerted on, mitigated, and retried.
  • Reuse accelerates pipeline development and test times.
  • Data source utilization can be monitored so that unused data can be retired.
  • Data usage, as well as its consumers, can be cataloged.
  • Future data integration projects can assess existing pipelines for reuse in bus- or hub-based integration architectures.
  • Data pipelines promote data quality and data governance.
  • Robust data pipelines lead to better-informed decisions.

Data Pipelines in Actian

The Actian Data Platform includes sophisticated data integration technology for building data pipelines. The included connectors can access hundreds of data sources. A graphical designer studio lets you lay out data pipelines that connect, profile, transform, and load data. The Actian Data Platform uses a columnar database to deliver answers quickly without the need to pre-create and maintain indexes for optimal query speed.

Actian is cluster-aware and operates on-premises and across multiple public cloud platforms, including Google Cloud, Azure, and AWS.