Data pipeline

The Open Targets data pipeline is a complex process orchestrated in Apache Airflow, and it is divided into data acquisition, transformation, and data output.

Introduction

The data pipeline is composed of multiple elements:

  1. Data and evidence generation processes

  2. Input stage

  3. Transformation stage and ETL processes

  4. Output stage

  5. Gentropy-specific processes

  6. Orchestration

GitHub repositories

Data and evidence

  • curation — Open Targets curation repository

  • evidence_datasource_parsers — internal pipelines used to generate evidence

  • json_schema — evidence object schema used for evidence and association scoring

  • OnToma — Python module to map disease or phenotype terms to EFO

Gentropy

  • gentropy — Open Targets' genomics toolkit

See here for more info on the Gentropy pipelines.

Orchestration

See detailed orchestration documentation here.

The Platform ETL (“extract, transform, and load”) and the Genetics ETL were previously separate processes, but they have now been merged into a single pipeline. This means that the data produced for both the Genetics ETL and the Platform are released at the same time. Herein, we refer to this joint pipeline as the "unified pipeline".

The orchestration runs on Apache Airflow, with Google Cloud as the cloud resource provider. The orchestration logic is organised around steps; combinations of steps form directed acyclic graphs (DAGs).
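To make the relationship between steps and DAGs concrete, below is a minimal, hypothetical Airflow sketch. The DAG id, task names, and schedule are illustrative placeholders only and do not correspond to the actual unified-pipeline definitions.

```python
# Minimal, hypothetical sketch of how steps chain into a directed acyclic
# graph in Airflow. Step and DAG names are placeholders, not the real
# unified-pipeline steps.
from datetime import datetime

from airflow import DAG
from airflow.operators.empty import EmptyOperator

with DAG(
    dag_id="unified_pipeline_example",   # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule=None,                       # e.g. triggered manually per release
    catchup=False,
) as dag:
    input_stage = EmptyOperator(task_id="input_stage")
    transform_stage = EmptyOperator(task_id="transform_stage")
    output_stage = EmptyOperator(task_id="output_stage")

    # Dependencies define the acyclic ordering of the steps.
    input_stage >> transform_stage >> output_stage
```

In the real orchestration, each task would invoke the corresponding pipeline step rather than an empty operator; the point here is only that ordered step dependencies are what form the DAG.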

Schematic overview of Open Targets pipelines

The unified pipeline uses many static assets (link), such as Open Targets-related data and the data needed to run the Genetics ETL.

Unified pipeline

If you have further questions, please get in touch with us on the Open Targets Community.
