Data pipeline
The Open Targets data pipeline is a complex process orchestrated in Apache Airflow, and it is divided into data acquisition, transformation, and data output.
The data pipeline is composed of multiple elements:
Data and evidence generation processes
Input stage
Transformation stage and ETL processes
Output stage
Gentropy-specific processes
Orchestration
Data and evidence generation processes

curation — Open Targets curation repository
evidence_datasource_parsers — internal pipelines used to generate evidence
json_schema — evidence object schema used for evidence and association scoring
OnToma — Python module to map disease or phenotype terms to EFO
Input stage
The Platform ETL (“extract, transform, and load”) and the Genetics ETL were separate processes before, but they are now merged into a single pipeline. This means that the data produced for both the Genetics ETL and the Platform are released at the same time. Herein, we refer to this joint pipeline as the "unified pipeline".

The unified pipeline uses many inputs, such as Open Targets related data and the data needed to run the Genetics ETL; a schematic example follows below.
otter — Open Targets' Task ExecutoR, i.e. scripts that process and prepare data for our ETL pipelines
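Purely as an illustration of what the input stage consumes (every source name and URI below is hypothetical, not taken from the actual configuration, which lives in the orchestration repository), an input manifest might be sketched as:

```python
# Hypothetical input manifest for the acquisition stage. All source names
# and URIs are illustrative only.
INPUT_SOURCES: dict[str, str] = {
    # Open Targets related data
    "evidence": "gs://example-bucket/inputs/evidence/",
    # Data needed to run the Genetics ETL
    "gwas_summary_statistics": "gs://example-bucket/inputs/gwas/",
}

def stage_inputs(sources: dict[str, str]) -> None:
    """Report each input that would be fetched into staging before the ETL runs."""
    for name, uri in sources.items():
        print(f"Staging {name} from {uri}")

if __name__ == "__main__":
    stage_inputs(INPUT_SOURCES)
```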
Transformation stage and ETL processes

platform-etl-backend: ETL pipelines to generate associations, evidence, and entity indices
platform-etl-openfda-faers: ETL pipeline to process Open FDA adverse events data
platform-etl-literature: ETL pipeline to generate similar entities and publications
Output stage

platform-infrastructure: scripts for infrastructure tasks and generating a Platform release

Gentropy-specific processes

gentropy — Open Targets' genomics toolkit

See the Gentropy documentation for more information on the Gentropy pipelines.

Orchestration

orchestration — Open Targets data pipelines orchestrator

The orchestration runs on Apache Airflow, with Google Cloud as the cloud resource provider. The orchestration logic is built from steps; combinations of steps form directed acyclic graphs (DAGs), as sketched below.

See the detailed orchestration documentation.
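As a minimal sketch of this step/DAG pattern (the DAG id and task names here are hypothetical, not the pipeline's actual steps), an Airflow DAG chaining the three stages could look like this:

```python
# Minimal, hypothetical Airflow DAG illustrating how steps combine into a
# directed acyclic graph; all names are illustrative, not the real pipeline's.
from datetime import datetime

from airflow import DAG
from airflow.operators.empty import EmptyOperator

with DAG(
    dag_id="unified_pipeline_sketch",
    start_date=datetime(2024, 1, 1),
    schedule=None,  # a release run would be triggered explicitly
    catchup=False,
) as dag:
    # Each step is a node in the graph.
    acquisition = EmptyOperator(task_id="data_acquisition")
    transformation = EmptyOperator(task_id="transformation")
    output = EmptyOperator(task_id="data_output")

    # Dependencies define the acyclic edges: acquisition runs first,
    # then transformation, then the data output step.
    acquisition >> transformation >> output
```

Airflow resolves these dependencies at scheduling time, so independent steps at the same level of the graph can run in parallel.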
If you have further questions, please get in touch with us on the Open Targets Community.