This is a workflow to build DNA sequence-based CNN models from genomic DNA sequences that predict MPRA activity, e.g., log(RNA/DNA) values.
The workflow is organized into the following folders:
sequence_cnn_models/
├── config (contains workflow config files and sample files)
├── docs (for documentation)
├── resources (demo data, reference files, etc.)
└── workflow (snakemake workflow and scripts)
Code is organized in the respective subfolders, i.e. `scripts`, `rules`, and `envs` (inside `workflow/`). The workflow entry point is `workflow/Snakefile`, and the main configuration is in the `config/config.yml` file. Please review this file and adjust parameters accordingly.
- Max Schubach (@visze), Berlin Institute of Health (BIH), Computational Genome Biology
- Pyaree Mohan Dash (@vpyareedash), Berlin Institute of Health (BIH), Computational Genome Biology
If you use this workflow in a paper, don't forget to give credit to the authors by citing the URL of this (original) repository and, if available, its DOI (see above).
Snakemake manages software dependencies automatically via conda; please update the environment files in `workflow/envs/` accordingly.
Please download a copy of the Snakemake Wrappers into the `resources/` directory or update the `config.yml` file accordingly.
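One common way to obtain them is to clone the public snakemake-wrappers repository; the target path below is just an example that matches the `wrapper_directory` setting used later in this README:

```bash
# Example: clone the Snakemake wrappers repository into resources/
# (adjust the target path if your config.yml points elsewhere)
git clone https://github.com/snakemake/snakemake-wrappers.git resources/snakemake_wrappers
```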
The workflow also requires a reference genome in (1) `.genome` format and (2) `.fasta` format. Please place the reference genome files in the `resources/` directory or update the `config.yml` file accordingly.
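The `.genome` file is a two-column, tab-separated list of sequence names and lengths. If you only have the FASTA, here is a minimal sketch for deriving it, assuming `samtools` is installed and the paths match the demo config below:

```bash
# Index the FASTA; the .fai index holds the sequence name and length
# in its first two columns
samtools faidx resources/example.fa
# Keep name and length only to create the .genome file
cut -f1,2 resources/example.fa.fai > resources/example.fa.genome
```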
- Create a new GitHub repository using this workflow as a template.
- Clone the newly created repository to your local system, into the place where you want to perform the data analysis.
Configure the workflow according to your needs by editing the files in the `config/` folder. Adjust `config.yml` to configure the workflow execution, and `samples.tsv` to specify your sample setup.
Install Snakemake using conda:
conda create -c bioconda -c conda-forge -n snakemake snakemake
For installation details, see the instructions in the Snakemake documentation. A typical installation takes ~5 minutes on a normal desktop computer.
Activate the conda environment:
conda activate snakemake
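You can verify that the environment works by printing the Snakemake version:

```bash
snakemake --version
```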
Test your configuration by performing a dry-run via
snakemake --use-conda -n
Execute the workflow locally via
snakemake --use-conda --cores $N
using `$N` cores, or run it in a cluster environment via
snakemake --use-conda --cluster qsub --jobs 100
or
snakemake --use-conda --drmaa --jobs 100
If you not only want to fix the software stack but also the underlying OS, use
snakemake --use-conda --use-singularity
in combination with any of the modes above. See the Snakemake documentation for further details.
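For example, a local run that also uses Singularity for the OS layer could look like this (the same flags as above, combined):

```bash
# Local execution with conda environments inside Singularity containers
snakemake --use-conda --use-singularity --cores $N
```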
After successful execution, you can create a self-contained interactive HTML report with all results via:
snakemake --report report.html
This report can, e.g., be forwarded to your collaborators. An example (using some trivial test data) can be seen here.
Whenever you change something, don't forget to commit the changes back to your github copy of the repository:
git commit -a
git push
Whenever you want to synchronize your workflow copy with new developments from upstream, do the following.
- Once, register the upstream repository in your local copy: `git remote add -f upstream git@github.com:snakemake-workflows/sequence_cnn_models.git`, or `git remote add -f upstream https://github.com/snakemake-workflows/sequence_cnn_models.git` if you have not set up SSH keys.
- Update the upstream version: `git fetch upstream`.
- Create a diff with the current version: `git diff HEAD upstream/master workflow > upstream-changes.diff`.
- Investigate the changes: `vim upstream-changes.diff`.
- Apply the modified diff via: `git apply upstream-changes.diff`.
- Carefully check whether you need to update the config files: `git diff HEAD upstream/master config`. If so, do it manually and only where necessary, since you would otherwise likely overwrite your settings and samples.
In case you have also changed or added steps, please consider contributing them back to the original repository:
- Fork the original repo to a personal or lab account.
- Clone the fork to your local system, to a different place than where you ran your analysis.
- Copy the modified files from your analysis to the clone of your fork, e.g., `cp -r workflow path/to/fork`. Make sure not to accidentally copy config file contents or sample sheets. Instead, manually update the example config files if necessary.
- Commit and push your changes to your fork.
- Create a pull request against the original repository.
Test cases are in the subfolder `.test`. They are automatically executed via continuous integration with GitHub Actions.
Let's try to run the workflow with the demo data provided in the `resources/` directory (i.e., `resources/demo/`).
The input files in the `resources/demo/` directory are as follows:
1. `resources/demo/example_sequences.fa` - contains the DNA sequences in FASTA format
2. `resources/demo/example_labels.tsv` - contains 3 columns, i.e., 'BIN', 'ID', and 'MEAN' (value1*), in TSV format.
The 'BIN' column contains the bin number (1-10) and the 'ID' column contains the sequence ID, which must match the sequence ID in the FASTA file. The 'MEAN' column contains the mean log(RNA/DNA) value. Adding more columns is possible; each additional value column turns the training into a multi-task learning setup (a hypothetical sketch of the file layout is shown below).
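A purely hypothetical sketch of the labels file layout (tab-separated; the sequence IDs and values are made up, only the column structure follows the description above):

```
BIN	ID	MEAN
1	seq_0001	-0.53
2	seq_0002	1.27
3	seq_0003	0.04
```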
- Add the path of the Snakemake wrappers directory as follows:
...
wrapper_directory: resources/snakemake_wrappers
...
- Please add the paths of the reference genome files as follows:
...
reference:
genome: resources/example.fa.genome # genome file .genome
fasta: resources/example.fa # genome file .fa
...
- Add the path of the input files as follows:
...
input:
fasta: resources/demo/example_sequences.fa
labels: resources/demo/example_labels.tsv
...
Run as follows (if the device has a GPU, Snakemake will automatically detect it and run the workflow on the GPU):
snakemake --snakefile workflow/Snakefile --configfile config/config.yml -c 1 --use-conda -p
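Before the full run, it can help to dry-run the demo with the same flags plus `-n`:

```bash
# Dry-run: list the jobs that would be executed for the demo data
snakemake --snakefile workflow/Snakefile --configfile config/config.yml -c 1 --use-conda -n
```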
Typical runtime on a non-GPU device is ~1 hour and 30 minutes (~1 hour 15 minutes on a GPU-enabled device).
A successful run ends up with the following output:
...
Finished job 0.
n of n steps (100%) done
Complete log: .snakemake/log/20XX-XX-27T155007.853000.snakemake.log
The output files are now in the `results/` directory.
sequence_cnn_models/results/
├── correlation
│   └── regression.MEAN.tsv.gz (correlation between predicted and observed values)
├── predictions
│   ├── finalConcat.labels.cleaned.tsv.gz
│   └── ...
├── regression_input (Train, test and validation input files used for training)
└── training (Performance of fitted models, log.tsv files, *model.json*, *model.h5*, etc.)
The file `results/predictions/finalConcat.labels.cleaned.tsv.gz` contains the predicted values for the input sequences. The file `results/correlation/regression.MEAN.tsv.gz` contains the correlation between predicted and observed values.
All models are saved in the `results/training/` directory as `.json` and `.h5` files.
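For a quick look at the main result files from the command line (paths as listed above; `zcat` decompresses to stdout):

```bash
# Peek at the predicted values for the input sequences
zcat results/predictions/finalConcat.labels.cleaned.tsv.gz | head
# Show the correlation between predicted and observed values
zcat results/correlation/regression.MEAN.tsv.gz
# List the saved model files (*.json and *.h5)
ls results/training/
```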
To run the workflow with the best model of MPRAnn (introduced in Agarwal et al., 2023), please download the model files from Zenodo and update the `config.yml` file accordingly, or use the `config/config_mprann.yml` file.
Note: Please modify the cell type names depending on the model or model files used.
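Assuming the downloaded model files are referenced correctly in `config/config_mprann.yml`, the demo invocation from above can be reused with that config file:

```bash
# Same command as above, but using the MPRAnn configuration
snakemake --snakefile workflow/Snakefile --configfile config/config_mprann.yml -c 1 --use-conda -p
```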