Evo 2 is a state-of-the-art DNA language model for long-context modeling and design. It models DNA sequences at single-nucleotide resolution with a context length of up to 1 million base pairs, using the StripedHyena 2 architecture. Evo 2 was pretrained using Savanna and trained autoregressively on OpenGenome2, a dataset containing 8.8 trillion tokens from all domains of life.
We describe Evo 2 in the preprint: "Genome modeling and design across all domains of life with Evo 2".
Evo 2 is based on StripedHyena 2. A CUDA-capable system is required to build and install the prerequisites. Evo 2 uses FlashAttention, which may not work on all GPU architectures. Please consult the FlashAttention GitHub repository for the current list of supported GPUs.
Please clone and install from GitHub. We recommend using a conda environment with PyTorch. Requires Python >= 3.11.
```bash
git clone --recurse-submodules [email protected]:ArcInstitute/evo2.git
cd evo2
pip install .
```
If this does not work for whatever reason, you can also install from Vortex and follow the instructions there. PyPI support is coming soon!
You can verify that the installation was successful by running a test:

```bash
python ./test/test_evo2.py --model_name evo2_7b
```
We provide the following model checkpoints, hosted on HuggingFace:
| Checkpoint Name | Description |
|---|---|
| `evo2_40b` | A model pretrained with 1 million context, obtained through context extension of `evo2_40b_base`. |
| `evo2_7b` | A model pretrained with 1 million context, obtained through context extension of `evo2_7b_base`. |
| `evo2_40b_base` | A model pretrained with 8192 context length. |
| `evo2_7b_base` | A model pretrained with 8192 context length. |
| `evo2_1b_base` | A smaller model pretrained with 8192 context length. |
To use Evo 2 40B, you will need multiple GPUs. Vortex automatically handles device placement, splitting the model across available CUDA devices.
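For example, loading the 40B checkpoint looks the same as loading a smaller one; a minimal sketch, assuming a node with enough combined GPU memory:

```python
from evo2 import Evo2

# Vortex shards the 40B weights across all visible CUDA devices automatically;
# no manual device placement is needed.
evo2_model = Evo2('evo2_40b')
```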
Below are simple examples of how to download Evo 2 and use it locally using Python.
Evo 2 can be used to score the likelihoods across a DNA sequence.
```python
import torch
from evo2 import Evo2

evo2_model = Evo2('evo2_7b')

sequence = 'ACGT'
input_ids = torch.tensor(
    evo2_model.tokenizer.tokenize(sequence),
    dtype=torch.int,
).unsqueeze(0).to('cuda:0')

outputs, _ = evo2_model(input_ids)
logits = outputs[0]

print('Logits: ', logits)
print('Shape (batch, length, vocab): ', logits.shape)
```
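The model returns next-token logits, so per-nucleotide log-likelihoods can be read off by aligning the logits at position i with the observed token at position i + 1. This is a minimal sketch of that standard autoregressive scoring step, not an official Evo 2 helper; it continues from the snippet above:

```python
import torch.nn.functional as F

# Logits at position i predict the token at position i + 1, so drop the last
# logit and shift the targets left by one before gathering.
log_probs = F.log_softmax(logits[:, :-1].float(), dim=-1)
targets = input_ids[:, 1:].long()
token_ll = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)

print('Per-nucleotide log-likelihoods: ', token_ll)
print('Sequence log-likelihood: ', token_ll.sum().item())
```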
Evo 2 embeddings can be saved for use downstream.
```python
import torch
from evo2 import Evo2

evo2_model = Evo2('evo2_7b')

sequence = 'ACGT'
input_ids = torch.tensor(
    evo2_model.tokenizer.tokenize(sequence),
    dtype=torch.int,
).unsqueeze(0).to('cuda:0')

layer_name = 'blocks.28.mlp.l3'

outputs, embeddings = evo2_model(input_ids, return_embeddings=True, layer_names=[layer_name])

print('Embeddings shape: ', embeddings[layer_name].shape)
```
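Continuing from the snippet above, persisting the embeddings for downstream work takes a few lines; the mean-pooling step and the output file name are illustrative choices, not part of the Evo 2 API:

```python
# Move the embeddings off the GPU and save both per-token and pooled forms,
# e.g. as features for a small downstream classifier.
emb = embeddings[layer_name].float().cpu()  # (batch, length, hidden)
pooled = emb.mean(dim=1)                    # one vector per sequence
torch.save({'per_token': emb, 'pooled': pooled}, 'evo2_embeddings.pt')
```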
Evo 2 can generate DNA sequences based on prompts.
```python
from evo2 import Evo2

evo2_model = Evo2('evo2_7b')

output = evo2_model.generate(prompt_seqs=["ACGT"], n_tokens=400, temperature=1.0, top_k=4)

print(output.sequences[0])
```
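Since `prompt_seqs` takes a list, several prompts can presumably be sampled in one call; a sketch, with illustrative prompts and sampling settings:

```python
# Generate completions for multiple prompts in a single batch.
output = evo2_model.generate(
    prompt_seqs=["ACGT", "TTAGGG"],
    n_tokens=400,
    temperature=0.7,  # lower temperature gives more conservative samples
    top_k=4,
)
for seq in output.sequences:
    print(seq)
```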
We provide an example notebook of zero-shot BRCA1 variant effect prediction, with a minimal scoring sketch after the list below. This example includes a walkthrough of:
- Performing zero-shot BRCA1 variant effect predictions using Evo 2
- Reference vs alternative allele normalization
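The core scoring idea can be sketched in a few lines: score the reference and the alternative sequence with Evo 2 and compare their log-likelihoods. The `sequence_log_likelihood` helper and the toy sequences below are illustrative, not part of the Evo 2 API (window extraction and normalization are what the notebook actually walks through); the sketch reuses the `evo2_model` loaded earlier:

```python
import torch
import torch.nn.functional as F

def sequence_log_likelihood(model, sequence):
    """Sum of per-nucleotide log-likelihoods under the model (illustrative helper)."""
    input_ids = torch.tensor(
        model.tokenizer.tokenize(sequence), dtype=torch.int,
    ).unsqueeze(0).to('cuda:0')
    outputs, _ = model(input_ids)
    log_probs = F.log_softmax(outputs[0][:, :-1].float(), dim=-1)
    targets = input_ids[:, 1:].long()
    return log_probs.gather(-1, targets.unsqueeze(-1)).sum().item()

# Zero-shot variant effect score: a more negative delta means the model finds
# the variant-carrying sequence less likely than the reference.
ref_seq = "ACGTACGTTAGC"  # toy window of the reference genome around a variant
alt_seq = "ACGTACATTAGC"  # same window carrying the alternative allele
delta = sequence_log_likelihood(evo2_model, alt_seq) - sequence_log_likelihood(evo2_model, ref_seq)
print('Delta log-likelihood (alt - ref): ', delta)
```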
We are actively working on optimizing performance for long-sequence processing. Vortex can currently compute over very long sequences via teacher prompting. However, please note that the forward pass on long sequences may currently be slow.
The OpenGenome2 dataset used for pretraining Evo 2 is available on HuggingFace. Data is available either as raw FASTA files or as JSONL files that include preprocessing and data augmentation.
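As a sketch, the JSONL files can be streamed with the `datasets` library; the `arcinstitute/opengenome2` repository ID and the `data_files` pattern below are assumptions, so check the dataset card for the actual layout:

```python
from datasets import load_dataset

# Stream records instead of downloading the full dataset up front.
ds = load_dataset(
    "arcinstitute/opengenome2",    # assumed repo ID; see the dataset card
    data_files="json/*.jsonl.gz",  # hypothetical file pattern
    split="train",
    streaming=True,
)
print(next(iter(ds)))
```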
Evo 2 was trained using Savanna, an open source framework for training alternative architectures.
If you find these models useful for your research, please cite the paper:
```bibtex
@article{brixi2025genome,
title = {Genome modeling and design across all domains of life with Evo 2},
author = {Brixi, Garyk and Durrant, Matthew G. and Ku, Jerome and Poli, Michael and Brockman, Greg and Chang, Daniel and Gonzalez, Gabriel A. and King, Samuel H. and Li, David B. and Merchant, Aditi T. and Naghipourfar, Mohsen and Nguyen, Eric and Ricci-Tam, Chiara and Romero, David W. and Sun, Gwanggyu and Taghibakshi, Ali and Vorontsov, Anton and Yang, Brandon and Deng, Myra and Gorton, Liv and Nguyen, Nam and Wang, Nicholas K. and Adams, Etowah and Baccus, Stephen A. and Dillmann, Steven and Ermon, Stefano and Guo, Daniel and Ilango, Rajesh and Janik, Ken and Lu, Amy X. and Mehta, Reshma and Mofrad, Mohammad R.K. and Ng, Madelena Y. and Pannu, Jaspreet and Ré, Christopher and Schmok, Jonathan C. and St. John, John and Sullivan, Jeremy and Zhu, Kevin and Zynda, Greg and Balsam, Daniel and Collison, Patrick and Costa, Anthony B. and Hernandez-Boussard, Tina and Ho, Eric and Liu, Ming-Yu and McGrath, Thomas and Powell, Kimberly and Burke, Dave P. and Goodarzi, Hani and Hsu, Patrick D. and Hie, Brian L.},
journal = {Arc Institute Manuscripts},
year = {2025},
url = {https://arcinstitute.org/manuscripts/Evo2}
}
```