openxla/xla


XLA

XLA (Accelerated Linear Algebra) is an open-source machine learning (ML) compiler for GPUs, CPUs, and ML accelerators.

OpenXLA Ecosystem

The XLA compiler takes models from popular ML frameworks such as PyTorch, TensorFlow, and JAX, and optimizes them for high-performance execution across different hardware platforms including GPUs, CPUs, and ML accelerators.

openxla.org is the project's website.

Get started

If you want to use XLA to compile your ML project, refer to the XLA documentation for your ML framework: PyTorch, TensorFlow, or JAX.
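As a quick taste of what framework-level XLA compilation looks like (a minimal sketch, not part of any framework's official quickstart; the function and array shapes here are illustrative), JAX compiles functions through XLA when they are wrapped with `jax.jit`:

```python
import jax
import jax.numpy as jnp

@jax.jit  # JAX traces this function once and compiles it with XLA
def predict(w, x):
    # A toy "model": a matrix multiply followed by a tanh activation
    return jnp.tanh(x @ w)

w = jnp.ones((3, 2))   # illustrative weights
x = jnp.ones((4, 3))   # illustrative batch of inputs
print(predict(w, x).shape)  # (4, 2)
```

On the first call, XLA fuses the matmul and the `tanh` into optimized code for whatever backend is available (CPU, GPU, or an ML accelerator); subsequent calls with the same shapes reuse the compiled executable.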

If you're not contributing code to the XLA compiler, you don't need to clone and build this repo. Everything here is intended for XLA contributors who want to develop the compiler and XLA integrators who want to debug or add support for ML frontends and hardware backends.

Contribute

If you'd like to contribute to XLA, review How to Contribute and then see the developer guide.

Contacts

  • For questions, contact the maintainers: maintainers at openxla.org

Resources

Code of Conduct

While under TensorFlow governance, all community spaces for SIG OpenXLA are subject to the TensorFlow Code of Conduct.