Issues: microsoft/onnxruntime
Issues list

[Performance] FP16 Clip and Handle Bias introduces insufficient optimization. (labels: performance)
#23613 opened Feb 7, 2025 by SuhwanSong
[Feature Request] System.Numerics.Tensors support (labels: feature request, .NET)
#23605 opened Feb 6, 2025 by verdie-g
Model Unsupported model IR version: 11, max supported IR version: 10 (labels: ep:DML)
#23602 opened Feb 6, 2025 by Pro100rus32
Static Quantization "Shape mismatch" Error (labels: quantization)
#23600 opened Feb 6, 2025 by ktadgh
[Build] Cannot build for arm32: error when linking libonnxruntime.so (labels: build)
#23598 opened Feb 6, 2025 by giovanni-trabucco
Custom operators is not a registered function/op (python) (labels: .NET)
#23566 opened Feb 3, 2025 by novamind
[Build] Android compatibility with WebGPU (labels: build, ep:WebGPU, platform:mobile, platform:web)
#23565 opened Feb 3, 2025 by FricoRico
[Feature Request] Is ORT support run LLM on Multi-Node GPU? (labels: feature request)
#23564 opened Feb 3, 2025 by KnightYao
[Build] How to build CoreML for running C++ code on MacOS (labels: build, ep:CoreML, platform:mobile)
#23556 opened Jan 31, 2025 by ugurkan-syntonym
[Performance] Speed-up TensorRT engine compilation (labels: ep:TensorRT, performance)
#23546 opened Jan 30, 2025 by loryruta
[QUESTION]: onnxruntime with onednn backend (labels: ep:oneDNN)
#23543 opened Jan 30, 2025 by ramyaprabhu-alt
[Build] protocol buffer compiler error MSB8066 (labels: build, ep:OpenVINO)
#23529 opened Jan 29, 2025 by omelentyev
[Feature Request] Global Threadpool in Python API (labels: feature request)
#23523 opened Jan 28, 2025 by alex-halpin
symbolic_shape_infer.py cannot infer torch.nn.normalize (labels: converter)
#23516 opened Jan 28, 2025 by Tytskiy
[Performance] Preload model before inference (labels: performance)
#23513 opened Jan 28, 2025 by lobanov02
[Build] json dependency update request (labels: build)
#23512 opened Jan 28, 2025 by ranjitshs
[Build] Compilation error when building Onnxrt 1.20.1 with flag onnxruntime_CUDA_MINIMAL=ON with TRT 10.7.23 and Cudnn 9.6.0.74 (labels: build, ep:TensorRT)
#23504 opened Jan 27, 2025 by jcdatin
[Feature Request] Adapters DML support (labels: ep:DML, feature request)
#23503 opened Jan 27, 2025 by ambroser53
[Build] thrust::unary_function deprecated in CUDA 12.8 (labels: build, ep:CUDA)
#23499 opened Jan 27, 2025 by smuzaffar
[Build] Non-zero status code (labels: build)
#23497 opened Jan 27, 2025 by phhh-xh
[Documentation] CudaContext::AllocDeferredCpuMem (labels: documentation)
#23485 opened Jan 24, 2025 by axbycc-mark