Describe the issue

I have a simple model with a custom operator, but I cannot create an ORT session.
```
---------------------------------------------------------------------------
Fail                                      Traceback (most recent call last)
Cell In[6], line 10
      8 onnx_model_path = "model_modified.onnx"
      9 providers = ['CPUExecutionProvider']
---> 10 session = ort.InferenceSession(onnx_model_path, sess_options=so, providers=providers)

File ~/projects/onnx_playground/.venv/lib/python3.12/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:465, in InferenceSession.__init__(self, path_or_bytes, sess_options, providers, provider_options, **kwargs)
    462 disabled_optimizers = kwargs.get("disabled_optimizers")
    464 try:
--> 465     self._create_inference_session(providers, provider_options, disabled_optimizers)
    466 except (ValueError, RuntimeError) as e:
    467     if self._enable_fallback:

File ~/projects/onnx_playground/.venv/lib/python3.12/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:526, in InferenceSession._create_inference_session(self, providers, provider_options, disabled_optimizers)
    523     self._register_ep_custom_ops(session_options, providers, provider_options, available_providers)
    525 if self._model_path:
--> 526     sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
    527 else:
    528     sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)

Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from model_modified.onnx failed:Fatal error: custom.domain:CustomMatMul(-1) is not a registered function/op
```
ort version: 1.20.1
opset version: 17
torch version: 2.6.0+cu124
To reproduce

Here is how I export my model to ONNX:
```python
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    export_params=True,
    opset_version=17,
    input_names=["input"],
    output_names=["output"],
)
```
Here is my custom operator, together with the script that rewrites the exported graph to use it:
```python
import numpy as np
import onnx
from onnx import checker, helper
from onnxruntime_extensions import onnx_op

@onnx_op(op_type="CustomMatMul", domain="custom.domain")
def custom_matmul(x, y):
    return np.dot(x, y)

model_path = "model.onnx"
model = onnx.load(model_path)
graph = model.graph

# Replace every MatMul node with an equivalent CustomMatMul node.
# Iterate over a snapshot, since the loop mutates graph.node.
for node in list(graph.node):
    if node.op_type == "MatMul":
        custom_node = helper.make_node(
            "CustomMatMul",
            inputs=node.input,
            outputs=node.output,
            name=f"{node.name}",
            domain="custom.domain",
        )
        custom_node_index = list(graph.node).index(node)
        graph.node.insert(custom_node_index, custom_node)
        graph.node.remove(node)
        # Rewire consumers of the old node's output (a no-op here,
        # since the new node reuses the same output names).
        for next_node in graph.node:
            for idx, inp in enumerate(next_node.input):
                if inp == node.output[0]:
                    next_node.input[idx] = custom_node.output[0]

model.opset_import.append(helper.make_opsetid("custom.domain", 17))

updated_model_path = "model_modified.onnx"
onnx.save(model, updated_model_path)
print(f"Updated model saved to {updated_model_path}")
checker.check_model(model)
```
Note: `checker.check_model` reports no issues with the model.

ORT session:
```python
import onnxruntime as ort
from onnxruntime_extensions import get_library_path

so = ort.SessionOptions()
so.register_custom_ops_library(get_library_path())
so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_DISABLE_ALL

onnx_model_path = "model_modified.onnx"
providers = ['CPUExecutionProvider']
session = ort.InferenceSession(onnx_model_path, sess_options=so, providers=providers)
```
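Independent of session creation, the Python body of the op can be sanity-checked on its own, which rules out the op implementation itself as the cause (array shapes below are arbitrary):

```python
import numpy as np

def custom_matmul(x, y):
    # Same body as the @onnx_op-decorated function above.
    return np.dot(x, y)

x = np.arange(6, dtype=np.float32).reshape(2, 3)
y = np.arange(6, dtype=np.float32).reshape(3, 2)
print(np.allclose(custom_matmul(x, y), x @ y))  # → True
```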
Platform

Linux

OS Version

Ubuntu 24.04.1 LTS

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

1.20.1

ONNX Runtime API

Python

Architecture

X64

Execution Provider

Default CPU