Hi, could you provide an example of how to configure an infinity Docker instance to load a model from a local directory instead of fetching it from the Hugging Face Hub? When I point it at the local path of an ONNX model, it still resolves the path against the Hugging Face repository, not the local directory.

Run command:

Error: Repository Not Found for url: https://huggingface.co/onnx_distilroberta/resolve/main/config.json
@kjoth For a local model to work inside a Docker container, I would highly recommend an interactive SSH/shell session into the container to debug what path the server actually sees.

I would be happy about a PR that showcases that. With ONNX / `--engine OPTIMUM` models, it might be more complicated, because there are various safeguards that optimize for not downloading all the other model weights from Hugging Face (jax, rustformers, torch weights), so it might not even work. Is the model not on HF at all?
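For reference, one way to try this is to bind-mount the local model directory into the container and pass the in-container path to the server. This is only a sketch under assumptions: the image name `michaelf34/infinity`, the `--model-name-or-path` and `--engine` flags, the port, and the mount location are not confirmed by this thread and may differ between infinity versions, so check the CLI's `--help` output for the exact options.

```shell
# Assumption: the local ONNX model (including config.json) lives in
# ./onnx_distilroberta on the host.
# Mount it into the container and point the server at the *in-container*
# path instead of a Hugging Face repo id.
docker run -p 7997:7997 \
  -v "$(pwd)/onnx_distilroberta:/models/onnx_distilroberta" \
  michaelf34/infinity:latest \
  --model-name-or-path /models/onnx_distilroberta \
  --engine optimum
```

If the server still tries to resolve the model against huggingface.co (as in the error above), double-check that the path you pass is the in-container path (`/models/...`), not the host path, since a path that does not exist inside the container is typically treated as a repo id.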
Updating `config.json` from `"max_position_embeddings": 514` to `"max_position_embeddings": 512` fixed it for me, and it works now.