Issues: intel/ipex-llm
failed to load llama-3.2-vision using ollama-0.5.4-ipex-llm-2.2.0b20250218-win.zip
#12861 opened Feb 20, 2025 by fanlessfan
glm-4v, A770, got error "RuntimeError: probability tensor contains either inf, nan or element < 0" [user issue]
#12857 opened Feb 20, 2025 by aixiwangintel
qwen2-vl, A770 GPU, got an unexpected keyword argument 'position_embeddings' error [user issue]
#12856 opened Feb 20, 2025 by aixiwangintel
2 x A770 with Ollama on Linux, inference responses slow down dramatically [user issue]
#12852 opened Feb 19, 2025 by RobinJing
[SOLVED] How to use IPEX-LLM with a VS Code extension on Win10?
#12851 opened Feb 19, 2025 by redo33
Feature Request: Implementing a No-Install Runtime Package for Ollama on Intel NPU using IPEX-LLM
#12848 opened Feb 19, 2025 by ChenZeiShuai
Ollama can't run on 9th gen Intel CPU or older, "please use a newer version of Ollama"
#12844 opened Feb 18, 2025 by Ejo2001
With Intel chips, how to fine-tune models with LoRA and use them for inference?
#12842 opened Feb 18, 2025 by JamieVC
[ipex-llm][cpp][ollama] Nonsense output when running inference simultaneously (OLLAMA_NUM_PARALLEL != 1) [user issue]
#12835 opened Feb 17, 2025 by jianjungu
Ollama reports model is 100% on CPU when actually running on GPU [user issue]
#12831 opened Feb 16, 2025 by tsobczynski
iGPU limits the inference speed of the entire system [user issue]
#12828 opened Feb 14, 2025 by dttprofessor
Baichuan-M1-14B can save an int4 model, but loading the low-bit model fails [user issue]
#12824 opened Feb 14, 2025 by KiwiHana