[Bug]: AttributeError: 'AzureOpenAIEmbedding' object has no attribute 'predict' #18029
Comments
Hey @CarlosECastilloM! I'm here to help you with any bugs, questions, or contributions you have. Let's dive into this issue together! The `AttributeError` occurs because `AzureOpenAIEmbedding` is an embedding model and does not implement `predict`; predictions require an LLM class such as `AzureOpenAI`. Here's how you can modify your code to use the correct class for predictions:
```python
from llama_index.llms.azure_openai import AzureOpenAI

client = AzureOpenAI(
    engine="YOUR_AZURE_OPENAI_DEPLOYMENT_NAME",
    model="YOUR_AZURE_OPENAI_COMPLETION_MODEL_NAME",
    api_key=AZURE_OPENAI_API_KEY,
    azure_endpoint=AZURE_OPENAI_ENDPOINT,
    api_version=OPENAI_API_VERSION,
)
```
This should resolve the `AttributeError`.
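The interface mismatch behind the `AttributeError` can be shown with a minimal sketch. The classes below are simplified stand-ins, not llama_index code: embedding models expose embedding methods, while only LLM classes expose `predict`, which is what the router's selector calls.

```python
# Minimal sketch (stand-in classes, not llama_index code) of why the
# AttributeError occurs: embedding models and LLMs expose different
# interfaces, and the selector calls .predict(), which only an LLM defines.
class FakeEmbedding:
    """Stands in for AzureOpenAIEmbedding: can embed text, cannot predict."""
    def get_text_embedding(self, text: str) -> list[float]:
        return [0.0, 0.0, 0.0]  # placeholder vector


class FakeLLM:
    """Stands in for AzureOpenAI: supports predict()."""
    def predict(self, prompt: str) -> str:
        return f"answer to: {prompt}"


embed_model = FakeEmbedding()
# embed_model.predict("...")  # would raise AttributeError:
#                             # 'FakeEmbedding' object has no attribute 'predict'
llm = FakeLLM()
print(llm.predict("Which tool should handle this query?"))
```

Passing the embedding object where llama_index expects an LLM reproduces exactly this failure, which is why the fix is to construct a separate `AzureOpenAI` LLM client.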
Thanks for your prompt response. Now I got the following error:

```
response = query_engine.query("What is the summary of the document?")
Cell In[369], line 16
File ~\AppData\Local\anaconda3\Lib\site-packages\llama_index\core\instrumentation\dispatcher.py:322 in wrapper
File ~\AppData\Local\anaconda3\Lib\site-packages\llama_index\core\base\base_query_engine.py:52 in query
File ~\AppData\Local\anaconda3\Lib\site-packages\llama_index\core\instrumentation\dispatcher.py:322 in wrapper
File ~\AppData\Local\anaconda3\Lib\site-packages\llama_index\core\query_engine\router_query_engine.py:163 in _query
File ~\AppData\Local\anaconda3\Lib\site-packages\llama_index\core\base\base_selector.py:88 in select
File ~\AppData\Local\anaconda3\Lib\site-packages\llama_index\core\instrumentation\dispatcher.py:322 in wrapper
File ~\AppData\Local\anaconda3\Lib\site-packages\llama_index\core\selectors\llm_selectors.py:106 in _select
File ~\AppData\Local\anaconda3\Lib\site-packages\llama_index\core\instrumentation\dispatcher.py:322 in wrapper
File ~\AppData\Local\anaconda3\Lib\site-packages\llama_index\core\llms\llm.py:603 in predict
File ~\AppData\Local\anaconda3\Lib\site-packages\llama_index\llms\openai\base.py:360 in metadata
File ~\AppData\Local\anaconda3\Lib\site-packages\llama_index\llms\openai\utils.py:246 in openai_modelname_to_contextsize
ValueError: Unknown model 'text-embedding-3-large'.
Please provide a valid OpenAI model name in: o1, o1-2024-12-17, o1-preview, o1-preview-2024-09-12, o1-mini, o1-mini-2024-09-12, o3-mini, o3-mini-2025-01-31, gpt-4, gpt-4-32k, gpt-4-1106-preview, gpt-4-0125-preview, gpt-4-turbo-preview, gpt-4-vision-preview, gpt-4-1106-vision-preview, gpt-4-turbo-2024-04-09, gpt-4-turbo, gpt-4o, gpt-4o-audio-preview, gpt-4o-audio-preview-2024-12-17, gpt-4o-audio-preview-2024-10-01, gpt-4o-mini-audio-preview, gpt-4o-mini-audio-preview-2024-12-17, gpt-4o-2024-05-13, gpt-4o-2024-08-06, gpt-4o-2024-11-20, gpt-4.5-preview, gpt-4.5-preview-2025-02-27, chatgpt-4o-latest, gpt-4o-mini, gpt-4o-mini-2024-07-18, gpt-4-0613, gpt-4-32k-0613, gpt-4-0314, gpt-4-32k-0314, gpt-3.5-turbo, gpt-3.5-turbo-16k, gpt-3.5-turbo-0125, gpt-3.5-turbo-1106, gpt-3.5-turbo-0613, gpt-3.5-turbo-16k-0613, gpt-3.5-turbo-0301, text-davinci-003, text-davinci-002, gpt-3.5-turbo-instruct, text-ada-001, text-babbage-001, text-curie-001, ada, babbage, curie, davinci, gpt-35-turbo-16k, gpt-35-turbo, gpt-35-turbo-0125, gpt-35-turbo-1106, gpt-35-turbo-0613, gpt-35-turbo-16k-0613
```
Forgot to mention: @dosu
The error you're encountering is due to using an invalid model name: 'text-embedding-3-large' is an embedding model, not a completion model, so it does not appear in the list of valid LLM names. To resolve this, use a valid LLM model name from the list in the error message (for example, gpt-4o or gpt-35-turbo) for the selector's LLM, and keep 'text-embedding-3-large' only for the embedding model.
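The failure path in the traceback can be illustrated with a small stand-in for the model-to-context-size lookup. The table below is illustrative only (the sizes and entries are assumptions, not llama_index's actual table): the LLM looks up its model name in a table that contains only completion models, so an embedding model name raises `ValueError`.

```python
# Illustrative stand-in for openai_modelname_to_contextsize: LLM names map
# to context-window sizes; embedding model names are deliberately absent,
# so looking one up fails the same way as in the traceback above.
# (Table entries and sizes are assumptions for illustration.)
LLM_CONTEXT_SIZES = {
    "gpt-4o": 128000,
    "gpt-35-turbo": 4096,
}


def modelname_to_contextsize(name: str) -> int:
    """Return the context window for an LLM name, or fail for unknown names."""
    if name not in LLM_CONTEXT_SIZES:
        raise ValueError(
            f"Unknown model {name!r}. Pass an LLM (completion) model name "
            "to the selector; embedding names belong on the embed model."
        )
    return LLM_CONTEXT_SIZES[name]


print(modelname_to_contextsize("gpt-4o"))             # 128000
# modelname_to_contextsize("text-embedding-3-large")  # raises ValueError
```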
Thanks @dosu. Now building the vector index fails:

```
vector_index = VectorStoreIndex(
Generating embeddings:   0%| | 0/34 [00:00<?, ?it/s]
Error: Traceback (most recent call last):
Cell In[386], line 1
File ~\AppData\Local\anaconda3\Lib\site-packages\llama_index\core\indices\vector_store\base.py:76 in __init__
File ~\AppData\Local\anaconda3\Lib\site-packages\llama_index\core\indices\base.py:77 in __init__
File ~\AppData\Local\anaconda3\Lib\site-packages\llama_index\core\indices\vector_store\base.py:310 in build_index_from_nodes
File ~\AppData\Local\anaconda3\Lib\site-packages\llama_index\core\indices\vector_store\base.py:279 in _build_index_from_nodes
File ~\AppData\Local\anaconda3\Lib\site-packages\llama_index\core\indices\vector_store\base.py:232 in _add_nodes_to_index
File ~\AppData\Local\anaconda3\Lib\site-packages\llama_index\core\indices\vector_store\base.py:139 in _get_node_with_embedding
File ~\AppData\Local\anaconda3\Lib\site-packages\llama_index\core\indices\utils.py:160 in embed_nodes
File ~\AppData\Local\anaconda3\Lib\site-packages\llama_index\core\instrumentation\dispatcher.py:322 in wrapper
File ~\AppData\Local\anaconda3\Lib\site-packages\llama_index\core\base\embeddings\base.py:335 in get_text_embedding_batch
File ~\AppData\Local\anaconda3\Lib\site-packages\llama_index\embeddings\openai\base.py:465 in _get_text_embeddings
File ~\AppData\Local\anaconda3\Lib\site-packages\tenacity\__init__.py:289 in wrapped_f
File ~\AppData\Local\anaconda3\Lib\site-packages\tenacity\__init__.py:379 in __call__
File ~\AppData\Local\anaconda3\Lib\site-packages\tenacity\__init__.py:314 in iter
File ~\AppData\Local\anaconda3\Lib\concurrent\futures\_base.py:449 in result
File ~\AppData\Local\anaconda3\Lib\concurrent\futures\_base.py:401 in __get_result
File ~\AppData\Local\anaconda3\Lib\site-packages\tenacity\__init__.py:382 in __call__
File ~\AppData\Local\anaconda3\Lib\site-packages\llama_index\embeddings\openai\base.py:458 in _retryable_get_embeddings
File ~\AppData\Local\anaconda3\Lib\site-packages\llama_index\embeddings\openai\base.py:169 in get_embeddings
File ~\AppData\Local\anaconda3\Lib\site-packages\openai\resources\embeddings.py:125 in create
File ~\AppData\Local\anaconda3\Lib\site-packages\openai\_base_client.py:1283 in post
File ~\AppData\Local\anaconda3\Lib\site-packages\openai\_base_client.py:960 in request
File ~\AppData\Local\anaconda3\Lib\site-packages\openai\_base_client.py:1064 in _request
NotFoundError: Error code: 404 - {'error': {'code': 'DeploymentNotFound', 'message': 'The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.'}}
```

Please help me fix it.
The "NotFoundError: Error code: 404 - {'error': {'code': 'DeploymentNotFound', 'message': 'The API deployment for this resource does not exist'}}" error indicates that the specified deployment does not exist in your Azure OpenAI resource. To resolve this, verify in the Azure portal that the deployment name you pass (along with `azure_endpoint` and `api_version`) exactly matches a deployment created on that resource. Ensure that all these parameters are correctly configured to match your Azure setup. If the deployment name is incorrect or does not exist, you will encounter this 404 error.
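The check behind the 404 can be sketched as a simple membership test. Azure routes requests by *deployment* name, not model name; the deployment names below are hypothetical examples standing in for what you would see in the Azure portal.

```python
# Sketch of the sanity check behind DeploymentNotFound: if the configured
# name is not among the deployments created on the Azure resource, the
# service returns a 404. Deployment names here are hypothetical examples.
def check_deployment(configured: str, existing: list[str]) -> None:
    """Raise if the configured deployment name is not on the resource."""
    if configured not in existing:
        raise LookupError(
            f"Deployment {configured!r} not found; available: {existing}. "
            "Note: Azure expects the deployment name, not the model name."
        )


existing_deployments = ["gpt4o-chat", "embed-3-large"]  # as listed in the portal
check_deployment("embed-3-large", existing_deployments)  # passes
# check_deployment("text-embedding-3-large", existing_deployments)  # raises
```

A common pitfall this catches: passing the model name (e.g. "text-embedding-3-large") where Azure expects the deployment name you chose when creating the deployment.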
Bug Description
This is my code:
1. Load Document

```python
from llama_index.core import SimpleDirectoryReader

doc = SimpleDirectoryReader(input_files=["MetaGPT.pdf"]).load_data()
```
2. Split Document in Chunks (splitting into sentences)

```python
from llama_index.core.node_parser import SentenceSplitter

splitter = SentenceSplitter(chunk_size=1024)
nodes = splitter.get_nodes_from_documents(doc)
```
3. LLM Model and Embedding Model Settings (optional step)

```python
from llama_index.core import Settings
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.embeddings.azure_openai import AzureOpenAIEmbedding
from dotenv import load_dotenv
from openai import AzureOpenAI

load_dotenv()

client = AzureOpenAI(
    api_key=AZURE_OPENAI_API_KEY,
    azure_endpoint=AZURE_OPENAI_ENDPOINT,
    api_version=OPENAI_API_VERSION,
)

client_embed = AzureOpenAIEmbedding(
    model=AZURE_EMBEDDING_MODEL,
    api_key=AZURE_OPENAI_API_KEY,
    api_version=OPENAI_API_VERSION,
    azure_endpoint=AZURE_OPENAI_ENDPOINT,
)
```
PART II: CREATING 2 INDEXES: SUMMARY INDEX AND A VECTOR INDEX

4. Creating a Summary and a Vector Index

```python
from llama_index.core import SummaryIndex
from llama_index.core import VectorStoreIndex

summary_index = SummaryIndex(nodes)
vector_index = VectorStoreIndex(
    nodes=nodes,
    embed_model=client_embed,
    show_progress=True,
)
```
5. Defining the Query Tools

```python
summary_query_engine = summary_index.as_query_engine(
    response_mode="tree_summarize",
    use_async=True,
)
vector_query_engine = vector_index.as_query_engine()
```
6. Establishing the Query Tools

```python
from llama_index.core.tools import QueryEngineTool

summary_tool = QueryEngineTool.from_defaults(
    query_engine=summary_query_engine,
    description=(
        "Useful for summarization questions related to MetaGPT"
    ),
)
vector_tool = QueryEngineTool.from_defaults(
    query_engine=vector_query_engine,
    description=(
        "Useful for retrieving specific context from the MetaGPT paper."
    ),
)
```
7. Establish a Router from different types of Selectors

```python
from llama_index.core.query_engine.router_query_engine import RouterQueryEngine
from llama_index.core.selectors import LLMSingleSelector

engine = LLMSingleSelector.from_defaults(llm=client_embed)
query_engine = RouterQueryEngine(
)
```
8. Making queries from the document

```python
response = query_engine.query("What is the summary of the document?")
print(str(response))
```
Version
0.12.22
Steps to Reproduce
I wonder if you could help me to solve the error I get.
Relevant Logs/Tracebacks