[Bug]: Cannot Complete Notebook: Agent Chat with custom model loading #2381
@olgavrou could you take a look?
@DLWCMD could you upload a picture or screenshot? That would make it clearer what you are describing.
Thanks for investigating this issue. While I am impressed with the potential of AutoGen, my actual experience has been disappointing. I have spent more time trying to deal with configuration and related issues, such as this one, than successfully executing the notebooks. I hope that trend does not continue.
Describe the bug
This notebook requires use of a filter dictionary to obtain information associated with building the custom model. Despite the efforts discussed below, I am not able to successfully apply a filter.
Following instructions, I created this expanded OAI_CONFIG_LIST:
OAI_CONFIG_LIST = [
    {
        'model': 'gpt-3.5-turbo',
        'api_key': os.environ["OPENAI_API_KEY"],
    },
    {
        'model': 'gpt-4',
        'api_key': os.environ["OPENAI_API_KEY"],
    },
    {
        'model': 'gpt-4-turbo-preview',
        'api_key': os.environ["OPENAI_API_KEY"]
    },
    {
        'model_custom': 'Open-Orca/Mistral-7B-OpenOrca',
        'model_client_cls': 'CustomModelClient',
        'device': 'cuda',
        'n': 1,
        'params': {
            'max_length': 1000
        }
    }
]
Note the list includes the recommended additional dictionary to support the Open-Orca details:
{
    "model": "Open-Orca/Mistral-7B-OpenOrca",
    "model_client_cls": "CustomModelClient",
    "device": "cuda",
    "n": 1,
    "params": {
        "max_length": 1000,
    }
},
If I try the recommended approach to filter for model_client_cls, the filter fails and I receive the entire list: [{'model_custom': 'Open-Orca/Mistral-7B-OpenOrca', 'model_client_cls': 'CustomModelClient', 'device': 'cuda', 'n': 1, 'params': {'max_length': 1000}}]
I then decided to try a filter on an element in the original OAI_CONFIG_LIST, as follows:
filter_dict = {"model": "gpt-3.5-turbo"}
config_list = autogen.filter_config(config_list, filter_dict)
This fails with the following message:
[/usr/local/lib/python3.10/dist-packages/autogen/oai/openai_utils.py](https://localhost:8080/#) in _satisfies(config_value, acceptable_values)
442 return bool(set(config_value) & set(acceptable_values)) # Non-empty intersection
443 else:
--> 444 return config_value in acceptable_values
445
446 if filter_dict:
TypeError: 'in <string>' requires string as left operand, not NoneType
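For reference, the exception can be reproduced without AutoGen at all. This is a minimal sketch based only on the traceback above: the custom entry has no "model" key, so looking it up yields None, and with a scalar (string) filter value Python ends up evaluating `None in "gpt-3.5-turbo"`, which the `in` operator on a string rejects.

```python
# Stand-alone reproduction of the TypeError in the traceback (no autogen
# needed). The custom config entry lacks a "model" key, so .get() gives None;
# `in` on a string requires a string left operand, hence the TypeError.
config = {"model_client_cls": "CustomModelClient"}  # no "model" key
config_value = config.get("model")                  # -> None
acceptable_values = "gpt-3.5-turbo"                 # scalar string, not a list

try:
    config_value in acceptable_values
except TypeError as exc:
    print(exc)  # 'in <string>' requires string as left operand, not NoneType
```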
Steps to reproduce
Using the notebook, try to apply the filter on the Open-Orca dictionary or the original OAI_CONFIG_LIST as shown above.
Model Used
gpt-3.5-turbo
gpt-4
gpt-4-turbo-preview
Open-Orca/Mistral-7B-OpenOrca
Expected Behavior
Applying the filter dictionary should have returned a filtered list in both examples.
Screenshots and logs
No response
Additional Information
pyautogen-0.2.23-py3-none-any.whl
macOS Sonoma 14.4.1 on MacBook Air with M2 chip
Python 3.10
Could you use filter_dict = {"model": ["gpt-3.5-turbo"]} ?
cc @ekzhu in case this issue implies any work needed in documentation

Will do and thanks very much. I will report back within 24 hours.
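A hedged sketch of why wrapping the filter value in a list avoids the error, simplified from the `_satisfies` lines quoted in the traceback (the real function lives in autogen/oai/openai_utils.py): with a list value, a config entry that has no "model" key evaluates `None in ["gpt-3.5-turbo"]`, which is simply False, so the entry is skipped instead of raising.

```python
# Simplified sketch of the branch logic shown in the traceback; not the
# actual autogen implementation.
def satisfies(config_value, acceptable_values):
    if isinstance(config_value, list):
        return bool(set(config_value) & set(acceptable_values))
    return config_value in acceptable_values

configs = [
    {"model": "gpt-3.5-turbo"},
    {"model_client_cls": "CustomModelClient"},  # custom entry, no "model" key
]
# List-valued filter: `None in ["gpt-3.5-turbo"]` is False, no TypeError.
filtered = [c for c in configs
            if satisfies(c.get("model"), ["gpt-3.5-turbo"])]
print(filtered)  # [{'model': 'gpt-3.5-turbo'}]
```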
We are making progress. Using the new model, the example with critics produced its own image, followed by this error message:
BadRequestError: Error code: 400 - {'error': {'message': "We could not parse the JSON body of your request. (HINT: This likely means you aren't using your HTTP library correctly. The OpenAI API expects a JSON payload, but what was sent was not valid JSON. If you have trouble figuring out how to fix this, please contact us through our help center at help.openai.com.)", 'type': 'invalid_request_error', 'param': None, 'code': None}}
The new model was recognized, which allowed us to initiate the example with critics, and create one image. How do we address the above error?
Thanks for your help.
@DLWCMD Can you explain the max_length in params for me? I don't know what it means.