[Feature Request]: Support perplexity.ai #2405
Comments
Thanks for creating this issue @sinwoobang. Could you post a sample code snippet showing how to use Perplexity AI, to help others get started? Also, since you are likely the person here with the most knowledge of Perplexity AI, would you like to look into this? It could be a configuration issue. We can add some documentation to: https://microsoft.github.io/autogen/docs/topics/non-openai-models/about-using-nonopenai-models
@ekzhu The Perplexity AI API spec is compatible with the OpenAI client; it works if you change the base URL. Here is my snippet: https://gist.github.com/sinwoobang/33feda60c4ed36388a944e4177d3d957. Please ask if anything is unclear.
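For context on the comment above: since Perplexity's API follows the OpenAI chat-completions format, an AutoGen-style config can reuse the standard OpenAI client and simply point it at Perplexity's endpoint. This is a minimal sketch, not the gist's contents; the model name and environment-variable name are assumptions, so check Perplexity's documentation for current values:

```python
import os

# Hypothetical AutoGen config_list entry: reuse the OpenAI-compatible client,
# but send requests to Perplexity's endpoint instead of OpenAI's.
config_list = [
    {
        # Model name is an assumption -- consult Perplexity's docs for current names.
        "model": "sonar",
        # Env var name is illustrative.
        "api_key": os.environ.get("PERPLEXITY_API_KEY", "pplx-..."),
        # Overriding base_url is what redirects the OpenAI client to Perplexity.
        "base_url": "https://api.perplexity.ai",
    }
]

# An agent would then be constructed with llm_config={"config_list": config_list}.
```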
@ekzhu I got a reference link on Discord: https://microsoft.github.io/autogen/blog/2024/01/26/Custom-Models#step-1-create-the-custom-model-client-class. Someone there mentioned that a custom model client may resolve this issue. What do you think? Is the sequence of request messages under the control of the model client?
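The custom model client route works because AutoGen only requires the client class to satisfy a small protocol (`create`, `message_retrieval`, `cost`, `get_usage`), as described in the linked blog post. A minimal skeleton might look like the following; the HTTP call to Perplexity is deliberately stubbed out, and the class name is our own, so this is a sketch of the shape rather than a working client:

```python
from types import SimpleNamespace


class PerplexityModelClient:
    """Sketch of AutoGen's custom ModelClient protocol; the HTTP call is stubbed."""

    def __init__(self, config, **kwargs):
        self.model = config.get("model", "sonar")

    def create(self, params):
        # A real client would POST params["messages"] to
        # https://api.perplexity.ai/chat/completions here (and could re-sequence
        # the messages first). We fake a response shaped like an OpenAI completion.
        message = SimpleNamespace(content="stubbed reply", function_call=None)
        usage = SimpleNamespace(prompt_tokens=0, completion_tokens=0)
        return SimpleNamespace(choices=[SimpleNamespace(message=message)],
                               model=self.model, usage=usage)

    def message_retrieval(self, response):
        # Return the list of reply strings AutoGen hands back to the agent.
        return [choice.message.content for choice in response.choices]

    def cost(self, response):
        return 0.0  # pricing is not modeled in this sketch

    @staticmethod
    def get_usage(response):
        return {"prompt_tokens": response.usage.prompt_tokens,
                "completion_tokens": response.usage.completion_tokens,
                "total_tokens": 0, "cost": 0.0, "model": response.model}


# Registration then follows the blog post's pattern, roughly:
#   agent.register_model_client(model_client_cls=PerplexityModelClient)
```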
@sinwoobang any updates? Did you manage to re-sequence the messages for this custom model? I'm having trouble calling the OpenAIClient from the custom model class.
@edanweis I did. What part are you struggling with? |
May I review the code? I would remove the part below, since you have
@sinwoobang Yes, of course, nice catch! Thanks. I realised that Perplexity's online models are effectively search engines, for which a message history doesn't make much sense. Perhaps this should be a function call.
@edanweis Exactly. I am still deciding too. Perplexity sits somewhere between a function call and an LLM service.
Hey folks, what would you recommend to me as someone encountering this same error with Perplexity.ai as my LLM? |
It seems registering a custom ModelClient is a challenge in itself, so I've created an issue for that: #3502 |
Is your feature request related to a problem? Please describe.
Yes, when interacting with perplexity.ai, I encountered a BadRequestError with the following message:
This error indicates that perplexity.ai requires a strict alternating sequence of user and assistant roles in the messages.
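To make the constraint concrete, here is a small illustrative check (ours, not code from the report) that mirrors what Perplexity's API appears to validate: after any leading system messages, roles must strictly alternate between user and assistant, starting with user:

```python
def roles_alternate(messages):
    """True if non-system roles strictly alternate user/assistant, starting with user."""
    roles = [m["role"] for m in messages if m["role"] != "system"]
    expected = ["user", "assistant"]
    return all(r == expected[i % 2] for i, r in enumerate(roles))
```

A group chat transcript with two consecutive user-role messages fails this check, which is exactly the situation that triggers the BadRequestError.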
Describe the solution you'd like
To support seamless integration with perplexity.ai, AutoGen should enforce the strict alternating sequence rule for user and assistant roles when building the message list. This would guarantee that messages are structured in a way compatible with perplexity.ai's requirements, preventing the BadRequestError from occurring.
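One possible implementation (a sketch under our own naming, not a proposed final API) normalizes the transcript before sending: leading system messages pass through untouched, then consecutive messages sharing a role are merged so the remainder strictly alternates:

```python
def enforce_alternating_roles(messages):
    """Merge consecutive same-role messages so user/assistant strictly alternate.

    Leading system messages are kept as-is, matching the requirement that
    alternation starts after them. A fuller version would also ensure the
    first non-system message comes from the user.
    """
    system, rest = [], []
    for msg in messages:
        # Only messages before the first non-system message count as "leading".
        (system if msg["role"] == "system" and not rest else rest).append(msg)

    merged = []
    for msg in rest:
        if merged and merged[-1]["role"] == msg["role"]:
            # Concatenate same-role neighbours instead of sending both.
            merged[-1]["content"] += "\n" + msg["content"]
        else:
            merged.append({"role": msg["role"], "content": msg["content"]})
    return system + merged
```

Merging (rather than dropping) duplicate-role messages preserves the full group-chat context while satisfying the alternation constraint.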
Additional context
To reproduce the issue, you can create multiple agents with a group chat agent and designate one of the agents to be operated by perplexity.ai. When attempting to interact with this agent, you will encounter the BadRequestError if the message sequence does not adhere to the strict alternating user/assistant role pattern required by perplexity.ai.
By implementing the proposed solution and enforcing the alternating role sequence, the integration with perplexity.ai should become more robust and error-free. This will enhance the overall user experience and allow for seamless communication between Autogen and perplexity.ai.