Add remote openai backend to LLM #10078
base: master
Conversation
Codecov Report
❌ The patch check failed because the patch coverage (4.76%) is below the target coverage (80.00%). You can increase the patch coverage or adjust the target coverage.

@@            Coverage Diff             @@
##           master   #10078      +/-   ##
==========================================
- Coverage   86.74%   86.18%    -0.56%
==========================================
  Files         493      493
  Lines       33069    33105       +36
==========================================
- Hits        28685    28531      -154
- Misses       4384     4574      +190

View full report in Codecov by Sentry.
Force-pushed from 93319bf to 7728eca
This looks good at a high level, but I would like to see it integrated into the G-retriever example with argparser flags, and a successful run of both the huggingface and openai backends to confirm correctness and reasonable results (with minimal difference between backends).
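A minimal sketch of what the requested argparser integration in the G-retriever example could look like. The flag names (`--llm_backend`, `--llm_model`) and the default model are illustrative assumptions, not the actual PyG example's API:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Hypothetical flags for selecting the LLM backend in the G-retriever
    # example; names and defaults are assumptions for illustration only.
    parser = argparse.ArgumentParser(description="G-retriever example")
    parser.add_argument(
        "--llm_backend",
        choices=["huggingface", "openai"],
        default="huggingface",
        help="Which LLM backend to use.",
    )
    parser.add_argument(
        "--llm_model",
        default="meta-llama/Llama-2-7b-chat-hf",
        help="Model name understood by the chosen backend.",
    )
    return parser


args = build_parser().parse_args(["--llm_backend", "openai"])
print(args.llm_backend)  # -> openai
```

With flags like these, the same example script could be run once per backend and the outputs compared, as the review asks.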
Please also make it clear that the openai backend only supports a frozen LLM, while the huggingface backend supports LoRA and full finetuning as well.
It would also be cool if you could add support for LoRA with the openai backend.
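One way to surface this constraint to users is an explicit validation step. This is a hypothetical sketch (the `VALID_MODES` table and `check_mode` helper are not part of the PR) encoding the rule stated above: openai supports only a frozen LLM, while huggingface also allows LoRA and full finetuning:

```python
# Hypothetical mapping of backend -> supported finetuning modes, based on
# the constraint described in the review; not the actual PyG implementation.
VALID_MODES = {
    "huggingface": {"frozen", "lora", "full"},
    "openai": {"frozen"},
}


def check_mode(backend: str, mode: str) -> None:
    """Raise early if the chosen backend cannot support the requested mode."""
    if mode not in VALID_MODES[backend]:
        raise ValueError(
            f"backend '{backend}' does not support finetuning mode '{mode}'"
        )


check_mode("huggingface", "lora")  # OK: huggingface supports LoRA
try:
    check_mode("openai", "lora")
except ValueError as err:
    print(err)
```

Failing fast like this makes the frozen-LLM limitation of the remote backend obvious, instead of silently ignoring finetuning arguments.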
No description provided.