Add OpenAI provider #19

Merged
merged 8 commits from openAI into jupyterlite:main on Feb 18, 2025
Conversation

@brichet (Collaborator) commented Nov 8, 2024

No description provided.

@brichet added the `enhancement` (New feature or request) label on Nov 8, 2024
@jtpio (Member) commented Jan 17, 2025

Just pushed to resolve the yarn.lock conflict.

@brichet force-pushed the openAI branch 2 times, most recently from dee9fb2 to ee784b0 on January 24, 2025 at 21:54
@jtpio jtpio mentioned this pull request Feb 5, 2025
@jtpio (Member) commented Feb 18, 2025

Seems to be working fine locally for both the chat and completer:

(screen recording attached: jupyterlite-ai-openai.mp4)

For the inline completer, there seem to be some issues similar to the other providers. Maybe the quality of the suggestions could be improved in general by checking the response against the input that was used when the completer was triggered.

```diff
   }
   return 'Unknown provider';
 }

 /*
  * Get an LLM completer from the name.
  */
-export function getSettings(name: string): JSONObject | null {
+export function getSettings(name: string): any {
```
Review comment from a Member:
Maybe this `any` is fine for now, but we should check whether we can put `JSONObject | null` back as the return type.
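As a rough sketch of what restoring the stricter signature could look like (illustrative only, not the actual jupyterlite-ai code: the `DEFAULT_SETTINGS` map and its entries are hypothetical, and `JSONObject` below is a structural stand-in for the type presumably imported from `@lumino/coreutils` in the real module):

```typescript
// Illustrative sketch only; not the actual jupyterlite-ai implementation.
// A structural stand-in for @lumino/coreutils' JSONObject type.
type JSONObject = {
  [key: string]: string | number | boolean | null | JSONObject;
};

// Hypothetical per-provider defaults, purely for demonstration.
const DEFAULT_SETTINGS: { [name: string]: JSONObject } = {
  OpenAI: { model: 'gpt-4o-mini', temperature: 0.7 }
};

// Returning null for unknown providers keeps the signature honest
// without widening the return type to `any`.
export function getSettings(name: string): JSONObject | null {
  return DEFAULT_SETTINGS[name] ?? null;
}
```

The `?? null` fallback makes the "unknown provider" case explicit at the type level, so callers are forced to handle it rather than receiving an untyped value.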

@jtpio (Member) commented Feb 18, 2025

Marking as ready for now since it seems to be functional.

@jtpio marked this pull request as ready for review on February 18, 2025 at 17:12
@brichet (Collaborator, Author) commented Feb 18, 2025

> For the inline completer, there seem to be some issues similar to the other providers. Maybe the quality of the suggestions could be improved in general by checking the response against the input that was used when the completer was triggered.

I'm not clear on the usage of `filterText`, but I wonder if it could help with this: https://github.com/jupyterlab/jupyterlab/blob/f15d555c89f3aa73bb16610d8056a1597bce6edc/packages/completer/src/tokens.ts#L199
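One possible shape for the check discussed above — comparing the response against the input that triggered the completer — is to strip any part of the pre-cursor text that the model echoes back, so only new text gets inserted. This is a hedged sketch under that assumption; the function name and logic are hypothetical, not the project's actual code:

```typescript
// Hypothetical post-processing for an inline completion (illustrative only).
// `prefix` is the text before the cursor when the completer was triggered;
// if the model's response starts by repeating the end of that prefix,
// drop the overlap so the suggestion contains only the new text.
export function stripEchoedPrefix(prefix: string, response: string): string {
  // Search for the longest suffix of `prefix` that the response starts with.
  for (let len = Math.min(prefix.length, response.length); len > 0; len--) {
    if (response.startsWith(prefix.slice(prefix.length - len))) {
      return response.slice(len);
    }
  }
  // No overlap: return the response unchanged.
  return response;
}
```

For example, with `prefix = 'const x = '` and a response of `'const x = 42;'`, only `'42;'` would be inserted at the cursor.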

@brichet (Collaborator, Author) commented Feb 18, 2025

LGTM, but I can't approve it myself.

@brichet brichet merged commit 1b482ad into jupyterlite:main Feb 18, 2025
7 checks passed
@brichet brichet deleted the openAI branch February 18, 2025 18:14