fix: improve cancel logic in openAi model #14713
Conversation
I did not notice any performance issues with this PR.
However, something in the cancel/error flow does not work properly yet. When a cancel occurs, the console logs the error twice:
root ERROR Uncaught Exception: Error: Canceled
root ERROR Error: Canceled
I think we should:
- check for cancel errors and not log them; maybe leave a log message on the DEBUG log level
- make sure the errors are caught and handled appropriately
At the moment, once the error occurs, the code completion progress message "Calculating AI code completion..." is shown forever, with the progress never cancelling. This is likely because the error was not caught.
Edit: Maybe we can also implement a flow without throwing errors.
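The two points above can be sketched as follows. This is a minimal, hypothetical illustration, not Theia's actual API: `CancellationError`, `isCancellationError`, and `guardedCompletion` are stand-in names, and the `log` callback stands in for the real logger. The idea is that cancellation is an expected outcome that resolves quietly (so the progress indicator can close) and is logged at DEBUG, while genuine errors are still surfaced.

```typescript
// Stand-in for a cancellation error type; Theia signals cancellation
// with an Error whose message is 'Canceled'.
class CancellationError extends Error {
    constructor() {
        super('Canceled');
        this.name = 'CancellationError';
    }
}

// Hypothetical guard: detect cancellation so it is not treated as a failure.
function isCancellationError(err: unknown): boolean {
    return err instanceof CancellationError ||
        (err instanceof Error && err.message === 'Canceled');
}

// Wrap a completion request so cancellation is handled, not rethrown:
// the progress UI can then always terminate.
async function guardedCompletion(
    run: () => Promise<string>,
    log: (level: string, msg: string) => void
): Promise<string | undefined> {
    try {
        return await run();
    } catch (err) {
        if (isCancellationError(err)) {
            // Expected outcome: note it at DEBUG level and resolve quietly.
            log('debug', 'Code completion request was cancelled.');
            return undefined;
        }
        // Real failures are still logged and propagated.
        log('error', String(err));
        throw err;
    }
}
```

With a wrapper like this, the caller never sees an uncaught "Canceled" exception, so the duplicate ERROR log entries disappear and the progress message can be dismissed in a `finally` block.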
The cancel of a request was not working correctly. With the changes the cancelToken is better taken into account.
Force-pushed from 2eed0cc to 914bc53
Works fine! I only have one small suggestion for an edge case.
error(id: string, error: Error): void {
    if (!this.streams.has(id)) {
        throw new Error('Somehow we got a callback for a non existing stream!');
    }
    const streamState = this.streams.get(id)!;
    streamState.reject?.(error);
}
In the `send` callback, which is also called from the backend, we construct the stream state if it does not exist yet. I would suggest doing the same here.
If the first feedback of a stream is an error, that error could arrive here before the corresponding stream state was created, since we don't control the async flows in the backend. We don't want to throw in that case, but handle it like all the other cases.
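The suggestion can be sketched like this. The types below are simplified stand-ins (the real code lives in `FrontendLanguageModelRegistryImpl`, and `register` is a hypothetical hook standing in for the actual promise wiring); the point is only that `error` lazily creates the stream state instead of throwing on an unknown id:

```typescript
interface StreamState {
    reject?: (err: Error) => void;
}

class StreamRegistry {
    private streams = new Map<string, StreamState>();

    // Mirrors the `send` callback in the PR: create the stream state on
    // first contact, since backend messages may arrive in any order.
    private getOrCreate(id: string): StreamState {
        let state = this.streams.get(id);
        if (!state) {
            state = {};
            this.streams.set(id, state);
        }
        return state;
    }

    // Hypothetical registration hook standing in for the real promise wiring.
    register(id: string, reject: (err: Error) => void): void {
        this.getOrCreate(id).reject = reject;
    }

    // Reviewer's suggestion: no throw for an unknown id; create the state
    // lazily, exactly as `send` does.
    error(id: string, error: Error): void {
        const streamState = this.getOrCreate(id);
        streamState.reject?.(error);
    }
}
```

With this shape, an error that races ahead of stream creation is simply recorded against a freshly created state instead of crashing the callback.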
@@ -286,6 +292,14 @@ export class FrontendLanguageModelRegistryImpl
        throw new Error(`Could not find a tool for ${toolId}!`);
    }

    error(id: string, error: Error): void {
Please add the same comment here as for `send` and `toolCall`.
What it does
Cancelling a request was not working correctly. With these changes, the cancelToken is better taken into account.
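One way to take a cancel token into account, sketched under assumptions: `CancellationToken` below is a minimal stand-in for Theia's interface, and the bridge converts it into a standard `AbortSignal`, which HTTP clients (for example, the official `openai` Node client via its request options) can use to stop an in-flight streaming request:

```typescript
// Minimal stand-in for Theia's CancellationToken interface.
interface CancellationToken {
    isCancellationRequested: boolean;
    onCancellationRequested(listener: () => void): void;
}

// Bridge a cancellation token to an AbortSignal so the underlying HTTP
// request is actually torn down when the user cancels, instead of the
// model continuing to generate tokens in the background.
function toAbortSignal(token: CancellationToken): AbortSignal {
    const controller = new AbortController();
    if (token.isCancellationRequested) {
        // Already cancelled before the request started.
        controller.abort();
    } else {
        token.onCancellationRequested(() => controller.abort());
    }
    return controller.signal;
}
```

The returned signal would then be passed along with the request, so cancellation propagates all the way to the network layer rather than only being checked between chunks.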
How to test
Try the inline completion of the IDE AI feature. Without the fix, cancelled requests kept the underlying language model generating responses, which made the UI very slow to respond.
With the fix this is no longer the case for the OpenAI model.
Follow-ups
Improve cancel handling of all other language models.
Breaking changes
Attribution