SK assistant for agentchat #5134
base: main
Conversation
Codecov Report

```
@@           Coverage Diff           @@
##             main    #5134   +/-   ##
=======================================
+ Coverage   75.41%   75.60%   +0.18%
=======================================
  Files         171      173       +2
  Lines       10467    10622     +155
=======================================
+ Hits         7894     8031     +137
- Misses       2573     2591      +18
```
python/packages/autogen-ext/src/autogen_ext/agents/semantic_kernel/_sk_assistant_agent.py
```python
    kernel=self._kernel,
)
# Convert SK's list of responses into a single final text
assistant_reply = "\n".join(r.content for r in sk_responses if r.content)
```
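The join above can be exercised with a plain-Python stand-in for SK's response objects (the `Reply` class here is a hypothetical placeholder for `ChatMessageContent`, not the real SK type):

```python
from dataclasses import dataclass


@dataclass
class Reply:
    """Hypothetical stand-in for SK's ChatMessageContent."""
    content: str


sk_responses = [Reply("Hello"), Reply(""), Reply("world")]
# Responses with empty content are filtered out before joining.
assistant_reply = "\n".join(r.content for r in sk_responses if r.content)
print(assistant_reply)  # → "Hello\nworld"
```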
For SK's list of responses, are they separate messages or a single message? Is it possible for this list to contain messages with tool calls?
It returns a list:

```python
async def get_chat_message_contents(
    self,
    chat_history: "ChatHistory",
    settings: "PromptExecutionSettings",
    **kwargs: Any,
) -> list["ChatMessageContent"]:
```
But it can return tool calls; good catch. Whether it does depends on the prompt execution settings. In the model adapter we force Semantic Kernel to return tool calls in order to keep the same contract as the chat completion client. Here in the sample I'm using Semantic Kernel to automatically execute the function, but this is not forced in the configuration. Do you think we should force it, or allow it to return tool calls? We may need to be more explicit about this behavior in the docs and maybe log some warnings.
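One way to make the "force it" option explicit is a small guard before calling SK. This is only a sketch: `FakeSettings` and `ensure_auto_invoke` are made-up names standing in for the real `PromptExecutionSettings` machinery, not the actual API.

```python
from dataclasses import dataclass


@dataclass
class FakeSettings:
    """Hypothetical stand-in for PromptExecutionSettings; the real class
    configures function calling via its function-choice behavior."""
    auto_invoke: bool


def ensure_auto_invoke(settings: FakeSettings) -> None:
    # Fail fast instead of surfacing raw tool calls the agent cannot process.
    if not settings.auto_invoke:
        raise ValueError(
            "Execution settings must auto-invoke functions; "
            "raw tool calls are not handled by this agent."
        )


ensure_auto_invoke(FakeSettings(auto_invoke=True))  # passes silently
```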
@ekzhu , do you have an idea of how you want to handle function calls? The output is similar to what we would get from the client. We have some options.
- Override the prompt setting to execute it
- Execute it manually by calling the kernel function (need to look into it, but looks possible)
- Return a tool call message / handoff? (unsure how handoffs work and what type of agent configuration would be needed).
Thoughts?
Ended up not going with any of the suggestions above. If the execution settings are not configured to auto-invoke, we throw an exception.
There is also a validation that throws a ValueError with a descriptive message in case a function call message is returned by the model client, but this is not expected behavior now.
Regarding the response message being a concatenation of multiple messages: do you think it should be a sequence of event messages (e.g., ToolCallRequestedEvent) followed by a final response, as in the AssistantAgent?
I asked Evan from the SK team and he said it actually only returns a single message in the list; the list format was added just in case. We could add an assert/exception handler to check that it is a single element and then update the code to only process the expected message. That would make the processing more explicit while giving a clear indication if this expectation is broken in the future.
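That check could look like the sketch below, with the response list modeled as plain Python values; the helper name `single_reply` is made up for illustration.

```python
def single_reply(sk_responses: list):
    # SK currently returns exactly one message in the list; assert that
    # expectation so a future SK change fails loudly instead of silently.
    if len(sk_responses) != 1:
        raise RuntimeError(
            f"Expected exactly one SK response, got {len(sk_responses)}"
        )
    return sk_responses[0]


print(single_reply(["final answer"]))  # → "final answer"
```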
python/packages/autogen-ext/src/autogen_ext/agents/semantic_kernel/_sk_assistant_agent.py
```
execution_settings (PromptExecutionSettings, optional):
    Optional prompt execution settings to override defaults.

Example usage:
```
Requiring a Bing search API key adds a barrier to entry. I think we should have another example before this one showing the use of basic function-based tools, like a calculator.
Also, we need to reference the relevant Semantic Kernel documentation whenever we mention a new SK concept.
```python
@property
def produced_message_types(self) -> Sequence[type[ChatMessage]]:
    return [TextMessage]
```
Looks like the agent can't produce HandoffMessage; in that case we need to add a TODO and reference a new issue, similar to #5496.
Are there any guides on how to implement handoffs? I can add support but I'm not familiar with the conditions for when handoffs should occur.
Handoffs are basically function calls that models can make. We create the functions under the hood, while the user provides either just strings for the target agents' names or Handoff objects. The AssistantAgent implementation shows how handoff is implemented. You can see the flow diagram: https://microsoft.github.io/autogen/stable/reference/python/autogen_agentchat.agents.html#autogen_agentchat.agents.AssistantAgent
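Loosely, each handoff target becomes a synthetic tool the model can call. The sketch below is illustrative only: the factory name, tool naming scheme, and return string are assumptions, not the actual AssistantAgent internals.

```python
def make_handoff_tool(target: str):
    """Create a synthetic function-call tool that hands off to `target`."""
    def transfer() -> str:
        # The returned string is what the model sees as the tool result.
        return f"Transferred to {target}."
    transfer.__name__ = f"transfer_to_{target}"
    return transfer


# One tool per target agent name the user provided.
tools = {t.__name__: t for t in (make_handoff_tool("planner"), make_handoff_tool("coder"))}
print(sorted(tools))  # → ['transfer_to_coder', 'transfer_to_planner']
```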
```python
):
    for sk_message in sk_message_list:
        # Check for function calls
        if any(isinstance(item, FunctionCallContent) for item in sk_message.items):
```
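The `any(...)` check can be exercised with minimal stand-ins for SK's content types; all three classes below are hypothetical placeholders, not the real `semantic_kernel` types.

```python
class FunctionCallContent:
    """Placeholder for semantic_kernel's FunctionCallContent."""


class TextContent:
    """Placeholder for a plain text content item."""


class Message:
    """Placeholder message holding a list of content items."""
    def __init__(self, items):
        self.items = items


def has_function_call(message: Message) -> bool:
    return any(isinstance(item, FunctionCallContent) for item in message.items)


print(has_function_call(Message([TextContent()])))                         # → False
print(has_function_call(Message([TextContent(), FunctionCallContent()])))  # → True
```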
I am not sure if we should raise here. Because even though the execution setting is auto, we can still get function call content as part of the message list.
I added the exception there because it shouldn't happen given the constraints: if the execution setting is auto, Semantic Kernel handles function call content internally. It would cause an exception either way, since the code is not prepared to process function call content; I just thought an explicit message would be easier to debug.
I tried the sample in the API doc and actually got this error when I used OpenAI model.
hi folks - @lspinheiro this is exactly the feature we are after - is there any way I can help to bring this one forward? Cheers!
Hi @jsburckhardt, I'm just tidying up the PR and hoping to get it ready for review again today. If you are familiar with Semantic Kernel, feedback on the implementation would be a great way to help.
@ekzhu I believe all comments have been addressed.
Why are these changes needed?
Adds an agentchat chat agent based on Semantic Kernel that allows integration of most of the Semantic Kernel ecosystem.
Still need to update message conversion to handle non-text types, add docstrings, etc., but the basics are there.
Related issue number
Closes #4741
Checks