sample: Update chainlit sample with streaming (#5304)
* Separate agent and team examples
* Add streaming output
* Refactor to better use the chainlit API
* Removed the user proxy example -- this needs a bit more work to improve the presentation on the Chainlit interface.

---------

Co-authored-by: Victor Dibia <[email protected]>
ekzhu and victordibia authored Jan 31, 2025
1 parent d5007ad commit 88c895f
Showing 8 changed files with 272 additions and 121 deletions.
118 changes: 37 additions & 81 deletions python/samples/agentchat_chainlit/README.md
@@ -1,108 +1,64 @@
# Building a Multi-Agent Application with AutoGen and Chainlit

In this sample, we will demonstrate how to build a simple chat interface that
interacts with an [AgentChat](https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/index.html)
agent or a team, using [Chainlit](https://github.com/Chainlit/chainlit),
with support for streaming messages.

![AgentChat](docs/chainlit_autogen.png)

## Installation

To run this sample, you will need to install the following packages:

```shell
pip install -U chainlit autogen-agentchat autogen-ext[openai] pyyaml
```
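
Note: in some shells (such as zsh) the bracketed extra must be quoted, so the same command would be written as:

```shell
pip install -U chainlit autogen-agentchat "autogen-ext[openai]" pyyaml
```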

To use other model providers, you will need to install a different extra
for the `autogen-ext` package.
See the [Models documentation](https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/models.html) for more information.

## Model Configuration

Create a configuration file named `model_config.yaml` to configure the model
you want to use. Use `model_config_template.yaml` as a template.
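
For example, a minimal OpenAI configuration (matching the first entry in `model_config_template.yaml` below) looks like this; replace the placeholder with your own key:

```yaml
provider: autogen_ext.models.openai.OpenAIChatCompletionClient
config:
  model: gpt-4o
  api_key: REPLACE_WITH_YOUR_API_KEY
```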

## Running the Agent Sample

The first sample demonstrates how to interact with a single `AssistantAgent`
from the chat interface.

```shell
chainlit run app_agent.py -h
```

The chat interface will be available at `http://localhost:8000` by default.
You can use one of the starters. For example, ask "What is the weather in Seattle?"

The agent will respond by first using the tools provided and then reflecting
on the result of the tool execution.
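
This behavior is configured when the agent is constructed. Here is a condensed excerpt from `app_agent.py` (shown in full below); `get_weather` and `model_client` are defined in that file:

```python
assistant = AssistantAgent(
    name="assistant",
    tools=[get_weather],  # The tool the agent can call.
    model_client=model_client,
    model_client_stream=True,  # Stream tokens from the model client.
    reflect_on_tool_use=True,  # Make a follow-up model call to reflect on the tool result.
)
```
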
## Running the Team Sample

The second sample demonstrates how to interact with a team of agents from the
chat interface.

```shell
chainlit run app_team.py -h
```
You can use one of the starters. For example, ask "Write a poem about winter."

The team is a `RoundRobinGroupChat`, so each agent will respond in turn.
There are two agents in the team: one is instructed to be generally helpful
and the other is instructed to be a critic and provide feedback.
The two agents will respond in round-robin fashion until
'APPROVE' is mentioned by the critic agent.
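
In `app_team.py` (shown in full below), this comes from a termination condition that watches only the critic's messages:

```python
termination = TextMentionTermination("APPROVE", sources=["critic"])
group_chat = RoundRobinGroupChat([assistant, critic], termination_condition=termination)
```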

## Next Steps (Extra Credit)

There are a few ways you can extend this example:

- Try other [agents](https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/agents.html).
- Try other [team](https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/teams.html) types beyond the `RoundRobinGroupChat`.
- Explore custom agents that send multimodal messages.
38 changes: 0 additions & 38 deletions python/samples/agentchat_chainlit/app.py

This file was deleted.

68 changes: 68 additions & 0 deletions python/samples/agentchat_chainlit/app_agent.py
@@ -0,0 +1,68 @@
from typing import List, cast

import chainlit as cl
import yaml
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.base import Response
from autogen_agentchat.messages import ModelClientStreamingChunkEvent, TextMessage
from autogen_core import CancellationToken
from autogen_core.models import ChatCompletionClient


@cl.set_starters  # type: ignore
async def set_starts() -> List[cl.Starter]:
    return [
        cl.Starter(
            label="Greetings",
            message="Hello! What can you help me with today?",
        ),
        cl.Starter(
            label="Weather",
            message="Find the weather in New York City.",
        ),
    ]


@cl.step(type="tool")  # type: ignore
async def get_weather(city: str) -> str:
    return f"The weather in {city} is 73 degrees and Sunny."


@cl.on_chat_start  # type: ignore
async def start_chat() -> None:
    # Load model configuration and create the model client.
    with open("model_config.yaml", "r") as f:
        model_config = yaml.safe_load(f)
    model_client = ChatCompletionClient.load_component(model_config)

    # Create the assistant agent with the get_weather tool.
    assistant = AssistantAgent(
        name="assistant",
        tools=[get_weather],
        model_client=model_client,
        system_message="You are a helpful assistant",
        model_client_stream=True,  # Enable model client streaming.
        reflect_on_tool_use=True,  # Reflect on tool use.
    )

    # Set the assistant agent in the user session.
    cl.user_session.set("prompt_history", "")  # type: ignore
    cl.user_session.set("agent", assistant)  # type: ignore


@cl.on_message  # type: ignore
async def chat(message: cl.Message) -> None:
    # Get the assistant agent from the user session.
    agent = cast(AssistantAgent, cl.user_session.get("agent"))  # type: ignore
    # Construct the response message.
    response = cl.Message(content="")
    async for msg in agent.on_messages_stream(
        messages=[TextMessage(content=message.content, source="user")],
        cancellation_token=CancellationToken(),
    ):
        if isinstance(msg, ModelClientStreamingChunkEvent):
            # Stream the model client response to the user.
            await response.stream_token(msg.content)
        elif isinstance(msg, Response):
            # Done streaming the model client response. Send the message.
            await response.send()
100 changes: 100 additions & 0 deletions python/samples/agentchat_chainlit/app_team.py
@@ -0,0 +1,100 @@
from typing import List, cast

import chainlit as cl
import yaml
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.base import TaskResult
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.messages import ModelClientStreamingChunkEvent, TextMessage
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_core import CancellationToken
from autogen_core.models import ChatCompletionClient


@cl.on_chat_start  # type: ignore
async def start_chat() -> None:
    # Load model configuration and create the model client.
    with open("model_config.yaml", "r") as f:
        model_config = yaml.safe_load(f)
    model_client = ChatCompletionClient.load_component(model_config)

    # Create the assistant agent.
    assistant = AssistantAgent(
        name="assistant",
        model_client=model_client,
        system_message="You are a helpful assistant.",
        model_client_stream=True,  # Enable model client streaming.
    )

    # Create the critic agent.
    critic = AssistantAgent(
        name="critic",
        model_client=model_client,
        system_message="You are a critic. Provide constructive feedback. "
        "Respond with 'APPROVE' if your feedback has been addressed.",
        model_client_stream=True,  # Enable model client streaming.
    )

    # Termination condition.
    termination = TextMentionTermination("APPROVE", sources=["critic"])

    # Chain the assistant and critic agents using RoundRobinGroupChat.
    group_chat = RoundRobinGroupChat([assistant, critic], termination_condition=termination)

    # Set the team in the user session.
    cl.user_session.set("prompt_history", "")  # type: ignore
    cl.user_session.set("team", group_chat)  # type: ignore


@cl.set_starters  # type: ignore
async def set_starts() -> List[cl.Starter]:
    return [
        cl.Starter(
            label="Poem Writing",
            message="Write a poem about the ocean.",
        ),
        cl.Starter(
            label="Story Writing",
            message="Write a story about a detective solving a mystery.",
        ),
        cl.Starter(
            label="Write Code",
            message="Write a function that merges two lists of numbers into a single sorted list.",
        ),
    ]


@cl.on_message  # type: ignore
async def chat(message: cl.Message) -> None:
    # Get the team from the user session.
    team = cast(RoundRobinGroupChat, cl.user_session.get("team"))  # type: ignore
    # Streaming response message.
    streaming_response: cl.Message | None = None
    # Stream the messages from the team.
    async for msg in team.run_stream(
        task=[TextMessage(content=message.content, source="user")],
        cancellation_token=CancellationToken(),
    ):
        if isinstance(msg, ModelClientStreamingChunkEvent):
            # Stream the model client response to the user.
            if streaming_response is None:
                # Start a new streaming response.
                streaming_response = cl.Message(content="", author=msg.source)
            await streaming_response.stream_token(msg.content)
        elif streaming_response is not None:
            # Done streaming the model client response.
            # We can skip the current message as it is just the complete message
            # of the streaming response.
            await streaming_response.send()
            # Reset the streaming response so we won't enter this block again
            # until the next streaming response is complete.
            streaming_response = None
        elif isinstance(msg, TaskResult):
            # Send the task termination message.
            final_message = "Task terminated. "
            if msg.stop_reason:
                final_message += msg.stop_reason
            await cl.Message(content=final_message).send()
        else:
            # Skip all other message types.
            pass
4 changes: 4 additions & 0 deletions python/samples/agentchat_chainlit/model_config.yaml
@@ -0,0 +1,4 @@
# Use OpenAI with key
provider: autogen_ext.models.openai.OpenAIChatCompletionClient
config:
  model: gpt-4o
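
Note: `api_key` is omitted here, so the OpenAI client will typically fall back to the `OPENAI_API_KEY` environment variable. Use `model_config_template.yaml` (below) as a starting point if you want to set the key explicitly.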
26 changes: 26 additions & 0 deletions python/samples/agentchat_chainlit/model_config_template.yaml
@@ -0,0 +1,26 @@
# Use OpenAI with key
provider: autogen_ext.models.openai.OpenAIChatCompletionClient
config:
  model: gpt-4o
  api_key: REPLACE_WITH_YOUR_API_KEY

# Use Azure OpenAI with key
# provider: autogen_ext.models.openai.AzureOpenAIChatCompletionClient
# config:
#   model: gpt-4o
#   azure_endpoint: https://{your-custom-endpoint}.openai.azure.com/
#   azure_deployment: {your-azure-deployment}
#   api_version: {your-api-version}
#   api_key: REPLACE_WITH_YOUR_API_KEY

# Use Azure OpenAI with AD token provider.
# provider: autogen_ext.models.openai.AzureOpenAIChatCompletionClient
# config:
#   model: gpt-4o
#   azure_endpoint: https://{your-custom-endpoint}.openai.azure.com/
#   azure_deployment: {your-azure-deployment}
#   api_version: {your-api-version}
#   azure_ad_token_provider:
#     provider: autogen_ext.auth.azure.AzureTokenProvider
#     config:
#       provider_kind: DefaultAzureCredential
#       scopes:
#         - https://cognitiveservices.azure.com/.default
2 changes: 0 additions & 2 deletions python/samples/agentchat_chainlit/requirements.txt

This file was deleted.
