Flush console output after every message. #5415

Merged (1 commit, Feb 7, 2025)

Changes from all commits:
python/packages/autogen-agentchat/src/autogen_agentchat/ui/_console.py

@@ -75,8 +75,8 @@
         self.input_events[request_id] = event


-def aprint(output: str, end: str = "\n") -> Awaitable[None]:
-    return asyncio.to_thread(print, output, end=end)
+def aprint(output: str, end: str = "\n", flush: bool = False) -> Awaitable[None]:
+    return asyncio.to_thread(print, output, end=end, flush=flush)


 async def Console(
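
The remaining hunks thread the new `flush=True` through the call sites. For context: `asyncio.to_thread(print, ...)` runs the blocking `print` on a worker thread, and CPython block-buffers stdout whenever it is not attached to a TTY, so text printed with `end=""` can sit in the buffer indefinitely. Below is a minimal sketch of the symptom the patch addresses, using a stand-in `aprint` with the same shape as the helper above; the sleep and the pipe scenario are illustrative assumptions, not part of the PR:

import asyncio


def aprint(output: str, end: str = "\n", flush: bool = False):
    # Same shape as the patched helper: run the blocking print() off the event loop.
    return asyncio.to_thread(print, output, end=end, flush=flush)


async def main() -> None:
    # When stdout is a pipe (e.g. `python demo.py | cat`), it is block-buffered:
    # without flush=True this text may not appear until the process exits.
    await aprint("thinking...", end="")
    await asyncio.sleep(2)  # simulate slow agent work while the text sits buffered
    # flush=True pushes the pending bytes through immediately (including the
    # earlier buffered text).
    await aprint(" done", end="\n", flush=True)


asyncio.run(main())
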
@@ -126,7 +126,7 @@
f"Total completion tokens: {total_usage.completion_tokens}\n"
f"Duration: {duration:.2f} seconds\n"
)
await aprint(output, end="")
await aprint(output, end="", flush=True)

             # mypy ignore
             last_processed = message  # type: ignore
@@ -141,7 +141,7 @@
output += f"[Prompt tokens: {message.chat_message.models_usage.prompt_tokens}, Completion tokens: {message.chat_message.models_usage.completion_tokens}]\n"
total_usage.completion_tokens += message.chat_message.models_usage.completion_tokens
total_usage.prompt_tokens += message.chat_message.models_usage.prompt_tokens
await aprint(output, end="")
await aprint(output, end="", flush=True)

             # Print summary.
             if output_stats:
@@ -156,7 +156,7 @@
f"Total completion tokens: {total_usage.completion_tokens}\n"
f"Duration: {duration:.2f} seconds\n"
)
await aprint(output, end="")
await aprint(output, end="", flush=True)

             # mypy ignore
             last_processed = message  # type: ignore
@@ -169,23 +169,24 @@
             message = cast(AgentEvent | ChatMessage, message)  # type: ignore
             if not streaming_chunks:
                 # Print message sender.
-                await aprint(f"{'-' * 10} {message.source} {'-' * 10}", end="\n")
+                await aprint(f"{'-' * 10} {message.source} {'-' * 10}", end="\n", flush=True)
             if isinstance(message, ModelClientStreamingChunkEvent):
                 await aprint(message.content, end="")
                 streaming_chunks.append(message.content)
             else:
                 if streaming_chunks:
                     streaming_chunks.clear()
                     # Chunked messages are already printed, so we just print a newline.
-                    await aprint("", end="\n")
+                    await aprint("", end="\n", flush=True)
                 else:
                     # Print message content.
-                    await aprint(_message_to_str(message, render_image_iterm=render_image_iterm), end="\n")
+                    await aprint(_message_to_str(message, render_image_iterm=render_image_iterm), end="\n", flush=True)
                 if message.models_usage:
                     if output_stats:
                         await aprint(
                             f"[Prompt tokens: {message.models_usage.prompt_tokens}, Completion tokens: {message.models_usage.completion_tokens}]",
                             end="\n",
+                            flush=True,
                         )
                     total_usage.completion_tokens += message.models_usage.completion_tokens
                     total_usage.prompt_tokens += message.models_usage.prompt_tokens
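
For a caller-side check, the sketch below feeds `Console` a hand-rolled stream while stdout is piped; with the `flush=True` calls above, each message should appear as soon as it is yielded rather than in one burst at exit. The import paths, the `TextMessage` construction, and the closing `TaskResult` (which `Console` expects before the stream ends) reflect the v0.4-era autogen-agentchat API as best understood here, and are assumptions rather than this PR's test code:

import asyncio

from autogen_agentchat.base import TaskResult
from autogen_agentchat.messages import TextMessage
from autogen_agentchat.ui import Console


async def fake_stream():
    # Stand-in for an agent's run_stream(): a few chat messages, then a TaskResult.
    messages = [TextMessage(content=f"step {i}", source="worker") for i in range(3)]
    for m in messages:
        yield m
        await asyncio.sleep(0.5)  # space the messages out so flushing is visible
    yield TaskResult(messages=messages, stop_reason="done")


async def main() -> None:
    # Try `python demo.py | cat`: each message should print as it arrives.
    await Console(fake_stream())


asyncio.run(main())
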