From 2f8241d490241b46a4a7838b50c32cab93b2de04 Mon Sep 17 00:00:00 2001
From: Jack Gerrits
Date: Mon, 30 Sep 2024 15:14:48 -0400
Subject: [PATCH 01/35] Fix line endings of test/browser_utils/test_files/test_serp.html (#3586)

---
 test/browser_utils/test_files/test_serp.html | 264 +++++++++----------
 1 file changed, 132 insertions(+), 132 deletions(-)

diff --git a/test/browser_utils/test_files/test_serp.html b/test/browser_utils/test_files/test_serp.html
index a5d89a22d9e5..1daf5b7fb9e8 100644
--- a/test/browser_utils/test_files/test_serp.html
+++ b/test/browser_utils/test_files/test_serp.html
@@ -1,147 +1,147 @@
-Microsoft wikipedia - Search
Copilot
Your everyday AI companion
About 310,000 results
Open links in new tab
  1. History of Microsoft - Wikipedia

  2. Microsoft Windows - Wikipedia

    WEBMicrosoft Windows is a product line of proprietary graphical operating systems developed and marketed by Microsoft. It is grouped into families …

    • Developer: Microsoft
    • Latest release: 22H2 (10.0.22621.2428) (October 10, 2023; 0 days ago) [±]
    • Initial release: November 20, 1985; 37 years ago
  3. Microsoft - Simple English Wikipedia, the free encyclopedia

  4. People also ask
    Microsoft was founded by Bill Gates and Paul Allen on April 4, 1975, to develop and sell BASIC interpreters for the Altair 8800. It rose to dominate the personal computer operating system market with MS-DOS in the mid-1980s, followed by Windows.
    en.wikipedia.org/wiki/Microsoft
    Its best-known software products are the Windows line of operating systems, the Microsoft Office suite, and the Internet Explorer and Edge web browsers. Its flagship hardware products are the Xbox video game consoles and the Microsoft Surface lineup of touchscreen personal computers.
    en.wikipedia.org/wiki/Microsoft
    Microsoft offers a variety of services, such as Azure, Bing, LinkedIn, Yammer, MSDN, Microsoft 365, OneDrive, Outlook.com, GitHub, TechNet, Pay, Microsoft Store, Windows Update, Xbox Game Pass and Xbox network. How to write a complaint about Microsoft?
    www.complaintsboard.com/microsoft-b107628/contacts
    Microsoft Corporation is an American multinational technology company with headquarters in Redmond, Washington. It develops, manufactures, licenses, supports, and sells computer software, consumer electronics, personal computers, and related services.
    www.definitions.net/definition/Microsoft
  5. About Microsoft | Mission and Vision | Microsoft

    WEBOur mission is to empower every person and every organization on the planet to achieve more. Learn more about our company, who we are and what we value.

  6. Timeline of Microsoft - Wikipedia

  7. Microsoft Corporation | History, Products, & Facts

    WEB2 days ago · Microsoft Corporation is a leading developer of computer software, operating systems, cloud computing, and artificial intelligence applications. The company also produces its own line of hybrid tablet …

  8. A Short History of Microsoft - ThoughtCo

    WEBJan 10, 2020 · Learn how Microsoft was founded by two childhood friends and became a global leader in software and hardware. Explore the milestones of Microsoft products, from MS-DOS and Windows to Office …

  9. Microsoft - Wikiwand

  10. Outline of Microsoft - Wikipedia

  11.  
Copilot
Your everyday AI companion
About 310,000 results
Open links in new tab
  1. History of Microsoft - Wikipedia

  2. Microsoft Windows - Wikipedia

    WEBMicrosoft Windows is a product line of proprietary graphical operating systems developed and marketed by Microsoft. It is grouped into families …

    • Developer: Microsoft
    • Latest release: 22H2 (10.0.22621.2428) (October 10, 2023; 0 days ago) [±]
    • Initial release: November 20, 1985; 37 years ago
  3. Microsoft - Simple English Wikipedia, the free encyclopedia

  4. People also ask
    Microsoft was founded by Bill Gates and Paul Allen on April 4, 1975, to develop and sell BASIC interpreters for the Altair 8800. It rose to dominate the personal computer operating system market with MS-DOS in the mid-1980s, followed by Windows.
    en.wikipedia.org/wiki/Microsoft
    Its best-known software products are the Windows line of operating systems, the Microsoft Office suite, and the Internet Explorer and Edge web browsers. Its flagship hardware products are the Xbox video game consoles and the Microsoft Surface lineup of touchscreen personal computers.
    en.wikipedia.org/wiki/Microsoft
    Microsoft offers a variety of services, such as Azure, Bing, LinkedIn, Yammer, MSDN, Microsoft 365, OneDrive, Outlook.com, GitHub, TechNet, Pay, Microsoft Store, Windows Update, Xbox Game Pass and Xbox network. How to write a complaint about Microsoft?
    www.complaintsboard.com/microsoft-b107628/contacts
    Microsoft Corporation is an American multinational technology company with headquarters in Redmond, Washington. It develops, manufactures, licenses, supports, and sells computer software, consumer electronics, personal computers, and related services.
    www.definitions.net/definition/Microsoft
  5. About Microsoft | Mission and Vision | Microsoft

    WEBOur mission is to empower every person and every organization on the planet to achieve more. Learn more about our company, who we are and what we value.

  6. Timeline of Microsoft - Wikipedia

  7. Microsoft Corporation | History, Products, & Facts

    WEB2 days ago · Microsoft Corporation is a leading developer of computer software, operating systems, cloud computing, and artificial intelligence applications. The company also produces its own line of hybrid tablet …

  8. A Short History of Microsoft - ThoughtCo

    WEBJan 10, 2020 · Learn how Microsoft was founded by two childhood friends and became a global leader in software and hardware. Explore the milestones of Microsoft products, from MS-DOS and Windows to Office …

  9. Microsoft - Wikiwand

  10. Outline of Microsoft - Wikipedia

  11.  
From b8d749daac066df2abc5c3553bd171d760b1c1fc Mon Sep 17 00:00:00 2001 From: Rob Date: Mon, 30 Sep 2024 15:15:19 -0400 Subject: [PATCH 02/35] async user hook support added (#3583) * feat: adds async hooks, across all types, and a hookable check * fix: adds test to other hook types --- autogen/agentchat/conversable_agent.py | 107 ++++++++++++- test/agentchat/test_conversable_agent.py | 195 ++++++++++++++++++++++- 2 files changed, 296 insertions(+), 6 deletions(-) diff --git a/autogen/agentchat/conversable_agent.py b/autogen/agentchat/conversable_agent.py index e19cbd56de2b..3a21c0f7d0e4 100644 --- a/autogen/agentchat/conversable_agent.py +++ b/autogen/agentchat/conversable_agent.py @@ -7,7 +7,7 @@ import re import warnings from collections import defaultdict -from typing import Any, Callable, Dict, List, Literal, Optional, Tuple, Type, TypeVar, Union +from typing import Any, Callable, Coroutine, Dict, List, Literal, Optional, Tuple, Type, TypeVar, Union from openai import BadRequestError @@ -247,10 +247,13 @@ def __init__( # Registered hooks are kept in lists, indexed by hookable method, to be called in their order of registration. # New hookable methods should be added to this list as required to support new agent capabilities. - self.hook_lists: Dict[str, List[Callable]] = { + self.hook_lists: Dict[str, List[Union[Callable, Callable[..., Coroutine]]]] = { "process_last_received_message": [], + "a_process_last_received_message": [], "process_all_messages_before_reply": [], + "a_process_all_messages_before_reply": [], "process_message_before_send": [], + "a_process_message_before_send": [], } def _validate_llm_config(self, llm_config): @@ -680,11 +683,24 @@ def _process_message_before_send( """Process the message before sending it to the recipient.""" hook_list = self.hook_lists["process_message_before_send"] for hook in hook_list: + if inspect.iscoroutinefunction(hook): + continue message = hook( sender=self, message=message, recipient=recipient, silent=ConversableAgent._is_silent(self, silent) ) return message + async def _a_process_message_before_send( + self, message: Union[Dict, str], recipient: Agent, silent: bool + ) -> Union[Dict, str]: + """(async) Process the message before sending it to the recipient.""" + hook_list = self.hook_lists["a_process_message_before_send"] + for hook in hook_list: + if not inspect.iscoroutinefunction(hook): + continue + message = await hook(sender=self, message=message, recipient=recipient, silent=silent) + return message + def send( self, message: Union[Dict, str], @@ -774,7 +790,9 @@ async def a_send( Raises: ValueError: if the message can't be converted into a valid ChatCompletion message. """ - message = self._process_message_before_send(message, recipient, ConversableAgent._is_silent(self, silent)) + message = await self._a_process_message_before_send( + message, recipient, ConversableAgent._is_silent(self, silent) + ) # When the agent composes and sends the message, the role of the message is "assistant" # unless it's "function". valid = self._append_oai_message(message, "assistant", recipient, is_sending=True) @@ -2104,11 +2122,11 @@ async def a_generate_reply( # Call the hookable method that gives registered hooks a chance to process all messages. # Message modifications do not affect the incoming messages or self._oai_messages. 
- messages = self.process_all_messages_before_reply(messages) + messages = await self.a_process_all_messages_before_reply(messages) # Call the hookable method that gives registered hooks a chance to process the last message. # Message modifications do not affect the incoming messages or self._oai_messages. - messages = self.process_last_received_message(messages) + messages = await self.a_process_last_received_message(messages) for reply_func_tuple in self._reply_func_list: reply_func = reply_func_tuple["reply_func"] @@ -2786,6 +2804,19 @@ def register_hook(self, hookable_method: str, hook: Callable): assert hookable_method in self.hook_lists, f"{hookable_method} is not a hookable method." hook_list = self.hook_lists[hookable_method] assert hook not in hook_list, f"{hook} is already registered as a hook." + + # async hookable checks + expected_async = hookable_method.startswith("a_") + hook_is_async = inspect.iscoroutinefunction(hook) + if expected_async != hook_is_async: + context_type = "asynchronous" if expected_async else "synchronous" + warnings.warn( + f"Hook '{hook.__name__}' is {'asynchronous' if hook_is_async else 'synchronous'}, " + f"but it's being registered in a {context_type} context ('{hookable_method}'). " + "Ensure the hook matches the expected execution context.", + UserWarning, + ) + hook_list.append(hook) def process_all_messages_before_reply(self, messages: List[Dict]) -> List[Dict]: @@ -2800,9 +2831,28 @@ def process_all_messages_before_reply(self, messages: List[Dict]) -> List[Dict]: # Call each hook (in order of registration) to process the messages. processed_messages = messages for hook in hook_list: + if inspect.iscoroutinefunction(hook): + continue processed_messages = hook(processed_messages) return processed_messages + async def a_process_all_messages_before_reply(self, messages: List[Dict]) -> List[Dict]: + """ + Calls any registered capability hooks to process all messages, potentially modifying the messages. + """ + hook_list = self.hook_lists["a_process_all_messages_before_reply"] + # If no hooks are registered, or if there are no messages to process, return the original message list. + if len(hook_list) == 0 or messages is None: + return messages + + # Call each hook (in order of registration) to process the messages. + processed_messages = messages + for hook in hook_list: + if not inspect.iscoroutinefunction(hook): + continue + processed_messages = await hook(processed_messages) + return processed_messages + def process_last_received_message(self, messages: List[Dict]) -> List[Dict]: """ Calls any registered capability hooks to use and potentially modify the text of the last message, @@ -2836,6 +2886,8 @@ def process_last_received_message(self, messages: List[Dict]) -> List[Dict]: # Call each hook (in order of registration) to process the user's message. processed_user_content = user_content for hook in hook_list: + if inspect.iscoroutinefunction(hook): + continue processed_user_content = hook(processed_user_content) if processed_user_content == user_content: @@ -2846,6 +2898,51 @@ def process_last_received_message(self, messages: List[Dict]) -> List[Dict]: messages[-1]["content"] = processed_user_content return messages + async def a_process_last_received_message(self, messages: List[Dict]) -> List[Dict]: + """ + Calls any registered capability hooks to use and potentially modify the text of the last message, + as long as the last message is not a function call or exit command. 
+ """ + + # If any required condition is not met, return the original message list. + hook_list = self.hook_lists["a_process_last_received_message"] + if len(hook_list) == 0: + return messages # No hooks registered. + if messages is None: + return None # No message to process. + if len(messages) == 0: + return messages # No message to process. + last_message = messages[-1] + if "function_call" in last_message: + return messages # Last message is a function call. + if "context" in last_message: + return messages # Last message contains a context key. + if "content" not in last_message: + return messages # Last message has no content. + + user_content = last_message["content"] + if not isinstance(user_content, str) and not isinstance(user_content, list): + # if the user_content is a string, it is for regular LLM + # if the user_content is a list, it should follow the multimodal LMM format. + return messages + if user_content == "exit": + return messages # Last message is an exit command. + + # Call each hook (in order of registration) to process the user's message. + processed_user_content = user_content + for hook in hook_list: + if not inspect.iscoroutinefunction(hook): + continue + processed_user_content = await hook(processed_user_content) + + if processed_user_content == user_content: + return messages # No hooks actually modified the user's message. + + # Replace the last user message with the expanded one. + messages = messages.copy() + messages[-1]["content"] = processed_user_content + return messages + def print_usage_summary(self, mode: Union[str, List[str]] = ["actual", "total"]) -> None: """Print the usage summary.""" iostream = IOStream.get_default() diff --git a/test/agentchat/test_conversable_agent.py b/test/agentchat/test_conversable_agent.py index c0d37a7bd7a1..3baa1fd02260 100755 --- a/test/agentchat/test_conversable_agent.py +++ b/test/agentchat/test_conversable_agent.py @@ -7,7 +7,7 @@ import sys import time import unittest -from typing import Any, Callable, Dict, Literal +from typing import Any, Callable, Dict, List, Literal from unittest.mock import MagicMock import pytest @@ -1230,6 +1230,46 @@ def my_summary(sender, recipient, summary_args): print(chat_res_play.summary) +def test_register_hook_async_sync(): + agent = ConversableAgent("test_agent", llm_config=False) + + # Synchronous hook for synchronous method + def sync_hook(): + pass + + agent.register_hook("process_all_messages_before_reply", sync_hook) + assert sync_hook in agent.hook_lists["process_all_messages_before_reply"] + + # Asynchronous hook for asynchronous method + async def async_hook(): + pass + + agent.register_hook("a_process_all_messages_before_reply", async_hook) + assert async_hook in agent.hook_lists["a_process_all_messages_before_reply"] + + # Synchronous hook for asynchronous method (should raise a warning) + with pytest.warns( + UserWarning, match="Hook 'sync_hook' is synchronous, but it's being registered in a asynchronous context" + ): + agent.register_hook("a_process_all_messages_before_reply", sync_hook) + assert sync_hook in agent.hook_lists["a_process_all_messages_before_reply"] + + # Asynchronous hook for synchronous method (should raise a warning) + with pytest.warns( + UserWarning, match="Hook 'async_hook' is asynchronous, but it's being registered in a synchronous context" + ): + agent.register_hook("process_all_messages_before_reply", async_hook) + assert async_hook in agent.hook_lists["process_all_messages_before_reply"] + + # Attempt to register the same hook twice (should raise 
an AssertionError) + with pytest.raises(AssertionError, match=r" is already registered as a hook"): + agent.register_hook("process_all_messages_before_reply", sync_hook) + + # Attempt to register a hook for a non-existent method (should raise an AssertionError) + with pytest.raises(AssertionError, match="non_existent_method is not a hookable method"): + agent.register_hook("non_existent_method", sync_hook) + + def test_process_before_send(): print_mock = unittest.mock.MagicMock() @@ -1250,6 +1290,159 @@ def send_to_frontend(sender, message, recipient, silent): print_mock.assert_called_once_with(message="hello") +@pytest.mark.asyncio +async def test_a_process_before_send(): + print_mock = unittest.mock.MagicMock() + + # Updated to include sender parameter + async def a_send_to_frontend(sender, message, recipient, silent): + # Simulating an async operation with asyncio.sleep + await asyncio.sleep(0.5) + + assert sender.name == "dummy_agent_1", "Sender is not the expected agent" + if not silent: + print(f"Message sent from {sender.name} to {recipient.name}: {message}") + print_mock(message=message) + return message + + dummy_agent_1 = ConversableAgent(name="dummy_agent_1", llm_config=False, human_input_mode="NEVER") + dummy_agent_2 = ConversableAgent(name="dummy_agent_2", llm_config=False, human_input_mode="NEVER") + dummy_agent_1.register_hook("a_process_message_before_send", a_send_to_frontend) + await dummy_agent_1.a_send("hello", dummy_agent_2) + print_mock.assert_called_once_with(message="hello") + dummy_agent_1.send("silent hello", dummy_agent_2, silent=True) + print_mock.assert_called_once_with(message="hello") + + +def test_process_last_received_message(): + + # Create a mock function to be used as a hook + def expand_message(message): + return message + " [Expanded]" + + dummy_agent_1 = ConversableAgent(name="dummy_agent_1", llm_config=False, human_input_mode="NEVER") + dummy_agent_1.register_hook("process_last_received_message", expand_message) + + # Normal message + messages = [ + {"role": "user", "content": "Hello"}, + {"role": "assistant", "content": "Hi there"}, + {"role": "user", "content": "How are you?"}, + ] + + processed_messages = messages.copy() + dummy_agent_1.generate_reply(messages=processed_messages, sender=None) + assert processed_messages[-2]["content"] == "Hi there" + assert processed_messages[-1]["content"] == "How are you? [Expanded]" + + +@pytest.mark.asyncio +async def test_a_process_last_received_message(): + + # Create a mock function to be used as a hook + async def expand_message(message): + await asyncio.sleep(0.5) + return message + " [Expanded]" + + dummy_agent_1 = ConversableAgent(name="dummy_agent_1", llm_config=False, human_input_mode="NEVER") + dummy_agent_1.register_hook("a_process_last_received_message", expand_message) + + # Normal message + messages = [ + {"role": "user", "content": "Hello"}, + {"role": "assistant", "content": "Hi there"}, + {"role": "user", "content": "How are you?"}, + ] + + processed_messages = messages.copy() + await dummy_agent_1.a_generate_reply(messages=processed_messages, sender=None) + assert processed_messages[-2]["content"] == "Hi there" + assert processed_messages[-1]["content"] == "How are you? 
[Expanded]" + + +def test_process_all_messages_before_reply(): + + messages = [ + {"role": "user", "content": "hello"}, + {"function_call": {"name": "add_num", "arguments": '{ "num_to_be_added": 5 }'}, "role": "assistant"}, + ] + + def _transform_messages(transformed_messages: List[Dict]) -> List[Dict]: + # ensure we are looking at all messages + assert len(transformed_messages) == len(messages), "Message length does not match" + + # deep copy to ensure hooks applied comprehensively + post_transformed_messages = copy.deepcopy(transformed_messages) + + # directly modify the message content for the function call (additional value) + post_transformed_messages[1]["function_call"]["arguments"] = '{ "num_to_be_added": 6 }' + + return post_transformed_messages + + def add_num(num_to_be_added): + given_num = 10 + return num_to_be_added + given_num + + dummy_agent_2 = ConversableAgent( + name="user_proxy", llm_config=False, human_input_mode="TERMINATE", function_map={"add_num": add_num} + ) + + # Baseline check before hook is executed + assert ( + dummy_agent_2.generate_reply(messages=messages, sender=None)["content"] == "15" + ), "generate_reply not working when sender is None" + + dummy_agent_2.register_hook("process_all_messages_before_reply", _transform_messages) + + # Hook is applied, updating the message content for the function call + assert ( + dummy_agent_2.generate_reply(messages=messages, sender=None)["content"] == "16" + ), "generate_reply not working when sender is None" + + +@pytest.mark.asyncio +async def test_a_process_all_messages_before_reply(): + + messages = [ + {"role": "user", "content": "hello"}, + {"function_call": {"name": "add_num", "arguments": '{ "num_to_be_added": 5 }'}, "role": "assistant"}, + ] + + async def a_transform_messages(transformed_messages: List[Dict]) -> List[Dict]: + + # ensure we are looking at all messages + assert len(transformed_messages) == len(messages), "Message length does not match" + + # Simulating an async operation with asyncio.sleep + await asyncio.sleep(0.5) + + # deep copy to ensure hooks applied comprehensively + post_transformed_messages = copy.deepcopy(transformed_messages) + + # directly modify the message content for the function call (additional value) + post_transformed_messages[1]["function_call"]["arguments"] = '{ "num_to_be_added": 6 }' + + return post_transformed_messages + + def add_num(num_to_be_added): + given_num = 10 + return num_to_be_added + given_num + + dummy_agent_2 = ConversableAgent( + name="user_proxy", llm_config=False, human_input_mode="TERMINATE", function_map={"add_num": add_num} + ) + + # Baseline check before hook is executed + response = await dummy_agent_2.a_generate_reply(messages=messages, sender=None) + assert response["content"] == "15", "generate_reply not working when sender is None" + + dummy_agent_2.register_hook("a_process_all_messages_before_reply", a_transform_messages) + + # Hook is applied, updating the message content for the function call + response = await dummy_agent_2.a_generate_reply(messages=messages, sender=None) + assert response["content"] == "16", "generate_reply not working when sender is None" + + def test_messages_with_carryover(): agent1 = autogen.ConversableAgent( "alice", From 3fdf8dea2205af1fda8d74bfbc51b51ea005faec Mon Sep 17 00:00:00 2001 From: Henry Tu Date: Mon, 30 Sep 2024 17:14:55 -0400 Subject: [PATCH 03/35] Add Cerebras Integration (#3585) * Cerebras Integration * Address feedback * Fix typo * Run formatter --- .github/workflows/contrib-tests.yml | 40 ++ 
autogen/logger/file_logger.py | 2 + autogen/logger/sqlite_logger.py | 2 + autogen/oai/cerebras.py | 270 ++++++++++ autogen/oai/client.py | 12 + autogen/runtime_logging.py | 2 + .../autogen-studio/autogenstudio/datamodel.py | 1 + setup.py | 1 + test/oai/test_cerebras.py | 248 +++++++++ .../non-openai-models/cloud-cerebras.ipynb | 505 ++++++++++++++++++ 10 files changed, 1083 insertions(+) create mode 100644 autogen/oai/cerebras.py create mode 100644 test/oai/test_cerebras.py create mode 100644 website/docs/topics/non-openai-models/cloud-cerebras.ipynb diff --git a/.github/workflows/contrib-tests.yml b/.github/workflows/contrib-tests.yml index d58098c98e76..c8afaddc8506 100644 --- a/.github/workflows/contrib-tests.yml +++ b/.github/workflows/contrib-tests.yml @@ -474,6 +474,46 @@ jobs: file: ./coverage.xml flags: unittests + CerebrasTest: + runs-on: ${{ matrix.os }} + strategy: + fail-fast: false + matrix: + os: [ubuntu-latest, macos-latest, windows-2019] + python-version: ["3.9", "3.10", "3.11", "3.12"] + exclude: + - os: macos-latest + python-version: "3.9" + steps: + - uses: actions/checkout@v4 + with: + lfs: true + - name: Set up Python ${{ matrix.python-version }} + uses: actions/setup-python@v5 + with: + python-version: ${{ matrix.python-version }} + - name: Install packages and dependencies for all tests + run: | + python -m pip install --upgrade pip wheel + pip install pytest-cov>=5 + - name: Install packages and dependencies for Cerebras + run: | + pip install -e .[cerebras_cloud_sdk,test] + - name: Set AUTOGEN_USE_DOCKER based on OS + shell: bash + run: | + if [[ ${{ matrix.os }} != ubuntu-latest ]]; then + echo "AUTOGEN_USE_DOCKER=False" >> $GITHUB_ENV + fi + - name: Coverage + run: | + pytest test/oai/test_cerebras.py --skip-openai + - name: Upload coverage to Codecov + uses: codecov/codecov-action@v3 + with: + file: ./coverage.xml + flags: unittests + MistralTest: runs-on: ${{ matrix.os }} strategy: diff --git a/autogen/logger/file_logger.py b/autogen/logger/file_logger.py index 07c9c3b76a76..329510894920 100644 --- a/autogen/logger/file_logger.py +++ b/autogen/logger/file_logger.py @@ -19,6 +19,7 @@ from autogen import Agent, ConversableAgent, OpenAIWrapper from autogen.oai.anthropic import AnthropicClient from autogen.oai.bedrock import BedrockClient + from autogen.oai.cerebras import CerebrasClient from autogen.oai.cohere import CohereClient from autogen.oai.gemini import GeminiClient from autogen.oai.groq import GroqClient @@ -210,6 +211,7 @@ def log_new_client( client: ( AzureOpenAI | OpenAI + | CerebrasClient | GeminiClient | AnthropicClient | MistralAIClient diff --git a/autogen/logger/sqlite_logger.py b/autogen/logger/sqlite_logger.py index f76d039ce9de..3849c19711c7 100644 --- a/autogen/logger/sqlite_logger.py +++ b/autogen/logger/sqlite_logger.py @@ -20,6 +20,7 @@ from autogen import Agent, ConversableAgent, OpenAIWrapper from autogen.oai.anthropic import AnthropicClient from autogen.oai.bedrock import BedrockClient + from autogen.oai.cerebras import CerebrasClient from autogen.oai.cohere import CohereClient from autogen.oai.gemini import GeminiClient from autogen.oai.groq import GroqClient @@ -397,6 +398,7 @@ def log_new_client( client: Union[ AzureOpenAI, OpenAI, + CerebrasClient, GeminiClient, AnthropicClient, MistralAIClient, diff --git a/autogen/oai/cerebras.py b/autogen/oai/cerebras.py new file mode 100644 index 000000000000..e87b048e1366 --- /dev/null +++ b/autogen/oai/cerebras.py @@ -0,0 +1,270 @@ +"""Create an OpenAI-compatible client using Cerebras's API. 
+ +Example: + llm_config={ + "config_list": [{ + "api_type": "cerebras", + "model": "llama3.1-8b", + "api_key": os.environ.get("CEREBRAS_API_KEY") + }] + } + + agent = autogen.AssistantAgent("my_agent", llm_config=llm_config) + +Install Cerebras's python library using: pip install --upgrade cerebras_cloud_sdk + +Resources: +- https://inference-docs.cerebras.ai/quickstart +""" + +from __future__ import annotations + +import copy +import os +import time +import warnings +from typing import Any, Dict, List + +from cerebras.cloud.sdk import Cerebras, Stream +from openai.types.chat import ChatCompletion, ChatCompletionMessageToolCall +from openai.types.chat.chat_completion import ChatCompletionMessage, Choice +from openai.types.completion_usage import CompletionUsage + +from autogen.oai.client_utils import should_hide_tools, validate_parameter + +CEREBRAS_PRICING_1K = { + # Convert pricing per million to per thousand tokens. + "llama3.1-8b": (0.10 / 1000, 0.10 / 1000), + "llama3.1-70b": (0.60 / 1000, 0.60 / 1000), +} + + +class CerebrasClient: + """Client for Cerebras's API.""" + + def __init__(self, api_key=None, **kwargs): + """Requires api_key or environment variable to be set + + Args: + api_key (str): The API key for using Cerebras (or environment variable CEREBRAS_API_KEY needs to be set) + """ + # Ensure we have the api_key upon instantiation + self.api_key = api_key + if not self.api_key: + self.api_key = os.getenv("CEREBRAS_API_KEY") + + assert ( + self.api_key + ), "Please include the api_key in your config list entry for Cerebras or set the CEREBRAS_API_KEY env variable." + + def message_retrieval(self, response: ChatCompletion) -> List: + """ + Retrieve and return a list of strings or a list of Choice.Message from the response. + + NOTE: if a list of Choice.Message is returned, it currently needs to contain the fields of OpenAI's ChatCompletion Message object, + since that is expected for function or tool calling in the rest of the codebase at the moment, unless a custom agent is being used. + """ + return [choice.message for choice in response.choices] + + def cost(self, response: ChatCompletion) -> float: + # Note: This field isn't explicitly in `ChatCompletion`, but is injected during chat creation. + return response.cost + + @staticmethod + def get_usage(response: ChatCompletion) -> Dict: + """Return usage summary of the response using RESPONSE_USAGE_KEYS.""" + # ... # pragma: no cover + return { + "prompt_tokens": response.usage.prompt_tokens, + "completion_tokens": response.usage.completion_tokens, + "total_tokens": response.usage.total_tokens, + "cost": response.cost, + "model": response.model, + } + + def parse_params(self, params: Dict[str, Any]) -> Dict[str, Any]: + """Loads the parameters for Cerebras API from the passed in parameters and returns a validated set. Checks types, ranges, and sets defaults""" + cerebras_params = {} + + # Check that we have what we need to use Cerebras's API + # We won't enforce the available models as they are likely to change + cerebras_params["model"] = params.get("model", None) + assert cerebras_params[ + "model" + ], "Please specify the 'model' in your config list entry to nominate the Cerebras model to use." 
+ + # Validate allowed Cerebras parameters + # https://inference-docs.cerebras.ai/api-reference/chat-completions + cerebras_params["max_tokens"] = validate_parameter(params, "max_tokens", int, True, None, (0, None), None) + cerebras_params["seed"] = validate_parameter(params, "seed", int, True, None, None, None) + cerebras_params["stream"] = validate_parameter(params, "stream", bool, True, False, None, None) + cerebras_params["temperature"] = validate_parameter( + params, "temperature", (int, float), True, 1, (0, 1.5), None + ) + cerebras_params["top_p"] = validate_parameter(params, "top_p", (int, float), True, None, None, None) + + return cerebras_params + + def create(self, params: Dict) -> ChatCompletion: + + messages = params.get("messages", []) + + # Convert AutoGen messages to Cerebras messages + cerebras_messages = oai_messages_to_cerebras_messages(messages) + + # Parse parameters to the Cerebras API's parameters + cerebras_params = self.parse_params(params) + + # Add tools to the call if we have them and aren't hiding them + if "tools" in params: + hide_tools = validate_parameter( + params, "hide_tools", str, False, "never", None, ["if_all_run", "if_any_run", "never"] + ) + if not should_hide_tools(cerebras_messages, params["tools"], hide_tools): + cerebras_params["tools"] = params["tools"] + + cerebras_params["messages"] = cerebras_messages + + # We use chat model by default, and set max_retries to 5 (in line with typical retries loop) + client = Cerebras(api_key=self.api_key, max_retries=5) + + # Token counts will be returned + prompt_tokens = 0 + completion_tokens = 0 + total_tokens = 0 + + # Streaming tool call recommendations + streaming_tool_calls = [] + + ans = None + try: + response = client.chat.completions.create(**cerebras_params) + except Exception as e: + raise RuntimeError(f"Cerebras exception occurred: {e}") + else: + + if cerebras_params["stream"]: + # Read in the chunks as they stream, taking in tool_calls which may be across + # multiple chunks if more than one suggested + ans = "" + for chunk in response: + # Grab first choice, which _should_ always be generated. 
+ ans = ans + (chunk.choices[0].delta.content or "") + + if chunk.choices[0].delta.tool_calls: + # We have a tool call recommendation + for tool_call in chunk.choices[0].delta.tool_calls: + streaming_tool_calls.append( + ChatCompletionMessageToolCall( + id=tool_call.id, + function={ + "name": tool_call.function.name, + "arguments": tool_call.function.arguments, + }, + type="function", + ) + ) + + if chunk.choices[0].finish_reason: + prompt_tokens = chunk.x_cerebras.usage.prompt_tokens + completion_tokens = chunk.x_cerebras.usage.completion_tokens + total_tokens = chunk.x_cerebras.usage.total_tokens + else: + # Non-streaming finished + ans: str = response.choices[0].message.content + + prompt_tokens = response.usage.prompt_tokens + completion_tokens = response.usage.completion_tokens + total_tokens = response.usage.total_tokens + + if response is not None: + if isinstance(response, Stream): + # Streaming response + if chunk.choices[0].finish_reason == "tool_calls": + cerebras_finish = "tool_calls" + tool_calls = streaming_tool_calls + else: + cerebras_finish = "stop" + tool_calls = None + + response_content = ans + response_id = chunk.id + else: + # Non-streaming response + # If we have tool calls as the response, populate completed tool calls for our return OAI response + if response.choices[0].finish_reason == "tool_calls": + cerebras_finish = "tool_calls" + tool_calls = [] + for tool_call in response.choices[0].message.tool_calls: + tool_calls.append( + ChatCompletionMessageToolCall( + id=tool_call.id, + function={"name": tool_call.function.name, "arguments": tool_call.function.arguments}, + type="function", + ) + ) + else: + cerebras_finish = "stop" + tool_calls = None + + response_content = response.choices[0].message.content + response_id = response.id + else: + raise RuntimeError("Failed to get response from Cerebras after retrying 5 times.") + + # 3. convert output + message = ChatCompletionMessage( + role="assistant", + content=response_content, + function_call=None, + tool_calls=tool_calls, + ) + choices = [Choice(finish_reason=cerebras_finish, index=0, message=message)] + + response_oai = ChatCompletion( + id=response_id, + model=cerebras_params["model"], + created=int(time.time()), + object="chat.completion", + choices=choices, + usage=CompletionUsage( + prompt_tokens=prompt_tokens, + completion_tokens=completion_tokens, + total_tokens=total_tokens, + ), + # Note: This seems to be a field that isn't in the schema of `ChatCompletion`, so Pydantic + # just adds it dynamically. + cost=calculate_cerebras_cost(prompt_tokens, completion_tokens, cerebras_params["model"]), + ) + + return response_oai + + +def oai_messages_to_cerebras_messages(messages: list[Dict[str, Any]]) -> list[dict[str, Any]]: + """Convert messages from OAI format to Cerebras's format. + We correct for any specific role orders and types. 
+ """ + + cerebras_messages = copy.deepcopy(messages) + + # Remove the name field + for message in cerebras_messages: + if "name" in message: + message.pop("name", None) + + return cerebras_messages + + +def calculate_cerebras_cost(input_tokens: int, output_tokens: int, model: str) -> float: + """Calculate the cost of the completion using the Cerebras pricing.""" + total = 0.0 + + if model in CEREBRAS_PRICING_1K: + input_cost_per_k, output_cost_per_k = CEREBRAS_PRICING_1K[model] + input_cost = (input_tokens / 1000) * input_cost_per_k + output_cost = (output_tokens / 1000) * output_cost_per_k + total = input_cost + output_cost + else: + warnings.warn(f"Cost calculation not available for model {model}", UserWarning) + + return total diff --git a/autogen/oai/client.py b/autogen/oai/client.py index 4b77815e7eb7..1748b28a7a1f 100644 --- a/autogen/oai/client.py +++ b/autogen/oai/client.py @@ -44,6 +44,13 @@ TOOL_ENABLED = True ERROR = None +try: + from autogen.oai.cerebras import CerebrasClient + + cerebras_import_exception: Optional[ImportError] = None +except ImportError as e: + cerebras_import_exception = e + try: from autogen.oai.gemini import GeminiClient @@ -505,6 +512,11 @@ def _register_default_client(self, config: Dict[str, Any], openai_config: Dict[s self._configure_azure_openai(config, openai_config) client = AzureOpenAI(**openai_config) self._clients.append(OpenAIClient(client)) + elif api_type is not None and api_type.startswith("cerebras"): + if cerebras_import_exception: + raise ImportError("Please install `cerebras_cloud_sdk` to use Cerebras OpenAI API.") + client = CerebrasClient(**openai_config) + self._clients.append(client) elif api_type is not None and api_type.startswith("google"): if gemini_import_exception: raise ImportError("Please install `google-generativeai` to use Google OpenAI API.") diff --git a/autogen/runtime_logging.py b/autogen/runtime_logging.py index 0fd7cc2fc8b9..9036fe5c65cc 100644 --- a/autogen/runtime_logging.py +++ b/autogen/runtime_logging.py @@ -15,6 +15,7 @@ from autogen import Agent, ConversableAgent, OpenAIWrapper from autogen.oai.anthropic import AnthropicClient from autogen.oai.bedrock import BedrockClient + from autogen.oai.cerebras import CerebrasClient from autogen.oai.cohere import CohereClient from autogen.oai.gemini import GeminiClient from autogen.oai.groq import GroqClient @@ -116,6 +117,7 @@ def log_new_client( client: Union[ AzureOpenAI, OpenAI, + CerebrasClient, GeminiClient, AnthropicClient, MistralAIClient, diff --git a/samples/apps/autogen-studio/autogenstudio/datamodel.py b/samples/apps/autogen-studio/autogenstudio/datamodel.py index 92d60cf5c525..ee48818d599f 100644 --- a/samples/apps/autogen-studio/autogenstudio/datamodel.py +++ b/samples/apps/autogen-studio/autogenstudio/datamodel.py @@ -126,6 +126,7 @@ class LLMConfig(SQLModel, table=False): class ModelTypes(str, Enum): openai = "open_ai" + cerebras = "cerebras" google = "google" azure = "azure" anthropic = "anthropic" diff --git a/setup.py b/setup.py index 362aa1217986..63a9c9745ffb 100644 --- a/setup.py +++ b/setup.py @@ -101,6 +101,7 @@ "types": ["mypy==1.9.0", "pytest>=6.1.1,<8"] + jupyter_executor, "long-context": ["llmlingua<0.3"], "anthropic": ["anthropic>=0.23.1"], + "cerebras": ["cerebras_cloud_sdk>=1.0.0"], "mistral": ["mistralai>=1.0.1"], "groq": ["groq>=0.9.0"], "cohere": ["cohere>=5.5.8"], diff --git a/test/oai/test_cerebras.py b/test/oai/test_cerebras.py new file mode 100644 index 000000000000..7f84ae3f9d56 --- /dev/null +++ b/test/oai/test_cerebras.py @@ -0,0 
+1,248 @@ +from unittest.mock import MagicMock, patch + +import pytest + +try: + from autogen.oai.cerebras import CerebrasClient, calculate_cerebras_cost + + skip = False +except ImportError: + CerebrasClient = object + InternalServerError = object + skip = True + + +# Fixtures for mock data +@pytest.fixture +def mock_response(): + class MockResponse: + def __init__(self, text, choices, usage, cost, model): + self.text = text + self.choices = choices + self.usage = usage + self.cost = cost + self.model = model + + return MockResponse + + +@pytest.fixture +def cerebras_client(): + return CerebrasClient(api_key="fake_api_key") + + +skip_reason = "Cerebras dependency is not installed" + + +# Test initialization and configuration +@pytest.mark.skipif(skip, reason=skip_reason) +def test_initialization(): + + # Missing any api_key + with pytest.raises(AssertionError) as assertinfo: + CerebrasClient() # Should raise an AssertionError due to missing api_key + + assert ( + "Please include the api_key in your config list entry for Cerebras or set the CEREBRAS_API_KEY env variable." + in str(assertinfo.value) + ) + + # Creation works + CerebrasClient(api_key="fake_api_key") # Should create okay now. + + +# Test standard initialization +@pytest.mark.skipif(skip, reason=skip_reason) +def test_valid_initialization(cerebras_client): + assert cerebras_client.api_key == "fake_api_key", "Config api_key should be correctly set" + + +# Test parameters +@pytest.mark.skipif(skip, reason=skip_reason) +def test_parsing_params(cerebras_client): + # All parameters + params = { + "model": "llama3.1-8b", + "max_tokens": 1000, + "seed": 42, + "stream": False, + "temperature": 1, + "top_p": 0.8, + } + expected_params = { + "model": "llama3.1-8b", + "max_tokens": 1000, + "seed": 42, + "stream": False, + "temperature": 1, + "top_p": 0.8, + } + result = cerebras_client.parse_params(params) + assert result == expected_params + + # Only model, others set as defaults + params = { + "model": "llama3.1-8b", + } + expected_params = { + "model": "llama3.1-8b", + "max_tokens": None, + "seed": None, + "stream": False, + "temperature": 1, + "top_p": None, + } + result = cerebras_client.parse_params(params) + assert result == expected_params + + # Incorrect types, defaults should be set, will show warnings but not trigger assertions + params = { + "model": "llama3.1-8b", + "max_tokens": "1000", + "seed": "42", + "stream": "False", + "temperature": "1", + "top_p": "0.8", + } + result = cerebras_client.parse_params(params) + assert result == expected_params + + # Values outside bounds, should warn and set to defaults + params = { + "model": "llama3.1-8b", + "temperature": 33123, + } + result = cerebras_client.parse_params(params) + assert result == expected_params + + # No model + params = { + "temperature": 1, + } + + with pytest.raises(AssertionError) as assertinfo: + result = cerebras_client.parse_params(params) + + assert "Please specify the 'model' in your config list entry to nominate the Cerebras model to use." 
in str( + assertinfo.value + ) + + +# Test cost calculation +@pytest.mark.skipif(skip, reason=skip_reason) +def test_cost_calculation(mock_response): + response = mock_response( + text="Example response", + choices=[{"message": "Test message 1"}], + usage={"prompt_tokens": 500, "completion_tokens": 300, "total_tokens": 800}, + cost=None, + model="llama3.1-70b", + ) + calculated_cost = calculate_cerebras_cost( + response.usage["prompt_tokens"], response.usage["completion_tokens"], response.model + ) + + # Convert cost per milliion to cost per token. + expected_cost = ( + response.usage["prompt_tokens"] * 0.6 / 1000000 + response.usage["completion_tokens"] * 0.6 / 1000000 + ) + + assert calculated_cost == expected_cost, f"Cost for this should be ${expected_cost} but got ${calculated_cost}" + + +# Test text generation +@pytest.mark.skipif(skip, reason=skip_reason) +@patch("autogen.oai.cerebras.CerebrasClient.create") +def test_create_response(mock_chat, cerebras_client): + # Mock CerebrasClient.chat response + mock_cerebras_response = MagicMock() + mock_cerebras_response.choices = [ + MagicMock(finish_reason="stop", message=MagicMock(content="Example Cerebras response", tool_calls=None)) + ] + mock_cerebras_response.id = "mock_cerebras_response_id" + mock_cerebras_response.model = "llama3.1-70b" + mock_cerebras_response.usage = MagicMock(prompt_tokens=10, completion_tokens=20) # Example token usage + + mock_chat.return_value = mock_cerebras_response + + # Test parameters + params = { + "messages": [{"role": "user", "content": "Hello"}, {"role": "assistant", "content": "World"}], + "model": "llama3.1-70b", + } + + # Call the create method + response = cerebras_client.create(params) + + # Assertions to check if response is structured as expected + assert ( + response.choices[0].message.content == "Example Cerebras response" + ), "Response content should match expected output" + assert response.id == "mock_cerebras_response_id", "Response ID should match the mocked response ID" + assert response.model == "llama3.1-70b", "Response model should match the mocked response model" + assert response.usage.prompt_tokens == 10, "Response prompt tokens should match the mocked response usage" + assert response.usage.completion_tokens == 20, "Response completion tokens should match the mocked response usage" + + +# Test functions/tools +@pytest.mark.skipif(skip, reason=skip_reason) +@patch("autogen.oai.cerebras.CerebrasClient.create") +def test_create_response_with_tool_call(mock_chat, cerebras_client): + # Mock `cerebras_response = client.chat(**cerebras_params)` + mock_function = MagicMock(name="currency_calculator") + mock_function.name = "currency_calculator" + mock_function.arguments = '{"base_currency": "EUR", "quote_currency": "USD", "base_amount": 123.45}' + + mock_function_2 = MagicMock(name="get_weather") + mock_function_2.name = "get_weather" + mock_function_2.arguments = '{"location": "Chicago"}' + + mock_chat.return_value = MagicMock( + choices=[ + MagicMock( + finish_reason="tool_calls", + message=MagicMock( + content="Sample text about the functions", + tool_calls=[ + MagicMock(id="gdRdrvnHh", function=mock_function), + MagicMock(id="abRdrvnHh", function=mock_function_2), + ], + ), + ) + ], + id="mock_cerebras_response_id", + model="llama3.1-70b", + usage=MagicMock(prompt_tokens=10, completion_tokens=20), + ) + + # Construct parameters + converted_functions = [ + { + "type": "function", + "function": { + "description": "Currency exchange calculator.", + "name": "currency_calculator", + 
"parameters": { + "type": "object", + "properties": { + "base_amount": {"type": "number", "description": "Amount of currency in base_currency"}, + }, + "required": ["base_amount"], + }, + }, + } + ] + cerebras_messages = [ + {"role": "user", "content": "How much is 123.45 EUR in USD?"}, + {"role": "assistant", "content": "World"}, + ] + + # Call the create method + response = cerebras_client.create( + {"messages": cerebras_messages, "tools": converted_functions, "model": "llama3.1-70b"} + ) + + # Assertions to check if the functions and content are included in the response + assert response.choices[0].message.content == "Sample text about the functions" + assert response.choices[0].message.tool_calls[0].function.name == "currency_calculator" + assert response.choices[0].message.tool_calls[1].function.name == "get_weather" diff --git a/website/docs/topics/non-openai-models/cloud-cerebras.ipynb b/website/docs/topics/non-openai-models/cloud-cerebras.ipynb new file mode 100644 index 000000000000..a8e1d3940f4b --- /dev/null +++ b/website/docs/topics/non-openai-models/cloud-cerebras.ipynb @@ -0,0 +1,505 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Cerebras\n", + "\n", + "[Cerebras](https://cerebras.ai) has developed the world's largest and fastest AI processor, the Wafer-Scale Engine-3 (WSE-3). Notably, the CS-3 system can run large language models like Llama-3.1-8B and Llama-3.1-70B at extremely fast speeds, making it an ideal platform for demanding AI workloads.\n", + "\n", + "While it's technically possible to adapt AutoGen to work with Cerebras' API by updating the `base_url`, this approach may not fully account for minor differences in parameter support. Using this library will also allow for tracking of the API costs based on actual token usage.\n", + "\n", + "For more information about Cerebras Cloud, visit [cloud.cerebras.ai](https://cloud.cerebras.ai). Their API reference is available at [inference-docs.cerebras.ai](https://inference-docs.cerebras.ai)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Requirements\n", + "To use Cerebras with AutoGen, install the `pyautogen[cerebras]` package." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "!pip install pyautogen[\"cerebras\"]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Getting Started\n", + "\n", + "Cerebras provides a number of models to use. 
See the list of [models here](https://inference-docs.cerebras.ai/introduction).\n", + "\n", + "See the sample `OAI_CONFIG_LIST` below showing how the Cerebras AI client class is used by specifying the `api_type` as `cerebras`.\n", + "```python\n", + "[\n", + " {\n", + " \"model\": \"llama3.1-8b\",\n", + " \"api_key\": \"your Cerebras API Key goes here\",\n", + " \"api_type\": \"cerebras\"\n", + " },\n", + " {\n", + " \"model\": \"llama3.1-70b\",\n", + " \"api_key\": \"your Cerebras API Key goes here\",\n", + " \"api_type\": \"cerebras\"\n", + " }\n", + "]\n", + "```" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Credentials\n", + "\n", + "Get an API Key from [cloud.cerebras.ai](https://cloud.cerebras.ai/) and add it to your environment variables:\n", + "\n", + "```\n", + "export CEREBRAS_API_KEY=\"your-api-key-here\"\n", + "```" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## API parameters\n", + "\n", + "The following parameters can be added to your config for the Cerebras API. See [this link](https://inference-docs.cerebras.ai/api-reference/chat-completions) for further information on them and their default values.\n", + "\n", + "- max_tokens (null, integer >= 0)\n", + "- seed (number)\n", + "- stream (True or False)\n", + "- temperature (number 0..1.5)\n", + "- top_p (number)\n", + "\n", + "Example:\n", + "```python\n", + "[\n", + " {\n", + " \"model\": \"llama3.1-70b\",\n", + " \"api_key\": \"your Cerebras API Key goes here\",\n", + " \"api_type\": \"cerebras\"\n", + " \"max_tokens\": 10000,\n", + " \"seed\": 1234,\n", + " \"stream\" True,\n", + " \"temperature\": 0.5,\n", + " \"top_p\": 0.2, # Note: It is recommended to set temperature or top_p but not both.\n", + " }\n", + "]\n", + "```" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Two-Agent Coding Example\n", + "\n", + "In this example, we run a two-agent chat with an AssistantAgent (primarily a coding agent) to generate code to count the number of prime numbers between 1 and 10,000 and then it will be executed.\n", + "\n", + "We'll use Meta's LLama-3.1-70B model which is suitable for coding." + ] + }, + { + "cell_type": "code", + "execution_count": 32, + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "\n", + "from autogen.oai.cerebras import CerebrasClient, calculate_cerebras_cost\n", + "\n", + "config_list = [{\"model\": \"llama3.1-70b\", \"api_key\": os.environ.get(\"CEREBRAS_API_KEY\"), \"api_type\": \"cerebras\"}]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Importantly, we have tweaked the system message so that the model doesn't return the termination keyword, which we've changed to FINISH, with the code block." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 33, + "metadata": {}, + "outputs": [], + "source": [ + "from pathlib import Path\n", + "\n", + "from autogen import AssistantAgent, UserProxyAgent\n", + "from autogen.coding import LocalCommandLineCodeExecutor\n", + "\n", + "# Setting up the code executor\n", + "workdir = Path(\"coding\")\n", + "workdir.mkdir(exist_ok=True)\n", + "code_executor = LocalCommandLineCodeExecutor(work_dir=workdir)\n", + "\n", + "# Setting up the agents\n", + "\n", + "# The UserProxyAgent will execute the code that the AssistantAgent provides\n", + "user_proxy_agent = UserProxyAgent(\n", + " name=\"User\",\n", + " code_execution_config={\"executor\": code_executor},\n", + " is_termination_msg=lambda msg: \"FINISH\" in msg.get(\"content\"),\n", + ")\n", + "\n", + "system_message = \"\"\"You are a helpful AI assistant who writes code and the user executes it.\n", + "Solve tasks using your coding and language skills.\n", + "In the following cases, suggest python code (in a python coding block) for the user to execute.\n", + "Solve the task step by step if you need to. If a plan is not provided, explain your plan first. Be clear which step uses code, and which step uses your language skill.\n", + "When using code, you must indicate the script type in the code block. The user cannot provide any other feedback or perform any other action beyond executing the code you suggest. The user can't modify your code. So do not suggest incomplete code which requires users to modify. Don't use a code block if it's not intended to be executed by the user.\n", + "Don't include multiple code blocks in one response. Do not ask users to copy and paste the result. Instead, use 'print' function for the output when relevant. Check the execution result returned by the user.\n", + "If the result indicates there is an error, fix the error and output the code again. Suggest the full code instead of partial code or code changes. If the error can't be fixed or if the task is not solved even after the code is executed successfully, analyze the problem, revisit your assumption, collect additional info you need, and think of a different approach to try.\n", + "When you find an answer, verify the answer carefully. Include verifiable evidence in your response if possible.\n", + "IMPORTANT: Wait for the user to execute your code and then you can reply with the word \"FINISH\". DO NOT OUTPUT \"FINISH\" after your code block.\"\"\"\n", + "\n", + "# The AssistantAgent, using Cerebras AI's model, will take the coding request and return code\n", + "assistant_agent = AssistantAgent(\n", + " name=\"Cerebras Assistant\",\n", + " system_message=system_message,\n", + " llm_config={\"config_list\": config_list},\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": 35, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\u001b[33mUser\u001b[0m (to Cerebras Assistant):\n", + "\n", + "Provide code to count the number of prime numbers from 1 to 10000.\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[33mCerebras Assistant\u001b[0m (to User):\n", + "\n", + "To count the number of prime numbers from 1 to 10000, we will utilize a simple algorithm that checks each number in the range to see if it is prime. 
A prime number is a natural number greater than 1 that has no positive divisors other than 1 and itself.\n", + "\n", + "Here's how we can do it using a Python script:\n", + "\n", + "```python\n", + "def count_primes(n):\n", + " primes = 0\n", + " for possiblePrime in range(2, n + 1):\n", + " # Assume number is prime until shown it is not. \n", + " isPrime = True\n", + " for num in range(2, int(possiblePrime ** 0.5) + 1):\n", + " if possiblePrime % num == 0:\n", + " isPrime = False\n", + " break\n", + " if isPrime:\n", + " primes += 1\n", + " return primes\n", + "\n", + "# Counting prime numbers from 1 to 10000\n", + "count = count_primes(10000)\n", + "print(count)\n", + "```\n", + "\n", + "Please execute this code. I will respond with \"FINISH\" after you provide the result.\n", + "\n", + "--------------------------------------------------------------------------------\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Replying as User. Provide feedback to Cerebras Assistant. Press enter to skip and use auto-reply, or type 'exit' to end the conversation: \n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\u001b[31m\n", + ">>>>>>>> NO HUMAN INPUT RECEIVED.\u001b[0m\n" + ] + } + ], + "source": [ + "# Start the chat, with the UserProxyAgent asking the AssistantAgent the message\n", + "chat_result = user_proxy_agent.initiate_chat(\n", + " assistant_agent,\n", + " message=\"Provide code to count the number of prime numbers from 1 to 10000.\",\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Tool Call Example\n", + "\n", + "In this example, instead of writing code, we will show how Meta's Llama-3.1-70B model can perform parallel tool calling, where it recommends calling more than one tool at a time.\n", + "\n", + "We'll use a simple travel agent assistant program where we have a couple of tools for weather and currency conversion.\n", + "\n", + "We start by importing libraries and setting up our configuration to use Llama-3.1-70B and the `cerebras` client class." + ] + }, + { + "cell_type": "code", + "execution_count": 36, + "metadata": {}, + "outputs": [], + "source": [ + "import json\n", + "import os\n", + "from typing import Literal\n", + "\n", + "from typing_extensions import Annotated\n", + "\n", + "import autogen\n", + "\n", + "config_list = [\n", + " {\n", + " \"model\": \"llama3.1-70b\",\n", + " \"api_key\": os.environ.get(\"CEREBRAS_API_KEY\"),\n", + " \"api_type\": \"cerebras\",\n", + " }\n", + "]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Create our two agents." + ] + }, + { + "cell_type": "code", + "execution_count": 43, + "metadata": {}, + "outputs": [], + "source": [ + "# Create the agent for tool calling\n", + "chatbot = autogen.AssistantAgent(\n", + " name=\"chatbot\",\n", + " system_message=\"\"\"\n", + " For currency exchange and weather forecasting tasks,\n", + " only use the functions you have been provided with.\n", + " When you summarize, make sure you've considered ALL previous instructions.\n", + " Output 'HAVE FUN!' 
when an answer has been provided.\n", + " \"\"\",\n", + " llm_config={\"config_list\": config_list},\n", + ")\n", + "\n", + "# Note that we have changed the termination string to be \"HAVE FUN!\"\n", + "user_proxy = autogen.UserProxyAgent(\n", + " name=\"user_proxy\",\n", + " is_termination_msg=lambda x: x.get(\"content\", \"\") and \"HAVE FUN!\" in x.get(\"content\", \"\"),\n", + " human_input_mode=\"NEVER\",\n", + " max_consecutive_auto_reply=1,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Create the two functions, annotating them so that those descriptions can be passed through to the LLM.\n", + "\n", + "We associate them with the agents using `register_for_execution` for the user_proxy so it can execute the function and `register_for_llm` for the chatbot (powered by the LLM) so it can pass the function definitions to the LLM." + ] + }, + { + "cell_type": "code", + "execution_count": 44, + "metadata": {}, + "outputs": [], + "source": [ + "# Currency Exchange function\n", + "\n", + "CurrencySymbol = Literal[\"USD\", \"EUR\"]\n", + "\n", + "# Define our function that we expect to call\n", + "\n", + "\n", + "def exchange_rate(base_currency: CurrencySymbol, quote_currency: CurrencySymbol) -> float:\n", + " if base_currency == quote_currency:\n", + " return 1.0\n", + " elif base_currency == \"USD\" and quote_currency == \"EUR\":\n", + " return 1 / 1.1\n", + " elif base_currency == \"EUR\" and quote_currency == \"USD\":\n", + " return 1.1\n", + " else:\n", + " raise ValueError(f\"Unknown currencies {base_currency}, {quote_currency}\")\n", + "\n", + "\n", + "# Register the function with the agent\n", + "\n", + "\n", + "@user_proxy.register_for_execution()\n", + "@chatbot.register_for_llm(description=\"Currency exchange calculator.\")\n", + "def currency_calculator(\n", + " base_amount: Annotated[float, \"Amount of currency in base_currency\"],\n", + " base_currency: Annotated[CurrencySymbol, \"Base currency\"] = \"USD\",\n", + " quote_currency: Annotated[CurrencySymbol, \"Quote currency\"] = \"EUR\",\n", + ") -> str:\n", + " quote_amount = exchange_rate(base_currency, quote_currency) * base_amount\n", + " return f\"{format(quote_amount, '.2f')} {quote_currency}\"\n", + "\n", + "\n", + "# Weather function\n", + "\n", + "\n", + "# Example function to make available to model\n", + "def get_current_weather(location, unit=\"fahrenheit\"):\n", + " \"\"\"Get the weather for some location\"\"\"\n", + " if \"chicago\" in location.lower():\n", + " return json.dumps({\"location\": \"Chicago\", \"temperature\": \"13\", \"unit\": unit})\n", + " elif \"san francisco\" in location.lower():\n", + " return json.dumps({\"location\": \"San Francisco\", \"temperature\": \"55\", \"unit\": unit})\n", + " elif \"new york\" in location.lower():\n", + " return json.dumps({\"location\": \"New York\", \"temperature\": \"11\", \"unit\": unit})\n", + " else:\n", + " return json.dumps({\"location\": location, \"temperature\": \"unknown\"})\n", + "\n", + "\n", + "# Register the function with the agent\n", + "\n", + "\n", + "@user_proxy.register_for_execution()\n", + "@chatbot.register_for_llm(description=\"Weather forecast for US cities.\")\n", + "def weather_forecast(\n", + " location: Annotated[str, \"City name\"],\n", + ") -> str:\n", + " weather_details = get_current_weather(location=location)\n", + " weather = json.loads(weather_details)\n", + " return f\"{weather['location']} will be {weather['temperature']} degrees {weather['unit']}\"" + ] + }, + { + "cell_type": "markdown", + 
"metadata": {}, + "source": [ + "We pass through our customer's message and run the chat.\n", + "\n", + "Finally, we ask the LLM to summarise the chat and print that out." + ] + }, + { + "cell_type": "code", + "execution_count": 45, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\u001b[33muser_proxy\u001b[0m (to chatbot):\n", + "\n", + "What's the weather in New York and can you tell me how much is 123.45 EUR in USD so I can spend it on my holiday? Throw a few holiday tips in as well.\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[33mchatbot\u001b[0m (to user_proxy):\n", + "\n", + "\u001b[32m***** Suggested tool call (210f6ac6d): weather_forecast *****\u001b[0m\n", + "Arguments: \n", + "{\"location\": \"New York\"}\n", + "\u001b[32m*************************************************************\u001b[0m\n", + "\u001b[32m***** Suggested tool call (3c00ac7d5): currency_calculator *****\u001b[0m\n", + "Arguments: \n", + "{\"base_amount\": 123.45, \"base_currency\": \"EUR\", \"quote_currency\": \"USD\"}\n", + "\u001b[32m****************************************************************\u001b[0m\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[35m\n", + ">>>>>>>> EXECUTING FUNCTION weather_forecast...\u001b[0m\n", + "\u001b[35m\n", + ">>>>>>>> EXECUTING FUNCTION currency_calculator...\u001b[0m\n", + "\u001b[33muser_proxy\u001b[0m (to chatbot):\n", + "\n", + "\u001b[33muser_proxy\u001b[0m (to chatbot):\n", + "\n", + "\u001b[32m***** Response from calling tool (210f6ac6d) *****\u001b[0m\n", + "New York will be 11 degrees fahrenheit\n", + "\u001b[32m**************************************************\u001b[0m\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[33muser_proxy\u001b[0m (to chatbot):\n", + "\n", + "\u001b[32m***** Response from calling tool (3c00ac7d5) *****\u001b[0m\n", + "135.80 USD\n", + "\u001b[32m**************************************************\u001b[0m\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[33mchatbot\u001b[0m (to user_proxy):\n", + "\n", + "New York will be 11 degrees fahrenheit.\n", + "123.45 EUR is equivalent to 135.80 USD.\n", + " \n", + "For a great holiday, explore the Statue of Liberty, take a walk through Central Park, or visit one of the many world-class museums. Also, you'll find great food ranging from bagels to fine dining experiences. HAVE FUN!\n", + "\n", + "--------------------------------------------------------------------------------\n", + "LLM SUMMARY: New York will be 11 degrees fahrenheit. 123.45 EUR is equivalent to 135.80 USD. Explore the Statue of Liberty, walk through Central Park, or visit one of the many world-class museums for a great holiday in New York.\n", + "\n", + "Duration: 73.97937774658203ms\n" + ] + } + ], + "source": [ + "import time\n", + "\n", + "start_time = time.time()\n", + "\n", + "# start the conversation\n", + "res = user_proxy.initiate_chat(\n", + " chatbot,\n", + " message=\"What's the weather in New York and can you tell me how much is 123.45 EUR in USD so I can spend it on my holiday? 
Throw a few holiday tips in as well.\",\n", + " summary_method=\"reflection_with_llm\",\n", + ")\n", + "\n", + "end_time = time.time()\n", + "\n", + "print(f\"LLM SUMMARY: {res.summary['content']}\\n\\nDuration: {(end_time - start_time) * 1000}ms\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can see that the Cerebras Wafer-Scale Engine-3 (WSE-3) completed the query in 74ms -- faster than the blink of an eye!" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.8.13" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} From d31a2e4392d06846d422827814844c5e9ad0da4b Mon Sep 17 00:00:00 2001 From: Arunabh <91818868+cagnusmarlsen@users.noreply.github.com> Date: Tue, 1 Oct 2024 10:36:58 +0530 Subject: [PATCH 04/35] add composio notebook (#3567) Co-authored-by: Eric Zhu --- ...gentchat_function_call_with_composio.ipynb | 421 ++++++++++++++++++ 1 file changed, 421 insertions(+) create mode 100644 notebook/agentchat_function_call_with_composio.ipynb diff --git a/notebook/agentchat_function_call_with_composio.ipynb b/notebook/agentchat_function_call_with_composio.ipynb new file mode 100644 index 000000000000..001c56960fc9 --- /dev/null +++ b/notebook/agentchat_function_call_with_composio.ipynb @@ -0,0 +1,421 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "aRDQQophCLQs" + }, + "source": [ + "# AI email agent using Composio" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "bZVu2lrqCLQu" + }, + "source": [ + "This notebook demonstrates how to create an AI Email agent using Composio’s Gmail tool with autogen to create an agent that will automatically respond to emails based on provided instructions.\n", + "\n", + "[Composio](https://composio.dev/) allows an AI agent or LLM to easily connect to apps like Gmail, Slack, Trello etc. The key features of Composio are:" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "UI78uqxyCLQu" + }, + "source": [ + "- Repository of Tools: Composio allows LLMs and agents to integrate with 100+ apps (Github, Salesforce, File Manager, Code Execution & More) to perform actions & subscribe to triggers(events).\n", + "\n", + "- Frameworks & LLM Agnostic: Composio provides out of box support for 10+ popular agentic frameworks and works with all the LLM providers using function calling.\n", + "\n", + "- Managed Auth: Composio helps manage authentication for all users/agents from a single dashboard." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Visit [Composio Docs](https://docs.composio.dev/introduction/intro/overview) to learn more." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "16kFQX0kCLQv" + }, + "source": [ + "The notebook demonstrates how to create a Gmail integration with Composio, set up a trigger for new emails, initialize agents with tools and finally we'll see the agent in action." 
+ ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "ixe3kpQrCLQv" + }, + "source": [ + "````{=mdx}\n", + ":::info Requirements\n", + "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", + "\n", + "```bash\n", + "pip install pyautogen composio-autogen\n", + "```\n", + "\n", + "For more information, please refer to the [installation guide](/docs/installation/).\n", + ":::\n", + "````" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "XK6_H749CLQv" + }, + "source": [ + "## Composio Setup\n", + "\n", + "To get started with using Composio's Gmail tool, we need to create an integration between Composio and Gmail. This can be done using a simple command -" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "-1XLfYJRCLQv" + }, + "outputs": [], + "source": [ + "!composio add gmail" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "T-YxALkYCLQw" + }, + "source": [ + "To set up a trigger(basically a listener) for new emails -" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "YUzcCGaCCLQw" + }, + "outputs": [], + "source": [ + "!composio triggers enable gmail_new_gmail_message" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "yTM8eqE2CLQw" + }, + "source": [ + "This enables the `gmail_new_gmail_message` trigger, which is fired when a new email is received in the connected account." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "GqlJ06y8CLQw" + }, + "outputs": [], + "source": [ + "import os\n", + "\n", + "from composio_autogen import Action, ComposioToolSet\n", + "\n", + "from autogen.agentchat import AssistantAgent, UserProxyAgent\n", + "\n", + "os.environ[\"OPENAI_API_KEY\"] = \"YOUR_API_KEY\"" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "BHuqMynBCLQw" + }, + "source": [ + "## Initialize agents" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "VzEYr6uuCLQw" + }, + "outputs": [], + "source": [ + "llm_config = {\"config_list\": [{\"model\": \"gpt-4o\", \"api_key\": os.environ.get(\"OPENAI_API_KEY\")}]}\n", + "\n", + "# Prompt for email assistant\n", + "email_assistant_prompt = \"\"\"\n", + " You are an AI email assistant specialized in drafting replies to emails.\n", + " You create appropriate and professional replies to emails based on the content of the email.\n", + " After executing the GMAIL_REPLY_TO_THREAD action and sending the email to the user, respond with TERMINATE.\n", + "\"\"\"\n", + "\n", + "# Initialize AssistantAgent\n", + "chatbot = AssistantAgent(\n", + " \"chatbot\",\n", + " system_message=email_assistant_prompt,\n", + " llm_config=llm_config,\n", + ")\n", + "\n", + "# Initialize UserProxyAgent\n", + "user_proxy = UserProxyAgent(\n", + " \"user_proxy\",\n", + " is_termination_msg=lambda x: x.get(\"content\", \"\") and \"TERMINATE\" in x.get(\"content\", \"\"),\n", + " human_input_mode=\"NEVER\",\n", + " code_execution_config=False,\n", + " llm_config=llm_config,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "6SB0HFZ7CLQw" + }, + "source": [ + "## Initialize Composio's Toolset\n", + "\n", + "Now, we initialize Composio's toolset and get the tools and actions we need for the agent. Then, we register the tools with the `UserProxyAgent`.\n", + "\n", + "The agent can then call the tools using function calling." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "_nosEvgxCLQx" + }, + "outputs": [], + "source": [ + "# Initialize Composio Toolset\n", + "composio_toolset = ComposioToolSet()\n", + "\n", + "# Get the required tools and register them with the agents\n", + "email_tools = composio_toolset.register_tools(\n", + " caller=user_proxy,\n", + " executor=chatbot,\n", + " actions=[\n", + " Action.GMAIL_REPLY_TO_THREAD,\n", + " ],\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "kFkkMIIeCLQx" + }, + "source": [ + "Here, we get the `GMAIL_REPLY_TO_THREAD` action, which is just a function that can be used to reply to an email. We'll be using this action to reply to emails automatically when they arrive." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "ZsgE3qm9CLQx" + }, + "source": [ + "## Create trigger listener" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "fU6TmawGCLQx" + }, + "source": [ + "Now, we create a listener for the trigger that we created above. This listener will listen for new emails and when a new email arrives, it'll provide data associated with the email like the sender email, email content etc. This data will be used by the attached callback function to invoke the agent and to send a reply to the email." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The `@listener.callback` decorator registers the function it decorates as a callback for a specific event trigger, in this case, when a new Gmail message is received (`GMAIL_NEW_GMAIL_MESSAGE`). It listens for the specified trigger and invokes the decorated function (`callback_new_message`) when the event occurs.\n", + "\n", + "After extracting the relevant data from the trigger payload, we start a conversation between `user_proxy` and `chatbot` to send a reply to the received email." + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/", + "height": 1000 + }, + "id": "aDTm1tQECLQx", + "outputId": "8aa5ab9a-9526-4287-e8f1-7b8ac9cfb0b3" + }, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "INFO:composio.utils.shared:Creating trigger subscription\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Subscribed to triggers!\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "INFO:composio.utils.shared:Received trigger event with trigger ID: ea36d63f-5cc9-4581-9a19-b647e7468697 and trigger name: GMAIL_NEW_GMAIL_MESSAGE\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "user_proxy (to chatbot):\n", + "\n", + "\n", + " Analyze the email content and create an appropriate reply. \n", + " a. The email was received from John Doe \n", + " b. The content of the email is: hey, how are you?\n", + " \n", + " c. The thread id is: 1922811a78db4....\n", + " \n", + "\n", + "--------------------------------------------------------------------------------\n", + "chatbot (to user_proxy):\n", + "\n", + "GMAIL_REPLY_TO_THREAD thread_id: 1922811a78db4... message: \n", + "Hi John,\n", + "\n", + "I'm doing well, thank you! 
How about you?\n", + "\n", + "Best,\n", + "[Your Name]\n", + "\n", + "--------------------------------------------------------------------------------\n", + "user_proxy (to chatbot):\n", + "\n", + "***** Suggested tool call (call_qGQzJ6XgyO8LKSSFnwkQhSCz): GMAIL_REPLY_TO_THREAD_8c4b19f45c *****\n", + "Arguments: \n", + "{\"thread_id\":\"1922811a78db4...\",\"message_body\":\"Hi John,\\n\\nI'm doing well, thank you! How about you?\\n\\nBest,\\n[Your Name]\",\"recipient_email\":\"example_email@gmail.com\"}\n", + "*************************************************************************************************\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\n", + ">>>>>>>> EXECUTING FUNCTION GMAIL_REPLY_TO_THREAD_8c4b19f45c...\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "INFO:composio.utils.shared:Executing `GMAIL_REPLY_TO_THREAD` with params={'thread_id': '1922811a78db4...', 'message_body': \"Hi John,\\n\\nI'm doing well, thank you! How about you?\\n\\nBest,\\n[Your Name]\", 'recipient_email': 'example_email@gmail.com'} and metadata={} connected_account_i...\n", + "INFO:composio.utils.shared:Got response={'successfull': True, 'data': {'response_data': {'id': '1922811c1b3ed...', 'threadId': '1922811a78db4...', 'labelIds': ['SENT']}}, 'error': None} from action= with params={'thread_...\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "chatbot (to user_proxy):\n", + "\n", + "chatbot (to user_proxy):\n", + "\n", + "***** Response from calling tool (call_qGQzJ6XgyO8LKSSFnwkQhSCz) *****\n", + "{\"successfull\": true, \"data\": {\"response_data\": {\"id\": \"1922811c1b3ed...\", \"threadId\": \"1922811a78db4...\", \"labelIds\": [\"SENT\"]}}, \"error\": null}\n", + "**********************************************************************\n", + "\n", + "--------------------------------------------------------------------------------\n", + "user_proxy (to chatbot):\n", + "\n", + "I've replied to the email with the following message:\n", + "\n", + "Hi John,\n", + "\n", + "I'm doing well, thank you! How about you?\n", + "\n", + "Best,\n", + "[Your Name]\n", + "\n", + "Is there anything else you need?\n", + "\n", + "--------------------------------------------------------------------------------\n", + "chatbot (to user_proxy):\n", + "\n", + "TERMINATE\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\n" + ] + } + ], + "source": [ + "# Create a trigger listener\n", + "listener = composio_toolset.create_trigger_listener()\n", + "\n", + "\n", + "@listener.callback(filters={\"trigger_name\": \"GMAIL_NEW_GMAIL_MESSAGE\"})\n", + "def callback_new_message(event) -> None:\n", + " # Get the payload and extract relevant information\n", + " payload = event.payload # Email payload\n", + " thread_id = payload.get(\"threadId\")\n", + " message = payload.get(\"messageText\")\n", + " sender_mail = payload.get(\"sender\")\n", + " if sender_mail is None:\n", + " print(\"No sender email found\")\n", + " return\n", + "\n", + " analyze_email_task = f\"\"\"\n", + " Analyze the email content and create an appropriate reply.\n", + " a. The email was received from {sender_mail}\n", + " b. The content of the email is: {message}\n", + " c. 
The thread id is: {thread_id}.\n", + " \"\"\"\n", + " # Initiate the conversation\n", + " res = user_proxy.initiate_chat(chatbot, message=analyze_email_task)\n", + " print(res.summary)\n", + "\n", + "\n", + "print(\"Subscribed to triggers!\")\n", + "# Start listening\n", + "listener.listen()" + ] + } + ], + "metadata": { + "colab": { + "provenance": [] + }, + "front_matter": { + "description": "Use Composio to create AI agents that seamlessly connect with external tools, Apps, and APIs to perform actions and receive triggers. With built-in support for AutoGen, Composio enables the creation of highly capable and adaptable AI agents that can autonomously execute complex tasks and deliver personalized experiences.", + "tags": [ + "agents", + "tool use" + ] + }, + "kernelspec": { + "display_name": "Python 3", + "name": "python3" + }, + "language_info": { + "name": "python" + } + }, + "nbformat": 4, + "nbformat_minor": 0 +} From 43b0d59a345df6ce39a6742b67e088ed7431f136 Mon Sep 17 00:00:00 2001 From: Jack Gerrits Date: Tue, 1 Oct 2024 14:50:51 -0400 Subject: [PATCH 05/35] Remove API key warning (#3589) --- autogen/oai/client.py | 4 ---- 1 file changed, 4 deletions(-) diff --git a/autogen/oai/client.py b/autogen/oai/client.py index 1748b28a7a1f..186ee67dc2a3 100644 --- a/autogen/oai/client.py +++ b/autogen/oai/client.py @@ -172,10 +172,6 @@ class OpenAIClient: def __init__(self, client: Union[OpenAI, AzureOpenAI]): self._oai_client = client - if not isinstance(client, openai.AzureOpenAI) and str(client.base_url).startswith(OPEN_API_BASE_URL_PREFIX): - logger.warning( - "The API key specified is not a valid OpenAI format; it won't work with the OpenAI-hosted model." - ) def message_retrieval( self, response: Union[ChatCompletion, Completion] From db287183f2ec9a3e58021da6f6048c15f31d94b5 Mon Sep 17 00:00:00 2001 From: Eric Zhu Date: Tue, 1 Oct 2024 14:53:25 -0700 Subject: [PATCH 06/35] Disable Azure tests and fix open ai tests (#3587) * Disable Azure tests * fix calculator notebook * use gpt-4o-mini for tests * use gpt-4o * use gpt-4o * fix formatting * Fix models used in contrib tests * Fix retrieval test * WIP * Skip * Fix format * Fix formatting --- .../agentchat/contrib/vectordb/pgvectordb.py | 14 +++++----- notebook/agentchat_MathChat.ipynb | 4 +-- ...at_auto_feedback_from_code_execution.ipynb | 4 +-- notebook/agentchat_cost_token_tracking.ipynb | 2 +- ...at_function_call_currency_calculator.ipynb | 2 +- ...tchat_groupchat_finite_state_machine.ipynb | 2 +- notebook/agentchat_groupchat_stateflow.ipynb | 2 +- .../contrib/agent_eval/test_agent_eval.py | 8 +----- .../capabilities/chat_with_teachable_agent.py | 3 ++- .../test_image_generation_capability.py | 2 -- .../capabilities/test_teachable_agent.py | 3 ++- .../test_pgvector_retrievechat.py | 2 +- .../retrievechat/test_qdrant_retrievechat.py | 2 ++ .../contrib/retrievechat/test_retrievechat.py | 4 ++- .../agentchat/contrib/test_agent_optimizer.py | 2 +- test/agentchat/contrib/test_gpt_assistant.py | 27 ++++++++++--------- test/agentchat/contrib/test_web_surfer.py | 2 +- test/agentchat/test_tool_calls.py | 2 +- test/io/test_websockets.py | 8 +----- test/oai/_test_completion.py | 14 ++-------- test/oai/test_client.py | 6 +++-- test/oai/test_client_stream.py | 3 ++- 22 files changed, 52 insertions(+), 66 deletions(-) diff --git a/autogen/agentchat/contrib/vectordb/pgvectordb.py b/autogen/agentchat/contrib/vectordb/pgvectordb.py index 6fce4a6db803..64f4ff333a90 100644 --- a/autogen/agentchat/contrib/vectordb/pgvectordb.py +++ 
b/autogen/agentchat/contrib/vectordb/pgvectordb.py @@ -4,16 +4,17 @@ from typing import Callable, List, Optional, Union import numpy as np + +# try: +import pgvector +from pgvector.psycopg import register_vector from sentence_transformers import SentenceTransformer from .base import Document, ItemID, QueryResults, VectorDB from .utils import get_logger -try: - import pgvector - from pgvector.psycopg import register_vector -except ImportError: - raise ImportError("Please install pgvector: `pip install pgvector`") +# except ImportError: +# raise ImportError("Please install pgvector: `pip install pgvector`") try: import psycopg @@ -416,6 +417,7 @@ def query( results = [] for query_text in query_texts: vector = self.embedding_function(query_text) + vector_string = "[" + ",".join([f"{x:.8f}" for x in vector]) + "]" if distance_type.lower() == "cosine": index_function = "<=>" @@ -428,7 +430,7 @@ def query( query = ( f"SELECT id, documents, embedding, metadatas " f"FROM {self.name} " - f"{clause} embedding {index_function} '{str(vector)}' {distance_threshold} " + f"{clause} embedding {index_function} '{vector_string}' {distance_threshold} " f"LIMIT {n_results}" ) cursor.execute(query) diff --git a/notebook/agentchat_MathChat.ipynb b/notebook/agentchat_MathChat.ipynb index afa00fb7562c..db7c6594d998 100644 --- a/notebook/agentchat_MathChat.ipynb +++ b/notebook/agentchat_MathChat.ipynb @@ -57,9 +57,7 @@ " \"OAI_CONFIG_LIST\",\n", " filter_dict={\n", " \"model\": {\n", - " \"gpt-4-1106-preview\",\n", - " \"gpt-3.5-turbo\",\n", - " \"gpt-35-turbo\",\n", + " \"gpt-4o\",\n", " }\n", " },\n", ")" diff --git a/notebook/agentchat_auto_feedback_from_code_execution.ipynb b/notebook/agentchat_auto_feedback_from_code_execution.ipynb index 6ea6f662b93b..51b5a5917341 100644 --- a/notebook/agentchat_auto_feedback_from_code_execution.ipynb +++ b/notebook/agentchat_auto_feedback_from_code_execution.ipynb @@ -37,10 +37,10 @@ "\n", "config_list = autogen.config_list_from_json(\n", " \"OAI_CONFIG_LIST\",\n", - " filter_dict={\"tags\": [\"gpt-4\"]}, # comment out to get all\n", + " filter_dict={\"tags\": [\"gpt-4o\"]}, # comment out to get all\n", ")\n", "# When using a single openai endpoint, you can use the following:\n", - "# config_list = [{\"model\": \"gpt-4\", \"api_key\": os.getenv(\"OPENAI_API_KEY\")}]" + "# config_list = [{\"model\": \"gpt-4o\", \"api_key\": os.getenv(\"OPENAI_API_KEY\")}]" ] }, { diff --git a/notebook/agentchat_cost_token_tracking.ipynb b/notebook/agentchat_cost_token_tracking.ipynb index d1fc9ccd8d1a..a60fd6de15e9 100644 --- a/notebook/agentchat_cost_token_tracking.ipynb +++ b/notebook/agentchat_cost_token_tracking.ipynb @@ -79,7 +79,7 @@ "config_list = autogen.config_list_from_json(\n", " \"OAI_CONFIG_LIST\",\n", " filter_dict={\n", - " \"model\": [\"gpt-3.5-turbo\", \"gpt-3.5-turbo-16k\"], # comment out to get all\n", + " \"model\": [\"gpt-3.5-turbo\"], # comment out to get all\n", " },\n", ")" ] diff --git a/notebook/agentchat_function_call_currency_calculator.ipynb b/notebook/agentchat_function_call_currency_calculator.ipynb index ac65ba560f9b..34ff92ff91a9 100644 --- a/notebook/agentchat_function_call_currency_calculator.ipynb +++ b/notebook/agentchat_function_call_currency_calculator.ipynb @@ -65,7 +65,7 @@ "\n", "config_list = autogen.config_list_from_json(\n", " \"OAI_CONFIG_LIST\",\n", - " filter_dict={\"tags\": [\"3.5-tool\"]}, # comment out to get all\n", + " filter_dict={\"tags\": [\"tool\"]}, # comment out to get all\n", ")" ] }, diff --git 
a/notebook/agentchat_groupchat_finite_state_machine.ipynb b/notebook/agentchat_groupchat_finite_state_machine.ipynb index 8ef101f7d914..74b6f3d40473 100644 --- a/notebook/agentchat_groupchat_finite_state_machine.ipynb +++ b/notebook/agentchat_groupchat_finite_state_machine.ipynb @@ -94,7 +94,7 @@ " \"cache_seed\": 44, # change the seed for different trials\n", " \"config_list\": autogen.config_list_from_json(\n", " \"OAI_CONFIG_LIST\",\n", - " filter_dict={\"tags\": [\"gpt-4\", \"gpt-4-32k\"]}, # comment out to get all\n", + " filter_dict={\"tags\": [\"gpt-4o\"]}, # comment out to get all\n", " ),\n", " \"temperature\": 0,\n", "}" diff --git a/notebook/agentchat_groupchat_stateflow.ipynb b/notebook/agentchat_groupchat_stateflow.ipynb index 53eb0f2ff986..3081056eac9c 100644 --- a/notebook/agentchat_groupchat_stateflow.ipynb +++ b/notebook/agentchat_groupchat_stateflow.ipynb @@ -43,7 +43,7 @@ "config_list = autogen.config_list_from_json(\n", " \"OAI_CONFIG_LIST\",\n", " filter_dict={\n", - " \"tags\": [\"gpt-4\", \"gpt-4-32k\"],\n", + " \"tags\": [\"gpt-4o\"],\n", " },\n", ")" ] diff --git a/test/agentchat/contrib/agent_eval/test_agent_eval.py b/test/agentchat/contrib/agent_eval/test_agent_eval.py index 1ae7562a3722..57f5d20d365c 100644 --- a/test/agentchat/contrib/agent_eval/test_agent_eval.py +++ b/test/agentchat/contrib/agent_eval/test_agent_eval.py @@ -32,13 +32,8 @@ def remove_ground_truth(test_case: str): filter_dict={ "api_type": ["openai"], "model": [ - "gpt-4-turbo", - "gpt-4-turbo-preview", - "gpt-4-0125-preview", - "gpt-4-1106-preview", + "gpt-4o-mini", "gpt-3.5-turbo", - "gpt-3.5-turbo-0125", - "gpt-3.5-turbo-1106", ], }, ) @@ -46,7 +41,6 @@ def remove_ground_truth(test_case: str): aoai_config_list = autogen.config_list_from_json( OAI_CONFIG_LIST, file_location=KEY_LOC, - filter_dict={"api_type": ["azure"]}, ) success_str = open("test/test_files/agenteval-in-out/samples/sample_math_response_successful.txt", "r").read() diff --git a/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py b/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py index 14599c063979..8042d988604d 100755 --- a/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py +++ b/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py @@ -11,7 +11,8 @@ from test_assistant_agent import KEY_LOC, OAI_CONFIG_LIST # noqa: E402 # Specify the model to use. GPT-3.5 is less reliable than GPT-4 at learning from user input. 
-filter_dict = {"model": ["gpt-4-0125-preview"]} +filter_dict = {"model": ["gpt-4o-mini"]} +# filter_dict = {"model": ["gpt-4-0125-preview"]} # filter_dict = {"model": ["gpt-3.5-turbo-1106"]} # filter_dict = {"model": ["gpt-4-0613"]} # filter_dict = {"model": ["gpt-3.5-turbo"]} diff --git a/test/agentchat/contrib/capabilities/test_image_generation_capability.py b/test/agentchat/contrib/capabilities/test_image_generation_capability.py index fa62d1f46a15..39b4d555ceb4 100644 --- a/test/agentchat/contrib/capabilities/test_image_generation_capability.py +++ b/test/agentchat/contrib/capabilities/test_image_generation_capability.py @@ -26,8 +26,6 @@ sys.path.append(os.path.join(os.path.dirname(__file__), "../..")) from conftest import MOCK_OPEN_AI_API_KEY, skip_openai # noqa: E402 -filter_dict = {"model": ["gpt-35-turbo-16k", "gpt-3.5-turbo-16k"]} - RESOLUTIONS = ["256x256", "512x512", "1024x1024"] QUALITIES = ["standard", "hd"] PROMPTS = [ diff --git a/test/agentchat/contrib/capabilities/test_teachable_agent.py b/test/agentchat/contrib/capabilities/test_teachable_agent.py index 720bdc7ef6d2..a823f02946ce 100755 --- a/test/agentchat/contrib/capabilities/test_teachable_agent.py +++ b/test/agentchat/contrib/capabilities/test_teachable_agent.py @@ -28,7 +28,8 @@ # filter_dict={"model": ["gpt-3.5-turbo-1106"]} # filter_dict={"model": ["gpt-3.5-turbo-0613"]} # filter_dict={"model": ["gpt-4"]} -filter_dict = {"tags": ["gpt-35-turbo-16k", "gpt-3.5-turbo-16k"]} +# filter_dict = {"tags": ["gpt-35-turbo-16k", "gpt-3.5-turbo-16k"]} +filter_dict = {"model": ["gpt-4o-mini"]} def create_teachable_agent(reset_db=False, verbosity=0): diff --git a/test/agentchat/contrib/retrievechat/test_pgvector_retrievechat.py b/test/agentchat/contrib/retrievechat/test_pgvector_retrievechat.py index 3c566352b3e2..52b14d42101e 100644 --- a/test/agentchat/contrib/retrievechat/test_pgvector_retrievechat.py +++ b/test/agentchat/contrib/retrievechat/test_pgvector_retrievechat.py @@ -78,7 +78,7 @@ def test_retrievechat(): }, "embedding_function": sentence_transformer_ef, "get_or_create": True, # set to False if you don't want to reuse an existing collection - "overwrite": False, # set to True if you want to overwrite an existing collection + "overwrite": True, # set to True if you want to overwrite an existing collection }, code_execution_config=False, # set to False if you don't want to execute the code ) diff --git a/test/agentchat/contrib/retrievechat/test_qdrant_retrievechat.py b/test/agentchat/contrib/retrievechat/test_qdrant_retrievechat.py index 92ca5aa603a9..3cc8b8780b8d 100755 --- a/test/agentchat/contrib/retrievechat/test_qdrant_retrievechat.py +++ b/test/agentchat/contrib/retrievechat/test_qdrant_retrievechat.py @@ -69,6 +69,8 @@ def test_retrievechat(): "client": client, "docs_path": "./website/docs", "chunk_token_size": 2000, + "get_or_create": True, + "overwrite": True, }, ) diff --git a/test/agentchat/contrib/retrievechat/test_retrievechat.py b/test/agentchat/contrib/retrievechat/test_retrievechat.py index 0504fc82be42..d585be36cbfb 100755 --- a/test/agentchat/contrib/retrievechat/test_retrievechat.py +++ b/test/agentchat/contrib/retrievechat/test_retrievechat.py @@ -54,17 +54,19 @@ def test_retrievechat(): ) sentence_transformer_ef = ef.SentenceTransformerEmbeddingFunction() + docs_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../../../../website/docs")) ragproxyagent = RetrieveUserProxyAgent( name="ragproxyagent", human_input_mode="NEVER", max_consecutive_auto_reply=2, retrieve_config={ - 
"docs_path": "./website/docs", + "docs_path": docs_path, "chunk_token_size": 2000, "model": config_list[0]["model"], "client": chromadb.PersistentClient(path="/tmp/chromadb"), "embedding_function": sentence_transformer_ef, "get_or_create": True, + "overwrite": True, }, ) diff --git a/test/agentchat/contrib/test_agent_optimizer.py b/test/agentchat/contrib/test_agent_optimizer.py index 9587c9d59755..d31bd9341afb 100644 --- a/test/agentchat/contrib/test_agent_optimizer.py +++ b/test/agentchat/contrib/test_agent_optimizer.py @@ -89,7 +89,7 @@ def test_step(): max_consecutive_auto_reply=3, ) - optimizer = AgentOptimizer(max_actions_per_step=3, llm_config=llm_config) + optimizer = AgentOptimizer(max_actions_per_step=3, llm_config=llm_config, optimizer_model="gpt-4o-mini") user_proxy.initiate_chat(assistant, message=problem) optimizer.record_one_conversation(assistant.chat_messages_for_summary(user_proxy), is_satisfied=True) diff --git a/test/agentchat/contrib/test_gpt_assistant.py b/test/agentchat/contrib/test_gpt_assistant.py index 7132cb72053b..15dcd12d16cb 100755 --- a/test/agentchat/contrib/test_gpt_assistant.py +++ b/test/agentchat/contrib/test_gpt_assistant.py @@ -40,11 +40,12 @@ ], }, ) - aoai_config_list = autogen.config_list_from_json( - OAI_CONFIG_LIST, - file_location=KEY_LOC, - filter_dict={"api_type": ["azure"], "tags": ["assistant"]}, - ) + # TODO: fix azure settings or remove it. + # aoai_config_list = autogen.config_list_from_json( + # OAI_CONFIG_LIST, + # file_location=KEY_LOC, + # filter_dict={"api_type": ["azure"], "tags": ["assistant"]}, + # ) @pytest.mark.skipif( @@ -53,7 +54,8 @@ ) def test_config_list() -> None: assert len(openai_config_list) > 0 - assert len(aoai_config_list) > 0 + # TODO: fix azure settings or remove it. + # assert len(aoai_config_list) > 0 @pytest.mark.skipif( @@ -61,9 +63,8 @@ def test_config_list() -> None: reason=reason, ) def test_gpt_assistant_chat() -> None: - for gpt_config in [openai_config_list, aoai_config_list]: - _test_gpt_assistant_chat({"config_list": gpt_config}) - _test_gpt_assistant_chat(gpt_config[0]) + _test_gpt_assistant_chat({"config_list": openai_config_list}) + _test_gpt_assistant_chat(openai_config_list[0]) def _test_gpt_assistant_chat(gpt_config) -> None: @@ -135,8 +136,8 @@ def ask_ossinsight(question: str) -> str: reason=reason, ) def test_get_assistant_instructions() -> None: - for gpt_config in [openai_config_list, aoai_config_list]: - _test_get_assistant_instructions(gpt_config) + _test_get_assistant_instructions(openai_config_list) + # _test_get_assistant_instructions(aoai_config_list) def _test_get_assistant_instructions(gpt_config) -> None: @@ -164,8 +165,8 @@ def _test_get_assistant_instructions(gpt_config) -> None: reason=reason, ) def test_gpt_assistant_instructions_overwrite() -> None: - for gpt_config in [openai_config_list, aoai_config_list]: - _test_gpt_assistant_instructions_overwrite(gpt_config) + _test_gpt_assistant_instructions_overwrite(openai_config_list) + # _test_gpt_assistant_instructions_overwrite(aoai_config_list) def _test_gpt_assistant_instructions_overwrite(gpt_config) -> None: diff --git a/test/agentchat/contrib/test_web_surfer.py b/test/agentchat/contrib/test_web_surfer.py index fad336b6b76b..97f7e39bb462 100644 --- a/test/agentchat/contrib/test_web_surfer.py +++ b/test/agentchat/contrib/test_web_surfer.py @@ -97,7 +97,7 @@ def test_web_surfer_oai() -> None: llm_config = {"config_list": config_list, "timeout": 180, "cache_seed": 42} # adding Azure name variations to the model list - model = 
["gpt-3.5-turbo-1106", "gpt-3.5-turbo-16k-0613", "gpt-3.5-turbo-16k"] + model = ["gpt-4o-mini"] model += [m.replace(".", "") for m in model] summarizer_llm_config = { diff --git a/test/agentchat/test_tool_calls.py b/test/agentchat/test_tool_calls.py index e7d45c5918d9..fa7602104fa0 100755 --- a/test/agentchat/test_tool_calls.py +++ b/test/agentchat/test_tool_calls.py @@ -144,7 +144,7 @@ def test_update_tool(): config_list_gpt4 = autogen.config_list_from_json( OAI_CONFIG_LIST, filter_dict={ - "tags": ["gpt-4"], + "tags": ["gpt-4o-mini"], }, file_location=KEY_LOC, ) diff --git a/test/io/test_websockets.py b/test/io/test_websockets.py index 1458afcd88cb..ee2085169e05 100644 --- a/test/io/test_websockets.py +++ b/test/io/test_websockets.py @@ -97,14 +97,8 @@ def on_connect(iostream: IOWebsockets, success_dict: Dict[str, bool] = success_d OAI_CONFIG_LIST, filter_dict={ "model": [ + "gpt-4o-mini", "gpt-3.5-turbo", - "gpt-3.5-turbo-16k", - "gpt-4", - "gpt-4-0314", - "gpt4", - "gpt-4-32k", - "gpt-4-32k-0314", - "gpt-4-32k-v0314", ], }, file_location=KEY_LOC, diff --git a/test/oai/_test_completion.py b/test/oai/_test_completion.py index fe410255d2fb..af479ca56511 100755 --- a/test/oai/_test_completion.py +++ b/test/oai/_test_completion.py @@ -143,13 +143,8 @@ def test_nocontext(): file_location=KEY_LOC, filter_dict={ "model": { + "gpt-4o-mini", "gpt-3.5-turbo", - "gpt-3.5-turbo-16k", - "gpt-3.5-turbo-16k-0613", - "gpt-3.5-turbo-0301", - "chatgpt-35-turbo-0301", - "gpt-35-turbo-v0301", - "gpt", }, }, ), @@ -179,13 +174,8 @@ def test_humaneval(num_samples=1): env_or_file=OAI_CONFIG_LIST, filter_dict={ "model": { + "gpt-4o-mini", "gpt-3.5-turbo", - "gpt-3.5-turbo-16k", - "gpt-3.5-turbo-16k-0613", - "gpt-3.5-turbo-0301", - "chatgpt-35-turbo-0301", - "gpt-35-turbo-v0301", - "gpt", }, }, file_location=KEY_LOC, diff --git a/test/oai/test_client.py b/test/oai/test_client.py index bd8b072e6127..3387001cf4ad 100755 --- a/test/oai/test_client.py +++ b/test/oai/test_client.py @@ -66,7 +66,8 @@ def get_usage(response): return {} -@pytest.mark.skipif(skip, reason="openai>=1 not installed") +# @pytest.mark.skipif(skip, reason="openai>=1 not installed") +@pytest.mark.skip(reason="This test is not working until Azure settings are updated") def test_aoai_chat_completion(): config_list = config_list_from_json( env_or_file=OAI_CONFIG_LIST, @@ -88,7 +89,8 @@ def test_aoai_chat_completion(): print(client.extract_text_or_completion_object(response)) -@pytest.mark.skipif(skip or not TOOL_ENABLED, reason="openai>=1.1.0 not installed") +# @pytest.mark.skipif(skip or not TOOL_ENABLED, reason="openai>=1.1.0 not installed") +@pytest.mark.skip(reason="This test is not working until Azure settings are updated") def test_oai_tool_calling_extraction(): config_list = config_list_from_json( env_or_file=OAI_CONFIG_LIST, diff --git a/test/oai/test_client_stream.py b/test/oai/test_client_stream.py index 59abd97151ad..405bc32be27d 100755 --- a/test/oai/test_client_stream.py +++ b/test/oai/test_client_stream.py @@ -33,6 +33,7 @@ @pytest.mark.skipif(skip, reason="openai>=1 not installed") +@pytest.mark.skip(reason="This test is not working until Azure settings are updated.") def test_aoai_chat_completion_stream() -> None: config_list = config_list_from_json( env_or_file=OAI_CONFIG_LIST, @@ -236,7 +237,7 @@ def test_chat_tools_stream() -> None: config_list = config_list_from_json( env_or_file=OAI_CONFIG_LIST, file_location=KEY_LOC, - filter_dict={"tags": ["multitool"]}, + filter_dict={"tags": ["tool"]}, ) tools = [ { From 
3a6b88ef8ae5a362d95c92187f8093180c6c2d59 Mon Sep 17 00:00:00 2001 From: Mark Sze <66362098+marklysze@users.noreply.github.com> Date: Wed, 2 Oct 2024 08:06:52 +1000 Subject: [PATCH 07/35] Ollama Client (with tool calling) (#3056) * Ollama client! With function calling. Initial commit, client, no docs or tests yet. * Tidy comments * Cater for missing prompt token count * Removed use of eval, added json parsing support library * Fix to the use of the JSON fix library, handling of Mixtral escape sequence * Fixed 'name' in JSON bug, catered for single function call JSON without [] * removing role='tool' from inner tool result to reduce token usage. * Added Ollama documentation and updated library versions * Added Native Ollama tool calling (v0.3.0 req.) as well as hide/show tools support * Added native tool calling and hide_tools parameter to documentation * Update to Ollama 0.3.1, added tests * Tweak to manual function calling prompt to improve number handling. * Fix formatting Co-authored-by: gagb Co-authored-by: Jack Gerrits * Fix formatting * Better error message --------- Co-authored-by: Eric Zhu Co-authored-by: gagb Co-authored-by: Jack Gerrits --- .github/workflows/contrib-tests.yml | 32 + autogen/logger/file_logger.py | 2 + autogen/logger/sqlite_logger.py | 2 + autogen/oai/client.py | 12 + autogen/oai/ollama.py | 579 ++++++++++++++++ autogen/runtime_logging.py | 2 + setup.py | 1 + test/oai/test_ollama.py | 302 +++++++++ .../non-openai-models/local-ollama.ipynb | 631 ++++++++++++++++++ 9 files changed, 1563 insertions(+) create mode 100644 autogen/oai/ollama.py create mode 100644 test/oai/test_ollama.py create mode 100644 website/docs/topics/non-openai-models/local-ollama.ipynb diff --git a/.github/workflows/contrib-tests.yml b/.github/workflows/contrib-tests.yml index c8afaddc8506..f14a7f09ebe0 100644 --- a/.github/workflows/contrib-tests.yml +++ b/.github/workflows/contrib-tests.yml @@ -709,3 +709,35 @@ jobs: with: file: ./coverage.xml flags: unittests + + OllamaTest: + runs-on: ${{ matrix.os }} + strategy: + fail-fast: false + matrix: + os: [ubuntu-latest, macos-latest, windows-2019] + python-version: ["3.9", "3.10", "3.11", "3.12"] + exclude: + - os: macos-latest + python-version: "3.9" + steps: + - uses: actions/checkout@v4 + with: + lfs: true + - name: Set up Python ${{ matrix.python-version }} + uses: actions/setup-python@v5 + with: + python-version: ${{ matrix.python-version }} + - name: Install packages and dependencies for all tests + run: | + python -m pip install --upgrade pip wheel + pip install pytest-cov>=5 + - name: Install packages and dependencies for Ollama + run: | + pip install -e .[ollama,test] + pytest test/oai/test_ollama.py --skip-openai + - name: Upload coverage to Codecov + uses: codecov/codecov-action@v3 + with: + file: ./coverage.xml + flags: unittests diff --git a/autogen/logger/file_logger.py b/autogen/logger/file_logger.py index 329510894920..b902652fd4b1 100644 --- a/autogen/logger/file_logger.py +++ b/autogen/logger/file_logger.py @@ -24,6 +24,7 @@ from autogen.oai.gemini import GeminiClient from autogen.oai.groq import GroqClient from autogen.oai.mistral import MistralAIClient + from autogen.oai.ollama import OllamaClient from autogen.oai.together import TogetherClient logger = logging.getLogger(__name__) @@ -218,6 +219,7 @@ def log_new_client( | TogetherClient | GroqClient | CohereClient + | OllamaClient | BedrockClient ), wrapper: OpenAIWrapper, diff --git a/autogen/logger/sqlite_logger.py b/autogen/logger/sqlite_logger.py index 
3849c19711c7..bdfa9928d141 100644 --- a/autogen/logger/sqlite_logger.py +++ b/autogen/logger/sqlite_logger.py @@ -25,6 +25,7 @@ from autogen.oai.gemini import GeminiClient from autogen.oai.groq import GroqClient from autogen.oai.mistral import MistralAIClient + from autogen.oai.ollama import OllamaClient from autogen.oai.together import TogetherClient logger = logging.getLogger(__name__) @@ -405,6 +406,7 @@ def log_new_client( TogetherClient, GroqClient, CohereClient, + OllamaClient, BedrockClient, ], wrapper: OpenAIWrapper, diff --git a/autogen/oai/client.py b/autogen/oai/client.py index 186ee67dc2a3..cbabe93d874c 100644 --- a/autogen/oai/client.py +++ b/autogen/oai/client.py @@ -93,6 +93,13 @@ except ImportError as e: cohere_import_exception = e +try: + from autogen.oai.ollama import OllamaClient + + ollama_import_exception: Optional[ImportError] = None +except ImportError as e: + ollama_import_exception = e + try: from autogen.oai.bedrock import BedrockClient @@ -545,6 +552,11 @@ def _register_default_client(self, config: Dict[str, Any], openai_config: Dict[s raise ImportError("Please install `cohere` to use the Cohere API.") client = CohereClient(**openai_config) self._clients.append(client) + elif api_type is not None and api_type.startswith("ollama"): + if ollama_import_exception: + raise ImportError("Please install with `[ollama]` option to use the Ollama API.") + client = OllamaClient(**openai_config) + self._clients.append(client) elif api_type is not None and api_type.startswith("bedrock"): self._configure_openai_config_for_bedrock(config, openai_config) if bedrock_import_exception: diff --git a/autogen/oai/ollama.py b/autogen/oai/ollama.py new file mode 100644 index 000000000000..cf154f635da9 --- /dev/null +++ b/autogen/oai/ollama.py @@ -0,0 +1,579 @@ +"""Create an OpenAI-compatible client using Ollama's API. + +Example: + llm_config={ + "config_list": [{ + "api_type": "ollama", + "model": "mistral:7b-instruct-v0.3-q6_K" + } + ]} + + agent = autogen.AssistantAgent("my_agent", llm_config=llm_config) + +Install Ollama's python library using: pip install --upgrade ollama + +Resources: +- https://github.com/ollama/ollama-python +""" + +from __future__ import annotations + +import copy +import json +import random +import re +import time +import warnings +from typing import Any, Dict, List, Tuple + +import ollama +from fix_busted_json import repair_json +from ollama import Client +from openai.types.chat import ChatCompletion, ChatCompletionMessageToolCall +from openai.types.chat.chat_completion import ChatCompletionMessage, Choice +from openai.types.completion_usage import CompletionUsage + +from autogen.oai.client_utils import should_hide_tools, validate_parameter + + +class OllamaClient: + """Client for Ollama's API.""" + + # Defaults for manual tool calling + # Instruction is added to the first system message and provides directions to follow a two step + # process + # 1. (before tools have been called) Return JSON with the functions to call + # 2. (directly after tools have been called) Return Text describing the results of the function calls in text format + + # Override using "manual_tool_call_instruction" config parameter + TOOL_CALL_MANUAL_INSTRUCTION = ( + "You are to follow a strict two step process that will occur over " + "a number of interactions, so pay attention to what step you are in based on the full " + "conversation. We will be taking turns so only do one step at a time so don't perform step " + "2 until step 1 is complete and I've told you the result. 
The first step is to choose one " + "or more functions based on the request given and return only JSON with the functions and " + "arguments to use. The second step is to analyse the given output of the function and summarise " + "it returning only TEXT and not Python or JSON. " + "For argument values, be sure numbers aren't strings, they should not have double quotes around them. " + "In terms of your response format, for step 1 return only JSON and NO OTHER text, " + "for step 2 return only text and NO JSON/Python/Markdown. " + 'The format for running a function is [{"name": "function_name1", "arguments":{"argument_name": "argument_value"}},{"name": "function_name2", "arguments":{"argument_name": "argument_value"}}] ' + 'Make sure the keys "name" and "arguments" are as described. ' + "If you don't get the format correct, try again. " + "The following functions are available to you:[FUNCTIONS_LIST]" + ) + + # Appended to the last user message if no tools have been called + # Override using "manual_tool_call_step1" config parameter + TOOL_CALL_MANUAL_STEP1 = " (proceed with step 1)" + + # Appended to the user message after tools have been executed. Will create a 'user' message if one doesn't exist. + # Override using "manual_tool_call_step2" config parameter + TOOL_CALL_MANUAL_STEP2 = " (proceed with step 2)" + + def __init__(self, **kwargs): + """Note that no api_key or environment variable is required for Ollama. + + Args: + None + """ + + def message_retrieval(self, response) -> List: + """ + Retrieve and return a list of strings or a list of Choice.Message from the response. + + NOTE: if a list of Choice.Message is returned, it currently needs to contain the fields of OpenAI's ChatCompletion Message object, + since that is expected for function or tool calling in the rest of the codebase at the moment, unless a custom agent is being used. + """ + return [choice.message for choice in response.choices] + + def cost(self, response) -> float: + return response.cost + + @staticmethod + def get_usage(response) -> Dict: + """Return usage summary of the response using RESPONSE_USAGE_KEYS.""" + # ... # pragma: no cover + return { + "prompt_tokens": response.usage.prompt_tokens, + "completion_tokens": response.usage.completion_tokens, + "total_tokens": response.usage.total_tokens, + "cost": response.cost, + "model": response.model, + } + + def parse_params(self, params: Dict[str, Any]) -> Dict[str, Any]: + """Loads the parameters for Ollama API from the passed in parameters and returns a validated set. Checks types, ranges, and sets defaults""" + ollama_params = {} + + # Check that we have what we need to use Ollama's API + # https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-completion + + # The main parameters are model, prompt, stream, and options + # Options is a dictionary of parameters for the model + # There are other, advanced, parameters such as format, system (to override system message), template, raw, etc. - not used + + # We won't enforce the available models + ollama_params["model"] = params.get("model", None) + assert ollama_params[ + "model" + ], "Please specify the 'model' in your config list entry to nominate the Ollama model to use." 
+ + ollama_params["stream"] = validate_parameter(params, "stream", bool, True, False, None, None) + + # Build up the options dictionary + # https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values + options_dict = {} + + if "num_predict" in params: + # Maximum number of tokens to predict, note: -1 is infinite, -2 is fill context, 128 is default + ollama_params["num_predict"] = validate_parameter(params, "num_predict", int, False, 128, None, None) + + if "repeat_penalty" in params: + options_dict["repeat_penalty"] = validate_parameter( + params, "repeat_penalty", (int, float), False, 1.1, None, None + ) + + if "seed" in params: + options_dict["seed"] = validate_parameter(params, "seed", int, False, 42, None, None) + + if "temperature" in params: + ollama_params["temperature"] = validate_parameter( + params, "temperature", (int, float), False, 0.8, None, None + ) + + if "top_k" in params: + ollama_params["top_k"] = validate_parameter(params, "top_k", int, False, 40, None, None) + + if "top_p" in params: + ollama_params["top_p"] = validate_parameter(params, "top_p", (int, float), False, 0.9, None, None) + + if self._native_tool_calls and self._tools_in_conversation and not self._should_hide_tools: + ollama_params["tools"] = params["tools"] + + # Ollama doesn't support streaming with tools natively + if ollama_params["stream"] and self._native_tool_calls: + warnings.warn( + "Streaming is not supported when using tools and 'Native' tool calling, streaming will be disabled.", + UserWarning, + ) + + ollama_params["stream"] = False + + if not self._native_tool_calls and self._tools_in_conversation: + # For manual tool calling we have injected the available tools into the prompt + # and we don't want to force JSON mode + ollama_params["format"] = "" # Don't force JSON for manual tool calling mode + + if len(options_dict) != 0: + ollama_params["options"] = options_dict + + return ollama_params + + def create(self, params: Dict) -> ChatCompletion: + + messages = params.get("messages", []) + + # Are tools involved in this conversation? 
+ self._tools_in_conversation = "tools" in params + + # We provide second-level filtering out of tools to avoid LLMs re-calling tools continuously + if self._tools_in_conversation: + hide_tools = validate_parameter( + params, "hide_tools", str, False, "never", None, ["if_all_run", "if_any_run", "never"] + ) + self._should_hide_tools = should_hide_tools(messages, params["tools"], hide_tools) + else: + self._should_hide_tools = False + + # Are we using native Ollama tool calling, otherwise we're doing manual tool calling + # We allow the user to decide if they want to use Ollama's tool calling + # or for tool calling to be handled manually through text messages + # Default is True = Ollama's tool calling + self._native_tool_calls = validate_parameter(params, "native_tool_calls", bool, False, True, None, None) + + if not self._native_tool_calls: + # Load defaults + self._manual_tool_call_instruction = validate_parameter( + params, "manual_tool_call_instruction", str, False, self.TOOL_CALL_MANUAL_INSTRUCTION, None, None + ) + self._manual_tool_call_step1 = validate_parameter( + params, "manual_tool_call_step1", str, False, self.TOOL_CALL_MANUAL_STEP1, None, None + ) + self._manual_tool_call_step2 = validate_parameter( + params, "manual_tool_call_step2", str, False, self.TOOL_CALL_MANUAL_STEP2, None, None + ) + + # Convert AutoGen messages to Ollama messages + ollama_messages = self.oai_messages_to_ollama_messages( + messages, + ( + params["tools"] + if (not self._native_tool_calls and self._tools_in_conversation) and not self._should_hide_tools + else None + ), + ) + + # Parse parameters to the Ollama API's parameters + ollama_params = self.parse_params(params) + + ollama_params["messages"] = ollama_messages + + # Token counts will be returned + prompt_tokens = 0 + completion_tokens = 0 + total_tokens = 0 + + ans = None + try: + if "client_host" in params: + client = Client(host=params["client_host"]) + response = client.chat(**ollama_params) + else: + response = ollama.chat(**ollama_params) + except Exception as e: + raise RuntimeError(f"Ollama exception occurred: {e}") + else: + + if ollama_params["stream"]: + # Read in the chunks as they stream, taking in tool_calls which may be across + # multiple chunks if more than one suggested + ans = "" + for chunk in response: + ans = ans + (chunk["message"]["content"] or "") + + if "done_reason" in chunk: + prompt_tokens = chunk["prompt_eval_count"] if "prompt_eval_count" in chunk else 0 + completion_tokens = chunk["eval_count"] if "eval_count" in chunk else 0 + total_tokens = prompt_tokens + completion_tokens + else: + # Non-streaming finished + ans: str = response["message"]["content"] + + prompt_tokens = response["prompt_eval_count"] if "prompt_eval_count" in response else 0 + completion_tokens = response["eval_count"] if "eval_count" in response else 0 + total_tokens = prompt_tokens + completion_tokens + + if response is not None: + + # Defaults + ollama_finish = "stop" + tool_calls = None + + # Id and streaming text into response + if ollama_params["stream"]: + response_content = ans + response_id = chunk["created_at"] + else: + response_content = response["message"]["content"] + response_id = response["created_at"] + + # Process tools in the response + if self._tools_in_conversation: + + if self._native_tool_calls: + + if not ollama_params["stream"]: + response_content = response["message"]["content"] + + # Native tool calling + if "tool_calls" in response["message"]: + ollama_finish = "tool_calls" + tool_calls = [] + random_id = 
random.randint(0, 10000) + for tool_call in response["message"]["tool_calls"]: + tool_calls.append( + ChatCompletionMessageToolCall( + id="ollama_func_{}".format(random_id), + function={ + "name": tool_call["function"]["name"], + "arguments": json.dumps(tool_call["function"]["arguments"]), + }, + type="function", + ) + ) + + random_id += 1 + + elif not self._native_tool_calls: + + # Try to convert the response to a tool call object + response_toolcalls = response_to_tool_call(ans) + + # If we can, then we've got tool call(s) + if response_toolcalls is not None: + ollama_finish = "tool_calls" + tool_calls = [] + random_id = random.randint(0, 10000) + + for json_function in response_toolcalls: + tool_calls.append( + ChatCompletionMessageToolCall( + id="ollama_manual_func_{}".format(random_id), + function={ + "name": json_function["name"], + "arguments": ( + json.dumps(json_function["arguments"]) + if "arguments" in json_function + else "{}" + ), + }, + type="function", + ) + ) + + random_id += 1 + + # Blank the message content + response_content = "" + + else: + raise RuntimeError("Failed to get response from Ollama.") + + # Convert response to AutoGen response + message = ChatCompletionMessage( + role="assistant", + content=response_content, + function_call=None, + tool_calls=tool_calls, + ) + choices = [Choice(finish_reason=ollama_finish, index=0, message=message)] + + response_oai = ChatCompletion( + id=response_id, + model=ollama_params["model"], + created=int(time.time()), + object="chat.completion", + choices=choices, + usage=CompletionUsage( + prompt_tokens=prompt_tokens, + completion_tokens=completion_tokens, + total_tokens=total_tokens, + ), + cost=0, # Local models, FREE! + ) + + return response_oai + + def oai_messages_to_ollama_messages(self, messages: list[Dict[str, Any]], tools: list) -> list[dict[str, Any]]: + """Convert messages from OAI format to Ollama's format. + We correct for any specific role orders and types, and convert tools to messages (as Ollama can't use tool messages) + """ + + ollama_messages = copy.deepcopy(messages) + + # Remove the name field + for message in ollama_messages: + if "name" in message: + message.pop("name", None) + + # Having a 'system' message on the end does not work well with Ollama, so we change it to 'user' + # 'system' messages on the end are typical of the summarisation message: summary_method="reflection_with_llm" + if len(ollama_messages) > 1 and ollama_messages[-1]["role"] == "system": + ollama_messages[-1]["role"] = "user" + + # Process messages for tool calling manually + if tools is not None and not self._native_tool_calls: + # 1. We need to append instructions to the starting system message on function calling + # 2. If we have not yet called tools we append "step 1 instruction" to the latest user message + # 3. 
If we have already called tools we append "step 2 instruction" to the latest user message + + have_tool_calls = False + have_tool_results = False + last_tool_result_index = -1 + + for i, message in enumerate(ollama_messages): + if "tool_calls" in message: + have_tool_calls = True + if "tool_call_id" in message: + have_tool_results = True + last_tool_result_index = i + + tool_result_is_last_msg = have_tool_results and last_tool_result_index == len(ollama_messages) - 1 + + if ollama_messages[0]["role"] == "system": + manual_instruction = self._manual_tool_call_instruction + + # Build a string of the functions available + functions_string = "" + for function in tools: + functions_string += f"""\n{function}\n""" + + # Replace single quotes with double questions - Not sure why this helps the LLM perform + # better, but it seems to. Monitor and remove if not necessary. + functions_string = functions_string.replace("'", '"') + + manual_instruction = manual_instruction.replace("[FUNCTIONS_LIST]", functions_string) + + # Update the system message with the instructions and functions + ollama_messages[0]["content"] = ollama_messages[0]["content"] + manual_instruction.rstrip() + + # If we are still in the function calling or evaluating process, append the steps instruction + if not have_tool_calls or tool_result_is_last_msg: + if ollama_messages[0]["role"] == "system": + # NOTE: we require a system message to exist for the manual steps texts + # Append the manual step instructions + content_to_append = ( + self._manual_tool_call_step1 if not have_tool_results else self._manual_tool_call_step2 + ) + + if content_to_append != "": + # Append the relevant tool call instruction to the latest user message + if ollama_messages[-1]["role"] == "user": + ollama_messages[-1]["content"] = ollama_messages[-1]["content"] + content_to_append + else: + ollama_messages.append({"role": "user", "content": content_to_append}) + + # Convert tool call and tool result messages to normal text messages for Ollama + for i, message in enumerate(ollama_messages): + if "tool_calls" in message: + # Recommended tool calls + content = "Run the following function(s):" + for tool_call in message["tool_calls"]: + content = content + "\n" + str(tool_call) + ollama_messages[i] = {"role": "assistant", "content": content} + if "tool_call_id" in message: + # Executed tool results + message["result"] = message["content"] + del message["content"] + del message["role"] + content = "The following function was run: " + str(message) + ollama_messages[i] = {"role": "user", "content": content} + + # As we are changing messages, let's merge if they have two user messages on the end and the last one is tool call step instructions + if ( + len(ollama_messages) >= 2 + and not self._native_tool_calls + and ollama_messages[-2]["role"] == "user" + and ollama_messages[-1]["role"] == "user" + and ( + ollama_messages[-1]["content"] == self._manual_tool_call_step1 + or ollama_messages[-1]["content"] == self._manual_tool_call_step2 + ) + ): + ollama_messages[-2]["content"] = ollama_messages[-2]["content"] + ollama_messages[-1]["content"] + del ollama_messages[-1] + + # Ensure the last message is a user / system message, if not, add a user message + if ollama_messages[-1]["role"] != "user" and ollama_messages[-1]["role"] != "system": + ollama_messages.append({"role": "user", "content": "Please continue."}) + + return ollama_messages + + +def response_to_tool_call(response_string: str) -> Any: + """Attempts to convert the response to an object, aimed to align 
with function format [{},{}]""" + + # We try and detect the list[dict] format: + # Pattern 1 is [{},{}] + # Pattern 2 is {} (without the [], so could be a single function call) + patterns = [r"\[[\s\S]*?\]", r"\{[\s\S]*\}"] + + for i, pattern in enumerate(patterns): + # Search for the pattern in the input string + matches = re.findall(pattern, response_string.strip()) + + for match in matches: + + # It has matched, extract it and load it + json_str = match.strip() + data_object = None + + try: + # Attempt to convert it as is + data_object = json.loads(json_str) + except Exception: + try: + # If that fails, attempt to repair it + + if i == 0: + # Enclose to a JSON object for repairing, which is restored upon fix + fixed_json = repair_json("{'temp':" + json_str + "}") + data_object = json.loads(fixed_json) + data_object = data_object["temp"] + else: + fixed_json = repair_json(json_str) + data_object = json.loads(fixed_json) + except json.JSONDecodeError as e: + if e.msg == "Invalid \\escape": + # Handle Mistral/Mixtral trying to escape underlines with \\ + try: + json_str = json_str.replace("\\_", "_") + if i == 0: + fixed_json = repair_json("{'temp':" + json_str + "}") + data_object = json.loads(fixed_json) + data_object = data_object["temp"] + else: + fixed_json = repair_json("{'temp':" + json_str + "}") + data_object = json.loads(fixed_json) + except Exception: + pass + except Exception: + pass + + if data_object is not None: + data_object = _object_to_tool_call(data_object) + + if data_object is not None: + return data_object + + # There's no tool call in the response + return None + + +def _object_to_tool_call(data_object: Any) -> List[Dict]: + """Attempts to convert an object to a valid tool call object List[Dict] and returns it, if it can, otherwise None""" + + # If it's a dictionary and not a list then wrap in a list + if isinstance(data_object, dict): + data_object = [data_object] + + # Validate that the data is a list of dictionaries + if isinstance(data_object, list) and all(isinstance(item, dict) for item in data_object): + # Perfect format, a list of dictionaries + + # Check that each dictionary has at least 'name', optionally 'arguments' and no other keys + is_invalid = False + for item in data_object: + if not is_valid_tool_call_item(item): + is_invalid = True + break + + # All passed, name and (optionally) arguments exist for all entries. 
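+        # e.g. [{"name": "weather_forecast", "arguments": {"location": "New York"}}] is returned as-is,
+        # while an item missing 'name' or carrying other keys (e.g. {"function": "weather_forecast"}) means None is returned instead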
+ if not is_invalid: + return data_object + elif isinstance(data_object, list): + # If it's a list but the items are not dictionaries, check if they are strings that can be converted to dictionaries + data_copy = data_object.copy() + is_invalid = False + for i, item in enumerate(data_copy): + try: + new_item = eval(item) + if isinstance(new_item, dict): + if is_valid_tool_call_item(new_item): + data_object[i] = new_item + else: + is_invalid = True + break + else: + is_invalid = True + break + except Exception: + is_invalid = True + break + + if not is_invalid: + return data_object + + return None + + +def is_valid_tool_call_item(call_item: dict) -> bool: + """Check that a dictionary item has at least 'name', optionally 'arguments' and no other keys to match a tool call JSON""" + if "name" not in call_item or not isinstance(call_item["name"], str): + return False + + if set(call_item.keys()) - {"name", "arguments"}: + return False + + return True diff --git a/autogen/runtime_logging.py b/autogen/runtime_logging.py index 9036fe5c65cc..0a9b1687b75c 100644 --- a/autogen/runtime_logging.py +++ b/autogen/runtime_logging.py @@ -20,6 +20,7 @@ from autogen.oai.gemini import GeminiClient from autogen.oai.groq import GroqClient from autogen.oai.mistral import MistralAIClient + from autogen.oai.ollama import OllamaClient from autogen.oai.together import TogetherClient logger = logging.getLogger(__name__) @@ -124,6 +125,7 @@ def log_new_client( TogetherClient, GroqClient, CohereClient, + OllamaClient, BedrockClient, ], wrapper: OpenAIWrapper, diff --git a/setup.py b/setup.py index 63a9c9745ffb..fcce7d5e9dcd 100644 --- a/setup.py +++ b/setup.py @@ -105,6 +105,7 @@ "mistral": ["mistralai>=1.0.1"], "groq": ["groq>=0.9.0"], "cohere": ["cohere>=5.5.8"], + "ollama": ["ollama>=0.3.1", "fix_busted_json>=0.0.18"], "bedrock": ["boto3>=1.34.149"], } diff --git a/test/oai/test_ollama.py b/test/oai/test_ollama.py new file mode 100644 index 000000000000..729e1b95d816 --- /dev/null +++ b/test/oai/test_ollama.py @@ -0,0 +1,302 @@ +from unittest.mock import MagicMock, patch + +import pytest + +try: + from autogen.oai.ollama import OllamaClient, response_to_tool_call + + skip = False +except ImportError: + OllamaClient = object + InternalServerError = object + skip = True + + +# Fixtures for mock data +@pytest.fixture +def mock_response(): + class MockResponse: + def __init__(self, text, choices, usage, cost, model): + self.text = text + self.choices = choices + self.usage = usage + self.cost = cost + self.model = model + + return MockResponse + + +@pytest.fixture +def ollama_client(): + + # Set Ollama client with some default values + client = OllamaClient() + + client._native_tool_calls = True + client._tools_in_conversation = False + + return client + + +skip_reason = "Ollama dependency is not installed" + + +# Test initialization and configuration +@pytest.mark.skipif(skip, reason=skip_reason) +def test_initialization(): + + # Creation works without an api_key + OllamaClient() + + +# Test parameters +@pytest.mark.skipif(skip, reason=skip_reason) +def test_parsing_params(ollama_client): + # All parameters (with default values) + params = { + "model": "llama3.1:8b", + "temperature": 0.8, + "num_predict": 128, + "repeat_penalty": 1.1, + "seed": 42, + "top_k": 40, + "top_p": 0.9, + "stream": False, + } + expected_params = { + "model": "llama3.1:8b", + "temperature": 0.8, + "num_predict": 128, + "top_k": 40, + "top_p": 0.9, + "options": { + "repeat_penalty": 1.1, + "seed": 42, + }, + "stream": False, + } + result = 
ollama_client.parse_params(params) + assert result == expected_params + + # Incorrect types, defaults should be set, will show warnings but not trigger assertions + params = { + "model": "llama3.1:8b", + "temperature": "0.5", + "num_predict": "128", + "repeat_penalty": "1.1", + "seed": "42", + "top_k": "40", + "top_p": "0.9", + "stream": "True", + } + result = ollama_client.parse_params(params) + assert result == expected_params + + # Only model, others set as defaults if they are mandatory + params = { + "model": "llama3.1:8b", + } + expected_params = {"model": "llama3.1:8b", "stream": False} + result = ollama_client.parse_params(params) + assert result == expected_params + + # No model + params = { + "temperature": 0.8, + } + + with pytest.raises(AssertionError) as assertinfo: + result = ollama_client.parse_params(params) + + assert "Please specify the 'model' in your config list entry to nominate the Ollama model to use." in str( + assertinfo.value + ) + + +# Test text generation +@pytest.mark.skipif(skip, reason=skip_reason) +@patch("autogen.oai.ollama.OllamaClient.create") +def test_create_response(mock_chat, ollama_client): + # Mock OllamaClient.chat response + mock_ollama_response = MagicMock() + mock_ollama_response.choices = [ + MagicMock(finish_reason="stop", message=MagicMock(content="Example Ollama response", tool_calls=None)) + ] + mock_ollama_response.id = "mock_ollama_response_id" + mock_ollama_response.model = "llama3.1:8b" + mock_ollama_response.usage = MagicMock(prompt_tokens=10, completion_tokens=20) # Example token usage + + mock_chat.return_value = mock_ollama_response + + # Test parameters + params = { + "messages": [{"role": "user", "content": "Hello"}, {"role": "assistant", "content": "World"}], + "model": "llama3.1:8b", + } + + # Call the create method + response = ollama_client.create(params) + + # Assertions to check if response is structured as expected + assert ( + response.choices[0].message.content == "Example Ollama response" + ), "Response content should match expected output" + assert response.id == "mock_ollama_response_id", "Response ID should match the mocked response ID" + assert response.model == "llama3.1:8b", "Response model should match the mocked response model" + assert response.usage.prompt_tokens == 10, "Response prompt tokens should match the mocked response usage" + assert response.usage.completion_tokens == 20, "Response completion tokens should match the mocked response usage" + + +# Test functions/tools +@pytest.mark.skipif(skip, reason=skip_reason) +@patch("autogen.oai.ollama.OllamaClient.create") +def test_create_response_with_tool_call(mock_chat, ollama_client): + # Mock OllamaClient.chat response + mock_function = MagicMock(name="currency_calculator") + mock_function.name = "currency_calculator" + mock_function.arguments = '{"base_currency": "EUR", "quote_currency": "USD", "base_amount": 123.45}' + + mock_function_2 = MagicMock(name="get_weather") + mock_function_2.name = "get_weather" + mock_function_2.arguments = '{"location": "New York"}' + + mock_chat.return_value = MagicMock( + choices=[ + MagicMock( + finish_reason="tool_calls", + message=MagicMock( + content="Sample text about the functions", + tool_calls=[ + MagicMock(id="gdRdrvnHh", function=mock_function), + MagicMock(id="abRdrvnHh", function=mock_function_2), + ], + ), + ) + ], + id="mock_ollama_response_id", + model="llama3.1:8b", + usage=MagicMock(prompt_tokens=10, completion_tokens=20), + ) + + # Construct parameters + converted_functions = [ + { + "type": "function", + 
"function": { + "description": "Currency exchange calculator.", + "name": "currency_calculator", + "parameters": { + "type": "object", + "properties": { + "base_amount": {"type": "number", "description": "Amount of currency in base_currency"}, + }, + "required": ["base_amount"], + }, + }, + } + ] + ollama_messages = [ + {"role": "user", "content": "How much is 123.45 EUR in USD?"}, + {"role": "assistant", "content": "World"}, + ] + + # Call the create method + response = ollama_client.create({"messages": ollama_messages, "tools": converted_functions, "model": "llama3.1:8b"}) + + # Assertions to check if the functions and content are included in the response + assert response.choices[0].message.content == "Sample text about the functions" + assert response.choices[0].message.tool_calls[0].function.name == "currency_calculator" + assert response.choices[0].message.tool_calls[1].function.name == "get_weather" + + +# Test function parsing with manual tool calling +@pytest.mark.skipif(skip, reason=skip_reason) +def test_manual_tool_calling_parsing(ollama_client): + # Test the parsing of a tool call within the response content (fully correct) + response_content = """[{"name": "weather_forecast", "arguments":{"location": "New York"}},{"name": "currency_calculator", "arguments":{"base_amount": 123.45, "quote_currency": "EUR", "base_currency": "USD"}}]""" + + response_tool_calls = response_to_tool_call(response_content) + + expected_tool_calls = [ + {"name": "weather_forecast", "arguments": {"location": "New York"}}, + { + "name": "currency_calculator", + "arguments": {"base_amount": 123.45, "quote_currency": "EUR", "base_currency": "USD"}, + }, + ] + + assert ( + response_tool_calls == expected_tool_calls + ), "Manual Tool Calling Parsing of response did not yield correct tool_calls (full string match)" + + # Test the parsing with a substring containing the response content (should still pass) + response_content = """I will call two functions, weather_forecast and currency_calculator:\n[{"name": "weather_forecast", "arguments":{"location": "New York"}},{"name": "currency_calculator", "arguments":{"base_amount": 123.45, "quote_currency": "EUR", "base_currency": "USD"}}]""" + + response_tool_calls = response_to_tool_call(response_content) + + assert ( + response_tool_calls == expected_tool_calls + ), "Manual Tool Calling Parsing of response did not yield correct tool_calls (partial string match)" + + # Test the parsing with an invalid function call + response_content = """[{"function": "weather_forecast", "args":{"location": "New York"}},{"function": "currency_calculator", "args":{"base_amount": 123.45, "quote_currency": "EUR", "base_currency": "USD"}}]""" + + response_tool_calls = response_to_tool_call(response_content) + + assert ( + response_tool_calls is None + ), "Manual Tool Calling Parsing of response did not yield correct tool_calls (invalid function call)" + + # Test the parsing with plain text + response_content = """Call the weather_forecast function and pass in 'New York' as the 'location' argument.""" + + response_tool_calls = response_to_tool_call(response_content) + + assert ( + response_tool_calls is None + ), "Manual Tool Calling Parsing of response did not yield correct tool_calls (no function in text)" + + +# Test message conversion from OpenAI to Ollama format +@pytest.mark.skipif(skip, reason=skip_reason) +def test_oai_messages_to_ollama_messages(ollama_client): + # Test that the "name" key is removed + test_messages = [ + {"role": "system", "content": "You are a helpful AI 
bot."}, + {"role": "user", "name": "anne", "content": "Why is the sky blue?"}, + ] + messages = ollama_client.oai_messages_to_ollama_messages(test_messages, None) + + expected_messages = [ + {"role": "system", "content": "You are a helpful AI bot."}, + {"role": "user", "content": "Why is the sky blue?"}, + ] + + assert messages == expected_messages, "'name' was not removed from messages" + + # Test that there isn't a final system message and it's changed to user + test_messages.append({"role": "system", "content": "Summarise the conversation."}) + + messages = ollama_client.oai_messages_to_ollama_messages(test_messages, None) + + expected_messages = [ + {"role": "system", "content": "You are a helpful AI bot."}, + {"role": "user", "content": "Why is the sky blue?"}, + {"role": "user", "content": "Summarise the conversation."}, + ] + + assert messages == expected_messages, "Final 'system' message was not changed to 'user'" + + # Test that the last message is a user or system message and if not, add a continue message + test_messages[2] = {"role": "assistant", "content": "The sky is blue because that's a great colour."} + + messages = ollama_client.oai_messages_to_ollama_messages(test_messages, None) + + expected_messages = [ + {"role": "system", "content": "You are a helpful AI bot."}, + {"role": "user", "content": "Why is the sky blue?"}, + {"role": "assistant", "content": "The sky is blue because that's a great colour."}, + {"role": "user", "content": "Please continue."}, + ] + + assert messages == expected_messages, "'Please continue' message was not appended." diff --git a/website/docs/topics/non-openai-models/local-ollama.ipynb b/website/docs/topics/non-openai-models/local-ollama.ipynb new file mode 100644 index 000000000000..95803e50e593 --- /dev/null +++ b/website/docs/topics/non-openai-models/local-ollama.ipynb @@ -0,0 +1,631 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Ollama\n", + "\n", + "[Ollama](https://ollama.com/) is a local inference engine that enables you to run open-weight LLMs in your environment. It has native support for a large number of models such as Google's Gemma, Meta's Llama 2/3/3.1, Microsoft's Phi 3, Mistral.AI's Mistral/Mixtral, and Cohere's Command R models.\n", + "\n", + "Note: Previously, to use Ollama with AutoGen you required LiteLLM. Now it can be used directly and supports tool calling." 
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Features\n", + "\n", + "When using this Ollama client class, messages are tailored to accommodate the specific requirements of Ollama's API and this includes message role sequences, support for function/tool calling, and token usage.\n", + "\n", + "## Installing Ollama\n", + "\n", + "For Mac and Windows, [download Ollama](https://ollama.com/download).\n", + "\n", + "For Linux:\n", + "\n", + "```bash\n", + "curl -fsSL https://ollama.com/install.sh | sh\n", + "```\n", + "\n", + "## Downloading models for Ollama\n", + "\n", + "Ollama has a library of models to choose from, see them [here](https://ollama.com/library).\n", + "\n", + "Before you can use a model, you need to download it (using the name of the model from the library):\n", + "\n", + "```bash\n", + "ollama pull llama3.1\n", + "```\n", + "\n", + "To view the models you have downloaded and can use:\n", + "\n", + "```bash\n", + "ollama list\n", + "```\n", + "\n", + "## Getting started with AutoGen and Ollama\n", + "\n", + "When installing AutoGen, you need to install the `pyautogen` package with the Ollama library.\n", + "\n", + "``` bash\n", + "pip install pyautogen[ollama]\n", + "```" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "See the sample `OAI_CONFIG_LIST` below showing how the Ollama client class is used by specifying the `api_type` as `ollama`.\n", + "\n", + "```python\n", + "[\n", + " {\n", + " \"model\": \"llama3.1\",\n", + " \"api_type\": \"ollama\"\n", + " },\n", + " {\n", + " \"model\": \"llama3.1:8b-instruct-q6_K\",\n", + " \"api_type\": \"ollama\"\n", + " },\n", + " {\n", + " \"model\": \"mistral-nemo\",\n", + " \"api_type\": \"ollama\"\n", + " }\n", + "]\n", + "```\n", + "\n", + "If you need to specify the URL for your Ollama install, use the `client_host` key in your config as per the below example:\n", + "\n", + "```python\n", + "[\n", + " {\n", + " \"model\": \"llama3.1\",\n", + " \"api_type\": \"ollama\",\n", + " \"client_host\": \"http://192.168.0.1:11434\"\n", + " }\n", + "]\n", + "```" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## API parameters\n", + "\n", + "The following Ollama parameters can be added to your config. See [this link](https://github.com/ollama/ollama/blob/main/docs/api.md#parameters) for further information on them.\n", + "\n", + "- num_predict (integer): -1 is infinite, -2 is fill context, 128 is default\n", + "- repeat_penalty (float)\n", + "- seed (integer)\n", + "- stream (boolean)\n", + "- temperature (float)\n", + "- top_k (int)\n", + "- top_p (float)\n", + "\n", + "Example:\n", + "```python\n", + "[\n", + " {\n", + " \"model\": \"llama3.1:instruct\",\n", + " \"api_type\": \"ollama\",\n", + " \"num_predict\": -1,\n", + " \"repeat_penalty\": 1.1,\n", + " \"seed\": 42,\n", + " \"stream\": False,\n", + " \"temperature\": 1,\n", + " \"top_k\": 50,\n", + " \"top_p\": 0.8\n", + " }\n", + "]\n", + "```" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Two-Agent Coding Example\n", + "\n", + "In this example, we run a two-agent chat with an AssistantAgent (primarily a coding agent) to generate code to count the number of prime numbers between 1 and 10,000 and then it will be executed.\n", + "\n", + "We'll use Meta's Llama 3.1 model which is suitable for coding.\n", + "\n", + "In this example we will specify the URL for the Ollama installation using `client_host`." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [], + "source": [ + "config_list = [\n", + " {\n", + " # Let's choose the Meta's Llama 3.1 model (model names must match Ollama exactly)\n", + " \"model\": \"llama3.1:8b\",\n", + " # We specify the API Type as 'ollama' so it uses the Ollama client class\n", + " \"api_type\": \"ollama\",\n", + " \"stream\": False,\n", + " \"client_host\": \"http://192.168.0.1:11434\",\n", + " }\n", + "]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Importantly, we have tweaked the system message so that the model doesn't return the termination keyword, which we've changed to FINISH, with the code block." + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/usr/local/lib/python3.11/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n", + " from .autonotebook import tqdm as notebook_tqdm\n" + ] + } + ], + "source": [ + "from pathlib import Path\n", + "\n", + "from autogen import AssistantAgent, UserProxyAgent\n", + "from autogen.coding import LocalCommandLineCodeExecutor\n", + "\n", + "# Setting up the code executor\n", + "workdir = Path(\"coding\")\n", + "workdir.mkdir(exist_ok=True)\n", + "code_executor = LocalCommandLineCodeExecutor(work_dir=workdir)\n", + "\n", + "# Setting up the agents\n", + "\n", + "# The UserProxyAgent will execute the code that the AssistantAgent provides\n", + "user_proxy_agent = UserProxyAgent(\n", + " name=\"User\",\n", + " code_execution_config={\"executor\": code_executor},\n", + " is_termination_msg=lambda msg: \"FINISH\" in msg.get(\"content\"),\n", + ")\n", + "\n", + "system_message = \"\"\"You are a helpful AI assistant who writes code and the user\n", + "executes it. Solve tasks using your python coding skills.\n", + "In the following cases, suggest python code (in a python coding block) for the\n", + "user to execute. When using code, you must indicate the script type in the code block.\n", + "You only need to create one working sample.\n", + "Do not suggest incomplete code which requires users to modify it.\n", + "Don't use a code block if it's not intended to be executed by the user. Don't\n", + "include multiple code blocks in one response. Do not ask users to copy and\n", + "paste the result. Instead, use 'print' function for the output when relevant.\n", + "Check the execution result returned by the user.\n", + "\n", + "If the result indicates there is an error, fix the error.\n", + "\n", + "IMPORTANT: If it has executed successfully, ONLY output 'FINISH'.\"\"\"\n", + "\n", + "# The AssistantAgent, using the Ollama config, will take the coding request and return code\n", + "assistant_agent = AssistantAgent(\n", + " name=\"Ollama Assistant\",\n", + " system_message=system_message,\n", + " llm_config={\"config_list\": config_list},\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can now start the chat." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\u001b[33mUser\u001b[0m (to Ollama Assistant):\n", + "\n", + "Provide code to count the number of prime numbers from 1 to 10000.\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[33mOllama Assistant\u001b[0m (to User):\n", + "\n", + "```python\n", + "def is_prime(n):\n", + " if n <= 1:\n", + " return False\n", + " for i in range(2, int(n**0.5) + 1):\n", + " if n % i == 0:\n", + " return False\n", + " return True\n", + "\n", + "count = sum(is_prime(i) for i in range(1, 10001))\n", + "print(count)\n", + "```\n", + "\n", + "Please execute this code. I will wait for the result.\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[31m\n", + ">>>>>>>> NO HUMAN INPUT RECEIVED.\u001b[0m\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[31m\n", + ">>>>>>>> EXECUTING CODE BLOCK (inferred language is python)...\u001b[0m\n", + "\u001b[33mUser\u001b[0m (to Ollama Assistant):\n", + "\n", + "exitcode: 0 (execution succeeded)\n", + "Code output: 1229\n", + "\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[33mOllama Assistant\u001b[0m (to User):\n", + "\n", + "FINISH\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[31m\n", + ">>>>>>>> NO HUMAN INPUT RECEIVED.\u001b[0m\n" + ] + } + ], + "source": [ + "# Start the chat, with the UserProxyAgent asking the AssistantAgent the message\n", + "chat_result = user_proxy_agent.initiate_chat(\n", + " assistant_agent,\n", + " message=\"Provide code to count the number of prime numbers from 1 to 10000.\",\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Tool Calling - Native vs Manual\n", + "\n", + "Ollama supports native tool calling (Ollama v0.3.1 library onward). If you install AutoGen with `pip install pyautogen[ollama]` you will be able to use native tool calling.\n", + "\n", + "The parameter `native_tool_calls` in your configuration allows you to specify if you want to use Ollama's native tool calling (default) or manual tool calling.\n", + "\n", + "```python\n", + "[\n", + " {\n", + " \"model\": \"llama3.1\",\n", + " \"api_type\": \"ollama\",\n", + " \"client_host\": \"http://192.168.0.1:11434\",\n", + " \"native_tool_calls\": True # Use Ollama's native tool calling, False for manual\n", + " }\n", + "]\n", + "```\n", + "\n", + "Native tool calling only works with certain models and an exception will be thrown if you try to use it with an unsupported model.\n", + "\n", + "Manual tool calling allows you to use tool calling with any Ollama model. It incorporates guided tool calling messages into the prompt that guide the LLM through the process of selecting a tool and then evaluating the result of the tool. As to be expected, the ability to follow instructions and return formatted JSON is highly dependent on the model.\n", + "\n", + "You can tailor the manual tool calling messages by adding these parameters to your configuration:\n", + "\n", + "- `manual_tool_call_instruction`\n", + "- `manual_tool_call_step1`\n", + "- `manual_tool_call_step2`\n", + "\n", + "To use manual tool calling set `native_tool_calls` to `False`." 
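+    "\n",
+    "\n",
+    "For illustration, a config list entry for manual tool calling might look like the sketch below. The instruction text shown is an assumed example rather than the built-in default; the client replaces the `[FUNCTIONS_LIST]` placeholder with the functions available in the conversation.\n",
+    "\n",
+    "```python\n",
+    "[\n",
+    "    {\n",
+    "        \"model\": \"llama3.1\",\n",
+    "        \"api_type\": \"ollama\",\n",
+    "        \"client_host\": \"http://192.168.0.1:11434\",\n",
+    "        \"native_tool_calls\": False,  # Use manual tool calling\n",
+    "        \"manual_tool_call_instruction\": \"You can call these functions: [FUNCTIONS_LIST]. Respond with JSON only.\"  # Illustrative override, not the built-in default\n",
+    "    }\n",
+    "]\n",
+    "```\n",
+    "\n",
+    "`manual_tool_call_step1` and `manual_tool_call_step2` can be overridden in the same way to tailor the step instructions appended to the latest user message."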
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Reducing repetitive tool calls\n", + "\n", + "By incorporating tools into a conversation, LLMs can often continually recommend them to be called, even after they've been called and a result returned. This can lead to a never ending cycle of tool calls.\n", + "\n", + "To remove the chance of an LLM recommending a tool call, an additional parameter called `hide_tools` can be used to specify when tools are hidden from the LLM. The string values for the parameter are:\n", + "\n", + "- 'never': tools are never hidden\n", + "- 'if_all_run': tools are hidden if all tools have been called\n", + "- 'if_any_run': tools are hidden if any tool has been called\n", + "\n", + "This can be used with native or manual tool calling, an example of a configuration is shown below.\n", + "\n", + "```python\n", + "[\n", + " {\n", + " \"model\": \"llama3.1\",\n", + " \"api_type\": \"ollama\",\n", + " \"client_host\": \"http://192.168.0.1:11434\",\n", + " \"native_tool_calls\": True,\n", + " \"hide_tools\": \"if_any_run\" # Hide tools once any tool has been called\n", + " }\n", + "]\n", + "```" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Tool Call Example\n", + "\n", + "In this example, instead of writing code, we will have an agent assist with some trip planning using multiple tool calling.\n", + "\n", + "Again, we'll use Meta's versatile Llama 3.1.\n", + "\n", + "Native Ollama tool calling will be used and we'll utilise the `hide_tools` parameter to hide the tools once all have been called." + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [], + "source": [ + "import json\n", + "from typing import Literal\n", + "\n", + "from typing_extensions import Annotated\n", + "\n", + "import autogen\n", + "\n", + "config_list = [\n", + " {\n", + " # Let's choose the Meta's Llama 3.1 model (model names must match Ollama exactly)\n", + " \"model\": \"llama3.1:8b\",\n", + " \"api_type\": \"ollama\",\n", + " \"stream\": False,\n", + " \"client_host\": \"http://192.168.0.1:11434\",\n", + " \"hide_tools\": \"if_any_run\",\n", + " }\n", + "]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We'll create our agents. Importantly, we're using native Ollama tool calling and to help guide it we add the JSON to the system_message so that the number fields aren't wrapped in quotes (becoming strings)." + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": [], + "source": [ + "# Create the agent for tool calling\n", + "chatbot = autogen.AssistantAgent(\n", + " name=\"chatbot\",\n", + " system_message=\"\"\"For currency exchange and weather forecasting tasks,\n", + " only use the functions you have been provided with.\n", + " Example of the return JSON is:\n", + " {\n", + " \"parameter_1_name\": 100.00,\n", + " \"parameter_2_name\": \"ABC\",\n", + " \"parameter_3_name\": \"DEF\",\n", + " }.\n", + " Another example of the return JSON is:\n", + " {\n", + " \"parameter_1_name\": \"GHI\",\n", + " \"parameter_2_name\": \"ABC\",\n", + " \"parameter_3_name\": \"DEF\",\n", + " \"parameter_4_name\": 123.00,\n", + " }.\n", + " Output 'HAVE FUN!' 
when an answer has been provided.\"\"\",\n", + " llm_config={\"config_list\": config_list},\n", + ")\n", + "\n", + "# Note that we have changed the termination string to be \"HAVE FUN!\"\n", + "user_proxy = autogen.UserProxyAgent(\n", + " name=\"user_proxy\",\n", + " is_termination_msg=lambda x: x.get(\"content\", \"\") and \"HAVE FUN!\" in x.get(\"content\", \"\"),\n", + " human_input_mode=\"NEVER\",\n", + " max_consecutive_auto_reply=1,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Create and register our functions (tools). See the [tutorial chapter on tool use](/docs/tutorial/tool-use) \n", + "for more information." + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": {}, + "outputs": [], + "source": [ + "# Currency Exchange function\n", + "\n", + "CurrencySymbol = Literal[\"USD\", \"EUR\"]\n", + "\n", + "# Define our function that we expect to call\n", + "\n", + "\n", + "def exchange_rate(base_currency: CurrencySymbol, quote_currency: CurrencySymbol) -> float:\n", + " if base_currency == quote_currency:\n", + " return 1.0\n", + " elif base_currency == \"USD\" and quote_currency == \"EUR\":\n", + " return 1 / 1.1\n", + " elif base_currency == \"EUR\" and quote_currency == \"USD\":\n", + " return 1.1\n", + " else:\n", + " raise ValueError(f\"Unknown currencies {base_currency}, {quote_currency}\")\n", + "\n", + "\n", + "# Register the function with the agent\n", + "\n", + "\n", + "@user_proxy.register_for_execution()\n", + "@chatbot.register_for_llm(description=\"Currency exchange calculator.\")\n", + "def currency_calculator(\n", + " base_amount: Annotated[\n", + " float,\n", + " \"Amount of currency in base_currency. Type is float, not string, return value should be a number only, e.g. 987.65.\",\n", + " ],\n", + " base_currency: Annotated[CurrencySymbol, \"Base currency\"] = \"USD\",\n", + " quote_currency: Annotated[CurrencySymbol, \"Quote currency\"] = \"EUR\",\n", + ") -> str:\n", + " quote_amount = exchange_rate(base_currency, quote_currency) * base_amount\n", + " return f\"{format(quote_amount, '.2f')} {quote_currency}\"\n", + "\n", + "\n", + "# Weather function\n", + "\n", + "\n", + "# Example function to make available to model\n", + "def get_current_weather(location, unit=\"fahrenheit\"):\n", + " \"\"\"Get the weather for some location\"\"\"\n", + " if \"chicago\" in location.lower():\n", + " return json.dumps({\"location\": \"Chicago\", \"temperature\": \"13\", \"unit\": unit})\n", + " elif \"san francisco\" in location.lower():\n", + " return json.dumps({\"location\": \"San Francisco\", \"temperature\": \"55\", \"unit\": unit})\n", + " elif \"new york\" in location.lower():\n", + " return json.dumps({\"location\": \"New York\", \"temperature\": \"11\", \"unit\": unit})\n", + " else:\n", + " return json.dumps({\"location\": location, \"temperature\": \"unknown\"})\n", + "\n", + "\n", + "# Register the function with the agent\n", + "\n", + "\n", + "@user_proxy.register_for_execution()\n", + "@chatbot.register_for_llm(description=\"Weather forecast for US cities.\")\n", + "def weather_forecast(\n", + " location: Annotated[str, \"City name\"],\n", + ") -> str:\n", + " weather_details = get_current_weather(location=location)\n", + " weather = json.loads(weather_details)\n", + " return f\"{weather['location']} will be {weather['temperature']} degrees {weather['unit']}\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "And run it!" 
+ ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\u001b[33muser_proxy\u001b[0m (to chatbot):\n", + "\n", + "What's the weather in New York and can you tell me how much is 123.45 EUR in USD so I can spend it on my holiday? Throw a few holiday tips in as well.\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[33mchatbot\u001b[0m (to user_proxy):\n", + "\n", + "\n", + "\u001b[32m***** Suggested tool call (ollama_func_4506): weather_forecast *****\u001b[0m\n", + "Arguments: \n", + "{\"location\": \"New York\"}\n", + "\u001b[32m********************************************************************\u001b[0m\n", + "\u001b[32m***** Suggested tool call (ollama_func_4507): currency_calculator *****\u001b[0m\n", + "Arguments: \n", + "{\"base_amount\": 123.45, \"base_currency\": \"EUR\", \"quote_currency\": \"USD\"}\n", + "\u001b[32m***********************************************************************\u001b[0m\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[35m\n", + ">>>>>>>> EXECUTING FUNCTION weather_forecast...\u001b[0m\n", + "\u001b[35m\n", + ">>>>>>>> EXECUTING FUNCTION currency_calculator...\u001b[0m\n", + "\u001b[33muser_proxy\u001b[0m (to chatbot):\n", + "\n", + "\u001b[33muser_proxy\u001b[0m (to chatbot):\n", + "\n", + "\u001b[32m***** Response from calling tool (ollama_func_4506) *****\u001b[0m\n", + "New York will be 11 degrees fahrenheit\n", + "\u001b[32m*********************************************************\u001b[0m\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[33muser_proxy\u001b[0m (to chatbot):\n", + "\n", + "\u001b[32m***** Response from calling tool (ollama_func_4507) *****\u001b[0m\n", + "135.80 USD\n", + "\u001b[32m*********************************************************\u001b[0m\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[33mchatbot\u001b[0m (to user_proxy):\n", + "\n", + "Based on the results, it seems that:\n", + "\n", + "* The weather forecast for New York is expected to be around 11 degrees Fahrenheit.\n", + "* The exchange rate for EUR to USD is currently 1 EUR = 1.3580 USD, so 123.45 EUR is equivalent to approximately 135.80 USD.\n", + "\n", + "As a bonus, here are some holiday tips in New York:\n", + "\n", + "* Be sure to try a classic New York-style hot dog from a street cart or a diner.\n", + "* Explore the iconic Central Park and take a stroll through the High Line for some great views of the city.\n", + "* Catch a Broadway show or a concert at one of the many world-class venues in the city.\n", + "\n", + "And... HAVE FUN!\n", + "\n", + "--------------------------------------------------------------------------------\n", + "LLM SUMMARY: The weather forecast for New York is expected to be around 11 degrees Fahrenheit.\n", + "123.45 EUR is equivalent to approximately 135.80 USD.\n", + "Try a classic New York-style hot dog, explore Central Park and the High Line, and catch a Broadway show or concert during your visit.\n" + ] + } + ], + "source": [ + "# start the conversation\n", + "res = user_proxy.initiate_chat(\n", + " chatbot,\n", + " message=\"What's the weather in New York and can you tell me how much is 123.45 EUR in USD so I can spend it on my holiday? 
Throw a few holiday tips in as well.\",\n", + " summary_method=\"reflection_with_llm\",\n", + ")\n", + "\n", + "print(f\"LLM SUMMARY: {res.summary['content']}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Great, we can see that Llama 3.1 has helped choose the right functions, their parameters, and then summarised them for us." + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.9" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} From 27c882841fdccae94b29f7301901f1248136d09c Mon Sep 17 00:00:00 2001 From: Mark Sze <66362098+marklysze@users.noreply.github.com> Date: Wed, 2 Oct 2024 09:55:17 +1000 Subject: [PATCH 08/35] Updated to v0.3.3 with changes to options handling (#3593) --- autogen/oai/ollama.py | 8 ++++---- setup.py | 2 +- test/oai/test_ollama.py | 8 ++++---- 3 files changed, 9 insertions(+), 9 deletions(-) diff --git a/autogen/oai/ollama.py b/autogen/oai/ollama.py index cf154f635da9..636d152ae85c 100644 --- a/autogen/oai/ollama.py +++ b/autogen/oai/ollama.py @@ -127,7 +127,7 @@ def parse_params(self, params: Dict[str, Any]) -> Dict[str, Any]: if "num_predict" in params: # Maximum number of tokens to predict, note: -1 is infinite, -2 is fill context, 128 is default - ollama_params["num_predict"] = validate_parameter(params, "num_predict", int, False, 128, None, None) + options_dict["num_predict"] = validate_parameter(params, "num_predict", int, False, 128, None, None) if "repeat_penalty" in params: options_dict["repeat_penalty"] = validate_parameter( @@ -138,15 +138,15 @@ def parse_params(self, params: Dict[str, Any]) -> Dict[str, Any]: options_dict["seed"] = validate_parameter(params, "seed", int, False, 42, None, None) if "temperature" in params: - ollama_params["temperature"] = validate_parameter( + options_dict["temperature"] = validate_parameter( params, "temperature", (int, float), False, 0.8, None, None ) if "top_k" in params: - ollama_params["top_k"] = validate_parameter(params, "top_k", int, False, 40, None, None) + options_dict["top_k"] = validate_parameter(params, "top_k", int, False, 40, None, None) if "top_p" in params: - ollama_params["top_p"] = validate_parameter(params, "top_p", (int, float), False, 0.9, None, None) + options_dict["top_p"] = validate_parameter(params, "top_p", (int, float), False, 0.9, None, None) if self._native_tool_calls and self._tools_in_conversation and not self._should_hide_tools: ollama_params["tools"] = params["tools"] diff --git a/setup.py b/setup.py index fcce7d5e9dcd..65f5cbe8f029 100644 --- a/setup.py +++ b/setup.py @@ -105,7 +105,7 @@ "mistral": ["mistralai>=1.0.1"], "groq": ["groq>=0.9.0"], "cohere": ["cohere>=5.5.8"], - "ollama": ["ollama>=0.3.1", "fix_busted_json>=0.0.18"], + "ollama": ["ollama>=0.3.3", "fix_busted_json>=0.0.18"], "bedrock": ["boto3>=1.34.149"], } diff --git a/test/oai/test_ollama.py b/test/oai/test_ollama.py index 729e1b95d816..5491a04d7bec 100644 --- a/test/oai/test_ollama.py +++ b/test/oai/test_ollama.py @@ -65,13 +65,13 @@ def test_parsing_params(ollama_client): } expected_params = { "model": "llama3.1:8b", - "temperature": 0.8, - "num_predict": 128, - "top_k": 40, - "top_p": 0.9, "options": { "repeat_penalty": 1.1, "seed": 42, + "temperature": 0.8, + 
"num_predict": 128, + "top_k": 40, + "top_p": 0.9, }, "stream": False, } From 32166b5e3e95e353103fc9f0b3e7ce1c25cc6ea6 Mon Sep 17 00:00:00 2001 From: Eric Zhu Date: Tue, 1 Oct 2024 16:55:31 -0700 Subject: [PATCH 09/35] Use native logger formatter (#3594) --- autogen/oai/client.py | 2 +- autogen/oai/client_utils.py | 5 +++++ autogen/oai/cohere.py | 3 +-- autogen/oai/completion.py | 2 +- 4 files changed, 8 insertions(+), 4 deletions(-) diff --git a/autogen/oai/client.py b/autogen/oai/client.py index cbabe93d874c..833cc7772196 100644 --- a/autogen/oai/client.py +++ b/autogen/oai/client.py @@ -6,7 +6,6 @@ import uuid from typing import Any, Callable, Dict, List, Optional, Protocol, Tuple, Union -from flaml.automl.logger import logger_formatter from pydantic import BaseModel from autogen.cache import Cache @@ -16,6 +15,7 @@ from autogen.runtime_logging import log_chat_completion, log_new_client, log_new_wrapper, logging_enabled from autogen.token_count_utils import count_token +from .client_utils import logger_formatter from .rate_limiters import RateLimiter, TimeRateLimiter TOOL_ENABLED = False diff --git a/autogen/oai/client_utils.py b/autogen/oai/client_utils.py index 55730485b40c..5da4c49c2a9b 100644 --- a/autogen/oai/client_utils.py +++ b/autogen/oai/client_utils.py @@ -1,8 +1,13 @@ """Utilities for client classes""" +import logging import warnings from typing import Any, Dict, List, Optional, Tuple +logger_formatter = logging.Formatter( + "[%(name)s: %(asctime)s] {%(lineno)d} %(levelname)s - %(message)s", "%m-%d %H:%M:%S" +) + def validate_parameter( params: Dict[str, Any], diff --git a/autogen/oai/cohere.py b/autogen/oai/cohere.py index e9a89c9cabd8..cd30c74dd872 100644 --- a/autogen/oai/cohere.py +++ b/autogen/oai/cohere.py @@ -31,12 +31,11 @@ from cohere import Client as Cohere from cohere.types import ToolParameterDefinitionsValue, ToolResult -from flaml.automl.logger import logger_formatter from openai.types.chat import ChatCompletion, ChatCompletionMessageToolCall from openai.types.chat.chat_completion import ChatCompletionMessage, Choice from openai.types.completion_usage import CompletionUsage -from autogen.oai.client_utils import validate_parameter +from .client_utils import logger_formatter, validate_parameter logger = logging.getLogger(__name__) if not logger.handlers: diff --git a/autogen/oai/completion.py b/autogen/oai/completion.py index 5a62cde33df0..d4866d345145 100644 --- a/autogen/oai/completion.py +++ b/autogen/oai/completion.py @@ -8,9 +8,9 @@ import numpy as np from flaml import BlendSearch, tune -from flaml.automl.logger import logger_formatter from flaml.tune.space import is_constant +from .client_utils import logger_formatter from .openai_utils import get_key try: From 289e0f4751c7a1f51e4ae0ec6d1da3ec5961c7ae Mon Sep 17 00:00:00 2001 From: Jack Gerrits Date: Tue, 1 Oct 2024 20:05:11 -0400 Subject: [PATCH 10/35] Update version to 0.2.36, update package name (#3592) * Update version to 0.2.36, update package name * update publish * Formatting * Update README.md * update email --- .devcontainer/README.md | 2 +- .devcontainer/full/Dockerfile | 2 +- .github/workflows/python-package.yml | 3 - OAI_CONFIG_LIST_sample | 1 - README.md | 13 +- .../contrib/capabilities/text_compressors.py | 2 +- .../contrib/retrieve_user_proxy_agent.py | 4 +- autogen/version.py | 2 +- notebook/Async_human_input.ipynb | 176 +----------------- notebook/JSON_mode_example.ipynb | 4 +- notebook/agentchat_MathChat.ipynb | 2 +- notebook/agentchat_RetrieveChat.ipynb | 2 +- 
notebook/agentchat_RetrieveChat_mongodb.ipynb | 2 +- .../agentchat_RetrieveChat_pgvector.ipynb | 2 +- notebook/agentchat_RetrieveChat_qdrant.ipynb | 4 +- notebook/agentchat_agentops.ipynb | 2 +- ...at_auto_feedback_from_code_execution.ipynb | 2 +- notebook/agentchat_azr_ai_search.ipynb | 3 +- notebook/agentchat_cost_token_tracking.ipynb | 2 +- notebook/agentchat_custom_model.ipynb | 2 +- notebook/agentchat_dalle_and_gpt4v.ipynb | 2 +- notebook/agentchat_databricks_dbrx.ipynb | 8 +- notebook/agentchat_function_call.ipynb | 6 +- notebook/agentchat_function_call_async.ipynb | 4 +- ...agentchat_function_call_code_writing.ipynb | 2 +- ...at_function_call_currency_calculator.ipynb | 6 +- ...gentchat_function_call_with_composio.ipynb | 2 +- ...at_group_chat_with_llamaindex_agents.ipynb | 2 +- notebook/agentchat_groupchat.ipynb | 11 +- notebook/agentchat_groupchat_RAG.ipynb | 2 +- notebook/agentchat_groupchat_customized.ipynb | 4 +- ...tchat_groupchat_finite_state_machine.ipynb | 6 +- notebook/agentchat_groupchat_research.ipynb | 10 +- notebook/agentchat_groupchat_stateflow.ipynb | 4 +- notebook/agentchat_groupchat_vis.ipynb | 10 +- notebook/agentchat_human_feedback.ipynb | 4 +- ...gentchat_image_generation_capability.ipynb | 2 +- notebook/agentchat_langchain.ipynb | 6 +- notebook/agentchat_lmm_gpt-4v.ipynb | 4 +- notebook/agentchat_lmm_llava.ipynb | 2 +- notebook/agentchat_memory_using_mem0.ipynb | 4 +- notebook/agentchat_microsoft_fabric.ipynb | 6 +- .../agentchat_multi_task_async_chats.ipynb | 4 +- notebook/agentchat_multi_task_chats.ipynb | 4 +- notebook/agentchat_nested_chats_chess.ipynb | 4 +- ...entchat_nested_chats_chess_altmodels.ipynb | 4 +- .../agentchat_nested_sequential_chats.ipynb | 4 +- notebook/agentchat_nestedchat.ipynb | 4 +- notebook/agentchat_nestedchat_optiguide.ipynb | 2 +- ...gentchat_oai_assistant_function_call.ipynb | 6 +- .../agentchat_oai_assistant_groupchat.ipynb | 4 +- notebook/agentchat_oai_code_interpreter.ipynb | 4 +- notebook/agentchat_planning.ipynb | 6 +- notebook/agentchat_society_of_mind.ipynb | 11 +- notebook/agentchat_stream.ipynb | 4 +- notebook/agentchat_surfer.ipynb | 4 +- notebook/agentchat_teachability.ipynb | 102 +++++----- .../agentchat_teachable_oai_assistants.ipynb | 4 +- notebook/agentchat_teaching.ipynb | 4 +- notebook/agentchat_transform_messages.ipynb | 4 +- notebook/agentchat_two_users.ipynb | 4 +- ...eo_transcript_translate_with_whisper.ipynb | 2 +- notebook/agentchat_web_info.ipynb | 6 +- .../agentchat_webscraping_with_apify.ipynb | 2 +- notebook/agentchat_websockets.ipynb | 2 +- notebook/agentchats_sequential_chats.ipynb | 4 +- notebook/agenteval_cq_math.ipynb | 71 +------ notebook/autobuild_agent_library.ipynb | 4 +- notebook/autobuild_basic.ipynb | 4 +- notebook/autogen_uniformed_api_calling.ipynb | 2 +- notebook/contributing.md | 6 +- .../gpt_assistant_agent_function_call.ipynb | 46 +---- notebook/oai_chatgpt_gpt4.ipynb | 2 +- notebook/oai_completion.ipynb | 2 +- samples/apps/auto-anny/requirements.txt | 2 +- .../apps/promptflow-autogen/requirements.txt | 4 +- samples/tools/autogenbench/README.md | 2 +- .../autogenbench/autogenbench/res/Dockerfile | 4 +- .../autogenbench/template/testbed_utils.py | 2 +- samples/tools/autogenbench/pyproject.toml | 2 +- setup.py | 4 +- .../blog/2023-10-18-RetrieveChat/index.mdx | 4 +- .../blog/2023-10-26-TeachableAgent/index.mdx | 4 +- website/blog/2023-11-06-LMM-Agent/index.mdx | 2 +- .../blog/2023-11-13-OAI-assistants/index.mdx | 2 +- .../blog/2023-11-26-Agent-AutoBuild/index.mdx | 2 +- 
.../blog/2024-03-03-AutoGen-Update/index.mdx | 2 +- .../2024-06-24-AltModels-Classes/index.mdx | 8 +- website/docs/FAQ.mdx | 6 +- website/docs/Getting-Started.mdx | 2 +- website/docs/ecosystem/portkey.md | 2 +- website/docs/installation/Installation.mdx | 18 +- .../installation/Optional-Dependencies.md | 22 +-- .../code-execution/custom-executor.ipynb | 2 +- .../jupyter-code-executor.ipynb | 2 +- .../compressing_text_w_llmligua.md | 4 +- .../intro_to_transform_messages.md | 4 +- .../non-openai-models/cloud-anthropic.ipynb | 4 +- .../non-openai-models/cloud-bedrock.ipynb | 4 +- .../non-openai-models/cloud-cerebras.ipynb | 4 +- .../non-openai-models/cloud-cohere.ipynb | 4 +- .../non-openai-models/cloud-gemini.ipynb | 4 +- .../cloud-gemini_vertexai.ipynb | 2 +- .../non-openai-models/cloud-mistralai.ipynb | 4 +- .../non-openai-models/cloud-togetherai.ipynb | 4 +- .../prompting-and-reasoning/react.ipynb | 2 +- .../prompting-and-reasoning/reflection.ipynb | 4 +- website/docs/topics/task_decomposition.ipynb | 4 +- website/docs/tutorial/introduction.ipynb | 2 +- 109 files changed, 275 insertions(+), 548 deletions(-) diff --git a/.devcontainer/README.md b/.devcontainer/README.md index 8ae045f27d19..c56d9d56cc00 100644 --- a/.devcontainer/README.md +++ b/.devcontainer/README.md @@ -49,7 +49,7 @@ Feel free to modify these Dockerfiles for your specific project needs. Here are - **Setting Environment Variables**: Add environment variables using the `ENV` command for any application-specific configurations. We have prestaged the line needed to inject your OpenAI_key into the docker environment as a environmental variable. Others can be staged in the same way. Just uncomment the line. `# ENV OPENAI_API_KEY="{OpenAI-API-Key}"` to `ENV OPENAI_API_KEY="{OpenAI-API-Key}"` - **Need a less "Advanced" Autogen build**: If the `./full/Dockerfile` is to much but you need more than advanced then update this line in the Dockerfile file. -`RUN pip install pyautogen[teachable,lmm,retrievechat,mathchat,blendsearch] autogenra` to install just what you need. `RUN pip install pyautogen[retrievechat,blendsearch] autogenra` +`RUN pip install autogen-agentchat[teachable,lmm,retrievechat,mathchat,blendsearch]~=0.2 autogenra` to install just what you need. `RUN pip install autogen-agentchat[retrievechat,blendsearch]~=0.2 autogenra` - **Can't Dev without your favorite CLI tool**: if you need particular OS tools to be installed in your Docker container you can add those packages here right after the sudo for the `./base/Dockerfile` and `./full/Dockerfile` files. In the example below we are installing net-tools and vim to the environment. 
```code diff --git a/.devcontainer/full/Dockerfile b/.devcontainer/full/Dockerfile index 0787ad240272..525dd3978d4a 100644 --- a/.devcontainer/full/Dockerfile +++ b/.devcontainer/full/Dockerfile @@ -22,7 +22,7 @@ WORKDIR /home/autogen # Install Python packages RUN pip install --upgrade pip -RUN pip install pyautogen[teachable,lmm,retrievechat,mathchat,blendsearch] autogenra +RUN pip install autogen-agentchat[teachable,lmm,retrievechat,mathchat,blendsearch]~=0.2 autogenra RUN pip install numpy pandas matplotlib seaborn scikit-learn requests urllib3 nltk pillow pytest beautifulsoup4 # Expose port diff --git a/.github/workflows/python-package.yml b/.github/workflows/python-package.yml index f2967c13f5f0..94edf117de20 100644 --- a/.github/workflows/python-package.yml +++ b/.github/workflows/python-package.yml @@ -49,8 +49,5 @@ jobs: pip install twine python setup.py sdist bdist_wheel - name: Publish to PyPI - env: - TWINE_USERNAME: ${{ secrets.PYPI_USERNAME }} - TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }} shell: pwsh run: twine upload dist/* diff --git a/OAI_CONFIG_LIST_sample b/OAI_CONFIG_LIST_sample index c1711acd7c65..7cb370fd5154 100644 --- a/OAI_CONFIG_LIST_sample +++ b/OAI_CONFIG_LIST_sample @@ -1,5 +1,4 @@ // Please modify the content, remove these four lines of comment and rename this file to OAI_CONFIG_LIST to run the sample code. -// If using pyautogen v0.1.x with Azure OpenAI, please replace "base_url" with "api_base" (line 14 and line 21 below). Use "pip list" to check version of pyautogen installed. // // NOTE: This configuration lists GPT-4 as the default model, as this represents our current recommendation, and is known to work well with AutoGen. If you use a model other than GPT-4, you may need to revise various system prompts (especially if using weaker models like GPT-3.5-turbo). Moreover, if you use models other than those hosted by OpenAI or Azure, you may incur additional risks related to alignment and safety. Proceed with caution if updating this default. [ diff --git a/README.md b/README.md index 8595bb60506c..bc6242ba5999 100644 --- a/README.md +++ b/README.md @@ -5,12 +5,9 @@ AutoGen Logo -![Python Version](https://img.shields.io/badge/3.8%20%7C%203.9%20%7C%203.10%20%7C%203.11%20%7C%203.12-blue) [![PyPI version](https://img.shields.io/badge/PyPI-v0.2.34-blue.svg)](https://pypi.org/project/pyautogen/) +![Python Version](https://img.shields.io/badge/3.8%20%7C%203.9%20%7C%203.10%20%7C%203.11%20%7C%203.12-blue) [![PyPI - Version](https://img.shields.io/pypi/v/autogen-agentchat)](https://pypi.org/project/autogen-agentchat/) [![NuGet version](https://badge.fury.io/nu/AutoGen.Core.svg)](https://badge.fury.io/nu/AutoGen.Core) -[![Downloads](https://static.pepy.tech/badge/pyautogen/week)](https://pepy.tech/project/pyautogen) -[![Discord](https://img.shields.io/discord/1153072414184452236?logo=discord&style=flat)](https://aka.ms/autogen-dc) - [![Twitter](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Follow%20%40pyautogen)](https://twitter.com/pyautogen)
@@ -20,6 +17,10 @@ AutoGen is an open-source programming framework for building AI agents and facilitating cooperation among multiple agents to solve tasks. AutoGen aims to streamline the development and research of agentic AI, much like PyTorch does for Deep Learning. It offers features such as agents capable of interacting with each other, facilitates the use of various large language models (LLMs) and tool use support, autonomous and human-in-the-loop workflows, and multi-agent conversation patterns. > [!IMPORTANT] +> In order to better align with a new multi-packaging structure we have coming very soon, AutoGen is now available on PyPi as [`autogen-agentchat`](https://pypi.org/project/autogen-agentchat/) as of version `0.2.36`. This is the official package for the AutoGen project. + + +> [!NOTE] > *Note for contributors and users*: [microsoft/autogen](https://aka.ms/autogen-gh) is the official repository of AutoGen project and it is under active development and maintenance under MIT license. We welcome contributions from developers and organizations worldwide. Our goal is to foster a collaborative and inclusive community where diverse perspectives and expertise can drive innovation and enhance the project's capabilities. We acknowledge the invaluable contributions from our existing contributors, as listed in [contributors.md](./CONTRIBUTORS.md). Whether you are an individual contributor or represent an organization, we invite you to join us in shaping the future of this project. For further information please also see [Microsoft open-source contributing guidelines](https://github.com/microsoft/autogen?tab=readme-ov-file#contributing). > > -_Maintainers (Sept 6th, 2024)_ @@ -135,14 +136,14 @@ Find detailed instructions for users [here](https://microsoft.github.io/autogen/ AutoGen requires **Python version >= 3.8, < 3.13**. It can be installed from pip: ```bash -pip install pyautogen +pip install autogen-agentchat~=0.2 ``` Minimal dependencies are installed without extra options. You can install extra options based on the feature you need. Find more options in [Installation](https://microsoft.github.io/autogen/docs/Installation#option-2-install-autogen-locally-using-virtual-environment). diff --git a/autogen/agentchat/contrib/capabilities/text_compressors.py b/autogen/agentchat/contrib/capabilities/text_compressors.py index 78554bdc9357..fd203c35fcab 100644 --- a/autogen/agentchat/contrib/capabilities/text_compressors.py +++ b/autogen/agentchat/contrib/capabilities/text_compressors.py @@ -5,7 +5,7 @@ import llmlingua except ImportError: IMPORT_ERROR = ImportError( - "LLMLingua is not installed. Please install it with `pip install pyautogen[long-context]`" + "LLMLingua is not installed. Please install it with `pip install autogen-agentchat[long-context]~=0.2`" ) PromptCompressor = object else: diff --git a/autogen/agentchat/contrib/retrieve_user_proxy_agent.py b/autogen/agentchat/contrib/retrieve_user_proxy_agent.py index b247d7a158f6..ee8f74bb9a6c 100644 --- a/autogen/agentchat/contrib/retrieve_user_proxy_agent.py +++ b/autogen/agentchat/contrib/retrieve_user_proxy_agent.py @@ -9,7 +9,9 @@ try: import chromadb except ImportError as e: - raise ImportError(f"{e}. You can try `pip install pyautogen[retrievechat]`, or install `chromadb` manually.") + raise ImportError( + f"{e}. You can try `pip install autogen-agentchat[retrievechat]~=0.2`, or install `chromadb` manually." 
+ ) from autogen.agentchat import UserProxyAgent from autogen.agentchat.agent import Agent from autogen.agentchat.contrib.vectordb.base import Document, QueryResults, VectorDB, VectorDBFactory diff --git a/autogen/version.py b/autogen/version.py index 9b1b78b4b3a0..c971add65283 100644 --- a/autogen/version.py +++ b/autogen/version.py @@ -1 +1 @@ -__version__ = "0.2.35" +__version__ = "0.2.36" diff --git a/notebook/Async_human_input.ipynb b/notebook/Async_human_input.ipynb index 07459b4a86ba..5d4926bf13ca 100644 --- a/notebook/Async_human_input.ipynb +++ b/notebook/Async_human_input.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "code", - "execution_count": 1, + "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" @@ -10,179 +10,9 @@ "id": "tLIs1YRdr8jM", "outputId": "909c1c70-1a22-4e9d-b7f4-a40e2d737fb0" }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Defaulting to user installation because normal site-packages is not writeable\n", - "Requirement already satisfied: pyautogen>=0.2.3 in /home/vscode/.local/lib/python3.10/site-packages (0.2.3)\n", - "Requirement already satisfied: openai>=1.3 in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (1.6.1)\n", - "Requirement already satisfied: diskcache in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (5.6.3)\n", - "Requirement already satisfied: termcolor in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (2.4.0)\n", - "Requirement already satisfied: flaml in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (2.1.1)\n", - "Requirement already satisfied: python-dotenv in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (1.0.0)\n", - "Requirement already satisfied: tiktoken in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (0.5.2)\n", - "Requirement already satisfied: pydantic<3,>=1.10 in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (1.10.9)\n", - "Requirement already satisfied: anyio<5,>=3.5.0 in /home/vscode/.local/lib/python3.10/site-packages (from openai>=1.3->pyautogen>=0.2.3) (4.2.0)\n", - "Requirement already satisfied: distro<2,>=1.7.0 in /home/vscode/.local/lib/python3.10/site-packages (from openai>=1.3->pyautogen>=0.2.3) (1.9.0)\n", - "Requirement already satisfied: httpx<1,>=0.23.0 in /home/vscode/.local/lib/python3.10/site-packages (from openai>=1.3->pyautogen>=0.2.3) (0.26.0)\n", - "Requirement already satisfied: sniffio in /home/vscode/.local/lib/python3.10/site-packages (from openai>=1.3->pyautogen>=0.2.3) (1.3.0)\n", - "Requirement already satisfied: tqdm>4 in /home/vscode/.local/lib/python3.10/site-packages (from openai>=1.3->pyautogen>=0.2.3) (4.66.1)\n", - "Requirement already satisfied: typing-extensions<5,>=4.7 in /home/vscode/.local/lib/python3.10/site-packages (from openai>=1.3->pyautogen>=0.2.3) (4.9.0)\n", - "Requirement already satisfied: NumPy>=1.17.0rc1 in /home/vscode/.local/lib/python3.10/site-packages (from flaml->pyautogen>=0.2.3) (1.26.3)\n", - "Requirement already satisfied: regex>=2022.1.18 in /home/vscode/.local/lib/python3.10/site-packages (from tiktoken->pyautogen>=0.2.3) (2023.12.25)\n", - "Requirement already satisfied: requests>=2.26.0 in /usr/local/lib/python3.10/site-packages (from tiktoken->pyautogen>=0.2.3) (2.31.0)\n", - "Requirement already satisfied: idna>=2.8 in /usr/local/lib/python3.10/site-packages (from anyio<5,>=3.5.0->openai>=1.3->pyautogen>=0.2.3) 
(3.6)\n", - "Requirement already satisfied: exceptiongroup>=1.0.2 in /home/vscode/.local/lib/python3.10/site-packages (from anyio<5,>=3.5.0->openai>=1.3->pyautogen>=0.2.3) (1.2.0)\n", - "Requirement already satisfied: certifi in /usr/local/lib/python3.10/site-packages (from httpx<1,>=0.23.0->openai>=1.3->pyautogen>=0.2.3) (2023.11.17)\n", - "Requirement already satisfied: httpcore==1.* in /home/vscode/.local/lib/python3.10/site-packages (from httpx<1,>=0.23.0->openai>=1.3->pyautogen>=0.2.3) (1.0.2)\n", - "Requirement already satisfied: h11<0.15,>=0.13 in /home/vscode/.local/lib/python3.10/site-packages (from httpcore==1.*->httpx<1,>=0.23.0->openai>=1.3->pyautogen>=0.2.3) (0.14.0)\n", - "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/site-packages (from requests>=2.26.0->tiktoken->pyautogen>=0.2.3) (3.3.2)\n", - "Requirement already satisfied: urllib3<3,>=1.21.1 in /home/vscode/.local/lib/python3.10/site-packages (from requests>=2.26.0->tiktoken->pyautogen>=0.2.3) (1.26.18)\n", - "Defaulting to user installation because normal site-packages is not writeable\n", - "Requirement already satisfied: chromadb in /home/vscode/.local/lib/python3.10/site-packages (0.4.22)\n", - "Requirement already satisfied: build>=1.0.3 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (1.0.3)\n", - "Requirement already satisfied: requests>=2.28 in /usr/local/lib/python3.10/site-packages (from chromadb) (2.31.0)\n", - "Requirement already satisfied: pydantic>=1.9 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (1.10.9)\n", - "Requirement already satisfied: chroma-hnswlib==0.7.3 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (0.7.3)\n", - "Requirement already satisfied: fastapi>=0.95.2 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (0.108.0)\n", - "Requirement already satisfied: uvicorn>=0.18.3 in /home/vscode/.local/lib/python3.10/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (0.25.0)\n", - "Requirement already satisfied: numpy>=1.22.5 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (1.26.3)\n", - "Requirement already satisfied: posthog>=2.4.0 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (3.1.0)\n", - "Requirement already satisfied: typing-extensions>=4.5.0 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (4.9.0)\n", - "Requirement already satisfied: pulsar-client>=3.1.0 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (3.4.0)\n", - "Requirement already satisfied: onnxruntime>=1.14.1 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (1.16.3)\n", - "Requirement already satisfied: opentelemetry-api>=1.2.0 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (1.22.0)\n", - "Requirement already satisfied: opentelemetry-exporter-otlp-proto-grpc>=1.2.0 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (1.22.0)\n", - "Requirement already satisfied: opentelemetry-instrumentation-fastapi>=0.41b0 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (0.43b0)\n", - "Requirement already satisfied: opentelemetry-sdk>=1.2.0 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (1.22.0)\n", - "Requirement already satisfied: tokenizers>=0.13.2 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (0.15.0)\n", - "Requirement already satisfied: pypika>=0.48.9 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) 
(0.48.9)\n", - "Requirement already satisfied: tqdm>=4.65.0 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (4.66.1)\n", - "Requirement already satisfied: overrides>=7.3.1 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (7.4.0)\n", - "Requirement already satisfied: importlib-resources in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (6.1.1)\n", - "Requirement already satisfied: grpcio>=1.58.0 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (1.60.0)\n", - "Requirement already satisfied: bcrypt>=4.0.1 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (4.1.2)\n", - "Requirement already satisfied: typer>=0.9.0 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (0.9.0)\n", - "Requirement already satisfied: kubernetes>=28.1.0 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (28.1.0)\n", - "Requirement already satisfied: tenacity>=8.2.3 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (8.2.3)\n", - "Requirement already satisfied: PyYAML>=6.0.0 in /usr/local/lib/python3.10/site-packages (from chromadb) (6.0.1)\n", - "Requirement already satisfied: mmh3>=4.0.1 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (4.0.1)\n", - "Requirement already satisfied: packaging>=19.0 in /usr/local/lib/python3.10/site-packages (from build>=1.0.3->chromadb) (23.2)\n", - "Requirement already satisfied: pyproject_hooks in /home/vscode/.local/lib/python3.10/site-packages (from build>=1.0.3->chromadb) (1.0.0)\n", - "Requirement already satisfied: tomli>=1.1.0 in /usr/local/lib/python3.10/site-packages (from build>=1.0.3->chromadb) (2.0.1)\n", - "Requirement already satisfied: starlette<0.33.0,>=0.29.0 in /home/vscode/.local/lib/python3.10/site-packages (from fastapi>=0.95.2->chromadb) (0.32.0.post1)\n", - "Requirement already satisfied: certifi>=14.05.14 in /usr/local/lib/python3.10/site-packages (from kubernetes>=28.1.0->chromadb) (2023.11.17)\n", - "Requirement already satisfied: six>=1.9.0 in /home/vscode/.local/lib/python3.10/site-packages (from kubernetes>=28.1.0->chromadb) (1.16.0)\n", - "Requirement already satisfied: python-dateutil>=2.5.3 in /home/vscode/.local/lib/python3.10/site-packages (from kubernetes>=28.1.0->chromadb) (2.8.2)\n", - "Requirement already satisfied: google-auth>=1.0.1 in /home/vscode/.local/lib/python3.10/site-packages (from kubernetes>=28.1.0->chromadb) (2.26.1)\n", - "Requirement already satisfied: websocket-client!=0.40.0,!=0.41.*,!=0.42.*,>=0.32.0 in /home/vscode/.local/lib/python3.10/site-packages (from kubernetes>=28.1.0->chromadb) (1.7.0)\n", - "Requirement already satisfied: requests-oauthlib in /home/vscode/.local/lib/python3.10/site-packages (from kubernetes>=28.1.0->chromadb) (1.3.1)\n", - "Requirement already satisfied: oauthlib>=3.2.2 in /home/vscode/.local/lib/python3.10/site-packages (from kubernetes>=28.1.0->chromadb) (3.2.2)\n", - "Requirement already satisfied: urllib3<2.0,>=1.24.2 in /home/vscode/.local/lib/python3.10/site-packages (from kubernetes>=28.1.0->chromadb) (1.26.18)\n", - "Requirement already satisfied: coloredlogs in /home/vscode/.local/lib/python3.10/site-packages (from onnxruntime>=1.14.1->chromadb) (15.0.1)\n", - "Requirement already satisfied: flatbuffers in /home/vscode/.local/lib/python3.10/site-packages (from onnxruntime>=1.14.1->chromadb) (23.5.26)\n", - "Requirement already satisfied: protobuf in /home/vscode/.local/lib/python3.10/site-packages (from onnxruntime>=1.14.1->chromadb) 
(4.25.1)\n", - "Requirement already satisfied: sympy in /home/vscode/.local/lib/python3.10/site-packages (from onnxruntime>=1.14.1->chromadb) (1.12)\n", - "Requirement already satisfied: deprecated>=1.2.6 in /usr/local/lib/python3.10/site-packages (from opentelemetry-api>=1.2.0->chromadb) (1.2.14)\n", - "Requirement already satisfied: importlib-metadata<7.0,>=6.0 in /home/vscode/.local/lib/python3.10/site-packages (from opentelemetry-api>=1.2.0->chromadb) (6.11.0)\n", - "Requirement already satisfied: backoff<3.0.0,>=1.10.0 in /home/vscode/.local/lib/python3.10/site-packages (from opentelemetry-exporter-otlp-proto-grpc>=1.2.0->chromadb) (2.2.1)\n", - "Requirement already satisfied: googleapis-common-protos~=1.52 in /home/vscode/.local/lib/python3.10/site-packages (from opentelemetry-exporter-otlp-proto-grpc>=1.2.0->chromadb) (1.62.0)\n", - "Requirement already satisfied: opentelemetry-exporter-otlp-proto-common==1.22.0 in /home/vscode/.local/lib/python3.10/site-packages (from opentelemetry-exporter-otlp-proto-grpc>=1.2.0->chromadb) (1.22.0)\n", - "Requirement already satisfied: opentelemetry-proto==1.22.0 in /home/vscode/.local/lib/python3.10/site-packages (from opentelemetry-exporter-otlp-proto-grpc>=1.2.0->chromadb) (1.22.0)\n", - "Requirement already satisfied: opentelemetry-instrumentation-asgi==0.43b0 in /home/vscode/.local/lib/python3.10/site-packages (from opentelemetry-instrumentation-fastapi>=0.41b0->chromadb) (0.43b0)\n", - "Requirement already satisfied: opentelemetry-instrumentation==0.43b0 in /home/vscode/.local/lib/python3.10/site-packages (from opentelemetry-instrumentation-fastapi>=0.41b0->chromadb) (0.43b0)\n", - "Requirement already satisfied: opentelemetry-semantic-conventions==0.43b0 in /home/vscode/.local/lib/python3.10/site-packages (from opentelemetry-instrumentation-fastapi>=0.41b0->chromadb) (0.43b0)\n", - "Requirement already satisfied: opentelemetry-util-http==0.43b0 in /home/vscode/.local/lib/python3.10/site-packages (from opentelemetry-instrumentation-fastapi>=0.41b0->chromadb) (0.43b0)\n", - "Requirement already satisfied: setuptools>=16.0 in /usr/local/lib/python3.10/site-packages (from opentelemetry-instrumentation==0.43b0->opentelemetry-instrumentation-fastapi>=0.41b0->chromadb) (69.0.2)\n", - "Requirement already satisfied: wrapt<2.0.0,>=1.0.0 in /usr/local/lib/python3.10/site-packages (from opentelemetry-instrumentation==0.43b0->opentelemetry-instrumentation-fastapi>=0.41b0->chromadb) (1.16.0)\n", - "Requirement already satisfied: asgiref~=3.0 in /home/vscode/.local/lib/python3.10/site-packages (from opentelemetry-instrumentation-asgi==0.43b0->opentelemetry-instrumentation-fastapi>=0.41b0->chromadb) (3.7.2)\n", - "Requirement already satisfied: monotonic>=1.5 in /home/vscode/.local/lib/python3.10/site-packages (from posthog>=2.4.0->chromadb) (1.6)\n", - "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/site-packages (from requests>=2.28->chromadb) (3.3.2)\n", - "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/site-packages (from requests>=2.28->chromadb) (3.6)\n", - "Requirement already satisfied: huggingface_hub<1.0,>=0.16.4 in /home/vscode/.local/lib/python3.10/site-packages (from tokenizers>=0.13.2->chromadb) (0.20.2)\n", - "Requirement already satisfied: click<9.0.0,>=7.1.1 in /usr/local/lib/python3.10/site-packages (from typer>=0.9.0->chromadb) (8.1.7)\n", - "Requirement already satisfied: h11>=0.8 in /home/vscode/.local/lib/python3.10/site-packages (from 
uvicorn>=0.18.3->uvicorn[standard]>=0.18.3->chromadb) (0.14.0)\n", - "Requirement already satisfied: httptools>=0.5.0 in /home/vscode/.local/lib/python3.10/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (0.6.1)\n", - "Requirement already satisfied: python-dotenv>=0.13 in /home/vscode/.local/lib/python3.10/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (1.0.0)\n", - "Requirement already satisfied: uvloop!=0.15.0,!=0.15.1,>=0.14.0 in /home/vscode/.local/lib/python3.10/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (0.19.0)\n", - "Requirement already satisfied: watchfiles>=0.13 in /home/vscode/.local/lib/python3.10/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (0.21.0)\n", - "Requirement already satisfied: websockets>=10.4 in /home/vscode/.local/lib/python3.10/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (12.0)\n", - "Requirement already satisfied: cachetools<6.0,>=2.0.0 in /home/vscode/.local/lib/python3.10/site-packages (from google-auth>=1.0.1->kubernetes>=28.1.0->chromadb) (5.3.2)\n", - "Requirement already satisfied: pyasn1-modules>=0.2.1 in /home/vscode/.local/lib/python3.10/site-packages (from google-auth>=1.0.1->kubernetes>=28.1.0->chromadb) (0.3.0)\n", - "Requirement already satisfied: rsa<5,>=3.1.4 in /home/vscode/.local/lib/python3.10/site-packages (from google-auth>=1.0.1->kubernetes>=28.1.0->chromadb) (4.9)\n", - "Requirement already satisfied: filelock in /home/vscode/.local/lib/python3.10/site-packages (from huggingface_hub<1.0,>=0.16.4->tokenizers>=0.13.2->chromadb) (3.13.1)\n", - "Requirement already satisfied: fsspec>=2023.5.0 in /home/vscode/.local/lib/python3.10/site-packages (from huggingface_hub<1.0,>=0.16.4->tokenizers>=0.13.2->chromadb) (2023.12.2)\n", - "Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.10/site-packages (from importlib-metadata<7.0,>=6.0->opentelemetry-api>=1.2.0->chromadb) (3.17.0)\n", - "Requirement already satisfied: anyio<5,>=3.4.0 in /home/vscode/.local/lib/python3.10/site-packages (from starlette<0.33.0,>=0.29.0->fastapi>=0.95.2->chromadb) (4.2.0)\n", - "Requirement already satisfied: humanfriendly>=9.1 in /home/vscode/.local/lib/python3.10/site-packages (from coloredlogs->onnxruntime>=1.14.1->chromadb) (10.0)\n", - "Requirement already satisfied: mpmath>=0.19 in /home/vscode/.local/lib/python3.10/site-packages (from sympy->onnxruntime>=1.14.1->chromadb) (1.3.0)\n", - "Requirement already satisfied: sniffio>=1.1 in /home/vscode/.local/lib/python3.10/site-packages (from anyio<5,>=3.4.0->starlette<0.33.0,>=0.29.0->fastapi>=0.95.2->chromadb) (1.3.0)\n", - "Requirement already satisfied: exceptiongroup>=1.0.2 in /home/vscode/.local/lib/python3.10/site-packages (from anyio<5,>=3.4.0->starlette<0.33.0,>=0.29.0->fastapi>=0.95.2->chromadb) (1.2.0)\n", - "Requirement already satisfied: pyasn1<0.6.0,>=0.4.6 in /home/vscode/.local/lib/python3.10/site-packages (from pyasn1-modules>=0.2.1->google-auth>=1.0.1->kubernetes>=28.1.0->chromadb) (0.5.1)\n", - "Defaulting to user installation because normal site-packages is not writeable\n", - "Requirement already satisfied: sentence_transformers in /home/vscode/.local/lib/python3.10/site-packages (2.2.2)\n", - "Requirement already satisfied: transformers<5.0.0,>=4.6.0 in /home/vscode/.local/lib/python3.10/site-packages (from sentence_transformers) (4.36.2)\n", - "Requirement already satisfied: tqdm in /home/vscode/.local/lib/python3.10/site-packages (from sentence_transformers) (4.66.1)\n", - "Requirement already satisfied: torch>=1.6.0 in 
/home/vscode/.local/lib/python3.10/site-packages (from sentence_transformers) (2.1.2)\n", - "Requirement already satisfied: torchvision in /home/vscode/.local/lib/python3.10/site-packages (from sentence_transformers) (0.16.2)\n", - "Requirement already satisfied: numpy in /home/vscode/.local/lib/python3.10/site-packages (from sentence_transformers) (1.26.3)\n", - "Requirement already satisfied: scikit-learn in /home/vscode/.local/lib/python3.10/site-packages (from sentence_transformers) (1.3.2)\n", - "Requirement already satisfied: scipy in /home/vscode/.local/lib/python3.10/site-packages (from sentence_transformers) (1.11.4)\n", - "Requirement already satisfied: nltk in /home/vscode/.local/lib/python3.10/site-packages (from sentence_transformers) (3.8.1)\n", - "Requirement already satisfied: sentencepiece in /home/vscode/.local/lib/python3.10/site-packages (from sentence_transformers) (0.1.99)\n", - "Requirement already satisfied: huggingface-hub>=0.4.0 in /home/vscode/.local/lib/python3.10/site-packages (from sentence_transformers) (0.20.2)\n", - "Requirement already satisfied: filelock in /home/vscode/.local/lib/python3.10/site-packages (from huggingface-hub>=0.4.0->sentence_transformers) (3.13.1)\n", - "Requirement already satisfied: fsspec>=2023.5.0 in /home/vscode/.local/lib/python3.10/site-packages (from huggingface-hub>=0.4.0->sentence_transformers) (2023.12.2)\n", - "Requirement already satisfied: requests in /usr/local/lib/python3.10/site-packages (from huggingface-hub>=0.4.0->sentence_transformers) (2.31.0)\n", - "Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.10/site-packages (from huggingface-hub>=0.4.0->sentence_transformers) (6.0.1)\n", - "Requirement already satisfied: typing-extensions>=3.7.4.3 in /home/vscode/.local/lib/python3.10/site-packages (from huggingface-hub>=0.4.0->sentence_transformers) (4.9.0)\n", - "Requirement already satisfied: packaging>=20.9 in /usr/local/lib/python3.10/site-packages (from huggingface-hub>=0.4.0->sentence_transformers) (23.2)\n", - "Requirement already satisfied: sympy in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (1.12)\n", - "Requirement already satisfied: networkx in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (3.2.1)\n", - "Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (3.1.2)\n", - "Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.1.105 in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (12.1.105)\n", - "Requirement already satisfied: nvidia-cuda-runtime-cu12==12.1.105 in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (12.1.105)\n", - "Requirement already satisfied: nvidia-cuda-cupti-cu12==12.1.105 in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (12.1.105)\n", - "Requirement already satisfied: nvidia-cudnn-cu12==8.9.2.26 in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (8.9.2.26)\n", - "Requirement already satisfied: nvidia-cublas-cu12==12.1.3.1 in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (12.1.3.1)\n", - "Requirement already satisfied: nvidia-cufft-cu12==11.0.2.54 in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (11.0.2.54)\n", - "Requirement already 
satisfied: nvidia-curand-cu12==10.3.2.106 in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (10.3.2.106)\n", - "Requirement already satisfied: nvidia-cusolver-cu12==11.4.5.107 in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (11.4.5.107)\n", - "Requirement already satisfied: nvidia-cusparse-cu12==12.1.0.106 in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (12.1.0.106)\n", - "Requirement already satisfied: nvidia-nccl-cu12==2.18.1 in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (2.18.1)\n", - "Requirement already satisfied: nvidia-nvtx-cu12==12.1.105 in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (12.1.105)\n", - "Requirement already satisfied: triton==2.1.0 in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (2.1.0)\n", - "Requirement already satisfied: nvidia-nvjitlink-cu12 in /home/vscode/.local/lib/python3.10/site-packages (from nvidia-cusolver-cu12==11.4.5.107->torch>=1.6.0->sentence_transformers) (12.3.101)\n", - "Requirement already satisfied: regex!=2019.12.17 in /home/vscode/.local/lib/python3.10/site-packages (from transformers<5.0.0,>=4.6.0->sentence_transformers) (2023.12.25)\n", - "Requirement already satisfied: tokenizers<0.19,>=0.14 in /home/vscode/.local/lib/python3.10/site-packages (from transformers<5.0.0,>=4.6.0->sentence_transformers) (0.15.0)\n", - "Requirement already satisfied: safetensors>=0.3.1 in /home/vscode/.local/lib/python3.10/site-packages (from transformers<5.0.0,>=4.6.0->sentence_transformers) (0.4.1)\n", - "Requirement already satisfied: click in /usr/local/lib/python3.10/site-packages (from nltk->sentence_transformers) (8.1.7)\n", - "Requirement already satisfied: joblib in /home/vscode/.local/lib/python3.10/site-packages (from nltk->sentence_transformers) (1.3.2)\n", - "Requirement already satisfied: threadpoolctl>=2.0.0 in /home/vscode/.local/lib/python3.10/site-packages (from scikit-learn->sentence_transformers) (3.2.0)\n", - "Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /home/vscode/.local/lib/python3.10/site-packages (from torchvision->sentence_transformers) (10.2.0)\n", - "Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/site-packages (from jinja2->torch>=1.6.0->sentence_transformers) (2.1.3)\n", - "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/site-packages (from requests->huggingface-hub>=0.4.0->sentence_transformers) (3.3.2)\n", - "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/site-packages (from requests->huggingface-hub>=0.4.0->sentence_transformers) (3.6)\n", - "Requirement already satisfied: urllib3<3,>=1.21.1 in /home/vscode/.local/lib/python3.10/site-packages (from requests->huggingface-hub>=0.4.0->sentence_transformers) (1.26.18)\n", - "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/site-packages (from requests->huggingface-hub>=0.4.0->sentence_transformers) (2023.11.17)\n", - "Requirement already satisfied: mpmath>=0.19 in /home/vscode/.local/lib/python3.10/site-packages (from sympy->torch>=1.6.0->sentence_transformers) (1.3.0)\n", - "Defaulting to user installation because normal site-packages is not writeable\n", - "Requirement already satisfied: tiktoken in /home/vscode/.local/lib/python3.10/site-packages (0.5.2)\n", - "Requirement 
already satisfied: regex>=2022.1.18 in /home/vscode/.local/lib/python3.10/site-packages (from tiktoken) (2023.12.25)\n", - "Requirement already satisfied: requests>=2.26.0 in /usr/local/lib/python3.10/site-packages (from tiktoken) (2.31.0)\n", - "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/site-packages (from requests>=2.26.0->tiktoken) (3.3.2)\n", - "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/site-packages (from requests>=2.26.0->tiktoken) (3.6)\n", - "Requirement already satisfied: urllib3<3,>=1.21.1 in /home/vscode/.local/lib/python3.10/site-packages (from requests>=2.26.0->tiktoken) (1.26.18)\n", - "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/site-packages (from requests>=2.26.0->tiktoken) (2023.11.17)\n", - "Defaulting to user installation because normal site-packages is not writeable\n", - "Requirement already satisfied: pypdf in /home/vscode/.local/lib/python3.10/site-packages (3.17.4)\n" - ] - } - ], + "outputs": [], "source": [ - "!pip install \"pyautogen>=0.2.3\"\n", + "!pip install \"autogen-agentchat~=0.2\"\n", "!pip install chromadb\n", "!pip install sentence_transformers\n", "!pip install tiktoken\n", diff --git a/notebook/JSON_mode_example.ipynb b/notebook/JSON_mode_example.ipynb index c4b65c4d9f4d..eb09f51983ec 100644 --- a/notebook/JSON_mode_example.ipynb +++ b/notebook/JSON_mode_example.ipynb @@ -29,7 +29,7 @@ "JSON mode is a feature of OpenAI API, however strong models (such as Claude 3 Opus), can generate appropriate json as well.\n", "AutoGen requires `Python>=3.8`. To run this notebook example, please install:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```" ] }, @@ -40,7 +40,7 @@ "outputs": [], "source": [ "%%capture --no-stderr\n", - "# %pip install \"pyautogen>=0.2.3\"\n", + "# %pip install \"autogen-agentchat~=0.2.3\"\n", "\n", "# In Your OAI_CONFIG_LIST file, you must have two configs,\n", "# one with: \"response_format\": { \"type\": \"text\" }\n", diff --git a/notebook/agentchat_MathChat.ipynb b/notebook/agentchat_MathChat.ipynb index db7c6594d998..69c38031b2fe 100644 --- a/notebook/agentchat_MathChat.ipynb +++ b/notebook/agentchat_MathChat.ipynb @@ -24,7 +24,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install pyautogen[mathchat]\n", + "pip install autogen-agentchat[mathchat]~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_RetrieveChat.ipynb b/notebook/agentchat_RetrieveChat.ipynb index 0b829835a0a3..6ca2d1ac5123 100644 --- a/notebook/agentchat_RetrieveChat.ipynb +++ b/notebook/agentchat_RetrieveChat.ipynb @@ -28,7 +28,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install pyautogen[retrievechat] flaml[automl]\n", + "pip install autogen-agentchat[retrievechat]~=0.2 flaml[automl]\n", "```\n", "\n", "*You'll need to install `chromadb<=0.5.0` if you see issue like [#3551](https://github.com/microsoft/autogen/issues/3551).*\n", diff --git a/notebook/agentchat_RetrieveChat_mongodb.ipynb b/notebook/agentchat_RetrieveChat_mongodb.ipynb index 09c3c44bef27..f1f85f65a802 100644 --- a/notebook/agentchat_RetrieveChat_mongodb.ipynb +++ b/notebook/agentchat_RetrieveChat_mongodb.ipynb @@ -22,7 +22,7 @@ "Some extra dependencies are needed for this notebook, which can be installed 
via pip:\n", "\n", "```bash\n", - "pip install pyautogen[retrievechat-mongodb] flaml[automl]\n", + "pip install autogen-agentchat[retrievechat-mongodb]~=0.2 flaml[automl]\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_RetrieveChat_pgvector.ipynb b/notebook/agentchat_RetrieveChat_pgvector.ipynb index 4d9dd44c33d8..022b1347a2d2 100644 --- a/notebook/agentchat_RetrieveChat_pgvector.ipynb +++ b/notebook/agentchat_RetrieveChat_pgvector.ipynb @@ -24,7 +24,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install pyautogen[retrievechat-pgvector] flaml[automl]\n", + "pip install autogen-agentchat[retrievechat-pgvector]~=0.2 flaml[automl]\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_RetrieveChat_qdrant.ipynb b/notebook/agentchat_RetrieveChat_qdrant.ipynb index 0035a8e30817..9be4cbfe528d 100644 --- a/notebook/agentchat_RetrieveChat_qdrant.ipynb +++ b/notebook/agentchat_RetrieveChat_qdrant.ipynb @@ -21,7 +21,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install \"pyautogen[retrievechat-qdrant]\" \"flaml[automl]\"\n", + "pip install \"autogen-agentchat[retrievechat-qdrant]~=0.2\" \"flaml[automl]\"\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", @@ -43,7 +43,7 @@ } ], "source": [ - "%pip install \"pyautogen[retrievechat-qdrant]\" \"flaml[automl]\" -q" + "%pip install \"autogen-agentchat[retrievechat-qdrant]~=0.2\" \"flaml[automl]\" -q" ] }, { diff --git a/notebook/agentchat_agentops.ipynb b/notebook/agentchat_agentops.ipynb index 71106e45d3c4..7c5e09c9cc52 100644 --- a/notebook/agentchat_agentops.ipynb +++ b/notebook/agentchat_agentops.ipynb @@ -55,7 +55,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install pyautogen agentops\n", + "pip install autogen-agentchat~=0.2 agentops\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_auto_feedback_from_code_execution.ipynb b/notebook/agentchat_auto_feedback_from_code_execution.ipynb index 51b5a5917341..31169e1c7a0e 100644 --- a/notebook/agentchat_auto_feedback_from_code_execution.ipynb +++ b/notebook/agentchat_auto_feedback_from_code_execution.ipynb @@ -16,7 +16,7 @@ ":::info Requirements\n", "Install the following packages before running the code below:\n", "```bash\n", - "pip install pyautogen matplotlib yfinance\n", + "pip install autogen-agentchat~=0.2 matplotlib yfinance\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_azr_ai_search.ipynb b/notebook/agentchat_azr_ai_search.ipynb index f4521f60d272..3632a1fd87a1 100644 --- a/notebook/agentchat_azr_ai_search.ipynb +++ b/notebook/agentchat_azr_ai_search.ipynb @@ -84,9 +84,8 @@ "metadata": {}, "outputs": [], "source": [ - "!pip3 install pyautogen==0.2.16\n", + "!pip3 install autogen-agentchat[graph]~=0.2\n", "!pip3 install python-dotenv==1.0.1\n", - "!pip3 install pyautogen[graph]>=0.2.11\n", "!pip3 install azure-search-documents==11.4.0b8\n", "!pip3 install azure-identity==1.12.0" ] diff --git a/notebook/agentchat_cost_token_tracking.ipynb 
b/notebook/agentchat_cost_token_tracking.ipynb index a60fd6de15e9..17106e7c9385 100644 --- a/notebook/agentchat_cost_token_tracking.ipynb +++ b/notebook/agentchat_cost_token_tracking.ipynb @@ -54,7 +54,7 @@ "\n", "AutoGen requires `Python>=3.8`:\n", "```bash\n", - "pip install \"pyautogen\"\n", + "pip install \"autogen-agentchat~=0.2\"\n", "```" ] }, diff --git a/notebook/agentchat_custom_model.ipynb b/notebook/agentchat_custom_model.ipynb index 5097713a0923..773247ee0b91 100644 --- a/notebook/agentchat_custom_model.ipynb +++ b/notebook/agentchat_custom_model.ipynb @@ -22,7 +22,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install pyautogen torch transformers sentencepiece\n", + "pip install autogen-agentchat~=0.2 torch transformers sentencepiece\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_dalle_and_gpt4v.ipynb b/notebook/agentchat_dalle_and_gpt4v.ipynb index e07578016a98..afc4524734a3 100644 --- a/notebook/agentchat_dalle_and_gpt4v.ipynb +++ b/notebook/agentchat_dalle_and_gpt4v.ipynb @@ -17,7 +17,7 @@ "source": [ "### Before everything starts, install AutoGen with the `lmm` option\n", "```bash\n", - "pip install \"pyautogen[lmm]>=0.2.3\"\n", + "pip install \"autogen-agentchat[lmm]~=0.2\"\n", "```" ] }, diff --git a/notebook/agentchat_databricks_dbrx.ipynb b/notebook/agentchat_databricks_dbrx.ipynb index 12d40a37db12..c063906ea164 100644 --- a/notebook/agentchat_databricks_dbrx.ipynb +++ b/notebook/agentchat_databricks_dbrx.ipynb @@ -15,7 +15,7 @@ "This notebook will demonstrate a few basic examples of Autogen with DBRX, including the use of `AssistantAgent`, `UserProxyAgent`, and `ConversableAgent`. These demos are not intended to be exhaustive - feel free to use them as a base to build upon!\n", "\n", "## Requirements\n", - "AutoGen must be installed on your Databricks cluster, and requires `Python>=3.8`. This example includes the `%pip` magic command to install: `%pip install pyautogen`, as well as other necessary libraries. \n", + "AutoGen must be installed on your Databricks cluster, and requires `Python>=3.8`. This example includes the `%pip` magic command to install: `%pip install autogen-agentchat~=0.2`, as well as other necessary libraries. \n", "\n", "This code has been tested on: \n", "* [Serverless Notebooks](https://docs.databricks.com/en/compute/serverless.html) (in public preview as of Apr 18, 2024)\n", @@ -47,13 +47,11 @@ { "name": "stdout", "output_type": "stream", - "text": [ - "" - ] + "text": [] } ], "source": [ - "%pip install pyautogen==0.2.25 openai==1.21.2 typing_extensions==4.11.0 --upgrade" + "%pip install autogen-agentchat~=0.2.25 openai==1.21.2 typing_extensions==4.11.0 --upgrade" ] }, { diff --git a/notebook/agentchat_function_call.ipynb b/notebook/agentchat_function_call.ipynb index 2a173c8e2698..ff94c0d4fb0a 100644 --- a/notebook/agentchat_function_call.ipynb +++ b/notebook/agentchat_function_call.ipynb @@ -23,9 +23,9 @@ "\n", "## Requirements\n", "\n", - "AutoGen requires `Python>=3.8`. To run this notebook example, please install `pyautogen`:\n", + "AutoGen requires `Python>=3.8`. 
To run this notebook example, please install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```" ] }, @@ -36,7 +36,7 @@ "metadata": {}, "outputs": [], "source": [ - "# %pip install \"pyautogen>=0.2.3\"" + "# %pip install \"autogen-agentchat~=0.2\"" ] }, { diff --git a/notebook/agentchat_function_call_async.ipynb b/notebook/agentchat_function_call_async.ipynb index 57233547ebc8..e0e24af9232e 100644 --- a/notebook/agentchat_function_call_async.ipynb +++ b/notebook/agentchat_function_call_async.ipynb @@ -20,9 +20,9 @@ "\n", "````{=mdx}\n", ":::info Requirements\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_function_call_code_writing.ipynb b/notebook/agentchat_function_call_code_writing.ipynb index 92074e4821b0..924592bbdec0 100644 --- a/notebook/agentchat_function_call_code_writing.ipynb +++ b/notebook/agentchat_function_call_code_writing.ipynb @@ -28,7 +28,7 @@ "metadata": {}, "outputs": [], "source": [ - "! pip install pyautogen" + "! pip install autogen-agentchat~=0.2" ] }, { diff --git a/notebook/agentchat_function_call_currency_calculator.ipynb b/notebook/agentchat_function_call_currency_calculator.ipynb index 34ff92ff91a9..36ef81d5edbe 100644 --- a/notebook/agentchat_function_call_currency_calculator.ipynb +++ b/notebook/agentchat_function_call_currency_calculator.ipynb @@ -21,9 +21,9 @@ "\n", "## Requirements\n", "\n", - "AutoGen requires `Python>=3.8`. To run this notebook example, please install `pyautogen`:\n", + "AutoGen requires `Python>=3.8`. 
To run this notebook example, please install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```" ] }, @@ -34,7 +34,7 @@ "metadata": {}, "outputs": [], "source": [ - "# %pip install \"pyautogen>=0.2.3\"" + "# %pip install \"autogen-agentchat~=0.2\"" ] }, { diff --git a/notebook/agentchat_function_call_with_composio.ipynb b/notebook/agentchat_function_call_with_composio.ipynb index 001c56960fc9..1eea9e908fc4 100644 --- a/notebook/agentchat_function_call_with_composio.ipynb +++ b/notebook/agentchat_function_call_with_composio.ipynb @@ -60,7 +60,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install pyautogen composio-autogen\n", + "pip install autogen-agentchat~=0.2 composio-autogen\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_group_chat_with_llamaindex_agents.ipynb b/notebook/agentchat_group_chat_with_llamaindex_agents.ipynb index aea134907b76..3e0c721f65e1 100644 --- a/notebook/agentchat_group_chat_with_llamaindex_agents.ipynb +++ b/notebook/agentchat_group_chat_with_llamaindex_agents.ipynb @@ -26,7 +26,7 @@ "metadata": {}, "outputs": [], "source": [ - "%pip install pyautogen llama-index llama-index-tools-wikipedia llama-index-readers-wikipedia wikipedia" + "%pip install autogen-agentchat~=0.2 llama-index llama-index-tools-wikipedia llama-index-readers-wikipedia wikipedia" ] }, { diff --git a/notebook/agentchat_groupchat.ipynb b/notebook/agentchat_groupchat.ipynb index d2c061d14105..925c7124a29c 100644 --- a/notebook/agentchat_groupchat.ipynb +++ b/notebook/agentchat_groupchat.ipynb @@ -14,9 +14,9 @@ "\n", "````{=mdx}\n", ":::info Requirements\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", @@ -218,8 +218,11 @@ ], "metadata": { "front_matter": { - "tags": ["orchestration", "group chat"], - "description": "Explore the utilization of large language models in automated group chat scenarios, where agents perform tasks collectively, demonstrating how they can be configured, interact with each other, and retrieve specific information from external resources."
+ "description": "Explore the utilization of large language models in automated group chat scenarios, where agents perform tasks collectively, demonstrating how they can be configured, interact with each other, and retrieve specific information from external resources.", + "tags": [ + "orchestration", + "group chat" + ] }, "kernelspec": { "display_name": "flaml", diff --git a/notebook/agentchat_groupchat_RAG.ipynb b/notebook/agentchat_groupchat_RAG.ipynb index e18bd99c1511..aeb4f714e68a 100644 --- a/notebook/agentchat_groupchat_RAG.ipynb +++ b/notebook/agentchat_groupchat_RAG.ipynb @@ -15,7 +15,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install pyautogen[retrievechat]\n", + "pip install autogen-agentchat[retrievechat]~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_groupchat_customized.ipynb b/notebook/agentchat_groupchat_customized.ipynb index dde124aef7d7..3b85223a731e 100644 --- a/notebook/agentchat_groupchat_customized.ipynb +++ b/notebook/agentchat_groupchat_customized.ipynb @@ -39,9 +39,9 @@ "\n", "````{=mdx}\n", ":::info Requirements\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_groupchat_finite_state_machine.ipynb b/notebook/agentchat_groupchat_finite_state_machine.ipynb index 74b6f3d40473..cfe45662a8fa 100644 --- a/notebook/agentchat_groupchat_finite_state_machine.ipynb +++ b/notebook/agentchat_groupchat_finite_state_machine.ipynb @@ -18,9 +18,9 @@ "\n", "````{=mdx}\n", ":::info Requirements\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", @@ -35,7 +35,7 @@ "outputs": [], "source": [ "%%capture --no-stderr\n", - "%pip install pyautogen[graph]>=0.2.11" + "%pip install autogen-agentchat[graph]~=0.2.11" ] }, { diff --git a/notebook/agentchat_groupchat_research.ipynb b/notebook/agentchat_groupchat_research.ipynb index c448ed8cb7a8..6adf653f9035 100644 --- a/notebook/agentchat_groupchat_research.ipynb +++ b/notebook/agentchat_groupchat_research.ipynb @@ -14,9 +14,9 @@ "\n", "````{=mdx}\n", ":::info Requirements\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", @@ -515,8 +515,10 @@ ], "metadata": { "front_matter": { - "tags": ["group chat"], - "description": "Perform research using a group chat with a number of specialized agents" + "description": "Perform research using a group chat with a number of specialized agents", + "tags": [ + "group chat" + ] }, "kernelspec": { "display_name": "flaml", diff --git a/notebook/agentchat_groupchat_stateflow.ipynb b/notebook/agentchat_groupchat_stateflow.ipynb index 3081056eac9c..6a7869ec5403 100644 --- a/notebook/agentchat_groupchat_stateflow.ipynb +++ b/notebook/agentchat_groupchat_stateflow.ipynb @@ -12,9 +12,9 @@ "\n", "````{=mdx}\n", ":::info Requirements\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", 
"```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_groupchat_vis.ipynb b/notebook/agentchat_groupchat_vis.ipynb index 29f968752ae6..d660bb0f51ea 100644 --- a/notebook/agentchat_groupchat_vis.ipynb +++ b/notebook/agentchat_groupchat_vis.ipynb @@ -12,9 +12,9 @@ "\n", "````{=mdx}\n", ":::info Requirements\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", @@ -976,8 +976,10 @@ ], "metadata": { "front_matter": { - "tags": ["group chat"], - "description": "Explore a group chat example using agents such as a coder and visualization agent." + "description": "Explore a group chat example using agents such as a coder and visualization agent.", + "tags": [ + "group chat" + ] }, "kernelspec": { "display_name": "flaml", diff --git a/notebook/agentchat_human_feedback.ipynb b/notebook/agentchat_human_feedback.ipynb index 000d788d6a56..3c21c7c9f4f5 100644 --- a/notebook/agentchat_human_feedback.ipynb +++ b/notebook/agentchat_human_feedback.ipynb @@ -28,7 +28,7 @@ "\n", "AutoGen requires `Python>=3.8`. To run this notebook example, please install:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```" ] }, @@ -45,7 +45,7 @@ }, "outputs": [], "source": [ - "# %pip install \"pyautogen>=0.2.3\"" + "# %pip install \"autogen-agentchat~=0.2\"" ] }, { diff --git a/notebook/agentchat_image_generation_capability.ipynb b/notebook/agentchat_image_generation_capability.ipynb index b5d298d7f4d2..d8a01fc30327 100644 --- a/notebook/agentchat_image_generation_capability.ipynb +++ b/notebook/agentchat_image_generation_capability.ipynb @@ -20,7 +20,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install pyautogen[lmm]\n", + "pip install autogen-agentchat[lmm]~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_langchain.ipynb b/notebook/agentchat_langchain.ipynb index 83ab2df44c28..1d5fea19e066 100644 --- a/notebook/agentchat_langchain.ipynb +++ b/notebook/agentchat_langchain.ipynb @@ -28,9 +28,9 @@ "\n", "## Requirements\n", "\n", - "AutoGen requires `Python>=3.8`. To run this notebook example, please install `pyautogen` and `Langchain`:\n", + "AutoGen requires `Python>=3.8`. 
To run this notebook example, please install `autogen-agentchat` and `Langchain`:\n", "```bash\n", - "pip install pyautogen Langchain\n", + "pip install autogen-agentchat~=0.2 Langchain\n", "```" ] }, @@ -47,7 +47,7 @@ }, "outputs": [], "source": [ - "%pip install \"pyautogen>=0.2.3\" Langchain" + "%pip install \"autogen-agentchat~=0.2\" Langchain" ] }, { diff --git a/notebook/agentchat_lmm_gpt-4v.ipynb b/notebook/agentchat_lmm_gpt-4v.ipynb index 7c9e3ea125c2..7136ac62e738 100644 --- a/notebook/agentchat_lmm_gpt-4v.ipynb +++ b/notebook/agentchat_lmm_gpt-4v.ipynb @@ -21,9 +21,9 @@ "source": [ "### Before everything starts, install AutoGen with the `lmm` option\n", "\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install \"pyautogen[lmm]>=0.2.17\"\n", + "pip install \"autogen-agentchat[lmm]~=0.2\"\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n" diff --git a/notebook/agentchat_lmm_llava.ipynb b/notebook/agentchat_lmm_llava.ipynb index 61efc74e00e1..e5eb26362ec7 100644 --- a/notebook/agentchat_lmm_llava.ipynb +++ b/notebook/agentchat_lmm_llava.ipynb @@ -26,7 +26,7 @@ "source": [ "### Before everything starts, install AutoGen with the `lmm` option\n", "```bash\n", - "pip install \"pyautogen[lmm]>=0.2.3\"\n", + "pip install \"autogen-agentchat[lmm]~=0.2\"\n", "```" ] }, diff --git a/notebook/agentchat_memory_using_mem0.ipynb b/notebook/agentchat_memory_using_mem0.ipynb index d590002164b9..b433e05df41d 100644 --- a/notebook/agentchat_memory_using_mem0.ipynb +++ b/notebook/agentchat_memory_using_mem0.ipynb @@ -17,7 +17,7 @@ "source": [ "This notebook demonstrates an intelligent customer service chatbot system that combines:\n", "\n", - "- PyAutoGen for conversational agents\n", + "- AutoGen for conversational agents\n", "- Mem0 for memory management\n", "\n", "[Mem0](https://www.mem0.ai/) provides a smart, self-improving memory layer for Large Language Models (LLMs), enabling developers to create personalized AI experiences that evolve with each user interaction. 
Refer [docs](https://docs.mem0.ai/overview) for more information.\n", @@ -50,7 +50,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install pyautogen mem0ai\n", + "pip install autogen-agentchat~=0.2 mem0ai\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_microsoft_fabric.ipynb b/notebook/agentchat_microsoft_fabric.ipynb index 1aaee58f0921..225000146786 100644 --- a/notebook/agentchat_microsoft_fabric.ipynb +++ b/notebook/agentchat_microsoft_fabric.ipynb @@ -225,8 +225,8 @@ "metadata": {}, "outputs": [], "source": [ - "# pyautogen>0.1.14 supports openai>=1\n", - "%pip install \"pyautogen>0.2\" \"openai>1\" -q" + "# autogen-agentchat>0.1.14 supports openai>=1\n", + "%pip install \"autogen-agentchat~=0.2\" \"openai>1\" -q" ] }, { @@ -418,7 +418,7 @@ }, "outputs": [], "source": [ - "%pip install \"pyautogen[retrievechat,lmm]>=0.2.28\" -q" + "%pip install \"autogen-agentchat[retrievechat,lmm]~=0.2\" -q" ] }, { diff --git a/notebook/agentchat_multi_task_async_chats.ipynb b/notebook/agentchat_multi_task_async_chats.ipynb index ad75618a5460..86b22edd2bec 100644 --- a/notebook/agentchat_multi_task_async_chats.ipynb +++ b/notebook/agentchat_multi_task_async_chats.ipynb @@ -15,9 +15,9 @@ "\n", "\\:\\:\\:info Requirements\n", "\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_multi_task_chats.ipynb b/notebook/agentchat_multi_task_chats.ipynb index 2c200f52354e..5defb22b13d0 100644 --- a/notebook/agentchat_multi_task_chats.ipynb +++ b/notebook/agentchat_multi_task_chats.ipynb @@ -15,9 +15,9 @@ "\n", "\\:\\:\\:info Requirements\n", "\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_nested_chats_chess.ipynb b/notebook/agentchat_nested_chats_chess.ipynb index b3e369fba8ce..e5a22fce5a7f 100644 --- a/notebook/agentchat_nested_chats_chess.ipynb +++ b/notebook/agentchat_nested_chats_chess.ipynb @@ -39,7 +39,7 @@ "source": [ "## Installation\n", "\n", - "First you need to install the `pyautogen` and `chess` packages to use AutoGen." + "First you need to install the `autogen-agentchat~=0.2` and `chess` packages to use AutoGen." ] }, { @@ -48,7 +48,7 @@ "metadata": {}, "outputs": [], "source": [ - "! pip install -qqq pyautogen chess" + "! pip install -qqq autogen-agentchat~=0.2 chess" ] }, { diff --git a/notebook/agentchat_nested_chats_chess_altmodels.ipynb b/notebook/agentchat_nested_chats_chess_altmodels.ipynb index 69d3edbcfb50..8980a87e8818 100644 --- a/notebook/agentchat_nested_chats_chess_altmodels.ipynb +++ b/notebook/agentchat_nested_chats_chess_altmodels.ipynb @@ -40,7 +40,7 @@ "source": [ "## Installation\n", "\n", - "First, you need to install the `pyautogen` and `chess` packages to use AutoGen. We'll include Anthropic and Together.AI libraries." + "First, you need to install the `autogen-agentchat~=0.2` and `chess` packages to use AutoGen. We'll include Anthropic and Together.AI libraries." ] }, { @@ -49,7 +49,7 @@ "metadata": {}, "outputs": [], "source": [ - "! 
pip install -qqq pyautogen[anthropic,together] chess" + "! pip install -qqq autogen-agentchat[anthropic,together]~=0.2 chess" ] }, { diff --git a/notebook/agentchat_nested_sequential_chats.ipynb b/notebook/agentchat_nested_sequential_chats.ipynb index 3c4ca199484c..eb27ef72e351 100644 --- a/notebook/agentchat_nested_sequential_chats.ipynb +++ b/notebook/agentchat_nested_sequential_chats.ipynb @@ -15,9 +15,9 @@ "\n", "\\:\\:\\:info Requirements\n", "\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_nestedchat.ipynb b/notebook/agentchat_nestedchat.ipynb index f81f20398591..bfb1cc680581 100644 --- a/notebook/agentchat_nestedchat.ipynb +++ b/notebook/agentchat_nestedchat.ipynb @@ -15,9 +15,9 @@ "\n", "\\:\\:\\:info Requirements\n", "\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_nestedchat_optiguide.ipynb b/notebook/agentchat_nestedchat_optiguide.ipynb index c1648bce62b4..cf9f6c5890df 100644 --- a/notebook/agentchat_nestedchat_optiguide.ipynb +++ b/notebook/agentchat_nestedchat_optiguide.ipynb @@ -21,7 +21,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install pyautogen eventlet gurobipy\n", + "pip install autogen-agentchat~=0.2 eventlet gurobipy\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_oai_assistant_function_call.ipynb b/notebook/agentchat_oai_assistant_function_call.ipynb index bc78819fb198..b96d1eef9098 100644 --- a/notebook/agentchat_oai_assistant_function_call.ipynb +++ b/notebook/agentchat_oai_assistant_function_call.ipynb @@ -19,9 +19,9 @@ "AutoGen requires `Python>=3.8`. To run this notebook example, please install:\n", "````{=mdx}\n", ":::info Requirements\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", @@ -36,7 +36,7 @@ "outputs": [], "source": [ "%%capture --no-stderr\n", - "# %pip install \"pyautogen>=0.2.3\"" + "# %pip install \"autogen-agentchat~=0.2\"" ] }, { diff --git a/notebook/agentchat_oai_assistant_groupchat.ipynb b/notebook/agentchat_oai_assistant_groupchat.ipynb index d38fed4cdaee..e236aa1120e9 100644 --- a/notebook/agentchat_oai_assistant_groupchat.ipynb +++ b/notebook/agentchat_oai_assistant_groupchat.ipynb @@ -16,9 +16,9 @@ "AutoGen requires `Python>=3.8`. 
To run this notebook example, please install:\n", "````{=mdx}\n", ":::info Requirements\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_oai_code_interpreter.ipynb b/notebook/agentchat_oai_code_interpreter.ipynb index a8aeb6147896..92eb3e19e23d 100644 --- a/notebook/agentchat_oai_code_interpreter.ipynb +++ b/notebook/agentchat_oai_code_interpreter.ipynb @@ -12,9 +12,9 @@ "AutoGen requires `Python>=3.8`. To run this notebook example, please install:\n", "````{=mdx}\n", ":::info Requirements\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_planning.ipynb b/notebook/agentchat_planning.ipynb index 14b393958dc1..9953bf3b57f0 100644 --- a/notebook/agentchat_planning.ipynb +++ b/notebook/agentchat_planning.ipynb @@ -26,9 +26,9 @@ "\n", "## Requirements\n", "\n", - "AutoGen requires `Python>=3.8`. To run this notebook example, please install pyautogen and docker:\n", + "AutoGen requires `Python>=3.8`. To run this notebook example, please install autogen-agentchat and docker:\n", "```bash\n", - "pip install pyautogen docker\n", + "pip install autogen-agentchat~=0.2 docker\n", "```" ] }, @@ -45,7 +45,7 @@ }, "outputs": [], "source": [ - "# %pip install \"pyautogen>=0.2.3\" docker" + "# %pip install \"autogen-agentchat~=0.2\" docker" ] }, { diff --git a/notebook/agentchat_society_of_mind.ipynb b/notebook/agentchat_society_of_mind.ipynb index df3a6c543397..091ab2e5519b 100644 --- a/notebook/agentchat_society_of_mind.ipynb +++ b/notebook/agentchat_society_of_mind.ipynb @@ -15,9 +15,9 @@ "\n", "````{=mdx}\n", ":::info Requirements\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", @@ -357,8 +357,11 @@ ], "metadata": { "front_matter": { - "tags": ["orchestration", "nested chat"], - "description": "Explore the demonstration of the SocietyOfMindAgent in the AutoGen library, which runs a group chat as an internal monologue, but appears to the external world as a single agent, offering a structured way to manage complex interactions among multiple agents and handle issues such as extracting responses from complex dialogues and dealing with context window constraints." + "description": "Explore the demonstration of the SocietyOfMindAgent in the AutoGen library, which runs a group chat as an internal monologue, but appears to the external world as a single agent, offering a structured way to manage complex interactions among multiple agents and handle issues such as extracting responses from complex dialogues and dealing with context window constraints.", + "tags": [ + "orchestration", + "nested chat" + ] }, "kernelspec": { "display_name": "Python 3 (ipykernel)", diff --git a/notebook/agentchat_stream.ipynb b/notebook/agentchat_stream.ipynb index 8127cdfbab04..5536cef96aa0 100644 --- a/notebook/agentchat_stream.ipynb +++ b/notebook/agentchat_stream.ipynb @@ -28,7 +28,7 @@ "\n", "AutoGen requires `Python>=3.8`. 
To run this notebook example, please install:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```" ] }, @@ -45,7 +45,7 @@ }, "outputs": [], "source": [ - "# %pip install \"pyautogen>=0.2.3\"" + "# %pip install \"autogen-agentchat~=0.2\"" ] }, { diff --git a/notebook/agentchat_surfer.ipynb b/notebook/agentchat_surfer.ipynb index 46c4679e3014..09c9cb3d4f8d 100644 --- a/notebook/agentchat_surfer.ipynb +++ b/notebook/agentchat_surfer.ipynb @@ -15,7 +15,7 @@ "\n", "AutoGen requires `Python>=3.8`. To run this notebook example, please install AutoGen with the optional `websurfer` dependencies:\n", "```bash\n", - "pip install \"pyautogen[websurfer]\"\n", + "pip install \"autogen-agentchat[websurfer]~=0.2\"\n", "```" ] }, @@ -25,7 +25,7 @@ "metadata": {}, "outputs": [], "source": [ - "# %pip install --quiet \"pyautogen[websurfer]\"" + "# %pip install --quiet \"autogen-agentchat[websurfer]~=0.2\"" ] }, { diff --git a/notebook/agentchat_teachability.ipynb b/notebook/agentchat_teachability.ipynb index ac239f793dc7..4be1e135dab1 100644 --- a/notebook/agentchat_teachability.ipynb +++ b/notebook/agentchat_teachability.ipynb @@ -22,7 +22,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install pyautogen[teachable]\n", + "pip install autogen-agentchat[teachable]~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", @@ -99,8 +99,8 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001B[92m\n", - "CLEARING MEMORY\u001B[0m\n" + "\u001b[92m\n", + "CLEARING MEMORY\u001b[0m\n" ] } ], @@ -152,14 +152,14 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001B[33muser\u001B[0m (to teachable_agent):\n", + "\u001b[33muser\u001b[0m (to teachable_agent):\n", "\n", "What is the Vicuna model?\n", "\n", "--------------------------------------------------------------------------------\n", - "\u001B[31m\n", - ">>>>>>>> USING AUTO REPLY...\u001B[0m\n", - "\u001B[33mteachable_agent\u001B[0m (to user):\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mteachable_agent\u001b[0m (to user):\n", "\n", "The term \"Vicuna model\" does not point to a well-known concept or framework in the realms of science, technology, or social sciences as of my last knowledge update in early 2023. It's possible that the term could be a reference to a proprietary model or a concept that has emerged after my last update or it might be a misspelling or a misunderstanding.\n", "\n", @@ -185,14 +185,14 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001B[33muser\u001B[0m (to teachable_agent):\n", + "\u001b[33muser\u001b[0m (to teachable_agent):\n", "\n", "Vicuna is a 13B-parameter language model released by Meta.\n", "\n", "--------------------------------------------------------------------------------\n", - "\u001B[31m\n", - ">>>>>>>> USING AUTO REPLY...\u001B[0m\n", - "\u001B[33mteachable_agent\u001B[0m (to user):\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mteachable_agent\u001b[0m (to user):\n", "\n", "My apologies for the confusion. As of my last update, the Vicuna model had not been part of my database. 
If Vicuna is indeed a 13-billion-parameter language model developed by Meta (formerly Facebook Inc.), then it would be one of the large-scale transformer-based models akin to those like GPT-3 by OpenAI.\n", "\n", @@ -222,14 +222,14 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001B[33muser\u001B[0m (to teachable_agent):\n", + "\u001b[33muser\u001b[0m (to teachable_agent):\n", "\n", "What is the Orca model?\n", "\n", "--------------------------------------------------------------------------------\n", - "\u001B[31m\n", - ">>>>>>>> USING AUTO REPLY...\u001B[0m\n", - "\u001B[33mteachable_agent\u001B[0m (to user):\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mteachable_agent\u001b[0m (to user):\n", "\n", "As of my last update, the Orca model appears to reference a new development that I do not have extensive information on, similar to the earlier reference to the Vicuna model.\n", "\n", @@ -255,14 +255,14 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001B[33muser\u001B[0m (to teachable_agent):\n", + "\u001b[33muser\u001b[0m (to teachable_agent):\n", "\n", "Orca is a 13B-parameter language model developed by Microsoft. It outperforms Vicuna on most tasks.\n", "\n", "--------------------------------------------------------------------------------\n", - "\u001B[31m\n", - ">>>>>>>> USING AUTO REPLY...\u001B[0m\n", - "\u001B[33mteachable_agent\u001B[0m (to user):\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mteachable_agent\u001b[0m (to user):\n", "\n", "Thank you for providing the context about the Orca model. Based on the new information you've given, Orca is a language model with 13 billion parameters, similar in size to Meta's Vicuna model, but developed by Microsoft. If it outperforms Vicuna on most tasks, it suggests that it could have been trained on a more diverse dataset, use a more advanced architecture, have more effective training techniques, or some combination of these factors.\n", "\n", @@ -297,14 +297,14 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001B[33muser\u001B[0m (to teachable_agent):\n", + "\u001b[33muser\u001b[0m (to teachable_agent):\n", "\n", "How does the Vicuna model compare to the Orca model?\n", "\n", "--------------------------------------------------------------------------------\n", - "\u001B[31m\n", - ">>>>>>>> USING AUTO REPLY...\u001B[0m\n", - "\u001B[33mteachable_agent\u001B[0m (to user):\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mteachable_agent\u001b[0m (to user):\n", "\n", "The Vicuna model and the Orca model are both large-scale language models with a significant number of parameters—13 billion, to be exact.\n", "\n", @@ -340,7 +340,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001B[33muser\u001B[0m (to teachable_agent):\n", + "\u001b[33muser\u001b[0m (to teachable_agent):\n", "\n", "Please summarize this abstract.\n", "\n", @@ -350,9 +350,9 @@ "\n", "\n", "--------------------------------------------------------------------------------\n", - "\u001B[31m\n", - ">>>>>>>> USING AUTO REPLY...\u001B[0m\n", - "\u001B[33mteachable_agent\u001B[0m (to user):\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mteachable_agent\u001b[0m (to user):\n", "\n", "AutoGen is an open-source framework designed to facilitate the creation of applications using large language models (LLMs) through the use of multiple conversational agents. 
These agents can be tailored to users' needs and are capable of interaction in multiple modes, including with other LLMs, human input, and additional tools. With AutoGen, developers have the flexibility to program agent interactions using both natural language and code, enabling the creation of complex patterns suitable for a wide range of applications. The framework has been proven effective across various fields, such as math, coding, question answering, and entertainment, based on empirical studies conducted to test its capabilities.\n", "\n", @@ -386,7 +386,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001B[33muser\u001B[0m (to teachable_agent):\n", + "\u001b[33muser\u001b[0m (to teachable_agent):\n", "\n", "Please summarize this abstract. \n", "When I'm summarizing an abstract, I try to make the summary contain just three short bullet points: the title, the innovation, and the key empirical results.\n", @@ -397,9 +397,9 @@ "\n", "\n", "--------------------------------------------------------------------------------\n", - "\u001B[31m\n", - ">>>>>>>> USING AUTO REPLY...\u001B[0m\n", - "\u001B[33mteachable_agent\u001B[0m (to user):\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mteachable_agent\u001b[0m (to user):\n", "\n", "- Title: AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation\n", "- Innovation: AutoGen, an open-source framework that supports building large language model (LLM) applications by enabling conversation among multiple customizable and conversable agents.\n", @@ -436,7 +436,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001B[33muser\u001B[0m (to teachable_agent):\n", + "\u001b[33muser\u001b[0m (to teachable_agent):\n", "\n", "Please summarize this abstract.\n", "\n", @@ -445,9 +445,9 @@ "Artificial intelligence (AI) researchers have been developing and refining large language models (LLMs) that exhibit remarkable capabilities across a variety of domains and tasks, challenging our understanding of learning and cognition. The latest model developed by OpenAI, GPT-4, was trained using an unprecedented scale of compute and data. In this paper, we report on our investigation of an early version of GPT-4, when it was still in active development by OpenAI. We contend that (this early version of) GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google's PaLM for example) that exhibit more general intelligence than previous AI models. We discuss the rising capabilities and implications of these models. We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4's performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system. In our exploration of GPT-4, we put special emphasis on discovering its limitations, and we discuss the challenges ahead for advancing towards deeper and more comprehensive versions of AGI, including the possible need for pursuing a new paradigm that moves beyond next-word prediction. 
We conclude with reflections on societal influences of the recent technological leap and future research directions.\n", "\n", "--------------------------------------------------------------------------------\n", - "\u001B[31m\n", - ">>>>>>>> USING AUTO REPLY...\u001B[0m\n", - "\u001B[33mteachable_agent\u001B[0m (to user):\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mteachable_agent\u001b[0m (to user):\n", "\n", "- Title: Sparks of Artificial General Intelligence: Early experiments with GPT-4\n", "\n", @@ -487,7 +487,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001B[33muser\u001B[0m (to teachable_agent):\n", + "\u001b[33muser\u001b[0m (to teachable_agent):\n", "\n", "Consider the identity: \n", "9 * 4 + 6 * 6 = 72\n", @@ -496,9 +496,9 @@ "\n", "\n", "--------------------------------------------------------------------------------\n", - "\u001B[31m\n", - ">>>>>>>> USING AUTO REPLY...\u001B[0m\n", - "\u001B[33mteachable_agent\u001B[0m (to user):\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mteachable_agent\u001b[0m (to user):\n", "\n", "To solve this problem, we need to find a way to add exactly 27 (since 99 - 72 = 27) to the left hand side of the equation by modifying only one of the integers in the equation. \n", "\n", @@ -563,7 +563,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001B[33muser\u001B[0m (to teachable_agent):\n", + "\u001b[33muser\u001b[0m (to teachable_agent):\n", "\n", "Consider the identity: \n", "9 * 4 + 6 * 6 = 72\n", @@ -584,9 +584,9 @@ "\n", "\n", "--------------------------------------------------------------------------------\n", - "\u001B[31m\n", - ">>>>>>>> USING AUTO REPLY...\u001B[0m\n", - "\u001B[33mteachable_agent\u001B[0m (to user):\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mteachable_agent\u001b[0m (to user):\n", "\n", "Given the new set of instructions and the correction that according to a past memory, the solution is \"9 * 1 + 6 * 9\", let's follow the steps carefully to arrive at the correct modified equation.\n", "\n", @@ -668,7 +668,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001B[33muser\u001B[0m (to teachable_agent):\n", + "\u001b[33muser\u001b[0m (to teachable_agent):\n", "\n", "Consider the identity: \n", "9 * 4 + 6 * 6 = 72\n", @@ -677,9 +677,9 @@ "\n", "\n", "--------------------------------------------------------------------------------\n", - "\u001B[31m\n", - ">>>>>>>> USING AUTO REPLY...\u001B[0m\n", - "\u001B[33mteachable_agent\u001B[0m (to user):\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mteachable_agent\u001b[0m (to user):\n", "\n", "Let's apply the steps you've provided to solve the problem at hand:\n", "\n", @@ -740,7 +740,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001B[33muser\u001B[0m (to teachable_agent):\n", + "\u001b[33muser\u001b[0m (to teachable_agent):\n", "\n", "Consider the identity: \n", "8 * 3 + 7 * 9 = 87\n", @@ -749,9 +749,9 @@ "\n", "\n", "--------------------------------------------------------------------------------\n", - "\u001B[31m\n", - ">>>>>>>> USING AUTO REPLY...\u001B[0m\n", - "\u001B[33mteachable_agent\u001B[0m (to user):\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mteachable_agent\u001b[0m (to user):\n", "\n", "Let's apply the plan step-by-step to find the correct modification:\n", "\n", diff --git a/notebook/agentchat_teachable_oai_assistants.ipynb 
b/notebook/agentchat_teachable_oai_assistants.ipynb index 3753be414f39..75744d9a397a 100644 --- a/notebook/agentchat_teachable_oai_assistants.ipynb +++ b/notebook/agentchat_teachable_oai_assistants.ipynb @@ -28,7 +28,7 @@ "\n", "AutoGen requires `Python>=3.8`. To run this notebook example, please install the [teachable] option.\n", "```bash\n", - "pip install \"pyautogen[teachable]\"\n", + "pip install \"autogen-agentchat[teachable]~=0.2\"\n", "```" ] }, @@ -39,7 +39,7 @@ "outputs": [], "source": [ "%%capture --no-stderr\n", - "# %pip install \"pyautogen[teachable]\"" + "# %pip install \"autogen-agentchat[teachable]~=0.2\"" ] }, { diff --git a/notebook/agentchat_teaching.ipynb b/notebook/agentchat_teaching.ipynb index a61f3c7e08ed..4d9564276cba 100644 --- a/notebook/agentchat_teaching.ipynb +++ b/notebook/agentchat_teaching.ipynb @@ -16,9 +16,9 @@ "\n", "````{=mdx}\n", ":::info Requirements\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_transform_messages.ipynb b/notebook/agentchat_transform_messages.ipynb index d0216e05dd2d..fe9ca5d1f519 100644 --- a/notebook/agentchat_transform_messages.ipynb +++ b/notebook/agentchat_transform_messages.ipynb @@ -12,9 +12,9 @@ "\n", "````{=mdx}\n", ":::info Requirements\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_two_users.ipynb b/notebook/agentchat_two_users.ipynb index eb9e0c1fbf28..14b95b537013 100644 --- a/notebook/agentchat_two_users.ipynb +++ b/notebook/agentchat_two_users.ipynb @@ -27,7 +27,7 @@ "\n", "AutoGen requires `Python>=3.8`. To run this notebook example, please install:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```" ] }, @@ -44,7 +44,7 @@ }, "outputs": [], "source": [ - "# %pip install \"pyautogen>=0.2.3\"" + "# %pip install \"autogen-agentchat~=0.2\"" ] }, { diff --git a/notebook/agentchat_video_transcript_translate_with_whisper.ipynb b/notebook/agentchat_video_transcript_translate_with_whisper.ipynb index 48812ad01a61..e19e61419ad3 100644 --- a/notebook/agentchat_video_transcript_translate_with_whisper.ipynb +++ b/notebook/agentchat_video_transcript_translate_with_whisper.ipynb @@ -23,7 +23,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install pyautogen openai openai-whisper\n", + "pip install autogen-agentchat~=0.2 openai openai-whisper\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_web_info.ipynb b/notebook/agentchat_web_info.ipynb index f990c128b78c..e2962e9049fb 100644 --- a/notebook/agentchat_web_info.ipynb +++ b/notebook/agentchat_web_info.ipynb @@ -30,9 +30,9 @@ "\n", "## Requirements\n", "\n", - "AutoGen requires `Python>=3.8`. To run this notebook example, please install pyautogen and docker:\n", + "AutoGen requires `Python>=3.8`. 
To run this notebook example, please install autogen-agentchat and docker:\n", "```bash\n", - "pip install pyautogen docker\n", + "pip install autogen-agentchat~=0.2 docker\n", "```" ] }, @@ -49,7 +49,7 @@ }, "outputs": [], "source": [ - "# %pip install \"pyautogen>=0.2.3\" docker" + "# %pip install \"autogen-agentchat~=0.2\" docker" ] }, { diff --git a/notebook/agentchat_webscraping_with_apify.ipynb b/notebook/agentchat_webscraping_with_apify.ipynb index 0429c10f8a7f..c1fec78d83bc 100644 --- a/notebook/agentchat_webscraping_with_apify.ipynb +++ b/notebook/agentchat_webscraping_with_apify.ipynb @@ -23,7 +23,7 @@ "metadata": {}, "outputs": [], "source": [ - "! pip install -qqq pyautogen apify-client" + "! pip install -qqq autogen-agentchat~=0.2 apify-client" ] }, { diff --git a/notebook/agentchat_websockets.ipynb b/notebook/agentchat_websockets.ipynb index 7e6e449675c1..107c7bc40e65 100644 --- a/notebook/agentchat_websockets.ipynb +++ b/notebook/agentchat_websockets.ipynb @@ -28,7 +28,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install pyautogen[websockets] fastapi uvicorn\n", + "pip install autogen-agentchat[websockets]~=0.2 fastapi uvicorn\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchats_sequential_chats.ipynb b/notebook/agentchats_sequential_chats.ipynb index cffcbfdefcb6..3fdd857cd03f 100644 --- a/notebook/agentchats_sequential_chats.ipynb +++ b/notebook/agentchats_sequential_chats.ipynb @@ -15,9 +15,9 @@ "\n", "\\:\\:\\:info Requirements\n", "\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agenteval_cq_math.ipynb b/notebook/agenteval_cq_math.ipynb index 43ea28de1a31..199967a91080 100644 --- a/notebook/agenteval_cq_math.ipynb +++ b/notebook/agenteval_cq_math.ipynb @@ -30,12 +30,12 @@ "\n", "## Requirements\n", "\n", - "AutoGen requires `Python>=3.8`. To run this notebook example, please install pyautogen, Docker, and OpenAI:\n" + "AutoGen requires `Python>=3.8`. 
To run this notebook example, please install autogen-agentchat, Docker, and OpenAI:\n" ] }, { "cell_type": "code", - "execution_count": 1, + "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" @@ -49,72 +49,9 @@ "id": "68lTZZyJ1_BI", "outputId": "15a55fab-e13a-4654-b8cb-ae117478d6d8" }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Defaulting to user installation because normal site-packages is not writeable\n", - "Requirement already satisfied: pyautogen>=0.2.3 in /home/vscode/.local/lib/python3.10/site-packages (0.2.17)\n", - "Requirement already satisfied: docker in /home/vscode/.local/lib/python3.10/site-packages (7.0.0)\n", - "Requirement already satisfied: diskcache in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (5.6.3)\n", - "Requirement already satisfied: flaml in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (2.1.2)\n", - "Requirement already satisfied: tiktoken in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (0.6.0)\n", - "Requirement already satisfied: openai>=1.3 in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (1.14.1)\n", - "Requirement already satisfied: pydantic!=2.6.0,<3,>=1.10 in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (2.6.4)\n", - "Requirement already satisfied: termcolor in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (2.4.0)\n", - "Requirement already satisfied: python-dotenv in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (1.0.1)\n", - "Requirement already satisfied: requests>=2.26.0 in /usr/local/lib/python3.10/site-packages (from docker) (2.31.0)\n", - "Requirement already satisfied: packaging>=14.0 in /usr/local/lib/python3.10/site-packages (from docker) (24.0)\n", - "Requirement already satisfied: urllib3>=1.26.0 in /usr/local/lib/python3.10/site-packages (from docker) (2.2.1)\n", - "Requirement already satisfied: tqdm>4 in /home/vscode/.local/lib/python3.10/site-packages (from openai>=1.3->pyautogen>=0.2.3) (4.66.2)\n", - "Requirement already satisfied: httpx<1,>=0.23.0 in /home/vscode/.local/lib/python3.10/site-packages (from openai>=1.3->pyautogen>=0.2.3) (0.27.0)\n", - "Requirement already satisfied: distro<2,>=1.7.0 in /home/vscode/.local/lib/python3.10/site-packages (from openai>=1.3->pyautogen>=0.2.3) (1.9.0)\n", - "Requirement already satisfied: sniffio in /home/vscode/.local/lib/python3.10/site-packages (from openai>=1.3->pyautogen>=0.2.3) (1.3.1)\n", - "Requirement already satisfied: anyio<5,>=3.5.0 in /home/vscode/.local/lib/python3.10/site-packages (from openai>=1.3->pyautogen>=0.2.3) (4.3.0)\n", - "Requirement already satisfied: typing-extensions<5,>=4.7 in /home/vscode/.local/lib/python3.10/site-packages (from openai>=1.3->pyautogen>=0.2.3) (4.10.0)\n", - "Requirement already satisfied: annotated-types>=0.4.0 in /home/vscode/.local/lib/python3.10/site-packages (from pydantic!=2.6.0,<3,>=1.10->pyautogen>=0.2.3) (0.6.0)\n", - "Requirement already satisfied: pydantic-core==2.16.3 in /home/vscode/.local/lib/python3.10/site-packages (from pydantic!=2.6.0,<3,>=1.10->pyautogen>=0.2.3) (2.16.3)\n", - "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/site-packages (from requests>=2.26.0->docker) (2024.2.2)\n", - "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/site-packages (from requests>=2.26.0->docker) (3.6)\n", - "Requirement already 
satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/site-packages (from requests>=2.26.0->docker) (3.3.2)\n", - "Requirement already satisfied: NumPy>=1.17 in /home/vscode/.local/lib/python3.10/site-packages (from flaml->pyautogen>=0.2.3) (1.26.4)\n", - "Requirement already satisfied: regex>=2022.1.18 in /home/vscode/.local/lib/python3.10/site-packages (from tiktoken->pyautogen>=0.2.3) (2023.12.25)\n", - "Requirement already satisfied: exceptiongroup>=1.0.2 in /home/vscode/.local/lib/python3.10/site-packages (from anyio<5,>=3.5.0->openai>=1.3->pyautogen>=0.2.3) (1.2.0)\n", - "Requirement already satisfied: httpcore==1.* in /home/vscode/.local/lib/python3.10/site-packages (from httpx<1,>=0.23.0->openai>=1.3->pyautogen>=0.2.3) (1.0.4)\n", - "Requirement already satisfied: h11<0.15,>=0.13 in /home/vscode/.local/lib/python3.10/site-packages (from httpcore==1.*->httpx<1,>=0.23.0->openai>=1.3->pyautogen>=0.2.3) (0.14.0)\n", - "\n", - "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.0.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.0\u001b[0m\n", - "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n", - "Note: you may need to restart the kernel to use updated packages.\n", - "Defaulting to user installation because normal site-packages is not writeable\n", - "Requirement already satisfied: scipy in /home/vscode/.local/lib/python3.10/site-packages (1.12.0)\n", - "Requirement already satisfied: numpy<1.29.0,>=1.22.4 in /home/vscode/.local/lib/python3.10/site-packages (from scipy) (1.26.4)\n", - "\n", - "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.0.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.0\u001b[0m\n", - "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n", - "Note: you may need to restart the kernel to use updated packages.\n", - "Defaulting to user installation because normal site-packages is not writeable\n", - "Requirement already satisfied: matplotlib in /home/vscode/.local/lib/python3.10/site-packages (3.8.3)\n", - "Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/site-packages (from matplotlib) (24.0)\n", - "Requirement already satisfied: pyparsing>=2.3.1 in /home/vscode/.local/lib/python3.10/site-packages (from matplotlib) (3.1.2)\n", - "Requirement already satisfied: contourpy>=1.0.1 in /home/vscode/.local/lib/python3.10/site-packages (from matplotlib) (1.2.0)\n", - "Requirement already satisfied: fonttools>=4.22.0 in /home/vscode/.local/lib/python3.10/site-packages (from matplotlib) (4.50.0)\n", - "Requirement already satisfied: python-dateutil>=2.7 in /home/vscode/.local/lib/python3.10/site-packages (from matplotlib) (2.9.0.post0)\n", - "Requirement already satisfied: cycler>=0.10 in /home/vscode/.local/lib/python3.10/site-packages (from matplotlib) (0.12.1)\n", - "Requirement already satisfied: pillow>=8 in /home/vscode/.local/lib/python3.10/site-packages (from matplotlib) (10.2.0)\n", - "Requirement already satisfied: numpy<2,>=1.21 in /home/vscode/.local/lib/python3.10/site-packages (from matplotlib) (1.26.4)\n", - "Requirement already satisfied: kiwisolver>=1.3.1 in /home/vscode/.local/lib/python3.10/site-packages 
(from matplotlib) (1.4.5)\n", - "Requirement already satisfied: six>=1.5 in /home/vscode/.local/lib/python3.10/site-packages (from python-dateutil>=2.7->matplotlib) (1.16.0)\n", - "\n", - "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.0.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.0\u001b[0m\n", - "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n", - "Note: you may need to restart the kernel to use updated packages.\n" - ] - } - ], + "outputs": [], "source": [ - "%pip install \"pyautogen>=0.2.3\" docker\n", + "%pip install \"autogen-agentchat~=0.2\" docker\n", "%pip install scipy\n", "%pip install matplotlib" ] diff --git a/notebook/autobuild_agent_library.ipynb b/notebook/autobuild_agent_library.ipynb index bde50355319f..886875436030 100644 --- a/notebook/autobuild_agent_library.ipynb +++ b/notebook/autobuild_agent_library.ipynb @@ -24,7 +24,7 @@ "source": [ "## Requirement\n", "\n", - "AutoBuild require `pyautogen[autobuild]`, which can be installed by the following command:" + "AutoBuild require `autogen-agentchat[autobuild]~=0.2`, which can be installed by the following command:" ] }, { @@ -36,7 +36,7 @@ }, "outputs": [], "source": [ - "%pip install pyautogen[autobuild]" + "%pip install autogen-agentchat[autobuild]~=0.2" ] }, { diff --git a/notebook/autobuild_basic.ipynb b/notebook/autobuild_basic.ipynb index d100563ac257..d26633f2beac 100644 --- a/notebook/autobuild_basic.ipynb +++ b/notebook/autobuild_basic.ipynb @@ -26,7 +26,7 @@ "source": [ "## Requirement\n", "\n", - "AutoBuild require `pyautogen[autobuild]`, which can be installed by the following command:" + "AutoBuild require `autogen-agentchat[autobuild]~=0.2`, which can be installed by the following command:" ] }, { @@ -38,7 +38,7 @@ }, "outputs": [], "source": [ - "%pip install pyautogen[autobuild]" + "%pip install autogen-agentchat[autobuild]~=0.2" ] }, { diff --git a/notebook/autogen_uniformed_api_calling.ipynb b/notebook/autogen_uniformed_api_calling.ipynb index 08f747e1722f..ef28b99630c6 100644 --- a/notebook/autogen_uniformed_api_calling.ipynb +++ b/notebook/autogen_uniformed_api_calling.ipynb @@ -35,7 +35,7 @@ "By default, AutoGen is installed with OpenAI support.\n", " \n", "```bash\n", - "pip install pyautogen[gemini,anthropic,mistral,together]\n", + "pip install autogen-agentchat[gemini,anthropic,mistral,together]~=0.2\n", "```\n", "\n", "\n", diff --git a/notebook/contributing.md b/notebook/contributing.md index fcafe3c71152..e21ef6392673 100644 --- a/notebook/contributing.md +++ b/notebook/contributing.md @@ -36,9 +36,9 @@ You don't need to explain in depth how to install AutoGen. Unless there are spec `````` ````{=mdx} :::info Requirements -Install `pyautogen`: +Install `autogen-agentchat`: ```bash -pip install pyautogen +pip install autogen-agentchat~=0.2 ``` For more information, please refer to the [installation guide](/docs/installation/). @@ -54,7 +54,7 @@ Or if extras are needed: Some extra dependencies are needed for this notebook, which can be installed via pip: ```bash -pip install pyautogen[retrievechat] flaml[automl] +pip install autogen-agentchat[retrievechat]~=0.2 flaml[automl] ``` For more information, please refer to the [installation guide](/docs/installation/). 
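One hunk above renames the multi-provider extras (`[gemini,anthropic,mistral,together]`). For orientation, here is a hedged sketch of how such providers are typically selected once the extras are installed; the `api_type` strings, model names, and keys are illustrative assumptions, not part of this patch:

```python
# Illustrative config list; api_type values, model names, and keys are placeholders to adapt.
config_list = [
    {"api_type": "anthropic", "model": "claude-3-5-sonnet-20240620", "api_key": "<anthropic-key>"},
    {"api_type": "mistral", "model": "mistral-large-latest", "api_key": "<mistral-key>"},
    {"api_type": "together", "model": "<together-model-name>", "api_key": "<together-key>"},
]
llm_config = {"config_list": config_list}  # passed to an agent's llm_config
```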
diff --git a/notebook/gpt_assistant_agent_function_call.ipynb b/notebook/gpt_assistant_agent_function_call.ipynb index 6febb89cc9b4..db14b2625034 100644 --- a/notebook/gpt_assistant_agent_function_call.ipynb +++ b/notebook/gpt_assistant_agent_function_call.ipynb @@ -22,56 +22,18 @@ }, "source": [ "## Requirements\n", - "AutoGen requires Python 3.8 or newer. For this notebook, please install `pyautogen`:" + "AutoGen requires Python 3.8 or newer. For this notebook, please Install `autogen-agentchat`:" ] }, { "cell_type": "code", - "execution_count": 1, + "execution_count": null, "metadata": { "id": "pWFw6-8lMleD" }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Requirement already satisfied: pyautogen in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (0.2.8)\n", - "Requirement already satisfied: openai>=1.3 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pyautogen) (1.6.1)\n", - "Requirement already satisfied: diskcache in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pyautogen) (5.6.3)\n", - "Requirement already satisfied: termcolor in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pyautogen) (2.4.0)\n", - "Requirement already satisfied: flaml in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pyautogen) (2.1.1)\n", - "Requirement already satisfied: python-dotenv in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pyautogen) (1.0.0)\n", - "Requirement already satisfied: tiktoken in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pyautogen) (0.5.2)\n", - "Requirement already satisfied: pydantic<3,>=1.10 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pyautogen) (2.5.3)\n", - "Requirement already satisfied: docker in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pyautogen) (7.0.0)\n", - "Requirement already satisfied: anyio<5,>=3.5.0 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from openai>=1.3->pyautogen) (4.2.0)\n", - "Requirement already satisfied: distro<2,>=1.7.0 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from openai>=1.3->pyautogen) (1.8.0)\n", - "Requirement already satisfied: httpx<1,>=0.23.0 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from openai>=1.3->pyautogen) (0.26.0)\n", - "Requirement already satisfied: sniffio in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from openai>=1.3->pyautogen) (1.3.0)\n", - "Requirement already satisfied: tqdm>4 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from openai>=1.3->pyautogen) (4.66.1)\n", - "Requirement already satisfied: typing-extensions<5,>=4.7 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from openai>=1.3->pyautogen) (4.9.0)\n", - "Requirement already satisfied: annotated-types>=0.4.0 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pydantic<3,>=1.10->pyautogen) (0.6.0)\n", - "Requirement already satisfied: pydantic-core==2.14.6 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pydantic<3,>=1.10->pyautogen) (2.14.6)\n", - "Requirement already satisfied: packaging>=14.0 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from docker->pyautogen) 
(23.2)\n", - "Requirement already satisfied: requests>=2.26.0 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from docker->pyautogen) (2.31.0)\n", - "Requirement already satisfied: urllib3>=1.26.0 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from docker->pyautogen) (2.1.0)\n", - "Requirement already satisfied: NumPy>=1.17.0rc1 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from flaml->pyautogen) (1.26.2)\n", - "Requirement already satisfied: regex>=2022.1.18 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from tiktoken->pyautogen) (2023.10.3)\n", - "Requirement already satisfied: idna>=2.8 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from anyio<5,>=3.5.0->openai>=1.3->pyautogen) (3.6)\n", - "Requirement already satisfied: certifi in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from httpx<1,>=0.23.0->openai>=1.3->pyautogen) (2023.11.17)\n", - "Requirement already satisfied: httpcore==1.* in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from httpx<1,>=0.23.0->openai>=1.3->pyautogen) (1.0.2)\n", - "Requirement already satisfied: h11<0.15,>=0.13 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from httpcore==1.*->httpx<1,>=0.23.0->openai>=1.3->pyautogen) (0.14.0)\n", - "Requirement already satisfied: charset-normalizer<4,>=2 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from requests>=2.26.0->docker->pyautogen) (3.3.2)\n", - "\n", - "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.3.2\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.0\u001b[0m\n", - "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n", - "Note: you may need to restart the kernel to use updated packages.\n" - ] - } - ], + "outputs": [], "source": [ - "pip install pyautogen" + "pip install autogen-agentchat~=0.2" ] }, { diff --git a/notebook/oai_chatgpt_gpt4.ipynb b/notebook/oai_chatgpt_gpt4.ipynb index 280b7145e931..1994d146c064 100644 --- a/notebook/oai_chatgpt_gpt4.ipynb +++ b/notebook/oai_chatgpt_gpt4.ipynb @@ -34,7 +34,7 @@ "\n", "AutoGen requires `Python>=3.8`. To run this notebook example, please install with the [blendsearch] option:\n", "```bash\n", - "pip install \"pyautogen[blendsearch]\"\n", + "pip install \"pyautogen[blendsearch]<0.2\"\n", "```" ] }, diff --git a/notebook/oai_completion.ipynb b/notebook/oai_completion.ipynb index ac1b3f9c95f1..451a161bbad9 100644 --- a/notebook/oai_completion.ipynb +++ b/notebook/oai_completion.ipynb @@ -32,7 +32,7 @@ "\n", "AutoGen requires `Python>=3.8`. 
To run this notebook example, please install with the [blendsearch] option:\n", "```bash\n", - "pip install pyautogen[blendsearch]\n", + "pip install pyautogen[blendsearch]~=0.1\n", "```" ] }, diff --git a/samples/apps/auto-anny/requirements.txt b/samples/apps/auto-anny/requirements.txt index 13a0ba19c648..e1b27e81bc29 100644 --- a/samples/apps/auto-anny/requirements.txt +++ b/samples/apps/auto-anny/requirements.txt @@ -1,2 +1,2 @@ discord.py -pyautogen +autogen-agentchat~=0.2 diff --git a/samples/apps/promptflow-autogen/requirements.txt b/samples/apps/promptflow-autogen/requirements.txt index 6fe9807785f1..c4b6e9681fdd 100644 --- a/samples/apps/promptflow-autogen/requirements.txt +++ b/samples/apps/promptflow-autogen/requirements.txt @@ -1,7 +1,5 @@ promptflow==1.8.0 -pyautogen==0.2.23 -pyautogen[graph] -pyautogen[redis] +autogen-agentchat[graph,redis]~=0.2 redis semantic-kernel beautifulsoup4 diff --git a/samples/tools/autogenbench/README.md b/samples/tools/autogenbench/README.md index 9c747c9896db..85ee3ace9d0a 100644 --- a/samples/tools/autogenbench/README.md +++ b/samples/tools/autogenbench/README.md @@ -162,7 +162,7 @@ This folder holds the results for the ``two_agent_stocks`` task of the ``default Within each folder, you will find the following files: -- *timestamp.txt*: records the date and time of the run, along with the version of the pyautogen library installed +- *timestamp.txt*: records the date and time of the run, along with the version of the autogen-agentchat library installed - *console_log.txt*: all console output produced by Docker when running AutoGen. Read this like you would a regular console. - *[agent]_messages.json*: for each Agent, a log of their messages dictionaries - *./coding*: A directory containing all code written by AutoGen, and all artifacts produced by that code. 
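For reference, the `timestamp.txt` described above is written by `testbed_utils.py`, patched below. A minimal sketch of that logging, assuming the renamed distribution still installs the `autogen` import — as the retained `autogen.__version__` call in the next hunk suggests:

```python
# Sketch of the timestamp/version logging; assumes autogen-agentchat~=0.2 still exposes `import autogen`.
from datetime import datetime

import autogen

with open("timestamp.txt", "wt") as f:
    f.write("Timestamp: " + datetime.now().isoformat() + "\n")
    f.write("autogen-agentchat version: " + str(autogen.__version__) + "\n")
```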
diff --git a/samples/tools/autogenbench/autogenbench/res/Dockerfile b/samples/tools/autogenbench/autogenbench/res/Dockerfile index 5c3f5f40968c..0382a00fbb12 100644 --- a/samples/tools/autogenbench/autogenbench/res/Dockerfile +++ b/samples/tools/autogenbench/autogenbench/res/Dockerfile @@ -9,8 +9,8 @@ RUN pip install --upgrade pip RUN ln -snf /usr/share/zoneinfo/US/Pacific /etc/localtime && echo "US/Pacific" > /etc/timezone # Pre-load autogen dependencies, but not autogen itself since we'll often want to install the latest from source -RUN pip install pyautogen[teachable,lmm,graphs,websurfer] -RUN pip uninstall --yes pyautogen +RUN pip install autogen-agentchat[teachable,lmm,graphs,websurfer]~=0.2 +RUN pip uninstall --yes autogen-agentchat~=0.2 # Pre-load popular packages as per https://learnpython.com/blog/most-popular-python-packages/ RUN pip install numpy pandas matplotlib seaborn scikit-learn requests urllib3 nltk pillow pytest diff --git a/samples/tools/autogenbench/autogenbench/template/testbed_utils.py b/samples/tools/autogenbench/autogenbench/template/testbed_utils.py index bce42a625b2a..37b1f69979bf 100644 --- a/samples/tools/autogenbench/autogenbench/template/testbed_utils.py +++ b/samples/tools/autogenbench/autogenbench/template/testbed_utils.py @@ -62,7 +62,7 @@ def init(): # Print some information about the run with open("timestamp.txt", "wt") as f: f.write("Timestamp: " + datetime.now().isoformat() + "\n") - f.write("pyautogen version: " + str(autogen.__version__) + "\n") + f.write("autogen-agentchat version: " + str(autogen.__version__) + "\n") # Start logging if AUTOGEN_VERSION < packaging.version.parse("0.2.0b1"): diff --git a/samples/tools/autogenbench/pyproject.toml b/samples/tools/autogenbench/pyproject.toml index ef1a2fe80dfb..7c730c7c113b 100644 --- a/samples/tools/autogenbench/pyproject.toml +++ b/samples/tools/autogenbench/pyproject.toml @@ -18,7 +18,7 @@ classifiers = [ ] dependencies = [ - "pyautogen", + "autogen-agentchat~=0.2", "docker", "huggingface_hub", "tabulate", diff --git a/setup.py b/setup.py index 65f5cbe8f029..fe55a4a6c2ed 100644 --- a/setup.py +++ b/setup.py @@ -110,10 +110,10 @@ } setuptools.setup( - name="pyautogen", + name="autogen-agentchat", version=__version__, author="AutoGen", - author_email="autogen-contact@service.microsoft.com", + author_email="autogen@microsoft.com", description="Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework", long_description=long_description, long_description_content_type="text/markdown", diff --git a/website/blog/2023-10-18-RetrieveChat/index.mdx b/website/blog/2023-10-18-RetrieveChat/index.mdx index d5c78148e446..4bad582eb5df 100644 --- a/website/blog/2023-10-18-RetrieveChat/index.mdx +++ b/website/blog/2023-10-18-RetrieveChat/index.mdx @@ -52,9 +52,9 @@ The conversation terminates if no more documents are available for the context. ## Basic Usage of RAG Agents 0. Install dependencies -Please install pyautogen with the [retrievechat] option before using RAG agents. +Please install autogen-agentchat with the [retrievechat] option before using RAG agents. 
```bash -pip install "pyautogen[retrievechat]" +pip install "autogen-agentchat[retrievechat]~=0.2" ``` *You'll need to install `chromadb<=0.5.0` if you see issue like [#3551](https://github.com/microsoft/autogen/issues/3551).* diff --git a/website/blog/2023-10-26-TeachableAgent/index.mdx b/website/blog/2023-10-26-TeachableAgent/index.mdx index ca3992489548..f097acc73726 100644 --- a/website/blog/2023-10-26-TeachableAgent/index.mdx +++ b/website/blog/2023-10-26-TeachableAgent/index.mdx @@ -36,10 +36,10 @@ AutoGen contains four code examples that use `Teachability`. 1. Install dependencies -Please install pyautogen with the [teachable] option before using `Teachability`. +Please install autogen-agentchat~=0.2 with the [teachable] option before using `Teachability`. ```bash -pip install "pyautogen[teachable]" +pip install "autogen-agentchat[teachable]~=0.2" ``` 2. Import agents diff --git a/website/blog/2023-11-06-LMM-Agent/index.mdx b/website/blog/2023-11-06-LMM-Agent/index.mdx index 452079f1c451..0ab92c4dbb90 100644 --- a/website/blog/2023-11-06-LMM-Agent/index.mdx +++ b/website/blog/2023-11-06-LMM-Agent/index.mdx @@ -25,7 +25,7 @@ GPT-4V represents the forefront in image comprehension, while LLaVA is an effici Incorporate the `lmm` feature during AutoGen installation: ```bash -pip install "pyautogen[lmm]" +pip install "autogen-agentchat[lmm]~=0.2" ``` Subsequently, import the **Multimodal Conversable Agent** or **LLaVA Agent** from AutoGen: diff --git a/website/blog/2023-11-13-OAI-assistants/index.mdx b/website/blog/2023-11-13-OAI-assistants/index.mdx index 07216a25969c..2fc9bee359b8 100644 --- a/website/blog/2023-11-13-OAI-assistants/index.mdx +++ b/website/blog/2023-11-13-OAI-assistants/index.mdx @@ -30,7 +30,7 @@ This integration shows great potential and synergy, and we plan to continue enha ## Installation ```bash -pip install pyautogen==0.2.0b5 +pip install autogen-agentchat~=0.2 ``` ## Basic Example diff --git a/website/blog/2023-11-26-Agent-AutoBuild/index.mdx b/website/blog/2023-11-26-Agent-AutoBuild/index.mdx index be71662ab6ed..a6d0025699d3 100644 --- a/website/blog/2023-11-26-Agent-AutoBuild/index.mdx +++ b/website/blog/2023-11-26-Agent-AutoBuild/index.mdx @@ -29,7 +29,7 @@ up an endpoint server automatically without any user participation. ## Installation - AutoGen: ```bash -pip install pyautogen[autobuild] +pip install autogen-agentchat[autobuild]~=0.2 ``` - (Optional: if you want to use open-source LLMs) vLLM and FastChat ```bash diff --git a/website/blog/2024-03-03-AutoGen-Update/index.mdx b/website/blog/2024-03-03-AutoGen-Update/index.mdx index da036094c6e7..7458ec74e053 100644 --- a/website/blog/2024-03-03-AutoGen-Update/index.mdx +++ b/website/blog/2024-03-03-AutoGen-Update/index.mdx @@ -148,7 +148,7 @@ These tools have been used for improving the AutoGen library as well as applicat We are making rapid progress in further improving the interface to make it even easier to build agent applications. For example: - [AutoBuild](/blog/2023/11/26/Agent-AutoBuild). AutoBuild is an ongoing area of research to automatically create or select a group of agents for a given task and objective. If successful, it will greatly reduce the effort from users or developers when using the multi-agent technology. It also paves the way for agentic decomposition to handle complex tasks. 
It is available as an experimental feature and demonstrated in two modes: free-form [creation](https://github.com/microsoft/autogen/blob/main/notebook/autobuild_basic.ipynb) and [selection](https://github.com/microsoft/autogen/blob/main/notebook/autobuild_agent_library.ipynb) from a library. -- [AutoGen Studio](/blog/2023/12/01/AutoGenStudio). AutoGen Studio is a no-code UI for fast experimentation with the multi-agent conversations. It lowers the barrier of entrance to the AutoGen technology. Models, agents, and workflows can all be configured without writing code. And chatting with multiple agents in a playground is immediately available after the configuration. Although only a subset of `pyautogen` features are available in this sample app, it demonstrates a promising experience. It has generated tremendous excitement in the community. +- [AutoGen Studio](/blog/2023/12/01/AutoGenStudio). AutoGen Studio is a no-code UI for fast experimentation with the multi-agent conversations. It lowers the barrier of entrance to the AutoGen technology. Models, agents, and workflows can all be configured without writing code. And chatting with multiple agents in a playground is immediately available after the configuration. Although only a subset of `autogen-agentchat` features are available in this sample app, it demonstrates a promising experience. It has generated tremendous excitement in the community. - Conversation Programming+. The [AutoGen paper](https://arxiv.org/abs/2308.08155) introduced a key concept of _Conversation Programming_, which can be used to program diverse conversation patterns such as 1-1 chat, group chat, hierarchical chat, nested chat etc. While we offered dynamic group chat as an example of high-level orchestration, it made other patterns relatively less discoverable. Therefore, we have added more convenient conversation programming features which enables easier definition of other types of complex workflow, such as [finite state machine based group chat](/blog/2024/02/11/FSM-GroupChat), [sequential chats](/docs/notebooks/agentchats_sequential_chats), and [nested chats](/docs/notebooks/agentchat_nestedchat). Many users have found them useful in implementing specific patterns, which have been always possible but more obvious with the added features. I will write another blog post for a deep dive. ### Learning/Optimization/Teaching diff --git a/website/blog/2024-06-24-AltModels-Classes/index.mdx b/website/blog/2024-06-24-AltModels-Classes/index.mdx index 1f01fb9402a4..7001e74e83f0 100644 --- a/website/blog/2024-06-24-AltModels-Classes/index.mdx +++ b/website/blog/2024-06-24-AltModels-Classes/index.mdx @@ -72,10 +72,10 @@ Now it's time to try them out. Install the appropriate client based on the model you wish to use. 
```sh -pip install pyautogen["mistral"] # for Mistral AI client -pip install pyautogen["anthropic"] # for Anthropic client -pip install pyautogen["together"] # for Together.AI client -pip install pyautogen["groq"] # for Groq client +pip install autogen-agentchat["mistral"]~=0.2 # for Mistral AI client +pip install autogen-agentchat["anthropic"]~=0.2 # for Anthropic client +pip install autogen-agentchat["together"]~=0.2 # for Together.AI client +pip install autogen-agentchat["groq"]~=0.2 # for Groq client ``` ### Configuration Setup diff --git a/website/docs/FAQ.mdx b/website/docs/FAQ.mdx index a367a9b20635..14be83c7baba 100644 --- a/website/docs/FAQ.mdx +++ b/website/docs/FAQ.mdx @@ -4,12 +4,12 @@ import TOCInline from "@theme/TOCInline"; -## Install the correct package - `pyautogen` +## Install the correct package - `autogen-agentchat` -The name of Autogen package at PyPI is `pyautogen`: +The name of Autogen package at PyPI is `autogen-agentchat`: ``` -pip install pyautogen +pip install autogen-agentchat~=0.2 ``` Typical errors that you might face when using the wrong package are `AttributeError: module 'autogen' has no attribute 'Agent'`, `AttributeError: module 'autogen' has no attribute 'config_list_from_json'` etc. diff --git a/website/docs/Getting-Started.mdx b/website/docs/Getting-Started.mdx index 3e162a098327..3d8639d11fb4 100644 --- a/website/docs/Getting-Started.mdx +++ b/website/docs/Getting-Started.mdx @@ -35,7 +35,7 @@ Microsoft, Penn State University, and University of Washington. ### Quickstart ```sh -pip install pyautogen +pip install autogen-agentchat~=0.2 ``` diff --git a/website/docs/ecosystem/portkey.md b/website/docs/ecosystem/portkey.md index 4825cf78d9a7..a9f67d3871fc 100644 --- a/website/docs/ecosystem/portkey.md +++ b/website/docs/ecosystem/portkey.md @@ -13,7 +13,7 @@ Portkey adds 4 core production capabilities to any AutoGen agent: 1. **Install Required Packages:** 2. ```bash - pip install -qU pyautogen portkey-ai + pip install -qU autogen-agentchat~=0.2 portkey-ai ``` **Configure AutoGen with Portkey:** diff --git a/website/docs/installation/Installation.mdx b/website/docs/installation/Installation.mdx index af3ed6620135..8c41f09cfde1 100644 --- a/website/docs/installation/Installation.mdx +++ b/website/docs/installation/Installation.mdx @@ -13,8 +13,8 @@ When installing AutoGen locally, we recommend using a virtual environment for th Create and activate: ```bash - python3 -m venv pyautogen - source pyautogen/bin/activate + python3 -m venv .venv + source .venv/bin/activate ``` To deactivate later, run: @@ -32,8 +32,8 @@ When installing AutoGen locally, we recommend using a virtual environment for th Create and activate: ```bash - conda create -n pyautogen python=3.10 - conda activate pyautogen + conda create -n autogen python=3.10 + conda activate autogen ``` To deactivate later, run: @@ -52,7 +52,7 @@ When installing AutoGen locally, we recommend using a virtual environment for th poetry init poetry shell - poetry add pyautogen + poetry add autogen-agentchat~=0.2 ``` To deactivate later, run: @@ -69,15 +69,9 @@ When installing AutoGen locally, we recommend using a virtual environment for th AutoGen requires **Python version >= 3.8, < 3.13**. It can be installed from pip: ```bash -pip install pyautogen +pip install autogen-agentchat~=0.2 ``` -:::info - -`pyautogen<0.2` required `openai<1`. Starting from pyautogen v0.2, `openai>=1` is required. - -::: - ## Install Docker for Code Execution We recommend using Docker for code execution. 
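To make the renamed quickstart concrete, here is a minimal two-agent sketch under the 0.2-style API. The names `config_list_from_json`, `AssistantAgent`, and `UserProxyAgent` are assumed from the existing `autogen` module; treat this as illustrative rather than the project's canonical example:

```python
# Illustrative sketch; assumes `pip install autogen-agentchat~=0.2` and an OAI_CONFIG_LIST file with valid keys.
import autogen

config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")

assistant = autogen.AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = autogen.UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=1,
    code_execution_config=False,  # enable Docker-based execution for real workloads, per the recommendation above
)

user_proxy.initiate_chat(assistant, message="Write a short haiku about multi-agent chat.")
```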
diff --git a/website/docs/installation/Optional-Dependencies.md b/website/docs/installation/Optional-Dependencies.md index 7d17ce50e37d..3f8164a667e9 100644 --- a/website/docs/installation/Optional-Dependencies.md +++ b/website/docs/installation/Optional-Dependencies.md @@ -6,7 +6,7 @@ To use LLM caching with Redis, you need to install the Python package with the option `redis`: ```bash -pip install "pyautogen[redis]" +pip install "autogen-agentchat[redis]~=0.2" ``` See [LLM Caching](/docs/topics/llm-caching) for details. @@ -17,7 +17,7 @@ To use the IPython code executor, you need to install the `jupyter-client` and `ipykernel` packages: ```bash -pip install "pyautogen[ipython]" +pip install "autogen-agentchat[ipython]~=0.2" ``` To use the IPython code executor: @@ -44,21 +44,21 @@ Example notebooks: ## retrievechat -`pyautogen` supports retrieval-augmented generation tasks such as question answering and code generation with RAG agents. Please install with the [retrievechat] option to use it with ChromaDB. +AutoGen 0.2 supports retrieval-augmented generation tasks such as question answering and code generation with RAG agents. Please install with the [retrievechat] option to use it with ChromaDB. ```bash -pip install "pyautogen[retrievechat]" +pip install "autogen-agentchat[retrievechat]" ``` *You'll need to install `chromadb<=0.5.0` if you see issue like [#3551](https://github.com/microsoft/autogen/issues/3551).* -Alternatively `pyautogen` also supports PGVector and Qdrant which can be installed in place of ChromaDB, or alongside it. +Alternatively AutoGen 0.2 also supports PGVector and Qdrant which can be installed in place of ChromaDB, or alongside it. ```bash -pip install "pyautogen[retrievechat-pgvector]" +pip install "autogen-agentchat[retrievechat-pgvector]~=0.2" ``` ```bash -pip install "pyautogen[retrievechat-qdrant]" +pip install "autogen-agentchat[retrievechat-qdrant]~=0.2" ``` RetrieveChat can handle various types of documents. By default, it can process @@ -83,7 +83,7 @@ Example notebooks: To use Teachability, please install AutoGen with the [teachable] option. ```bash -pip install "pyautogen[teachable]" +pip install "autogen-agentchat[teachable]~=0.2" ``` Example notebook: [Chatting with a teachable agent](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_teachability.ipynb) @@ -93,7 +93,7 @@ Example notebook: [Chatting with a teachable agent](https://github.com/microsoft We offered Multimodal Conversable Agent and LLaVA Agent. Please install with the [lmm] option to use it. ```bash -pip install "pyautogen[lmm]" +pip install "autogen-agentchat[lmm]~=0.2" ``` Example notebooks: @@ -117,7 +117,7 @@ Example notebooks: To use a graph in `GroupChat`, particularly for graph visualization, please install AutoGen with the [graph] option. ```bash -pip install "pyautogen[graph]" +pip install "autogen-agentchat[graph]~=0.2" ``` Example notebook: [Finite State Machine graphs to set speaker transition constraints](https://microsoft.github.io/autogen/docs/notebooks/agentchat_groupchat_finite_state_machine) @@ -127,5 +127,5 @@ Example notebook: [Finite State Machine graphs to set speaker transition constra AutoGen includes support for handling long textual contexts by leveraging the LLMLingua library for text compression. 
To enable this functionality, please install AutoGen with the `[long-context]` option: ```bash -pip install "pyautogen[long-context]" +pip install "autogen-agentchat[long-context]~=0.2" ``` diff --git a/website/docs/topics/code-execution/custom-executor.ipynb b/website/docs/topics/code-execution/custom-executor.ipynb index c6ee4c160189..41d3b59b8fdc 100644 --- a/website/docs/topics/code-execution/custom-executor.ipynb +++ b/website/docs/topics/code-execution/custom-executor.ipynb @@ -18,7 +18,7 @@ "metadata": {}, "outputs": [], "source": [ - "! pip -qqq install pyautogen matplotlib yfinance" + "! pip -qqq install autogen-agentchat~=0.2 matplotlib yfinance" ] }, { diff --git a/website/docs/topics/code-execution/jupyter-code-executor.ipynb b/website/docs/topics/code-execution/jupyter-code-executor.ipynb index 09f35f1fdc98..9ee72bccc0a1 100644 --- a/website/docs/topics/code-execution/jupyter-code-executor.ipynb +++ b/website/docs/topics/code-execution/jupyter-code-executor.ipynb @@ -15,7 +15,7 @@ "In order to use Jupyter based code execution some extra dependencies are required. These can be installed with the extra `jupyter-executor`:\n", "\n", "```bash\n", - "pip install 'pyautogen[jupyter-executor]'\n", + "pip install 'autogen-agentchat[jupyter-executor]~=0.2'\n", "```\n", "\n", "## Jupyter Server\n", diff --git a/website/docs/topics/handling_long_contexts/compressing_text_w_llmligua.md b/website/docs/topics/handling_long_contexts/compressing_text_w_llmligua.md index e251786f5550..965bbfbd0105 100644 --- a/website/docs/topics/handling_long_contexts/compressing_text_w_llmligua.md +++ b/website/docs/topics/handling_long_contexts/compressing_text_w_llmligua.md @@ -5,10 +5,10 @@ Text compression is crucial for optimizing interactions with LLMs, especially wh This guide introduces LLMLingua's integration with AutoGen, demonstrating how to use this tool to compress text, thereby optimizing the usage of LLMs for various applications. :::info Requirements -Install `pyautogen[long-context]` and `PyMuPDF`: +Install `autogen-agentchat[long-context]~=0.2` and `PyMuPDF`: ```bash -pip install "pyautogen[long-context]" PyMuPDF +pip install "autogen-agentchat[long-context]~=0.2" PyMuPDF ``` For more information, please refer to the [installation guide](/docs/installation/). diff --git a/website/docs/topics/handling_long_contexts/intro_to_transform_messages.md b/website/docs/topics/handling_long_contexts/intro_to_transform_messages.md index 52fea15d01e5..fc854f8d8341 100644 --- a/website/docs/topics/handling_long_contexts/intro_to_transform_messages.md +++ b/website/docs/topics/handling_long_contexts/intro_to_transform_messages.md @@ -13,10 +13,10 @@ Why do we need to handle long contexts? The problem arises from several constrai The `TransformMessages` capability is designed to modify incoming messages before they are processed by the LLM agent. This can include limiting the number of messages, truncating messages to meet token limits, and more. :::info Requirements -Install `pyautogen`: +Install `autogen-agentchat`: ```bash -pip install pyautogen +pip install autogen-agentchat~=0.2 ``` For more information, please refer to the [installation guide](/docs/installation/). 
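Because the transform-messages guide above only changes its install line, a brief sketch of the capability it documents may help orient reviewers; the module and class names below are recalled from the 0.2 contrib capabilities and should be verified against the installed release:

```python
# Sketch, assuming autogen-agentchat~=0.2; verify class names against your installed 0.2.x version.
import autogen
from autogen.agentchat.contrib.capabilities import transform_messages, transforms

context_handling = transform_messages.TransformMessages(
    transforms=[
        transforms.MessageHistoryLimiter(max_messages=10),           # keep only the most recent messages
        transforms.MessageTokenLimiter(max_tokens_per_message=500),  # truncate overly long messages
    ]
)

assistant = autogen.AssistantAgent("assistant", llm_config=False)
context_handling.add_to_agent(assistant)  # incoming messages are transformed before each reply
```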
diff --git a/website/docs/topics/non-openai-models/cloud-anthropic.ipynb b/website/docs/topics/non-openai-models/cloud-anthropic.ipynb index a6c87b6a5ca5..ba4c831232c0 100644 --- a/website/docs/topics/non-openai-models/cloud-anthropic.ipynb +++ b/website/docs/topics/non-openai-models/cloud-anthropic.ipynb @@ -21,7 +21,7 @@ "Additionally, this client class provides support for function/tool calling and will track token usage and cost correctly as per Anthropic's API costs (as of June 2024).\n", "\n", "## Requirements\n", - "To use Anthropic Claude with AutoGen, first you need to install the `pyautogen[anthropic]` package.\n", + "To use Anthropic Claude with AutoGen, first you need to install the `autogen-agentchat[anthropic]` package.\n", "\n", "To try out the function call feature of Claude model, you need to install `anthropic>=0.23.1`.\n" ] @@ -32,7 +32,7 @@ "metadata": {}, "outputs": [], "source": [ - "!pip install pyautogen[\"anthropic\"]" + "!pip install autogen-agentchat[\"anthropic\"]~=0.2" ] }, { diff --git a/website/docs/topics/non-openai-models/cloud-bedrock.ipynb b/website/docs/topics/non-openai-models/cloud-bedrock.ipynb index 71c1e2e7ffe3..422598dd2fe1 100644 --- a/website/docs/topics/non-openai-models/cloud-bedrock.ipynb +++ b/website/docs/topics/non-openai-models/cloud-bedrock.ipynb @@ -25,7 +25,7 @@ "It does not, yet, support image generation ([contribute](https://microsoft.github.io/autogen/docs/contributor-guide/contributing/)).\n", "\n", "## Requirements\n", - "To use Amazon Bedrock with AutoGen, first you need to install the `pyautogen[bedrock]` package.\n", + "To use Amazon Bedrock with AutoGen, first you need to install the `autogen-agentchat[bedrock]` package.\n", "\n", "## Pricing\n", "\n", @@ -48,7 +48,7 @@ "outputs": [], "source": [ "# If you need to install AutoGen with Amazon Bedrock\n", - "!pip install pyautogen[\"bedrock\"]" + "!pip install autogen-agentchat[\"bedrock\"]~=0.2" ] }, { diff --git a/website/docs/topics/non-openai-models/cloud-cerebras.ipynb b/website/docs/topics/non-openai-models/cloud-cerebras.ipynb index a8e1d3940f4b..e0b9bbaf2d5e 100644 --- a/website/docs/topics/non-openai-models/cloud-cerebras.ipynb +++ b/website/docs/topics/non-openai-models/cloud-cerebras.ipynb @@ -18,7 +18,7 @@ "metadata": {}, "source": [ "# Requirements\n", - "To use Cerebras with AutoGen, install the `pyautogen[cerebras]` package." + "To use Cerebras with AutoGen, install the `autogen-agentchat[cerebras]` package." 
] }, { @@ -27,7 +27,7 @@ "metadata": {}, "outputs": [], "source": [ - "!pip install pyautogen[\"cerebras\"]" + "!pip install autogen-agentchat[\"cerebras\"]~=0.2" ] }, { diff --git a/website/docs/topics/non-openai-models/cloud-cohere.ipynb b/website/docs/topics/non-openai-models/cloud-cohere.ipynb index 73dcc54a75ed..defddf983c3b 100644 --- a/website/docs/topics/non-openai-models/cloud-cohere.ipynb +++ b/website/docs/topics/non-openai-models/cloud-cohere.ipynb @@ -25,10 +25,10 @@ "\n", "## Getting started\n", "\n", - "First you need to install the `pyautogen` package to use AutoGen with the Cohere API library.\n", + "First you need to install the `autogen-agentchat~=0.2` package to use AutoGen with the Cohere API library.\n", "\n", "``` bash\n", - "pip install pyautogen[cohere]\n", + "pip install autogen-agentchat[cohere]~=0.2\n", "```" ] }, diff --git a/website/docs/topics/non-openai-models/cloud-gemini.ipynb b/website/docs/topics/non-openai-models/cloud-gemini.ipynb index a227582c592c..0a36dd62cf91 100644 --- a/website/docs/topics/non-openai-models/cloud-gemini.ipynb +++ b/website/docs/topics/non-openai-models/cloud-gemini.ipynb @@ -11,7 +11,7 @@ "Install AutoGen with Gemini features:\n", "\n", "```bash\n", - "pip install pyautogen[gemini]\n", + "pip install autogen-agentchat[gemini]~=0.2\n", "```\n", "\n", "## Dependencies of This Notebook\n", @@ -19,7 +19,7 @@ "In this notebook, we will explore how to use Gemini in AutoGen alongside other tools. Install the necessary dependencies with the following command:\n", "\n", "```bash\n", - "pip install pyautogen[gemini,retrievechat,lmm]\n", + "pip install autogen-agentchat[gemini,retrievechat,lmm]~=0.2\n", "```\n", "\n", "## Features\n", diff --git a/website/docs/topics/non-openai-models/cloud-gemini_vertexai.ipynb b/website/docs/topics/non-openai-models/cloud-gemini_vertexai.ipynb index 637d340dc37f..3456a803f488 100644 --- a/website/docs/topics/non-openai-models/cloud-gemini_vertexai.ipynb +++ b/website/docs/topics/non-openai-models/cloud-gemini_vertexai.ipynb @@ -16,7 +16,7 @@ "\n", "Install AutoGen with Gemini features:\n", "```bash\n", - "pip install pyautogen[gemini]\n", + "pip install autogen-agentchat[gemini]~=0.2\n", "```\n", "\n", "### Install other Dependencies of this Notebook\n", diff --git a/website/docs/topics/non-openai-models/cloud-mistralai.ipynb b/website/docs/topics/non-openai-models/cloud-mistralai.ipynb index 1228f96db4e5..9babddf601f3 100644 --- a/website/docs/topics/non-openai-models/cloud-mistralai.ipynb +++ b/website/docs/topics/non-openai-models/cloud-mistralai.ipynb @@ -25,10 +25,10 @@ "\n", "## Getting started\n", "\n", - "First you need to install the `pyautogen` package to use AutoGen with the Mistral API library.\n", + "First you need to install the `autogen-agentchat~=0.2` package to use AutoGen with the Mistral API library.\n", "\n", "``` bash\n", - "pip install pyautogen[mistral]\n", + "pip install autogen-agentchat[mistral]~=0.2\n", "```" ] }, diff --git a/website/docs/topics/non-openai-models/cloud-togetherai.ipynb b/website/docs/topics/non-openai-models/cloud-togetherai.ipynb index eccc372ce2e9..6ec9f52bd7fd 100644 --- a/website/docs/topics/non-openai-models/cloud-togetherai.ipynb +++ b/website/docs/topics/non-openai-models/cloud-togetherai.ipynb @@ -23,10 +23,10 @@ "\n", "## Getting started\n", "\n", - "First, you need to install the `pyautogen` package to use AutoGen with the Together.AI API library.\n", + "First, you need to install the `autogen-agentchat~=0.2` package to use AutoGen with the 
Together.AI API library.\n", "\n", "``` bash\n", - "pip install pyautogen[together]\n", + "pip install autogen-agentchat[together]~=0.2\n", "```" ] }, diff --git a/website/docs/topics/prompting-and-reasoning/react.ipynb b/website/docs/topics/prompting-and-reasoning/react.ipynb index 08f309133488..7663ebc156ff 100644 --- a/website/docs/topics/prompting-and-reasoning/react.ipynb +++ b/website/docs/topics/prompting-and-reasoning/react.ipynb @@ -26,7 +26,7 @@ "metadata": {}, "outputs": [], "source": [ - "! pip install \"pyautogen>=0.2.18\" \"tavily-python\"" + "! pip install \"autogen-agentchat~=0.2\" \"tavily-python\"" ] }, { diff --git a/website/docs/topics/prompting-and-reasoning/reflection.ipynb b/website/docs/topics/prompting-and-reasoning/reflection.ipynb index 604389044725..bb6fa50b2890 100644 --- a/website/docs/topics/prompting-and-reasoning/reflection.ipynb +++ b/website/docs/topics/prompting-and-reasoning/reflection.ipynb @@ -18,7 +18,7 @@ "id": "5cff1938", "metadata": {}, "source": [ - "First make sure the `pyautogen` package is installed." + "First make sure the `autogen-agentchat` package is installed." ] }, { @@ -28,7 +28,7 @@ "metadata": {}, "outputs": [], "source": [ - "! pip install \"pyautogen>=0.2.18\"" + "! pip install \"autogen-agentchat~=0.2\"" ] }, { diff --git a/website/docs/topics/task_decomposition.ipynb b/website/docs/topics/task_decomposition.ipynb index e4c24c9004ef..ffb18ebdee0e 100644 --- a/website/docs/topics/task_decomposition.ipynb +++ b/website/docs/topics/task_decomposition.ipynb @@ -16,7 +16,7 @@ "id": "a6c436c9", "metadata": {}, "source": [ - "First make sure the `pyautogen` package is installed." + "First make sure the `autogen-agentchat` package is installed." ] }, { @@ -26,7 +26,7 @@ "metadata": {}, "outputs": [], "source": [ - "! pip install \"pyautogen>=0.2.18\"" + "! pip install \"autogen-agentchat~=0.2\"" ] }, { diff --git a/website/docs/tutorial/introduction.ipynb b/website/docs/tutorial/introduction.ipynb index 88df66b7270f..fd5a362d0351 100644 --- a/website/docs/tutorial/introduction.ipynb +++ b/website/docs/tutorial/introduction.ipynb @@ -38,7 +38,7 @@ "source": [ "## Installation\n", "\n", - "The simplest way to install AutoGen is from pip: `pip install pyautogen`. Find more options in [Installation](/docs/installation/)." + "The simplest way to install AutoGen is from pip: `pip install autogen-agentchat~=0.2`. Find more options in [Installation](/docs/installation/)." 
] }, { From 7431b66cf0f0cf7e7ec51fba121720802cbe77e7 Mon Sep 17 00:00:00 2001 From: Jack Gerrits Date: Tue, 1 Oct 2024 20:28:35 -0400 Subject: [PATCH 11/35] Use trusted publisher for pypi release (#3596) --- .github/workflows/python-package.yml | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-) diff --git a/.github/workflows/python-package.yml b/.github/workflows/python-package.yml index 94edf117de20..b2274e641989 100644 --- a/.github/workflows/python-package.yml +++ b/.github/workflows/python-package.yml @@ -7,6 +7,7 @@ name: python-package on: release: types: [published] + workflow_dispatch: permissions: {} # actions: read # checks: read @@ -19,7 +20,11 @@ jobs: os: ['ubuntu-latest'] python-version: [3.10] runs-on: ${{ matrix.os }} - environment: package + environment: + name: package + url: https://pypi.org/p/autogen-agentchat + permissions: + id-token: write steps: - name: Checkout uses: actions/checkout@v4 @@ -48,6 +53,5 @@ jobs: run: | pip install twine python setup.py sdist bdist_wheel - - name: Publish to PyPI - shell: pwsh - run: twine upload dist/* + - name: Publish package to PyPI + uses: pypa/gh-action-pypi-publish@release/v1 From 249a57bfa268e84bca6284dd1b9f08cf85b7815e Mon Sep 17 00:00:00 2001 From: Henry Tu Date: Wed, 2 Oct 2024 11:10:55 -0400 Subject: [PATCH 12/35] Fix typos in Cerebras doc (#3590) * Fix typos in Cerebras doc Fix typo in Cerebras documentation * FIx formatting --- website/docs/topics/non-openai-models/cloud-cerebras.ipynb | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/website/docs/topics/non-openai-models/cloud-cerebras.ipynb b/website/docs/topics/non-openai-models/cloud-cerebras.ipynb index e0b9bbaf2d5e..71cb9da446c5 100644 --- a/website/docs/topics/non-openai-models/cloud-cerebras.ipynb +++ b/website/docs/topics/non-openai-models/cloud-cerebras.ipynb @@ -17,7 +17,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Requirements\n", + "## Requirements\n", "To use Cerebras with AutoGen, install the `autogen-agentchat[cerebras]` package." ] }, @@ -38,7 +38,7 @@ "\n", "Cerebras provides a number of models to use. See the list of [models here](https://inference-docs.cerebras.ai/introduction).\n", "\n", - "See the sample `OAI_CONFIG_LIST` below showing how the Cerebras AI client class is used by specifying the `api_type` as `cerebras`.\n", + "See the sample `OAI_CONFIG_LIST` below showing how the Cerebras client class is used by specifying the `api_type` as `cerebras`.\n", "```python\n", "[\n", " {\n", @@ -165,7 +165,7 @@ "When you find an answer, verify the answer carefully. Include verifiable evidence in your response if possible.\n", "IMPORTANT: Wait for the user to execute your code and then you can reply with the word \"FINISH\". 
DO NOT OUTPUT \"FINISH\" after your code block.\"\"\"\n", "\n", - "# The AssistantAgent, using Cerebras AI's model, will take the coding request and return code\n", + "# The AssistantAgent, using Llama-3.1-70B on Cerebras Inference, will take the coding request and return code\n", "assistant_agent = AssistantAgent(\n", " name=\"Cerebras Assistant\",\n", " system_message=system_message,\n", From 9b7909489193e45ea691598b37bb52590d018ef6 Mon Sep 17 00:00:00 2001 From: Jack Gerrits Date: Wed, 2 Oct 2024 14:04:33 -0400 Subject: [PATCH 13/35] Add blog post announcing the new architecture preview (#3599) --- README.md | 4 +- dotnet/nuget/NUGET.md | 3 +- .../img/robots.jpeg | 3 + .../index.mdx | 103 ++++++++++++++++++ website/blog/authors.yml | 4 + website/docs/Getting-Started.mdx | 1 - .../docs/contributor-guide/contributing.md | 4 +- website/docs/contributor-guide/maintainer.md | 8 +- website/docs/tutorial/what-next.md | 3 +- website/docusaurus.config.js | 9 +- 10 files changed, 121 insertions(+), 21 deletions(-) create mode 100644 website/blog/2024-10-02-new-autogen-architecture-preview/img/robots.jpeg create mode 100644 website/blog/2024-10-02-new-autogen-architecture-preview/index.mdx diff --git a/README.md b/README.md index bc6242ba5999..e92083f0df32 100644 --- a/README.md +++ b/README.md @@ -21,7 +21,7 @@ AutoGen is an open-source programming framework for building AI agents and facil > [!NOTE] -> *Note for contributors and users*: [microsoft/autogen](https://aka.ms/autogen-gh) is the official repository of AutoGen project and it is under active development and maintenance under MIT license. We welcome contributions from developers and organizations worldwide. Our goal is to foster a collaborative and inclusive community where diverse perspectives and expertise can drive innovation and enhance the project's capabilities. We acknowledge the invaluable contributions from our existing contributors, as listed in [contributors.md](./CONTRIBUTORS.md). Whether you are an individual contributor or represent an organization, we invite you to join us in shaping the future of this project. For further information please also see [Microsoft open-source contributing guidelines](https://github.com/microsoft/autogen?tab=readme-ov-file#contributing). +> *Note for contributors and users*: [microsoft/autogen](https://aka.ms/autogen-gh) is the original repository of AutoGen project and it is under active development and maintenance under MIT license. We welcome contributions from developers and organizations worldwide. Our goal is to foster a collaborative and inclusive community where diverse perspectives and expertise can drive innovation and enhance the project's capabilities. We acknowledge the invaluable contributions from our existing contributors, as listed in [contributors.md](./CONTRIBUTORS.md). Whether you are an individual contributor or represent an organization, we invite you to join us in shaping the future of this project. For further information please also see [Microsoft open-source contributing guidelines](https://github.com/microsoft/autogen?tab=readme-ov-file#contributing). 
> > -_Maintainers (Sept 6th, 2024)_ @@ -242,8 +242,6 @@ In addition, you can find: - [Research](https://microsoft.github.io/autogen/docs/Research), [blogposts](https://microsoft.github.io/autogen/blog) around AutoGen, and [Transparency FAQs](https://github.com/microsoft/autogen/blob/main/TRANSPARENCY_FAQS.md) -- [Discord](https://aka.ms/autogen-dc) - - [Contributing guide](https://microsoft.github.io/autogen/docs/Contribute) - [Roadmap](https://github.com/orgs/microsoft/projects/989/views/3) diff --git a/dotnet/nuget/NUGET.md b/dotnet/nuget/NUGET.md index 34fdbca33ca7..cfa7c9801888 100644 --- a/dotnet/nuget/NUGET.md +++ b/dotnet/nuget/NUGET.md @@ -2,7 +2,6 @@ `AutoGen for .NET` is the official .NET SDK for [AutoGen](https://github.com/microsoft/autogen). It enables you to create LLM agents and construct multi-agent workflows with ease. It also provides integration with popular platforms like OpenAI, Semantic Kernel, and LM Studio. ### Gettings started -- Find documents and examples on our [document site](https://microsoft.github.io/autogen-for-net/) -- Join our [Discord channel](https://discord.gg/pAbnFJrkgZ) to get help and discuss with the community +- Find documents and examples on our [document site](https://microsoft.github.io/autogen-for-net/) - Report a bug or request a feature by creating a new issue in our [github repo](https://github.com/microsoft/autogen) - Consume the nightly build package from one of the [nightly build feeds](https://microsoft.github.io/autogen-for-net/articles/Installation.html#nighly-build) \ No newline at end of file diff --git a/website/blog/2024-10-02-new-autogen-architecture-preview/img/robots.jpeg b/website/blog/2024-10-02-new-autogen-architecture-preview/img/robots.jpeg new file mode 100644 index 000000000000..5ec1aba78444 --- /dev/null +++ b/website/blog/2024-10-02-new-autogen-architecture-preview/img/robots.jpeg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:daf14746d10ed67ab9de1f50c241827342e523fdb5bd08af8dc5a0fee5a8d37e +size 52503 diff --git a/website/blog/2024-10-02-new-autogen-architecture-preview/index.mdx b/website/blog/2024-10-02-new-autogen-architecture-preview/index.mdx new file mode 100644 index 000000000000..3b13ac514f8e --- /dev/null +++ b/website/blog/2024-10-02-new-autogen-architecture-preview/index.mdx @@ -0,0 +1,103 @@ +--- +title: New AutoGen Architecture Preview +authors: + - autogen-team +tags: [AutoGen] +--- + +# New AutoGen Architecture Preview + +
+ +![What are they doing?](img/robots.jpeg) + +
+ +One year ago, we launched AutoGen, a programming framework designed to build +agentic AI systems. The release of AutoGen sparked massive interest within the +developer community. As an early release, it provided us with a unique +opportunity to engage deeply with users, gather invaluable feedback, and learn +from a diverse range of use cases and contributions. By listening and engaging +with the community, we gained insights into what people were building or +attempting to build, how they were approaching the creation of agentic systems, +and where they were struggling. This experience was both humbling and +enlightening, revealing significant opportunities for improvement in our initial +design, especially for power users developing production-level applications with +AutoGen. + +Through engagements with the community, we learned many lessons: + +- Developers value modular and reusable agents. For example, our built-in agents + that could be directly plugged in or easily customized for specific use cases + were particularly popular. At the same time, there was a desire for more + customizability, such as integrating custom agents built using other + programming languages or frameworks. +- Chat-based agent-to-agent communication was an intuitive collaboration + pattern, making it easy for developers to get started and involve humans in + the loop. As developers began to employ agents in a wider range of scenarios, + they sought more flexibility in collaboration patterns. For instance, + developers wanted to build predictable, ordered workflows with agents, and to + integrate them with new user interfaces that are not chat-based. +- Although it was easy for developers to get started with AutoGen, debugging and + scaling agent teams applications proved more challenging. +- There were many opportunities for improving code quality. + +These learnings, along with many others from other agentic efforts across +Microsoft, prompted us to take a step back and lay the groundwork for a new +direction. A few months ago, we started dedicating time to distilling these +learnings into a roadmap for the future of AutoGen. This led to the development +of AutoGen 0.4, a complete redesign of the framework from the foundation up. +AutoGen 0.4 embraces the actor model of computing to support distributed, highly +scalable, event-driven agentic systems. This approach offers many advantages, +such as: + +- **Composability**. Systems designed in this way are more composable, allowing + developers to bring their own agents implemented in different frameworks or + programming languages and to build more powerful systems using complex agentic + patterns. +- **Flexibility**. It allows for the creation of both deterministic, ordered + workflows and event-driven or decentralized workflows, enabling customers to + bring their own orchestration or integrate with other systems more easily. It + also opens more opportunities for human-in-the-loop scenarios, both active and + reactive. +- **Debugging and Observability**. Event-driven communication moves message delivery + away from agents to a centralized component, making it easier to observe and + debug their activities regardless of agent implementation. +- **Scalability**. An event-based architecture enables distributed and + cloud-deployed agents, which is essential for building scalable AI services + and applications. 
+ +Today, we are delighted to share our progress and invite everyone to collaborate +with us and provide feedback to evolve AutoGen and help shape the future of +multi-agent systems. + +As the first step, we are opening a [pull request](#) into the main branch with the +current state of development of 0.4. After approximately a week, we plan to +merge this into main and continue development. There's still a lot left to do +before 0.4 is ready for release though, so keep in mind this is a work in +progress. + +Starting in AutoGen 0.4, the project will have three main libraries: + +- **Core** - the building blocks for an event-driven agentic system. +- **AgentChat** - a task-driven, high-level API built with core, including group + chat, code execution, pre-built agents, and more. This is the most similar API + to AutoGen 0.2 and will be the easiest API to migrate to. +- **Extensions** - implementations of core interfaces and third-party integrations + (e.g., Azure code executor and OpenAI model client). + +AutoGen 0.2 is still available, developed and maintained out of the [0.2 branch](https://github.com/microsoft/autogen/tree/0.2). +For everyone looking for a stable version, we recommend continuing to use 0.2 +for the time being. It can be installed using: + +```sh +pip install autogen-agentchat~=0.2 +``` + +This new package name was used to align with the new packages that will come with 0.4: +`autogen-core`, `autogen-agentchat`, and `autogen-ext`. + +Lastly, we will be using [GitHub +Discussion](https://github.com/microsoft/autogen/discussions) as the official +community forum for the new version and, going forward, all discussions related +to the AutoGen project. We look forward to meeting you there. diff --git a/website/blog/authors.yml b/website/blog/authors.yml index f9e7495c5f30..107d7d5a9de3 100644 --- a/website/blog/authors.yml +++ b/website/blog/authors.yml @@ -152,3 +152,7 @@ bboynton97: title: AI Engineer at AgentOps url: https://github.com/bboynton97 image_url: https://github.com/bboynton97.png + +autogen-team: + name: AutoGen Team + title: The humans behind the agents diff --git a/website/docs/Getting-Started.mdx b/website/docs/Getting-Started.mdx index 3d8639d11fb4..4a2bbf63fff5 100644 --- a/website/docs/Getting-Started.mdx +++ b/website/docs/Getting-Started.mdx @@ -131,7 +131,6 @@ The figure below shows an example conversation flow with AutoGen. - Understand the use cases for [multi-agent conversation](/docs/Use-Cases/agent_chat) and [enhanced LLM inference](/docs/Use-Cases/enhanced_inference) - Read the [API](/docs/reference/agentchat/conversable_agent/) docs - Learn about [research](/docs/Research) around AutoGen -- Chat on [Discord](https://aka.ms/autogen-dc) - Follow on [Twitter](https://twitter.com/pyautogen) - See our [roadmaps](https://aka.ms/autogen-roadmap) diff --git a/website/docs/contributor-guide/contributing.md b/website/docs/contributor-guide/contributing.md index cd2c62e408c1..633cd6c65ab8 100644 --- a/website/docs/contributor-guide/contributing.md +++ b/website/docs/contributor-guide/contributing.md @@ -6,7 +6,7 @@ The project welcomes contributions from developers and organizations worldwide. - Code review of pull requests. - Documentation, examples and test cases. - Readability improvement, e.g., improvement on docstr and comments. 
-- Community participation in [issues](https://github.com/microsoft/autogen/issues), [discussions](https://github.com/microsoft/autogen/discussions), [discord](https://aka.ms/autogen-dc), and [twitter](https://twitter.com/pyautogen). +- Community participation in [issues](https://github.com/microsoft/autogen/issues), [discussions](https://github.com/microsoft/autogen/discussions), and [twitter](https://twitter.com/pyautogen). - Tutorials, blog posts, talks that promote the project. - Sharing application scenarios and/or related research. @@ -31,4 +31,4 @@ To see what we are working on and what we plan to work on, please check our ## Becoming a Reviewer -There is currently no formal reviewer solicitation process. Current reviewers identify reviewers from active contributors. If you are willing to become a reviewer, you are welcome to let us know on discord. +There is currently no formal reviewer solicitation process. Current reviewers identify reviewers from active contributors. diff --git a/website/docs/contributor-guide/maintainer.md b/website/docs/contributor-guide/maintainer.md index cdbe4da53a93..dd28d1926882 100644 --- a/website/docs/contributor-guide/maintainer.md +++ b/website/docs/contributor-guide/maintainer.md @@ -10,7 +10,7 @@ ## Pull Requests -- For new PR, decide whether to close without review. If not, find the right reviewers. One source to refer to is the roles on Discord. Another consideration is to ask users who can benefit from the PR to review it. +- For new PR, decide whether to close without review. If not, find the right reviewers. Another consideration is to ask users who can benefit from the PR to review it. - For old PR, check the blocker: reviewer or PR creator. Try to unblock. Get additional help when needed. - When requesting changes, make sure you can check back in time because it blocks merging. @@ -28,9 +28,9 @@ ## Issues and Discussions -- For new issues, write a reply, apply a label if relevant. Ask on discord when necessary. For roadmap issues, apply the roadmap label and encourage community discussion. Mention relevant experts when necessary. +- For new issues, write a reply, apply a label if relevant. For roadmap issues, apply the roadmap label and encourage community discussion. Mention relevant experts when necessary. -- For old issues, provide an update or close. Ask on discord when necessary. Encourage PR creation when relevant. +- For old issues, provide an update or close. Encourage PR creation when relevant. - Use “good first issue” for easy fix suitable for first-time contributors. - Use “task list” for issues that require multiple PRs. -- For discussions, create an issue when relevant. Discuss on discord when appropriate. +- For discussions, create an issue when relevant. diff --git a/website/docs/tutorial/what-next.md b/website/docs/tutorial/what-next.md index d9a0062e8ca9..ed1542a56912 100644 --- a/website/docs/tutorial/what-next.md +++ b/website/docs/tutorial/what-next.md @@ -32,8 +32,7 @@ topics: ## Get Help If you have any questions, you can ask in our [GitHub -Discussions](https://github.com/microsoft/autogen/discussions), or join -our [Discord Server](https://aka.ms/autogen-dc). +Discussions](https://github.com/microsoft/autogen/discussions). 
[![](https://img.shields.io/discord/1153072414184452236?logo=discord&style=flat.png)](https://aka.ms/autogen-dc) diff --git a/website/docusaurus.config.js b/website/docusaurus.config.js index 2ae1a581ce6e..1ea8e50aafa2 100644 --- a/website/docusaurus.config.js +++ b/website/docusaurus.config.js @@ -145,11 +145,6 @@ module.exports = { label: "GitHub", position: "right", }, - { - href: "https://aka.ms/autogen-dc", - label: "Discord", - position: "right", - }, { href: "https://twitter.com/pyautogen", label: "Twitter", @@ -177,8 +172,8 @@ module.exports = { // // href: 'https://stackoverflow.com/questions/tagged/pymarlin', // // }, { - label: "Discord", - href: "https://aka.ms/autogen-dc", + label: "GitHub Discussion", + href: "https://github.com/microsoft/autogen/discussions", }, { label: "Twitter", From 6ebf49a9b840e4663495816179322547274a83b0 Mon Sep 17 00:00:00 2001 From: Jack Gerrits Date: Wed, 2 Oct 2024 14:41:19 -0400 Subject: [PATCH 14/35] Update PR link in blog post (#3602) * Update PR link in blog post * Update index.mdx --- .../2024-10-02-new-autogen-architecture-preview/index.mdx | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/website/blog/2024-10-02-new-autogen-architecture-preview/index.mdx b/website/blog/2024-10-02-new-autogen-architecture-preview/index.mdx index 3b13ac514f8e..42a873d38bb2 100644 --- a/website/blog/2024-10-02-new-autogen-architecture-preview/index.mdx +++ b/website/blog/2024-10-02-new-autogen-architecture-preview/index.mdx @@ -71,7 +71,7 @@ Today, we are delighted to share our progress and invite everyone to collaborate with us and provide feedback to evolve AutoGen and help shape the future of multi-agent systems. -As the first step, we are opening a [pull request](#) into the main branch with the +As the first step, we are opening a [pull request](https://github.com/microsoft/autogen/pull/3600) into the main branch with the current state of development of 0.4. After approximately a week, we plan to merge this into main and continue development. There's still a lot left to do before 0.4 is ready for release though, so keep in mind this is a work in @@ -82,12 +82,12 @@ Starting in AutoGen 0.4, the project will have three main libraries: - **Core** - the building blocks for an event-driven agentic system. - **AgentChat** - a task-driven, high-level API built with core, including group chat, code execution, pre-built agents, and more. This is the most similar API - to AutoGen 0.2 and will be the easiest API to migrate to. + to AutoGen [0.2](https://github.com/microsoft/autogen/tree/0.2) and will be the easiest API to migrate to. - **Extensions** - implementations of core interfaces and third-party integrations (e.g., Azure code executor and OpenAI model client). -AutoGen 0.2 is still available, developed and maintained out of the [0.2 branch](https://github.com/microsoft/autogen/tree/0.2). -For everyone looking for a stable version, we recommend continuing to use 0.2 +AutoGen [0.2](https://github.com/microsoft/autogen/tree/0.2) is still available, developed and maintained out of the [0.2 branch](https://github.com/microsoft/autogen/tree/0.2). +For everyone looking for a stable version, we recommend continuing to use [0.2](https://github.com/microsoft/autogen/tree/0.2) for the time being. 
It can be installed using: ```sh From d4a09939d0b065a54658c5c09efdaf6e68c9eb1f Mon Sep 17 00:00:00 2001 From: Jack Gerrits Date: Wed, 2 Oct 2024 17:28:57 -0400 Subject: [PATCH 15/35] Create CI to tag issues with needs triage (#3605) --- .github/workflows/issue-needs-triage.yml | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) create mode 100644 .github/workflows/issue-needs-triage.yml diff --git a/.github/workflows/issue-needs-triage.yml b/.github/workflows/issue-needs-triage.yml new file mode 100644 index 000000000000..59cb3479c808 --- /dev/null +++ b/.github/workflows/issue-needs-triage.yml @@ -0,0 +1,18 @@ +name: Label issues with needs-triage +on: + issues: + types: + - reopened + - opened +jobs: + label_issues: + runs-on: ubuntu-latest + permissions: + issues: write + steps: + - run: gh issue edit "$NUMBER" --add-label "$LABELS" + env: + GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} + GH_REPO: ${{ github.repository }} + NUMBER: ${{ github.event.issue.number }} + LABELS: needs-triage From 180c17c1f4b292f798d055fcd02a3f6b9e94d2ab Mon Sep 17 00:00:00 2001 From: Jack Gerrits Date: Thu, 3 Oct 2024 12:47:07 -0400 Subject: [PATCH 16/35] Update issue templates (#3610) * Update config.yml * Delete .github/ISSUE_TEMPLATE.md * Delete .github/ISSUE_TEMPLATE/general_issue.yml * Update feature_request.yml * Update feature_request.yml * Update feature_request.yml * Update feature_request.yml * Update bug_report.yml * Update .github/ISSUE_TEMPLATE/bug_report.yml Co-authored-by: Eric Zhu * Update .github/ISSUE_TEMPLATE/config.yml Co-authored-by: Eric Zhu * Update bug_report.yml * Update config.yml --------- Co-authored-by: Eric Zhu --- .github/ISSUE_TEMPLATE.md | 57 --------------- .github/ISSUE_TEMPLATE/bug_report.yml | 80 +++++++++++----------- .github/ISSUE_TEMPLATE/config.yml | 4 ++ .github/ISSUE_TEMPLATE/feature_request.yml | 26 +++---- .github/ISSUE_TEMPLATE/general_issue.yml | 41 ----------- 5 files changed, 54 insertions(+), 154 deletions(-) delete mode 100644 .github/ISSUE_TEMPLATE.md delete mode 100644 .github/ISSUE_TEMPLATE/general_issue.yml diff --git a/.github/ISSUE_TEMPLATE.md b/.github/ISSUE_TEMPLATE.md deleted file mode 100644 index b3b0d0daeed6..000000000000 --- a/.github/ISSUE_TEMPLATE.md +++ /dev/null @@ -1,57 +0,0 @@ -### Description - - -### Environment -- AutoGen version: -- Python version: -- Operating System: - -### Steps to Reproduce (for bugs) - - -1. Step 1 -2. Step 2 -3. ... - -### Expected Behavior - - -### Actual Behavior - - -### Screenshots / Logs (if applicable) - - -### Additional Information - - -### Possible Solution (if you have one) - - -### Is this a Bug or Feature Request? - - -### Priority - - -### Difficulty - - -### Any related issues? - - -### Any relevant discussions? - - -### Checklist - -- [ ] I have searched for similar issues and didn't find any duplicates. -- [ ] I have provided a clear and concise description of the issue. -- [ ] I have included the necessary environment details. -- [ ] I have outlined the steps to reproduce the issue. -- [ ] I have included any relevant logs or screenshots. -- [ ] I have indicated whether this is a bug or a feature request. -- [ ] I have set the priority and difficulty levels. 
- -### Additional Comments - diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml index 434226b3e884..090fa6cc5939 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.yml +++ b/.github/ISSUE_TEMPLATE/bug_report.yml @@ -1,53 +1,55 @@ name: Bug Report -description: File a bug report -title: "[Bug]: " +description: Report a bug labels: ["bug"] body: - type: textarea - id: description attributes: - label: Describe the bug - description: A clear and concise description of what the bug is. - placeholder: What went wrong? + label: What happened? + description: Please provide as much information as possible, this helps us address the issue. + validations: + required: true - type: textarea - id: reproduce attributes: - label: Steps to reproduce - description: | - Steps to reproduce the behavior: - - 1. Step 1 - 2. Step 2 - 3. ... - 4. See error - placeholder: How can we replicate the issue? + label: What did you expect to happen? + validations: + required: true - type: textarea - id: modelused attributes: - label: Model Used - description: A description of the model that was used when the error was encountered + label: How can we reproduce it (as minimally and precisely as possible)? + description: Please provide steps to reproduce. Provide code that can be run if possible. + validations: + required: true + - type: input + attributes: + label: AutoGen version + description: What version or commit of the library was used + validations: + required: true + - type: dropdown + attributes: + label: Which package was this bug in + options: + - Core + - AgentChat + - Extensions + - AutoGen Studio + - Magentic One + - AutoGen Bench + - Other + validations: + required: true + - type: input + attributes: + label: Model used + description: If a model was used, please describe it here, indicating whether it is a local model or a cloud-hosted model placeholder: gpt-4, mistral-7B etc - - type: textarea - id: expected_behavior + - type: input attributes: - label: Expected Behavior - description: A clear and concise description of what you expected to happen. - placeholder: What should have happened? - - type: textarea - id: screenshots + label: Python version + - type: input attributes: - label: Screenshots and logs - description: If applicable, add screenshots and logs to help explain your problem. - placeholder: Add screenshots here + label: Operating system - type: textarea - id: additional_information - attributes: - label: Additional Information - description: | - - AutoGen Version: - - Operating System: - - Python Version: - - Related Issues: - - Any other relevant information. - placeholder: Any additional details + attributes: + label: Any additional info you think would be helpful for fixing this bug diff --git a/.github/ISSUE_TEMPLATE/config.yml b/.github/ISSUE_TEMPLATE/config.yml index 0086358db1eb..76afcbcc5f87 100644 --- a/.github/ISSUE_TEMPLATE/config.yml +++ b/.github/ISSUE_TEMPLATE/config.yml @@ -1 +1,5 @@ blank_issues_enabled: true +contact_links: + - name: Questions or general help 💬 + url: https://github.com/microsoft/autogen/discussions + about: Please ask and answer questions here. 
diff --git a/.github/ISSUE_TEMPLATE/feature_request.yml b/.github/ISSUE_TEMPLATE/feature_request.yml index e8a63df7a6e2..57f360761a76 100644 --- a/.github/ISSUE_TEMPLATE/feature_request.yml +++ b/.github/ISSUE_TEMPLATE/feature_request.yml @@ -1,26 +1,18 @@ name: Feature Request -description: File a feature request +description: Request a new feature or enhancement labels: ["enhancement"] -title: "[Feature Request]: " body: - type: textarea - id: problem_description attributes: - label: Is your feature request related to a problem? Please describe. - description: A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] - placeholder: What problem are you trying to solve? + label: What feature would you like to be added? + description: Please describe the desired feature. Be descriptive, provide examples and if possible, provide a proposed solution. + validations: + required: true - type: textarea - id: solution_description attributes: - label: Describe the solution you'd like - description: A clear and concise description of what you want to happen. - placeholder: How do you envision the solution? - - - type: textarea - id: additional_context - attributes: - label: Additional context - description: Add any other context or screenshots about the feature request here. - placeholder: Any additional information + label: Why is this needed? + description: Why is it important that this feature is implemented? What problem or need does it solve? + validations: + required: true diff --git a/.github/ISSUE_TEMPLATE/general_issue.yml b/.github/ISSUE_TEMPLATE/general_issue.yml deleted file mode 100644 index b585f4642f44..000000000000 --- a/.github/ISSUE_TEMPLATE/general_issue.yml +++ /dev/null @@ -1,41 +0,0 @@ -name: General Issue -description: File a general issue -title: "[Issue]: " -labels: [] - -body: - - type: textarea - id: description - attributes: - label: Describe the issue - description: A clear and concise description of what the issue is. - placeholder: What went wrong? - - type: textarea - id: reproduce - attributes: - label: Steps to reproduce - description: | - Steps to reproduce the behavior: - - 1. Step 1 - 2. Step 2 - 3. ... - 4. See error - placeholder: How can we replicate the issue? - - type: textarea - id: screenshots - attributes: - label: Screenshots and logs - description: If applicable, add screenshots and logs to help explain your problem. - placeholder: Add screenshots here - - type: textarea - id: additional_information - attributes: - label: Additional Information - description: | - - AutoGen Version: - - Operating System: - - Python Version: - - Related Issues: - - Any other relevant information. 
- placeholder: Any additional details From f32472e9ce80f878f090360a75ce3a90f03e0342 Mon Sep 17 00:00:00 2001 From: Juan Diego Herrera Date: Thu, 3 Oct 2024 17:05:24 -0700 Subject: [PATCH 17/35] Fix small typo in the docs --- website/docs/tutorial/conversation-patterns.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/tutorial/conversation-patterns.ipynb b/website/docs/tutorial/conversation-patterns.ipynb index 56004e3b3b81..a0c8ddc217cb 100644 --- a/website/docs/tutorial/conversation-patterns.ipynb +++ b/website/docs/tutorial/conversation-patterns.ipynb @@ -728,7 +728,7 @@ "If we were to use the `round_robin` strategy, this list would specify the order\n", "of the agents to be selected.\n", "We also initialize the group chat with an empty message list and a maximum\n", - "round of 6, which means there will be at most 6 iteratiosn of selecting speaker,\n", + "round of 6, which means there will be at most 6 iterations of selecting a speaker,\n", "agent speaks and broadcasting message." ] }, From 56d2090828cd272805da9b8218d43f75b7b1f080 Mon Sep 17 00:00:00 2001 From: Jack Gerrits Date: Fri, 4 Oct 2024 09:54:46 -0400 Subject: [PATCH 18/35] Update CI to target 0.2 branch, remove merge queue (#3656) --- .github/workflows/build.yml | 8 ++- .github/workflows/contrib-openai.yml | 2 +- .github/workflows/contrib-tests.yml | 4 +- .github/workflows/deploy-website.yml | 64 ++--------------------- .github/workflows/dotnet-build.yml | 12 ++--- .github/workflows/lfs-check.yml | 5 +- .github/workflows/openai.yml | 2 +- .github/workflows/pre-commit.yml | 3 +- .github/workflows/python-package.yml | 30 ++--------- .github/workflows/samples-tools-tests.yml | 4 +- .github/workflows/type-check.yml | 4 +- 11 files changed, 27 insertions(+), 111 deletions(-) diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index a92044f15b78..0107e7314fcc 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -5,15 +5,13 @@ name: Build on: push: - branches: ["main"] + branches: ["0.2"] pull_request: - branches: ["main"] - merge_group: - types: [checks_requested] + branches: ["0.2"] concurrency: group: ${{ github.workflow }}-${{ github.ref }}-${{ github.head_ref }} - cancel-in-progress: ${{ github.ref != 'refs/heads/main' }} + cancel-in-progress: ${{ github.ref != 'refs/heads/0.2' }} permissions: {} jobs: paths-filter: diff --git a/.github/workflows/contrib-openai.yml b/.github/workflows/contrib-openai.yml index 7e8fb0033177..d084e44bd607 100644 --- a/.github/workflows/contrib-openai.yml +++ b/.github/workflows/contrib-openai.yml @@ -5,7 +5,7 @@ name: OpenAI4ContribTests on: pull_request: - branches: ["main"] + branches: ["0.2"] paths: - "autogen/**" - "test/agentchat/contrib/**" diff --git a/.github/workflows/contrib-tests.yml b/.github/workflows/contrib-tests.yml index f14a7f09ebe0..7d779bc5fae8 100644 --- a/.github/workflows/contrib-tests.yml +++ b/.github/workflows/contrib-tests.yml @@ -5,7 +5,7 @@ name: ContribTests on: pull_request: - branches: ["main"] + branches: ["0.2"] paths: - "autogen/**" - "test/agentchat/contrib/**" @@ -16,7 +16,7 @@ on: concurrency: group: ${{ github.workflow }}-${{ github.ref }}-${{ github.head_ref }} - cancel-in-progress: ${{ github.ref != 'refs/heads/main' }} + cancel-in-progress: ${{ github.ref != 'refs/heads/0.2' }} permissions: {} # actions: read diff --git a/.github/workflows/deploy-website.yml b/.github/workflows/deploy-website.yml index 2f2ba4d473f3..e576ca1b4db9 100644 --- a/.github/workflows/deploy-website.yml 
+++ b/.github/workflows/deploy-website.yml @@ -2,20 +2,18 @@ name: docs on: pull_request: - branches: [main] - path: + branches: ["0.2"] + paths: - "autogen/*" - "website/*" - ".github/workflows/deploy-website.yml" push: - branches: [main] - path: + branches: ["0.2"] + paths: - "autogen/*" - "website/*" - ".github/workflows/deploy-website.yml" workflow_dispatch: - merge_group: - types: [checks_requested] permissions: id-token: write pages: write @@ -67,57 +65,3 @@ jobs: npm i --legacy-peer-deps npm run build fi - gh-release: - if: github.event_name != 'pull_request' - runs-on: ubuntu-latest - defaults: - run: - working-directory: website - steps: - - uses: actions/checkout@v4 - with: - lfs: true - - uses: actions/setup-node@v4 - with: - node-version: 18.x - - name: setup python - uses: actions/setup-python@v5 - with: - python-version: "3.8" - - name: pydoc-markdown install - run: | - python -m pip install --upgrade pip - pip install pydoc-markdown pyyaml termcolor - # Pin databind packages as version 4.5.0 is not compatible with pydoc-markdown. - pip install databind.core==4.4.2 databind.json==4.4.2 - - name: pydoc-markdown run - run: | - pydoc-markdown - - name: quarto install - working-directory: ${{ runner.temp }} - run: | - wget -q https://github.com/quarto-dev/quarto-cli/releases/download/v1.5.23/quarto-1.5.23-linux-amd64.tar.gz - tar -xzf quarto-1.5.23-linux-amd64.tar.gz - echo "$(pwd)/quarto-1.5.23/bin/" >> $GITHUB_PATH - - name: Process notebooks - run: | - python process_notebooks.py render - - name: Build website - run: | - if [ -e yarn.lock ]; then - yarn install --frozen-lockfile --ignore-engines - yarn build - elif [ -e package-lock.json ]; then - npm ci - npm run build - else - npm i --legacy-peer-deps - npm run build - fi - - name: Upload artifact - uses: actions/upload-pages-artifact@v3 - with: - path: "website/build" - - name: Deploy to GitHub Pages - id: deployment - uses: actions/deploy-pages@v4 diff --git a/.github/workflows/dotnet-build.yml b/.github/workflows/dotnet-build.yml index 6aac54d3818c..b333d9065d64 100644 --- a/.github/workflows/dotnet-build.yml +++ b/.github/workflows/dotnet-build.yml @@ -6,15 +6,13 @@ name: dotnet-ci on: workflow_dispatch: pull_request: - branches: [ "main" ] + branches: [ "0.2" ] push: - branches: [ "main" ] - merge_group: - types: [checks_requested] + branches: [ "0.2" ] concurrency: group: ${{ github.workflow }}-${{ github.ref }}-${{ github.head_ref }} - cancel-in-progress: ${{ github.ref != 'refs/heads/main' || github.ref != 'refs/heads/dotnet' }} + cancel-in-progress: ${{ github.ref != 'refs/heads/0.2' || github.ref != 'refs/heads/dotnet' }} permissions: contents: read @@ -122,7 +120,7 @@ jobs: defaults: run: working-directory: dotnet - if: success() && (github.ref == 'refs/heads/main') + if: success() && (github.ref == 'refs/heads/0.2') needs: aot-test steps: - uses: actions/checkout@v4 @@ -228,4 +226,4 @@ jobs: env: MYGET_TOKEN: ${{ secrets.MYGET_TOKEN }} continue-on-error: true - + diff --git a/.github/workflows/lfs-check.yml b/.github/workflows/lfs-check.yml index 4baae925de3c..dc5e1678be2f 100644 --- a/.github/workflows/lfs-check.yml +++ b/.github/workflows/lfs-check.yml @@ -1,6 +1,7 @@ name: "Git LFS Check" - -on: pull_request +on: + pull_request: + branches: ["0.2"] permissions: {} jobs: lfs-check: diff --git a/.github/workflows/openai.yml b/.github/workflows/openai.yml index a9ab8e9e0c5f..e29f9d0f1ab2 100644 --- a/.github/workflows/openai.yml +++ b/.github/workflows/openai.yml @@ -5,7 +5,7 @@ name: OpenAI on: pull_request: - 
branches: ["main"] + branches: ["0.2"] paths: - "autogen/**" - "test/**" diff --git a/.github/workflows/pre-commit.yml b/.github/workflows/pre-commit.yml index 8404de61154d..7ec5a4eb1048 100644 --- a/.github/workflows/pre-commit.yml +++ b/.github/workflows/pre-commit.yml @@ -3,8 +3,7 @@ name: Code formatting # see: https://help.github.com/en/actions/reference/events-that-trigger-workflows on: # Trigger the workflow on pull request or merge pull_request: - merge_group: - types: [checks_requested] + branches: ["0.2"] defaults: run: diff --git a/.github/workflows/python-package.yml b/.github/workflows/python-package.yml index b2274e641989..e34678ab0039 100644 --- a/.github/workflows/python-package.yml +++ b/.github/workflows/python-package.yml @@ -5,14 +5,10 @@ name: python-package on: - release: - types: [published] - workflow_dispatch: + push: + tags: + - "0.2.*" permissions: {} - # actions: read - # checks: read - # contents: read - # deployments: read jobs: deploy: strategy: @@ -28,26 +24,6 @@ jobs: steps: - name: Checkout uses: actions/checkout@v4 - # - name: Cache conda - # uses: actions/cache@v4 - # with: - # path: ~/conda_pkgs_dir - # key: conda-${{ matrix.os }}-python-${{ matrix.python-version }}-${{ hashFiles('environment.yml') }} - # - name: Setup Miniconda - # uses: conda-incubator/setup-miniconda@v2 - # with: - # auto-update-conda: true - # auto-activate-base: false - # activate-environment: hcrystalball - # python-version: ${{ matrix.python-version }} - # use-only-tar-bz2: true - - name: Install from source - # This is required for the pre-commit tests - shell: pwsh - run: pip install . - # - name: Conda list - # shell: pwsh - # run: conda list - name: Build shell: pwsh run: | diff --git a/.github/workflows/samples-tools-tests.yml b/.github/workflows/samples-tools-tests.yml index e774e5cb0b1f..9452f0e377e3 100644 --- a/.github/workflows/samples-tools-tests.yml +++ b/.github/workflows/samples-tools-tests.yml @@ -5,7 +5,7 @@ name: SamplesToolsTests on: pull_request: - branches: ["main"] + branches: ["0.2"] paths: - "autogen/**" - "samples/tools/**" @@ -14,7 +14,7 @@ on: concurrency: group: ${{ github.workflow }}-${{ github.ref }}-${{ github.head_ref }} - cancel-in-progress: ${{ github.ref != 'refs/heads/main' }} + cancel-in-progress: ${{ github.ref != 'refs/heads/0.2' }} permissions: {} jobs: SamplesToolsFineTuningTests: diff --git a/.github/workflows/type-check.yml b/.github/workflows/type-check.yml index c66fb6ad7b10..3afd32ad886f 100644 --- a/.github/workflows/type-check.yml +++ b/.github/workflows/type-check.yml @@ -2,8 +2,8 @@ name: Type check # see: https://help.github.com/en/actions/reference/events-that-trigger-workflows on: # Trigger the workflow on pull request or merge pull_request: - merge_group: - types: [checks_requested] + branches: ["0.2"] + defaults: run: shell: bash From 11314488cb5cc47a071c378f3fdc7d4c4e671a84 Mon Sep 17 00:00:00 2001 From: Jack Gerrits Date: Fri, 4 Oct 2024 11:14:12 -0400 Subject: [PATCH 19/35] update base url of docusaurus site (#3658) --- website/docusaurus.config.js | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docusaurus.config.js b/website/docusaurus.config.js index 1ea8e50aafa2..c2dd0d44cbd5 100644 --- a/website/docusaurus.config.js +++ b/website/docusaurus.config.js @@ -16,7 +16,7 @@ module.exports = { title: "AutoGen", tagline: "An Open-Source Programming Framework for Agentic AI", url: "https://microsoft.github.io", - baseUrl: "/autogen/", + baseUrl: "/autogen/0.2/", onBrokenLinks: "throw", 
onBrokenMarkdownLinks: "warn", favicon: "img/ag.ico", From bf3414c96f3685c3ae6a65b4962a5c2699dcdc2b Mon Sep 17 00:00:00 2001 From: Jack Gerrits Date: Wed, 9 Oct 2024 16:41:20 -0400 Subject: [PATCH 20/35] Add announcement bar for 0.4 (#3717) --- website/docusaurus.config.js | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/website/docusaurus.config.js b/website/docusaurus.config.js index c2dd0d44cbd5..1a2de465f1f6 100644 --- a/website/docusaurus.config.js +++ b/website/docusaurus.config.js @@ -184,14 +184,14 @@ module.exports = { ], copyright: `Copyright © ${new Date().getFullYear()} AutoGen Authors | Privacy and Cookies | Consumer Health Privacy`, }, - // announcementBar: { - // id: "whats_new", - // content: - // 'What\'s new in AutoGen? Read this blog for an overview of updates', - // backgroundColor: "#fafbfc", - // textColor: "#091E42", - // isCloseable: true, - // }, + announcementBar: { + id: "newdocs", + content: + 'Go here to view the documentation for the work in progress version of AutoGen 0.4.', + backgroundColor: "#fafbfc", + textColor: "#091E42", + isCloseable: true, + }, /* Clarity Config */ clarity: { ID: "lnxpe6skj1", // The Tracking ID provided by Clarity From 3ebd7aeec248d345b7bd3ea63de26f07ba72638f Mon Sep 17 00:00:00 2001 From: Jack Gerrits Date: Thu, 10 Oct 2024 13:16:25 -0400 Subject: [PATCH 21/35] Update links on 0.2 website (#3734) --- README.md | 10 +-- notebook/agentchat_MathChat.ipynb | 2 +- notebook/agentchat_RetrieveChat.ipynb | 2 +- notebook/agentchat_RetrieveChat_mongodb.ipynb | 2 +- .../agentchat_RetrieveChat_pgvector.ipynb | 2 +- notebook/agentchat_RetrieveChat_qdrant.ipynb | 2 +- notebook/agentchat_agentoptimizer.ipynb | 2 +- notebook/agentchat_cost_token_tracking.ipynb | 2 +- notebook/agentchat_custom_model.ipynb | 2 +- notebook/agentchat_function_call.ipynb | 2 +- ...at_function_call_currency_calculator.ipynb | 2 +- notebook/agentchat_human_feedback.ipynb | 2 +- notebook/agentchat_langchain.ipynb | 2 +- .../agentchat_oai_assistant_retrieval.ipynb | 2 +- ...ntchat_oai_assistant_twoagents_basic.ipynb | 2 +- notebook/agentchat_planning.ipynb | 2 +- notebook/agentchat_stream.ipynb | 2 +- notebook/agentchat_surfer.ipynb | 2 +- notebook/agentchat_teachability.ipynb | 2 +- .../agentchat_teachable_oai_assistants.ipynb | 2 +- notebook/agentchat_two_users.ipynb | 2 +- ...eo_transcript_translate_with_whisper.ipynb | 2 +- notebook/agentchat_web_info.ipynb | 2 +- notebook/agenteval_cq_math.ipynb | 2 +- website/blog/2023-06-28-MathChat/index.mdx | 2 +- .../blog/2023-10-18-RetrieveChat/index.mdx | 12 ++-- .../blog/2023-10-26-TeachableAgent/index.mdx | 8 +-- website/blog/2023-11-06-LMM-Agent/index.mdx | 6 +- .../blog/2023-11-13-OAI-assistants/index.mdx | 8 +-- website/blog/2023-11-20-AgentEval/index.mdx | 4 +- .../blog/2023-11-26-Agent-AutoBuild/index.mdx | 4 +- .../blog/2023-12-01-AutoGenStudio/index.mdx | 2 +- .../blog/2023-12-23-AgentOptimizer/index.mdx | 2 +- .../blog/2024-01-25-AutoGenBench/index.mdx | 8 +-- .../blog/2024-01-26-Custom-Models/index.mdx | 2 +- .../blog/2024-03-03-AutoGen-Update/index.mdx | 8 +-- website/docs/Examples.md | 64 +++++++++---------- website/docs/Getting-Started.mdx | 2 +- website/docs/Migration-Guide.md | 2 +- website/docs/Use-Cases/agent_chat.md | 6 +- website/docs/Use-Cases/enhanced_inference.md | 4 +- website/docs/contributor-guide/docker.md | 2 +- .../docs/contributor-guide/documentation.md | 2 +- website/docs/contributor-guide/tests.md | 10 +-- website/docs/ecosystem/llamaindex.md | 2 +- 
website/docs/ecosystem/microsoft-fabric.md | 2 +- website/docs/ecosystem/pgvector.md | 2 +- website/docs/installation/Docker.md | 4 +- .../installation/Optional-Dependencies.md | 16 ++--- .../non-openai-models/cloud-gemini.ipynb | 2 +- .../cloud-gemini_vertexai.ipynb | 2 +- website/docs/topics/retrieval_augmentation.md | 8 +-- website/process_notebooks.py | 2 +- website/src/components/GalleryPage.js | 2 +- 54 files changed, 128 insertions(+), 128 deletions(-) diff --git a/README.md b/README.md index e92083f0df32..0599300c5397 100644 --- a/README.md +++ b/README.md @@ -26,7 +26,7 @@ AutoGen is an open-source programming framework for building AI agents and facil > -_Maintainers (Sept 6th, 2024)_ -![AutoGen Overview](https://github.com/microsoft/autogen/blob/main/website/static/img/autogen_agentchat.png) +![AutoGen Overview](https://github.com/microsoft/autogen/blob/0.2/website/static/img/autogen_agentchat.png) - AutoGen enables building next-gen LLM applications based on [multi-agent conversations](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat) with minimal effort. It simplifies the orchestration, automation, and optimization of a complex LLM workflow. It maximizes the performance of LLM models and overcomes their weaknesses. - It supports [diverse conversation patterns](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat#supporting-diverse-conversation-patterns) for complex workflows. With customizable and conversable agents, developers can use AutoGen to build a wide range of conversation patterns concerning conversation autonomy, @@ -171,7 +171,7 @@ Features of this use case include: - **Customization**: AutoGen agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ. - **Human participation**: AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed. -For [example](https://github.com/microsoft/autogen/blob/main/test/twoagent.py), +For [example](https://github.com/microsoft/autogen/blob/0.2/test/twoagent.py), ```python from autogen import AssistantAgent, UserProxyAgent, config_list_from_json @@ -194,9 +194,9 @@ python test/twoagent.py After the repo is cloned. The figure below shows an example conversation flow with AutoGen. -![Agent Chat Example](https://github.com/microsoft/autogen/blob/main/website/static/img/chat_example.png) +![Agent Chat Example](https://github.com/microsoft/autogen/blob/0.2/website/static/img/chat_example.png) -Alternatively, the [sample code](https://github.com/microsoft/autogen/blob/main/samples/simple_chat.py) here allows a user to chat with an AutoGen agent in ChatGPT style. +Alternatively, the [sample code](https://github.com/microsoft/autogen/blob/0.2/samples/simple_chat.py) here allows a user to chat with an AutoGen agent in ChatGPT style. Please find more [code examples](https://microsoft.github.io/autogen/docs/Examples#automated-multi-agent-chat) for this feature.

@@ -240,7 +240,7 @@ You can find detailed documentation about AutoGen [here](https://microsoft.githu In addition, you can find: -- [Research](https://microsoft.github.io/autogen/docs/Research), [blogposts](https://microsoft.github.io/autogen/blog) around AutoGen, and [Transparency FAQs](https://github.com/microsoft/autogen/blob/main/TRANSPARENCY_FAQS.md) +- [Research](https://microsoft.github.io/autogen/docs/Research), [blogposts](https://microsoft.github.io/autogen/blog) around AutoGen, and [Transparency FAQs](https://github.com/microsoft/autogen/blob/0.2/TRANSPARENCY_FAQS.md) - [Contributing guide](https://microsoft.github.io/autogen/docs/Contribute) diff --git a/notebook/agentchat_MathChat.ipynb b/notebook/agentchat_MathChat.ipynb index 69c38031b2fe..bb6f713eabc7 100644 --- a/notebook/agentchat_MathChat.ipynb +++ b/notebook/agentchat_MathChat.ipynb @@ -17,7 +17,7 @@ "\n", "AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n", "\n", - "MathChat is an experimental conversational framework for math problem solving. In this notebook, we demonstrate how to use MathChat to solve math problems. MathChat uses the `AssistantAgent` and `MathUserProxyAgent`, which is similar to the usage of `AssistantAgent` and `UserProxyAgent` in other notebooks (e.g., [Automated Task Solving with Code Generation, Execution & Debugging](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_auto_feedback_from_code_execution.ipynb)). Essentially, `MathUserProxyAgent` implements a different auto reply mechanism corresponding to the MathChat prompts. You can find more details in the paper [An Empirical Study on Challenging Math Problem Solving with GPT-4](https://arxiv.org/abs/2306.01337) or the [blogpost](https://microsoft.github.io/autogen/blog/2023/06/28/MathChat).\n", + "MathChat is an experimental conversational framework for math problem solving. In this notebook, we demonstrate how to use MathChat to solve math problems. MathChat uses the `AssistantAgent` and `MathUserProxyAgent`, which is similar to the usage of `AssistantAgent` and `UserProxyAgent` in other notebooks (e.g., [Automated Task Solving with Code Generation, Execution & Debugging](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_auto_feedback_from_code_execution.ipynb)). Essentially, `MathUserProxyAgent` implements a different auto reply mechanism corresponding to the MathChat prompts. You can find more details in the paper [An Empirical Study on Challenging Math Problem Solving with GPT-4](https://arxiv.org/abs/2306.01337) or the [blogpost](https://microsoft.github.io/autogen/blog/2023/06/28/MathChat).\n", "\n", "````{=mdx}\n", ":::info Requirements\n", diff --git a/notebook/agentchat_RetrieveChat.ipynb b/notebook/agentchat_RetrieveChat.ipynb index 6ca2d1ac5123..6debb2450df3 100644 --- a/notebook/agentchat_RetrieveChat.ipynb +++ b/notebook/agentchat_RetrieveChat.ipynb @@ -10,7 +10,7 @@ "AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. 
This framework allows tool use and human participation through multi-agent conversation.\n", "Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n", "\n", - "RetrieveChat is a conversational system for retrieval-augmented code generation and question answering. In this notebook, we demonstrate how to utilize RetrieveChat to generate code and answer questions based on customized documentations that are not present in the LLM's training dataset. RetrieveChat uses the `AssistantAgent` and `RetrieveUserProxyAgent`, which is similar to the usage of `AssistantAgent` and `UserProxyAgent` in other notebooks (e.g., [Automated Task Solving with Code Generation, Execution & Debugging](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_auto_feedback_from_code_execution.ipynb)). Essentially, `RetrieveUserProxyAgent` implement a different auto-reply mechanism corresponding to the RetrieveChat prompts.\n", + "RetrieveChat is a conversational system for retrieval-augmented code generation and question answering. In this notebook, we demonstrate how to utilize RetrieveChat to generate code and answer questions based on customized documentations that are not present in the LLM's training dataset. RetrieveChat uses the `AssistantAgent` and `RetrieveUserProxyAgent`, which is similar to the usage of `AssistantAgent` and `UserProxyAgent` in other notebooks (e.g., [Automated Task Solving with Code Generation, Execution & Debugging](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_auto_feedback_from_code_execution.ipynb)). Essentially, `RetrieveUserProxyAgent` implement a different auto-reply mechanism corresponding to the RetrieveChat prompts.\n", "\n", "## Table of Contents\n", "We'll demonstrate six examples of using RetrieveChat for code generation and question answering:\n", diff --git a/notebook/agentchat_RetrieveChat_mongodb.ipynb b/notebook/agentchat_RetrieveChat_mongodb.ipynb index f1f85f65a802..30508ad6c474 100644 --- a/notebook/agentchat_RetrieveChat_mongodb.ipynb +++ b/notebook/agentchat_RetrieveChat_mongodb.ipynb @@ -10,7 +10,7 @@ "AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n", "Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n", "\n", - "RetrieveChat is a conversational system for retrieval-augmented code generation and question answering. In this notebook, we demonstrate how to utilize RetrieveChat to generate code and answer questions based on customized documentations that are not present in the LLM's training dataset. RetrieveChat uses the `AssistantAgent` and `RetrieveUserProxyAgent`, which is similar to the usage of `AssistantAgent` and `UserProxyAgent` in other notebooks (e.g., [Automated Task Solving with Code Generation, Execution & Debugging](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_auto_feedback_from_code_execution.ipynb)). Essentially, `RetrieveUserProxyAgent` implement a different auto-reply mechanism corresponding to the RetrieveChat prompts.\n", + "RetrieveChat is a conversational system for retrieval-augmented code generation and question answering. 
In this notebook, we demonstrate how to utilize RetrieveChat to generate code and answer questions based on customized documentations that are not present in the LLM's training dataset. RetrieveChat uses the `AssistantAgent` and `RetrieveUserProxyAgent`, which is similar to the usage of `AssistantAgent` and `UserProxyAgent` in other notebooks (e.g., [Automated Task Solving with Code Generation, Execution & Debugging](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_auto_feedback_from_code_execution.ipynb)). Essentially, `RetrieveUserProxyAgent` implement a different auto-reply mechanism corresponding to the RetrieveChat prompts.\n", "\n", "## Table of Contents\n", "We'll demonstrate six examples of using RetrieveChat for code generation and question answering:\n", diff --git a/notebook/agentchat_RetrieveChat_pgvector.ipynb b/notebook/agentchat_RetrieveChat_pgvector.ipynb index 022b1347a2d2..18f3c5199e89 100644 --- a/notebook/agentchat_RetrieveChat_pgvector.ipynb +++ b/notebook/agentchat_RetrieveChat_pgvector.ipynb @@ -10,7 +10,7 @@ "AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n", "Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n", "\n", - "RetrieveChat is a conversational system for retrieval-augmented code generation and question answering. In this notebook, we demonstrate how to utilize RetrieveChat to generate code and answer questions based on customized documentations that are not present in the LLM's training dataset. RetrieveChat uses the `AssistantAgent` and `RetrieveUserProxyAgent`, which is similar to the usage of `AssistantAgent` and `UserProxyAgent` in other notebooks (e.g., [Automated Task Solving with Code Generation, Execution & Debugging](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_auto_feedback_from_code_execution.ipynb)). Essentially, `RetrieveUserProxyAgent` implement a different auto-reply mechanism corresponding to the RetrieveChat prompts.\n", + "RetrieveChat is a conversational system for retrieval-augmented code generation and question answering. In this notebook, we demonstrate how to utilize RetrieveChat to generate code and answer questions based on customized documentations that are not present in the LLM's training dataset. RetrieveChat uses the `AssistantAgent` and `RetrieveUserProxyAgent`, which is similar to the usage of `AssistantAgent` and `UserProxyAgent` in other notebooks (e.g., [Automated Task Solving with Code Generation, Execution & Debugging](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_auto_feedback_from_code_execution.ipynb)). 
Essentially, `RetrieveUserProxyAgent` implement a different auto-reply mechanism corresponding to the RetrieveChat prompts.\n", "\n", "## Table of Contents\n", "We'll demonstrate six examples of using RetrieveChat for code generation and question answering:\n", diff --git a/notebook/agentchat_RetrieveChat_qdrant.ipynb b/notebook/agentchat_RetrieveChat_qdrant.ipynb index 9be4cbfe528d..e56bc00e4abb 100644 --- a/notebook/agentchat_RetrieveChat_qdrant.ipynb +++ b/notebook/agentchat_RetrieveChat_qdrant.ipynb @@ -12,7 +12,7 @@ "This notebook demonstrates the usage of Qdrant for RAG, based on [agentchat_RetrieveChat.ipynb](https://colab.research.google.com/github/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat.ipynb).\n", "\n", "\n", - "RetrieveChat is a conversational system for retrieve augmented code generation and question answering. In this notebook, we demonstrate how to utilize RetrieveChat to generate code and answer questions based on customized documentations that are not present in the LLM's training dataset. RetrieveChat uses the `AssistantAgent` and `RetrieveUserProxyAgent`, which is similar to the usage of `AssistantAgent` and `UserProxyAgent` in other notebooks (e.g., [Automated Task Solving with Code Generation, Execution & Debugging](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_auto_feedback_from_code_execution.ipynb)).\n", + "RetrieveChat is a conversational system for retrieve augmented code generation and question answering. In this notebook, we demonstrate how to utilize RetrieveChat to generate code and answer questions based on customized documentations that are not present in the LLM's training dataset. RetrieveChat uses the `AssistantAgent` and `RetrieveUserProxyAgent`, which is similar to the usage of `AssistantAgent` and `UserProxyAgent` in other notebooks (e.g., [Automated Task Solving with Code Generation, Execution & Debugging](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_auto_feedback_from_code_execution.ipynb)).\n", "\n", "We'll demonstrate usage of RetrieveChat with Qdrant for code generation and question answering w/ human feedback.\n", "\n", diff --git a/notebook/agentchat_agentoptimizer.ipynb b/notebook/agentchat_agentoptimizer.ipynb index 7de418b5ee70..ea3a0d88e65d 100644 --- a/notebook/agentchat_agentoptimizer.ipynb +++ b/notebook/agentchat_agentoptimizer.ipynb @@ -53,7 +53,7 @@ "source": [ "# MathUserProxy with function_call\n", "\n", - "This agent is a customized MathUserProxy inherits from its [parent class](https://github.com/microsoft/autogen/blob/main/autogen/agentchat/contrib/math_user_proxy_agent.py).\n", + "This agent is a customized MathUserProxy inherits from its [parent class](https://github.com/microsoft/autogen/blob/0.2/autogen/agentchat/contrib/math_user_proxy_agent.py).\n", "\n", "It supports using both function_call and python to solve math problems.\n" ] diff --git a/notebook/agentchat_cost_token_tracking.ipynb b/notebook/agentchat_cost_token_tracking.ipynb index 17106e7c9385..0d456daa46bd 100644 --- a/notebook/agentchat_cost_token_tracking.ipynb +++ b/notebook/agentchat_cost_token_tracking.ipynb @@ -109,7 +109,7 @@ "]\n", "```\n", "\n", - "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." + "You can set the value of config_list in any way you prefer. 
Please refer to this [notebook](https://github.com/microsoft/autogen/blob/0.2/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." ] }, { diff --git a/notebook/agentchat_custom_model.ipynb b/notebook/agentchat_custom_model.ipynb index 773247ee0b91..364d81517a17 100644 --- a/notebook/agentchat_custom_model.ipynb +++ b/notebook/agentchat_custom_model.ipynb @@ -238,7 +238,7 @@ "]\n", "```\n", "\n", - "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/notebook/oai_openai_utils.ipynb) for full code examples of the different methods." + "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/oai_openai_utils.ipynb) for full code examples of the different methods." ] }, { diff --git a/notebook/agentchat_function_call.ipynb b/notebook/agentchat_function_call.ipynb index ff94c0d4fb0a..e341aecaead7 100644 --- a/notebook/agentchat_function_call.ipynb +++ b/notebook/agentchat_function_call.ipynb @@ -104,7 +104,7 @@ "]\n", "```\n", "\n", - "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." + "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/0.2/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." ] }, { diff --git a/notebook/agentchat_function_call_currency_calculator.ipynb b/notebook/agentchat_function_call_currency_calculator.ipynb index 36ef81d5edbe..d6d9e05826f2 100644 --- a/notebook/agentchat_function_call_currency_calculator.ipynb +++ b/notebook/agentchat_function_call_currency_calculator.ipynb @@ -104,7 +104,7 @@ "]\n", "```\n", "\n", - "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." + "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/0.2/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." ] }, { diff --git a/notebook/agentchat_human_feedback.ipynb b/notebook/agentchat_human_feedback.ipynb index 3c21c7c9f4f5..77b09fe0e8d5 100644 --- a/notebook/agentchat_human_feedback.ipynb +++ b/notebook/agentchat_human_feedback.ipynb @@ -102,7 +102,7 @@ "]\n", "```\n", "\n", - "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." + "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/0.2/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." ] }, { diff --git a/notebook/agentchat_langchain.ipynb b/notebook/agentchat_langchain.ipynb index 1d5fea19e066..bb4805cb3558 100644 --- a/notebook/agentchat_langchain.ipynb +++ b/notebook/agentchat_langchain.ipynb @@ -139,7 +139,7 @@ "]\n", "```\n", "\n", - "You can set the value of config_list in any way you prefer. 
Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." + "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/0.2/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." ] }, { diff --git a/notebook/agentchat_oai_assistant_retrieval.ipynb b/notebook/agentchat_oai_assistant_retrieval.ipynb index ec7e41a00ba8..fec9605161a1 100644 --- a/notebook/agentchat_oai_assistant_retrieval.ipynb +++ b/notebook/agentchat_oai_assistant_retrieval.ipynb @@ -6,7 +6,7 @@ "source": [ "## RAG OpenAI Assistants in AutoGen\n", "\n", - "This notebook shows an example of the [`GPTAssistantAgent`](https://github.com/microsoft/autogen/blob/main/autogen/agentchat/contrib/gpt_assistant_agent.py#L16C43-L16C43) with retrieval augmented generation. `GPTAssistantAgent` is an experimental AutoGen agent class that leverages the [OpenAI Assistant API](https://platform.openai.com/docs/assistants/overview) for conversational capabilities, working with\n", + "This notebook shows an example of the [`GPTAssistantAgent`](https://github.com/microsoft/autogen/blob/0.2/autogen/agentchat/contrib/gpt_assistant_agent.py#L16C43-L16C43) with retrieval augmented generation. `GPTAssistantAgent` is an experimental AutoGen agent class that leverages the [OpenAI Assistant API](https://platform.openai.com/docs/assistants/overview) for conversational capabilities, working with\n", "`UserProxyAgent` in AutoGen." ] }, diff --git a/notebook/agentchat_oai_assistant_twoagents_basic.ipynb b/notebook/agentchat_oai_assistant_twoagents_basic.ipynb index 94da2358c986..295784bea2cc 100644 --- a/notebook/agentchat_oai_assistant_twoagents_basic.ipynb +++ b/notebook/agentchat_oai_assistant_twoagents_basic.ipynb @@ -6,7 +6,7 @@ "source": [ "## OpenAI Assistants in AutoGen\n", "\n", - "This notebook shows a very basic example of the [`GPTAssistantAgent`](https://github.com/microsoft/autogen/blob/main/autogen/agentchat/contrib/gpt_assistant_agent.py#L16C43-L16C43), which is an experimental AutoGen agent class that leverages the [OpenAI Assistant API](https://platform.openai.com/docs/assistants/overview) for conversational capabilities, working with\n", + "This notebook shows a very basic example of the [`GPTAssistantAgent`](https://github.com/microsoft/autogen/blob/0.2/autogen/agentchat/contrib/gpt_assistant_agent.py#L16C43-L16C43), which is an experimental AutoGen agent class that leverages the [OpenAI Assistant API](https://platform.openai.com/docs/assistants/overview) for conversational capabilities, working with\n", "`UserProxyAgent` in AutoGen." ] }, diff --git a/notebook/agentchat_planning.ipynb b/notebook/agentchat_planning.ipynb index 9953bf3b57f0..14fed1c29e0d 100644 --- a/notebook/agentchat_planning.ipynb +++ b/notebook/agentchat_planning.ipynb @@ -105,7 +105,7 @@ "]\n", "```\n", "\n", - "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/notebook/oai_openai_utils.ipynb) for full code examples of the different methods.\n", + "You can set the value of config_list in any way you prefer. 
Please refer to this [notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/oai_openai_utils.ipynb) for full code examples of the different methods.\n", "\n", "## Construct Agents\n", "\n", diff --git a/notebook/agentchat_stream.ipynb b/notebook/agentchat_stream.ipynb index 5536cef96aa0..d39895827277 100644 --- a/notebook/agentchat_stream.ipynb +++ b/notebook/agentchat_stream.ipynb @@ -102,7 +102,7 @@ "]\n", "```\n", "\n", - "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." + "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/0.2/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." ] }, { diff --git a/notebook/agentchat_surfer.ipynb b/notebook/agentchat_surfer.ipynb index 09c9cb3d4f8d..18be350a41f5 100644 --- a/notebook/agentchat_surfer.ipynb +++ b/notebook/agentchat_surfer.ipynb @@ -479,7 +479,7 @@ "#### Multi-Agent Conversation Framework[​](#multi-agent-conversation-framework \"Direct link to Multi-Agent Conversation Framework\")\n", "\n", "Autogen enables the next-gen LLM applications with a generic multi-agent conversation framework. It offers customizable and conversable agents which integrate LLMs, tools, and humans.\n", - "By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code. For [example](https://github.com/microsoft/autogen/blob/main/test/twoagent.py),\n", + "By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code. For [example](https://github.com/microsoft/autogen/blob/0.2/test/twoagent.py),\n", "\n", "The figure below shows an example conversation flow with AutoGen.\n", "\n", diff --git a/notebook/agentchat_teachability.ipynb b/notebook/agentchat_teachability.ipynb index 4be1e135dab1..dd41727a02e5 100644 --- a/notebook/agentchat_teachability.ipynb +++ b/notebook/agentchat_teachability.ipynb @@ -13,7 +13,7 @@ "\n", "In making decisions about memo storage and retrieval, `Teachability` calls an instance of `TextAnalyzerAgent` to analyze pieces of text in several different ways. This adds extra LLM calls involving a relatively small number of tokens. These calls can add a few seconds to the time a user waits for a response.\n", "\n", - "This notebook demonstrates how `Teachability` can be added to an agent so that it can learn facts, preferences, and skills from users. To chat with a teachable agent yourself, run [chat_with_teachable_agent.py](https://github.com/microsoft/autogen/blob/main/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py).\n", + "This notebook demonstrates how `Teachability` can be added to an agent so that it can learn facts, preferences, and skills from users. 
To chat with a teachable agent yourself, run [chat_with_teachable_agent.py](https://github.com/microsoft/autogen/blob/0.2/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py).\n", "\n", "## Requirements\n", "\n", diff --git a/notebook/agentchat_teachable_oai_assistants.ipynb b/notebook/agentchat_teachable_oai_assistants.ipynb index 75744d9a397a..be37b0d7b5cd 100644 --- a/notebook/agentchat_teachable_oai_assistants.ipynb +++ b/notebook/agentchat_teachable_oai_assistants.ipynb @@ -860,7 +860,7 @@ "metadata": {}, "source": [ "### Create the OpenAI Assistant with retrieval as a tool\n", - "For this example, first upload the [conversable_agent.py](https://github.com/microsoft/autogen/blob/main/autogen/agentchat/conversable_agent.py) file to your OpenAI API account. This can be done manually through the [website](https://platform.openai.com/assistants). Then find the uploaded File ID on the [Files page](https://platform.openai.com/files), and paste that ID into the `file_ids` list in the code below." + "For this example, first upload the [conversable_agent.py](https://github.com/microsoft/autogen/blob/0.2/autogen/agentchat/conversable_agent.py) file to your OpenAI API account. This can be done manually through the [website](https://platform.openai.com/assistants). Then find the uploaded File ID on the [Files page](https://platform.openai.com/files), and paste that ID into the `file_ids` list in the code below." ] }, { diff --git a/notebook/agentchat_two_users.ipynb b/notebook/agentchat_two_users.ipynb index 14b95b537013..c4fc85861f52 100644 --- a/notebook/agentchat_two_users.ipynb +++ b/notebook/agentchat_two_users.ipynb @@ -82,7 +82,7 @@ "]\n", "```\n", "\n", - "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." + "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/0.2/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." ] }, { diff --git a/notebook/agentchat_video_transcript_translate_with_whisper.ipynb b/notebook/agentchat_video_transcript_translate_with_whisper.ipynb index e19e61419ad3..51ad4c6268aa 100644 --- a/notebook/agentchat_video_transcript_translate_with_whisper.ipynb +++ b/notebook/agentchat_video_transcript_translate_with_whisper.ipynb @@ -8,7 +8,7 @@ "# Translating Video audio using Whisper and GPT-3.5-turbo\n", "\n", "In this notebook, we demonstrate how to use whisper and GPT-3.5-turbo with `AssistantAgent` and `UserProxyAgent` to recognize and translate\n", - "the speech sound from a video file and add the timestamp like a subtitle file based on [agentchat_function_call.ipynb](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_function_call.ipynb)\n" + "the speech sound from a video file and add the timestamp like a subtitle file based on [agentchat_function_call.ipynb](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_function_call.ipynb)\n" ] }, { diff --git a/notebook/agentchat_web_info.ipynb b/notebook/agentchat_web_info.ipynb index e2962e9049fb..47660502fa76 100644 --- a/notebook/agentchat_web_info.ipynb +++ b/notebook/agentchat_web_info.ipynb @@ -116,7 +116,7 @@ "]\n", "```\n", "\n", - "You can set the value of config_list in any way you prefer. 
Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/notebook/oai_openai_utils.ipynb) for full code examples of the different methods." + "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/oai_openai_utils.ipynb) for full code examples of the different methods." ] }, { diff --git a/notebook/agenteval_cq_math.ipynb b/notebook/agenteval_cq_math.ipynb index 199967a91080..a67b0ae992ca 100644 --- a/notebook/agenteval_cq_math.ipynb +++ b/notebook/agenteval_cq_math.ipynb @@ -65,7 +65,7 @@ "## Set your API Endpoint\n", "* The [`config_list_from_json`](https://microsoft.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file. It first looks for an environment variable with a specified name. The value of the environment variable needs to be a valid json string. If that variable is not found, it looks for a json file with the same name. It filters the configs by filter_dict.\n", "\n", - "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/notebook/oai_openai_utils.ipynb) for full code examples of the different methods.\n" + "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/oai_openai_utils.ipynb) for full code examples of the different methods.\n" ] }, { diff --git a/website/blog/2023-06-28-MathChat/index.mdx b/website/blog/2023-06-28-MathChat/index.mdx index be2423de9eed..2521065a6252 100644 --- a/website/blog/2023-06-28-MathChat/index.mdx +++ b/website/blog/2023-06-28-MathChat/index.mdx @@ -76,7 +76,7 @@ We found that compared to basic prompting, which demonstrates the innate capabil For categories like Algebra and Prealgebra, PoT and PS showed little improvement, and in some instances, even led to a decrease in accuracy. However, MathChat was able to enhance total accuracy by around 6% compared to PoT and PS, showing competitive performance across all categories. Remarkably, MathChat improved accuracy in the Algebra category by about 15% over other methods. Note that categories like Intermediate Algebra and Precalculus remained challenging for all methods, with only about 20% of problems solved accurately. The code for experiments can be found at this [repository](https://github.com/yiranwu0/FLAML/tree/gpt_math_solver/flaml/autogen/math). -We now provide an implementation of MathChat using the interactive agents in AutoGen. See this [notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_MathChat.ipynb) for example usage. +We now provide an implementation of MathChat using the interactive agents in AutoGen. See this [notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_MathChat.ipynb) for example usage. ## Future Directions diff --git a/website/blog/2023-10-18-RetrieveChat/index.mdx b/website/blog/2023-10-18-RetrieveChat/index.mdx index 4bad582eb5df..9a6c23787231 100644 --- a/website/blog/2023-10-18-RetrieveChat/index.mdx +++ b/website/blog/2023-10-18-RetrieveChat/index.mdx @@ -206,7 +206,7 @@ We are using chromadb as the default vector database, you can also use mongodb, by simply set `vector_db` to `mongodb`, `pgvector`, `qdrant` and `couchbase` in `retrieve_config`, respectively. 
To plugin any other dbs, you can also extend class `agentchat.contrib.vectordb.base`, -check out the code [here](https://github.com/microsoft/autogen/blob/main/autogen/agentchat/contrib/vectordb/base.py). +check out the code [here](https://github.com/microsoft/autogen/blob/0.2/autogen/agentchat/contrib/vectordb/base.py). ## Advanced Usage of RAG Agents @@ -401,8 +401,8 @@ The online app and the source code are hosted in [HuggingFace](https://huggingfa ## Read More You can check out more example notebooks for RAG use cases: -- [Automated Code Generation and Question Answering with Retrieval Augmented Agents](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat.ipynb) -- [Group Chat with Retrieval Augmented Generation (with 5 group member agents and 1 manager agent)](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_groupchat_RAG.ipynb) -- [Using RetrieveChat with Qdrant for Retrieve Augmented Code Generation and Question Answering](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat_qdrant.ipynb) -- [Using RetrieveChat Powered by PGVector for Retrieve Augmented Code Generation and Question Answering](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat_pgvector.ipynb) -- [Using RetrieveChat Powered by MongoDB Atlas for Retrieve Augmented Code Generation and Question Answering](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat_mongodb.ipynb) +- [Automated Code Generation and Question Answering with Retrieval Augmented Agents](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat.ipynb) +- [Group Chat with Retrieval Augmented Generation (with 5 group member agents and 1 manager agent)](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_groupchat_RAG.ipynb) +- [Using RetrieveChat with Qdrant for Retrieve Augmented Code Generation and Question Answering](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_qdrant.ipynb) +- [Using RetrieveChat Powered by PGVector for Retrieve Augmented Code Generation and Question Answering](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_pgvector.ipynb) +- [Using RetrieveChat Powered by MongoDB Atlas for Retrieve Augmented Code Generation and Question Answering](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_mongodb.ipynb) diff --git a/website/blog/2023-10-26-TeachableAgent/index.mdx b/website/blog/2023-10-26-TeachableAgent/index.mdx index f097acc73726..9ac57fa9995f 100644 --- a/website/blog/2023-10-26-TeachableAgent/index.mdx +++ b/website/blog/2023-10-26-TeachableAgent/index.mdx @@ -24,13 +24,13 @@ In order to make effective decisions about memo storage and retrieval, the `Teac AutoGen contains four code examples that use `Teachability`. -1. Run [chat_with_teachable_agent.py](https://github.com/microsoft/autogen/blob/main/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py) to converse with a teachable agent. +1. Run [chat_with_teachable_agent.py](https://github.com/microsoft/autogen/blob/0.2/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py) to converse with a teachable agent. -2. Run [test_teachable_agent.py](https://github.com/microsoft/autogen/blob/main/test/agentchat/contrib/capabilities/test_teachable_agent.py) for quick unit testing of a teachable agent. +2. 
Run [test_teachable_agent.py](https://github.com/microsoft/autogen/blob/0.2/test/agentchat/contrib/capabilities/test_teachable_agent.py) for quick unit testing of a teachable agent. -3. Use the Jupyter notebook [agentchat_teachability.ipynb](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_teachability.ipynb) to step through examples discussed below. +3. Use the Jupyter notebook [agentchat_teachability.ipynb](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_teachability.ipynb) to step through examples discussed below. -4. Use the Jupyter notebook [agentchat_teachable_oai_assistants.ipynb](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_teachable_oai_assistants.ipynb) to make arbitrary OpenAI Assistants teachable through `GPTAssistantAgent`. +4. Use the Jupyter notebook [agentchat_teachable_oai_assistants.ipynb](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_teachable_oai_assistants.ipynb) to make arbitrary OpenAI Assistants teachable through `GPTAssistantAgent`. ## Basic Usage of Teachability diff --git a/website/blog/2023-11-06-LMM-Agent/index.mdx b/website/blog/2023-11-06-LMM-Agent/index.mdx index 0ab92c4dbb90..6613f794ee3d 100644 --- a/website/blog/2023-11-06-LMM-Agent/index.mdx +++ b/website/blog/2023-11-06-LMM-Agent/index.mdx @@ -9,8 +9,8 @@ tags: [LMM, multimodal] **In Brief:** * Introducing the **Multimodal Conversable Agent** and the **LLaVA Agent** to enhance LMM functionalities. * Users can input text and images simultaneously using the `` tag to specify image loading. -* Demonstrated through the [GPT-4V notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_lmm_gpt-4v.ipynb). -* Demonstrated through the [LLaVA notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_lmm_llava.ipynb). +* Demonstrated through the [GPT-4V notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_lmm_gpt-4v.ipynb). +* Demonstrated through the [LLaVA notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_lmm_llava.ipynb). ## Introduction Large multimodal models (LMMs) augment large language models (LLMs) with the ability to process multi-sensory data. @@ -62,7 +62,7 @@ The `MultimodalConversableAgent` interprets the input prompt, extracting images ## Advanced Usage Similar to other AutoGen agents, multimodal agents support multi-round dialogues with other agents, code generation, factual queries, and management via a GroupChat interface. -For example, the `FigureCreator` in our [GPT-4V notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_lmm_gpt-4v.ipynb) and [LLaVA notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_lmm_llava.ipynb) integrates two agents: a coder (an AssistantAgent) and critics (a multimodal agent). +For example, the `FigureCreator` in our [GPT-4V notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_lmm_gpt-4v.ipynb) and [LLaVA notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_lmm_llava.ipynb) integrates two agents: a coder (an AssistantAgent) and critics (a multimodal agent). The coder drafts Python code for visualizations, while the critics provide insights for enhancement. Collaboratively, these agents aim to refine visual outputs. With `human_input_mode=ALWAYS`, you can also contribute suggestions for better visualizations. 
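
For readers who want a feel for what the multimodal agents described above look like in code, here is a minimal sketch assuming the 0.2-series API; the vision model name, the `OAI_CONFIG_LIST` file, and the image URL are placeholders rather than values taken from the notebooks:

```python
# Sketch only: a user proxy asks a vision-capable multimodal agent about an image
# referenced inline with the <img ...> tag. Assumes pyautogen installed with the lmm extra.
import autogen
from autogen.agentchat.contrib.multimodal_conversable_agent import MultimodalConversableAgent

# Assumption: OAI_CONFIG_LIST contains an entry for a vision-capable model.
config_list_4v = autogen.config_list_from_json(
    "OAI_CONFIG_LIST", filter_dict={"model": ["gpt-4-vision-preview"]}
)

image_agent = MultimodalConversableAgent(
    name="image-explainer",
    llm_config={"config_list": config_list_4v, "temperature": 0.5, "max_tokens": 300},
)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",  # "ALWAYS" lets you interject suggestions, as noted above
    code_execution_config=False,
    max_consecutive_auto_reply=0,
)

# Placeholder URL: point this at any publicly reachable image.
user_proxy.initiate_chat(
    image_agent,
    message="Describe what you see here: <img https://example.com/sample-figure.png>.",
)
```
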
diff --git a/website/blog/2023-11-13-OAI-assistants/index.mdx b/website/blog/2023-11-13-OAI-assistants/index.mdx index 2fc9bee359b8..ae8073285a43 100644 --- a/website/blog/2023-11-13-OAI-assistants/index.mdx +++ b/website/blog/2023-11-13-OAI-assistants/index.mdx @@ -9,12 +9,12 @@ tags: [openai-assistant] ## TL;DR -OpenAI assistants are now integrated into AutoGen via [`GPTAssistantAgent`](https://github.com/microsoft/autogen/blob/main/autogen/agentchat/contrib/gpt_assistant_agent.py). +OpenAI assistants are now integrated into AutoGen via [`GPTAssistantAgent`](https://github.com/microsoft/autogen/blob/0.2/autogen/agentchat/contrib/gpt_assistant_agent.py). This enables multiple OpenAI assistants, which form the backend of the now popular GPTs, to collaborate and tackle complex tasks. Checkout example notebooks for reference: -* [Basic example](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_twoagents_basic.ipynb) -* [Code interpreter](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_code_interpreter.ipynb) -* [Function calls](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_function_call.ipynb) +* [Basic example](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_oai_assistant_twoagents_basic.ipynb) +* [Code interpreter](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_oai_code_interpreter.ipynb) +* [Function calls](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_oai_assistant_function_call.ipynb) ## Introduction diff --git a/website/blog/2023-11-20-AgentEval/index.mdx b/website/blog/2023-11-20-AgentEval/index.mdx index 1abb9e6c9f8d..ac7fab7dc8f7 100644 --- a/website/blog/2023-11-20-AgentEval/index.mdx +++ b/website/blog/2023-11-20-AgentEval/index.mdx @@ -14,7 +14,7 @@ tags: [LLM, GPT, evaluation, task utility] **TL;DR:** * As a developer of an LLM-powered application, how can you assess the utility it brings to end users while helping them with their tasks? * To shed light on the question above, we introduce `AgentEval` — the first version of the framework to assess the utility of any LLM-powered application crafted to assist users in specific tasks. AgentEval aims to simplify the evaluation process by automatically proposing a set of criteria tailored to the unique purpose of your application. This allows for a comprehensive assessment, quantifying the utility of your application against the suggested criteria. -* We demonstrate how `AgentEval` work using [math problems dataset](https://microsoft.github.io/autogen/blog/2023/06/28/MathChat) as an example in the [following notebook](https://github.com/microsoft/autogen/blob/main/notebook/agenteval_cq_math.ipynb). Any feedback would be useful for future development. Please contact us on our [Discord](http://aka.ms/autogen-dc). +* We demonstrate how `AgentEval` work using [math problems dataset](https://microsoft.github.io/autogen/blog/2023/06/28/MathChat) as an example in the [following notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agenteval_cq_math.ipynb). Any feedback would be useful for future development. Please contact us on our [Discord](http://aka.ms/autogen-dc). ## Introduction @@ -54,7 +54,7 @@ critic = autogen.AssistantAgent( ) ``` -Next, the critic is given successful and failed examples of the task execution; then, it is able to return a list of criteria (Fig. 1). 
For reference, use the [following notebook](https://github.com/microsoft/autogen/blob/main/notebook/agenteval_cq_math.ipynb). +Next, the critic is given successful and failed examples of the task execution; then, it is able to return a list of criteria (Fig. 1). For reference, use the [following notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agenteval_cq_math.ipynb). * The goal of `QuantifierAgent` is to quantify each of the suggested criteria (Fig. 1), providing us with an idea of the utility of this system for the given task. Here is an example of how it can be defined: diff --git a/website/blog/2023-11-26-Agent-AutoBuild/index.mdx b/website/blog/2023-11-26-Agent-AutoBuild/index.mdx index a6d0025699d3..c7329a4b1920 100644 --- a/website/blog/2023-11-26-Agent-AutoBuild/index.mdx +++ b/website/blog/2023-11-26-Agent-AutoBuild/index.mdx @@ -14,8 +14,8 @@ user prompt required, powered by a new designed class **AgentBuilder**. AgentBui leveraging [vLLM](https://docs.vllm.ai/en/latest/index.html) and [FastChat](https://github.com/lm-sys/FastChat). Checkout example notebooks and source code for reference: -- [AutoBuild Examples](https://github.com/microsoft/autogen/blob/main/notebook/autobuild_basic.ipynb) -- [AgentBuilder](https://github.com/microsoft/autogen/blob/main/autogen/agentchat/contrib/agent_builder.py) +- [AutoBuild Examples](https://github.com/microsoft/autogen/blob/0.2/notebook/autobuild_basic.ipynb) +- [AgentBuilder](https://github.com/microsoft/autogen/blob/0.2/autogen/agentchat/contrib/agent_builder.py) ## Introduction In this blog, we introduce **AutoBuild**, a pipeline that can automatically build multi-agent systems for complex tasks. diff --git a/website/blog/2023-12-01-AutoGenStudio/index.mdx b/website/blog/2023-12-01-AutoGenStudio/index.mdx index a2558acb01d6..b379e4520458 100644 --- a/website/blog/2023-12-01-AutoGenStudio/index.mdx +++ b/website/blog/2023-12-01-AutoGenStudio/index.mdx @@ -139,7 +139,7 @@ This section focuses on defining the properties of agents and agent workflows. I

-**Agents**: This provides an interface to declaratively specify properties for an AutoGen agent (mirrors most of the members of a base [AutoGen conversable agent](https://github.com/microsoft/autogen/blob/main/autogen/agentchat/conversable_agent.py) class). +**Agents**: This provides an interface to declaratively specify properties for an AutoGen agent (mirrors most of the members of a base [AutoGen conversable agent](https://github.com/microsoft/autogen/blob/0.2/autogen/agentchat/conversable_agent.py) class). **Agent Workflows**: An agent workflow is a specification of a set of agents that can work together to accomplish a task. The simplest version of this is a setup with two agents – a user proxy agent (that represents a user i.e. it compiles code and prints result) and an assistant that can address task requests (e.g., generating plans, writing code, evaluating responses, proposing error recovery steps, etc.). A more complex flow could be a group chat where even more agents work towards a solution. diff --git a/website/blog/2023-12-23-AgentOptimizer/index.mdx b/website/blog/2023-12-23-AgentOptimizer/index.mdx index ce92952cfb6f..8c709e931f3d 100644 --- a/website/blog/2023-12-23-AgentOptimizer/index.mdx +++ b/website/blog/2023-12-23-AgentOptimizer/index.mdx @@ -17,7 +17,7 @@ More information could be found in: **Paper**: https://arxiv.org/abs/2402.11359. -**Notebook**: https://github.com/microsoft/autogen/blob/main/notebook/agentchat_agentoptimizer.ipynb. +**Notebook**: https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_agentoptimizer.ipynb. ## Introduction In the traditional ML pipeline, we train a model by updating its weights according to the loss on the training set, while in the era of LLM agents, how should we train an agent? diff --git a/website/blog/2024-01-25-AutoGenBench/index.mdx b/website/blog/2024-01-25-AutoGenBench/index.mdx index 3f4b2d4f216b..cd58ecee6436 100644 --- a/website/blog/2024-01-25-AutoGenBench/index.mdx +++ b/website/blog/2024-01-25-AutoGenBench/index.mdx @@ -21,8 +21,8 @@ Today we are releasing AutoGenBench - a tool for evaluating AutoGen agents and w AutoGenBench is a standalone command line tool, installable from PyPI, which handles downloading, configuring, running, and reporting supported benchmarks. AutoGenBench works best when run alongside Docker, since it uses Docker to isolate tests from one another. -- See the [AutoGenBench README](https://github.com/microsoft/autogen/blob/main/samples/tools/autogenbench/README.md) for information on installation and running benchmarks. -- See the [AutoGenBench CONTRIBUTING guide](https://github.com/microsoft/autogen/blob/main/samples/tools/autogenbench/CONTRIBUTING.md) for information on developing or contributing benchmark datasets. +- See the [AutoGenBench README](https://github.com/microsoft/autogen/blob/0.2/samples/tools/autogenbench/README.md) for information on installation and running benchmarks. +- See the [AutoGenBench CONTRIBUTING guide](https://github.com/microsoft/autogen/blob/0.2/samples/tools/autogenbench/CONTRIBUTING.md) for information on developing or contributing benchmark datasets. ### Quick Start @@ -125,7 +125,7 @@ Please do not cite these values in academic work without first inspecting and ve From this output we can see the results of the three separate repetitions of each task, and final summary statistics of each run. In this case, the results were generated via GPT-4 (as defined in the OAI_CONFIG_LIST that was provided), and used the `TwoAgents` template. 
**It is important to remember that AutoGenBench evaluates _specific_ end-to-end configurations of agents (as opposed to evaluating a model or cognitive framework more generally).** -Finally, complete execution traces and logs can be found in the `Results` folder. See the [AutoGenBench README](https://github.com/microsoft/autogen/blob/main/samples/tools/autogenbench/README.md) for more details about command-line options and output formats. Each of these commands also offers extensive in-line help via: +Finally, complete execution traces and logs can be found in the `Results` folder. See the [AutoGenBench README](https://github.com/microsoft/autogen/blob/0.2/samples/tools/autogenbench/README.md) for more details about command-line options and output formats. Each of these commands also offers extensive in-line help via: - `autogenbench --help` - `autogenbench clone --help` @@ -145,4 +145,4 @@ For an up to date tracking of our work items on this project, please see [AutoGe ## Call for Participation -Finally, we want to end this blog post with an open call for contributions. AutoGenBench is still nascent, and has much opportunity for improvement. New benchmarks are constantly being published, and will need to be added. Everyone may have their own distinct set of metrics that they care most about optimizing, and these metrics should be onboarded. To this end, we welcome any and all contributions to this corner of the AutoGen project. If contributing is something that interests you, please see the [contributor’s guide](https://github.com/microsoft/autogen/blob/main/samples/tools/autogenbench/CONTRIBUTING.md) and join our [Discord](https://aka.ms/autogen-dc) discussion in the [#autogenbench](https://discord.com/channels/1153072414184452236/1199851779328847902) channel! +Finally, we want to end this blog post with an open call for contributions. AutoGenBench is still nascent, and has much opportunity for improvement. New benchmarks are constantly being published, and will need to be added. Everyone may have their own distinct set of metrics that they care most about optimizing, and these metrics should be onboarded. To this end, we welcome any and all contributions to this corner of the AutoGen project. If contributing is something that interests you, please see the [contributor’s guide](https://github.com/microsoft/autogen/blob/0.2/samples/tools/autogenbench/CONTRIBUTING.md) and join our [Discord](https://aka.ms/autogen-dc) discussion in the [#autogenbench](https://discord.com/channels/1153072414184452236/1199851779328847902) channel! diff --git a/website/blog/2024-01-26-Custom-Models/index.mdx b/website/blog/2024-01-26-Custom-Models/index.mdx index 81a9ad383d3e..e0d43c2ac0ac 100644 --- a/website/blog/2024-01-26-Custom-Models/index.mdx +++ b/website/blog/2024-01-26-Custom-Models/index.mdx @@ -13,7 +13,7 @@ AutoGen now supports custom models! This feature empowers users to define and lo ## Quickstart -An interactive and easy way to get started is by following the notebook [here](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_custom_model.ipynb) which loads a local model from HuggingFace into AutoGen and uses it for inference, and making changes to the class provided. +An interactive and easy way to get started is by following the notebook [here](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_custom_model.ipynb) which loads a local model from HuggingFace into AutoGen and uses it for inference, and making changes to the class provided. 
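
Before the step-by-step walkthrough that follows, a rough sketch of the shape such a custom client takes may help orient readers. This assumes the 0.2-series model-client protocol; the class name, model name, and placeholder reply are illustrative and not the notebook's own implementation.

```python
# Sketch of a custom model client (illustrative, not the notebook's implementation).
# It is later attached to an agent whose config entry lists
# "model_client_cls": "CustomModelClient", via assistant.register_model_client(...).
from types import SimpleNamespace


class CustomModelClient:
    def __init__(self, config, **kwargs):
        self.model_name = config["model"]  # e.g. a local HuggingFace model id

    def create(self, params):
        # Run your local model here and wrap the text in an OpenAI-style response object.
        text = "(local model output would go here)"
        return SimpleNamespace(
            model=self.model_name,
            choices=[SimpleNamespace(message=SimpleNamespace(content=text))],
        )

    def message_retrieval(self, response):
        # Return the list of generated strings for the framework to consume.
        return [choice.message.content for choice in response.choices]

    def cost(self, response):
        return 0  # local inference: no API cost to report

    @staticmethod
    def get_usage(response):
        return {}  # optionally report prompt/completion token counts and cost
```
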
### Step 1: Create the custom model client class diff --git a/website/blog/2024-03-03-AutoGen-Update/index.mdx b/website/blog/2024-03-03-AutoGen-Update/index.mdx index 7458ec74e053..f1128b523680 100644 --- a/website/blog/2024-03-03-AutoGen-Update/index.mdx +++ b/website/blog/2024-03-03-AutoGen-Update/index.mdx @@ -138,7 +138,7 @@ The community has been working hard to address them in several dimensions: We are working on agent-based evaluation tools and benchmarking tools. For example: -- [AgentEval](/blog/2023/11/20/AgentEval). Our [research](https://arxiv.org/abs/2402.09015) finds that LLM agents built with AutoGen can be used to automatically identify evaluation criteria and assess the performance from task descriptions and execution logs. It is demonstrated as a [notebook example](https://github.com/microsoft/autogen/blob/main/notebook/agenteval_cq_math.ipynb). Feedback and help are welcome for building it into the library. +- [AgentEval](/blog/2023/11/20/AgentEval). Our [research](https://arxiv.org/abs/2402.09015) finds that LLM agents built with AutoGen can be used to automatically identify evaluation criteria and assess the performance from task descriptions and execution logs. It is demonstrated as a [notebook example](https://github.com/microsoft/autogen/blob/0.2/notebook/agenteval_cq_math.ipynb). Feedback and help are welcome for building it into the library. - [AutoGenBench](/blog/2024/01/25/AutoGenBench). AutoGenBench is a commandline tool for downloading, configuring, running an agentic benchmark, and reporting results. It is designed to allow repetition, isolation and instrumentation, leveraging the new [runtime logging](/docs/notebooks/agentchat_logging) feature. These tools have been used for improving the AutoGen library as well as applications. For example, the new state-of-the-art performance achieved by a multi-agent solution to the [GAIA](https://huggingface.co/spaces/gaia-benchmark/leaderboard) benchmark has benefited from these evaluation tools. @@ -147,7 +147,7 @@ These tools have been used for improving the AutoGen library as well as applicat We are making rapid progress in further improving the interface to make it even easier to build agent applications. For example: -- [AutoBuild](/blog/2023/11/26/Agent-AutoBuild). AutoBuild is an ongoing area of research to automatically create or select a group of agents for a given task and objective. If successful, it will greatly reduce the effort from users or developers when using the multi-agent technology. It also paves the way for agentic decomposition to handle complex tasks. It is available as an experimental feature and demonstrated in two modes: free-form [creation](https://github.com/microsoft/autogen/blob/main/notebook/autobuild_basic.ipynb) and [selection](https://github.com/microsoft/autogen/blob/main/notebook/autobuild_agent_library.ipynb) from a library. +- [AutoBuild](/blog/2023/11/26/Agent-AutoBuild). AutoBuild is an ongoing area of research to automatically create or select a group of agents for a given task and objective. If successful, it will greatly reduce the effort from users or developers when using the multi-agent technology. It also paves the way for agentic decomposition to handle complex tasks. It is available as an experimental feature and demonstrated in two modes: free-form [creation](https://github.com/microsoft/autogen/blob/0.2/notebook/autobuild_basic.ipynb) and [selection](https://github.com/microsoft/autogen/blob/0.2/notebook/autobuild_agent_library.ipynb) from a library. 
- [AutoGen Studio](/blog/2023/12/01/AutoGenStudio). AutoGen Studio is a no-code UI for fast experimentation with the multi-agent conversations. It lowers the barrier of entrance to the AutoGen technology. Models, agents, and workflows can all be configured without writing code. And chatting with multiple agents in a playground is immediately available after the configuration. Although only a subset of `autogen-agentchat` features are available in this sample app, it demonstrates a promising experience. It has generated tremendous excitement in the community. - Conversation Programming+. The [AutoGen paper](https://arxiv.org/abs/2308.08155) introduced a key concept of _Conversation Programming_, which can be used to program diverse conversation patterns such as 1-1 chat, group chat, hierarchical chat, nested chat etc. While we offered dynamic group chat as an example of high-level orchestration, it made other patterns relatively less discoverable. Therefore, we have added more convenient conversation programming features which enables easier definition of other types of complex workflow, such as [finite state machine based group chat](/blog/2024/02/11/FSM-GroupChat), [sequential chats](/docs/notebooks/agentchats_sequential_chats), and [nested chats](/docs/notebooks/agentchat_nestedchat). Many users have found them useful in implementing specific patterns, which have been always possible but more obvious with the added features. I will write another blog post for a deep dive. @@ -173,8 +173,8 @@ The extensible design of AutoGen makes it easy to integrate with new technologie The above only covers a subset of new features and roadmap. There are many other interesting new features, integration examples or sample apps: -- new features like stateful code execution, [tool decorators](/docs/Use-Cases/agent_chat#tool-calling), [long context handling](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_capability_long_context_handling.ipynb), [web agents](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_surfer.ipynb). -- integration examples like using [guidance](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_guidance.ipynb) to generate structured response. +- new features like stateful code execution, [tool decorators](/docs/Use-Cases/agent_chat#tool-calling), [long context handling](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_capability_long_context_handling.ipynb), [web agents](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_surfer.ipynb). +- integration examples like using [guidance](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_guidance.ipynb) to generate structured response. - sample apps like [AutoAnny](/blog/2024/02/02/AutoAnny). 
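
As a concrete illustration of the tool decorators mentioned in the feature list above, here is a minimal sketch; it assumes an `OAI_CONFIG_LIST` file is available, and the `multiply` tool itself is a made-up example:

```python
# Sketch: register a Python function as a tool with the two-decorator pattern.
from typing import Annotated

import autogen

config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")
assistant = autogen.AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = autogen.UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",
    is_termination_msg=lambda x: x.get("content", "") and x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config=False,
)


@user_proxy.register_for_execution()                               # this agent runs the tool
@assistant.register_for_llm(description="Multiply two integers.")  # this agent sees its schema
def multiply(a: Annotated[int, "first factor"], b: Annotated[int, "second factor"]) -> int:
    return a * b


user_proxy.initiate_chat(assistant, message="What is 1234 * 5678? Use the multiply tool.")
```
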
## Call for Help diff --git a/website/docs/Examples.md b/website/docs/Examples.md index 5efd71748f9d..550c2604659d 100644 --- a/website/docs/Examples.md +++ b/website/docs/Examples.md @@ -18,7 +18,7 @@ Links to notebook examples: - Automated Task Solving by Group Chat (with 3 group member agents and 1 manager agent) - [View Notebook](/docs/notebooks/agentchat_groupchat) - Automated Data Visualization by Group Chat (with 3 group member agents and 1 manager agent) - [View Notebook](/docs/notebooks/agentchat_groupchat_vis) - Automated Complex Task Solving by Group Chat (with 6 group member agents and 1 manager agent) - [View Notebook](/docs/notebooks/agentchat_groupchat_research) -- Automated Task Solving with Coding & Planning Agents - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_planning.ipynb) +- Automated Task Solving with Coding & Planning Agents - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_planning.ipynb) - Automated Task Solving with transition paths specified in a graph - [View Notebook](https://microsoft.github.io/autogen/docs/notebooks/agentchat_groupchat_finite_state_machine) - Running a group chat as an inner-monolgue via the SocietyOfMindAgent - [View Notebook](/docs/notebooks/agentchat_society_of_mind) - Running a group chat with custom speaker selection function - [View Notebook](/docs/notebooks/agentchat_groupchat_customized) @@ -38,73 +38,73 @@ Links to notebook examples: ### Applications -- Automated Continual Learning from New Data - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_stream.ipynb) +- Automated Continual Learning from New Data - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_stream.ipynb) - [OptiGuide](https://github.com/microsoft/optiguide) - Coding, Tool Using, Safeguarding & Question Answering for Supply Chain Optimization - [AutoAnny](https://github.com/microsoft/autogen/tree/main/samples/apps/auto-anny) - A Discord bot built using AutoGen ### Tool Use -- **Web Search**: Solve Tasks Requiring Web Info - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_web_info.ipynb) +- **Web Search**: Solve Tasks Requiring Web Info - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_web_info.ipynb) - Use Provided Tools as Functions - [View Notebook](/docs/notebooks/agentchat_function_call_currency_calculator) - Use Tools via Sync and Async Function Calling - [View Notebook](/docs/notebooks/agentchat_function_call_async) -- Task Solving with Langchain Provided Tools as Functions - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_langchain.ipynb) +- Task Solving with Langchain Provided Tools as Functions - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_langchain.ipynb) - **RAG**: Group Chat with Retrieval Augmented Generation (with 5 group member agents and 1 manager agent) - [View Notebook](/docs/notebooks/agentchat_groupchat_RAG) -- Function Inception: Enable AutoGen agents to update/remove functions during conversations. - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_inception_function.ipynb) +- Function Inception: Enable AutoGen agents to update/remove functions during conversations. 
- [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_inception_function.ipynb) - Agent Chat with Whisper - [View Notebook](/docs/notebooks/agentchat_video_transcript_translate_with_whisper) -- Constrained Responses via Guidance - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_guidance.ipynb) -- Browse the Web with Agents - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_surfer.ipynb) -- **SQL**: Natural Language Text to SQL Query using the [Spider](https://yale-lily.github.io/spider) Text-to-SQL Benchmark - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_sql_spider.ipynb) +- Constrained Responses via Guidance - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_guidance.ipynb) +- Browse the Web with Agents - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_surfer.ipynb) +- **SQL**: Natural Language Text to SQL Query using the [Spider](https://yale-lily.github.io/spider) Text-to-SQL Benchmark - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_sql_spider.ipynb) - **Web Scraping**: Web Scraping with Apify - [View Notebook](/docs/notebooks/agentchat_webscraping_with_apify) -- **Write a software app, task by task, with specially designed functions.** - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_function_call_code_writing.ipynb). +- **Write a software app, task by task, with specially designed functions.** - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_function_call_code_writing.ipynb). ### Human Involvement -- Simple example in ChatGPT style [View example](https://github.com/microsoft/autogen/blob/main/samples/simple_chat.py) -- Auto Code Generation, Execution, Debugging and **Human Feedback** - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_human_feedback.ipynb) -- Automated Task Solving with GPT-4 + **Multiple Human Users** - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_two_users.ipynb) -- Agent Chat with **Async Human Inputs** - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/Async_human_input.ipynb) +- Simple example in ChatGPT style [View example](https://github.com/microsoft/autogen/blob/0.2/samples/simple_chat.py) +- Auto Code Generation, Execution, Debugging and **Human Feedback** - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_human_feedback.ipynb) +- Automated Task Solving with GPT-4 + **Multiple Human Users** - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_two_users.ipynb) +- Agent Chat with **Async Human Inputs** - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/Async_human_input.ipynb) ### Agent Teaching and Learning - Teach Agents New Skills & Reuse via Automated Chat - [View Notebook](/docs/notebooks/agentchat_teaching) - Teach Agents New Facts, User Preferences and Skills Beyond Coding - [View Notebook](/docs/notebooks/agentchat_teachability) -- Teach OpenAI Assistants Through GPTAssistantAgent - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_teachable_oai_assistants.ipynb) -- Agent Optimizer: Train Agents in an Agentic Way - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_agentoptimizer.ipynb) +- Teach OpenAI Assistants Through 
GPTAssistantAgent - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_teachable_oai_assistants.ipynb) +- Agent Optimizer: Train Agents in an Agentic Way - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_agentoptimizer.ipynb) ### Multi-Agent Chat with OpenAI Assistants in the loop -- Hello-World Chat with OpenAi Assistant in AutoGen - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_twoagents_basic.ipynb) -- Chat with OpenAI Assistant using Function Call - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_function_call.ipynb) -- Chat with OpenAI Assistant with Code Interpreter - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_code_interpreter.ipynb) -- Chat with OpenAI Assistant with Retrieval Augmentation - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_retrieval.ipynb) -- OpenAI Assistant in a Group Chat - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_groupchat.ipynb) -- GPTAssistantAgent based Multi-Agent Tool Use - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/gpt_assistant_agent_function_call.ipynb) +- Hello-World Chat with OpenAi Assistant in AutoGen - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_oai_assistant_twoagents_basic.ipynb) +- Chat with OpenAI Assistant using Function Call - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_oai_assistant_function_call.ipynb) +- Chat with OpenAI Assistant with Code Interpreter - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_oai_code_interpreter.ipynb) +- Chat with OpenAI Assistant with Retrieval Augmentation - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_oai_assistant_retrieval.ipynb) +- OpenAI Assistant in a Group Chat - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_oai_assistant_groupchat.ipynb) +- GPTAssistantAgent based Multi-Agent Tool Use - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/gpt_assistant_agent_function_call.ipynb) ### Non-OpenAI Models - Conversational Chess using non-OpenAI Models - [View Notebook](/docs/notebooks/agentchat_nested_chats_chess_altmodels) ### Multimodal Agent -- Multimodal Agent Chat with DALLE and GPT-4V - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_dalle_and_gpt4v.ipynb) -- Multimodal Agent Chat with Llava - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_lmm_llava.ipynb) -- Multimodal Agent Chat with GPT-4V - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_lmm_gpt-4v.ipynb) +- Multimodal Agent Chat with DALLE and GPT-4V - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_dalle_and_gpt4v.ipynb) +- Multimodal Agent Chat with Llava - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_lmm_llava.ipynb) +- Multimodal Agent Chat with GPT-4V - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_lmm_gpt-4v.ipynb) ### Long Context Handling - + - Long Context Handling as A Capability - [View Notebook](/docs/notebooks/agentchat_transform_messages) ### Evaluation and Assessment -- AgentEval: A Multi-Agent System for 
Assess Utility of LLM-powered Applications - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agenteval_cq_math.ipynb) +- AgentEval: A Multi-Agent System for Assess Utility of LLM-powered Applications - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agenteval_cq_math.ipynb) ### Automatic Agent Building -- Automatically Build Multi-agent System with AgentBuilder - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/autobuild_basic.ipynb) -- Automatically Build Multi-agent System from Agent Library - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/autobuild_agent_library.ipynb) +- Automatically Build Multi-agent System with AgentBuilder - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/autobuild_basic.ipynb) +- Automatically Build Multi-agent System from Agent Library - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/autobuild_agent_library.ipynb) ### Observability -- Track LLM calls, tool usage, actions and errors using AgentOps - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_agentops.ipynb) +- Track LLM calls, tool usage, actions and errors using AgentOps - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_agentops.ipynb) ## Enhanced Inferences @@ -112,7 +112,7 @@ Links to notebook examples: - API Unification - [View Documentation with Code Example](https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference/#api-unification) - Utility Functions to Help Managing API configurations effectively - [View Notebook](/docs/topics/llm_configuration) -- Cost Calculation - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_cost_token_tracking.ipynb) +- Cost Calculation - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_cost_token_tracking.ipynb) ### Inference Hyperparameters Tuning @@ -120,5 +120,5 @@ AutoGen offers a cost-effective hyperparameter optimization technique [EcoOptiGe Please find documentation about this feature [here](/docs/Use-Cases/enhanced_inference). Links to notebook examples: -* [Optimize for Code Generation](https://github.com/microsoft/autogen/blob/main/notebook/oai_completion.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/autogen/blob/main/notebook/oai_completion.ipynb) -* [Optimize for Math](https://github.com/microsoft/autogen/blob/main/notebook/oai_chatgpt_gpt4.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/autogen/blob/main/notebook/oai_chatgpt_gpt4.ipynb) +* [Optimize for Code Generation](https://github.com/microsoft/autogen/blob/0.2/notebook/oai_completion.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/autogen/blob/main/notebook/oai_completion.ipynb) +* [Optimize for Math](https://github.com/microsoft/autogen/blob/0.2/notebook/oai_chatgpt_gpt4.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/autogen/blob/main/notebook/oai_chatgpt_gpt4.ipynb) diff --git a/website/docs/Getting-Started.mdx b/website/docs/Getting-Started.mdx index 4a2bbf63fff5..761ed6f3d34d 100644 --- a/website/docs/Getting-Started.mdx +++ b/website/docs/Getting-Started.mdx @@ -118,7 +118,7 @@ Learn more about configuring LLMs for agents [here](/docs/topics/llm_configurati #### Multi-Agent Conversation Framework Autogen enables the next-gen LLM applications with a generic multi-agent conversation framework. 
It offers customizable and conversable agents which integrate LLMs, tools, and humans. -By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code. For [example](https://github.com/microsoft/autogen/blob/main/test/twoagent.py), +By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code. For [example](https://github.com/microsoft/autogen/blob/0.2/test/twoagent.py), The figure below shows an example conversation flow with AutoGen. diff --git a/website/docs/Migration-Guide.md b/website/docs/Migration-Guide.md index f996a157a299..97e16de0770b 100644 --- a/website/docs/Migration-Guide.md +++ b/website/docs/Migration-Guide.md @@ -26,7 +26,7 @@ autogen.runtime_logging.start() # Stop logging autogen.runtime_logging.stop() ``` -Checkout [Logging documentation](https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference#logging) and [Logging example notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_logging.ipynb) to learn more. +Checkout [Logging documentation](https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference#logging) and [Logging example notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_logging.ipynb) to learn more. Inference parameter tuning can be done via [`flaml.tune`](https://microsoft.github.io/FLAML/docs/Use-Cases/Tune-User-Defined-Function). - `seed` in autogen is renamed into `cache_seed` to accommodate the newly added `seed` param in openai chat completion api. `use_cache` is removed as a kwarg in `OpenAIWrapper.create()` for being automatically decided by `cache_seed`: int | None. The difference between autogen's `cache_seed` and openai's `seed` is that: diff --git a/website/docs/Use-Cases/agent_chat.md b/website/docs/Use-Cases/agent_chat.md index 76f9959841b8..fa85691810c9 100644 --- a/website/docs/Use-Cases/agent_chat.md +++ b/website/docs/Use-Cases/agent_chat.md @@ -81,13 +81,13 @@ AutoGen, by integrating conversation-driven control utilizing both programming a With the pluggable auto-reply function, one can choose to invoke conversations with other agents depending on the content of the current message and context. For example: - Hierarchical chat like in [OptiGuide](https://github.com/microsoft/optiguide). -- [Dynamic Group Chat](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_groupchat.ipynb) which is a special form of hierarchical chat. In the system, we register a reply function in the group chat manager, which broadcasts messages and decides who the next speaker will be in a group chat setting. +- [Dynamic Group Chat](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_groupchat.ipynb) which is a special form of hierarchical chat. In the system, we register a reply function in the group chat manager, which broadcasts messages and decides who the next speaker will be in a group chat setting. - [Finite State Machine graphs to set speaker transition constraints](https://microsoft.github.io/autogen/docs/notebooks/agentchat_groupchat_finite_state_machine) which is a special form of dynamic group chat. In this approach, a directed transition matrix is fed into group chat. Users can specify legal transitions or specify disallowed transitions. 
-- Nested chat like in [conversational chess](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_nested_chats_chess.ipynb). +- Nested chat like in [conversational chess](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_nested_chats_chess.ipynb). 2. LLM-Based Function Call -Another approach involves LLM-based function calls, where LLM decides if a specific function should be invoked based on the conversation's status during each inference. This approach enables dynamic multi-agent conversations, as seen in scenarios like [multi-user math problem solving scenario](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_two_users.ipynb), where a student assistant automatically seeks expertise via function calls. +Another approach involves LLM-based function calls, where LLM decides if a specific function should be invoked based on the conversation's status during each inference. This approach enables dynamic multi-agent conversations, as seen in scenarios like [multi-user math problem solving scenario](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_two_users.ipynb), where a student assistant automatically seeks expertise via function calls. ### Diverse Applications Implemented with AutoGen diff --git a/website/docs/Use-Cases/enhanced_inference.md b/website/docs/Use-Cases/enhanced_inference.md index 14723391e8cf..825f46ff0714 100644 --- a/website/docs/Use-Cases/enhanced_inference.md +++ b/website/docs/Use-Cases/enhanced_inference.md @@ -140,7 +140,7 @@ The `OpenAIWrapper` from `autogen` tracks token counts and costs of your API cal - `mode='actual'`: only print non-cached usage. - `mode='total'`: only print all usage (including cache). -Reset your session's usage data with `clear_usage_summary()` when needed. [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/oai_client_cost.ipynb) +Reset your session's usage data with `clear_usage_summary()` when needed. [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/oai_client_cost.ipynb) Example usage: ```python @@ -307,7 +307,7 @@ When debugging or diagnosing an LLM-based system, it is often convenient to log ### For openai >= 1 -Logging example: [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_logging.ipynb) +Logging example: [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_logging.ipynb) #### Start logging: ```python diff --git a/website/docs/contributor-guide/docker.md b/website/docs/contributor-guide/docker.md index 8ea54ed7c228..b09a53aa35f2 100644 --- a/website/docs/contributor-guide/docker.md +++ b/website/docs/contributor-guide/docker.md @@ -48,4 +48,4 @@ docker run -it -p 8081:3000 -v /home/AutoGenDeveloper/autogen-newcode:newstuff/ ## Develop in Remote Container If you use vscode, you can open the autogen folder in a [Container](https://code.visualstudio.com/docs/remote/containers). -We have provided the configuration in [devcontainer](https://github.com/microsoft/autogen/blob/main/.devcontainer). They can be used in GitHub codespace too. Developing AutoGen in dev containers is recommended. +We have provided the configuration in [devcontainer](https://github.com/microsoft/autogen/blob/0.2/.devcontainer). They can be used in GitHub codespace too. Developing AutoGen in dev containers is recommended. 
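The `OpenAIWrapper` usage-tracking behaviour referenced in the `enhanced_inference.md` hunk above can be exercised roughly as follows; this is a minimal sketch, assuming a valid `OAI_CONFIG_LIST` file with at least one configured model, and is not taken from the patch itself:

```python
# Minimal sketch of OpenAIWrapper usage tracking (assumes a valid OAI_CONFIG_LIST file).
import autogen
from autogen import OpenAIWrapper

config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")
client = OpenAIWrapper(config_list=config_list)

# Two identical calls: with a fixed cache_seed, the second is served from the cache.
client.create(messages=[{"role": "user", "content": "Say hi in one word."}], cache_seed=41)
client.create(messages=[{"role": "user", "content": "Say hi in one word."}], cache_seed=41)

client.print_usage_summary(mode="actual")  # only non-cached usage
client.print_usage_summary(mode="total")   # all usage, including cached calls
client.clear_usage_summary()               # reset the session's usage data
```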
diff --git a/website/docs/contributor-guide/documentation.md b/website/docs/contributor-guide/documentation.md index 70a79b700697..3380ff529280 100644 --- a/website/docs/contributor-guide/documentation.md +++ b/website/docs/contributor-guide/documentation.md @@ -2,7 +2,7 @@ ## How to get a notebook rendered on the website -See [here](https://github.com/microsoft/autogen/blob/main/notebook/contributing.md#how-to-get-a-notebook-displayed-on-the-website) for instructions on how to get a notebook in the `notebook` directory rendered on the website. +See [here](https://github.com/microsoft/autogen/blob/0.2/notebook/contributing.md#how-to-get-a-notebook-displayed-on-the-website) for instructions on how to get a notebook in the `notebook` directory rendered on the website. ## Build documentation locally diff --git a/website/docs/contributor-guide/tests.md b/website/docs/contributor-guide/tests.md index c5eabb907329..a4b03b6379d3 100644 --- a/website/docs/contributor-guide/tests.md +++ b/website/docs/contributor-guide/tests.md @@ -2,12 +2,12 @@ Tests are automatically run via GitHub actions. There are two workflows: -1. [build.yml](https://github.com/microsoft/autogen/blob/main/.github/workflows/build.yml) -1. [openai.yml](https://github.com/microsoft/autogen/blob/main/.github/workflows/openai.yml) +1. [build.yml](https://github.com/microsoft/autogen/blob/0.2/.github/workflows/build.yml) +1. [openai.yml](https://github.com/microsoft/autogen/blob/0.2/.github/workflows/openai.yml) -The first workflow is required to pass for all PRs (and it doesn't do any OpenAI calls). The second workflow is required for changes that affect the OpenAI tests (and does actually call LLM). The second workflow requires approval to run. When writing tests that require OpenAI calls, please use [`pytest.mark.skipif`](https://github.com/microsoft/autogen/blob/b1adac515931bf236ac59224269eeec683a162ba/test/oai/test_client.py#L19) to make them run in only when `openai` package is installed. If additional dependency for this test is required, install the dependency in the corresponding python version in [openai.yml](https://github.com/microsoft/autogen/blob/main/.github/workflows/openai.yml). +The first workflow is required to pass for all PRs (and it doesn't do any OpenAI calls). The second workflow is required for changes that affect the OpenAI tests (and does actually call LLM). The second workflow requires approval to run. When writing tests that require OpenAI calls, please use [`pytest.mark.skipif`](https://github.com/microsoft/autogen/blob/b1adac515931bf236ac59224269eeec683a162ba/test/oai/test_client.py#L19) to make them run in only when `openai` package is installed. If additional dependency for this test is required, install the dependency in the corresponding python version in [openai.yml](https://github.com/microsoft/autogen/blob/0.2/.github/workflows/openai.yml). -Make sure all tests pass, this is required for [build.yml](https://github.com/microsoft/autogen/blob/main/.github/workflows/build.yml) checks to pass +Make sure all tests pass, this is required for [build.yml](https://github.com/microsoft/autogen/blob/0.2/.github/workflows/build.yml) checks to pass ## Running tests locally @@ -27,7 +27,7 @@ Tests for the `autogen.agentchat.contrib` module may be skipped automatically if required dependencies are not installed. Please consult the documentation for each contrib module to see what dependencies are required. 
-See [here](https://github.com/microsoft/autogen/blob/main/notebook/contributing.md#testing) for how to run notebook tests. +See [here](https://github.com/microsoft/autogen/blob/0.2/notebook/contributing.md#testing) for how to run notebook tests. ## Skip flags for tests diff --git a/website/docs/ecosystem/llamaindex.md b/website/docs/ecosystem/llamaindex.md index 4e9cffeedc7d..6561880b4d22 100644 --- a/website/docs/ecosystem/llamaindex.md +++ b/website/docs/ecosystem/llamaindex.md @@ -4,4 +4,4 @@ [Llamaindex](https://www.llamaindex.ai/) allows the users to create Llamaindex agents and integrate them in autogen conversation patterns. -- [Llamaindex + AutoGen Code Examples](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_group_chat_with_llamaindex_agents.ipynb) +- [Llamaindex + AutoGen Code Examples](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_group_chat_with_llamaindex_agents.ipynb) diff --git a/website/docs/ecosystem/microsoft-fabric.md b/website/docs/ecosystem/microsoft-fabric.md index d0593a4a7d04..5ef625c71763 100644 --- a/website/docs/ecosystem/microsoft-fabric.md +++ b/website/docs/ecosystem/microsoft-fabric.md @@ -4,4 +4,4 @@ [Microsoft Fabric](https://learn.microsoft.com/en-us/fabric/get-started/microsoft-fabric-overview) is an all-in-one analytics solution for enterprises that covers everything from data movement to data science, Real-Time Analytics, and business intelligence. It offers a comprehensive suite of services, including data lake, data engineering, and data integration, all in one place. In this notenook, we give a simple example for using AutoGen in Microsoft Fabric. -- [Microsoft Fabric + AutoGen Code Examples](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_microsoft_fabric.ipynb) +- [Microsoft Fabric + AutoGen Code Examples](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_microsoft_fabric.ipynb) diff --git a/website/docs/ecosystem/pgvector.md b/website/docs/ecosystem/pgvector.md index 99afa676e7e4..637941e6135e 100644 --- a/website/docs/ecosystem/pgvector.md +++ b/website/docs/ecosystem/pgvector.md @@ -2,4 +2,4 @@ [PGVector](https://github.com/pgvector/pgvector) is an open-source vector similarity search for Postgres. -- [PGVector + AutoGen Code Examples](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat_pgvector.ipynb) +- [PGVector + AutoGen Code Examples](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_pgvector.ipynb) diff --git a/website/docs/installation/Docker.md b/website/docs/installation/Docker.md index a7fa6bd829b5..a4506f9c10ae 100644 --- a/website/docs/installation/Docker.md +++ b/website/docs/installation/Docker.md @@ -41,7 +41,7 @@ Here's how you can run an application built with AutoGen, using the Docker image Here, `$(pwd)/myapp` is your local directory, and `/home/autogen/autogen/myapp` is the path in the Docker container where your code will be located. -2. **Mount your code:** Now suppose you have your application built with AutoGen in a main script named `twoagent.py` ([example](https://github.com/microsoft/autogen/blob/main/test/twoagent.py)) in a folder named `myapp`. With the command line below, you can mount your folder and run the application in Docker. +2. **Mount your code:** Now suppose you have your application built with AutoGen in a main script named `twoagent.py` ([example](https://github.com/microsoft/autogen/blob/0.2/test/twoagent.py)) in a folder named `myapp`. 
With the command line below, you can mount your folder and run the application in Docker. ```python # Mount the local folder `myapp` into docker image and run the script named "twoagent.py" in the docker. @@ -82,6 +82,6 @@ docker run -it -p {WorkstationPortNum}:{DockerPortNum} -v {WorkStation_Dir}:{Doc ## Additional Resources -- Details on all the Dockerfile options can be found in the [Dockerfile](https://github.com/microsoft/autogen/blob/main/.devcontainer/README.md) README. +- Details on all the Dockerfile options can be found in the [Dockerfile](https://github.com/microsoft/autogen/blob/0.2/.devcontainer/README.md) README. - For more information on Docker usage and best practices, refer to the [official Docker documentation](https://docs.docker.com). - Details on how to use the Dockerfile dev version can be found on the [Contributor Guide](/docs/contributor-guide/docker). diff --git a/website/docs/installation/Optional-Dependencies.md b/website/docs/installation/Optional-Dependencies.md index 3f8164a667e9..89013d023ea6 100644 --- a/website/docs/installation/Optional-Dependencies.md +++ b/website/docs/installation/Optional-Dependencies.md @@ -38,9 +38,9 @@ pip install "pyautogen[blendsearch]<0.2" Example notebooks: -[Optimize for Code Generation](https://github.com/microsoft/autogen/blob/main/notebook/oai_completion.ipynb) +[Optimize for Code Generation](https://github.com/microsoft/autogen/blob/0.2/notebook/oai_completion.ipynb) -[Optimize for Math](https://github.com/microsoft/autogen/blob/main/notebook/oai_chatgpt_gpt4.ipynb) +[Optimize for Math](https://github.com/microsoft/autogen/blob/0.2/notebook/oai_chatgpt_gpt4.ipynb) ## retrievechat @@ -72,11 +72,11 @@ You can find a list of all supported document types by using `autogen.retrieve_u Example notebooks: -[Automated Code Generation and Question Answering with Retrieval Augmented Agents](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat.ipynb) +[Automated Code Generation and Question Answering with Retrieval Augmented Agents](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat.ipynb) -[Group Chat with Retrieval Augmented Generation (with 5 group member agents and 1 manager agent)](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_groupchat_RAG.ipynb) +[Group Chat with Retrieval Augmented Generation (with 5 group member agents and 1 manager agent)](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_groupchat_RAG.ipynb) -[Automated Code Generation and Question Answering with Qdrant based Retrieval Augmented Agents](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat_qdrant.ipynb) +[Automated Code Generation and Question Answering with Qdrant based Retrieval Augmented Agents](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_qdrant.ipynb) ## Teachability @@ -86,7 +86,7 @@ To use Teachability, please install AutoGen with the [teachable] option. 
pip install "autogen-agentchat[teachable]~=0.2" ``` -Example notebook: [Chatting with a teachable agent](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_teachability.ipynb) +Example notebook: [Chatting with a teachable agent](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_teachability.ipynb) ## Large Multimodal Model (LMM) Agents @@ -98,7 +98,7 @@ pip install "autogen-agentchat[lmm]~=0.2" Example notebooks: -[LLaVA Agent](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_lmm_llava.ipynb) +[LLaVA Agent](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_lmm_llava.ipynb) ## mathchat @@ -110,7 +110,7 @@ pip install "pyautogen[mathchat]<0.2" Example notebooks: -[Using MathChat to Solve Math Problems](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_MathChat.ipynb) +[Using MathChat to Solve Math Problems](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_MathChat.ipynb) ## Graph diff --git a/website/docs/topics/non-openai-models/cloud-gemini.ipynb b/website/docs/topics/non-openai-models/cloud-gemini.ipynb index 0a36dd62cf91..673d8a80f31e 100644 --- a/website/docs/topics/non-openai-models/cloud-gemini.ipynb +++ b/website/docs/topics/non-openai-models/cloud-gemini.ipynb @@ -495,7 +495,7 @@ "user_proxy.initiate_chat(\n", " image_agent,\n", " message=\"\"\"Describe what is in this image?\n", - ".\"\"\",\n", + ".\"\"\",\n", ")" ] }, diff --git a/website/docs/topics/non-openai-models/cloud-gemini_vertexai.ipynb b/website/docs/topics/non-openai-models/cloud-gemini_vertexai.ipynb index 3456a803f488..2389092e8dc9 100644 --- a/website/docs/topics/non-openai-models/cloud-gemini_vertexai.ipynb +++ b/website/docs/topics/non-openai-models/cloud-gemini_vertexai.ipynb @@ -392,7 +392,7 @@ "user_proxy.initiate_chat(\n", " image_agent,\n", " message=\"\"\"Describe what is in this image?\n", - ".\"\"\",\n", + ".\"\"\",\n", ")" ] }, diff --git a/website/docs/topics/retrieval_augmentation.md b/website/docs/topics/retrieval_augmentation.md index 7fdd39f31f92..1ac32e85fef3 100644 --- a/website/docs/topics/retrieval_augmentation.md +++ b/website/docs/topics/retrieval_augmentation.md @@ -124,10 +124,10 @@ ragproxyagent.initiate_chat( ## More Examples and Notebooks For more detailed examples and notebooks showcasing the usage of retrieval augmented agents in AutoGen, refer to the following: - Automated Code Generation and Question Answering with Retrieval Augmented Agents - [View Notebook](/docs/notebooks/agentchat_RetrieveChat) -- Automated Code Generation and Question Answering with [PGVector](https://github.com/pgvector/pgvector) based Retrieval Augmented Agents - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat_pgvector.ipynb) -- Automated Code Generation and Question Answering with [Qdrant](https://qdrant.tech/) based Retrieval Augmented Agents - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat_qdrant.ipynb) -- Automated Code Generation and Question Answering with [MongoDB Atlas](https://www.mongodb.com/) based Retrieval Augmented Agents - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat_mongodb.ipynb) -- Chat with OpenAI Assistant with Retrieval Augmentation - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_retrieval.ipynb) +- Automated Code Generation and Question Answering with [PGVector](https://github.com/pgvector/pgvector) based 
Retrieval Augmented Agents - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_pgvector.ipynb) +- Automated Code Generation and Question Answering with [Qdrant](https://qdrant.tech/) based Retrieval Augmented Agents - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_qdrant.ipynb) +- Automated Code Generation and Question Answering with [MongoDB Atlas](https://www.mongodb.com/) based Retrieval Augmented Agents - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_mongodb.ipynb) +- Chat with OpenAI Assistant with Retrieval Augmentation - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_oai_assistant_retrieval.ipynb) - **RAG**: Group Chat with Retrieval Augmented Generation (with 5 group member agents and 1 manager agent) - [View Notebook](/docs/notebooks/agentchat_groupchat_RAG) ## Roadmap diff --git a/website/process_notebooks.py b/website/process_notebooks.py index 5e25f5e69510..a3157363ab82 100755 --- a/website/process_notebooks.py +++ b/website/process_notebooks.py @@ -348,7 +348,7 @@ def post_process_mdx(rendered_mdx: Path, source_notebooks: Path, front_matter: D if "{" in title: title = title[: title.find("{")].strip() - github_link = f"https://github.com/microsoft/autogen/blob/main/{repo_relative_notebook}" + github_link = f"https://github.com/microsoft/autogen/blob/0.2/{repo_relative_notebook}" content = ( content[:title_end] + "\n[![Open on GitHub](https://img.shields.io/badge/Open%20on%20GitHub-grey?logo=github)](" diff --git a/website/src/components/GalleryPage.js b/website/src/components/GalleryPage.js index 11f4abe2e7fe..aec7907b41ef 100644 --- a/website/src/components/GalleryPage.js +++ b/website/src/components/GalleryPage.js @@ -91,7 +91,7 @@ const GalleryPage = (props) => { return null; } const colab_href = `https://colab.research.google.com/github/microsoft/autogen/blob/main/${item.source}`; - const github_href = `https://github.com/microsoft/autogen/blob/main/${item.source}`; + const github_href = `https://github.com/microsoft/autogen/blob/0.2/${item.source}`; return ( Open In Colab Static Badge From 32022b2df64de1f10ad823436a01134b252ff8fe Mon Sep 17 00:00:00 2001 From: Zoltan Lux Date: Thu, 10 Oct 2024 22:26:05 +0200 Subject: [PATCH 22/35] Function Calling Support for Gemini - Part 2 (#3726) * Added function calling support to GeminiClient * Appending a continue message from model to alternate between user and model * Fixed cost calculation to include <128K costing and new 1.5-flash model. Added test case for function_call. 
* Add notebook with samples for Gemini * Updated test case * Fix to handle not dict response in function call * Handle non dict function results and append dummy model message between function response and user message * Fixing message order in gemini * Append text as multiple parts instead of test concatenation * Raising error for Union data types in function parameter * Delete default key * Update gemini.py for multiple tool calls + pre-commit formatting * no function role * start adding function calling config * do not serialize tool_config * improve tool config parsing * add hint * improve function calling config * removunnecessary comments * try removing allowed function names in tool config conversion * fix tool config parsing with empty tools list * improve logging and case handling with vertexai tool config parsing * reset file * check if text is in part * fix empty part checking case * fix bug with attribute handling * skip test if gemini deps are not installed --------- Co-authored-by: Arjun G Co-authored-by: Beibin Li Co-authored-by: Chi Wang Co-authored-by: Mark Sze <66362098+marklysze@users.noreply.github.com> --- autogen/oai/gemini.py | 390 +++++++++++---- autogen/oai/openai_utils.py | 1 + notebook/agentchat_gemini.ipynb | 809 ++++++++++++++++++++++++++++++++ test/oai/test_gemini.py | 184 +++++++- 4 files changed, 1290 insertions(+), 94 deletions(-) create mode 100644 notebook/agentchat_gemini.ipynb diff --git a/autogen/oai/gemini.py b/autogen/oai/gemini.py index 33790c9851c6..d44e41112ac9 100644 --- a/autogen/oai/gemini.py +++ b/autogen/oai/gemini.py @@ -32,6 +32,8 @@ from __future__ import annotations import base64 +import copy +import json import logging import os import random @@ -39,24 +41,39 @@ import time import warnings from io import BytesIO -from typing import Any, Dict, List, Mapping, Union +from typing import Any, Dict, List, Union import google.generativeai as genai import requests import vertexai -from google.ai.generativelanguage import Content, Part +from google.ai.generativelanguage import Content, FunctionCall, FunctionDeclaration, FunctionResponse, Part, Tool from google.api_core.exceptions import InternalServerError from google.auth.credentials import Credentials -from openai.types.chat import ChatCompletion +from openai.types.chat import ChatCompletion, ChatCompletionMessageToolCall from openai.types.chat.chat_completion import ChatCompletionMessage, Choice +from openai.types.chat.chat_completion_message_tool_call import Function from openai.types.completion_usage import CompletionUsage from PIL import Image -from vertexai.generative_models import Content as VertexAIContent +from vertexai.generative_models import ( + Content as VertexAIContent, +) +from vertexai.generative_models import ( + FunctionDeclaration as VertexAIFunctionDeclaration, +) +from vertexai.generative_models import ( + GenerationConfig as VertexAIGenerationConfig, +) from vertexai.generative_models import GenerativeModel from vertexai.generative_models import HarmBlockThreshold as VertexAIHarmBlockThreshold from vertexai.generative_models import HarmCategory as VertexAIHarmCategory from vertexai.generative_models import Part as VertexAIPart from vertexai.generative_models import SafetySetting as VertexAISafetySetting +from vertexai.generative_models import ( + Tool as VertexAITool, +) +from vertexai.generative_models import ( + ToolConfig as VertexAIToolConfig, +) logger = logging.getLogger(__name__) @@ -107,7 +124,7 @@ def __init__(self, **kwargs): Args: api_key (str): The API key for 
using Gemini. - credentials (google.auth.credentials.Credentials): credentials to be used for authentication with vertexai. + credentials (google.auth.credentials.Credentials): credentials to be used for authentication with vertexai. google_application_credentials (str): Path to the JSON service account key file of the service account. Alternatively, the GOOGLE_APPLICATION_CREDENTIALS environment variable can also be set instead of using this argument. @@ -171,6 +188,8 @@ def create(self, params: Dict) -> ChatCompletion: params.get("api_type", "google") # not used messages = params.get("messages", []) + tools = params.get("tools", []) + tool_config = params.get("tool_config", {}) stream = params.get("stream", False) n_response = params.get("n", 1) system_instruction = params.get("system_instruction", None) @@ -183,6 +202,7 @@ def create(self, params: Dict) -> ChatCompletion: } if self.use_vertexai: safety_settings = GeminiClient._to_vertexai_safety_settings(params.get("safety_settings", {})) + tool_config = GeminiClient._to_vertexai_tool_config(tool_config, tools) else: safety_settings = params.get("safety_settings", {}) @@ -198,12 +218,15 @@ def create(self, params: Dict) -> ChatCompletion: if "vision" not in model_name: # A. create and call the chat model. gemini_messages = self._oai_messages_to_gemini_messages(messages) + gemini_tools = self._oai_tools_to_gemini_tools(tools) if self.use_vertexai: model = GenerativeModel( model_name, generation_config=generation_config, safety_settings=safety_settings, system_instruction=system_instruction, + tools=gemini_tools, + tool_config=tool_config, ) chat = model.start_chat(history=gemini_messages[:-1], response_validation=response_validation) else: @@ -213,12 +236,13 @@ def create(self, params: Dict) -> ChatCompletion: generation_config=generation_config, safety_settings=safety_settings, system_instruction=system_instruction, + tools=gemini_tools, ) genai.configure(api_key=self.api_key) chat = model.start_chat(history=gemini_messages[:-1]) max_retries = 5 for attempt in range(max_retries): - ans = None + ans: Union[Content, VertexAIContent] = None try: response = chat.send_message( gemini_messages[-1].parts, stream=stream, safety_settings=safety_settings @@ -234,7 +258,7 @@ def create(self, params: Dict) -> ChatCompletion: raise RuntimeError(f"Google GenAI exception occurred while calling Gemini API: {e}") else: # `ans = response.text` is unstable. Use the following code instead. 
- ans: str = chat.history[-1].parts[0].text + ans: Union[Content, VertexAIContent] = chat.history[-1] break if ans is None: @@ -262,7 +286,7 @@ def create(self, params: Dict) -> ChatCompletion: # Gemini's vision model does not support chat history yet # chat = model.start_chat(history=gemini_messages[:-1]) # response = chat.send_message(gemini_messages[-1].parts) - user_message = self._oai_content_to_gemini_content(messages[-1]["content"]) + user_message = self._oai_content_to_gemini_content(messages[-1]) if len(messages) > 2: warnings.warn( "Warning: Gemini's vision model does not support chat history yet.", @@ -273,16 +297,14 @@ def create(self, params: Dict) -> ChatCompletion: response = model.generate_content(user_message, stream=stream) # ans = response.text if self.use_vertexai: - ans: str = response.candidates[0].content.parts[0].text + ans: VertexAIContent = response.candidates[0].content else: - ans: str = response._result.candidates[0].content.parts[0].text + ans: Content = response._result.candidates[0].content prompt_tokens = model.count_tokens(user_message).total_tokens - completion_tokens = model.count_tokens(ans).total_tokens + completion_tokens = model.count_tokens(ans.parts[0].text).total_tokens - # 3. convert output - message = ChatCompletionMessage(role="assistant", content=ans, function_call=None, tool_calls=None) - choices = [Choice(finish_reason="stop", index=0, message=message)] + choices = self._gemini_content_to_oai_choices(ans) response_oai = ChatCompletion( id=str(random.randint(0, 1000)), @@ -295,31 +317,87 @@ def create(self, params: Dict) -> ChatCompletion: completion_tokens=completion_tokens, total_tokens=prompt_tokens + completion_tokens, ), - cost=calculate_gemini_cost(prompt_tokens, completion_tokens, model_name), + cost=self._calculate_gemini_cost(prompt_tokens, completion_tokens, model_name), ) return response_oai - def _oai_content_to_gemini_content(self, content: Union[str, List]) -> List: + # If str is not a json string return str as is + def _to_json(self, str) -> dict: + try: + return json.loads(str) + except ValueError: + return str + + def _oai_content_to_gemini_content(self, message: Dict[str, Any]) -> List: """Convert content from OAI format to Gemini format""" rst = [] - if isinstance(content, str): - if content == "": - content = "empty" # Empty content is not allowed. + if isinstance(message["content"], str): + if message["content"] == "": + message["content"] = "empty" # Empty content is not allowed. 
+ if self.use_vertexai: + rst.append(VertexAIPart.from_text(message["content"])) + else: + rst.append(Part(text=message["content"])) + return rst + + if "tool_calls" in message: + if self.use_vertexai: + for tool_call in message["tool_calls"]: + rst.append( + VertexAIPart.from_dict( + { + "functionCall": { + "name": tool_call["function"]["name"], + "args": json.loads(tool_call["function"]["arguments"]), + } + } + ) + ) + else: + for tool_call in message["tool_calls"]: + rst.append( + Part( + function_call=FunctionCall( + name=tool_call["function"]["name"], + args=json.loads(tool_call["function"]["arguments"]), + ) + ) + ) + return rst + + if message["role"] == "tool": + if self.use_vertexai: + rst.append( + VertexAIPart.from_function_response( + name=message["name"], response={"result": self._to_json(message["content"])} + ) + ) + else: + rst.append( + Part( + function_response=FunctionResponse( + name=message["name"], response={"result": self._to_json(message["content"])} + ) + ) + ) + return rst + + if isinstance(message["content"], str): if self.use_vertexai: - rst.append(VertexAIPart.from_text(content)) + rst.append(VertexAIPart.from_text(message["content"])) else: - rst.append(Part(text=content)) + rst.append(Part(text=message["content"])) return rst - assert isinstance(content, list) + assert isinstance(message["content"], list) - for msg in content: + for msg in message["content"]: if isinstance(msg, dict): assert "type" in msg, f"Missing 'type' field in message: {msg}" if msg["type"] == "text": if self.use_vertexai: - rst.append(VertexAIPart.from_text(text=msg["text"])) + rst.append(VertexAIPart.from_text(msg["text"])) else: rst.append(Part(text=msg["text"])) elif msg["type"] == "image_url": @@ -340,34 +418,32 @@ def _oai_content_to_gemini_content(self, content: Union[str, List]) -> List: raise ValueError(f"Unsupported message type: {type(msg)}") return rst - def _concat_parts(self, parts: List[Part]) -> List: - """Concatenate parts with the same type. - If two adjacent parts both have the "text" attribute, then it will be joined into one part. - """ - if not parts: - return [] - - concatenated_parts = [] - previous_part = parts[0] - - for current_part in parts[1:]: - if previous_part.text != "": - if self.use_vertexai: - previous_part = VertexAIPart.from_text(previous_part.text + current_part.text) - else: - previous_part.text += current_part.text - else: - concatenated_parts.append(previous_part) - previous_part = current_part - - if previous_part.text == "": - if self.use_vertexai: - previous_part = VertexAIPart.from_text("empty") - else: - previous_part.text = "empty" # Empty content is not allowed. 
- concatenated_parts.append(previous_part) + def _calculate_gemini_cost(self, input_tokens: int, output_tokens: int, model_name: str) -> float: + if "1.5-pro" in model_name: + if (input_tokens + output_tokens) <= 128000: + # "gemini-1.5-pro" + # When total tokens is less than 128K cost is $3.5 per million input tokens and $10.5 per million output tokens + return 3.5 * input_tokens / 1e6 + 10.5 * output_tokens / 1e6 + # "gemini-1.5-pro" + # Cost is $7 per million input tokens and $21 per million output tokens + return 7.0 * input_tokens / 1e6 + 21.0 * output_tokens / 1e6 + + if "1.5-flash" in model_name: + if (input_tokens + output_tokens) <= 128000: + # "gemini-1.5-flash" + # Cost is $0.35 per million input tokens and $1.05 per million output tokens + return 0.35 * input_tokens / 1e6 + 1.05 * output_tokens / 1e6 + # "gemini-1.5-flash" + # When total tokens is less than 128K cost is $0.70 per million input tokens and $2.10 per million output tokens + return 0.70 * input_tokens / 1e6 + 2.10 * output_tokens / 1e6 + + if "gemini-pro" not in model_name and "gemini-1.0-pro" not in model_name: + warnings.warn( + f"Cost calculation is not implemented for model {model_name}. Using Gemini-1.0-Pro.", UserWarning + ) - return concatenated_parts + # Cost is $0.5 per million input tokens and $1.5 per million output tokens + return 0.5 * input_tokens / 1e6 + 1.5 * output_tokens / 1e6 def _oai_messages_to_gemini_messages(self, messages: list[Dict[str, Any]]) -> list[dict[str, Any]]: """Convert messages from OAI format to Gemini format. @@ -376,38 +452,154 @@ def _oai_messages_to_gemini_messages(self, messages: list[Dict[str, Any]]) -> li """ prev_role = None rst = [] - curr_parts = [] + + def append_parts(parts, role): + if self.use_vertexai: + rst.append(VertexAIContent(parts=parts, role=role)) + else: + rst.append(Content(parts=parts, role=role)) + + def append_text_to_last(text): + if self.use_vertexai: + rst[-1] = VertexAIContent(parts=[*rst[-1].parts, VertexAIPart.from_text(text)], role=rst[-1].role) + else: + rst[-1] = Content(parts=[*rst[-1].parts, Part(text=text)], role=rst[-1].role) + + def is_function_call(parts): + return self.use_vertexai and parts[0].function_call or not self.use_vertexai and "function_call" in parts[0] + for i, message in enumerate(messages): - parts = self._oai_content_to_gemini_content(message["content"]) + + # Since the tool call message does not have the "name" field, we need to find the corresponding tool message. + if message["role"] == "tool": + message["name"] = [ + m["tool_calls"][i]["function"]["name"] + for m in messages + if "tool_calls" in m + for i, tc in enumerate(m["tool_calls"]) + if tc["id"] == message["tool_call_id"] + ][0] + + parts = self._oai_content_to_gemini_content(message) role = "user" if message["role"] in ["user", "system"] else "model" - if (prev_role is None) or (role == prev_role): - curr_parts += parts - elif role != prev_role: - if self.use_vertexai: - rst.append(VertexAIContent(parts=curr_parts, role=prev_role)) - else: - rst.append(Content(parts=curr_parts, role=prev_role)) - curr_parts = parts - prev_role = role - # handle the last message - if self.use_vertexai: - rst.append(VertexAIContent(parts=curr_parts, role=role)) - else: - rst.append(Content(parts=curr_parts, role=role)) + # In Gemini if the current message is a function call then previous message should not be a model message. 
+ if is_function_call(parts): + # If the previous message is a model message then add a dummy "continue" user message before the function call + if prev_role == "model": + append_parts(self._oai_content_to_gemini_content({"content": "continue"}), "user") + append_parts(parts, role) + # In Gemini if the current message is a function response then next message should be a model message. + elif role == "function": + append_parts(parts, "function") + # If the next message is not a model message then add a dummy "continue" model message after the function response + if len(messages) > (i + 1) and messages[i + 1]["role"] in ["user", "system"]: + append_parts(self._oai_content_to_gemini_content({"content": "continue"}), "model") + # If the role is the same as the previous role and both are text messages then concatenate the text + elif role == prev_role: + append_text_to_last(parts[0].text) + # If this is first message or the role is different from the previous role then append the parts + else: + # If the previous text message is empty then update the text to "empty" as Gemini does not support empty messages + if ( + (len(rst) > 0) + and hasattr(rst[-1].parts[0], "_raw_part") + and hasattr(rst[-1].parts[0]._raw_part, "text") + and (rst[-1].parts[0]._raw_part.text == "") + ): + append_text_to_last("empty") + append_parts(parts, role) + + prev_role = role # The Gemini is restrict on order of roles, such that # 1. The messages should be interleaved between user and model. # 2. The last message must be from the user role. # We add a dummy message "continue" if the last role is not the user. - if rst[-1].role != "user": + if rst[-1].role != "user" and rst[-1].role != "function": if self.use_vertexai: - rst.append(VertexAIContent(parts=self._oai_content_to_gemini_content("continue"), role="user")) + rst.append( + VertexAIContent(parts=self._oai_content_to_gemini_content({"content": "continue"}), role="user") + ) else: - rst.append(Content(parts=self._oai_content_to_gemini_content("continue"), role="user")) - + rst.append(Content(parts=self._oai_content_to_gemini_content({"content": "continue"}), role="user")) return rst + def _oai_tools_to_gemini_tools(self, tools: List[Dict[str, Any]]) -> List[Tool]: + """Convert tools from OAI format to Gemini format.""" + if len(tools) == 0: + return None + function_declarations = [] + for tool in tools: + if self.use_vertexai: + function_declaration = VertexAIFunctionDeclaration( + name=tool["function"]["name"], + description=tool["function"]["description"], + parameters=tool["function"]["parameters"], + ) + else: + function_declaration = FunctionDeclaration( + name=tool["function"]["name"], + description=tool["function"]["description"], + parameters=self._oai_function_parameters_to_gemini_function_parameters( + copy.deepcopy(tool["function"]["parameters"]) + ), + ) + function_declarations.append(function_declaration) + if self.use_vertexai: + return [VertexAITool(function_declarations=function_declarations)] + else: + return [Tool(function_declarations=function_declarations)] + + def _oai_function_parameters_to_gemini_function_parameters( + self, function_definition: dict[str, any] + ) -> dict[str, any]: + """ + Convert OpenAPI function definition parameters to Gemini function parameters definition. + The type key is renamed to type_ and the value is capitalized. + """ + assert "anyOf" not in function_definition, "Union types are not supported for function parameter in Gemini." 
+ # Delete the default key as it is not supported in Gemini + if "default" in function_definition: + del function_definition["default"] + + function_definition["type_"] = function_definition["type"].upper() + del function_definition["type"] + if "properties" in function_definition: + for key in function_definition["properties"]: + function_definition["properties"][key] = self._oai_function_parameters_to_gemini_function_parameters( + function_definition["properties"][key] + ) + if "items" in function_definition: + function_definition["items"] = self._oai_function_parameters_to_gemini_function_parameters( + function_definition["items"] + ) + return function_definition + + def _gemini_content_to_oai_choices(self, response: Union[Content, VertexAIContent]) -> List[Choice]: + """Convert response from Gemini format to OAI format.""" + text = None + tool_calls = [] + for part in response.parts: + if part.function_call: + if self.use_vertexai: + arguments = VertexAIPart.to_dict(part)["function_call"]["args"] + else: + arguments = Part.to_dict(part)["function_call"]["args"] + tool_calls.append( + ChatCompletionMessageToolCall( + id=str(random.randint(0, 1000)), + type="function", + function=Function(name=part.function_call.name, arguments=json.dumps(arguments)), + ) + ) + elif part.text: + text = part.text + message = ChatCompletionMessage( + role="assistant", content=text, function_call=None, tool_calls=tool_calls if len(tool_calls) > 0 else None + ) + return [Choice(finish_reason="tool_calls" if tool_calls else "stop", index=0, message=message)] + @staticmethod def _to_vertexai_safety_settings(safety_settings): """Convert safety settings to VertexAI format if needed, @@ -437,6 +629,49 @@ def _to_vertexai_safety_settings(safety_settings): else: return safety_settings + @staticmethod + def _to_vertexai_tool_config(tool_config, tools): + """Convert tool config to VertexAI format, + like when specifying them in the OAI_CONFIG_LIST + """ + if ( + isinstance(tool_config, dict) + and (len(tool_config) > 0) + and all([isinstance(tool_config[tool_config_entry], dict) for tool_config_entry in tool_config]) + ): + if ( + tool_config["function_calling_config"]["mode"] + not in VertexAIToolConfig.FunctionCallingConfig.Mode.__members__ + ): + invalid_mode = tool_config["function_calling_config"] + logger.error(f"Function calling mode {invalid_mode} is invalid") + return None + else: + # Currently, there is only function calling config + func_calling_config_params = {} + func_calling_config_params["mode"] = VertexAIToolConfig.FunctionCallingConfig.Mode[ + tool_config["function_calling_config"]["mode"] + ] + if ( + (func_calling_config_params["mode"] == VertexAIToolConfig.FunctionCallingConfig.Mode.ANY) + and (len(tools) > 0) + and all(["function_name" in tool for tool in tools]) + ): + # The function names are not yet known when parsing the OAI_CONFIG_LIST + func_calling_config_params["allowed_function_names"] = [tool["function_name"] for tool in tools] + vertexai_tool_config = VertexAIToolConfig( + function_calling_config=VertexAIToolConfig.FunctionCallingConfig(**func_calling_config_params) + ) + return vertexai_tool_config + elif isinstance(tool_config, VertexAIToolConfig): + return tool_config + elif len(tool_config) == 0 and len(tools) == 0: + logger.debug("VertexAI tool config is empty!") + return None + else: + logger.error("Invalid VertexAI tool config!") + return None + def _to_pil(data: str) -> Image.Image: """ @@ -470,16 +705,3 @@ def get_image_data(image_file: str, use_b64=True) -> bytes: return 
base64.b64encode(content).decode("utf-8") else: return content - - -def calculate_gemini_cost(input_tokens: int, output_tokens: int, model_name: str) -> float: - if "1.5" in model_name or "gemini-experimental" in model_name: - # "gemini-1.5-pro-preview-0409" - # Cost is $7 per million input tokens and $21 per million output tokens - return 7.0 * input_tokens / 1e6 + 21.0 * output_tokens / 1e6 - - if "gemini-pro" not in model_name and "gemini-1.0-pro" not in model_name: - warnings.warn(f"Cost calculation is not implemented for model {model_name}. Using Gemini-1.0-Pro.", UserWarning) - - # Cost is $0.5 per million input tokens and $1.5 per million output tokens - return 0.5 * input_tokens / 1e6 + 1.5 * output_tokens / 1e6 diff --git a/autogen/oai/openai_utils.py b/autogen/oai/openai_utils.py index 3844795c24f5..ceb7ef90c933 100644 --- a/autogen/oai/openai_utils.py +++ b/autogen/oai/openai_utils.py @@ -21,6 +21,7 @@ "azure_ad_token", "azure_ad_token_provider", "credentials", + "tool_config", ] DEFAULT_AZURE_API_VERSION = "2024-02-01" OAI_PRICE1K = { diff --git a/notebook/agentchat_gemini.ipynb b/notebook/agentchat_gemini.ipynb new file mode 100644 index 000000000000..699d9dc0235c --- /dev/null +++ b/notebook/agentchat_gemini.ipynb @@ -0,0 +1,809 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Assistants with Google Gemini" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Gemini is family of Generative AI Models built by Google. It support upto 1 Million tokens as of now. Gemini is now natively supported in Autogen.\n", + "\n", + "This notebook will demonstrate few samples of Autogen with Gemini Models.\n", + "\n", + "## Requirements\n", + "\n", + "You must have a [API Key](https://aistudio.google.com/app/apikey) from Google AI.\n", + "\n", + "## Setup Gemini config list\n", + "\n", + "The list of all supported Gemini Models along with OpenAI's gtp-4o,\n", + "\n", + "```python\n", + "config_list = [\n", + " {\n", + " 'model': 'gpt-4o',\n", + " 'api_key': '',\n", + " 'tags': ['tool', 'gpt-4'],\n", + " },\n", + " {\n", + " 'model': 'gemini-1.5-pro',\n", + " 'api_key': '',\n", + " 'api_type': 'google',\n", + " 'tags': ['tool', 'gemini'],\n", + " },\n", + " {\n", + " 'model': 'gemini-1.5-flash',\n", + " 'api_key': '',\n", + " 'api_type': 'google',\n", + " 'tags': ['tool', 'gemini'],\n", + " },\n", + " {\n", + " 'model': 'gemini-1.0-pro',\n", + " 'api_key': '',\n", + " 'api_type': 'google',\n", + " 'tags': ['gemini'],\n", + " }\n", + "]\n", + "```\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Hello World Example\n", + "\n", + "Our first example will be with a simple `UserProxyAgent` asking a question to an `AssistantAgent`. This is based on the tutorial demo [here](https://microsoft.github.io/autogen/docs/tutorial/introduction).\n", + "\n", + "After sending the question and seeing a response, you can type `exit` to end the chat or continue to converse." + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\u001b[33muser\u001b[0m (to assistant):\n", + "\n", + "Hi, what is a LLM ?\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[33massistant\u001b[0m (to user):\n", + "\n", + "A large language model (LLM) is a type of artificial intelligence (AI) system that excels at natural language processing (NLP) tasks. 
These models are trained on massive text datasets, enabling them to understand, interpret, and generate human-like text in response to a wide range of prompts and questions. \n", + "\n", + "Here are some key characteristics and capabilities of LLMs:\n", + "\n", + "* **Text Generation:** LLMs can generate coherent and contextually relevant text, including stories, articles, summaries, and conversations.\n", + "* **Language Translation:** They can translate text between multiple languages with impressive accuracy.\n", + "* **Question Answering:** LLMs can comprehend questions and provide relevant answers based on their training data.\n", + "* **Summarization:** They can condense large amounts of text into concise summaries while preserving key information.\n", + "* **Sentiment Analysis:** LLMs can analyze text to determine the emotional tone or sentiment expressed.\n", + "\n", + "Essentially, LLMs are powerful tools that can understand and process human language in a sophisticated manner, opening up a world of possibilities in various fields, such as:\n", + "\n", + "* **Chatbots and Virtual Assistants:** Providing more natural and engaging conversational experiences.\n", + "* **Content Creation:** Automating content generation for articles, marketing materials, and more.\n", + "* **Customer Service:** Enhancing support interactions through automated responses and sentiment analysis.\n", + "* **Education:** Personalizing learning experiences and providing on-demand tutoring.\n", + "\n", + "Overall, LLMs represent a significant advancement in AI, enabling machines to communicate and interact with humans more effectively than ever before. As research and development continue, we can expect even more impressive applications of these models in the future.\n", + "\n", + "TERMINATE\n", + "\n", + "\n", + "--------------------------------------------------------------------------------\n" + ] + } + ], + "source": [ + "import autogen\n", + "\n", + "config_list = autogen.config_list_from_json(\"OAI_CONFIG_LIST\", filter_dict={\"tags\": [\"gemini\"]})\n", + "\n", + "llm_config = {\"config_list\": config_list, \"timeout\": 120}\n", + "\n", + "# Create Assistant and User\n", + "assistant = autogen.AssistantAgent(name=\"assistant\", llm_config=llm_config)\n", + "\n", + "user_proxy = autogen.UserProxyAgent(name=\"user\", code_execution_config=False)\n", + "\n", + "# Initiate chat from user_proxy side\n", + "chat_result = user_proxy.initiate_chat(assistant, message=\"Hi, what is a LLM ?\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## In a Group Chat with OpenAI\n", + "\n", + "Here is an example of Gemini participating in a Group Cat with a GPT-4" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "c:\\Users\\arjun\\anaconda3\\envs\\autogen\\Lib\\site-packages\\tqdm\\auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. 
See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n", + " from .autonotebook import tqdm as notebook_tqdm\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\u001b[33mUser_proxy\u001b[0m (to chat_manager):\n", + "\n", + "Find a latest paper about gpt-4 on arxiv and find its potential applications in software.\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[32m\n", + "Next speaker: User_proxy\n", + "\u001b[0m\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mUser_proxy\u001b[0m (to chat_manager):\n", + "\n", + "\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[32m\n", + "Next speaker: User_proxy\n", + "\u001b[0m\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mUser_proxy\u001b[0m (to chat_manager):\n", + "\n", + "\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[32m\n", + "Next speaker: Coder\n", + "\u001b[0m\n", + "\u001b[33mCoder\u001b[0m (to chat_manager):\n", + "\n", + "Let's start by searching for the latest paper about GPT-4 on arXiv. We can use the arXiv API to fetch the latest paper related to GPT-4.\n", + "\n", + "I'll provide a Python script to search arXiv for the latest papers related to \"GPT-4.\"\n", + "\n", + "```python\n", + "import requests\n", + "from xml.etree import ElementTree\n", + "\n", + "# arXiv API url for querying papers related to GPT-4\n", + "url = \"http://export.arxiv.org/api/query?search_query=ti:GPT-4&start=0&max_results=1&sortBy=submittedDate&sortOrder=descending\"\n", + "\n", + "response = requests.get(url)\n", + "if response.status_code == 200:\n", + " root = ElementTree.fromstring(response.content)\n", + " entry = root.find(\"{http://www.w3.org/2005/Atom}entry\")\n", + " if entry is not None:\n", + " title = entry.find(\"{http://www.w3.org/2005/Atom}title\").text\n", + " summary = entry.find(\"{http://www.w3.org/2005/Atom}summary\").text\n", + " link = entry.find(\"{http://www.w3.org/2005/Atom}id\").text\n", + " \n", + " print(f\"Title: {title}\")\n", + " print(f\"Summary: {summary}\")\n", + " print(f\"Link: {link}\")\n", + " else:\n", + " print(\"No entries found.\")\n", + "else:\n", + " print(f\"Failed to fetch data from arXiv API. Status code: {response.status_code}\")\n", + "```\n", + "\n", + "Please execute this script. It will output the title, summary, and link of the latest paper about \"GPT-4\" on arXiv. After obtaining the relevant paper, I will analyze its potential applications in software.\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[32m\n", + "Next speaker: User_proxy\n", + "\u001b[0m\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[31m\n", + ">>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...\u001b[0m\n", + "\u001b[33mUser_proxy\u001b[0m (to chat_manager):\n", + "\n", + "exitcode: 0 (execution succeeded)\n", + "Code output: \n", + "Title: I See You: Teacher Analytics with GPT-4 Vision-Powered Observational\n", + " Assessment\n", + "Summary: This preliminary study explores the integration of GPT-4 Vision (GPT-4V)\n", + "technology into teacher analytics, focusing on its applicability in\n", + "observational assessment to enhance reflective teaching practice. 
This research\n", + "is grounded in developing a Video-based Automatic Assessment System (VidAAS)\n", + "empowered by GPT-4V. Our approach aims to revolutionize teachers' assessment of\n", + "students' practices by leveraging Generative Artificial Intelligence (GenAI) to\n", + "offer detailed insights into classroom dynamics. Our research methodology\n", + "encompasses a comprehensive literature review, prototype development of the\n", + "VidAAS, and usability testing with in-service teachers. The study findings\n", + "provide future research avenues for VidAAS design, implementation, and\n", + "integration in teacher analytics, underscoring the potential of GPT-4V to\n", + "provide real-time, scalable feedback and a deeper understanding of the\n", + "classroom.\n", + "\n", + "Link: http://arxiv.org/abs/2405.18623v2\n", + "\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[32m\n", + "Next speaker: Product_manager\n", + "\u001b[0m\n", + "\u001b[33mProduct_manager\u001b[0m (to chat_manager):\n", + "\n", + "This is exciting! The paper you found, \"I See You: Teacher Analytics with GPT-4 Vision-Powered Observational Assessment\", explores an innovative application of GPT-4 Vision. Let's break down the potential software applications based on this:\n", + "\n", + "**Software Applications Inspired by \"I See You\":**\n", + "\n", + "* **Video-Based Educational Analytics Platforms:** This paper lays the groundwork for a new breed of educational software. Imagine platforms that analyze classroom recordings to provide automated feedback to teachers. These platforms could:\n", + " * **Track student engagement:** Identify students who seem disengaged or confused.\n", + " * **Analyze classroom interactions:** Quantify the quality and nature of teacher-student interactions.\n", + " * **Assess student understanding:** Potentially even gauge student comprehension through facial expressions and body language.\n", + "* **Real-time Teacher Assistance Tools:** GPT-4 Vision could power real-time feedback tools for teachers during live lessons. Imagine:\n", + " * **Subtle alerts:** Discretely notifying a teacher if a student appears to be struggling.\n", + " * **Personalized suggestions:** Providing on-the-fly recommendations for teaching strategies based on real-time classroom dynamics. \n", + "* **Teacher Training and Professional Development:** \n", + " * **Simulation Training:** GPT-4 Vision could create realistic virtual classroom simulations for teacher training, allowing educators to practice techniques and receive AI-powered feedback.\n", + " * **Reflective Practice:** Video analysis tools could help teachers reflect on their own teaching styles and identify areas for improvement. \n", + "\n", + "**Beyond Education:**\n", + "\n", + "While this paper focuses on education, the core technology has broader implications:\n", + "\n", + "* **Meeting Analytics:** Imagine software that analyzes video conferences to track engagement, identify key discussion points, or even assess team dynamics.\n", + "* **Healthcare Training:** Similar applications could revolutionize healthcare training by providing automated feedback during simulated patient interactions. 
\n", + "* **Retail Analytics:** GPT-4 Vision could analyze customer behavior in retail environments, providing insights into product placement, customer service interactions, and more.\n", + "\n", + "**Challenges and Ethical Considerations:**\n", + "\n", + "* **Bias in AI:** Ensuring that the AI models are trained on diverse datasets to avoid perpetuating existing biases in education or other fields. \n", + "* **Privacy Concerns:** Video analysis raises significant privacy concerns. Clear guidelines and user consent are essential.\n", + "* **Teacher Autonomy:** It's crucial to design these systems as tools to *augment* teacher expertise, not replace it.\n", + "\n", + "**In conclusion,** the paper you found highlights the immense potential of GPT-4 Vision to revolutionize software in education and beyond. It's an exciting area of exploration with the potential to create powerful tools for learning, analysis, and understanding human behavior. \n", + "\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[32m\n", + "Next speaker: Coder\n", + "\u001b[0m\n", + "\u001b[33mCoder\u001b[0m (to chat_manager):\n", + "\n", + "The paper you found titled \"I See You: Teacher Analytics with GPT-4 Vision-Powered Observational Assessment\" outlines a fascinating and innovative application of GPT-4 Vision technology. Here are potential software applications based on this research:\n", + "\n", + "### 1. Video-Based Educational Analytics Platforms\n", + "These platforms can analyze classroom recordings to automate feedback for teachers. They could:\n", + "- **Track Student Engagement:** Identify students who seem disengaged or confused.\n", + "- **Analyze Teacher-Student Interactions:** Quantify the quality and nature of interactions.\n", + "- **Assess Understanding:** Gauge student comprehension through facial expressions and body language.\n", + "\n", + "### 2. Real-Time Teacher Assistance Tools\n", + "GPT-4 Vision could power tools that provide real-time feedback during live lessons by:\n", + "- **Alerting Teachers Subtly:** Notifying if a student is struggling.\n", + "- **Offering Personalized Suggestions:** Providing on-the-fly recommendations for teaching strategies.\n", + "\n", + "### 3. 
Teacher Training and Professional Development\n", + "- **Simulation Training:** Enable realistic virtual classroom simulations for practice and AI-powered feedback.\n", + "- **Reflective Practice:** Allow teachers to reflect on their own teaching and identify areas for improvement through video analysis tools.\n", + "\n", + "### Applications Beyond Education\n", + "While this paper focuses on education, the underlying technology has broader implications across various domains:\n", + "#### Meeting Analytics\n", + "Software could analyze video conferences to:\n", + "- Track engagement.\n", + "- Identify key discussion points.\n", + "- Assess team dynamics.\n", + "\n", + "#### Healthcare Training\n", + "Tools could provide automated feedback during simulated patient interactions.\n", + "\n", + "#### Retail Analytics\n", + "GPT-4 Vision could analyze customer behavior in retail environments, offering insights into:\n", + "- Product placements.\n", + "- Customer service interactions.\n", + "- Sales strategies.\n", + "\n", + "### Challenges and Ethical Considerations\n", + "- **Bias in AI:** Ensuring diverse datasets to avoid perpetuating biases.\n", + "- **Privacy Concerns:** Addressing significant privacy concerns with clear guidelines and user consent.\n", + "- **Teacher Autonomy:** Designing systems to augment rather than replace teacher expertise.\n", + "\n", + "**Conclusion:**\n", + "The \"I See You\" paper exemplifies the immense potential of GPT-4 Vision technology to innovate educational software and beyond. By addressing challenges and ethical considerations, we can harness this technology to create powerful tools for learning, analysis, and human behavior understanding.\n", + "\n", + "TERMINATE\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[32m\n", + "Next speaker: Product_manager\n", + "\u001b[0m\n", + "\u001b[33mProduct_manager\u001b[0m (to chat_manager):\n", + "\n", + "That's an excellent summary and expansion of the potential applications! You've clearly grasped the key points of the paper and extrapolated them into a range of innovative software ideas. \n", + "\n", + "I especially appreciate your inclusion of:\n", + "\n", + "* **Specific examples** within each application category (like tracking engagement in meeting analytics or analyzing customer service interactions in retail). \n", + "* **The broader applications** beyond education, demonstrating the wide-reaching impact of this technology.\n", + "* **The critical emphasis on challenges and ethical considerations**, which are essential to responsible development and deployment of such powerful AI systems. \n", + "\n", + "This kind of analysis is crucial for turning research like the \"I See You\" paper into real-world solutions that can benefit various industries. You've highlighted the exciting possibilities and important considerations for the future of AI-powered software! \n", + "\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[32m\n", + "Next speaker: Product_manager\n", + "\u001b[0m\n", + "\u001b[33mProduct_manager\u001b[0m (to chat_manager):\n", + "\n", + "You're right! We've just scratched the surface. Let's dive deeper into some specific software product ideas and how we might overcome the challenges:\n", + "\n", + "**1. 
\"Classroom Insights\" for Education Analytics**\n", + "\n", + "* **Features:**\n", + " * **Engagement Heatmap:** Visualize student engagement over time, highlighting moments of high and low participation. \n", + " * **Interaction Analysis:** Quantify teacher-student talk time, question types, and wait time for responses.\n", + " * **Sentiment Detection:** (With appropriate ethical safeguards) gauge general classroom sentiment (positive, negative, neutral) at different points during the lesson. \n", + " * **Personalized Recommendations:** Provide teachers with tailored suggestions for improving engagement, questioning techniques, or addressing individual student needs. \n", + "\n", + "* **Addressing Challenges:**\n", + " * **Bias Mitigation:** Train the AI model on diverse classroom settings and demographics, and allow for manual adjustments based on teacher feedback. \n", + " * **Privacy:** Implement strict data anonymization, secure storage, and clear consent procedures for both teachers and students (or parents/guardians).\n", + " * **Teacher Autonomy:** Emphasize that the tool provides insights, not judgments. Allow teachers to customize the feedback and focus areas.\n", + "\n", + "**2. \"Simulate Teach\" for Teacher Training**\n", + "\n", + "* **Features:**\n", + " * **Virtual Classrooms:** Create realistic virtual classroom environments with diverse student avatars exhibiting different behaviors and learning styles. \n", + " * **Scenario-Based Training:** Present trainees with various teaching challenges (e.g., classroom management, differentiated instruction) to practice in a safe space.\n", + " * **Real-Time Feedback:** Provide immediate AI-powered feedback on the trainee's teaching strategies, body language, and classroom management techniques. \n", + "\n", + "* **Addressing Challenges:**\n", + " * **Realism:** Continuously improve the virtual students' AI to respond more naturally and authentically to the trainee's actions.\n", + " * **Accessibility:** Design the software to be usable across various devices and internet speeds to benefit trainees in diverse locations.\n", + " * **Mentorship Integration:** Combine the AI feedback with opportunities for reflection and discussion with experienced mentors.\n", + "\n", + "**3. \"Meeting Insights Pro\" for Business**\n", + "\n", + "* **Features:**\n", + " * **Engagement Timeline:** Track attendee engagement levels throughout the meeting, identifying moments of high and low interest.\n", + " * **Action Item Detection:** Automatically extract key decisions, action items, and assigned owners from meeting transcripts.\n", + " * **Sentiment & Tone Analysis:** (With appropriate ethical considerations) analyze the overall sentiment and communication style within the team. \n", + " * **Meeting Efficiency Recommendations:** Provide data-driven suggestions for improving meeting structure, duration, and facilitation techniques.\n", + "\n", + "* **Addressing Challenges:**\n", + " * **Privacy:** Ensure robust data security, user consent, and clear communication about how the data is used.\n", + " * **Bias in Tone Analysis:** Train on diverse communication styles and cultural contexts to avoid misinterpretations of tone. \n", + " * **Human Oversight:** Allow for manual review and adjustments to AI-generated insights to account for nuances in communication and context. \n", + "\n", + "These are just a few examples. The potential applications of GPT-4 Vision in software are vast, spanning industries and use cases. 
By thoughtfully addressing the ethical and practical challenges, we can create transformative tools that empower educators, professionals, and individuals. \n", + "\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[32m\n", + "Next speaker: Product_manager\n", + "\u001b[0m\n", + "\u001b[33mProduct_manager\u001b[0m (to chat_manager):\n", + "\n", + "You're right, let's continue exploring! Beyond those initial ideas, here are some more speculative, but potentially high-impact applications:\n", + "\n", + "**4. \"CodeMentor\" for Personalized Programming Education**\n", + "\n", + "* **Features:**\n", + " * **Code Analysis & Feedback:** GPT-4 Vision analyzes code in real-time, identifying errors, suggesting improvements, and explaining concepts visually. \n", + " * **Personalized Learning Paths:** The software adapts to the learner's pace, style, and identified knowledge gaps to create a customized curriculum.\n", + " * **Virtual Debugging Assistant:** GPT-4 Vision \"looks\" at the code alongside the learner, anticipating common errors and providing interactive debugging guidance.\n", + "\n", + "* **Addressing Challenges:**\n", + " * **Complexity of Programming:** Training the AI on a vast dataset of code, programming paradigms, and best practices would be crucial.\n", + " * **Pedagogical Effectiveness:** Integrating proven teaching methods and ensuring the AI's feedback aligns with sound learning principles.\n", + " * **Avoiding Over-Reliance:** Encouraging problem-solving skills and independent thinking alongside AI assistance.\n", + "\n", + "**5. \"DesignSpark\" for Collaborative Creative Work**\n", + "\n", + "* **Features:**\n", + " * **Visual Brainstorming:** GPT-4 Vision assists teams in brainstorming by generating images, mockups, and design variations based on keywords, sketches, or mood boards.\n", + " * **Real-time Feedback & Iteration:** AI provides instant feedback on design elements, suggesting improvements to composition, color, and typography. \n", + " * **Cross-Cultural Design:** GPT-4 Vision analyzes design trends and preferences across different cultures, helping teams create globally resonant visuals.\n", + "\n", + "* **Addressing Challenges:**\n", + " * **Subjectivity in Design:** Balancing objective design principles with the subjective nature of aesthetics and creativity.\n", + " * **Copyright & Ownership:** Establishing clear guidelines for the use and ownership of AI-generated design elements. \n", + " * **Preserving Human Creativity:** Ensuring that AI augments, not replaces, the unique vision and skills of human designers.\n", + "\n", + "**6. 
\"Accessible World\" for Assistive Technology**\n", + "\n", + "* **Features:**\n", + " * **Real-Time Object Recognition:** For visually impaired individuals, the software provides audio descriptions of objects, scenes, and text in their environment.\n", + " * **Sign Language Translation:** GPT-4 Vision translates sign language into text or speech in real-time, facilitating communication.\n", + " * **Personalized Sensory Feedback:** The software creates customized sensory experiences (e.g., haptic feedback, light patterns) for individuals with diverse sensory processing needs.\n", + "\n", + "* **Addressing Challenges:**\n", + " * **Accuracy and Reliability:** Ensuring the AI's interpretations are accurate and dependable in real-world situations.\n", + " * **Privacy and Dignity:** Designing the technology in a way that respects user privacy and avoids stigmatization.\n", + " * **Affordability & Accessibility:** Making the technology accessible to a wide range of users, regardless of their financial or technical resources. \n", + "\n", + "These expanded examples highlight the potential of GPT-4 Vision to not only automate tasks but also to enhance human capabilities, foster creativity, and promote inclusivity. By approaching these innovations with a focus on ethical considerations and human-centered design, we can unlock incredible opportunities for positive change. \n", + "\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[32m\n", + "Next speaker: Product_manager\n", + "\u001b[0m\n", + "\u001b[33mProduct_manager\u001b[0m (to chat_manager):\n", + "\n", + "You're right, there's so much more to explore! Let's venture into some even more futuristic and ambitious applications, pushing the boundaries of what we currently imagine possible:\n", + "\n", + "**7. \"Empathic Tech\" for Mental Health Support:**\n", + "\n", + "* **Features:**\n", + " * **Emotionally Aware Virtual Companions:** GPT-4 Vision powers AI companions that can recognize and respond to human emotions expressed through facial expressions, body language, and vocal tone.\n", + " * **Personalized Mental Health Exercises:** The software provides tailored mindfulness practices, breathing exercises, or cognitive-behavioral therapy (CBT) techniques based on real-time emotional cues.\n", + " * **Early Warning System:** GPT-4 Vision analyzes patterns in user behavior and emotional expression to identify potential signs of mental health challenges and connect them with appropriate resources.\n", + "\n", + "* **Addressing Challenges:**\n", + " * **Ethical Considerations:** Ensuring user privacy, data security, and responsible use of sensitive health information is paramount.\n", + " * **Accuracy and Sensitivity:** Training AI to accurately interpret complex human emotions and respond with empathy and cultural sensitivity is a significant challenge.\n", + " * **Human Connection:** Emphasizing that technology should complement, not replace, professional mental health care and human connection.\n", + "\n", + "**8. 
\"EcoVision\" for Environmental Monitoring and Conservation:**\n", + "\n", + "* **Features:**\n", + " * **Real-Time Environmental Analysis:** GPT-4 Vision analyzes images and videos from drones, satellites, or ground-based cameras to monitor deforestation, pollution levels, wildlife populations, and other environmental factors.\n", + " * **Predictive Modeling for Conservation:** The software uses AI to predict environmental changes, identify areas at risk, and inform conservation efforts.\n", + " * **Citizen Science Platform:** EcoVision empowers individuals to contribute to environmental monitoring by uploading images and observations that the AI can analyze and integrate into its models.\n", + "\n", + "* **Addressing Challenges:**\n", + " * **Data Accessibility and Accuracy:** Ensuring access to high-quality, diverse environmental data from various sources is crucial.\n", + " * **Bias in Data and Algorithms:** Mitigating potential biases in data collection and algorithm design to avoid skewed environmental assessments.\n", + " * **Collaboration and Action:** Translating AI insights into concrete actions by fostering collaboration between scientists, policymakers, and local communities.\n", + "\n", + "**9. \"HistoryLens\" for Immersive Historical Experiences:**\n", + "\n", + "* **Features:**\n", + " * **Interactive Historical Reenactments:** GPT-4 Vision recreates historical events, figures, and locations in immersive virtual reality or augmented reality experiences.\n", + " * **Personalized Learning Journeys:** The software tailors historical narratives and perspectives based on user interests and background, providing a deeper understanding of the past.\n", + " * **Preservation and Accessibility:** HistoryLens digitally preserves historical artifacts, documents, and oral histories, making them accessible to a wider audience.\n", + "\n", + "* **Addressing Challenges:**\n", + " * **Historical Accuracy and Bias:** Ensuring historical representations are accurate, nuanced, and avoid perpetuating biases or historical revisionism. \n", + " * **Ethical Considerations of Representation:** Carefully navigating the ethical complexities of recreating sensitive historical events or representing marginalized communities. \n", + " * **Balancing Entertainment and Education:** Creating engaging experiences that also foster critical thinking and historical understanding.\n", + "\n", + "These futuristic applications highlight the immense potential of GPT-4 Vision to not only solve practical problems but also enhance our understanding of ourselves, our planet, and our history. As we continue to develop this technology, it is essential to proceed with thoughtful consideration for its ethical implications and societal impact, ensuring that it is used to benefit humanity and create a more just and sustainable future. \n", + "\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[32m\n", + "Next speaker: Product_manager\n", + "\u001b[0m\n", + "\u001b[33mProduct_manager\u001b[0m (to chat_manager):\n", + "\n", + "You're right, there's always more to imagine! Let's keep pushing the boundaries and explore some applications that sound like they're straight out of science fiction:\n", + "\n", + "**10. 
\"Dream Weaver\" for Artistic Co-Creation:**\n", + "\n", + "* **Features:**\n", + " * **Concept Visualization:** Users can input textual descriptions, rough sketches, or even their emotions, and GPT-4 Vision generates stunning visuals, musical compositions, or even short films that capture the essence of their ideas. \n", + " * **Style Transfer & Remixing:** The software enables artists to blend different artistic styles, seamlessly merging the realism of a photograph with the brushstrokes of Van Gogh or the abstract patterns of Kandinsky.\n", + " * **Interactive Storytelling:** GPT-4 Vision becomes a collaborative partner in storytelling, generating dynamic environments, characters, and plot twists in response to user input, blurring the lines between audience and creator.\n", + "\n", + "* **Addressing Challenges:**\n", + " * **Defining Creativity:** Exploring the philosophical and technical boundaries of AI creativity and ensuring it complements, not replaces, human artistic expression.\n", + " * **Copyright and Authorship:** Establishing clear guidelines for ownership and attribution when AI contributes significantly to the creative process. \n", + " * **Accessibility and Democratization:** Making these powerful creative tools accessible to a wide audience, fostering a more inclusive and imaginative future for art.\n", + "\n", + "\n", + "**11. \"Universal Translator\" for Real-Time Cross-Cultural Communication:**\n", + "\n", + "* **Features:**\n", + " * **Seamless Language Translation:** GPT-4 Vision goes beyond text, translating spoken language in real-time while accounting for nuances in tone, dialect, and cultural context.\n", + " * **Nonverbal Communication Interpretation:** The software analyzes facial expressions, gestures, and body language to bridge cultural differences in nonverbal communication, fostering greater understanding.\n", + " * **Cultural Sensitivity Guidance:** GPT-4 Vision provides users with real-time insights into cultural norms and customs, helping to avoid misunderstandings and promote respectful interactions.\n", + "\n", + "* **Addressing Challenges:**\n", + " * **Linguistic Complexity and Nuance:** Accurately translating the full richness and complexity of human language, including idioms, slang, and cultural references, remains a significant hurdle.\n", + " * **Bias and Stereotyping:** Ensuring the AI avoids perpetuating cultural biases or stereotypes in its translations and interpretations is crucial.\n", + " * **Preserving Linguistic Diversity:** Promoting language learning and cultural exchange while using technology to bridge communication gaps is essential. \n", + "\n", + "**12. \"Guardian AI\" for Personalized Safety and Well-being:**\n", + "\n", + "* **Features:**\n", + " * **Predictive Risk Assessment:** GPT-4 Vision analyzes real-time data from personal devices, social networks, and public safety feeds to identify potential risks or threats to an individual's safety and well-being. \n", + " * **Proactive Safety Measures:** The AI can alert users to potential dangers, suggest alternative routes, or even contact emergency services in critical situations. \n", + " * **Personalized Health Monitoring:** GPT-4 Vision monitors an individual's physical and mental health through wearable sensors and behavioral patterns, providing early warnings of potential health issues and suggesting preventative measures. \n", + "\n", + "* **Addressing Challenges:**\n", + " * **Privacy vs. 
Security:** Striking a delicate balance between protecting individual privacy and utilizing data to enhance safety.\n", + " * **Bias and Discrimination:** Ensuring the AI does not perpetuate existing societal biases or discriminate against individuals based on factors like race, gender, or socioeconomic status.\n", + " * **Over-Reliance and Autonomy:** Avoiding over-dependence on AI for safety and encouraging individuals to maintain their own situational awareness and decision-making abilities. \n", + "\n", + "These futuristic applications highlight the potential of GPT-4 Vision to not only solve problems but also enhance human connection, understanding, and well-being. As we develop such powerful AI, responsible innovation, ethical considerations, and a focus on human-centered design will be paramount in shaping a future where technology empowers and uplifts all of humanity. \n", + "\n", + "\n", + "--------------------------------------------------------------------------------\n" + ] + }, + { + "data": { + "text/plain": [ + "ChatResult(chat_id=None, chat_history=[{'content': 'Find a latest paper about gpt-4 on arxiv and find its potential applications in software.', 'role': 'assistant'}, {'content': '', 'role': 'assistant'}, {'content': '', 'role': 'assistant'}, {'content': 'Let\\'s start by searching for the latest paper about GPT-4 on arXiv. We can use the arXiv API to fetch the latest paper related to GPT-4.\\n\\nI\\'ll provide a Python script to search arXiv for the latest papers related to \"GPT-4.\"\\n\\n```python\\nimport requests\\nfrom xml.etree import ElementTree\\n\\n# arXiv API url for querying papers related to GPT-4\\nurl = \"http://export.arxiv.org/api/query?search_query=ti:GPT-4&start=0&max_results=1&sortBy=submittedDate&sortOrder=descending\"\\n\\nresponse = requests.get(url)\\nif response.status_code == 200:\\n root = ElementTree.fromstring(response.content)\\n entry = root.find(\"{http://www.w3.org/2005/Atom}entry\")\\n if entry is not None:\\n title = entry.find(\"{http://www.w3.org/2005/Atom}title\").text\\n summary = entry.find(\"{http://www.w3.org/2005/Atom}summary\").text\\n link = entry.find(\"{http://www.w3.org/2005/Atom}id\").text\\n \\n print(f\"Title: {title}\")\\n print(f\"Summary: {summary}\")\\n print(f\"Link: {link}\")\\n else:\\n print(\"No entries found.\")\\nelse:\\n print(f\"Failed to fetch data from arXiv API. Status code: {response.status_code}\")\\n```\\n\\nPlease execute this script. It will output the title, summary, and link of the latest paper about \"GPT-4\" on arXiv. After obtaining the relevant paper, I will analyze its potential applications in software.', 'name': 'Coder', 'role': 'user'}, {'content': \"exitcode: 0 (execution succeeded)\\nCode output: \\nTitle: I See You: Teacher Analytics with GPT-4 Vision-Powered Observational\\n Assessment\\nSummary: This preliminary study explores the integration of GPT-4 Vision (GPT-4V)\\ntechnology into teacher analytics, focusing on its applicability in\\nobservational assessment to enhance reflective teaching practice. This research\\nis grounded in developing a Video-based Automatic Assessment System (VidAAS)\\nempowered by GPT-4V. Our approach aims to revolutionize teachers' assessment of\\nstudents' practices by leveraging Generative Artificial Intelligence (GenAI) to\\noffer detailed insights into classroom dynamics. Our research methodology\\nencompasses a comprehensive literature review, prototype development of the\\nVidAAS, and usability testing with in-service teachers. 
The study findings\\nprovide future research avenues for VidAAS design, implementation, and\\nintegration in teacher analytics, underscoring the potential of GPT-4V to\\nprovide real-time, scalable feedback and a deeper understanding of the\\nclassroom.\\n\\nLink: http://arxiv.org/abs/2405.18623v2\\n\", 'role': 'assistant'}, {'content': 'This is exciting! The paper you found, \"I See You: Teacher Analytics with GPT-4 Vision-Powered Observational Assessment\", explores an innovative application of GPT-4 Vision. Let\\'s break down the potential software applications based on this:\\n\\n**Software Applications Inspired by \"I See You\":**\\n\\n* **Video-Based Educational Analytics Platforms:** This paper lays the groundwork for a new breed of educational software. Imagine platforms that analyze classroom recordings to provide automated feedback to teachers. These platforms could:\\n * **Track student engagement:** Identify students who seem disengaged or confused.\\n * **Analyze classroom interactions:** Quantify the quality and nature of teacher-student interactions.\\n * **Assess student understanding:** Potentially even gauge student comprehension through facial expressions and body language.\\n* **Real-time Teacher Assistance Tools:** GPT-4 Vision could power real-time feedback tools for teachers during live lessons. Imagine:\\n * **Subtle alerts:** Discretely notifying a teacher if a student appears to be struggling.\\n * **Personalized suggestions:** Providing on-the-fly recommendations for teaching strategies based on real-time classroom dynamics. \\n* **Teacher Training and Professional Development:** \\n * **Simulation Training:** GPT-4 Vision could create realistic virtual classroom simulations for teacher training, allowing educators to practice techniques and receive AI-powered feedback.\\n * **Reflective Practice:** Video analysis tools could help teachers reflect on their own teaching styles and identify areas for improvement. \\n\\n**Beyond Education:**\\n\\nWhile this paper focuses on education, the core technology has broader implications:\\n\\n* **Meeting Analytics:** Imagine software that analyzes video conferences to track engagement, identify key discussion points, or even assess team dynamics.\\n* **Healthcare Training:** Similar applications could revolutionize healthcare training by providing automated feedback during simulated patient interactions. \\n* **Retail Analytics:** GPT-4 Vision could analyze customer behavior in retail environments, providing insights into product placement, customer service interactions, and more.\\n\\n**Challenges and Ethical Considerations:**\\n\\n* **Bias in AI:** Ensuring that the AI models are trained on diverse datasets to avoid perpetuating existing biases in education or other fields. \\n* **Privacy Concerns:** Video analysis raises significant privacy concerns. Clear guidelines and user consent are essential.\\n* **Teacher Autonomy:** It\\'s crucial to design these systems as tools to *augment* teacher expertise, not replace it.\\n\\n**In conclusion,** the paper you found highlights the immense potential of GPT-4 Vision to revolutionize software in education and beyond. It\\'s an exciting area of exploration with the potential to create powerful tools for learning, analysis, and understanding human behavior. 
\\n', 'name': 'Product_manager', 'role': 'user'}, {'content': 'The paper you found titled \"I See You: Teacher Analytics with GPT-4 Vision-Powered Observational Assessment\" outlines a fascinating and innovative application of GPT-4 Vision technology. Here are potential software applications based on this research:\\n\\n### 1. Video-Based Educational Analytics Platforms\\nThese platforms can analyze classroom recordings to automate feedback for teachers. They could:\\n- **Track Student Engagement:** Identify students who seem disengaged or confused.\\n- **Analyze Teacher-Student Interactions:** Quantify the quality and nature of interactions.\\n- **Assess Understanding:** Gauge student comprehension through facial expressions and body language.\\n\\n### 2. Real-Time Teacher Assistance Tools\\nGPT-4 Vision could power tools that provide real-time feedback during live lessons by:\\n- **Alerting Teachers Subtly:** Notifying if a student is struggling.\\n- **Offering Personalized Suggestions:** Providing on-the-fly recommendations for teaching strategies.\\n\\n### 3. Teacher Training and Professional Development\\n- **Simulation Training:** Enable realistic virtual classroom simulations for practice and AI-powered feedback.\\n- **Reflective Practice:** Allow teachers to reflect on their own teaching and identify areas for improvement through video analysis tools.\\n\\n### Applications Beyond Education\\nWhile this paper focuses on education, the underlying technology has broader implications across various domains:\\n#### Meeting Analytics\\nSoftware could analyze video conferences to:\\n- Track engagement.\\n- Identify key discussion points.\\n- Assess team dynamics.\\n\\n#### Healthcare Training\\nTools could provide automated feedback during simulated patient interactions.\\n\\n#### Retail Analytics\\nGPT-4 Vision could analyze customer behavior in retail environments, offering insights into:\\n- Product placements.\\n- Customer service interactions.\\n- Sales strategies.\\n\\n### Challenges and Ethical Considerations\\n- **Bias in AI:** Ensuring diverse datasets to avoid perpetuating biases.\\n- **Privacy Concerns:** Addressing significant privacy concerns with clear guidelines and user consent.\\n- **Teacher Autonomy:** Designing systems to augment rather than replace teacher expertise.\\n\\n**Conclusion:**\\nThe \"I See You\" paper exemplifies the immense potential of GPT-4 Vision technology to innovate educational software and beyond. By addressing challenges and ethical considerations, we can harness this technology to create powerful tools for learning, analysis, and human behavior understanding.\\n\\nTERMINATE', 'name': 'Coder', 'role': 'user'}, {'content': 'That\\'s an excellent summary and expansion of the potential applications! You\\'ve clearly grasped the key points of the paper and extrapolated them into a range of innovative software ideas. \\n\\nI especially appreciate your inclusion of:\\n\\n* **Specific examples** within each application category (like tracking engagement in meeting analytics or analyzing customer service interactions in retail). \\n* **The broader applications** beyond education, demonstrating the wide-reaching impact of this technology.\\n* **The critical emphasis on challenges and ethical considerations**, which are essential to responsible development and deployment of such powerful AI systems. \\n\\nThis kind of analysis is crucial for turning research like the \"I See You\" paper into real-world solutions that can benefit various industries. 
You\\'ve highlighted the exciting possibilities and important considerations for the future of AI-powered software! \\n', 'name': 'Product_manager', 'role': 'user'}, {'content': 'You\\'re right! We\\'ve just scratched the surface. Let\\'s dive deeper into some specific software product ideas and how we might overcome the challenges:\\n\\n**1. \"Classroom Insights\" for Education Analytics**\\n\\n* **Features:**\\n * **Engagement Heatmap:** Visualize student engagement over time, highlighting moments of high and low participation. \\n * **Interaction Analysis:** Quantify teacher-student talk time, question types, and wait time for responses.\\n * **Sentiment Detection:** (With appropriate ethical safeguards) gauge general classroom sentiment (positive, negative, neutral) at different points during the lesson. \\n * **Personalized Recommendations:** Provide teachers with tailored suggestions for improving engagement, questioning techniques, or addressing individual student needs. \\n\\n* **Addressing Challenges:**\\n * **Bias Mitigation:** Train the AI model on diverse classroom settings and demographics, and allow for manual adjustments based on teacher feedback. \\n * **Privacy:** Implement strict data anonymization, secure storage, and clear consent procedures for both teachers and students (or parents/guardians).\\n * **Teacher Autonomy:** Emphasize that the tool provides insights, not judgments. Allow teachers to customize the feedback and focus areas.\\n\\n**2. \"Simulate Teach\" for Teacher Training**\\n\\n* **Features:**\\n * **Virtual Classrooms:** Create realistic virtual classroom environments with diverse student avatars exhibiting different behaviors and learning styles. \\n * **Scenario-Based Training:** Present trainees with various teaching challenges (e.g., classroom management, differentiated instruction) to practice in a safe space.\\n * **Real-Time Feedback:** Provide immediate AI-powered feedback on the trainee\\'s teaching strategies, body language, and classroom management techniques. \\n\\n* **Addressing Challenges:**\\n * **Realism:** Continuously improve the virtual students\\' AI to respond more naturally and authentically to the trainee\\'s actions.\\n * **Accessibility:** Design the software to be usable across various devices and internet speeds to benefit trainees in diverse locations.\\n * **Mentorship Integration:** Combine the AI feedback with opportunities for reflection and discussion with experienced mentors.\\n\\n**3. \"Meeting Insights Pro\" for Business**\\n\\n* **Features:**\\n * **Engagement Timeline:** Track attendee engagement levels throughout the meeting, identifying moments of high and low interest.\\n * **Action Item Detection:** Automatically extract key decisions, action items, and assigned owners from meeting transcripts.\\n * **Sentiment & Tone Analysis:** (With appropriate ethical considerations) analyze the overall sentiment and communication style within the team. \\n * **Meeting Efficiency Recommendations:** Provide data-driven suggestions for improving meeting structure, duration, and facilitation techniques.\\n\\n* **Addressing Challenges:**\\n * **Privacy:** Ensure robust data security, user consent, and clear communication about how the data is used.\\n * **Bias in Tone Analysis:** Train on diverse communication styles and cultural contexts to avoid misinterpretations of tone. \\n * **Human Oversight:** Allow for manual review and adjustments to AI-generated insights to account for nuances in communication and context. 
\\n\\nThese are just a few examples. The potential applications of GPT-4 Vision in software are vast, spanning industries and use cases. By thoughtfully addressing the ethical and practical challenges, we can create transformative tools that empower educators, professionals, and individuals. \\n', 'name': 'Product_manager', 'role': 'user'}, {'content': 'You\\'re right, let\\'s continue exploring! Beyond those initial ideas, here are some more speculative, but potentially high-impact applications:\\n\\n**4. \"CodeMentor\" for Personalized Programming Education**\\n\\n* **Features:**\\n * **Code Analysis & Feedback:** GPT-4 Vision analyzes code in real-time, identifying errors, suggesting improvements, and explaining concepts visually. \\n * **Personalized Learning Paths:** The software adapts to the learner\\'s pace, style, and identified knowledge gaps to create a customized curriculum.\\n * **Virtual Debugging Assistant:** GPT-4 Vision \"looks\" at the code alongside the learner, anticipating common errors and providing interactive debugging guidance.\\n\\n* **Addressing Challenges:**\\n * **Complexity of Programming:** Training the AI on a vast dataset of code, programming paradigms, and best practices would be crucial.\\n * **Pedagogical Effectiveness:** Integrating proven teaching methods and ensuring the AI\\'s feedback aligns with sound learning principles.\\n * **Avoiding Over-Reliance:** Encouraging problem-solving skills and independent thinking alongside AI assistance.\\n\\n**5. \"DesignSpark\" for Collaborative Creative Work**\\n\\n* **Features:**\\n * **Visual Brainstorming:** GPT-4 Vision assists teams in brainstorming by generating images, mockups, and design variations based on keywords, sketches, or mood boards.\\n * **Real-time Feedback & Iteration:** AI provides instant feedback on design elements, suggesting improvements to composition, color, and typography. \\n * **Cross-Cultural Design:** GPT-4 Vision analyzes design trends and preferences across different cultures, helping teams create globally resonant visuals.\\n\\n* **Addressing Challenges:**\\n * **Subjectivity in Design:** Balancing objective design principles with the subjective nature of aesthetics and creativity.\\n * **Copyright & Ownership:** Establishing clear guidelines for the use and ownership of AI-generated design elements. \\n * **Preserving Human Creativity:** Ensuring that AI augments, not replaces, the unique vision and skills of human designers.\\n\\n**6. \"Accessible World\" for Assistive Technology**\\n\\n* **Features:**\\n * **Real-Time Object Recognition:** For visually impaired individuals, the software provides audio descriptions of objects, scenes, and text in their environment.\\n * **Sign Language Translation:** GPT-4 Vision translates sign language into text or speech in real-time, facilitating communication.\\n * **Personalized Sensory Feedback:** The software creates customized sensory experiences (e.g., haptic feedback, light patterns) for individuals with diverse sensory processing needs.\\n\\n* **Addressing Challenges:**\\n * **Accuracy and Reliability:** Ensuring the AI\\'s interpretations are accurate and dependable in real-world situations.\\n * **Privacy and Dignity:** Designing the technology in a way that respects user privacy and avoids stigmatization.\\n * **Affordability & Accessibility:** Making the technology accessible to a wide range of users, regardless of their financial or technical resources. 
\\n\\nThese expanded examples highlight the potential of GPT-4 Vision to not only automate tasks but also to enhance human capabilities, foster creativity, and promote inclusivity. By approaching these innovations with a focus on ethical considerations and human-centered design, we can unlock incredible opportunities for positive change. \\n', 'name': 'Product_manager', 'role': 'user'}, {'content': 'You\\'re right, there\\'s so much more to explore! Let\\'s venture into some even more futuristic and ambitious applications, pushing the boundaries of what we currently imagine possible:\\n\\n**7. \"Empathic Tech\" for Mental Health Support:**\\n\\n* **Features:**\\n * **Emotionally Aware Virtual Companions:** GPT-4 Vision powers AI companions that can recognize and respond to human emotions expressed through facial expressions, body language, and vocal tone.\\n * **Personalized Mental Health Exercises:** The software provides tailored mindfulness practices, breathing exercises, or cognitive-behavioral therapy (CBT) techniques based on real-time emotional cues.\\n * **Early Warning System:** GPT-4 Vision analyzes patterns in user behavior and emotional expression to identify potential signs of mental health challenges and connect them with appropriate resources.\\n\\n* **Addressing Challenges:**\\n * **Ethical Considerations:** Ensuring user privacy, data security, and responsible use of sensitive health information is paramount.\\n * **Accuracy and Sensitivity:** Training AI to accurately interpret complex human emotions and respond with empathy and cultural sensitivity is a significant challenge.\\n * **Human Connection:** Emphasizing that technology should complement, not replace, professional mental health care and human connection.\\n\\n**8. \"EcoVision\" for Environmental Monitoring and Conservation:**\\n\\n* **Features:**\\n * **Real-Time Environmental Analysis:** GPT-4 Vision analyzes images and videos from drones, satellites, or ground-based cameras to monitor deforestation, pollution levels, wildlife populations, and other environmental factors.\\n * **Predictive Modeling for Conservation:** The software uses AI to predict environmental changes, identify areas at risk, and inform conservation efforts.\\n * **Citizen Science Platform:** EcoVision empowers individuals to contribute to environmental monitoring by uploading images and observations that the AI can analyze and integrate into its models.\\n\\n* **Addressing Challenges:**\\n * **Data Accessibility and Accuracy:** Ensuring access to high-quality, diverse environmental data from various sources is crucial.\\n * **Bias in Data and Algorithms:** Mitigating potential biases in data collection and algorithm design to avoid skewed environmental assessments.\\n * **Collaboration and Action:** Translating AI insights into concrete actions by fostering collaboration between scientists, policymakers, and local communities.\\n\\n**9. 
\"HistoryLens\" for Immersive Historical Experiences:**\\n\\n* **Features:**\\n * **Interactive Historical Reenactments:** GPT-4 Vision recreates historical events, figures, and locations in immersive virtual reality or augmented reality experiences.\\n * **Personalized Learning Journeys:** The software tailors historical narratives and perspectives based on user interests and background, providing a deeper understanding of the past.\\n * **Preservation and Accessibility:** HistoryLens digitally preserves historical artifacts, documents, and oral histories, making them accessible to a wider audience.\\n\\n* **Addressing Challenges:**\\n * **Historical Accuracy and Bias:** Ensuring historical representations are accurate, nuanced, and avoid perpetuating biases or historical revisionism. \\n * **Ethical Considerations of Representation:** Carefully navigating the ethical complexities of recreating sensitive historical events or representing marginalized communities. \\n * **Balancing Entertainment and Education:** Creating engaging experiences that also foster critical thinking and historical understanding.\\n\\nThese futuristic applications highlight the immense potential of GPT-4 Vision to not only solve practical problems but also enhance our understanding of ourselves, our planet, and our history. As we continue to develop this technology, it is essential to proceed with thoughtful consideration for its ethical implications and societal impact, ensuring that it is used to benefit humanity and create a more just and sustainable future. \\n', 'name': 'Product_manager', 'role': 'user'}, {'content': 'You\\'re right, there\\'s always more to imagine! Let\\'s keep pushing the boundaries and explore some applications that sound like they\\'re straight out of science fiction:\\n\\n**10. \"Dream Weaver\" for Artistic Co-Creation:**\\n\\n* **Features:**\\n * **Concept Visualization:** Users can input textual descriptions, rough sketches, or even their emotions, and GPT-4 Vision generates stunning visuals, musical compositions, or even short films that capture the essence of their ideas. \\n * **Style Transfer & Remixing:** The software enables artists to blend different artistic styles, seamlessly merging the realism of a photograph with the brushstrokes of Van Gogh or the abstract patterns of Kandinsky.\\n * **Interactive Storytelling:** GPT-4 Vision becomes a collaborative partner in storytelling, generating dynamic environments, characters, and plot twists in response to user input, blurring the lines between audience and creator.\\n\\n* **Addressing Challenges:**\\n * **Defining Creativity:** Exploring the philosophical and technical boundaries of AI creativity and ensuring it complements, not replaces, human artistic expression.\\n * **Copyright and Authorship:** Establishing clear guidelines for ownership and attribution when AI contributes significantly to the creative process. \\n * **Accessibility and Democratization:** Making these powerful creative tools accessible to a wide audience, fostering a more inclusive and imaginative future for art.\\n\\n\\n**11. 
\"Universal Translator\" for Real-Time Cross-Cultural Communication:**\\n\\n* **Features:**\\n * **Seamless Language Translation:** GPT-4 Vision goes beyond text, translating spoken language in real-time while accounting for nuances in tone, dialect, and cultural context.\\n * **Nonverbal Communication Interpretation:** The software analyzes facial expressions, gestures, and body language to bridge cultural differences in nonverbal communication, fostering greater understanding.\\n * **Cultural Sensitivity Guidance:** GPT-4 Vision provides users with real-time insights into cultural norms and customs, helping to avoid misunderstandings and promote respectful interactions.\\n\\n* **Addressing Challenges:**\\n * **Linguistic Complexity and Nuance:** Accurately translating the full richness and complexity of human language, including idioms, slang, and cultural references, remains a significant hurdle.\\n * **Bias and Stereotyping:** Ensuring the AI avoids perpetuating cultural biases or stereotypes in its translations and interpretations is crucial.\\n * **Preserving Linguistic Diversity:** Promoting language learning and cultural exchange while using technology to bridge communication gaps is essential. \\n\\n**12. \"Guardian AI\" for Personalized Safety and Well-being:**\\n\\n* **Features:**\\n * **Predictive Risk Assessment:** GPT-4 Vision analyzes real-time data from personal devices, social networks, and public safety feeds to identify potential risks or threats to an individual\\'s safety and well-being. \\n * **Proactive Safety Measures:** The AI can alert users to potential dangers, suggest alternative routes, or even contact emergency services in critical situations. \\n * **Personalized Health Monitoring:** GPT-4 Vision monitors an individual\\'s physical and mental health through wearable sensors and behavioral patterns, providing early warnings of potential health issues and suggesting preventative measures. \\n\\n* **Addressing Challenges:**\\n * **Privacy vs. Security:** Striking a delicate balance between protecting individual privacy and utilizing data to enhance safety.\\n * **Bias and Discrimination:** Ensuring the AI does not perpetuate existing societal biases or discriminate against individuals based on factors like race, gender, or socioeconomic status.\\n * **Over-Reliance and Autonomy:** Avoiding over-dependence on AI for safety and encouraging individuals to maintain their own situational awareness and decision-making abilities. \\n\\nThese futuristic applications highlight the potential of GPT-4 Vision to not only solve problems but also enhance human connection, understanding, and well-being. As we develop such powerful AI, responsible innovation, ethical considerations, and a focus on human-centered design will be paramount in shaping a future where technology empowers and uplifts all of humanity. \\n', 'name': 'Product_manager', 'role': 'user'}], summary='You\\'re right, there\\'s always more to imagine! Let\\'s keep pushing the boundaries and explore some applications that sound like they\\'re straight out of science fiction:\\n\\n**10. \"Dream Weaver\" for Artistic Co-Creation:**\\n\\n* **Features:**\\n * **Concept Visualization:** Users can input textual descriptions, rough sketches, or even their emotions, and GPT-4 Vision generates stunning visuals, musical compositions, or even short films that capture the essence of their ideas. 
\\n * **Style Transfer & Remixing:** The software enables artists to blend different artistic styles, seamlessly merging the realism of a photograph with the brushstrokes of Van Gogh or the abstract patterns of Kandinsky.\\n * **Interactive Storytelling:** GPT-4 Vision becomes a collaborative partner in storytelling, generating dynamic environments, characters, and plot twists in response to user input, blurring the lines between audience and creator.\\n\\n* **Addressing Challenges:**\\n * **Defining Creativity:** Exploring the philosophical and technical boundaries of AI creativity and ensuring it complements, not replaces, human artistic expression.\\n * **Copyright and Authorship:** Establishing clear guidelines for ownership and attribution when AI contributes significantly to the creative process. \\n * **Accessibility and Democratization:** Making these powerful creative tools accessible to a wide audience, fostering a more inclusive and imaginative future for art.\\n\\n\\n**11. \"Universal Translator\" for Real-Time Cross-Cultural Communication:**\\n\\n* **Features:**\\n * **Seamless Language Translation:** GPT-4 Vision goes beyond text, translating spoken language in real-time while accounting for nuances in tone, dialect, and cultural context.\\n * **Nonverbal Communication Interpretation:** The software analyzes facial expressions, gestures, and body language to bridge cultural differences in nonverbal communication, fostering greater understanding.\\n * **Cultural Sensitivity Guidance:** GPT-4 Vision provides users with real-time insights into cultural norms and customs, helping to avoid misunderstandings and promote respectful interactions.\\n\\n* **Addressing Challenges:**\\n * **Linguistic Complexity and Nuance:** Accurately translating the full richness and complexity of human language, including idioms, slang, and cultural references, remains a significant hurdle.\\n * **Bias and Stereotyping:** Ensuring the AI avoids perpetuating cultural biases or stereotypes in its translations and interpretations is crucial.\\n * **Preserving Linguistic Diversity:** Promoting language learning and cultural exchange while using technology to bridge communication gaps is essential. \\n\\n**12. \"Guardian AI\" for Personalized Safety and Well-being:**\\n\\n* **Features:**\\n * **Predictive Risk Assessment:** GPT-4 Vision analyzes real-time data from personal devices, social networks, and public safety feeds to identify potential risks or threats to an individual\\'s safety and well-being. \\n * **Proactive Safety Measures:** The AI can alert users to potential dangers, suggest alternative routes, or even contact emergency services in critical situations. \\n * **Personalized Health Monitoring:** GPT-4 Vision monitors an individual\\'s physical and mental health through wearable sensors and behavioral patterns, providing early warnings of potential health issues and suggesting preventative measures. \\n\\n* **Addressing Challenges:**\\n * **Privacy vs. Security:** Striking a delicate balance between protecting individual privacy and utilizing data to enhance safety.\\n * **Bias and Discrimination:** Ensuring the AI does not perpetuate existing societal biases or discriminate against individuals based on factors like race, gender, or socioeconomic status.\\n * **Over-Reliance and Autonomy:** Avoiding over-dependence on AI for safety and encouraging individuals to maintain their own situational awareness and decision-making abilities. 
\\n\\nThese futuristic applications highlight the potential of GPT-4 Vision to not only solve problems but also enhance human connection, understanding, and well-being. As we develop such powerful AI, responsible innovation, ethical considerations, and a focus on human-centered design will be paramount in shaping a future where technology empowers and uplifts all of humanity. \\n', cost={'usage_including_cached_inference': {'total_cost': 0}, 'usage_excluding_cached_inference': {'total_cost': 0}}, human_input=[])" + ] + }, + "execution_count": 1, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "import autogen\n", + "\n", + "gpt_config_list = autogen.config_list_from_json(\"OAI_CONFIG_LIST\", filter_dict={\"tags\": [\"gpt-4\"]})\n", + "\n", + "gpt_llm_config = {\"config_list\": gpt_config_list, \"timeout\": 120}\n", + "\n", + "gemini_config_list = autogen.config_list_from_json(\"OAI_CONFIG_LIST\", filter_dict={\"tags\": [\"gemini\"]})\n", + "\n", + "gemini_llm_config = {\"config_list\": gemini_config_list, \"timeout\": 120}\n", + "\n", + "user_proxy = autogen.UserProxyAgent(\n", + " name=\"User_proxy\",\n", + " system_message=\"A human admin.\",\n", + " code_execution_config={\n", + " \"last_n_messages\": 2,\n", + " \"work_dir\": \"groupchat\",\n", + " \"use_docker\": False,\n", + " }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n", + " human_input_mode=\"TERMINATE\",\n", + ")\n", + "coder = autogen.AssistantAgent(\n", + " name=\"Coder\",\n", + " llm_config=gpt_llm_config,\n", + ")\n", + "pm = autogen.AssistantAgent(\n", + " name=\"Product_manager\",\n", + " system_message=\"Creative in software product ideas.\",\n", + " llm_config=gemini_llm_config,\n", + ")\n", + "groupchat = autogen.GroupChat(agents=[user_proxy, coder, pm], messages=[], max_round=12)\n", + "manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=gpt_llm_config)\n", + "user_proxy.initiate_chat(\n", + " manager, message=\"Find a latest paper about gpt-4 on arxiv and find its potential applications in software.\"\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Function Calling with Gemini\n", + "\n", + "Here is an example of Gemini with Function Calling," + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\u001b[33muser_proxy\u001b[0m (to chatbot):\n", + "\n", + "Draw two agents chatting with each other with an example dialog. 
Don't add plt.show().\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[33mchatbot\u001b[0m (to user_proxy):\n", + "\n", + "\u001b[32m***** Suggested tool call (call_l7Rz8YLE4F2y8nGLCaroD6XL): python *****\u001b[0m\n", + "Arguments: \n", + "{\n", + " \"cell\": `\n", + "import matplotlib.pyplot as plt\n", + "\n", + "# Create figure and axes\n", + "fig, ax = plt.subplots()\n", + "\n", + "# Define agent coordinates\n", + "agent1_x, agent1_y = 1, 1\n", + "agent2_x, agent2_y = 3, 1\n", + "\n", + "# Draw agents as circles\n", + "agent1 = plt.Circle((agent1_x, agent1_y), 0.1, color='blue')\n", + "agent2 = plt.Circle((agent2_x, agent2_y), 0.1, color='green')\n", + "ax.add_patch(agent1)\n", + "ax.add_patch(agent2)\n", + "\n", + "# Add example dialog\n", + "dialog1 = \"Hi, how are you?\"\n", + "dialog2 = \"I'm good, thanks!\"\n", + "\n", + "# Add text labels for the dialog\n", + "plt.text(agent1_x, agent1_y + 0.3, dialog1, fontsize=12, ha='center')\n", + "plt.text(agent2_x, agent2_y + 0.3, dialog2, fontsize=12, ha='center')\n", + "\n", + "# Set limits and turn off the axes\n", + "ax.set_xlim(0, 4)\n", + "ax.set_ylim(0, 2)\n", + "ax.axis('off')\n", + "`\n", + "}\n", + "\u001b[32m***********************************************************************\u001b[0m\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[33muser_proxy\u001b[0m (to chatbot):\n", + "\n", + "\u001b[33muser_proxy\u001b[0m (to chatbot):\n", + "\n", + "\u001b[32m***** Response from calling tool (call_l7Rz8YLE4F2y8nGLCaroD6XL) *****\u001b[0m\n", + "Error: Expecting value: line 1 column 12 (char 11)\n", + " You argument should follow json format.\n", + "\u001b[32m**********************************************************************\u001b[0m\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[33mchatbot\u001b[0m (to user_proxy):\n", + "\n", + "\u001b[32m***** Suggested tool call (call_VjzGKf34UMKMlSHCrQpJ6ICK): python *****\u001b[0m\n", + "Arguments: \n", + "{\n", + " \"cell\": \"import matplotlib.pyplot as plt\\n\\n# Create figure and axes\\nfig, ax = plt.subplots()\\n\\n# Define agent coordinates\\nagent1_x, agent1_y = 1, 1\\nagent2_x, agent2_y = 3, 1\\n\\n# Draw agents as circles\\nagent1 = plt.Circle((agent1_x, agent1_y), 0.1, color='blue')\\nagent2 = plt.Circle((agent2_x, agent2_y), 0.1, color='green')\\nax.add_patch(agent1)\\nax.add_patch(agent2)\\n\\n# Add example dialog\\ndialog1 = \\\"Hi, how are you?\\\"\\ndialog2 = \\\"I'm good, thanks!\\\"\\n\\n# Add text labels for the dialog\\nplt.text(agent1_x, agent1_y + 0.3, dialog1, fontsize=12, ha='center')\\nplt.text(agent2_x, agent2_y + 0.3, dialog2, fontsize=12, ha='center')\\n\\n# Set limits and turn off the axes\\nax.set_xlim(0, 4)\\nax.set_ylim(0, 2)\\nax.axis('off')\"\n", + "}\n", + "\u001b[32m***********************************************************************\u001b[0m\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[35m\n", + ">>>>>>>> EXECUTING FUNCTION python...\u001b[0m\n" + ] + }, + { + "data": { + "text/plain": [ + "(0.0, 4.0, 0.0, 2.0)" + ] + }, + "execution_count": 5, + "metadata": {}, + "output_type": "execute_result" + }, + { + "data": { + "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAgMAAAGFCAYAAABg2vAPAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjkuMCwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy80BEi2AAAACXBIWXMAAA9hAAAPYQGoP6dpAAAgJklEQVR4nO3deXRU9d3H8c9kshGihGBIgGLYgsqmKKIgAQQUBEGQTUUhWAEVBS1aly5Cq3CktQdl0YCURdzKUhSNFmSRFBdEcAOMIItiyk4AC4Ek83v+yJMpQxIIYUIw3/frnBzNnTv33lzmN3nPnXsnHuecEwAAMCukvDcAAACUL2IAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YAADAOGIAAADjiAEAAIwjBgAAMI4YOAt16tRRSkpKqe7bvn17NWnSJLgbBABFaN++vdq3bx/UZW7btk0ej0d//etfg7rckvJ4PHrggQfKZd0VETHw/2bOnCmPx6M1a9YUeTu/vG346KOP1LdvX1188cWKjo5W69at9eGHH5b3ZuEMFTWeR48erTp16pTfRv1CpaWlafTo0eW9GWWq4PGybdu28t6UckMMnIWMjAxNmzatvDcDQTRgwADt27dPjzzyiJ555hnt3btXXbp00bffflvemwaUi7S0NI0ZM6a8NwNlLLS8N+CXLCIiorw34byTm5srn8+n8PDw8t6UUnnjjTd0zTXX+L+/6aabdMkll2j+/Pn63e9+V45bBgBlhyMDZ+FszhkosGHDBl1//fWKiopSrVq1NH78+ELz7N69W7/+9a8VHx+vyMhIXX755Zo1a1bAPFdeeaVuvfXWgGlNmzaVx+PRV1995Z/25ptvyuPxaOPGjcVu0/Hjx/XHP/5RV111lapUqaLKlSsrOTlZy5cvD5jvxPcMJ0yYoPr16ysiIkIbNmyQJH377bfq06ePYmNjFRkZqRYtWujtt98+5f5wzqlOnTq65ZZbCt2WnZ2tKlWqaNiwYWe0b1asWCGPx6MVK1YUuf0zZ870TzsxBCQpMjLSv09Q8RS87zx37lw1atRIlSpVUqtWrfT1119LklJTU9WgQQNFRkaqffv2JT6MvGLFCrVo0UKRkZGqX7++UlNTNXr0aHk8noD5cnNz9ec//9k/durUqaMnn3xSx44dK7TMKVOmqHHjxoqIiFDNmjU1fPhwZWVlFZpv6tSpql+/vipVqqSWLVsqPT39jPdLgZSUFE2ePFlS/r4q+CpunREREbr66qv12WefBdz+1VdfKSUlRfXq1VNkZKQSEhJ09913a9++fQHzFeyjzZs3KyUlRTExMapSpYoGDx6sI0eOnHZ7n376aYWEhGjixIn+aRMnTlTjxo0VFRWlqlWrqkWLFnrttddKszsqNI4MnOTgwYPau3dvoek5OTlBX9eBAwfUpUsX3XrrrerXr5/mzZunxx57TE2bNtVNN90kSTp69Kjat2+vzZs364EHHlDdunU1d+5cpaSkKCsrSyNHjpQkJScn6/XXX/cve//+/Vq/fr1CQkKUnp6uZs2aSZLS09MVFxenyy67rNjtOnTokF5++WXdfvvtGjJkiA4fPqzp06erc+fOWr16ta644oqA+WfMmKHs7GwNHTpUERERio2N1fr163XdddepVq1aevzxx1W5cmX94x//UM+ePTV//nz16tWryHV7PB7deeedGj9+vPbv36/Y2Fj/bYsWLdKhQ4d05513ntG+KS2fz6dRo0YpIiJCAwYMOKtl4fyVnp6ut99+W8OHD5ckjRs3TjfffLN++9vfasqUKbr//vt14MABjR8/XnfffbeWLVt2yuWtW7dOXbp0UY0aNTRmzBjl5eXpT3/6k+Li4grNe88992jWrFnq06ePRo0apU8//VTjxo3Txo0b9c9//tM/3+jRozVmzBh16tRJ9913nzIyMvTiiy/qs88+06pVqxQWFiZJmj59uoYNG6bWrVvroYce0pYtW9SjRw/Fxsaqdu3aZ7xvhg0bpszMTC1ZskSvvPJKkfO89tprOnz4sIYNGyaPx6Px48fr1ltv1ZYtW/zbtWTJEm3ZskWDBw9WQkKC1q9fr6lTp2r9+vX65JNPCgVGv379VLduXY0bN05r167Vyy+/rOrVq+vZZ58tdlt///vfa+zYsUpNTdWQIUMkSdOmTdOIESPUp08fjRw5UtnZ2frqq6/06aef6o477jjj/VGhOTjnnJsxY4aTdMqvxo0bB9wnMTHRDRo0qFTra9eunZPkZs+e7Z927Ngxl5CQ4Hr37u2fNmHCBCfJzZkzxz/t+PHjrlWrVi46OtodOnTIOefc3LlznSS3YcMG55xzb7/9touIiHA9evRw/fv399+3WbNmrlevXqfcttzcXHfs2LGAaQcOHHDx8fHu7rvv9k/bunWrk+QuvPBCt3v37oD5O3bs6Jo2beqys7P903w+n2vdurVLSko65fozMjKcJPfiiy8GTO/Ro4erU6eO8/l8Z7Rvli9f7iS55cuXByyvYPtnzJhR5HYMHTrUeTwe99prr51ye3H+KRjPn3322Snnk+QiIiLc1q1b/dNSU1OdJJeQkOB/DDnn3BNPPOEkBcxblO7du7uoqCj3008/+adt2rTJhYaGuhOfcr/44gsnyd1zzz0B93/kkUecJLds2TLnnHO7d+924eHh7sYbb3R5eXn++SZNmuQkub///e/OufzHfvXq1d0VV1wRMH6nTp3qJLl27dqdcruLM3z4cFfUr4qC8V
OtWjW3f/9+//S33nrLSXKLFi3yTzty5Eih+7/++utOklu5cqV/2lNPPeUkBTzPOOdcr169XLVq1QKmSXLDhw93zjk3atQoFxIS4mbOnBkwzy233FLoebsoBY+X0/3bVmS8TXCSyZMna8mSJYW+Cl5ZB1N0dLT/Va4khYeHq2XLltqyZYt/WlpamhISEnT77bf7p4WFhWnEiBH6+eef/We6JycnS5JWrlwpKf/VztVXX60bbrjBf5gwKytL33zzjX/e4ni9Xv97/j6fT/v371dubq5atGihtWvXFpq/d+/eAa969u/fr2XLlqlfv346fPiw9u7dq71792rfvn3q3LmzNm3apJ9++qnY9Tds2FDXXHONXn311YBlvvfeexowYID/VURJ901pTJ8+XVOnTtVzzz0XsHxUPB07dgy4yqDgraLevXvrggsuKDT9xPF5sry8PH3wwQfq2bOnatas6Z/eoEED/9G+AmlpaZKk3/zmNwHTR40aJUl69913JUkffPCBjh8/roceekghIf97yh4yZIguvPBC/3xr1qzR7t27de+99wacs5OSkqIqVaqcZi+UXv/+/VW1alX/9wXPLyfup0qVKvn/Pzs7W3v37tW1114rSUU+p9x7770B3ycnJ2vfvn06dOhQwHTnnB544AE9//zzmjNnjgYNGhRwe0xMjHbs2FHobQsURgycpGXLlurUqVOhrxMf7MHyq1/9qtDhsapVq+rAgQP+77dv366kpKSAJwFJ/sP827dvlyTFx8crKSnJ/4s/PT1dycnJatu2rTIzM7VlyxatWrVKPp/vtDEgSbNmzVKzZs0UGRmpatWqKS4uTu+++64OHjxYaN66desGfL9582Y55/SHP/xBcXFxAV9PPfWUpPz3+k9l4MCBWrVqlf/nmzt3rnJycnTXXXed8b4pjVdeeUUNGzbUww8/XOpl4Jfh4osvDvi+4BfnyYfVC6afOD5Ptnv3bh09elQNGjQodNvJ07Zv366QkJBC0xMSEhQTE+N//Bb895JLLgmYLzw8XPXq1Ss0X1JSUsB8YWFhqlevXrHbfLZO3n8Fz5Un7qf9+/dr5MiRio+PV6VKlRQXF+d/3ijqOaUky5Sk2bNna/LkyZo4cWKR0f7YY48pOjpaLVu2VFJSkoYPH65Vq1aV4qes+IiBcuT1eouc7pwr1fLatGmj9PR0HT16VJ9//rmSk5PVpEkTxcTEKD09Xenp6YqOjlbz5s1PuZw5c+YoJSVF9evX1/Tp0/X+++9ryZIl6tChg3w+X6H5T6x+Sf55HnnkkSKPsixZsqTIJ8sT3XbbbQoLC/MfHZgzZ45atGhR6AmxJIo64UnKfxVXnH379qlGjRpnvC788hQ3DoM9PotT3OPzl6Ik+6lfv36aNm2a7r33Xi1YsECLFy/W+++/L0lFPqeUdN9fd911io+P16RJk7R///5C81922WXKyMjQG2+8oTZt2mj+/Plq06aN/0UJ/ocYOM8lJiZq06ZNhQZMwXXviYmJ/mnJycn64Ycf9MYbbygvL0+tW7dWSEiIPxLS09PVunXrYgdagXnz5qlevXpasGCB7rrrLnXu3FmdOnVSdnZ2iba54FVIWFhYkUdZOnXqFHD4tSixsbHq1q2bXn31VW3fvl2rVq0KOCpwJvum4FXFyWden+rIwe23317sSY5AcapXr67IyEht3ry50G0nT0tMTJTP59OmTZsCpu/atUtZWVn+x2/BfzMyMgLmO378uLZu3VpovpOXl5OTo61bt5b6ZzrbWDlw4ICWLl2qxx9/XGPGjFGvXr10ww03BOVoRYMGDbR48WJlZmaqS5cuOnz4cKF5KleurP79+2vGjBn64Ycf1K1bNz3zzDMBz2cpKSn+K5msIgbOc127dtXOnTv15ptv+qfl5uZq4sSJio6OVrt27fzTCw7/P/vss2rWrJn/sGZycrKWLl2qNWvWlOgtgoJYOLHCP/30U3388ccl2ubq1aurffv2Sk1N1X/+859Ct+/Zs6dEy7nrrru0YcMGPfroo/J6vbrtttsCbi/pvklMTJTX6/WfT1FgypQpxa67f//+xADOmNfrVadOnbRw4UJlZmb6p2/evFnvvfdewLxdu3aVJE2YMCFg+t/+9jdJUrdu3SRJnTp1Unh4uF544YWAMTl9+nQdPHjQP1+LFi0UFxenl156KeBS2JkzZxZ5CWJJVa5cWVLhmC6pop5PpMI/d2k1a9ZMaWlp2rhxo7p3766jR4/6bzv50sXw8HA1atRIzrmAK8T27t2rb7/9tkyuGvul4NLCICsoy2B9rOXQoUOVmpqqlJQUff7556pTp47mzZunVatWacKECQGvsBs0aKCEhARlZGTowQcf9E9v27atHnvsMUkqUQzcfPPNWrBggXr16qVu3bpp69ateumll9SoUSP9/PPPJdruyZMnq02bNmratKmGDBmievXqadeuXfr444+1Y8cOffnll6ddRrdu3VStWjXNnTtXN910k6pXr16qfVOlShX17dtXEydOlMfjUf369fXOO++c8ryFgpPKTv5sAuB0Ro8ercWLF+u6667Tfffdp7y8PE2aNElNmjTRF1984Z/v8ssv16BBgzR16lRlZWWpXbt2Wr16tWbNmqWePXvq+uuvlyTFxcXpiSee0JgxY9SlSxf16NFDGRkZmjJliq6++mr/SchhYWF6+umnNWzYMHXo0EH9+/fX1q1bNWPGjCJfhbdv314ffvjhad/2uOqqqyRJI0aMUOfOnYsM81O58MIL1bZtW40fP145OTmqVauWFi9efFZHK0527bXX6q233lLXrl3Vp08fLVy4UGFhYbrxxhuVkJDgfzth48aNmjRpkrp16xbw3Dlp0iSNGTNGW7dutXt0oHwuYjj/nO5SpHbt2pXo0sKLLrrIXXvttaddX1HLc865QYMGucTExIBpu3btcoMHD3YXXXSRCw8Pd02bNi32cri+ffs6Se7NN9/0Tzt+/LiLiopy4eHh7ujRo6fdNp/P58aOHesSExNdRESEa968uXvnnXcKbVvBpUV/+ctfilzO999/7wYOHOgSEhJcWFiYq1Wrlrv55pvdvHnzTrsNBe6//34nqdjL+0q6b/bs2eN69+7toqKiXNWqVd2wYcPcN998U+ylhYmJiaW+FAvl70wuLSy4PK1AcY/rgktU586de9r1L1261DVv3tyFh4e7+vXru5dfftmNGjXKRUZGBsyXk5PjxowZ4+rWrevCwsJc7dq13RNPPBFwSW6BSZMmuUsvvdSFhYW5+Ph4d99997kDBw4Umm/KlCmubt26LiIiwrVo0cKtXLnStWvXrtDj+aqrrnIJCQmn/Vlyc3Pdgw8+6OLi4pzH4/FfZniq8S/JPfXUU/7vd+zY4Xr16uViYmJclSpVXN++fV1mZmah+QouLdyzZ0/A8oq69K+of7u33nrLhYaGuv79+7u8vDyXmprq2rZt66pVq+YiIiJc/fr13aOPPuoOHjwYcL+C9Vq+tNDjXJDPhjFsw4YNaty4sd555x3/oTucnYcffljTp0/Xzp07FRUVV
d6bA5Raz549tX79+kLv6ZeHw4cPKzY2VhMmTPB/2BJs45yBIFq+fLlatWpFCARJdna25syZo969exMC+EU58X1rKf+kvrS0tKD/GeHSWrlypWrVquX/pD6AIwM47+zevVsffPCB5s2bp4ULF2rt2rWFPgIZOJ/VqFHD/1n827dv14svvqhjx45p3bp1hT4HADgfcAIhzjsbNmzQgAEDVL16db3wwguEAH5xunTpotdff107d+5URESEWrVqpbFjxxICOG9xZAAAAOM4ZwAAAOOIAQAAjCMGAAAwjhgAAMA4YgAAAOOIAQAAjCMGAAAwjhgAAMA4YgAAAOOIAQAAjCMGAAAwjhgAAMA4YgAAAOOIAQAAjCMGAAAwjhgAAMA4YgAAAOOIAQAAjCMGAAAwjhgAAMA4YgAAAOOIAQAAjCMGAAAwjhgAAMA4YgAAAOOIAQAAjCMGAAAwjhgAAMA4YgAAAOOIAQAAjCMGAAAwLrS8NwDlz+eTli6VFi3K/9q5M396QoLUvXv+V8eOUgjpCPzi+JxPS7cs1aLvFmnRd4u08+f8AZ4QnaDuDbure8Pu6livo0I8DHDLPM45V94bgfLz5ZfS0KHS6tVSaKiUmxt4e8G0li2ladOkZs3KZzsBnLkvd36poYuGanXmaoWGhCrXFzjAC6a1rNVS07pPU7N4BrhVpKBhzz0nXXmltHZt/vcnh8CJ09aulZo3z78PgPPfcx89pyunXqm1O/MH+MkhcOK0tZlr1Ty1uZ77iAFuFUcGjBo3TnryydLf9/HHg7s9AIJnXPo4PbmsdAN8XMdxerwNA9waYsCgF16QRo48u2U8/7w0YkRwtgdA8Lzw6Qsa+f7ZDfDnuzyvEdcwwC0hBoxZty7//f+i3hI4E6Gh+ecZNG8enO0CcPbW/WedWr7cssi3BM5EaEioVt+zWs1rMMCt4JwBQ3JzpYEDpWDkn3PSoEFnHxUAgiPXl6uBCwcqGK/vnHMatHDQWUcFfjmIAUNmz5a++UbKyzv7ZeXlSV9/nb9MAOVv9pez9c3ub5Tnzn6A57k8fb37a83+kgFuBW8TGOGcdOml0qZNwTkyIEkej9SwobRxY/7/AygfzjldOvlSbdq3SU7BGeAeedSwWkNtHL5RHgZ4hceRASP+9S/pu++CFwJS/rIyMqTFi4O3TABn7l/f/0vf7fsuaCEgSU5OGfsytPh7BrgFxIARqamS1xv85Xq90ksvBX+5AEoudU2qvJ7gD3Cvx6uX1jDALeBtAgNycqSYGOnIkbJZflSUlJUlhYWVzfIBFC8nL0cxz8boSE7ZDPCosChlPZalMC8DvCLjyIABH39cdiEg5S/7k0/KbvkAivfxjo/LLAQk6UjOEX2ygwFe0REDBixZkv+5AGUlNJTzBoDysuT7JQoNKbsBHhoSynkDBhADBqxfH5zLCYvj80kbNpTd8gEUb/2e9crzld0A9zmfNuxhgFd0xIABGRnBvYrgZD5f/joAnHsZ+zKCehXByXzOp4x9DPCKjhio4JyTtm0r+/Vs21a2wQGgMOectmVtK/P1bMvaFpRPNsT5ixio4A4fLtuTBwv897/Szz+X/XoA/M/h44fL9OTBAv/N+a9+Ps4Ar8iIgQru2LGKuS4A0rHcczfojuUxwCsyYqCC8/kq5roA5L+fXxHXhXOPGKjgwsMr5roASOHeczfozuW6cO4RAxXcBRdIIefgX9nrlaKjy349AP7ngogLFOIp+wHu9XgVHc4Ar8iIgQouNFSqVavs11OzZtl+sBGAwkJDQlXrgrIf4DUvqFmmH2yE8kcMGJCUVPbraNiw7NcBoLCkamU/wBtWY4BXdMSAAUlJZftHhMLCzk1wACgsKTZJYSFlN8DDQsKUFMsAr+iIAQOSk/P/cmFZycnJXweAcy/54mTl+MpugOf4cpScyACv6PgTxgbs2SPFx5fdJwR6PNKuXVJcXNksH0Dx9vx3j+L/Gl9mH0nskUe7HtmluMoM8IqMIwMGxMVJzZqV3fIvv5wQAMpLXOU4NYsvuwF+ecLlhIABxIARAwaUzSWGHo90xx3BXy6AkhvQdECZXGLokUd3NGGAW8DbBEZkZUk1akjZ2cFdbqVKUmamFBMT3OUCKLms7CzVeK6GsnODO8ArhVZS5qhMxUTGBHW5OP9wZMCImBhp6ND8DwcKFq83f5mEAFC+YiJjNPTKofJ6gjfAvR6vhl41lBAwgiMDhvz4Y/7nAQTr6EBkpPTdd1Lt2sFZHoDS+/Hgj2o4qWHQjg5Ehkbquwe+U+0qDHALODJgSO3a0tixwVveuHGEAHC+qF2ltsZ2CN4AH9dxHCFgCEcGjPH5pA4dpH//W8rLK90yvN78zxVYuvTc/N0DACXjcz51mNVB//7h38pzpRvgXo9XyYnJWjpw6Tn5uwc4P/AvbUxIiLRggdSoUenOH/B6pSZNpPnzCQHgfBPiCdGC/gvUKK5Rqc4f8Hq8alK9ieb3m08IGMO/tkGxsdKKFfm/1M/kF3pISP59li3LXwaA809spVitSFmhJtWbnNEv9BBPiJpUb6Jlg5YpthID3BpiwKjYWCk9XRoxIv+zAk51lMDrzZ9nxIj8+xACwPkttlKs0gena0TLEfLIc8qjBF6PVx55NKLlCKUPTicEjOKcAWjduvwTC9PSpCNHAm+LipK6dpWefFJq3rx8tg9A6a37zzqNTR+rtM1pOpITOMCjwqLUtUFXPZn8pJrXYIBbRgzA7/hx6aOP8j9ESJJq1pRat5bCw8t3uwCcveN5x/XRjx8p83D+AK95QU21rt1a4V4GOIgBAADM45wBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCO
GAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADCOGAAAwDhiAAAA44gBAACMIwYAADDu/wC+Ka8LXmqY8wAAAABJRU5ErkJggg==", + "text/plain": [ + "
" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\u001b[33muser_proxy\u001b[0m (to chatbot):\n", + "\n", + "\u001b[33muser_proxy\u001b[0m (to chatbot):\n", + "\n", + "\u001b[32m***** Response from calling tool (call_VjzGKf34UMKMlSHCrQpJ6ICK) *****\u001b[0m\n", + "(0.0, 4.0, 0.0, 2.0)\n", + "\u001b[32m**********************************************************************\u001b[0m\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[33mchatbot\u001b[0m (to user_proxy):\n", + "\n", + "TERMINATE\n", + "\n", + "--------------------------------------------------------------------------------\n" + ] + } + ], + "source": [ + "from IPython import get_ipython\n", + "from typing_extensions import Annotated\n", + "\n", + "import autogen\n", + "from autogen.cache import Cache\n", + "\n", + "config_list = autogen.config_list_from_json(\"OAI_CONFIG_LIST\", filter_dict={\"tags\": [\"gemini\", \"tool\"]})\n", + "\n", + "llm_config = {\n", + " \"config_list\": config_list,\n", + " \"timeout\": 120,\n", + "}\n", + "chatbot = autogen.AssistantAgent(\n", + " name=\"chatbot\",\n", + " system_message=\"For coding tasks, only use the functions you have been provided with. Reply TERMINATE when the task is done.\",\n", + " llm_config=llm_config,\n", + ")\n", + "\n", + "# create a UserProxyAgent instance named \"user_proxy\"\n", + "user_proxy = autogen.UserProxyAgent(\n", + " name=\"user_proxy\",\n", + " is_termination_msg=lambda x: x.get(\"content\", \"\") and x.get(\"content\", \"\").rstrip().endswith(\"TERMINATE\"),\n", + " human_input_mode=\"NEVER\",\n", + " max_consecutive_auto_reply=10,\n", + " code_execution_config={\n", + " \"work_dir\": \"coding\",\n", + " \"use_docker\": False,\n", + " }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n", + ")\n", + "\n", + "\n", + "# define functions according to the function description\n", + "\n", + "\n", + "# one way of registering functions is to use the register_for_llm and register_for_execution decorators\n", + "@user_proxy.register_for_execution()\n", + "@chatbot.register_for_llm(name=\"python\", description=\"run cell in ipython and return the execution result.\")\n", + "def exec_python(cell: Annotated[str, \"Valid Python cell to execute.\"]) -> str:\n", + " ipython = get_ipython()\n", + " result = ipython.run_cell(cell)\n", + " log = str(result.result)\n", + " if result.error_before_exec is not None:\n", + " log += f\"\\n{result.error_before_exec}\"\n", + " if result.error_in_exec is not None:\n", + " log += f\"\\n{result.error_in_exec}\"\n", + " return log\n", + "\n", + "\n", + "# another way of registering functions is to use the register_function\n", + "def exec_sh(script: Annotated[str, \"Valid Python cell to execute.\"]) -> str:\n", + " return user_proxy.execute_code_blocks([(\"sh\", script)])\n", + "\n", + "\n", + "autogen.agentchat.register_function(\n", + " exec_python,\n", + " caller=chatbot,\n", + " executor=user_proxy,\n", + " name=\"sh\",\n", + " description=\"run a shell script and return the execution result.\",\n", + ")\n", + "\n", + "with Cache.disk() as cache:\n", + " # start the conversation\n", + " user_proxy.initiate_chat(\n", + " chatbot,\n", + " message=\"Draw two agents chatting with each other with an example dialog. 
Don't add plt.show().\",\n", + " cache=cache,\n", + " max_turns=3,\n", + " )" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "autogen", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.3" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/test/oai/test_gemini.py b/test/oai/test_gemini.py index 61fdbe6d735a..e6ff3e02672d 100644 --- a/test/oai/test_gemini.py +++ b/test/oai/test_gemini.py @@ -1,3 +1,4 @@ +import json import os from unittest.mock import MagicMock, patch @@ -10,7 +11,9 @@ from google.cloud.aiplatform.initializer import global_config as vertexai_global_config from vertexai.generative_models import HarmBlockThreshold as VertexAIHarmBlockThreshold from vertexai.generative_models import HarmCategory as VertexAIHarmCategory + from vertexai.generative_models import Part as VertexAIPart from vertexai.generative_models import SafetySetting as VertexAISafetySetting + from vertexai.generative_models import ToolConfig as VertexAIToolConfig from autogen.oai.gemini import GeminiClient @@ -20,6 +23,8 @@ VertexAIHarmBlockThreshold = object VertexAIHarmCategory = object VertexAISafetySetting = object + VertexAIPart = object + VertexAIToolConfig = object vertexai_global_config = object InternalServerError = object skip = True @@ -234,8 +239,6 @@ def test_vertexai_safety_setting_list(gemini_client): for category in harm_categories ] - print(safety_settings) - converted_safety_settings = GeminiClient._to_vertexai_safety_settings(safety_settings) def compare_safety_settings(converted_safety_settings, expected_safety_settings): @@ -250,6 +253,59 @@ def compare_safety_settings(converted_safety_settings, expected_safety_settings) assert all(settings_comparison), "Converted safety settings are incorrect" +@pytest.mark.skipif(skip, reason="Google GenAI dependency is not installed") +def test_vertexai_tool_config(gemini_client): + + tools = [{"function_name": "calculator"}] + + tool_config = {"function_calling_config": {"mode": "ANY"}} + + expected_tool_config = VertexAIToolConfig( + function_calling_config=VertexAIToolConfig.FunctionCallingConfig( + mode=VertexAIToolConfig.FunctionCallingConfig.Mode.ANY, + allowed_function_names=["calculator"], + ) + ) + + converted_tool_config = GeminiClient._to_vertexai_tool_config(tool_config, tools) + + converted_mode = converted_tool_config._gapic_tool_config.function_calling_config.mode + expected_mode = expected_tool_config._gapic_tool_config.function_calling_config.mode + converted_allowed_func = converted_tool_config._gapic_tool_config.function_calling_config.allowed_function_names + expected_allowed_func = expected_tool_config._gapic_tool_config.function_calling_config.allowed_function_names + + assert converted_mode == expected_mode, "Function calling mode is not converted correctly" + assert ( + converted_allowed_func == expected_allowed_func + ), "Function calling allowed function names is not converted correctly" + + +@pytest.mark.skipif(skip, reason="Google GenAI dependency is not installed") +def test_vertexai_tool_config_no_functions(gemini_client): + + tools = [] + + tool_config = {"function_calling_config": {"mode": "ANY"}} + + expected_tool_config = VertexAIToolConfig( + function_calling_config=VertexAIToolConfig.FunctionCallingConfig( + 
mode=VertexAIToolConfig.FunctionCallingConfig.Mode.ANY, + ) + ) + + converted_tool_config = GeminiClient._to_vertexai_tool_config(tool_config, tools) + + converted_mode = converted_tool_config._gapic_tool_config.function_calling_config.mode + expected_mode = expected_tool_config._gapic_tool_config.function_calling_config.mode + converted_allowed_func = converted_tool_config._gapic_tool_config.function_calling_config.allowed_function_names + expected_allowed_func = expected_tool_config._gapic_tool_config.function_calling_config.allowed_function_names + + assert converted_mode == expected_mode, "Function calling mode is not converted correctly" + assert ( + converted_allowed_func == expected_allowed_func + ), "Function calling allowed function names is not converted correctly" + + # Test error handling @patch("autogen.oai.gemini.genai") @pytest.mark.skipif(skip, reason="Google GenAI dependency is not installed") @@ -279,9 +335,10 @@ def test_cost_calculation(gemini_client, mock_response): @pytest.mark.skipif(skip, reason="Google GenAI dependency is not installed") +@patch("autogen.oai.gemini.Content") @patch("autogen.oai.gemini.genai.GenerativeModel") @patch("autogen.oai.gemini.genai.configure") -def test_create_response(mock_configure, mock_generative_model, gemini_client): +def test_create_response(mock_configure, mock_generative_model, mock_content, gemini_client): # Mock the genai model configuration and creation process mock_chat = MagicMock() mock_model = MagicMock() @@ -292,6 +349,8 @@ def test_create_response(mock_configure, mock_generative_model, gemini_client): # Set up a mock for the chat history item access and the text attribute return mock_history_part = MagicMock() mock_history_part.text = "Example response" + mock_history_part.function_call = None + mock_chat.history.__getitem__.return_value.parts.__iter__.return_value = iter([mock_history_part]) mock_chat.history.__getitem__.return_value.parts.__getitem__.return_value = mock_history_part # Setup the mock to return a mocked chat response @@ -306,6 +365,55 @@ def test_create_response(mock_configure, mock_generative_model, gemini_client): assert response.choices[0].message.content == "Example response", "Response content should match expected output" +@pytest.mark.skipif(skip, reason="Google GenAI dependency is not installed") +@patch("autogen.oai.gemini.Part") +@patch("autogen.oai.gemini.Content") +@patch("autogen.oai.gemini.genai.GenerativeModel") +@patch("autogen.oai.gemini.genai.configure") +def test_create_function_call_response(mock_configure, mock_generative_model, mock_content, mock_part, gemini_client): + # Mock the genai model configuration and creation process + mock_chat = MagicMock() + mock_model = MagicMock() + mock_configure.return_value = None + mock_generative_model.return_value = mock_model + mock_model.start_chat.return_value = mock_chat + + mock_part.to_dict.return_value = { + "function_call": {"name": "function_name", "args": {"arg1": "value1", "arg2": "value2"}} + } + + # Set up a mock for the chat history item access and the text attribute return + mock_history_part = MagicMock() + mock_history_part.text = None + mock_history_part.function_call.name = "function_name" + mock_history_part.function_call.args = {"arg1": "value1", "arg2": "value2"} + mock_chat.history.__getitem__.return_value.parts.__iter__.return_value = iter([mock_history_part]) + + # Setup the mock to return a mocked chat response + mock_chat.send_message.return_value = MagicMock( + history=[ + MagicMock( + parts=[ + MagicMock( + 
function_call=MagicMock(name="function_name", arguments='{"arg1": "value1", "arg2": "value2"}') + ) + ] + ) + ] + ) + + # Call the create method + response = gemini_client.create( + {"model": "gemini-pro", "messages": [{"content": "Hello", "role": "user"}], "stream": False} + ) + + # Assertions to check if response is structured as expected + assert ( + response.choices[0].message.tool_calls[0].function.name == "function_name" + and json.loads(response.choices[0].message.tool_calls[0].function.arguments)["arg1"] == "value1" + ), "Response content should match expected output" + + @pytest.mark.skipif(skip, reason="Google GenAI dependency is not installed") @patch("autogen.oai.gemini.GenerativeModel") @patch("autogen.oai.gemini.vertexai.init") @@ -320,7 +428,9 @@ def test_vertexai_create_response(mock_init, mock_generative_model, gemini_clien # Set up a mock for the chat history item access and the text attribute return mock_history_part = MagicMock() mock_history_part.text = "Example response" - mock_chat.history.__getitem__.return_value.parts.__getitem__.return_value = mock_history_part + mock_history_part.function_call = None + mock_history_part.role = "model" + mock_chat.history.__getitem__.return_value.parts.__iter__.return_value = iter([mock_history_part]) # Setup the mock to return a mocked chat response mock_chat.send_message.return_value = MagicMock(history=[MagicMock(parts=[MagicMock(text="Example response")])]) @@ -330,10 +440,60 @@ def test_vertexai_create_response(mock_init, mock_generative_model, gemini_clien {"model": "gemini-pro", "messages": [{"content": "Hello", "role": "user"}], "stream": False} ) - # Assertions to check if response is structured as expected assert response.choices[0].message.content == "Example response", "Response content should match expected output" +@pytest.mark.skipif(skip, reason="Google GenAI dependency is not installed") +@patch("autogen.oai.gemini.VertexAIPart") +@patch("autogen.oai.gemini.VertexAIContent") +@patch("autogen.oai.gemini.GenerativeModel") +@patch("autogen.oai.gemini.vertexai.init") +def test_vertexai_create_function_call_response( + mock_init, mock_generative_model, mock_content, mock_part, gemini_client_with_credentials +): + # Mock the genai model configuration and creation process + mock_chat = MagicMock() + mock_model = MagicMock() + mock_init.return_value = None + mock_generative_model.return_value = mock_model + mock_model.start_chat.return_value = mock_chat + + mock_part.to_dict.return_value = { + "function_call": {"name": "function_name", "args": {"arg1": "value1", "arg2": "value2"}} + } + + # Set up a mock for the chat history item access and the text attribute return + mock_history_part = MagicMock() + mock_history_part.text = None + mock_history_part.function_call.name = "function_name" + mock_history_part.function_call.args = {"arg1": "value1", "arg2": "value2"} + mock_chat.history.__getitem__.return_value.parts.__iter__.return_value = iter([mock_history_part]) + + # Setup the mock to return a mocked chat response + mock_chat.send_message.return_value = MagicMock( + history=[ + MagicMock( + parts=[ + MagicMock( + function_call=MagicMock(name="function_name", arguments='{"arg1": "value1", "arg2": "value2"}') + ) + ] + ) + ] + ) + + # Call the create method + response = gemini_client_with_credentials.create( + {"model": "gemini-pro", "messages": [{"content": "Hello", "role": "user"}], "stream": False} + ) + + # Assertions to check if response is structured as expected + assert ( + 
response.choices[0].message.tool_calls[0].function.name == "function_name" + and json.loads(response.choices[0].message.tool_calls[0].function.arguments)["arg1"] == "value1" + ), "Response content should match expected output" + + @pytest.mark.skipif(skip, reason="Google GenAI dependency is not installed") @patch("autogen.oai.gemini.GenerativeModel") @patch("autogen.oai.gemini.vertexai.init") @@ -348,6 +508,8 @@ def test_vertexai_default_auth_create_response(mock_init, mock_generative_model, # Set up a mock for the chat history item access and the text attribute return mock_history_part = MagicMock() mock_history_part.text = "Example response" + mock_history_part.function_call = None + mock_chat.history.__getitem__.return_value.parts.__iter__.return_value = iter([mock_history_part]) mock_chat.history.__getitem__.return_value.parts.__getitem__.return_value = mock_history_part # Setup the mock to return a mocked chat response @@ -373,11 +535,11 @@ def test_create_vision_model_response(mock_configure, mock_generative_model, gem # Set up a mock to simulate the vision model behavior mock_vision_response = MagicMock() - mock_vision_part = MagicMock(text="Vision model output") + mock_vision_part = MagicMock(text="Vision model output", function_call=None) # Setting up the chain of return values for vision model response - mock_vision_response._result.candidates.__getitem__.return_value.content.parts.__getitem__.return_value = ( - mock_vision_part + mock_vision_response._result.candidates.__getitem__.return_value.content.parts.__iter__.return_value = iter( + [mock_vision_part] ) mock_model.generate_content.return_value = mock_vision_response @@ -420,10 +582,12 @@ def test_vertexai_create_vision_model_response(mock_init, mock_generative_model, # Set up a mock to simulate the vision model behavior mock_vision_response = MagicMock() - mock_vision_part = MagicMock(text="Vision model output") + mock_vision_part = MagicMock(text="Vision model output", function_call=None) # Setting up the chain of return values for vision model response - mock_vision_response.candidates.__getitem__.return_value.content.parts.__getitem__.return_value = mock_vision_part + mock_vision_response.candidates.__getitem__.return_value.content.parts.__iter__.return_value = iter( + [mock_vision_part] + ) mock_model.generate_content.return_value = mock_vision_response From ec4f3c0f191ea5ff127f58cd43d332e7a4721682 Mon Sep 17 00:00:00 2001 From: Matteo Frattaroli <63751894+Matteo-Frattaroli@users.noreply.github.com> Date: Fri, 11 Oct 2024 18:05:23 +0200 Subject: [PATCH 23/35] Fix #2643 - groupchat model registration (#2696) * remove unused import statement * fix #2643: register custom model clients within GroupChat * add docs for fix #2643 * Update website/docs/topics/groupchat/using_custom_models.md Co-authored-by: Chi Wang * Update website/docs/topics/groupchat/using_custom_models.md Co-authored-by: Chi Wang * fix: removed unnecessary llm_config from checking agent * fix: handle missing config or "config_list" key in config * fix: code formatting * Isolate method for internal agents creation * Add unit test to verify that internal agents' client actually registers ModelClient class * fix: function arguments formatting * chore: prepend "select_speaker_auto_" to llm_config and model_client_cls attributes in GroupChat * feat: use selector's llm_config for speaker selection agent if none is passed to GroupChat * Update test/agentchat/test_groupchat.py * Update groupchat.py - moved class parameters around, added to docstring * Update 
groupchat.py - added selector to async select speaker functions * Update test_groupchat.py - Corrected test cases for custom model client class * Update test_groupchat.py pre-commit tidy --------- Co-authored-by: Matteo Frattaroli Co-authored-by: Chi Wang Co-authored-by: Eric Zhu Co-authored-by: Mark Sze <66362098+marklysze@users.noreply.github.com> --- autogen/agentchat/groupchat.py | 123 ++++++++++++------ test/agentchat/test_groupchat.py | 61 ++++++++- .../topics/groupchat/using_custom_models.md | 79 +++++++++++ 3 files changed, 219 insertions(+), 44 deletions(-) create mode 100644 website/docs/topics/groupchat/using_custom_models.md diff --git a/autogen/agentchat/groupchat.py b/autogen/agentchat/groupchat.py index c6355a13b94d..b0f0b9520aaf 100644 --- a/autogen/agentchat/groupchat.py +++ b/autogen/agentchat/groupchat.py @@ -12,9 +12,9 @@ from ..formatting_utils import colored from ..graph_utils import check_graph_validity, invert_disallowed_to_allowed from ..io.base import IOStream +from ..oai.client import ModelClient from ..runtime_logging import log_new_agent, logging_enabled from .agent import Agent -from .chat import ChatResult from .conversable_agent import ConversableAgent try: @@ -105,6 +105,8 @@ def custom_speaker_selection_func( "clear history" phrase in user prompt. This is experimental feature. See description of GroupChatManager.clear_agents_history function for more info. - send_introductions: send a round of introductions at the start of the group chat, so agents know who they can speak to (default: False) + - select_speaker_auto_model_client_cls: Custom model client class for the internal speaker select agent used during 'auto' speaker selection (optional) + - select_speaker_auto_llm_config: LLM config for the internal speaker select agent used during 'auto' speaker selection (optional) - role_for_select_speaker_messages: sets the role name for speaker selection when in 'auto' mode, typically 'user' or 'system'. (default: 'system') """ @@ -142,6 +144,8 @@ def custom_speaker_selection_func( Respond with ONLY the name of the speaker and DO NOT provide a reason.""" select_speaker_transform_messages: Optional[Any] = None select_speaker_auto_verbose: Optional[bool] = False + select_speaker_auto_model_client_cls: Optional[Union[ModelClient, List[ModelClient]]] = None + select_speaker_auto_llm_config: Optional[Union[Dict, Literal[False]]] = None role_for_select_speaker_messages: Optional[str] = "system" _VALID_SPEAKER_SELECTION_METHODS = ["auto", "manual", "random", "round_robin"] @@ -591,6 +595,79 @@ def _finalize_speaker(self, last_speaker: Agent, final: bool, name: str, agents: agent = self.agent_by_name(name) return agent if agent else self.next_agent(last_speaker, agents) + def _register_client_from_config(self, agent: Agent, config: Dict): + model_client_cls_to_match = config.get("model_client_cls") + if model_client_cls_to_match: + if not self.select_speaker_auto_model_client_cls: + raise ValueError( + "A custom model was detected in the config but no 'model_client_cls' " + "was supplied for registration in GroupChat." 
+ ) + + if isinstance(self.select_speaker_auto_model_client_cls, list): + # Register the first custom model client class matching the name specified in the config + matching_model_cls = [ + client_cls + for client_cls in self.select_speaker_auto_model_client_cls + if client_cls.__name__ == model_client_cls_to_match + ] + if len(set(matching_model_cls)) > 1: + raise RuntimeError( + f"More than one unique 'model_client_cls' with __name__ '{model_client_cls_to_match}'." + ) + if not matching_model_cls: + raise ValueError( + "No model's __name__ matches the model client class " + f"'{model_client_cls_to_match}' specified in select_speaker_auto_llm_config." + ) + select_speaker_auto_model_client_cls = matching_model_cls[0] + else: + # Register the only custom model client + select_speaker_auto_model_client_cls = self.select_speaker_auto_model_client_cls + + agent.register_model_client(select_speaker_auto_model_client_cls) + + def _register_custom_model_clients(self, agent: ConversableAgent): + if not self.select_speaker_auto_llm_config: + return + + config_format_is_list = "config_list" in self.select_speaker_auto_llm_config.keys() + if config_format_is_list: + for config in self.select_speaker_auto_llm_config["config_list"]: + self._register_client_from_config(agent, config) + elif not config_format_is_list: + self._register_client_from_config(agent, self.select_speaker_auto_llm_config) + + def _create_internal_agents( + self, agents, max_attempts, messages, validate_speaker_name, selector: Optional[ConversableAgent] = None + ): + checking_agent = ConversableAgent("checking_agent", default_auto_reply=max_attempts) + + # Register the speaker validation function with the checking agent + checking_agent.register_reply( + [ConversableAgent, None], + reply_func=validate_speaker_name, # Validate each response + remove_other_reply_funcs=True, + ) + + # Override the selector's config if one was passed as a parameter to this class + speaker_selection_llm_config = self.select_speaker_auto_llm_config or selector.llm_config + + # Agent for selecting a single agent name from the response + speaker_selection_agent = ConversableAgent( + "speaker_selection_agent", + system_message=self.select_speaker_msg(agents), + chat_messages={checking_agent: messages}, + llm_config=speaker_selection_llm_config, + human_input_mode="NEVER", + # Suppresses some extra terminal outputs, outputs will be handled by select_speaker_auto_verbose + ) + + # Register any custom model passed in select_speaker_auto_llm_config with the speaker_selection_agent + self._register_custom_model_clients(speaker_selection_agent) + + return checking_agent, speaker_selection_agent + def _auto_select_speaker( self, last_speaker: Agent, @@ -644,28 +721,8 @@ def validate_speaker_name(recipient, messages, sender, config) -> Tuple[bool, Un # Two-agent chat for speaker selection # Agent for checking the response from the speaker_select_agent - checking_agent = ConversableAgent("checking_agent", default_auto_reply=max_attempts) - - # Register the speaker validation function with the checking agent - checking_agent.register_reply( - [ConversableAgent, None], - reply_func=validate_speaker_name, # Validate each response - remove_other_reply_funcs=True, - ) - - # NOTE: Do we have a speaker prompt (select_speaker_prompt_template is not None)? 
If we don't, we need to feed in the last message to start the nested chat - - # Agent for selecting a single agent name from the response - speaker_selection_agent = ConversableAgent( - "speaker_selection_agent", - system_message=self.select_speaker_msg(agents), - chat_messages=( - {checking_agent: messages} - if self.select_speaker_prompt_template is not None - else {checking_agent: messages[:-1]} - ), - llm_config=selector.llm_config, - human_input_mode="NEVER", # Suppresses some extra terminal outputs, outputs will be handled by select_speaker_auto_verbose + checking_agent, speaker_selection_agent = self._create_internal_agents( + agents, max_attempts, messages, validate_speaker_name, selector ) # Create the starting message @@ -747,24 +804,8 @@ def validate_speaker_name(recipient, messages, sender, config) -> Tuple[bool, Un # Two-agent chat for speaker selection # Agent for checking the response from the speaker_select_agent - checking_agent = ConversableAgent("checking_agent", default_auto_reply=max_attempts) - - # Register the speaker validation function with the checking agent - checking_agent.register_reply( - [ConversableAgent, None], - reply_func=validate_speaker_name, # Validate each response - remove_other_reply_funcs=True, - ) - - # NOTE: Do we have a speaker prompt (select_speaker_prompt_template is not None)? If we don't, we need to feed in the last message to start the nested chat - - # Agent for selecting a single agent name from the response - speaker_selection_agent = ConversableAgent( - "speaker_selection_agent", - system_message=self.select_speaker_msg(agents), - chat_messages={checking_agent: messages}, - llm_config=selector.llm_config, - human_input_mode="NEVER", # Suppresses some extra terminal outputs, outputs will be handled by select_speaker_auto_verbose + checking_agent, speaker_selection_agent = self._create_internal_agents( + agents, max_attempts, messages, validate_speaker_name, selector ) # Create the starting message diff --git a/test/agentchat/test_groupchat.py b/test/agentchat/test_groupchat.py index 39e8fb063026..0cb652ce54b7 100755 --- a/test/agentchat/test_groupchat.py +++ b/test/agentchat/test_groupchat.py @@ -4,11 +4,11 @@ import io import json import logging +from types import SimpleNamespace from typing import Any, Dict, List, Optional -from unittest import TestCase, mock +from unittest import mock import pytest -from test_assistant_agent import KEY_LOC, OAI_CONFIG_LIST import autogen from autogen import Agent, AssistantAgent, GroupChat, GroupChatManager @@ -2062,6 +2062,60 @@ def test_manager_resume_messages(): return_agent, return_message = manager.resume(messages="Let's get this conversation started.") +def test_custom_model_client(): + class CustomModelClient: + def __init__(self, config, **kwargs): + print(f"CustomModelClient config: {config}") + + def create(self, params): + num_of_responses = params.get("n", 1) + + response = SimpleNamespace() + response.choices = [] + response.model = "test_model_name" + + for _ in range(num_of_responses): + text = "this is a dummy text response" + choice = SimpleNamespace() + choice.message = SimpleNamespace() + choice.message.content = text + choice.message.function_call = None + response.choices.append(choice) + return response + + def message_retrieval(self, response): + choices = response.choices + return [choice.message.content for choice in choices] + + def cost(self, response) -> float: + response.cost = 0 + return 0 + + @staticmethod + def get_usage(response): + return {} + + llm_config = 
{"config_list": [{"model": "test_model_name", "model_client_cls": "CustomModelClient"}]} + + group_chat = autogen.GroupChat( + agents=[], + messages=[], + max_round=3, + select_speaker_auto_llm_config=llm_config, + select_speaker_auto_model_client_cls=CustomModelClient, + ) + + checking_agent, speaker_selection_agent = group_chat._create_internal_agents( + agents=[], messages=[], max_attempts=3, validate_speaker_name=(True, "test") + ) + + # Check that the custom model client is assigned to the speaker selection agent + assert isinstance(speaker_selection_agent.client._clients[0], CustomModelClient) + + # Check that the LLM Config is assigned + assert speaker_selection_agent.client._config_list == llm_config["config_list"] + + def test_select_speaker_transform_messages(): """Tests adding transform messages to a GroupChat for speaker selection when in 'auto' mode""" @@ -2127,8 +2181,9 @@ def test_select_speaker_transform_messages(): # test_select_speaker_auto_messages() # test_manager_messages_to_string() # test_manager_messages_from_string() - test_manager_resume_functions() + # test_manager_resume_functions() # test_manager_resume_returns() # test_manager_resume_messages() + # test_custom_model_client() # test_select_speaker_transform_messages() pass diff --git a/website/docs/topics/groupchat/using_custom_models.md b/website/docs/topics/groupchat/using_custom_models.md new file mode 100644 index 000000000000..4948d8ab9f6e --- /dev/null +++ b/website/docs/topics/groupchat/using_custom_models.md @@ -0,0 +1,79 @@ +# Using Custom Models + +When using `GroupChatManager` we need to pass a `GroupChat` object in the constructor, a dataclass responsible for +gathering agents, preparing messages from prompt templates and selecting speakers +(eventually using `speaker_selection_method` as described [here](customized_speaker_selection)). + +To do so GroupChat internally initializes two instances of ConversableAgent. +In order to control the model clients used by the agents instantiated within the GroupChat, which already receives the +`llm_config` passed to GroupChatManager, the optional `model_client_cls` attribute can be set. + + +## Example +First we need to define an `llm_config` and define some agents that will partake in the group chat: +```python +from autogen import GroupChat, ConversableAgent, GroupChatManager, UserProxyAgent +from somewhere import MyModelClient + + +# Define the custom model configuration +llm_config = { + "config_list": [ + { + "model": "gpt-3.5-turbo", + "model_client_cls": "MyModelClient" + } + ] +} + +# Initialize the agents with the custom model +agent1 = ConversableAgent( + name="Agent 1", + llm_config=llm_config +) +agent1.register_model_client(model_client_cls=MyModelClient) + +agent2 = ConversableAgent( + name="Agent 2", + llm_config=llm_config +) +agent2.register_model_client(model_client_cls=MyModelClient) + +agent3 = ConversableAgent( + name="Agent 2", + llm_config=llm_config +) +agent3.register_model_client(model_client_cls=MyModelClient) + +user_proxy = UserProxyAgent(name="user", llm_config=llm_config, code_execution_config={"use_docker": False}) +user_proxy.register_model_client(MyModelClient) +``` + +Note that the agents definition illustrated here is minimal and might not suit your needs. The only aim is to show a +basic setup for a group chat scenario. + +We then create a `GroupChat` and, if we want the underlying agents used by GroupChat to use our +custom client, we will pass it in the `model_client_cls` attribute. 
+ +Finally we create an instance of `GroupChatManager` and pass the config to it. This same config will be forwarded to +the GroupChat, that (if needed) will automatically handle registration of custom models only. + +```python +# Create a GroupChat instance and add the agents +group_chat = GroupChat(agents=[agent1, agent2, agent3], messages=[], model_client_cls=MyModelClient) + +# Create the GroupChatManager with the GroupChat, UserProxy, and model configuration +chat_manager = GroupChatManager(groupchat=group_chat, llm_config=llm_config) +chat_manager.register_model_client(model_client_cls=MyModelClient) + +# Initiate the chat using the UserProxy +user_proxy.initiate_chat(chat_manager, initial_message="Suggest me the most trending papers in microbiology that you think might interest me") + +``` + +This attribute can either be a class or a list of classes which adheres to the `ModelClient` protocol (see +[this link](../non-openai-models/about-using-nonopenai-models) for more info about defining a custom model client +class). + +Note that it is not necessary to define a `model_client_cls` when working with Azure OpenAI, OpenAI or other non-custom +models natively supported by the library. From 11ef58b98e1bcb6567a8f8b87e70540123782c5e Mon Sep 17 00:00:00 2001 From: Kirushikesh DB <49152921+Kirushikesh@users.noreply.github.com> Date: Fri, 11 Oct 2024 21:45:52 +0530 Subject: [PATCH 24/35] Added a demonstartion notebook featuring the usage of Langchain with AutoGen (#3461) * Added langchain integrated demo notebook * Added output * Update notebook/agentchat_huggingface_langchain.ipynb Co-authored-by: gagb * Changed the underlying LLM and lint fix * lint fix --------- Co-authored-by: gagb Co-authored-by: HRUSHIKESH DOKALA <96101829+Hk669@users.noreply.github.com> Co-authored-by: Eric Zhu --- .../agentchat_huggingface_langchain.ipynb | 866 ++++++++++++++++++ 1 file changed, 866 insertions(+) create mode 100644 notebook/agentchat_huggingface_langchain.ipynb diff --git a/notebook/agentchat_huggingface_langchain.ipynb b/notebook/agentchat_huggingface_langchain.ipynb new file mode 100644 index 000000000000..248ad18f19c1 --- /dev/null +++ b/notebook/agentchat_huggingface_langchain.ipynb @@ -0,0 +1,866 @@ +{ + "cells": [ + { + "attachments": {}, + "cell_type": "markdown", + "metadata": { + "slideshow": { + "slide_type": "slide" + } + }, + "source": [ + "# Using AutoGen AgentChat with LangChain-based Custom Client and Hugging Face Models" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Introduction\n", + "\n", + "This notebook demonstrates how you can use LangChain's extensive support for LLMs to enable flexible use of various Language Models (LLMs) in agent-based conversations in AutoGen.\n", + "\n", + "What we'll cover:\n", + "\n", + "1. Creating a custom model client that uses LangChain to load and interact with LLMs\n", + "2. Configuring AutoGen to use our custom LangChain-based model\n", + "3. Setting up AutoGen agents with the custom model\n", + "4. Demonstrating a simple conversation using this setup\n", + "\n", + "While we used a Hugging Face model in this example, the same approach can be applied to any LLM supported by LangChain, including models from OpenAI, Anthropic, or custom models. 
This integration opens up a wide range of possibilities for creating sophisticated, multi-model conversational agents using AutoGen\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Requirements\n", + "\n", + "````{=mdx}\n", + ":::info Requirements\n", + "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", + "\n", + "```bash\n", + "pip install pyautogen torch transformers sentencepiece langchain-huggingface \n", + "```\n", + "\n", + "For more information, please refer to the [installation guide](/docs/installation/).\n", + ":::\n", + "````\n", + "\n", + "**NOTE: Depending on what model you use, you may need to play with the default prompts of the Agent's**" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Setup and Imports\n", + "\n", + "First, let's import the necessary libraries and define our custom model client." + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [], + "source": [ + "import json\n", + "import os\n", + "from types import SimpleNamespace\n", + "\n", + "from langchain_core.messages import AIMessage, HumanMessage, SystemMessage\n", + "from langchain_huggingface import ChatHuggingFace, HuggingFacePipeline\n", + "\n", + "from autogen import AssistantAgent, UserProxyAgent, config_list_from_json" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Create and configure the custom model\n", + "\n", + "A custom model class can be created in many ways, but needs to adhere to the `ModelClient` protocol and response structure which is defined in client.py and shown below.\n", + "\n", + "The response protocol has some minimum requirements, but can be extended to include any additional information that is needed.\n", + "Message retrieval therefore can be customized, but needs to return a list of strings or a list of `ModelClientResponseProtocol.Choice.Message` objects.\n", + "\n", + "\n", + "```python\n", + "class ModelClient(Protocol):\n", + " \"\"\"\n", + " A client class must implement the following methods:\n", + " - create must return a response object that implements the ModelClientResponseProtocol\n", + " - cost must return the cost of the response\n", + " - get_usage must return a dict with the following keys:\n", + " - prompt_tokens\n", + " - completion_tokens\n", + " - total_tokens\n", + " - cost\n", + " - model\n", + "\n", + " This class is used to create a client that can be used by OpenAIWrapper.\n", + " The response returned from create must adhere to the ModelClientResponseProtocol but can be extended however needed.\n", + " The message_retrieval method must be implemented to return a list of str or a list of messages from the response.\n", + " \"\"\"\n", + "\n", + " RESPONSE_USAGE_KEYS = [\"prompt_tokens\", \"completion_tokens\", \"total_tokens\", \"cost\", \"model\"]\n", + "\n", + " class ModelClientResponseProtocol(Protocol):\n", + " class Choice(Protocol):\n", + " class Message(Protocol):\n", + " content: Optional[str]\n", + "\n", + " message: Message\n", + "\n", + " choices: List[Choice]\n", + " model: str\n", + "\n", + " def create(self, params) -> ModelClientResponseProtocol:\n", + " ...\n", + "\n", + " def message_retrieval(\n", + " self, response: ModelClientResponseProtocol\n", + " ) -> Union[List[str], List[ModelClient.ModelClientResponseProtocol.Choice.Message]]:\n", + " \"\"\"\n", + " Retrieve and return a list of strings or a list of Choice.Message from the response.\n", + "\n", + " 
NOTE: if a list of Choice.Message is returned, it currently needs to contain the fields of OpenAI's ChatCompletion Message object,\n", + " since that is expected for function or tool calling in the rest of the codebase at the moment, unless a custom agent is being used.\n", + " \"\"\"\n", + " ...\n", + "\n", + " def cost(self, response: ModelClientResponseProtocol) -> float:\n", + " ...\n", + "\n", + " @staticmethod\n", + " def get_usage(response: ModelClientResponseProtocol) -> Dict:\n", + " \"\"\"Return usage summary of the response using RESPONSE_USAGE_KEYS.\"\"\"\n", + " ...\n", + "```\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Example of simple custom client\n", + "\n", + "Following the huggingface example for using [Mistral's Open-Orca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)\n", + "\n", + "For the response object, python's `SimpleNamespace` is used to create a simple object that can be used to store the response data, but any object that follows the `ClientResponseProtocol` can be used.\n" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [], + "source": [ + "# custom client with custom model loader\n", + "\n", + "\n", + "class CustomModelClient:\n", + " \"\"\"Custom model client implementation for LangChain integration with AutoGen.\"\"\"\n", + "\n", + " def __init__(self, config, **kwargs):\n", + " \"\"\"Initialize the CustomModelClient.\"\"\"\n", + " print(f\"CustomModelClient config: {config}\")\n", + " self.device = config.get(\"device\", \"cpu\")\n", + "\n", + " gen_config_params = config.get(\"params\", {})\n", + " self.model_name = config[\"model\"]\n", + " pipeline = HuggingFacePipeline.from_model_id(\n", + " model_id=self.model_name,\n", + " task=\"text-generation\",\n", + " pipeline_kwargs=gen_config_params,\n", + " device=self.device,\n", + " )\n", + " self.model = ChatHuggingFace(llm=pipeline)\n", + " print(f\"Loaded model {config['model']} to {self.device}\")\n", + "\n", + " def _to_chatml_format(self, message):\n", + " \"\"\"Convert message to ChatML format.\"\"\"\n", + " if message[\"role\"] == \"system\":\n", + " return SystemMessage(content=message[\"content\"])\n", + " if message[\"role\"] == \"assistant\":\n", + " return AIMessage(content=message[\"content\"])\n", + " if message[\"role\"] == \"user\":\n", + " return HumanMessage(content=message[\"content\"])\n", + " raise ValueError(f\"Unknown message type: {type(message)}\")\n", + "\n", + " def create(self, params):\n", + " \"\"\"Create a response using the model.\"\"\"\n", + " if params.get(\"stream\", False) and \"messages\" in params:\n", + " raise NotImplementedError(\"Local models do not support streaming.\")\n", + "\n", + " num_of_responses = params.get(\"n\", 1)\n", + " response = SimpleNamespace()\n", + " inputs = [self._to_chatml_format(m) for m in params[\"messages\"]]\n", + " response.choices = []\n", + " response.model = self.model_name\n", + "\n", + " for _ in range(num_of_responses):\n", + " outputs = self.model.invoke(inputs)\n", + " text = outputs.content\n", + " choice = SimpleNamespace()\n", + " choice.message = SimpleNamespace()\n", + " choice.message.content = text\n", + " choice.message.function_call = None\n", + " response.choices.append(choice)\n", + "\n", + " return response\n", + "\n", + " def message_retrieval(self, response):\n", + " \"\"\"Retrieve messages from the response.\"\"\"\n", + " return [choice.message.content for choice in response.choices]\n", + "\n", + " def cost(self, response) -> 
float:\n", + " \"\"\"Calculate the cost of the response.\"\"\"\n", + " response.cost = 0\n", + " return 0\n", + "\n", + " @staticmethod\n", + " def get_usage(response):\n", + " \"\"\"Get usage statistics.\"\"\"\n", + " return {}" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Set your API Endpoint\n", + "\n", + "The [`config_list_from_json`](https://microsoft.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n", + "\n", + "It first looks for an environment variable of a specified name (\"OAI_CONFIG_LIST\" in this example), which needs to be a valid json string. If that variable is not found, it looks for a json file with the same name. It filters the configs by models (you can filter by other keys as well).\n", + "\n", + "The json looks like the following:\n", + "```json\n", + "[\n", + " {\n", + " \"model\": \"gpt-4\",\n", + " \"api_key\": \"\"\n", + " },\n", + " {\n", + " \"model\": \"gpt-4\",\n", + " \"api_key\": \"\",\n", + " \"base_url\": \"\",\n", + " \"api_type\": \"azure\",\n", + " \"api_version\": \"2024-02-01\"\n", + " },\n", + " {\n", + " \"model\": \"gpt-4-32k\",\n", + " \"api_key\": \"\",\n", + " \"base_url\": \"\",\n", + " \"api_type\": \"azure\",\n", + " \"api_version\": \"2024-02-01\"\n", + " }\n", + "]\n", + "```\n", + "\n", + "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/notebook/oai_openai_utils.ipynb) for full code examples of the different methods." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Set the config for the custom model\n", + "\n", + "You can add any paramteres that are needed for the custom model loading in the same configuration list.\n", + "\n", + "It is important to add the `model_client_cls` field and set it to a string that corresponds to the class name: `\"CustomModelClient\"`." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [], + "source": [ + "os.environ[\"OAI_CONFIG_LIST\"] = json.dumps(\n", + " [\n", + " {\n", + " \"model\": \"mistralai/Mistral-7B-Instruct-v0.2\",\n", + " \"model_client_cls\": \"CustomModelClient\",\n", + " \"device\": 0,\n", + " \"n\": 1,\n", + " \"params\": {\n", + " \"max_new_tokens\": 500,\n", + " \"top_k\": 50,\n", + " \"temperature\": 0.1,\n", + " \"do_sample\": True,\n", + " \"return_full_text\": False,\n", + " },\n", + " }\n", + " ]\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [], + "source": [ + "config_list_custom = config_list_from_json(\n", + " \"OAI_CONFIG_LIST\",\n", + " filter_dict={\"model_client_cls\": [\"CustomModelClient\"]},\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import getpass\n", + "\n", + "from huggingface_hub import login\n", + "\n", + "# The Mistral-7B-Instruct-v0.2 is a gated model which requires API token to access\n", + "login(token=getpass.getpass(\"Enter your HuggingFace API Token\"))" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Construct Agents\n", + "\n", + "Consturct a simple conversation between a User proxy and an Assistent agent" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[autogen.oai.client: 09-01 12:53:51] {484} INFO - Detected custom model client in config: CustomModelClient, model client can not be used until register_model_client is called.\n" + ] + } + ], + "source": [ + "assistant = AssistantAgent(\"assistant\", llm_config={\"config_list\": config_list_custom})\n", + "user_proxy = UserProxyAgent(\"user_proxy\", code_execution_config=False)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Register the custom client class to the assistant agent" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "CustomModelClient config: {'model': 'microsoft/Phi-3.5-mini-instruct', 'model_client_cls': 'CustomModelClient', 'device': 0, 'n': 1, 'params': {'max_new_tokens': 100, 'top_k': 50, 'temperature': 0.1, 'do_sample': True, 'return_full_text': False}}\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:07<00:00, 3.51s/it]\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Loaded model microsoft/Phi-3.5-mini-instruct to 0\n" + ] + } + ], + "source": [ + "assistant.register_model_client(model_client_cls=CustomModelClient)" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\u001b[33muser_proxy\u001b[0m (to assistant):\n", + "\n", + "Write python code to print Hello World!\n", + "\n", + "--------------------------------------------------------------------------------\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "You are not running the flash-attention implementation, expect numerical differences.\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\u001b[33massistant\u001b[0m (to 
user_proxy):\n", + "\n", + " ```python\n", + "# filename: hello_world.py\n", + "\n", + "print(\"Hello World!\")\n", + "```\n", + "\n", + "To execute this code, save it in a file named `hello_world.py`. Then, open your terminal or command prompt, navigate to the directory containing the file, and run the following command:\n", + "\n", + "```\n", + "python hello_world.py\n", + "```\n", + "\n", + "The output should be:\n", + "\n", + "```\n", + "Hello World!\n", + "```\n", + "\n", + "If you encounter any errors,\n", + "\n", + "--------------------------------------------------------------------------------\n" + ] + }, + { + "data": { + "text/plain": [ + "ChatResult(chat_id=None, chat_history=[{'content': 'Write python code to print Hello World!', 'role': 'assistant', 'name': 'user_proxy'}, {'content': ' ```python\\n# filename: hello_world.py\\n\\nprint(\"Hello World!\")\\n```\\n\\nTo execute this code, save it in a file named `hello_world.py`. Then, open your terminal or command prompt, navigate to the directory containing the file, and run the following command:\\n\\n```\\npython hello_world.py\\n```\\n\\nThe output should be:\\n\\n```\\nHello World!\\n```\\n\\nIf you encounter any errors,', 'role': 'user', 'name': 'assistant'}], summary=' ```python\\n# filename: hello_world.py\\n\\nprint(\"Hello World!\")\\n```\\n\\nTo execute this code, save it in a file named `hello_world.py`. Then, open your terminal or command prompt, navigate to the directory containing the file, and run the following command:\\n\\n```\\npython hello_world.py\\n```\\n\\nThe output should be:\\n\\n```\\nHello World!\\n```\\n\\nIf you encounter any errors,', cost={'usage_including_cached_inference': {'total_cost': 0}, 'usage_excluding_cached_inference': {'total_cost': 0}}, human_input=['exit'])" + ] + }, + "execution_count": 8, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "user_proxy.initiate_chat(assistant, message=\"Write python code to print Hello World!\")" + ] + } + ], + "metadata": { + "front_matter": { + "description": "Define and laod a custom model", + "tags": [ + "custom model" + ] + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.6" + }, + "vscode": { + "interpreter": { + "hash": "949777d72b0d2535278d3dc13498b2535136f6dfe0678499012e853ee9abcab1" + } + }, + "widgets": { + "application/vnd.jupyter.widget-state+json": { + "state": { + "2d910cfd2d2a4fc49fc30fbbdc5576a7": { + "model_module": "@jupyter-widgets/base", + "model_module_version": "2.0.0", + "model_name": "LayoutModel", + "state": { + "_model_module": "@jupyter-widgets/base", + "_model_module_version": "2.0.0", + "_model_name": "LayoutModel", + "_view_count": null, + "_view_module": "@jupyter-widgets/base", + "_view_module_version": "2.0.0", + "_view_name": "LayoutView", + "align_content": null, + "align_items": null, + "align_self": null, + "border_bottom": null, + "border_left": null, + "border_right": null, + "border_top": null, + "bottom": null, + "display": null, + "flex": null, + "flex_flow": null, + "grid_area": null, + "grid_auto_columns": null, + "grid_auto_flow": null, + "grid_auto_rows": null, + "grid_column": null, + "grid_gap": null, + "grid_row": null, + "grid_template_areas": null, + 
"grid_template_columns": null, + "grid_template_rows": null, + "height": null, + "justify_content": null, + "justify_items": null, + "left": null, + "margin": null, + "max_height": null, + "max_width": null, + "min_height": null, + "min_width": null, + "object_fit": null, + "object_position": null, + "order": null, + "overflow": null, + "padding": null, + "right": null, + "top": null, + "visibility": null, + "width": null + } + }, + "454146d0f7224f038689031002906e6f": { + "model_module": "@jupyter-widgets/controls", + "model_module_version": "2.0.0", + "model_name": "HBoxModel", + "state": { + "_dom_classes": [], + "_model_module": "@jupyter-widgets/controls", + "_model_module_version": "2.0.0", + "_model_name": "HBoxModel", + "_view_count": null, + "_view_module": "@jupyter-widgets/controls", + "_view_module_version": "2.0.0", + "_view_name": "HBoxView", + "box_style": "", + "children": [ + "IPY_MODEL_e4ae2b6f5a974fd4bafb6abb9d12ff26", + "IPY_MODEL_577e1e3cc4db4942b0883577b3b52755", + "IPY_MODEL_b40bdfb1ac1d4cffb7cefcb870c64d45" + ], + "layout": "IPY_MODEL_dc83c7bff2f241309537a8119dfc7555", + "tabbable": null, + "tooltip": null + } + }, + "577e1e3cc4db4942b0883577b3b52755": { + "model_module": "@jupyter-widgets/controls", + "model_module_version": "2.0.0", + "model_name": "FloatProgressModel", + "state": { + "_dom_classes": [], + "_model_module": "@jupyter-widgets/controls", + "_model_module_version": "2.0.0", + "_model_name": "FloatProgressModel", + "_view_count": null, + "_view_module": "@jupyter-widgets/controls", + "_view_module_version": "2.0.0", + "_view_name": "ProgressView", + "bar_style": "success", + "description": "", + "description_allow_html": false, + "layout": "IPY_MODEL_2d910cfd2d2a4fc49fc30fbbdc5576a7", + "max": 1, + "min": 0, + "orientation": "horizontal", + "style": "IPY_MODEL_74a6ba0c3cbc4051be0a83e152fe1e62", + "tabbable": null, + "tooltip": null, + "value": 1 + } + }, + "6086462a12d54bafa59d3c4566f06cb2": { + "model_module": "@jupyter-widgets/base", + "model_module_version": "2.0.0", + "model_name": "LayoutModel", + "state": { + "_model_module": "@jupyter-widgets/base", + "_model_module_version": "2.0.0", + "_model_name": "LayoutModel", + "_view_count": null, + "_view_module": "@jupyter-widgets/base", + "_view_module_version": "2.0.0", + "_view_name": "LayoutView", + "align_content": null, + "align_items": null, + "align_self": null, + "border_bottom": null, + "border_left": null, + "border_right": null, + "border_top": null, + "bottom": null, + "display": null, + "flex": null, + "flex_flow": null, + "grid_area": null, + "grid_auto_columns": null, + "grid_auto_flow": null, + "grid_auto_rows": null, + "grid_column": null, + "grid_gap": null, + "grid_row": null, + "grid_template_areas": null, + "grid_template_columns": null, + "grid_template_rows": null, + "height": null, + "justify_content": null, + "justify_items": null, + "left": null, + "margin": null, + "max_height": null, + "max_width": null, + "min_height": null, + "min_width": null, + "object_fit": null, + "object_position": null, + "order": null, + "overflow": null, + "padding": null, + "right": null, + "top": null, + "visibility": null, + "width": null + } + }, + "74a6ba0c3cbc4051be0a83e152fe1e62": { + "model_module": "@jupyter-widgets/controls", + "model_module_version": "2.0.0", + "model_name": "ProgressStyleModel", + "state": { + "_model_module": "@jupyter-widgets/controls", + "_model_module_version": "2.0.0", + "_model_name": "ProgressStyleModel", + "_view_count": null, + "_view_module": 
"@jupyter-widgets/base", + "_view_module_version": "2.0.0", + "_view_name": "StyleView", + "bar_color": null, + "description_width": "" + } + }, + "7d3f3d9e15894d05a4d188ff4f466554": { + "model_module": "@jupyter-widgets/controls", + "model_module_version": "2.0.0", + "model_name": "HTMLStyleModel", + "state": { + "_model_module": "@jupyter-widgets/controls", + "_model_module_version": "2.0.0", + "_model_name": "HTMLStyleModel", + "_view_count": null, + "_view_module": "@jupyter-widgets/base", + "_view_module_version": "2.0.0", + "_view_name": "StyleView", + "background": null, + "description_width": "", + "font_size": null, + "text_color": null + } + }, + "b40bdfb1ac1d4cffb7cefcb870c64d45": { + "model_module": "@jupyter-widgets/controls", + "model_module_version": "2.0.0", + "model_name": "HTMLModel", + "state": { + "_dom_classes": [], + "_model_module": "@jupyter-widgets/controls", + "_model_module_version": "2.0.0", + "_model_name": "HTMLModel", + "_view_count": null, + "_view_module": "@jupyter-widgets/controls", + "_view_module_version": "2.0.0", + "_view_name": "HTMLView", + "description": "", + "description_allow_html": false, + "layout": "IPY_MODEL_f1355871cc6f4dd4b50d9df5af20e5c8", + "placeholder": "​", + "style": "IPY_MODEL_ca245376fd9f4354af6b2befe4af4466", + "tabbable": null, + "tooltip": null, + "value": " 1/1 [00:00<00:00, 44.69it/s]" + } + }, + "ca245376fd9f4354af6b2befe4af4466": { + "model_module": "@jupyter-widgets/controls", + "model_module_version": "2.0.0", + "model_name": "HTMLStyleModel", + "state": { + "_model_module": "@jupyter-widgets/controls", + "_model_module_version": "2.0.0", + "_model_name": "HTMLStyleModel", + "_view_count": null, + "_view_module": "@jupyter-widgets/base", + "_view_module_version": "2.0.0", + "_view_name": "StyleView", + "background": null, + "description_width": "", + "font_size": null, + "text_color": null + } + }, + "dc83c7bff2f241309537a8119dfc7555": { + "model_module": "@jupyter-widgets/base", + "model_module_version": "2.0.0", + "model_name": "LayoutModel", + "state": { + "_model_module": "@jupyter-widgets/base", + "_model_module_version": "2.0.0", + "_model_name": "LayoutModel", + "_view_count": null, + "_view_module": "@jupyter-widgets/base", + "_view_module_version": "2.0.0", + "_view_name": "LayoutView", + "align_content": null, + "align_items": null, + "align_self": null, + "border_bottom": null, + "border_left": null, + "border_right": null, + "border_top": null, + "bottom": null, + "display": null, + "flex": null, + "flex_flow": null, + "grid_area": null, + "grid_auto_columns": null, + "grid_auto_flow": null, + "grid_auto_rows": null, + "grid_column": null, + "grid_gap": null, + "grid_row": null, + "grid_template_areas": null, + "grid_template_columns": null, + "grid_template_rows": null, + "height": null, + "justify_content": null, + "justify_items": null, + "left": null, + "margin": null, + "max_height": null, + "max_width": null, + "min_height": null, + "min_width": null, + "object_fit": null, + "object_position": null, + "order": null, + "overflow": null, + "padding": null, + "right": null, + "top": null, + "visibility": null, + "width": null + } + }, + "e4ae2b6f5a974fd4bafb6abb9d12ff26": { + "model_module": "@jupyter-widgets/controls", + "model_module_version": "2.0.0", + "model_name": "HTMLModel", + "state": { + "_dom_classes": [], + "_model_module": "@jupyter-widgets/controls", + "_model_module_version": "2.0.0", + "_model_name": "HTMLModel", + "_view_count": null, + "_view_module": "@jupyter-widgets/controls", + 
"_view_module_version": "2.0.0", + "_view_name": "HTMLView", + "description": "", + "description_allow_html": false, + "layout": "IPY_MODEL_6086462a12d54bafa59d3c4566f06cb2", + "placeholder": "​", + "style": "IPY_MODEL_7d3f3d9e15894d05a4d188ff4f466554", + "tabbable": null, + "tooltip": null, + "value": "100%" + } + }, + "f1355871cc6f4dd4b50d9df5af20e5c8": { + "model_module": "@jupyter-widgets/base", + "model_module_version": "2.0.0", + "model_name": "LayoutModel", + "state": { + "_model_module": "@jupyter-widgets/base", + "_model_module_version": "2.0.0", + "_model_name": "LayoutModel", + "_view_count": null, + "_view_module": "@jupyter-widgets/base", + "_view_module_version": "2.0.0", + "_view_name": "LayoutView", + "align_content": null, + "align_items": null, + "align_self": null, + "border_bottom": null, + "border_left": null, + "border_right": null, + "border_top": null, + "bottom": null, + "display": null, + "flex": null, + "flex_flow": null, + "grid_area": null, + "grid_auto_columns": null, + "grid_auto_flow": null, + "grid_auto_rows": null, + "grid_column": null, + "grid_gap": null, + "grid_row": null, + "grid_template_areas": null, + "grid_template_columns": null, + "grid_template_rows": null, + "height": null, + "justify_content": null, + "justify_items": null, + "left": null, + "margin": null, + "max_height": null, + "max_width": null, + "min_height": null, + "min_width": null, + "object_fit": null, + "object_position": null, + "order": null, + "overflow": null, + "padding": null, + "right": null, + "top": null, + "visibility": null, + "width": null + } + } + }, + "version_major": 2, + "version_minor": 0 + } + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} From 4f060e5efa29f2332b5c5f670779cb8f76116e42 Mon Sep 17 00:00:00 2001 From: Krishna Shedbalkar <60742358+krishnashed@users.noreply.github.com> Date: Sat, 12 Oct 2024 06:39:12 +0530 Subject: [PATCH 25/35] Autobuild Function calling (#3238) * [Fix] Precommit issues * [Fix] checks * [Fix] iterating through list_of_functions * [Fix] pre-commit checks * Update test/agentchat/contrib/test_agent_builder.py Co-authored-by: Chi Wang <4250911+sonichi@users.noreply.github.com> --------- Co-authored-by: Chi Wang Co-authored-by: Ryan Sweet Co-authored-by: Chi Wang <4250911+sonichi@users.noreply.github.com> --- autogen/agentchat/contrib/agent_builder.py | 79 +++- notebook/autobuild_function_calling.ipynb | 470 +++++++++++++++++++ test/agentchat/contrib/test_agent_builder.py | 141 +++++- 3 files changed, 671 insertions(+), 19 deletions(-) create mode 100644 notebook/autobuild_function_calling.ipynb diff --git a/autogen/agentchat/contrib/agent_builder.py b/autogen/agentchat/contrib/agent_builder.py index 430017d13fc9..7eaec3eef747 100644 --- a/autogen/agentchat/contrib/agent_builder.py +++ b/autogen/agentchat/contrib/agent_builder.py @@ -172,6 +172,26 @@ class AgentBuilder: ``` """ + AGENT_FUNCTION_MAP_PROMPT = """Consider the following function. + Function Name: {function_name} + Function Description: {function_description} + + The agent details are given in the format: {format_agent_details} + + Which one of the following agents should be able to execute this function, preferably an agent with programming background? + {agent_details} + + Hint: + # Only respond with the name of the agent that is most suited to execute the function and nothing else. + """ + + UPDATED_AGENT_SYSTEM_MESSAGE = """ + {agent_system_message} + + You have access to execute the function: {function_name}. 
+ With following description: {function_description} + """ + def __init__( self, config_file_or_env: Optional[str] = "OAI_CONFIG_LIST", @@ -358,6 +378,7 @@ def build( self, building_task: str, default_llm_config: Dict, + list_of_functions: Optional[List[Dict]] = None, coding: Optional[bool] = None, code_execution_config: Optional[Dict] = None, use_oai_assistant: Optional[bool] = False, @@ -373,6 +394,7 @@ def build( coding: use to identify if the user proxy (a code interpreter) should be added. code_execution_config: specific configs for user proxy (e.g., last_n_messages, work_dir, ...). default_llm_config: specific configs for LLM (e.g., config_list, seed, temperature, ...). + list_of_functions: list of functions to be associated with Agents use_oai_assistant: use OpenAI assistant api instead of self-constructed agent. user_proxy: user proxy's class that can be used to replace the default user proxy. @@ -480,8 +502,9 @@ def build( "code_execution_config": code_execution_config, } ) + _config_check(self.cached_configs) - return self._build_agents(use_oai_assistant, user_proxy=user_proxy, **kwargs) + return self._build_agents(use_oai_assistant, list_of_functions, user_proxy=user_proxy, **kwargs) def build_from_library( self, @@ -653,13 +676,18 @@ def build_from_library( return self._build_agents(use_oai_assistant, user_proxy=user_proxy, **kwargs) def _build_agents( - self, use_oai_assistant: Optional[bool] = False, user_proxy: Optional[autogen.ConversableAgent] = None, **kwargs + self, + use_oai_assistant: Optional[bool] = False, + list_of_functions: Optional[List[Dict]] = None, + user_proxy: Optional[autogen.ConversableAgent] = None, + **kwargs, ) -> Tuple[List[autogen.ConversableAgent], Dict]: """ Build agents with generated configs. Args: use_oai_assistant: use OpenAI assistant api instead of self-constructed agent. + list_of_functions: list of functions to be associated to Agents user_proxy: user proxy's class that can be used to replace the default user proxy. 
Returns: @@ -695,6 +723,53 @@ def _build_agents( ) agent_list = agent_list + [user_proxy] + agent_details = [] + + for agent in agent_list[:-1]: + agent_details.append({"name": agent.name, "description": agent.description}) + + if list_of_functions: + for func in list_of_functions: + resp = ( + self.builder_model.create( + messages=[ + { + "role": "user", + "content": self.AGENT_FUNCTION_MAP_PROMPT.format( + function_name=func["name"], + function_description=func["description"], + format_agent_details='[{"name": "agent_name", "description": "agent description"}, ...]', + agent_details=str(json.dumps(agent_details)), + ), + } + ] + ) + .choices[0] + .message.content + ) + + autogen.agentchat.register_function( + func["function"], + caller=self.agent_procs_assign[resp][0], + executor=agent_list[0], + name=func["name"], + description=func["description"], + ) + + agents_current_system_message = [ + agent["system_message"] for agent in agent_configs if agent["name"] == resp + ][0] + + self.agent_procs_assign[resp][0].update_system_message( + self.UPDATED_AGENT_SYSTEM_MESSAGE.format( + agent_system_message=agents_current_system_message, + function_name=func["name"], + function_description=func["description"], + ) + ) + + print(f"Function {func['name']} is registered to agent {resp}.") + return agent_list, self.cached_configs.copy() def save(self, filepath: Optional[str] = None) -> str: diff --git a/notebook/autobuild_function_calling.ipynb b/notebook/autobuild_function_calling.ipynb new file mode 100644 index 000000000000..f414de4b84a4 --- /dev/null +++ b/notebook/autobuild_function_calling.ipynb @@ -0,0 +1,470 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# AutoBuild Agents function calling\n", + "By: [Krishna Shedbalkar](https://github.com/krishnashed/)\n", + "\n", + "In this notebook, we introduce a way for Agents created using `Autobuild` to do function calling. Developers can specify a function, function name and function description which will thereafter be assigned and executed by the most suitable agent created using AutoBuild." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Requirement\n", + "\n", + "AutoBuild require `pyautogen[autobuild]`, which can be installed by the following command:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%pip install pyautogen[autobuild]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Step 1: Prepare configuration and some useful functions\n", + "\n", + "Prepare a `config_file_or_env` for assistant agent to limit the choice of LLM you want to use in this task. This config can be a path of json file or a name of environment variable. A `default_llm_config` is also required for initialize the specific config of LLMs like seed, temperature, etc. 
Preventing UserProxy agent being called multiple times by adding `allow_repeat_speaker=agent_list[:-1]`" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [], + "source": [ + "import autogen\n", + "from autogen.agentchat.contrib.agent_builder import AgentBuilder\n", + "\n", + "config_file_or_env = \"OAI_CONFIG_LIST\"\n", + "config_list = autogen.config_list_from_json(config_file_or_env, filter_dict={\"model\": [\"gpt-4-1106-preview\", \"gpt-4\"]})\n", + "llm_config = {\n", + " \"config_list\": config_list,\n", + " \"timeout\": 120,\n", + "}\n", + "\n", + "\n", + "def start_task(execution_task: str, agent_list: list):\n", + " group_chat = autogen.GroupChat(agents=agent_list, messages=[], allow_repeat_speaker=agent_list[:-1], max_round=12)\n", + " manager = autogen.GroupChatManager(groupchat=group_chat, llm_config={\"config_list\": config_list})\n", + " agent_list[0].initiate_chat(manager, message=execution_task)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Step 2: Create a AgentBuilder\n", + "\n", + "Create a `AgentBuilder` with the specified `config_path_or_env`. AgentBuilder will use `gpt-4` in default to complete the whole process, you can specify the `builder_model` and `agent_model` to other OpenAI model to match your task. You can also specify an open-source LLM supporting by vLLM and FastChat, see blog for more details." + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [], + "source": [ + "builder = AgentBuilder(\n", + " config_file_or_env=config_file_or_env, builder_model=\"gpt-4-1106-preview\", agent_model=\"gpt-4-1106-preview\"\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Step 3: Specify a building task\n", + "\n", + "Specify a building task with a general description. Building task will help build manager (a LLM) decide what agents should be built." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": {}, + "outputs": [], + "source": [ + "building_task = \"Analyze and list the trending topics in arxiv papers related to GPT-4\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Step 4: Define functions\n", + "\n", + "Define functions to be executed by the Agents of AutoBuild, further specify details like `name`, `description` and `function` of all the functions in an array called `list_of_functions` which will be passed to `builder.build()`" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": {}, + "outputs": [], + "source": [ + "import time\n", + "from datetime import datetime, timedelta\n", + "from typing import Dict\n", + "\n", + "import feedparser\n", + "\n", + "\n", + "def get_arxiv_paper_from_a_week(search_topic: str) -> Dict:\n", + " # arXiv API endpoint\n", + " url = \"http://export.arxiv.org/api/query?\"\n", + "\n", + " # Search parameters\n", + " max_results = 10\n", + "\n", + " query = (\n", + " f\"{url}search_query=all:{search_topic}&max_results={max_results}&sortBy=lastUpdatedDate&sortOrder=descending\"\n", + " )\n", + "\n", + " # Parse the feed\n", + " feed = feedparser.parse(query)\n", + "\n", + " now = datetime.now()\n", + " week_ago = now - timedelta(weeks=1)\n", + "\n", + " papers = []\n", + "\n", + " # Get papers from last week\n", + " for entry in feed.entries:\n", + " published_time = datetime.strptime(entry.published, \"%Y-%m-%dT%H:%M:%SZ\")\n", + " if published_time > week_ago:\n", + " list_of_authors = \", \".join(author.name for author in entry.authors)\n", + "\n", + " papers.append(\n", + " {\n", + " \"title\": entry.title,\n", + " \"authors\": list_of_authors,\n", + " \"published_on\": time.strftime(\"%B %d, %Y\", entry.published_parsed),\n", + " \"summary\": entry.summary,\n", + " \"link\": entry.link,\n", + " }\n", + " )\n", + "\n", + " return papers\n", + "\n", + "\n", + "list_of_functions = [\n", + " {\n", + " \"name\": \"get_arxiv_paper_from_a_week\",\n", + " \"description\": \"Get arxiv papers published in last week\",\n", + " \"function\": get_arxiv_paper_from_a_week,\n", + " }\n", + "]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Step 5: build group chat agents\n", + "\n", + "Use `build()` to let build manager (the specified `builder_model`) complete the group chat agents generation. 
Specify `list_of_functions` to be used by the Agents" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\u001b[32m==> Generating agents...\u001b[0m\n", + "['NLP_Expert', 'DataAnalysis_Expert', 'AIResearch_Expert'] are generated.\n", + "\u001b[32m==> Generating system message...\u001b[0m\n", + "Preparing system message for NLP_Expert\n", + "Preparing system message for DataAnalysis_Expert\n", + "Preparing system message for AIResearch_Expert\n", + "\u001b[32m==> Generating description...\u001b[0m\n", + "Preparing description for NLP_Expert\n", + "Preparing description for DataAnalysis_Expert\n", + "Preparing description for AIResearch_Expert\n", + "\u001b[32m==> Creating agents...\u001b[0m\n", + "Creating agent NLP_Expert...\n", + "Creating agent DataAnalysis_Expert...\n", + "Creating agent AIResearch_Expert...\n", + "Adding user console proxy...\n", + "Function get_arxiv_paper_from_a_week is registered to agent DataAnalysis_Expert.\n" + ] + } + ], + "source": [ + "agent_list, agent_configs = builder.build(building_task, llm_config, list_of_functions, max_agents=3)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Here you can see that Function `exec_python` has been associated with `ArxivAPI_Expert` Agent." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Step 6: execute task\n", + "\n", + "Let agents generated in `build()` to complete the task collaboratively in a group chat." + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\u001b[33mNLP_Expert\u001b[0m (to chat_manager):\n", + "\n", + "Analyze and list the trending topics in arxiv papers related to GPT-4\n", + "\n", + "--------------------------------------------------------------------------------\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\u001b[32m\n", + "Next speaker: DataAnalysis_Expert\n", + "\u001b[0m\n", + "\u001b[33mDataAnalysis_Expert\u001b[0m (to chat_manager):\n", + "\n", + "\u001b[32m***** Suggested tool call (call_hkKs7wbCyAOMkC4QjOYMLgtd): get_arxiv_paper_from_a_week *****\u001b[0m\n", + "Arguments: \n", + "{\"search_topic\":\"GPT-4\"}\n", + "\u001b[32m********************************************************************************************\u001b[0m\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[32m\n", + "Next speaker: NLP_Expert\n", + "\u001b[0m\n", + "\u001b[35m\n", + ">>>>>>>> EXECUTING FUNCTION get_arxiv_paper_from_a_week...\u001b[0m\n", + "\u001b[33mNLP_Expert\u001b[0m (to chat_manager):\n", + "\n", + "\u001b[33mNLP_Expert\u001b[0m (to chat_manager):\n", + "\n", + "\u001b[32m***** Response from calling tool (call_hkKs7wbCyAOMkC4QjOYMLgtd) *****\u001b[0m\n", + "[{\"title\": \"Self-Training with Direct Preference Optimization Improves\\n Chain-of-Thought Reasoning\", \"authors\": \"Tianduo Wang, Shichen Li, Wei Lu\", \"published_on\": \"July 25, 2024\", \"summary\": \"Effective training of language models (LMs) for mathematical reasoning tasks\\ndemands high-quality supervised fine-tuning data. Besides obtaining annotations\\nfrom human experts, a common alternative is sampling from larger and more\\npowerful LMs. 
However, this knowledge distillation approach can be costly and\\nunstable, particularly when relying on closed-source, proprietary LMs like\\nGPT-4, whose behaviors are often unpredictable. In this work, we demonstrate\\nthat the reasoning abilities of small-scale LMs can be enhanced through\\nself-training, a process where models learn from their own outputs. We also\\nshow that the conventional self-training can be further augmented by a\\npreference learning algorithm called Direct Preference Optimization (DPO). By\\nintegrating DPO into self-training, we leverage preference data to guide LMs\\ntowards more accurate and diverse chain-of-thought reasoning. We evaluate our\\nmethod across various mathematical reasoning tasks using different base models.\\nOur experiments show that this approach not only improves LMs' reasoning\\nperformance but also offers a more cost-effective and scalable solution\\ncompared to relying on large proprietary LMs.\", \"link\": \"http://arxiv.org/abs/2407.18248v1\"}, {\"title\": \"C2P: Featuring Large Language Models with Causal Reasoning\", \"authors\": \"Abdolmahdi Bagheri, Matin Alinejad, Kevin Bello, Alireza Akhondi-Asl\", \"published_on\": \"July 25, 2024\", \"summary\": \"Causal reasoning is the primary bottleneck that Large Language Models (LLMs)\\nmust overcome to attain human-level intelligence. To address this, we introduce\\nthe Causal Chain of Prompting (C2P) as the first reasoning framework that\\nequips current LLMs with causal reasoning capabilities. C2P operates\\nautonomously, avoiding reliance on external tools or modules during both the\\ncausal learning and reasoning phases, and can be seamlessly implemented during\\nthe training or fine-tuning of LLMs. Experimental results across various\\nbenchmark datasets demonstrate a significant improvement in causal learning and\\nsubsequent reasoning accuracy of LLMs. We illustrate how C2P enhances LLMs'\\nability to causally reason in real-world scenarios, addressing complex problems\\nin fields such as healthcare, medicine, economics, education, social sciences,\\nenvironmental science, and marketing. With few-shot learning, GPT-4 Turbo using\\nC2P with as few as six examples achieves significant performance improvements,\\nboasting over a 33% increase in reasoning accuracy over the most\\nstate-of-the-art LLMs, which perform nearly randomly in similar circumstances.\\nThis demonstrates the transformative potential of integrating C2P into LLM\\ntraining or fine-tuning processes, thereby empowering these models with\\nadvanced causal reasoning capabilities.\", \"link\": \"http://arxiv.org/abs/2407.18069v1\"}, {\"title\": \"Is the Digital Forensics and Incident Response Pipeline Ready for\\n Text-Based Threats in LLM Era?\", \"authors\": \"Avanti Bhandarkar, Ronald Wilson, Anushka Swarup, Mengdi Zhu, Damon Woodard\", \"published_on\": \"July 25, 2024\", \"summary\": \"In the era of generative AI, the widespread adoption of Neural Text\\nGenerators (NTGs) presents new cybersecurity challenges, particularly within\\nthe realms of Digital Forensics and Incident Response (DFIR). These challenges\\nprimarily involve the detection and attribution of sources behind advanced\\nattacks like spearphishing and disinformation campaigns. As NTGs evolve, the\\ntask of distinguishing between human and NTG-authored texts becomes critically\\ncomplex. 
This paper rigorously evaluates the DFIR pipeline tailored for\\ntext-based security systems, specifically focusing on the challenges of\\ndetecting and attributing authorship of NTG-authored texts. By introducing a\\nnovel human-NTG co-authorship text attack, termed CS-ACT, our study uncovers\\nsignificant vulnerabilities in traditional DFIR methodologies, highlighting\\ndiscrepancies between ideal scenarios and real-world conditions. Utilizing 14\\ndiverse datasets and 43 unique NTGs, up to the latest GPT-4, our research\\nidentifies substantial vulnerabilities in the forensic profiling phase,\\nparticularly in attributing authorship to NTGs. Our comprehensive evaluation\\npoints to factors such as model sophistication and the lack of distinctive\\nstyle within NTGs as significant contributors for these vulnerabilities. Our\\nfindings underscore the necessity for more sophisticated and adaptable\\nstrategies, such as incorporating adversarial learning, stylizing NTGs, and\\nimplementing hierarchical attribution through the mapping of NTG lineages to\\nenhance source attribution. This sets the stage for future research and the\\ndevelopment of more resilient text-based security systems.\", \"link\": \"http://arxiv.org/abs/2407.17870v1\"}, {\"title\": \"Cost-effective Instruction Learning for Pathology Vision and Language\\n Analysis\", \"authors\": \"Kaitao Chen, Mianxin Liu, Fang Yan, Lei Ma, Xiaoming Shi, Lilong Wang, Xiaosong Wang, Lifeng Zhu, Zhe Wang, Mu Zhou, Shaoting Zhang\", \"published_on\": \"July 25, 2024\", \"summary\": \"The advent of vision-language models fosters the interactive conversations\\nbetween AI-enabled models and humans. Yet applying these models into clinics\\nmust deal with daunting challenges around large-scale training data, financial,\\nand computational resources. Here we propose a cost-effective instruction\\nlearning framework for conversational pathology named as CLOVER. CLOVER only\\ntrains a lightweight module and uses instruction tuning while freezing the\\nparameters of the large language model. Instead of using costly GPT-4, we\\npropose well-designed prompts on GPT-3.5 for building generation-based\\ninstructions, emphasizing the utility of pathological knowledge derived from\\nthe Internet source. To augment the use of instructions, we construct a\\nhigh-quality set of template-based instructions in the context of digital\\npathology. From two benchmark datasets, our findings reveal the strength of\\nhybrid-form instructions in the visual question-answer in pathology. Extensive\\nresults show the cost-effectiveness of CLOVER in answering both open-ended and\\nclosed-ended questions, where CLOVER outperforms strong baselines that possess\\n37 times more training parameters and use instruction data generated from\\nGPT-4. Through the instruction tuning, CLOVER exhibits robustness of few-shot\\nlearning in the external clinical dataset. 
These findings demonstrate that\\ncost-effective modeling of CLOVER could accelerate the adoption of rapid\\nconversational applications in the landscape of digital pathology.\", \"link\": \"http://arxiv.org/abs/2407.17734v1\"}, {\"title\": \"My Ontologist: Evaluating BFO-Based AI for Definition Support\", \"authors\": \"Carter Benson, Alec Sculley, Austin Liebers, John Beverley\", \"published_on\": \"July 24, 2024\", \"summary\": \"Generative artificial intelligence (AI), exemplified by the release of\\nGPT-3.5 in 2022, has significantly advanced the potential applications of large\\nlanguage models (LLMs), including in the realms of ontology development and\\nknowledge graph creation. Ontologies, which are structured frameworks for\\norganizing information, and knowledge graphs, which combine ontologies with\\nactual data, are essential for enabling interoperability and automated\\nreasoning. However, current research has largely overlooked the generation of\\nontologies extending from established upper-level frameworks like the Basic\\nFormal Ontology (BFO), risking the creation of non-integrable ontology silos.\\nThis study explores the extent to which LLMs, particularly GPT-4, can support\\nontologists trained in BFO. Through iterative development of a specialized GPT\\nmodel named \\\"My Ontologist,\\\" we aimed to generate BFO-conformant ontologies.\\nInitial versions faced challenges in maintaining definition conventions and\\nleveraging foundational texts effectively. My Ontologist 3.0 showed promise by\\nadhering to structured rules and modular ontology suites, yet the release of\\nGPT-4o disrupted this progress by altering the model's behavior. Our findings\\nunderscore the importance of aligning LLM-generated ontologies with top-level\\nstandards and highlight the complexities of integrating evolving AI\\ncapabilities in ontology engineering.\", \"link\": \"http://arxiv.org/abs/2407.17657v1\"}, {\"title\": \"Can GPT-4 learn to analyze moves in research article abstracts?\", \"authors\": \"Danni Yu, Marina Bondi, Ken Hyland\", \"published_on\": \"July 22, 2024\", \"summary\": \"One of the most powerful and enduring ideas in written discourse analysis is\\nthat genres can be described in terms of the moves which structure a writer's\\npurpose. Considerable research has sought to identify these distinct\\ncommunicative acts, but analyses have been beset by problems of subjectivity,\\nreliability and the time-consuming need for multiple coders to confirm\\nanalyses. In this paper we employ the affordances of GPT-4 to automate the\\nannotation process by using natural language prompts. Focusing on abstracts\\nfrom articles in four applied linguistics journals, we devise prompts which\\nenable the model to identify moves effectively. The annotated outputs of these\\nprompts were evaluated by two assessors with a third addressing disagreements.\\nThe results show that an 8-shot prompt was more effective than one using two,\\nconfirming that the inclusion of examples illustrating areas of variability can\\nenhance GPT-4's ability to recognize multiple moves in a single sentence and\\nreduce bias related to textual position. 
We suggest that GPT-4 offers\\nconsiderable potential in automating this annotation process, when human actors\\nwith domain specific linguistic expertise inform the prompting process.\", \"link\": \"http://arxiv.org/abs/2407.15612v2\"}, {\"title\": \"I Could've Asked That: Reformulating Unanswerable Questions\", \"authors\": \"Wenting Zhao, Ge Gao, Claire Cardie, Alexander M. Rush\", \"published_on\": \"July 24, 2024\", \"summary\": \"When seeking information from unfamiliar documents, users frequently pose\\nquestions that cannot be answered by the documents. While existing large\\nlanguage models (LLMs) identify these unanswerable questions, they do not\\nassist users in reformulating their questions, thereby reducing their overall\\nutility. We curate CouldAsk, an evaluation benchmark composed of existing and\\nnew datasets for document-grounded question answering, specifically designed to\\nstudy reformulating unanswerable questions. We evaluate state-of-the-art\\nopen-source and proprietary LLMs on CouldAsk. The results demonstrate the\\nlimited capabilities of these models in reformulating questions. Specifically,\\nGPT-4 and Llama2-7B successfully reformulate questions only 26% and 12% of the\\ntime, respectively. Error analysis shows that 62% of the unsuccessful\\nreformulations stem from the models merely rephrasing the questions or even\\ngenerating identical questions. We publicly release the benchmark and the code\\nto reproduce the experiments.\", \"link\": \"http://arxiv.org/abs/2407.17469v1\"}]\n", + "\u001b[32m**********************************************************************\u001b[0m\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[32m\n", + "Next speaker: DataAnalysis_Expert\n", + "\u001b[0m\n", + "\u001b[33mDataAnalysis_Expert\u001b[0m (to chat_manager):\n", + "\n", + "After reviewing the recent arXiv papers related to GPT-4, the following trending topics have been identified:\n", + "\n", + "1. **Chain-of-Thought Reasoning and Self-Training**: One study proposes self-training methods, coupled with Direct Preference Optimization (DPO), to improve chain-of-thought reasoning in smaller language models, addressing the limitations of depending on proprietary large language models like GPT-4 for knowledge distillation ([source](http://arxiv.org/abs/2407.18248v1)).\n", + "\n", + "2. **Causal Reasoning in LLMs**: Another research introduces the Causal Chain of Prompting (C2P) framework designed to equip large language models with causal reasoning capabilities. The study shows that this framework, when used with GPT-4, achieves significant performance improvements in various practical scenarios ([source](http://arxiv.org/abs/2407.18069v1)).\n", + "\n", + "3. **Digital Forensics and Text-Based Security Threats**: This paper evaluates how well the Digital Forensics and Incident Response (DFIR) pipeline can handle text-based threats in the LLM era, specifically in the context of detecting and attributing authorship to texts generated by neural text generators like GPT-4 ([source](http://arxiv.org/abs/2407.17870v1)).\n", + "\n", + "4. 
**Cost-Effective Instruction Learning**: A research team discusses a new cost-effective instruction learning framework for conversational pathology named CLOVER, which leverages well-designed prompts on GPT-3.5, demonstrating that savings can be made on computation and financial resources while applying language models in a clinical setting ([source](http://arxiv.org/abs/2407.17734v1)).\n", + "\n", + "5. **Ontology Generation and AI Alignment**: An investigation into the feasibility of GPT-4 supporting ontology development based on the Basic Formal Ontology (BFO). The findings suggest challenges in aligning the rapidly evolving capabilities of LLMs with well-structured ontology standards ([source](http://arxiv.org/abs/2407.17657v1)).\n", + "\n", + "6. **Automating Annotation Processes in Discourse Analysis**: Researchers employ GPT-4's capacity to automate the annotation of communicative moves in written discourse, using natural language prompts to identify structural elements in academic abstracts more effectively ([source](http://arxiv.org/abs/2407.15612v2)).\n", + "\n", + "7. **Reformulating Unanswerable Questions**: Another study creates a benchmark for evaluating LLMs' performance in reformulating unanswerable questions. The findings indicate that even state-of-the-art models like GPT-4 struggle with this task, often failing to significantly alter the original unanswerable question ([source](http://arxiv.org/abs/2407.17469v1)).\n", + "\n", + "These topics illustrate the varied applications of GPT-4 and highlight the research community's focus on enhancing reasoning, security, cost-effectiveness, and interoperability within LLMs. Additionally, the challenges of annotation and question reformulation show that there is still room for improvement in understanding and processing natural language through AI.\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[32m\n", + "Next speaker: AIResearch_Expert\n", + "\u001b[0m\n", + "\u001b[33mAIResearch_Expert\u001b[0m (to chat_manager):\n", + "\n", + "Based on the synthesized information provided by the DataAnalysis_Expert, the current trending topics for GPT-4 research seem to cover a spectrum of cognitive and practical applications: from enhancing reasoning capabilities, improving training methodologies, tackling new cybersecurity issues, to the more nuanced tasks of ontology engineering and discourse annotation.\n", + "\n", + "A future direction that seems particularly pertinent is investigating the interplay between GPT-4's abilities and human intelligence. For example, how GPT-4 can support specialized professionals in tasks that require highly structured knowledge, such as legal document analysis, medical diagnosis, or engineering design. There's scope to explore how the communication between GPT-4 and humans could be streamlined for cooperative problem-solving.\n", + "\n", + "Another promising direction is delving into the area of affective computing: understanding emotions, sarcasm, and subtleties in text to improve human-computer interactions. 
This can lead to breakthrough applications in personalized digital assistants, education (tailored student feedback), and mental health (empathetic conversational agents).\n", + "\n", + "Finally, with the mentioned challenges in reformulation of unanswerable questions and ontology alignment, extensive research focused on understanding the limitations of GPT-4's language comprehension could result in more nuanced teaching mechanisms for AI, leading to more reliable autonomous decision-making applications.\n", + "\n", + "Would the NLP_Expert or DataAnalysis_Expert like to weigh in on some applications or future research directions specifically leveraging linguistic or data-driven nuances that GPT-4 may be capable of addressing?\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[32m\n", + "Next speaker: NLP_Expert\n", + "\u001b[0m\n", + "\u001b[33mNLP_Expert\u001b[0m (to chat_manager):\n", + "\n", + "Sure, I'd like to add some insights from the perspective of NLP.\n", + "\n", + "The field of representational learning is particularly important as we continue to seek ways to improve GPT-4's performance. More sophisticated embedding methods could capture nuanced semantic relationships, make fine-grained distinctions among synonyms, handle polysemy more effectively, and distinguish subtle connotations—a direction that might enhance GPT-4's capabilities in tasks like sentiment analysis, irony detection, and humor recognition.\n", + "\n", + "In terms of practical applications, one future direction lies in the domain of legal and ethical reasoning. As systems like GPT-4 are leveraged for legal research and analysis, it will be essential to teach the model to navigate complex ethical considerations and the nuanced language of legal literature. This could involve training on specialized datasets that include case law and legal precedence, as well as philosophical texts dealing with ethics.\n", + "\n", + "Moreover, the trend towards cross-domain applications — such as pathology vision-language analysis mentioned in the papers — suggests that GPT-4 could be tailored to handle multi-modal inputs more effectively. Here the interaction between visual data and textual information requires further exploration, particularly in the way that GPT-4 processes and generates language grounded in non-textual contexts.\n", + "\n", + "Lastly, the recurring theme of cost-effectiveness in model training opens up a broader discussion about sustainable AI practices. Finding ways to reduce the data, energy, and computational resources necessary for training and deploying models like GPT-4 isn't just a technical challenge, but also an environmental and economic one. This includes research into more efficient algorithms and architectures, as well as methodologies for leveraging smaller models to approximate the capabilities of more substantial systems.\n", + "\n", + "There are indeed a myriad of pathways for future research, all of which underscore the evolving nature of NLP and its entwinement with the continually advancing capabilities of models like GPT-4. 
Any further explorations or examinations on these topics from the group?\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[32m\n", + "Next speaker: MachineLearning_Expert\n", + "\u001b[0m\n", + "\u001b[33mMachineLearning_Expert\u001b[0m (to chat_manager):\n", + "\n", + "Considering the points made by the NLP_Expert and AIResearch_Expert, I'd like to contribute a take on how we might further the state-of-the-art in machine learning leveraging the capabilities of GPT-4.\n", + "\n", + "For one, the convergence of unsupervised, semi-supervised, and supervised learning methodologies might be beneficial for GPT-4's future iterations or similar models. The incorporation of unsupervised learning can help in better understanding context and meaning without substantial labeled datasets. This could improve GPT-4's potential in low-resource languages and niche applications where labeled data is scarce.\n", + "\n", + "Another realm of exploration could be few-shot and one-shot learning. As models become more capable, their ability to generalize from fewer examples is critical. Fine-tuning GPT-4's few-shot learning capabilities can have practical implications in personalized AI services where the model needs to adapt quickly to individual user needs and preferences.\n", + "\n", + "Additionally, given the recent trends in GPT-4 research, the incorporation of neuro-symbolic approaches may offer a promising avenue for enhancing reasoning capabilities. This hybrid approach combines neural networks' learning prowess with symbolic AI's rule-based reasoning - providing a pathway to improve GPT-4's causal reasoning, problem-solving, and comprehension of complex systems.\n", + "\n", + "Finally, as we progress toward AI models that comprehend and generate human-like text, the ethical dimension becomes paramount. It's crucial for future research to focus on models' accountability, interpretability, and fairness. By building robust mechanisms for transparency and control, we can ensure that advancements in GPT-4 and similar AI are developed responsibly and augment human abilities without unintended consequences.\n", + "\n", + "If the group agrees, these threads can weave together a forward-looking agenda for research in machine learning, focusing on resource efficiency, learning dexterity, cognitive depth, and ethical robustness. Are we in consensus to proceed on finalizing our analysis and concluding our discussion?\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[32m\n", + "Next speaker: AI_Research_Expert\n", + "\u001b[0m\n", + "\u001b[33mAI_Research_Expert\u001b[0m (to chat_manager):\n", + "\n", + "The interdisciplinary perspectives presented here create a comprehensive view of the potential for GPT-4 and its related research domains. We have touched upon cognitive enhancements in machine reasoning, representational learning, cross-domain applications, sustainable AI, few-shot learning, neuro-symbolic approaches, and ethical considerations in AI development.\n", + "\n", + "As an AI Research Expert, I second the synthesis of these insights and propose that our analysis has reached a natural conclusion with consensus on the future directions and implications of GPT-4's continuing evolution within the AI landscape.\n", + "\n", + "If there are no additional insights or questions from the group, I suggest we may consider our task complete. 
Shall we proceed to close our discussion?\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[32m\n", + "Next speaker: Computer_terminal\n", + "\u001b[0m\n", + "\u001b[33mComputer_terminal\u001b[0m (to chat_manager):\n", + "\n", + "There is no code from the last 1 message for me to execute. Group chat manager should let other participants to continue the conversation. If the group chat manager want to end the conversation, you should let other participant reply me only with \"TERMINATE\"\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[32m\n", + "Next speaker: AI_Research_Expert\n", + "\u001b[0m\n", + "\u001b[33mAI_Research_Expert\u001b[0m (to chat_manager):\n", + "\n", + "TERMINATE\n", + "\n", + "--------------------------------------------------------------------------------\n" + ] + } + ], + "source": [ + "start_task(execution_task=building_task, agent_list=agent_list)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Step 7 (Optional): clear all agents and prepare for the next task\n", + "\n", + "You can clear all agents generated in this task by the following code if your task is completed or the next task is largely different from the current task. If the agent's backbone is an open-source LLM, this process will also shut down the endpoint server. If necessary, you can use `recycle_endpoint=False` to retain the previous open-source LLMs' endpoint server." + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\u001b[33mAll agents have been cleared.\u001b[0m\n" + ] + } + ], + "source": [ + "builder.clear_all_agents(recycle_endpoint=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Save & load configs\n", + "\n", + "You can save all necessary information of the built group chat agents. 
Here is a case for those agents generated in the above task:" + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\u001b[32mBuilding config saved to ./save_config_8e0d96e24673563ecb572d92ed003d2a.json\u001b[0m\n" + ] + } + ], + "source": [ + "saved_path = builder.save()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.14" + }, + "orig_nbformat": 4, + "vscode": { + "interpreter": { + "hash": "e7370f93d1d0cde622a1f8e1c04877d8463912d04d973331ad4851f04de6915a" + } + } + }, + "nbformat": 4, + "nbformat_minor": 2 + } diff --git a/test/agentchat/contrib/test_agent_builder.py b/test/agentchat/contrib/test_agent_builder.py index e2e39e8ba43b..e0d8515c0fe0 100755 --- a/test/agentchat/contrib/test_agent_builder.py +++ b/test/agentchat/contrib/test_agent_builder.py @@ -3,15 +3,18 @@ import json import os import sys +from unittest.mock import MagicMock, patch import pytest +import autogen from autogen.agentchat.contrib.agent_builder import AgentBuilder sys.path.append(os.path.join(os.path.dirname(__file__), "..")) sys.path.append(os.path.join(os.path.dirname(__file__), "../..")) + from conftest import reason, skip_openai # noqa: E402 -from test_assistant_agent import KEY_LOC, OAI_CONFIG_LIST # noqa: E402 +from test_assistant_agent import KEY_LOC, OAI_CONFIG_LIST # noqa: E402 # noqa: E402 try: import chromadb @@ -22,6 +25,7 @@ skip = False here = os.path.abspath(os.path.dirname(__file__)) +llm_config = {"temperature": 0} def _config_check(config): @@ -37,10 +41,27 @@ def _config_check(config): assert agent_config.get("system_message", None) is not None -@pytest.mark.skipif( - skip_openai, - reason=reason, -) +# Function initializes a group chat with agents and starts a execution_task. +def start_task(execution_task: str, agent_list: list): + group_chat = autogen.GroupChat(agents=agent_list, messages=[], max_round=12) + manager = autogen.GroupChatManager( + groupchat=group_chat, + llm_config={"config_list": autogen.config_list_from_json(f"{KEY_LOC}/{OAI_CONFIG_LIST}"), **llm_config}, + ) + + agent_list[0].initiate_chat(manager, message=execution_task) + + +ask_ossinsight_mock = MagicMock() + + +# Function to test function calling +def ask_ossinsight(question: str) -> str: + ask_ossinsight_mock(question) + return "The repository microsoft/autogen has 123,456 stars on GitHub." 
+ + +@pytest.mark.skipif(skip_openai, reason=reason) def test_build(): builder = AgentBuilder( config_file_or_env=OAI_CONFIG_LIST, @@ -69,6 +90,99 @@ def test_build(): assert len(agent_config["agent_configs"]) <= builder.max_agents +@pytest.mark.skipif(skip_openai or skip, reason=reason + "OR dependency not installed") +def test_build_assistant_with_function_calling(): + list_of_functions = [ + { + "name": "ossinsight_data_api", + "description": "This is an API endpoint allowing users (analysts) to input question about GitHub in text format to retrieve the related and structured data.", + "function": ask_ossinsight, + } + ] + + builder = AgentBuilder( + config_file_or_env=OAI_CONFIG_LIST, config_file_location=KEY_LOC, builder_model="gpt-4", agent_model="gpt-4" + ) + building_task = "How many stars microsoft/autogen has on GitHub?" + + agent_list, agent_config = builder.build( + building_task=building_task, + default_llm_config={"temperature": 0}, + code_execution_config={ + "last_n_messages": 2, + "work_dir": f"{here}/test_agent_scripts", + "timeout": 60, + "use_docker": "python:3", + }, + list_of_functions=list_of_functions, + ) + + _config_check(agent_config) + + # check number of agents + assert len(agent_config["agent_configs"]) <= builder.max_agents + + # Mock the 'ask_ossinsight' function in the '_main_' module using a context manager. + with patch(f"{__name__}.ask_ossinsight") as mocked_function: + # Execute 'start_task' which should trigger 'ask_ossinsight' due to the given execution task. + start_task( + execution_task="How many stars microsoft/autogen has on GitHub?", + agent_list=agent_list, + ) + + # Verify that 'ask_ossinsight' was called exactly once during the task execution. + mocked_function.assert_called() + + +@pytest.mark.skipif( + skip_openai, + reason="requested to skip", +) +def test_build_gpt_assistant_with_function_calling(): + list_of_functions = [ + { + "name": "ossinsight_data_api", + "description": "This is an API endpoint allowing users (analysts) to input question about GitHub in text format to retrieve the related and structured data.", + "function": ask_ossinsight, + } + ] + + builder = AgentBuilder( + config_file_or_env=OAI_CONFIG_LIST, config_file_location=KEY_LOC, builder_model="gpt-4", agent_model="gpt-4" + ) + + building_task = "Determine number of stars of GitHub repositories" + + agent_list, agent_config = builder.build( + building_task=building_task, + default_llm_config={"temperature": 0}, + code_execution_config={ + "last_n_messages": 2, + "work_dir": f"{here}/test_agent_scripts", + "timeout": 60, + "use_docker": "python:3", + }, + list_of_functions=list_of_functions, + use_oai_assistant=True, + ) + + _config_check(agent_config) + + # check number of agents + assert len(agent_config["agent_configs"]) <= builder.max_agents + + # Mock the 'ask_ossinsight' function in the '_main_' module using a context manager. + with patch(f"{__name__}.ask_ossinsight") as mocked_function: + # Execute 'start_task' which should trigger 'ask_ossinsight' due to the given execution task. + start_task( + execution_task="How many stars microsoft/autogen has on GitHub?", + agent_list=agent_list, + ) + + # Verify that 'ask_ossinsight' was called exactly once during the task execution. 
+ mocked_function.assert_called() + + @pytest.mark.skipif( skip_openai or skip, reason=reason + "OR dependency not installed", @@ -122,10 +236,7 @@ def test_build_from_library(): assert len(agent_config["agent_configs"]) <= builder.max_agents -@pytest.mark.skipif( - skip_openai, - reason=reason, -) +@pytest.mark.skipif(skip_openai, reason=reason) def test_save(): builder = AgentBuilder( config_file_or_env=OAI_CONFIG_LIST, @@ -159,10 +270,7 @@ def test_save(): _config_check(saved_configs) -@pytest.mark.skipif( - skip_openai, - reason=reason, -) +@pytest.mark.skipif(skip_openai, reason=reason) def test_load(): builder = AgentBuilder( config_file_or_env=OAI_CONFIG_LIST, @@ -188,10 +296,7 @@ def test_load(): _config_check(loaded_agent_configs) -@pytest.mark.skipif( - skip_openai, - reason=reason, -) +@pytest.mark.skipif(skip_openai, reason=reason) def test_clear_agent(): builder = AgentBuilder( config_file_or_env=OAI_CONFIG_LIST, @@ -218,6 +323,8 @@ def test_clear_agent(): if __name__ == "__main__": test_build() + test_build_assistant_with_function_calling() + test_build_gpt_assistant_with_function_calling() test_build_from_library() test_save() test_load() From 8a66f03b41a061cf3f6261ea235d20727d8c7920 Mon Sep 17 00:00:00 2001 From: Victor Dibia Date: Fri, 11 Oct 2024 18:23:12 -0700 Subject: [PATCH 26/35] Update Docs to Point to 0.4 (#3764) * update docs, add link to v0.4 * update homepage text * Update website/src/pages/index.js --------- Co-authored-by: Eric Zhu --- website/src/pages/index.js | 50 ++++++++++++++++++++---------- website/src/pages/index.module.css | 14 +++++++++ 2 files changed, 48 insertions(+), 16 deletions(-) diff --git a/website/src/pages/index.js b/website/src/pages/index.js index 203c72c02223..d3e8d0f5627e 100644 --- a/website/src/pages/index.js +++ b/website/src/pages/index.js @@ -1,24 +1,41 @@ -import React from 'react'; -import clsx from 'clsx'; -import Layout from '@theme/Layout'; -import Link from '@docusaurus/Link'; -import useDocusaurusContext from '@docusaurus/useDocusaurusContext'; -import styles from './index.module.css'; -import HomepageFeatures from '../components/HomepageFeatures'; +import React from "react"; +import clsx from "clsx"; +import Layout from "@theme/Layout"; +import Link from "@docusaurus/Link"; +import useDocusaurusContext from "@docusaurus/useDocusaurusContext"; +import styles from "./index.module.css"; +import HomepageFeatures from "../components/HomepageFeatures"; function HomepageHeader() { - const {siteConfig} = useDocusaurusContext(); + const { siteConfig } = useDocusaurusContext(); return ( -
+

{siteConfig.title}

{siteConfig.tagline}

- - Getting Started - 3min ⏱️ - +
+ + Get Started + +

+ Current stable version of AutoGen +

+
+
+ + Preview v0.4 + +

+ A new event driven architecture for AutoGen +

+
@@ -26,11 +43,12 @@ function HomepageHeader() { } export default function Home() { - const {siteConfig} = useDocusaurusContext(); + const { siteConfig } = useDocusaurusContext(); return ( + description="Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework" + >
diff --git a/website/src/pages/index.module.css b/website/src/pages/index.module.css index 5c0d4f6e20c4..dc827fa4c423 100644 --- a/website/src/pages/index.module.css +++ b/website/src/pages/index.module.css @@ -22,4 +22,18 @@ display: flex; align-items: center; justify-content: center; + gap: 10px; +} + +.buttonWrapper { + display: flex; + flex-direction: column; + align-items: center; +} + +.buttonTagline { + font-size: 0.8rem; + margin-top: 5px; + text-align: center; + max-width: 200px; } From 0c4ae702d71ab3ce66d110c0e24c643680157247 Mon Sep 17 00:00:00 2001 From: William Espegren <131612909+WilliamEspegren@users.noreply.github.com> Date: Sat, 12 Oct 2024 03:27:15 +0200 Subject: [PATCH 27/35] Notebook on web crawling (#2720) * Web crawling with Spider * reset run count * spell correction * crawl agent * reset execution counters * correct return types * metadat for website * format * Update notebook/agentchat_webcrawling_with_spider.ipynb Co-authored-by: Eric Zhu * Update website/docs/Examples.md Co-authored-by: Eric Zhu * Update agentchat_webcrawling_with_spider.ipynb * Update agentchat_webcrawling_with_spider.ipynb Co-authored-by: Eric Zhu --------- Co-authored-by: Chi Wang Co-authored-by: Qingyun Wu Co-authored-by: Eric Zhu --- .../agentchat_webcrawling_with_spider.ipynb | 426 ++++++++++++++++++ website/docs/Examples.md | 1 + 2 files changed, 427 insertions(+) create mode 100644 notebook/agentchat_webcrawling_with_spider.ipynb diff --git a/notebook/agentchat_webcrawling_with_spider.ipynb b/notebook/agentchat_webcrawling_with_spider.ipynb new file mode 100644 index 000000000000..45d270b37e5e --- /dev/null +++ b/notebook/agentchat_webcrawling_with_spider.ipynb @@ -0,0 +1,426 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Web Scraping using Spider API\n", + "\n", + "This notebook shows how to use the open \n", + "source [Spider](https://spider.cloud/) web crawler together with AutoGen agents." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "First we need to install the Spider SDK and the AutoGen library." + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [], + "source": [ + "! pip install -qqq pyautogen spider-client" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Setting up the LLM configuration and the Spider API key is also required." + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "\n", + "config_list = [\n", + " {\"model\": \"gpt-4o\", \"api_key\": os.getenv(\"OPENAI_API_KEY\")},\n", + "]\n", + "\n", + "spider_api_key = os.getenv(\"SPIDER_API_KEY\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's define the tool for scraping and crawling data from any website with Spider.\n", + "Read more about tool use in this [tutorial chapter](/docs/tutorial/tool-use)." + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[{'content': 'Spider - The Fastest Web Crawling Service[Spider v1 Logo Spider ](/)[Pricing](/credits/new)[GitHub](https://github.com/spider-rs/spider) [Twitter](https://twitter.com/spider_rust) Toggle ThemeSign InRegisterTo help you get started with Spider, we’ll give you $200 in credits when you spend $100. 
[Get Credits](/credits/new)LangChain integration [now available](https://python.langchain.com/docs/integrations/document_loaders/spider)The World\\'s Fastest and Cheapest Crawler API==========View Demo* Basic* StreamingExample requestPythonCopy```import requests, osheaders = { \\'Authorization\\': os.environ[\"SPIDER_API_KEY\"], \\'Content-Type\\': \\'application/json\\',}json_data = {\"limit\":50,\"url\":\"http://www.example.com\"}response = requests.post(\\'https://api.spider.cloud/crawl\\', headers=headers, json=json_data)print(response.json())```Example ResponseUnmatched Speed----------### 2.5secs ###To crawl 200 pages### 100-500x ###Faster than alternatives### 500x ###Cheaper than traditional scraping services Benchmarks displaying performance between Spider Cloud, Firecrawl, and Apify.Example used tailwindcss.com - 04/16/2024[See framework benchmarks ](https://github.com/spider-rs/spider/blob/main/benches/BENCHMARKS.md)Foundations for Crawling Effectively----------### Leading in performance ###Spider is written in Rust and runs in full concurrency to achieve crawling dozens of pages in secs.### Optimal response format ###Get clean and formatted markdown, HTML, or text content for fine-tuning or training AI models.### Caching ###Further boost speed by caching repeated web page crawls.### Smart Mode ###Spider dynamically switches to Headless Chrome when it needs to.Beta### Scrape with AI ###Do custom browser scripting and data extraction using the latest AI models.### Best crawler for LLMs ###Don\\'t let crawling and scraping be the highest latency in your LLM & AI agent stack.### Scrape with no headaches ###* Proxy rotations* Agent headers* Avoid anti-bot detections* Headless chrome* Markdown LLM Responses### The Fastest Web Crawler ###* Powered by [spider-rs](https://github.com/spider-rs/spider)* Do 20,000 pages in seconds* Full concurrency* Powerful and simple API* 5,000 requests per minute### Do more with AI ###* Custom browser scripting* Advanced data extraction* Data pipelines* Perfect for LLM and AI Agents* Accurate website labelingSee what\\'s being said----------[\"Merrick](https://twitter.com/iammerrick/status/1787873425446572462)[Merrick Christensen](https://twitter.com/iammerrick/status/1787873425446572462)[@iammerrick ](https://twitter.com/iammerrick/status/1787873425446572462)· [Follow](https://twitter.com/intent/follow?screen_name=iammerrick)[](https://twitter.com/iammerrick/status/1787873425446572462)Rust based crawler Spider is next level for crawling & scraping sites. So fast. Their cloud offering is also so easy to use. Good stuff. 
[ github.com/spider-rs/spid… ](https://github.com/spider-rs/spider)[ 3:53 PM · May 7, 2024 ](https://twitter.com/iammerrick/status/1787873425446572462) [](https://help.twitter.com/en/twitter-for-websites-ads-info-and-privacy)[12 ](https://twitter.com/intent/like?tweet_id=1787873425446572462) [Reply ](https://twitter.com/intent/tweet?in_reply_to=1787873425446572462)[ Read more on Twitter ](https://twitter.com/iammerrick/status/1787873425446572462)[\"William](https://twitter.com/WilliamEspegren/status/1789419820821184764)[William Espegren](https://twitter.com/WilliamEspegren/status/1789419820821184764)[@WilliamEspegren ](https://twitter.com/WilliamEspegren/status/1789419820821184764)· [Follow](https://twitter.com/intent/follow?screen_name=WilliamEspegren)[](https://twitter.com/WilliamEspegren/status/1789419820821184764)Web crawler built in rust, currently the nr1 performance in the world with crazy resource management Aaaaaaand they have a cloud offer, that’s wayyyy cheaper than any competitor Name a reason for me to use anything else? [ github.com/spider-rs/spid… ](https://github.com/spider-rs/spider)[ 10:18 PM · May 11, 2024 ](https://twitter.com/WilliamEspegren/status/1789419820821184764) [](https://help.twitter.com/en/twitter-for-websites-ads-info-and-privacy)[2 ](https://twitter.com/intent/like?tweet_id=1789419820821184764) [Reply ](https://twitter.com/intent/tweet?in_reply_to=1789419820821184764)[ Read 1 reply ](https://twitter.com/WilliamEspegren/status/1789419820821184764)[\"Troy](https://twitter.com/Troyusrex/status/1791497607925088307)[Troy Lowry](https://twitter.com/Troyusrex/status/1791497607925088307)[@Troyusrex ](https://twitter.com/Troyusrex/status/1791497607925088307)· [Follow](https://twitter.com/intent/follow?screen_name=Troyusrex)[](https://twitter.com/Troyusrex/status/1791497607925088307)[ @spider\\\\_rust ](https://twitter.com/spider_rust) First, the good: Spider has enabled me to speed up my scraping 20X and with a bit higher quality than I was getting before. I am having a few issues however. First, the documentation link doesn\\'t work ([ spider.cloud/guides/(/docs/… ](https://spider.cloud/guides/(/docs/api)))I\\'ve figured out how to get it to work…[ 3:54 PM · May 17, 2024 ](https://twitter.com/Troyusrex/status/1791497607925088307) [](https://help.twitter.com/en/twitter-for-websites-ads-info-and-privacy)[1 ](https://twitter.com/intent/like?tweet_id=1791497607925088307) [Reply ](https://twitter.com/intent/tweet?in_reply_to=1791497607925088307)[ Read 2 replies ](https://twitter.com/Troyusrex/status/1791497607925088307)FAQ----------Frequently asked questions about Spider
What is Spider?----------Spider is a leading web crawling tool designed for speed and cost-effectiveness, supporting various data formats including LLM-ready markdown.
Why is my website not crawling?----------Your crawl may fail if it requires JavaScript rendering. Try setting your request to \\'chrome\\' to solve this issue.
Can you crawl all pages?----------Yes, Spider accurately crawls all necessary content without needing a sitemap.
What formats can Spider convert web data into?----------Spider outputs HTML, raw, text, and various markdown formats. It supports JSON, JSONL, CSV, and XML for API responses.
Is Spider suitable for large scraping projects?----------Absolutely, Spider is ideal for large-scale data collection and offers a cost-effective dashboard for data management.
How can I try Spider?----------Purchase credits for our cloud system or test the Open Source Spider engine to explore its capabilities.
Does it respect robots.txt?----------Yes, compliance with robots.txt is default, but you can disable this if necessary.
[API](/docs/api) [Pricing](/credits/new) [Guides](/guides) [About](/about) [Docs](https://docs.rs/spider/latest/spider/) [Privacy](/privacy) [Terms](/eula) [FAQ](/faq)© 2024 Spider from A11yWatch[GitHubGithub](https://github.com/spider-rs/spider) [X - Twitter ](https://twitter.com/spider_rust)', 'error': None, 'status': 200, 'url': 'https://spider.cloud'}]\n" + ] + } + ], + "source": [ + "from typing import Any, Dict, List\n", + "\n", + "from spider import Spider\n", + "from typing_extensions import Annotated\n", + "\n", + "\n", + "def scrape_page(\n", + " url: Annotated[str, \"The URL of the web page to scrape\"],\n", + " params: Annotated[dict, \"Dictionary of additional params.\"] = None,\n", + ") -> Annotated[Dict[str, Any], \"Scraped content\"]:\n", + " # Initialize the Spider client with your API key, if no api key is specified it looks for SPIDER_API_KEY in your environment variables\n", + " client = Spider(spider_api_key)\n", + "\n", + " if params is None:\n", + " params = {\"return_format\": \"markdown\"}\n", + "\n", + " scraped_data = client.scrape_url(url, params)\n", + " return scraped_data[0]\n", + "\n", + "\n", + "def crawl_page(\n", + " url: Annotated[str, \"The url of the domain to be crawled\"],\n", + " params: Annotated[dict, \"Dictionary of additional params.\"] = None,\n", + ") -> Annotated[List[Dict[str, Any]], \"Scraped content\"]:\n", + " # Initialize the Spider client with your API key, if no api key is specified it looks for SPIDER_API_KEY in your environment variables\n", + " client = Spider(spider_api_key)\n", + "\n", + " if params is None:\n", + " params = {\"return_format\": \"markdown\"}\n", + "\n", + " crawled_data = client.crawl_url(url, params)\n", + " return crawled_data" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Create the agents and register the tool." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [], + "source": [ + "from autogen import ConversableAgent, register_function\n", + "\n", + "# Create web scraper agent.\n", + "scraper_agent = ConversableAgent(\n", + " \"WebScraper\",\n", + " llm_config={\"config_list\": config_list},\n", + " system_message=\"You are a web scraper and you can scrape any web page to retrieve its contents.\"\n", + " \"Returns 'TERMINATE' when the scraping is done.\",\n", + ")\n", + "\n", + "# Create web crawler agent.\n", + "crawler_agent = ConversableAgent(\n", + " \"WebCrawler\",\n", + " llm_config={\"config_list\": config_list},\n", + " system_message=\"You are a web crawler and you can crawl any page with deeper crawling following subpages.\"\n", + " \"Returns 'TERMINATE' when the scraping is done.\",\n", + ")\n", + "\n", + "# Create user proxy agent.\n", + "user_proxy_agent = ConversableAgent(\n", + " \"UserProxy\",\n", + " llm_config=False, # No LLM for this agent.\n", + " human_input_mode=\"NEVER\",\n", + " code_execution_config=False, # No code execution for this agent.\n", + " is_termination_msg=lambda x: x.get(\"content\", \"\") is not None and \"terminate\" in x[\"content\"].lower(),\n", + " default_auto_reply=\"Please continue if not finished, otherwise return 'TERMINATE'.\",\n", + ")\n", + "\n", + "# Register the functions with the agents.\n", + "register_function(\n", + " scrape_page,\n", + " caller=scraper_agent,\n", + " executor=user_proxy_agent,\n", + " name=\"scrape_page\",\n", + " description=\"Scrape a web page and return the content.\",\n", + ")\n", + "\n", + "register_function(\n", + " crawl_page,\n", + " caller=crawler_agent,\n", + " executor=user_proxy_agent,\n", + " name=\"crawl_page\",\n", + " description=\"Crawl an entire domain, following subpages and return the content.\",\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Start the conversation for scraping web data. We used the\n", + "`reflection_with_llm` option for summary method\n", + "to perform the formatting of the output into a desired format.\n", + "The summary method is called after the conversation is completed\n", + "given the complete history of the conversation." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\u001b[33mUserProxy\u001b[0m (to WebScraper):\n", + "\n", + "Can you scrape william-espegren.com for me?\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mWebScraper\u001b[0m (to UserProxy):\n", + "\n", + "\u001b[32m***** Suggested tool call (call_qCNYeQCfIPZkUCKejQmm5EhC): scrape_page *****\u001b[0m\n", + "Arguments: \n", + "{\"url\":\"https://www.william-espegren.com\"}\n", + "\u001b[32m****************************************************************************\u001b[0m\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[35m\n", + ">>>>>>>> EXECUTING FUNCTION scrape_page...\u001b[0m\n", + "\u001b[33mUserProxy\u001b[0m (to WebScraper):\n", + "\n", + "\u001b[33mUserProxy\u001b[0m (to WebScraper):\n", + "\n", + "\u001b[32m***** Response from calling tool (call_qCNYeQCfIPZkUCKejQmm5EhC) *****\u001b[0m\n", + "[{\"content\": \"William Espegren - Portfoliokeep scrollingMADE WITHCSS, JSMADE BYUppsalaWilliam EspegrenWith \\u00b7LoveOpen For Projects[CONTACT ME](https://www.linkedin.com/in/william-espegren/)[Instagram](https://www.instagram.com/williamespegren/)[LinkedIn](https://www.linkedin.com/in/william-espegren/)[Twitter](https://twitter.com/WilliamEspegren)[team-collaboration/version-control/github Created with Sketch.Github](https://github.com/WilliamEspegren)\", \"error\": null, \"status\": 200, \"url\": \"https://www.william-espegren.com\"}]\n", + "\u001b[32m**********************************************************************\u001b[0m\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mWebScraper\u001b[0m (to UserProxy):\n", + "\n", + "I successfully scraped the website \"william-espegren.com\". 
Here is the content retrieved:\n", + "\n", + "```\n", + "William Espegren - Portfolio\n", + "\n", + "keep scrolling\n", + "\n", + "MADE WITH\n", + "CSS, JS\n", + "\n", + "MADE BY\n", + "Uppsala\n", + "\n", + "William Espegren\n", + "With Love\n", + "\n", + "Open For Projects\n", + "\n", + "[CONTACT ME](https://www.linkedin.com/in/william-espegren/)\n", + "[Instagram](https://www.instagram.com/williamespegren/)\n", + "[LinkedIn](https://www.linkedin.com/in/william-espegren/)\n", + "[Twitter](https://twitter.com/WilliamEspegren)\n", + "[Github](https://github.com/WilliamEspegren)\n", + "```\n", + "\n", + "Is there anything specific you would like to do with this information?\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[33mUserProxy\u001b[0m (to WebScraper):\n", + "\n", + "Please continue if not finished, otherwise return 'TERMINATE'.\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mWebScraper\u001b[0m (to UserProxy):\n", + "\n", + "TERMINATE\n", + "\n", + "--------------------------------------------------------------------------------\n" + ] + } + ], + "source": [ + "# Scrape page\n", + "scraped_chat_result = user_proxy_agent.initiate_chat(\n", + " scraper_agent,\n", + " message=\"Can you scrape william-espegren.com for me?\",\n", + " summary_method=\"reflection_with_llm\",\n", + " summary_args={\"summary_prompt\": \"\"\"Summarize the scraped content\"\"\"},\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\u001b[33mUserProxy\u001b[0m (to WebCrawler):\n", + "\n", + "Can you crawl william-espegren.com for me, I want the whole domains information?\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mWebCrawler\u001b[0m (to UserProxy):\n", + "\n", + "\u001b[32m***** Suggested tool call (call_0FkTtsxBtA0SbChm1PX085Vk): crawl_page *****\u001b[0m\n", + "Arguments: \n", + "{\"url\":\"http://www.william-espegren.com\"}\n", + "\u001b[32m***************************************************************************\u001b[0m\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[35m\n", + ">>>>>>>> EXECUTING FUNCTION crawl_page...\u001b[0m\n", + "\u001b[33mUserProxy\u001b[0m (to WebCrawler):\n", + "\n", + "\u001b[33mUserProxy\u001b[0m (to WebCrawler):\n", + "\n", + "\u001b[32m***** Response from calling tool (call_0FkTtsxBtA0SbChm1PX085Vk) *****\u001b[0m\n", + "[{\"content\": \"William Espegren - Portfoliokeep scrollingMADE WITHCSS, JSMADE BYUppsalaWilliam EspegrenWith \\u00b7LoveOpen For Projects[CONTACT ME](https://www.linkedin.com/in/william-espegren/)[Instagram](https://www.instagram.com/williamespegren/)[LinkedIn](https://www.linkedin.com/in/william-espegren/)[Twitter](https://twitter.com/WilliamEspegren)[team-collaboration/version-control/github Created with Sketch.Github](https://github.com/WilliamEspegren)\", \"error\": null, \"status\": 200, \"url\": \"http://www.william-espegren.com\"}]\n", + "\u001b[32m**********************************************************************\u001b[0m\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[31m\n", + ">>>>>>>> 
USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mWebCrawler\u001b[0m (to UserProxy):\n", + "\n", + "The crawl of [william-espegren.com](http://www.william-espegren.com) has been completed. Here is the gathered content:\n", + "\n", + "---\n", + "\n", + "**William Espegren - Portfolio**\n", + "\n", + "Keep scrolling\n", + "\n", + "**MADE WITH:** CSS, JS\n", + "\n", + "**MADE BY:** Uppsala\n", + "\n", + "**William Espegren**\n", + "\n", + "**With Love**\n", + "\n", + "**Open For Projects**\n", + "\n", + "**[CONTACT ME](https://www.linkedin.com/in/william-espegren/)**\n", + "\n", + "- [Instagram](https://www.instagram.com/williamespegren/)\n", + "- [LinkedIn](https://www.linkedin.com/in/william-espegren/)\n", + "- [Twitter](https://twitter.com/WilliamEspegren)\n", + "- [Github](https://github.com/WilliamEspegren)\n", + "\n", + "---\n", + "\n", + "If you need further information or details from any specific section, please let me know!\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[33mUserProxy\u001b[0m (to WebCrawler):\n", + "\n", + "Please continue if not finished, otherwise return 'TERMINATE'.\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mWebCrawler\u001b[0m (to UserProxy):\n", + "\n", + "TERMINATE\n", + "\n", + "--------------------------------------------------------------------------------\n" + ] + } + ], + "source": [ + "# Crawl page\n", + "crawled_chat_result = user_proxy_agent.initiate_chat(\n", + " crawler_agent,\n", + " message=\"Can you crawl william-espegren.com for me, I want the whole domains information?\",\n", + " summary_method=\"reflection_with_llm\",\n", + " summary_args={\"summary_prompt\": \"\"\"Summarize the crawled content\"\"\"},\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The output is stored in the summary." + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "The website belongs to William Espegren, who is based in Uppsala and possesses skills in CSS and JavaScript. He is open to new projects. 
You can contact him through the following links:\n", + "\n", + "- [LinkedIn](https://www.linkedin.com/in/william-espegren/)\n", + "- [Instagram](https://www.instagram.com/williamespegren/)\n", + "- [Twitter](https://twitter.com/WilliamEspegren)\n", + "- [GitHub](https://github.com/WilliamEspegren)\n", + "\n", + "Feel free to reach out to him for project collaborations.\n" + ] + } + ], + "source": [ + "print(scraped_chat_result.summary)\n", + "# print(crawled_chat_result.summary) # We show one for cleaner output" + ] + } + ], + "metadata": { + "front_matter": { + "description": "Scraping/Crawling web pages and summarizing the content using agents.", + "tags": [ + "web scraping", + "spider", + "tool use" + ], + "title": "Web Scraper & Crawler Agent using Spider" + }, + "kernelspec": { + "display_name": "autogen", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.12" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/website/docs/Examples.md b/website/docs/Examples.md index 550c2604659d..df06f97fb124 100644 --- a/website/docs/Examples.md +++ b/website/docs/Examples.md @@ -55,6 +55,7 @@ Links to notebook examples: - Browse the Web with Agents - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_surfer.ipynb) - **SQL**: Natural Language Text to SQL Query using the [Spider](https://yale-lily.github.io/spider) Text-to-SQL Benchmark - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_sql_spider.ipynb) - **Web Scraping**: Web Scraping with Apify - [View Notebook](/docs/notebooks/agentchat_webscraping_with_apify) +- **Web Crawling**: Crawl entire domain with Spider API - [View Notebook](/docs/notebooks/agentchat_webcrawling_with_spider) - **Write a software app, task by task, with specially designed functions.** - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_function_call_code_writing.ipynb). ### Human Involvement From c179efbb13b1040df92f9b4a8c997963dc1596df Mon Sep 17 00:00:00 2001 From: Eric Zhu Date: Sat, 12 Oct 2024 14:52:00 -0700 Subject: [PATCH 28/35] update homepage link to 0.4 doc (#3772) --- website/src/pages/index.js | 90 +++++++++++++++++++------------------- 1 file changed, 45 insertions(+), 45 deletions(-) diff --git a/website/src/pages/index.js b/website/src/pages/index.js index d3e8d0f5627e..9c1d4ddb3fea 100644 --- a/website/src/pages/index.js +++ b/website/src/pages/index.js @@ -7,52 +7,52 @@ import styles from "./index.module.css"; import HomepageFeatures from "../components/HomepageFeatures"; function HomepageHeader() { - const { siteConfig } = useDocusaurusContext(); - return ( -
-
-

{siteConfig.title}

-

{siteConfig.tagline}

-
-
- - Get Started - -

- Current stable version of AutoGen -

-
-
- - Preview v0.4 - -

- A new event driven architecture for AutoGen -

-
-
-
-
- ); + const { siteConfig } = useDocusaurusContext(); + return ( +
+
+

{siteConfig.title}

+

{siteConfig.tagline}

+
+
+ + Get Started + +

+ Current stable version of AutoGen +

+
+
+ + Preview v0.4 + +

+ A new event driven architecture for AutoGen +

+
+
+
+
+ ); } export default function Home() { - const { siteConfig } = useDocusaurusContext(); - return ( - - -
- -
-
- ); + const { siteConfig } = useDocusaurusContext(); + return ( + + +
+ +
+
+ ); } From 0e37b34f7f63ef78fd5d75f8fdd4b33a2652eb3d Mon Sep 17 00:00:00 2001 From: Jack Gerrits Date: Mon, 14 Oct 2024 09:55:53 -0400 Subject: [PATCH 29/35] Remove path filter for website testing in 0.2 (#3782) --- .github/workflows/deploy-website.yml | 9 --------- 1 file changed, 9 deletions(-) diff --git a/.github/workflows/deploy-website.yml b/.github/workflows/deploy-website.yml index e576ca1b4db9..4770c8d0d0dc 100644 --- a/.github/workflows/deploy-website.yml +++ b/.github/workflows/deploy-website.yml @@ -3,23 +3,14 @@ name: docs on: pull_request: branches: ["0.2"] - paths: - - "autogen/*" - - "website/*" - - ".github/workflows/deploy-website.yml" push: branches: ["0.2"] - paths: - - "autogen/*" - - "website/*" - - ".github/workflows/deploy-website.yml" workflow_dispatch: permissions: id-token: write pages: write jobs: checks: - if: github.event_name != 'push' runs-on: ubuntu-latest defaults: run: From 874985ef974a872ef3d3c271556a90811994a51d Mon Sep 17 00:00:00 2001 From: gagb Date: Mon, 14 Oct 2024 07:08:46 -0700 Subject: [PATCH 30/35] Fix broken ing URL in README (#3776) Co-authored-by: Jack Gerrits --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 0599300c5397..86118d4c46a6 100644 --- a/README.md +++ b/README.md @@ -3,7 +3,7 @@
-AutoGen Logo +AutoGen Logo ![Python Version](https://img.shields.io/badge/3.8%20%7C%203.9%20%7C%203.10%20%7C%203.11%20%7C%203.12-blue) [![PyPI - Version](https://img.shields.io/pypi/v/autogen-agentchat)](https://pypi.org/project/autogen-agentchat/) [![NuGet version](https://badge.fury.io/nu/AutoGen.Core.svg)](https://badge.fury.io/nu/AutoGen.Core) From 28345307baf09684de2a3d7d9a57b40dcda38bd9 Mon Sep 17 00:00:00 2001 From: Jack Gerrits Date: Mon, 14 Oct 2024 10:17:28 -0400 Subject: [PATCH 31/35] Clarify stable package name and version on home page (#3775) --- website/src/pages/index.js | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/src/pages/index.js b/website/src/pages/index.js index 9c1d4ddb3fea..887a4ca2a977 100644 --- a/website/src/pages/index.js +++ b/website/src/pages/index.js @@ -22,7 +22,7 @@ function HomepageHeader() { Get Started

- Current stable version of AutoGen + Current stable version of AutoGen (autogen-agentchat~=0.2)

From 7e297149f86a6a60953a2e3e713e2299b6e4514e Mon Sep 17 00:00:00 2001 From: Victor Dibia Date: Mon, 14 Oct 2024 16:00:53 -0700 Subject: [PATCH 32/35] align tagline buttons (#3788) --- website/src/pages/index.js | 93 +++++++++++++++--------------- website/src/pages/index.module.css | 9 ++- 2 files changed, 56 insertions(+), 46 deletions(-) diff --git a/website/src/pages/index.js b/website/src/pages/index.js index 887a4ca2a977..99d423e03705 100644 --- a/website/src/pages/index.js +++ b/website/src/pages/index.js @@ -7,52 +7,55 @@ import styles from "./index.module.css"; import HomepageFeatures from "../components/HomepageFeatures"; function HomepageHeader() { - const { siteConfig } = useDocusaurusContext(); - return ( -
-
-

{siteConfig.title}

-

{siteConfig.tagline}

-
-
- - Get Started - -

- Current stable version of AutoGen (autogen-agentchat~=0.2) -

-
-
- - Preview v0.4 - -

- A new event driven architecture for AutoGen -

-
-
-
-
- ); + const { siteConfig } = useDocusaurusContext(); + return ( +
+
+

{siteConfig.title}

+

{siteConfig.tagline}

+
+
+ + Get Started + +

+ Current stable version of AutoGen (autogen-agentchat~=0.2) +

+
+
+ + Preview v0.4 + +

+ A new event driven, asynchronous architecture for AutoGen +

+
+
+
+
+ ); } export default function Home() { - const { siteConfig } = useDocusaurusContext(); - return ( - - -
- -
-
- ); + const { siteConfig } = useDocusaurusContext(); + return ( + + +
+ +
+
+ ); } diff --git a/website/src/pages/index.module.css b/website/src/pages/index.module.css index dc827fa4c423..883ab870aaea 100644 --- a/website/src/pages/index.module.css +++ b/website/src/pages/index.module.css @@ -29,11 +29,18 @@ display: flex; flex-direction: column; align-items: center; + width: 200px; +} + +.buttonLink { + width: 100%; } .buttonTagline { font-size: 0.8rem; margin-top: 5px; text-align: center; - max-width: 200px; + height: 100px; + overflow-y: auto; + padding: 5px; } From 02977ee250a649ad85bec13a729b672c07360ed5 Mon Sep 17 00:00:00 2001 From: kiyoung Date: Thu, 17 Oct 2024 04:22:50 +0900 Subject: [PATCH 33/35] K8s code executor (#3419) * remove coding directory from gitignore * add k8s PodCommandLineCodeExecutor * add error handlings and custom pod spec parameter * change parameter name to kube_config_file * add param container_name * add test case for PodCommandLineCodeExecutor * add test guide * draft for docs notebook * test code fix indent * add document * add license info * revise documentation * modify document: install autogen-agentchat * apply pre-commit * revert change to gitignore * error handling: move import block into try block --------- Co-authored-by: Ryan Sweet Co-authored-by: Jack Gerrits --- LICENSE-CODE-KUBERNETES | 201 +++++ autogen/coding/kubernetes/__init__.py | 5 + .../pod_commandline_code_executor.py | 323 ++++++++ setup.py | 1 + ...st_kubernetes.commandline_code_executor.md | 44 + ...st_kubernetes_commandline_code_executor.py | 203 +++++ ...rnetes-pod-commandline-code-executor.ipynb | 773 ++++++++++++++++++ 7 files changed, 1550 insertions(+) create mode 100644 LICENSE-CODE-KUBERNETES create mode 100644 autogen/coding/kubernetes/__init__.py create mode 100644 autogen/coding/kubernetes/pod_commandline_code_executor.py create mode 100644 test/coding/test_kubernetes.commandline_code_executor.md create mode 100644 test/coding/test_kubernetes_commandline_code_executor.py create mode 100644 website/docs/topics/code-execution/kubernetes-pod-commandline-code-executor.ipynb diff --git a/LICENSE-CODE-KUBERNETES b/LICENSE-CODE-KUBERNETES new file mode 100644 index 000000000000..45cadb395ec4 --- /dev/null +++ b/LICENSE-CODE-KUBERNETES @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. 
+ + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright 2014 The Kubernetes Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
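The remaining files in this commit add the executor package, an optional `kubernetes` extra in `setup.py`, a test guide, and tests. As a quick orientation, here is a minimal usage sketch of the new executor; it assumes a reachable cluster, a kubeconfig that `kubernetes.config.load_config()` can discover (for example via `KUBECONFIG`), and the `default` namespace, and it mirrors the tests added later in this commit rather than defining any new API:

```python
from autogen.coding.base import CodeBlock
from autogen.coding.kubernetes import PodCommandLineCodeExecutor

# Launch a throwaway pod (default image python:3-slim), run one code block in
# it, and delete the pod again when the context manager exits.
with PodCommandLineCodeExecutor(namespace="default", timeout=60) as executor:
    result = executor.execute_code_blocks(
        [CodeBlock(code="print('Hello from the pod')", language="python")]
    )
    print(result.exit_code, result.output)
```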
diff --git a/autogen/coding/kubernetes/__init__.py b/autogen/coding/kubernetes/__init__.py new file mode 100644 index 000000000000..3129ec86bf3e --- /dev/null +++ b/autogen/coding/kubernetes/__init__.py @@ -0,0 +1,5 @@ +from .pod_commandline_code_executor import PodCommandLineCodeExecutor + +__all__ = [ + "PodCommandLineCodeExecutor", +] diff --git a/autogen/coding/kubernetes/pod_commandline_code_executor.py b/autogen/coding/kubernetes/pod_commandline_code_executor.py new file mode 100644 index 000000000000..a232a3922717 --- /dev/null +++ b/autogen/coding/kubernetes/pod_commandline_code_executor.py @@ -0,0 +1,323 @@ +from __future__ import annotations + +import atexit +import importlib +import sys +import textwrap +import uuid +from hashlib import md5 +from pathlib import Path +from time import sleep +from types import TracebackType +from typing import Any, ClassVar, Dict, List, Optional, Type, Union + +client = importlib.import_module("kubernetes.client") +config = importlib.import_module("kubernetes.config") +ApiException = importlib.import_module("kubernetes.client.rest").ApiException +stream = importlib.import_module("kubernetes.stream").stream + +from ...code_utils import TIMEOUT_MSG, _cmd +from ..base import CodeBlock, CodeExecutor, CodeExtractor, CommandLineCodeResult +from ..markdown_code_extractor import MarkdownCodeExtractor +from ..utils import _get_file_name_from_content, silence_pip + +if sys.version_info >= (3, 11): + from typing import Self +else: + from typing_extensions import Self + + +class PodCommandLineCodeExecutor(CodeExecutor): + DEFAULT_EXECUTION_POLICY: ClassVar[Dict[str, bool]] = { + "bash": True, + "shell": True, + "sh": True, + "pwsh": False, + "powershell": False, + "ps1": False, + "python": True, + "javascript": False, + "html": False, + "css": False, + } + LANGUAGE_ALIASES: ClassVar[Dict[str, str]] = { + "py": "python", + "js": "javascript", + } + LANGUAGE_FILE_EXTENSION: ClassVar[Dict[str, str]] = { + "python": "py", + "javascript": "js", + "bash": "sh", + "shell": "sh", + "sh": "sh", + } + + def __init__( + self, + image: str = "python:3-slim", + pod_name: Optional[str] = None, + namespace: Optional[str] = None, + pod_spec: Optional[client.V1Pod] = None, # type: ignore + container_name: Optional[str] = "autogen-code-exec", + timeout: int = 60, + work_dir: Union[Path, str] = Path("/workspace"), + kube_config_file: Optional[str] = None, + stop_container: bool = True, + execution_policies: Optional[Dict[str, bool]] = None, + ): + """(Experimental) A code executor class that executes code through + a command line environment in a kubernetes pod. + + The executor first saves each code block in a file in the working + directory, and then executes the code file in the container. + The executor executes the code blocks in the order they are received. + Currently, the executor only supports Python and shell scripts. + For Python code, use the language "python" for the code block. + For shell scripts, use the language "bash", "shell", or "sh" for the code + block. + + Args: + image (_type_, optional): Docker image to use for code execution. + Defaults to "python:3-slim". + pod_name (Optional[str], optional): Name of the kubernetes pod + which is created. If None, will autogenerate a name. Defaults to None. + namespace (Optional[str], optional): Namespace of kubernetes pod + which is created. If None, will use current namespace of this instance + pod_spec (Optional[client.V1Pod], optional): Specification of kubernetes pod. 
+ custom pod spec can be provided with this param. + if pod_spec is provided, params above(image, pod_name, namespace) are neglected. + container_name (Optional[str], optional): Name of the container where code block will be + executed. if pod_spec param is provided, container_name must be provided also. + timeout (int, optional): The timeout for code execution. Defaults to 60. + work_dir (Union[Path, str], optional): The working directory for the code + execution. Defaults to Path("/workspace"). + kube_config_file (Optional[str], optional): kubernetes configuration file path. + If None, will use KUBECONFIG environment variables or service account token(incluster config) + stop_container (bool, optional): If true, will automatically stop the + container when stop is called, when the context manager exits or when + the Python process exits with atext. Defaults to True. + execution_policies (dict[str, bool], optional): defines supported execution language + + Raises: + ValueError: On argument error, or if the container fails to start. + """ + if kube_config_file is None: + config.load_config() + else: + config.load_config(config_file=kube_config_file) + + self._api_client = client.CoreV1Api() + + if timeout < 1: + raise ValueError("Timeout must be greater than or equal to 1.") + self._timeout = timeout + + if isinstance(work_dir, str): + work_dir = Path(work_dir) + self._work_dir: Path = work_dir + + if container_name is None: + container_name = "autogen-code-exec" + self._container_name = container_name + + # Start a container from the image, read to exec commands later + if pod_spec: + pod = pod_spec + else: + if pod_name is None: + pod_name = f"autogen-code-exec-{uuid.uuid4()}" + if namespace is None: + namespace_path = "/var/run/secrets/kubernetes.io/serviceaccount/namespace" + if not Path(namespace_path).is_file(): + raise ValueError("Namespace where the pod will be launched must be provided") + with open(namespace_path, "r") as f: + namespace = f.read() + + pod = client.V1Pod( + metadata=client.V1ObjectMeta(name=pod_name, namespace=namespace), + spec=client.V1PodSpec( + restart_policy="Never", + containers=[ + client.V1Container( + args=["-c", "while true;do sleep 5; done"], + command=["/bin/sh"], + name=container_name, + image=image, + ) + ], + ), + ) + + try: + pod_name = pod.metadata.name + namespace = pod.metadata.namespace + self._pod = self._api_client.create_namespaced_pod(namespace=namespace, body=pod) + except ApiException as e: + raise ValueError(f"Creating pod failed: {e}") + + self._wait_for_ready() + + def cleanup() -> None: + try: + self._api_client.delete_namespaced_pod(pod_name, namespace) + except ApiException: + pass + atexit.unregister(cleanup) + + self._cleanup = cleanup + + if stop_container: + atexit.register(cleanup) + + self.execution_policies = self.DEFAULT_EXECUTION_POLICY.copy() + if execution_policies is not None: + self.execution_policies.update(execution_policies) + + def _wait_for_ready(self, stop_time: float = 0.1) -> None: + elapsed_time = 0.0 + name = self._pod.metadata.name + namespace = self._pod.metadata.namespace + while True: + sleep(stop_time) + elapsed_time += stop_time + if elapsed_time > self._timeout: + raise ValueError( + f"pod name {name} on namespace {namespace} is not Ready after timeout {self._timeout} seconds" + ) + try: + pod_status = self._api_client.read_namespaced_pod_status(name, namespace) + if pod_status.status.phase == "Running": + break + except ApiException as e: + raise ValueError(f"reading pod status failed: {e}") + + 
@property + def timeout(self) -> int: + """(Experimental) The timeout for code execution.""" + return self._timeout + + @property + def work_dir(self) -> Path: + """(Experimental) The working directory for the code execution.""" + return self._work_dir + + @property + def code_extractor(self) -> CodeExtractor: + """(Experimental) Export a code extractor that can be used by an agent.""" + return MarkdownCodeExtractor() + + def execute_code_blocks(self, code_blocks: List[CodeBlock]) -> CommandLineCodeResult: + """(Experimental) Execute the code blocks and return the result. + + Args: + code_blocks (List[CodeBlock]): The code blocks to execute. + + Returns: + CommandlineCodeResult: The result of the code execution.""" + + if len(code_blocks) == 0: + raise ValueError("No code blocks to execute.") + + outputs = [] + files = [] + last_exit_code = 0 + for code_block in code_blocks: + lang = self.LANGUAGE_ALIASES.get(code_block.language.lower(), code_block.language.lower()) + if lang not in self.DEFAULT_EXECUTION_POLICY: + outputs.append(f"Unsupported language {lang}\n") + last_exit_code = 1 + break + + execute_code = self.execution_policies.get(lang, False) + code = silence_pip(code_block.code, lang) + if lang in ["bash", "shell", "sh"]: + code = "\n".join(["#!/bin/bash", code]) + + try: + filename = _get_file_name_from_content(code, self._work_dir) + except ValueError: + outputs.append("Filename is not in the workspace") + last_exit_code = 1 + break + + if not filename: + extension = self.LANGUAGE_FILE_EXTENSION.get(lang, lang) + filename = f"tmp_code_{md5(code.encode()).hexdigest()}.{extension}" + + code_path = self._work_dir / filename + + exec_script = textwrap.dedent( + """ + if [ ! -d "{workspace}" ]; then + mkdir {workspace} + fi + cat <{code_path}\n + {code} + EOM + chmod +x {code_path}""" + ) + exec_script = exec_script.format(workspace=str(self._work_dir), code_path=code_path, code=code) + stream( + self._api_client.connect_get_namespaced_pod_exec, + self._pod.metadata.name, + self._pod.metadata.namespace, + command=["/bin/sh", "-c", exec_script], + container=self._container_name, + stderr=True, + stdin=False, + stdout=True, + tty=False, + ) + + files.append(code_path) + + if not execute_code: + outputs.append(f"Code saved to {str(code_path)}\n") + continue + + resp = stream( + self._api_client.connect_get_namespaced_pod_exec, + self._pod.metadata.name, + self._pod.metadata.namespace, + command=["timeout", str(self._timeout), _cmd(lang), str(code_path)], + container=self._container_name, + stderr=True, + stdin=False, + stdout=True, + tty=False, + _preload_content=False, + ) + + stdout_messages = [] + stderr_messages = [] + while resp.is_open(): + resp.update(timeout=1) + if resp.peek_stderr(): + stderr_messages.append(resp.read_stderr()) + if resp.peek_stdout(): + stdout_messages.append(resp.read_stdout()) + outputs.extend(stdout_messages + stderr_messages) + exit_code = resp.returncode + resp.close() + + if exit_code == 124: + outputs.append("\n" + TIMEOUT_MSG) + + last_exit_code = exit_code + if exit_code != 0: + break + + code_file = str(files[0]) if files else None + return CommandLineCodeResult(exit_code=last_exit_code, output="".join(outputs), code_file=code_file) + + def stop(self) -> None: + """(Experimental) Stop the code executor.""" + self._cleanup() + + def __enter__(self) -> Self: + return self + + def __exit__( + self, exc_type: Optional[Type[BaseException]], exc_val: Optional[BaseException], exc_tb: Optional[TracebackType] + ) -> None: + self.stop() diff --git a/setup.py 
b/setup.py index fe55a4a6c2ed..dfc5e9127794 100644 --- a/setup.py +++ b/setup.py @@ -107,6 +107,7 @@ "cohere": ["cohere>=5.5.8"], "ollama": ["ollama>=0.3.3", "fix_busted_json>=0.0.18"], "bedrock": ["boto3>=1.34.149"], + "kubernetes": ["kubernetes>=27.2.0"], } setuptools.setup( diff --git a/test/coding/test_kubernetes.commandline_code_executor.md b/test/coding/test_kubernetes.commandline_code_executor.md new file mode 100644 index 000000000000..ad9348dc6007 --- /dev/null +++ b/test/coding/test_kubernetes.commandline_code_executor.md @@ -0,0 +1,44 @@ +# Test Environment for autogen.coding.kubernetes.PodCommandLineCodeExecutor + +To test PodCommandLineCodeExecutor, the following environment is required. +- kubernetes cluster config file +- autogen package + +## kubernetes cluster config file + +kubernetes cluster config file, kubeconfig file's location should be set on environment variable `KUBECONFIG` or +It must be located in the .kube/config path of your home directory. + +For Windows, `C:\Users\<>\.kube\config`, +For Linux or MacOS, place the kubeconfig file in the `/home/<>/.kube/config` directory. + +## package install + +Clone autogen github repository for package install and testing + +Clone the repository with the command below. + +before contribution +```sh +git clone -b k8s-code-executor https://github.com/questcollector/autogen.git +``` + +after contribution +```sh +git clone https://github.com/microsoft/autogen.git +``` + +install autogen with kubernetes >= 27.0.2 + +```sh +cd autogen +pip install .[kubernetes] -U +``` + +## test execution + +Perform the test with the following command + +```sh +pytest test/coding/test_kubernetes_commandline_code_executor.py +``` diff --git a/test/coding/test_kubernetes_commandline_code_executor.py b/test/coding/test_kubernetes_commandline_code_executor.py new file mode 100644 index 000000000000..09e6b36aafaf --- /dev/null +++ b/test/coding/test_kubernetes_commandline_code_executor.py @@ -0,0 +1,203 @@ +import importlib +import os +import sys +from pathlib import Path + +import pytest + +from autogen.code_utils import TIMEOUT_MSG +from autogen.coding.base import CodeBlock, CodeExecutor + +try: + from autogen.coding.kubernetes import PodCommandLineCodeExecutor + + client = importlib.import_module("kubernetes.client") + config = importlib.import_module("kubernetes.config") + + kubeconfig = Path(".kube/config") + if os.environ.get("KUBECONFIG", None): + kubeconfig = Path(os.environ["KUBECONFIG"]) + elif sys.platform == "win32": + kubeconfig = os.environ["userprofile"] / kubeconfig + else: + kubeconfig = os.environ["HOME"] / kubeconfig + + if kubeconfig.is_file(): + config.load_config(config_file=str(kubeconfig)) + api_client = client.CoreV1Api() + api_client.list_namespace() + skip_kubernetes_tests = False + else: + skip_kubernetes_tests = True + + pod_spec = client.V1Pod( + metadata=client.V1ObjectMeta( + name="abcd", namespace="default", annotations={"sidecar.istio.io/inject": "false"} + ), + spec=client.V1PodSpec( + restart_policy="Never", + containers=[ + client.V1Container( + args=["-c", "while true;do sleep 5; done"], + command=["/bin/sh"], + name="abcd", + image="python:3.11-slim", + env=[ + client.V1EnvVar(name="TEST", value="TEST"), + client.V1EnvVar( + name="POD_NAME", + value_from=client.V1EnvVarSource( + field_ref=client.V1ObjectFieldSelector(field_path="metadata.name") + ), + ), + ], + ) + ], + ), + ) +except Exception: + skip_kubernetes_tests = True + + +@pytest.mark.skipif(skip_kubernetes_tests, reason="kubernetes not accessible") 
+def test_create_default_pod_executor(): + with PodCommandLineCodeExecutor(namespace="default", kube_config_file=str(kubeconfig)) as executor: + assert executor.timeout == 60 + assert executor.work_dir == Path("/workspace") + assert executor._container_name == "autogen-code-exec" + assert executor._pod.metadata.name.startswith("autogen-code-exec-") + _test_execute_code(executor) + + +@pytest.mark.skipif(skip_kubernetes_tests, reason="kubernetes not accessible") +def test_create_node_pod_executor(): + with PodCommandLineCodeExecutor( + image="node:22-alpine", + namespace="default", + work_dir="./app", + timeout=30, + kube_config_file=str(kubeconfig), + execution_policies={"javascript": True}, + ) as executor: + assert executor.timeout == 30 + assert executor.work_dir == Path("./app") + assert executor._container_name == "autogen-code-exec" + assert executor._pod.metadata.name.startswith("autogen-code-exec-") + assert executor.execution_policies["javascript"] + + # Test single code block. + code_blocks = [CodeBlock(code="console.log('hello world!')", language="javascript")] + code_result = executor.execute_code_blocks(code_blocks) + assert code_result.exit_code == 0 and "hello world!" in code_result.output and code_result.code_file is not None + + # Test multiple code blocks. + code_blocks = [ + CodeBlock(code="console.log('hello world!')", language="javascript"), + CodeBlock(code="let a = 100 + 100; console.log(a)", language="javascript"), + ] + code_result = executor.execute_code_blocks(code_blocks) + assert ( + code_result.exit_code == 0 + and "hello world!" in code_result.output + and "200" in code_result.output + and code_result.code_file is not None + ) + + # Test running code. + file_lines = ["console.log('hello world!')", "let a = 100 + 100", "console.log(a)"] + code_blocks = [CodeBlock(code="\n".join(file_lines), language="javascript")] + code_result = executor.execute_code_blocks(code_blocks) + assert ( + code_result.exit_code == 0 + and "hello world!" in code_result.output + and "200" in code_result.output + and code_result.code_file is not None + ) + + +@pytest.mark.skipif(skip_kubernetes_tests, reason="kubernetes not accessible") +def test_create_pod_spec_pod_executor(): + with PodCommandLineCodeExecutor( + pod_spec=pod_spec, container_name="abcd", kube_config_file=str(kubeconfig) + ) as executor: + assert executor.timeout == 60 + assert executor._container_name == "abcd" + assert executor._pod.metadata.name == pod_spec.metadata.name + assert executor._pod.metadata.namespace == pod_spec.metadata.namespace + _test_execute_code(executor) + + # Test bash script. + if sys.platform not in ["win32"]: + code_blocks = [CodeBlock(code="echo $TEST $POD_NAME", language="bash")] + code_result = executor.execute_code_blocks(code_blocks) + assert ( + code_result.exit_code == 0 and "TEST abcd" in code_result.output and code_result.code_file is not None + ) + + +@pytest.mark.skipif(skip_kubernetes_tests, reason="kubernetes not accessible") +def test_pod_executor_timeout(): + with PodCommandLineCodeExecutor(namespace="default", timeout=5, kube_config_file=str(kubeconfig)) as executor: + assert executor.timeout == 5 + assert executor.work_dir == Path("/workspace") + assert executor._container_name == "autogen-code-exec" + assert executor._pod.metadata.name.startswith("autogen-code-exec-") + # Test running code. 
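+        # Note: the executor saves each block to a file and runs it through the coreutils
+        # `timeout` command inside the pod, mapping exit code 124 to TIMEOUT_MSG. A block that
+        # sleeps for 10 seconds under this 5-second limit should therefore return exit code 124.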
+ file_lines = ["import time", "time.sleep(10)", "a = 100 + 100", "print(a)"] + code_blocks = [CodeBlock(code="\n".join(file_lines), language="python")] + code_result = executor.execute_code_blocks(code_blocks) + assert code_result.exit_code == 124 and TIMEOUT_MSG in code_result.output and code_result.code_file is not None + + +def _test_execute_code(executor: CodeExecutor) -> None: + # Test single code block. + code_blocks = [CodeBlock(code="import sys; print('hello world!')", language="python")] + code_result = executor.execute_code_blocks(code_blocks) + assert code_result.exit_code == 0 and "hello world!" in code_result.output and code_result.code_file is not None + + # Test multiple code blocks. + code_blocks = [ + CodeBlock(code="import sys; print('hello world!')", language="python"), + CodeBlock(code="a = 100 + 100; print(a)", language="python"), + ] + code_result = executor.execute_code_blocks(code_blocks) + assert ( + code_result.exit_code == 0 + and "hello world!" in code_result.output + and "200" in code_result.output + and code_result.code_file is not None + ) + + # Test bash script. + if sys.platform not in ["win32"]: + code_blocks = [CodeBlock(code="echo 'hello world!'", language="bash")] + code_result = executor.execute_code_blocks(code_blocks) + assert code_result.exit_code == 0 and "hello world!" in code_result.output and code_result.code_file is not None + + # Test running code. + file_lines = ["import sys", "print('hello world!')", "a = 100 + 100", "print(a)"] + code_blocks = [CodeBlock(code="\n".join(file_lines), language="python")] + code_result = executor.execute_code_blocks(code_blocks) + assert ( + code_result.exit_code == 0 + and "hello world!" in code_result.output + and "200" in code_result.output + and code_result.code_file is not None + ) + + # Test running code has filename. + file_lines = ["# filename: test.py", "import sys", "print('hello world!')", "a = 100 + 100", "print(a)"] + code_blocks = [CodeBlock(code="\n".join(file_lines), language="python")] + code_result = executor.execute_code_blocks(code_blocks) + print(code_result.code_file) + assert ( + code_result.exit_code == 0 + and "hello world!" in code_result.output + and "200" in code_result.output + and code_result.code_file.find("test.py") > 0 + ) + + # Test error code. 
+ code_blocks = [CodeBlock(code="print(sys.platform)", language="python")] + code_result = executor.execute_code_blocks(code_blocks) + assert code_result.exit_code == 1 and "Traceback" in code_result.output and code_result.code_file is not None diff --git a/website/docs/topics/code-execution/kubernetes-pod-commandline-code-executor.ipynb b/website/docs/topics/code-execution/kubernetes-pod-commandline-code-executor.ipynb new file mode 100644 index 000000000000..2cad17e0deb5 --- /dev/null +++ b/website/docs/topics/code-execution/kubernetes-pod-commandline-code-executor.ipynb @@ -0,0 +1,773 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Kubernetes Pod Commandline Code Executor" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The `PodCommandLineCodeExecutor` in the `autogen.coding.kubernetes` module is designed to execute code blocks using a pod in Kubernetes.\n", + "It functions similarly to the `DockerCommandLineCodeExecutor`, but specifically creates container within Kubernetes environments.\n", + "\n", + "There are two condition to use PodCommandLineCodeExecutor.\n", + "- Access to a Kubernetes cluster\n", + "- installation `autogen` with the extra requirements `'pyautogen[kubernetes]'`\n", + "\n", + "For local development and testing, this document uses a Minikube cluster.\n", + "\n", + "Minikube is a tool that allows you to run a single-node Kubernetes cluster on you local machine. \n", + "You can refer to the link below for installation and setup of Minikube.\n", + "\n", + "🔗 https://minikube.sigs.k8s.io/docs/start/" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Access kubernetes cluster" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "There are four options PodCommandLineCodeExecutor to access kubernetes API server.\n", + "- default kubeconfig file path: `~/.kube/config`\n", + "- Provide a custom kubeconfig file path using the `kube_config_file` argument of `PodCommandLineCodeExecutor`.\n", + "- Set the kubeconfig file path using the `KUBECONFIG` environment variable.\n", + "- Provide token from Kubernetes ServiceAccount with sufficient permissions" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Generally, if kubeconfig file is located in `~/.kube/config`, there's no need to provide kubeconfig file path on parameter or environment variables.\n", + "\n", + "The tutorial of providing ServiceAccount Token is in the last section" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Example\n", + "\n", + "In order to use kubernetes Pod based code executor, you need to install Kubernetes Python SDK.\n", + "\n", + "You can do this by running the following command:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "pip install 'kubernetes>=27'" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Alternatively, you can install it with the extra features for Kubernetes:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "pip install 'autogen-agentchat[kubernetes]~=0.2'" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To provide kubeconfig file path with environment variable, It can be added with `os.environ[\"KUBECONFIG\"]`" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import 
os\n", + "\n", + "# Set the KUBECONFIG environment variable\n", + "# if the kubeconfig file is not in the default location(~/.kube/config).\n", + "os.environ[\"KUBECONFIG\"] = \"path/to/your/kubeconfig\"" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [], + "source": [ + "from autogen.coding import CodeBlock\n", + "from autogen.coding.kubernetes import PodCommandLineCodeExecutor" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "exit_code=0 output='Hello, World!\\n' code_file='/workspace/tmp_code_07da107bb575cc4e02b0e1d6d99cc204.py'\n" + ] + } + ], + "source": [ + "with PodCommandLineCodeExecutor(\n", + " namespace=\"default\",\n", + " # kube_config_file=\"kubeconfig/file/path\" # If you have another kubeconfig file, you can add it on kube_config_file argument\n", + ") as executor:\n", + " print(\n", + " executor.execute_code_blocks(\n", + " # Example of executing a simple Python code block within a Kubernetes pod.\n", + " code_blocks=[\n", + " CodeBlock(language=\"python\", code=\"print('Hello, World!')\"),\n", + " ]\n", + " )\n", + " )" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Using a context manager(the `with` statement), the pod created by `PodCommandLineCodeExecutor` is automatically deleted after the tasks are completed.\n", + "\n", + "Although the pod is automatically deleted when using a context manager, you might sometimes need to delete it manually. You can do this using `stop()` method, as shown below:" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [], + "source": [ + "executor = PodCommandLineCodeExecutor(namespace=\"default\")" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "NAME READY STATUS RESTARTS AGE\n", + "autogen-code-exec-afd217ac-f77b-4ede-8c53-1297eca5ec64 1/1 Running 0 10m\n" + ] + } + ], + "source": [ + "%%bash\n", + "# This command lists all pods in the default namespace. \n", + "# The default pod name follows the format autogen-code-exec-{uuid.uuid4()}.\n", + "kubectl get pod -n default" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "python:3-slim" + ] + } + ], + "source": [ + "%%bash\n", + "# This command shows container's image in the pod.\n", + "# The default container image is python:3-slim\n", + "kubectl get pod autogen-code-exec-afd217ac-f77b-4ede-8c53-1297eca5ec64 -o jsonpath={.spec.containers[0].image}" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": {}, + "outputs": [], + "source": [ + "executor.stop()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To use a different container image for code executor pod, specify the desired image tag using `image` argument.\n", + "\n", + "`PodCommandLineCodeExecutor` has a default execution policy that allows Python and shell script code blocks. You can enable other languages with `execution_policies` argument." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 13, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "exit_code=0 output='Hello, World!\\n' code_file='app/tmp_code_8c34c8586cb47943728afe1297b7a51c.js'\n" + ] + } + ], + "source": [ + "with PodCommandLineCodeExecutor(\n", + " image=\"node:22-alpine\", # Specifies the runtime environments using a container image\n", + " namespace=\"default\",\n", + " work_dir=\"./app\", # Directory within the container where code block files are stored\n", + " timeout=10, # Timeout in seconds for pod creation and code block execution (default is 60 seconds)\n", + " execution_policies={\n", + " \"javascript\": True\n", + " }, # Enable execution of Javascript code blocks by updating execution policies\n", + ") as executor:\n", + " print(\n", + " executor.execute_code_blocks(\n", + " code_blocks=[\n", + " CodeBlock(language=\"javascript\", code=\"console.log('Hello, World!')\"),\n", + " ]\n", + " )\n", + " )" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "If you want to apply custom settings for executor pod, such as annotations, environment variables, commands, volumes etc., \n", + "you can provide a custom pod specification using `kubernetes.client.V1Pod` format.\n", + "\n", + "The `container_name` argument should also be provided because `PodCommandLineCodeExecutor` does not automatically recognize the container where code blocks will be executed." + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "metadata": {}, + "outputs": [], + "source": [ + "from kubernetes import client\n", + "\n", + "pod = client.V1Pod(\n", + " metadata=client.V1ObjectMeta(name=\"abcd\", namespace=\"default\", annotations={\"sidecar.istio.io/inject\": \"false\"}),\n", + " spec=client.V1PodSpec(\n", + " restart_policy=\"Never\",\n", + " containers=[\n", + " client.V1Container(\n", + " args=[\"-c\", \"while true;do sleep 5; done\"],\n", + " command=[\"/bin/sh\"],\n", + " name=\"abcd\", # container name where code blocks will be executed should be provided using `container_name` argument\n", + " image=\"python:3.11-slim\",\n", + " env=[\n", + " client.V1EnvVar(name=\"TEST\", value=\"TEST\"),\n", + " client.V1EnvVar(\n", + " name=\"POD_NAME\",\n", + " value_from=client.V1EnvVarSource(\n", + " field_ref=client.V1ObjectFieldSelector(field_path=\"metadata.name\")\n", + " ),\n", + " ),\n", + " ],\n", + " )\n", + " ],\n", + " ),\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "exit_code=0 output='Hello, World!\\n' code_file='/autogen/tmp_code_07da107bb575cc4e02b0e1d6d99cc204.py'\n", + "exit_code=0 output='TEST abcd\\n' code_file='/autogen/tmp_code_202399627ea7fb8d8e816f4910b7f87b.sh'\n" + ] + } + ], + "source": [ + "with PodCommandLineCodeExecutor(\n", + " pod_spec=pod, # custom executor pod spec\n", + " container_name=\"abcd\", # To use custom executor pod spec, container_name where code block will be executed should be specified\n", + " work_dir=\"/autogen\",\n", + " timeout=60,\n", + ") as executor:\n", + " print(\n", + " executor.execute_code_blocks(\n", + " code_blocks=[\n", + " CodeBlock(language=\"python\", code=\"print('Hello, World!')\"),\n", + " ]\n", + " )\n", + " )\n", + " print(\n", + " executor.execute_code_blocks(\n", + " code_blocks=[\n", + " CodeBlock(\n", + " code=\"echo $TEST $POD_NAME\", language=\"bash\"\n", + " ), # echo environment variables specified in 
pod_spec\n", + " ]\n", + " )\n", + " )" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Integrates with AutoGen Agents" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "`PodCommandLineCodeExecutor` can be integrated with Agents." + ] + }, + { + "cell_type": "code", + "execution_count": 19, + "metadata": {}, + "outputs": [], + "source": [ + "from autogen import config_list_from_json\n", + "\n", + "config_list = config_list_from_json(\n", + " env_or_file=\"OAI_CONFIG_LIST\",\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": 20, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\u001b[33mcode_executor_agent\u001b[0m (to code_writer):\n", + "\n", + "Write Python code to calculate the moves of disk on tower of hanoi with 3 disks\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[33mcode_writer\u001b[0m (to code_executor_agent):\n", + "\n", + "The problem of the Tower of Hanoi with 3 disks involves moving the disks from one peg to another, following these rules:\n", + "1. Only one disk can be moved at a time.\n", + "2. Each move consists of taking the upper disk from one of the stacks and placing it on top of another stack or on an empty peg.\n", + "3. No disk may be placed on top of a smaller disk.\n", + "\n", + "In the solution, I will use a recursive function to calculate the moves and print them out. Here's the Python code to accomplish this:\n", + "\n", + "```python\n", + "def tower_of_hanoi(n, from_rod, to_rod, aux_rod):\n", + " if n == 1:\n", + " print(f\"Move disk 1 from rod {from_rod} to rod {to_rod}\")\n", + " return\n", + " tower_of_hanoi(n-1, from_rod, aux_rod, to_rod)\n", + " print(f\"Move disk {n} from rod {from_rod} to rod {to_rod}\")\n", + " tower_of_hanoi(n-1, aux_rod, to_rod, from_rod)\n", + "\n", + "n = 3 # Number of disks\n", + "tower_of_hanoi(n, 'A', 'C', 'B') # A, B and C are names of the rods\n", + "```\n", + "\n", + "This script defines a function `tower_of_hanoi` that will print out each move necessary to solve the Tower of Hanoi problem with the specified number of disks `n`. This specific setup will solve for 3 disks moving from rod 'A' to rod 'C' with the help of rod 'B'.\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[31m\n", + ">>>>>>>> EXECUTING CODE BLOCK (inferred language is python)...\u001b[0m\n", + "\u001b[33mcode_executor_agent\u001b[0m (to code_writer):\n", + "\n", + "exitcode: 0 (execution succeeded)\n", + "Code output: Move disk 1 from rod A to rod C\n", + "Move disk 2 from rod A to rod B\n", + "Move disk 1 from rod C to rod B\n", + "Move disk 3 from rod A to rod C\n", + "Move disk 1 from rod B to rod A\n", + "Move disk 2 from rod B to rod C\n", + "Move disk 1 from rod A to rod C\n", + "\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[33mcode_writer\u001b[0m (to code_executor_agent):\n", + "\n", + "The execution of the provided code successfully calculated and printed the moves for solving the Tower of Hanoi with 3 disks. Here are the steps it performed:\n", + "\n", + "1. Move disk 1 from rod A to rod C.\n", + "2. Move disk 2 from rod A to rod B.\n", + "3. Move disk 1 from rod C to rod B.\n", + "4. Move disk 3 from rod A to rod C.\n", + "5. Move disk 1 from rod B to rod A.\n", + "6. Move disk 2 from rod B to rod C.\n", + "7. 
Move disk 1 from rod A to rod C.\n", + "\n", + "This sequence effectively transfers all disks from rod A to rod C using rod B as an auxiliary, following the rules of the Tower of Hanoi puzzle. If you have any more tasks or need further explanation, feel free to ask!\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[33mcode_executor_agent\u001b[0m (to code_writer):\n", + "\n", + "\n", + "\n", + "--------------------------------------------------------------------------------\n" + ] + } + ], + "source": [ + "from autogen import ConversableAgent\n", + "\n", + "# The code writer agent's system message is to instruct the LLM on how to\n", + "# use the code executor with python or shell script code\n", + "code_writer_system_message = \"\"\"\n", + "You have been given coding capability to solve tasks using Python code.\n", + "In the following cases, suggest python code (in a python coding block) or shell script (in a sh coding block) for the user to execute.\n", + " 1. When you need to collect info, use the code to output the info you need, for example, browse or search the web, download/read a file, print the content of a webpage or a file, get the current date/time, check the operating system. After sufficient info is printed and the task is ready to be solved based on your language skill, you can solve the task by yourself.\n", + " 2. When you need to perform some task with code, use the code to perform the task and output the result. Finish the task smartly.\n", + "Solve the task step by step if you need to. If a plan is not provided, explain your plan first. Be clear which step uses code, and which step uses your language skill.\n", + "When using code, you must indicate the script type in the code block. The user cannot provide any other feedback or perform any other action beyond executing the code you suggest. The user can't modify your code. So do not suggest incomplete code which requires users to modify. Don't use a code block if it's not intended to be executed by the user.\n", + "If you want the user to save the code in a file before executing it, put # filename: inside the code block as the first line. Don't include multiple code blocks in one response. Do not ask users to copy and paste the result. Instead, use 'print' function for the output when relevant. 
Check the execution result returned by the user.\n", + "\"\"\"\n", + "with PodCommandLineCodeExecutor(namespace=\"default\") as executor:\n", + "\n", + " code_executor_agent = ConversableAgent(\n", + " name=\"code_executor_agent\",\n", + " llm_config=False,\n", + " code_execution_config={\n", + " \"executor\": executor,\n", + " },\n", + " human_input_mode=\"NEVER\",\n", + " )\n", + "\n", + " code_writer_agent = ConversableAgent(\n", + " \"code_writer\",\n", + " system_message=code_writer_system_message,\n", + " llm_config={\"config_list\": config_list},\n", + " code_execution_config=False, # Turn off code execution for this agent.\n", + " max_consecutive_auto_reply=2,\n", + " human_input_mode=\"NEVER\",\n", + " )\n", + "\n", + " chat_result = code_executor_agent.initiate_chat(\n", + " code_writer_agent, message=\"Write Python code to calculate the moves of disk on tower of hanoi with 10 disks\"\n", + " )" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "id": "93802984-3207-430b-a205-82f0a77df2b2", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "ChatResult(chat_id=None,\n", + " chat_history=[{'content': 'Write Python code to calculate the moves '\n", + " 'of disk on tower of hanoi with 3 disks',\n", + " 'name': 'code_executor_agent',\n", + " 'role': 'assistant'},\n", + " {'content': 'The problem of the Tower of Hanoi with 3 '\n", + " 'disks involves moving the disks from one '\n", + " 'peg to another, following these rules:\\n'\n", + " '1. Only one disk can be moved at a '\n", + " 'time.\\n'\n", + " '2. Each move consists of taking the '\n", + " 'upper disk from one of the stacks and '\n", + " 'placing it on top of another stack or on '\n", + " 'an empty peg.\\n'\n", + " '3. No disk may be placed on top of a '\n", + " 'smaller disk.\\n'\n", + " '\\n'\n", + " 'In the solution, I will use a recursive '\n", + " 'function to calculate the moves and '\n", + " \"print them out. Here's the Python code \"\n", + " 'to accomplish this:\\n'\n", + " '\\n'\n", + " '```python\\n'\n", + " 'def tower_of_hanoi(n, from_rod, to_rod, '\n", + " 'aux_rod):\\n'\n", + " ' if n == 1:\\n'\n", + " ' print(f\"Move disk 1 from rod '\n", + " '{from_rod} to rod {to_rod}\")\\n'\n", + " ' return\\n'\n", + " ' tower_of_hanoi(n-1, from_rod, '\n", + " 'aux_rod, to_rod)\\n'\n", + " ' print(f\"Move disk {n} from rod '\n", + " '{from_rod} to rod {to_rod}\")\\n'\n", + " ' tower_of_hanoi(n-1, aux_rod, to_rod, '\n", + " 'from_rod)\\n'\n", + " '\\n'\n", + " 'n = 3 # Number of disks\\n'\n", + " \"tower_of_hanoi(n, 'A', 'C', 'B') # A, B \"\n", + " 'and C are names of the rods\\n'\n", + " '```\\n'\n", + " '\\n'\n", + " 'This script defines a function '\n", + " '`tower_of_hanoi` that will print out '\n", + " 'each move necessary to solve the Tower '\n", + " 'of Hanoi problem with the specified '\n", + " 'number of disks `n`. 
This specific setup '\n", + " 'will solve for 3 disks moving from rod '\n", + " \"'A' to rod 'C' with the help of rod 'B'.\",\n", + " 'name': 'code_writer',\n", + " 'role': 'user'},\n", + " {'content': 'exitcode: 0 (execution succeeded)\\n'\n", + " 'Code output: Move disk 1 from rod A to '\n", + " 'rod C\\n'\n", + " 'Move disk 2 from rod A to rod B\\n'\n", + " 'Move disk 1 from rod C to rod B\\n'\n", + " 'Move disk 3 from rod A to rod C\\n'\n", + " 'Move disk 1 from rod B to rod A\\n'\n", + " 'Move disk 2 from rod B to rod C\\n'\n", + " 'Move disk 1 from rod A to rod C\\n',\n", + " 'name': 'code_executor_agent',\n", + " 'role': 'assistant'},\n", + " {'content': 'The execution of the provided code '\n", + " 'successfully calculated and printed the '\n", + " 'moves for solving the Tower of Hanoi '\n", + " 'with 3 disks. Here are the steps it '\n", + " 'performed:\\n'\n", + " '\\n'\n", + " '1. Move disk 1 from rod A to rod C.\\n'\n", + " '2. Move disk 2 from rod A to rod B.\\n'\n", + " '3. Move disk 1 from rod C to rod B.\\n'\n", + " '4. Move disk 3 from rod A to rod C.\\n'\n", + " '5. Move disk 1 from rod B to rod A.\\n'\n", + " '6. Move disk 2 from rod B to rod C.\\n'\n", + " '7. Move disk 1 from rod A to rod C.\\n'\n", + " '\\n'\n", + " 'This sequence effectively transfers all '\n", + " 'disks from rod A to rod C using rod B as '\n", + " 'an auxiliary, following the rules of the '\n", + " 'Tower of Hanoi puzzle. If you have any '\n", + " 'more tasks or need further explanation, '\n", + " 'feel free to ask!',\n", + " 'name': 'code_writer',\n", + " 'role': 'user'},\n", + " {'content': '',\n", + " 'name': 'code_executor_agent',\n", + " 'role': 'assistant'}],\n", + " summary='',\n", + " cost={'usage_excluding_cached_inference': {'total_cost': 0},\n", + " 'usage_including_cached_inference': {'gpt-4-turbo-2024-04-09': {'completion_tokens': 499,\n", + " 'cost': 0.0269,\n", + " 'prompt_tokens': 1193,\n", + " 'total_tokens': 1692},\n", + " 'total_cost': 0.0269}},\n", + " human_input=[])\n" + ] + } + ], + "source": [ + "import pprint\n", + "\n", + "pprint.pprint(chat_result)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Use ServiceAccount token" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "If a `PodCommandLineCodeExecutor` instance runs inside of Kubernetes Pod, it can use a token generated from a ServiceAccount to access Kubernetes API server.\n", + "\n", + "The `PodCommandLineCodeExecutor` requires the following permissions:\n", + "the verbs `create`, `get`, `delete` for `pods` resource, and the verb `get` for resources `pods/status`, `pods/exec`.\n", + "\n", + "You can create a ServiceAccount, ClusterRole and RoleBinding with `kubectl` as shown below:" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "serviceaccount/autogen-executor-sa created\n" + ] + } + ], + "source": [ + "%%bash\n", + "# Create ServiceAccount on default namespace\n", + "kubectl create sa autogen-executor-sa" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "clusterrole.rbac.authorization.k8s.io/autogen-executor-role created\n" + ] + } + ], + "source": [ + "%%bash\n", + "# Create ClusterRole that has sufficient permissions\n", + "kubectl create clusterrole autogen-executor-role \\\n", + " --verb=get,create,delete 
--resource=pods,pods/status,pods/exec" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "rolebinding.rbac.authorization.k8s.io/autogen-executor-rolebinding created\n" + ] + } + ], + "source": [ + "%%bash\n", + "# Create RoleBinding that binds ClusterRole and ServiceAccount\n", + "kubectl create rolebinding autogen-executor-rolebinding \\\n", + " --clusterrole autogen-executor-role --serviceaccount default:autogen-executor-sa" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "A pod with a previously created ServiceAccount can be launched using the following command." + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "pod/autogen-executor created\n" + ] + } + ], + "source": [ + "%%bash\n", + "# create pod with serviceaccount\n", + "kubectl run autogen-executor --image python:3 \\\n", + " --overrides='{\"spec\":{\"serviceAccount\": \"autogen-executor-sa\"}}' \\\n", + " -- bash -c 'pip install pyautogen[kubernetes] && sleep inifinity'" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can execute `PodCommandLineCodeExecutor` inside the Python interpreter process from `autogen-executor` Pod.\n", + "\n", + "It creates new pod for code execution using token generated from `autogen-executor-sa` ServiceAccount." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%%bash\n", + "kubectl exec autogen-executor -it -- python" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "kube_config_path not provided and default location (~/.kube/config) does not exist. Using inCluster Config. 
This might not work.\n", + "exit_code=0 output='Hello, World!\\n' code_file='/workspace/tmp_code_07da107bb575cc4e02b0e1d6d99cc204.py'" + ] + } + ], + "source": [ + "from autogen.coding import CodeBlock\n", + "from autogen.coding.kubernetes import PodCommandLineCodeExecutor\n", + "\n", + "# PodCommandLineCodeExecutor uses token generated from ServiceAccount by kubernetes incluster config\n", + "with PodCommandLineCodeExecutor() as executor:\n", + " print(\n", + " executor.execute_code_blocks(\n", + " code_blocks=[\n", + " CodeBlock(language=\"python\", code=\"print('Hello, World!')\"),\n", + " ]\n", + " )\n", + " )" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "autogen", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.12" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} From 1960eaba1a2815c551fa8146b99ad8b7506d05af Mon Sep 17 00:00:00 2001 From: Lokesh Goel <113521973+lokesh-couchbase@users.noreply.github.com> Date: Thu, 17 Oct 2024 17:48:03 +0530 Subject: [PATCH 34/35] Add Couchbase Vector DB Example Notebook and Minor Bug Fix (#3804) --- .../agentchat/contrib/vectordb/couchbase.py | 19 +- .../agentchat_RetrieveChat_couchbase.ipynb | 579 ++++++++++++++++++ .../blog/2023-10-18-RetrieveChat/index.mdx | 1 + website/docs/topics/retrieval_augmentation.md | 1 + 4 files changed, 589 insertions(+), 11 deletions(-) create mode 100644 notebook/agentchat_RetrieveChat_couchbase.ipynb diff --git a/autogen/agentchat/contrib/vectordb/couchbase.py b/autogen/agentchat/contrib/vectordb/couchbase.py index 66691fa2f2b7..ae978ef126b4 100644 --- a/autogen/agentchat/contrib/vectordb/couchbase.py +++ b/autogen/agentchat/contrib/vectordb/couchbase.py @@ -56,16 +56,7 @@ def __init__( wait_until_index_ready (float | None): Blocking call to wait until the database indexes are ready. None means no wait. Default is None. wait_until_document_ready (float | None): Blocking call to wait until the database documents are ready. None means no wait. Default is None. 
""" - print( - "CouchbaseVectorDB", - connection_string, - username, - password, - bucket_name, - scope_name, - collection_name, - index_name, - ) + self.embedding_function = embedding_function self.index_name = index_name @@ -119,6 +110,7 @@ def create_collection( try: collection_mgr = self.bucket.collections() collection_mgr.create_collection(self.scope.name, collection_name) + self.cluster.query(f"CREATE PRIMARY INDEX ON {self.bucket.name}.{self.scope.name}.{collection_name}") except Exception: if not get_or_create: @@ -287,7 +279,12 @@ def upsert_docs( [doc["content"]] ).tolist() # Gets new embedding even in case of document update - doc_content = {TEXT_KEY: doc["content"], "metadata": doc.get("metadata", {}), EMBEDDING_KEY: embedding} + doc_content = { + TEXT_KEY: doc["content"], + "metadata": doc.get("metadata", {}), + EMBEDDING_KEY: embedding, + "id": doc_id, + } docs_to_upsert[doc_id] = doc_content collection.upsert_multi(docs_to_upsert) diff --git a/notebook/agentchat_RetrieveChat_couchbase.ipynb b/notebook/agentchat_RetrieveChat_couchbase.ipynb new file mode 100644 index 000000000000..17e9284a379a --- /dev/null +++ b/notebook/agentchat_RetrieveChat_couchbase.ipynb @@ -0,0 +1,579 @@ +{ + "cells": [ + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Using RetrieveChat Powered by Couchbase Capella for Retrieve Augmented Code Generation and Question Answering\n", + "\n", + "AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n", + "Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n", + "\n", + "RetrieveChat is a conversational system for retrieval-augmented code generation and question answering. In this notebook, we demonstrate how to utilize RetrieveChat to generate code and answer questions based on customized documentations that are not present in the LLM's training dataset. RetrieveChat uses the `AssistantAgent` and `RetrieveUserProxyAgent`, which is similar to the usage of `AssistantAgent` and `UserProxyAgent` in other notebooks (e.g., [Automated Task Solving with Code Generation, Execution & Debugging](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_auto_feedback_from_code_execution.ipynb)). Essentially, `RetrieveUserProxyAgent` implement a different auto-reply mechanism corresponding to the RetrieveChat prompts.\n", + "\n", + "## Table of Contents\n", + "We'll demonstrate six examples of using RetrieveChat for code generation and question answering:\n", + "\n", + "- [Example 1: Generate code based off docstrings w/o human feedback](#example-1)\n", + "\n", + "````{=mdx}\n", + ":::info Requirements\n", + "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", + "\n", + "```bash\n", + "pip install pyautogen[retrievechat-couchbase] flaml[automl]\n", + "```\n", + "\n", + "For more information, please refer to the [installation guide](/docs/installation/).\n", + ":::\n", + "````\n", + "\n", + "Ensure you have a Couchbase Capella cluster running. 
Read more on how to get started [here](https://docs.couchbase.com/cloud/get-started/intro.html)" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Set your API Endpoint\n" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "models to use: ['gpt-4o-mini']\n" + ] + } + ], + "source": [ + "import os\n", + "import sys\n", + "\n", + "from autogen import AssistantAgent\n", + "\n", + "sys.path.append(os.path.abspath(\"/workspaces/autogen/autogen/agentchat/contrib\"))\n", + "\n", + "from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent\n", + "\n", + "# Accepted file formats for that can be stored in\n", + "# a vector database instance\n", + "from autogen.retrieve_utils import TEXT_FORMATS\n", + "\n", + "config_list = [{\"model\": \"gpt-4o-mini\", \"api_key\": os.environ[\"OPENAI_API_KEY\"], \"api_type\": \"openai\"}]\n", + "assert len(config_list) > 0\n", + "print(\"models to use: \", [config_list[i][\"model\"] for i in range(len(config_list))])" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "````{=mdx}\n", + ":::tip\n", + "Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).\n", + ":::\n", + "````\n", + "\n", + "## Construct agents for RetrieveChat\n", + "\n", + "We start by initializing the `AssistantAgent` and `RetrieveUserProxyAgent`. The system message needs to be set to \"You are a helpful assistant.\" for AssistantAgent. The detailed instructions are given in the user message. Later we will use the `RetrieveUserProxyAgent.message_generator` to combine the instructions and a retrieval augmented generation task for an initial prompt to be sent to the LLM assistant." + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Accepted file formats for `docs_path`:\n", + "['txt', 'json', 'csv', 'tsv', 'md', 'html', 'htm', 'rtf', 'rst', 'jsonl', 'log', 'xml', 'yaml', 'yml', 'pdf']\n" + ] + } + ], + "source": [ + "print(\"Accepted file formats for `docs_path`:\")\n", + "print(TEXT_FORMATS)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# 1. create an AssistantAgent instance named \"assistant\"\n", + "assistant = AssistantAgent(\n", + " name=\"assistant\",\n", + " system_message=\"You are a helpful assistant.\",\n", + " llm_config={\n", + " \"timeout\": 600,\n", + " \"cache_seed\": 42,\n", + " \"config_list\": config_list,\n", + " },\n", + ")\n", + "\n", + "# 2. 
create the RetrieveUserProxyAgent instance named \"ragproxyagent\"\n", + "# Refer to https://microsoft.github.io/autogen/docs/reference/agentchat/contrib/retrieve_user_proxy_agent\n", + "# and https://microsoft.github.io/autogen/docs/reference/agentchat/contrib/vectordb/couchbase\n", + "# for more information on the RetrieveUserProxyAgent and CouchbaseVectorDB\n", + "ragproxyagent = RetrieveUserProxyAgent(\n", + " name=\"ragproxyagent\",\n", + " human_input_mode=\"NEVER\",\n", + " max_consecutive_auto_reply=3,\n", + " retrieve_config={\n", + " \"task\": \"code\",\n", + " \"docs_path\": [\n", + " \"https://raw.githubusercontent.com/microsoft/FLAML/main/website/docs/Examples/Integrate%20-%20Spark.md\",\n", + " \"https://raw.githubusercontent.com/microsoft/FLAML/main/website/docs/Research.md\",\n", + " ],\n", + " \"chunk_token_size\": 2000,\n", + " \"model\": config_list[0][\"model\"],\n", + " \"vector_db\": \"couchbase\", # Couchbase Capella VectorDB\n", + " \"collection_name\": \"demo_collection\", # Couchbase Capella collection name to be utilized/created\n", + " \"db_config\": {\n", + " \"connection_string\": os.environ[\"CB_CONN_STR\"], # Couchbase Capella connection string\n", + " \"username\": os.environ[\"CB_USERNAME\"], # Couchbase Capella username\n", + " \"password\": os.environ[\"CB_PASSWORD\"], # Couchbase Capella password\n", + " \"bucket_name\": \"test_db\", # Couchbase Capella bucket name\n", + " \"scope_name\": \"test_scope\", # Couchbase Capella scope name\n", + " \"index_name\": \"vector_index\", # Couchbase Capella index name to be created\n", + " },\n", + " \"get_or_create\": True, # set to False if you don't want to reuse an existing collection\n", + " \"overwrite\": False, # set to True if you want to overwrite an existing collection, each overwrite will force a index creation and reupload of documents\n", + " },\n", + " code_execution_config=False, # set to False if you don't want to execute the code\n", + ")" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Example 1\n", + "\n", + "[Back to top](#table-of-contents)\n", + "\n", + "Use RetrieveChat to help generate sample code and automatically run the code and fix errors if there is any.\n", + "\n", + "Problem: Which API should I use if I want to use FLAML for a classification task and I want to train the model in 30 seconds. Use spark to parallel the training. Force cancel jobs if time limit is reached.\n", + "\n", + "Note: You may need to create an index on the cluster to query" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "2024-10-16 12:08:07,062 - autogen.agentchat.contrib.retrieve_user_proxy_agent - INFO - \u001b[32mUse the existing collection `demo_collection`.\u001b[0m\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Trying to create collection.\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "2024-10-16 12:08:07,953 - autogen.agentchat.contrib.retrieve_user_proxy_agent - INFO - Found 2 chunks.\u001b[0m\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "VectorDB returns doc_ids: [['bdfbc921', '7968cf3c']]\n", + "\u001b[32mAdding content of doc bdfbc921 to context.\u001b[0m\n", + "\u001b[32mAdding content of doc 7968cf3c to context.\u001b[0m\n", + "\u001b[33mragproxyagent\u001b[0m (to assistant):\n", + "\n", + "You're a retrieve augmented coding assistant. 
You answer user's questions based on your own knowledge and the\n", + "context provided by the user.\n", + "If you can't answer the question with or without the current context, you should reply exactly `UPDATE CONTEXT`.\n", + "For code generation, you must obey the following rules:\n", + "Rule 1. You MUST NOT install any packages because all the packages needed are already installed.\n", + "Rule 2. You must follow the formats below to write your code:\n", + "```language\n", + "# your code\n", + "```\n", + "\n", + "User's question is: How can I use FLAML to perform a classification task and use spark to do parallel training. Train 30 seconds and force cancel jobs if time limit is reached.\n", + "\n", + "Context is: # Integrate - Spark\n", + "\n", + "FLAML has integrated Spark for distributed training. There are two main aspects of integration with Spark:\n", + "\n", + "- Use Spark ML estimators for AutoML.\n", + "- Use Spark to run training in parallel spark jobs.\n", + "\n", + "## Spark ML Estimators\n", + "\n", + "FLAML integrates estimators based on Spark ML models. These models are trained in parallel using Spark, so we called them Spark estimators. To use these models, you first need to organize your data in the required format.\n", + "\n", + "### Data\n", + "\n", + "For Spark estimators, AutoML only consumes Spark data. FLAML provides a convenient function `to_pandas_on_spark` in the `flaml.automl.spark.utils` module to convert your data into a pandas-on-spark (`pyspark.pandas`) dataframe/series, which Spark estimators require.\n", + "\n", + "This utility function takes data in the form of a `pandas.Dataframe` or `pyspark.sql.Dataframe` and converts it into a pandas-on-spark dataframe. It also takes `pandas.Series` or `pyspark.sql.Dataframe` and converts it into a [pandas-on-spark](https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/index.html) series. If you pass in a `pyspark.pandas.Dataframe`, it will not make any changes.\n", + "\n", + "This function also accepts optional arguments `index_col` and `default_index_type`.\n", + "\n", + "- `index_col` is the column name to use as the index, default is None.\n", + "- `default_index_type` is the default index type, default is \"distributed-sequence\". More info about default index type could be found on Spark official [documentation](https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/options.html#default-index-type)\n", + "\n", + "Here is an example code snippet for Spark Data:\n", + "\n", + "```python\n", + "import pandas as pd\n", + "from flaml.automl.spark.utils import to_pandas_on_spark\n", + "\n", + "# Creating a dictionary\n", + "data = {\n", + " \"Square_Feet\": [800, 1200, 1800, 1500, 850],\n", + " \"Age_Years\": [20, 15, 10, 7, 25],\n", + " \"Price\": [100000, 200000, 300000, 240000, 120000],\n", + "}\n", + "\n", + "# Creating a pandas DataFrame\n", + "dataframe = pd.DataFrame(data)\n", + "label = \"Price\"\n", + "\n", + "# Convert to pandas-on-spark dataframe\n", + "psdf = to_pandas_on_spark(dataframe)\n", + "```\n", + "\n", + "To use Spark ML models you need to format your data appropriately. 
Specifically, use [`VectorAssembler`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.ml.feature.VectorAssembler.html) to merge all feature columns into a single vector column.\n", + "\n", + "Here is an example of how to use it:\n", + "\n", + "```python\n", + "from pyspark.ml.feature import VectorAssembler\n", + "\n", + "columns = psdf.columns\n", + "feature_cols = [col for col in columns if col != label]\n", + "featurizer = VectorAssembler(inputCols=feature_cols, outputCol=\"features\")\n", + "psdf = featurizer.transform(psdf.to_spark(index_col=\"index\"))[\"index\", \"features\"]\n", + "```\n", + "\n", + "Later in conducting the experiment, use your pandas-on-spark data like non-spark data and pass them using `X_train, y_train` or `dataframe, label`.\n", + "\n", + "### Estimators\n", + "\n", + "#### Model List\n", + "\n", + "- `lgbm_spark`: The class for fine-tuning Spark version LightGBM models, using [SynapseML](https://microsoft.github.io/SynapseML/docs/features/lightgbm/about/) API.\n", + "\n", + "#### Usage\n", + "\n", + "First, prepare your data in the required format as described in the previous section.\n", + "\n", + "By including the models you intend to try in the `estimators_list` argument to `flaml.automl`, FLAML will start trying configurations for these models. If your input is Spark data, FLAML will also use estimators with the `_spark` postfix by default, even if you haven't specified them.\n", + "\n", + "Here is an example code snippet using SparkML models in AutoML:\n", + "\n", + "```python\n", + "import flaml\n", + "\n", + "# prepare your data in pandas-on-spark format as we previously mentioned\n", + "\n", + "automl = flaml.AutoML()\n", + "settings = {\n", + " \"time_budget\": 30,\n", + " \"metric\": \"r2\",\n", + " \"estimator_list\": [\"lgbm_spark\"], # this setting is optional\n", + " \"task\": \"regression\",\n", + "}\n", + "\n", + "automl.fit(\n", + " dataframe=psdf,\n", + " label=label,\n", + " **settings,\n", + ")\n", + "```\n", + "\n", + "[Link to notebook](https://github.com/microsoft/FLAML/blob/main/notebook/automl_bankrupt_synapseml.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/FLAML/blob/main/notebook/automl_bankrupt_synapseml.ipynb)\n", + "\n", + "## Parallel Spark Jobs\n", + "\n", + "You can activate Spark as the parallel backend during parallel tuning in both [AutoML](/docs/Use-Cases/Task-Oriented-AutoML#parallel-tuning) and [Hyperparameter Tuning](/docs/Use-Cases/Tune-User-Defined-Function#parallel-tuning), by setting the `use_spark` to `true`. FLAML will dispatch your job to the distributed Spark backend using [`joblib-spark`](https://github.com/joblib/joblib-spark).\n", + "\n", + "Please note that you should not set `use_spark` to `true` when applying AutoML and Tuning for Spark Data. This is because only SparkML models will be used for Spark Data in AutoML and Tuning. As SparkML models run in parallel, there is no need to distribute them with `use_spark` again.\n", + "\n", + "All the Spark-related arguments are stated below. These arguments are available in both Hyperparameter Tuning and AutoML:\n", + "\n", + "- `use_spark`: boolean, default=False | Whether to use spark to run the training in parallel spark jobs. This can be used to accelerate training on large models and large datasets, but will incur more overhead in time and thus slow down training in some cases. GPU training is not supported yet when use_spark is True. For Spark clusters, by default, we will launch one trial per executor. 
However, sometimes we want to launch more trials than the number of executors (e.g., local mode). In this case, we can set the environment variable `FLAML_MAX_CONCURRENT` to override the detected `num_executors`. The final number of concurrent trials will be the minimum of `n_concurrent_trials` and `num_executors`.\n", + "- `n_concurrent_trials`: int, default=1 | The number of concurrent trials. When n_concurrent_trials > 1, FLAML performes parallel tuning.\n", + "- `force_cancel`: boolean, default=False | Whether to forcely cancel Spark jobs if the search time exceeded the time budget. Spark jobs include parallel tuning jobs and Spark-based model training jobs.\n", + "\n", + "An example code snippet for using parallel Spark jobs:\n", + "\n", + "```python\n", + "import flaml\n", + "\n", + "automl_experiment = flaml.AutoML()\n", + "automl_settings = {\n", + " \"time_budget\": 30,\n", + " \"metric\": \"r2\",\n", + " \"task\": \"regression\",\n", + " \"n_concurrent_trials\": 2,\n", + " \"use_spark\": True,\n", + " \"force_cancel\": True, # Activating the force_cancel option can immediately halt Spark jobs once they exceed the allocated time_budget.\n", + "}\n", + "\n", + "automl.fit(\n", + " dataframe=dataframe,\n", + " label=label,\n", + " **automl_settings,\n", + ")\n", + "```\n", + "\n", + "[Link to notebook](https://github.com/microsoft/FLAML/blob/main/notebook/integrate_spark.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/FLAML/blob/main/notebook/integrate_spark.ipynb)\n", + "# Research\n", + "\n", + "For technical details, please check our research publications.\n", + "\n", + "- [FLAML: A Fast and Lightweight AutoML Library](https://www.microsoft.com/en-us/research/publication/flaml-a-fast-and-lightweight-automl-library/). Chi Wang, Qingyun Wu, Markus Weimer, Erkang Zhu. MLSys 2021.\n", + "\n", + "```bibtex\n", + "@inproceedings{wang2021flaml,\n", + " title={FLAML: A Fast and Lightweight AutoML Library},\n", + " author={Chi Wang and Qingyun Wu and Markus Weimer and Erkang Zhu},\n", + " year={2021},\n", + " booktitle={MLSys},\n", + "}\n", + "```\n", + "\n", + "- [Frugal Optimization for Cost-related Hyperparameters](https://arxiv.org/abs/2005.01571). Qingyun Wu, Chi Wang, Silu Huang. AAAI 2021.\n", + "\n", + "```bibtex\n", + "@inproceedings{wu2021cfo,\n", + " title={Frugal Optimization for Cost-related Hyperparameters},\n", + " author={Qingyun Wu and Chi Wang and Silu Huang},\n", + " year={2021},\n", + " booktitle={AAAI},\n", + "}\n", + "```\n", + "\n", + "- [Economical Hyperparameter Optimization With Blended Search Strategy](https://www.microsoft.com/en-us/research/publication/economical-hyperparameter-optimization-with-blended-search-strategy/). Chi Wang, Qingyun Wu, Silu Huang, Amin Saied. ICLR 2021.\n", + "\n", + "```bibtex\n", + "@inproceedings{wang2021blendsearch,\n", + " title={Economical Hyperparameter Optimization With Blended Search Strategy},\n", + " author={Chi Wang and Qingyun Wu and Silu Huang and Amin Saied},\n", + " year={2021},\n", + " booktitle={ICLR},\n", + "}\n", + "```\n", + "\n", + "- [An Empirical Study on Hyperparameter Optimization for Fine-Tuning Pre-trained Language Models](https://aclanthology.org/2021.acl-long.178.pdf). Susan Xueqing Liu, Chi Wang. 
ACL 2021.\n", + "\n", + "```bibtex\n", + "@inproceedings{liuwang2021hpolm,\n", + " title={An Empirical Study on Hyperparameter Optimization for Fine-Tuning Pre-trained Language Models},\n", + " author={Susan Xueqing Liu and Chi Wang},\n", + " year={2021},\n", + " booktitle={ACL},\n", + "}\n", + "```\n", + "\n", + "- [ChaCha for Online AutoML](https://www.microsoft.com/en-us/research/publication/chacha-for-online-automl/). Qingyun Wu, Chi Wang, John Langford, Paul Mineiro and Marco Rossi. ICML 2021.\n", + "\n", + "```bibtex\n", + "@inproceedings{wu2021chacha,\n", + " title={ChaCha for Online AutoML},\n", + " author={Qingyun Wu and Chi Wang and John Langford and Paul Mineiro and Marco Rossi},\n", + " year={2021},\n", + " booktitle={ICML},\n", + "}\n", + "```\n", + "\n", + "- [Fair AutoML](https://arxiv.org/abs/2111.06495). Qingyun Wu, Chi Wang. ArXiv preprint arXiv:2111.06495 (2021).\n", + "\n", + "```bibtex\n", + "@inproceedings{wuwang2021fairautoml,\n", + " title={Fair AutoML},\n", + " author={Qingyun Wu and Chi Wang},\n", + " year={2021},\n", + " booktitle={ArXiv preprint arXiv:2111.06495},\n", + "}\n", + "```\n", + "\n", + "- [Mining Robust Default Configurations for Resource-constrained AutoML](https://arxiv.org/abs/2202.09927). Moe Kayali, Chi Wang. ArXiv preprint arXiv:2202.09927 (2022).\n", + "\n", + "```bibtex\n", + "@inproceedings{kayaliwang2022default,\n", + " title={Mining Robust Default Configurations for Resource-constrained AutoML},\n", + " author={Moe Kayali and Chi Wang},\n", + " year={2022},\n", + " booktitle={ArXiv preprint arXiv:2202.09927},\n", + "}\n", + "```\n", + "\n", + "- [Targeted Hyperparameter Optimization with Lexicographic Preferences Over Multiple Objectives](https://openreview.net/forum?id=0Ij9_q567Ma). Shaokun Zhang, Feiran Jia, Chi Wang, Qingyun Wu. ICLR 2023 (notable-top-5%).\n", + "\n", + "```bibtex\n", + "@inproceedings{zhang2023targeted,\n", + " title={Targeted Hyperparameter Optimization with Lexicographic Preferences Over Multiple Objectives},\n", + " author={Shaokun Zhang and Feiran Jia and Chi Wang and Qingyun Wu},\n", + " booktitle={International Conference on Learning Representations},\n", + " year={2023},\n", + " url={https://openreview.net/forum?id=0Ij9_q567Ma},\n", + "}\n", + "```\n", + "\n", + "- [Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference](https://arxiv.org/abs/2303.04673). Chi Wang, Susan Xueqing Liu, Ahmed H. Awadallah. ArXiv preprint arXiv:2303.04673 (2023).\n", + "\n", + "```bibtex\n", + "@inproceedings{wang2023EcoOptiGen,\n", + " title={Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference},\n", + " author={Chi Wang and Susan Xueqing Liu and Ahmed H. Awadallah},\n", + " year={2023},\n", + " booktitle={ArXiv preprint arXiv:2303.04673},\n", + "}\n", + "```\n", + "\n", + "- [An Empirical Study on Challenging Math Problem Solving with GPT-4](https://arxiv.org/abs/2306.01337). Yiran Wu, Feiran Jia, Shaokun Zhang, Hangyu Li, Erkang Zhu, Yue Wang, Yin Tat Lee, Richard Peng, Qingyun Wu, Chi Wang. 
ArXiv preprint arXiv:2306.01337 (2023).\n", + "\n", + "```bibtex\n", + "@inproceedings{wu2023empirical,\n", + " title={An Empirical Study on Challenging Math Problem Solving with GPT-4},\n", + " author={Yiran Wu and Feiran Jia and Shaokun Zhang and Hangyu Li and Erkang Zhu and Yue Wang and Yin Tat Lee and Richard Peng and Qingyun Wu and Chi Wang},\n", + " year={2023},\n", + " booktitle={ArXiv preprint arXiv:2306.01337},\n", + "}\n", + "```\n", + "\n", + "\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[33massistant\u001b[0m (to ragproxyagent):\n", + "\n", + "```python\n", + "import pandas as pd\n", + "from pyspark.ml.feature import VectorAssembler\n", + "import flaml\n", + "from flaml.automl.spark.utils import to_pandas_on_spark\n", + "\n", + "# Creating a dictionary for the example data\n", + "data = {\n", + " \"Square_Feet\": [800, 1200, 1800, 1500, 850],\n", + " \"Age_Years\": [20, 15, 10, 7, 25],\n", + " \"Price\": [100000, 200000, 300000, 240000, 120000],\n", + "}\n", + "\n", + "# Creating a pandas DataFrame\n", + "dataframe = pd.DataFrame(data)\n", + "label = \"Price\"\n", + "\n", + "# Convert to pandas-on-spark dataframe\n", + "psdf = to_pandas_on_spark(dataframe)\n", + "\n", + "# Prepare features using VectorAssembler\n", + "columns = psdf.columns\n", + "feature_cols = [col for col in columns if col != label]\n", + "featurizer = VectorAssembler(inputCols=feature_cols, outputCol=\"features\")\n", + "psdf = featurizer.transform(psdf.to_spark(index_col=\"index\"))[[\"index\", \"features\"]]\n", + "\n", + "# Setting up and running FLAML for AutoML with Spark\n", + "automl = flaml.AutoML()\n", + "automl_settings = {\n", + " \"time_budget\": 30, # Set the time budget to 30 seconds\n", + " \"metric\": \"r2\", # Performance metric\n", + " \"task\": \"regression\", # Problem type\n", + " \"n_concurrent_trials\": 2, # Number of concurrent trials\n", + " \"use_spark\": True, # Use Spark for parallel jobs\n", + " \"force_cancel\": True, # Force cancel jobs if time limit is reached\n", + "}\n", + "\n", + "automl.fit(\n", + " dataframe=psdf,\n", + " label=label,\n", + " **automl_settings\n", + ")\n", + "```\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[33mragproxyagent\u001b[0m (to assistant):\n", + "\n", + "\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[33massistant\u001b[0m (to ragproxyagent):\n", + "\n", + "UPDATE CONTEXT\n", + "\n", + "--------------------------------------------------------------------------------\n", + "\u001b[32mUpdating context and resetting conversation.\u001b[0m\n", + "VectorDB returns doc_ids: [['bdfbc921', '7968cf3c']]\n", + "\u001b[32mNo more context, will terminate.\u001b[0m\n", + "\u001b[33mragproxyagent\u001b[0m (to assistant):\n", + "\n", + "TERMINATE\n", + "\n", + "--------------------------------------------------------------------------------\n" + ] + } + ], + "source": [ + "# reset the assistant. Always reset the assistant before starting a new conversation.\n", + "assistant.reset()\n", + "\n", + "# given a problem, we use the ragproxyagent to generate a prompt to be sent to the assistant as the initial message.\n", + "# the assistant receives the message and generates a response. 
The response will be sent back to the ragproxyagent for processing.\n", + "# The conversation continues until the termination condition is met, in RetrieveChat, the termination condition when no human-in-loop is no code block detected.\n", + "# With human-in-loop, the conversation will continue until the user says \"exit\".\n", + "code_problem = \"How can I use FLAML to perform a classification task and use spark to do parallel training. Train 30 seconds and force cancel jobs if time limit is reached.\"\n", + "chat_result = ragproxyagent.initiate_chat(assistant, message=ragproxyagent.message_generator, problem=code_problem)" + ] + } + ], + "metadata": { + "front_matter": { + "description": "Explore the use of AutoGen's RetrieveChat for tasks like code generation from docstrings, answering complex questions with human feedback, and exploiting features like Update Context, custom prompts, and few-shot learning.", + "tags": [ + "RAG" + ] + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.7" + }, + "skip_test": "Requires interactive usage" + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/website/blog/2023-10-18-RetrieveChat/index.mdx b/website/blog/2023-10-18-RetrieveChat/index.mdx index 9a6c23787231..84bcca5b3064 100644 --- a/website/blog/2023-10-18-RetrieveChat/index.mdx +++ b/website/blog/2023-10-18-RetrieveChat/index.mdx @@ -406,3 +406,4 @@ You can check out more example notebooks for RAG use cases: - [Using RetrieveChat with Qdrant for Retrieve Augmented Code Generation and Question Answering](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_qdrant.ipynb) - [Using RetrieveChat Powered by PGVector for Retrieve Augmented Code Generation and Question Answering](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_pgvector.ipynb) - [Using RetrieveChat Powered by MongoDB Atlas for Retrieve Augmented Code Generation and Question Answering](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_mongodb.ipynb) +- [Using RetrieveChat Powered by Couchbase for Retrieve Augmented Code Generation and Question Answering](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_couchbase.ipynb) diff --git a/website/docs/topics/retrieval_augmentation.md b/website/docs/topics/retrieval_augmentation.md index 1ac32e85fef3..3fa9d5754fd6 100644 --- a/website/docs/topics/retrieval_augmentation.md +++ b/website/docs/topics/retrieval_augmentation.md @@ -127,6 +127,7 @@ For more detailed examples and notebooks showcasing the usage of retrieval augme - Automated Code Generation and Question Answering with [PGVector](https://github.com/pgvector/pgvector) based Retrieval Augmented Agents - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_pgvector.ipynb) - Automated Code Generation and Question Answering with [Qdrant](https://qdrant.tech/) based Retrieval Augmented Agents - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_qdrant.ipynb) - Automated Code Generation and Question Answering with [MongoDB Atlas](https://www.mongodb.com/) based Retrieval Augmented Agents - [View 
Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_mongodb.ipynb) +- Automated Code Generation and Question Answering with [Couchbase](https://www.couchbase.com/) based Retrieval Augmented Agents - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_couchbase.ipynb) - Chat with OpenAI Assistant with Retrieval Augmentation - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_oai_assistant_retrieval.ipynb) - **RAG**: Group Chat with Retrieval Augmented Generation (with 5 group member agents and 1 manager agent) - [View Notebook](/docs/notebooks/agentchat_groupchat_RAG) From 76a4bd05d90d03b524877b5ea7da1b23bd204561 Mon Sep 17 00:00:00 2001 From: Daniel Chalef <131175+danielchalef@users.noreply.github.com> Date: Thu, 17 Oct 2024 10:08:30 -0700 Subject: [PATCH 35/35] Add Zep ecosystem doc and notebook (#3681) * Add Zep ecosystem doc and notebook * fix linting and formatting issues * Fix url * Update agent-memory-with-zep.md * add notebook metadata * newline --------- Co-authored-by: Jack Gerrits --- notebook/agent_memory_using_zep.ipynb | 532 ++++++++++++++++++ .../docs/ecosystem/agent-memory-with-zep.md | 102 ++++ .../img/ecosystem-zep-ce-walkthrough.png | 3 + .../img/ecosystem-zep-fact-ratings.png | 3 + .../ecosystem/img/ecosystem-zep-session.gif | Bin 0 -> 10967477 bytes 5 files changed, 640 insertions(+) create mode 100644 notebook/agent_memory_using_zep.ipynb create mode 100644 website/docs/ecosystem/agent-memory-with-zep.md create mode 100644 website/docs/ecosystem/img/ecosystem-zep-ce-walkthrough.png create mode 100644 website/docs/ecosystem/img/ecosystem-zep-fact-ratings.png create mode 100644 website/docs/ecosystem/img/ecosystem-zep-session.gif diff --git a/notebook/agent_memory_using_zep.ipynb b/notebook/agent_memory_using_zep.ipynb new file mode 100644 index 000000000000..3212f63414f8 --- /dev/null +++ b/notebook/agent_memory_using_zep.ipynb @@ -0,0 +1,532 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Building an Agent with Long-term Memory using Autogen and Zep\n", + "\n", + "This notebook walks through how to build an Autogen Agent with long-term memory. 
Zep builds a knowledge graph from user interactions with the agent, enabling the agent to recall relevant facts from previous conversations or user interactions.\n",
+    "\n",
+    "In this notebook we will:\n",
+    "- Create an Autogen Agent class that extends `ConversableAgent` by adding long-term memory.\n",
+    "- Create a Mental Health Assistant Agent, CareBot, that acts as a counselor and coach.\n",
+    "- Create a user Agent, Cathy, who stands in for our expected user.\n",
+    "- Demonstrate preloading chat history into Zep.\n",
+    "- Demonstrate the agents in conversation, with CareBot recalling facts from previous conversations with Cathy.\n",
+    "- Inspect Facts within Zep, and demonstrate how to use Zep's Fact Ratings to improve the quality of returned facts.\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Requirements\n",
+    "\n",
+    "````{=mdx}\n",
+    ":::info Requirements\n",
+    "Some extra dependencies are needed for this notebook, which can be installed via pip:\n",
+    "\n",
+    "```bash\n",
+    "pip install autogen~=0.3 zep-cloud python-dotenv\n",
+    "```\n",
+    "\n",
+    "For more information, please refer to the [installation guide](/docs/installation/).\n",
+    ":::\n",
+    "````"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stderr",
+     "output_type": "stream",
+     "text": [
+      "flaml.automl is not available. Please install flaml[automl] to enable AutoML functionalities.\n"
+     ]
+    }
+   ],
+   "source": [
+    "import os\n",
+    "import uuid\n",
+    "from typing import Dict, Union\n",
+    "\n",
+    "from dotenv import load_dotenv\n",
+    "\n",
+    "from autogen import Agent, ConversableAgent\n",
+    "\n",
+    "load_dotenv()\n",
+    "\n",
+    "config_list = [\n",
+    "    {\n",
+    "        \"model\": \"gpt-4o-mini\",\n",
+    "        \"api_key\": os.environ.get(\"OPENAI_API_KEY\"),\n",
+    "        \"max_tokens\": 1024,\n",
+    "    }\n",
+    "]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Initialize the Zep Client\n",
+    "\n",
+    "You can sign up for a Zep account here: https://www.getzep.com/"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from zep_cloud import FactRatingExamples, FactRatingInstruction, Message\n",
+    "from zep_cloud.client import AsyncZep\n",
+    "\n",
+    "MIN_FACT_RATING = 0.3\n",
+    "\n",
+    "# Configure Zep\n",
+    "zep = AsyncZep(api_key=os.environ.get(\"ZEP_API_KEY\"))"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def convert_to_zep_messages(chat_history: list[dict[str, str | None]]) -> list[Message]:\n",
+    "    \"\"\"\n",
+    "    Convert chat history to Zep messages.\n",
+    "\n",
+    "    Args:\n",
+    "        chat_history (list): List of dictionaries containing chat messages.\n",
+    "\n",
+    "    Returns:\n",
+    "        list: List of Zep Message objects.\n",
+    "    \"\"\"\n",
+    "    return [\n",
+    "        Message(\n",
+    "            role_type=msg[\"role\"],\n",
+    "            role=msg.get(\"name\", None),\n",
+    "            content=msg[\"content\"],\n",
+    "        )\n",
+    "        for msg in chat_history\n",
+    "    ]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## ZepConversableAgent\n",
+    "\n",
+    "The `ZepConversableAgent` is a custom implementation of the `ConversableAgent` that integrates with Zep for long-term memory management. This class extends the functionality of the base `ConversableAgent` by adding Zep-specific features for persisting and retrieving facts from long-term memory."
+ ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [], + "source": [ + "class ZepConversableAgent(ConversableAgent):\n", + " \"\"\"\n", + " A custom ConversableAgent that integrates with Zep for long-term memory.\n", + " \"\"\"\n", + "\n", + " def __init__(\n", + " self,\n", + " name: str,\n", + " system_message: str,\n", + " llm_config: dict,\n", + " function_map: dict,\n", + " human_input_mode: str,\n", + " zep_session_id: str,\n", + " ):\n", + " super().__init__(\n", + " name=name,\n", + " system_message=system_message,\n", + " llm_config=llm_config,\n", + " function_map=function_map,\n", + " human_input_mode=human_input_mode,\n", + " )\n", + " self.zep_session_id = zep_session_id\n", + " # store the original system message as we will update it with relevant facts from Zep\n", + " self.original_system_message = system_message\n", + " self.register_hook(\"a_process_last_received_message\", self.persist_user_messages)\n", + " self.register_hook(\"a_process_message_before_send\", self.persist_assistant_messages)\n", + "\n", + " async def persist_assistant_messages(\n", + " self, sender: Agent, message: Union[Dict, str], recipient: Agent, silent: bool\n", + " ):\n", + " \"\"\"Agent sends a message to the user. Add the message to Zep.\"\"\"\n", + "\n", + " # Assume message is a string\n", + " zep_messages = convert_to_zep_messages([{\"role\": \"assistant\", \"name\": self.name, \"content\": message}])\n", + " await zep.memory.add(session_id=self.zep_session_id, messages=zep_messages)\n", + "\n", + " return message\n", + "\n", + " async def persist_user_messages(self, messages: list[dict[str, str]] | str):\n", + " \"\"\"\n", + " User sends a message to the agent. Add the message to Zep and\n", + " update the system message with relevant facts from Zep.\n", + " \"\"\"\n", + " # Assume messages is a string\n", + " zep_messages = convert_to_zep_messages([{\"role\": \"user\", \"content\": messages}])\n", + " await zep.memory.add(session_id=self.zep_session_id, messages=zep_messages)\n", + "\n", + " memory = await zep.memory.get(self.zep_session_id, min_rating=MIN_FACT_RATING)\n", + "\n", + " # Update the system message with the relevant facts retrieved from Zep\n", + " self.update_system_message(\n", + " self.original_system_message\n", + " + f\"\\n\\nRelevant facts about the user and their prior conversation:\\n{memory.relevant_facts}\"\n", + " )\n", + "\n", + " return messages" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Zep User and Session Management\n", + "\n", + "### Zep User\n", + "A Zep User represents an individual interacting with your application. Each User can have multiple Sessions associated with them, allowing you to track and manage interactions over time. The unique identifier for each user is their `UserID`, which can be any string value (e.g., username, email address, or UUID).\n", + "\n", + "### Zep Session\n", + "A Session represents a conversation and can be associated with Users in a one-to-many relationship. Chat messages are added to Sessions, with each session having many messages.\n", + "\n", + "### Fact Rating\n", + " \n", + "Fact Rating is a feature in Zep that allows you to rate the importance or relevance of facts extracted from conversations. This helps in prioritizing and filtering information when retrieving memory artifacts. Here, we rate facts based on poignancy. We provide a definition of poignancy and several examples of highly poignant and low-poignancy facts. 
When retrieving memory, you can use the `min_rating` parameter to filter facts based on their importance.\n", + " \n", + "Fact Rating helps ensure the most relevant information, especially in long or complex conversations, is used to ground the agent.\n" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "Session(classifications=None, created_at='2024-10-07T21:12:13.952672Z', deleted_at=None, ended_at=None, fact_rating_instruction=FactRatingInstruction(examples=FactRatingExamples(high=\"The user received news of a family member's serious illness.\", low='The user bought a new brand of toothpaste.', medium='The user completed a challenging marathon.'), instruction='Rate the facts by poignancy. Highly poignant \\nfacts have a significant emotional impact or relevance to the user. \\nLow poignant facts are minimally relevant or of little emotional \\nsignificance.'), fact_version_uuid=None, facts=None, id=774, metadata=None, project_uuid='00000000-0000-0000-0000-000000000000', session_id='f3854ad0-5bd4-4814-a814-ec0880817953', updated_at='2024-10-07T21:12:13.952672Z', user_id='Cathy1023', uuid_='31ab3314-5ac8-4361-ad11-848fb7befedf')" + ] + }, + "execution_count": 5, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "bot_name = \"CareBot\"\n", + "user_name = \"Cathy\"\n", + "\n", + "user_id = user_name + str(uuid.uuid4())[:4]\n", + "session_id = str(uuid.uuid4())\n", + "\n", + "await zep.user.add(user_id=user_id)\n", + "\n", + "fact_rating_instruction = \"\"\"Rate the facts by poignancy. Highly poignant\n", + " facts have a significant emotional impact or relevance to the user.\n", + " Low poignant facts are minimally relevant or of little emotional significance.\n", + "\"\"\"\n", + "\n", + "fact_rating_examples = FactRatingExamples(\n", + " high=\"The user received news of a family member's serious illness.\",\n", + " medium=\"The user completed a challenging marathon.\",\n", + " low=\"The user bought a new brand of toothpaste.\",\n", + ")\n", + "\n", + "await zep.memory.add_session(\n", + " user_id=user_id,\n", + " session_id=session_id,\n", + " fact_rating_instruction=FactRatingInstruction(\n", + " instruction=fact_rating_instruction,\n", + " examples=fact_rating_examples,\n", + " ),\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Preload a prior conversation into Zep\n", + "\n", + "We'll load a prior conversation into long-term memory. We'll use facts derived from this conversation when Cathy restarts the conversation with CareBot, ensuring Carebot has context." + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "SuccessResponse(message='OK')" + ] + }, + "execution_count": 6, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "chat_history = [\n", + " {\n", + " \"role\": \"assistant\",\n", + " \"name\": \"carebot\",\n", + " \"content\": \"Hi Cathy, how are you doing today?\",\n", + " },\n", + " {\n", + " \"role\": \"user\",\n", + " \"name\": \"Cathy\",\n", + " \"content\": \"To be honest, I've been feeling a bit down and demotivated lately. It's been tough.\",\n", + " },\n", + " {\n", + " \"role\": \"assistant\",\n", + " \"name\": \"CareBot\",\n", + " \"content\": \"I'm sorry to hear that you're feeling down and demotivated, Cathy. It's understandable given the challenges you're facing. 
Can you tell me more about what's been going on?\",\n",
+    "    },\n",
+    "    {\n",
+    "        \"role\": \"user\",\n",
+    "        \"name\": \"Cathy\",\n",
+    "        \"content\": \"Well, I'm really struggling to process the passing of my mother.\",\n",
+    "    },\n",
+    "    {\n",
+    "        \"role\": \"assistant\",\n",
+    "        \"name\": \"CareBot\",\n",
+    "        \"content\": \"I'm deeply sorry for your loss, Cathy. Losing a parent is incredibly difficult. It's normal to struggle with grief, and there's no 'right' way to process it. Would you like to talk about your mother or how you're coping?\",\n",
+    "    },\n",
+    "    {\n",
+    "        \"role\": \"user\",\n",
+    "        \"name\": \"Cathy\",\n",
+    "        \"content\": \"Yes, I'd like to talk about my mother. She was a kind and loving person.\",\n",
+    "    },\n",
+    "]\n",
+    "\n",
+    "# Convert chat history to Zep messages\n",
+    "zep_messages = convert_to_zep_messages(chat_history)\n",
+    "\n",
+    "await zep.memory.add(session_id=session_id, messages=zep_messages)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Review all facts in Zep\n",
+    "\n",
+    "We query all session facts for this user session. Only facts that meet the `MIN_FACT_RATING` threshold are returned."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 7,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "created_at='2024-10-07T21:12:15.96584Z' fact='Cathy describes her mother as a kind and loving person.' rating=0.5 uuid_='6a086a73-d4b8-4c1b-9b2f-08d5d326d813'\n",
+      "created_at='2024-10-07T21:12:15.96584Z' fact='Cathy has been feeling down and demotivated lately.' rating=0.5 uuid_='e19d959c-2a01-4cc7-9d49-108719f1a749'\n",
+      "created_at='2024-10-07T21:12:15.96584Z' fact='Cathy is struggling to process the passing of her mother.' rating=0.75 uuid_='d6c12a5d-d2a0-486e-b25d-3d4bdc5ff466'\n"
+     ]
+    }
+   ],
+   "source": [
+    "response = await zep.memory.get_session_facts(session_id=session_id, min_rating=MIN_FACT_RATING)\n",
+    "\n",
+    "for r in response.facts:\n",
+    "    print(r)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Create the Autogen agent, CareBot, an instance of `ZepConversableAgent`\n",
+    "\n",
+    "We pass the current `session_id` to the CareBot agent, which allows it to retrieve relevant facts related to the conversation with Cathy."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 8,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "carebot_system_message = \"\"\"\n",
+    "You are a compassionate mental health bot and caregiver. Review information about the user and their prior conversation below and respond accordingly.\n",
+    "Keep responses empathetic and supportive. And remember, always prioritize the user's well-being and mental health. Keep your responses very concise and to the point.\n",
+    "\"\"\"\n",
+    "\n",
+    "agent = ZepConversableAgent(\n",
+    "    bot_name,\n",
+    "    system_message=carebot_system_message,\n",
+    "    llm_config={\"config_list\": config_list},\n",
+    "    function_map=None,  # No registered functions, by default it is None.\n",
+    "    human_input_mode=\"NEVER\",  # Never ask for human input.\n",
+    "    zep_session_id=session_id,\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Create the Autogen agent, Cathy\n",
+    "\n",
+    "Cathy is a stand-in for a human. When building a production application, you'd replace Cathy with a human-in-the-loop pattern.\n",
+    "\n",
+    "**Note** that we're instructing Cathy to start the conversation with CareBot by asking about her previous session. 
This is an opportunity for us to test whether fact retrieval from Zep's long-term memory is working. " + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": {}, + "outputs": [], + "source": [ + "cathy = ConversableAgent(\n", + " user_name,\n", + " system_message=\"You are returning to your conversation with CareBot, a mental health bot. Ask the bot about your previous session.\",\n", + " llm_config={\"config_list\": config_list},\n", + " human_input_mode=\"NEVER\", # Never ask for human input.\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Start the conversation\n", + "\n", + "We use Autogen's `a_initiate_chat` method to get the two agents conversing. CareBot is the primary agent.\n", + "\n", + "**NOTE** how Carebot is able to recall the past conversation about Cathy's mother in detail, having had relevant facts from Zep added to its system prompt." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "result = await agent.a_initiate_chat(\n", + " cathy,\n", + " message=\"Hi Cathy, nice to see you again. How are you doing today?\",\n", + " max_turns=3,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Review current facts in Zep\n", + "\n", + "Let's see how the facts have evolved as the conversation has progressed." + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "created_at='2024-10-07T20:04:28.397184Z' fact=\"Cathy wants to reflect on a previous conversation about her mother and explore the topic of her mother's passing further.\" rating=0.75 uuid_='56488eeb-d8ac-4b2f-8acc-75f71b56ad76'\n", + "created_at='2024-10-07T20:04:28.397184Z' fact='Cathy is struggling to process the passing of her mother and has been feeling down and demotivated lately.' rating=0.75 uuid_='0fea3f05-ed1a-4e39-a092-c91f8af9e501'\n", + "created_at='2024-10-07T20:04:28.397184Z' fact='Cathy describes her mother as a kind and loving person.' rating=0.5 uuid_='131de203-2984-4cba-9aef-e500611f06d9'\n" + ] + } + ], + "source": [ + "response = await zep.memory.get_session_facts(session_id, min_rating=MIN_FACT_RATING)\n", + "\n", + "for r in response.facts:\n", + " print(r)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Search over Facts in Zep's long-term memory\n", + "\n", + "In addition to the `memory.get` method which uses the current conversation to retrieve facts, we can also search Zep with our own keywords. Here, we retrieve facts using a query. Again, we use fact ratings to limit the returned facts to only those with a high poignancy rating.\n", + "\n", + "The `memory.search_sessions` API may be used as an Agent tool, enabling an agent to search across user memory for relevant facts." + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "created_at='2024-10-07T20:04:28.397184Z' fact='Cathy describes her mother as a kind and loving person.' rating=0.5 uuid_='131de203-2984-4cba-9aef-e500611f06d9'\n", + "created_at='2024-10-07T20:04:28.397184Z' fact='Cathy is struggling to process the passing of her mother and has been feeling down and demotivated lately.' 
rating=0.75 uuid_='0fea3f05-ed1a-4e39-a092-c91f8af9e501'\n", + "created_at='2024-10-07T20:04:28.397184Z' fact=\"Cathy wants to reflect on a previous conversation about her mother and explore the topic of her mother's passing further.\" rating=0.75 uuid_='56488eeb-d8ac-4b2f-8acc-75f71b56ad76'\n" + ] + } + ], + "source": [ + "response = await zep.memory.search_sessions(\n", + " text=\"What do you know about Cathy's family?\",\n", + " user_id=user_id,\n", + " search_scope=\"facts\",\n", + " min_fact_rating=MIN_FACT_RATING,\n", + ")\n", + "\n", + "for r in response.results:\n", + " print(r.fact)" + ] + } + ], + "metadata": { + "front_matter": { + "tags": [ + "memory" + ], + "description": "Agent Memory with Zep." + }, + "kernelspec": { + "display_name": ".venv", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.9" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/website/docs/ecosystem/agent-memory-with-zep.md b/website/docs/ecosystem/agent-memory-with-zep.md new file mode 100644 index 000000000000..c4551296dc7e --- /dev/null +++ b/website/docs/ecosystem/agent-memory-with-zep.md @@ -0,0 +1,102 @@ +# Agent Memory with Zep + +Zep logo + +[Zep](https://www.getzep.com/?utm_source=autogen) is a long-term memory service for agentic applications used by both startups and enterprises. With Zep, you can build personalized, accurate, and production-ready agent applications. + +Zep's memory continuously learns facts from interactions with users and your changing business data. With [just two API calls](https://help.getzep.com/memory?utm_source=autogen), you can persist chat history to Zep and recall facts relevant to the state of your agent. + +Zep is powered by a temporal Knowledge Graph that allows reasoning with facts as they change. A combination of semantic and graph search enables accurate and low-latency fact retrieval. + +Sign up for [Zep Cloud](https://www.getzep.com/?utm_source=autogen) or visit the [Zep Community Edition Repo](https://github.com/getzep/zep). + +| Feature | Description | +| ---------------------------------------------- | ------------------------------------------------------------------------------------- | +| 💬 **Capture Detailed Conversational Context** | Zep's Knowledge Graph-based memory captures episodic, semantic, and temporal contexts | +| 🗄️ **Business Data is Context, too** | Zep is able to extract facts from JSON and unstructured text as well | +| ⚙️ **Tailor For Your Business** | Fact Ratings and other tools allow you to fine-tune retrieval for your use case | +| ⚡️ **Instant Memory Retrieval** | Retrieve relevant facts in under 100ms | +| 🔐 **Compliance & Security** | User Privacy Management, SOC 2 Type II certification, and other controls | +| 🖼️ **Framework Agnostic & Future-Proof** | Use with AutoGen or any other framework, current or future | + +
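+The "two API calls" mentioned above look roughly like this with the async Python SDK. This is a minimal sketch, assuming the `AsyncZep` client, the `Message` type, and an existing `session_id` as set up in the companion notebook added in this PR:
+
+```python
+from zep_cloud import Message
+
+# 1. Persist the latest chat turn(s) to Zep.
+await zep.memory.add(
+    session_id=session_id,
+    messages=[Message(role_type="user", role="Cathy", content="I've been feeling a bit down lately.")],
+)
+
+# 2. Recall memory relevant to the current state of the session.
+memory = await zep.memory.get(session_id)
+print(memory.relevant_facts)
+```
+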
+<center>
+  <img src="img/ecosystem-zep-ce-walkthrough.png" alt="Zep Community Edition Walkthrough" />
+  <em>Zep Fact Ratings</em>
+</center>
+
+<center>
+  <img src="img/ecosystem-zep-session.gif" alt="User Chat Session and Facts" />
+  <em>Chat Session and Facts</em>
+</center>
+
+<center>
+  <img src="img/ecosystem-zep-fact-ratings.png" alt="Implementing Fact Ratings" />
+  <em>Zep Fact Ratings</em>
+</center>
+ +## How Zep works + +1. Add chat messages or data artifacts to Zep during each user interaction or agent event. +2. Zep intelligently integrates new information into the user's (or groups of users) Knowledge Graph, updating existing context as needed. +3. Retrieve relevant facts from Zep for subsequent interactions or events. + +Zep's temporal Knowledge Graph maintains contextual information about facts, enabling reasoning about state changes and providing data provenance insights. Each fact includes `valid_at` and `invalid_at` dates, allowing agents to track changes in user preferences, traits, or environment. + +## Zep is fast + +Retrieving facts is simple and very fast. Unlike other memory solutions, Zep does not use agents to ensure facts are relevant. It precomputes facts, entity summaries, and other artifacts asynchronously. For on-premise use, retrieval speed primarily depends on your embedding service's performance. + +## Zep supports many types of data + +You can add a variety of data artifacts to Zep: + +- Adding chat history messages. +- Ingestion of JSON and unstructured text. + +Zep supports chat session, user, and group-level graphs. Group graphs allow for capturing organizational knowledge. + +## Getting Started + +### Zep Cloud + +1. Sign up for [Zep Cloud](https://www.getzep.com?utm_source=autogen) and create a [Project API Key](https://help.getzep.com/projects?utm_source=autogen). + +2. Install one of the [Zep Python, TypeScript or Go SDKs](https://help.getzep.com/sdks?utm_source=autogen). Python instructions shown below. + +```shell +pip install zep-cloud +``` + +3. Initialize a client + +```python +import os +from zep_cloud.client import AsyncZep + +API_KEY = os.environ.get('ZEP_API_KEY') +client = AsyncZep( + api_key=API_KEY, +) +``` + +3. Review the Zep and Autogen [notebook example](/docs/notebooks/agent_memory_using_zep/) for agent-building best practices. + +### Zep Community Edition + +Follow the [Getting Started guide](https://help.getzep.com/ce/quickstart?utm_source=autogen) or visit the [GitHub Repo](https://github.com/getzep/zep?utm_source=autogen). 
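+The [companion notebook](/docs/notebooks/agent_memory_using_zep/) also notes that `memory.search_sessions` can be exposed to an agent as a tool. Below is a minimal sketch of one way to wire that up with AutoGen's tool registration; the `search_user_facts` helper and agent names are illustrative, and the snippet assumes the `zep` client, `user_id`, `MIN_FACT_RATING`, and `config_list` defined in the notebook:
+
+```python
+from autogen import ConversableAgent
+
+assistant = ConversableAgent("assistant", llm_config={"config_list": config_list})
+user_proxy = ConversableAgent("user_proxy", human_input_mode="NEVER", llm_config=False)
+
+
+async def search_user_facts(query: str) -> str:
+    """Search Zep's long-term memory for facts about the user."""
+    response = await zep.memory.search_sessions(
+        text=query,
+        user_id=user_id,
+        search_scope="facts",
+        min_fact_rating=MIN_FACT_RATING,
+    )
+    return "\n".join(str(r.fact) for r in response.results)
+
+
+# The assistant proposes the tool call; the user proxy executes it during async chats.
+assistant.register_for_llm(name="search_user_facts", description="Look up facts about the user")(search_user_facts)
+user_proxy.register_for_execution(name="search_user_facts")(search_user_facts)
+```
+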
+ +## Autogen + Zep examples + +- [Autogen Agents with Zep Memory Notebook](/docs/notebooks/agent_memory_using_zep/) + +## Extra links + +- [📙 Documentation](https://help.getzep.com/?utm_source=autogen) +- [🐦 Twitter / X](https://x.com/zep_ai/) +- [📢 Discord](https://discord.com/invite/W8Kw6bsgXQ) diff --git a/website/docs/ecosystem/img/ecosystem-zep-ce-walkthrough.png b/website/docs/ecosystem/img/ecosystem-zep-ce-walkthrough.png new file mode 100644 index 000000000000..b6937a043985 --- /dev/null +++ b/website/docs/ecosystem/img/ecosystem-zep-ce-walkthrough.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c0829b29a48ca05e2694aca00446ef5768c1b8edec56ce5035527f25f9ee4c81 +size 421633 diff --git a/website/docs/ecosystem/img/ecosystem-zep-fact-ratings.png b/website/docs/ecosystem/img/ecosystem-zep-fact-ratings.png new file mode 100644 index 000000000000..a4b12f8a96f3 --- /dev/null +++ b/website/docs/ecosystem/img/ecosystem-zep-fact-ratings.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:179241bd4fa3ed89d721deeb1810a31b9838e7f54582d521bd91f29cbae044f2 +size 233905 diff --git a/website/docs/ecosystem/img/ecosystem-zep-session.gif b/website/docs/ecosystem/img/ecosystem-zep-session.gif new file mode 100644 index 0000000000000000000000000000000000000000..5ff9eb5dbfd34c78d49db6f049fbc08d4e985a77 GIT binary patch literal 10967477 zcmV(;K-<4ZNk%w1VN3+j0{8y_`u6#KeSq`t_NJn#_xbnP+1fBJG$tr3U|wPQ`T73- z{=2)qIypUKVr9w3&vtcy_V)LWj*|NN`k0=mw6nMU{r%+Z^lNNwgoTGsP*OufMeOMA z-`(TG!^MS%j_2pa>+J3B@1&%q(azI$f{sl}QIU+5 z`Std$qPzL={QLF&xU#}%V{Da+n#RP_AuBs!Qe67;{WL5~`t0%3%GG^~p%o!EcuIEE z!q{<6Yf~vzVp3g}WRkmwv~fX7w~NWUi@b<0f`~77a42t7D0+%6Y)~O|ZYqY9F?e<; zY1vm%q-0UDJ%zhIhGHj%kTQW;A!m{>acw4Mi!gVWGk;_xZhR?aW+Z2yHGP9DX=W&1 z!;Q9QC240Tc!w@_g)DD(DtLh{c5o?qcr1f?DsO)+en}r&RwiEt7%g>ymVaxb+nmJ$ z7b#;phy1+aKvQj2C4f$9f4R}^o3hX{Mr4d0nrkhdOeLRLCz44cl2Ij-elU}UGn{BEl6f$hbugQ5Et+~Tm5noy zVJVVTC5cienSe5!oi~tkEs|R@p4pRDb}^-WO4(~Ip!$JEcrc!JFr0fZo^?CjcQ)a9 zIO=*e=m7r!A^!_bMO0HmK~P09E-(WD0000X`2+ zoVkr=&6_rJ;@s(yXG@5Zw=rcj?swKp{?RH|38YOT37E7z~z zyo%i#mTc3qRMV)y?~ z_il#3g9{&?vAFT$$d993&b+zv=Q^WHpH97c(d*c=Yu{egs#x#ax`O`|KfZi;^GVC1 zAG@A?diU?)o_)(+zWw`l^XuQwSaE;<{{eWIfd3Up*@2&Vrr?4MHkjaptVJl{gcROp z;e{Ax=-Y-Ic1YZZAcmOSh$NOs5p*b~sN#y&wdmq?FZRUUj5OA`)Q#Q62Ud>c@hF~p zK*HBveP$6k7Jd9BspOJO66mCX{yiyWW`H4RC4>%UspXaxcIoAp8ipxmhh&y%rif^! 
zZZrt%uMP3^P*Ihp5*5VOv{5DJCt2}L*{)Jk)J|QbRy%+VHPJaawFTX5rlRtXW+df) zAO|s122(XzF>F=klTlq2KbtgVY?D%H)h}(e;811GYSBf`%Q6y-V0u+qWy67XK)fJ8 zgd+3)PE}ml^@^BvQ9(3--00&Tb@!MMPBXy}(8U3$|Fv3+@>&mKUFYX1tcaevOTog@ zxSRq(;|kouRaub{q~wMLawVij@(tTHW0&b&TT?n=6EC4PTC+`G|IiOk)L-iqV9nz) zE=ekz$f*t^X@YH??CZ;NfOFCcPYNPpY-zffKyB*lVZ}ANlqjThP(gWBW0iJrIJQ+W z#XeIpKkqex@KsJx6lKLjWl!~^4~e^-g5>Q;W23 zZMQxC7I4cV4}*$uMHX@QVp{dIaRupdSvF{D?UUG62%re1&}O8Rh{2FRdYG?z*o$J7 zD9EB$Y2XQN)oh3Qra@m9cKcB#I`MYVw|;VWKI3(F2lw_yG># zh>~n`C8=ftKn0lKlWxYl#H^DF@d7}IdfY~H3z%ydsk&IM0-{JO8*F@0by=9CN7J{1 zM=K}m5J`2nK1=LA>vd{__e=J*U-378FV!J{<&ow^zZ95r;ftR>Y0Q3roIEP!xb0tJ zsK6Q+t&*UV{DqeeB4|$n11}hZJ@|>C|9E27*IDJ&KH>LKsbhW@mxWt63-wnpVz>jA z$Ac(JbwX%}%4*VdsgHh0lMt!_I>3)$2d(0GznCYEgKSP7h|847HJmt#0XdMd2!!$G zL)+IKQA~0QS8_@;h4q4DTp-mTmdL2$?_(*<(NME`2vzc~5cg*NzC%JZP7aPvnc) zLQ{!+n1c_)lJO-QV-WjA9*NGBz6eu z1&~M70EW~w5;sk-M3kAIHTs`9x}*Q(o8uE+2O5!07>*G4q=nLCt#&;A7rD9*$2M_^ zD!Ocuwv1-x@tEa@))fk0c%yyVqk%f8bt0hIw-mk>s zeC51~PoHvixfI(Z%56N$|GJ(ZEv-?TtyO!qU+>vO7_NEuie*!3?^0tW8}d`6tG zF95Kc2{M2N2(N*;Thx@O2b>SNlliPuo3-IPzH!;DSygvG7O4w5e(e>d!32^QTAjIA zH$jPpc>8$ft)kU!xOL8v5}fDGtqr$^x$p~)6{=j-?FeE=j33~KIO+LbZRTic0xaox zJV@qNc?f`}pu~GtF5ClTuDvb!z2$qxX}oYidY5@QsY@D|sTQv#xh$Xh1WZk;$A*U) z0KtX(G8;((XMg|-|De<|AOkw42gYm(h9Hi0zyiE>ygcA`3;=*zK-FSmgu2`{lt!=| z&yZrm2m}~;FyjH1<^cFf0>L$F-CM?MJkRxfT5kNcshBz98o%cn_g>4cA$6hsn`D2y zD;vmrx|Fw7Psy5?2NIkI7T^G^Chx3}x@l)P80^R`poec@)A!584T+p|unAP)DU3{f z46FOjCkS#t%`G5nm_P%5>3^3%&P%+`0W-$&e9wh_*a5oF)fBd^c*hH!kq;eI`n$*d z!hQohsTvTu#%BSf%aB&1&4NJRjz@NM?goBaAqw$Vh$eu)L2jDQhwr z#ZPU6QilP=|5V8^KziJ+YkWoqeyi7E+@6D-*bV;R!PD5^+P<#@seL&*2z}WN-Ps*^ zg&i(ca4xIYn+B9St6!X*bYS~ViM|xdoF>=#h!~IFZfa_v0EQs=CSa`&*%XR9%rU{Gq{PrKj|e(9N>j1<1Uk$o(5{FiyW;wyg9u`y6@ z`1<^4IDv~7Y4%+@hj#UO{QlGIsg#)W=hq2d>6`xT;U4zryIKF7palqI>6+OS8h9bO z?@yM|$-`3-s=Tf_Ky$4-=h-_l+U?`M@g2Xlc@adWD zlb-SM|M~GrU(bnpKEomqeroFR{)C%7^Yit;^%d)%^79q{&P9LpOMmx;`t;e+aRNQR zE&uW-9`=oYr9I#FXA#(QpZA@A*nJ;l0$1wYi1G*B^%5TltNW`J4aw#h=Eb z->1OkzO7i1hrjqK-oLT4_Psw<7C-#Qzy4Lb{2|1s1E_@M8X#twOE<8fL34KsCS1ty zP9}#BBN~#J$-xv6BnU{@SfiuIk03*e97(dI$&)Bks$3~D$B36OW6GQvGYLzY94+ur z;h8QtZ8_$NWT+GPvrs>nDQ>#AhvaV~HHebu$018d*+qQG#4x<|l@7=wB=N4X@ zxbNV|f#WWYJWQBNKjliFPQAMI>)5kv-_E_e_wUo`dcCW6Z{NV^2_s(Y__2HUlP6oA zk9jj^hZ8A&wun(<>D2%RDByr5wI2%?7~c9`LW&D0Y`7^8vb;)^iGDC3MY)@b96I8G&ASLEr%mtX0rx87px|2gBB zWJtzG(313NmYIF`c_t-)r2R+Wl~`tZlYt82lwg-mh0za({9I^enP{e|=9+A_>E@eo z#+hb6V?x(re>vu<=bn7_>F1w-1}fE#T=kgOk3kA~*rG)qnPhyFHCZ2&3+ZR6Xn0zw z<))l=S_GG1!gg91p7`SrnRKSA>Z+`^>gube#tQ4Kw6Z#hor317>#n@^>g%t$61tV4 z<^iebu|ziN7^Lz^dfB8;RtiymomF(Frk-Z2?P{Qg3e%TeZtx(gvX*OZt>>nz?z-%v zOJ_Zv=mhM%^ww+dz4+#fm9Sb7i)f+r)U|8?ta4d5zx zMLqJ2+3v<1ckJ=Urb?J9rcY(6@5v~qtn$h%zr}A>{sJ7ZvhOhqslf*$Y-t<$k?dc? 
z5C<)^M-qEm@qv@L3$oHnGrh4VTA1AO)Kph(_0=f9yf0opo+tB=G!rZ~&N=J++1XPv zJoMUZcT{w!N6VJ9(@ocHx6<*djrHDq_wD!JFKTTU*LeLaItqBtM2^s&_|E2;8w+KEV5wJuDC#q|3~Y+wD?`B_wVEb zAH0H?*QGhf-g=db_%{NqS{Rn|M)da@@lnV*$t3@iZLLwV)(2C=1zR1!=W5mqeBGouqo@C6&fz#oJ(AS zPG|C<5j*I|=kO6;ml$FZH<+p_YHW~?6lDFFw?!~o@{*X`|6dAqM8>b7F>%{tV;jwu zFp4nHhI341fhw7i3DTs3v;xH?9^?r$jLA2oY6c}d@t`2q@)Dci<+zpy3R5jFh?f8b zZ%S1UDue+PPRz|E(C{EDJjXVkXoe@Q>BL`BlU9@1!2L2w&T^Xbgqs{xCq4N|_`nB- z2qb0r;8-P8s`8#htD|dNNfqL~YLC!_*FHP}3MD4ekJ?Nm8aVKrFnkCN;u0Az7dp*k z!fT1i{N*OXumNg1N0!qh=5u)Y99b&zRUu^MIakWkmS#!yMoN^U{9eN5 z37Q((6QB6hXI=O?C?CQKh|rlPEj5?{T_oLtzigT#zJoL5qdu zQF-IlL^`KI7TqI*!m3SXMre7Pn1nV3(8R4qHKYsuWj)3^%Vz?ys*r_OGbZ6yRQ$1s zuCkwQ_sie@@`<-h@$KxiXVc;ucOl5VLxPn%|Gwrn_qj76s!WXfPd-wDtN?`$FRicv zu`bV$-fhMnftk_Jt=FWoq-ZTcvEJ+^mAu>B=rT(y%VOdebXxSUj(5!C?*Lc<0tRlf z!Uj(S9vDASl2YXA>EI?qSX6MMFiL}HRkynJmWr$;Q&$By>r(^4=!8|Y-_KDj2*l`^fY7_vdm zsZ|iVV*6CJJU;p{(v4i?jq{u1lGXXspbm8yTUVYk>Y2brHn0Mde9E5|+JT4uR-!-X zzH~`a%d^_Fx@_xdQTzJWq5g4@2VB4c|Hnhs2~03gdA8hF2Q}8Mob{q~h;@XK7P6!sV#& z=+gG#^o}>w8cgw)yS$4QFC4}*K3w`VY1Ip4`N(lta&L(#EagKNV@Hc1s8#&W6oyWa=a}O1%=wbIp z6Y-IOgk-hjr>e|*{`31t(a;_L_tKkQWr0V#3jrSRwI5gNZzuHQZy0&H$CiU+igfNZ zw)@`m{`bD$J@9{TZ6j__gTDm(>6g!ZmZg61;%5EgY|eH-4_0?npFPi2x^OQ&(>YfF zzxc;5{_vas{K`E4_sP$G^|xRBS4e;O<$wP3i!*AZZ@&KbzpLj%zu0|7eTj!)*Vh?K zCVSckdE5seQosd5;eZg>fDt%>6!?G@s1y>Iff{&$_p^Z>sC<0KeRgfN|Df|7t^nHF$$In1lI8f;`xReqw@wK|pEOdMvn82FNq92X`>& zJGG~6PWXgS=z~%?g);(#|Mw`{;|knUgh(c61@{>TsDw*6IZYUaW_X5ZNQG*+hHYVm zzojVACw-71A7q1gIn{y%*lc4+hGm$Be)xxM7>I(H9d7uBG}m-mXkgZ7grJdxVz`Gx z(}#dKiIg~qmUxM0QHY-u2y#e-cKC_rW`vKUhmd%6l30nTn2MOVima$YcQ!2Gc7<3t zhqO2giO7Xrn1?%9iVky%s`!h**owlqhMEY6t7jz7Pzs2+g}4ZHjYx{TC^WtpjMP|- z!+4F&SBx3QXIaRM|Dbq>iid~P5{8c$jlC#?)|ig!h>h%cdfK>+$Y^Zeh-Pr9~qM}8Im-KTRQ;(A=q27D2vAs zj|3@jp?HKD>5`SxZk;ldNLiCg`D-_slL5Aq&<7c_Xp-JoT;V8=6fu-UsgQDIlu7xO zOBt5I)|5^OWKelV0(m6L_>)!%Xjy43TeOt{;+13xmUbzYc-eA4rjvQW0ZPY{+(?zS zSd}S>8CRK$|6N#c2q~8@*_EI2m`$dUk~x`_S(%o3nV6ZGnz@;r*_odCnV=b(qB)wR zS(>JKny8tYs=1n?S(jfiY(mI%i1>-$xDW_&n22eRi^-S}^n{RklQ|Ha!a1D8S)9gs zoXDA+%DJ4(*__7to6s4Z(m9>fS)JB-o!FV3+PR(F*`40`o!}Xs;yIq=S)S&3p6HpL z>baim*`Ds%o?-!;|3;BcR~gkeO1PPu60w`Sc_m&cD!}QS0y>}sTA&7cpaGho3c8>S z+Mo{lpb#3N5;~z2TA>ztp%|K>8oHqz+Myo$p&%NfB08cZTB0U;q7_<}>eO!-H<8Oo zm?;^G{{=Ur{Mnxa@}F^OqgA(~Jldl^`lCP^q(VBRL|UXqdZb92q)NJ^OxmPQ`lL|m zq-RNNe72VI7(@3tDftOG3#OGi+L(74red0xWZGZy36I@KT!m?vGWwQY`lY=&rgB=Q zbZUubil*I2n=09+G}@AIni@Ddr-E9igsPWQil@T`qXvQq*|$_YI>LIqDWGBRLQ9W;i&|8m|PllNcgCtnnR_U ztEalFN0+LoW~C=Nm}|u=Bu*#qSXqke2Rpjx~-Awt?UY}?mB1UI)spdlJgM{=8Bb|ny%_fNA23L z@A|L&6|dten8I4Gdy1R0+NavOuPx!P4hyglOIHFrkn>uf_Gzxh>V>A1tPIPo5BsqZ z8?u=+u@vixqR*rt!bI51$(mAnz6@vk1V^cFZ;7G8?^j0 zvn)DzB@0hD>!~`6tP9(-ZUeMXE3{JkJVbkOr?z@F>k3Jmv$3kN3fr_j%dAm5wo_ZS zyi&C?$4*$AwTfDsT&tg%;0JICwk#{QWP7(}o3}qmIA^D}HJhe4YcMg|wUFbr|4d7_ z9c#CEySRGWxHj?=dBFvywkV2HKrh<1ubR2{3b&ABt4`anj2pU+JG#@;5hkDqe#&umnHAkVve=YV4Fuyuzx>iRNol9W2D`3%jiOr3%La z0k8m0aRNqw0el>q6c7Li&;ljE0$dz+lGg(?&;e<@#*mD{nZg%MOt2iBzEe!axB0dX zYzV|_#PFM=9H0XvXpU1-Cx%diTk0X`4_r#u2YpvomcERpLG|6+`jpHdz?V3r2p z0|$TvC7=UDZ~zqW#eP`;crnfDoXtY;&V<7OM<4@VBF~SB&u-I~?fl5Xtj5G#%xqh( zoobkyJj90!$^xuX9KZn4?9M?z1J^7BERX>@@B$Zr0!_gI81MoNpaUh~$R|(&7eL4w zkj_9h13usbEID76VPwGyT(kg$PC4<8jjWbzgeuk z?Ka8}jYkn(0w%Bo27m($zyP)k)x50A2O!fOAj`9C07eaD|3`oWL~sL3umg&10w=%( zMxe@gJcoms{3n zUB#~pzzR(gZ4J%HP0c|d-vPkd8BGBlPyj3t-x(bM|KY6QJ)qtNAi+hj0~fH`tgQhZ z(BA5e0WO~13lQHRz2A6&0JD4n6hHxfNdf&m14iHjJ;2APo!};b-7kO_D<0+TO#~ot z;EcUk5+3IIg4^?X;d7kK2+iRg4!UvK&~GgS96$r|eFO;51AGkvGGGKbe%H2)0QUU@ zvc~}hkmmuQ0rBnQL?F&RZOuv$%dw5f3-AK9YycWi)EzM5dF}%m(9TGZ;78B{+07G6 
zP|N#`1PoBump#+9E#~w~=3zasWc|dwjpm%Y*2VkbK5*rx+uw2>)H8t8a*hEjZs1Gc z_~S9fwb-=%~zNo;h;UGNR&AhJ)R|2f<+6w^9 z2awn?Z3NG4&e|Q(7=Yhc?cJcR>@jiO25tgJ@Ytoi1P)!)M^MgyjM4jz0^)qtFEH^; z&gGXr?yw#1Mqt<~Z~~yd?n2P+@E+3no#6Cd@AhuH_-^5ryzji-?-`!fwNBi(E>{!) z>a}dxJ#Yatjs%bm0;;?K1#sh*&C5M713Pf>(*Wl{7{3e?CP2!N?E{{^^@=Xr`<>br zE#nu!$S(i_+n(4kAlm2-1FBs2MKIu^KFb|I*Z(+>0V>entPJ!)FZ7h`#;RKM3(Tqi z9^5lr!=K#YMwtXM00Od&(O(apXCg>3E}5X*Mn02;mZH-O8hp8-@)07f9`eT@M>kNAo2xI<6$ zj(_G!o8}3<=Elp=9WdIH-IN8u@hR?ZBhc83-Ol4r+rB>9GalYNU;-wf_4FRpW5 zlqprNWZBZ?OPDcb&ZJq>=1rVAb?)TZ)BopBph1O-AX=2fQKU(gE@j%(=u@Lz;!LGl zwND_d%(6b?+Vz;%uVL%Vaz)lvJhW-mu1$Lfj@!3!-5Q#ECzIQ|ey&XUSi|pMz<~u1 zCS2I?VZ@0QFJ?SAM_CM{9gG}#tAZEh~;B$NrpL6Q==X3Ap-{1d#00R_oKmrT&?Ler48!oHj zx++dNvXm?9Ikup)P`c?dw5~e6`2Ru>JP<<^am2#J)9*wSQ&e$916y?QMHpk0aYhsess_PAPD19Iqp(5@HI!>Z>)Pl-3-w5pa!M*0Gm%9sv($1+E^YMk zOEAL}b4)VJ+=vSwv9c;iAtC&#tYeBiGCDeyq^?Orob-^#D*N>FPsgt8a!^7GHPlQ* z6IFCkMjLgsw={obv(2#JEOJ70=B)EVJ2`B_knC^-bW~Eq6ZBA2Q&sg*R$F!TRaj#c zbyhxWZ7R|=C#5t(J9uSIQ%-}WZb?t|jIvZ@lTGZ@RhxD8Sy`i%c3NtyeHBefb=`Hj zIN!t+SYbP5@>64JRrcI-ng4}$U3S~G_FZ`6m3LlS>$Ue@rEp~s$m6!{cC20l{ubQm z#J!HLQ1L}~VTRRp_hE?Xm3U%`E4DZRj5AhvW2W@oS6erI{WVjP=M4AGgYyJ+PmUXQ z8Bm8KmU-rjYqt4joGZpQTaYOg_-BF(Hn`X)kA3;*K4YeNX{K{_`e~@6{%z`(HMaU{ zth3gdPmkGTGvrH+EZO9t@x+ehs*zSZM5UW{`)#S?mV0ixx3>H4ySwgrNRfFBI?1xl zF529+3pW*R#1pT3amE|J*gVK1mwa-(ZPj~kzQY!q;FIknH{rAm7v0LX6*v8K$5U5* z^)M-S{q@R|x*YQeeg8FU;J*V`d1aJF_Z`I2Q5T+d;)^%_aoCfO-Ey93Clctf$?hCw zx;_Ox^xv}&ba?K&KmL30!)Hx-@|TyrdFOu)J9oeXw;pZmwU@t$;JvqBeEjp*|3v`$ zhvNV80Wg4go88{PHZu0ECzCWTqGnI4SC2Jmd^s4sE0hX@yKgLQj(Lr zMl;Y5M=r>*J@f;O*3#5O141#1>X4$iQ20j%3NnydEMzMY`AS&IO?ev_V+^!$#%WMe zmywjmBrd7RBc>;gO5D*WK}o46im;S+I^`)F)q!tq79>|HdazopV#DPKch9vD9B@t-@L{y z2>Q)~njs8?c*PuEu?%x=(vwWgBNX?@M+Kttiuc6lK0WGBkcRYK$=d)ZaM?|iQj!Xn zm_tij8vl!Prc<314W&g#xlBLKMWgoQ=*#l?QJ{)cs6(Yu@~9xtgQj$nxlD#CD1nAs zxO5J;h^b7EV$5TAq-Kq zt5tdQL9K2Tp7GRO3IRD-2Nw3QtF2*cUHe+t##XkorEP6(dt2P*R=2z5ZEt=1Ti^y) zxWnD3)=oo^D>zb;a5V)ehWOmgaN!}-UEWF#k9$xCK(lb!rzC`Vb!Q>Jp2t&Cy= zSGc4nr6ebylo~#9z!bWahbg?g;S}pNtK;43Ux_tOXP(x^S>)1=8CG08=ULBt=5wF@ z{AWN1TF`?gbfFD>XhbJk(KQZZd_Uj`Omx%BED`f0;W~{;paPiS-Rqj!3_Tdf82>GO zq^El4+`=u@hqbF#Y^z;8EBF?m8C;$$HmD$H9<{Zb@(9DG)7)vk_SMC1K4D!NH$}L|NjPfcqes0 z(GH#Hotv?#kB+pYFJ0|UXL}c69`${z0^gB~jJ|6f$!hGJh+Gf2z(qK0v2UvAMnAi< zSEP}(C%)~A-yh6nzVBcB{rSDm zn)}C4+RcdU-LFCODd2&MpaL|QfGD7X4rquQ;s7w{rex5x@^hWw82>)>E4cMLzVk{r zhkL(4Vv**XzxoS7cgsH!q&E)mgl*HMs)G_aVj13(x@kBC4)C>}qreKZK+&7AdQvwI z1dp^c8X$BC5G+9wj0u%n9f_I`N{g=Ovz* z!KLy&p8G72Lng2xH^2iH!2_)1>p^At!O7@=1(?Hu&<})w!Xb1zB|r!nID!vkAXPhw zB$$CFRGo^TfP@%`gSd~D-~kyBh#i=LBfz~voSF~#00A(!NE@8#MkAO36!?J-NP-`r zfIzgrOk{#fJVp~(#w2J)*eOOuw8ks)M3}n*ZRo$a`X+g322ao?tV;$lq(!3mI2!~z z%~3!0W5bO5#bklQ$gszVSb%vnfhFj}&Cp0m+zdX1#B(fyBbY{oFo*@%NNs$^XFGvo z>_{V6fI~#a6ClTtB*__&$q%4L6cC6GP)YjHNEDC(Z#18rp~jP>074u}B{+eUlt~FN z2uT!;0+4~JbVL(afB=X|rxXn)Sb`;BMh-y4a~wvlO#j5J49kbeN~JtU;)PXGo6BiHk3$Gp-9MBfS5FaiRb_pC`T5sh{y!X z0vN`PB*c`gh##m-$Lvg=Oh%S`wIe9X3J}0$#DJl!NzjymWdzG{Oac^8O%vFGAAn7F ztV|;?O~+A!+r*59Sk06qM~svK79h?OC`X&*0Te(83`j>NFoN2wOtmD(*>p|}NWgT& zM(W&2)uhH90HfF03{(Kaxx_oVl%yztBO0v1g=EOvxfER-Og}nI#DtmCTNY!K#smaU z$dp8Q%m}Sa2!UYC9bigy49m$RNe{rs(R4M)EdNB9JOU9YO-Uq5iC}_A?8)fN2%9WT zMEn7Th(?7-9S%sx=3C7Soks4oh|dhmctppH;7St+fS+W6wTwrJC{39JONWR_$W(}7 z97`6M!I%_JaLXh#cs>71&#AMvyQ>FQXio`z&uziQi<(bHc*xP~#SQ#L#mq4dxJ;yE zfpG*&3aE(CbV@VLMgu)k-<-_m6aw3H&<9m4+YC>P0m&bjLCgfpu(ZzIe9aj>Qx=#* zBk;-q_(l_r%z13aopen>_qUHr48T&D?Crgx`tGMqjO5aDz!!N8vl#T!P3B-Pc4-vIJ3|DOqwwLhy|cfgOE0DX= zEK=K)Ov;4B2JO~Yi^gfJ#v|y{sN~7xjLyh3fm! 
z*$LoI>(pANw1RjH*wp;Y+nmh*H2>3u%}$ps&InM>QPr`Dbvs3RRa3Z2S^Wh`@&W~< z(p>e}=K)!f9a(Z9I!hos_d8kq^gs`i!<%Hwj1WMwBmk6L#LEQ9IebjfNCJ%HS<)oS z8I(tmAXI}*Rys_A9Y{dZt=zR#+{|5_v3%SRZ~&1!2zNY0s652R^+O9tz!0biOk~R# zEnV3az(t(I3b5TAMXr@hq**c|P^bs)?Xi1 z3m|?o2Z%^5HeoLA;xDdaiCf?ct}p7svdW3#TZ~V?)x|Xi;ZXzP5zfCj7T`IiV>@o- zmc(NL>thOZlQa%lLB?Xb7yvjl)B-}_D_?EMDjwt?UJ^J!fiIKgE%xMEzGY?>sa%#Xqa*}9 z@VnyU;b0!*Ep21DsDc*!QZS|EWo~A1R$wDTyi&d!SfJ)#2IgyKO)T=4r!7J4gFewcZR1UTe30Yq*YUxt?pfu4}fIFuTrcA=_)d=Hop6 zYorcr!5(bFE^Na-?7mKH#pY|hZfwVXY{-UeEt_o0uIwDfAj1d)tG?<7@d|+vEv{Z^ z(eCQ?gJ}!tUSSafvi>gCPU>&&WSx#}vz9+3a&6nb?WWFcsMc-F=55@bZJ-YBz3%OL zd4Q4DgRD+&z?|aIZf+~y->^pQnI@KKIOMY4<#|rvsKmqHULkqyZtxCo1J-WxerVq& zVd75eOseXuM(&b+Z{?yrt% zPT}9L3gS|jgb_mn8n1C1zi}MTaUHkuG^p_&|8XD>av>jbA}?|yKk^{=aV0Nu5*zX* zfASbWNuOOeb|yCv{RFg+Orc_iptPKWWfTZuQPkEJJd6u{MjZgWHSNV=7`I#?yo4_ z*Lk5IdZI6SfIF=x2WEpML6(e(S$}?9YDf$9^{0gYN%+@DG3SAAj#JfAc?o^iO~FUw`&* zfA@cXIe`E8pMUwMfBV0G{Lg>=-+%K52sr`=5-e!&Ai_Ti7cy+<@FB#A68|SstZ4Bf z#*7*_a_s0)kVKFoN0M|i(j>~1DpTrd>GCDaKzd{bsyVYJ&N?;g=+Wu(XPsk0huWD{ z)M!yxN|!RV#kAWws8XY-O#1Hhv7{ znQQkh-nno5)a}dnuiwCc2MaDt_^@Hbi2*Hc%vkZ`$dVmX_B*-IWz3m3bMEZ?l zhtL;BuTK5C_3YTUTmR_&JNWP=Nk$xR{yanU>esJq-ySA>nltC)tpA^LXCR=@f)Y)N zRA^F73FwqlQyI7wf?q-N66h-_9*G3kXEW=ke3dr9j8co`doNFIyow- zrJ5=ws!~$9>XlZ?L?2D|$;6XR`PuqqfB%t5Aem-TrRJIoHvc&6n{U#2EQWV#sHdOI zHmf3@98#7dw1XkaXlU4KyDhifa!V<=nu(%84w*c-fiPh8|Wd?Zp^t%xsAr>v>tV zph2S9$Rt}^av>N2F~T?4DQZ6``k0oTS<=H#+)c97J@74O!w>3^ZdHrv|x9*p#t_AOkFq*(>C2Yf8iB+t_5m#vE z#4Ouw@x~c%e6q(O-)$M8e+$0a;4d4FXU}*AeR$2H+5apw=X7dOQDd;LHOqt@PD1&p7tk)A)Gg-A7LO&*6hFzUP}aqPh9yb6$Sx>9-I1 zysf_ve|qwt=lZJoV+qSy%JO|xEQuF$8OB}ma!UIfV;O^~HTkqeOI@SY8vi9QIL7gS2D9TG)h4W~ zuuYE=gwP+enaDtLbDI=GBsjOJI7UtqlI1ifI?t(2PI|JP?tCXZ;n~Sg7Dt}vLkQ09 z>Aq3wv!6?BB|rlz&{`6-paT_&K@&>Lh5zcZp|^Y}L?hZuiBhzph|nKKEh@~7a&)6- z^q(=+I4gen;Z`a&1gDMK@1`u!}^FkZHlv-kF+UHa~jT_k~5thawG|Nc$Y;E zwWvWgDpKQVPo=JqsY`t-J^3kBe@>OERlO=!^J&$r9<-~ce5gXP7}l|Vm8=XsYe6-t z*0ln2q8EK@N8>8jxxyrCb^Rz_KxJuY&S3s<@_x4F&DtE)oVui4oSU~e27Gf~<* zX9@_hiM?YUVH(r8JxorLo$Rqnh~5g}%(JQ0EKuXyQ_`+BwDwI4Ygvom`|`K6XVL0@ zV=LeR6Zp2ZH86s=dSC_bw!sb_B3RA(+b#lk!pb#naWAal4Rd(8&i!y)`Cvxql6YTx z?duwQ;@AAB7_j^iTXvgi*n)YNr4M8*Os`2^$nsd5>-{mE)a%|k4Y|lczHfe$ya*+? 
zm&pJQaDk(|i*m$kPQHZz(joMz#&xy^3Q@SEW* zXD!BAmmR({o>y`R5AT`JAph?3p96hYkCwQ9#FVa0x>g@LlysRd=8plj%UwFw*v7z` z=J3Y*Smd1-$e#9be?x6#QIDF`rWW;;q5Nbkv%1x;el@JIEbCa)y4GO+@~v~dWij)5 z%)X}TuY(P2VGrAi#NO?iW9jBB*16fqb~d!<9BpY&yV`laHlXpB?QQ$H+uRm3xC<@r zai5W|#!NK2Yr^OiyV1phg|wt+eAr4`8jm<`?2aD`>QC?6vqC;~Q2~zXRM%S7ww^V@ z6Rz-sGyK>NpX$SHyYOD0GUCOi_`{d2abt7*o*VzTut6?zW^;VxPO~=3ubs=3Kbz$) ze>t|lz1(k$cnvkb8vmegZgZXUyyq_&G0^Qw?wb$2=te)fUXy;}b5EDL0&H}C9KE~k zP8xx>fF`|dT+OW)uj@K?EZE`H=^hKb-~2vxlhK}bR(p`Lid;|}+^)4lE%*Y&JD ze)qj&yzhQ*bH>kS^1zpz@Pj`*;#;2hhr*oNjemTf;~e?POFrhx#X07Eo_U&U{&hj` z{N{NMVx*Ja=pj8Z*KU&HoYW3Kjeaz|#T4(VQ#$MQJ`)6U{PnQ&e(V-B`@V5T_O~OM z_`_d2!3{2aySqL4&VN4iqc8nnGoSi_&p6+&ul?;?pZg9EIpDu9c<_@y@xM0z$9_{YEg_76S&Ie-8B^B?q`_dfvsTj}J|PrFVE|6y z|Mj00QsL2Uo&j#*0h(AEZOzgpAit~;PB5BJv>2B}9d@nZz)&F7Row+*Un{?oV zc%WHC76~5T9!6gu!l3j87$6QJA=Y5?g`lrh9}g}f4K|`9KH?4zp|7#rBub+BY23>> zLK1G`68}n}C(7R^?wlx&A}N-lDW0O{RiP?kVJfyF(Y+!U)?63DqSEP$EY>0}#$qn+ zA}{u$FP>i00niw(m;*jt8p0kjPM{lBVC}`B9BSabEeIZdAPDkdAYS7iVq+n0BR6(q z41OadhNC!+BRQ6%Ir<S977q;!d)Frpj31d!9oN|(4^yRo4%63kKr1@ACpGdiOj(qY$0 z<4AhlG=88rrld-)BsiL5OEN;R!DJ%HBu&PoP0r*^)}%}BWF@|1PX44%0%cJ4SQg|$jwM-^rCFXO zTBfC14x|FcfkZA~x=AE4){3n>U<6|1)a9kTIa7gvWJpG1Ne1RgqGUC$pb4@hVy>W0 zR?uSh9%J$(1qr2O;v{7@W@TFDWo9O3Zl+H*W>HS&XM!eDI;A=)rBt3~RH9~Qx>;(n zW}m&I{n6tmssSI+rft?HZr)~C?j~>crf>cxZsO)z4kvLIr*WcXTPCMlid56BUPQ*_ zL=q#5UF7S*M}8#M`6ce%m`jPIV49>%It)rWW?>R0dZwp(CT3jEecC9I4k>X8 zR%mlVB%r#bT#hL1i0FvkM@hY&yXnr#Zg65EVW~rX)JJw`px+<*tr>xE@tw!l;-YTx8}Q>u$F1D-sZ7_X(<}(vNkKTJ}b0FtF%rlomT6aUMsd{tF>;coN{Z1dMljnDY$Mc zo>ptQiYvN)Yr3xMxwfmiva7m=sJs4YpT?`8-s_>}tG@0lzdE2B)g{y|YVKGkcD`YD zM(U*2Ta4-+rkX{=4(5-7s>H5ps8%e+#_Fo7Dw1MstFkJ`4(rDftH_Ql$(F3io@}zB ztZlyI%C>CEx-785tjxx2n9{7xQme1xtj_N2wT`RLx+%~Gt%K1RO6}Lq zF74K??QU&F?nhnbZorBSqnfRwo~_jluXd_!!m^axk|f+dEZx%R##*fNMlY-0E%kCN z-(D~FW^dqbFXDEu_kQo>hOhW0F8P-4_?|ENrmy9yZ|Jr!>;FP%=eqCcrtbXK@B3Em z>gF&0#xDQbYyZmY?FO&_4{!iK1Jvm*+2V?Irl`SYCjzal@rGnpOcMn|T?y&sQ zFY1=*{RXlA`f&di?fwp}{}!6hE;6Be2-^?w4Ss@0Km_3NP_?u`Aqy zD?sq^wk+v4*@!!%gAkVNS zK5ig0ZXxUN`XcfnC^94Wup`H>ZVIs^PqNffa_PFOCI2(ACL{4CcXAV#D*(4^6pu0~ zNAVOFV-=sWDw`Bu!rKCGWbit$7hhc+MQ|YY@Gs9S;_5i2$SF^TjyLMSebc_+jD^s*uFU2fh zG)8MQZ@;xKld(s~bzKK{GyAo1ANOAmwqY+fO*c1lKeu8>cQi=%Vo!Hf zEYC6m-GW;WxK`k@)?M%!Be+~IEY}Tp@+S8)-*tpTICNLIgARlgqb#N4b^laLpiTIx$E_Ml?S`94?D5rrLklANuhONX*rh@ zuToqzn1{JvD)^XF`&^qbwP$;^r?~}ldzycHx5K%(?|Dv>ySd-_xu?6j^D?>Txw}Wh zyQ_PND>}XNbi6wCTPd%tgbrU(41Upm1H{J@thr@OkspZcjE{EI6*szbcQ z$NG>WvZuVtYuB>-ce6KzZa;gp&%AF} z`?YI3ns0l!JNV9fJGlG2&WPi@(ARO$6Fr_^Fozet(QkX5tMr@$dYrR6iA()XOMSk> zyH8&|pQpRM=eyQ>y}xI;!ToJq5$&6_xL>fFh*r_Y~2g9;r=v}nq6-I6L@ z%2XiJcS#2-HA=Or)vH*uYTd@QtJkk!!-f;CG@XxWJzlI(yx-{w2 zq6dcd%(}Jfp>n0lcDh&H+S+>czTRzfuy4e`g9{%H5%X}!;*2L(&U`U%&t#)ZpN_ih z*XMw}sW#`0J3H?Jy{G@{j=enFwoYlIyP8|K`S&}2`^HK6VB)~y>@TOhzrS+$tWhW6 zfYp%&;928|7oK)8h}WHi2|gI%ffTMs5PIsJBNbe7ap+-(AhstWZX=e68-*yQC=q`K zJ!V;R{=vxKWsHqOqm4J_s2Gkq_UPk{DlP8paN;HAoB+xP8GeN6x7>>ZIff@V zfBvbMem@qfBb_WQNsyuit)`J_+aPMymQhkk>6BD{si~KoZrbUm35F`_cx$3*4sG0} zgJF4aWoTiUXm08j9SZxq?(=gQf{N zmVE_6T5Y7w`Xp+WQf>>XrVD!et+?clJLVA?w9!9H>kev{RqT$%I+DgU!q!?_wX;|{_#LLPg8V6i3PKDX$DGF9mN});Eo3~b z&Z{@xCcXch^Q?PiXy({e2R#&GPQPe4)xjd&qT)~!mhj|*HoowpKPGo{!^-w3>D66v z?G@IdN6aYUlXiKw*=w)HD9qJVLCqRHftxt+ic9bv0v00_DcQw#CeDsG6GueK$c0zDsu#6w2O+ zx4{k;?|&d{$l`=Xz2QhNa@H%M3Yo^j?5U4>mt&l+=A^%i-LHT3+ghIt(!Uy#j!2F2 z;Sc{q$V1n$?rcl~*=)iUKqjirfsJaQ6fsyq2tKfi7VMq78pSu9P!Wq&^q?5^Hm04m zjEL}oo*Gp+4K}(Fh1L_2;f98=mW1z)c+BGq$MG~B&M=339H06Mwv{9S5`ICvq5Xh-qbhfoNeP-VCS#1H zEN4l}+Q^EMxoV^2bV(eNNG>A3Y>^&!Im}`j^O(g1CNi(bOk@)5j?ZMKz|QxYi&0FG 
zpd+9Vxo0&=dh?s$#N{Q8=*a?xvLr`L=P0}6N>kqMfLzof6`?rK4BBLswalkJWhwto zT=DXa0R1O11u9U*351~2IA|UR+Q#B6^PvzurZ6Ll$7wYdZX;*7a)G%KX zzO7~zqWVDyA%*FhX0}hHj!YzuK;ys2npR%zbZIGXD$l#FlC`g7Z9D_}(|!Ni_O`e^ zBRLrRn&UXuv3>IGZ>btgUDD^Y3U#JOJgYv-URIB1QQukBirnfJWuAJ4Z9QRY*TAke zJocpGVQ(77^46BN=S^>J#kgA!wKu8njW1N>E0p2dS3ZF;X~3+T#urXhe0svwbS25( z0;evpu?4SqWj8>()@r*9QSW-qOW~oq^1>K?3p6CSVGVnDFCA`6h(B!0n#p&!Y}!yc zcDvgZySN-Oxo`q7%*NW;S}!kbbnJ3H@j|TRJUpFvlGR zlIcy4x33D$v_>@D=~C+s4xB!GNMY+)Pw*uoYFvX@8>M$~1o7kcrEXYyM#qq)Y_R)~Gq{9ng%`_eRlbe;RG=W@>( zA$F#7u+nXAMfa)QdM+z21_g%Tp_ zP?P%M3qf^*Qf=#pzq+jsFE+%%3+n^^n%KbxdB`PhkdXsh9%^mw{GyRd%e;?4?9XcT@X`)m*Qq`h{P>^2e!A}ooR<>#!*Rfdo{f5F88&`^^Wp` z>^<+!K0DfB?dy^QzU9Z}H1c2`b7xX-=rhurzz|Jm?&2Y%bf>-LIU9M`{o zc7s-}_?n&k@4M!^tdS3WzPkh(7t(UB70-AUW1k^4CrJPJV|;TNZ+?W+NX9g(agb^x zL+Jxie=_7xf2sODCCQ-1{Ppkt{O|w$55a%=Cx8S9fC%A#2Qh&DSAYcAfC`9b52$}c z7l8;^6axr>7?^<3=*o1IEg-XZ> zS7-^sHgnPD6WL~ez5-nN6<|@N4A)?WXs8WosD@`KhB07<*Z_xdC>Uut4Rr{Ic<5Mp zXonM#hG=MpVW$j&IEaK;i0+^aADD=Ws60)W4paYV3G(oW-yn&UIE9gDiIv!hnFxtf z=!lkp4w@K>o;Zq47!#J@5Yh07TG)iC7!3o_idv`+(eMr`#)Um0T!q1T13?Q5VGOwN zi@x}a!Z?h$U<`|>62zE{$v6ta7>vrejLn!3%m|ItNQ}%l5Li%*4>64gfdwd%jlVdJ z;uwzP2#)4Bj^|j8=qQfq*p9;Jj_`;J$M_uT=#JAEkKl-o)rgJ`;f?~~joo;S1`&<{ z36KRrkOL`@1sRUaXpk|$kjlssUto^Hh!FCKjqr#Nm;ebNHfE5w4zvgpUs!pIvqr^0 z459D^k&u!s*^36zus zluBuoOnH<#F_PDod6O3d#t4~=$&{cem3`@zcZrvZxtW@Yn`XJ3 z)M;Rsd7YbSn%cRY4)K_H840Qxl;QuWoDM+)WNDt@NtR5Bp6HpL-r1S#d7kn)oAg7>vo4 zE%BZOags@j5xrTaW(txYITB!~Z4$YXKZ>QVd68m=3`<(4zJR26nx}Y5rF`0_A_1pa z8mNLgs6bj1I7tk}xr=(bs0jZti)Q+$;kTP;s-^>4c?jB?3PGqqN})h!r=Cg+Vk)M7 zI;x}!6yfQW{6(Ql8WCFn33Owo(PpMM3VG77o61B`*p{g@+LS~3qsd91+DIX_fEBfH zlAlnj$eOIm>JYN95ii*Wp9rbq7m@-on2$HBD<%ym;ceeGq;C4A9m=3;d8sM-B*jpy zp*j@jYOJEVtnT`*Vgd|*A*amhm{`D(9od_)8ld(fn2?vF*T4~AI-OUFlbsn6G1?l% z+KcF_5PAx)5F4=xC$A_v40f6kIoSq|3aL2i6#seR=buIT>?s=LS&pHK`E+p|6!q&;b!!8!~YAq-!jo8HH<*t)dZnxh4k zvb?Ghw6F!th?Us_sp2rY`GDj zdbf(Hs3TzybioOdFaey95SYLT56}vzFafOq3Yg#pguA+|%eqFJs2E$L5k?MZT4r^D ztuwK!wD}MYX}3G8l2s`Xb-S}8u>n8<3KK98oiGEUkPu$b37xxQM?KhKntsoy#ryr5D^7jJP>~y0~4?c znxMrae8yko=pq98nB+ArPJ*4UP8i^N%{VLHtM!M8G+wN1*y#H*;(90RJ`%H&+mo!58@al#y15bLn5 z!wIf(3=uzRvf}EiW=YSKt15go3#A|prJxKui>Hg)%r!gBO1zCqys%wc(8vtK)*Qg$ zjJ1cU(B@px7Hwc@o3=^&4vX8dxJ#)m?7}ZxwD!Bp!HW^=8V}LxsPZenO>5AfnzJuU zr!)WDiwV)NHG9*CYq1P{q@PN(;C#_SJ=7QtsVMA`?VGK>_?YP&D5xwsU}g? 
zc4ji_OL&<>5vfF0O^J=lb85Pf~vTV2+Qz1V1!){gCcNzKu;8j2vrJdWlyi&*ZoK2y(BHG2uzFq1dFrrts?(s&&{Ur0 zYEGp@KHdqzzB6;)4qoE2ZGeWo=6IgxXIyMG?tN@*-Xi=b*=pp~y=DzLb zvE1MvuH=Kxob1fIey;A9)Fna%w^jw%--!d@a^Cp?&3b~&}} zIvx_qu24x_>bOF_0#WY%{_oIQg{xq_s9 ziRu8K@#LQH9N+QTXKZL~yVOnMuq{=X4dJ=&F9+}i8sG8-PZbAm@Kpb5?jFDI3@gG% z{^?2H)**rN_LuAWzUwYO^v!M*F+UJEfWLg&0|x*DJ#Yg$0PekB^ZTyrgZq6sZ{Int zdu2JaTk!_3>QX=MO@H=ePY^|~?o)s6rJ4a6Y!FS~?^OR9h+7b?pvaPJ3OK#G z+t@raJ>c*02lBBERX?nUl11%06IY68E_Dt zUl2-P_q*}X2eAsLpzK$lZ1z&#kFN9SE$AFU=qLa6+TJYPoDec_1P^c!qc6PwcmPIl z5IZmf3y>ARKdL$~^(vqNNFWfQPx=ab(5KJ}pS;BaLF#u;xQqX8~K5i=%h z;6sQJB~GMR(c(pn88vR?*wN$1ii|{#j0f!^xo8r}m0K6F9LR~)ra_dMY+B8M!(i^@ z+0!S-11tva*jQp>2OR@-{bX9PK|~cpnogx!)#}d|2YN`2(V_%{hbgX(C0o|)S+r@@ zu7xN|7A{SzPPKYyP}JO-ZTI%H;6L8LWK)-X<&#RG$=s@_%clE zfQV9nP{IpwIN*T!u+Z>C6jM}jEWIE)WtD+C>4dVp93smX9FbXZM~)WpuB5;s3QxT9 zihPMF$R3hxpkyBMg-7EiSVE!+1{~r5)*yNU1>t%y?X(}Z`w9yUqS9r=))){mgBCuM zP(Uc-?C6GpDy%50`Q+4dPd>9_?h{iWIw&VmW^9qg9BGXIA(>q?X>^xbc44T`47>Yr zNFo_iY}1w=>a?lHB9rO5Uy6B>(zNn~h(nAF5KW>WVufK=T00~WP2EJ?%2rxM>=g%K zf93VoU{QrHSY(rxwOCk$QooUf&nv6jwa;btLEMR#R@yiZrV4wsKFwl2df;)tjShPsQVJi(-6^>wtRgIy( zfs$3Y-tCB0c3F%&_V{CshOu#6iNa;h#;LX~HJOzql~>$uO9i8DjhGx5pl%nO&ymHJlkJgxpP-e?yxMkM7DZ0HY%%m7F9Z%EM7X#JZn;v3WB~p^o&1O~0$zCZ*~gk`-+x~pclp<6%IMtZhj@IDuIB%L0Di7Uzi3P72nZ5f6vHA0 zWT5B*=n+fRj4-MzjNU%=6H$4uChvOS`<8Y8JK({pe#X;?|K8U@6sAyxD>Tc=+(rzT zY++IqabEOnNIe#6q;$}@6p=PXmcR|9ChkEY2=%ACikL8J=NnB9o%lp3M)4vp#7;-j zR+BoTl16APo@CqM^U(1`fcp9KA9TN-*$c{22# z61`_dEqYOfQWT>b@z@uMRpX-Q3b(u-15qbr>!OC#D+n7&k|@pLIoV`|fx zDpaQ-eHUy^rtX|iAd8(P=czcn>}%0IJ@W_at=u_=p0KB*|VMk0`QvN{3ch? zwpFl(Rb}C$1y{{l*0Z8Dt!rhgTHD%HEcQaKZ=Gvh?RrI1fEUWb}1Vi$X=$2zwEv5K8+WcNx}u?E(KWc?;*Pv%*)g7%8u6s_kT``N*w z_N%JBtZ7M`+R(ywv{WqZ0G)K(oU|5gauS+TNjFX~-b`n$Yuy-|!cOlo>4m;!ZgZXc z+>ceUPrZfiUnF9#s7h71oGKC-H7GCHR#&{^C2x7l3py>D_hu3FVg(1162Tbvxb57a zdhL5({N`7`zGZI@AEMoKau>epv!zZ?cr94RJLYkZeGDH9 zfMO95c`;PU`QOv+OvyI?KCTd}fz^QsLCRC6at*3{Wh`e|%UkAhm$Cd~Fo#*pW%06= z&3tAwr&-NwX0w^2{AG)@S z7lRvoa!sl+Xfzj^)Vt<&uYG+yQ=7R9Y#ZRpW>-LtG5J0=y<0f~x&7GZVivZQduJm=kI8{x-8cDRS1`Kt5ZhYrk-}?r) z1cXg&V;>v4+oiGpvKcdPK*!70gXVX`9sY2LcO>1}4!EW9*6MY+guo!7H+yP*g%Cuf z(-9|m$xUwViBr1bG4^8441REg2T_R00EGZfesi4X{MYFo_R52_X8^yKyNE%!-@uFO zk?UOPOJ}-|Q_gOmvwR~%_X)itv!0qaUF%!tdVZfCa8(anNJ)M<%x4Ygg@?QAYiE1g zoyv27`S;A(C{!M_%%i z2bthk9+K0YQ{zWEi%n|Icglxe^rPPs+%LbqOAiV0<%v1HpN#ZRlV0|-r+u8@K;&dk zaP^#@uSC}Wvv(3ezV^c>zQeifEIeO?82ILl_so`lt!1+J?8jgJJ0C&t!GC3p#aQ{|a5;L^<^SlDIVZCT z$D2O`Jit)1zbdOg2DCn7S+~Ix32mFZOIwWe`@aAbK#2f31nfW$6ti6W81gf=|H8cR zPy@o42*%jBjzBKFtBC^qKpC9D1H-ZgtbhvemB2%()Tt!~Ls3 z-7~vCOFIWlL`tm0h#S0Aqri*1IH8k6_FIyPun9x7L{copa^ph@R6fC&yY5QZPqbxCK(6h;s}_J21ve-~vhjM^XsK zS0IIWyo3d~g>+m;XZ%Nid=^yvxkjwQ#N)U+6h$deKt+qaar8%vpoDU)h)XC(LWqd} zTet*`tjB@afPnl+kVKVQltR=SLr&yG46=yVn*xS($c=CXc9e*Ue281f2ujd`ODF?7 zAcT120*wSmO3+7sOoMn-!x%7wQnDIegoulT zh?>-hV0#EsxJfC4Qvz*WP%PkA>_xjQtJM1}+jf8+wRk0N$*F+LW@_Wf(ea*$mD>52v7R_P!Pq4(ww?ntirQP$TV!ph71D) z_0Wu9$@=30D*#7&yv$oD%Q7ef8;uCgu z%2wnE7kp9zgi*u9(g*DT-|SOAr3AK&0^bY+3}s0NWrCrI&;pQ9y5vwTb$|k3&~dZ{ z3FQDPfY7;=R5jgH`7F)*#81H7&;H~HaR5ba#7nRYLAqR3R$bN4Y*kiO09ai}G{DSR z#nTYvR9)RmOf*fu1l1Y)KS89^QpLiGNCs@gwv7nOtE|3f!&Mj+&oaYP0}V&WvpH#X z$z9#nf3#22R6YXRPdYrq)+|g^ebI~1){d|MI+N5}VAph2R&JfwZjC#+GeOnl&tbhW z7=%`JeNiZo0ID!Fh+tGiqgRCe$LLhKFkDi7RkpFXsuX=qzmPKjcAeI{6o890vyE-D zg#Fl3^j0?&E{7GcdE1CP>I5Segb8b4rzcH2K3O!Ojao|FfmOTLrz%+{)mf(PMNQR{DwA5UO~fx` z*!99!QDsdHga}yl*<1}?WkoZ+9eAvav*^by* z6qT0*;9JFA+@T=aylK)d3e|^YwwqX5nXuf}TwBK7T&ZPKB;~W!8{B;*)`S!}&OP0* 
z^;@7j&B(=CQBB&7xZL)8Jk+gSp&i@Rb%eQnUBV^ZI<+&=3|%Ygx%(?!zUZU=w)C1oln*ELtoXzQ5DuftlT=>UjsH$+%;O{b=|X-2yoB{ zcc}(yXw64}*#o}d(JbC$TU`rU(&bGIX_!-SsNd2hQVc#}y{z8Pb%gl6t`Gj;6P;jV z9a0po;m{mk5=Ds@-rlM+TtZ1p+z@6#IK~JuCd8o-WJu0Kd+mree&KO#WLtZK zNZ#Z&6u2B-2M-QlFpix+#$u)9WK{0JFd#Pn{oiw8V;vsdh~VRl_}%IA-c-(I1Vpv6 zJu+MjWlN@rMYf1yg=16JWn}(9n@t^pL_ZY#!J|Dc!yV@7a{v=gW^9hX4OX>Zj%86E zR|!7mY#!$WG+q|&W?)|4p4eGT)?{*i=lSzwW_Dp%2IY?!2XV+}MlNN{6+ktHXMnCf zLZ-lU{^nWEXMN5HOn!(rNYQ|P=*VMHs3hqBXMV3KUI#sP;BkQ9aF*eYZs>46Wr!Z> z$b(<}Jler6TzppAj(%j4erdoHxPcbBE-q-%&FDQwXhybWkbb{tg=wH>I$kcpTf|-& zE5l$G=3y4*kS=A0w&0+S>YMujD8|>CW@8s7;+(GKr=|%sbbz0hYO#(u3;2kW_GUL$ z>hvw>XR> zXpG{kj#K*8XpQJ+%5La}hUHZQyG-`WQ4(7tGlCg!DHYt(M&WOxH#{D~7tf*+vn?0#?UW@RvZ zFYEZ9155XO!?ff6VI+g$Ao z&+!}*fK?vw5PxjFhHQ0?0c>yw6W?PMZ*bK9<2NucbCuVg2oN2g^5)`jMr39oPiqt3 z@&)H-2QOvt3Q(rd0T7^aGv}iJffGByu+a_D|CLj5!;_1H7- z^9MTY0k?77b1tuQYFKD^nbIQNML%&mr12YgmTnOLy~!Ky*%LYRVQOuts($Uv_REFEs%2@W^r!Pvl_l zayp;(Bxh^N&O3*m@NR$i6#{e;1@~zGbvQ?KzbVzFjbktebv@s9cn|mo(shmy_ZKdA zVc+ybk98S#Y9`@XF&B9MiRT|vuXK8*=|aC~g_m=kridk1l0po1fS-7hubo_H-Cu^& zde?Yh*K~C6_?K_$7?@y?R$`HV^pel{v1xXqrszvo&4SiwYFBb0>hykh@rc)Xr5_!c zW?`q-T`bRdgD3HGPk3uzS~2!?kiX`o?|RIs_@;+?Kdk8x*SHW)WYR`=cX;`Q_wAXt zTzBSrub=y{G4Acg;cva(Z)f#H7H08Icdh^Qj!tj7FMPRqdaHhjy+7zyp=dDi=MMhu)Lk2tsbzRq`thwaNh{jMSKvrg{i9`|vF+}Kt0w>NfM zy8_g|eRARX4mWB4LT`Ov-?+8s=!7nB3FdFyKYmnMZ{6?i-A8;?_kG4^{N`114x0I` zM}F)#7Uj2VMPOa$e|n5xd8+q(SFdF|5B&Wu#Oz=H9mxUZ*L_r{=e>^zJaGN!2Y!TC z_#`iW_6LX<0tXT-Xz(DygbEijZ0PVI#E23nQmkn4BF2mwH*)Og@gvBP0vU~T#Ken9*k1WViax43{owYU1>g&iiID>eQ-Nvu^GBHSE~3 zXUAyl81h(yk_TS4X|UYHphZVhvrEl%UI(<7GjHzvIrQk#r&GUZ+d{#}jtEL_>9RN5 zo6C@HMvv2SIJ^c9c=Pv}CUFCh2Y1Ll-Z7}XnbqzFJXLt=*%^TGuQ;l-; zIrt!i5lT2Ag%wtm!$4k%RaROIf!7v@3JLgMQO4bMB65YPP=E&lv0x#LG0He2jWyah zkVywcgHl4hIpf2M>IQA#-_l~oRA+lCxIWLZoy{Rkvn zn*}KUT06s?cu;~0h7rPG7YKC24oX-#C!KZLc_*F>U3Ud;CpjcjW+8qDkz4|uXP}7; z9`*tga_0Fcq>)NG>4O$LhMPq%WhSDKKZSN6ilHK?C}AcT0fGr1P`WCst-AUuYGL)6 zqeA>)rlU0f1&QXF#wGK{6@gK3P!X^idn~faDtnY|I_d@!jtJq=)^B10*r{*@k(Q_f zTKpwM4dIGg>_A0~KtK)4s=F?`?IOG9tld!tZ4CTn_3xE4=WXjd7P7LX+9rCBD>tXD!Cpa%PZT+y*u-$P9}-GRY-RId5fL<{Lw<2;~(2 zCS3#-b!dVMc6(U3ChNR2&ppr9ERG~4loCOJ^0uXr zAb!ay%t3K%l#vNuktEe+n|(Ig@;Myz(3DB*W6SztOW=VEsmW&AdF#D5-|kiRo7RDk zXKmN!&5WMdeJj2=1brx)ZpMyMyx!}7yFPki*n zCsYILNR~U1@Xnh)JNV)AwRYrNYU+~iHEE`_kYpMjv!MsRkxbeK9B=>w2ME;vK@#95 z|9!azFo6LaL0lcs0RARGg4Uf*3I6+E@)m#u2J|2Z_cH(kbie`*K#+Jzpr8Sazyoo0 zPQxVdJG>(vmBNAjWhY|vDv0GQ_Lks$#$VJp;@Nr-017Ms13h@;0Rr#=8~TqR zSI7VsLRhc^y6A!6VjvjH7{_yYfQuQJ!PD}0fFTHvAUPOe96xA54@|QE2=*CB=;l~3 z2YynQW()xbUKoK;64Qe;oE`+smB(c=vm>>;)OKP38F{@8U(C2%M3x4sP{l6>M>?BV zUfF>gT(AQf>VOO;z=8pC@LUU+fGVp%g#uWBiXr5{JC|sIBA7Dr|FLH&L5y)0rVW6EF<{=3M|2uBly4$WH190xQGHu23nkm$wUMMVE(vL!%Q8e|%jUhuUq^N2( zw!9T#09=fr0$AjPE*@YD7odU=IG}?}a?t=Ks3!sqfCMJp@1QgP;DH4&d4LSW$(=7$ zBS8;9gA9OGTpSp{3?@*^51`-%s>C1?Ov=0iAi)C&B*;xw6@$4V;E@5?K|y=4f=Ol- z17p3Q4jvF$4s787UkHF>Q(!+0Y*2a}aBLD8aM^H)l?Db-0SN4;#&i+2x4u;f`Lbq` zKeEhT*ksv5`qdL696elTAK%`5`_3e!W9azg3=oFeiuON{j7_F`>8-6r8L0m z&h>*As0iqu@f(|MeTe|L3zmI%mOl^7sfmpCfF?c{0EqH@Teqe%R z?Lj6P@LlXoR)1T}p9YX*0TjMaAOk18liP&6WTKCirm+CdmR2%98?BAgf7(+1=2VV5|4j z0f^^J-jt%S%sCKiTNwa?3oK!hD1J0i!)errv{((iIhs<@=a52Oi)Nup6aDmNMRz`+ z1Q__IKr-2ceO4I@NWvZ!Q=ozc9NNPg6@m&@i=h}~00%VKZR#x0P`!3H1zXS{hy@_% z9?((%EW)4;_~+l&E&ze6{7OuRN!xfEkog@W2W_3))o|;K$MSp9umxQUO#@AnmNTlGhf5A()_&lbzxU6zl;O zc;N|9+yb2R%-EMZ+`O12@6IFegIuqY35f1Cx$rIZJboaY4XPE3ky>0Y#aQ6kB)F%Y z3#`3`YoB1uft_`rfwfy33z8JT(_Rhtv=#))j0IN#o)m*0y!MfFtsdU-Ie-A98h`Wr zaRX{T>Kx?HvAl!mLXinPyH$LzPX+3qWZHqwwXWr?(R%`#5H}%!) 
zurd3_de$k&8)jg-sZP~abI)UtxT8%7Oh7^g5J2x4U=rGDzo2O6u4>`&Gs6ZMf-5<| z)>#k4$)LQYE_d(%B?_d_iC;X0HA;{Oe+3ENeRbg%9B&BsKmA_iD8Y41+KSB{z}4O$4cu~Rj_&}BK>P~X%@hp1 z*`HYiWK!~7l4(ucN7RLT>A+j5o{M0TVX=Axm-B1+-HOvL6OK9CkG!6H-8SJ)D#A76N|Z z=R6ZxTn*&_T&8*5*UebiA;-X!P5+z#3y7RfO_m~YP!7OT3FhEu*`EUB019GU=B*%? zWk5kV!ONXa|JhFi@Q)PaUlXvOd_{nVX`2NEfJ3DU5)R=*72bEj4uM4;1+Bo?)Yk-* z+5`v$cd?cLU=O!x02Q2o`R&ye+|&q^ViV9?LF^hB?qcJB1p^KZca-7EbX{^0$$4;- z-Z;TpHGq{(oC!3*YO%mTk(DHkzz&Sn%nblN{hX_iM^WgeByyM!A$9z1ytGp z-Qt6ea3;}vE1Pbi`-w>qKkWJPTY*7$=fd)K)&t1?8q)MvwKw`~+XK4Vl zVcTSt053))RT;z{#lY1al^M3hH6e{tX_L$xM~Zk8f`l4hFrP^p;7N*7$gl@j=@x_i zKwAA*;Z@@^mX#z`TPJm60T#^0-5f%289laSS=rw!IaW;R7ebv2JQ9=n0U1j*Wl$Z2 z^cmQL8GsW!4d$&`J!-%JuppDXP+0|?WTn@rU6EOyC0eGXTCOEqwxwIXC0xd(T+St3 z)}>wEC0^#GUhXAd_N8BTP%e4}MZ|=Q8QVI~FFR=)L!6tUSQW2U})Jdduf|D=)q7Mn< zv}~l{NT71fOxRr@uxLRUz0L{5<3WJH3v}Ev6Is&sI1#zn=kmD76G+!8GLH-}K|xS}R}d)Z2*7oQ z=-$AKBCMF()C<>fr0vbcfCPndA!Fwxhk35#L^$2v;b1juxdnn`8@Jy3O2XH-Ij zxIk2XK4|mU5e|<35(AB+IC|psw9;>01x|2;=P`bo zX(Yoa(kMhQ=k*1^cMa%{#**-`2)!+y!9WNd9aHK=Y1u(zML_A0!W2{Fl9m=~q>14e zk|E`!DD7c{i!P?8+02Zh3#H1ZN_rlW&Z(P1L`ZdjVD8U+C4o=HNx7U$zd_*9op ziW3AtvW1bM#;UAxgl&XDqSgxT1X2Vt)tHtEr0QOMTB@)b5Syl0M4-{yZBPfW=L=u} z9KOH}6svx+sa8en3k-x4;MSu=T}Rp5tbQxFZp5Kg-MQWBQNfMYAf|{E$gh^Eqz>z_ z?hnjm#GDcT1V@6Kk?3*Bfr-B)q;#Zy>fYB;}wzjO)POZC?XuU{8 z#Snyd@~X|=Ec`I$sC3KE%4f#%l$n(P+M+G+p}?{34+~&`z`no&m>QUAsFNUp!0H{U zC_x5jEYj z1iB2Cqu3HJ39kde680dkxfE~eHm~zmO%7JX%)S`cCgzaDz{DOBa)>SNg3L3DnVeqk zYW`>$dE%YQo!lYs^R6%Z(ugl!T{OKe^-_e_jzi9hisN=K#&)PNNd#c}C}aX@uzF$p z7O(-UO%9stpJe3xHiSpguDdcw&!UUMDii4Z>FYq|0dFt|&j+~{?$sJ@VRGcaflBTT z#Hh?q#rkab_TFFSDe{N_e4-96$#9i+Fb<0hfGV(uFz}0kXS*Is3a>B)M@;~~@Lcl$ z=?n+(la@{nKQY3XJ>ulT-Y3?FeH&L;%a5R*bN z8!rp0`jd%r=fhU-?eWC@BBKOL0M)3e{}L|=%&U(2)f$s!6PIxUH1YB}ko5R*O17~h z`;69wY2p%67GrTk92_{Yt&Y-=i}>IIz^M@QQw;bkEiRtoeEEorDiSE#z2EnQT&dRS?Rr*5aJTR2A4XI^rJM$aIrH=uR-h)=9h-4MwH1gGY$Nl%|hXv0Mb)>x}PdP z<4iGuxWrp7M`fcO1QRfTDvwSlYgMuqVZdtF1sIkFX`hqvz!DtvK=~87O!NW-?Ge~5 zuLUYehxMbRbXok+N}tcxZLL6*LPaQMc@%Tn*liK00AE$Li##(mN86kHz!7j31RPWM z{baQz}vmaeBmf8X;+4_Dol5VLZtA)>2Vj!^$Q?Dsz_F6&KCvXb$t15Kyz|AGBqx7E+f{^o~YuS9dCMx z)eMML3{ik9Dl+jY_JM&L2w;`LdNy^B^?d-*iRw;9v_pYlala6=*p^NIsdDDobORs( z0yJxFM-M~b)$$OZ5B_L3KebF1!Fs{c7k!l84Fm?fK&3V}5186uF=rokz*^B$btib9 zNCvH3+Hc?i;(|%Uy0zUP657ZhTonKX^ceCUu$6zl-yS# z0Qd?ta84+Zt`#Ge9CN}pJ_)yPDy0bJV(F2PZ&R6uEggtu(Lob8kFU9u& zc2gjWBI(o!p95XMV?Xm>b!NvRttkKG>ns2W+;!z9L1AHl2lQDCxgaS!0U6D8qdAJR z5^&wUEx1`C$Nqp3bSs;GdX4z_Nvkx7cHqPcA zis%h)*H`xRu0j=Yilts4yOuA4Ot{}r5_EfEcXK(DuNtHOF%SrCwAXuuu=9SbfxneH zrky%RSn{tbg{8AfTmB>)m3vtxyru3y$Ic)b+dIThh^V7ZWRf%TsTo5FZ#lGdVJ@cS`uGFPF`SJtpJ) zMa*YKUOn2U{YOx|vv`O+&ok2Z3siM{NF<#@t3BT5y+q)9+iZP=|A-ZLBzL!{+<(nX zb8$!LmsNzZ-Zy?5kNQe4L_F851jc<3Tc93|P*e>6yyJI14o^F&NB-Y8D#QxD<*%z$ z{H{|-sOPu-0l#^JCqz7N4gJFOi*kkP530NuwkL_N>;FE>o-`npy1%0+jGZCI4@luN zP3p5g@IODqgZ@KY6ZJdzgxfy$i@o-8PE+_k^nbs-PdrS(IRpmW%+IszR*|BQ z`=>nlqhVoEKjk~H%uu*7GKJ~BKmP+n41og)7BqMeVM2uq88&qI5Mo4$6Dd}-coAbl zM;kdhQix|ANRc7eIV)*0V!Lq!RSs;IkeW4_8EMwEc@t+&ojZB<^!XELP@zMK7BzYl zX;P(2nKo5~P|e1MCYhC-`0}8*b}?mc?fMn}=njEAiWF&j7HwL!YuUDS`xb6oxpV2( zJ*uc<-l)(%p0pZ6n#7d`skyX9c&5-Ab{RKz{1|d%$&)EpwtU%VBE4#+`aMZe<+zqH zyoxUUm1{MiC(L4Y{Tg;`*|TZawtf3-M~Hp}I-?mHYu2tn%i=bE9C>o(%b7QKzH_s~ zs#Mq2)H<3aYMF1Uz45vDd3f>T$(J{O9=%Y!c_sP={P$vMfnK#A{#hY?e*OFT_xJxF zGra^_W1*@yu8fY&(jXY3MP5m4()?R)662_Y_+uuFDpCKGNW*?>oP~~9GF6c5o#cT-EG}{7v6X?sQyEH@h7DRhL~73Z9taikq#})`N~rG+c(=d@z|X z(1kc)i%mWmb4LbP;*@pXnP)kxjGz?uMnL$>YN^2SU~&anB|LRi5CP?p!n>y=lsy4mPb43)ce(3@rY(vpn*s$NSu9=w|^gg zrNTDqd&Dwly?lsdFhNoU){nS-|BK%M2^bW)B}H6U8OZuDLIJWo3V=8ZAjAq-K?~w! 
zeiTs>wN?d_>xqCS;DMn3CRi5tT@ZySq{y)9rMv_(FMQ6ES6Rryz7uqy2saeM2vpcZ zAI9Z{`D&i?{`9T@aDX8GTLJ<&FoO~lfQLVn;uNJtJEEL!Au^1Y2PklY0(?LLJTQS0 zSipc2EN}!$@ZuR`kc2RPkq2OmBOH@3#v+6fjUuGt9{)Bw@8u36opP7D1cC$^@Sp=s zL_tB;cLE~>-~tH$IAS4hFiC>2paf31z!y=Ffl6`!jC>TO-&lBvqI{26Slf=gmN0`I z6o3IUsDTU^z=TC6kpw5Wfeuh$gjg292tUAp6Mj$t12ljI8DKy#dw_xk+@O$0#Lp7iOB2L$mOVHE3kH&b9qh0GFF?Wah@tz>i z0Tip*&#sk{ekmbhyvR~V2B4q_3kbqEd(cF4-fWvQ!H)lBx^Jv5PSL) zq+atQHUSAk?(FDY1W?xpdJq7C1ObU{l)*k5nE?#Mv7IP5=n^U!(I04Fj6DztFBgD< zCFn7vNkv)zDaZDfZNh6^Nhs$5c2LFw%z&mf;DGf6q5=#&QwJG{r$Ck<)v?%<110D| zSW&vt69l@wqSA%nq8HmsrlF^X#(>sME-X|(7raVl)Wo#XU>;$9AGyPW4v2<(2PzpVJYM+*d{$yB7z)62nHtm%afO zFtlj@)Pt-#$Fri0WG>vVrA`5oYY5VLuiU}~)Mfn#(NQmnH zgkUe%t6{99=^Ez*;rP;5Uf7libG@E9I@mcW^|X6UZ3r;L*U{d|20p-nE+Aq(hYBMhR5=-V#tk4e@<%gq)z)^VZ2?xg7>@hx<oOF> z8Z@XQqBR`wv@{xF0X!J_OfAHWN+0B4u~y&)xM zI>HmK@P}BrCd@vq;7L>SsBsnV7J=qi!WaM@2m}L6KmY}3@ZaT{5eiU(B~*VR1o*hx zv8q`93FPcCAg7Nzx%^Egv;-YEKoEcr$q%4kAqZ~o_-?dV&cqJl<&-VYc5Fws;|;dX zX_x>2D*}pIgYA3_{-|#(bk6q>f|KNF0wY52T8{uDr-~ZF>cS8A+Q0$-4KD;8Cw&HD z_!8p%V&DSYC?Ps8`o_WtAnyek;`&_g1bc(D>dZ)L&my$!O;`{De-LoQ=m(OBAef*A zQ+PVI(I0t$iyovyBCj1KE%ZO-=}555ES)`@V18ln}3)kY_@T4^NHW7BTOFumtnON;pQ&&g{e#Bn8=o z#c)pp9nll_NB~JN5{(UL?8YFjEZt&G!^EczKG7Aa2@J#S^+ad;%#=H3aTnQ9 zAl_ihkZU7Ta&aIMXy(!Hkj)Z_roD8l(^S$XIR_x;@gr@K2rbRT9&R8qF;y^Y4{y>a z`-UZHhr(><&`$Bl{Oc&A(r_1!{-;_Fu_lX>Es$z26EjNQ zvKm#6DYvmM)iPP;!zA7DAy5({j?2qdgn<;(GYvv4$%ZGLvdm@=74H(YGE*~GQz9e? zfO@1eKT|dtvoY&xaI}#w^>Jn_=MLs0l{iiC)}=K&%QY`^HC@X@G*dR0Gp1^j4-jQ5 z*D)b@(=giM4pibgy^=C*MmUGlH7CeB0S1>+B;b~FHWQ{S0TV5O(jdHn9o`c@*}*q? z6Fae@yV$K2KZqi5<_`O_pg_qYXei`X1P|fOJol2|DvPcv;u$ba7Zc4rBQ!qGrapVK zIy;CZPD(%1Zv>hM0|Ed3V0Xj;fRNzjt^eDHKWn^?c8#FD+luUi|InDb;gPogqvk^dR2TC34g^ zA(h2Uk~^KW4dc&~mUUSf;06qU{|u8MZmF&RnyK@?t_O_N1}}t*l)#g~BY0v!1x$bf z=Fe7da!V!Aip~)$pWzf~&l?6JUUw5d&vXT~Y))HKAfCt$1(23-NDmVN0fs761vOhe zKnISX02%=CWN;1rBj0%71`r^eI^Y8W-~|?d0RZ(}RT5FxHRa@z;Tp7Rgta}XlSIt) zOp8?^Ej9i2mGO2+W`%$b72=tOYMxkNn7)WePo$NSK%FW;V;4YGJJu*awr(i0U1yY0 zGtov(c0S6KK3CSgs(vDB*v`{smtYUQotEV7ffI@oU02-kDH17y{ z!0yzJ0X$FnjJ6?>cCI7`*+%v>&GJV7f0aJ#H9NBteBK0WS!-Y4PXGz-U$5;UGT{8i zB36$Nijcq{vQ?!bMEwLbj0zNB)(>>;)*q3UER9iDBU2fl)>jK6N3YX$FVt!~rBV;~ zGCx4{E&z9P7k7uM0t@0pKQvTN0C-GbdQKpCooSpFt6ZBi1`cY600iEwjgS~gbnn(V zC(_VPaT_OR7E~8-Efkf^lu}!kb{&FUh*v~Iv`sGn=QfUW`R#M3&4g;`1A4G}@7KJp zw+|+V`v&n8C6px|RY!^S4Brh|PfZ9?bl(;y9;whL2b|1h6r=RJ#Imz1ERE9aUJI?-B`OOv^ZMF*J7Bn2+Zq7U6g_%@Q}i z^u%zqOo2FMMOY#8vyU4YOhU1XVNVd{xD@&JESt82Sr?C8mx@VaaFZrK)!30o8A$#Z zkhe=|+ZD&cbcgNOQKOcV5m}KzS%^v5mV*S6!_AAe_tg&4mCa!~&A667d693~m`iyW zy*QOEnQ1$C9q)LP2{?BDA%u~S*_u@(mnFF+6_jtyQY*jtAyoE-q`8={*_=_tluy}Z zEcp^8^Mhmga){YWsQH}h8AGyJn<v0AQE<_9i70Tf#yGVkax!eTFA=u9|*Dq9#W0JArmAQ+C22{G`1c%W&z zno;Hg9$*KaX_+#Ltc1Jh8URY2=>qEr1CZ&fjCZ*MUep~dJOk9}L9sxhex1?r<|W=Bg#4;0)zvQ^;j_im?3#rC zEC{L7e&E6$Xc(LbNt2F)5+a}$Wh-7afyp5D8}AMD*EVx_e8BIAg68A0?HwHMgZZtA zoT8@_mBg1$pY*8P%_5CN9%r#tNpC@=+y9l4^4j+6GnM6F_G9Tgu9-zStV#aZttCK%q zYt{L6;yJhZ;nR`oH(!%ES(ADGAX1+pW_;!QU3jx!rxW1#ISWexB8I?$F_ILqK`^1h zg$x@ydib>=xQ>Corea(E|BSK8-rH>eY`KNCj$mM}ZSOO0&L=JGbuLBvr1Q*zzSq zx9}j|ta&q`O^K#?JAV#6I_?Sy2)u?}q_kD(+`D`K4%o8U%fn~AeOOcOT)KA}=6wSc zy!`p}>kDnJT>%r?2PN454`6@-ju!(;!(pZwL$kPbolOq_FG`XZ+|93k<4i02mmt#0(u|m0h~_-rHD;(zfN&l9f7$=|D~rq$j?9rHb$f3ZKiI5X+8` zAH)XhxYa@!3^Z>6Jk&dJ#~y!0>yiSoIVq(THR)t>obF@@%b8p;g~A_q^}z=ZUnF9= z*fBh_g#|byn?XGfO*B#}@^K)7bDA6jgS$?I@^S>@$&&*`HzhF$4;++{&9z-6!>d>L<#af@WKz3U$DgL-+euKY5j?7eDgyeO|7+YD?PH)Yl0rSj5x73njd>Q z-_Z}SyDMZuN|c(A3Su{~gA5()u!Io;=`O?qGMG?2`0^#8fB%=Kw68UFj#Apv*3(^c#=B!+(?BNv0n)ka)KSOF9}A#fD8t50Ss_VAS{r<#bQvq*$m_d 
zJamo(u2{t^;!9dctCqF;<)#>7Pa!UXkm*Xd6RT-)A{r3L{}E##zCa1200@|(3FPNN z71&P&9sFPnN>G3ul)zO4K>`P`xW+^(a*-j?7XYQjjObO5ZU!`u3}>jo&#bIQnMmCg z80nBC1Yiau=ztMMZ~_l>P=ooKA_iu|0wi331pzpL3>v^Y%BeDh0x97RK?%%Y3NtN^ zl%CS+)+Q#E(M}BXi5iCq6#g}fa$+)!h@`ofgKW$o8$)H<2nQm_DUSdP8z(f+iOwtr z@P(b*3}-}#vcCC^AUS+yIu~{>Rq0bzL%}CMcc#yD3Ur|T>*7h0lol9P@_^Y>pr)7^ zCC_MwpdRw)!!C+Zf?%{*3EL;nC<@Y$UJaqcp(HZR{{xo%;(32Ha zdo#LW8ysi|o1(L+7mcb}&)Pgky0D}Q&6OpIY7PW8kdr;3Pz~-nmc_+NrXJN9TK@{z zwe4%FRArN4Jx4$=!bPHgTc87b;>nnqb1Wq=0Ic}B(ZFhUvqjU_+#>1Ik&VoNi)}1d z9ZQ!sz^n$71q%ay@PfT!HlUoXt!=j`T7?ASux-t4N)ecl#|l@6JG?7v#RA(5Tz0nq zysdMe8!N&V7POM$ZONE~H@d#T0g}z;alfL1|A8E!1W$AnbDIm@^s1LnkKEj6+nTGO za;CATldDTRiMrLcmXYV((0cz1U@95ZA-gSbZ=0so&Tva7IRUFPyK7vrpm3Vz6+wRk zjNuHQXsR!yE)3&KkTJ^DMJk=^JUz5uu8ib zB0O!ffg=eMx1!V{Mm_2s9;>?TV)?sT&g_`)jAz|Qce^b@XaEP>Qq05uzj{J zac-PUkEoQMFO~CgBTQ=VUQ+>=$>jsxI8^XQQmGi?n&J zXv_+BMnNS?P*gN93lK?+((wz;315;LRORtX8TGTq$Zmx>zK+7`eiqR@z>I9w&7gK!Xg(>>e)L-){HVCaFOKnyYkcDz@3_Z5 zj`4nooZuwaETqm{J>O=UX*q_ca!wnA;kIF*`{sv2{DE_kkNh_I{$R+6#) zVu)L=GQGjHnJyjZJ_J(poDU@D|2nU_)vr!0plh9?;;Hl`Z)L4$D_kx+hBJOa>W_p_ zeeGArdSmJ#>aEKinmcWwpRF5Tcfa#RC5~v!sjU}#M!fB8C%oI|j`)%zw9+2_v#!*v zRlK1R23X$#4`&BHweV|a%_Xw!`Pi$F!^2_P z<3TJFfWe#c^c`l;8&dV>Y=GRdFaGS?`{Y=kRMWZVR&aqe^K(uD_-fZVYBsNs*GtU! z+%Krm;kN0OuNfmrhv=^$)H2HplO7X^`t^fPIpTMJ{;LE1BkykAI7xfqr)ThqA1ipE z#03tKb`i02r}299r-1c`|9tNi2%<+o3deu_7jFzyTiADfo^VKhwSXNsQth=?xbj|7 zMt(ZxP^i^o>K9m(fCgyrZ<65&3Fv_{I5r??cSQ$i%!hOiMS+xeLjed%zqb)#P!9xg z37~KZm*9SV!4DcJe>4b%GjkS_lv@w@UeLyACb)Zj^KjDV5tl#^VQ>ka;0XdS5Jkv@ zQK*JexFUF$Rr^;|WJEeH6PSQ~%_ppXz@5C#-C5S{=;U4eZL!+~pPi1zY*W`-bu zb`fwuMxwKOEu(rVs9E(e33yl;4F^M#i|5S#cf>4EqV#omk7X#ep3C-Y(G0=w#p@(7751ue?E{2ND7+nAa zjZQ{mm{CbZw|D$kev(*de^n1;=#3ILb$>XFQ`dsSh=EYigd6CL>nJKr=3cExT>}Jp z6XA%#rC^?sdR+)uBFAt1xDQlUa|Boq0m&7~D1+>Xkd0DzlxBr1w-BVIRkDbUez!8I zmy1xvdTa*~8c8$c#*7IGl2#Ip2?c?U_ihtuVz;P|Xjgy}(UKcUbzV_%F-V9ZX_I6! zh3SSX_=i}q=yG-viQ33AA$5)nk&!XEkz`>39(j{Z`6vy@Iq>+93?&iqfQ=5~gO7E2 zn8%f@M}S57{}t-!lw+wQ(8v<&7IwMfllQ1Mz2ac1gNqRdlSf&257CvGmlSBgYh=ln zWvP>$gO=BWdx8du75HW+c4F!Dk`h6ecX^je5qEtlnHa)C1C)Gjr4d@laG*vP7b#4p zn3TF90+UIa6{1$U^^nkZNjX+S+0{wl@>gI95=e;@%Q%{)iJLO^m%woh&QL(G*p^8L zRs<7$>GW^=Ko7Dxb$I!Y&FP#^M{&w2VN98u)j1#0_>R_cnL2nfWOr0bCvPFJeL-en z0^teZ*brzaaqBsC6XzIGV4d+vA5%zRZm4$^A&bF@6O;IN6jh!C0SbI5paUwPS74x5 z5TFK{|DXrTpkc6}4_crHiZKFO81iYM;GtFkgmMFZAS|xiHg5`mkndvKActefVQT`!JrWP|nDx^a?1rehKM2e(Gnxsm~ zq)XbQNGhXIs+-@Zpt>g+WuZgg_YO*!klo?BOBokr}3lUc7 zv$2??m1>x9hN_Kbvqw9Yj%tN}CYY+(vOSA8<4SAX<2^`gwI?ew2{&54`L0>I{}4Y5 zv_J{6n;CCcYqovKt!w3m-`ch45f7mei*ft0wDuwnMzLp$w;@@hXPGOaYAFr^wh1A$ zAUZWxW`}u;xCsfVfF`g)1t(rRv2h!=b|QXpfhBg2k!4+uMCqXl9=auw83O!5g(dOS}=Ax)s?; zoXNfI8+&Y;G+~D=f-AX_JFoSNz1(w{>F2)vn|OQ6o8tSmr3<&!OQ*EU|Cy>I5Rv7- z4ZL+!<+{A3mhJYgPFkuIAEV^V&#WA(&Ng%49)ahz&LEs1WY>WtH}(_(L8jUd*RUE z{K<~HB1MeCQ+&@VOt)#g(F9jc9j&RU64LK%X^;vL(d-JAgweHJx)@!}l&imKyhuGt zRz+>pM~&1;ebg|`s50HmzU*g_Jjo|btl*Ntc&x_!#b`>+|J7aX)lJ3J@hQ`8AkyCZ z3|~Cap-a6NjlG$K&4&vR%&;gh2PlyWEf*qfF zP)rPo*1(+8M9kBroY0kAs4aEL5mW(6Z~!z=OMA`N;A8}&9RxH$0|%f)gc#fkvjaNN z13O^aP316SGET~^+N&+ns0qiRE5%Vv+lRc5KDHd!$nAkqP015W?F|7+)s-u;c<*Dar{jh)gM&S@4A zYkkP)YmcBr1M$ZKH=qGJ&;X<)1OYC zGXhM20UE#n8sGsYz5^TH;f|98On?L#P}+9{0v$d`B7!^aV&5iS1h=FE_RZkc>EMs- zTW3AcvFx^S%hp!5JaHkG{68mpak`O06oy#FK`5gV<-o30Hyt< zG(ZFyumo5UG$#-M#mxXXfCDOU1Ru}>c%J5f^aF+70vZ4VPVhcGKm!Y40vBNEhR)^` zpadFl=Lg^eK~Ulp(Azs+1Ug_SFwg@xU<4r|0|!6^h|U6hZUn#%0|D^k9=-#G?nW7) z1EwPC32_8QVCENa0-%lqMj!*A{@ejT1U+y;F~H*!mg;0l*bi={uMMq>J;mT1<`+H? 
znOo~H;M=kj-$-C8ZjJ&(aOSks18Tki3~=ifB@hNc?S|6kKLGI!@B@QR1W0fMX8z=~ zfdIx$JQOe+3y=_bP5=eK;!xBB13^SMU;rI35aD&-`A(Wwo`{zz60{7P6k)Del`zv7%=JvAm?*#I7_8G15sXol<_+*|K}GV;tY`WRK)8+pz&Yo@=)YUQX~*LZJ(3VCump$n0W)Cn zNi^TTJ_AdT=t$H9KF|X#|JMSs?Z!^pMu7Py5c4-M?>hhTK_KICuj>5H>YeNA0Dr#( zi%t}v1EbCYCvXAKUgwX$1ZJM?3y}Lp#6;b+Fh39kb96^$p7A5U1A-Lp2p}&Re@kRP zKMQ~ZZp7ye(Dk~GMB;?=6rlRmx$kuEidr5L!W_l*XvbKc%_(#MoUi>nZ~sGE^y>8XRC(c;1eFBX=dzRuewr!8Sg=#JyoOlNB(1mmNF3`J~4Eb#n zCd605g$+lAXs|*7BN}=pSsXya!xt0??6{CY222GbI*(Xzp+}D(3>uUy!dORXBrFNY z5Iwp?hZ8#_kJj*`_KOS#Lr*p>qr!*Gx@#=R0HT1B9~p2Q(4qM7bn4ZuU&o$Z`*!Z# zy?+PKL~KvA2K)GYM%A28|GYi-y8r3-?|ks}?H65Of+7JPwI6%_z=RHVI02?46eGYh z1%)W$gCmYGsK6Q&=ug1>bQ0o(0tQ?N!`cR1ZaDp7NU#9k3Q*7l4j`k*gc>T`p@aKo zq_IXDZ^SW29e3ohtFX!f(yX)snJS&Rjzkg;isX_@FC~MTFOwdpG)w{nD_F3CE4ADb zOD{kB5==3(9Fxom%M26EFN+XD&CjTWGfp|@q_a*t@8nLrID$IwswDqh&#t_l9I7w+ z@I+K8EPa9i%>B$<^UTg3l~jl>KVuV8H_L1@QZXr=G*MAUCACyjPerx5uznJ<&luWb z>nU{p{7EkMhyv`d|4UQlwWvxp1vXe^(l2G=#rAW#c z1QT~$dJjf8VTBiFxM7D4(lf{*C$4s1LHivvV~zE)OIH#LuHj*kM<%&slTSt&yH&~i zDb{L-n(N|?GxjOhE5HRgWu15Cxo4k$Za89zC#E(ke)nz9y=`LvtS1q220Ch~r>44U zt8=ustd;|DSyq^3_Lp0a?Q$CGtItL|ZMD~C8!4fO&bMM(!KO38a?3Br+)fgDdRxfChwM5iSKb;uC~Hjz zG;F5lI~4DaC(k@~*=MJ{_QN(OPb|1WPh2R{e;?hIB^N(3>3`!h#E_f!w!L}hpNBqr zpm2v+cWEiYH*}zYpT&;ghqwFB#%W%?>DLXGK7IAqXMb(#&Etbrq7fIJbc4Fb|N9uw zA%13x{vuyb0uut}waTXJ|^+SaR9@h$DRf|1$NMb9|R$? z-WP+crDbc{^PTrpC_(;Fux>vAprKgj7f?+ogg3-t4s|%C5$ZvH^F!fUB6z=^oX&!b zJ0kKj|M4v{?a+x&gklt})59M=&?qX*p9G7NqyV}uc^vzR2&6bhGM3SdVT1w@)QGeF z1f^i`+Y<>**s`L`VlBbbVhc_8ue-$nfd6aV83j2=LLN{X*to_a(YQv*@rOUt6W8ai z*OdHaQH%YycSvhP}gCY{* z0uYiAN>KH(l!?=tQtWp>;3?&Wd|cWrmAOo2QZ13sgyuD(Acg6*(U(5_98eGktGE0v zf-9VglP-9du{G12=R_yNrV&kO%EJnmsHSwT8B7w=@rrgtk9x=k$aMD8pZ|1MGT52O z|8-I#45A#Rd7_mRw^S03myF66FZeisR@9;wE!98`vI|O3Vh*;LLrCYa6GoQpX>goa z9EG^eIuO&F#e9|UswK`CV$`NL#p#^{Y6diflu?<2fJsS-&0hkMn@9Z}MIIW?E|xQ= zPlak!v$Kmts-mjV6a^(>0n(6Of|v3Eh+6q$$E3PewzkzPyX3+K4D#D)0A&-#bz>x}7&lZx z7ot*`uH&w|UiPBXOEp|nPuLj_)e7Z}uf-lz9!k3NHf3hXYj1!B97}YaZ-Z0dz;CY+ z-Xho+cO?YkJyp1ttk5)&1IBQMqnHLqX*UMVSSL{kXsMq%#kmbV#`dr>C=IvR#be~e z28Bw1TmIH5;mYPxQVc<^Sop<12C@P{VO*ZTcPAVG@j9WoT>&(CwP<8+gkL4w?NB$! zSH`mQs5;H~dRW04_0@y3XGIlf2f$fIbDCw>T?8>oK{>DhMUvbU9KUwV|Gg8PcC4&s zKKB`LG48?xA{%FlaKZ}H9LhCX0bU13+0T!LG&8TEiuxKv6qA_51TZZMFs#BL@O(<0 ze-aKPN=w!n%uz-Vp^ zbQOdiim1zC#bU!n+o1h+zV$sXamZs5rs#<&{yhmR2!+i9TX!xU%!MwD+Nk!dGPn0V zaf*`*%#%Ltf^BUSc>@v8Ju&vN_grz3mz-Gw)woZtk&L8>tKst||1-$D3DaSA1=S|U zdCu=d)q%3pAlmSWDm)>8qtw~s^_H!@4U^uR>)h#pQ$jO)o=B?t1m#MOP{MPdMr*5- zWk;#`(r>%G>^|M>Xo~=Cm|zeTu%ss_U;#$S?Q9?*6beFNY0Lat+rl8xj|40G*#%!U zwLjnpOK6b^X21dhMQ8wIoO<%_TLJ~~E{@@I*YdMIMITZUQ^}CL|Fv2S=!ZljLB(#dm zatrjptQgb|E-NxDG{YK%39D$8^ZU4-FfS@R!#I?~ImtHj%bz*K!#w1X-oY$lx-2~e z#6X;p6I&_gg2Av8L^><2MP$SSbHqo4#7IoUv!j--|2aQ9%(8Xq8It0}PUOT%q{M9k zISs@gP~;&;Ld8^^EK@W)40OY9y0RXWMOpNjS+qq~bUNkREFwa)Gg&QP1jbra#bKl| zT%<%dObT0IzV=!dPjtpce8y;m#%XLuY0L^~#KvsY#%<)rZuCY^G)85jJ)p1!4g1D( zM8|YgM{T?cVsyvEg2#A_MtP*id9=rR#K&k1$7S-KDSW7ZR7Pdo$EHXFLMTXsJjjGZ z$c0qMg6s*0e8`4`$bl3*e+0;YbOnK2FGv^zNT3Lg^hkpM$&u_xk~GPaM9Gv?$(3Zu zmUPLIY)P1u$(TgRK?un~5J{WF$rykHoy^Ie|NP0F>`9>HNuMlAq8v)0G|Hq5%BDO@ zr6fv$V#991g^wNt^CTc1k14$OR_A>voy=J>`JsM1hjO^w}i{M zY|FW%%eu77yTr@9)XTc0h&}L$JzxXC6wJV61B39(o)Ao_6vuxgN14LNfDEv#6w9fR z%gWr#%f!sgoJ++ltZ@vovy{xx%!9zu%+fT?LimcqVdl+3`mY#&f+xA;#|!RN+rhBz;e7Uv+T*ajLxm}8M-uzRLh;&P2v1a+O*5x1Wx+&PWh})`~*+? 
z%ufCE&+ZJ&|Ma8aEYI~p&&G7l>Z;AWgiqTH&%3102OUlc4N(3hP~?oz1|87h1Wft# zPykiU3H?nD1yMN|P7_7Z0+k)$VNMua#@xEip7G7i>`n@$&<^EI+O$v{)zAw4&J*QP z+!PAq)K4J2Pajp$_BqhUv`XDV&Au^CAnncY{7Wd2(!#{eD?L%d>`N{6(l6yw0L{`c z-B3AD3c_pz6h%`tUDGy2(Q*3Pnph#3JC0AkP&2bgZVue&6 zZBs$T6Hb-aM->E9B#ThhgWgGte9c$waL$Yb*ca8gMPP&EDOfZ$Sb{y#gjH3BJ=lhY zScvu3iKW)o#ok{|JBxY71?!dSCt*wa@A9!On2-{L6xj;osdXZGR71}_x z+g@$kF2!8I&D+maSI`|+t{7dEX#~IpR*=oqzlGY@Rh2H0N+vz=Bt0>q+DA>Nl-tEoYk8Rt{ zbqbm_TXk*M{q5Y)?O;nC ziuDEE?KNTb1>q6~Tve}ze*fOSIGuB+* zjNl9I)iOrm>wV)mc4Iky0Trr|)A)X6Mmj) zK(1**g=A%JX|LvLnFi~Are>V};ezh!7|`LZc59l3>$jF`M^Kh+w(Flh!*M=3rB*6^ zWkVrq<>lkA6>`lMby0b?6F4AP#cnS^E?*n*XZJuRzAcxT&f{#&RM<@M!*Z&w(ZWsZN>H%)d@kgKIpRkThERt z%cg6!*6GK7W0Fn z7U?f8ipX7F#D;9}#%lI>k_cz(Mfh$Cr;hO!Zwue>?`ALg#&6BuTl+rf{LUQ^NAb_z zTQ!(z7*+#sX7L!8aS@jB6i@MiydGfsI%plQ=Z?7@DPX7OMaG50FJ4Y+CG721awfe6 z+1}^5xbP^yZG*6IOGw`}J|@GEau0`+3m>fz4`Le^@!*zkRx$D5cJUe~aUfRnHizpP zPjfJT^A~4p{LWi9ACGWM>fM=^u@dm4#*v~yzP_%b>h_*zumeO_^nQfJ!!B}?|6bn1 z=58FR?IW^npRn>v*ES9plqnD7eD>}Qr*hip^%tNce?U_=abAu}}kr$FO4Gb3iW@u33u#7e(`E zW#pYOkY@Dm8S+P`ib%JSm{}5NX!n)h4h&!UmtS|7m-(6Z?hk_Zo1b?Q{|9z_=l43l z`8lWaI>+;&Cwee%bBJGaq(AzlS9+#*`lgTgsCW3Om-?!w`m2X}tk-&me=71ac5)UK z>gM94>-a7P>2d6yi(K+CZtyX_50!^|+xB+3zZROuaOrY&_ptlDe|ckqd-jO?4qEy+ zZ}XftdNU7pp$~Qx=Xo$6_OHwNT`%~jr+lb?e8tcDt=Ig`rwX$OlAtCm&*|}FM-&4t znoC^evR8BpLvnCx{i&|m22an~zIK%_qT1(sF1L!iukhUW{oj{+-rpiWlhsWf4M|1H(mk<&ai7+IsD$|3GX~t@e|*?IYQC zR?>Ese+q5?i7kKgyJv*?um5d_>-&%U{nvkhVB|<3L1+aHB8;JMA;X3Y9XX6BaU#Wv z7B6DVsBt65jtUK`2`O?U$&w~dj`WxjB_5U`RXS{WQYOusHgDqOlXEA}o<3Vr?1N{a z(4s<#_CZS1DAT4*VM2{6bt=`WR%1L1q;)ITu3oc({Tgbx z_FUCq;nJp0qizUtwaeBnQ>*+bdp7Obv!^=ciBu`h+)Q~7|Aj4lIPu~N#Uh`m)p*pg zL@TzHD}8TWiGnc>Zs>QrL%$0J-(n7RG5PX}KTAGsIW*{p0#&}}$euO*`p`bo7q5Rm zVb}gkGDnkYIT1CJf&3wOTu}`<6_jtgF=UiPM;0<%ITGX@VlH6P%@+V(6u(ULI!= zQ=g7Xs;Q@L?EtND~DuC*%nmP)kzU|deRtdc7N%)W1r*znkC0x>KzPi#L-S%vBVcYYgMF6Mnmbm zA5$e$xN1BHvdOr$^d&~7#>6hG@xmPQt6iO3Gq!AI6lczS#ZV*9*8SQqo_XClbY45& z5)c3(`2+oVhJ!&6_rJ;@qjTCr_V0fd(D=v#8Od zNN>vHxpOJgr%I!GlS*}})v8XfYRzh|X4k7fOti9$LR$0oxNy_^;u>hX*Ilr#M|>$BQ2`E=(EnWXG2?1J;bsZ)eV*Lwg<# zTD0lYs8g%f$GWxa*RW&Do=v;9?RU3v>)y?~x9{J;%LNxsytwh>$deam$h^7p=NY3* zpH6-H%Iny(Yp1!LyZ7(lk$V46O1`{M^XSu;=gOX}`>kTlzL!rwzWw^|@1JeIzkk{P ze9hHXT}!DI;DCI&b>M;wMkbkq4@M~Agq%rOS%svnmf?n_b*SNoAcmORh$NP1;&3RY zsG@Q!vgo3V(8Va@jMdd>YX@abit4ItQb=a2X~O?%VT5wlYU{0Z;+p5Ky!P7XuYU&n;-JJ9Yv{4a=BV9{%r5He zvyetBEv3{}YVEa`VtVPeoOb%8wx4<$Dyig-%Bs1lrkj_#Yql%vyYOZf@4LFzYwx}I z`l|1~!1gQbzknJm@W9CyY|=*1CaiFy(>7c!whu=P@x7jC%IP)B_w;v0KRwc}S?EqTdZSN`?oVViAo%Vld$ zcITjnF1p&Jmp=FDsHblC>a5>QoHX@FQ;y!W^KJW$falJr;JmxjJK={1-?ZY17f<~0 z$WLx+$dNO@wepr@F1_c}S8x5-qi2t@_S|>x{rBMK>Za=C(_Mb+uA`4!?8LDzTCP={wT2Oq9 zIiCj2w?Xx#?}Ns1pKxTiocNh=ekeqk{Z=R>{ORw7{PUd*`$xm}01#XQWSRj9s5s|k z>3|Ewi#h-9GDL3_v2sUrpahr5#3nkCWfzR16fro(b6D|;SnQw{x9Gtxg77#e3}Xt% z$igz-PlhgpVGL`?#x~wChdLb44iTt8ItFoaNz5Yy^Qgop`tgr|tfC+XX}T;Ll8c8- zBqF`Y$mG!DMIxM?7$-?aY*_M=n9QW@%$P>}q4ASy1Z61QNXj?H@sy}cB^+0&Jdn8( zh)C=jJEGUeK-%(_){6%rcgf3MD)N`W45l!LNsdMulSL#fAtjeNMol{Nnb72Q1`4dbmi0Ep=(ZqDj`Wn$>>dfaynO z%GR2;^{sIAi8g;S*PPllJ$21%Hfy5SzE;eyfc>dp2TRz9E>)dGb*j0P+D^uz?5R<5p?Wh8(7&PHaDq~&8&2%OI@m3SF_dq9%Z-N z+3tGRvs?uuX%B+h@|yR&=soWzPn*{AmZZJ#RqcD@3yyJgB)0a&uYUK--`E-{w|fP! 
zfCs!=OZC*Z2v((kxCY#!I{1isG_Hhad0gcJ(!!99Zgw}!;SPH^d>;m1h`-C-5}WwM zBrc|kxftG)jyJt9j=F865+pWWFGq4b5+3__zB{gOkNZpHA{)8Lv<)zUt1@6E z8yKhuPOy|4JmD&r>%qdcvPLP&PM2XAx)27lhCS@u>W=u#9kw8v(Tw6YyZQgka8B`x z=L}w*MN*kBCNGWm%;!GixW<3}@sNQ`=s=4yzehgwClQ_KM$fjk*j%!KCCy|fQ`*vg z-HL#x3{(nxS}UK{GK7yhYEhdy%crjJgt7c(F)#JiVcuw&9W(3a*4ox)#`T(a-927w zH+yo9bFk^m8)6f?#li9?So7@XW;^@Y&eXA>rCsQ1PutPh&bGDz`)Epc``eceH&2^9 zS51eS)8;OLPwc2^nJtadlNB^>WA*E`nm=BQ(J&2L}#+TUshxFI<#?1CHIyT-2b zaX=ed*^QRjdWN>d`)uQA2ibRmzP84xz41$OeA|nrugF18a!iw3w%-4qp}1ELWpbDM z<>+n;spZ}7n$z6o_kOpkX`ORgJNM^6$G6a3O?2xHoy`5db*_~zaDO))bWcCH)Td5$ zxg}iT?Sn^hAU^S~dtKyU&t%wtoN=*VyyIv;d)i0-aYdK>?cla+xaF>Lle4_jb~kyt z_m1~-&)n}f4?NE4E%=`2`|wAPb>5gYQCwd!O^*4}bsXJ&$ZZiN5p{p9cDa zjC$6)zx8Fu`03{#``6ds<7=lqwc$GZ+xtKF|A%{Yr+az#dj)ua26%j+rhp5mP3+)+ z5cq&F7kvz9d=|)ji&uCWczxB!fgad_k_Un!sD0U|ecX3~DCle7Cw?s0g5bq)!xB*I zCwA&rgEn}BIGBUz*M2Js9IEG?KhGlq$7np`>NP!sGhSaBXZy1LkSc14lhub%T zc9@4Ic!wc~eJS{dE9ioPs8!_`FfwRXI4BV5r-O(%gFXNFh&~8^ML3D5H-t=>gqJ9V znW%}J*ojkkg`Q}I2nc~HM}~ZLid~3`X1I#1*otWAim$kaZYYb>$A-@bhjN&Uy10va z_;`ERi@(T+eJG5-*nNRGh{#BOpLc$X$cWJ7jL;a3J}8Zh2w&EygV&gi+PIC}2#NOB zjr|9Xh=z&cSdQB`j<&anp}3CYkXP(Tg#&1X@(7DEM2hrykFl7K_^6Ls$dCT`kNUWL z0%?o0Sda#Jki0mDx&(|0xsVRokilqzfH;v9X^h91jGtEy?!%1FsECc|kse8nA~}-N zSd!*Qk|$}BDY=s7h>kAVlKG{MG3kji`Hr7>liL4elQ@}^Jo%D6*^Z26iu*W}L|K#o zd6Y<*ln6d6sCI zmTD=IEZLTBNsi&@lW_@`bUBwZX_qyrlX|(A@c5H{`ImADl!6JEgjtwId6JTn=1K!vWb_rS(`9712` znY}2TnMsV*$(f%in%L=drJ0eaX_6rco8tdDp5zIR)rgz;hn_FFo3VM9zWJW;8K1!k zoWfb3^@yDKnV-tJpZwXM(D|GI8l3|Akkomd25Oz#iCJ*Kl?hr*AC_>Qw^g3?ovaz2 z;aQ#-dY08tgB!}B9{QmmYM$yjn0zF^ zbfXy$UHYYFreatcXA!E5$hMJzLV6jxrfeFb*w~@%1*hj(qI9aBEZUwgs+)NVqbM4q ze)^|<3aB?~pMzScIC`jvdZYomsQ>@Ts8L##QyO#x3Z+A|pp#0VmP)DHnW>!WrJmZU zp!%s;8mePjs;26x+Xbc&TBc^IKFP#0YI>nAa$2jms(!f&j=MUibsDTN zNey>un}KQ#DSE8RDh|u)qQ_dS(AuKSnyl0+sMh+dhWeY~(5Ktlt=#&p-Wsm$@T0Lf zuG(s@;+m-Hx~_`qq?GEe?;5W~I$=tGJ&^4xt&j zkUP1O3$YTrVjz^JtC@bXdV`&7vCz1yy~?W_OS;p*v8H>v*O;;5z`C}YjUWrBCY!Rf zE3+#*ySAIVExWtDiMzzAyT9wZJ3F#COT0Hbw8)#h>Dsf(+q_sw8x)+?m zugk&K=(-(>tQ#uABRj&aTf!)8vMQXyEG)dl`obkl4KEDCG~B{AY{S8O!@@hQIXn(M z+{43*qCz~wL(HwnTEs}4#7exx#Cycf`^3v@w8@*jKYP8?d&O8x#n_w0+iSkts&9mM#_{{Ua(ut^YsZCq$9N3De4M}e+rRMp$AY}K z1YF3BYrqIhxe5QQz^?PEdZT_4%$gP~o}=r*sjI=3e8L*r!5<8}!28K^>bti)%AkzH zrd-3Qe9Ed^%0v9gf;p|tO2o1}%Sr6SObo@ioXb+I#k{=BTO7T>+{?i%%)e~CzRbPd z8^&q8%w;RD%>2w{e7;nhzR^6#du+|uoXzX2&GgD*t)tRz}bc%+p$gCww>F! 
zuz)B)(YtYoP_@^QeTlZ~tAcII?`#R>48v~h*qJSpZ3S2%umDat*+m7{JpXkFZP?XD zb;F7^u*2D*QeDu%quHfRLxl}03Qz@F4ccqS*N5oWoqbq}s8~TFG-!oU`q`yn6x*>S z+p;y=vqjsqRok^?+qQMvx7F8fTf^X4RaEpRn~e*4bpd(US*lGp7Z3%7Fxiupi8^?} zunaS={o1l1%TuC6eVvir2voSG+{(4w%f;Nx)!er=#k!?N`Q)2${gug8sXujEt||b7 z7*~+xj)I_q4^W8J1sj*pK%MJaxAcj|x#y~sZtxT~MOy@|QKD3F(Fx}JT zfK`}Sf!GDnS;d5CR8!!9wMki`E!qw3iA882nJp#DWJN=LMXO`gTmR*mQ!Gr;9Zu8< z5E7+{;sngo1*%P0h%jIt4$xoy-2f^~h+jb2x(Nxx9W+-eN}lKt*YqXZzpm537%U*okn$bo>=b%6!gf#gMBKB8avRb7;I0aKWSW&ndpKw&GOf;971+0B7~ z)mmxM-moM|K1JYbZ6qU@fdCy%$b4Y(tUDB#0V60;Al6g>^%hx^PRQYaANYY8xSI=Z z2oylWZk^8wu3{plAY>(1uA+io71vrl3>RnygGdIsnHzImGG}#GhWt)peF^VnTpre= zV5-jmL|r5301j@O1y}+K7z!*Nh%=;MfjD9l2p;W32nvv!5C2$zh#=%5VB~_Bff+c0 zLuCjhFo6XS~&)shGUk^$(^v)XyW97xa5QYWJM+jDW<#l zVu6B)fP-KGBaVOq2&Moq0Rj+W_f3c+UVtb3<%O8!f$)GwMgbJSfDi8E3>xLyq~0<< zWyyt9y=6WE-dZ^R%0-lAwz6Zg>;lyFd0hkCTFuVd#UagSQ7NhAhY(tVNQM(lhDIG%x1rAg z-de)J-K33;*z(ZFl;98GfLms06nN$%Sb}Ia2tl6cP5%~xn+E7yHeZCOW?dEt6bNP) z;NT=+XrktyCCHge{(*tG=78R1ji%vw#?cNQfFxL0MP34~j_NP|T_8^681CD%J_r>o z>4~^uFf)j+3|5zgQQh(0s(O@2jE^Vu>VrV!gurR3#^=IY){YT~q?WZ07=R^^XOPt& zCMbXzK!LB~=fb9m2%vz8zTQV3Z1ovyoj6fdumD!xXp?s3S0_3jw0@dccW&%)l zwcT^(Nx3mWSu#!&Cx zaJTs`@J46F4T>DJYxIWlD7W%|vU1@S>mIji8$IN0-aphE*uK)xU!{vX_Lgrsb7NU^ z$Zf#)42eINTmH#%h?wEo?c}&&SY|auSO2ziXwBiDNMM-N;y!P;-u0=q8D~6Q<FH!Vq*>z+uj4nYCtX!hEB_N?04bH0->chPQN z%5Og@5~;8ouJ#SZb;xi6G0O7IrT2Qb_j||peAoAV=l6d1cYSY4IWcEQ7w=Q22y%#t z@;=QZN&;K`_Zxrsh=*VvAze-Pjf#hOd&lyO=Xie$`I@(Gy8lvU2i*C-nRz!qke|Tyo(f%dpw`SFP?3u+gsv`>+@Lu_yboH~X_k`&b>!a=mK3Nqe_X zd#;lExTpKMpL;ZUd#=G!SHFp_7TB%}b6B>6mrw`t-gB-$laA;3o#2gcas0=J{K%L5 z$*26vS7Ld^{LI(<&FB2i_x#HT{m>Wv+oV&{H~rH`{nWq9)o1c9$S&!SDMb}iesP2a+eE4So`x_0m49Wi$=-@bnT0uC&AFyX3s z57#YRSV7yhiXTHJ7}amYgFG(Vlt{27$3S)lc0TqjqsY=DPmhF~QYCAZjN-w54J|et z+OKOfG6YBWqR52Nr2hdfJUDUTaf>4#t~@z&-_D;yk1l;W_3GA%W6w@pvG(r48Fn8S zUf_5a<;z=7uYNuI_AV$9iZ35N{rGU}Q-Y5(v}cO~H3E1{Nv08qnkzY_G*D~{-qxT% zFFlAG910pTP#f;S^~e|+awr#v9Wn&ThZ;5{;)o+|XrhRLeK?ScDSnuuj2zBbV_bcq=nsCA=O`NgD>1oWW#UEQ-u*i_zTC2$=qkMA8vkFwQ!-Blr z^2#&QTr;k;mV|OcGs~Q^&n)fQvaW6nO?1&m7afSMN;BQ7(*R=hUw<44NLp$x99R=- zHyNwovHuL(wk&QY@m8Ur+M?aA!PE|z@jf$;veD1Gk!$oVMJb}96r?m%cl6aofhE<7F&d+5U-08 zj{lnr01r692Hp#g5>#Ux|Hi;QB5+TEgyYC=V!A>?%yNhfSm`R3u@*iOW3C&a3M&~y zOJXvUnnd2tNC-n0zV4Eo%vsKKxJgk4F_o%ZW$&i*Gl{{hXGlDWt!fFqTjG)-x@zL} zf~HIDx!`G2^GJWD7%Y<*D2ptyg)&fhp+Z;J6i^y*cc#=RM{BmokJ{vSM;xgwW9b5ynFMJ>j7!t} zGms}*qz)b#7D#GpAe%6gnOs|m*Ic6@YStz~K;6yzu1QAs*v)QlIM+A1X*j;^51-{q z<2%!7$8}=0s`Z+uI=jl%uNFd{3j-@zxeC(31T?Lgn;}KdDnm+k%%Y`iXhb*JI=bSu zp*VaC#bl?$i-Iha8*Le3MF=}qA~vzVQw&KPi&DAZGP1c$qAx*nJ#fI~Zebgt*Svc7uV+o3Bq?^#z&a%Os$SLV9gAz?;>t6+-cqq2SlnDej(58VQzwf7 zd)-K`HIX-#9CR_ZIvxXApg*?cb0@Z2!|K?R;7?aIsf6-DysHI@F-9v0_AB zT~pt9$I(rps#{&%8M0x>t}f`4V=cx;3Z%!?nYEA|rcm@|naLy*tOwz`R~l})qLk$5 z$7VL=SHhCaypv9NoqV&CCbol!Xx_HdtTV#Y@}<`9jcSu$oa3G@^;?6?YhK@Q)gY{Ou~m(0N;0;)=hbAe zd+zfo5A5eXe_fZQY{)NP`=HU5c4NJJ(QccRd7Yf)O}>q7sgG>iJ!7w$PfFhWPMzGh znx44T&0py0x0mp43KZL3-~<0h&!&M(Gygp32v<|6h(@%bo8@=cO6TDnPq^SQEiz9x zoaBnnbpKi`MoO#v@z6DVnIhAsP2o3S($`urvd>@-8pwqap7=9LJjR31d`sT1@{!ZF z*P3_y{yDvIvR&NbL7wHw93$PIaS?!;ZW_Z z4(@s22gY9UG2I97``E?CE(;iVE-Lr+5*~P9zGe%(VXZ(*#%x#)n(ud zW|^!F;vs?{A|~P@f?&xoVj~*g?!jR1(ICd?9-DQWBGBL@?!~n=pKlOf*Zm-Sycu+? 
z9nj31ymikKdLL*tp`SgWK=_*!l0$%-JjXap)=AUGx{PL8e=pjUNzp>9ctsL>0vi|V<4tZt^wjB0^Q6-;^`@3I+Ei$ zvST}Ln>(JP)~TTL$9!}~cOkUJBW?;iD0yYXIP!6Rt7UW1KLL4TgQpREGAfo9JqOiqG z2WB8us-w{zrCj)9DfQzec4Dq+Vpt|0Sc+vPnq^rAvWh>mE9_MeGj=8D!|Y@#SS-e!yj<&4Uxj0)$CPGfNrCyZVta{4HCCZ%v*=aAAU zkQ(Xc^$&C&DUnJg5fz=2j%RdAsaPh$K$K^CTB$!$X_Rv5mU`)zf+>2wSC|GQLy+m2 z=8Qbj(wT~>dhX*|+Gj11r~g9I=Nj-qoZ{zv_UC>^7N72?dOXid`RSng#VH0;F9B$P z_R3&7q||huEB;5^S!6QR-9$j>ED8osNv19K;!PZy7Oo#^9$Z|gXsMcNW{#=|wW#D( zBW=Rggbk5}eJFBn=8CRnjTR}cR;H|0W^LjeaL%ZcLTOV@m~!&yk0NQPPARe?tB)0{ zbuKHD^3{?Wsgj0iv?_wNCPG+NX_j7Vm#$-6Y$=@9M3~a)xtgh0x@oM|shVc1V4`Ju z;_1AuXM4IOy~5>N90GOX(w*jMxa}pt*6E-UtiSRnBJn9l%rbx;vOF-jA713^<>Nd_MuiB){vM8)3 z%Z|=0PXcSM+A7ZmOwY4AU!aDo!A;uNsW;YAw}n?el_dvOsV2R&P97Z}u)Lx9&{YzUxVCtKEh# z!A+PZijLW)t@pO=`lhX#rfa=^sZFr&wh1oy+Gls-Z{CWnx8|>W_OJiG@BPYYzW(j> z)Gy-prNKI`T@H=(C~zr8#sf<*e}Pt>?wD}Dq;9@E8! z#9NduWOxyTfvgoGn#k@V-?{9oKI_c_Ee@B>Zq~3-zHG4auFV4P@)B_o8}aeBr0w=_ z(rRxLJMj}ku@j4{+LSz%wTTFQ_=i-|&IjCGT=3{IT`HC$2weAeR2JHTw4*TvA*KE%s zrS9S^hYm3ihwAOtDlGGIFGpIC-fk36FAfWHF&i@&^Hoc{tK2d(*_Lq`o9jo$FErCF zo7Uw~(s4Fx^ERU$NCC2-8mu4tYeW9&p$c*&qq8HY^B=48I=AyXyR#urGCVU4Tri|y zwp2c+;wF2pftoNVSJ8upvc7F3)?liHri5&;)`Y2pY@Fv%tuo-O<`=P8Z7p<$x zGXJ|?v$)oB^n&!%QYhKx21y_DNo()lUNQID@Au-ZBYds+I=W&oBC3 zaR5hiM+0?G+i^UPGrI|OElh{0BgiyQuQXv(^SCPW*vhYA6SiSfHe?_5Wn*>%M>132X;ORkXM=WVznh@W z^CHi4VZO6ct2V>xD+f2^RCBTt##u&?FrOp5JYKgTvRH5tYqW^`C zq(!&vr|xe4X{gYuH7oaWbRV&&TJ+x)sb9D9$(D3=?=?z4F?N%5$70zRCpKbF_FsQ( zPdBy}&qZa!G)|~@Vvq6sW_EpN-8b(g9*DL(J1`?-$Z9|J+WAr{2Ka0X=3G>AfnPEQ zOSOUT2louv2vh7;2S|n2)SX#PZ+A5mQb)*^atkx`g?eNzW2i*iEbNYSNxm+08}Va2 zH;ZR?iTfRf&TdkIcVv&Rw{|yp^Z58uHY2!qO}l`5?>Lbcx!KBZx}vv{Q+B#4x!E$g zU?Vw>LwS|E^pL-?BV@Uiy0@L$_m}5NXX63F3hFuU_acvCMX0$_FA-o`&;RyVUrh*T z!}`~PD=H?N#6GuOY&$9^KWgainO7Hxh9?+;MF(&ji|X3qq63w<=3QEwwcf$_i&Off zllTMPILQiCcT0MYA9;@Fgk5(!80++T-*x!{d8(`Wd+Ye8N4aF1X{ZzVtkZg}+q#|J zy4dDAuJ1Zbt3j9ZdXr;$uxt4m*e#cvx37Eou>X3qPd2ebyBdJ`N9lL9i@70Jdm+g) zeiN>>dpnvB;kJu=AB=msll!^%0c5QCx~n$2+j&LgIlIGlgWp-ZO?8CNdnR-4=lb&! 
zTC7$xsOS{k+|T2M4Cu%Qn=6)+aBfp&RtTN1-elh{1RG$BwnakF`U0I{N8dr*gMm zqxHrM@yGKx#Fu=?+xSOY_f5Af`fd>&TN?6bZL52+tDwLQ=;6#fRFk#qFMkFVI`k(;qeqb@ zRl1bvQ8hK6Ms;*@4OE(Ia7xv>l_6FaUaM}6O7&@1u`$D%J?kiJ+q4bcu8k|TX+de0rpk(_z8DGWy_&#_P{Ii*WXr-sEX+zlx^PnK!nr<_OT-aLl#4T?{7Nw@ z6!u55y&BlEYip$kwg-&#a>H|wm@!-QYqnp z8!WfwO6$$H-Kx}XxZjTR&9^#|WA3>#p__%eHUHW4&Q0vTyAG*awv#fc^we1onv?Kk zq|bP4+Hb%>1r3xw{}3JYBRi2&6dFe%9n`4>8*LCm`;xNMK}IVrvBay0vP#qtGu+U^ zvrIh|)vh>8FjWkzN>xD=SB2{>4Lt-=*Ro8d%&VqiR6|Eji7nPK%aHYPDIb|lvRTun zWURGksV!215OXy>=;*DE1~2pxGxI=^GoB#H0~Lr&}^5xz2s!5 zP8Z&U^G##l%M(wA{`3<*oB&N&rb8z^xZ(NYs}EC>^o!Kuiw&Ic(Ty*5Dpig_9u;Jk z02JBekT+f{<&`~EwPluJR{7g1S0^SPsRdO>8ZDvZUGC#7_Y_)9~pKDckU^Cmd2;*LGH`Xbo@uh`k4Jkb*@}y z`D?BbSz+%7UH|?02i-IOzZXCOw*PK6O*z*95%@YTX~|w)vBT1^Fn46Kpar?3 zCgCNeYsjk$oahy{dkyeB4^bQZnCCE%xvhGHYT?}CbGE*GKw>?ufj?!&h8 zrAJR@(-RSI_`>T6sE9eV z8B1AWF_jY%Bk#hPOI?1kmjCLyWiNrLKrs=IYh}711(o9^GwJJh(a9rqz9u$*{qcBk zqLUy?C$RIRXDQCpQ}h4_y~453VdqmJ5)H@0NTTzF>)fO!8%DSv25>E?l$`x0)yhg* zaftftCqDsNMSre?VEEG~K{<0!gW^w03U%mT9*T#EN|d4#{iOo`;+S3PGB6hnrjs_h zOR2dgq$M?}>+*6^UVfCNKyz2TtOFbd(h-?jvuQGc*QO2{&pO2efBfgLk6gH#;~PJWZXXU3CVqW&xcXjYCk{e)%MxxpJf&3V4_h_q)arR zZFTD^yVAvP8JpQ9Rd6_yu^5RR3SAo28&#@KvqIJC^QzacV8eNRrd)1ND&7EN~SM6s4tkCok8tmMCMbL*BES8iata`WOHE*xKRbjFh(Pky_( z^XJf?7j_uEy7lYWvq$8nlO^Qu;Ac+%j{TBq>dWI*pI^_Oe9O?5F`IU;mi|=QtJBAB zZMn98+uDg_*Ea*gMO<*g5y%s9Y%NHjZfhC%;Bv=xb=O14;o}=C8B$0gUCMrkM|jBerSfn{XbcVw`l=X{TZ~ z=BcMkH;U$?j(xTiD4#z9T3Kj=26|&xnRTa@eoP*E6i{IC=wE-5{#56e8FKoSUT1=?z`}IBk#QQ z*4vG)i_vqZt@zRj&Px9V4Dd?*+UcFI2q!#fd4d{CsHDlxn3|%gVa6?mrWTjiwaB4_NCxFcI9YNxEb%q6*S!@f4$aACJx%=YH4yWWRy1hmifNaJ_! z#A8?Z@q;5>{B706%N`E|6n zA1;S+yS^s;?ze9L{rKmv|NeQWz5D-q+uQQsH>;o((17)r#Q+z`yyPM8feggc)0`I` z8A*;v$U=(MsP?1RL@#^V3!w-{h$+yCZ-nYYq3Kr0!W60xWU5mchu-GCl^MuM;NqVS zcgRB?nka7o4B|QeB51%O`b-)TOk!iW^R*^sP>I#COiE_Nn+YZ?92C5uX08Ul^;HCf zVDz2}CpJF)WbSo0fnoa2HmnzZ!yIpPV;s@wzxvrRc2EqE5N+1G-x<)4MhsdW2l*E! 
zKGBed1RNp>DKRXH=ZmC!qEbAl#jIftgC5CV_X?#sFLDlRp=;d7K-tC(us*w zB)|gHJPJM!dTr}s7;U1g$Zf@X+YICOEEmExs<4$-StZ2O>B>y0Q7H<_pd2G<%XCJP zQq6?bEbo{9PmknM9lIQ+KnF_9f*SK@z!TmPhu6=Ba^#tZ>|He{YQ>6L^qN((CPo+6 zO>RbSNM*F%CBsS5bBYw4*V2dzN6F4ddCiIKRLyJH*~HpW&TKB@%JTM!4skh@B0vP9 za|lY*q8ikQ2MuI0f6CMtK{GTGy(mU0>d28y)R7v!B29(_LMK8qj2$hhSUaall#(%Y zE2SqWWh2ur#Suf1f{Q)7_*QgEs7pF&r(Em0K%OFpH-6n8A%nWRp&IqDh(&5agSOPB zGDaImgY3~RMa>ellaRtu5O=B0oC9X}7Ygnt32Wn4kpce=w~4rn$3m;t67Z+dQIyMEGp84xm{;WmxWVLa#gmI8Jj)3WY4|6 zcCP?dXMAbf+q=f;j>3gYU==*3eYP!Hc1tkcnrlFEK-alJQ)m&TI~Wy77P5m37(_(O z3d5m@yC}XaGgG|XDt6YhF2SmKkr7_$)9q}H`dl?o<<(< zf0xXTIo{XF(h_iP%~@be5jeoKdy|6q$;t*VSi*mW-Eh4uOEQ0X%mB?uPnA2vLGBEZc`UIFYPGC@lky2^+vKh?O1b~l2|71~t-edSCTB_s)0K9%E@5+dC)Q`M13T{$hW( z_uvRmcxo5UaD|up;SOi#Ywg?Sic9?BvEE2JpUl@NyE4D66kj$)El*wtn7^<8bM=dA z&E-`N8`**VHJBZITrnHl<}atEn+FN)=lVs%(Pd~~JdBNk6#eK(PrA~V&h(}`{pnDT zy41lT?x#EY>5p!@y3K7>b7y_)i9WYw#}VQg3;gV8AFrnY4se6_o8La3baw{s;(tTj z;oQXg-SfJGxx7wpG@OaPL`f1;{N;W-_?0a9OFV?&N_{V?n?sFec zuoh+2?50 z=PYekeqV(}jHn+&D1JT04(m03Ji&KM2!D>TcP1x=uf$)im`Z_Hc^I*ZeyV#2n=zy*gOlLTZYZyoyctnDcbTjyE`J`yfc!#t1jD1LTFh~)AIE{c9gW0Hv z&-LPD=x;Z8gH4oY)+n9%WID*(1ZoJo$C2V^){Ac#(R!mo-->#Q2xwP%E~Vl5}`kf4Gb# z7L$C4d^Ks4gE);K!ICbilPU;lJBf(ma4SeDi2^r$m57c%IB9XWncG*Iq)C;inVM6n znycBGPg!YG$&azgFuZEYq*_OgNoVxgyPxyHT7?-+Pk^R@4 z%$b*a8CcONoxez()M+llSYdt>n011Za+sJUc62NMMw6CVo`6A~ImwM4QJ(AxX*g+~ z0f|8_Ntm&EpT9#zFp93182b!P?x}Xf|N$ zF_ljUaluKP7)qC_Goj5Hmo!I~@i0^Ehn&+{fQJE>Y?fvpx}w$ji)V2!YWSBXA#H8= zdeJD2I9hfjI1P}-qdwZ6K#HD0I;2Eeq(&NuGpT%y`68DopZ*z|Mrm*5NO1F6a3kiV zRtk3k+B~YsrCa)?VA`M*Du_~9ieZ|dEpnj-358uan;H71#aSI8S{Ki0q9S^yzZq;5 zN=c1D4hIMg&3Tu41Z{wMhQ(+=9@vMq$8AjiH-ZAOdy)C0lIo+*@q0(wqnDbgnhK;U zg_)b`sh`@OCi0|>c!H^!lr8wARf?sSSfwa&nqC^9gCV9-85*-05op>k5E>W+dZ%#8 zYg>q`bIK9^x17WJij9YVz}ca{cASa92+awofjX_2hno!u&_ zj|#3NF{$QyuIQSs>bkC>8mb#%sqIRQeLRTB@$PuU#6e`e}Y= zTClK2tEl>&23oKT%RZ{;9BMk80phI2imW6Wrxk0l$?B{U*?$2dt%1r289A&jdL=aT zY=xSwCFxAu8lJGHjNv-2GCK`Ps;-j%RaUur9r@a%M2oLKyRfDT zNwCVWOxv_h%bKzYre=DeH+v_}`J8ajvBNrx7@M4xr?t08v64}uc8Y-(7!fT>sM)Eu z8Mu1M^s+76tud=~Fc}fkFtc~dql}rieEVlr8xrt(q}`XZMO(O;S*oQGqn= zgs-i7uUh)Flv}x$E48Iare%q+3|h5!07su|4xu}`pbNT)akZ!$5n0QuS^E&H3kMJZ zyR7iKaP+#ad%LrXy1E+=wEKkzDTckXo9zZxpK9UHv1hrwj#{79(iZdyHM0y7`*4G*)|c|i@iI_t~6Z3Haw}-z-T#a zt~)#pJ?z6i`olsT#5z30+}NWdNVEO2xI0T|)S$yo?7rzMZiy?!R9s^1V0=lt7gBt$ z@7u*+jK1otnO!Qy49dRYfW~UPru(_D_RGe^dd7LQs}ah-8G4X-jGPe6r@8x~!g`!r zi+>y}!4XWzeayiKe8GeN3?jrEtrLvAfEvNk!n__iFL!1UDO-%fQ@x8SjZ-(1GMu+f ze8Z${#HDP?M4ZYvtja*l%C0O9KMc#0n#4bfFGk9~RqVvhk;UNI%l^sBA2G#PEDm6N zv|g;n$ehf|jLZI-%xPT4OS#O;{LIuW%+)N-*PPAF9F^&N2aWi-?cmKo5zgd%2jzUu zWjfC2{LDc{&YCr*ghaZMY{24tq7Q6%_q@jwVZnxM5f-e-SsT#8o1zTd#~OUl`fL^g zz0eHJ&;(q_BTEs`;JR}<(HtVJBm8VHYQl;IU7TFK7D2ZZ5z0!OBBgxNl9~+Tiqfy# z(x~jxFD=VDTy;JF3=uVL(?|*)a<%12F^TT&S1UHP7T%01WQp_&-CoSUfmd@+t6-Z(7f^1 z0XY&&mJfkUWL?5jXFf7A% z+sP?1+1$(0tgO;1Jq?}h!zK~atqjwoEz_!u(=<(oKaA6XSPg#R+7AKStRUO69hpxo z)3+_lPmI&npxeHkzP%0H#5~->UEIHY+{7K+$?e5Xz1&RvQ0CVT!A;K7UESFH#MXV( z+KtB1z1_n9tj^x8)nU!d-z?tWJ-g35-e?WpTJ6;8&CTQO-qrlx>0RIS9oDyW)&b29 zt((^S{ohdu;BBqeqD!m`{kaIv&gE7yQ6!Gf(3hK;+q zo6)?BvL`W;A_-lshfK3j4w3z`A#K^;n%R6C7@WP*KKj`xEfJ!f<2jDwB~8ntjiWA| z)0(B+PW;0}9>qvb%p#H7)WF-v&E!uW;h+p??B-(qZE9Z54Q}3ZZqD+}-@sheao*nY%}4`I-~x`=vrFjp zeCUS%p3j0F=m*~D^qkOk4dE3&>6BjSm9EDEoxy?)2b>%89UzW(dKUhB0E5x|}dxQ^?^p6tpV z?7NQax_<1rj^oSj>&RZ)W+W9f?_Lq(p6}wW?@yfW{r>O#ZqxHV5%#|C0H5y%pYRI5@Ck454BrOW z4e`_<@f1Jt67S{~pYac$@f6Vp58?5wAn_3a@g!gJ5bx$EZ}KIN@@=3DBhT^|uktYe zukj~8@?5O)IG^(+&k-}<^C{o*=WY?1@bg1o^hUq)5~1@-Px2+v@-1%&C=c~cZwU}j z^~xaiRNn?!ZwVlA^=`iPPyh8NFZEs@_F8}TVxRVCZwORx2~f}WTL1P}-}YzU_H2I# 
zRG-v#ulH`R3~lfBmhks%FZh9f_kbVwfN%JS5BGrI_=7+AfS(bIU_>8UUD;pB7 z{(-vZjI5y25n&2`s}0I<4W)njkBa)!Q2MKX4Jxhrtbe`iO8T^a`fjcaxR3kdaQn88 z4Y}|8xv%@d5B$3St;7HO!Jqufzx>Rv{KRkj(BJ&hKmF8S{mt(V!f*ZBzx~|*-~HbI z{oo(|;y?c6pZk`O{O6DR>979iAN=Y6{_2krejgF%&-JWe|Mx%t4^jUR0T7mySrW*~ z;6YXb2@YJSaNt9S5g|^L*ihj`h8Zm~B-qj8N01;nh9r5>;z^SzORi+u(&bB-F=fuA zS<~jtl8Zco!HF}TrJq0%4rLYZAh~x)krHLv)agNK<)oI2T29-wt68yb)ryVlSFd5k zZk@LE=~H94)UIXQmMtx}0AoO$zj5WxrCZnTUA%eq?&aIpZ{AUK1HWxM^zYumgayyV zau~7PQD-F+ZOpcE<-wWNYQ`Lx@%2UU#|QV`Q_tk4L;wl-7xj&;IE6X4!XSS z&W*7g`~*o^G^I(6&u29?YQ3mk!R{a1-~Z%DLJ=^S0Sh$nKm-#+5FrH{bTB{&8x#hZ z2YDHAy9f!aFp{P+-0(vW52Wv)P`1M`KoeOBam5N51o1@}W0a9a8A-I!MH+K_kVFG{ zRM5u{Pt-6$gHTjaKp-j9(LyGh)bL3qqs$S?39GbnNhz^h5JxRp%#unc!xVE&GQYgi zOfb8gQOyLq~P02!i^;TPF z&6Us*2{qGK<7hRt&J25<57#+~O;%54msQr;XQLf+SZQ^&)>-m&zOGU3F9n z`MlOzRv)I8UWFg7_+pGRcDGxOJNEcvkkutNthP=gVB-UdDR|3MIjP=q5SA>l&RzX<|R zEu~@wB#_1weZ_EJqbr?pRA-R)!H+3eDGTf(_mn09QG3GZ918WfKew^shy{#b1ylIM z33{rD{-Yujt7t_5&Zibw9MLbl_zNi>(1|4^V;RY4I5VbEjca6M8{OEtE8*f(26T!T zj|3^C#fKa;)R)ru$gik4MN_J~AuBj#D3O7yLyP>|C&Jg1FGdm>OPnOgtoX5FK(dL9 zL!u--iN#4Grv&m8;B?6B%fnc<7N)peskdC`Ci5NT!fPA>{iCr#-+G zBN@Y-Mkka(4P_)VnV(3;70E}GPf}8p2$Z4`LAk|D|Ju!y#>ysrIvGt+vQnJmBxgCz zc}{ea^Gp!@CObXFBP=B2UVD_~S=3icLGBVEgggk^nT8R~Lsy`$92tmB zPJZ(fsW62sCZj;2h{ATC=xQy_;?x5^Rf)#pkmiQ?R=CDhu5+bpUF~{TyyjJ}d*y4n z*!fAi9Y>z=nWqfznbUsqG+;jUY1KH@&HHs!|4;~Ffd3jZKd@Dn7F!5RFuW^9Yf|BbgJ8mu21&(iW$Qmp zcwwW)_20+h)K}<6SGv=sZgs7DUF>GJx)ELKBcIX@X|>_Ig93C*?&f%h$vWY=ntgd)Tfq$r0{3pkWh5G57kJ4h--;RQ@!VijHx zMQmBIii1qJ6PQr27c|jb$E8BJAX|lnY1>qUEO@uwa*cLf{9+i#SjID^agA;4*QtIH z#ZAG5JVED_KOR;p(iqBOb(%xRCb<@0{{Wv`ql5+j4VM;`WwMmgW>G4$%pmjCY!7hZ z*^cHf9P~hg70gfuPn3ZaQ6NPrl+n-?H{l=);DjXV{1i|a1QZWg#SBuh08tD96dquJ zCQ4us6AxO>2N^=YFvWEEn4%}8mt;|Fl!3VRM2Ca&C4RHs_i zt7dhpVV1T3LRAxaMo3lLd5}lsahB#)N|0?CWU5%jEa+7%lG}R?4rU|D_R9ZU`fK0+JYlu{eDe2~z>Xte8M6t}JCudew-X zpn!K>{c(_oT;wAsxetr>*O4xfDBy(#JzHK=ARmQ>ycYKAh@Gjb@$%SAsWOjvywqlI z`K(h$%Y6Np!y^T6P%d9^uw-v zRy3gro6s}%vz>}nEMWSY(7~qDEs0f3U=&{9z;8i;3BGR_A)!8qz!l6C`y2k#op$B^ z8^7Xi=>#UzJOV6013W+kq@Nz)E{_wvMqz}rD6DdT4@r@^q;M1>Yb<^8m@oLHC^(YZii z+o#nVi^O7||DS>;ZTz8<`4*zNw#l-np#X}csgHFC$8~79H%uWe%(#2Jrir4cFv2R@ zxxv*^NICRENy5ojiZvw4#{>*Yp&Ux0L_jZzNwhkmr+~nRbUCNg7hFQUXT-Vn+QUQQ zv32Um%efmTdP$j_NhtbBk?g#%EK8W&CYOXfo;)jWswNIB#G;H#xtvS7G^GjI43+DZ zPk@GFOh)mVIg|V&NWs90tV;DtG9y#K>~WsEp~VC1qNH?8vjnTpI!mcyBZ+dZVyR2c z>`c%6%nSlEcghw#;>*6Ir_-yHW`v5u#K?>!w%1fl>%zTqtW31Jpo`M6k83N?>`mYN zP2j{J|6xFgmTO3I@XLs#K-5dU35=Rz%OTmYAG=D#h!Q8j8adt!PVL-I?%d825k(+E zH>PY#m!l>47|eUZlxOTi`q;?G>J>ElNU#*k?yOJyyifeh&zD&o23d|1Itb%5ilR79 z0u8UHBpnPaubVqeWLr=z`m)W0PnxL}ebi42y-*CzPz{|KgD}pcFfYGE&A=4Q!OX); zLCii((W)s12!*}5nhZgNLk*o#8m&&c%KkZXMoeit{pRYtyEqV<(O4K`5R7PD? 
zM{QI{eN;(}R7#yxORZE)y;M!jR8HMgPwiAt{Zvs6RZ<;QQ!Q0gJylgrRaRY9S8Y{T z{ZvD3qv9-3Tdk=d-L?8~P8Ds;P|%wM3fA0#$yrrA%ZyB9O;%-HR%UHhXMI*^jaF%$ zR%)$QYrR%%%~oyQR&MQ9Yb{nYl2UOUPtxl~r$kTY)KY&kwtqTObaW8@6bTv`fR8iMU1|8x*|9zdz zeMQ*gvLKD^SdaZ!z#&(KqEhoTJ@g!%7Bo?9D6%bWS4_fKf(2Qbomm^*9hwE9SQ^)I zUDJl`#$4sqmQ7KLrP$Y8E|@*qY`t0fIDiyrTBdbcsEt~uRa&a8+9qWLo%KtG4YK8| zp_|hW6qQ&`A=+uE25MSKq|I2XHJYcDTDN^$xQ$!5ZClFg7`dfesokKvh1#ar+q|_~ zzYW~LJpjOQ+qHd}#+cdt!5}#3(5x-Z1MOOg%)@>iYnUg(Wp=`~$(BO@7g$Hm)8kI~-C%&Oc}%9+%loGs8saag}; zFQLucX`onxXeNAIqiZD*!KL1{o!-{Pf!~#k``usrWnQNsUXE#9!0lhx{a@!@TKEv7KdUW1iZA_W^g zmEVJC+WJjk8J^+Qg#pI_-~g^+0rnUg*5TFt8~Ozq8hGK|ePJP9V8Cr)3cgN{lOP9l zoF$IeC<~hj#soOPS}bXK94VfN} zWBx6p&>i5uE!@#Xh&5iLI!0nRX5d)qLJ)Fbe;m{b+TN(joG1ogOprf84jg4L1g6N^ zTjgTH(cJZISM}9g3)4-{tzM3)O)?F+1jeA(osTyrVj7rURPL5N*5e?43evS>RqkVB z(cRIV<>D>eHpX8e_F-WDVLvwD;PoJSG{y&}U}IfJMMhz1ItV9Nfvjt0D{fqS0ntkS z;`(TuOx|S7tp)d8T+c1WUipH$9lBmvkJ}}=HcsAlMxIp;W(1bwc&6ufo@ZFr-(a?9 z*j?vd5ng|m<#PsIUxsC2cIW%`8-V`jd+ys;9%kb(-JoNN{~8YH4>HVCf1=4g9ezMPl)72pbtw<*_Yv59NOf5(%jDdQjQwS_mvEDPF&pUj)Ep-Jw_g; zHRyV7Xn~ep17O;x9bN^lWqBszpGIhgMqXluUA&Fj$!KTQ?P-f9=+&+2gZAP4g_(yI z>x#}}gdXBy9&4sfXzD3NR?r3So2s-Fm_ePU`3N_I$bu?Dp5Y`*X*vZAsDdmw2s$u` z0GQxfsOF~7;3_>%3?gAnj%;QorA`jh_8dV?qRCz6V}34WRNmvG9%`cY8w1dQ8_;oGZ6`oG%2H~8)_~PMYZxf){}zDeT0V;cW(6lOVp*nbil(fj zwlBhk<9YF2ohHl$VfgYglzGZ;l9*Q2Y?bZ&0?%wLa$pHP;Zd)ekiN@j0CgSHV zZiqf_p-$yTcAQhFZ%s(BF~Z(oxQ1Qmgk4C6UHI=^h{-M$YzuCjU-+``B`zx{fhs5g zSUBwE3!{uW=_|(Kc;L|i%@-`aWS7p|*SLmcs8I0q%4W4#jtN6kIEAX?%BHYC$kSz~ zMrgcUZnKv0`qO|^24LTHZi8;_gDzfa&SkZhWu_=;&^-vO-A;@1VHiwaBs6#0odMS9dPmsSb;HT3a=(|{~IsuDIWs6gX*Ft-Yi$;urA#jD1j%Z zXc$m|5-0(6PTm!`bE43I3s~~3QvoEn0NA!|E!W?O?tweLa@MVJ_+~UDKkMeca@>CI z3%K1LAoMNU^cTJXh<0a#Sb`t$^ru#y1saNMm@r+?1YMYH>Z}D-IECM;_5POi$!Gu# z&~9F*BJgZXPaeYsX9X{afD0%AI)H^`rvnUt1rMMwqxG9FGPn!J@bM+tlFe0Z{uszE z+xEQ%PB!sM3U+P^C2|yJj_CwbxNp^p=$dSbOq1UoW`!T%a|4KTEZ=Q=e+p3N?t_Sc zPR9TbFoJ1jiZ+k!ABX@1xb%Y?3KlR9|Jw#`qS)@HK!CFlb?HjQ8e1~oBPVF<#XXD0lEU0vYsPfijfHGfi4G49LH}d%lx_?h{ERSYD zH*BVjW@yF&i@)6r&}}SWKBi~_(`K|G*JFz>T}OWkjn8Y3pLwBUiVQe_-5!GOegM;k z`rOs+o!@e^5A_=$`!UA>1PJr3U*L~V_ERTxBtY%aZst%rW>z?bvfBk+$Tv;+N6FZQ zRY<&p*ad^w1vwfK9l!*TS4p|{m{YK=;}C{ow?8nrfHRPKV`l*o17Dc&A`4IH3@*+G zZ0s*SFPH&wLt5XLj_eWdpUi3Z|Ccp|FdUJAHbqc2jsWO^2`B()?g1v)fdS}%j%$hn zzyO3ef&j2>`-}LYpnxMlfdeRk6L5Y92!Q2wei=9dFK_-C*ny8hg41?zr`Yp}u6RZ( z0lOy(kr#87ocQAB+o1pmBnDU!L_&}O!GsDQ=)iG6p~4jj12lAaKu3v(tZv-UVIWAt zAR-B7Ab8-w5P~H#Fu3sGMh}A|FND+>AmI=LC0WUY2(f|BlPD4RM3N!J(3U=zvigYh zBGaf#OF v8WE28l_I{p%Mhngc=|s{aGSqK?Y7RTmZ_drB_xUdh{5ZaHmP38bQuv z0OK#$z=8)8E^PQP;>3yv|4*0^;R0Q%tWfk^Y?+|1%*8fqQrM}Ju7sU5)%rBb^svCI zRw-o7s?)F7vWNZRD$CdwroAq+GNmxJ?u4{&BP&jvDV+dx68?aQjJMC)tT4f0V0Q|un(JZZy&zw#I@A#moBx}7f+2c;BcP^ zhXpM?3>Va24XBWT3=3#5fD8mpV1xz*Bvj#G&23Z@0v#}i*8@>4p-=-C?BK**Eh;n+ zQ$w9}!3~6Iu$5FAxD`Z8el2BDNK4@r5s^W0bwCd#R&~(>M1A-c0}R0E69zBV$Vv)B zN_LP&c9C@@Ph#;%{}Mz}1~DXAIwsUZ3R@zf0#rev6;cm#zBL#Yh4Gc6ixBA$mPt-D zP-jjwz@Si+EqQq+pb?eT6c$K$86;0^!e9UrH+{D$r=5EG>0o~yfCOchVw%^Upa!Pd zX0`lQS}mW|5}I)RJx7=-sqC^;Y^xeb&>2~{1>Gs>`KMcf>lJ6A7Za55Y;qGt=V@%e zCR81^@jw$5G~KPk-M8VE6E3LJ5ENcAuMVedEv8^n8f>ukI@q$pjfP3Q^BOiACWh@r z;BBZ`%ZeuPDkKtMOGTg&3Y0c=C`l*nz|@d8P~g)5OI^I7PZIX2P!lt#H5HjrF`z;K zXthw&NNWz!|CW?1Hwbm3P6Fwuf(jA6`O^zH zCe1WtIxr2BkR`AQW77#iR_4hEl^oHGV-KCwT@>+r=TDaUi{hFK4J%|(1YKR!n>R8o z)te|w8Pt_oDdjanto&K#l_#aDD&vhi{y1S7uwXO{I!yi%A;);71Hi1o(%y{Avl#H#h*Lupky#9Xvf~?G@%6yCUSBom?*_J|pK>$pU zCZZH;{~;*6Ge*}0q%|3nR0TyDIACC2a=;{Yz$0Xt!2;OCk%Z*wNe~Hu$d)px9H{^@ zbRi59XkrqG1%&|=NdXSTmKKDa2?0`D6J4HmFulwuN&zSwMF0hZk*P#42GGd^not4{ 
zD8wUwBcKX_#+$1IW(X>%lK_=8HkAaRiU(my3b^t_ER4!%KEa4h=u$Y45u{2584Oya zG^H$h?Tm)g+DQi2wHQUkB3e6`(<&xc8Rza9In-^VQz<*+k(hDtT(Q4ZK`|WNk}!gCrRhfW`F28OzcwW z9Bg{#H}k9C{Sv|nQE-A2@nMidG3FA(wht8TCfF6uPkm8C8-<;>4FL z!RZD~xLOae&_M)ZsUSLqXHJ&J(iMCZZd|F%4QRxam4vM+BAjaEkjT}is5Jwc|G3e| z0$Rwq(zUK+3Dkv9&@jW*v5%V@2Kf{MIyZoB45vHn8GP}dU8IX3@sl5BW&_G0(18rL z9D);UWgE;W5+bHp*D9HU3t{kQk_36eEUN&E5+r9iuliKAsIv&?(KeU_VFdJyIlVrL zr?=27k9b}eS@aF(v!x|2ae0Lp&bbR;>iK5Nj6oaW!bTGm;KT&lH_?Rbl^`=1Kz9Wy zHv?S$UCb_vYl21Eh2FO z6G6ZdhecFJJ_^a+EJ!D-v;cEF>DuCa+A^-$jV=mxB3CV{;1P<7g(Ey!|Acv{LxcsY z!zri}f>(euxkkhZ7Dm_@A7(og`m_hS5K74o(Bl!Zh=Lk;0f>*tv>V|d!h+PrjOAiu zq8_Y+M5SGr3^zjk15gPmLHu-9`p3J7RjKrET+Y&@Ch2ZtfQvBWH-^(sS; zKMK)=D%|s)`n(6%gH|8x1bqr6fq@K55tyw*j&ecYw$!Y^R&B3yOk+xqQ>xw`tIzXm z^~Ktdxz-PV+;?0(1?UT^d%3*GvPUv=g{E7Qw!at!cR+G!?Sf%8rv#pFz4N{Adw(~H z|E_P03!df&2jJme!U`N>m$@*D^9!2r+o|C$f?*3c==DNdn_44lpY zXE3Y*{&3jA0*u#Z*B3F6Rq2<@;uA}z1@0wl^UirF1_lBYdgy1O>f6@uHHt|T#(hZ^2OfkL$X-O6(PoDl9_NO&9wYQ})z|3ZMgpZvv{nZ=!XeomZ2G*AsR;BqzwoJDxCy!4jmX4VnNaw%t6WVo(HOj3+CZp zo!}md&-vAwqK)9M9S#NIVGC9o_&wbWUdQx(OSdS~^A`8c0!HKJftAGr{OB<|qVi3j)mAlHdv5c)?i zD%X#c-4|d1Ix-w$lo`Thq5Jibq39htk>X`s|DibMV*rNXK7wIC=G#BI;yljcKn@^5 z;vzyOq(TawVSK^TP2;Bc$0ytw=!DLll@90(f&+daM=s+bK_fB}M%Qhgsw`nNIwM!a zpbXZa4K~m8P>&?)AUA3q4{{&(T;jUu zQ#PeL>LgS~Wj+d-I&NeO&YCBULX;TRARyKle5C+US=Wj1y!b?@-1IPf)*BR|AA;SfM_k7abDO?u-sVW#ppA#tLkaULgfCZ}>9C;c?1aXKe-KId{yr*uZ= z6P_b<9zb<&Cv|qGcYY^$hNpOrCwZ2qd7dYCqGx(q=X$nhb5f9ds;7L;CwIsvC2L~SYpPD`{ilEqsA>|ZfsW*2jOBr*CP9pvFqxWd&LuHXA8&q#Gu>n*hNEzX znski;}2dZ>u1qlZdG1%+r;j_8RlS5>Acf{^Hoa^``+XmGA5jK(OAzNn6R=#J7T zW`yR3`c;fNM~)Whh#sktCMl9G|EZETDU+5cic%1iHU*Dv!HHU_idv~UYH4UzC6{t( zi-xI~jwzXzshOVXjcRF?rYW21sG4qRo35#w#wnc6sh!Fxp4MrecIln!sh{%cp1vud z{;8lADxD5$mwG6cCMu&Ys*F-;fCA4h_9tvDsHK|ZUj->vz9vD)Ci6%SZAGYdBt$g$ zgm-ib5BjCLXsXCvsvkz$W5?kcbLs;?I4uL5hZ;woz5s;(X@ zvJPuUE-SMtYqL7*v#utzMk}?VpJkM#NKUJ;ZY#IC-dVP0vwp&;InPSYW~hqlc03|8 zS!hjq4>hqWhZ^f~_#)8N|Ld*Xt7&ShrNV0=>8ro~tGohizz(dy7OcS@EW*O(F@7su zC2Y4^E5C|gOE#VK)h5;P7C5qkss?AoHf(6+E2gfj&h9MF_N=t}>=A0ts}3y+TB2+T6K&a`ZfWCfj;iB4lS1^R zc)SbERjt)tE!JkO)^07=cCFWbE!c*w*p4mPmaV*)E!w87+OF-{wyoQ;?I*l#+SV-H z)~(&%ZDF`A-d<)hPzS}*=DF$?Hdd(PU?}&TLM!Yoo5fNpNP;9ZZsR`g~>6WhPo-XR9uIjEX>$a}zTJA)2ZtQX{?Lq|Y!Y;^Y z?(FJr?q=@p`feG?Zt&i2<^FE%4lnPHl<+Dq@e&d80x$C>@8>#i^g{3S8ZYiv@9}DH z_FnJvZtwJVFX?&#P6Psg3^5WX|FIG;F%vhj6F)H&N3j%7F%?&_6<;wHXR#JXX;11Kon(AGiDkc7A2`2*;Jh3J3GB5YCFaI(y2eUAzFcPzJD<87dHSJt7ts`2U zEkAO3T!9oY@i14jHD5C}XR|hMGdD9uF(0!lzjD)p@NUj>xnQVxUm+v_T&SD z_GY`@vN|JiLT9u_Z!|}Dv`6#nLfhApGBP)|V; zyER|;wO{`=U>^otlNt>pGgnBc_33jq+I2;nFj@yTWJk7SPj*%h_NPkUVJmY_ll3zB z|8#f+^+oqFWsf##m$qpiv}G5zNatYnK*VMFe+~HCr!G`k&LpyjxlXHYqboNDURA4xfqjw0Xu>cIZoa?ohn>nH4io^VF9sVd1`h7@HZBI1fA>v_cY+5o zL_`545WqeFvNX7jywB6nw)UeXJ{Z(mOri z2Yy;Zz0~`C)K`73t2qfIfyXPv*0aJq#De5Uek?pb)x-CEL-}=U|MrTvxPF)V%wM>_ zv;EI=NF)=)B_zBSNI?g{g5KLWwFmfKAF*H%fG~eO#>ak%ce~*CzVEB~;Uj(?>s89* zKIw1z$#;NqgnTYYek@SFLS%Ze`~4Jq^`{3$s6XO%Pq>xavhLeF5L^N*xP8j|JwfFC z(J%bc>pCwZKDhHf$7lQR&p-XYcksLa+9&+s3;9?9M6O(~UM-t9FV;L(3lT2MWn#xd zi3BLtuy_$;#vO|!cJ%nsBeZqoqLCcA?qfNSD?L`5=F%n0g~_VPwAs-HMOc&;3MkTZ z0~a1bh4#cbLP$oX7$*ub`n1MPsZ*&|wR)A606TNt$9cdlqe4wQJe7 zb^8`>T)A`U*0p;VZ(hB7r;<=2p+aE6U^}w-p#O_LICrlIo`AVvc%8Mgr1&&$i#?JFOo;7K* zG}f9DSNDghnu;k#I4iJ@Y#{|fTWAU2@F3@b>Xsso`c zvQD`VhVt$N6vu+VqXtNP5ylv0oRP*FZM+f3s<0!W|AO6m{AdRXGW;Mt$dgY#umRaBk_R0%SFqwG7Eu=)M0&Ss^@Ea4QwO%^2Kh}_NSo+W-Ew-$PfE=;<>CJ^?D58K87BXR>8x}$!h#r;zLeWk= z{S?$tMIDvYxc(~CroW0s!6@(CamAwez!>Ego=u5=2l|U0o zOo_-uOC0~4{Vyh8&${Kng<>I8raC>Mgj+&zk_}tgUfHv+q!2B2rT45lLW7PL;6PGV 
zShzt*17_fXNObkxm*0N<{TJZ5NO*N3iO54?{{dD-FzY*5A)=t8T0KH2mRbceZ_@G* z{go+?hGhg~nMM*xS<*&g);}~^gSIvII>yQtLMY+ll7+DCi@`7!D&r$cZpLYYbSnxt zF_1VQpoDfix&aA~==E;W9ug*+>Z+~28tbfUg!rQZ7C-<4h1Du=s$43btjZmFrRpNh zJ_easVvQB^*f38jS*5C`X*Sx0RA#vvnxr-B7D{3>1C7{jP9sjC7b5o~4$|T=(c0oLnJi-t63N93IzXvO`RI4T z>dZpCG`G-r#AM2|O0zz48jz?aaMfVbq40E-Tl{Px)!J4HZxNJ_fbULHxI`)J5R0@V zKz&t{!9p(Zf*I^kdlGqp5HH{&N4)?EmSV-|LS(h!01%2%l;RXgBms|f;0H$NT*2s4 zkY&(e9tiv*7!C4}^(@AB9CO>-Y(f%b*^Nx)7!6fmB93O=5qf~Tf)q$t!mGFfBr5z! z_U^kuUz<>Bm{s^&h(ixjXik}aHtj>Qoa+O@sy|I zK$3*b*|RRVVnr_QF&T+8@JP30+lYk48An(!c^RDJRQA@q=RvO{`*9gTr>7uWSac<~ zC>$T@^e0l3BBO{{7K}L1!5%=3Ih@w{xf5i!R4>k2!~-& zkR_o(o|qJxH|C+qc`{Lx|MUol!rZi}k&AqVMhWqnCplFropk9|x!P5)el=@nG=V?@ zz!0-V2_(Hc=kI_gK?)AW)_mY9*P+&x zBy;)O%2w7g{iU{1Pkp3-BE->!3}&x}x(p#m@d`d#b+gG;?sA!%Tyfgvl`A;|9nO%8 zwDC?XX;kCK-c~%<3N?-zG!0(!Xw1{Pi9owG*IK7QKy$s9qU&t0MCecopR_Zj&DHOI z`P*O4^7SlEINvhP{~#snI?!O+9q+d|QjSD3KZkN<_wPLnASGvpvC z_|30>NnBzRpZFvDTOz|Mi2>4jWC=&Ar0KGXTUK<}g0CG7e(rV^^3L`@DF*OfmU3SB zvXhGrFfw}gB~uhX8Ol*+ZeUb}TBV#|2{&NCbXnpM&u(;Gr0H%vyzA|a>1!33!1?bd@E2xf~=xa_^87yFHH8xqfGy* z#!SZau6f;S|6jKSGap%!KxRPf6q#DSg3d6ZZE`GR-A67K?oTGA^JZml_}AIi_O`jL z;EE)HJO)bAY)mbV{=NsY?mw0pO&8Tvl`^LOv@JFHn%vZ15DvgGoYN$PNlb;;r zDObzG`Q6!;D?8wXE*7ghlIV}4437`(&m*c)YnA^T=s_2H#WzOu4%5e=1yA?QL!MOr z%vzE{m-^JH&TXR?-RP?Au6LD==!3(WDt06}_dnrZWc0(6D?!;TkvV1IZ{{SiQk(d1B^Gy-Aai-Od zSUN~Dm=+JNWHoQ-xx6V~defhN*FJ5n=2vfUrmIDw3$c6j%#M27-yZiS{%Na2Ql>Xz z&@Fi%`RE6qd*dG;`Ic3^YURptkK5Jr!uOAA8lHUXUmyD@y}V2mAJmWcxbD&xeKLHW zee<6m{i>US;fH6??h`zyZJGV>-;jR$-yi?~s&mHQ-g?)cIW8XWBU)z3^3MPd5CLOk z|88t0;%4a(j^m8VEap!DS)m$uuKE&C12>QZo1-hbByO1RwIl+s(LWUJdq;s)GWD#$M0|e-H?D3x@!Jg4kg9Sg*BeuE);oB&0!( z*n$FQk0$D`o-pS`tPl&YPz$q=3%AeHNZ<}ex{jIO>9V2!rm{T8WH%^Li(bR?`+HxPZ1SyPdf@P5sVP&ZjAL3udZ$o2VsvI zo~|17a6ckZ3R2M*e^C{+Lo;@16Oqu_j`0rTBJVt6{PttFfDsy_5%5-V`y8z1|2WU< z%H;K+@L3)~8ov=7)s7k~=oK{&5$ncoj0FYLLJvnV7x%7j_%Iyjksd2g80DbQWNz+A z@c&A%E_5*${R0E*ksu2a;>NKNn9l)?(coO|6G>woZ80E24}_Yi9&c+EGg2crk|R6P zBR>)(LsBG1k|aygBu^40Q&J^Yk|kTxC0`OIV^St(k|t}?CT}t%LD2ucixI)k5hpUp z`jGO311E{HD31~;lTs;{k|~?gDW4K5qf#oTk}8>UjTDmWlrRxZf&_~yA|(78&vw|7{Kwl_mHp z&;p6C_$)0g<&vDNu#a3)27bT<7}FyahB9%o2vC3pCi60%(lCZ(F*B0RI1+O@lPE3o zG51n6SCchc(=}fcHaW5y;fe+$j32jBFd@<{a#81!ffrNr>LTr{OcNtH;8FhVE6&m+ zO(2Vy0P8sN1Y`gOmS6_ZGBmZ~1ElkZj=!2vn3n(bEXjb0t62L^~AR2oyyFfCY{QE;ka( zY!r)#v`HbL08qdL0F(lb4K(ox~ zbSe9i69KayzmFUBE*CQiLo<|6pUZ!+g8~9HM1N1XSw{!{Kg$WEJ zVJ1~QI@BXyREq|bG?{ZqqO(gCHAB4ShhV2sPvAIsfIQE1Ka;>Ph_o_|lrskv0xCm7 zj%ER*b3Xes#(+RoDS!ulHBmQ)G;wtZmo-YeVoTizSqp$x{{tXLuk=zWD?S102-0*? z?WtQ~Cq}*XXlOM?6v2u1|eZwscYSR;Jc8^fwAs*ADf4mS9M8 zbOFlLYkmNfCZRTO{J5PB(__Bm0`XOTA-N6O~7eAV@iYJT)X>b<|z6R%^GGYwL727t$O%@wI*rFQktd19D~C zLIoxzOs_7A3a@G<76tIdU;|@2O_l--)>3!iRZrDS|MeDYeHH~uU}ZD*eh!5^7=yAl zg+{G(Gf%);HIqxx)1F+lGVWz0KL7&);9NKISQCa;VFhrjbWQp8FKG4zUiKs3hjS~F zfksnA&oW5v1mO7(!V%7JDsXwOv}j^O@&g7JbA1+SLzhv1AjUM4 zeW@f|EiW?X7bF`v0FD!a9~g7mR6lbxT7h79|CNAYZM0Ldqjz^#0+x01 zd2?8YcbJD!(s>OLdKuaAnjj3Hg3Ub&s(qjGfqIsWwAk*iA*? zN=x;1cVLs*mj|+QX6X}PPg!Vt*p*)ymSfo?i|`ehulKT%5#?{z(v~gSM#yfXBNzZG z9&-l*KuY5cz3{1>26#Hxcas>GIVBf%|2EcDag;o3v_?Y&Jr^LFf3%HVRwVh?f*qBD zKNE(lbUMXiJSCS}<<~~-Go8(|eh>E}`L;&=n3;dHf%_LCHZ}pMR+`N-IzPCQA=n6- znTrAVGpkuuRk;I1z*Te9L@yOl*Y}e@qnEBi5iaKM zY{fFTD(;t43&b8SNQR&zvh`Jj)B}7~DvUrLn`}$d=0rdH_5o4

    7qg-ia%b?%C=~9jN5W9 zC2DfI0D2&7I6ws&U;=so!LH~#p}PmjO~lMQwi(>P9~{CXyn}&4z2PsnakIVY6|yBe zAn&!ZudPxNMh1FdC0G=_|CnG01gM^3=K@gRifTG@vUs{BoW^V1#&4X)Px&lOBn)J5 z<}yz!6*0sAG4>9#8Q&mZfja|iq68HDxKE3wmIzK3!d`GnUW7SS4AqIm8^;^G%fB4V z!(25xB1Hz#6&Y;vB5b##4J7>I!>^4AguBE8Af{3a2{xr89^if;7X;>KB`{!8O2E%q zVSs&6#S9YB0cgiJpvUv^FMnKYZ4d(A0w8ZuB1BItuQ=BxLIyOTbXeg6Qn##J=jDoK zD8M(-Q(e^q=v7=%)oSxWiMX!H_IfF@%dkP zWnDs}jW5#5#SD!gvz1%4e=zU?Q_>5QBdc@VTs zZtOq4?0vE3|NS!UxxV5>(e9FJ6g$Ghh0E9OKC$yY6_38}6HzPMe&ro;n1s(l@4eCx zKk*gc689c0+*{4Tj@=6WFb9zMehu>xTk|(REp(gd=YkY5-SR2z^bb4rE0OaX-_`b_ z!&`oE6m#}Ny7p~e*y!N@F()aG41z38vWBBEsk;R0YVz9 zfdi#UD@QP)!i5YITC4UjBE)LRs$CgKqK3vA4Lf@L2r{I|kt9o+Jh`xfN(C%OxNP|n zrp%Qz|7*s)i8H6oojhmi?CJC8&l4vS7zG+sDAJ`&n>u|8HEIQvRI6IOiZ!d&tz5f$ z{R%d$*s)~Gemp@Hnwqt02u52dkf6c1SSLE1_{O3Z5g9j{{R=p7%97~A z|4rFlh+&2rZpdMW9-4%fNo*bG08Bu$zZP5x&GlSF4MD_Tb=F~J+KV5~NK+0IXmB5b zI)cZ-2L+mdL0I$45V;?2MZP$!F@>Z za2^Le9Dv1=O;*Ten@r6JXPk1*NoSpQGDMt+Bkrl3a?CAY&;UXW};lfu|i&sR3RVxd4tzF1n^mGla^3YC&?zz^JF?@LiV{ zpz0z_OkiLH1tXAIB(4Q)s-}DpM(Cz(kS@z?v(7#X?QS8GQWNz+Kca|7%bfG!ZCU}FquUV=yAmKYTVzjmJXQ1yaY@7amT{eOF+3Y z&rEa8Hs35QN+d3Xho9MYYgILOMVHq_tR!0Sq8$l=b8Zp{K>-$MzHo$T797Z_c{y}& zffL^8K;EqusPHcq0$Nb2)E>w%#Ctk4Q4|MV>reu0BNX7?+XOtV(g#Q!K!geo+^PV5 z4csuq4lkYHK?dS6Fr<&a*7^X~UstYm0Pi*70ogy85P=FRT<~QFP--9^<}Ps{HUAd- z%}uxBEp4s2+gC2=_vS1(UTo98{|Zw9ex^6nqM7hQwTDE20B$iCrs^3r{W9veh9=M3UP?J$x}(zay%q5=TAVR zO7o&4w7VSYdn56SLUbS!2!V!43bCFLD=0#rI7w1pQK zp)h63&yG4ZK>Q3yBMAH77-_-+9vYwu6Bs}n@g+C+fh|gU`~VMVhz0MM&;Uz1!Qaq? zga5tG064gTy*iMAOEmxqI6|P}SU1V{ypMucKx3;&DK$0<;R9%tnt05o03zygm%Qwy zFB?+4ksJqG9@*CNm}iZlNoOD|LX?Y4!V!n;|9yk+x z>J^1hd8t7Ty#N)CV7FZUa;H4)sZWWgGd?i{Xvaj!6P+g!W=>>!;Ibk~9K{MG5M>|% z$kUQMl97b)fSM@-SrUFYONHWxa6&o36;etuDxeW=MF7>wB9^ukjLAG{JHP`f*n~?J zzv<5#z?OY!R^h$hM<;Ty5P8Q=yEE@cb~iSWckfnBX*>nyLZr5-<>IM*}66F~>^9Z4BLhy!Bn0RrI z`$QK5B*EwowC)K{aw!$GwyNsi>bAZQ?`Ezm^a}tt z!Ss=|(R-1x>0WTR31)x^`7)M;I9-530ZMqmx{j=Nx9kA`zFe1&1cZ(sd<}cZt_JJ>(OCw1g72C3x3$kwji=*BO--aDwGccNc1sb68(aAM0~~a>j8i;edCDhk4jEtTqssmvR!w3eGTrESF;z z*m<>Pfg6}>r8jd^^noZ*KmwyZGN2vP(PD%bGYlaEq!DaeMG#M?H8Utf{-Z%?vV&1j zP&l%LUQ=~qvu*YBXgwqZN0@i`WB*z~!2v%&So`CK1T~6~#6J~;bQhC_|3eh2MOK>i zU+Xads^~sgumvRrXRV_~G_-@VgEo%iLg0Y`6k~=IG!uF#j^jvv$K!j5OVwXDE7#@CAk>L>lO9zKoFjqZM7{;fNXM;$MqmVa%0*oP9CTSqz;YlY@ z0vEYBW;PzUa|8qklL16-0kT{j5J4R2h%#g!JE?siKsFPZDlSouP3e?Rxf_0n95W^t zfMS7!I1Zr4V+;XB%#;_&fd6{+coK`l0!%hX1@tIK6C+z>Vgy-qi_4JO7%m~BHO6uRd1;t0(U?W>m?YzukhvcJ z!I-iEm7VFCpZOKN^@quGh!m)mpm%FNW_rzpG)c1&1F0SU=sm4jO<5E~OX(&#U<7Zp zL9U4`$pW0XX^>C>n#F0H$0} zz9O94shz_qp0t6S=ZT)_xoXEVny==Z@92)wxez_Z5XI$z*!dUunVsXAo3IlA_74oq3LU0RI>6d6lNwpwWp`yEdO+8J50@D^{VOC^n!xF)yq&kmR|cAu6Ib zaiAq?qT)z$tgxKSsd8A^j^e@2qCX0xmA9TMYI4be zat!*Q@EKe%DxVmrhy(#e4lo7QS)e^?rB{ljL8_%&I(dC!a!2Z(?0A7piii_BdJa)s z0a=^qiKT7orfF| zN{(Ophb5vAVj80_2c?X-h|)2ok9eGj%Bh{|6pRY0p^9Q}(T7$Erq1wN@7WGn38mw( zr%qaW5J4!_ssE|Q>8SyFt0O9^yUMFRwW7-5YGB%j6R2a<;Hs9|mANLRRQj2@>a5P% ztI;a0z4WUm=QGyoauv9a5IUnj#tO-xrfmAGaY(BDRt_e%>jrW zF{zXakMTIGgDPYS%dj82qYx{yBReC?sTHeArcP?6f4Ybjv8-!(o**l;x=OM&YqJ+3 zV?$+?!1basYOn|Uvi9nkc==5$$e%c|HSvNODTPy;2~I~KX1-Wf5rYC`2Tld>NBJ@X zq_K2iCjYe$L;*6;kViY9H*21MnL00M)CsH021 z<=Yj%TBMtIvd7B0mI|&mDzu&XA4u@L4~B*mfEYa4R){fX6|}Vju#zu9PraJ~xWl~s z!T$l6D8RxSF;NCuQ2PTHl`qD}T(lGbGZ4WU6}DQ#1Yvu(C_q_7fC3FrHA(=&9gvGS zbVePCzZ7%A1@yTDYQ8f}!zVGMDr&8!o5MP$x~jRd8_1e4i<~QANwk}{wyT?T(>0x9 zexnmmW|cnrAzNpJ0wd5JA!B_<#sYO?8b`nY;2AL)1(eu}IFaFZC6ICN(E#0X0j9%I z5|9KXP&e_iDxf9D4%8m7=SUfm$9&ToZX~%eEWT0J3p<|J{l`9evK%06) zEW1W50rqRb?6S3Gb|v&-yXH5?5JJUPtOH+CkyW(-gPU$-mrf51A4$+stOYrwRsSkm zTVIx}DmOrQvwM^+UhV6z&FqYo46{K^8z@eDL|%7`lj 
zyCYk59LKWKAQ3>8RFlm|5M>N}tH+Gcg`6D!syrgWj;N`cEc--(8qK^BUDWhNliV3k z11e}(&_f^@LSkJP5CBz69tIQ2>YNc-oXuTaUxda33B`j9*30u;K?%g2u(z9K27faU zlXASreVh?S00QRhBI$;=Aljx0&D7>=vVXF(1F;P*8V;U^ni0yCEW5aLtN#_gDHf9H zEO2aH4wM83EHDYoHtHfr8HKf&@dA4*GU%+*Nw5U<#RDyUH=0W@1Mr^@AUGyaDg(m2 zaf8#p;smS#K^WjzwU_~=F~SVR152>jWnIpDZ9Y`&v@@&Oo6Xss?b)9V+M&G^+M`>E zP%VlO5>d@WcPb54J=M?ot_91w(Noc2@qsKTgSOd4*hHIQ4Fm(!BOl}gS z*23KYTl>~1E!XgZwX2s;HP8Wr>&kk4KZsM2xLg1cpaX)<00fl;-`$g}5yu)JRi;tL zZG`|AlarBG*BHPA>{1k~w*>4GykIQa`_13|?ce_m-~lcnq|H5zVE@|H0;wI5d9Bt& zuDz`n%crXgr38@-wfz-B%nHk9O&O1XX5@C29j0WO#GCjRUw+ zTtO@l$gAVxhs(w0XXENa-G_5zL*f}cu-yKeUZ!Dw4kX!8{CM5*VHZ$1iha21lXC(t zZ2~@TFyYFecB(f&^x@^&EeWDH~)xk3#jv1pV6%5Y@S>f zJuSLLWZfv|Fe1$}6BS$ismG<%sUlLUAzQdg>do%#&kpU;-ruGkA`W2SrhP3Xq0FpK zYp-4ADC=WcEnII<;hC!DF3pomGt60)5Wm(CnZX1Ha21MiFU)f6pIwnGAimPh@BQxY z{|@jnOYH@I?WgSss=n$BovPmc?XJ3{w>H~eDWT?m?i%r60R&wuU3(xwzbM=_F!F2+ zyV3U^@Fj2ZCy(+epDff)6b>*DUGC-SD6y-gYh8qtqzpRarF?NJi!{&Ni> zce{6Lo^Bz7F%|Hu@=fpbPY?A`&n7It0bNk=1|Rb>U;nKXJLY8G@Ugb45gzfEO76C9 zp+H}Pv)K_ZKoBLs5EvlIQZM&&Pxp0i_Yvc|iqPc=-fCd(&^Mp+J|^~YYg|l!?mn06 zCLA6cRWAwABV3{37%y^nPx+N^`IrCfRB!b#pXv(kYUo?@uFmkVy&~)@;nPX%uxpRA z$&_g+Y^*{}WE&;5s*G7VJqzmFBD%L=a@_&tpLGdkaF5^aP z9X)=0Q!S)}DI`qRKzUK+K^{C-szjOcQt&#tzN~NRqIx+UA=w<8&>RCvLkYwKzl+V!GelC?0N1j~ya;~8*a9C*i^h0TL z4_aK!F|SJ5)mW179bWwS@XUvkN1tB(djIzA-M@z)U;cdh_2)bPuwe&V{stFQuWllW z+*yeu?Yg7J8jS2?u)&+m6V5>iC#0}K3opblLk%~)FQE-C5-1@@)PTk?)(U*>HJoZI zi9;9FgV3`VXQZ)48*juhM;$ZFL81LhlMW#g7bEbx0mnfyNhLF?Nx_6#+%Za=Vgyb~ zE3d>dOD(tL(mu4hXk@LnL^RRJBbU5sHiW8ipi4KYl+jH&=cKbvJMYXfzaQHQk+(Dd zRI^;ZOIg2TS!e-SOibf85QNs zR8?1HwN+Pf)H9K_E+rBn0nJSEI{%*7e04)qHS0B4VTUEQSl_-3b2L6jWLB|oa0S%J z)^ak`*a?44AX{y><+fXIzXdm3amOXMTyxJwH(hntWw%{---S0`dFQ3KUVHDwH(!1C z1T@2a&mc`UWO}R6SP=*^80jP(Ikf7?P&yK>7f$2@b*Hy4WOt$PCa z(h&m>8_=;06m=z|7q72z&nf3TcG+jAy>{D+d)Tlqg{-u#pnE#l@WL156x5Z_W_>=k zABX*R=bwi@dg-T6>UG1CWyQbV1GiN*;gOV<$(+*uEvDpOTAq6K*Jr5+THPC?%grMFeAivR6N?^PbTIq%d z82)*LJ0$VV2t;T;_!Q8D0<4kiE_Xf_w$Ozyv=RxFz_aTGOibEy7U9rTJST-}CQ!w?F>M5L9gO>hc)4_y@N^yqyoI-@)lZ7PaheDi^PtZuf zBPOtUaD-$eB{|8>$+3o(5*-zJ5;k0Iq(GtS;gbSs7y}rA0~v4v0|befG1BB=`MHS_ zI1t4%7O{~@q+~92xl3N6%92xLNc}Xa6D%G~i=W~PL%f)q2;`svPB_Q}zJr(~*hC33 z_(6|uQUW=xv6tTjXE?Ad&7{Vl1eG^tikAW(n+#1sSv!N5W+z{(;lK$I};LC=mLNPQke10mJG4`d+7 zCzaKaYsGA4HOt1f?&Ph$16)vx_dLjA zZ!(0FuKz@v7x4fBR=^P4R;Z?%MQ(DHE4|M0On?U?WTkn86CwFT z2rMgmkqdah1Qs;{1x_%55|n@i(vU&PI=TsNg_ij}>u8{rX*poAl! z-~?g^eBjGIc)}~{>`~@9I@uic@CiMcfrfkBzsT*S?0^Oaa4iP@&Po(Kb_p3Uf#5|i z6v>y~^ruJt(GhxMl>`0dd2#%^Z3FDon-S@iKL87;I?E;OKmi5td*`RCHEI-S%EPVF0kDM7&5FwbXb(nSft8w^2z0_Hl#K41BZ{lF^-wn+ zk_)%<2-J$a4@@M;Yri1402a_a6r4X6>^m6DKM|_Ij5>h~2*MxeG7Tt!BVd5wvp)-< z!yhoh06+o+pgt%R#6ctrDV!9Zdp2~lk4)gWK|w!a^TNXj!S+i)p#R{29>^vX2rK-X zK{jjwH`IY69H|-b!5v7!6X3%F$bmi7feEWH>?6cjltrvC#7TKNN;5Fg%C!t6Dn``8 zdds#*d=RJDfv|EK9_T+K&;T4r!{IA{rul#y(7X@GuO;Zi1E|A0M8)@7#lGUIA6S8m z62@^P$8t2sg>Zm1t2lsby&Ph>i!iE|BN{L~$MJCr4!8i{8zTxpfCdNv5HNuS$fyno z046K~BUpl#2mtTGfEU0h6aYZyqX0L2fP~~n73ix$s>hAw$c|Jvatpt8tU?5vi%TOv z)M2|l>a;f?l#}qtAhL|M`M8&)$(ppuV2KsQYsWI-l$Sus*#GM|54_21Yr>)&J!2EQ zp;XGHWJ)~I6J-g>i@T#HK}o;(#eQl^qC85i$jYwt%CF3m4`6~uNJNF=$-yznU2KO; zfJa~Cpq30quw03)vh}LYdLZL^A5~(GitOR8UCtxuIXpb(mlisXP{8Xasv{1&_ z%YF+8IstBavV&-W+Qd4BsH%k^sUoPTl_0MftsFa4c2ouEs(L-Ga^?Xj89B zKEX0Vt9sL4i_08I9}0EXinUnDA=i`0Sc-ignN$FH6-x{B(pw8pUbI(6w91iS4EPf? 
z!=i~`bw1jZQa(1x+kDeE=Ci0%lZ+`pSR` zt0$j8Q1hD8+d5hZXxgV$q9YquzGWhRQV()_I_^}1s+FsmxY_`9xdDYq+1##+$i3Go zfPU>)U&R5IpskA-Lf4`#%qv*Uy@-ANQxVu(z=fcsMcoJzT=bAKMDRCSD>s8t&3OdT zKJr=&RfygiU4uLb`1&ocGTNK;HP(&O~1to!;y1Sk-mkj%AN{B~4e&AjQpEt5l|- z)GL%QSgl0Im)f%<7_5Y&OGmlA_;nw@omd5aU-*dK!^NPTT&VobLWc0g*IOMi^Rpqs zw$Y0e8M;^pmKp|bU=qgO`9;L~{m!5QU#*2bx7?%Z+KIR+Ac+DssRF~;tW!i0;SomR zUol~%1>zv~CHJ^RXv4V04d0V}JRRv_Vfo>+Dgq9;&5hj~Bc_`nPGGn>fhz{P75x?~ zhJYrVP53QhEnc?y<ff;ml*$;X@W^ zvNUBHnZ!rGJAyi-yeqY2%`jx6IHiUBJQdVi$Kfk21A=2@QGNX3B3OcBO|>jW=2`9x zOdi}7W??VoWLWhE{4Ed}zR)L$<)5hBJWYZL+eSJ>#RKR8B^#*1~5ov zrjlAN3MLr9S=s@5MkNF|f_0XyA{#ysSjYfK0DL^>B&gYYWLK(2aZiju}dcITL6 z=JhCCsx?b#e&Ie^OR9Ufz>t(a+mcd7K1&5)jCIROQub--lKX zcy^c)xPc$&0Qh=CmjC8sefBH8!$cPZ0X=mBO7^_m>p@ex00c1VMm7M5bQ_tbWfeYx z0;{4;hU3%p205nJYPL+r4Pi*s5|EA@r_w*T*@5F@fFA(B9DL>ED^VA$!?^urrIqU8 zpy_2rfeTP*8R$f@j7W8UfexUk6-5Dnbb;UVHKDdq+>ntJbq%na3^xFc^*_&2IrMAdAq`w<5fps>549I57#+|$tiZ7O! zau&tSn@yJHyTO)2>N{)`2t^bi!{q~kSz=qu8^h(sfQ(9G+(ufEb!H_-+{8r@CkBjH z@#wL>?Ih@FS^x4fWmN5v;w!c$NZ;#0RSZB3sB7z>}U~@Mk-8sTn8%A$;T= zAa3ejf}!@oFNVm@OTOu@+-c0g=|*Gv{tC=ak0h?&iUyvo*6@m8hoHmYj3&76zRuEy z91ak!XBt|OQh}|?K@@;Py$iipHf#-eWC^ct`|g|o$be@WY#=CTes*Mp&c`JH=mcQf z3g|)EI#z3RNV-N-CFg-3*gG4K$@h4xit~+k36oJCNfHRui@~ z&pQsyF20H{p6^h{Yf!g!ycu=!u<*kzP42FXOydreTuFHPOzyoq$2WD`FbAJg9FOV*`bZg#dJkIuR zubK#__j9WS zlygT7Rsi^ozX*13ORpBtHn?VwD`SXfcy%STlQ;PZH;MLeHVu8_!{zsl|96^)$ALFl z!~X$!S8oGb#q@<=`Q*qQl0V^~7vk2JO&3a^l#_j+h7`EL)VmoWEpM|Xec_+ZC*g6C=wt>XqEOw-$KkJa0{xBI)t`@EOsz32PB z_xryG{JNFlljXZceB@g{Iy`V-&*CY zW6%bmxRE@=N7d6u{nS_e)o1 z;V1s$H~!-{e%P7#XGYw9FL%vn2AV+i>7V);HaHG4Ou{OCt(W}n=YH=8e{FU7E&pYH zXzq9DcY3UadU_@Jb~AVZrH>DuPZ;|C`d2Pe_qmuq|L3oHfM6wXAVF5_!g))0a3Mm3 z1ltu9XsunQiPj`ejHq!V$BrJA)F2@w2ak~foDYIaOnl^9Z%&BuH z&z?Si0u3s3DAA%uk0MQ~bScxOPM<1>Ds?K=s#c+5&8l@P*REc_g7rCbEWs{~BCb(0 z5Dy=Yc$mRm7uOj)jtU<#yu-30#f=vQiabdcrbv!L0FMZKcwl0aDF;K2EE#gx%9bx< z&a8Pe=gv4MgAOfvH0jc&PopkvGd0UKYt^u2+tJy^XYihF^yV=f#e)(17XNG`k)Yxl ztP6Ww@FC)2;wM-`kIvjP_3GBIW6!RAJNNG1zk{zS{lxNut_7y$13Px?wd-ozu1iqi zLX8a%_STCr_?wFTg(D`IbQ*b}!wVX)z>y9b4D`c*4;~kugcDL&p@kP>n4yLnW(8pv zt+9uPdvR%WTW$&Yg&%Ks^wy$9d=b)D= znWU0MMn~QUADE?xl<83^8$o@%l_gyf#nMmP#f0#M;wjN1%UwpHE`gB z5#=Ef(G48+5Tu@B$yZ+r@BND!hhkP|5e1ZBfC17Dbi)D#5Qv0>8Wmw&w-0aht+^9ZT(QL$ zW4zOVPkwl-dFp|zrO3dNOewJo-DOa{4Yh+TMifnGyl9!D(`$Jj;XS}`O3U+ z&1RazmZvQU~2?%M0#p7Id@44KE=|F*)(@H zm_g7Y*;`wv!p9g3+D!yZH~ERzys~mfcs##JN)@ChBBNX z4JFjLjw!E78Ufo{hQ*~|%>qm)I@adM^dg$Ef?3+Lpy@D3vI|nl4Dw0 z1HE|p@Q0OJAm&(PL{B9#Mua>hEfKi`18gyMDo_Cl77$4bOi~60m_Q{}P{uRfPj#US zB{Q4p%>Bi2j-}*U9r0MAAkwQ|u53w}uGh^A*8eh`wj=@rUho186u_L2)ZGJAumA%g zpa(C=Boa2M#%JO)pZctthjIuTYEsU5@LA;ovq>)(apZG`6emMfC4nTcAfhNxjt3QR zf&ugmj3?;81X$p~B*;O$jZ#x}!`BQmIPa24y#8Ss$&ZQzf41o1_aWS0N+;5tuYE zMjb0z%SxywDOIWL($cT0Nz+!|)Lv=}q<;d`)w|xzta{xmU(af&mY!!_Yn3UyVGU$ z^rz%-~R$QzydC=cnsyZTOGnYI~3V$MC-@u(&xIIiwNlcqTd0RH-F0AFo!#= zF#rZIsq0;8D(y&+-IBJqVUh@TffK; z9wBTjvtc(`3SH=o!3)*|)Go?Wp8qnHt8C>fV>!!O-ZGcF?By?mIm}`nGnvb5<};%? 
z&1zmVo7?Q>H^Vv3a-OrA3*5K^qldO6ChRG7Ycf6FbRaDTB!z>kChs9S(TZMl6Prw# zsNjswlAbiBD{bjZV>;8C-ZZB>?dea0+RPBzvB%c6U>@GK&qIEzk$;Qi71=jtN$gL4 z`;|dQKbm__#xbvZ?bJyI^v0%E?<_3*D?b4)fN5l1CAPwwY`#Rg& z-fJ55pnK?H=`RawQFRPYi~Q=@+NCZM)06RZKSrvCQ)ttOi-yl z_)>y~h?3D}+?LSJ*6nT^y#MFz@Q0(ccH<$B06%6PM;&ldD4tKi#dO+)>WeHS0N+NM z=9rN$JUSu}8pK=f@@Go?K}?9A}`%O>19wqkO9=XE?TBZuFzO5duMP zQpIz{u9k@QspU4evm=<*nQf0o9x#Ft2u2vU4ZUW!Lps^ZF2f?!Xy(Mec^_`hR8_)5 zh|-fVcD3r*$FioRG{V6^dY~-|2m}Q?c$?6|?wq2VJ@JZP9NO=VLp|4VWHRlNVWTS5 znLPWxcpr9cJwd-#oexX5c3 z5gDRgsyksNPf+9l+mTFiK!XE_(C-Kn$PG--w}Kr(^g8H3K^A6s5%XCiLa1r~F1^)@e5YQhA zT+0#=gbt|QK%5FI907wg9{?uB0465fjKw`UuX+Syo^y0N;SX5)9#2J>U{b zU=x}h1xg3)4bXGt8=rySOaW2cEgRKsQxatt)t;oFgW| zAC_Uxlp%SvKn}he2kcDrp^nm(1Rhdi0XpF*zMdHd$v^d&U~LHk?VdoO7zbV+-J#o6 z4c}OR9tm=U-Vp~3Fu(;QpB!d@3TnU>DG~+@LYAu+Rd4z!C632OvS*lt2NbPXL4z8BxH3fB^3t!7gEoCFwvpb^tdb$VH*k z3!nhu`~bEffDC+q3@peno&W~4qXzh*flz=bl43)aohbqd$ZSNAv}P~M*@TYV^&&t(FOSlwJz+t0jBO0Zn#d7UzbogE_H z4D>*~QJg{c;#BeyAns!hFu?kdk@_gXW2%oi@}xd$00OqpOg174kf9Rj06@Y|2J`?E z1ZA9HA`f^b2Yl5cvKWp2J)TB3#&{@A`*;pNyx$Hhdx*k#Cl z3VsO{b*c!0S&i9l~^NAfxu4hoz<3u6B8A+q=gl7#rW(@>@CKbVK zierhEs3g6>4`2YP#HVML1bueUjZS3??ITHv<3w>C6Mz7G{yVzh;CJ-nA(^TRk!pV~u!E2tx4j6$irl@KP4hrmF&P3WE4l4lw>mEK$B`ysk0gWbZ zVkG)%-&BCNdf3;E>L`}#WIV}%9m2Y%D!5@>i5m7m+r>g9p{81XaBzHtFA)D za+)c^k}TqREM6^al^PGKO6DZM4VM(8XWL|sy-7b?-oiqk_eXUZ-Yh5{|A z8j1z@lf;5n(z+Gpofu5i-OJeJ_3&&%`RvecMp}w1lv=IJHEd+0>(RFBg8h)ZUgVZ$ z>|)`|#%kWwQf<9$EoO+}3r2*sD&dsA?QTMVfq`wck??r1!jMo25< zMlRk;g|IYn!oh~gsnGLXR>n85-iiFOHl>ePE#^V-U?=o-m z77dhj8|6BUWQgCSS_lnO=2u-McFbGHM2#0V9lkf?fa0;XF3afAnv+xVMa16un44W_q zfRPeL3?pHy2=j0c`|u9~aS+oPRWva5l9*damu+Bd_Ilt26Wnp`uB;&u2TR+o3h@@*oGY-o|drx);>BvC}de&gSeK(=oUBab=W8>UwegiYoqU@+Na~ zCwuZIgK{W~@+gyXDVy>sqp|}U@&ij`Dl(P#-R^BMvD2z;+WJQ=MlvD=Z7x%r33PGk z)NjL0vb;5p8oO);#zf`KGSv-SBZn?8cT1l@Gvekk@=o(LV{@4n^C3GIs!dnC%)-1z zPuZx<;No&NOY)MSbNJwlFXQk!!*e`WC?+c?*g}^C!`FirXMIJnJlCpU`E!u8^DhT< zK^t^OSTk$nh$7?}Z3zT2!=1@oEXEF}6MtL=AGAY_OaDfb#z4O_M}xFMCv<8ov>M|x zGf;HXjoh!%G)W5(9dRG+M)bU9VM_0Yl7;C` zt6WSgK@rL%$a;lRhapZ&byf3nPooB+SZ)u+SC_~amy!itcI0U^$4*B@CYLP29syQ@ z1e4lnS`UR)s~-~(v|QUY^J4XBXtgU3(DrE@%|6vKH}SogmdYuoN>E11)r7A02$ zKzDWLD`=O7Xd^Q7{Iwg0vtVOcp>P=D@$&>H7@$MLG9#TQ@>WH+467g#RBsGaW-8GT*lG z%;qy{uXGohA8Un#c_VrgB4GoZQl>MqWbcLkw}LGmEjp|?idkDi;#-oT(h@M(j@T%R9evgW`7 z01g?p&yBPIW#j3hDmyvmOtNYsM`6GNBsyz4dmn;pK=O>HFU<^eI)i`tmWx1_kNUaS zIE7R0PuHBWy&bZJ^H@me_7WY9|HwcnZ>W+Kfn-35dw@$sfIt|dK(LTV&S9vM7YQ7q zH{zgHBD)FEba0h0XMr@hq6z<~3BFYinkmHwV4Hxm%xkYgnR?tpuG(E1nMv_VnvG=F=o`bkz+@XA3<8IFfyb@JbdEdLD+{G z!e<7b!Gn2FCPj947~aXWvkgz4C0PlPz=OjJBT68CI3Xm5g(5^b4CEnIh}47}Oe9hA z)Zo^oOa~S)SfeUXvmq6Xl%SRXi4kPsK6Sg6E(9QT8x{3qL^4fE%6L8p@0)O&`LtTLbTJ+LlI3>(M1_; z)KRv&voED5Rf}?)_851huiyS&1#y*kh4R)=jP%))P>T{uYcb;TZr=gBo>ZuWGw!UX6 z6_{EovGufOZgcK(#s?{;+U&E@PFuYX9Fp;~ftDtdA+4REwrkotMILNXz4hJsl8k&lcCn=5 z;0U(^d&maqx$oY4&N|O}^wFs#9iiH2KE2DF7yRCRic*eW{ui&>N@P~pW@WOpBz)ih z{{dK-y|w>LYZjrIrwGTsOt5YsV0ush12{hkW~_3S+ZbhBlokkXkb@mO2=ETlIe_(z zcm#Xdro5&+Zl#V(tT2@afunM zPZf?8xIu`+f~X=J;wXyek^l}D0-l~#k&9ilTNe(ohP{=BB3xrd`Cend<+ZR>K4G92 z(=)}(!Er?`LctFZ(g8Z|k&k9OL_(UxH;Kp$Br9CsPnyWHJ}yU&x?7}*CWDZy2DlSx5N z>LUN<3!H6e!$MI)(S?e{Wo4qng-b|5nao^fGocC1XG)Wq)Z~ISvH8qtYO|W$yrws? 
z$;<~HvpY{w3RbX!5aE!Mo$WM?E=B01ilC8@Ni@g#lGj90GU!Y0RLpZYfQCslqZtEj zMmY#d&`HoEp$JXrKpFbbW<*q>6HRDAEsD{DHdLV$#i&OideMb;6r<(v1R6LXPJb?k z2ZFNS7FQZmnPvo@n)F;CK9`>(PRU9MQ<&-+I7(yAltSW?&IZuXk9zcjss3qBa~$y0F`ti}I$@D}$mbwpqaCNJNC9Giui&(`%wH|2LfEKcMR>@A5vYSb(TH`6tOZhEk z<(o)89pXfNzELBE@>n1H>Krr#cCoE>?Q3D%R5XZlFO{|JZE@=nc&>GX`3Zze;L6j_ zW-C;})R|8dS6T~6q%2iT?R7X%k7gV;w$-&RY*PxD+-{e<-F3)iQBogfw8SMlt>-Uq zW606A5OG5liCuh1&gWJq3fHyoeXToN8G4t${mm_Sdz&A)y3CiQW5?G(t3Zc+DGcL$ zF9kdh015vfP-MEyUh_6z-%J9(9riF{Cm;YR4934DHZf&)yVgN+b|y2^Rz&}D*51-0 zLLT8wjeeZLyar`t$yt?#M>5?HFPX_v^#oXrdSWS0*_~?$o{9IQ-jSLYXI0yhNv540+@#<|IDw|%clLMHt3CUbga+C4wVNYZ?%2f9A zpDzhuK#F%@Fk20P0>q$(DA>IeLI?_WfB_901U*#I-f;PhL+V`f(CU2WJX4p{ zr8ad|J@HtwK?l&Sb~POV+_~SDcRyRR$pO(9$~HFaVHqL82^iu51dxXT$eD9j8qyRQ zq|^ip0CtU%fI&(Z#2`Ijhyn2UvrwOU)ZF$qVxN2_-@KaK<^G|drKSJbOlghFFizr^ zN6AKy%*RhZ&;gAkGwC8*IxN-3F)J9L0}bq82l__H4G?04hmhwW0T@UR9G;M7XFzDx z;Q~DhE8S*zVxo!lcxyNAiFZi19-g@GU}4(~Rqxl_EqA%}fE7a0@HUXMW(7{iOY6VE z<{-5Z+xi@~Fd1*;0s~ROKs=xWgLps&4B%K?4uF9n_snhc!T<|8U~Qx8krgdcNC&7* zVXQsCe8(eO1SL!V_O*YRD)0SCapA62zZwU_B8A zygzyRQXsy_0o#0z1J)DT7i`Em&f$_1pB_uSKE}7Y=Ml5q@$Gm2;((PuR$MmnZC%LL zq$JnRrcT#Qd|?|JtRp_4f4V~L%)}ws4u5#B(gp$v5Y8Y@uTLJL=(flLcn?>QU!E${XL6i(_Aq>lqk?#ce{uf9t9 z!XWTk5bwOI?tZYbh|uJ=O6AtCw%)G^op5m=Z~oYdt!V!ut_sO!GH;YVB9=VB>Aa5L z5`qGA><4HsA(o&48{*j-t_5n36>#tE9?l?AAPEox0nBh97?A7S!{UO^s;CbY&_MYD zvG35J@0y|Sif{UaaIhRpsi4ZS0#W+L&l9Ks2NChfHX-^>@T&rG`Zgg9>Ol%X;SiA! z)u42(G{^n1-6rz)?<+B9m@)Af?f)2StKqHdC0}et1dH@Ct!r+>VA+}-% z8X!EXCfhvCAR^-+C~hGBEg_Ol1spI3BCu+9papoW1K99G_;Buq&<4j36d>`*5|Qoz zkrM}N2pv(WO70St3KY1YElQ4d~GvU6AfvaJ3Fk{Y+#P6>=ee zMkx#;)(CC9B+}5Tg%)e^K=x_Zh7Ke0s?v1e04`uM%)`x!qh5MU&NO5MfaBzW6den&k}?d!03h)W4M4FO zY!DskAqM-71_P1fN^lcMP4P;EA;B^%-6Q@UQkUvavv5u#i>KDw&BbgHBXLoIl8Yl% zawSU-Ra~UV+9D+_Y=&kM)6y(2UlK4~ay`OvtG?=_5|a{|k`s*(@0uYo(I68YQ7SJX z5~ptiq>B>pt|0xc_$WcT$nmSb3aR!i5Y7Lw9-1=MF5w{g3>C#PHf7U0Vr>d1vb@f# zBY?@La?xQ(i!M1$FDGU=i4#p4qfFidI^>SJ2(s>yiu$HfsgR1PnllBRY?x@&B7r^BMPJC=5#JMbrUW#ZzHKpT>3D!3a{`^?c?;0srrr* z!jrLl^dR4}IaqW_nN$E>bpC2Hr#AmfEn5TsDzE1*60}li*8mXEl#|20ZSN4Ts>XD& zjvMo&k-Qi}|GKPCnW%{xs7vNj$_8~u3-wdM%_&HxQAxE_pESiHFBbJw z%WAZ^x}-M;;)Py~3&0dq5A{|JE7jceO-r>`;gl>x=qzm$Pp1VY_Vglb21}Z!+@ydC zY!z3bHLLV1K76%W8&$1>XIP0fPeTF=K`R%9%Fp^Lx|D1n?#o=GHC?v~D3@^Dto2=4 zG=C65jAV|C)@obX4I>06(Q^L+TwzsQ`zood${@=0S=H658mnF16ny37ON6gVF9yYQC2qHKv?fIEpIbM>*-3*Cn1!m zuHr4|HdRnl%L2sHV?h>8pDIYnR7?w2XafsfadilHbzxJsX&LfiZAx3&az8TS^1Ky6 z97dGhz!0ur3UYM5kg631Y7)j4qsn${(YBxlsua}rY|Zv<&o*u4R&L?8Zs)e3iVsEI z#A*3Zm4fXHx=g*gl)Yx44xe zO$b?c6CxOhm{$K7h<{{=sd%ztIE)(DAkcDK4b78$N+Fi@hhTcOkOjGhNbe$)B7H%)=Cn9QU)ETU z)xhAOd54Kq#wL$NKyn6|N6vVY;i!-enSt#zgxwfV9a(}Sd0(~Eg}(QVeE6*NIFn&H zmSuUCX}Ol~#FHyRUQ?BZ;Yw9&SYPi6!EDs77!0>;`IwP8nU#5&XF0wy;+3%2hA9Zm z`JVAPpZou!V=zMg5*cfdcvT-+Ax!vU6C!IZIAe|ZoEf^I9r~dm`cYa6&UDLk1x%nR zLYoU(l4rv=c};f5nJtI^+`c;YoAA2SnP$Vo%*SvI;y35s;RoFoBFD;I;*vMtGT+X zz51)cI;_QdtjW5p&HAj-+N#xht=YP*-TJNJI`W?vL(B+Df_Z5JF_u+vpKu7J^QmkJG4c6 zv`PQFv`zc8Q9HF&d$n0xvt7HjUpuy8d$wh}wrTseZ9BJdo3;0vuPwr#v-qOTLa+xL z=nyGZyH-pp3wQOpxt;sDp*y;zd%CH+x~&^Husgf8d%L;2yS@9n!8^Rgd%VfJyv_T( z(L24>d%fAay|4Sd;XA(Nd%o$rzU}+I@jJiud%yWRsyhN(Z|X3%Ifsc_Bim=8lO(Sd ze8Ky>!5#d;Aw0q*e8MTb!WsO+F+9UHe8V}s!##Yh`?+2fxqVT0i5(&y)TpD6`-gE_ zpI!XLVLZlVe8y?K#%=t@TRg|rrLTF33a1pfTFjoz0mYApo9lSR6i346#X zeaH>slrMdPi#jgf=*^dV(Lp`bMSavY*pNj$klI*@kHMNPHn1z*go)aT61OLQbFxVN z)^R=8b$zENBIal=)hE@O`y3(UIEN!S(<6D)??BmGs3#gIxj%i^rG46|9n9}MBsxp3 z1{_OWm)mWT*$*6X5WL#Sz1+?Hp>^v?TvpgkXxPzwf|0$jmwj6Rb==SW-tqrE-(~rk zdm37(dGgL^Q_|tp30|;uc)gUhm~Bhn8NT5iUW$2qeXB6oSN)o+oSE;DFgZ 
zde`jz;YEJrNgidMnSZA_&n1<_ad^@*KFzsV-a}>P{SyEkETl_5=XHMPRi>YNTHQ-L zuCg7Q6GEH$blH*qQe$;-;hg84{^_A!QTRQU*w}&H9cC_mu3#+CSDDsrUYv10>cxKS z$)0?#{hw1_dj;C!7h>SMy>ya2>2DTiwyW&z{_gQ!M|>Wd@3ZRFp4Nt*bt(PCWBq%D zYU;v1?-hUX^WNuCKIK)OBL==wTR*!>`~z3m5|*y*_P6<6Hr{`G}_`1L*ZD`Mp_I-}p7 z<11fTB@Xk6Kl-JA*E=88Pd7g)w&IC?^o_lVRXO$PclyPD{1u%S_4m|m4t2}xB3>Te zT8;eSKmNnK(WM!uVsvnApWWNiO41x4;Fx1Ykd?qYh7cwsh%i$^Lv5~1DM+!R#fum- zYTU@NqsNaRLyF8;p`^)^C{wCj$+D%(moQ_>oJq5$&6_xL>fFh*r_Y~2g9;r=w5ZXe zNRuiZIuhfFig7$XV>Y$ovsqZPevD@hPDO-Z!!9(KiD4UtP1FCXUCXwu+mTD-%AHHM zuHCzM^XlEpx3Ay7d*2q+6NgXXIDrr+wo0|}RIFEzK|TiZpg@LQEsNc3amon6phJru zy;g*2z^GHJUd_6->({Vj%btx=G~qsnqY_N)xcB48&q_&_95OT4%o!0D3`@GX^XJeh zDws~Yy7lYWvuoeZy}NeMPoXNtO}?P--@}_CUoO_G#jtH4#*a?FzWtEYN$9_xBZ-in z_XQ~6fCLt3;DHDxsNjMOHs~NldSJxcdFTN**({YA*A-@(J!FVkcf{vVSr3+I;%Pjb zP(%VKs>tF`ADD2&5e0mZ-%ciE(1Qs;ndsw>Kn5w~kVO9$Y2=Zk-IiM*yOn2Ng%)1u zkwC|3$lOICQUp;LNOswr1ScS&0SXY9rbGrB7@@>}HMBXxjc>*&=bY0OaYP9{9DqX) zNF2c?oiy2L01XTL3Fx7Sda3B5j5g}%qmV|rppr{E`6QHA2`A;0m$4_-dkKv=;uVsX z%25IwFyI3QD@wpa5JZ&VBBHdynS>KW5YYnzMEKxD0fyEJr<^CS5WuX%3Ja*I%r@)n zv(QE>ZFC=0%Rv{ER{9YWEwFH*Kk4YOrdVt{xDpiY6@Kfe|vyYJ#yW zbihLmCoC{S2QvHs?3*3r8o-Pm9B=>y`xb!0y!!vPsl*XKV32PP|2d3-2lW=g!4fQR zknzJ8|L5-l9aPNl2p-R11O_t*hLp=wUw234CJYYV3|db zA38e zAD|`=Ec6g|4>RO�_BwaqJO#>-#khVc(5419bD91P2#PE%pmMp49Z^m}jo}=A3WD zbI(6pWTARoY4i|h6p7gRk?F2$CJeEfP=c-jTrlVYOlY9%1y~31Zx3xVpn{)ont+23 z2Y^5?6Kn3E!|u2m5cAX-V1V@sCNv;621x$|!FLDlqO7|CD)8#|3Fb3^H4!=-fdUZ+ zT!82KD6Ve*{rKmv|B;{j%ve>T8_s05;wg`e3nCxd-;W58D%VvY2kt7t4jSMC227v> zD9D!#47LCkxC;pj=mGl{zybfn$pd<;TlFk}0@#(nUlA-A5kT0%nBmHT7+k^*z~=)D z65)Il0A33UpehEC5DB_U8T>c~vWT5Vfl!R16sJf!|rdgDKBGsEQ3@pt_=_00_p81(gt+2447qALMFR3`oHUdG!N~&8c4jSRe9c@PJ<7QI8EU zf$mb!gBy6DMiJ2Ct}Z~r1%xjE6$Jm*2ZoSD5jMaALYRQq0H((SFaQZ;d(q6$$jVl_ z@|E|yR{xsBo817=DnS#PMyk}IULuV|W{C(a4b-{|wCV(Ud_W0N&^xWopb2f$zz{M( z%@>N01;J8+)vSd97tT&sDhQ_|El_|5;F4;l5$MhW22FX8Qu7f^w_dO*Sz93TTM zKp_#D&;b$Zzys#ATV;uW%{ftk2JCx)IiI=!-7x?I0m#!2g4lx+OyB~@EA3+=i?|0c zRi?DNfx#@mE3R2pYe_Zbyma>h6M_}3$W5+tm&@GdHdn0#sTQ^licktg1|wO)P+m8* z-L7z@mzz0VLC(SojSjZ3q?t}R-33~)O`tzP%%`4ym4gYKGiz0=EDXr>#{jSwvsMBY z=*O&J8Y?ut=_3CNy4c80wz8MK5mI~l)rV$nwJ0#t_O`qI?VHwm!?`}|39-E87jdY} zt<-HTh4=sFfaUn8;ZFF%8~*T9T=>a(2JtQarqLD=JIpP<9x+}N*y3J0;ulqc&3pdy zpa+TKhfa69SE%J#dGy#L4Jsa=4)9VR{NV{6`r6w*^rPRpwxnI%TV6ZzH})=$VQv}A z`L4}qR3Q4wC zpZ!SJoI{s)$d=%aYVq6u{w066waPDVwIH4S*r!*02O?u(bMYZ;Z&!c#w}1>tP~;Z| zL}z+|fGM^{89Bxk0k~sEGf^~`G~tGJq>_My)_^2fg0A#{{U;Tjw{kSZ{F1g_PGJ9TDntG)F@&!c>Z< zmWsakix2WgYXSh%b$|tquMT2^Xjq13L(6JKRNPX2Oi^Q=@;%GG3H-M*8bHKDTkH~go$c_LR zkf8x6?+B0Gs20o^hxgbi@E3!KXF1Z5AF;<=0$Gt3sTL_wjMGOLq637aQ;+57dLAf^ zs)LRrc#$Y+hURuELe`Ql*=_~NGi(SN-o+uqM|`3fjR$CWt=5x1`IA5ylz~AYp;naU zwvtV8TpKwdT4YPsM~fGz8D9u^88v$fca>O~m0G!#T-lXg`ITT9mSQ=UWLcJGd6sCI zmTI|{Y}uA>`Ic}QmoGV&bXk{1d6ahfBIVY3(v~gghm6-Yk0N0uyEcvE<$3>Qk_3vW ze~h`9j`^678JUtfnUq%7EkJln|&!BP0@*-SdTY} z5jhEl^M{VZxKW_CpZwXM{`sE(8lVC?pafc=26~_fnxG20pbXle4*H-F8le(8p%hx7 z7J8u=ni~7EFQu_c9O<4S0d^v~eivwj7UgZp(U1OE62>W%F8ZP{8l(R*+7k7Fmo|E% zcsYz1>7kg2MY^$@b}(1FlpGZ0bQU2Z4nTiqB9}ATq)z&zQ2L}0u%SCT7;hn>Ldpu- zg%0#MeBd>J@1detH=a>ire=DkXo_l7nv@=@5lo_f6j5(-3X;KR8TI)dRVjkDnWlW& zr+)gUv?qG#IjAXeo)RFat=OYqv=+TdpPj;YmB)x+n4}5VqJVm-n3}1Ys-Yf1sD^r; z`H7UB%A+2%j=EQ2F^BQzea3~W&>F4MI<4)dtGs%x>e(tH0doIqh!{Y6bpN+&3@MfT zW}mQXb+SsW>bkD%N~>&ooX51MiOHeA>ZU(RZ^|e-$10A=>Wn1$s`U4)?K-doTd--` zZ1I|~wmGjw@UAhbiRuS-St_nESEL(A7D*^x&Pt{R+p!+|v9r+`v-%>}+7Y1&1t@!) 
z9l@>L8lvejS8|#h3pumPdW%W8u^^kXI=i#WMG_;6tx#aHEt;}r!jcvttRGPaBf7Mz z_bqf-t`|W9Vkodi@u@s}wOAV{R-3iZX`Ue<1w{)IX_kBoD|-={Xqsq+feDUEi>f1` zeK0t4oiVk@SuK`|l_#)eOv<&VmLkaVpgb^Pf6AC8VKx7z2B3mFF*V{uSlIz0lc8p6 zYLbhs1aSqPAO`ZF4A-Cx$&d`%AP=D144@DO7Ql)`3jho&1U`GT)Rt&#ORN*C9%6(R zQA@WkQ2-UQF-$P1IC=zi2DyAoXd0ut@>-W4z_@i{67ht)GJ~dG^D;(&0?YKf0NO8* zlrKzxWp%kjK#&A7(7ao_w?H|$K)V4Jun7bq587}I?AyNX%f9vC38o9Sr)!%S!Lqz@ z8!Lx;#2T|}!L0B1j5aH^ZASuBs}cd@VK6m}CZI`WvQ4tuz#D1;6HLJm^lBbPo)7`nBC_xyqoq(;zPHUqjfpM=_t_19l2kdDsqarcVGDfgk)ew01CV&AQ@P`zj#2p+m9k4MvG%+OOxD|6TAVV)}A_I`z!k>1gO3=Y3Q~)GY!^hjm zOtu3VGyqIS0eS3FN2XN)fXMMuBQ8_JCB(|nE6TCF$dTMZM$A0E90Afhuu$x^$3zg* zfDIn;zVhI+R3@_Kp(L&Qa4yRh04%`RJh%S^{KhEpWP_}j9kfH(8v(fUy$i5pK-DiS z5HBboBeuLe0mIFO45KL{GyTB?Fat?K<;YnDGG+4vj@v=F!)J#)%A7p1Gg>SwgCe9n zBkn`P?VME#Eh7}X&i4XUhr9!49NVx zFdftEn+6;Z3$Q>C%P=de>!V{Vff4%_7#D1PC8A_K!*!wNudbqU6)yCNF^ z3L^p2lt2l#KoVda%{}eY5sMXQ{I-@?fECfkV=B*eQX>h#1Lus&`I1!=mbY3a033ip zEga4O{k0&y*b$J`S`DKG{fCZ=)n837w9U@G?8AJWLCd|?B*4{%8l_2)L4!>rPJPiG zL?(WN$EB-POYLS$OwQzl$Z^dw$Gci5eBFYr(Efrk_{;%8g)xvV*~m$@*}5VG;knmP z5xbDgF@41rK?!qk;0JyWUEEQh&C|Exky|vZrTwl$(s-gYhYHO;ns}|SXYXgkoEP=bwTiZw-(w8jQcV^2N^v5cb1hZ^sEWp|<4wN4= zFMO5*_wu+Y6IlUm#KO!0d#u16oYXmh)NSp&xaU;MXAR*PsgQYwYYx5x0Qgb8rd% z;mnNo;8*(QX}ox9Pp_^a{&MYh#@!$H*tJt#U(;luA{&$8sY}uakPXF{9zo`zNuqWUN2v6CG z0`TrDZWF2Nt;q6kdW=}OauePWc1o^8ZJ4SZtA-BsZXfl4ikL1b9rVfyR-Ei9PbZmI z5kgN9n?S$zZYKXl&*rU*jNeGwsV5o!J2W&K)CENIZjAG9Kl$#ud3G6Of!hqp@X}K+ zCsE4*rjX`vnFRqlmHM>7g-OsrJ_FU6N;RlYSF^HS{+yF)vjgR)-78SNn)O)Ti5Pgym|HR<=fZqU%-I{w|mdJ2uSsT#DZr|-hcjXbank*zlMGSqNG z4cY&@??Vtn6mdinOEi%$nwYvmmCGR8C8$Ym+X*|T$`Z}AXRfLv!tYpnYdi{hvXDa} zi!{=<50p}JNhX_g@<}M8lyXWctJLpGEVI;dOD?$KBKJo(cz2POJcBEiRYxlFPULg1hawUW6;&}mQ+04+p$T9Coha6FAC zJ6vNW(+MfOqo=WkJaSZ0OYQB>JX2M5RaRSd^;KA7m33BHYqj-OTyxcRS6+Md_18Uv zP1R3gi%sI#0z6R$z-v~K;@K$*wSX(>h+_1uNk^m9!Af_`5J*o$71dO7%k3~p!%qKH zw>@^-b@yF(z( zBj+q_xhuE)a?CT=oFux1UJAXUa*8@wt|Db6G_D@haqGq1iVdx?V3T}y+BN?(cyruy z*L`>14=T-g@N5Vx}YrO*}s<6jF|7?1zq zgb@Yw6wDB4yBbVTho*vH4}JKWops>joyHlv)B1pv!?5ROoI zKNYUfgcxl@&TNzFNv5@ zlo#6PmaA>ciYddC^HSKw7_tI@0-WC08c88BhS3mRh*nJGcuC&8QInhGqtts; z-TBT96*GwNRAVx=q%kPse+<^)qSz%L@ zfio)%^@%!g=|xNAvOGDd=tqspl#V75aiyz9>w4F@<`pAuf-uR+-1Ua2z1UJ)&lC1-ET3C6}zwYBU_t!h16$>SO~FvDeTb6+bn zGxmg1uRKlSy49nt>6EwR@Ly0E;Yb?3udU4mr2>@D0V@9h?oeH@*#^SEUTMAe3I&MT zWGt`)z?Jq2Cz!&0=UXV2toOeK=%qorPypl}7`X)z0L7s7fh+tMCGO2ZQ_O3E{4#_G zT{r@L7v|uUKv)T-O~?r%%wL%h7{2CRugy3>h5gbb2`a8H6b>Z?_eNO4G@dYrJD}MR zFyY5Fc3=^9Fas1U0L7!#lSa==0Vpe3v^FRJl8>MiCM;nI9!|m$w47lPym-kOJ%N>d z^AZztc?L6J0)Gb5fi+VA!6NvvX~V2!^5&VZuMJa&%oqjfQdiI?+my;6o0jd~^0G2C zrz8iUg}CZ@horqiE2_BxPfP#}oQPT&JP`#?SONcVRQU9#GYpDmPyt$X6@VvppaoEP z!V9ecvR%<2>nIGs0-!*H9aka%H1rY_lbFD&SHb~Lv|@B01f@8E94sulHoPK zpB9B``Z~zpCe#NG<>CXoS^$%H8WofcX(yw>QFiq7-5(qVRG>lC)Bbj`NsH+TOagGA zMoO$9ulM}Ko7PMg0FbXR33G#<*=Wf36t3WjW)xT@1WvdXMEtBxW6Tp&*gzEW{pl%q zqSu(NX6!Gacf~i@6HoiLyCIEjXEo#Yy{5wMxd3}dh2ifG{J_JJr2`faKnX8q!U>YF z1OpiT!+;PA03)E708szG7ic*>c>x+=g5-L-?^%=F5CPn>y84kNh3Xod z@Tvd#sjh3Z;3|}#AiJ^?Eeo)MN!Y!lKn1;vwL=*`DIh&m2!s3*08ZGnd;>MgqqlG4 z1XuGkq~HL)Gd2sLrZBj)SkpZo^t4-}H5^+zP_P2Pi;2@)w^2*CFo?ZXih`$+f=%eP z)SEI6U^cz;wD-ET2E(>_dqY&1gj$=hD%`zTyEOy?H!g_1TSJB43q-+_!$X8L+Y1<(Kl+kqGO0R)%!wEx44tSd1yBJTX z2h;;H(o2dV{=z%fIC??PVn(MfkC_g0wf%+?g z1st&iP)3_80aYwUv(tg|v$G^HOyBrEle5MN6tV_LKm;f{WxT&}R4v>9Fc)MDxA-ee zxe=^EsO$a8apO^yLIwRVwP1_th;%e{)1K1UDc= ziYVYTYPy7qJV=SjNhvtQ!gIZ7^E7;8FNSnNNJPj@kUYKH04s>d!Ql*pOPK$@=8M8ax5az6=-faOD^U|YQ4~#x{0qX1 zI|2hRMw|1-9#9y&YD|bDj9=uq_{%=Pq&Of5KnDEAAA13giwXamfhM_t1QY=tSh&0d zx&|-+4S35oLx2fH0VeQEU?kDaj2rCWGpVbaIub9#&`UGA1U?u|tRuk^yb4=*o;N)U 
z00OVFlE)eH3NtV)BDzgJt->c|afLQ?xdieW;S>dr?@PGI{OVEwkL+(Joh)_mj2Xo&(salKS+ zNRwQp=le-+^-nHMQE?sDi`W4MpaA|TfC(&s9?((1najHTMe$p?i~@oVkUxPkOaLT6 zo5O$`FuD&|y30HOC+L8~$Wmt!mSZxQ*~4!tt+b+vMetGtf=c#nyp!zy;+>iS)JWk?Lm(kpq#q8 z0x#IgM)kBPh(cG3HBPvMLX^Y%DoHCqyrgZ_LcF!56g+5KwZMT?9Yi(3D>qQv1XQDh zqrCq~ld7@9I}|NEJ6>})Y`ZkXt2YGF+5rf*rcKUo>qH?m$q#_UdOOQ!K{hJjH6YYI zUhB13D?`Eiy$U^A-P;68NX}u+PeoOw1TEAzT&qG%+(E3+ReD@G47YHN4<5+4C7^&G zz{|!gMk4@$0uaR+STdL^0l6@N#k5NhaDvmlvWG=Ej{5;tT(en(GL(x()8YV!V*wd3 zx`j0_eeD4K+uh@J%)QZz*iBhCOUp1oiE&0qc9-)=mT5<6E|i34v*=S3LFh?REZ?L$9+Dh zpuGoc)WDHp^R(JkO1uN+0_FU)2BzR4bQB@pJ&xprOQT>HzN3Yt;lPDqBep`gCB2{& z-R?1gA1DDWj$I&N0`@BbbUguE)Ic&*MTKDj<3))h&|(PWViXWS7T~V~)M5;9MKLzm znJ@u1hCta>f**+8@-xf;2s@xVT^bb#BRIbqh(>j-%kG7s@deXJ{#ZYat!vXulO;{A zG7Xb4q}6y+IOUqIa9Og7*{=yMA&CaR8d?6GWLM5dSdL{`7T-OU=$HwAwI*wBW@uG{=wQ>av{og;y=J>MYiPDxNVIGDI%u>u>uDzIG9>J`hD5m5F~BbD zypCrL_U6WhY_Q&IvKDM;CdqFO1GE0>$oA{VzU;bwFT&l!i7suM_KTon=~;db)7EN= zer?!}ZP}h}+OBQezHQ~oY2A(%lx!={EK{5igRP>9gF+dU{i?Ef-`!$O*6g}d4kNLF zO(RJI(xYzcuI}rm?%Tj_?#}M=xry!$Z|fd!@-A=lK5z6+@3RnZPcQ>BXz%K7Z~1<2 z`bGo#rf*Lm1>L)~`)+Ud_V4`W@B20b`v!0V@9+P^5-tM)3b81^1S4 z|6cF~Pw)Zv?+Pac`#$hAIB*Yl?+^EcTd;2j2XPL+@Cp}j4d3t=H*pTf?iPRW6A$qk zPj4OHaU3Uv9{+J4=W!t)@*xk3AO~_JKXN4pa-4AQn{aZTSaK;(@+trECvWi}SMn>L z@+|LiFaL5d4|6deb22Y;Ge2`QPjfY2b2e{tH-B?DS93}rgq|RTIk)pDpL0F;@ja&m zoal2q4|F~EbM4@!^j$|a#iEq$rj}Jr?`lo1ScJ7G1V5nkLV$!kxD7~n1WbQ~o7i+v z9|KVDicshDQs?xZfP_>}byIJ3PIq-te|1@pby}bGS|9%dQZIE~*L9wV^-rhuU2pYJ z-}PS417kP!K{)nZH}+eHbxgl?T2FRmUv_F2_B;^wK|uCsH}z|mc4RkpZx44=X9a7g zc5wf8a2Iz>PxonG_i6w2XODGp5BF&YcXoI8VHb6M-*Hf4k5BfH2YHeI_>UiXlPCF- zUwM^pd6akgjDLBVPkEZJd7Hm^oX>fk-+7+zd7uAzpdW-k5PG5?dZQP5oEUnX7zCa$ zdY%}BGuQ*6kNT(E%;$Ve*L>4YchkQKK7Rb;mwGwa zgX_--FCIf7b8*@>l=$XMgus|F3WV>=%3TkAJOKemO}0 z`Op8W2Z%p%1_~r-@F2p12=^>(=@ChNs^Vx zk^)brM7h%CAC@0w#)KJCpvap!RgOGKbEnUlK6&OWYLuu*H%gZ-9qN>%ov@A)3VsS9X;6OdZEO6_$#loi&;|Zpk=fTW~qW z2uXSAxSC)+x=4~?Jth_zk+V?-S!I-Mwj`6CJ%t;TlBE@;YCl@3n{TAaX5?(QQR!Tj zUJeB&nPr-JCYouAcioz8y7?xY;5B$rMdZZ?+hYvnIZ#D+_Bjws6pgtMd>9pakbV4h zq$s0=l4g-cKRqPrOE&$-(WMD$svxHm0>u-h5>gtfN24O87EU#xnp991y1LYcv3mGn z93a|?)re}Pxa(N6oK%)8f7$;MEQ}}32p6)+)n65?0$u z**5v@x7LCyrEIrdY2}ycf(9Fx>Zl59#7yPI+ruk+I=6Zo5?*`I?qn zrg`<5TmvUBb9RC~Hoj$>eYSUK2Ta{lc%*^4CE#?>aXKB_9jjy8w%xI9+qP}nwrx8V z+o{RkXPz@tS9M=E|G#Q|@4~5FwXFPX?mJkcVC_4*b!*A>_=5gh>* zhYny5+N_i9fPfRYjWiZlBGG!7Q-35b)G(G1U77Kl0X!PTNM2$GkAH+v@cyQJIseiibFMa=+-MQ}VGu*Pn+|Yp<-8m*cg~ z$4a9bmgg-GAot5&NKe+r(ZFTa>q1t?m#(V~dg&GvSo?{a|3V7emdA!o&rVYLv#s3Q zPs$JGYP$0hvMLiS6*nOm; z@)0X3I}G9_u=~;x?z_4eB*Sn;YNbB-HHiRPWcf)XagOx5{6H4Ylxb&?4y?~g=W&Mh zBv(;E-r6C0*3LbHY$2i1%Al~KN}VKM2PER{h@?F#142FMA@%sn@aE1xv{kL#47AkT z=2AJ^$4A1v%4DfSj=$U^*(Z&>>(VqFjWJgIOM{pV(o8!^S*8T}IG4c*H!@MME6OOj z!W*I~`k;r}+j66BZv(G9Pk!Jk0{6;Afo`|w6SQaG3PGl#!|07bE zX1HK(@P?6A9VdN+OvIR%wvdTrS;8~5BYV3n%$h%LW$t+PHjY^;f)mYk{AVq*>_#p52vd*(a}Hbmj_&Vb zVZ+y*P#^VeLYTNhOQ@kne0_6Mwf0%AM4M#7wuSC9gZY!`#C*dPlZC{Cm5on^!Y-03 zur&;t(t3kT>d?NfWK#O7H*;)rJD;ij-rVwM@~=oM5bC z`w_52VSKrYlO(}cXZ%=Qp>VEoTd~8lO(O7GwP}h&K0msI`rE9`L>IY}FlP{;WtWQ@ z`qYLxmxfRm=5;ZJ1wAuikI|5#z4vE_p>gg)CczTg%G8~;Sl#@giPrmXq^1^>d6be^ z9rKxWSx3!7W#b5@a3q~W1{of(xqYUIG(N%e@DEkn7#DnaLXdek#YmL(aTbzCL*)^T zL)*mRk(NSdX2NGTYpm?w+O!3&Wyg4SKcMhU#5Rm_)XIW9VonqM?TAW)%xARX0ClWhZi#@m^WOz1Y;ZE)Yqk72l& z=P}Z#5x;S841PteLorj5CdWCh>LTC# zdHe?&V`G7|u~`Io7b2)bW|QZRSw!WeLp!7VPUZ^RiTWnoMXD44Zics^W9!PBwQ`@P zVs3EH0g$3SaY?G*psP{@H}x0izyGf$}xndA~`HNbs?*{v?PsG1MA&+2~E^V zZ}Q#T%vh_1M~2AJkD3?1O|(Ffl)aMw(vtYv{bq|*MQI5R;OSa0=3ZU7d1 z%(>z=<92nIHC zN{D?z564lZlapi)`r&9q`f?eq?taF7Z$J6E>?yH=)odt*YiY^;Q`6sHf_I8)e4j1?3Ht*_kvYuC46?z4E(l_=9 
[Git binary patch payload: base85-encoded literal data omitted; the binary blob is not recoverable as readable text.]
z^+}y5a3E*NxgsQK9U}nd?S1QWU;3Rz^m+gA31;!hEy4px-j$#(qbDwWRB?bJgShJm z`sj9t7;+}ExXpa}-(P^L{}4Z2Z9_xXt1c`z>y;vR9JioGp5X$H2-VDtbjA8&Ye7a z`uqtrsL-KAiyA$OG^x_1Oq)7=3N@?kEb`9E> zc-FOLw$52xX>`FRwAK!<-o4e{TuE3Xu;9Uj3qvR%5(9{YCu|00$irp;4NQtmU~!^E z2@4&1P?+IShzBPMcQ9Z9`iI@stXn(o3Oly!*|clhzKuJ#?%lk5`^M@zIKi^SWvX$D z9IkS?&g}B#3z^_x*9}ZuY~XUW5GFKP3~6veh=mUvG=ES?S;zu{H&h^0h+Vk+`CxzB zzmGq^{{8&>`~MGMZ}bUg7Fq@#Xy7^uB{x@GtmXBbH4B9xo&RAL5|+>mFv-9{WDFUA z07eEXbKuD3oP-QfgO@@Sf^%V(f+b``1~X8=SPnVB!~z2-mGD&$2~Ch$j8r5GQbkj~h4Khy{TVyY&_rg@_shRZqQZ=)(#sdd85OmVieQIfT+E(fW zb=y3+4foq}$4&R#cGr#f-FoNE_uhW@4fx-J2Tpk5h98c2;)Ls@HbY~9CCyv^3PBI?DpS}KkFPTF!gx}B~q^>F=-)fVTj~P6}<#bsU6qDp1&L* zzOy;3d=#vpOmOnH`sHtf9PFS6KM2CDWN>-(s#gF9NRTBhX>tpqo&%u+4k=*?bQY|k z4PSym8)z_uJnW$le+b0J!=50LN~0U6mFp?T~fn(RENX1IZl8z*}WG8Ql%UtR*lYvxS5shS#0=6)TGYQ8k^P);rWsY+Z zoTW2sSj#-_a+=hvrZxN1OJ6#VjH66pz8vI2V;=L7o3lpeBp}IWsL_4hl zI3WP2tqy7yV4f0WpsOHw!Dl>z1YT`7s3<_eY&C$H3;>wFD9``~Si;|&en|r+@_-DC zfDz!{fCUrnz)n$n;TK>4wGzgFmyBTH1t0+f4kfRRZ;WF=g4ZZ`q|Zk4`B4UOlwVZ^ z80Y3YQTp0<$$d(~Pe3Eqx0zrDE=GWEL7;>a2$i88S;7TOu&g9-xVXo~j0#vV;I|>* z!bn(v1(X{e9h?ZY7hr%1c6)*mdk?sBqd{aEB~3voUV_PeF9}5Q+Se@CsY7DkEsb}8gdJFBGwoWmwXYq|9lO*9 z19B=74Jb^IRyvV3S!N~}a!yU7&UBq#Cw_5anzPD)zj6jM5_Ee5!lv3|p}D{dz*B_T znz#hmN%e6XpjmB@n*n`&dYE9hg2DA38ONjNe5g=_mUkKYIFZpA& zO%z8H$f*PzB_1mG&8aT)dg)g8x?R<7b-ufMwQh-V1Mq@>ZwA9)Ry6@`Kmh`z=4~>b zK?g1_Z3{Fv0RJdhfe}b6?E!eeV+dHl09t|s9dG~zGzI}pXUp&w{M*eGXpezU;M*i1 zrmx$ru}w{m_q=BtefG4cCf|kPch>Bn5#Gip zkQtId1QK~KQ>wY*7r&*(yW8$hDFRN7pzYu;!MG8tHFgGF`|N9f`?n8D?(MPx%SIY{ zY9zdZ9>m!rt6TZ(zX|i7;`&eP?|Vfs!D`id_xv}O`~2^JC%X?6-^Us(*HOhYe&a@N3g}A3=YRpV8eGLa z*0MYb!+trGfiWn9GuU^H)`9aO7Hr{iSaV9lw}3eEfFqIt=EHbtGc6SOcr>VlOUQ&h zA%Lv`X&=~f0f;m(LL0FPx^+6vKXoN?Igg~HwO-P1iSaKW4fj%aF-NarZD1wEj zH4A7?CwP2W!V=S`A*-=pDrhm!qEPLpg=i>E`$vd{Xo!c1h>56(i^zzLXofeoKRkAW zf0u&=xL#8jQjrsUS!0C@WMq-Fg?8ZtGSGHl0Rw?{0S3?zbrof+$ci5Gidn{NB9klr zGyf;e#Az@%h>ysNy~s;~=o3$97eCf}E;lML=M2hGBh+Agb!dljwgdB#TMl*uyj22y zB`Yss9R>q;zCsf(fQw8vf4)eLj%beOh>q!~j_b&dl%hp8h=aw*hM9OkaN#U-xPD*( zD+I9wtuYV`5huB_0u4}WqO%aww*m}t0i{tk*$571GDD3we{|zB6={(diIEwpksHa8 z9qExD36dcxk|RlyB{`BwA&jfxZ7?BjQ{xwOxNa=LA=qdWFu)#aq7dtm5Zj0_1u=v$ zaRa@U5DPh!fOtU@sW^k9lu5~yOX-wN36)PNl~GBRQ)!h|iIrEWm08J^Tj`a_v;Pz+ zIgH{LOfT17$~Ys$2ZYXeA;KazWVd2(6B9vTD<`5~M&M^RfCN1-Xdp%qch&-xp%6If zD;^>PwP+_jIVKIE7%y0ky9k+)DVdW=nU!gomx-B~shRR4mNd}?yf=Qvu|~<*AS1Y0 z448OqX(9V)Ne!_u;uC1V(s6@X5#Z1Q21z#3#*f0HkbnU!p%aFWDV)QJZJjw24j_+w zfFM-39Lu;D&8V7%u{QK)i!ZTt$tNQzl4(_!9pI3M8N~t$u>uDW07e-T#EG8iiDXD(pEMyDT~!+4!5#uyA|7x6e|bqR;{PE>u#FyK zo(!R$5h|gP#hx$GP)9(FlcrCj>71LBh0%E-CGisfh(oLKBItpA-!lTSfG=g(PihQN>pzVmJYbvON>V$1-6HnT5%L#56W2NSVrDh6&m9gTB|x3pUF{am4h+7YOU8=0HGl*d3FJU;baFf zci;n-$~FTx*fkY5hp+a4MtclW^B9iTnl?ir^T?@dVCZMwCifI3W}lc zd8^EcNbuk+9}BWfre7c+1KV){1XlwTkYFA#XiG3+u19rPOaEv;;2rlyWmJX&&9!f2 zD{DJ@tO*cYJ4<4L8nkhXfkPW67@H)|C9)wijuryPI; zIJE;P0Jam+0AQyAEAeIOAzLj`ThqlN6ESbI>${tAvZSVV3EKgEH3IhcF~TL_ zYu@2iBOt8FR;=Iky!-pA(fgzXh$Ph;O!V2k1zZzzr~eUIyBSNcQ(g#W~{m< z5bcWrED&44wNqerC;o=H-ctZ3!UU|^YZA-9Cro?(3#ZikQPR4-1`NXub6ToH8wY6b8S4saqoAaPFjT)M>q_Xk~drE$D+D0#I5a)rWCET}4sq11b;8|z;4 z8pB(Rq+d!j)Fr0Cx_MGe#%W5$$r+#5t1n#4#x5$xWemr041?}k#pz`=){Dn#gp8cn z#s)0Me+9LS5z$T>vEm6fs8tHx&eB#GR{&D+SCOpa&F!XQY% zmR!J^EXrlf$p9?Cp!&j~9Ll52$|qdPqLjyzZ2yliF$?fe%XPG(s=TYM%*)PO%9VJD z0W8Z4p|rA;e7a1ny{ybZ`^u4wt1djt&Ll?2%qPoi%@q5~0(8aCyh3hB%XTC&)O=sp zOwP)h%~D88@VdvvJWN*P1bQXT)m+Z=46EmS#nxMv*&EF-F$?tZ7xHXgw(_EL#HYyz zhFna~2`##VY!k@Y$-sw}%aM%FU^%yJ%(?s_oC+q=3jwbAnwPB5A-$$5tX`3Z$KZB; zPa@G3&C-*k1iu!>HJ}7^X%`#K7~U9;+zS*1Ez&*xi02$k`n1obLeVU3(R7pqKv%!g zsf7!1jU<5r7f~7(5d<^|!#<7GG8(;qiT? 
zoYij~p<5k51;U20j0cEB%tp=5D&;XKan|g?TDJ8P{fN+TZP>#p*CjotA+=AWG|MYZ z(MK)T!$LM3Hli6o(+$zrSAE!>y~zwcjKI9jr`(2+z1N^+00sbI1PYMxQ4q70D+)nV zg{?nYDkis$+qtdVyUp9Z?c2W%+@zP=FzMOF9Y&yCBy@eZL*_hA5)=JwM?(4oRz?71 zTBZ?6Pb3;at<$}Wa-(SA37!Dn;vL@O4c_KW-sFwm>s{XIectWO-s%nC^DW=*-QMDj z2A8nZZ*1KC9WA4aA`bn~zjsJ_eg8!}@LWmI#TEiFC|KR@x5bIF0pMK^%}@{Zun!lW z;TW#r9M0hx-VYw`4W8gK-_1p=s}X^K#oRH3OzIO&r2K6T^JY~;N*eM=Yh`Yo$l#`{^^~5!EJzw~p(%{^?PW=%()L0RrI7s@Rt}k4=I= zkzM4l9)_M!>$>jj&kpUL?g>zk&opC16u)r2I;E%_BK{qT{>1rqP+AnxzBp7Mjv?=3Iw z`%UT^Z|#Y_l4zuytWngGJ@Pr{IVK zKKFTE_bZS0R_+NfPw-Cf_sg24YiQSBJ<)j$?*5GRMqcuSjs{!+-fZ9RcaHL^0OcZ1 z_Wj`XkADf_z4=97;-4Slbzk?HZt-Ez>rManiw^ZtFUc(FO%pxjL~hH7pZHn7_+g&k zZeI^>Z|IkR3gC?jpkNP%4&Gx93TS}gl%M+-UgP4;_FwM_T>t!@FbQ4n^#HHB-3oQW!K77lJ|)=8~t17%!pmq|wu*j}<9?Y&LNuD1ZD~ zCWHxdWQ`FOY}Ul7KQt&#tzN~NRqIx+ zU6U%Y0Z!~#vSrPJ`>-L5BDROtIKuN*teQA+@$|9F>@G8B%*?`j2Kd=o!iCLVF`QWO zV#bXfKZYDx@?^qOxIlcFIc7bMmpGuQw-C)hkQ>o#7)f!Ur+Nx2KBU(ywxt%>UC>WbS7bC=X{V*OT5GRm)X~TwJ+C}9&cbWG z!sHuFKPEv%H(hlp6LnOX@=Rz#&K!!jwS>^1_fHaKN{F=y5lRuE+(P<|M1cLF&|YWz z4fHvOv4uEdi6^GG;)=C}j9W=76_T;Q$er)LbJr!gWRuHk*Ig9rO-|tkVN%%Hn3ZcV z=FmtY2{fA>bnxYcXNvA&i-#tO@G2wU&2b^IzzJ|$0%vIeZT<}_uM z7X~}*QD=7dV6njlR8XOlX1i^---g@hr5$S;J+i(X7ryw)6)VYExbCzjzz3(#>;JFM zM*MKa7e5>5wd007a>*yBJjS^hOUqlgepKu(!T`J4T+5s!fu)Rxu`65_G!mkjm91(lV6r*SruiPT-W2 zoaZ!WI?=gKb+(h8?<{9%cGROBpc61otC zSH)^prI1CfcGat21#4Kv3Rax*)U0PkE5&>|%&SR~GCX`kGWVlL9*{sW3MESh?#O^@ zJ{6(bJi-S>NL7cI{e z6UqhL;Eqi_z$T$ugbV24EJElbVN1qK%huMmw{;|Dn?yBm1pgIDpzG{?GWaAUR7NaD zFv0{1@Bs=eVJu2mLPVW=77A#xj1M)LDmEJnQ*DYyhRaENg{Whz&B#`d#ol1YXX9}7jx zUye-k-V0_jm)Xo`&gha!rrEOuI5I>P3!K+GXEfKjGXF@kbDjmOXFm7Y&wn=Xhf5Yz zLL0M^VvwX^>Sfzydk+ezNZsGiUHDlzYV({c-RoZmd)UQ3cCwe< z>}N-N+SR^xwy&M*Hi@>8_~tiizu6k^CVJM5ZgRxd6)b<}0pR^Uc)}On@P|iy;uXJm z#y8&akB5BZB|mw}SKjiKFTfFI&hyuzF`H^rc7r=~bV4*00|6t%v>V zWgmOm&))X2k2#a&UT}ji2H|ned#U$+^hVtt2*yW#@|C}Q<~QH@&xd~Wr9b`TgOma{ z3G{OLNPX@fzx&_!e)z#J{_&Te{O316`v23ve)YGX{qBGN>n#qHxbGgoAV0XsLtOOX zcdhGX1;79lzyTz{0yMw_M8E`8zy$=rz%zl(n>ey?0U0Q(<$AlSYr728zzyWU4)nke z1i=s#!4V|E5)2AjBQz)Txnl~xUh6+0`#%$u!5Q2i!hk>vOgaqI!5!qm9`wN<1i~N` z!XYF=s{5z%A`ARW45GR_cWc2HB)%9d!YQ0O5~I1Zn7}KPG6tl=F7(1L1j8^C!!abo z9;-jAaWYVY3^_|QDC993a>LbOf|+v+3zV`gEW#sgu6AXwYvMQ$!yDd`0!FfRRN*xg}jEj@PO8<@nMQ;`M)bt3nL|ko#ZolIJnIP* z{J4RlJ2gBwHsm2iQ?OGc9WJ89l6k~i)WuyivQ%U;4Rb|9{JjfGL|&XLP&7tR8O3B| z#%4?!*z7W zcgz~>0~~?UL|nT@KN_smc`a~LM|eD=cJ#-96i9ZFN7iY@pyR}1G&v9xHy1366Ig%` zurdK8GFu$TevZtg<#+XSROSYV}OF|)> zzBou>OvqUT6(@Kma45^MV1WsEfyHQn8Tf&x!hjiY%N@`r#WVu`b2ONY%cTOy`>3D0 zB;-!(hU#B$)${qr^Y}!1mdOliu@+tf{Pw92jF z$)2pnHXu!OX`-_DfgW(Ghs31he69~rOR|UntdmF%V1csSK8GYM+uY5wkV^chfTG;a z@NBl;930RL##bCplmF=f$CS$x5R26u0yaVc891V`h);vcH#ihe!kEkb^iN$R&v`_W z;QYx%Q%L?u0v33Ihzz{QPyq&jNC-8IA8@VX5(^N}00Stlyt<3&~coF&^(I*El>xb$%Y&Y512#-J4s~2&l-R)A;2r9B#Wr> zyB@908G6yOcrXc^ztSOs6Ft$BBnt{4iwo?`Erk*3c?LrCNjcj``WRCCoJ@#XNgOky z;#{?cOw*GLQT#+Q8EVqQ5Uj-#3m>3Te^Lx6sM9~yvl#Ww89h(JxC0xNGeh_+{;10w zy zL_^dT+D&+kn=!qv5Zkvt1w?(t$jp-~bFL0X`a8q+rwcgs7o>&(l#?l!e+r^~it16hpPwm+jV>)lHlD z03xD5L;|8EI6j~?iUN3oIbwhn@PGpN*{E26ARqz=Jldl8q&ZAUCP=8USO6pNFtM0Y zc8%J=CAO(8PhSN&Zbh`KJ;XN5R}nY?4IlyxkTo7SfFLMZE*y#+sDjE30vhmK1V{qR zl`$eEqU#jd(}h>Ym;n^8D+0K^ z+^fr3ZaV=J*g8#13;AK&`P+fl&4MeSg3UdE>kGW(Jzn|2+w0_B98`h=@L3|&-n5{J z=Kn3<(M#U$4O&aQRf>C0xed1ko89`wuG$?mU(HtitJhp(!f(|zg^NqBWq{y?4$38h zqUC@GH~=Cbf-QIgK7vddcwHTc$PUPW19*WRpn`{m*;6Gs^>mtQ3d#3f^4HBg$JN&@x)$>rJx zE?N(u0W1Im6)=D;SO8Fh0S(rGAugo?4uS*F00=w+xYd9F7=RA2Ks?rCK0bjh7~(3} z;|u>-;6JW{7zlzkq5>?4+b&Q6&vgO=h~pN9+#%M1&wao@jsXNPfCCU))UCvGV@S=D z);pcsTGon>+{>1A*^L#=aJ|Dc<^T;?W8rmUCRhO@d$ 
zUN-*D%9Z2*a9|BK0z_s32iAZaDCK`1XUToIo%nzO_~ZtTV+z0k3hsdpaCB+%^$#9YK~hBAZl#p01#Mb$OYXT_{sz50B|O15EyBRY~N5N$+C#p zv;5SC7VD=dTx^|DV7-HI7-L@URz%D_sMKg-zFY}7fzFlWBS3--t=tS)fD*uAz|;W^ zr~;&BfgBV7ghHbM2;#^EX=?`TLN0(Mz)(;=fB+ER3*cc??tpjRTcT#$6L4w`Z~&;5 z$N|V>L=FJG4uUOcVRt<-#W(`x4c)LF>)KWd0kt)3eGHiWNkiZ_V|`p7&;b+hFbeS8 zuFSjzSb_j>Vrs^8jfpzBL2Jp&14(|sX01U9_<~Z6T?NsmGYTE|z6TC{6Rg8@-Pyl{|Z|#D* z1a5y`VWO>ORtD{~l>ruT<9E&i#dLyeB5bY$XOZ>*C8mKNSj`pIa0Xxj3SfXj?rWkh ztvG5fkEVbIE+s;)&Tx+8(4FcIFoK(20vUL3C(v&J?_!25SFmo~{|@jaC)~Je#j_^R zz6`*cHD=GH;3!6@ALv{wFK_(* ztz6NjW*R_kHs|9Fa01IM?E^4?&1Ume9sn+IjBt&jRjzt zb<>(H0w%zK7dYt#K!NK-0XcdBCGtDT&H^$B06Ze$X^v8yFk6&%0<3;ssh%hhNcGdv zbOd;T2EYK~K7mkAWl~@D&qe`Vw{>iW03$$xRz89fD1oS=0QyShLU-*~J3Y~LbN~lq zPqQE!Rn)^0-ACO;oMmUzW!~dGcmJO4huK-r8}gcvAB0L%`dMChm$Z7P9~;n#J*xK+ zXySm&_tLp{bIVJOg(oTn2Y1Z9Q~Cd5#+2vzp1+nH{gIXz z`oai=|Ek)Ers!|QvH=5nghptI#GiDupM`c*i2pU8$NH>Kl!NL5K6s$IpcDy$Xv)}S z-=<|q6(O=D3(QslAgV8lUDy($p59$mxc5uK(RztN^u4dMjMp5OhoC<2n7z?oVMK77 z*LRUWi#=9o;xwH^XRLsa{NLV4Z{qvQ$9zb`)gBT0z{d-*|4qKMMmI(KvY3D?b&Lva zUH)+8s*h8NrPK#-ppn-|%g_AV$NfC}fUYl#p*IZi+7AP-bZirOWLw~AUQGI^t{DiF zi$V*CG>ZWsizNW+={E}*_<#;bO_@t^glg9J+8sttRHH!_eP=Ej@aNxif1O;+9SWrg~6et+rxUeIDz#ajZ+@PUi z#*YgL3m#d(K*zuj3QwX;sd6RDmM&kyj45*_&6+lE;>@XYC(oWfe*z6EbSTlHMvo#* zsuY|Bli=3yi7FLOoT`1|oD9Uw*+6H{zV3@QrjxMKy90wyvHWV>`S<;s>XW6rF3Gw05pKZ6b}8s~|D zhX_{fO0^G5ukZwZoi*z<&at-NbeTe+=7_{G!Pn!rE+O^K+Vk3Rw#q>w`rS)`Fi0(An9PXRcflX$EYTWqi; zc#~~@*yez4I(>A|M+>-=ga%IhaD$B~fTzI>JE&y93|)ewL2zD1s89+@$Y4MdYdvsM zb4UUjsGx%qTBxCiBFYqqANbegfC6Dvplkn9!Xu@X3yy`Q12@$mken-Z@SH+AWKaT| zXi*?UUj&tq+zk~jQDRIgATiJb87#2`Mds=7sX!e>m#DDA5?idX$0D2TNu@E>Xn?9o zYT#FuT59EOS#C*_aVaW9fwvVJcbtR-@bDnH7>>K75XM=!uDk1sdQhjx!3$2k1HBtB zzTwWRtiS&P9I(Ix6D*XHjQ$s(fRTc86+zD&L~W&A*_7#(Zy0h#w%V=>T~ji=u!9oi z@~|(zaJ_r7ye2!S>Pq|ygtEBuo;-reExT2)&O7tmv(G;dda$zrK6#bV%mjkcEIZwC zM;jKuG1jJ>rj)YPHSadA%~*5Ywb%b&yZjb%K$Bgz*=M7jHf2I5OtgTiB^|NEQdTL` zmDl=#Z-e%N-8a^110J~GgA-nOXKXwA1ICwuWP#x#pX5zB%G< zL;C1IaX;*MZ3iu=Z)TSF+qvtn!ydcr#eOaoq@f>;VB^QoZMWo3V@eioA|x=B%&hx8 zyYkC3-@Nnk)m}7|QBM2#lr@Ppx$sf9F?CPJgO5De&y!!i`RAj*UsFT7?PTfFQv3Ko zOVc!Y@K~yj8SBkgP(k|t12BLBBuMmbrx2t$On2b8-U4S+n`;d(Yns^~^ZM7n0b($N z8jM}^7B;B?8a zGM2KOr6_j-NHiV|V(-hL^?V1syRD3oTQnCji)qYbA~TuFTqZM{>C9(BGn&$zCN-;R z&1+&ao7&taH@oT0Z-W0boZ=iOIm>Czb9NJDJmJF`KbT7>?y`DR(xn@()JH8s0G;{V zCqMh?&wm0mpaLBzK?`cogCaDc3SB5e8|u)9LNuZhohU^sYSD{gG@}~bC`UW$(Te7z zjDW0RE_YWg9nQplJ;V}j<~T%i5ih3Un<-6eYSWwIG^aYvr81>zBg<;mv!XSvYF#T^ z+v?W0!ZogPohx1IYS+8sHLrTzD_{F+uA4Z8NmD_|JDDNHLXHq7zk4G=p7t~?6;GJ{ z^ebg6YuU?UHnabl-7IH2>)FqOHngGrt4*Aeuyz&=o+0F={M-mu?FmmTRy<<=U^-UO z;x@Os-7Rl>>)YP~H@L#BY-v%8%LJ|#vDNZrSsL30Fw%0k(w#1Kt83lsVmG_mohuM# zW|Lc{W`>33CADxO7L>?#V=xrK$hPa=_rf>6@|`bz>nqxF)g-XD6tB^4C^rUHERl*_ zBqQ5ojzvXt7v5nOfZAp-X z=mI&)Ql9@Zm8(qOa*5z4kvbZ3p@=l{*5q?97N(h`M_3QGn?D&<~PR|%Q|70 zalLFiX0vpZ=5W7D(%KR~yzcQAuo9U%!xSOU1-E$e|7APIu$ zfC>LlV1ijY+c611xWU~WZ_Hl%CPfpkAS2exR9o!Zn)36RF%7!#?d8D^SP*bW8W_!tGQz_xZs?+;VBJ4h00&Tz zs$oO&*F(QI2Y%3UCbWB!ShxCiS$6ur=4axOXb>19Qa&b$WBi=cI z*(tyRqYGRxEO^riIAZiNo?8h$2Ydkru=pdmOYM?3>x;lucqTv{1!dH^<|WX0BP=1{ zEWp6N3*dJafUDetzX-WL-E$==p#y6oINy67dL@{^@Ck=H1roj{83aJ(4Nv$2{!ah( zc@LfgDGxyCqponB6Q1Q>Kf&ZV%XU#Cm(<(l_N7i}LL8f%+71AH2ln29sr&s1`kq1Y zchCV0gv8-~|NP25u!9#FUFUeOKL7D~{-*EK5isBVd0p{oUb(3rx$T_ksa(o290r8m z4`4vOb=n6EUf6XP4_KY@$w2X)n?tPIn-n1XoxtHiU>Qx|`FRWNSsm~(oX#cO0(u=2 z{N2Fq-1~Jv|7F(owO#f(8du?2VkMcF4Md+wfH3XfORbv+oM6v+%MAV!ss9f1c>pALc{7q#D5bs^-vAN%#3$aH|;fdbHB0#sa^DM1agmt)*v<8@#Snt!Pk@bL z-G$pJz7!Q)UHi=+xtYK>ZeH(|-wxQH96AK<4c_nxLDBIWK7!!~oFM zu$UeW)?yN%;~Nqm=E0xhO&~RX+5)^};fdgygk;sR07L4eL#AXoPMu1Iq#VxMzcr-r 
z(IO=NA4~olJX#`0>Xkme#8q)z!{89zOpHx5*MJovSDK>u{Xkf%9D3-S2RNKg8o?G8 z-QUSyBI;XMKH?GxV&_3+7}8)`?wm@}B~1pR;dx^ke%=gfU|8njJrWvB5~lCHAYQUS zEsox~$pB%N-|aD8V-{XoI^11`Wre6B)S>@rRv&t-VZxD%*|H~OCl4@+`jrcBI_G*Kn{fUn5U3=>IVbMj zntP%re4<--DrY9zz@ODbi>YRR0ihZR;Y*+pr8yTtUfON$XM^??#5w4LwqAsOmXQ7D z0cqd1#aMxMmG_}gK~`gFis*<^DE|driOM92qG)wlC{46jot2b7Dvc;gP8*yMbcNrF z>gbO0sG+s!OuXpEwWex$+$Uz(o@Ec8MbKCFXp=hWlm6I`%0!TQ(3aWRKZ=k{z=3Sq z6F9`yY%wX6is_h=DT7JrCc7C+fp5oP+?&F#g z7MAkPh4BuJRhn1UX^84+qAKd5UKXEDC8>Ro6e&cT4yY(Lml(YoqiX7=a%!VOfT)oa zRU8ncDNvZ`(PTsdlL)>&(CTNKrkAGK<%@@lX8svXU0OT6eA zz1>v;YM^qcOWchKE$N~DDxyMbv`Xu=QfswZ>$PHQwrcCPa%;DG>$ieyxQgqzQfshI zrI5C!KOQS=b`@=AN+~+)vy$t)(rdli>%HP@zUu3~@@v2PYqgrIunud(j4Ep$*=u&2 zD8i9g_yL&4tGxbe#7gYMQf$Rq?8Rbi#@1`VrUd^Jz-St6AC@u=l71VVu2+sitj4PB z%Cc8W$rG^pE(b~@ft-uLw&=T#?8g0=c?a?Z2(lYJR zI&ITJ?bAwa)Kcx#T5Z*0?bT{+)^hFEdTrN&?bnKJ*pltonr+#l?b)hr+OqB1x^3IS z?c2(2&{`F`hU~GL7s-Aph}tQX`R(5VZr}>;;1X`(8t&mDZsIEL;b5k4&Dy6)@hspI;DsJ+Ni^xEQ9`T^E$6-2Cu-T7PQD{qy=RC zTq|6KmXRma1zC zD)0QT3J0;?Dj5UQ)(mrT8@n-rd9nXY3=M@rni!wd$NlgblWdYg7aPCv9|Q7s#j%SX zS3FfqDHJf30I^C4aSMCt72}v62l6CS@^B3@Ow4fw_wccnak7pXOBia&Npg=)C;_PG zDOYkTE7&Cq@AQ%smdcnS=Wz!&^3Lw;?l##dkJ~12ffV?%0Q@r0X+f+7v!6jh#i5%P z2!I6uv%3W|e{hkq8S^kZn=cb{G1r!#sVFv2^UuXx9yJ9vH}f@*vu=d*Sd}w1PxCWx zfH+(8Iy-P>p@)9f(J|E=-$h%pZ6tCY=dRY)2kdi9xsd9p`@;;)g zx|%ARW-vi4@&;RR6*H{z#<2e`zmy3~0vdP+8cf0@Oh7wZfG3CoNlW5LPl6RBa|b*? z6uk2=O+XlE!4zg+Ucq0bEQQ2ekCwMZrj~f_Df26-)vz z7n>JMLUQ7hRHrml3-kb1 zFZ6sh^!zp&6k{+M?@<2`j~Q7s%>p;RFK;&}YycYE_jI=cP{-O8ctSIBREKV0!UZ56i@*g zph7D8GJp5SeRp&C-u5e^fD^DAbBc@x_JGI`b8ezQ3_N9rUjPLh!37wB@@2rvX#fi( zzy(~yL!cW4p!WX^=-Ohu;%&d-2gHgFjQI?_N(bCPXeV?Dl)%b0z?f$M16Upi1c6Rp zoFBOarp?&_?J&WbZYNK)Lez2s*O+p%nke&47gb#|fb=CWGZ(x#8US^6+jNnKcaN)h zf=j`6o4R+vIwwp*7;pfCpR`TSHzcN4iMP=O~j`Kzx&7&HSj zBY0Dox&`pMkyAJ)pn5mIcaU?!t6wz-pm?wUbTnFkiq}Jk*VeBC^-VvxlD{~$134#L z!myt-P+!7&-}{6o__XIc7lc7}U&1fLwS_lBw+ldaUpOc9`gn(P7)-)=qcVwgz?J{` zj)XY^fB^r~T|f}9-dIJz1AL(Z2*fwyqN$X?dCWEv;K?r9NO-gpNGfU(f*9O&+7a%!u4XCeyKz9>nQZIxT-P3)>i` zZ*-@3QL>Zxg$H>%yZ9tLxJ_gD*)uZ-n88o4I~9b%ep~o=UjitAxWoT4t_L&OTS2W~ zf{R~4xhp%4fAka}I4Zz*G2gx0Gx&BtxOJyJe@nrIU%@3rw>iHxxJx)tgMD0EI54~V zxYs)3Z}%o3IIpiaxKn}vFgO<={I5TFQ@D8J`?ao5d+fiyFW>v9d-qUF!AEnzz2A4^ z-~RurKe&0j8JpJ@3U~wpTm%ImfxjJqbIKG4;0jGqhg`k@ngfCI8z%;+{01-~b;!IV z9f1x^`CEIW7y25=h;`6UAO{3MXEy){AV9)-|86s`eK9>tI66Rp6F87yL4yVP#8J4g zkHUiwpB=0-i=wMp4+$E&17~B#Od9>P=`tk&Mw2H|rc}9-WlNVYU5bz(P-X?23U1;Y zaitX(DL%uX@dWA;sCsBPO$sA`3MdX?hF(Gwg~QYhKf^#JiU!rsp-``G@U-a?m8nb7 zaM*BV(k8G|P>tov<&vi>QlfAbs%34I0ZXY^;jpP02PsfJaq>jP!iE!e4|`eKZ0Y}% zC{d_zc#1}-T2B*%dI}2)jerf4h*Camma0%Rpgg_yWH{_ltfGY~9xHXIr^0WgpjlPa zQ)0+c;{N>E7HDxV#jAocz4#{w*t1FI)V`g2ckkc9hZjGdeD@^`438*+;sO%}Gm7v~ zOQ8pknl^c`K#*Ri&H@=QIN^jFP)K4u1eAym1rT;9qJtY2*x`jGiny-`6E46YhY*Ai zpoR*Pc)$n)9R#7k1aJyb0|ZoXVu2432mlEgm_UF94GNIpzVdkN(Z?Tw4Dvgeh%C~` zBauu}$%iCRVo8P=GKraIsI0Qeimn=_4>5P*dxzx#pTpaE89=O`uYpb%H!6 z$SSs3Ik{vL&a6weP}qoFJ2q5cBQ!0cB4t)jx?!UtDA7oB3$?@KK2D3p1=!?RH0s$T z&)uvN;Lgg94Wk}Tl$%okPF_2eR^oOoH*U>K>s$b+QhiM8- z11h{IMCs5eb;_~JRXS&#W&jJ0LX1{j7CFf4B*rjTObl7>1gy4AaT2YVi?~0H1Gr{IDj=ZZ~^o!>_j?Q83Zt(ffM}4f){+{ zJK=s}hAwXi|Z1dnpt2Hf(ghmr*q zZgUIs_*5+mWaTNE0o9|1wI{bc^ivrPC8#vhQIb;Bp%!o(9QUM|wYZI>PjSm1*Mf>i zp_HT`P3a#8^HOBd^a_Ka=u;pnCx?18D-@k5O>>o%mktJ{aCB-#@y1cC{&7zrWnNa+ z;uxxO6|2b7X=QY(E_cS0t>YPpPUKU72SNY>bB+H`hsMbv5Ivxr>bWajAsE-cZV0e> zEo{pE3PHmvR*#3(YhD!)u*5EKp>4J7Wigvs%~rNuPAb|y1(GzC9OIu^Qi(=3g4%&1 zl(ns06Au?!fXjd`FL?nSCu%#D+)jnJwB>DXX$xGW{5H3j*)8ZuGhEV80bRqT!fi1# zT--{R3T9yq8q5N^sbm+sx5ch>!OI%$de^t*Nr>)9je!Gr?|#AsItwScCU!ZK`wYcZVRd%`wI 
z+2O2*7Ykw$w=%@S%Z`ahoZ|j|n8Ye}@r(a0)M6Q*7{xQT@r`kuV;w&h!98BFc=*F& z9}k(xMKu8uT z7#xsx2ur9`=_ynB&fu^xmdoU6Vc!tcMHY3ckDY8~FPquTJ~f%49c@@oo7&a3_O&xB zYnLoJ&Y~p}3m-yZ)sDNM;2;wZhn@fKt`$4P*mn23;T>;z&wI$)ehHJIbR;`m8q;?W z$VdGBZ$H#f-38xTpASuMX)B!J4R`p%AwD#FyTsN_R@#GC0&as)xZjvI1h6+|aFMGC z(GfTK$x)tim4|xbDEWXumTba{&zvPuv&ca>QsrKE>Enjjvkh!Ga-k;)*(*2t(UG2X zr5Cv6EQwNR8!~e!qeM$n(|N!JF7B(<5a>hq`kEfNbg_?}>}5B5$-~z4lyDog1eubz zPtrnO>-@@919{iKo_CjI6YXdB``-Z{c!#T9CEb>}q~}g|lYC^gI@0^&S0WS1_FeFm zxBTTX&+?!XiS1ASC)_b&hdTdcl~U>3j%tR zVCf<8;K)GCk$Q4ljrGMx31TTvd*v^m`OQzh#CafDBX!=}QI~p3L_d0hn4|RFzdD|+ zb_L^)-yp~@y7SSWe)X>(?a}{hJ{RI(iC`RsQogyDxS#*4Q&{*H-XZzPFDEAJ{GP7> z4e$UFFaZ^C0U59X9q<7mFajlT0x7TpE${*{Fars2139p++z;qT0O2?f-;#zQr0(rf zsA;Yw`|NKP^sn@`uh;&KAkIz80MPgZaImuJ19`9qeeef?FbIWk2#K%=jqnJOFbS1# z37N17o$v{vFbbt`3aS6F3a#)8u`mm@a0|Jx3lroE-46`Ka16)55kue#SQ4N`~8J+R`{xBM)aT=)+ z+1Rk{o<{fFPRd&F{`AjFvM%88P|r>*7FSO@4r1YEaUI#Q9WihEOfmj)&!0YT6?qT; zV8{;n&mi8Q@$mo79QDo}1#%z>a{aE+!K@DjwUHp~QEIjX9Ow`uJkIXk>K_5}>k2X> zHF6^_4<0RU9;Yt!Sdg@$>;-c!53z3J?r!rga_BhnC1ElqlkOUE&%x;N4ZCp;y|4QO zF6+YYYg#hzZXzg&vM7!6D3LNLm9ip*ktI$5=8Ogg5Z|2 z;D+)m#d0jkvMkN=EJ0`(of0aa4iz(QB%=)YOmZiOZ-#shDA97u!ZI)UvM>GeF9B02 ziLoUFg!&Tl;^xfKN-`gTk1iQ=`1UL?yUZpevobC7GBGnVac~qB%PC(jwA@Y}`^h3& zFb3m6EA{^|wHR;HL{KwzNYz|3Hf3`*Y17l5@iuRhFa_iT)WR?qav>SA8{-fky@cGV z=Fn<0Eh8>DnX@^a^Et~8H>DF72OI2pS-=L2EaBU>z&NPD$=NQx1ezG;8GffEc zJk@hO*%K_M^F85{Ag(hzaI=N6HQc2WDI0L3A8{B^z}4QCXwzx z85AbkB_`mr8Y5Ib1;p)g@77v^1#xcUvQj4>bA|{4C>^gsUyIOaVgNZaMN^aq6|_ZN z^ypeI76dm&#_)KNtKl0Vl+Y}R6-5H zvv&V;J43UdB5c<@)G>7~)9z9PM^s2(LeWqZNG5Md&Gbyu?9-lfO`Q=VWQy(d&1qOr zOI%?`VGt!(YX-r=A1e|}9W>Db#z=DzNzt@W4Ry$Jh)of(8QoOoXbwk7Q!zcXF}2j= z#zaKX%}<;3;rQ-QJ@r!ytb?2suny5P=@T2@QrGsaAnepJkMl<{RZ}4p;>r|MZS_{$ zN>rcpL>KWH4dP8N^n_d@4)HTTvC=e6Z}@_A4`ns-G!JKL6<4KoTB9dX>53_#01!ho zKA18c3*srkp+=*!N|D6=d=E8K(>Sp;3HgkuMl*;vf3bjD#Rc4ld|S`+GGClqW3ViHV2Ahh9UvEgUs zK^{B_Qd;w|@QTkyu{wKmT)_bq0Txmp@>~rz1`*cox-?~HRXiXTv38&Xj^GGThCJHm zmmE-LrN;z#L2LGIO%6SA7HN6aG*$v=n;~_!)!90eUu!g2 zAN6Uwb504iaSIktF|8mtO>5mXJWk*T&PO{QBm)`%Kz0DyIDiGD1ZgTm&^`@>2hPBMLYm}O(p;+;06Y`fKw)P%Xn@%_>L>j zgW(fgQ6d>m_>bj5APC_V1mYEP#ATWGGf&ZlYmOC9YajRY6>ZpixdVgE)@E{GU{GL! 
zvxf<2Ad3IPjrC&&5XJ-^pnqOw2Q21#2*!!yBSTE4dIo@B#Q4KbpqAv=4`0>*oOoqo zfCAjZemtN7WQ%i`_+FgA1akLuJ6D787|kZRJ!w&LL->#X7!59A4zyXDS(ZY>ARbRp zB6g1?eChfCDrj z0@7(=O5g|r;E6q806^E4b>NE=FbU>)W;!>3>bV4pcm_BC34)+uSl|Yfd9fnv2wva< z!1$TV44v207T?b#teJ%Wm>>wj7PeWNm3Dad)o>Nr^Zs@IBw|&cMuxLj+=sm!I}=L8$F0r)&JHzyY029E!rKMLSIAYh{>+C*dT7$0Z2CvjYoQL32A9Im)V+ zzUw>5agKYN{9OSI1mo$=6@Af#&b#9S6_i0=(XmJ$LLSK5Wj}K_9iR{2HpJBp%q1b zhSiPz*lp9%ZBHfg%0YjEDaA^r{#7(XPZfnT(0`rO+g#XhX02}q*}*;BCsWyr;P`CD zCAeX5@e0=w66ze4*JHRUB{d~T9oWrNySe}U&{2Bd0sCe<)?TFmBTATa-04V;O|qq3 z{*VS_4Ps8`-6cRmlJC8<^BrF7nBOt}MJ*O*nK$~1w_8J4avN)~-t<(NCIuT_!X~Zi z0!k%Q>)c+v;#-#kGrs13m(QH{BnW;-Qeqc)91sz{X;07%1ECIzKIQL`NL1_CWxm6f z+2)}hk8$4CH#G+h0;ba&vV;Cq86N74{_CqGO?cASn?BmcH0sgbcW+v+6vW)aX75N| zK-7ZIciq$F`$({kYFPc$&z{`V{_k&o?blu4uYq$mV6R7jNP4j3*{<%B2JGt~II~3b zAU+Sl9Z>BR4a4x_0l)L(Rq)-93M~KqrB$Ic!k`ICqcQx};H6ydxt_@V9P+K*Dqq6% z#NG38KljrN^gSLN%sYhzVu&`S)@kq*S|VGm^Ew;f@o!S{E8_l?KH|w<_o=`7kL($j z9V4P(aR!14x}PyzR!Q!;ek5(YgpXjpJAUbP^%+4&6CryrA z$r9nBJ1t|%oJq5$&6_xL>fFh*r_Y~2g9;r=l%;~B29qj9%2dSCr%j_el?p_{)T>yb zYF%371Bt9#sfPUuRxDYwW7GehRm--m+p=%tj+jfgM%}w|^P<(Ops3%!fCKMbYn37t z2tXP>+$8go1(pyMaA9Ax8ast7h7FiPDcj4>ADX>7X#Cp~pGbnif zu)uke!Y0Cdce>mJ_T9aCdyRX7!V($l33K4C1bg(nN1VWyK_@)t)Da6TCj)R#g-6(2 zyPfyofE12Z-wC;8sNroB8V1~9iLJy?L(NQ5qA4Cl!NCR`ShCQIBO;VpLYrNbQ5T<$ zrXxtt)DhB5DFx-)hD84sY2=YeCb=O5EHJ=;0Z=j!fP@tW$3hSq43OCmPaYs8l^rI) zRT3C5(7}I9nNR`<7X;A23o}5clnhR2pur2)m8pXhKU8%@26oVMGZhlz;@JUtO?jR7J?pzy{Ldh9qF zNI(V&&MZE?wAzrxrmOC{?6ymr2uJ*Y0hA@Mu%L~aouI_NN%U(G5dKC~Lk{}pFkBKV zN_0XD2h4z(QxyL&%AF1dkjlfqxfUGoSN|rg@CODT#Tl-WwOK%CGh}dUQXG8Q!Iv`h zfC8BZFi`-T7~P-&vP@8n1eg%`YXS%_aLRAST>(8^mKU)(AI3pDalu6xAn=>fJlJ5B z(FJdY7+(opsk2A1x~rU2{;vlv>xFYf^qAcC^`G z4nFYFe{&7AMTs8Km%ES`d2F@_ec;J7JS|ksv&kqX_t%|$fTOpg`Pk#>A|+JPNhzWB zlWVZU?f^lN*KYgmxEC9A0Zt-U9|BIqFtgfKJb=Izdp{s`6afT(>kb%XkU;>!BE`WE zF9hG+yfptf5e0B41aW+Q7$A2<343C|gaFZF&w|s-g0RHMZM)< z;0LbyA5x5vfewJI1Gh@T3`m%udFfy~PUt}i6zI4Pl%RglV}cB5$b$}G00SjhfC*4o zf=!8uD?R8_08VAHeI2k~`I(d0fF%(m>a79S8KD)*x1A|muXrf`oI(FXao;D$B3dsVjrQ`rX~d|caV$?Sz3~@5Li+G z2*LkGPhtcP%sGTcG|HHbRy4XDaRha!EDb_VgBsSo?lmrfNhD`U%UaqpDAscl5bb2V zhIN4k-3tH}oUPy)6!&jSJ?fvK=m0tN(O04T6#6=tWk1@wSz=CMHm z*2Y1$sm%u(c)&1w;KCU6fPW41<`$F}PezsC22m`6$WA!4DNb!@XNtD&{1phzY@!E<@Bjx8!3ic{5d)lX1P9>L0bOW;2F?dg zC30$<4HIM#e%OEmBxox3bAp#N;K103z|)$yzF(n0Z>X|Mn&4N2ALxJr9T?RTm{$K^q7rC` z!AmO-8mO-$Fl~AOtJ-h8NZlmnv;apiff0cvRP%`qh6im54^&EBi|jQ6neyX&p)xd= zT8#%weU(8Rc^(cj0R%s=2n#BfGV`fSfjg|PphO%pX&%5(#6xiOI)DTQI43|1We$w} zi;6Tu88l)>0KGZrDN2xYR+wyg?xgeBm@ zZ0g3u>~DdgJW7NG$&H~^H`)&U}b(TtPInFdV2f&K}R zVh6Z^mcFtB4H)1z3*a;o3=mN07GMG0J%GH9sc9@FiU=WN0t4t61Q!34x?#wDxd6lOUbVu5x}>FIn-2K#BmI+Ju3klga>)`4hCSrsGcrCfI;(W z;ZBgAVNcL1D^&u9O?BgUFZh1h+`;v9(4_?5xC7I5^$nT~Ko;}O2w8eB$y6-T7pRKX zrm7B8lhUMUZ}3+!jF}AHtw&V@7-UwpsKe;;fdinR0xdIulMHa+4!mHu#%2K#ob;q5 zf*^)B`+=;O(`Qo^-mWIi^rolT%^Ni=<-l<{J6nAryJ!X~AiHP7hE8;Gh)HNn zhq{%t($GRdG@@rGBSt^^?Qlnk)8|firBoWERVWl(X8>1ETay2Gv%1-+OO0D59DzDC zubv5{>$oMrz=5-_^#~FSQV_Du1Dpnrc$`2336Y85HPaxnlM5dV5aFZDdG1^&kk+Hd z00aARpxy6kpymn41S%Dk_Y79*-AdP-mXtO_Jp^$ehPcQ;4zsYbs>N-p%VswHWL3oVs^)%S+3(3pA}ke7k55b zOCA`6PIq@eU{xv*0e3Y6Cg67?0C=KyZ6t;SCuC9UM<(KfJ=auUv{n})1!Gt!bi9N; z)sX?fHWe~(05HH_q8A<(bt+bNh8|TlFGm)sl0;v&1Mjmk>@hD*v;YC(O+Ro1^HM(* zRa3T7ATp4MF|d16@hdGb0ExH&93WCcmqrCJ13JVU8qg-BWD`HawoA^NnwgdNMQzcLj-Ys4riHWb(&x zJ988rWMJ;60z|e^*#A~pB)bGX6;&c<_JB`L#m1DW@t;5PHIV^i5~@Cb1|HHnHcDOM>0ded$Pb}<h15lpb|NJRC|`dA0>C0|bmAlkRD?M+eJx~RpLvfLnVUgK z6GhN=JRl-icY-JgXW8O{95Y!pfl4xQgF0vkYbPuY(SsB@owcKz)>$D4c!wsioqLC1 z+G&9U1fJr#0}mLU+sU2f;hp4Zp63Ze+@YR^7m0Vsp5*C==y`{{p`HW88zCk~B9k5{ 
zpn7AH0|c5Y7b9@R2{lZSL2`m`T%a+1qXZtqKLPSDT#=nrBB36`9?F3=#R(x$6KyP) zH#MbC9eQpk3LO`-p-S_i`NAM6DsBH+Ge2L(CF`d(mXa?US{54mG=>uqbb~e=Dp?`Q zf}IJY*O{c(c@tCTkDj0kb*2gk5eAr{EDzzEEeJpdi4i){BQ4>OX{UA(8J*Nwrq!vW zXv!8!a8J_Hrf&MCHvp${I;V76C&4qPdvd3Cx~FfGgxS=QwOXBrbFSJ&nb~+x~uE5slF;5>sg+{I;_N6tj2n*$eOIms;pTVsm|J3 z2BMqf;h#dt9G^(7zxu4%x~>12dL5nGf}hiNTqh8!3a%u>sxSC;sJarNVH2%nt3V;9 zW6G;>$F25iuMKLD=-PGrYMlQ1uK*jc0!yg(TCi+Jum+p31uGQZIvA;%8HjVGp@pUV zx}_Qs2$r>wA+fP!H)x#II)?^Y^!kGeTe2prttgwaC%dvNtFkStqzqdaLGhwAE3qEo zq98(tUzd;^+p)}9Xr}=fFEO&b`m#oQv`Cw@O1rd7TOlzkT11hcA<~ai8=d@GBlseN z8LP3Xq@0DeO0IBdYDcsb*|cVRwl15tXuGy-+qOyZw9*;1FG{fk`*l$OtFjssofR93 z1`ex6ws7aRf;+f`Te$y*d$@H67*iJ_1=+9t`gLZowKJGmUkkLWGaGXVxNnEJoZGpc z`?;WdxUg}x1G}$~+bs;KIz5}Z4T&1A>$*h&23l|snoC-td%L)sySlr(+&a3X>$t$n zBKQIeV+R_hBWSd$6Se@mLE!`gQM-c0yV5(o)LXsQyJo)oyTJRnopA{bNx5En6U+O$ z<+2jAi=+BV0)LUd?AyNX`@ZlSzw$f3@@uKryRFdMF88~?-3k`@8(4-F6Wbd~p>qjt zu(36mS*Am)v)Vc)@wyPaysIQ6et-t5ny==zzVy4n9NfVk%)$1{znl8My(+>c+^r;R zO8~oQ9pam#R0{t&;>=RyYK8g)Sox0u?+OMEO|Oc$kfEJAF=YJ0|LY{qH4#%erw zNSwr995-ac0hdq$?drtGOK2aF8emJs60C#epl9i;!Ae}ng=`Z&oX8t!$c${r{ej4F z%bSoAN^M-pY<$U>oXP!rsP3Z4{`snTKvnv?90Ne$cxsg*kQ_>oXpC+%$6L+&iu^K9L>@^&CKk{ z*6gyNoX!7Aaut5DsBt_O2w?_i&9IM zFul_}-P1n((?6}vLX9MpfE$!R2$XQtNuAU-Q92t7#j`5Kul%Kdj1%r`35=k_=OU(= zUC zE&&MtD*oat?&8Po;v_EHG=AeZ?h-Ds;e9>hJPr;oF57(_<9@K?8h+#{&Juv&<1XIh zPX6Rj9_3O#j-cj^=8<=4{^PZvN(Q z-sN%-=X8GNa6AX_lWysiUf(ov>6o7Bp3do@9_pgr-;Pe|`JL#Cju553>a2d>sxIHCj_8M8=&=6k zuD$~3Ty#DL9Zt6qv-+Dmo#eVF_jkj4l z4&w37@BRMo03Y!BJ{x6T@L&GMlVN+5o&M^NuJH6N z=NMsBCFdy?WFY|!m>ozazMS<)(f9%TMEj~{= zPMu1r^s^u9$IKh15Wd6LKH+`K6Gz_dMGoXfF7D)B_38fO=8o=H593B2^-(|MSpO6L zZtpFz_4qFJJr4E*-wJRZ3 z-}i?f=#jth?ce^B|L7JE|Mws9`H%T14-kLk3=~Mvpd4@s6)t4h(BVUf3K2?lxToMn zi~=QQ)R@pBN01>!jwD&q6<1vC$*6dlch8Bel>*tLhxN+aY zZ6i0XTf1`ez+Jf4@85)f0fY17aj)UOc@-yK+_(_ih=3$ZW;ogMz;QVsTBZcb!!V4n56v9w5#vHq= zu)s{bOT!acRFFi{`Z8^_(^@o4v)E3|aYr6|^zla^dlPaO}?MYT|@h%OabEpfyYFHKcph%*&erxPExc-2mY)RD&CZX_(m zk3%gDM;SHF7}nJ~=5W_qTmF>|mW3p9vt1@SvkDQV>Du2H1StFe_nrNhx zmTueUCa%xQ%C1e;+HS{NIk~B?CX*p&zxDd-oSYVWC34F)n<4wo%}pId|fB z#}-+(I67K+=9hQ=rRCF0?|JHD>iTJJsg?Tbu4k_r5wO7ze|)pg#(Uhfg;M)I{nBmw zZMdD1zYe+OJrKCAx(YVmh9ZTuLP-fufDB8VrP@##OfgP@3xpWnY{xmvMQ~RoBbetX zhq4I9hICrf+~!{9I@Z0;gQsI%>bU2t5ISdT;rU$AxTLz@Wo;n9vsU$NNWJrw&W7GG zmaw4r!y5+idq8}ivXa+5V}(sVO7z|mVKT&vAOd^hBOev3$e!uBK`1WSq879GrL>(3 zBJ0Z!MEDeuK*%qRf12Ojz(F@b)eC=%`V!y&_BJ6q{)U!W0|U1TpoD26gpCprgNOnD(S*VqVRUb!{o49cs-mIZ93EGAt6nR?I*t40_QfqEnx3IcVq(iX12j z>8!@f>L4$A$TZ24mLJt9D@A%&DymYVc;&13IOGKXG}hiG*Q4X|3is8EJa?>+XpCgC_QaKi%x!LVyB}oaSSqvs^b0LJ4O{?2``$i|tzff-Z&R0QUjforVDepreeHW& zZH5V(0LGPeP3h0I+Agf<1fg+h7(8N?)xZ;`F8CAyzT{n4!+C`WbPKE8Ls&7y79Mel zNi4k%54K!rGc1K&te`k-hfgzRmw(iFV?aH#P?1ekG|ww8o$ie#k*el2MI_puLQu&A zo-ZIhd)oTq*T2ARayFZa$}2m0SD5_Vlehe0g>KSlT}AGK6MSYgBNWVQ9vfhbTUa>9 zxv#LC*dbV4;_LDO#V1DbPE0IkKCGC{gRT!-^yHtT@W&}jp_fb5cv-B>f?js8TUczm zzrEp^f~%cBM+4d;Kj?q`4KS^z{A&N+dY@M< zl5Bl#ca=o80?07i8b}Haalc#yAddq#r`* z_~>tJ@%SmHMbz0@1McCX?sR>tY;nPbn%En6wQ#M>aa}(=ZujEc<;`!r z8=61Jd~CrPB3_sxio7({N&cjXUYuq+k3ZETkwGhQzv0%Jk|*%QrHt{%U!K(z-+awK z-tx~Ed*;SYc4SDu@rQc0)-(tCCrdB#*Vo#JMn8Jfbzbb-=f3Bo*L~2HE^bHlUiZ>A zy6}ykb>wp`>Cfh2*Ol*l+e4rD$fv&6t$%gwTm6+p7yCw|#Qn0%PVRGWd-s9q#Wm!6 z#N~Fo`q6)H__yCDV<)Ek>yM|rlN5ddcd5S%p5yu1v4a!59`msr; z8ko}aw9d=_r2w$BKs@D_FE}DqdG#GxBOEm#R{*!3y=Y%G{BRz zI4VE`1Tsk3EK6X7h9j~KWI)SP zjKQzsL1A(_EW0qHqpl!Cr7J8&?K4GGOhr{(#Xn<3Ddaak%d<5Yr@E8HcgsRBRKH?s z8@s#zzc}#4c>~5@97bXkMq@0-`D;Tsd>8MUEcpqq`k_OIFce5Ludh(JI*PigOT0c* z2)=nZND0S(aW7b#Jj&CuN5sH#1U7TbJkb*osB=dYbg=+CC7R1beAK~d^2CM$MSsLO 
zpHmQ>yS`DZ!El>GPaMeXQ$~gaJNb(@lX5>XOdnoE!x*!-p0K-I3`S(^NRJG(k0i!+ z+YiPHNn+%#l1xdJQ|!vFT*#;_ONPAv zNQW#&G5pB)F-f<4OSp_nxtvS7G>VnvF1)`hptOOH$`kQ~OfV;_(lBi~G{izG?r)XSBG zv7;!JGz!fA3qYELDey|TYrGdA^GQZfhwltZ#k-@;3QEFQyao9zqx7`(T)@suPOvk~jQN#%wI8XtcnKG{G}C#Uw!OB8Tt?x=c#c%=R489i7kgLMr;?PbFQ_QC!mV8BN}NK1Og&ZKO>?%1zQt zhz1?Y{rtxclg%ZJr7fjWG}OuwGgCB8KQ&ELHeFL!oJu#1(>9e;I;~SRu+t0$F_AQ; zxGTm!pinXFH{#6GJwZzjovghxQ4)O-6AiDxEIfZ}DK~;n(=)0I$Hi~Lh`{Y7(i8l`xww}}IG$ycOEw0*T#e_hnS{5u@GPJ*?c z#e@=UytJFzRDID>O^wm;e2`JSQK3W&JR%Eiq$bE~RX_w%BdyrfbXAcZ*;t)ek>yoi zg;iS(*Cm}&m3>)+Oj%-`S(>d`n@zu86~CNSR&QkpZ{=18tyVZygYx@XIAsevty7^Z zS1VZ} zQA@2G!s8CZyGExW2N;#ukWD;OE!BxFL|yIFJIYa2-9wVK%=8q>n0r;HL0S8JS;uA4 z{mfO?BwPe_*;)x~z-oyoQ}*htIC?oinCA%~rs(ednCArxFw1zEm5Qp06m%Pm}2wNZ{OuFEaR zk_A?kg^c*MJ&pyw`*qpIjo+5F+@M|D7n@zzP2Fil-34C%;LeTR)iqk!_1p(O-Jqpi zsZHA4^<4h--8)6sDs|c~!~?GN-RRZ3;g#2-7}ORPUgwSB=AGf^HQN+!tLoL>HQM39 zR9ow8+ndBmN-MAM-A?hHn242F@Ei;8q}*B!Tqh3Ti(Oy5eP34%-~iSx1J=qd{@bvq zTx6wImTl1f5#t*UW7nT?-y&VfNNy z7G^XxW}rPZ6&YSk#+lY>3imqsju2r|{Vjyf{q5Wl#=3+C}Wsv@8k{)T3E@_mm<&<7&m2T;3 zR^~dMR+*k@l&0yJzG!9=@@{{9&n{tZ76zu`%LX=!#nq8=u_&!-J;LU1ee!_0*rdXo+rXie?lqez9IY zX}V@TD_)er0r|IwrRnZTG9pT!9HxmPVAmmY@=>$$A0X`7K)~h&>EiX z;DE_RQ>On25J*Pbfqk_9iwb6vsZY)!fnFB!T@ddyXx4_XQA+8>CSKsC+GKB;o<%8 z>+bO91@RGQUZ2fw6vyG0j2BFn^YU&~#4UbWiW}b@+6mZHQ0@bxbdHR2TJB zUv*aZ^j3d$Jb?93k9As?^;)0*bzHA?U8i-ct?m{kaWXfiGw*VG?Q&y3c4SBPHD7k8 z(dX@TG<~k>IiJHzy>mQNM%EVeP&M?`F7!m7Z$fWahz4||j&K5>^p$?{Xn+V+Z}(Pr zcX)62c0c7-_wiMa_gv3%3|DAZ-*sQ#$$Ss^ah`R5FZfz73TM#ugs=65Z+L|_cyyih zzw>frr+ABJ2BD~UjlcMf=lEeK_Hfa7kxzD#Uy6}G`I1lhWMcL_RVg;wUV^*oh^R?r z3TR6O-$B~*9nbbZ7mL*<<<5msaQ}AxK5}jk&-`9;_!M=!W_qV5@B|ln*cE!H-&Qao zcvF9HewTFufA_Y*`hE{{d#*?I3)gkB4ICiSYT3Os08iZzfJw%$#5EuMP+RA^8La3IJdLEE@m-05Js60ssjA0Ivxg zNU)&6g9sBUTsRPn!-o(fN}NcsqQ#3CGiuz(v7^V2AVZ48xQ$!MlO|K5T&c1p%a<@w zj+{xerp=o;bL!m5v!~CWK!XY$O0=laqYMciUCOknQ=i-(Muq4yrq!!4pK9I8wX4^! 
zV8er2bM(+J4b?MEc zJC9qwz5DmAIcu9wpFaHiH-}Rc|Jbo@{4|}{oMQq`2jGDSCa9Nu_0^}~gSEL79(j=Q z#~y?rO_$z=>lOG=fgGwQVu&P;Xkv*dZg`@K(Y1)-i!jD0BX9bRG@gYlL6{N7Z>BQm5oMGdAhulTh|H8W`@3=O2y_{s`iUN0OM+kYJ)1CYW#y8ReO1 zHiZp?1*Q2Sj&R->=adQZ*yWf(7OCfwZY{|opAPw%B%omUX%3-oCaNe%Y__>*opVlE zWjJz9n&XTn?l+=5freP7ixA;?=$!+0>dm1{no4G&=BR4utAwKen$DxN)*8~94K`@& zeHMmx<$n|6b*87J?y2IZdk&i3hZ2d}EUYTZn$EP;R(tBJ6j>`MtF!LvZIs%43t){B zSsIRUih0Z9x>~M_tg)?{s;aB7(o65D_d;tVx7zl*q`%aPEAYSs#|D$R2vdr1h8K$a zBfDW{n{RdU23oAB{$4BXs{CrK@xR!1jA+3mmuzxs3RhXzx#(h*^2-eqJm#_y#VfLz zA~UP(zSKsms;uv(dUMY^?;C2s`EHEr$xJuxv{Q8Mm$0RB!EBYqRANn~)hBW7b=P1E zo3+;2X?(QDG@p&{#PLeY&Z}#48+XL^Cao`$p?2Es&;Sen{q)~}2VT?EgsZ%Dx`$V2 zIAle!oE(L{p^7%gT|R5}*q2{kx7v79d~d~;E4?(AZSluXTXXZyx8HBgAp0%-{rKD3diDIj{%ij*>q!99M7{#5 z&wy=1;OE#kJkgo&P!Nou1W#8wqUG;`7^Ic|=yEv1`G;j-OIffK))edE=71Q4134nJ3K}niILx7r0GKfCRnI>kbjL>j78XPvf~{PR%ZT-EH=DO@h#gK* z-vnJKx6!exOqP3~&4w~T9eVML3W}W(`&UFE8b^#FOe1=hc)d2VQH|e2%oEWz$2vk$ zfK=R_6OjkMqiKa270OV8 zlFVj@1Ye#37Z{s+OFqjqPb=OIz6gvX)@6EiF#&a46oI!?(cgt#E(4lbR0qx31KQ zW*s8hLt(a1&5f>TGs{`(K3BK8WvgsY%U#4eB)40uDtG}(C8}x_r05-+dQrRH)TXzs z!5M07=Nno2*0;XjEiG{=5?ti=_rLxPuym1|U*`I^S)9GDe-q5$1b?W(5Js+Ms|%6k zR(QY|9xZf1i`~-B7m25J>{~}%Qp9do#I;SaZB?WFQxrw_+jig(p1ZBMbP+Opfw?Dg0dE&QrnH)yRG;V&Nr6IJ)oS za%idR*ADB}8i++RHDbHwHnZ9P&EimVoaH>{h|qZ=FHSLvz5813rWnrl4YZ&KecQd{ zn9ztG^o4X#|X+%j&YN>oaHd@IAYpR6Pnvu=54k)M3j;9hw!`&ImbEB zD+6?8Y(?i0`T5V=pme0~JQGC+I@6_2^{QL_>P+W}C8XZ;skdR$Lk9!YGhqm_kNxar z2YcC-v38rY{p)c5t|bh?_B*is?OUh2-tWG5kXyp<5c#@8wodpq6#nprPdwrY&vr{F zWATbN1mWo}aFB>Y8JM32=C!?f&1b&zn-4v=D^DxOKmPQmH$Cc2@A!wbKIwlS{OWzq z`qe+4_Nm9c<9C01horvuVEBFPNAmdHTf+FrFMjfmzx?AD|M||3e)E}6{pwr)`qkKmino}Fj(CQZfQgjRio!UIxcH02coD^zjLKMxnvsid zNQ=<8jJrsT(in}_c#YVYjoP@4+}MrYIE~*ZjkXw$RE^iKz$%>d=m<1C5<% zj>0gH^jMGfc#rsKk7A*YTo`}4SdG9Kg~u36^{5;7h>jEie~;1&4B3zl`H&D9kq%)D zqd<}GVGQFajQ(g5^4O6c`H>(Qk|H^h*q9*O*c!ta5>_Y?@24oVFp;znkuh13Gbxi2 z`4BSyiIWgH7BtC_J>iqAn35Yw5hPiZMtPJ-nUvl*BPYok)W{LU&qnmADlM}d)9$eOY_n?)fESO5v3HUf7eLFbYs;6uLPHn&}>+@CncV zIiBR{6{<;=qxcZjIT;K9lMDcxOMwREIiK|TG;7(G#V`@;c^EzUp0oggQ`i*Bumz3T z6JEdon(zRKS&&l1)3nC5)lQWU=d#M5SqXV6VM8h;1HaE6Pj=Z z^_iljDT%lcm12nziK&Lgshj%=7+4^lVDJe&5R(jm3zS(Do=^%Hp#l%Fp&da1n{c2I zfdrf|1f9?UoFD^7unJ8O5i_|74v_&5@d>(6q!{WDn6L?$AQ7z)5nJF8R9dAHfumsH zpa}R0ny?C(FbXq41uA-{q=|+til3cn5vf_Aw)qJ001XOBsKN;mJ4y@w>7#J}I23m3 z5VJ6#e`*YD`VdXp2?0P8S_%dl&LT5u+4A1!3S2LaLS(F$7Tx5t}dp4sih)p$VSK0-eAK9iR!CkO8Ye zrN=rEwYmUb+62TJ5t@JruPPA}zzLcl3ahXM@2U}?pr&AOpFel@Y5!0%b9$U7ArH*Kcm=ZCgDyy5B0hu|$r}_{Pq|gkcPzt0l3zpCcB?}R&IstR~ z5DSo<8>dCr*L54rUr`0>aeS5v$O19EUvYpGp9BicoY`W2#5h;A0 z$cvUL9K#1J5`Out4}8P$0kZT+423&hfty+9q* zLOs+(jlCsY#rzz@M~&14-PBI~)KDGOQa#mFUDZ~7)mWX?TD{d=-PKDx9{;%hYpy z*oK|hh#k~-4V*44ndH0Iv&+nAP0R`+!4urrn!VXIA2<#@8VWB{&hSz1kVU z+7dqD6kg#Le&HCN;Tpc-9Nys`{^1oK7%*Vr6_Me%5#oRW;wnDkE6(CB{^Bs+;xZoN zGfv|+UgI~8<0uXvZj2a@?Grg&){^U9is5B#UoL=D4e`bBjKhF=P%yqs!kNLei4p->JC2bv+m&Dk-x%=xVZioyDsLv zKB&wM+P_}Q!Ql%K=;#pG?L8bg6aE#fj^&K*?9cuah>qplJxdG6~!4A_>98K!^dIae z0x$0&;RGme?F^6sEuaA?zZWC_&Hx%<0B?Q+65#|ofKw=-0r9RKXOIO#{}4R;lHR_% zT*SyZ-Pgl>#-vTeVDaO1{t+H862B`7obUix3K6Cn5*2>(%wF;uf#}w*1Hi;NlOk^I%Z-ecut8diN2L z0SCYUdcX2+ul60m+ASdMH}C)@U;q(e_gQ)oFz@O2KB_U_0*kKpr(fa}ZU7B%6O&&N z7k~hspHb?X@6gH?f4U31z_XxPznu@0pvky65djVX3v^Hd%iycuUR+@O5Rxs-7Yy|* zi4#??)xJ6tJUsTTO_TcnN)reG5kFuP9uWBxfdVDq5p+-ftzP#yG5c|kK$AN<|U>wZgabQT7E^#otpi$s~l`#=Mq{z@f3!flU#;j-%1j&ajbuQIOkSS9g z0UjE7%0OU)r&+ab<=WNjSFmBlj`fJ+%P3#S)RJY(RP8UgyHv52C2&i@wqS-GG*zpu zF@sO83M_GkP5{AqxiVrL587kL!F8L$bPBY+Jkt7v zR_qF-9S?Z$x_G@4lyJhaa54)ExYnXjv08R%<)B@{l4Zm#=&B{dVNN_HFVzJ4fjxof 
zxBwQ0Jll{VMk;fuvUPZr5Hx{83(X*8wh|IFQ!qJ26BKRqs!1nTd&rp7WclQiEly0W zhbX(!p#TpIU_gcvEaa#w1Ud)|BZ~qUpuOmrLZSy2nk(ri1gq;RAqG-l=mCLfFz%#@ z7*KFOqL4WM^Exo{>p>`k7Q!qMRYQXX8 zYhXD;ADB}jNwjYCJodHlekJDs;dy_!M}!LQkBsgqm_1AY3oe2 zxC@wy=&ZCfr1r@alj-P`xdhq;m(em@YQ(w@%HpqDR{Tp7FDCE;MlfIr0G1!V#c?u# zc@&tO9t$pKoy*CUP})yXqGVCM*gIq72AH ziiH`}WazeBnHs`4mP zh;`6kzylY2Yere1+gY=!EOKzINb87`T_U#_V#hTO8A8uC6R7ehFMHEdu2SqFfP>3L@ zH#>^Wr8AdcVR{|Za~Z54tisK_lQa?9w1?>l z)CyN$Yd$ck03k#LP9UB@Fp#Z*JPj&DVF_0rRf0!hAOH`bjm;29686c2CdZYvKnh{d$1l~XLN6EIBCx{j8bgDejW zGvI_M#Il0`Hqav+(8&)NfPo#n$Rramfe{YE1D!;#Q45$s1;H=@s2os^H(Mallw&BP z=x-nw=#I%Yk{;=Zup<&NKpZF89SoF^D<$ltCjn$C1wBL}wA!E(FkqCP{D)W&JPkxj zxk3SDP*)ESr9ljEx0*3UAzCScdJY%NuV`cE3u!c}W@nW>cFH zQpu`Z7!hA8Zkv{KOOv`%Ix;A6EKcHG78@s!Rs6sM%ml+Sqy$Aq93&n)iiStXfyLl~ zWpd_xijz#y2~2dQmgw?@M>H`5un5v161~dQUI_-aDaTp@(Lev!0?95w-w-Ex-##HEAK1s34{tIVOG5CP|(e<)iacTBnL4LRHoDXz6)@iO6**M1-v!Q#5ggGhE>R4CM>U7w zq!SCi(pOGl3M3(EpkyhAUmC*9EVLDctviUp{wWX-AB;1U;~d6%*v^f)vn*c-tPj&7 zkSjfry}W^+E;kw6_KH`!H_Fdo$va>me+9>6K^a+td}JgCZpcahb_rar@rzxswnhYv z31{^}VJf$CotBl6be?dp7_+W1W0qoNTo;Xjg*Y=dDx+9%2rejHHpeh5o_lk0nrMNO z$>7~{pZ$zn>Dscs=lye`4PBun2U*Ih<6@oN5@t7(7{L_Qf#zHahQMxd(OOwVry0`` z$eh`V0#mbV<=om6zgD^2atSVQae(19mv4GD7JLD%73&^a*SqHRjRD+iU<7_-*tPl9>^8ueYbl5|9jsDANa!;e({NaeB&n{`O8;+ z^O?_k6FxuZ(szFKt$%&&XJ7l<=YIFS|9$G?PWMgkH^4`ZquDQA{o`*k?%A(s=)Yf^ z;Rk>H?SFs#=U@N(_q~baKlAc)y_g`aY&*Ga`#;y`#wxvq=Ah-)aK z3qZm9H&9DD(n377qd*WGtSkA15JaI76eAQ&!CE*$641ff&LjkU~f zG6aK3ptho8ih&z7e&WGCOhiRoL`G~xM+5^KOoXu`J2g1G#6!DFRKO81h!LoPPJBdA z47N`6L{cP0j~GQ%EJamJ#Z_!YR(wTQgvD5#MOmyxTC~Mm%tc(>MP1~@Ui?L0^u$m+ zv`qvxe~Z0JOtZAh#9?ekXY>kMq>x&CMry1^ou~nih{W>?7;VGEq_akF3`cPsM{@M9 z^8=v%gFHxt zOh_4|#|?yre_KB=7|4KBNQ$gTi@Zqx)+@YZyhn+|NRRwTkPOL(`^Tl=z-9D7kxWUI zTuGKpGKSO%dpy9FoJpFjNt@&=ljO*Pyh)zyNuT`5#mEY7JV|B*N~1hVq~yt$96*9I zM8r!Mt_(}D97}6lNhTuZjJM%lZJ zom@!?m`l2>OA@$CylhLo+{<30MIa!63IGBMVS{Wm%Z9unqQpwRTujDnOvZ^tfdEX& zoDjo&iVxdJi7ZN+ z{oK#6Q~(m_3cE}K4lscZScVrkhhFr*nd5DEr@CYpJPENed zuK>6~JkAp}2zEpvB&7uZhTsAP=m^OS&ncZ#I&HwYWC-i@h%J2?6X??d;6$$20YM~F z9Arw&giSO>Q8<0mFpbkvw9`6`R1F1!6JP>A?SK+E9R%glSeS<^?SLaFfE_5AAf+~~ zD8zo+z%vCLD1ZWnAcaf#O9;SA6?If0h)g@J01oxfz?4*91xY2))wX~DJpIjicm;## zQh~^Y>cr7M4N|YLBJxyJ+(1_Fywf(dRnWZE@#|G+{8e!M$OOm?XC=^O2!LEz26Z*o zdB{@XgwwDn(~^XplI$y3oq|z}2`vEB32n};H~|Wk&~P2tq$Jj6B~>uU1#9gHI-r9D zWr17}Rc9p&LQMq!%e+s4T*m&g1yZ=zQEXCyFattpQwsc1f<0M`6jqlAfE?9Vt{Biu zl>u15(kxwuS2)&AodY_U1NqE_1qjs~eO9)x*i%(GY3;Cax&>Rfg;?zfTYv?QHHcWH z1Y2Oe6QBSjPyr;^fhEuZ9WVkOFo75_h!S7{0nL<412%yLPn2z2Ad`T|Tw7iRq`>$r zem#h_%?b~#P8p!tEdASdjRj^+*kvWqXU$Q#rPx!29Yp0XQecXzWn10zp57D)n9&s@ zxPY%wH*a-Y&+V_o4Oo9A8OzNI3SCbhHCHTM)+~JmE_I1K1z2nq3!(*!!}G5&kODcy zRf9MJDYyjxQYeMdyB-u+f)^0ln1~|^pj(DeSzYtb&yC)}8rN&}S0fmzYvoS7#arst ziat%vBglmx_**T#Sq0tFPL+XJ7*z;4$EC2!MBOVhFbJhx+E^`uxD|ju;sQ#5g;ID0 zG|;^UNP@&Yh$AqFz13G4U0?=oU(m0k)S9O@O_<@^BeWCF@c0s-aSxcv$+ z6^K_L-+7n=;0RTCotDHb(s8ndrA6S5fCZ+-SAn2mQh)_gD1*mUV1c!TrhT{r_6P#5 zP9|W1pFL0ma;sAm#R`666uMx6om zxD8ktzr7JgP4Oao=<1Z z(^4%|Gj-u?@`RC{V_TpER@Mm%5QwM+PdKIoO5lP^pafF5g&qcj4M+oY2H|zfOG$3t zPmE`G?qz$P&_%vbU6$txt&LA!VlcSb5(eXD9b)7iTK5&of|1R@QQG=-=y4`ahIrcl zj;&~kK8RarXPtoBSI}e1%iDVn=?L7=m=KSCo@A|%041)8arjAsS_F)*MWKhXOeAdwwBVTHeC!p-IVSK`s7gq9^n*X+)Gr< zz>$LdZRJ#MWss&~kj`IuZfnHG(6?p^UXANrWe5zYS%%flgKg=&Xicx=z=clJ$FyN! 
z{tCPN<62eG#4c_94A#Xq-FjZnN@a*^)>MPQ(owzC6Yx#N!0fHq?1g5s(H`yp!)6Ez zSYy*3ZuR6$c;--gzGFB7=qqhlb@h!AKIpb+&_irt&VF0cUR1eWYvS&1=rmaNeBjvy z(5zTjlIhL>Z$Oa0+Ll%4rCr>f*8%jiTIFXT^=|r3Pumpl3N`Nb1k9%(=u-@4QJ%z$ z)$2p$>)njTK!Mo4DD9@$0JJtKLM`~c-9kg~1rKrW-T;qS zfGqXoM_z6AzCbcVzl<$Xh8A%dpK;LLWp(yW0VQDdzHLbqaCF@8f2l+m-)!DCMH)Zy z?$%cImQ}3oaTbSPBGv8WL~<&x@)R=iMzz)U?YACx1jcv_DE~L3T~#YDb2IlB|5eUs zCI8Q`yFq{Q#xsv|IX59T*KYov$1a~lFyC{uoO3__b9K{3FdwbP1$09{^c%VJE$8y$ zByT0}F9s%z3YOf!`}^O$0XRae7qtSkFNN{PwS?hDSI^|LN@OT+D9O z2Ie~7l&5){uX&um zd7aOBp5J+&?|GpAd7%$_q91yrFM6audZkZ#reAueZ+fVIdZ~|ks-JqRuX?P%daciT zuHSmE@A|4gf<`daAlLRi|H|vuij&uP{FO%ymTqeK%xBH{uCVYwc2bVd`)7~&3lw>V zZ*~y-ihn2OeKc4)zs0Ulh{s! z!QuCc%U6BCX8qCqedb4KVV`{BNA<0EgVY~3wC0eZ!wy-R6_jyNBeN!2^e9rLqi55uZTmLv+`4ybE}f8ZQjrcX8f9(l4Ab|xMcwl<(i3LVkz;V~zK*&jS+&UCBl+0XKjj$nyHF$_1 zh#`tNB8eq>rC?&Hh4tWS_CcC<&p8Ix4B9ntCd#sj9jvtF5~FDy*@}IxDTU+IlOlx$3$ruf6*EE3m-|J1nuq z8ha?R$rhSuV!u5Y*<%(B6(dDSX%Q6z25EqVwkjO-Kt~_cCIYew@leFM>7u)?y6v+2 zuDkKVJFmR;(tEGH`Qp2;zWwt1ufG8UJg~q86MV422_w9)!VNS0u)`5UJh8+TQ+%<- z8DqS$#`&^@U5aDLH}a!yNeiWgE)>&)4HP>Bx{WaKOi#;~kWjp=xP7EsQ z;8~|ZrsRawCKp>>P3>TUK?S4~U}!x{w{yFHOi(V+&VJRAI+Xy`g9)%Vz^afM`6k)Cq3dwLlc|Hd|I`6&v z{yXr&3qL&Z#T$P-^2sYNdFg5&>2{J!GP$~OQD!U5UMUA$EnCs_BX>J60wLzJYrL{vmGS8O@w&BUw)8+p}EuziB`NK7PF|u zi_Awj4m8^Pa^pij56y(P}6n#*{gPKT_aZA~UJUO>)wG3P=cpZ>%kFd4|8VOEn#ODbjSs#V5+GPI!%9bhc^ILlF%QjmkJ z28Iav(2a7mqaYHdL`4a$NUH8Q=!_yNJ*v``vNSy&ObuF`gb}&z$06(FW=nIb)1A&H zJ`qilAKwU>2fht+3zVV(39qT@SxihE+K3)AvON=}4GaUw5?zri-tpR2s1?0zK4CIGkR(>SM3TV`Nb)7}(pS7OZL3sO zi_2V&WCR9DLPB810Zn-%3BR>S|IlZGyoSU-cw1S0BOF@#5c7Try{KiVq!JA*VFw0T zfD1PQE{y0ceHnOg|42Lx1_m%-1rmPoV?&DH=BD?#u`x-Bix>H*RzH@6?MEso zkrNc5LjY7l5HLW1gBali@x8J^Q1D^!3h4+3X|jsfJcA3k7z{_q7LXZW0t@J%1QF;m zq|r)C2!%(g3SG|1(EO1O8037kYoS37AiIjlpv4A30C0u-XC0hi1~(|chP0Fs4jd#0 zk-qg=mHWUj8^_YSjVs+W-4PE|SH~^PKn6Hn>lrLy0WV;{rHs%43ScR3JTO7CNwh7S z9bp2jhOfCk|3GB!X=}utwr;1L&4N6)8Q1P^>6hub>*-N1ISV<+naMR04#;{(C9v}D z?h2p=XyC_C)&a~C(!CIkK+j+xbV35a0UiC<)(2_9K^`n?HkV)t7GOb?8?XQibobpR z->9z5n_vCHRU#T@?+Hp!0$8tEA}dUSpKCp5Fko3B;M-ZXGriprDG;$kpUAa|s{~lr z-~=CNy0ry<@~Ja4<*8vtrFENGZT2_j07USquKRVH8=Zv=_pi`71i&1jJ?(6NyV|KP z_kX5*q?IM}SWh`6lE?tW?S)8=x5?ST|6TBkCHl^o1NX%*e(sL-rW+cCzq@^XE7 zn%j+Z|IHsAN16pu$-RW5G%5bGk3T&)tq$h%Ci8XqISA$tTXq3^rM(72xSuLf;MDXVVWf=*nhgn-pb4HJ3Z|e6 zN`M6}ANoz6NtICkfuQk$i30gg$K;?6atsgdpbz#S5dNSLE{wk21js~Bc?lIBU|nG0 z|AG3UjPqHZ2}vIf&fS;1PrzlN7H%OIcA*!3AsB|C7>*$smZ2G*AsVKk8m=K5wxJtl zVY>KD2+9*r6w_Db*RKIv^i6>rR-ws-ogeB&(4H6LKH5SVQl@)!v!nB(7bdSRyS7qg@CNCzhNGt_D!$;z5j=Ngc;1M&T49 zBaD$t&M=1!2^}`#1)B1CT4JYFI|UL;0lWbxRX zM;u_P?W0D9q)3jW@Cal~NYgDk<4;IW7~Nq}RU`XxlHVi&Y#2qz$&E;3UDsX(e9frCvS`SAySAe&v0gmSDM# z@6nZ-8PQV~2|~~a#dQ}1m=Dtx;agIrsK^>ny5$nc4$&;Y56ERtPNsJC|K;@A;!oXT zTX3H)I+s#bBNb+zsCZT}1|>ycz_=Y=o{?SKfu@E)00)!+1+-;G%p5m9ooMRj+8JUW zF_38v^e6YsKsO?2P2eYh8N>xF0hB2l7=({T zAOVYtor%r_nuU~Cl~zSKgZfd@47TP?a0f^{G9KqdiKmqW`_qE5PeOPV2=o1*k5tP6V2tfxlz|R@pq`pL@ zcp4Z`sEPtt5iF{wrKpe5POegFSSTl88s%3p!xMJuBf@HDAgZB($bwlM;LSkLY3Z@P z#Di%kfjZq0OdPQuL7QP4y5!lSq1cN?Bn!;If|072cI!+u|56fe-+B@zTydagWWb*G z*m)S~h5lV&}P0rUmRwVh9iOj-;nG$9FFz2+SV_r-ELhS6mNkpp#Ec`{SVHW2^$fMDg zZP@yc*h)oAEuyoA;->C}n~Fr2N`wI{+}ftk6_RJ(LPrxFBp&Vt;OLxzm1y4*NsR!_ z;ZnylN=Oq@i!|;<2t?O+F0PP>p5@9Z;snOkMx%6K|B|0(F5)9+E?-4#>*_9mq!94>OJ8KFrWWr%StR`UF3irZ?Y^P& zJ}>k}uk=nY^;WO-uAv1sZawj$rZUu>DDTqlrTC67`F>9JuH+IipDsROYJxAWX^#7n zg!Tfi=SHJ$#jk|HFaCt{dH~( z-*ErUZ~ZbK)}d~XI@4#~AY$P#5p!+@({EVv|C3~*?UD9w5kGNE#FP^*qW$`?_fDrq zq%9P0F~!2o9X=64X0Ql<(`P178FMijuLK3ln@~(~2tTk`K9s0pm&>NH9kYZKrzRL1 z?+;(qs)p_z2l5SBaNO{*L-D{3%&~OlrXV-+NO?*AtZ5 
zJU>UwMl?%{3@;nxPxL|OuI~=n<73`{3}65fFo6?no3@M{N3z5ZkhB1FfCf-NA`3v^ z4Z%a#XYsW(OHVXfNCY~@vNL~7s^Zy)S^!LpoVPjK5GeJ3N;BU=byWK^RR^>o+vrb^ zGlJbfqw!tdc#kbBgs;->ZIrKG?=@fdwfoXpc5&m-Z$Cwi*O>9vAZcg5N}tD!IlT zuvUa!btKD9N8rFqgF!5B2VrmfHgNy8a0mBr6Sr{}H*z0$aw|7;FSm2g3l>Iqb5A#Q zSGRRvw{>rYYUi;Yhj3KrfQDvIKVdg{m$!MJH@=uPRirjrpHm?NGY{WvR~jhkLPgOo zuQ!>te(yJb_xH-GH&hVtOP_NRgYYPeF+AYJ;J%w@H@HRucvTz%goL7d+c$(?I2}p# zqcA7r`VfYHc-0X&NmaE~Be)1(waTI_h_AR6bybe`afurt7PB~wuaQ5e_J+UliOX^= zzGi-xb&cq%|5S9YVvuhzjz5Yo8+le)U+`7Vl8Y@8(o=Vj@>z*LAN<7ON6uy>ZgM`4Z;mC%X!>du}g ztOhI2?Ab&hz>#gODf7KCA1eoW^1Vv zbX?Dy00NL`hIV3-QUr<>u5SB$`E9qL@31H*xJ_r*87&giGAy0-e6c1TJ+|yqi;35h zpMax9zGL}=UUWy99@+-+!loVEzc7GEx!X}QUeEpQRXTf5$dmUwlrQ6jo(OL9J?6qz zAC3u6A};_U|Gxay{XcgH_j>nDoa$e^bMB;je}(S^a(?FvxJokqkz4dsNPACT0F7Y9 zX+wtuvcB(nK6~FimUsM)lf6v@ow{$uedkU9^8Wrdh~o3X)m3pLK7R4bfU8%WxoL&j z>%J^9f9kqE7;L;kJdxXayi@1^v#CJMN0*RtCBJ2H#SyDQ46AcMrT2@j^FKfEzp{=4 zgfoH8z!Wri5Me@v3mH0*QP9bW5ez9Z>`)P7K^94hJV3GILI4#Mg=92&5@kx2Co8mc z*>Yt}nKNnDw0RR}PMte>_VoD^Xi%X;i54|_6zNPwiIyr-bI8$}rwZ|`t7_G1!F2+Y z8tjl^|Hgs~1wWin!b3>_nm-IGX<^VK0+c-FcD2h8ND;kz`S$gD7jR&~g9#Tld>CTbxI5?s z))e9ZB_=Fz(4g8}TPVRXr<{|{I_l+tDNKwK zL=et600( zGg_A1aK%mWB?YKmm)&;VownA3{)7=BP&dP`T+`SU;oE-w{g+=y6>y2wV7+=5|KWrc zUbr!9{d~;7EiogM;W}SRNrHnb-k9T#J^m?PFg$%&%Pp6y@na}gO_}AEU4FSHhix6% z%05b#rn!4z-c8@_bPSdwF>6}FteuTM8fj@mj!}(HvCKK?-q1w#EC`x4@Y{s4i!OnJ zFyH`)g~~M8z^KhW8|^z~mUrfa);XC*g3@NpRE#})u)?xe3V80m{r)?|rR9yRQ+ok) z4)EXB9(xH4SSSDkxyBtb@yac~+_$w!ZaU(_F;8r4uS|6~0L{S&pa2R=Fuj6g>;4?} z*kvbdX~|I1*(M1loxLIzc3{CGkVwkGvkXdj&*x-$U!M8qgE~SHKX2Fa|8RI{(tTy; zF^XV^4k+M&2_!nORtDcyGEe*T)n7m64Z3)Ea9j)=p{wJ*TSQ z00}ri7H#fR+lpHOfnoxLc!&r7(^wJ61BL~NO>947;08G;J3bX>a9QGm2up^CxJ|5s zIq8?r_@Y6wA!t}wYatA=mqHuf5NQXbhzM(DG91R_Vgh6#!2Cr+RnhQ=NmSy;_;juQ zkdP{#TjJh`IIwaFOeR5`;ug8cE*8BgdQL3i%>dNJnN$&tRm9r32xdk%zA;u!oD2y^ zSVGG{4UTw8mk3nI$2|TKkkZoO=Cb5AiBSw8v#?$u8QDljwoH!F{}Wjudw9a?!DAMW z)Z`{P*{(0XXl8*jq-IPgw;3t2lc`kYDqR)HNivOF*D~eX4pO~LvJ#iM)TNXLImr;# z5f8weAL0&z%3U54naLc{2SFK10nKQEC&`30+VF;gSTlLb)aEw1X_#I*Gnmhm<;3tv zk!hZjn$&Q>6m-{3cfJ#zL~*4|N{NzmQuCU+*a^okY~UHlL$Qs zI`3%>2lx}B5tXP(J_#8yMln=iz?nf4iphm$;+oZb12wNOQIno@qGfR(7~{t_XqvKK z2kq!BLHf~=IuxZj)u}su^#KP2l#aqABRQF|5Mw4Jo!2ZW|88!eg9>)cicyv7RH<53 zt6mkW>e}g6|G1_WIc0}-rx7h+}nC>|*7LSI2HqrC)W#FQFL4vLaNG9!081tEreCkOc!h z$k1a+TUtm~^j0kunPn@2*})am)Ksb6QV%?^GeRT5DrA~dq%w=y+-5hwdBydx01Uxg=N$U9DPrW@jZb;h9;1`9INhR% z_meI=A9`nc#WPU%d=j!2M6gF?XrL?HL4Uv*)0x(En>`_gtRWiIFDuK#uuJ9ya&)E! 
zX(&E7=t6^RdeedN!UjCO!cgD3W24UQizsjy^6LrjIGt}Gw))h2WfI#&*(M2hQvBfwu<^^*(pGoEBzuU$`P!BU)huiQ zz3udA_0?8h_L9F%lVo2yCMK?k7NojvgMaKw@Rs*k&pFZp;#(z5CbYlZ3vh6sm?YCZ z_{J+1ONGx{qu#}~pz8;5ii|rUB93y(S5EDrfEXkEKo!SnjD8tbJwD|raUH~$rmDX zhJ;}gVWodoaj1#m?4F>6t$(u0yzt7-=YYhmI1n zUt+hLK!ff*Q4j5df)Y#LK7fth`@PGf9*`;R93IY{80sMMv1A42|!l3&=!2u8A$tZ6UCSm)`1Ps&<3?eV(5<>UT zfCCGn?usn={O;=T4+ZapW_T!p|0;v$5)a!Li0mq211=%{CJ*m6q5IzN_)@MORG}c8 z%?1zR^4!iKb`K2HjuJqD9z>A9Hem?C;1b#|AvodElJDGFCk3}qPwGe|;tjLF&h-W` z>bh_5+)Mt@Aox%s`1*kX+i>DE5DnDN;$DpbSxot0f(!R>PAY4)PObmU3Fw5b*u;(R z@-4*h>%CBJ>QHR{6ygfeaQ)^B5+&}$Ku-_(P!pYGUQ7-8b}l0Xu*Y7{5PjkbtIX^w z;uQ4_5kJoaHxU*&(GvkNvr>Zk3b6zCZ55eq7klvv%Z|+`s@7tW7@wrk$_&-)>4%Q3 z6?w75Sk4#UY$e>#1ZS`q|F4lg5TS>{$%Lo}ku>V-z`*OiYYZ7^5~6Y9pwSu8ksW^# zCvpoLi+=H$7C=K7O@@I5zD*{X7K75o2&tC!caRt|Sf=E+50u zoKn&*WFuiT%tW*RD!(#C$SKl5j^6UgFeHazE^qJF(hb{^E#J~D;}S0CQZDO~F7MJV zkrFKTG8^{~BPK?$|M)Q!SuHTy%B{M?2%L%oZp&F}O#nCmG9yzm!vHcV)8&%TAS^R8 z!DllylQlT=GcVIKlMpmXQ#3bIGC2ShkkBjn(lvL|#-d0pUCZnG(I0~@Fqj}C4uA!I zAOmXS0^S0*Kr&c7%`uM?Ig?X4myZSvI=PIUUXwbbk{kV!B4AJu18^~JfJ_bo zIA7o*8q=zzbC1&KJkwJ>*ONWZbB(GKK5@ejuM;rq)2skX3`L3%3nK%-^RvKUv?xbb zsH!H~(?HP;K@ZeGHWrV39F)l#l=Px-V@ zwS`JID*&%x3UcF3U6M`_6;tsf4*b$)3Ia_}iQyLNN;8#IuQ8Zn^RmWL`dUv?OVw4e zQBnI89|6-w87=_ZU{qgKSNCvJxzQ(owMG*P&32VoO|@5#RaR+qrWWb{(yUmYRTHn% zC;I@V|Ef}0D+BDHRa@~29*ts8W7R%q=^5}7saSP(YA7DjqRWz$UE9@N-xXftRbJb`pTY*l1r9R$&*GVH?(A7j#AXbzAQw@Py*< z{DZ6lvo;eCP`jWtVW(>HMPx^oWJ}g$PZniUR%KU~Wn0!|UlwL#R%U0`WebK)CYEBC zZQZwdrmPK8wCTW+}FPfHVpVn!k7HX$fYO9uNuhwd_7HhXw zYrB?fzt(HR7Hr2hYwZSSDMekq6ro%rg6j;JAe4QV6=((cK8W;!3xj||SW|j+gYPrs|E?89 z0EUDKqX=5KRYa+HW9nKLS7`OAXB1+*I0cQAhlOAmh>_-7dv$dsw1eF?II@RJh2SfW zpfg)OBN!aBcs_KX1pZU=6ymVRf&*6I0-k^Y9>4<@VwHb5ldp$wVp)?H zb$w^#K3Jd!OrVwFuT5Oz15kM(ngIUBio1afC41vjFWjL!kC<)q}EW(XyT=Uc_Js9vgxf^q5vM*J1sk} z1%~W67qc_9r;j*pVaNbrz=utnTpIgsM%uPNdr+6}06bKnGo zw>XnP*ARjRh#&^WGYE9s#mKlocA#$PnFn-7rw2~Bha0AR+9Ye0eS81|a)fwH86hH| zXcoc=rn@@|LXI<{v$Eu^watHlVZ#xjm3Yl)t+{wOWM-$S`vxfC|C@ zs(=izfB`z708{|J7s3Es*}5r$0aCy)8KMU|U_6N=A)-Jb|DgLIvY-JDz`zfjfsy*K zm%3o}MgR_g1`fgruD}6UpnnR22s-raazp_(00}PKwgPtsSezo>#~^fo2y&bWjNp;i zyLar{1BjppsDKF?$2t(S#a(m4V4SOG91Ms+NPN5m!q@;_Kp_h2#vRC+;2qW?aBNm|34Z;PEqzFzxaw-T3>e$E+>&*`=K{rW)guKz0 zBymHIytyryec;0v!qf{{cxAwQ230*YBmiiD2GDCE{~!bdm!JgpTx-l3OyFV*eBcIH zTCW*tzQ29fdHt}H8nKzDvgSUCC`7uoK=`1JMXkz$`3AM^8s&Qsz+jreWv!lE-@^wp*&B44XF*5P;Jp&NrM zN8j~_f|=-v;obJ!kKzSNfH4OHgkPWceZq7&U$e#hu6En(ljF`G4Z^9W7OB zv)u0@F^4iv0fVHMANzM=rE(5eOyB!y71A~8EWsIcvfun~qWM#=dhd6EQ@=pz9G>NS z2NL=E$0eXOCRk)Ze2Ao!0RoJ`fds*1(7=F2!GR1Gf+y8X11X|SzH?G{dbnDu^i#M;{ zy?p!n{cEx(&!0m3Ts%7P*{r2*1+qBe6e`07MQSkZSXM=g4?HcPJ%YeVj*T)fWEhCR zgb{%}GDa9OFvp1!Lr@ryaRRXJ-MoAI{tZ01@ZrRZzwFtQu+im37Yj@Z9p?qsMLe@E z*gE77*kE|;qtc%sJCs zfr=@WS6Ks*0Fr70=`esvZwZ9J|3DE~*h3Hl1wf%{40uow1s2F~09O2+h+>K=uE=7G zF22}ZZXV&WkWd6Qm)L=NA=JYWQib$mK`b~yP!ku_P(TYBO~BD~1Klve5)(Q`p^qJ` z0ArR~ZpmerUVaJYLY!F_(sIic1P_8bQ9yx1GQ>vG2twpA8l2r#W?7yNWk-Mz3+nl& zWp;LV-JoF}ifE#WF3M=5c^OtvVFMZnPh3GNLBS0>SinP_g^J{-oq5U$!KkL5it4Bt zk;-5}BRC3ctg_BZYpsjo5l5r~PMW5gOU*f|66&3FnX7`%lxne2*~)CQ&OQrmw0*?^ z;El{!OKF;Rp;{8F$w~`u|G46gOK!Qmk+z#- zuhHGrD!u{_OmM*lAFMAJlJ=Y9zwaIlal{f&OtGWXN@`TQ*;dSP#~yzSGQAiZh%LJz zpNw+KDzBVnfRVb}?=jA?OmodP-;6U~yK2m9QrhMWbkIT%P4q_S4wSHsXi}O~Vnsg< zb<|RK%tzH$U#x4%i1B>P)LwrLcG%&b(8bk3VePcS10r=`*J5uZf(UQVO?TZYS{)S9 zyN;ZZj&`qF!QX-pPIycT9M00!7cZ`^#v1usY2n?$TdL%iUyeCP2}sZq-e4%c_q7}O zIy2!ClnMl%nXk_J|K*(~koX~0pG_^cK=F)b>(uGad+&8u9y{oayWBT|k^7E(^2+n> zIpgW-EmXG9mA-uS)?fd&*`E(jJl9P7ihcOvj~{gN-0!?@+vKm$e)}nFzkSl8@9Q`i zKf8~A{`w0n{P_tV*0lWzaDW61m-f&Gy`GV*fDCM)0~^&p|Bb9y{-WLkFNnbmwkU$3 
z%UjGMcRvk^aD*hZ549FoBYhQRZD0ss313K`xXEya6{w*NZ-~Pj>TriV?4b{T2*e-? zafn1Lq7jJ*!;&CSZI3ga%)pn#C`!>JKAfUPPPoLgdC(yhw4xVzL&F(bE{v3m+ZQ<^ zLEN#ei)U=3{~I&H0}=4RKQOUj{y5k;?-|I2Z|q}UP_Vnz#V0E-v7`R**uAaA%z}Pw zq+AxjyFm`b1WItp>Ew1t=y6YtH%gl$KdFD=4$%XtN zJQwLen+mdoAACRoseD)JDhUP@9H9WK2_`Pj;;|oyNCB1U6d?yP15yIAARSmj2R`r^ zvw%b)VBi2XQNV%^< z927U&oFoUP1O^#I^K?zcKr|sLqgW=P2pzo>Hw!pBlLn)>?o7E zIm?20-~_L_sZIh$D1y*cHEb0~SM$1)d+pN$;#q<>-NJ;XCbl4?MC=LPxmJz92bodL z$_Gkw0;z75fLL5*2i3VwJ|eFJKPc@$IKTo77-0e?K&@%r64$62wlOghhzL{zOuNP< zwpl?zSb(qq)n?Q}EC^=-kb1Dg84?Uen9x{odDy1nlC3|OX?TiTTKJH5Ab%BM?Xspd z|DT!FcbxNUAOqQ2nu_X21sy>Werww!%6Go>t*?FWi{Jd}cfb7AAyEqegu^L;A{Na` z0RX(vipqTswa8B{%2803TV5fF+2M`($MQ z%GkD?b)HE-dn1`$udYs3ZjEpJsUWwizho}6na_;oG^=^dQg(8OYbaP8x~j&@m2s5A zh>Qt-xhB;yK>NIT<&!X*iAN+KpZ;k8T!8+FTQ>Pwno$Fu= z+jM4(o$V;)w_z)L*>u}=tiPMWK>O^8)MmD|uProZlSJ0?1GJ~LjO}oX+h*E639b&a zNR7E|-0W^Q$jYrm1Wpiu?yk4Jr*`fw76u~~oE{_Z4e)@gEQ`fkpui7~@b`JvlPPW6 z6B7>ch*J#O$NrvTTqhV|X?&|3-?+y+4)TwSJme%Fxyeh8@{_APpbT?-?`6w4)mW3J?KOqy3vb{^rI_1=}cd`(>uPB^EI4l|Cmcah5Cq+XHg1=w|H(6*lMA*j`jzlIDxL5cCt;k*|f_YgWo-x7MJFBe@kx2bMHI( zBni6bf!6D^X4~}qj`+kU3h)v1Gbb1FO2tcl@>ycM1VaZ!hEEaBl<&O90PpxKn`t2` zjUwkgkNSyOK8qtO{2s-=ZV{e6^|X%|=tZC6*I&Ga4I|;~YY%*atv*1k{Ix*B{+v3@ z!WGN^ckrJNeY_0+oMUah?)^<|R}^r@`~}I8LB)NKaDV)k9AbRMEW|uVKhca&VfR*i zee5&e`Rm&u;N<&}?*l3P9T`9WS#m^l_Cp%+d+YZ}?ZVK#q1_;%9#n z$ZFnael4+QYIc59W`O)OeC}62HIjVtw|uO_cT*N88CYLI)@V6ofs96g{s&`qq=GeA zg95l@lvaTBGE|zAe^k_gyD@z!hAmu_ea#nnms4rrhh~R{MBOrJm^5SPw}Vu4gB)mR zmliNO=4FO6xf4yR%}dIg=7e1V;F>dVR?U6eXiGlBv^u(GlBbuYEF2D4#9;j zD1%*?hh~UpVrDF`Gl>58W#Cdq`6nlXXe|8)gIZXIVYY^Bh%^ELKYR0Q2B&Spl8agp2Bz=>0Dx&1Wfmcp70Bojli&pi7K>}Oit1-*lQ0QU zAOOXm8~_RfK?P_)CV<5W!zf`4sfy7RiUiRL!>DO3DG-;i3C6+`VSolc2~D&ZWKqxx zE-4VG;0Xo^3Rd6=psDUTkfC>ds|BwnPh^&!{dc}|f0Scb530&Y1 zo*)pQfR!7722kkQE*z9fRvYz zmK%_lR9Tcj$pyK&mz;nGFbR*I@RVfY37p`TNrjXR2@0?&02ncxlRyQd=!^y730Kex zhZO~vU;${r3ADKh(>aTqfCgXrCk3$zXwZXfd0HlRj$bHa-3gpsfRMom27H-UVhIZB zNDwetlz-`+eVLn=z#7e9o>CB)hn1W$*^08l|7Z-b06q8sk`x2)C`Ug~0=vXz>d6ri ziiJ!NRLs=?HML%Zm{%8IO0@-v`WSqr853C6cCd$pz5`!6b)F z1)MMtXmFTda0#411@bwY0s)bFg&k>mP1uo_1QC>aXaN(DoLIRh#fS>C=$iS+ftwHp zVGvyeK?$595WcxA3lXJFX;fG6mRTW~8PN=a$pQ6nmSC_7=(!MiX_Ha_3T6;pTRER_ zbBe2`Y724}C`u3pfQ5t@QCvnj6k?8&*_VEW05f1@m2m_b5K#x|Ow5&Aw`6Ay!2t-M zp#%|{u4sTNY8P@iTE#gK#0irE!JDl>|CKK(pLTi?Mp>6!Fr#5Gn@EWr;kf}+FbT3r zmR1^4oh>@ujy(m^m7loYI|Xs;(Llr!yNHpfCvxIj(E@qxJAs zc2blBA()_857AYmXi%d*N=<-i|E<1ypZ@8c8*r|9DW2LHj9g#=Q`wzi0G(g}o~YoL zQ|l0EP^V3)q!%%rgF3LM_K|~P9x);VJ+Pc!cxOz21Q9Do7k~j<^#Bb}9S(5;C13!I zNtTSu8f@}WLa+l!U<4CpY3l*17%P%8aX#LbjYl|k$zzaKfCfFw39|XRM46mXT1L3>l>Iuc@tIApstpJ+^fePhHl?CCO)`5_qP!G?kuL2PT zoIsSaItk^<8og;9M(LzETBiS+md4s-cWD6vnx^v^yW+XCS30g&FpNZr3i2wK%L)cl zIizj6o_)J&89^jIc%~-d|4s}L004$K3*kuwFH@*KH0BV;g`9or~lZL_)1itf~R*P z1@gHH;p++A3JNyKwbU_{x=E8W>$H<35UbD(laQSF%C1$LpK@ESEu5ZxiMBQgo=Hlx z1(_8q?3*?yz@s*vj0FJ?f&%YF!6;~A4SWCx8NTN+i`=_XMS1#>x<1VPXRtTQDbR&$jZxcGqdop?SKaiZ-Ul0o>GqIH*xg)m3fP zSB=$Ky^D^f|J7Zc)$t5tTP@Z+>3?HQ){v-+)Uk`UIE9j0)Spy*+2~1jfP~!GjSZ}5 z`32Sy7>xpZjOq8*{4}!Gw?}e~$c?1NiLKa+&Df1ykGgS%fi2kFvu}pYdm}i1E*7#k zvBsTkYLY$KtoL^MteOtU)SF!so~_z!HkqK^NYod?aVRr(a8s*o+oI;$uMNVeIXbm{ z+rRy2x$Q@V{1L0kn!=6TskCFgVN-`e+yJC^OP$=){YJ}8AB8to#q)4TLEYW$N7kJm zE5s41+lk(7-Y*2+_YplFvE532-tWD6WOUuR*V~p2-}hZY>1`Z?Jv!RRiRF#o0lt0m zjUO1J|J~7j*#fTM7o^`?Qr8O(;b>P^B83ryz1}HF-Vv_h5jWvzBAZzW?bjZse&>-RJ`79WmPjVb`YaZX!qPwGP|GJ?OM8;=zVG$DZiF{@pmfBDap~ zgI?^yM$e(%b<6XR+wGCbO@>7@Sc=Pohft|i!=>E@2^?;bJg zJ}bi>>+o*x1S9YB4&)wT2yFnLbg0)k{@8y3+y!s&2aoUxukZ^WM9fmx?w%2TkO_w1 z0Cx>(2E1o{5#|if@g48+9}n^&zk<+0*L2+=QLThFScs0@*h$!<4X 
[Unreadable base85-encoded GIT binary patch payload omitted; the encoded data carries no human-readable content.]