Add multi-language system prompts and BedrockChatAdapter implementation #576
Conversation
- Implement system prompts in English and Canadian French for AI interactions in `system_prompts.py`.
- Enhance `BedrockChatAdapter` with prompt templates for QA, conversation, and follow-up questions in `base.py`.
- Update `__init__.py` to include system prompt imports for easy access.
- Configure the logger in `base.py` to trace key operations for QA and conversational prompts.
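For orientation, here is a minimal sketch of the shape this PR gives to `system_prompts.py`, reconstructed from the snippets quoted later in this thread (the `Language` member names and values are assumptions, not the actual module contents):

```python
from enum import Enum


class Language(Enum):
    ENGLISH = "english"
    FRENCH_CA = "french_ca"  # hypothetical member for Canadian French


# Prompts are keyed by language, then by use case.
prompts = {
    Language.ENGLISH.value: {
        "conversation_prompt": (
            "The following is a friendly conversation between a human and an AI. "
            "If the AI does not know the answer to a question, it truthfully says it does not know."
        ),
        "condense_question_prompt": (
            "Given the conversation inside the tags <conv></conv>, rephrase the "
            "follow up question inside <followup></followup> to be a standalone question."
        ),
        # qa_prompt and other entries omitted here
    },
    Language.FRENCH_CA.value: {
        # French-Canadian equivalents, as quoted later in this thread
    },
}

# Set default language (English)
lang = Language.ENGLISH.value
```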
Thank you for creating this PR! I think it's a good addition to the project (and sorry for the delay).
The build process failed. I recommend running `npm run vetall`.
"Si vous ne trouvez pas la réponse dans les documents, informez l'utilisateur que l'information n'est pas disponible. " | ||
"Si possible, dressez la liste des documents référencés.", | ||
# Prompt for conversational interaction between a human and AI (French-Canadian) | ||
'conversation_prompt': "Vous êtes un assistant IA utilisant la Génération Augmentée par Récupération (RAG). " |
This prompt is used when RAG is not used. I think this prompt should be changed.
It has been modified according to the specification.
@michel-heon Can you clarify what specification you are referring to?
My point is that this prompt is a copy of (or similar to) the `qa_prompt` above, which is used when a workspace is selected and therefore suggests using documents and RAG. But this prompt is used when no workspace is set, so there are no documents.
For reference, this is the English version: "The following is a friendly conversation between a human and an AI. If the AI does not know the answer to a question, it truthfully says it does not know."
It is used here: `lib/model-interfaces/langchain/functions/request-handler/adapters/base/base.py`, line 217 (commit cbe2635): `chain = self.get_prompt() | self.llm`
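To make the distinction concrete, a hypothetical sketch of the selection logic (the function name and structure are illustrative, not the repo's actual code): the RAG-oriented `qa_prompt` should only be chosen when a workspace provides documents.

```python
def get_prompt_for_request(workspace_id=None):
    # Hypothetical: a workspace implies a document index, so use the RAG prompt.
    if workspace_id is not None:
        return prompts[lang]["qa_prompt"]
    # No workspace means no documents: use the plain conversational prompt.
    return prompts[lang]["conversation_prompt"]
```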
Sorry, I misunderstood. I've changed the prompt to reflect the comment.
```python
'conversation_prompt': "The following is a friendly conversation between a human and an AI. "
"If the AI does not know the answer to a question, it truthfully says it does not know.",
# Prompt for rephrasing a follow-up question to be a standalone question
'condense_question_prompt': "Given the conversation inside the tags <conv></conv>, rephrase the follow up question inside <followup></followup> to be a standalone question.",
```
Maybe we could merge `condense_question_prompt` and `contextualize_q_system_prompt`, since they have the same goal. (The latter is only used by Bedrock.)
I agree
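A sketch of what that merge could look like (`contextualize_q_system_prompt` is the Bedrock-only duplicate mentioned above; the single-source approach shown here is hypothetical):

```python
def get_condense_question_prompt(lang: str) -> str:
    # Single source of truth for rephrasing follow-up questions.
    return prompts[lang]["condense_question_prompt"]


# The Bedrock adapter reuses the shared entry instead of its own constant:
contextualize_q_system_prompt = get_condense_question_prompt(lang)
```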
```python
# Add other languages here if needed

# Set default language (English)
lang = Language.ENGLISH.value  # Default language is set to English
```
I would recommend adding this as a selection option of the CLI and passing it as an environment variable.
This could be added later, perhaps along with a documentation page explaining how to add languages.
I'm all for thinking this through, as both solutions have their own advantages and disadvantages. I propose creating a new issue for this feature after this PR is closed.
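A minimal sketch of the environment-variable approach under discussion (the `PROMPT_LANGUAGE` variable name is hypothetical; nothing in this PR defines it yet):

```python
import os

# Hypothetical: the CLI writes the chosen locale into the Lambda environment;
# fall back to English when the variable is unset or unknown.
lang = os.environ.get("PROMPT_LANGUAGE", Language.ENGLISH.value)
if lang not in prompts:
    lang = Language.ENGLISH.value
```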
The branch was force-pushed from cbe2635 to a101eb8, then from f12accb to dd91d3f. Commit message excerpt: "… template updates; improve prompt system for multilingual support; expand test coverage for Bedrock adapters with guardrail integration."
@michel-heon please tag me or click the re-request review button if you'd like me to have a look.
@charles-marion In fact, the base.py_new file was an error, and I've deleted it. And indeed, the code is ready for review.
Thank you for the update.
Note that I appreciate the help with this change. I can address my comments and complete the PR if you'd like (I don't want to take too much of your time).
```python
model=self.model_id,
metric_type="token_usage",
value=self.callback_handler.usage.get("total_tokens"),
extra={
```
Changing the JSON format here would break the metric in the dashboard. Please undo it. See https://github.com/aws-samples/aws-genai-llm-chatbot/blob/main/lib/monitoring/index.ts#L289
```python
class Mode(Enum):
    CHAIN = "chain"


def get_guardrails() -> dict:
```
This is only applicable to Bedrock. Why did you add it here?
```diff
@@ -342,3 +362,245 @@ def run(self, prompt, workspace_id=None, *args, **kwargs):
         return self.run_with_chain(prompt, workspace_id)

         raise ValueError(f"unknown mode {self._mode}")


 class BedrockChatAdapter(ModelAdapter):
```
I think this is a copy of https://github.com/aws-samples/aws-genai-llm-chatbot/blob/main/lib/model-interfaces/langchain/functions/request-handler/adapters/bedrock/base.py. I would revert this change in this file.
```python
return {
    "guardrailIdentifier": os.environ["BEDROCK_GUARDRAILS_ID"],
    "guardrailVersion": os.environ.get("BEDROCK_GUARDRAILS_VERSION", "DRAFT"),
}
logger.info("No guardrails ID found.")
```
logger.info("No guardrails ID found.") | |
logger.debug("No guardrails ID found.") |
Otherwise it will be logged on every llm call.
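Putting the quoted fragment and the suggestion together, the whole helper would read roughly as follows (the early-return structure and the empty-dict fallback are inferred from the fragment, not confirmed):

```python
import logging
import os

logger = logging.getLogger(__name__)


def get_guardrails() -> dict:
    # Attach guardrails only when an ID is configured for this deployment.
    if "BEDROCK_GUARDRAILS_ID" in os.environ:
        return {
            "guardrailIdentifier": os.environ["BEDROCK_GUARDRAILS_ID"],
            "guardrailVersion": os.environ.get("BEDROCK_GUARDRAILS_VERSION", "DRAFT"),
        }
    # debug instead of info, so this is not logged on every LLM call
    logger.debug("No guardrails ID found.")
    return {}
```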
```python
top_p = model_kwargs.get("topP")
max_tokens = model_kwargs.get("maxTokens")

if temperature:
```
This would not set the value if temperature is 0:
```diff
- if temperature:
+ if temperature is not None:
```
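To illustrate the bug: `0` is falsy in Python, so a truthiness check silently drops an explicitly-set temperature of 0:

```python
model_kwargs = {"temperature": 0, "topP": 0.9}
params = {}

temperature = model_kwargs.get("temperature")

if temperature:  # buggy: 0 is falsy, so the parameter is silently dropped
    params["temperature"] = temperature
print(params)  # {}

if temperature is not None:  # fixed: 0 is kept as a valid, explicit value
    params["temperature"] = temperature
print(params)  # {'temperature': 0}
```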
Standalone question:""" # noqa: E501 | ||
return PromptTemplateWithHistory( | ||
template=template, input_variables=["input", "chat_history"] | ||
# Change le niveau global à DEBUG |
```diff
- # Change le niveau global à DEBUG
```
```python
# Change le niveau global à DEBUG
# Fetch the prompt and translated words based on the current language
condense_question_prompt = prompts[locale]["condense_question_prompt"]
logger.info(f"condense_question_prompt: {condense_question_prompt}")
```
logger.info(f"condense_question_prompt: {condense_question_prompt}") | |
logger.debug(f"condense_question_prompt: {condense_question_prompt}") |
I would recommend to mark them all as debug to reduce cloudwatch usage. (nitpick sorry)
```python
# Setting programmatic log level
# logger.setLevel("DEBUG")
```
```diff
- # Setting programmatic log level
- # logger.setLevel("DEBUG")
```
I would remove this because there is already a global log level setting here:
https://github.com/aws-samples/aws-genai-llm-chatbot/blob/main/lib/shared/index.ts#L52
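A sketch of deferring to the deployment-wide setting instead (the `LOG_LEVEL` variable name is an assumption; the linked `lib/shared/index.ts` is where the actual level is configured):

```python
import logging
import os

# Hypothetical: honor the level injected by the CDK stack rather than
# hard-coding logger.setLevel("DEBUG") inside the handler.
logger = logging.getLogger(__name__)
logger.setLevel(os.environ.get("LOG_LEVEL", "INFO"))
```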
Is it possible to include this information in the developer's guide documentation?
… enhanced logging for BedrockChatAdapter initialization, and streamlined QA prompts. Removed the redundant base.py_new file and aligned the BedrockChatAdapter configuration with the main branch.
I've just finished making the corrections and pushed the updated changes. No worries about the time involved; it's genuinely a pleasure to contribute to this effort.
Thank you for your help with this change! I will merge it later this week.
The i18n mechanism works for Bedrock, but not for azureopenai. Could this fix be part of a future PR?
Do you mean it breaks the […]? Happy to merge today/tomorrow if that's not the case.
In fact, the use of […]
The build is blocked until #598 is merged.
I ran the integration tests and fixed the formatting.
LGTM. Thank you for your contribution!
Pull Request: Centralize and Internationalize System Prompts
This pull request addresses the issue of scattered system prompts across the codebase and the lack of support for internationalization, as described in the corresponding Git issue #571.
Changes (commit 256279db811d17f6c558ccf469bfbec0e0d93583):
- `system_prompts.py`: A new module created to centralize all system prompts and support multiple languages (English and Canadian French).
- `base.py`: Refactored methods (`get_prompt`, `get_condense_question_prompt`, `get_qa_prompt`) to retrieve prompts from `system_prompts.py`; a sketch follows this list.
- `__init__.py`: Updated to import system prompts for simplified access.
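To make the refactor concrete, a sketch of one refactored getter, assembled from the snippets quoted in the review above (the import path and the exact template layout are assumptions; `PromptTemplateWithHistory` is the helper class shown in the quoted diff):

```python
from adapters.shared.prompts.system_prompts import prompts, lang  # assumed path


def get_condense_question_prompt(self):
    # Fetch the locale-specific prompt from the centralized module
    # instead of a hard-coded string.
    condense_question_prompt = prompts[lang]["condense_question_prompt"]
    template = f"""{condense_question_prompt}

<conv>
{{chat_history}}
</conv>

<followup>
{{input}}
</followup>

Standalone question:"""  # noqa: E501
    return PromptTemplateWithHistory(
        template=template, input_variables=["input", "chat_history"]
    )
```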
Key Improvements:
- All system prompts are now centralized in a single module (`system_prompts.py`), improving manageability and scalability.
- The `azure-openai`, `mistral`, `claude`, `titan`, and `llama` adapters are updated to use the new prompt management system, ensuring consistency and reducing code duplication.
Testing Instructions:
- … `system_prompts.py`.
- … the `GenAIChatBotStack-LangchainInterfaceReques` Lambda function.
- … the `prompt` field of the metadata variable in the AWS GenAI Chatbot console for further analysis.
Expected Outcome: