It would be very useful to have a built-in validator for when we want our LLM to generate Python code (or ideally any type of code, but in this case I am using Python).
Currently I am using this approach:
```python
from pydantic import BaseModel
from pydantic_ai import Agent, RunContext, ModelRetry

class CodeSnippet(BaseModel):
    code: str

# Initialize the agent with the desired model and result type
agent = Agent(
    model='openai:gpt-4o',
    result_type=CodeSnippet,
    retries=0
)

# Implement the result validator to check for syntax errors
@agent.result_validator
async def validate_python_code(ctx: RunContext, result: CodeSnippet) -> CodeSnippet:
    try:
        # Attempt to compile the code to check for syntax errors
        compile(result.code, '<string>', 'exec')
    except SyntaxError as e:
        # Raise ModelRetry to prompt the agent to generate a new result
        raise ModelRetry(f'Syntax error in generated code: {e}') from e
    return result

# Use the agent to generate Python code based on a prompt
def main():
    prompt = 'Generate a Python function that adds two numbers, but introduce a syntax error.'
    result = agent.run_sync(prompt)
    print('Generated Python Code:')
    print(result.data.code)

if __name__ == '__main__':
    main()
```
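For reference, the core check in the validator above can be exercised on its own, without pydantic-ai or a model call. This is just a minimal sketch of the compile-based syntax check (the helper name `check_python_syntax` is mine, not from any library):

```python
def check_python_syntax(code: str):
    """Return None if the code parses cleanly, else the SyntaxError message.

    compile() with mode='exec' parses the source without executing it,
    so this only catches syntax errors, not runtime errors.
    """
    try:
        compile(code, '<string>', 'exec')
    except SyntaxError as e:
        return str(e)
    return None

# A well-formed function compiles cleanly
print(check_python_syntax('def add(a, b):\n    return a + b'))  # → None

# A missing closing parenthesis is caught at compile time
print(check_python_syntax('def add(a, b:\n    return a + b'))
```

A built-in validator could wrap exactly this kind of check and raise `ModelRetry` when a message is returned, so the model gets another attempt with the error fed back.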
Is this something you would consider implementing?