Releases: gaudiy/langsmith-evaluation-helper

v0.1.5

15 Aug 01:58
4c0d3bf

What's Changed

Full Changelog: v0.1.4...v0.1.5

v0.1.4

13 Aug 05:17
3506ba2

What's Changed

Full Changelog: v0.1.3...v0.1.4

v0.1.3

29 Jul 05:11
e166abc

What's Changed

Full Changelog: v0.1.2...v0.1.3

v0.1.2

23 Jul 02:07
524b9c9

Initial release of LangSmith Evaluation Helper, an open-source library that simplifies running evaluations with LangSmith.

Key Features

  • YAML-based Configuration: Easily set up and customize your evaluations using a simple YAML configuration file.
  • Flexible Prompt Handling: Support for both standard prompts and custom run scripts to accommodate various evaluation scenarios.
  • Multiple Model Support: Evaluate across different language models, including GPT-3.5 Turbo, GPT-4, Claude 3 Sonnet, and more.
  • Concurrent Evaluation: Run multiple evaluations in parallel to improve efficiency.
  • Built-in Assertions: Validate results using length checks, LLM-based judgments, and similarity comparisons.
  • Integration with LangSmith: Seamlessly view and analyze your evaluation results in the LangSmith platform.
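To illustrate how these features might come together in the YAML configuration, here is a minimal sketch of a hypothetical config.yml. The key names and structure below are assumptions for illustration only, not the library's documented schema; refer to the README for the actual configuration format.

```yaml
# Hypothetical config.yml sketch -- key names are illustrative,
# not the documented schema; see the README for the real format.
description: Summarization quality check

prompt:
  name: summarize_prompt      # assumed: a standard prompt or custom run script

providers:                    # assumed: models to evaluate across
  - id: gpt-4
  - id: claude-3-sonnet

tests:
  max_concurrency: 4          # assumed: concurrent evaluation setting
  assert:
    - type: length            # assumed: built-in length check
      max: 500
    - type: llm-judge         # assumed: LLM-based judgment
```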

Getting Started

  1. Install the package:

    pip install langsmith-evaluation-helper
    
  2. Create a config.yml file defining your evaluation parameters (prompts, models, and assertions).

  3. Run your evaluation:

    langsmith-evaluation-helper evaluate path/to/your/config.yml
    

For more details on usage and configuration options, please refer to our README.

We look forward to seeing how you use LangSmith Evaluation Helper in your projects!