The Unit Test Generator feature is powered by kaizen and automatically creates comprehensive unit tests for your code, improving code quality and test coverage.

## How it Works:

- Input the source code or directory for which you want to generate unit tests.
- The Unit Test Generator leverages advanced language models to analyze the code and generate appropriate unit tests in a format compatible with popular testing frameworks.
- The generator supports multiple programming languages and can handle entire directories of code files.

You can find an example [here](https://github.com/Cloud-Code-AI/kaizen/tree/main/examples/unittest/main.py).

## Using the Unit Test Generator:
1. Provide the source code file or directory path for which you want to generate unit tests.
2. (Optional) Configure output path, verbosity, and critique settings.
3. Run the generator to create unit tests.
4. Review and integrate the generated tests into your test suite.

## Supported Languages:
- Python (.py)
- JavaScript (.js)
- TypeScript (.ts)
- React (.jsx, .tsx)
- Rust (.rs)
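
As a quick illustration of the extension filtering implied by the list above, here is a minimal, hypothetical helper (not part of the kaizen API) that keeps only the files whose extensions the generator documents as supported:

```python
from pathlib import Path

# Extensions documented as supported above. The helper itself is an
# illustrative sketch, not part of the kaizen API.
SUPPORTED_EXTENSIONS = {".py", ".js", ".ts", ".jsx", ".tsx", ".rs"}

def supported_files(paths):
    """Keep only the paths whose extension the generator supports."""
    return [p for p in paths if Path(p).suffix in SUPPORTED_EXTENSIONS]

print(supported_files(["app.py", "index.ts", "notes.md", "lib.rs", "Button.jsx"]))
# ['app.py', 'index.ts', 'lib.rs', 'Button.jsx']
```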

## How to Run:

### Installation

Before using the Unit Test Generator, you need to install the Kaizen Cloud Code SDK. You can do this using pip:

```bash
pip install kaizen-cloudcode
```

### Usage Guide

Here's a step-by-step guide on how to use the Unit Test Generator:

1. Import the UnitTestGenerator:
   ```python
   from kaizen.generator.unit_test import UnitTestGenerator
   ```

2. Create an instance of the generator:
   ```python
   generator = UnitTestGenerator()
   ```

3. Generate tests for a specific file:
   ```python
   generator.generate_tests(
       file_path="path/to/your/file.py",
       enable_critique=True,
       verbose=True
   )
   ```

4. (Optional) Run the generated tests:
   ```python
   test_results = generator.run_tests()
   ```

5. (Optional) Display the test results:
   ```python
   for file_path, result in test_results.items():
       print(f"Results for {file_path}:")
       if "error" in result:
           print(f"  Error: {result['error']}")
       else:
           print(f"  Tests run: {result.get('tests_run', 'N/A')}")
           print(f"  Failures: {result.get('failures', 'N/A')}")
           print(f"  Errors: {result.get('errors', 'N/A')}")
       print()
   ```

### Complete Example:

Here's a complete example of how to use the Unit Test Generator:

```python
from kaizen.generator.unit_test import UnitTestGenerator

# Create an instance of the generator
generator = UnitTestGenerator()

# Generate tests for a specific file
generator.generate_tests(
    file_path="kaizen/helpers/output.py",
    enable_critique=True,
    verbose=True
)

# Run the generated tests
test_results = generator.run_tests()

# Display the test results
for file_path, result in test_results.items():
    print(f"Results for {file_path}:")
    if "error" in result:
        print(f"  Error: {result['error']}")
    else:
        print(f"  Tests run: {result.get('tests_run', 'N/A')}")
        print(f"  Failures: {result.get('failures', 'N/A')}")
        print(f"  Errors: {result.get('errors', 'N/A')}")
    print()
```
## API Reference:

### Class: UnitTestGenerator

#### Constructor

- `__init__(self, verbose=False)`
  Initializes the UnitTestGenerator with an optional verbosity setting.

#### Methods

##### generate_tests_from_dir
- `generate_tests_from_dir(self, dir_path: str, output_path: str = None)`
  Generates unit tests for all supported files in a given directory.
  - Parameters:
    - `dir_path`: Path of the directory containing source files.
    - `max_critique`: Maximum number of critique iterations.
    - `output_path`: (Optional) Custom output path for generated tests.
    - `verbose`: Enable verbose logging.
    - `enable_critique`: Enable AI critique and improvement of generated tests.
  - Returns: A tuple containing an empty dictionary and LLM usage statistics.
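
Based on the signature above, a directory-wide run might look like the following sketch (both paths are illustrative placeholders, not defaults of the library):

```python
from kaizen.generator.unit_test import UnitTestGenerator

generator = UnitTestGenerator()

# Generate tests for every supported file under the given directory,
# writing output to a custom location. Both paths are illustrative.
generator.generate_tests_from_dir(
    dir_path="src/",
    output_path="tests/generated/",
)
```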
##### generate_tests
- `generate_tests(self, file_path: str, content: str = None, max_critique: int = 3, output_path: str = None, verbose: bool = False, enable_critique: bool = False)`
  Generates unit tests for a given file with various configuration options.
  - Parameters:
    - `file_path`: Path of the file relative to the project root.
    - `content`: (Optional) File content.
    - `max_critique`: Maximum number of critique iterations.
    - `output_path`: (Optional) Custom output path for generated tests.
    - `verbose`: Enable verbose logging.
    - `enable_critique`: Enable AI critique and improvement of generated tests.
  - Returns: A tuple containing an empty dictionary and LLM usage statistics.

##### run_tests
- `run_tests(self) -> Dict`
  Runs the generated unit tests and returns the results.
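
Since `run_tests` returns a plain dictionary keyed by file path, you can post-process it however you like. Here is a small hypothetical helper (not part of the kaizen API) that collapses the results into a single pass/fail flag, assuming the per-file keys shown in the usage guide above:

```python
# Hypothetical helper: collapse run_tests() output into one boolean.
# Assumes each per-file result uses the "error"/"failures"/"errors"
# keys shown in the usage guide; it is not part of the kaizen API.
def all_tests_passed(test_results):
    for result in test_results.values():
        if "error" in result:
            return False
        if result.get("failures", 0) or result.get("errors", 0):
            return False
    return True

sample = {
    "tests/test_output.py": {"tests_run": 4, "failures": 0, "errors": 0},
    "tests/test_parser.py": {"error": "collection failed"},
}
print(all_tests_passed(sample))  # False
```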
#### Key Features:
- Multi-language support
- Directory-wide test generation
- AI-powered test scenario creation
- Test critique and improvement
- Detailed logging and progress tracking
- Token usage monitoring
## Benefits:
- Increased Test Coverage
- Time Efficiency
- Consistency in Testing
- Early Bug Detection
- Support for Multiple Programming Languages
- Continuous Improvement through AI Critique
74 | 160 |
|
75 | 161 | ## Limitations:
|
76 | 162 | - AI Limitations: May not cover all edge cases or complex scenarios.
|
77 | 163 | - Human Oversight: Generated tests should be reviewed and potentially modified by developers.
|
| 164 | +- Language Support: Limited to the supported programming languages. |
| 165 | + |
| 166 | +## Advanced Usage: |
| 167 | +- Enable critique mode for AI-powered test improvement |
| 168 | +- Adjust verbosity for detailed logging |
| 169 | +- Customize output paths for generated tests |
| 170 | +- Configure maximum critique iterations for fine-tuned results |
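
The options above map directly onto the `generate_tests` parameters documented in the API reference. A sketch combining them (the file and output paths are illustrative placeholders):

```python
from kaizen.generator.unit_test import UnitTestGenerator

generator = UnitTestGenerator(verbose=True)

# Combine the advanced options: custom output location, detailed
# logging, and up to five AI critique passes. The file and output
# paths are illustrative placeholders.
generator.generate_tests(
    file_path="src/utils/parser.py",
    output_path="tests/generated/",
    verbose=True,
    enable_critique=True,
    max_critique=5,
)
```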

The Unit Test Generator uses AI to enhance the testing process, improve code quality, and streamline the development workflow by automating the creation of unit tests across multiple programming languages.