## Benchmarking
### Building Benchmarks
It is important to the project that we have benchmarks in place to evaluate the benefit of performance-related changes. To make that process easier, we provide some guidelines for writing benchmarks.
1. Test a variety of sample sizes. For most algorithms, `[1_000, 10_000, 20_000]` will be sufficient; for algorithms that are not too slow, use 100_000 instead of 20_000.
6. When benchmarking multi-target algorithms, keep the target count within the range `[2, 4]`.
7. In the `BenchmarkId`, include the values used to parametrize the benchmark. For example, for PLS we may have something like `Canonical-Nipals-5feats-1_000samples` (see the sketch after this list).
8. Pass data as an argument to the function being benched. This will prevent Criterion from including data creation time as part of the benchmark.
9. Add a profiler; see [here](https://github.com/tikv/pprof-rs#integrate-with-criterion) for an example of how to do so with pprof, Criterion, and flamegraph. A sketch of the hook also follows this list.
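
Below is a minimal Criterion sketch illustrating guidelines 1, 7, and 8: the sample size is swept over several values, the `BenchmarkId` encodes the parameters, and the data is built outside the timed closure. The `fit` function and the group/algorithm names here are placeholders rather than actual linfa APIs.

```rust
use criterion::{black_box, criterion_group, criterion_main, BenchmarkId, Criterion};

// Placeholder for the algorithm under test (e.g. an estimator's `fit`).
fn fit(data: &[f64]) -> f64 {
    data.iter().sum()
}

fn bench_fit(c: &mut Criterion) {
    let mut group = c.benchmark_group("pls");
    let nfeatures = 5;
    // Guideline 1: sweep a range of sample sizes.
    for &nsamples in &[1_000usize, 10_000, 20_000] {
        // Guideline 8: build the data outside the timed closure so Criterion
        // does not measure data creation.
        let data = vec![1.0f64; nsamples * nfeatures];
        // Guideline 7: encode the parameters in the BenchmarkId.
        let id = BenchmarkId::new(
            "Canonical-Nipals",
            format!("{}feats-{}samples", nfeatures, nsamples),
        );
        group.bench_with_input(id, &data, |b, data| b.iter(|| fit(black_box(data))));
    }
    group.finish();
}

criterion_group!(benches, bench_fit);
criterion_main!(benches);
```
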
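For guideline 9, pprof-rs ships a Criterion profiler hook. The sketch below follows the pprof-rs README and assumes `pprof` is listed in `dev-dependencies` with the `flamegraph` and `criterion` features enabled; `bench_fit` stands in for the benchmark function above.

```rust
use criterion::{criterion_group, criterion_main, Criterion};
use pprof::criterion::{Output, PProfProfiler};

// Placeholder benchmark; in practice this would be the function from the previous sketch.
fn bench_fit(c: &mut Criterion) {
    c.bench_function("fit", |b| b.iter(|| criterion::black_box(2 + 2)));
}

criterion_group! {
    name = benches;
    // Sample at 100 Hz and emit a flamegraph when profiling is requested.
    config = Criterion::default().with_profiler(PProfProfiler::new(100, Output::Flamegraph(None)));
    targets = bench_fit
}
criterion_main!(benches);
```
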
### Running Benchmarks
When running benchmarks, you will sometimes want to profile the code execution. Assuming you have followed step 9 and added a pprof profiling hook for the `linfa-ica` package, you can run the following to get your profiling results as a flamegraph.
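
A minimal invocation might look like the sketch below; the bench target name (`fast_ica`) and the profiling duration are assumptions, so adjust them to the benchmark you added. With the pprof hook in place, the resulting `flamegraph.svg` is written under `target/criterion/<benchmark-name>/profile/`.

```sh
# Run the linfa-ica benchmarks in profiling mode for 30 seconds each
# (bench target name is illustrative).
cargo bench -p linfa-ica --bench fast_ica -- --profile-time 30
```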