---
description: Speed up your testing and merging workflow with Merge Batching
---

# Batching

Batching allows multiple pull requests in the queue to be tested as a single unit. Given the same CI resources, a system with batching enabled can achieve higher throughput while also reducing the net amount of CI time spent per pull request.

Enabling batching can reduce the cost per pull request in the Merge Queue by almost 90%. The table below shows how batch size affects the amount of testing spent on pull requests in the queue.

{% embed url="https://share.vidyard.com/watch/BsPi6f1KHvsa6wE18ySAJf" %} example of testing pull requests in batches of 3 {% endembed %}

| Batch Size | Pull Requests                      | Testing Cost | Savings |
| ---------- | ---------------------------------- | ------------ | ------- |
| 1          | A, B, C, D, E, F, G, H, I, J, K, L | 12x          | 0%      |
| 2          | AB, CD, EF, GH, IJ, KL             | 6x           | 50%     |
| 4          | ABCD, EFGH, IJKL                   | 3x           | 75%     |
| 8          | ABCDEFGH, IJKL                     | 1.5x         | 87.5%   |
| 12         | ABCDEFGHIJKL                       | 1x           | 92%     |

## Enable Batching

Batching is enabled in the Merge Settings of your repo in the Trunk web app.

## Configuring Batching

The behavior of batching is controlled by two settings in the Merge Queue:

**Target Batch Size**: The largest number of queue entries that will be tested together in a single batch. A larger target batch size reduces CI cost per pull request but requires more work when a batch failure necessitates bisection.

**Maximum Wait Time**: The maximum amount of time the Merge Queue will wait to fill the target batch size before it begins testing. A higher maximum wait time increases the Time-In-Queue metric but has the net effect of reducing CI cost per pull request.

| Time (mm:ss) | Event (Target Batch Size 4; Maximum Wait 5 minutes) | Testing           |
| ------------ | --------------------------------------------------- | ----------------- |
| 00:00        | enqueue A                                           | ----              |
| 01:00        | enqueue B                                           | ----              |
| 02:30        | enqueue C                                           | ----              |
| 05:00        | 5 min maximum wait time reached                     | Begin testing ABC |

## What happens when a batch fails testing?

If a batch fails, the Trunk Merge Queue moves it to a separate queue for bisection analysis. There, the batch is split in various ways and the pieces are tested in isolation to determine which PRs introduced the failure. PRs that pass are moved back to the main queue for re-testing; PRs believed to have caused the failure are removed from the queue.
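One way to picture the bisection step (a simplified sketch, not Trunk's actual algorithm; the `passes` callback stands in for a real CI run): split the failed batch in half, re-test each half in isolation, and recurse into any half that still fails.

```python
def bisect_failed_batch(batch: list[str], passes) -> tuple[list[str], list[str]]:
    """Split a failed batch to isolate culprits.
    `passes(prs)` runs CI on a set of PRs in isolation and returns True on success.
    Returns (prs_to_requeue, prs_to_remove)."""
    if len(batch) == 1:
        return (batch, []) if passes(batch) else ([], batch)
    mid = len(batch) // 2
    good, bad = [], []
    for half in (batch[:mid], batch[mid:]):
        if passes(half):
            good.extend(half)  # these go back to the main queue for re-testing
        else:
            g, b = bisect_failed_batch(half, passes)
            good.extend(g)
            bad.extend(b)
    return good, bad
```

This naive sketch assumes each failure is attributable to an individual PR; a real bisection strategy must also handle failures that only appear when certain PRs are combined.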

## Batching + Optimistic Merging and Pending Failure Depth

By enabling batching along with pending failure depth and optimistic merging, you can realize the major cost savings of batching while still getting the anti-flake protection that optimistic merging and pending failure depth provide.

| Event                                                              | Queue                       |
| ------------------------------------------------------------------ | --------------------------- |
| Enqueue A, B, C, D, E, F, G                                        | main <- ABC <- DEF+abc      |
| Batch ABC fails                                                    | main <- ABC                 |
| Pending failure depth keeps ABC from being evicted while DEF tests | main <- ABC (hold) <- DEF+abc |
| DEF passes                                                         | main <- ABC <- DEF+abc      |
| Optimistic merging allows ABC and DEF to merge                     | merge ABC, DEF              |

Combined, pending failure depth, optimistic merging, and batching can greatly improve your CI performance: the Merge Queue can optimistically merge whole batches of PRs with far less wasted testing.

## What are the risks of batching?

The downsides are limited. Because batching combines multiple pull requests into one test run, you give up the proof that each pull request can be safely merged into your protected branch in complete isolation. In the unlikely case that you have to revert a change on your protected branch or perform a rollback, you will need to re-test that revert or submit it to the queue to ensure nothing has broken. In practice, this re-testing is required regardless of how the change was originally merged, so the added risk is small.