feat: Make allocations when decoding fallible #974
This PR makes any allocations that occur in the decode path fallible, without any public API changes.
The changes are specifically scoped to repeated fields, `Vec<u8>`, and `String`.

When decoding repeated fields into a `Vec<...>` of values, we now call `Vec::try_reserve(1)` and map the possible `TryReserveError` to a `DecodeError`. Reserving for a single element is what `Vec` already does internally, so there shouldn't be a performance impact, and `DecodeError` is already opaque, so this doesn't change the public API surface.
For `Vec<u8>` and `String`, space is reserved as part of the `sealed::BytesAdapter::replace_with` trait method. This trait method was updated to return a `Result<(), TryReserveError>` and will now fail if we can't reserve enough space. Given this is a sealed trait, there are no public API changes.

In addition to making allocations fallible, I also changed the merge impl of `Vec<u8>` to use `bytes::merge_one_copy` like `String` does; this should result in strictly fewer allocations.

## Why make this change?
Previously, users had no way to guard against OOMs while decoding. You could try to make a guess based on the size of the encoded message, but this is fairly inaccurate because it would be very difficult (impossible?) to account for things like the amortized growth of a `Vec`. And even with such a guess, there is no way to account for multiple messages being decoded in parallel in individual tasks, e.g. when handling multiple network requests concurrently.
## What about the encoding path?
A similar change is not needed for the encoding path because you can already guard against OOMs there: use `encoded_len()` to allocate a buffer yourself, handling any allocation failure, and then encode into that newly allocated buffer.