Add doc about mem.*Unsafe consuming the input buffer #8209
I've been reading the new buffer-pool-related code due to recent OOM problems with other libraries misusing sync.Pool, and the varying behavior between Buffer and SliceBuffer had me confused for quite a while. Buffer mutates itself when read, and implicitly calls Free when fully consumed:

grpc-go/mem/buffers.go
Lines 194 to 200 in 5edab9e
SliceBuffer, on the other hand, returns a sub-slice but does not modify itself:
grpc-go/mem/buffers.go
Lines 263 to 267 in 5edab9e
The only way these *Unsafe funcs can be used correctly is by replacing the input buffer with the result buffer(s), as the current code does:

grpc-go/internal/transport/transport.go
Line 139 in 5edab9e

... which seems worth documenting, since it feels like a huge footgun otherwise: the self-mutating-and-freeing behavior only triggers for relatively large data, which is less common in tests.
Granted, this isn't a technical barrier at all, but the behavior differences between implementations misled me for a while, and this "must not use the input Buffer" requirement doesn't seem obviously necessary when reading the code. Might as well save future people that effort.
On a related note, Buffer feels a bit concerningly strange with regard to concurrency, and I didn't see it called out in the PR that introduced it, so I think it might not have been noticed. E.g. Buffers are not concurrency-safe:

grpc-go/mem/buffers.go
Lines 39 to 41 in 5edab9e

But it uses an atomic refcount:

grpc-go/mem/buffers.go
Lines 75 to 80 in 5edab9e
... but it doesn't use it in an atomically-safe way, as it sets the value to nil when freeing:

grpc-go/mem/buffers.go
Line 160 in 5edab9e

And if you include the fact that the ref came from a pool:

grpc-go/mem/buffers.go
Line 157 in 5edab9e
it seems like this code cannot possibly be correct: it Puts refcounts and buffers back into their pools while a racing Ref, Free, Ref, Free sequence may still be using them. So this is introducing difficult-to-investigate failure modes that non-atomic or non-pooled instances would not have. It seems like it'd be better to either just use a plain int field or not reuse the atomics (if the goal is to detect races).

Also, I would be quite surprised if pooled *int values perform better than non-pooled ones, given the cheap initialization and sync.Pool's overhead, but I don't have any evidence either way. Is there some benefit to this that I'm missing? Maybe it can catch tricky races or misuse that would otherwise go unnoticed? It seems misleading and dangerous to me otherwise.