feat(event cache): unload a linked chunk whenever we get a limited sync #4694
That's really exciting, thank you for having worked on this! It's exactly what I had in mind and what we've talked about together. Super happy we are aligned on this.

The novelty, compared to what I was imagining, is the `replace_with` API, which I find pretty elegant. Kudos for that.

I've left a couple of comments about possible unsafety. The way your patch is implemented doesn't create unsafety, I think, but marking one or two methods `unsafe` is essential, I believe.

Yes, tests are missing, but I know it's a first shot and I know you'll write them.
The `EventCache`, and thus the `Timeline`, will see an `Update::Clear`, then an `Update::NewItemsChunk`. Translated by `linked_chunk::AsVector`, this gives a `VectorDiff::Clear`, then a `VectorDiff::PushBack`. Basically, the timeline will “blink”/“flash”. This is not ideal at all, knowing that it can happen pretty often…
I see two solutions here:

- Either we write a heuristic in `AsVector`: when a `VectorDiff::Clear` is followed by a `VectorDiff::PushBack { values }` or other insertions, it can be folded/merged into a `VectorDiff::Reset { values }`. However, the `Timeline` will re-create the timeline items with new unique IDs, so the renderer on the app side will not be able to make a clean diff, and… “blink”/“flash” again (all timeline items will be dropped, and new items will be re-created). We could optimise that on the `Timeline` side by re-using the same unique ID for items that have been removed and re-inserted, based on their event's `$event_id`, but I think it starts to create many complications.
- Either, instead of emitting an `Update::Clear`, we emit a bunch of `Update::RemoveChunk` until only one chunk remains. It slightly changes the approach, because instead of having a `replace_with`, we get a `remove_all_except_last`. The underlying code remains the same, but the `Update`s are different. ⚠️ Note that `AsVector` expects `RemoveChunk` to remove… an empty chunk! It emits zero `VectorDiff`. If we go down that path, we must update `AsVector` accordingly; nothing fancy, but it must be done (edit: a draft is here: feat(common): `Update::RemoveChunk` emits `VectorDiff::Remove` #4696).
I am not inclined to approve this PR until we have a consensus around this question; I know you understand that. It doesn't mean your work is not good: it is excellent, and I couldn't have done better myself. Congrats on that. I think, however, that we must answer these fundamental questions before moving forward.
This patch updates `Update::RemoveChunk` to emit `VectorDiff::Remove`. Until now, `RemoveChunk` expected the chunk to be empty, because that is how it has been used so far. However, with matrix-org#4694, this can change rapidly.
Codecov Report

```text
@@            Coverage Diff             @@
##             main    #4694      +/-  ##
==========================================
+ Coverage   85.90%   85.91%    +0.01%
==========================================
  Files         292      292
  Lines       33850    33903       +53
==========================================
+ Hits        29078    29128       +50
- Misses       4772     4775        +3
==========================================
```
For what it's worth, we've discussed this offline, and came to the conclusion that correctness is more important than performance here. In the absence of this crucial fix, it might look like there are missing messages in a timeline. I also suspect that the batching at the output of the timeline's subscription would mostly hide the problem described here (or result in a timeline "flash", if the timeline happened to be open while a new gappy sync happens), but let's proceed in multiple steps.
It's even better! Well done.

I suspect we have a bug, and that's why I can't approve the PR for the moment; please see my feedback.
```rust
// Run pagination once: it will consume prev-batch2 first, which is the most
// recent token, which returns an empty batch, thus indicating the start of the
// room.
let pagination = room_event_cache.pagination();

let outcome = pagination.run_backwards_once(20).await.unwrap();
assert!(outcome.reached_start);
assert!(outcome.events.is_empty());
assert!(stream.is_empty());

// Next, we lazy-load a next chunk from the store, and get the initial, empty
// default events chunk.
let outcome = pagination.run_backwards_once(20).await.unwrap();
assert!(outcome.reached_start.not());
assert!(outcome.events.is_empty());
assert!(stream.is_empty());
```
What? We reach the start of the timeline, then we paginate again, and we are not reaching the start of the timeline? How is the `Timeline` supposed to know it has to paginate once again if `reached_start` is set to `true`? Is it a bug?
Good catch! This was happening because there was an inconsistency between the network (which indicated that we've reached the start of the room) and the persisted storage on disk (where we may have an empty initial events chunk before the final gap we just resolved).

I will add a commit that makes sure to override this value based on the current state of the chunk first, before resorting to the `reached_start` value obtained from the network, if we couldn't figure it out ourselves (i.e. there wasn't any previous chunk).

In the future, we should consider not having empty chunks in the first place, as you hinted at on Matrix, but I'd like to keep this PR smallish and land it as soon as possible, as it's important for correctness purposes (getting rid of empty chunks is rather an optimisation, in my opinion).
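The override described above could look roughly like this; a minimal sketch where the function and parameter names are invented for illustration and are not the SDK's actual API:

```rust
// Illustrative sketch only: the function and parameter names are invented,
// not the SDK's actual API.
//
// After resolving a gap, decide whether we have truly reached the start of
// the room, preferring the local chunk state over the network's answer.
fn resolve_reached_start(
    // An in-memory or lazily-loadable previous chunk still exists.
    has_previous_chunk: bool,
    // The network response carried no previous-batch token.
    network_reached_start: bool,
) -> bool {
    if has_previous_chunk {
        // The store knows better: there is still something to paginate into,
        // even if it turns out to be an empty initial events chunk.
        false
    } else {
        // No previous chunk anywhere: fall back to the network's answer.
        network_reached_start
    }
}

fn main() {
    // Network says "start reached", but disk still has a previous (possibly
    // empty) chunk: we must not report having reached the start.
    assert!(!resolve_reached_start(true, true));
    // No previous chunk and no prev-batch token: start reached.
    assert!(resolve_reached_start(false, true));
    println!("ok");
}
```

The point of the design is that local state takes precedence: the network can only ever confirm "start reached", never contradict a previous chunk that still exists on disk.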
…orage updates

And rename it accordingly to `RoomEvents::store_updates`. Note: no changelog, because this is an internal API only.
I think we are good now!
```diff
@@ -1335,7 +1331,7 @@ async fn test_no_gap_stored_after_deduplicated_backpagination() {
     let pagination = room_event_cache.pagination();

     let outcome = pagination.run_backwards_once(20).await.unwrap();
-    assert!(outcome.reached_start);
+    assert!(outcome.reached_start.not());
```
Spotted!
Tweaked an above comment, thanks!
…tween network and disk

It could be that we have a mismatch between network and disk, after running a back-pagination:

- network indicates start of the timeline, aka there's no previous-batch token
- but in the persisted storage, we do have an initial empty events chunk

Because of this, we could have weird transitions from "I've reached the start of the room" to "I haven't actually reached it", if calling the `run_backwards()` method manually.

This patch rewrites the logic when returning `reached_start`, so that it's more precise:

- when reloading an events chunk from disk, rely on the previous-chunk property to indicate whether we've reached the start of the timeline, thus avoiding unnecessary back-paginations.
- after resolving a gap via the network, override the result of `reached_start` with a boolean that indicates 1. there are no more gaps and 2. there's no previous chunk (actual or lazily-loaded).

In the future, we should consider NOT having empty events chunks, if we can.
This implements unloading the linked chunk, so as to free memory on the one hand, and avoid some weird corner cases like #4684 on the other hand.
Unloading a linked chunk happens in two steps:
Then, we make use of that functionality whenever we receive a gap via sync. This resolves the situation where we start with a hot cache store that has one old event E1; the room's state is actually [E1, E2, E3], and the last sync returns [Gap, E3]. In this case, since we don't render gaps in the timeline yet, the timeline would show [E1, E3], making it look like we missed event E2 (although the next pagination would make it appear). Instead, we unload the linked chunk down to its last chunk (E3), so that [E1] is no longer rendered, and the next paginations will start from the latest gap.
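The scenario above can be modelled with a toy chunk list. This is a hedged sketch with invented types: the SDK's linked chunk is a richer structure, and the gap's pagination token survives in the store even after unloading:

```rust
use std::collections::VecDeque;

// Toy model only: `Chunk` and the chunk list are invented for illustration;
// the real linked chunk is a richer structure, and gap tokens are persisted
// in the store rather than simply dropped.
#[derive(Debug, PartialEq)]
enum Chunk {
    Events(Vec<&'static str>),
    Gap,
}

/// Unload everything except the last chunk, e.g. after a limited ("gappy")
/// sync made the older in-memory chunks misleading to render.
fn unload_to_last_chunk(chunks: &mut VecDeque<Chunk>) {
    while chunks.len() > 1 {
        chunks.pop_front();
    }
}

fn main() {
    // Hot cache holds [E1]; the last sync appended [Gap, E3], while the
    // room's actual state is [E1, E2, E3].
    let mut chunks = VecDeque::from([
        Chunk::Events(vec!["E1"]),
        Chunk::Gap,
        Chunk::Events(vec!["E3"]),
    ]);

    // Rendering this as-is (gaps are not rendered) would show [E1, E3],
    // hiding that E2 is missing. Unload down to the last chunk instead.
    unload_to_last_chunk(&mut chunks);

    // Only [E3] remains; the next back-pagination resolves the latest gap.
    assert_eq!(chunks, VecDeque::from([Chunk::Events(vec!["E3"])]));
    println!("{chunks:?}");
}
```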
Fixes #4684.
Part of #3280.