
perf!: replace default engine JSON reader's FileStream with concurrent futures #711

Merged: 39 commits into delta-io:main on Mar 3, 2025

Conversation

@zachschuermann (Collaborator) commented Feb 21, 2025

What changes are proposed in this pull request?

The original FileStream API, though intended to make concurrent GET requests to the object store, actually issued serial requests and relied on a hand-written poll function to implement Stream. This PR makes a minimal change in order to (1) increase the JSON reader's performance by issuing concurrent GET requests and (2) simplify the code by removing the custom Stream implementation, instead leveraging existing functions/adapters to convert the files to read into a Stream and issue concurrent requests through the futures::stream::buffered adapter.

This is effectively a similar improvement as in #595 but for the JSON reader.

Specifically, the changes are:

  1. Replaces the FileStream::new_async_read_iterator() call (the manually implemented Stream) with an inline implementation that converts the files slice into a Stream (via stream::iter) and uses the futures::stream::buffered adapter to concurrently execute file-opening futures. Results are then sent across an mpsc channel to bridge the async/sync gap.
  2. JsonOpener no longer implements FileOpener (which requires a synchronous fn open()) and instead directly exposes an async fn open() for simpler use above. This removes all reliance on FileStream/FileOpener in the JSON reader.
  3. Adds a custom ObjectStore implementation, OrderedGetStore, to deterministically control the order in which GET request futures are resolved.

This PR affects the following public APIs

  • DefaultJsonHandler::with_readahead() renamed to DefaultJsonHandler::with_buffer_size()
  • DefaultJsonHandler's default buffer size: 10 => 1000
  • DefaultJsonHandler's default batch size: 1024 => 1000
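For intuition on what the buffer size controls: it bounds how many results may be in flight ahead of the synchronous consumer. A minimal stdlib sketch of that back-pressure, with a bounded sync_channel standing in for the futures::stream::buffered adapter (function names here are illustrative, not the kernel's actual API):

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// A bounded channel stands in for the readahead buffer: the producer blocks
// once `cap` results are queued, just as `buffered(n)` caps in-flight futures.
// (Illustrative stdlib sketch; not the kernel's actual code.)
fn bounded_pipeline(n: usize, cap: usize) -> Vec<usize> {
    let (tx, rx) = sync_channel::<usize>(cap);
    let producer = thread::spawn(move || {
        for i in 0..n {
            tx.send(i).expect("receiver hung up");
        }
        // dropping `tx` here closes the channel, ending the iterator below
    });
    let results: Vec<usize> = rx.into_iter().collect();
    producer.join().unwrap();
    results
}

fn main() {
    assert_eq!(bounded_pipeline(10, 4), (0..10).collect::<Vec<_>>());
}
```

A larger buffer (1000 vs. the old 10) lets far more GET requests overlap before the consumer has to catch up.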

How was this change tested?

Added a test with a new OrderedGetStore, which resolves the GET requests in a jumbled order while the test asserts that results come back in the natural (requested) order. Additionally, manually validated that JSON file reads went from serial to concurrent.

codecov bot commented Feb 21, 2025

Codecov Report

Attention: Patch coverage is 80.64516% with 66 lines in your changes missing coverage. Please review.

Project coverage is 84.21%. Comparing base (bbe8929) to head (f390be1).
Report is 1 commit behind head on main.

Files with missing lines Patch % Lines
kernel/src/engine/default/json.rs 80.64% 59 Missing and 7 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #711      +/-   ##
==========================================
- Coverage   84.24%   84.21%   -0.04%     
==========================================
  Files          77       77              
  Lines       18105    18387     +282     
  Branches    18105    18387     +282     
==========================================
+ Hits        15252    15484     +232     
- Misses       2131     2181      +50     
  Partials      722      722              


@github-actions github-actions bot added the breaking-change Change that will require a version bump label Feb 21, 2025
@zachschuermann zachschuermann removed the breaking-change Change that will require a version bump label Feb 24, 2025
@github-actions github-actions bot added the breaking-change Change that will require a version bump label Feb 24, 2025
@github-actions github-actions bot added the breaking-change Change that will require a version bump label Feb 27, 2025
})
.collect();

let _ = future::join_all(handles).await;
Collaborator:
Do we actually need this call if we use mpsc::IntoIter?

This iterator will block whenever next is called, waiting for a new message, and None will be returned if the corresponding channel has hung up.

(see above)
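For context, the hang-up behavior quoted above is what can make an explicit join unnecessary: once every sender clone is dropped, the blocking iterator simply ends. A stdlib sketch with threads standing in for spawned tasks (illustrative, not the test's actual code):

```rust
use std::sync::mpsc;
use std::thread;

// Each "task" sends one result and drops its sender clone on exit. After the
// original sender is dropped too, the channel hangs up and `into_iter`
// returns None, so the consumer finishes without joining any handles.
fn collect_without_join(n: usize) -> Vec<usize> {
    let (tx, rx) = mpsc::channel();
    for i in 0..n {
        let tx = tx.clone();
        thread::spawn(move || tx.send(i).unwrap());
    }
    drop(tx); // only the spawned threads hold senders now
    let mut out: Vec<usize> = rx.into_iter().collect();
    out.sort_unstable(); // completion order is nondeterministic
    out
}

fn main() {
    assert_eq!(collect_without_join(8), (0..8).collect::<Vec<_>>());
}
```

With async tasks the situation can differ: if spawned tasks are never awaited, the test may exit before they run at all, which matches the odd behavior described in the reply.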

Collaborator (author):

hm I ran into some odd behavior without it - it looks like we just don't wait on any of the spawned tasks to finish and then we 'finish' the test without actually doing anything. can look into this more deeply later :)

Collaborator:

Yeah, we probably need to do what the actual json read code is doing, and produce a flattened stream of futures.

}
}

/// Set the maximum number of batches to read ahead during [Self::read_json_files()].
/// Deprecated: use [Self::with_buffer_size()].
Collaborator:

Trying to avoid a breaking change or something?

Collaborator (author):

yep exactly - I may just collect some of these "we need a breaking change sometime" items into an issue, and then whenever we decide to pursue 0.8 (and have an actual need for breaking changes) we can remove some of these deprecated functions

Collaborator (author):

removed since we are fine with breaking change now :)

Comment on lines 626 to 627
// note: join_all is ordered
let files = future::join_all(file_futures).await;
Collaborator:

it may be ordered, but it also materializes the entire list up front (and could cause silent data loss if the mpsc overflows).

Is there not a way to try-flatten the streams into a single stream that we then convert to a blocking iterator?

Collaborator:

That said -- I don't think this test actually adds any value over the new test that leverages the ordered object store. Two items is too few to reliably catch races, and if there were a race, we don't want a test that only notices some of the time.

I think as long as we have tested that our stream machinery preserves order, and verified that the json reads return correct data at all, probably don't need much or any testing for the combination of the two?

Put another way -- what code path(s) does this test exercise, that other tests did not cover?

Collaborator:

Also -- what does it mean for join_all to be "ordered" in the first place? I thought spawn kicked off the tasks independently, and so they could complete in any order even if nobody ever joins on them?

Collaborator (author):

(working on making this test a better one than just the two items it had before)

for context on both tests:

  1. test_ordered_get_store is just a test to validate that our special OrderedGetStore does the right thing
  2. test_read_json_files_ordering is actually using the OrderedGetStore to set up a specific out-of-order test so that we ensure read_json_files hands things back in the correct order

Collaborator (author):

Also -- what does it mean for join_all to be "ordered" in the first place? I thought spawn kicked off the tasks independently, and so they could complete in any order even if nobody ever joins on them?

regardless of whether we use spawn, it means the list of futures (JoinHandles if spawned, plain futures if not) is resolved in order: the returned files are in the original order of the file_futures list, NOT in the order they happened to complete.
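The same order guarantee can be seen with plain threads (a stdlib analogue, not the futures machinery itself): joining handles in creation order yields results in that order, even when the threads complete in reverse:

```rust
use std::thread;
use std::time::Duration;

// Threads are made to finish in reverse order, but joining the handles in
// the order they were created returns results in spawn order -- the analogue
// of `future::join_all` resolving its list of futures in order.
fn join_in_order() -> Vec<u64> {
    let handles: Vec<_> = (0..4u64)
        .map(|i| {
            thread::spawn(move || {
                // later-spawned threads finish first
                thread::sleep(Duration::from_millis(40 - 10 * i));
                i
            })
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    assert_eq!(join_in_order(), vec![0, 1, 2, 3]);
}
```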

@scovich (Collaborator) left a comment:

I still don't understand why the unit test behaves the way it does, but the logic in the actual json reader looks correct.


@@ -2,19 +2,22 @@

use std::io::BufReader;
use std::ops::Range;
use std::sync::Arc;
use std::task::{ready, Poll};
use std::sync::{mpsc, Arc};
Collaborator:

Consider tokio::sync::mpsc instead? much faster, designed to be used in async context

Collaborator:

we don't depend on tokio in the default except for implementing executors in terms of it. it might be fine, but for now we can stay stdlib

let result = self.inner.get(location).await;

// we implement a future which only resolves once the requested path is next in order
future::poll_fn(move |cx| {
Collaborator:

What happens when one slow task is at the front of the line? Everything just waits for that right? I think in an ideal network situation this works fine, but if one slow future is at the front it seems like this just log jams the entire process.

Collaborator:

Yeah, but what we're simulating here is a specific ordering of data returned. We're not trying to check if things are performant or anything. So if there's a "slow" request in this case, it implies that all the other requests must be slower, since we've specified the order they should return up front.

Really this is a test for "can kernel handle it when async stuff returns out of order".

@scovich (Collaborator) commented Feb 28, 2025:

In particular: kernel's log replay requires that results come back in the order they were requested in, not the order they completed in. That's a correctness constraint. And yes, if there's a straggler at the head of the queue (in real life) that does mean everybody else is waiting. I would hope the async machinery still allows the tasks deeper in the queue to make progress meanwhile.

This test is forcing out of order completion to ensure the results are still returned in order.
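A rough synchronous analogue of that gating (hedged: the real OrderedGetStore registers wakers inside future::poll_fn rather than blocking on locks) is to have each request wait on a Condvar until its key reaches the front of a shared queue:

```rust
use std::collections::VecDeque;
use std::sync::{mpsc, Arc, Condvar, Mutex};
use std::thread;

// Each "request" blocks until its key is at the front of the shared queue,
// then pops it and wakes the rest, forcing completions into the
// predetermined order no matter when the threads actually get scheduled.
fn ordered_completions(order: &[&'static str]) -> Vec<&'static str> {
    let gate = Arc::new((
        Mutex::new(order.iter().copied().collect::<VecDeque<_>>()),
        Condvar::new(),
    ));
    let (done_tx, done_rx) = mpsc::channel();
    for &key in order {
        let gate = Arc::clone(&gate);
        let done_tx = done_tx.clone();
        thread::spawn(move || {
            let (lock, cvar) = &*gate;
            let mut queue = lock.lock().unwrap();
            while queue.front() != Some(&key) {
                queue = cvar.wait(queue).unwrap();
            }
            queue.pop_front();
            cvar.notify_all();
            done_tx.send(key).unwrap(); // sent while still holding the lock
        });
    }
    drop(done_tx);
    done_rx.into_iter().collect()
}

fn main() {
    assert_eq!(ordered_completions(&["b", "a", "c"]), vec!["b", "a", "c"]);
}
```

The poll_fn version does the same thing cooperatively: a request whose key is not yet at the front parks its Waker instead of blocking a thread.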

}

#[tokio::test(flavor = "multi_thread", worker_threads = 3)]
async fn test_read_json_files_ordering() {
Collaborator:

I think it might be helpful to have a test that exceeds the buffering limit.

Collaborator (author):

added!

@nicklan (Collaborator) left a comment:

just a couple of small things but basically lgtm

@@ -159,3 +159,4 @@ tracing-subscriber = { version = "0.3", default-features = false, features = [
"env-filter",
"fmt",
] }
async-trait = "0.1" # only used for our custom SlowGetStore ObjectStore implementation
Collaborator:

nit: keep alphabetical.

Collaborator (author):

moved it up to after our path-based deps but above the others, though it doesn't look like those are actually in order either..

Collaborator:

hah right, we should actually alphabetize those at some point :)


state.ordered_keys.pop_front().unwrap();

// there are three possible cases, either:
// 1. the next key has a waker already registered, in which case we wake it up
Collaborator:

nit: maybe note that this is the case where something has already requested the next key in line, so that's why there is a waker waiting, and we need to wake it up

Collaborator (author):

added more!


let result = self.inner.get(location).await;

// we implement a future which only resolves once the requested path is next in order
future::poll_fn(move |cx| {
Collaborator:

Why not just have this return Poll::Ready(result)?

Collaborator (author):

poll_fn takes an FnMut, so returning the result directly would require either (1) moving the captured result out of the closure on every call (impossible: it would be moved more than once) or (2) doing the self.inner.get directly inside the poll_fn, which is also difficult since poll_fn is synchronous and we want to be able to .await.

let me know if I'm missing something, but I played with it for a bit and came up with those items!
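For what it's worth, one common workaround for case (1) is to park the value in an Option and take() it on the final call; an FnMut can then hand the non-Copy value out exactly once without moving it out of the closure twice (an illustrative sketch outside any real poll loop):

```rust
// `poll_fn` takes an `FnMut`, so the closure may run many times; returning a
// captured non-Copy value directly would move it out on every call. Stashing
// it in an Option lets `take()` move it out exactly once.
fn take_once() -> (Option<String>, Option<String>) {
    let result = String::from("GET response");
    let mut slot = Some(result);
    let mut poll = move |ready: bool| {
        if ready {
            // panics if called again after yielding the value, mirroring
            // "future polled after completion" semantics
            Some(slot.take().expect("polled after completion"))
        } else {
            None
        }
    };
    (poll(false), poll(true))
}

fn main() {
    let (pending, ready) = take_once();
    assert_eq!(pending, None);
    assert_eq!(ready, Some(String::from("GET response")));
}
```

That said, awaiting self.inner.get before entering poll_fn, as this PR does, sidesteps the issue entirely.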

@zachschuermann zachschuermann removed the breaking-change Change that will require a version bump label Feb 28, 2025
@github-actions github-actions bot added the breaking-change Change that will require a version bump label Feb 28, 2025
fn new(inner: T, ordered_keys: impl Into<VecDeque<Path>>) -> Self {
let ordered_keys = ordered_keys.into();
fn new(inner: T, ordered_keys: &[Path]) -> Self {
let ordered_keys: Vec<Path> = ordered_keys.to_vec();
Collaborator:

Probably don't need the type annotation?

Collaborator (author):

ah yep

@zachschuermann zachschuermann changed the title perf: replace default engine JSON reader's FileStream with concurrent futures perf!: replace default engine JSON reader's FileStream with concurrent futures Mar 3, 2025
@zachschuermann zachschuermann merged commit 76271c7 into delta-io:main Mar 3, 2025
19 of 21 checks passed
@zachschuermann zachschuermann deleted the concurrent-json branch March 3, 2025 23:52