Add ThreadPoolExecutor for AssetDaemon run submission #25888
base: master
Conversation
This stack of pull requests is managed by Graphite.
Force-pushed from 16f35f2 to 4eaf79e.
This all looks very reasonable to me. The one thing I am unsure about is whether we need some kind of lock on run_request_execution_data_cache now that it is potentially being written to and read from by multiple threads simultaneously. Is there anything else shared between the threads that could run into the same issue?
@@ -384,6 +389,14 @@ def core_loop(
    thread_name_prefix="asset_daemon_worker",
)
)
num_submit_workers = self._settings.get("num_submit_workers")
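(For readers outside the codebase: a minimal sketch of how a setting like this could feed a dedicated submit pool, assuming a plain dict-backed settings mapping and using the stdlib ThreadPoolExecutor purely for illustration. The helper name and the fall-back-to-None behavior are assumptions, not the PR's actual implementation.)

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Mapping, Optional


def build_submit_executor(settings: Mapping[str, int]) -> Optional[ThreadPoolExecutor]:
    """Hypothetical helper: build a dedicated pool for run submission.

    Returns None when the (assumed) "num_submit_workers" setting is unset,
    so the daemon can fall back to submitting runs serially on the tick thread.
    """
    num_submit_workers = settings.get("num_submit_workers")
    if not num_submit_workers:
        return None
    return ThreadPoolExecutor(
        max_workers=num_submit_workers,
        thread_name_prefix="asset_daemon_submit_worker",
    )
```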
There will need to be a corresponding internal PR for this, I imagine?
run_request_index=i,
instance=instance,
workspace_process_context=workspace_process_context,
run_request_execution_data_cache=run_request_execution_data_cache,
Is this cache thread-safe?
My understanding is that the risk with this cache is simply that multiple threads may end up making redundant queries for execution data.
In theory, the values returned by those separate queries should be identical, and there is no read -> modify -> write pattern happening here, which is where the non-thread-safety of dictionaries would come into play. Given that all writes to the same key of the dictionary should contain the same value, this shouldn't be a problem.
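To make that argument concrete, here is a minimal, self-contained sketch of the access pattern being described, with a plain dict standing in for run_request_execution_data_cache and an illustrative fetch function (names are hypothetical). Under CPython's GIL each individual dict get/set is atomic, so the worst outcome of the race is redundant work, not a corrupted entry:

```python
from concurrent.futures import ThreadPoolExecutor

cache: dict[str, str] = {}  # stand-in for run_request_execution_data_cache


def fetch_execution_data(key: str) -> str:
    # Stand-in for the expensive query; deterministic, so every thread that
    # misses the cache writes the same value for a given key.
    return f"execution-data-for-{key}"


def get_or_fetch(key: str) -> str:
    # Benign race: two threads may both miss and both run the query, but they
    # write identical values under the same key, so readers never observe a
    # "wrong" entry. There is no read -> modify -> write of a shared value.
    if key not in cache:
        cache[key] = fetch_execution_data(key)  # redundant work at worst
    return cache[key]


with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(get_or_fetch, ["a", "b", "a", "b"]))

assert results == [f"execution-data-for-{k}" for k in ["a", "b", "a", "b"]]
```

If the cached values were ever mutated in place after insertion, this reasoning would no longer hold and a lock around the populate step would be worth adding.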
@@ -212,6 +212,7 @@ def _evaluate_tick_daemon(
    threadpool_executor=self.threadpool_executor,
    amp_tick_futures=amp_tick_futures,
    debug_crash_flags={},
    submit_threadpool_executor=None,
I don't know that we need to run all of these scenarios with the submit threadpool executor at all times, but at least as a one-off I think it would be useful to do it manually once to suss out any potential problems.
Is the concern here that the behavior of AMPs might differ in some way from the behavior of AutomationConditions?
Generally, I want to move away from this "scenario" pattern, as I've found that test_e2e.py (and the new test file I added in this PR) give a better representation of "real world" conditions. To that end, I did update one of the e2e tests to use a submit threadpool executor.
I'm down to add an e2e test that uses AMPs; I just want to avoid extending the scenario framework at this point, as my eventual goal is to delete all of it.
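For reference, a sketch of how a test could opt into the multithreaded submission path. The keyword argument mirrors the submit_threadpool_executor=None call site in the diff above, but the wrapper and the surrounding test API are assumptions, not the PR's actual test code:

```python
from concurrent.futures import ThreadPoolExecutor


def evaluate_tick_with_submit_pool(evaluate_tick_daemon, **kwargs):
    # Hypothetical wrapper: run one daemon tick with a real submit pool instead
    # of submit_threadpool_executor=None, so the test exercises the concurrent
    # run-submission code path end to end.
    with ThreadPoolExecutor(
        max_workers=4, thread_name_prefix="submit_worker"
    ) as submit_pool:
        return evaluate_tick_daemon(submit_threadpool_executor=submit_pool, **kwargs)
```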
Force-pushed from 4eaf79e to a468a34.
Summary & Motivation
As title: adds a ThreadPoolExecutor for AssetDaemon run submission, so run requests can be submitted from multiple worker threads.
How I Tested These Changes
Changelog
NOCHANGELOG