Blocking and CPU-bound #791
Comments
Lambda the service will only ever have one request at a time outstanding to your function code (well, to a single instance of your function code). So from a performance perspective, all that matters here is the time between when your code receives the request and when it sends the response. Being more efficient in your use of the compute (blocking vs. non-blocking, async, thread pools, whatever) only helps you if it drives your all-up invoke time down. Does that make sense?
I'm struggling to understand how shared state will be accessed across different function calls. Will concurrent invocations compete for this state? Or, within one process, will they be called sequentially, one after the other? I should probably run some tests to observe this in practice (the only reliable way to verify).
Hi. I plan to write a Lambda function for CPU-intensive work. While I have some intuition about blocking in traditional async Rust apps (https://ryhl.io/blog/async-what-is-blocking/), with `lambda_runtime` all my intuition goes away, and I'm no longer sure how to achieve maximum performance.

For example, if I have a CPU-intensive task inside a handler, is it a good idea to spawn this task on the global Rayon thread pool and then `await`? Or should I always prefer `tokio::task::spawn_blocking`? It would be great to have an example if there are some pitfalls.