
Blocking and CPU-bound #791

Closed
cospectrum opened this issue Jan 23, 2024 · 4 comments

Comments

@cospectrum

cospectrum commented Jan 23, 2024

Hi. I plan to write a Lambda function for CPU-intensive work. I have some intuition about blocking in traditional async Rust apps (https://ryhl.io/blog/async-what-is-blocking/), but that intuition doesn't carry over to lambda_runtime, and I'm no longer sure how to get the best performance.

For example, if I have a CPU-intensive task inside a handler, is it a good idea to spawn it on the global Rayon thread pool and await the result? Or should I always prefer tokio::task::spawn_blocking? An example would be great if there are pitfalls.
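For illustration, here is a minimal sketch of the two options inside a lambda_runtime handler. It assumes a recent version of the crate's API (run, service_fn, LambdaEvent, Error) with a serde_json payload; heavy_compute is a hypothetical stand-in for the real CPU-bound work, and the Rayon variant is shown commented out as an alternative.

```rust
use lambda_runtime::{run, service_fn, Error, LambdaEvent};
use serde_json::{json, Value};

// Hypothetical stand-in for the real CPU-bound work.
fn heavy_compute(n: u64) -> u64 {
    (0..n).fold(0u64, |acc, x| acc.wrapping_add(x))
}

async fn handler(event: LambdaEvent<Value>) -> Result<Value, Error> {
    let n = event.payload["n"].as_u64().unwrap_or(1_000_000);

    // Option A: hand the work to Tokio's blocking thread pool and await the result.
    let result = tokio::task::spawn_blocking(move || heavy_compute(n)).await?;

    // Option B (alternative): run it on Rayon's global pool and bridge back
    // to async with a oneshot channel.
    // let (tx, rx) = tokio::sync::oneshot::channel();
    // rayon::spawn(move || { let _ = tx.send(heavy_compute(n)); });
    // let result = rx.await?;

    Ok(json!({ "result": result }))
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    run(service_fn(handler)).await
}
```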

@greenwoodcm
Contributor

Lambda, the service, will only ever have one request at a time outstanding to your function code (more precisely, to a single instance of your function code). From a performance perspective, all that matters is the time between when your code receives the request and when it sends the response. Being more efficient with the compute (blocking vs. non-blocking, async, thread pools, whatever) only helps you if it drives your end-to-end invoke time down. Does that make sense?

@cospectrum
Author

I'm struggling to understand how shared state will be accessed across different invocations. Will invocations compete for this state, or will they run sequentially within one process, one after the other? I should probably run some tests to observe this in practice (the only reliable way to verify).

@cospectrum
Author

Well, the Lambda documentation includes diagrams showing that invocations within one process (execution environment) are handled one after another. I assume lambda_runtime works the same way, so there is no other in-flight invocation for tokio to switch to at .await points.
Therefore we can close this issue.
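For completeness, a minimal sketch of what sequential invocations imply for per-environment shared state, assuming the same lambda_runtime API as above; ExpensiveState is a hypothetical placeholder for whatever is expensive to build on cold start.

```rust
use std::sync::Arc;

use lambda_runtime::{run, service_fn, Error, LambdaEvent};
use serde_json::{json, Value};

// Hypothetical state that is expensive to construct (model, lookup table, client, ...).
struct ExpensiveState {
    table: Vec<u64>,
}

async fn handler(event: LambdaEvent<Value>, state: Arc<ExpensiveState>) -> Result<Value, Error> {
    // Within one execution environment, invocations arrive one at a time, so reads
    // of `state` never race with another in-flight handler call.
    let i = event.payload["i"].as_u64().unwrap_or(0) as usize;
    let value = state.table.get(i).copied().unwrap_or(0);
    Ok(json!({ "value": value }))
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    // Built once per execution environment (on cold start) and reused by every invocation.
    let state = Arc::new(ExpensiveState {
        table: (0..1024).collect(),
    });
    run(service_fn(move |event| {
        let state = Arc::clone(&state);
        async move { handler(event, state).await }
    }))
    .await
}
```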

