Server would occupy a lot of memory when method is not async #980
Comments
It seems that there is a limit of 32 threads in the source code.
No - we ought to have sensible configurations here on behalf of our users, rather than adding to the number of things they need to think about. If anyone's motivated enough to dig into clear justifications that the system defaults are appropriate for us, and has an actionable change that we oughta make, then let's consider that. Otherwise, let's just leave this as it is.
I think this was from before migrating to anyio. I guess this is still configurable outside of Starlette, so we can keep this closed.
Thanks @aminalaee 👍🏼
It's worth mentioning that you can modify the default capacity limiter on anyio. 👍 |
I don't think this is true
Yup, you're right, it can be modified; I thought you meant replace 😅
I just read the source code of Starlette, and I think I found the reason why it's occupying so much memory.
The problem is in `starlette/routing.py`, in the `request_response()` method.
My REST interface is not async, so it runs via `loop.run_in_executor`, but Starlette does not specify an executor here, so the default thread pool size should be os.cpu_count() * 5. My test machine has 40 CPUs, so I should have 200 threads in the pool. After each request, the objects held by these threads are not released unless the thread is reused by a later request, which occupies a lot of memory, especially when I wrap a large deep learning model in the server. My question is: could we make the thread pool size configurable?
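For Starlette versions that still dispatch sync endpoints via `loop.run_in_executor(None, ...)`, the pool size can be capped outside the framework: asyncio lets you swap the loop's implicit default executor for one with an explicit `max_workers`. A stdlib-only sketch, where `blocking_handler` and the pool size of 4 are illustrative, not anything from Starlette itself:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

def blocking_handler(x):
    # stand-in for a non-async endpoint (e.g. one holding a large model)
    return x * 2

async def main():
    loop = asyncio.get_running_loop()
    # run_in_executor(None, ...) falls back to the loop's default executor,
    # sized os.cpu_count() * 5 before Python 3.8 and min(32, cpu_count() + 4)
    # since; installing our own pool caps the number of worker threads.
    loop.set_default_executor(ThreadPoolExecutor(max_workers=4))
    result = await loop.run_in_executor(None, blocking_handler, 21)
    print(result)  # prints 42

asyncio.run(main())
```

Because worker threads keep references to the last call's frame objects alive until they are reused, a smaller pool also bounds how many such lingering references can accumulate, which is the memory effect described above.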