Replies: 2 comments
-
Hi @mvilanova, can you help me answer these questions?
-
Hi @ndtands
The short answer is that when we started the project, SQLAlchemy 2.x was not available, and we never prioritized switching to it. We may carve out some time to do it in the future, but it's not guaranteed. If you have some cycles and want to take a stab at it, we would be happy to review your pull requests.
Your understanding is correct. Although Dispatch uses FastAPI (which supports async by design) and includes async middleware, most of its current API endpoints (such as those in the provided views.py) are implemented synchronously. The concurrency limit you describe, where the number of concurrent requests is bounded by the number of available synchronous workers, therefore still applies.

The use of BackgroundTasks helps Dispatch return HTTP responses quickly and perform follow-up work without delaying the client, which improves perceived responsiveness. However, it does not remove the fundamental concurrency limitation caused by synchronous database calls. To handle significantly more concurrent requests than the worker count allows, Dispatch would need fully asynchronous endpoint handlers and database operations built on async-compatible libraries (such as asyncpg and SQLAlchemy's async support).
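The worker-bound limit described above can be illustrated with a small stdlib-only sketch. The 0.1 s sleep and the pool of 4 workers are stand-ins for a blocking database call and a synchronous worker pool; this is not Dispatch's actual code:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def sync_handler(i: int) -> int:
    # Simulate a blocking call, like a synchronous SQLAlchemy query.
    time.sleep(0.1)
    return i

start = time.perf_counter()
# 4 "workers" serving 8 requests: the second batch of 4 must wait
# until the first batch finishes, so total time is roughly two rounds.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(sync_handler, range(8)))
elapsed = time.perf_counter() - start
```

With blocking handlers, wall time grows with ceil(requests / workers); adding requests beyond the worker count only queues them.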
Are you asking whether it would be a good idea to build your Q&A system on top of Dispatch? If so, while Dispatch can be useful for inspiration, I don't think it would be a good base project to build upon, as it's very tailored to case and incident management. Does that make sense?
-
I’m wondering why the repository is still on SQLAlchemy 1.3.x when version 2.x, which also supports async, has been available for a while?
As I understand it, the current repository does not use async for its API endpoints. If that's the case, concurrency is limited by the number of workers: with 4 workers, at most 4 requests can be served concurrently, right?
However, if I design the API using async, I can handle a higher number of concurrent users because a single worker can manage multiple requests via an event loop, handling tasks in a non-blocking way. This allows more concurrent users without being limited by the number of workers.
Is my understanding correct?
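The event-loop behaviour described here can be sketched with a minimal asyncio example. The 0.1 s sleep stands in for any awaited, non-blocking I/O (such as an async database query); this is an illustration, not code from the repository:

```python
import asyncio
import time

async def handle_request(i: int) -> int:
    # Simulate awaited, non-blocking I/O (e.g. an async database query).
    await asyncio.sleep(0.1)
    return i

async def serve_all(n: int) -> list[int]:
    # A single event loop ("one worker") interleaves all n requests.
    return list(await asyncio.gather(*(handle_request(i) for i in range(n))))

start = time.perf_counter()
results = asyncio.run(serve_all(50))
elapsed = time.perf_counter() - start
# Because the waits overlap, total wall time stays close to one
# request's 0.1 s latency rather than 50 * 0.1 s.
```

Here one event loop serves all 50 requests in roughly the time of a single request, which is exactly why async handlers lift the per-worker ceiling for I/O-bound workloads.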
If I design a simple Q&A system that fully relies on third-party services that support async, such as Azure Search, LangGraph, and PostgreSQL, would this repository be a suitable base to build upon?