
getAsync() fails when retrieving too many models using set URL #580

Open
champ opened this issue Nov 22, 2017 · 4 comments

Comments

@champ

champ commented Nov 22, 2017

Issue

In cases where a model has a HasMany relation, when trying to fetch related models asynchronously using `getAsync()`, the set URL used (if your collection and backend support it) can exceed the maximum length allowed by typical web servers (Apache has a default limit of around 8 KB).

Solution

A simple approach to solving this would be to add a `chunkSize` option to `getAsync()` that splits the requested models across multiple set requests (here), similar to the default behavior of generating one request per model.

@bpatram
Collaborator

bpatram commented Nov 22, 2017

@champ I'm not sure this is something that should be this library's responsibility. Also, I'm a bit confused as to what is being described here; can you go into more detail or provide an example use case that is affected by this problem? (You can fork this jsfiddle.)

Are you talking about request body limitations or URL length limitations?

If this is related to URLs being too long, then you can define a custom url on your models and see if there is a better approach to how you write URLs. Unless I am misunderstanding your application setup, this is not something that can be solved by breaking it into multiple requests (as your solution suggests).

@champ
Author

champ commented Nov 22, 2017

@bpatram Thanks for the quick reply.

The issue is that the URLs generated behind the scenes in `getAsync()` can become too long, causing an error at the proxy or web server layer.

As you probably know, `getAsync()` has two strategies for fetching models:

• Single request - used if the collection can generate a URL representing the set of models, detected by checking whether the collection generates a different URL when models are passed than when none are (the base URL)
• One request per model - fallback strategy

The reported issue concerns the first strategy, since it uses a single URL whose size depends on the number of models to fetch. This is how Backbone-tastypie works, for example (linked in the docs). In typical cases this is not a problem, but with hundreds of models to fetch, the URL can exceed the limits imposed by web servers, proxies, or even browsers.
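To illustrate the first strategy, here is a minimal sketch (not the library's actual code) of how the "set URL" check could work: the collection stand-in mimics a Backbone-tastypie style `set/1;2/` URL, and the detection simply compares the URL generated with models against the base URL. `makeCollection` and `supportsSetUrl` are hypothetical names for illustration only.

```javascript
// Minimal stand-in for a collection whose url() accepts an optional
// model list, mimicking Backbone-tastypie style "set" URLs.
function makeCollection(base) {
  return {
    url(models) {
      // No models passed: return the base URL.
      if (!models || models.length === 0) return base;
      // Models passed: build a semicolon-separated "set" URL.
      const ids = models.map((m) => m.id).join(';');
      return `${base}set/${ids}/`;
    },
  };
}

// A set URL is assumed to be supported when passing models
// yields a URL different from the base URL.
function supportsSetUrl(collection, models) {
  return collection.url(models) !== collection.url();
}
```

With the stand-in above, `makeCollection('/api/item/').url([{ id: 1 }, { id: 2 }])` produces `/api/item/set/1;2/`, so the single-request strategy would be chosen.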

The proposed solution is to make `getAsync()` split the models to fetch into chunks and generate one request per chunk instead of a single request.

Since this is a mostly backend-related issue, it is difficult to demonstrate in a jsfiddle without an online API backend.

@bpatram
Collaborator

bpatram commented Nov 22, 2017

@champ I understand what you are attempting to do here and I think this does in fact make sense as a feature in this library.

In your proposal, does `chunkSize` refer to the URL string length or the number of models being fetched?

Here is a simple jsfiddle which I am assuming mimics your setup at a smaller scale: http://jsfiddle.net/xnq7xbyz/1/

@champ
Author

champ commented Nov 22, 2017

@bpatram The jsfiddle mimics the setup - we use Backbone-tastypie, which generates set URLs in a similar way.

The idea with `chunkSize` was to indicate the maximum number of models to fetch in each request; i.e. if we have 150 models to fetch and we call `getAsync()` with the option `chunkSize: 100`, it would generate 2 requests: the first with a URL generated from the first 100 models, and a second with a URL for the remaining 50.
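The chunking behaviour described above could be sketched roughly as follows. This is a hypothetical illustration, not the library's API: `chunk`, `getAsyncChunked`, and `fetchSet` are made-up names, with `fetchSet` standing in for whatever issues one set request and resolves with the fetched models.

```javascript
// Split an array of models into groups of at most `chunkSize` each.
function chunk(models, chunkSize) {
  const chunks = [];
  for (let i = 0; i < models.length; i += chunkSize) {
    chunks.push(models.slice(i, i + chunkSize));
  }
  return chunks;
}

// Issue one set request per chunk instead of a single oversized URL,
// and resolve once every chunk has been fetched.
function getAsyncChunked(models, { chunkSize, fetchSet }) {
  return Promise.all(chunk(models, chunkSize).map(fetchSet))
    .then((results) => results.flat());
}
```

For example, 150 models with `chunkSize: 100` would yield two groups (100 and 50 models), hence two requests, matching the behaviour described above.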

Although `chunkSize` is quite a common term for this kind of logic in other libraries, maybe `maxModelsPerRequest` would be a more explicit name for such an option. Any thoughts?
