When the spider finishes crawling, all the data is sent at once, so the client has to wait a long time. The payload is a single JSON with the vertices (domain names and node ids) and the edges (source id and destination id); Spider.WWWtoJSON() contains this logic.
It would be better if the client loaded the network/graph while the bot is still crawling the web.
To accomplish this, we need to send only the newly crawled data (the differences), not the whole state over and over.
I was thinking of creating a GET route such as /getNeighbors?node=domain.com, fetching the neighbors of each node the client has already received, and building the graph on the client, or something similar. (It shouldn't matter much if a few neighbors are sent on every request.)
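A minimal sketch of what such a route's server-side logic could look like, assuming an in-memory adjacency map maintained by the spider. All names here (`graph`, `addEdge`, `getNeighbors`) are illustrative, not the project's actual identifiers:

```javascript
// Adjacency map kept up to date by the spider as it crawls:
// domain -> Set of neighbor domains discovered so far.
const graph = new Map();

function addEdge(source, destination) {
  if (!graph.has(source)) graph.set(source, new Set());
  graph.get(source).add(destination);
}

// Returns the neighbors of `node` known at call time. The client can
// call this lazily for each node it receives, expanding the graph
// incrementally instead of waiting for the full crawl to finish.
function getNeighbors(node) {
  return [...(graph.get(node) || [])];
}

// With an Express-style app, the route wiring would look roughly like:
// app.get('/getNeighbors', (req, res) =>
//   res.json(getNeighbors(req.query.node)));
```

Since the response only ever contains one node's neighbors, repeated requests stay small even as the overall graph grows.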
Alternatively, the client could request the diff every X seconds, as long as the routes stay organized.
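One way to sketch the diff approach: tag every new vertex and edge with a monotonically increasing sequence number, and let the client poll a hypothetical route like /diff?since=&lt;lastSeq&gt;, so only the changes since its last poll travel over the wire. All identifiers below are assumptions for illustration:

```javascript
// Append-only log of crawl events, each stamped with a sequence number.
let seq = 0;
const log = []; // entries: { seq, kind, data }

// Called by the spider whenever it discovers a new domain.
function recordVertex(id, domain) {
  log.push({ seq: ++seq, kind: 'vertex', data: { id, domain } });
}

// Called by the spider whenever it discovers a new link between domains.
function recordEdge(sourceId, destId) {
  log.push({ seq: ++seq, kind: 'edge', data: { sourceId, destId } });
}

// Everything recorded after `since`. The client remembers the highest
// seq it has seen and passes it on the next poll, so each response is
// exactly the difference, never the whole state.
function diffSince(since) {
  return log.filter(entry => entry.seq > since);
}
```

The client would merge each batch into its local graph and update `since` to the last entry's `seq`, which keeps the polling route stateless on the server side.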