feat: context cache #70
base: master
Conversation
Pull Request Test Coverage Report for Build 6666516387
💛 - Coveralls
This looks very neat, thanks @jeswr! Before I do an in-depth review, I have some general questions.
Pinging @sdevalk, as this will interest him regarding rubensworks/rdf-dereference.js#48
In the worst case it seems to be an extra 50% overhead. It could be worked around by caching by reference rather than by hash most of the time (see https://github.com/inrupt/solid-client-vc-js/blob/0f8ce276b6ea8a977b9d2ea189bc92385ef44b48/src/parser/jsonld.ts#L70-L111); however, the danger there is that you accidentally retain a large amount of memory by using contexts as keys, so I'm leaving this for a subsequent piece of work, once better benchmarking is in place and we have a good way of pruning entries quickly. FYI, the custom caching mechanism in https://github.com/inrupt/solid-client-vc-js/blob/0f8ce276b6ea8a977b9d2ea189bc92385ef44b48/src/parser/jsonld.ts reduced the time on e2e tests from 20 minutes to 20 seconds.
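To make the memory danger concrete, here is a minimal TypeScript sketch (mine, not code from this PR) contrasting the two ways of keying a by-reference cache; `Parsed` is a stand-in type for whatever a parsed context looks like:

```typescript
// Stand-in for the parsed-context type; purely illustrative.
type Parsed = Record<string, unknown>;

// A plain Map keyed by context object strongly references every context it
// has ever seen, so none of them can be garbage-collected: this is the
// accidental memory consumption described above.
const strongCache = new Map<object, Parsed>();

// A WeakMap lets a context (and its entry) be collected once nothing else
// references it, but it cannot be iterated, so size- or age-based pruning
// is impossible; hence the need for benchmarking and a pruning strategy.
const weakCache = new WeakMap<object, Parsed>();
```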
I've disabled caching by default in b48cc07, so we will need to make an update to
I'm not seeing
I suspect that caching by reference would consume less memory than hashing, since the cache then only stores pointers to shared memory.
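As a rough sketch of why (again mine, not from the PR): a hash key must be derived from the entire context on every lookup and is then retained by the cache, whereas a reference key is just the object's identity:

```typescript
import { createHash } from "crypto";

// Hash-based keying: every lookup serializes the whole context (ignoring
// key-order canonicalization for brevity), and the cache stores the
// derived key string alongside each value.
const byHash = new Map<string, unknown>();
const keyOf = (ctx: object) =>
  createHash("sha256").update(JSON.stringify(ctx)).digest("hex");

// Reference-based keying: no serialization and no stored key string; the
// pointer to the shared context object is the key.
const byRef = new WeakMap<object, unknown>();
```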
Ooh, nice!
This will have to be tested by manually plugging this into jsonld-streaming-parser.
My main concern here is that if we have a poor configuration of the
I would suspect
Creates a context caching mechanism. The use case for this is parsing numerous JSON-LD objects that all share the same context object, where context parsing is often the main bottleneck; see the performance results below and the usage sketch after them:
Parse a context that has not been cached, and without caching in place x 108 ops/sec ±0.32% (86 runs sampled)
Parse a list of iri contexts that have been cached x 79,985 ops/sec ±0.44% (90 runs sampled)
Parse a context object that has not been cached x 1,950 ops/sec ±1.36% (90 runs sampled)
Parse a context object that has been cached x 7,637 ops/sec ±0.20% (91 runs sampled)
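For the record, a minimal sketch of the usage pattern this is aimed at, written against jsonld-context-parser's public `ContextParser` API; the memoizing wrapper `cachedParse` is a hypothetical helper for illustration, not the mechanism added by this PR:

```typescript
import { ContextParser, JsonLdContextNormalized } from "jsonld-context-parser";

const parser = new ContextParser();

// Keyed by reference: repeated parses of the very same context object hit
// the cache, and the WeakMap lets discarded contexts be garbage-collected.
const cache = new WeakMap<object, Promise<JsonLdContextNormalized>>();

function cachedParse(
  context: Record<string, any>,
): Promise<JsonLdContextNormalized> {
  let parsed = cache.get(context);
  if (!parsed) {
    parsed = parser.parse(context);
    cache.set(context, parsed);
  }
  return parsed;
}

// Many JSON-LD objects sharing one context object pay the context-parsing
// cost only once. (Top-level await assumes an ES module.)
const sharedContext = { ex: "http://example.org/" };
await cachedParse(sharedContext); // parses
await cachedParse(sharedContext); // cache hit
```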