Support grouping multiple rows of first match in an interval #347
Comments
This issue is especially bad for Parca itself. On our cloud with the aggregate_view table, it shouldn't be such a bad problem for now.
@metalmatze As far as I can tell the … That said, I think this is different from what we do here in Parca, where we take the first result in the interval. I'm wondering if maybe what we want is a …
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
Right now Parca queries FrostDB for all the timestamps, and on the server side it ignores further metric samples once a sample has already been found in the step's bucket.
https://github.com/parca-dev/parca/blob/ed90dbeb684186e9cdb295bc0f62c723ed3c5a9f/pkg/parcacol/querier.go#L571-L579
This isn't ideal since the data is still queried...
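A minimal Go sketch of that server-side behavior (the type and function names here are hypothetical, not Parca's actual code): every sample comes back from the query, and any sample after the first one seen in a step bucket is simply discarded.

```go
package main

import "fmt"

// Sample is an illustrative (timestamp, stacktrace, value) row, not
// Parca's actual sample type.
type Sample struct {
	Timestamp int64
	Stack     string
	Value     int64
}

// firstPerBucket mimics the current server-side filtering: the query
// returns all samples, and everything after the first sample seen in a
// step bucket is dropped after the fact.
func firstPerBucket(samples []Sample, step int64) []Sample {
	var out []Sample
	seen := map[int64]bool{}
	for _, s := range samples {
		bucket := s.Timestamp / step
		if seen[bucket] {
			continue // the data was still queried, but is discarded here
		}
		seen[bucket] = true
		out = append(out, s)
	}
	return out
}

func main() {
	samples := []Sample{
		{1, "stack1", 3},
		{1, "stack2", 5}, // same first timestamp, but thrown away
		{4, "stack1", 2},
		{12, "stack3", 7},
	}
	fmt.Println(firstPerBucket(samples, 10))
}
```

Note how the second row at timestamp 1 is dropped even though it belongs to the same "first match" as the row that is kept.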
FrostDB should add support for finding all rows within a bucket of an interval and only then start aggregating.
The row data would look like this:
What we want: we only want to return the sum(value) for the first timestamp we find in the bucket that ranges from 0-9.
So what we want as a result is:
What we currently get, however, is only the very first row that falls within that bucket
(1, stack1, 3)
and then the sum of its value, which is basically a no-op.
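The requested semantics can be sketched in Go, under the assumption that rows are (timestamp, stacktrace, value) tuples (names are illustrative, not FrostDB's actual API): for each step bucket, find the first timestamp, keep *all* rows sharing that timestamp, and return the sum of their values.

```go
package main

import "fmt"

// Sample is an illustrative (timestamp, stacktrace, value) row.
type Sample struct {
	Timestamp int64
	Stack     string
	Value     int64
}

// sumFirstTimestampPerBucket sketches the desired behavior: per step
// bucket, keep every row that shares the bucket's first (smallest)
// timestamp and return sum(value) over those rows, keyed by bucket.
func sumFirstTimestampPerBucket(samples []Sample, step int64) map[int64]int64 {
	first := map[int64]int64{} // bucket -> first timestamp seen so far
	sums := map[int64]int64{}  // bucket -> sum(value) at that timestamp
	for _, s := range samples {
		bucket := s.Timestamp / step
		ts, ok := first[bucket]
		switch {
		case !ok || s.Timestamp < ts:
			// A new (earlier) first timestamp: restart the sum.
			first[bucket] = s.Timestamp
			sums[bucket] = s.Value
		case s.Timestamp == ts:
			// Another row at the first timestamp: include it in the sum.
			sums[bucket] += s.Value
		}
	}
	return sums
}

func main() {
	samples := []Sample{
		{1, "stack1", 3},
		{1, "stack2", 5}, // same first timestamp: now counted
		{4, "stack1", 2}, // later timestamp in the same bucket: ignored
	}
	fmt.Println(sumFirstTimestampPerBucket(samples, 10))
}
```

With this, the 0-9 bucket yields 3+5=8 rather than the lone first row's value of 3, which is what "grouping multiple rows of first match in an interval" asks for; doing it inside FrostDB would also avoid shipping the discarded rows over the wire.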