
Thanos Query: gaps in deduplicated data #7656

Open
ppietka-bp opened this issue Aug 21, 2024 · 8 comments
@ppietka-bp

Thanos, Prometheus and Golang version used:
thanos, version 0.35.1 (branch: HEAD, revision: 086a698)
build user: root@be0f036fd8fa
build date: 20240528-13:54:20
go version: go1.21.10
platform: linux/amd64
tags: netgo
prometheus, version 2.32.1 (branch: HEAD, revision: 41f1a8125e664985dd30674e5bdf6b683eff5d32)
build user: root@54b6dbd48b97
build date: 20211217-22:08:06
go version: go1.17.5
platform: linux/amd64

Object Storage Provider:
Ceph

What happened:
Thanos Query: gaps in deduplicated data

How to reproduce it:
Two Prometheus instances scrape data from the sources and from other federated Prometheus servers on OpenShift.
As long as we query the data without deduplication, the data is continuous.

Anything else we need to know:
Screenshots attached: prometheus2, prometheus1, thanos, thanos_dedup.

What you expected to happen:
Deduplication should properly combine the two datasets.
(Screenshots: prometheus2, prometheus1, thanos, thanos_dedup.)

@ppietka-bp
Author

Results of the investigation:
Deduplication only works one way. If we deduplicate metrics by a label, e.g. replica, which takes the values 0 or 1, missing data with label replica='0' is filled in by data with replica='1', but missing data with label replica='1' is not filled in by data with replica='0'.

In our opinion, deduplication should work both ways and assemble the data according to the replica label so that the metric shows a continuous series.
Screenshots attached: dedup, replica0, replica1.
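To make the requested behavior concrete, here is a toy Go sketch (not Thanos code) of a "both ways" merge: take the union of timestamps across all replicas and, where several replicas have a sample at the same timestamp, prefer the earliest replica. The `sample` type is hypothetical, and the sketch assumes replicas scrape on aligned timestamps, which real replicas do not; that misalignment is exactly why Thanos cannot merge this naively.

```go
package main

import (
	"fmt"
	"sort"
)

// sample is a simplified (timestamp, value) pair standing in for a
// Prometheus sample; purely illustrative.
type sample struct {
	t int64
	v float64
}

// unionDedup sketches the "both ways" merge: the union of timestamps
// across replicas, with earlier replicas winning on ties. NOT what
// Thanos does today; only safe under aligned scrape schedules.
func unionDedup(replicas ...[]sample) []sample {
	merged := map[int64]float64{}
	// Write later replicas first so earlier replicas override on ties.
	for i := len(replicas) - 1; i >= 0; i-- {
		for _, s := range replicas[i] {
			merged[s.t] = s.v
		}
	}
	ts := make([]int64, 0, len(merged))
	for t := range merged {
		ts = append(ts, t)
	}
	sort.Slice(ts, func(i, j int) bool { return ts[i] < ts[j] })
	out := make([]sample, 0, len(ts))
	for _, t := range ts {
		out = append(out, sample{t, merged[t]})
	}
	return out
}

func main() {
	r0 := []sample{{0, 1}, {10, 2}, {40, 5}}          // gap between t=10 and t=40
	r1 := []sample{{0, 1}, {10, 2}, {20, 3}, {30, 4}} // gap after t=30
	for _, s := range unionDedup(r0, r1) {
		fmt.Printf("t=%d v=%g\n", s.t, s.v) // gaps filled in both directions
	}
}
```

With these inputs the merged series is continuous at t=0,10,20,30,40, i.e. each replica's gap is filled by the other.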

@MichaHoffmann
Contributor

Deduplicating time series data is surprisingly hard! I have no great idea how to do it properly. The approach that Thanos takes at query time is roughly that we start with some replica and then, if the gap gets too large, we switch over. But this has had numerous edge cases in the past. I wonder how we could improve it.
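The switching approach described above can be sketched roughly like this. This is a toy illustration, not the actual Thanos dedup iterator: here `penalty` is a fixed gap threshold, whereas the real penalty algorithm adapts it over time.

```go
package main

import "fmt"

// sample is a simplified (timestamp, value) pair; purely illustrative.
type sample struct {
	t int64
	v float64
}

// dedupSwitch emits from an "active" replica and switches to the other
// one only when the active replica's next sample is more than penalty
// ms after the last emitted sample. A toy sketch of the idea only.
func dedupSwitch(a, b []sample, penalty int64) []sample {
	reps := [2][]sample{a, b}
	idx := [2]int{}
	active := 0
	lastT := int64(0)
	first := true
	var out []sample
	for {
		// Skip samples already covered by what we have emitted.
		for r := 0; r < 2; r++ {
			for idx[r] < len(reps[r]) && !first && reps[r][idx[r]].t <= lastT {
				idx[r]++
			}
		}
		cur, other := active, 1-active
		curOK := idx[cur] < len(reps[cur])
		otherOK := idx[other] < len(reps[other])
		switch {
		case curOK && (first || reps[cur][idx[cur]].t-lastT <= penalty || !otherOK):
			// Stay on the active replica while its gap is acceptable.
			out = append(out, reps[cur][idx[cur]])
		case otherOK:
			// Gap too large (or active replica exhausted): switch over.
			active = other
			out = append(out, reps[other][idx[other]])
		default:
			return out
		}
		lastT = out[len(out)-1].t
		first = false
	}
}

func main() {
	r0 := []sample{{0, 1}, {10, 2}, {20, 3}, {70, 8}} // 50ms gap after t=20
	r1 := []sample{{5, 1}, {15, 2}, {25, 3}, {35, 4}, {45, 5}, {55, 6}, {65, 7}}
	for _, s := range dedupSwitch(r0, r1, 15) {
		fmt.Printf("t=%d\n", s.t)
	}
}
```

Even this simplified version shows why the approach is fragile: the result depends on which replica happens to be active when a gap appears, and on how the threshold compares to the scrape interval.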

@ppietka-bp
Author

Thanks for your reply. I look forward to solving this agonizing problem.

@MichaHoffmann
Contributor

Yeah I'm happy to brainstorm about this if you have an idea!

@ppietka-bp
Author

For the moment, I still have no idea where to start or what guidelines we should adopt, except, of course, that the goal is one consistent data set after deduplication. I wonder whether the deduplication algorithm used by the Compactor via "--deduplication.func=penalty", applied to the Querier, would solve the problem. Assuming, of course, that it's not itself the cause.

@MichaHoffmann
Contributor

Penalty is the same algorithm that the querier uses, though.

@lachruzam

@MichaHoffmann Wouldn't putting a configurable upper bound on the penalty solve this issue (or at least allow fixing it by configuration)?

@MichaHoffmann
Contributor

> @MichaHoffmann Wouldn't putting a configurable upper bound on the penalty solve this issue (or at least allow fixing it by configuration)?

In the sense that we always switch replica if the gap is at least this configured size?
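A sketch of what such a knob could mean, assuming a doubling-style growth rule (the names and the exact growth rule here are illustrative, not Thanos code): the penalty grows with the observed gap, but a configurable maxPenalty guarantees the iterator switches replicas once a gap reaches that bound, instead of tolerating ever-larger gaps on the active replica.

```go
package main

import "fmt"

// growPenalty is a hypothetical capped penalty update: grow the
// tolerated gap with what we observe, but never beyond maxPenalty,
// so a switch is guaranteed once a gap reaches the cap.
func growPenalty(current, gap, maxPenalty int64) int64 {
	p := 2 * gap // grow with the observed gap, doubling-style
	if p < current {
		p = current // never shrink below the current penalty
	}
	if p > maxPenalty {
		p = maxPenalty // the configurable upper bound discussed above
	}
	return p
}

func main() {
	p := int64(5000) // e.g. a 5s initial penalty
	for _, gap := range []int64{6000, 20000, 120000} {
		p = growPenalty(p, gap, 30000)
		fmt.Printf("gap=%dms -> penalty=%dms\n", gap, p)
	}
}
```

With a 30s cap, the penalty here settles at 30000ms no matter how large the observed gaps get, which is the "always switch once the gap is at least this size" semantics.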
