Longer loading times for index headers with Thanos Receive compared to Thanos Sidecar
Thanos, Prometheus and Golang version used:
Object Storage Provider:
Ceph Cluster
What happened:
We noticed that our Thanos Stores, which use object storage as their backend, take significantly longer to load each block/index header when the blocks were written by Thanos Receive than when they were written by Thanos Sidecar.
We then checked the two buckets: both are of similar size, and their blocks are similar in size as well. For Tenant 1 we have 3112 index files with an average size of 348 MB; for Tenant 2 we have 2993 index files with an average size of 309 MB. That does not explain why Tenant 1 takes on average 2-3 times as long to load per index header as Tenant 2.
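For scale, that is roughly 3112 × 348 MB ≈ 1.08 TB of index data for Tenant 1 versus 2993 × 309 MB ≈ 0.92 TB for Tenant 2, only about 17% more, which is nowhere near enough to account for a 2-3x difference in load time.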
What you expected to happen:
According to the documentation, the source that wrote the blocks should make no difference to how the Thanos Store builds its index headers.
With buckets of similar size and a similar number of blocks to load, generating the index headers should therefore take roughly the same time in both cases.
How to reproduce it (as minimally and precisely as possible):
Compare a bucket that was filled by Thanos Receive with one that was filled by Thanos Sidecar: start one store for the receive bucket and one for the sidecar bucket. You should then be able to observe the difference; a minimal setup is sketched below.
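As a sketch, the two stores can be run side by side with the standard Thanos Store flags; bucket-receive.yml and bucket-sidecar.yml are hypothetical objstore configuration files pointing at the respective buckets:

thanos store --objstore.config-file=bucket-receive.yml --data-dir=/var/thanos/store-receive --grpc-address=0.0.0.0:10901 --http-address=0.0.0.0:10902
thanos store --objstore.config-file=bucket-sidecar.yml --data-dir=/var/thanos/store-sidecar --grpc-address=0.0.0.0:11901 --http-address=0.0.0.0:11902

Comparing thanos_bucket_store_indexheader_load_duration_seconds between the two instances then makes the gap measurable.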
Full logs to relevant components:
Receiver blocks: sample output from thanos tools bucket inspect.
Sidecar blocks: sample output from thanos tools bucket inspect.
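The inspect output above can be regenerated against either bucket with the standard tooling; bucket.yml is again a hypothetical objstore configuration file:

thanos tools bucket inspect --objstore.config-file=bucket.yml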
Metric graphs:
thanos_bucket_store_blocks_loaded, e.g. sum(thanos_bucket_store_blocks_loaded{region="fsn1", pod="store-test-0"}) by (namespace, region)
thanos_bucket_store_indexheader_load_duration_seconds_sum
thanos_objstore_bucket_operation_transferred_bytes_sum
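To turn the duration histogram into an average load time per index header, one option is the usual rate-of-sum over rate-of-count division (assuming the metric's _count companion series, which Prometheus histograms expose):

sum(rate(thanos_bucket_store_indexheader_load_duration_seconds_sum[5m])) / sum(rate(thanos_bucket_store_indexheader_load_duration_seconds_count[5m]))

Grouping both sides by pod or namespace makes the receive-backed and sidecar-backed stores directly comparable.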
Follow-up comment:
Hey, yeah, this is always reproducible, even with new buckets. We currently both scrape and receive the same data for one tenant, and this behavior is reproducible for that tenant as well.