[Problem] Unable to update grpc_server_max_recv_msg_size and grpc_server_max_send_msg_size in server stanza setting in TempoStack #1105
Just for an update: I've updated the Tempo operator to the latest version, but I still do not find an option to update the below parameters (grpc_server_max_recv_msg_size and grpc_server_max_send_msg_size) in the TempoStack.
Any leads would be really helpful. Thanks in advance!!
Let's move this issue to the tempo operator repo. I don't know much about it myself and they are more likely to be able to help you.
@joe-elliott It seems that, currently, the custom resource has no specific parameters to modify these values directly. Checking the config file, I have found the following:
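The exact snippet isn't reproduced above; roughly, the server block in the operator-generated tempo.yaml looks along these lines (the numbers are illustrative, not necessarily the defaults rendered by your operator version):

```yaml
server:
  # gRPC message size limits in the generated tempo.yaml;
  # the values below are illustrative placeholders only.
  grpc_server_max_recv_msg_size: 4194304
  grpc_server_max_send_msg_size: 4194304
```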
I was able to set this using
Thanks !! |
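The exact snippet that worked isn't shown above, so the following is only a guess at one possible approach: passing raw Tempo config through the TempoStack spec.extraConfig field, assuming the installed operator version exposes it. The field placement and values here are assumptions, not confirmed from this thread.

```yaml
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: simplest                                 # placeholder name
spec:
  extraConfig:
    tempo:
      server:
        grpc_server_max_recv_msg_size: 20971520  # illustrative value (~20 MiB)
        grpc_server_max_send_msg_size: 20971520  # illustrative value (~20 MiB)
```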
Now, I'm struggling with setting up the below configuration under
This has been solved, thanks!
We have Tempo Operator version 0.14.1-2 provided by Red Hat installed in our environment, and we have created a TempoStack instance using the configuration below.
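The original manifest is not preserved in this thread; a minimal TempoStack along the lines described might look roughly like the following, where the name, namespace, and storage secret are placeholders:

```yaml
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: simplest            # placeholder name
  namespace: tempo          # placeholder namespace
spec:
  storageSize: 10Gi
  storage:
    secret:
      name: object-storage  # placeholder secret holding S3 credentials
      type: s3
```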
With the above configuration, it has deployed the TempoStack and the other dependent components. Here, I'm looking for two additional things:
Whenever we try to set resource quotas individually for each component, e.g. the compactor via spec.template.compactor.resources.limits, the values get overwritten, because of which the compactor pod does not get enough compute resources and keeps restarting with CrashLoopBackOff (a sketch of the field path is shown below).
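For illustration, this is the shape of the per-component resources setting being referred to; the CPU and memory figures are placeholders, not the values actually used:

```yaml
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: simplest        # placeholder name
spec:
  template:
    compactor:
      resources:        # the per-component field path mentioned above
        limits:
          cpu: "2"      # illustrative values only
          memory: 2Gi
        requests:
          cpu: 500m
          memory: 1Gi
```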
Looking deeper, we noticed the following set of errors in the pods of the TempoStack instance:
Distributor pod:
Ingester pod:
Querier pod:
From some research, it seems like we need to bump up the maximum trace size. By default, that is set to 5000000 bytes (roughly 5 MB).
As per the Tempo documentation (https://grafana.com/docs/tempo/latest/configuration/#ingestion-limits), "overrides" can be used to increase this (there is a caution against going too large, however). We have added the parameter as below.
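The parameter that was added isn't shown above; following the linked ingestion-limits docs, it would look roughly like this. The 20000000 figure is an illustrative choice (just larger than the 5000000-byte default), and newer Tempo releases nest these overrides under a defaults block:

```yaml
# Tempo configuration, overrides block
overrides:
  max_bytes_per_trace: 20000000  # illustrative value, larger than the default 5000000
```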
The issue with the querier pod seems to be due to a gRPC message size limit between TempoStack components. As suggested in grafana/tempo#1097, I think we need to change settings in both tempo.yaml and tempo-query-frontend.yaml to increase these to at least the max_bytes_per_trace size.
We are trying to change the existing server stanza settings with the values below; however, they are getting overwritten.
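The exact values being attempted aren't preserved above; as a sketch, the server stanza settings from the issue title would be raised along these lines, with the numbers chosen illustratively so that they are at least as large as max_bytes_per_trace:

```yaml
server:
  grpc_server_max_recv_msg_size: 20000000  # illustrative: >= max_bytes_per_trace
  grpc_server_max_send_msg_size: 20000000  # illustrative: >= max_bytes_per_trace
```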
Can anyone help us here and suggest how and where to change these settings?
Thanks in advance !!