“Insufficient active worker nodes. Waited 5.00m for at least 1 workers, but only 0 workers are active” in my Trino cluster; where is the problem? #16071
Unanswered
zhangxiao696 asked this question in Q&A
-
Make sure you have the same catalog config files on all nodes.
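Catalogs are defined by properties files under etc/catalog/ on each node, and the coordinator and every worker are expected to carry the same set. As a minimal sketch, with tpch standing in as a placeholder for whatever catalogs this cluster actually defines, the same file would be present on every node:

# etc/catalog/tpch.properties (same content on the coordinator and all workers)
connector.name=tpch

A quick sanity check is to run SHOW CATALOGS against the cluster and compare the result with the files present in each pod.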
3 replies
-
I can't find any query-processing logs on my worker, but the worker process is running.
Worker logs:
.........
2023-02-09T14:04:53.198+0800 INFO main Bootstrap exchange.sink-max-file-size 1GB 1GB Max size of files written by exchange sinks
2023-02-09T14:04:53.198+0800 INFO main Bootstrap exchange.source-concurrent-readers 4 4
2023-02-09T14:04:53.198+0800 INFO main Bootstrap exchange.max-output-partition-count 50 50
2023-02-09T14:04:53.198+0800 INFO main Bootstrap exchange.max-page-storage-size 16MB 16MB Max storage size of a page written to a sink, including the page itself and its size represented as an int
2023-02-09T14:04:53.303+0800 INFO main io.airlift.bootstrap.LifeCycleManager Life cycle starting...
2023-02-09T14:04:53.303+0800 INFO main io.airlift.bootstrap.LifeCycleManager Life cycle started
2023-02-09T14:04:53.303+0800 INFO main io.trino.exchange.ExchangeManagerRegistry -- Loaded exchange manager filesystem --
2023-02-09T14:04:53.321+0800 INFO main io.trino.server.Server ======== SERVER STARTED ========
2023-02-09T16:36:42.439+0800 INFO node-state-poller-0 io.trino.metadata.DiscoveryNodeManager Previously active node is missing: f39054cc-163a-4bd1-b6c1-75b181e4e6df (last seen at localhost)
2023-02-09T20:52:53.502+0800 INFO node-state-poller-0 io.trino.metadata.DiscoveryNodeManager Previously active node is missing: 1e7f41a0-b1d2-47bf-81af-a8d07ee2d0a9 (last seen at localhost)
2023-02-09T20:52:53.502+0800 INFO node-state-poller-0 io.trino.metadata.DiscoveryNodeManager Previously active node is missing: ffffffff-ffff-ffff-ffff-fffffffffffh (last seen at localhost)
2023-02-09T20:53:03.504+0800 INFO node-state-poller-0 io.trino.metadata.DiscoveryNodeManager Previously active node is missing: ffffffff-ffff-ffff-ffff-fffffffffffi (last seen at localhost)
2023-02-09T20:53:03.504+0800 INFO node-state-poller-0 io.trino.metadata.DiscoveryNodeManager Previously active node is missing: ffffffff-ffff-ffff-ffff-ffffffffffff (last seen at localhost)
2023-02-09T20:53:03.504+0800 INFO node-state-poller-0 io.trino.metadata.DiscoveryNodeManager Previously active node is missing: ffffffff-ffff-ffff-ffff-fffffffffffg (last seen at localhost)
2023-02-09T20:53:03.504+0800 INFO node-state-poller-0 io.trino.metadata.DiscoveryNodeManager Previously active node is missing: 95eb5baa-61eb-41d5-816d-e11a046ea174 (last seen at localhost)
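The "Previously active node is missing" lines mean nodes that had registered with the discovery service stopped being seen, and each of them was last seen at localhost. One way to cross-check which nodes are currently considered active is the system.runtime.nodes table; a sketch, assuming the Trino CLI can reach the same address used in discovery.uri:

trino --server http://service-xdata-trino:8086 --execute "SELECT node_id, http_uri, coordinator, state FROM system.runtime.nodes"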
config-coordinator.properties:
node.id=ffffffff-ffff-ffff-ffff-ffffffffffff
node.environment=test
node.internal-address=localhost
experimental.concurrent-startup=true
http-server.http.port=8086
discovery.uri=http://service-xdata-trino:8086/
exchange.http-client.max-connections=1000
exchange.http-client.max-connections-per-server=1000
exchange.http-client.connect-timeout=1m
exchange.http-client.idle-timeout=1m
scheduler.http-client.max-connections=1000
scheduler.http-client.max-connections-per-server=1000
scheduler.http-client.connect-timeout=1m
scheduler.http-client.idle-timeout=1m
query.client.timeout=5m
query.min-expire-age=30m
query.max-memory=40GB
query.max-total-memory=60GB
query.max-memory-per-node=10GB
coordinator=true
node-scheduler.include-coordinator=false
config-worker.properties:
node.environment=test
node.internal-address=localhost
experimental.concurrent-startup=true
http-server.http.port=8086
discovery.uri=http://service-xdata-trino:8086/
exchange.http-client.max-connections=1000
exchange.http-client.max-connections-per-server=1000
exchange.http-client.connect-timeout=1m
exchange.http-client.idle-timeout=1m
query.client.timeout=5m
query.min-expire-age=30m
query.max-memory=40GB
query.max-total-memory=60GB
query.max-memory-per-node=10GB
coordinator=false
#node-scheduler.include-coordinator=false
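For reference, here is a minimal sketch of the node-identity settings when the coordinator and workers run in separate pods. The property names are the same ones used above; the values are placeholders for illustration, not a statement of what this cluster must use:

# Sketch only; values are placeholders.
# node.id has to be unique per node, and node.internal-address should be an
# address that the other nodes can actually reach (rather than localhost on every pod).
node.environment=test
node.id=worker-0
node.internal-address=10.42.0.17

In the files above, both roles announce node.internal-address=localhost, which is also where the "missing" nodes in the worker log were last seen.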
This is my cluster:
[root@ecs-project7-015 ~]# kubectl get pods --kubeconfig /root/.kube/docker2.config | grep trino
xdata-trino-coordinator-55b469db87-4zct7 1/1 Running 0 24h
xdata-trino-coordinator-55b469db87-c6q7c 1/1 Running 0 24h
xdata-trino-coordinator-55b469db87-dklw4 1/1 Running 0 24h
xdata-trino-coordinator-55b469db87-j787k 1/1 Running 0 24h
xdata-trino-coordinator-55b469db87-t7m8v 1/1 Running 0 24h
xdata-trino-coordinator-55b469db87-tq65k 1/1 Running 0 24h
xdata-trino-worker-5c4d99bb47-5zvq8 1/1 Running 0 24h
xdata-trino-worker-5c4d99bb47-66hzs 1/1 Running 0 24h
xdata-trino-worker-5c4d99bb47-97fd8 1/1 Running 0 24h
xdata-trino-worker-5c4d99bb47-bvrc7 1/1 Running 0 24h
xdata-trino-worker-5c4d99bb47-c744t 1/1 Running 0 24h
xdata-trino-worker-5c4d99bb47-cm4vs 1/1 Running 0 24h
xdata-trino-worker-5c4d99bb47-hvjzt 1/1 Running 0 24h
[root@ecs-project7-015 ~]# kubectl2 get svc | grep trino
service-xdata-trino ClusterIP fd11:1111:1111:15::152e 8086/TCP 87d
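The cluster runs six coordinator pods and seven worker pods, and discovery.uri on every node points at the service-xdata-trino ClusterIP service. To see what an individual coordinator pod is reporting, its log can be read directly; the pod name below is taken from the listing above and the kubeconfig path is the one already used for kubectl get pods:

kubectl logs xdata-trino-coordinator-55b469db87-4zct7 --kubeconfig /root/.kube/docker2.config | grep -iE "node is missing|SERVER STARTED"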