I often get this error when I upload a backup to S3 (B2, in fact):

error UploadCompressedStream return error: upload multipart failed, upload id: 4_za21a4ab9e5e1e9048ea50c19_f206fd03bda45e1fc_d20231031_m024808_c000_v0001411_t0017_u01698720488631, cause: operation error S3: UploadPart, failed to get rate limit token, retry quota exceeded, 0 available, 5 requested logger=uploadTableData
error can't acquire semaphore during Upload data parts: context canceled logger=uploadTableData
error UploadCompressedStream return error: context canceled logger=uploadTableData
error one of upload table go-routine return error: one of uploadTableData go-routine return error: can't upload: upload multipart failed, upload id: 4_za21a4ab9e5e1e9048ea50c19_f206fd03bda45e1fc_d20231031_m024808_c000_v0001411_t0017_u01698720488631, cause: operation error S3: UploadPart, failed to get rate limit token, retry quota exceeded, 0 available, 5 requested

My configuration is:

~# cat /etc/clickhouse-backup/config-lts.yml | grep -ve key -ve endpoint -ve kaiko -ve consul -ve bucket -ve username -ve passw
# Ansible managed
general:
  remote_storage: s3
  disable_progress_bar: true
  backups_to_keep_local: 0
  backups_to_keep_remote: 0
  log_level: warn
  allow_empty_backups: false
  download_concurrency: 10
  upload_concurrency: 10
  restore_schema_on_cluster: "{cluster}"
  upload_by_part: true
  download_by_part: true
  restore_database_mapping: {}
  retries_on_failure: 3
  upload_retries_pause: 100ms
  watch_interval: 1h
  full_interval: 24h
  watch_backup_name_template: shard{shard}-{type}-{time:20060102150405}
  retriesduration: 100ms
  watchduration: 1h0m0s
  fullduration: 24h0m0s
clickhouse:
  host: localhost
  port: 9000
  disk_mapping: {}
  skip_tables:
    - system.*
    - default.*
  timeout: 5m
  freeze_by_part: false
  freeze_by_part_where: ""
  use_embedded_backup_restore: false
  embedded_backup_disk: ""
  secure: false
  skip_verify: false
  sync_replicated_tables: false
  log_sql_queries: true
  config_dir: /etc/clickhouse-server/
  restart_command: systemctl restart clickhouse-server
  ignore_not_exists_error_during_freeze: true
  check_replicas_before_attach: true
  tls_cert: ""
  tls_ca: ""
  debug: false
s3:
region: "us-west-000"
acl: ""
assume_role_arn: ""
force_path_style: false
path: ""
disable_ssl: false
compression_level: 1
compression_format: tar
sse: ""
disable_cert_verification: false
use_custom_storage_class: false
storage_class: STANDARD
concurrency: 10
part_size: 0
max_parts_count: 10000
allow_multipart_download: false
debug: false
custom:
  upload_command: "rclone sync --fast-list --drive-chunk-size=512M --transfers=40 --checkers=40 --buffer-size 256M --s3-upload-concurrency=40 --b2-disable-checksum "
  download_command: "rclone sync --fast-list --drive-chunk-size=512M --transfers=40 --checkers=40 --buffer-size 256M --s3-upload-concurrency=40 --b2-disable-checksum "
  list_command: ""
  delete_command: ""
  command_timeout: 20h
  commandtimeoutduration: 20h0m0s

Is it related to my config, or what can I do to avoid this?
Replies: 1 comment
The root reason is:

failed to get rate limit token

According to the shared config, it looks like you tried to upload to Backblaze B2 via the S3 protocol, am I right?

Try to decrease

max_parts_count: 10000

to

max_parts_count: 5000

in the s3 section, to avoid uploading lots of small chunks.

Also,

upload_retries_pause: 100ms

is too quick.

Could you check the logs and find the root reason before the retry happens? It looks like B2 just can't receive your uploads at that rate, so try to decrease the upload concurrency (upload_concurrency and the s3 concurrency) as well, and set up backups_to_keep_local so that local backups are cleaned up, to avoid unnecessary local disk space allocations.
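Putting that advice together, here is a minimal sketch of the two tuned config sections. Only max_parts_count: 5000 comes from the reply above; the concurrency and pause values are assumptions chosen for illustration. With part_size: 0 the multipart part size appears to be derived from max_parts_count, so a smaller max_parts_count gives larger parts and fewer UploadPart calls for B2 to rate-limit, while lower concurrency and a longer retry pause give B2 time to recover between attempts.

# illustrative sketch only; values other than max_parts_count: 5000 are assumptions
general:
  upload_concurrency: 4      # fewer tables uploaded in parallel (assumed value)
  upload_retries_pause: 5s   # much longer pause between retries than 100ms (assumed value)
  retries_on_failure: 3
s3:
  concurrency: 4             # fewer concurrent UploadPart requests per file (assumed value)
  max_parts_count: 5000      # suggested in the reply: larger parts, fewer requests
  part_size: 0               # keep automatic part sizing

The "failed to get rate limit token, retry quota exceeded" message itself appears to come from the AWS SDK's client-side retry quota, which means earlier UploadPart attempts had already failed and exhausted the retry budget; the first error logged before the retries start is the one worth investigating, as the reply suggests.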