Describe the bug
Commit 4dfc256 appears to change the original semantics of fluent-bit's behavior when dealing with data that cannot be processed by an upstream.
Before this commit, the data stayed in its chunk file in the tail.0 directory, where it could be observed and inspected; now the file disappears, leaving little room for debugging the underlying issue.
To Reproduce
Send any log message that triggers a >=400 and <500 (4xx) HTTP response from the upstream.
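For a self-contained reproduction, a minimal sketch of an upstream stand-in that rejects every delivery with HTTP 400. The port (8400) and path are hypothetical; pointing the http [OUTPUT] section at this address should exercise the same code path.

```python
# Stand-in upstream that answers every POST with HTTP 400, so fluent-bit's
# http output sees an unrecoverable (>=400, <500) response for each chunk.
import http.server
import threading
import urllib.error
import urllib.request

class Reject4xx(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        self.rfile.read(int(self.headers.get("Content-Length", 0)))  # drain body
        self.send_response(400)           # any 4xx triggers the behavior
        self.end_headers()
        self.wfile.write(b"rejected")

    def log_message(self, *args):         # keep the demo quiet
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 8400), Reject4xx)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate one delivery attempt the way the http output would make it.
status = None
try:
    urllib.request.urlopen(urllib.request.Request(
        "http://127.0.0.1:8400/a-given-tag", data=b'{"log":"x"}', method="POST"))
except urllib.error.HTTPError as err:
    status = err.code
server.shutdown()
print(status)  # 400
```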
Expected behavior
At the very least, an option to keep those chunk files in the tail.0 directory.
Screenshots
N/A
Your Environment
Version used: 3.1.6-debug
Configuration:
custom_parsers.conf: |
    [PARSER]
        Name docker_no_time
        Format json
        Time_Keep Off
        Time_Key time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
fluent-bit.conf: |
    [SERVICE]
        Daemon Off
        Flush 1
        Log_Level error
        Parsers_File /fluent-bit/etc/parsers.conf
        Parsers_File /fluent-bit/etc/conf/custom_parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port 2020
        Health_Check On
        scheduler.cap 300
        storage.path /var/log/flb-storage/
        storage.max_chunks_up 128
        storage.sync full
        storage.backlog.mem_limit 5M
        storage.delete_irrecoverable_chunks on
    [INPUT]
        Name tail
        Path /var/log/containers/*.log
        multiline.parser cri
        Tag kube.*
        Skip_Long_Lines On
        Skip_Empty_Lines On
        Buffer_Chunk_Size 64KB
        Buffer_Max_Size 128KB
        DB /var/log/flb-storage/containers.db
        storage.type filesystem
        storage.pause_on_chunks_overlimit on
    [INPUT]
        Name systemd
        Tag host.*
        Systemd_Filter _SYSTEMD_UNIT=kubelet.service
        Systemd_Filter _SYSTEMD_UNIT=docker.service
        Systemd_Filter _SYSTEMD_UNIT=containerd.service
        DB /var/log/flb-storage/systemd.db
        Read_From_Tail On
        storage.type filesystem
        storage.pause_on_chunks_overlimit on
    [FILTER]
        Name kubernetes
        Match kube.*
        Kube_URL https://kubernetes.default.svc.cluster.local:443
        Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix kube.var.log.containers.
        Merge_Log On
        Labels On
        Annotations On
        Buffer_Size 1MB
        Use_Kubelet On
        namespace_labels On
    [FILTER]
        Name modify
        Match host.*
        Rename _HOSTNAME hostname
        Rename _SYSTEMD_UNIT systemd_unit
        Rename MESSAGE log
        Remove_regex ^((?!hostname|systemd_unit|log).)*$
    [FILTER]
        Name aws
        Match host.*
        imds_version v2
    [FILTER]
        Name modify
        Match *
        Add environment_name env-name
        Add cluster_name cluster-name
    [FILTER]
        Name lua
        Match *
        script /fluent-bit/scripts/index_name_filter.lua
        call index_name
    [OUTPUT]
        Name http
        Alias an-alias-name
        Match *
        Host a-host-name.com
        Port 443
        http_User ${FLUENTD_USER}
        http_Passwd ${FLUENTD_PASSWORD}
        URI /a-given-tag
        Format json
        header User-Agent a-user-agent
        header_tag FLUENT-TAG
        json_date_format iso8601
        tls on
        tls.verify off
        compress gzip
        Retry_Limit no_limits
        net.dns.resolver async
        log_suppress_interval 10s
        storage.total_limit_size 500M
        Log_Level error
Environment name and version: Kubernetes v1.28.12-eks-2f46c53
Server type and version: fluent-bit:3.1.6-debug
Operating System and version: Linux
Filters and plugins: tail, systemd, kubernetes, modify, http
Additional context
This removes the ability to understand why records are not being processed. Furthermore, the deletion only becomes apparent when the output's log level is set to warn; otherwise the files disappear with no warning at all.
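As a debugging aid while the chunks still exist, a rough sketch that snapshots the chunk directory and extracts the printable runs from each buffered chunk. The path comes from the config above (storage.path plus the per-input tail.0 subdirectory); the *.flb glob and the string extraction are assumptions about the on-disk chunk layout, not a documented format.

```python
import pathlib
import re

# Assumed from the config above: storage.path /var/log/flb-storage/ plus the
# tail input's chunk subdirectory tail.0 (adjust for any other setup).
CHUNK_DIR = pathlib.Path("/var/log/flb-storage/tail.0")

def dump_chunks(directory: pathlib.Path) -> list:
    """Summarize each *.flb chunk: size plus the first printable run,
    so buffered records can be eyeballed before the file is deleted."""
    report = []
    for chunk in sorted(directory.glob("*.flb")):
        raw = chunk.read_bytes()
        # Chunks carry msgpack payloads; printable runs of 6+ bytes usually
        # expose the original log lines well enough for debugging.
        strings = re.findall(rb"[\x20-\x7e]{6,}", raw)
        first = strings[0].decode() if strings else "<none>"
        report.append(f"{chunk.name}: {len(raw)} bytes, first string: {first}")
    return report

if CHUNK_DIR.is_dir():
    for line in dump_chunks(CHUNK_DIR):
        print(line)
```

Running this in a loop (or under watch) while the upstream returns 4xx makes the deletion visible: the chunk shows up, a delivery attempt fails, and the file vanishes on the next pass.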
Logs from v3.0.3-debug
Logs from v3.1.6-debug in error level
Logs from v3.1.6-debug in warn level