
[feat:filebeat/input/redis/slowlog] Add client address and name to submitted slowlogs #41507

Open · wants to merge 10 commits into main
1 change: 1 addition & 0 deletions CHANGELOG.next.asciidoc
@@ -54,6 +54,7 @@ https://github.com/elastic/beats/compare/v8.8.1\...main[Check the HEAD diff]
- The performance of ingesting SQS data with the S3 input has improved by up to 60x for queues with many small events. `max_number_of_messages` config for SQS mode is now ignored, as the new design no longer needs a manual cap on messages. Instead, use `number_of_workers` to scale ingestion rate in both S3 and SQS modes. The increased efficiency may increase network bandwidth consumption, which can be throttled by lowering `number_of_workers`. It may also increase number of events stored in memory, which can be throttled by lowering the configured size of the internal queue. {pull}40699[40699]
- Fixes filestream logging the error "filestream input with ID 'ID' already exists, this will lead to data duplication[...]" on Kubernetes when using autodiscover. {pull}41585[41585]
- Add kafka compression support for ZSTD.
+- Redis: Add client address and name to submitted slowlogs
- Filebeat fails to start if there is any input with a duplicated ID. It logs the duplicated IDs and the offending inputs configurations. {pull}41731[41731]
- Filestream inputs with duplicated IDs will fail to start. An error is logged showing the ID and the full input configuration. {issue}41938[41938] {pull}41954[41954]
- Filestream inputs can define `allow_deprecated_id_duplication: true` to keep the previous behaviour of running inputs with duplicated IDs. {issue}41938[41938] {pull}41954[41954]
22 changes: 14 additions & 8 deletions filebeat/input/redis/harvester.go
@@ -49,13 +49,17 @@ type Harvester struct {
// 4) 1) "slowlog"
//    2) "get"
//    3) "100"
+// 5) "100.1.1.1:12345"
+// 6) "client-name"
type log struct {
-	id        int64
-	timestamp int64
-	duration  int
-	cmd       string
-	key       string
-	args      []string
+	id         int64
+	timestamp  int64
+	duration   int
+	cmd        string
+	key        string
+	args       []string
+	clientAddr string
+	clientName string
}

// NewHarvester creates a new harvester with the given connection
@@ -128,7 +132,7 @@ func (h *Harvester) Run() error {

		var log log
		var args []string
-		_, err = rd.Scan(entry, &log.id, &log.timestamp, &log.duration, &args)
+		_, err = rd.Scan(entry, &log.id, &log.timestamp, &log.duration, &args, &log.clientAddr, &log.clientName)
		if err != nil {
			logp.Err("Error scanning slowlog entry: %s", err)
			continue
@@ -155,7 +159,9 @@ func (h *Harvester) Run() error {
"duration": mapstr.M{
"us": log.duration,
},
"role": role,
"role": role,
"clientAddr": log.clientAddr,
"clientName": log.clientName,
}

if log.args != nil {
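Not part of the PR itself: below is a minimal, self-contained sketch of the same decoding path, assuming the redigo client (which this input uses under the `rd` alias) and a local Redis 4.0 or newer. Older servers return only four elements per slowlog entry, so the two extra destinations would make `Scan` fail.

```go
package main

import (
	"fmt"

	"github.com/gomodule/redigo/redis"
)

func main() {
	conn, err := redis.Dial("tcp", "localhost:6379")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// SLOWLOG GET returns an array of entries; each entry is itself an
	// array whose elements line up with the log struct fields above.
	entries, err := redis.Values(conn.Do("SLOWLOG", "GET", 10))
	if err != nil {
		panic(err)
	}

	for _, e := range entries {
		entry, err := redis.Values(e, nil)
		if err != nil {
			continue
		}
		var (
			id, timestamp          int64
			duration               int
			args                   []string
			clientAddr, clientName string
		)
		// Same call shape as the harvester: the two new destinations
		// receive elements 5 and 6 (client address and client name).
		if _, err := redis.Scan(entry, &id, &timestamp, &duration, &args, &clientAddr, &clientName); err != nil {
			fmt.Println("scan:", err)
			continue
		}
		fmt.Printf("id=%d cmd=%v addr=%s name=%q\n", id, args, clientAddr, clientName)
	}
}
```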
2 changes: 2 additions & 0 deletions filebeat/tests/system/test_redis.py
@@ -48,6 +48,8 @@ def test_input(self):
        assert output["input.type"] == "redis"
        assert "redis.slowlog.cmd" in output
        assert "redis.slowlog.role" in output
+        assert "redis.slowlog.clientAddr" in output
+        assert "redis.slowlog.clientName" in output

    def get_host(self):
        return os.getenv('REDIS_HOST', 'localhost')
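For manual verification, here is a hypothetical helper (not part of the PR or the test suite): it forces Redis to log every command, tags the connection with a known client name, and issues a command. `SLOWLOG GET` — or the sketch after the harvester diff above — should then report an entry carrying this connection's address and the name `filebeat-test`. Assumes a disposable local Redis 4.0+.

```go
package main

import "github.com/gomodule/redigo/redis"

func main() {
	conn, err := redis.Dial("tcp", "localhost:6379")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Threshold 0 makes Redis log every command into the slowlog.
	if _, err := conn.Do("CONFIG", "SET", "slowlog-log-slower-than", "0"); err != nil {
		panic(err)
	}
	// The name set here is what should surface as redis.slowlog.clientName.
	if _, err := conn.Do("CLIENT", "SETNAME", "filebeat-test"); err != nil {
		panic(err)
	}
	// Any command now produces a slowlog entry with this client's addr and name.
	if _, err := conn.Do("PING"); err != nil {
		panic(err)
	}
}
```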