improve(trace): incorporate the new logger (#277)
* Exploring new tracing

* Committed files

* How polymorphism via a vtable might look

* Added multiple methods

* Run log through channel

* Switch to RecycleFBA

* Run logWithFields through channel

* Moved all methods

* Fix garbled text

* Remove chaining implementation

* Use allocated memory to also construct key value

* Some renaming

* Switching scope

* Partial fix of leak

* Fix leak

* Add level guard

* Remove todo

* Added static dispatch polymorphism

* Rename variable

* Allocate based on size of fmt message

* Pass log message as a reference

* Added test logger that tests formats of log message

* Move struct used for testing inside testing block

* Reset alloc_allocator on print

* Pass log message as a value and not reference

* Fmt

* Snake case for variable

* Take the scope as a string

* Use if expressions

* Renaming

* Move custom logic from interface into implementation

* Have the union fields as pointers and removed use of ptrCast

* Removed use of @ptrCast in test logger

* Made scoping function public

* Remove the recycle_fba allocator from the test allocator

* Removed recyclefba from test logger

* Switch to use updated API for RecycleFBA

* Do not panic in standard logger but ignore the errors

* Drop the spawn method

* Return error on init instead of panic

* Add methods

* Fix warnfWithFields

* Estimate field size and not hardcode to 512

* Fmt

* Removed unused line

* Test logger does not need the max_buffer

* Removed last @ptrCast and some minor clean ups

* Implementation out of the union

* Removed unintended diff

* Hack to ensure child logs show up in std err

* Ensure the test logger is only used in tests

* Remove panic from unscoped and scope

* Do not ignore errors but fallback to std.debug.print

* Typo

* Fixed hidden broken compilation

* Have implementation as struct instead of type functions

* Limit the scope of holding the lock on std err

* Remove the sleep

* Switched to pattern of concrete instance turning itself into the interface

* Moved the write out of the loop

* Typo

* Update scope switch test

* First batch of changes

* Another batch of integrating the new logger

* Remove instances of the old trace module

* fmt

* Renamed trace_ng to trace

* Check in trace_ng renamed to trace

* Remove listing error sets

* Pass the Logger by value

* clean up in db.zig

* clean up in fuzz.zig

* clean up in cmd.zig

* Update test assertion

* clean up in hard_forks.zig

* clean up in shred.zig

* clean up in fuzz_service.zig

* clean up in fuzz_table.zig

* clean up in fuzz.zig

* clean up in service.zig

* clean up in database.zig

* clean up in insert_shred.zig

* clean up in socket_utils.zig

* clean up in repair_service.zig

* clean up in shred_tracker.zig

* clean up in estimate.zig

* errdefer instead of defer

* Switch API to support chaining for fields

* Fix up

* No need to have nooLoop var

* Update comment

* Remove the need for fmt

* Fixes after merge

* Remove unused

* Temporarily comment out code

* Compile fix

* Renaming

* Use Config only for ChannelPrintLogger

* Pass by value for Entry instead of allocating

* Rename variable

* deinit the entry after logging

* Fix bus error

* Switching to init the noop logger by variant

* Remove unused import

* Fix .noop

* Fix test

* Set max_buffer to 1GB

* Pull out import

* Do not redefine std.ArrayList(u8) as Fields

* Rename exit_sig to exit

* Revert commented out tests

* Have fields show in logs

* Elide union name when setting the noop variant

* Move Self alias to after fields

* Fixes after merge conflict

* Replace places where empty format args are used in logs

* Added trace

* Create the writer in same scope as the lock

* Newline between method definitions

* Print the fallthrough type using any

* No need to capture logger as a pointer in deinit

* Return DirectPrintLogger by value

* use same branch for test logger and noop

* More removal of empty fmt args after merge

* Remove unused variables

* Fixes after merge conflict

* fix(trace): require comptime message for log to avoid lifetime issues

This makes the code less flexible, but the only memory-safe alternative would be to dupe everything, and duping would be wasteful since the strings are currently always comptime, so there is no need to copy them. It is better to require comptime for now and deal with allowing dynamic strings later if we need it. (A minimal sketch of this restriction appears after the change list below.)

* perf(trace): reduce allocations when formatting fields

* fix(trace): warn bug, and don't print in tests

* refactor(trace): consistent names for direct and channel print loggers

* fix(trace): use after free when log message is freed before being sent down the channel

Also makes log_msg non-optional in the direct print entry (see the ownership sketch below).
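
The sketch below illustrates the ownership rule behind this use-after-free fix, using only std; it is not sig's actual channel code and the names are hypothetical. The producer must not free the formatted message itself: ownership travels with the slice, and the consumer frees it after printing.

const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // Producer: format the log message on the heap.
    const log_msg = try std.fmt.allocPrint(allocator, "downloaded {d} MB", .{42});
    // The bug pattern this commit fixes: a defer allocator.free(log_msg)
    // here would release the bytes before the consumer thread prints them.

    // Stand-in for sending the entry down the channel; ownership moves
    // with the slice.
    const received = log_msg;

    // Consumer: print, then free, completing the ownership transfer.
    std.debug.print("{s}\n", .{received});
    allocator.free(received);
}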

---------
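
A minimal sketch of the comptime-message restriction described in the "fix(trace): require comptime message" commit above, using a hypothetical QueuedEntry type rather than the real sig.trace entry: a comptime format string lives in the binary's read-only data, so an entry held for later printing can keep a pointer to it without duplicating the bytes.

const std = @import("std");

// Hypothetical queued entry; the real logger sends entries down a channel.
const QueuedEntry = struct {
    fmt: []const u8,

    fn print(self: QueuedEntry) void {
        // Printed later by the consumer; the slice is still valid because
        // comptime strings have static lifetime.
        std.debug.print("{s}\n", .{self.fmt});
    }
};

// Requiring comptime here is the restriction the commit describes: callers
// cannot pass a runtime string that might be freed before printing.
fn queue(comptime fmt: []const u8) QueuedEntry {
    return .{ .fmt = fmt };
}

pub fn main() void {
    const entry = queue("loaded and validated snapshot");
    entry.print();
}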

Co-authored-by: Drew Nutter <[email protected]>
dadepo and dnut authored Oct 8, 2024
1 parent d0752e4 commit f970d8b
Showing 46 changed files with 1,007 additions and 918 deletions.
127 changes: 65 additions & 62 deletions src/accountsdb/db.zig

Large diffs are not rendered by default.
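
Nearly every hunk below follows the same pattern: the old level-suffixed calls (logger.infof, logger.warnf, logger.debugf) become entry-based calls (logger.info().logf, logger.info().log, and so on), and loggers are now built from a ChannelPrintLogger config instead of Logger.init plus spawn(). The condensed sketch below is assembled from the fuzz.zig and download.zig hunks that follow; it assumes the sig package is importable and is not an exact reproduction of the trace API.

const std = @import("std");
const sig = @import("sig");

const ChannelPrintLogger = sig.trace.ChannelPrintLogger;
const Level = sig.trace.Level;

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // Config-only init, as in the fuzz.zig hunk: allocator, max level,
    // and a 1 GiB buffer for the background print channel.
    var std_logger = try ChannelPrintLogger.init(.{
        .allocator = allocator,
        .max_level = Level.debug,
        .max_buffer = 1 << 30,
    });
    defer std_logger.deinit();

    // The concrete logger hands out the Logger interface by value.
    const logger = std_logger.logger();

    // Old: logger.infof("starting snapshot download ...", .{...});
    // New: pick a level entry, then log with or without format args.
    logger.info().logf("starting snapshot download with min download speed: {d} MB/s", .{10});
    logger.info().log("no incremental snapshot fields found");
    logger.warn().logf("failed to download inc_snapshot: {s}", .{"UnexpectedError"});
}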

33 changes: 20 additions & 13 deletions src/accountsdb/download.zig
@@ -164,7 +164,9 @@ pub fn downloadSnapshotsFromGossip(
output_dir: std.fs.Dir,
min_mb_per_sec: usize,
) !void {
logger.infof("starting snapshot download with min download speed: {d} MB/s", .{min_mb_per_sec});
logger
.info()
.logf("starting snapshot download with min download speed: {d} MB/s", .{min_mb_per_sec});

// TODO: maybe make this bigger? or dynamic?
var contact_info_buf: [1_000]ThreadSafeContactInfo = undefined;
@@ -208,7 +210,9 @@ pub fn downloadSnapshotsFromGossip(
i += r.len;
}
}
logger.infof("searched for snapshot peers: {s}", .{write_buf[0..i]});
logger
.info()
.logf("searched for snapshot peers: {s}", .{write_buf[0..i]});
}

for (available_snapshot_peers.items) |peer| {
@@ -230,7 +234,10 @@
});
defer allocator.free(snapshot_url);

logger.infof("downloading full_snapshot from: {s}", .{snapshot_url});
logger
.info()
.logf("downloading full_snapshot from: {s}", .{snapshot_url});

downloadFile(
allocator,
logger,
@@ -244,11 +251,11 @@
// downloadFile function
error.Unexpected => {},
error.TooSlow => {
logger.infof("peer is too slow, skipping", .{});
logger.info().logf("peer is too slow, skipping", .{});
try slow_peer_pubkeys.append(peer.contact_info.pubkey);
},
else => {
logger.infof("failed to download full_snapshot: {s}", .{@errorName(err)});
logger.info().logf("failed to download full_snapshot: {s}", .{@errorName(err)});
},
}
continue;
@@ -271,7 +278,7 @@
});
defer allocator.free(inc_snapshot_url);

logger.infof("downloading inc_snapshot from: {s}", .{inc_snapshot_url});
logger.info().logf("downloading inc_snapshot from: {s}", .{inc_snapshot_url});
_ = downloadFile(
allocator,
logger,
@@ -282,13 +289,13 @@
null,
) catch |err| {
// failure here is ok (for now?)
logger.warnf("failed to download inc_snapshot: {s}", .{@errorName(err)});
logger.warn().logf("failed to download inc_snapshot: {s}", .{@errorName(err)});
return;
};
}

// success
logger.infof("snapshot downloaded finished", .{});
logger.info().logf("snapshot downloaded finished", .{});
return;
}
}
@@ -359,7 +366,7 @@ const DownloadProgress = struct {
// each MB
const mb_read = self.bytes_read / 1024 / 1024;
if (mb_read == 0) {
self.logger.infof("download speed is too slow (<1MB/s) -- disconnecting", .{});
self.logger.info().logf("download speed is too slow (<1MB/s) -- disconnecting", .{});
return 0;
}

@@ -379,14 +386,14 @@
self.has_checked_speed = true;
if (mb_per_second < self.min_mb_per_second.?) {
// not fast enough => abort
self.logger.infof("[download progress]: speed is too slow ({d} MB/s) -- disconnecting", .{mb_per_second});
self.logger.info().logf("[download progress]: speed is too slow ({d} MB/s) -- disconnecting", .{mb_per_second});
return 0;
} else {
self.logger.infof("[download progress]: speed is ok ({d} MB/s) -- maintaining", .{mb_per_second});
self.logger.info().logf("[download progress]: speed is ok ({d} MB/s) -- maintaining", .{mb_per_second});
}
}

self.logger.infof("[download progress]: {d}% done ({d} MB/s - {d}/{d}) (time left: {d})", .{
self.logger.info().logf("[download progress]: {d}% done ({d} MB/s - {d}/{d}) (time left: {d})", .{
self.file_memory_index * 100 / self.download_size,
mb_per_second,
self.file_memory_index,
@@ -437,7 +444,7 @@ pub fn downloadFile(
if (try head_resp.getHeader("content-length")) |content_length| {
download_size = try std.fmt.parseInt(usize, content_length.get(), 10);
} else {
logger.debugf("header request didnt have content-length...", .{});
logger.debug().logf("header request didnt have content-length...", .{});
return error.NoContentLength;
}

20 changes: 13 additions & 7 deletions src/accountsdb/fuzz.zig
@@ -4,6 +4,8 @@ const zstd = @import("zstd");

const AccountsDB = sig.accounts_db.AccountsDB;
const Logger = sig.trace.Logger;
const StandardErrLogger = sig.trace.ChannelPrintLogger;
const Level = sig.trace.Level;
const Account = sig.core.Account;
const Slot = sig.core.time.Slot;
const Pubkey = sig.core.pubkey.Pubkey;
@@ -55,9 +57,14 @@ pub fn run(seed: u64, args: *std.process.ArgIterator) !void {
defer _ = gpa_state.deinit();
const allocator = gpa_state.allocator();

const logger = Logger.init(allocator, .debug);
defer logger.deinit();
logger.spawn();
var std_logger = StandardErrLogger.init(.{
.allocator = allocator,
.max_level = Level.debug,
.max_buffer = 1 << 30,
}) catch @panic("Logger init failed");
defer std_logger.deinit();

const logger = std_logger.logger();

const use_disk = rand.boolean();

@@ -255,7 +262,7 @@ pub fn run(seed: u64, args: *std.process.ArgIterator) !void {
true,
);

logger.infof("fuzz[validate]: unpacked full snapshot at slot: {}", .{snapshot_info.slot});
logger.info().logf("fuzz[validate]: unpacked full snapshot at slot: {}", .{snapshot_info.slot});
var snapshot_files: sig.accounts_db.SnapshotFiles = .{
.full_snapshot = .{
.hash = snapshot_info.hash,
@@ -296,7 +303,6 @@ pub fn run(seed: u64, args: *std.process.ArgIterator) !void {

const inc_archive_file = try alternative_snapshot_dir.openFile(inc_archive_name.slice(), .{});
defer inc_archive_file.close();

try sig.accounts_db.snapshots.parallelUnpackZstdTarBall(
allocator,
.noop,
@@ -305,7 +311,7 @@
5,
true,
);
logger.infof("fuzz[validate]: unpacked inc snapshot at slot: {}", .{inc_snapshot_info.slot});
logger.info().logf("fuzz[validate]: unpacked inc snapshot at slot: {}", .{inc_snapshot_info.slot});

snapshot_files.incremental_snapshot = .{
.base_slot = inc_snapshot_info.base_slot,
@@ -328,7 +334,7 @@

_ = try alt_accounts_db.loadWithDefaults(allocator, &snapshot_fields, 1, true, 1_500);
const maybe_inc_slot = if (snapshot_files.incremental_snapshot) |inc| inc.slot else null;
logger.infof("loaded and validated snapshot at slot: {} (and inc snapshot @ slot {any})", .{ snapshot_info.slot, maybe_inc_slot });
logger.info().logf("loaded and validated snapshot at slot: {} (and inc snapshot @ slot {any})", .{ snapshot_info.slot, maybe_inc_slot });
}
}

6 changes: 3 additions & 3 deletions src/accountsdb/snapshots.zig
@@ -1745,7 +1745,7 @@ pub const AllSnapshotFields = struct {
const rel_path_bounded = sig.utils.fmt.boundedFmt("snapshots/{0}/{0}", .{files.full_snapshot.slot});
const rel_path = rel_path_bounded.constSlice();

logger.infof("reading snapshot fields from: {s}", .{sig.utils.fmt.tryRealPath(snapshot_dir, rel_path)});
logger.info().logf("reading snapshot fields from: {s}", .{sig.utils.fmt.tryRealPath(snapshot_dir, rel_path)});

const full_file = try snapshot_dir.openFile(rel_path, .{});
defer full_file.close();
@@ -1759,7 +1759,7 @@
const rel_path_bounded = sig.utils.fmt.boundedFmt("snapshots/{0}/{0}", .{incremental_snapshot_path.slot});
const rel_path = rel_path_bounded.constSlice();

logger.infof("reading inc snapshot fields from: {s}", .{sig.utils.fmt.tryRealPath(snapshot_dir, rel_path)});
logger.info().logf("reading inc snapshot fields from: {s}", .{sig.utils.fmt.tryRealPath(snapshot_dir, rel_path)});

const incremental_file = try snapshot_dir.openFile(rel_path, .{});
defer incremental_file.close();
@@ -1769,7 +1769,7 @@

break :blk incremental_fields;
} else {
logger.info("no incremental snapshot fields found");
logger.info().log("no incremental snapshot fields found");
break :blk null;
}
};
1 change: 0 additions & 1 deletion src/benchmarks.zig
@@ -13,7 +13,6 @@ const meta = std.meta;
/// zig build benchmark -- gossip
pub fn main() !void {
const allocator = std.heap.c_allocator;
sig.trace.log.default_logger.* = sig.trace.Logger.init(allocator, .debug);

if (builtin.mode == .Debug) std.debug.print("warning: running benchmark in Debug mode\n", .{});
