roachtest: tpcc/multiregion/survive=zone/chaos=true failed [fatal raft error: match(10192) is out of range] #143058

Closed

cockroach-teamcity opened this issue Mar 18, 2025 · 4 comments

Assignees: tbg
Labels: B-runtime-assertions-enabled · branch-release-24.3.9-rc · C-test-failure (broken test, automatically or manually discovered) · O-roachtest · O-robot (originated from a bot) · release-blocker (use with a branch-release-2x.x label to denote which branch is blocked) · T-kv (KV Team) · X-unactionable (closed because it was unactionable)

cockroach-teamcity commented Mar 18, 2025

Note: This build has runtime assertions enabled. If the same failure was hit in a run without assertions enabled, there should be a similar failure without this message. If there isn't one, then this failure is likely due to an assertion violation or (assertion) timeout.

roachtest.tpcc/multiregion/survive=zone/chaos=true failed with artifacts on release-24.3.9-rc @ b97183a1624094224049587f5aa836c3ff03ea95:

(monitor.go:149).Wait: monitor failure: could not restart node :3: ~ COCKROACH_INTERNAL_DISABLE_METAMORPHIC_TESTING=true COCKROACH_CONNECT_TIMEOUT=1200 ./cockroach sql --url 'postgres://root@localhost:26257?options=-ccluster%3Dsystem&sslcert=.%2Fcerts%2Fclient.root.crt&sslkey=.%2Fcerts%2Fclient.root.key&sslmode=verify-full&sslrootcert=.%2Fcerts%2Fca.crt' -e "CREATE SCHEDULE IF NOT EXISTS test_only_backup FOR BACKUP INTO 'gs://cockroachdb-backup-testing/roachprod-scheduled-backups/teamcity-19129091-1742275784-136-n10cpu4-geo/system/1742299899661889254?AUTH=implicit' RECURRING '*/15 * * * *' FULL BACKUP '@hourly' WITH SCHEDULE OPTIONS first_run = 'now'": context canceled
unexpected node event: n3: cockroach process for system interface died (exit code 7)
test artifacts and logs in: /artifacts/tpcc/multiregion/survive=zone/chaos=true/run_1

Parameters:

  • arch=amd64
  • cloud=gce
  • coverageBuild=false
  • cpu=4
  • encrypted=true
  • fs=ext4
  • localSSD=true
  • runtimeAssertionsBuild=true
  • ssd=0
Help

See: roachtest README

See: How To Investigate (internal)

See: Grafana

Same failure on other branches

/cc @cockroachdb/sql-foundations


Jira issue: CRDB-48640

@cockroach-teamcity added the B-runtime-assertions-enabled, branch-release-24.3.9-rc, C-test-failure, O-roachtest, O-robot, release-blocker, and T-sql-foundations labels on Mar 18, 2025
rafiss commented Mar 18, 2025

There was a fatal error in raft. From n3 logs:

I250318 12:11:39.186338 896 1@raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 875  the server is terminating due to a fatal error (see the DEV channel for details)
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876  match(10192) is out of range [lastIndex(10191)]. Was the raft log corrupted, truncated, or lost?
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !goroutine 896 [running]:
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !github.com/cockroachdb/cockroach/pkg/util/allstacks.GetWithBuf({0x0?, 0xc0045b0540?, 0x38?})
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !	pkg/util/allstacks/allstacks.go:27 +0x74
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !github.com/cockroachdb/cockroach/pkg/util/allstacks.Get(...)
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !	pkg/util/allstacks/allstacks.go:14
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !github.com/cockroachdb/cockroach/pkg/util/log.(*loggerT).outputLogEntry(_, {{{0xc00687ab70, 0x24}, {0x6ed13e3, 0x1}, {0x6ed13e1, 0x1}, {0x6df4400, 0x6}, {0x6ed13e3, ...}}, ...})
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !	pkg/util/log/clog.go:294 +0xc6
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !github.com/cockroachdb/cockroach/pkg/util/log.logfDepthInternal({0x86a8df8, 0xc001b12f30}, 0x3, 0x4, 0x0, 0x0?, {0x7057804, 0x5a}, {0xc0016d0d60, 0x2, ...})
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !	pkg/util/log/channels.go:104 +0x5c5
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !github.com/cockroachdb/cockroach/pkg/util/log.logfDepth(...)
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !	pkg/util/log/channels.go:34
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !github.com/cockroachdb/cockroach/pkg/util/log.FatalfDepth(...)
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !	bazel-out/k8-opt/bin/pkg/util/log/log_channels_generated.go:920
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !github.com/cockroachdb/cockroach/pkg/kv/kvserver.(*raftLogger).Panicf(0xc0017a1bb0, {0x7057804, 0x5a}, {0xc0016d0d60, 0x2, 0x2})
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !	pkg/kv/kvserver/raft.go:114 +0xc7
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !github.com/cockroachdb/cockroach/pkg/raft.(*raft).checkMatch(0xc0026c5380, 0x27d0)
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !	pkg/raft/raft.go:2333 +0x10f
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !github.com/cockroachdb/cockroach/pkg/raft.(*raft).handleAppendEntries(0xc0026c5380, {0x3, 0x7, 0x2, 0x7, 0x7, 0x27d0, {0xc0388b8e08, 0xa, 0x12}, ...})
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !	pkg/raft/raft.go:2261 +0x32
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !github.com/cockroachdb/cockroach/pkg/raft.stepFollower(0xc0026c5380, {0x3, 0x7, 0x2, 0x7, 0x7, 0x27d0, {0xc0388b8e08, 0xa, 0x12}, ...})
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !	pkg/raft/raft.go:2191 +0x438
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !github.com/cockroachdb/cockroach/pkg/raft.(*raft).Step(0xc0026c5380, {0x3, 0x7, 0x2, 0x7, 0x7, 0x27d0, {0xc0388b8e08, 0xa, 0x12}, ...})
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !	pkg/raft/raft.go:1734 +0x1175
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !github.com/cockroachdb/cockroach/pkg/raft.(*RawNode).Step(0xaee390c1?, {0x3, 0x7, 0x2, 0x7, 0x7, 0x27d0, {0xc0388b8e08, 0xa, 0x12}, ...})
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !	pkg/raft/rawnode.go:140 +0x125
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !github.com/cockroachdb/cockroach/pkg/kv/kvserver.(*Replica).stepRaftGroupRaftMuLocked.func1(0xc0027fe660)
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !	pkg/kv/kvserver/replica_raft.go:714 +0x6a5
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !github.com/cockroachdb/cockroach/pkg/kv/kvserver.(*Replica).withRaftGroupLocked(0xc002585108, 0x0?)
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !	pkg/kv/kvserver/replica_raft.go:2316 +0x42
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !github.com/cockroachdb/cockroach/pkg/kv/kvserver.(*Replica).withRaftGroup(0xc002585108, 0xc0045b1918)
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !	pkg/kv/kvserver/replica_raft.go:2353 +0x85
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !github.com/cockroachdb/cockroach/pkg/kv/kvserver.(*Replica).stepRaftGroupRaftMuLocked(0xc002585108, 0xc012189948)
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !	pkg/kv/kvserver/replica_raft.go:640 +0xb7
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !github.com/cockroachdb/cockroach/pkg/kv/kvserver.(*Store).processRaftRequestWithReplica(0x3fc3333333333333?, {0x86a8df8, 0xc001b12f60}, 0xc002585108, 0xc012189948)
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !	pkg/kv/kvserver/store_raft.go:407 +0x419
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !github.com/cockroachdb/cockroach/pkg/kv/kvserver.(*Store).processRequestQueue.func1({0xc0067a7808?, 0x86a8df8?}, 0xc001e26270?)
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !	pkg/kv/kvserver/store_raft.go:627 +0x32
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !github.com/cockroachdb/cockroach/pkg/kv/kvserver.(*Store).withReplicaForRequest(0xc0067a8228?, {0x86a8df8, 0xc001e26270}, 0xc012189948, 0xc0045b1dd8)
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !	pkg/kv/kvserver/store_raft.go:362 +0xd8
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !github.com/cockroachdb/cockroach/pkg/kv/kvserver.(*Store).processRequestQueue(0xc0067a7808, {0x86a8df8, 0xc001e26270}, 0x12d)
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !	pkg/kv/kvserver/store_raft.go:625 +0x1c9
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !github.com/cockroachdb/cockroach/pkg/kv/kvserver.(*raftSchedulerShard).worker(0xc001320150, {0x86a8df8, 0xc001e26270}, {0x86d0c60, 0xc0067a7808}, 0xc00003f208)
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !	pkg/kv/kvserver/scheduler.go:397 +0x1b0
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !github.com/cockroachdb/cockroach/pkg/kv/kvserver.(*raftScheduler).Start.func2({0x86a8df8?, 0xc001e26270?})
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !	pkg/kv/kvserver/scheduler.go:319 +0x45
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTaskEx.func2({0x6e12f87?, 0x0?})
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !	pkg/util/stop/stopper.go:498 +0x1f0
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTaskEx in goroutine 739
F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 !	pkg/util/stop/stopper.go:488 +0x49c

@rafiss changed the title from "roachtest: tpcc/multiregion/survive=zone/chaos=true failed" to "roachtest: tpcc/multiregion/survive=zone/chaos=true failed [fatal raft error: match(10192) is out of range]" on Mar 18, 2025
@rafiss added the T-kv (KV Team) label and removed the T-sql-foundations label on Mar 18, 2025
tbg commented Mar 20, 2025

F250318 12:11:39.186377 896 raft/raft.go:2333 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 876 match(10192) is out of range [lastIndex(10191)]. Was the raft log corrupted, truncated, or lost?

This means that replica r301/7 (on s3) received an MsgApp indicating that the leader believes r301/7 already has log index 10192 durably in its log. The fatal error shows the follower is off by one: it has index 10191, but not 10192.
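For reference, the assertion that fired is in pkg/raft/raft.go (checkMatch in the stack trace below, called with 0x27d0 = 10192). A minimal sketch of its shape, assuming the fork compares the leader-provided match index against the local log; this is a sketch, not the verbatim source:

    // checkMatch panics if the leader believes this follower has durably
    // appended entries up to `match`, but the local raft log ends earlier.
    // A follower only acks entries after a durable append, so
    // lastIndex < match implies an acknowledged log write was later lost.
    func (r *raft) checkMatch(match uint64) {
    	if last := r.raftLog.lastIndex(); last < match {
    		r.logger.Panicf("match(%d) is out of range [lastIndex(%d)]. Was the raft log corrupted, truncated, or lost?", match, last)
    	}
    }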

Typically, we see this right after the follower restarts, since a restart is the most likely situation in which a file system loses a write. That is the case here:

I250318 12:11:36.125216 1 util/log/file_sync_buffer.go:237 ⋮ [T1,config]   arguments: [‹./cockroach› ‹start› ‹--certs-dir› ‹certs› ‹--log› ‹file-defaults: {dir: 'logs', exit-on-error: false}› ‹--listen-addr=:26257› ‹--http-addr=:26258› ‹--advertise-addr=10.142.0.98:26257› ‹--join=104.196.118.48:26257› ‹--store› ‹path=/mnt/data1/cockroach,attrs=store1:node3:node3store1› ‹--enterprise-encryption› ‹path=/mnt/data1/cockroach,key=/mnt/data1/cockroach/aes-128.key,old-key=plain› ‹--cache=25%› ‹--locality=cloud=gce,region=us-east1,zone=us-east1-b› ‹--max-sql-memory=25%›]

is only 3s before the crash (12:11:39.186338).

We don't have the data directories, so it's going to be difficult to RCA what happened here. Some notes:

  • the node was not that overloaded before it went down: I don't see the usual signs of disks falling behind; there is a lot of contention (txn deadlock pushes, etc.) but not much else, and CPU is also not running hot.
  • there's encryption at rest
  • we see the below logging fire ~800 times (always for auxiliary/sstsnapshot/* files)
    // Files may exist within the registry but not on the filesystem
    // because registry updates are not atomic with filesystem edits.
    // Check if the file exists and elide the entry if the file does not
    // exist. This elision happens during store initialization, ensuring no
    // concurrent work should be ongoing which might have added an entry to
    // the file registry but not yet created the referenced file.
    path := filename
    if !filepath.IsAbs(path) {
        path = r.FS.PathJoin(r.DBDir, filename)
    }
    if _, err := r.FS.Stat(path); oserror.IsNotExist(err) {
        log.Infof(ctx, "eliding file registry entry %s", redact.SafeString(filename))
        batch.DeleteEntry(filename)
    }
  • nothing unusual in 3.{dmesg,journalctl}.txt

My best guess is that the write was lost by the file system or the pebble WAL. However, since we didn't power-cycle the VM, which is the scenario in which file systems typically lose writes, it is more likely than not a problem above the file system.


I'm not sure how to make this actionable. We could, in principle, set up our testing infrastructure to retain disks for such clusters. Then we could, for instance, examine the WAL files that are still present. However, even that would not be enough, since pebble would have deleted the relevant WAL files in the seconds leading up to the crash. But at least we could do some more rudimentary verification of what the Raft log bounds are.
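As a sketch of what that rudimentary verification could look like with a retained store directory (store path taken from the n3 start arguments above; an encrypted store like this one would presumably also need the matching --enterprise-encryption flags):

    # Dump the surviving raft log entries for r301 from the retained store.
    ./cockroach debug raft-log /mnt/data1/cockroach 301

That would at least show where the on-disk log ends relative to the expected index 10192.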

@tbg tbg self-assigned this Mar 20, 2025
pav-kv commented Mar 20, 2025

Anything interesting with r301 before the crash? Snapshots, conf changes, AddSST?
Any WAL failover stuff happening around that time?

we see the below logging fire ~800 times (always for auxiliary/sstsnapshot/* files)

Can someone from Storage interpret this?

tbg commented Mar 20, 2025

No WAL failover stuff as far as I can tell:

[Image: Grafana panel showing no WAL failover activity around the time of the crash]

Re: snapshots and other stuff: no, it was pretty quiet on this range. n3 got its snapshot ~25min earlier, at applied index 115:

I250318 11:47:54.430060 41602 kv/kvserver/replica_raftstorage.go:521 ⋮ [T1,Vsystem,n3,s3,r301/7:‹/Table/116/1/"\xc0"/1{0/…-6/…}›] 354  applied snapshot f7b0df4d from (n4,s4):2 at applied index 115 (total=20ms data=41 MiB excise=true ingestion=6@17ms)

so that's unlikely to have anything to do with it (the crash is at log index >10000).

@tbg tbg added the X-unactionable This was closed because it was unactionable. label Mar 24, 2025
@tbg tbg closed this as completed Mar 24, 2025