@@ -28,12 +28,6 @@ more details.
Check out the [data-digger](https://github.com/segmentio/data-digger) for a command-line tool
that makes it easy to tail and summarize structured data in Kafka.

- ## Roadmap
-
- We are planning on making some changes to (optionally) remove the ZK dependency and also to support
- some additional security features like TLS. See
- [this page](https://github.com/segmentio/topicctl/wiki/v1-Plan) for the current plan.
-
## Getting started

### Installation
@@ -74,7 +68,7 @@ topicctl apply --skip-confirm examples/local-cluster/topics/*yaml
4. Send some test messages to the `topic-default` topic:

```
- topicctl tester --zk-addr=localhost:2181 --topic=topic-default
+ topicctl tester --broker-addr=localhost:9092 --topic=topic-default
```

5. Open up the repl (while keeping the tester running in a separate terminal):
@@ -205,19 +199,20 @@ only.
### Specifying the target cluster

- There are two patterns for specifying a target cluster in the `topicctl` subcommands:
+ There are three ways to specify a target cluster in the `topicctl` subcommands:

1. `--cluster-config=[path]`, where the referenced path is a cluster configuration
-    in the format expected by the `apply` command described above *or*
- 2. `--zk-addr=[zookeeper address]` and `--zk-prefix=[optional prefix for cluster in zookeeper]`
+    in the format expected by the `apply` command described above,
+ 2. `--zk-addr=[zookeeper address]` and `--zk-prefix=[optional prefix for cluster in zookeeper]`, *or*
+ 3. `--broker-addr=[bootstrap broker address]`

- All subcommands support the `cluster-config` pattern. The second is also supported
+ All subcommands support the `cluster-config` pattern. The last two are also supported
by the `get`, `repl`, `reset-offsets`, and `tail` subcommands since these can be run
independently of an `apply` workflow.
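The target-cluster options listed above can be sketched as concrete invocations (a hypothetical sketch: the addresses, the config path, and the choice of `get topics` as the example subcommand are illustrative placeholders, not values prescribed by this README):

```shell
# 1. Via a cluster config file (placeholder path)
topicctl get topics --cluster-config=path/to/cluster.yaml

# 2. Via a ZooKeeper address plus an optional prefix
topicctl get topics --zk-addr=localhost:2181 --zk-prefix=my-cluster

# 3. Via a bootstrap broker address
topicctl get topics --broker-addr=localhost:9092
```

All three point the same subcommand at the same logical cluster; which options a given subcommand accepts is described above.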
### Version compatibility

- We've tested `topicctl` on Kafka clusters with versions between `0.10.1` and `2.4.1`, inclusive.
+ We've tested `topicctl` on Kafka clusters with versions between `0.10.1` and `2.7.1`, inclusive.
If you run into any compatibility issues, please file a bug.
## Config formats
@@ -227,9 +222,9 @@ typically source-controlled so that changes can be reviewed before being applied
### Clusters

- Each cluster associated with a managed topic must have a config. These
- configs can also be used with the `get`, `repl`, and `tail` subcommands instead
- of specifying a ZooKeeper address.
+ Each cluster associated with a managed topic must have a config. These configs can also be used
+ with the `get`, `repl`, `reset-offsets`, and `tail` subcommands instead of specifying a broker or
+ ZooKeeper address.

The following shows an annotated example:
@@ -242,15 +237,30 @@ meta:
Test cluster for topicctl.

spec:
-   versionMajor: v0.10            # Version
  bootstrapAddrs:                  # One or more broker bootstrap addresses
    - my-cluster.example.com:9092
-   zkAddrs:                       # One or more cluster zookeeper addresses
-     - zk.example.com:2181
-   zkPrefix: my-cluster           # Prefix for zookeeper nodes
+   clusterID: abc-123-xyz         # Expected cluster ID for cluster (optional, used as safety check only)
+
+   # ZooKeeper access settings (only required for pre-v2 clusters)
+   zkAddrs:                       # One or more cluster zookeeper addresses; if these are
+     - zk.example.com:2181        # omitted, then the cluster will only be accessed via broker APIs;
+                                  # see the section below on cluster access for more details.
+   zkPrefix: my-cluster           # Prefix for zookeeper nodes if using zookeeper access
  zkLockPath: /topicctl/locks      # Path used for apply locks (optional)
-   clusterID: abc-123-xyz         # Expected cluster ID for cluster (optional, used as
-                                  # safety check only)
+
+   # TLS/SSL settings (optional, not supported if using ZooKeeper)
+   tls:
+     enabled: true                # Whether TLS is enabled
+     caCertPath: path/to/ca.crt   # Path to CA cert to be used (optional)
+     certPath: path/to/client.crt # Path to client cert to be used (optional)
+     keyPath: path/to/client.key  # Path to client key to be used (optional)
+
+   # SASL settings (optional, not supported if using ZooKeeper)
+   sasl:
+     enabled: true                # Whether SASL is enabled
+     mechanism: SCRAM-SHA-512     # Mechanism to use; choices are PLAIN, SCRAM-SHA-256, and SCRAM-SHA-512
+     username: my-username        # Username; can also be set via TOPICCTL_SASL_USERNAME environment variable
+     password: my-password        # Password; can also be set via TOPICCTL_SASL_PASSWORD environment variable
```

Note that the `name`, `environment`, `region`, and `description` fields are used
@@ -360,7 +370,7 @@ The `apply` subcommand can make changes, but under the following conditions:
7. Partition replica migrations are protected via
   ["throttles"](https://kafka.apache.org/0101/documentation.html#rep-throttle)
   to prevent the cluster network from getting overwhelmed
- 8. Before applying, the tool checks the cluster ID in ZooKeeper against the expected value in the
+ 8. Before applying, the tool checks the cluster ID against the expected value in the
   cluster config. This can help prevent errors around applying in the wrong cluster when multiple
   clusters are accessed through the same address, e.g. `localhost:2181`.
@@ -381,17 +391,76 @@ the process should continue from where it left off.

## Cluster access details

- Most `topicctl` functionality interacts with the cluster through ZooKeeper. Currently, only
- the following depend on broker APIs:
+ ### ZooKeeper vs. broker APIs
+
+ `topicctl` can interact with a cluster either through ZooKeeper or by hitting broker APIs
+ directly.
+
+ Broker APIs are used exclusively if the tool is run with either of the following flags:
+
+ 1. `--broker-addr` *or*
+ 2. `--cluster-config` and the cluster config doesn't specify any ZK addresses
+
+ We recommend using this "broker only" access mode for all clusters running Kafka versions >= 2.4.
+
+ In all other cases, i.e. if `--zk-addr` is specified or the cluster config has ZK addresses,
+ ZooKeeper will be used for most interactions. A few operations that are not possible via ZK
+ will still use broker APIs, however, including:

1. Group-related `get` commands: `get groups`, `get lags`, `get members`
2. `get offsets`
3. `reset-offsets`
4. `tail`
5. `apply` with topic creation

- In the future, we may shift more functionality away from ZooKeeper, at least for newer cluster
- versions; see the "Roadmap" section above for more details.
+ This "mixed" mode is required for clusters running Kafka versions < 2.0.
+
+ ### Limitations of broker-only access mode
+
+ There are a few limitations in the tool when using the broker APIs exclusively:
+
+ 1. Only newer versions of Kafka are supported. In particular:
+    - v2.0 or greater is required for read-only operations (`get brokers`, `get topics`, etc.)
+    - v2.4 or greater is required for applying topic changes
+ 2. Apply locking is not yet implemented; please be careful when applying to ensure that someone
+    else isn't applying changes in the same topic at the same time.
+ 3. The values of some dynamic broker properties, e.g. `leader.replication.throttled.rate`, are
+    marked as "sensitive" and not returned via the API; `topicctl` will show the value as
+    `SENSITIVE`. This appears to be fixed in v2.6.
+ 4. Broker timestamps are not returned by the metadata API. These will be blank in the results
+    of `get brokers`.
+ 5. Applying is not fully compatible with clusters provisioned in Confluent Cloud. It appears
+    that Confluent prevents arbitrary partition reassignments, among other restrictions. Read-only
+    operations seem to work.
+
+ ### TLS
+
+ TLS (referred to by the older name "SSL" in the Kafka documentation) is supported when running
+ `topicctl` in the exclusive broker API mode. To use this, either set `--tls-enabled` on the
+ command line or, if using a cluster config, set `enabled: true` in the `tls` section of
+ the latter.
+
+ In addition to standard TLS, the tool also supports mutual TLS using custom certs, keys, and CA
+ certs (in PEM format). As with the enabling of TLS, these can be configured either on the
+ command line or in a cluster config. See [this config](examples/auth/cluster.yaml) for an example.
+
+ ### SASL
+
+ `topicctl` supports SASL authentication when running in the exclusive broker API mode. To use this,
+ either set the `--sasl-mechanism`, `--sasl-username`, and `--sasl-password` flags on the command
+ line or fill out the `sasl` section of the cluster config.
+
+ If using the cluster config, the username and password can still be set on the command line
+ or via the `TOPICCTL_SASL_USERNAME` and `TOPICCTL_SASL_PASSWORD` environment variables.
+
+ The tool currently supports the following SASL mechanisms:
+
+ 1. `PLAIN`
+ 2. `SCRAM-SHA-256`
+ 3. `SCRAM-SHA-512`
+
+ Note that SASL can be run either with or without TLS, although the former is generally more
+ secure.
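The SASL options above can be sketched as invocations like the following (a hypothetical sketch: the address and credentials are placeholders, while the flags and environment variables are the ones documented in this section):

```shell
# Pass SASL settings directly as flags
topicctl get topics \
  --broker-addr=localhost:9092 \
  --sasl-mechanism=SCRAM-SHA-512 \
  --sasl-username=my-username \
  --sasl-password=my-password

# Or keep the credentials out of flags (and shell history) via environment variables
export TOPICCTL_SASL_USERNAME=my-username
export TOPICCTL_SASL_PASSWORD=my-password
topicctl get topics --broker-addr=localhost:9092 --sasl-mechanism=SCRAM-SHA-512
```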
## Development