Commit 4eca702

Add topicctl v1 (#32)
* Create client interface
* Use new version of kafka-go
* Start on implementation of broker client
* Support broker addresses in command-line entrypoints
* Support updating configs and running leader elections
* Start adding broker client
* Add create partitions
* Improve tests
* Fix leader election and add tests
* Add tests of get api versions
* Clean up tests
* Update kafka-go version
* Improve support for broker-based admin client
* Start testing out broker-based admins
* Fix case where topic does not exist
* Improve support for broker-based admin client
* Skip tests that aren't possible to run with older kafka versions
* Add circleci tests for v2.4.1
* Fix circleci configs
* Add SSL and SASL support into v1 branch (#34)
* Start working on connector refactoring
* Keep refactoring connectors
* Switch to connectors in more places
* Add more TLS support
* Get TLS working
* Fix cluster paths
* Update README
* Update name of tls enabled parameter
* Update README
* Update default kafka version to 2.4.1
* Update kafka-go version and fix tests
* Clean up SASL implementation
* Allow overriding SASL username and password
* Update README and examples
* Update README
* Update README
* Update README
* Update README
* Update README
* Fix sensitive configs
* Fix bugs
* Revert change to balanced extender
* Update to work with latest kafka-go changes
* Fix tests
* Update README restrictions
* Update kafka-go for v1 (#38)
* Update kafka-go version
* Revert "Update kafka-go version" (reverts commit 32edf5f)
* Revert "Revert "Update kafka-go version"" (reverts commit 13ac457)
* Update kafka-go version
* Update kafka-go version again
* Also push on v1
* Fix pip in CI
* Don't block on test010 for pushing images
* Fix awscli installation
* Fix merge conflicts with master
* Fix README
* Fix bugs in 'get configs' and check cluster IDs
* Update all images to golang 1.16
* Fix describegroups implementation (#42)
* Fix describegroups implementation
* Fix vet error
* Fix another signal
* Update circleci config
* Update kafka-go version
* Update README
* Update version
1 parent 3c37625 commit 4eca702


63 files changed (+3863, -1947 lines)

.circleci/config.yml (+118, -7)

@@ -1,9 +1,9 @@
 version: 2
 jobs:
-  test:
+  test010:
     working_directory: /go/src/github.com/segmentio/topicctl
     docker:
-      - image: circleci/golang:1.14
+      - image: circleci/golang:1.17
         environment:
           GO111MODULE: "on"
           ECR_ENABLED: True
@@ -102,10 +102,112 @@ jobs:
           paths:
            - "/go/pkg/mod"

+  test241:
+    working_directory: /go/src/github.com/segmentio/topicctl
+    docker:
+      - image: circleci/golang:1.17
+        environment:
+          GO111MODULE: "on"
+          ECR_ENABLED: True
+          KAFKA_TOPICS_TEST_ZK_ADDR: zookeeper:2181
+          KAFKA_TOPICS_TEST_KAFKA_ADDR: kafka1:9092
+
+      - image: wurstmeister/zookeeper
+        name: zookeeper
+        ports:
+          - "2181:2181"
+
+      - image: wurstmeister/kafka:2.12-2.4.1
+        name: kafka1
+        ports:
+          - "9092:9092"
+        environment:
+          KAFKA_BROKER_ID: 1
+          KAFKA_BROKER_RACK: zone1
+          KAFKA_ADVERTISED_HOST_NAME: kafka1
+          KAFKA_ADVERTISED_PORT: 9092
+          KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
+
+      - image: wurstmeister/kafka:2.12-2.4.1
+        name: kafka2
+        ports:
+          - "9092:9092"
+        environment:
+          KAFKA_BROKER_ID: 2
+          KAFKA_BROKER_RACK: zone1
+          KAFKA_ADVERTISED_HOST_NAME: kafka2
+          KAFKA_ADVERTISED_PORT: 9092
+          KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
+
+      - image: wurstmeister/kafka:2.12-2.4.1
+        name: kafka3
+        ports:
+          - "9092:9092"
+        environment:
+          KAFKA_BROKER_ID: 3
+          KAFKA_BROKER_RACK: zone2
+          KAFKA_ADVERTISED_HOST_NAME: kafka3
+          KAFKA_ADVERTISED_PORT: 9092
+          KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
+
+      - image: wurstmeister/kafka:2.12-2.4.1
+        name: kafka4
+        ports:
+          - "9092:9092"
+        environment:
+          KAFKA_BROKER_ID: 4
+          KAFKA_BROKER_RACK: zone2
+          KAFKA_ADVERTISED_HOST_NAME: kafka4
+          KAFKA_ADVERTISED_PORT: 9092
+          KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
+
+      - image: wurstmeister/kafka:2.12-2.4.1
+        name: kafka5
+        ports:
+          - "9092:9092"
+        environment:
+          KAFKA_BROKER_ID: 5
+          KAFKA_BROKER_RACK: zone3
+          KAFKA_ADVERTISED_HOST_NAME: kafka5
+          KAFKA_ADVERTISED_PORT: 9092
+          KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
+
+      - image: wurstmeister/kafka:2.12-2.4.1
+        name: kafka6
+        ports:
+          - "9092:9092"
+        environment:
+          KAFKA_BROKER_ID: 6
+          KAFKA_BROKER_RACK: zone3
+          KAFKA_ADVERTISED_HOST_NAME: kafka6
+          KAFKA_ADVERTISED_PORT: 9092
+          KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
+
+    steps:
+      - checkout
+      - setup_remote_docker:
+          reusable: true
+          docker_layer_caching: true
+      - restore_cache:
+          keys:
+            - go-modules-{{ checksum "go.sum" }}
+      - run:
+          name: Run tests
+          command: make test-v2
+      - run:
+          name: Run Snyk
+          environment:
+            SNYK_LEVEL: 'FLHI'
+          command: curl -sL https://raw.githubusercontent.com/segmentio/snyk_helpers/master/initialization/snyk.sh | sh
+      - save_cache:
+          key: go-modules-{{ checksum "go.sum" }}
+          paths:
+            - "/go/pkg/mod"
+
   publish-ecr:
     working_directory: /go/src/github.com/segmentio/topicctl
     docker:
-      - image: circleci/golang:1.14
+      - image: circleci/golang:1.17

     steps:
       - checkout
@@ -131,7 +233,7 @@ jobs:
   publish-dockerhub:
     working_directory: /go/src/github.com/segmentio/topicctl
     docker:
-      - image: circleci/golang:1.14
+      - image: circleci/golang:1.17

     steps:
       - checkout
@@ -154,21 +256,30 @@ workflows:
   version: 2
   run:
     jobs:
-      - test:
+      - test010:
+          context: snyk
+          filters:
+            tags:
+              only: /.*/
+      - test241:
           context: snyk
           filters:
             tags:
               only: /.*/
       - publish-ecr:
           context: segmentio-org-global
-          requires: [test]
+          requires:
+            - test241
           filters:
             branches:
               only:
                 - master
+                - v1
+                - yolken-v1-fix-group-lags
       - publish-dockerhub:
           context: docker-publish
-          requires: [test]
+          requires:
+            - test241
           filters:
             # Never publish from a branch event
             branches:

Dockerfile (+1, -1)

@@ -1,4 +1,4 @@
-FROM golang:1.14 as builder
+FROM golang:1.17 as builder
 ENV SRC github.com/segmentio/topicctl
 ENV CGO_ENABLED=0

Makefile (+6, -2)

@@ -17,11 +17,15 @@ install:

 .PHONY: vet
 vet:
-	$Qgo vet ./...
+	go vet ./...

 .PHONY: test
 test: vet
-	$Qgo test -count 1 -p 1 ./...
+	go test -count 1 -p 1 ./...
+
+.PHONY: test-v2
+test-v2: vet
+	KAFKA_TOPICS_TEST_BROKER_ADMIN=1 go test -count 1 -p 1 ./...

 .PHONY: clean
 clean:

README.md (+95, -26)

@@ -28,12 +28,6 @@ more details.
 Check out the [data-digger](https://github.com/segmentio/data-digger) for a command-line tool
 that makes it easy to tail and summarize structured data in Kafka.

-## Roadmap
-
-We are planning on making some changes to (optionally) remove the ZK dependency and also to support
-some additional security features like TLS. See
-[this page](https://github.com/segmentio/topicctl/wiki/v1-Plan) for the current plan.
-
 ## Getting started

 ### Installation
@@ -74,7 +68,7 @@ topicctl apply --skip-confirm examples/local-cluster/topics/*yaml
 4. Send some test messages to the `topic-default` topic:

 ```
-topicctl tester --zk-addr=localhost:2181 --topic=topic-default
+topicctl tester --broker-addr=localhost:9092 --topic=topic-default
 ```

 5. Open up the repl (while keeping the tester running in a separate terminal):
@@ -205,19 +199,20 @@ only.

 ### Specifying the target cluster

-There are two patterns for specifying a target cluster in the `topicctl` subcommands:
+There are three ways to specify a target cluster in the `topicctl` subcommands:

 1. `--cluster-config=[path]`, where the referenced path is a cluster configuration
-   in the format expected by the `apply` command described above *or*
-2. `--zk-addr=[zookeeper address]` and `--zk-prefix=[optional prefix for cluster in zookeeper]`
+   in the format expected by the `apply` command described above,
+2. `--zk-addr=[zookeeper address]` and `--zk-prefix=[optional prefix for cluster in zookeeper]`, *or*
+3. `--broker-addr=[bootstrap broker address]`

-All subcommands support the `cluster-config` pattern. The second is also supported
+All subcommands support the `cluster-config` pattern. The last two are also supported
 by the `get`, `repl`, `reset-offsets`, and `tail` subcommands since these can be run
 independently of an `apply` workflow.

 ### Version compatibility

-We've tested `topicctl` on Kafka clusters with versions between `0.10.1` and `2.4.1`, inclusive.
+We've tested `topicctl` on Kafka clusters with versions between `0.10.1` and `2.7.1`, inclusive.
 If you run into any compatibility issues, please file a bug.

 ## Config formats
@@ -227,9 +222,9 @@ typically source-controlled so that changes can be reviewed before being applied

 ### Clusters

-Each cluster associated with a managed topic must have a config. These
-configs can also be used with the `get`, `repl`, and `tail` subcommands instead
-of specifying a ZooKeeper address.
+Each cluster associated with a managed topic must have a config. These configs can also be used
+with the `get`, `repl`, `reset-offsets`, and `tail` subcommands instead of specifying a broker or
+ZooKeeper address.

 The following shows an annotated example:

@@ -242,15 +237,30 @@ meta:
     Test cluster for topicctl.

 spec:
-  versionMajor: v0.10          # Version
   bootstrapAddrs:              # One or more broker bootstrap addresses
     - my-cluster.example.com:9092
-  zkAddrs:                     # One or more cluster zookeeper addresses
-    - zk.example.com:2181
-  zkPrefix: my-cluster         # Prefix for zookeeper nodes
+  clusterID: abc-123-xyz       # Expected cluster ID for cluster (optional, used as safety check only)
+
+  # ZooKeeper access settings (only required for pre-v2 clusters)
+  zkAddrs:                     # One or more cluster zookeeper addresses; if these are
+    - zk.example.com:2181      # omitted, then the cluster will only be accessed via broker APIs;
+                               # see the section below on cluster access for more details.
+  zkPrefix: my-cluster         # Prefix for zookeeper nodes if using zookeeper access
   zkLockPath: /topicctl/locks  # Path used for apply locks (optional)
-  clusterID: abc-123-xyz       # Expected cluster ID for cluster (optional, used as
-                               # safety check only)
+
+  # TLS/SSL settings (optional, not supported if using ZooKeeper)
+  tls:
+    enabled: true                # Whether TLS is enabled
+    caCertPath: path/to/ca.crt   # Path to CA cert to be used (optional)
+    certPath: path/to/client.crt # Path to client cert to be used (optional)
+    keyPath: path/to/client.key  # Path to client key to be used (optional)
+
+  # SASL settings (optional, not supported if using ZooKeeper)
+  sasl:
+    enabled: true            # Whether SASL is enabled
+    mechanism: SCRAM-SHA-512 # Mechanism to use; choices are PLAIN, SCRAM-SHA-256, and SCRAM-SHA-512
+    username: my-username    # Username; can also be set via TOPICCTL_SASL_USERNAME environment variable
+    password: my-password    # Password; can also be set via TOPICCTL_SASL_PASSWORD environment variable
 ```

 Note that the `name`, `environment`, `region`, and `description` fields are used
@@ -360,7 +370,7 @@ The `apply` subcommand can make changes, but under the following conditions:
 7. Partition replica migrations are protected via
    ["throttles"](https://kafka.apache.org/0101/documentation.html#rep-throttle)
    to prevent the cluster network from getting overwhelmed
-8. Before applying, the tool checks the cluster ID in ZooKeeper against the expected value in the
+8. Before applying, the tool checks the cluster ID against the expected value in the
    cluster config. This can help prevent errors around applying in the wrong cluster when multiple
    clusters are accessed through the same address, e.g. `localhost:2181`.

@@ -381,17 +391,76 @@ the process should continue from where it left off.

 ## Cluster access details

-Most `topicctl` functionality interacts with the cluster through ZooKeeper. Currently, only
-the following depend on broker APIs:
+### ZooKeeper vs. broker APIs
+
+`topicctl` can interact with a cluster through either ZooKeeper or by hitting broker APIs
+directly.
+
+Broker APIs are used exclusively if the tool is run with either of the following flags:
+
+1. `--broker-addr` *or*
+2. `--cluster-config` and the cluster config doesn't specify any ZK addresses
+
+We recommend using this "broker only" access mode for all clusters running Kafka versions >= 2.4.
+
+In all other cases, i.e. if `--zk-addr` is specified or the cluster config has ZK addresses, then
+ZooKeeper will be used for most interactions. A few operations that are not possible via ZK
+will still use broker APIs, however, including:

 1. Group-related `get` commands: `get groups`, `get lags`, `get members`
 2. `get offsets`
 3. `reset-offsets`
 4. `tail`
 5. `apply` with topic creation

-In the future, we may shift more functionality away from ZooKeeper, at least for newer cluster
-versions; see the "Roadmap" section above for more details.
+This "mixed" mode is required for clusters running Kafka versions < 2.0.
+
+### Limitations of broker-only access mode
+
+There are a few limitations in the tool when using the broker APIs exclusively:
+
+1. Only newer versions of Kafka are supported. In particular:
+    - v2.0 or greater is required for read-only operations (`get brokers`, `get topics`, etc.)
+    - v2.4 or greater is required for applying topic changes
+2. Apply locking is not yet implemented; please be careful when applying to ensure that someone
+   else isn't applying changes in the same topic at the same time.
+3. The values of some dynamic broker properties, e.g. `leader.replication.throttled.rate`, are
+   marked as "sensitive" and not returned via the API; `topicctl` will show the value as
+   `SENSITIVE`. This appears to be fixed in v2.6.
+4. Broker timestamps are not returned by the metadata API. These will be blank in the results
+   of `get brokers`.
+5. Applying is not fully compatible with clusters provisioned in Confluent Cloud. It appears
+   that Confluent prevents arbitrary partition reassignments, among other restrictions. Read-only
+   operations seem to work.
+
+### TLS
+
+TLS (referred to by the older name "SSL" in the Kafka documentation) is supported when running
+`topicctl` in the exclusive broker API mode. To use this, either set `--tls-enabled` on the
+command line or, if using a cluster config, set `enabled: true` in the `tls` section of
+the latter.
+
+In addition to standard TLS, the tool also supports mutual TLS using custom certs, keys, and CA
+certs (in PEM format). As with the enabling of TLS, these can be configured either on the
+command line or in a cluster config. See [this config](examples/auth/cluster.yaml) for an example.
+
+### SASL
+
+`topicctl` supports SASL authentication when running in the exclusive broker API mode. To use this,
+either set the `--sasl-mechanism`, `--sasl-username`, and `--sasl-password` flags on the command
+line or fill out the `sasl` section of the cluster config.
+
+If using the cluster config, the username and password can still be set on the command line
+or via the `TOPICCTL_SASL_USERNAME` and `TOPICCTL_SASL_PASSWORD` environment variables.
+
+The tool currently supports the following SASL mechanisms:
+
+1. `PLAIN`
+2. `SCRAM-SHA-256`
+3. `SCRAM-SHA-512`
+
+Note that SASL can be run either with or without TLS, although the former is generally more
+secure.

 ## Development