This module demonstrates the following:
- The use of the Processor API, including `process()` and `schedule()`.
- The processor context and the scheduling of tasks based on wall clock time and stream time.
- The creation of a timestamped key-value store.
- Unit testing using Topology Test Driver (see the sketch after this list).
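As a hedged illustration of that last point, here is a minimal `TopologyTestDriver` test driving the processor sketched after the task list below. The serde and `KafkaUser` helpers are hypothetical stand-ins for whatever the module actually provides:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;

import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.TestInputTopic;
import org.apache.kafka.streams.TestOutputTopic;
import org.apache.kafka.streams.TopologyTestDriver;
import org.junit.jupiter.api.Test;

import io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde;

import static org.junit.jupiter.api.Assertions.assertFalse;

class UserCountProcessorTest {

    @Test
    void shouldEmitCountersWhenStreamTimeAdvancesByOneMinute() {
        Serde<KafkaUser> userSerde = userSerde();

        try (TopologyTestDriver driver =
                 new TopologyTestDriver(UserCountProcessor.buildTopology(userSerde.deserializer()))) {

            TestInputTopic<String, KafkaUser> input = driver.createInputTopic(
                "USER_TOPIC", new StringSerializer(), userSerde.serializer());
            TestOutputTopic<String, Long> output = driver.createOutputTopic(
                "USER_SCHEDULE_TOPIC", new StringDeserializer(), new LongDeserializer());

            // Record timestamps drive stream time: the one-minute gap between
            // these two records fires the STREAM_TIME punctuation.
            input.pipeInput("1", user("FR"), Instant.parse("2024-01-01T00:00:00Z"));
            input.pipeInput("2", user("IT"), Instant.parse("2024-01-01T00:01:00Z"));

            assertFalse(output.readKeyValuesToList().isEmpty());

            // Wall clock time is simulated explicitly; two minutes fires the
            // WALL_CLOCK_TIME punctuation that resets the counters.
            driver.advanceWallClockTime(Duration.ofMinutes(2));
        }
    }

    // Hypothetical helpers: the real module provides its own Avro serdes
    // and its own KafkaUser construction.
    private static Serde<KafkaUser> userSerde() {
        SpecificAvroSerde<KafkaUser> serde = new SpecificAvroSerde<>();
        serde.configure(Map.of("schema.registry.url", "mock://test"), false);
        return serde;
    }

    private static KafkaUser user(String nationality) {
        // Set any additional fields the actual KafkaUser schema requires.
        return KafkaUser.newBuilder().setNationality(nationality).build();
    }
}
```

Because the driver processes records synchronously, both punctuation types can be exercised deterministically without a broker.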
In this module, records of type `<String, KafkaUser>` are streamed from a topic named `USER_TOPIC`.
The following tasks are performed:
- Processes the stream using a custom processor (sketched after this list) that performs the following tasks:
  - Counts the number of users by nationality.
  - Emits the counters for all nationalities every minute, based on stream time.
  - Resets the counters every two minutes, based on wall clock time.
- Writes the processed results to a new topic named `USER_SCHEDULE_TOPIC`.
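The following is a minimal sketch of such a processor and its topology, assuming a store named `user-count-store`, a `getNationality()` accessor on `KafkaUser`, and a caller-supplied `KafkaUser` deserializer; the module's actual implementation may differ:

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;

import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.LongSerializer;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.Stores;
import org.apache.kafka.streams.state.TimestampedKeyValueStore;
import org.apache.kafka.streams.state.ValueAndTimestamp;

public class UserCountProcessor implements Processor<String, KafkaUser, String, Long> {

    private ProcessorContext<String, Long> context;
    private TimestampedKeyValueStore<String, Long> store;

    @Override
    public void init(ProcessorContext<String, Long> context) {
        this.context = context;
        this.store = context.getStateStore("user-count-store");

        // Emit every counter once a minute, driven by stream time
        // (stream time only advances as record timestamps arrive).
        context.schedule(Duration.ofMinutes(1), PunctuationType.STREAM_TIME, timestamp -> {
            try (KeyValueIterator<String, ValueAndTimestamp<Long>> iterator = store.all()) {
                while (iterator.hasNext()) {
                    KeyValue<String, ValueAndTimestamp<Long>> entry = iterator.next();
                    context.forward(new Record<>(entry.key, entry.value.value(), timestamp));
                }
            }
        });

        // Reset every counter every two minutes, driven by wall clock time.
        // Keys are collected first to avoid mutating the store mid-iteration.
        context.schedule(Duration.ofMinutes(2), PunctuationType.WALL_CLOCK_TIME, timestamp -> {
            List<String> nationalities = new ArrayList<>();
            try (KeyValueIterator<String, ValueAndTimestamp<Long>> iterator = store.all()) {
                iterator.forEachRemaining(entry -> nationalities.add(entry.key));
            }
            nationalities.forEach(key -> store.put(key, ValueAndTimestamp.make(0L, timestamp)));
        });
    }

    @Override
    public void process(Record<String, KafkaUser> record) {
        // getNationality() is an assumed accessor; adapt to the actual schema.
        String nationality = record.value().getNationality();
        ValueAndTimestamp<Long> current = store.get(nationality);
        long count = current == null ? 0L : current.value();
        store.put(nationality, ValueAndTimestamp.make(count + 1, record.timestamp()));
    }

    // Wires USER_TOPIC -> processor -> USER_SCHEDULE_TOPIC and attaches the
    // timestamped key-value store that the processor reads and writes.
    public static Topology buildTopology(Deserializer<KafkaUser> userDeserializer) {
        Topology topology = new Topology();
        topology.addSource("Source", new StringDeserializer(), userDeserializer, "USER_TOPIC");
        topology.addProcessor("Process", UserCountProcessor::new, "Source");
        topology.addStateStore(
            Stores.timestampedKeyValueStoreBuilder(
                Stores.persistentTimestampedKeyValueStore("user-count-store"),
                Serdes.String(),
                Serdes.Long()),
            "Process");
        topology.addSink("Sink", "USER_SCHEDULE_TOPIC", new StringSerializer(), new LongSerializer(), "Process");
        return topology;
    }
}
```

Note the asymmetry the sketch illustrates: `STREAM_TIME` punctuations fire only as new records advance stream time, while `WALL_CLOCK_TIME` punctuations fire on a real-time timer regardless of traffic.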
To compile and run this demo, you will need the following:
- Java 21
- Maven
- Docker
To run the application manually:
- Start a Confluent Platform in a Docker environment.
- Produce records of type `<String, KafkaUser>` to a topic named `USER_TOPIC`. You can use the producer user to do this (see the sketch after this list).
- Start the Kafka Streams application.
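As an alternative to the bundled producer, here is a hedged sketch of producing such records yourself; the bootstrap and Schema Registry addresses, and the `KafkaUser` builder fields, are assumptions to adapt:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import io.confluent.kafka.serializers.KafkaAvroSerializer;

public class UserProducerSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class);
        // Schema Registry endpoint; adjust to your environment.
        props.put("schema.registry.url", "http://localhost:8081");

        try (KafkaProducer<String, KafkaUser> producer = new KafkaProducer<>(props)) {
            // Assumes KafkaUser is an Avro-generated class; set any
            // additional fields the actual schema requires.
            KafkaUser user = KafkaUser.newBuilder().setNationality("FR").build();
            producer.send(new ProducerRecord<>("USER_TOPIC", "1", user));
        }
    }
}
```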
To run the application in Docker, use the following command:

`docker-compose up -d`
This command will start the following services in Docker:
- 1 Kafka broker (KRaft mode)
- 1 Schema Registry
- 1 Control Center
- 1 producer (User)
- 1 Kafka Streams application (Schedule)