@PulsarListener ackTimeoutMillis conflicting with redelivery policy #1019
Comments
Hi @Programmer-yyds ,
Hello, has this problem been fixed in version 1.2.2?
Hi @Programmer-yyds , unfortunately we have been busy and have not had a chance to get to this issue. That said, the 1.2.3 release is tomorrow, and I have a few hours today to hopefully solve this in time for it. 🤞🏻
Hi @Programmer-yyds , The issue here is that once the listener wakes from its sleep, it does in fact ack the message. This takes the message out of the next redelivery, since it was in fact handled. The reason you see only 1 message go through the listener method is that the default concurrency of the listener is 1, so any subsequent delivery will block until the listener wakes (and acks the message it slept/blocked on). If you instead set the concurrency on the listener to 10, you will see the behavior you expect.
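Setting the concurrency on the listener might look like the following sketch (the topic, subscription, and method names here are placeholders, not taken from the reporter's actual code):

```java
// Sketch: a long-running listener with enough concurrency to absorb
// redeliveries that arrive while earlier deliveries are still blocked.
// Topic and subscription names are illustrative.
@PulsarListener(topics = "persistent://sample/ns1/topic-1",
                subscriptionName = "sub-1",
                concurrency = "10")
void consumeString(String msg) throws InterruptedException {
    Thread.sleep(300_000); // long-running work; exceeds the ack timeout
    // when this method returns normally, the framework acks the message
}
```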
Following your suggestion, I configured it as follows (I set ackTimeoutMillis to 2 s, the sleep to 300 s, and concurrency to 10). Problem: checking the log (below), the message was repeatedly consumed 10 times, but after those 10 consumptions it was still not delivered to the dead letter queue.

```
2025-02-18 09:51:41.839 [pulsar-client-io-1-3] INFO o.a.pulsar.client.impl.ProducerStatsRecorderImpl [,] - Starting Pulsar producer perf with config: {"topicName":"persistent://sample/ns1/topic-1","producerName":null,"sendTimeoutMs":30000,"blockIfQueueFull":false,"maxPendingMessages":0,"maxPendingMessagesAcrossPartitions":0,"messageRoutingMode":"RoundRobinPartition","hashingScheme":"JavaStringHash","cryptoFailureAction":"FAIL","batchingMaxPublishDelayMicros":1000,"batchingPartitionSwitchFrequencyByPublishDelay":10,"batchingMaxMessages":1000,"batchingMaxBytes":131072,"batchingEnabled":true,"chunkingEnabled":false,"chunkMaxMessageSize":-1,"encryptionKeys":[],"compressionType":"NONE","initialSequenceId":null,"autoUpdatePartitions":true,"autoUpdatePartitionsIntervalSeconds":60,"multiSchema":true,"accessMode":"Shared","lazyStartPartitionedProducers":false,"properties":{},"initialSubscriptionName":null,"nonPartitionedTopicExpected":false}
```
In the case of a long-running listener, there has to be enough concurrency to process the maximum number of redeliveries; otherwise, the broker does not think anyone has processed the last retried message. Let's step through your example.
Now if we bump the concurrency to 11 it will look as follows:
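The effect of concurrency on how quickly repeated deliveries of the same message get worked through can be sketched with a plain thread-pool simulation (hypothetical standalone code, not the framework's internals): with a pool of 1, each delivery must wait for the previous one's sleep to finish before it is even picked up, while a pool of 11 handles all deliveries at once.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class RedeliverySimulation {
    // Simulate `deliveries` deliveries of the same message, each handled by a
    // listener that sleeps for `handleMillis`, on a pool of `concurrency`
    // threads. Returns total wall-clock time until all deliveries complete.
    static long processAll(int deliveries, int concurrency, long handleMillis) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(concurrency);
        CountDownLatch done = new CountDownLatch(deliveries);
        long start = System.nanoTime();
        for (int i = 0; i < deliveries; i++) {
            pool.submit(() -> {
                try {
                    Thread.sleep(handleMillis); // long-running "work"
                } catch (InterruptedException ignored) {
                }
                done.countDown(); // the listener "acks" only here
            });
        }
        done.await();
        pool.shutdown();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        long serial = processAll(11, 1, 50);    // concurrency 1: deliveries queue up
        long parallel = processAll(11, 11, 50); // concurrency 11: all handled at once
        System.out.println("concurrency=1: " + serial + " ms, concurrency=11: " + parallel + " ms");
    }
}
```

With concurrency 1 the 11 deliveries are strictly serialized, so later redeliveries are still sitting in the queue (unprocessed, as far as the broker can tell) long after the first delivery has been acked.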
With @PulsarListener configured with ackTimeoutMillis, the message was redelivered only twice after the execution timeout, even though my policy is configured for 10 redeliveries, and it was not delivered to the dead letter queue.
In the current code, the consumeString method is configured with an ack timeout of 1000 ms and the handler sleeps for 30 * 1000 ms. Under normal circumstances this should trigger timeout redelivery, but it was only retried twice; my policy is configured for 10 redeliveries, and the message was not delivered to the dead letter topic topic-1-dlq-topic.
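A dead-letter setup matching this description might look like the following sketch; the bean name and the exact way the listener references it are assumptions, and only the max redeliver count (10) and DLQ topic name (topic-1-dlq-topic) come from the report above.

```java
// Sketch: route the message to topic-1-dlq-topic after 10 failed redeliveries.
// Bean wiring is illustrative, not the reporter's actual configuration.
@Bean
DeadLetterPolicy deadLetterPolicy() {
    return DeadLetterPolicy.builder()
            .maxRedeliverCount(10)
            .deadLetterTopic("topic-1-dlq-topic")
            .build();
}
```

The listener would then reference this policy (e.g. via a `deadLetterPolicy` attribute naming the bean), and the DLQ only receives the message once the broker has actually counted 10 failed redeliveries, which is exactly what the concurrency discussion above is about.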
code segment:

```java
import javax.annotation.Resource;
import java.time.format.DateTimeFormatter;
import java.util.regex.Pattern;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.pulsar.core.PulsarTemplate;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class TaskAsyncController {
    private static final DateTimeFormatter sdf = DateTimeFormatter.ofPattern("yyyyMMdd HH:mm:ss");
    private static final Logger log = LoggerFactory.getLogger(TaskAsyncController.class);
    // Note: backslashes must be doubled inside a Java string literal
    static Pattern pattern = Pattern.compile("\\.([a-zA-Z0-9]+)(?:\\?|$)");

    @Resource
    private PulsarTemplate<String> strProducer;
}
```
log:
