I have come across a few similar stack traces, but those seem to be caused by timeouts, which is not what my warning shows.
I have a consumer listening on a very low-throughput stream, and all its record processing does is publish the data to a topic (a minimal sketch is below). I haven't seen any data loss so far; the warnings just appear anywhere between every 15 and 45 minutes, and then everything goes back to normal.
I have four different consumers (each subscribing to a different stream) running the exact same Python code, but only the one with the lowest throughput has this issue. Any ideas what could be causing it?
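For context, here is a minimal sketch of what the processor does. This is an illustration only and assumes the `amazon_kclpy` v2 interface running under the KCL 2.x MultiLangDaemon; the SNS client, topic ARN, and class name are placeholders, not taken from the actual code:

```python
# Sketch of the low-throughput consumer described above (assumed amazon_kclpy v2
# interface; topic ARN and names are placeholders).
import boto3
from amazon_kclpy import kcl
from amazon_kclpy.v2 import processor


class ForwardingProcessor(processor.RecordProcessorBase):
    def initialize(self, initialize_input):
        # One SNS client per shard processor; the topic is a placeholder.
        self.sns = boto3.client("sns")

    def process_records(self, process_records_input):
        for record in process_records_input.records:
            # binary_data is the base64-decoded payload handed over by the daemon.
            self.sns.publish(
                TopicArn="arn:aws:sns:...:example-topic",  # placeholder
                Message=record.binary_data.decode("utf-8"),
            )
        # Checkpoint after the (very small) batch has been forwarded.
        process_records_input.checkpointer.checkpoint()

    def shutdown(self, shutdown_input):
        if shutdown_input.reason == "TERMINATE":
            shutdown_input.checkpointer.checkpoint()


if __name__ == "__main__":
    kcl.KCLProcess(ForwardingProcessor()).run()
```

The warnings below appear even though the handler itself does almost no work, which is why the processing load seems unlikely to be the cause.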
WARN s.a.k.r.f.FanOutRecordsPublisher - shardId-000000000000: [SubscriptionLifetime] - (FanOutRecordsPublisher#errorOccurred) @ 2021-05-20T08:10:13.015372050Z id: shardId-000000000000-2238 -- software.amazon.awssdk.services.kinesis.model.InternalFailureException: Internal Service Error
(Service: kinesis, Status Code: 500, Request ID: f569c01a-aadd-0445-ac49-dbbf9e5d2c14)
software.amazon.awssdk.services.kinesis.model.InternalFailureException: Internal Service Error (Service: kinesis, Status Code: 500, Request ID: f569c01a-aadd-0445-ac49-dbbf9e5d2c14)
at software.amazon.awssdk.services.kinesis.model.InternalFailureException$BuilderImpl.build(InternalFailureException.java:114)
at software.amazon.awssdk.services.kinesis.model.InternalFailureException$BuilderImpl.build(InternalFailureException.java:74)
at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.unmarshall(AwsJsonProtocolErrorUnmarshaller.java:86)
at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.handle(AwsJsonProtocolErrorUnmarshaller.java:62)
at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.handle(AwsJsonProtocolErrorUnmarshaller.java:41)
at software.amazon.awssdk.awscore.eventstream.EventStreamAsyncResponseTransformer.handleMessage(EventStreamAsyncResponseTransformer.java:272)
at software.amazon.eventstream.MessageDecoder.feed(MessageDecoder.java:123)
at software.amazon.eventstream.MessageDecoder.feed(MessageDecoder.java:71)
...
<omitted for brevity>
...
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:656)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:591)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:508)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909)
at java.base/java.lang.Thread.run(Thread.java:832)
WARN s.a.k.l.ShardConsumerSubscriber - shardId-000000000000: onError(). Cancelling subscription, and marking self as failed.
.....
WARN s.a.k.l.ShardConsumerSubscriber - shardId-000000000000: Failure occurred in retrieval. Restarting data requests
......
java.util.concurrent.CompletionException: software.amazon.awssdk.services.kinesis.model.LimitExceededException: Rate exceeded for consumer arn:aws:kinesis:... and shard shardId-000000000000 (Service: Kinesis, Status Code: 400, Request ID: ecc20959-e7ef-d353-b5e2-2b6cd36ffb02)
at software.amazon.awssdk.utils.CompletableFutureUtils.errorAsCompletionException(CompletableFutureUtils.java:61)
at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncExecutionFailureExceptionReportingStage.lambda$execute$0(AsyncExecutionFailureExceptionReportingStage.java:50)
at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930)
at java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:907)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2152)
at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryExecutor.retryResponseIfNeeded(AsyncRetryableStage.java:147)
at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryExecutor.retryIfNeeded(AsyncRetryableStage.java:113)
at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryExecutor.lambda$execute$0(AsyncRetryableStage.java:104)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2137)
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.lambda$executeHttpRequest$1(MakeAsyncHttpRequestStage.java:134)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
at java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
at java.base/java.lang.Thread.run(Thread.java:832)
Caused by: software.amazon.awssdk.services.kinesis.model.LimitExceededException: Rate exceeded for consumer arn:aws:kinesis:... and shard shardId-000000000000 (Service: Kinesis, Status Code: 400, Request ID: ecc20959-e7ef-d353-b5e2-2b6cd36ffb02)
at software.amazon.awssdk.services.kinesis.model.LimitExceededException$BuilderImpl.build(LimitExceededException.java:118)
at software.amazon.awssdk.services.kinesis.model.LimitExceededException$BuilderImpl.build(LimitExceededException.java:78)
at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.unmarshall(AwsJsonProtocolErrorUnmarshaller.java:86)
...
<omitted for brevity>
...
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:474)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909)
... 1 common frames omitted