Backported from 13a5199.

After 95438b2 (#4342), there is a section in `write_step_by_step()` where chunks are not locked. `write_step_by_step()` must keep holding their locks until it passes them to the block. Otherwise, a race condition can occur and cause emit errors with IOError.

Example warning messages of such an emit error:

    [warn]: #0 emit transaction failed: error_class=IOError error="closed stream" location=...
    [warn]: #0 send an error event stream to @error: error_class=IOError error="closed stream" location=...

Signed-off-by: Daijiro Fukuda <[email protected]>
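The failure mode can be sketched with a toy chunk. All names below (`FakeChunk`, `write_without_held_lock`, `write_with_held_lock`) are hypothetical, not Fluentd's API: the point is only that if the lock taken while picking a chunk is released before the caller writes to it, a concurrent purge can close the chunk and the later write fails with the `IOError` ("closed stream") seen in the warnings above.

```ruby
require 'monitor'

# FakeChunk is a stand-in (hypothetical, not Fluentd's Chunk class): it can be
# appended to or purged, and every operation takes the chunk's own monitor.
class FakeChunk
  include MonitorMixin

  def initialize
    super()
    @closed = false
  end

  def purge
    synchronize { @closed = true }
  end

  def append(data)
    synchronize do
      raise IOError, "closed stream" if @closed
      data.bytesize
    end
  end
end

# Buggy pattern: the lock is taken while picking the chunk but released
# before writing, so a concurrent purge can sneak in between.
def write_without_held_lock(chunk)
  chunk.synchronize { }   # lock acquired and immediately released
  chunk.purge             # simulates the other thread winning the race
  chunk.append("a" * 10)  # raises IOError ("closed stream")
end

# Fixed pattern: keep holding the lock until the write is done; the
# concurrent purge has to wait, so the write succeeds.
def write_with_held_lock(chunk)
  result = nil
  purger = nil
  chunk.synchronize do
    purger = Thread.new { chunk.purge } # blocks until we release the lock
    sleep 0.05                          # give the purger a chance to run
    result = chunk.append("a" * 10)     # safe: we still hold the lock
  end
  purger.join
  result
end
```

This is why the fix below extends the locked section rather than re-acquiring the lock per operation: the invariant is "locked from selection until handed to the block", not "each method call is individually synchronized".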
The diff of `write_step_by_step()`:

```diff
+                # The split will (might) cause size over so keep already processed
+                # 'split' content here (allow performance regression a bit).
+                chunk.commit
+                committed_bytesize = chunk.bytesize
+              end
+            end

-            if split.size == 1 # Check BufferChunkOverflowError again
-              if adding_bytes > @chunk_limit_size
-                errors << "concatenated/appended a #{adding_bytes} bytes record (nth: #{writing_splits_index}) is larger than buffer chunk limit size (#{@chunk_limit_size})"
-                writing_splits_index += 1
-                next
-              else
-                # As already processed content is kept after rollback, then unstaged chunk should be queued.
-                # After that, re-process current split again.
-                # New chunk should be allocated, to do it, modify @stage and so on.
-                synchronize { @stage.delete(modified_metadata) }
-                staged_chunk_used = false
-                chunk.unstaged!
-                break
-              end
-            end
+            if format
+              chunk.concat(formatted_split, split.size)
+            else
+              chunk.append(split, compress: @compress)
+            end
+            adding_bytes = chunk.bytesize - committed_bytesize

-            if chunk_size_full?(chunk) || split.size == 1
-              enqueue_chunk_before_retry = true
+            if chunk_size_over?(chunk) # split size is larger than difference between size_full? and size_over?
+              chunk.rollback
+              committed_bytesize = chunk.bytesize
+
+              if split.size == 1 # Check BufferChunkOverflowError again
+                if adding_bytes > @chunk_limit_size
+                  errors << "concatenated/appended a #{adding_bytes} bytes record (nth: #{writing_splits_index}) is larger than buffer chunk limit size (#{@chunk_limit_size})"
+                  writing_splits_index += 1
+                  next
                 else
-              splits_count *= 10
+                  # As already processed content is kept after rollback, then unstaged chunk should be queued.
+                  # After that, re-process current split again.
+                  # New chunk should be allocated, to do it, modify @stage and so on.
+                  synchronize { @stage.delete(modified_metadata) }
+                  staged_chunk_used = false
+                  chunk.unstaged!
+                  break
                 end
+              end

-            raise ShouldRetry
+              if chunk_size_full?(chunk) || split.size == 1
+                enqueue_chunk_before_retry = true
+              else
+                splits_count *= 10
               end

-            writing_splits_index += 1
+              raise ShouldRetry
+            end

-            if chunk_size_full?(chunk)
-              break
-            end
+            writing_splits_index += 1
+
+            if chunk_size_full?(chunk)
+              break
             end
-          rescue
-            chunk.purge if chunk.unstaged? # unstaged chunk will leak unless purge it
```
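For reference, the retry policy guarding `raise ShouldRetry` above can be condensed into a small helper. `next_retry_plan` is a name made up for this sketch; in the real code the logic is inlined in `write_step_by_step()`. When an append overflows the chunk, either the chunk is enqueued before retrying (it is full, or the split is already a single record and cannot be divided further) or the data is re-split ten times finer.

```ruby
# Hypothetical condensation of the branch above: decide what the next retry
# attempt should do after a rollback caused by chunk_size_over?.
def next_retry_plan(chunk_full:, split_size:, splits_count:)
  if chunk_full || split_size == 1
    # The chunk itself is the bottleneck: flush it before retrying.
    { enqueue_chunk_before_retry: true, splits_count: splits_count }
  else
    # The split granularity is the bottleneck: retry with 10x finer splits.
    { enqueue_chunk_before_retry: false, splits_count: splits_count * 10 }
  end
end
```

A single-record split (`split.size == 1`) can never be made finer, so re-splitting would loop forever; that case must fall through to enqueueing, which is why the condition pairs it with `chunk_size_full?`.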