GH-49502: [Parquet][C++] Fix missing overflow check for dictionary encoder indices count #49513
aryansri05 wants to merge 1 commit into apache:main
Conversation
Thanks for opening a pull request! If this is not a minor PR, could you open an issue for this pull request on GitHub? https://github.com/apache/arrow/issues/new/choose Opening GitHub issues ahead of time contributes to the Openness of the Apache Arrow project. Then could you also rename the pull request title in the following format?
raulcd left a comment
I don't think this is what we are looking for here. This will still fail the `WriteLargeDictEncodedPage` test, which is one of the two tests that were supposed to be fixed by this PR. I've pushed a commit to my PR to add large memory tests to CI with a fix. Basically, the tests were written before we had `max_rows_per_page`, and we expect those tests to produce huge data pages. See the commit here:
ab1c5ad
I am currently validating whether the tests are passing with my fix.
Thank you for the explanation! I can see now why the tests expect huge data pages. Should I update my PR to include both fixes together?
Closes #49502
Rationale for this change
When writing large dictionary-encoded Parquet data with `ARROW_LARGE_MEMORY_TESTS=ON`, two tests were failing:

- `TestColumnWriter.WriteLargeDictEncodedPage`: expected 2 pages, got 7501
- `TestColumnWriter.ThrowsOnDictIndicesTooLarge`: expected `ParquetException`, got nothing thrown
The root cause is that `PutIndicesTyped()` in `DictEncoderImpl` had no check for when the total number of buffered dictionary indices exceeds `INT32_MAX`. The existing overflow check in `FlushValues()` only checks the buffer size in bytes, not the index count, so it never triggered for this case.
What changes are included in this PR?
Added an overflow check in `DictEncoderImpl::PutIndicesTyped()` immediately after `buffered_indices_.resize()`:

```cpp
if (buffered_indices_.size() >
    static_cast<size_t>(std::numeric_limits<int32_t>::max())) {
  throw ParquetException("Total dictionary indices count (", buffered_indices_.size(),
                         ") exceeds maximum int value");
}
```
This makes the encoder throw a `ParquetException` with a message containing "exceeds maximum int value" when the index count overflows, which is exactly what `ThrowsOnDictIndicesTooLarge` expects.
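For readers outside the Parquet tree, a self-contained sketch of the same guard pattern follows. `PutIndicesChecked` is a hypothetical stand-in for `PutIndicesTyped()`, `std::length_error` stands in for `ParquetException` (whose constructor in the snippet above concatenates multiple arguments into one message), and a plain vector stands in for `buffered_indices_`.

```cpp
#include <cstdint>
#include <limits>
#include <stdexcept>
#include <string>
#include <vector>

// Hypothetical stand-in for DictEncoderImpl::PutIndicesTyped(): append
// num_indices entries, then reject if the total no longer fits in int32_t.
void PutIndicesChecked(std::vector<int32_t>& buffered_indices,
                       size_t num_indices) {
  buffered_indices.resize(buffered_indices.size() + num_indices);
  // Same condition as the patch, placed immediately after the resize.
  if (buffered_indices.size() >
      static_cast<size_t>(std::numeric_limits<int32_t>::max())) {
    throw std::length_error("Total dictionary indices count (" +
                            std::to_string(buffered_indices.size()) +
                            ") exceeds maximum int value");
  }
}
```

Note that, mirroring the patch, the check runs after the resize: the allocation itself is allowed to succeed, and only the logical index count is rejected.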
Are these changes tested?
Yes, the existing tests in `column_writer_test.cc` cover this fix:

- `TestColumnWriter.ThrowsOnDictIndicesTooLarge`
- `TestColumnWriter.WriteLargeDictEncodedPage`

Both tests were failing before this fix and should pass after. Tests require building with `ARROW_LARGE_MEMORY_TESTS=ON`.
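For illustration, the message assertion could look like the sketch below. `EncodeMoreThanInt32MaxIndices()` is a hypothetical helper standing in for the real test setup; the actual test bodies live in `column_writer_test.cc` and are not reproduced here.

```cpp
#include <string>

#include <gtest/gtest.h>

#include "parquet/exception.h"

// Hypothetical helper that drives the dictionary encoder past INT32_MAX
// buffered indices (the real setup is in column_writer_test.cc).
void EncodeMoreThanInt32MaxIndices();

TEST(DictEncoderOverflow, ThrowsWithExpectedMessage) {
  try {
    EncodeMoreThanInt32MaxIndices();
    FAIL() << "expected ParquetException";
  } catch (const parquet::ParquetException& e) {
    // The expectation only needs the message to contain the checked substring.
    EXPECT_NE(std::string(e.what()).find("exceeds maximum int value"),
              std::string::npos);
  }
}
```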
This PR contains a "Critical Fix": previously, writing dictionary-encoded data with more than `INT32_MAX` indices would silently produce incorrect output (wrong page count) instead of raising an error. This fix makes the encoder correctly throw a `ParquetException` in that scenario.