[SPARK-55968][SQL] Do not treat vectorized reader capacity overflow a… #54805
azmatsiddique wants to merge 2 commits into apache:master from
Conversation
What changes were proposed in this pull request?
This PR modifies `DataSourceUtils.shouldIgnoreCorruptFileException` to stop silently swallowing `RuntimeException` instances that result from integer overflows when reserving column capacity in `WritableColumnVector`.
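A minimal sketch of the kind of guard this implies, assuming a boolean helper of this shape; the actual method in `DataSourceUtils` may have a different signature, and `DataSourceUtilsSketch` is a stand-in name, not the PR's code:

```scala
// Sketch only: the real method lives in
// org.apache.spark.sql.execution.datasources.DataSourceUtils.
object DataSourceUtilsSketch {

  // Message prefix thrown by WritableColumnVector on integer overflow
  // (quoted from the PR description).
  private val CapacityOverflowMessage =
    "Cannot reserve additional contiguous bytes in the vectorized reader"

  /** Returns true if `e` should be treated as file corruption and ignored. */
  def shouldIgnoreCorruptFileException(e: Throwable): Boolean = e match {
    // A capacity overflow is a reader-configuration problem, not a corrupt
    // file: let it propagate so the task fails loudly.
    case re: RuntimeException
        if re.getMessage != null &&
          re.getMessage.contains(CapacityOverflowMessage) =>
      false
    // Previous behavior: any other RuntimeException is treated as corruption.
    case _: RuntimeException => true
    case _ => false
  }
}
```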
Why are the changes needed?

Currently, when `spark.sql.files.ignoreCorruptFiles` is enabled, Spark catches any `RuntimeException` thrown while reading data files and treats the file as corrupted. In particular, the vectorized Parquet / ORC readers can fail with `java.lang.RuntimeException: Cannot reserve additional contiguous bytes in the vectorized reader` (an integer overflow). Because this overflow exception is a `RuntimeException`, it is caught by `shouldIgnoreCorruptFileException` and ignored. As a result, the affected data files are skipped entirely and user data is silently dropped without any explicit task failure, rather than warning the user that their vectorized batch size is too large.

This change ensures that this specific capacity exception propagates and fails the task, allowing users to apply the recommended workarounds (reducing the batch size, disabling the vectorized reader, etc.), as sketched below.
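For reference, these are the kinds of settings the workarounds point to. The configs (`spark.sql.parquet.columnarReaderBatchSize`, `spark.sql.orc.columnarReaderBatchSize`, and the `enableVectorizedReader` flags) are existing Spark SQL options; the values shown here are illustrative only:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("vectorized-reader-tuning")
  .master("local[*]")  // illustrative local setup
  // Workaround 1: shrink the vectorized batch size so each column vector
  // reserves fewer contiguous bytes per batch (the default is 4096 rows).
  .config("spark.sql.parquet.columnarReaderBatchSize", "1024")
  .config("spark.sql.orc.columnarReaderBatchSize", "1024")
  // Workaround 2: disable the vectorized readers and fall back to the
  // row-based readers.
  // .config("spark.sql.parquet.enableVectorizedReader", "false")
  // .config("spark.sql.orc.enableVectorizedReader", "false")
  .getOrCreate()
```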
Does this PR introduce any user-facing change?
Yes.
Previously, if the vectorized reader encountered an integer overflow while reading a file and `spark.sql.files.ignoreCorruptFiles` was enabled, the file would be silently skipped and its data lost.

With this change, the job fails explicitly with the `RuntimeException: Cannot reserve additional contiguous bytes...` error, alerting the user to tune their reader settings instead of silently losing data. The before/after behavior is illustrated below.
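A hedged illustration of the behavior change, assuming an active `SparkSession` named `spark` and a hypothetical input path:

```scala
// Assumes an active SparkSession `spark`; the input path is hypothetical.
spark.conf.set("spark.sql.files.ignoreCorruptFiles", "true")

// Before this change: if reading this file overflowed the vectorized
// reader's capacity, the RuntimeException was swallowed, the file was
// treated as corrupt, and its rows were silently missing from the result.
// After this change: the same read fails the task with
// "RuntimeException: Cannot reserve additional contiguous bytes...",
// surfacing the misconfiguration instead of dropping data.
val df = spark.read.parquet("/path/to/wide_rows.parquet")
df.count()
```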
How was this patch tested?

Ran `build/sbt "sql/testOnly org.apache.spark.sql.execution.datasources.parquet.ParquetQuerySuite"`, plus the existing `OrcQuerySuite` and `ParquetQuerySuite` tests involving `ignoreCorruptFiles`. A sketch of a possible regression test follows.
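A sketch of what a regression test for this could look like, written in the style of the existing suites; `testQuietly`, `withSQLConf`, and `withTempPath` are standard helpers in Spark's SQL test suites, while `readTriggeringCapacityOverflow` is a hypothetical stand-in for whatever setup forces the overflow:

```scala
// Illustrative only, not the PR's actual test code.
testQuietly("SPARK-55968: vectorized reader capacity overflow fails the task") {
  withSQLConf(SQLConf.IGNORE_CORRUPT_FILES.key -> "true") {
    withTempPath { dir =>
      spark.range(10).write.parquet(dir.getCanonicalPath)
      val e = intercept[RuntimeException] {
        readTriggeringCapacityOverflow(dir.getCanonicalPath)  // hypothetical
      }
      // The overflow must propagate even though ignoreCorruptFiles is on.
      assert(e.getMessage.contains(
        "Cannot reserve additional contiguous bytes"))
    }
  }
}
```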
Was this patch authored or co-authored using generative AI tooling?

No.