
[SPARK-55968][SQL] Do not treat vectorized reader capacity overflow a…#54805

Open
azmatsiddique wants to merge 2 commits into apache:master from azmatsiddique:SPARK-55968

Conversation

@azmatsiddique

What changes were proposed in this pull request?

This PR modifies DataSourceUtils.shouldIgnoreCorruptFileException so that it no longer silently swallows the RuntimeException raised when WritableColumnVector overflows its integer capacity while reserving space for a column batch.
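The shape of the change can be sketched as follows. This is a minimal Java illustration, not the actual Scala implementation in DataSourceUtils: the class and method names here are hypothetical, and Spark's real code distinguishes the exception by more than its message. It only shows the guard idea — one specific RuntimeException is excluded from the "treat as corrupt file" path.

```java
public class CorruptFileFilter {
    // Hypothetical marker: this sketch matches on the message prefix of the
    // capacity-overflow error; Spark's real check is structured differently.
    static final String CAPACITY_MSG =
        "Cannot reserve additional contiguous bytes";

    // Returns true only for exceptions that should be treated as file
    // corruption. Capacity overflows must propagate and fail the task so
    // the user learns that the vectorized batch size is too large.
    static boolean shouldIgnore(RuntimeException e) {
        String msg = e.getMessage();
        if (msg != null && msg.startsWith(CAPACITY_MSG)) {
            return false; // not corruption: vectorized reader capacity overflow
        }
        return true; // other RuntimeExceptions: treat the file as corrupt
    }
}
```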

Why are the changes needed?

Currently, when spark.sql.files.ignoreCorruptFiles is enabled, Spark will catch any RuntimeException while reading data files and treat the file as corrupted.

In particular, the vectorized Parquet / ORC readers can fail with java.lang.RuntimeException: Cannot reserve additional contiguous bytes when the vectorized reader's capacity arithmetic overflows an int. Because this is a RuntimeException, shouldIgnoreCorruptFileException catches it and the file is skipped entirely, silently dropping user data without any explicit task failure, rather than warning the user that their vectorized batch size is too large.
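The overflow itself is ordinary Java int wraparound. The following is a standalone illustration of why a doubling growth strategy produces a negative capacity, not Spark's actual WritableColumnVector code:

```java
public class CapacityOverflowDemo {
    // Mimics a doubling growth strategy for a columnar buffer. Once the
    // requested capacity exceeds Integer.MAX_VALUE, the int multiplication
    // wraps around and the result goes negative.
    static int grow(int capacity) {
        return capacity * 2;
    }

    public static void main(String[] args) {
        int capacity = 1 << 30;      // already a very large element count
        int grown = grow(capacity);  // 2^31 wraps to Integer.MIN_VALUE
        System.out.println(grown);   // prints -2147483648
        // A reader that sees a negative requested capacity cannot reserve
        // the bytes and raises a RuntimeException.
    }
}
```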

This change ensures that this specific capacity exception explicitly propagates to fail the task, allowing users to apply the recommended workarounds (reducing batch size, disabling vectorized reader, etc.).
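Once the failure surfaces, the workarounds mentioned above map onto standard Spark configuration. A sketch (your_job.py is a placeholder for the user's application; the batch-size value of 1024 is an illustrative choice, not a recommendation):

```shell
# Shrink the vectorized batch size (default 4096 rows) so a single batch
# no longer requires more contiguous bytes than an int can address:
spark-submit \
  --conf spark.sql.parquet.columnarReaderBatchSize=1024 \
  --conf spark.sql.orc.columnarReaderBatchSize=1024 \
  your_job.py

# Or fall back to the non-vectorized, row-based readers entirely:
spark-submit \
  --conf spark.sql.parquet.enableVectorizedReader=false \
  --conf spark.sql.orc.enableVectorizedReader=false \
  your_job.py
```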

Does this PR introduce any user-facing change?

Yes.
Previously, if the vectorized reader encountered an integer overflow while reading a file, the file would be silently skipped and its data lost if spark.sql.files.ignoreCorruptFiles was enabled.
With this change, the job fails explicitly with the Cannot reserve additional contiguous bytes... RuntimeException, prompting the user to tune their reader settings instead of silently losing data.

How was this patch tested?

  • Manually verified via a local build: build/sbt "sql/testOnly org.apache.spark.sql.execution.datasources.parquet.ParquetQuerySuite".
  • Verified via existing OrcQuerySuite and ParquetQuerySuite tests involving ignoreCorruptFiles.

Was this patch authored or co-authored using generative AI tooling?

No.
