Add bloom filter folding to automatically size SBBF filters #9628
adriangb wants to merge 15 commits into apache:main
Conversation
Instead of requiring users to guess NDV (number of distinct values) upfront, bloom filters now support a folding mode: allocate a conservatively large filter (sized for worst-case NDV = max row group rows), insert all values during writing, then fold down at flush time to meet a target FPP. When NDV is not explicitly set (the new default), folding mode activates automatically. Setting NDV explicitly preserves the existing fixed-size behavior for backward compatibility.

Key changes:
- `BloomFilterProperties.ndv` is now `Option<u64>` (`None` = folding mode)
- Added `BloomFilterProperties.max_bytes` for explicit initial size control
- Default FPP changed from 0.05 to 0.01
- Added `Sbbf::fold_to_target_fpp()` which merges adjacent block pairs
- Both `ColumnValueEncoderImpl` and `ByteArrayEncoder` fold at flush time

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Revert `DEFAULT_BLOOM_FILTER_FPP` back to 0.05 (no behavior change)
- Add comprehensive docstrings on `Sbbf`, `fold_once`, `estimated_fpp_after_fold`, and `fold_to_target_fpp` explaining the mathematical basis, the SBBF adaptation (adjacent pairs vs halves), FPP estimation, and correctness guarantees
- Add citation to Sailhan & Stehr, "Folding and Unfolding Bloom Filters" (IEEE iThings 2012, doi:10.1109/GreenCom.2012.16)
- Keep module-level docs short, pointing to struct/method docs

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Hey @adriangb, cool idea. What motivated this, if you don't mind me asking? Are any other Parquet implementations doing this?
My motivation was that, looking at our data, this is a consistent problem: we have high-cardinality data (trace IDs) that saturates the bloom filters when packed into 1M-row row groups (making them useless), but also wastes a ton of space in small files. In looking for a solution I came across this neat trick. I don't know if other Parquet implementations use this, but TimescaleDB does (linked above).
```rust
assert!(
    len >= 2,
    "Cannot fold a bloom filter with fewer than 2 blocks"
);
```
assert!(len % 2 == 0)?
I think `fold_once` can only work if `len` is even.
I think it should work fine with odd values, as long as we make sure the last value doesn't cause an out-of-bounds index (i.e. the last block is not modified in the odd case). But I think we probably truncate too much for odd values.
```rust
    let block_fill = set_bits as f64 / 256.0;
    total_fpp += block_fill.powi(8);
}
total_fpp / half as f64
```
why is the cast needed here? Can it be avoided by setting the type explicitly on `total_fpp`?
parquet/src/bloom_filter/mod.rs
```rust
///
/// ## Why adjacent pairs (not halves)?
///
/// Standard Bloom filter folding merges the two halves (`B[i] | B[i + m/2]`) because
```
nit: as an explanation it might pay to reverse this, since I'm not sure readers would commonly be aware of Bloom filter folding. It might be better to explain the halves approach first and then indicate why this differs from the linked paper.
```rust
    .bloom_filter_properties(descr.path())
    .map(|props| Sbbf::new_with_ndv_fpp(props.ndv, props.fpp))
    .transpose()?;
let (bloom_filter, bloom_filter_target_fpp) =
```
Is the bloom filter creation logic the same as in encoder.rs? Maybe we can extract a `fn create_bloom_filter`?
viirya left a comment
Do we have e2e tests that cover this folding mode behavior already?
I can add them. Where would you recommend? I'm not all that familiar with the test structure here.
Co-authored-by: Liang-Chi Hsieh <viirya@gmail.com>
Okay. Actually, the existing integration roundtrip tests will cover the folding path automatically after this PR, because they don't set NDV. That means the old fixed-size mode will no longer be covered by those roundtrip tests, so it seems we should add roundtrip tests for fixed-size mode. Arrow writer roundtrip tests are in parquet/src/arrow/arrow_writer/mod.rs, e.g. i32_column_bloom_filter, i32_column_bloom_filter_at_end, etc. Arrow reader roundtrip tests like
I added a test for the legacy path. Should we deprecate it? I think the intent is better captured by the new path. One may want to create exact-size bloom filters, but I don't think setting the NDV and FPP is the right way to do that (a setting for specifying the size directly would be better).
Yeah, I think we can deprecate the old behavior and maybe remove it after a few releases.
Do you want to do that in this PR or in a followup (maybe once this is out in the wild and known to be working well)?
I think we can do it in this PR.
Coming from the dev list. The parquet-java implementation tried to optimize the disk size by creating multiple bloom filter writers with different NDVs and choosing the best at the end. The approach in this PR looks more elegant and worth porting to other implementations.
If we want to deprecate the existing NDV, I think we're better off re-interpreting it to mean "maximum NDV" or "initial NDV". That way, existing users who set the NDV also benefit from folding. This means there will be no way to disable folding, but I also don't see any reason anyone would want to do that beyond requiring a fixed-size bloom filter (in which case relying on a combination of FPP + NDV giving you a fixed size was probably a bad choice to begin with, given that I don't think we made any such API promise; they should open an issue requesting an explicit API for this). Thus the only changes vs. main now are:
```rust
    (SMALL_SIZE as i32 + 1..SMALL_SIZE as i32 + 10).collect(),
);

// NDV smaller than actual distinct values — tests the underestimate path
```
The array has only 7 distinct values, so "NDV smaller than actual distinct values" seems incorrect?
Summary
Bloom filters now support folding mode: allocate a conservatively large filter (sized for worst-case NDV), insert all values during writing, then fold down at flush time to meet a target FPP. This eliminates the need to guess NDV upfront and produces optimally-sized filters automatically.
Changes
- `BloomFilterProperties.ndv` changed from `u64` to `Option<u64>` — when `None` (new default), the filter is sized based on `max_row_group_row_count`; when `Some(n)`, the explicit NDV is used
- `DEFAULT_BLOOM_FILTER_NDV` redefined to `DEFAULT_MAX_ROW_GROUP_ROW_COUNT as u64` (was hardcoded `1_000_000`)
- Added `Sbbf::fold_to_target_fpp()` and supporting methods (`fold_once`, `estimated_fpp_after_fold`, `num_blocks`) with comprehensive documentation
- `flush_bloom_filter()` in both `ColumnValueEncoderImpl` and `ByteArrayEncoder` now folds the filter before returning it
- New `create_bloom_filter()` helper in `encoder.rs` centralizes bloom filter construction logic

How folding works
The SBBF fold operation merges adjacent block pairs (`block[2i] | block[2i+1]`) via bitwise OR, halving the filter size. This differs from standard Bloom filter folding (which merges halves at distance `m/2`) because SBBF uses multiplicative hashing for block selection: when `num_blocks` is halved, the new index becomes `floor(original_index / 2)`, so adjacent blocks map to the same position.

FPP is estimated per block as `avg(block_fill^8)`, since SBBF membership checks are localized to a single 256-bit block.

References
Sailhan & Stehr, "Folding and Unfolding Bloom Filters", IEEE iThings 2012.
Liang, "Blocked Bloom Filters: Speeding Up Point Lookups in Tiger Postgres' Native Columnstore"
Breaking changes
- `BloomFilterProperties.ndv`: `u64` → `Option<u64>` (direct struct construction must be updated)

Test plan
- `i32_column_bloom_filter_fixed_ndv` — roundtrip with both overestimated and underestimated NDV
- `cargo test -p parquet` passes

🤖 Generated with Claude Code