Add proto-array for incremental fork choice #235
Conversation
Replace the O(validators × chain_depth) per-call LMD-GHOST recomputation in update_head with an incremental proto-array that tracks vote deltas and propagates weights in a single backward pass over a flat node array. Proto-array serves update_head only (up to 2 calls/slot). update_safe_target keeps the existing spec implementation since it runs once per slot and uses a different attestation pool (known + new merged). A debug oracle assertion (#[cfg(debug_assertions)]) runs both implementations in parallel and asserts equality, validated by all 27 fork choice spec tests.
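The flat-array layout and single backward pass described above can be sketched in a few lines. This is an illustrative standalone sketch (type and field names are hypothetical, not the PR's actual code); the key invariant is that parents always precede their children in `nodes`, so one reverse iteration folds every child's delta into its parent before the parent is visited.

```rust
struct ProtoNode {
    parent: Option<usize>, // index of the parent node; None for the anchor
    weight: i64,           // accumulated attestation weight of this subtree
}

struct ProtoArray {
    nodes: Vec<ProtoNode>,
}

impl ProtoArray {
    /// Apply per-node vote deltas and propagate them rootward in one
    /// backward pass (children always have higher indices than parents).
    fn apply_score_changes(&mut self, deltas: &mut [i64]) {
        for i in (0..self.nodes.len()).rev() {
            self.nodes[i].weight += deltas[i];
            if let Some(p) = self.nodes[i].parent {
                deltas[p] += deltas[i]; // parent inherits the subtree's delta
            }
        }
    }
}
```

For a chain `root -> a -> b` with one new vote on `b`, every ancestor's weight ends up incremented by one, which is exactly the O(nodes) pass the PR description claims.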
🤖 Kimi Code Review
Security & Correctness Issues
1. Critical: Pruning Race Condition (
🤖 Codex Code Review
Briefly: the incremental proto-array integration and debug oracle comparison in store.rs:99 are strong additions; structure is generally clean. I couldn’t run tests in this environment because
Automated review by OpenAI Codex · custom prompt
🤖 Claude Code Review
Here is the review: PR Review: Add proto-array for incremental fork choice
Overall, the architecture is clean and the debug oracle is an excellent safety net. However, there are two correctness bugs in
Bug 1 — Stale
| Severity | Location | Issue |
|---|---|---|
| Critical | proto_array.rs:175 | Stale `best_child` when current best loses votes to a sibling |
| Critical | store.rs:811 | Node weights not zeroed after prune + `reset()` |
| Minor | proto_array.rs:31 | `slot` field is write-only; `#[allow(dead_code)]` hides the signal |
| Minor | store.rs:633 | `ForkChoiceState` fields should be encapsulated to enforce safe co-mutation |
| Nit | proto_array.rs:90 | Dead guard `parent_idx < deltas.len()` + undocumented slice mutation |
Automated review by Claude (Anthropic) · custom prompt
🤖 Kimi Code Review
Overall Assessment: Well-structured implementation of the proto-array fork choice optimization with good test coverage and a debug oracle for spec compliance. The code is memory-safe and follows Rust idioms. Critical consensus logic appears correct (LMD GHOST with lexicographic tie-breaking, proper weight propagation).
Issues Found: **1. Lost
Automated review by Kimi (Moonshot AI) · kimi-k2.5 · custom prompt
🤖 Codex Code Review
I could not run the Rust test suite here:
Automated review by OpenAI Codex · gpt-5.4 · custom prompt
🤖 Claude Code Review
Now I have enough context for a thorough review. PR 235 – Proto-Array Incremental Fork Choice: Review
Summary
Good architectural direction. The PR correctly introduces
Critical:
Greptile Summary
This PR replaces the O(validators × chain_depth)
Two correctness bugs are present in the core algorithm:
Confidence Score: 2/5
| Filename | Overview |
|---|---|
| crates/blockchain/fork_choice/src/proto_array.rs | New ProtoArray + VoteTracker implementation; contains a stale-weight comparison bug in maybe_update_best_child that can select the wrong head, and a zero-weight node guard is missing. |
| crates/blockchain/src/store.rs | Integrates ForkChoiceState into all store functions; prune+reset pattern after finalization preserves node weights while clearing vote history, causing double-counted weights on the next update. |
| crates/blockchain/fork_choice/src/lib.rs | Trivial change: adds mod proto_array and re-exports ProtoArray + VoteTracker; no issues. |
| crates/blockchain/src/lib.rs | Wires ForkChoiceState into BlockChainServer and threads it through on_tick, on_block, and produce_block_with_signatures; mechanical changes, no logic issues. |
| crates/blockchain/tests/forkchoice_spectests.rs | Updated spec tests to thread ForkChoiceState through correctly; fc is created once per test case and reused across all steps as intended. |
| crates/blockchain/tests/signature_spectests.rs | Creates a fresh ForkChoiceState for each independent single-block test case, which is correct since each test has its own isolated store. |
Sequence Diagram
```mermaid
sequenceDiagram
    participant BCS as BlockChainServer
    participant Store
    participant FCS as ForkChoiceState
    participant PA as ProtoArray
    participant VT as VoteTracker
    note over BCS: Startup
    BCS->>Store: get_live_chain()
    Store-->>BCS: blocks (sorted by slot)
    BCS->>PA: on_block(root, parent, slot) ×N
    BCS->>Store: extract_latest_known_attestations()
    Store-->>BCS: attestations
    BCS->>VT: compute_deltas(attestations, pa)
    VT-->>BCS: deltas[]
    BCS->>PA: apply_score_changes(deltas)
    note over BCS: ForkChoiceState ready
    note over BCS: New block arrives (on_block_core)
    BCS->>Store: state_transition(block)
    Store-->>BCS: post_state (justified, finalized)
    alt finalization advanced
        BCS->>PA: prune(finalized.root)
        BCS->>VT: reset()
    end
    BCS->>PA: on_block(block_root, parent, slot)
    BCS->>FCS: update_head(store, fc, false)
    FCS->>Store: extract_latest_known_attestations()
    Store-->>FCS: attestations
    FCS->>VT: compute_deltas(attestations, pa)
    VT-->>FCS: deltas[]
    FCS->>PA: apply_score_changes(deltas)
    PA-->>FCS: (weights + best_child updated)
    FCS->>PA: find_head(justified_root)
    PA-->>FCS: new_head
    FCS->>Store: update_checkpoints(head_only(new_head))
    note over BCS: Slot tick (interval 0/4)
    BCS->>Store: promote_new_aggregated_payloads()
    BCS->>FCS: update_head(store, fc, log_tree)
```
Prompt To Fix All With AI
This is a comment left during a code review.
Path: crates/blockchain/fork_choice/src/proto_array.rs
Line: 155-171
Comment:
**Stale weight comparison in `maybe_update_best_child` can select incorrect head**
`maybe_update_best_child` compares the challenger child's *updated* weight against the current `best_child`'s *stale* (pre-update) weight. Because `apply_score_changes` iterates backward (highest index first), a sibling with a **lower** index (the current `best_child`) has not yet had its delta applied at the time the comparison is made.
Concrete failure scenario:
- `root(0) -> a(idx=1, root=h(170)), b(idx=2, root=h(100))` — both children of root
- Initial weights: `a=10`, `b=5`, `best_child[root]=a`
- Deltas this round: `a=-8`, `b=+5`
- **Expected after update**: `a.weight=2`, `b.weight=10` → `b` should become `best_child`
Trace through `apply_score_changes`:
1. Process `b` (idx=2): `b.weight = 10`. Call `maybe_update_best_child(root, b)`. Compares `(10, h(100))` vs `a`'s **stale** `(10, h(170))`. Since `(10, h(100)) < (10, h(170))`, `a` wins. `best_child[root]` stays `a`.
2. Process `a` (idx=1): `a.weight = 2`. Call `maybe_update_best_child(root, a)`. `best_child` is still `a`, so it compares `a` against **itself** — a no-op.
Final result: `best_child[root] = a` (weight 2) even though `b.weight = 10`. The head is wrong.
The root cause is that when the lower-indexed sibling (`a`) is compared in step 2, it's compared against itself rather than against `b`. The fix requires either comparing all children after all weights are updated, or using a second pass to recompute `best_child` pointers.
None of the existing tests catch this because they use `h(1) < h(2)` (byte-ordered), so the higher-rooted node always wins. A test with the same weights but reversed root hash ordering would expose it.
How can I resolve this? If you propose a fix, please make it concise.
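One concise shape for the fix is a two-pass `apply_score_changes`: apply all deltas first, then rebuild every `best_child` pointer from the final weights, so no comparison ever sees a stale sibling or compares a node against itself. The following is a standalone sketch under simplified assumptions (`Node`, a `u8` root instead of a 32-byte hash), not the PR's actual code:

```rust
struct Node {
    root: u8, // simplified root; the real code compares 32-byte hashes
    parent: Option<usize>,
    weight: i64,
    best_child: Option<usize>,
}

fn apply_score_changes(nodes: &mut [Node], deltas: &mut [i64]) {
    // Pass 1: weights only (backward; parents precede children in the array).
    for i in (0..nodes.len()).rev() {
        nodes[i].weight += deltas[i];
        if let Some(p) = nodes[i].parent {
            deltas[p] += deltas[i];
        }
    }
    // Pass 2: rebuild best_child from scratch using the final weights.
    for n in nodes.iter_mut() {
        n.best_child = None;
    }
    for i in 0..nodes.len() {
        let Some(p) = nodes[i].parent else { continue };
        let challenger = (nodes[i].weight, nodes[i].root);
        let wins = match nodes[p].best_child {
            None => true,
            Some(b) => challenger > (nodes[b].weight, nodes[b].root),
        };
        if wins {
            nodes[p].best_child = Some(i);
        }
    }
}
```

Re-running the failure scenario above (`a` = weight 10 / root 170, `b` = weight 5 / root 100, deltas `a=-8`, `b=+5`) now selects `b`, because the comparison uses the updated weights 2 vs 10. The cost is a second O(nodes) pass, which keeps the overall complexity unchanged.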
---
This is a comment left during a code review.
Path: crates/blockchain/src/store.rs
Line: 667-672
Comment:
**Pruning preserves node weights but resets vote history, causing weight double-counting**
After `fc.proto_array.prune(finalized.root)`, the surviving nodes **retain their accumulated subtree weights** (via `node.clone()` in `prune`). Then `fc.vote_tracker.reset()` clears all vote history. On the very next `update_head` call, `compute_deltas` treats every existing attestation as a **brand-new vote** (no `old_root` to subtract), adding +1 per validator for their current target. This inflates all surviving node weights by the current vote count on top of the already-accumulated historical weights.
Example:
- After finalization, `C1.weight = 3` (3 validators voted), `C2.weight = 1`
- prune + reset
- Next `compute_deltas` with the same 3+1 validators: `C1` gets delta +3, `C2` gets +1
- `apply_score_changes`: `C1.weight = 6`, `C2.weight = 2`
If the distribution is uniform, head selection is unaffected (relative ordering preserved). But if validators subsequently shift votes, the inflated baseline distorts relative weights:
- Next round: 1 validator moves from `C1` to `C2`
- Proto: `C1.weight = 5`, `C2.weight = 3` → `C1` wins
- Spec: `C1` has 2 votes, `C2` has 2 votes → **tied, tiebreak by root**
Depending on root hashes, proto-array and spec may disagree, triggering the debug oracle assertion in `debug_assertions` builds. In release builds the corruption silently persists, compounding on each finalization.
The fix is to also reset each surviving node's weight to 0 after pruning (or rebuild from `from_store`), so the weight reflects only post-prune votes.
How can I resolve this? If you propose a fix, please make it concise.
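A minimal sketch of the suggested fix, with a simplified stand-in `Node` type (not the PR's actual code): after pruning, zero every surviving weight, because the vote-tracker reset means `compute_deltas` will re-add every current vote as if it were new.

```rust
struct Node {
    weight: i64,
}

/// After prune + vote-tracker reset, every surviving vote is re-counted
/// from scratch, so surviving weights must start from zero.
fn zero_weights_after_prune(nodes: &mut [Node]) {
    for n in nodes.iter_mut() {
        n.weight = 0; // weights are rebuilt from post-prune votes only
    }
}

/// Simplified delta application (no parent propagation, for illustration).
fn apply_deltas(nodes: &mut [Node], deltas: &[i64]) {
    for (n, d) in nodes.iter_mut().zip(deltas) {
        n.weight += d;
    }
}
```

Re-running the example above: `C1=3`/`C2=1` survive pruning, weights are zeroed, and re-counting the same 3+1 votes yields 3 and 1 again instead of the inflated 6 and 2.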
---
This is a comment left during a code review.
Path: crates/blockchain/fork_choice/src/proto_array.rs
Line: 59-77
Comment:
**Zero-weight nodes can be set as `best_child`, pulling head past the voted block**
`apply_score_changes` calls `maybe_update_best_child(parent, i)` for **every** node in the backward pass, including nodes whose net weight remains 0. Because `maybe_update_best_child` unconditionally sets `best_child = Some(child_idx)` when the parent has no prior `best_child` (`None => true`), a zero-weight leaf can become the `best_child` of an ancestor even when the ancestor itself has positive votes.
Minimal reproducer:
```rust
// chain: root(0) -> a(1) -> b(2)
// validator votes for a (the middle node), not the leaf b
pa.on_block(root, ZERO, 0);
pa.on_block(a, root, 1);
pa.on_block(b, a, 2); // b has no votes
let mut att = HashMap::new();
att.insert(0, make_attestation(a, 1)); // vote for a, not b
let mut deltas = vt.compute_deltas(&att, &pa);
pa.apply_score_changes(&mut deltas);
// deltas = [0, +1, 0]; when processing b (idx=2, weight=0):
// a has no best_child → best_child[a] = b (weight 0!)
// find_head(root) → root → a → b (WRONG; should stop at a)
assert_eq!(pa.find_head(root), a); // fails: returns b
```
In practice Ethereum validators always vote for the chain tip, so this is unlikely to trigger with real attestations. It would however be exposed if the proto-array is initialised via `from_store` with stale attestations that don't reach the current tip, or during startup with a partially-synced chain. Adding a guard `if child_weight > 0` (or `>= 0` with a more refined viability check) in `maybe_update_best_child` would prevent zero-weight nodes from becoming `best_child`.
How can I resolve this? If you propose a fix, please make it concise.
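A sketch of the suggested guard, as a standalone function over a simplified `Node` type (names mirror the snippet in the comment, but this is an illustration, not the PR's code): bail out before a non-positive-weight child can ever be installed as `best_child`.

```rust
struct Node {
    root: u8, // simplified root stand-in
    weight: i64,
    best_child: Option<usize>,
}

fn maybe_update_best_child(nodes: &mut [Node], parent_idx: usize, child_idx: usize) {
    let child_weight = nodes[child_idx].weight;
    if child_weight <= 0 {
        return; // zero-weight nodes never become best_child
    }
    let child_root = nodes[child_idx].root;
    let wins = match nodes[parent_idx].best_child {
        None => true,
        Some(b) => (child_weight, child_root) > (nodes[b].weight, nodes[b].root),
    };
    if wins {
        nodes[parent_idx].best_child = Some(child_idx);
    }
}
```

With this guard the reproducer's zero-weight leaf `b` is rejected when `a` has no prior `best_child`, so `find_head` stops at the voted block `a` instead of walking past it.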
---
This is a comment left during a code review.
Path: crates/blockchain/src/store.rs
Line: 88-114
Comment:
**Debug oracle re-uses `attestations` after it was consumed by `compute_deltas`**
`extract_latest_known_attestations()` is called once (line 89) and the result is passed to both `compute_deltas` (line 95) and the debug oracle's `compute_lmd_ghost_head` (line 103–108). `compute_deltas` takes `&HashMap<…>` so the value is not moved, but both calls **see the same snapshot**. This is intentional and correct.
However, the oracle also calls `store.get_live_chain()` (line 102) **inside** the `debug_assertions` block, meaning it is only evaluated in debug builds. If any future refactor moves the `get_live_chain()` call to before `store.update_checkpoints(ForkCheckpoints::head_only(new_head))` at line 122, the live chain would include the new head set by the proto-array result, creating a circular dependency. Consider adding a comment that the oracle must use the chain state **before** `update_checkpoints` to remain valid.
How can I resolve this? If you propose a fix, please make it concise.
Last reviewed commit: "Merge branch 'main' ..."
```rust
fn maybe_update_best_child(&mut self, parent_idx: usize, child_idx: usize) {
    let child_weight = self.nodes[child_idx].weight;
    let child_root = self.nodes[child_idx].root;

    let dominated = match self.nodes[parent_idx].best_child {
        None => true,
        Some(current_best) => {
            let best_weight = self.nodes[current_best].weight;
            let best_root = self.nodes[current_best].root;
            (child_weight, child_root) > (best_weight, best_root)
        }
    };

    if dominated {
        self.nodes[parent_idx].best_child = Some(child_idx);
    }
}
```
**Stale weight comparison in `maybe_update_best_child` can select incorrect head**
maybe_update_best_child compares the challenger child's updated weight against the current best_child's stale (pre-update) weight. Because apply_score_changes iterates backward (highest index first), a sibling with a lower index (the current best_child) has not yet had its delta applied at the time the comparison is made.
Concrete failure scenario:
- `root(0) -> a(idx=1, root=h(170)), b(idx=2, root=h(100))` — both children of root
- Initial weights: `a=10`, `b=5`, `best_child[root]=a`
- Deltas this round: `a=-8`, `b=+5`
- **Expected after update**: `a.weight=2`, `b.weight=10` → `b` should become `best_child`

Trace through `apply_score_changes`:
1. Process `b` (idx=2): `b.weight = 10`. Call `maybe_update_best_child(root, b)`. Compares `(10, h(100))` vs `a`'s **stale** `(10, h(170))`. Since `(10, h(100)) < (10, h(170))`, `a` wins. `best_child[root]` stays `a`.
2. Process `a` (idx=1): `a.weight = 2`. Call `maybe_update_best_child(root, a)`. `best_child` is still `a`, so it compares `a` against **itself** — a no-op.

Final result: `best_child[root] = a` (weight 2) even though `b.weight = 10`. The head is wrong.
The root cause is that when the lower-indexed sibling (`a`) is compared in step 2, it's compared against itself rather than against `b`. The fix requires either comparing all children after all weights are updated, or using a second pass to recompute `best_child` pointers.
None of the existing tests catch this because they use `h(1) < h(2)` (byte-ordered), so the higher-rooted node always wins. A test with the same weights but reversed root hash ordering would expose it.
```rust
// Prune proto-array on finalization advance and reset vote tracker
// (must happen after store.update_checkpoints which prunes storage)
if let Some(finalized) = finalized {
    fc.proto_array.prune(finalized.root);
    fc.vote_tracker.reset();
}
```
**Pruning preserves node weights but resets vote history, causing weight double-counting**
After fc.proto_array.prune(finalized.root), the surviving nodes retain their accumulated subtree weights (via node.clone() in prune). Then fc.vote_tracker.reset() clears all vote history. On the very next update_head call, compute_deltas treats every existing attestation as a brand-new vote (no old_root to subtract), adding +1 per validator for their current target. This inflates all surviving node weights by the current vote count on top of the already-accumulated historical weights.
Example:
- After finalization, `C1.weight = 3` (3 validators voted), `C2.weight = 1`
- prune + reset
- Next `compute_deltas` with the same 3+1 validators: `C1` gets delta +3, `C2` gets +1
- `apply_score_changes`: `C1.weight = 6`, `C2.weight = 2`

If the distribution is uniform, head selection is unaffected (relative ordering preserved). But if validators subsequently shift votes, the inflated baseline distorts relative weights:
- Next round: 1 validator moves from `C1` to `C2`
- Proto: `C1.weight = 5`, `C2.weight = 3` → `C1` wins
- Spec: `C1` has 2 votes, `C2` has 2 votes → **tied, tiebreak by root**

Depending on root hashes, proto-array and spec may disagree, triggering the debug oracle assertion in `debug_assertions` builds. In release builds the corruption silently persists, compounding on each finalization.
The fix is to also reset each surviving node's weight to 0 after pruning (or rebuild from `from_store`), so the weight reflects only post-prune votes.
```rust
pub fn apply_score_changes(&mut self, deltas: &mut [i64]) {
    for i in (0..self.nodes.len()).rev() {
        if i < deltas.len() {
            self.nodes[i].weight += deltas[i];
        }

        let Some(parent_idx) = self.nodes[i].parent else {
            continue;
        };

        // Propagate this node's delta to parent
        if i < deltas.len() && parent_idx < deltas.len() {
            deltas[parent_idx] += deltas[i];
        }

        // Update best_child: pick the child with highest weight, tiebreak by root hash
        self.maybe_update_best_child(parent_idx, i);
    }
}
```
**Zero-weight nodes can be set as `best_child`, pulling head past the voted block**
apply_score_changes calls maybe_update_best_child(parent, i) for every node in the backward pass, including nodes whose net weight remains 0. Because maybe_update_best_child unconditionally sets best_child = Some(child_idx) when the parent has no prior best_child (None => true), a zero-weight leaf can become the best_child of an ancestor even when the ancestor itself has positive votes.
Minimal reproducer:
```rust
// chain: root(0) -> a(1) -> b(2)
// validator votes for a (the middle node), not the leaf b
pa.on_block(root, ZERO, 0);
pa.on_block(a, root, 1);
pa.on_block(b, a, 2); // b has no votes
let mut att = HashMap::new();
att.insert(0, make_attestation(a, 1)); // vote for a, not b
let mut deltas = vt.compute_deltas(&att, &pa);
pa.apply_score_changes(&mut deltas);
// deltas = [0, +1, 0]; when processing b (idx=2, weight=0):
// a has no best_child → best_child[a] = b (weight 0!)
// find_head(root) → root → a → b (WRONG; should stop at a)
assert_eq!(pa.find_head(root), a); // fails: returns b
```

In practice Ethereum validators always vote for the chain tip, so this is unlikely to trigger with real attestations. It would however be exposed if the proto-array is initialised via `from_store` with stale attestations that don't reach the current tip, or during startup with a partially-synced chain. Adding a guard `if child_weight > 0` (or `>= 0` with a more refined viability check) in `maybe_update_best_child` would prevent zero-weight nodes from becoming `best_child`.
```rust
fn update_head(store: &mut Store, fc: &mut ForkChoiceState, log_tree: bool) {
    let attestations = store.extract_latest_known_attestations();
    let justified_root = store.latest_justified().root;

    // Incremental fork choice via proto-array
    let mut deltas = fc
        .vote_tracker
        .compute_deltas(&attestations, &fc.proto_array);
    fc.proto_array.apply_score_changes(&mut deltas);
    let new_head = fc.proto_array.find_head(justified_root);

    // Debug oracle: verify proto-array matches spec implementation
    #[cfg(debug_assertions)]
    {
        let blocks = store.get_live_chain();
        let (spec_head, _) = ethlambda_fork_choice::compute_lmd_ghost_head(
            justified_root,
            &blocks,
            &attestations,
            0,
        );
        assert_eq!(
            new_head, spec_head,
            "proto-array diverged from spec: proto={:?} spec={:?}",
            new_head, spec_head
        );
    }
```
**Debug oracle re-uses `attestations` after it was consumed by `compute_deltas`**
extract_latest_known_attestations() is called once (line 89) and the result is passed to both compute_deltas (line 95) and the debug oracle's compute_lmd_ghost_head (line 103–108). compute_deltas takes &HashMap<…> so the value is not moved, but both calls see the same snapshot. This is intentional and correct.
However, the oracle also calls store.get_live_chain() (line 102) inside the debug_assertions block, meaning it is only evaluated in debug builds. If any future refactor moves the get_live_chain() call to before store.update_checkpoints(ForkCheckpoints::head_only(new_head)) at line 122, the live chain would include the new head set by the proto-array result, creating a circular dependency. Consider adding a comment that the oracle must use the chain state before update_checkpoints to remain valid.
```rust
let mut proto_array = ProtoArray::new();

// Load all blocks from live chain and insert in topological order (by slot)
let blocks = store.get_live_chain();
```
We should remove the livechain
```diff
 }
 3 => {
-    // Update safe target for validators
+    // Update safe target for validators (uses spec implementation, not proto-array)
```
Can we use proto-array here too?
```rust
let mut deltas = fc
    .vote_tracker
    .compute_deltas(&attestations, &fc.proto_array);
fc.proto_array.apply_score_changes(&mut deltas);
```
Should these be a single function?
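Folding the pair into one method would address the reviewer's question: callers could no longer run `compute_deltas` and `apply_score_changes` out of order or forget one half. A standalone sketch under simplified assumptions (all types here are stand-ins for the PR's `ProtoArray`/`VoteTracker`, and `index_of` is a hypothetical linear lookup):

```rust
use std::collections::HashMap;

type Root = u64;

struct Node {
    root: Root,
    parent: Option<usize>,
    weight: i64,
}

#[derive(Default)]
struct ForkChoiceState {
    nodes: Vec<Node>,                 // parents precede children
    latest_votes: HashMap<u64, Root>, // validator index -> last counted vote
}

impl ForkChoiceState {
    fn index_of(&self, root: Root) -> Option<usize> {
        self.nodes.iter().position(|n| n.root == root)
    }

    /// Single entry point: diff the new votes against the tracked ones,
    /// then apply and propagate the resulting deltas in one backward pass.
    fn process_votes(&mut self, attestations: &HashMap<u64, Root>) {
        let mut deltas = vec![0i64; self.nodes.len()];
        for (&validator, &new_root) in attestations {
            if let Some(&old_root) = self.latest_votes.get(&validator) {
                if old_root == new_root {
                    continue; // unchanged vote contributes no delta
                }
                if let Some(i) = self.index_of(old_root) {
                    deltas[i] -= 1;
                }
            }
            if let Some(i) = self.index_of(new_root) {
                deltas[i] += 1;
            }
            self.latest_votes.insert(validator, new_root);
        }
        for i in (0..self.nodes.len()).rev() {
            self.nodes[i].weight += deltas[i];
            if let Some(p) = self.nodes[i].parent {
                deltas[p] += deltas[i];
            }
        }
    }
}
```

The trade-off is that a fused method can no longer interleave other work between the two steps (e.g. the debug oracle capturing state), so the PR may prefer to keep them separate but private behind one public entry point.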
Summary
- `ProtoArray` + `VoteTracker` in the fork_choice crate for incremental LMD-GHOST head computation
- `update_head` with delta-based weight propagation: O(changed_votes) + O(nodes) single backward pass + O(depth) head lookup
- `update_safe_target` unchanged — runs once/slot with a different attestation pool, not worth optimizing in this phase
- Debug oracle (`#[cfg(debug_assertions)]`) runs both implementations in parallel and asserts equality

Design
- `ForkChoiceState` struct wraps `ProtoArray` + `VoteTracker`, owned by `BlockChainServer`
- Initialized via `ForkChoiceState::from_store()`
- `on_block_core` registers new blocks and prunes on finalization
- `update_head` uses proto-array; falls back to spec impl for tree visualization (`log_tree=true`, 1x/slot)

Test plan
- `make fmt` — clean
- `make lint` — clean
- All 27 fork choice spec tests pass (debug oracle asserts equality on every `update_head` call)