
Conversation

@eserilev (Member)

Issue Addressed

Fixes an issue mentioned in this comment regarding data column RPC requests:
#6572 (comment)

@eserilev eserilev requested a review from jxs as a code owner October 20, 2025 21:00
@eserilev eserilev added the das (Data Availability Sampling), fulu (Required for the upcoming Fulu hard fork), v8.0.0 (Q4 2025 Fusaka Mainnet Release), and ready-for-review (The code is ready for review) labels on Oct 20, 2025
@mergify mergify bot commented Oct 20, 2025

Some required checks have failed. Could you please take a look @eserilev? 🙏

@mergify mergify bot added the waiting-on-author (The reviewer has suggested changes and awaits their implementation) label and removed the ready-for-review (The code is ready for review) label on Oct 20, 2025
@eserilev eserilev added the ready-for-review (The code is ready for review) label and removed the waiting-on-author (The reviewer has suggested changes and awaits their implementation) label on Oct 20, 2025
@michaelsproul michaelsproul (Member) left a comment

Looks good, just one small observation.

I might wait for a 2nd review before merging, seeing as this is potentially quite sensitive.

Comment on lines +4390 to +4391:

```rust
harness.chain.column_data_availability_boundary(),
Some(fulu_fork_epoch)
```

@michaelsproul (Member):
Just noting that this is in the future from the node's PoV. I think the impact of this is minimal; it perhaps makes us attempt to respond to DataColumnsByRange requests from peers before the Fulu epoch is reached?

This doesn't seem to be a huge issue, as I think the block root iterator here will just be empty:

```rust
let block_roots =
    self.get_block_roots_for_slot_range(req.start_slot, req.count, "DataColumnsByRange")?;
```

We have the same behaviour for blobs by range as well (when Deneb is scheduled but hasn't happened yet). I think this is OK: it's very cheap for us to respond to this request, so it doesn't pose a DoS risk.
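To make that concrete, here is a minimal, self-contained sketch of the behaviour being described (the types and helper names are illustrative assumptions, not Lighthouse's actual handler): a by-range request that falls entirely before the data the node serves simply yields an empty result, which is cheap to answer.

```rust
// Sketch of the "empty iterator" behaviour discussed above.
// Illustrative names only; not Lighthouse's real API.

struct DataColumnsByRangeRequest {
    start_slot: u64,
    count: u64,
}

struct Node {
    /// First slot for which this node has data columns (e.g. the Fulu fork slot).
    earliest_data_column_slot: u64,
    /// Highest slot the node has imported.
    head_slot: u64,
}

impl Node {
    /// Returns the slots in the requested range that the node can actually serve.
    /// If the whole range is before the node's earliest data column slot, this is
    /// simply empty: a cheap response, not an error.
    fn serveable_slots(&self, req: &DataColumnsByRangeRequest) -> Vec<u64> {
        let start = req.start_slot.max(self.earliest_data_column_slot);
        let end = (req.start_slot + req.count).min(self.head_slot + 1);
        (start..end).collect()
    }
}

fn main() {
    let node = Node { earliest_data_column_slot: 100, head_slot: 200 };

    // Request entirely before the node's data column range: empty response.
    let req = DataColumnsByRangeRequest { start_slot: 10, count: 20 };
    assert!(node.serveable_slots(&req).is_empty());

    // Request overlapping the served range: only the serveable part is returned.
    let req = DataColumnsByRangeRequest { start_slot: 95, count: 10 };
    assert_eq!(node.serveable_slots(&req), (100u64..105).collect::<Vec<_>>());
}
```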

jimmygchen added a commit that referenced this pull request Oct 21, 2025
Squashed commit of the following:

commit f2df97e
Author: Michael Sproul <[email protected]>
Date:   Tue Oct 21 15:13:12 2025 +1100

    Remove commented out code

commit 358f5fc
Author: Eitan Seri-Levi <[email protected]>
Date:   Mon Oct 20 15:24:09 2025 -0700

    Fix

commit 346bbaa
Author: Eitan Seri-Levi <[email protected]>
Date:   Mon Oct 20 15:23:58 2025 -0700

    Fix

commit b432cb3
Author: Eitan Seri-Levi <[email protected]>
Date:   Mon Oct 20 13:59:15 2025 -0700

    Fix

commit b207727
Merge: 8d06310 092aaae
Author: Eitan Seri-Levi <[email protected]>
Date:   Mon Oct 20 10:49:12 2025 -0700

    Merge branch 'unstable' of https://github.com/sigp/lighthouse into fix-data-column-rpc

commit 8d06310
Author: Eitan Seri-Levi <[email protected]>
Date:   Mon Oct 20 10:49:10 2025 -0700

    Fix RPC
@jimmygchen jimmygchen mentioned this pull request Oct 21, 2025
```diff
 };

-if request_start_slot < oldest_data_column_slot {
+if request_start_slot < earliest_custodied_data_column_slot {
```
@jimmygchen jimmygchen (Member) commented Oct 21, 2025

I think earliest_custodied_data_column_slot gives you the slot from which all columns advertised at the current cgc are available, right?

It may not necessarily mean the node doesn't have a particular column. Would it be more suitable to use the custody context to check for column availability here?
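For illustration only (the names and the per-column map below are hypothetical, not Lighthouse's custody-context API), a per-column availability check along the lines suggested above might look something like this:

```rust
use std::collections::HashMap;

// Hypothetical model: for each column index, the earliest slot from which the
// node custodies that column. A cgc increase adds new columns whose earliest
// slot is later, until custody backfill catches up.
struct CustodyContext {
    earliest_slot_by_column: HashMap<u64, u64>,
}

impl CustodyContext {
    /// Per-column check: does the node custody `column_index` at `slot`?
    fn has_column_at(&self, column_index: u64, slot: u64) -> bool {
        self.earliest_slot_by_column
            .get(&column_index)
            .map_or(false, |&earliest| slot >= earliest)
    }
}

fn main() {
    let ctx = CustodyContext {
        earliest_slot_by_column: HashMap::from([(0, 100), (1, 500)]),
    };
    // Column 0 has been custodied since slot 100; column 1 only since slot 500
    // (e.g. it was added by a recent cgc increase).
    assert!(ctx.has_column_at(0, 200));
    assert!(!ctx.has_column_at(1, 200));
}
```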

Member:

Ah, if we're OK with not serving columns while backfilling columns, that's OK too, since peers are unlikely to request from us given our earliest_data_column_slot.

@eserilev (Member, Author):

I was under the assumption that a peer will be requesting columns from us based on our node's Status v2 response. During a cgc change, our Status v2 will only return the earliest slot at which we've fulfilled our custody requirements for the current cgc. So they shouldn't request anything before the earliest_data_column_slot after a cgc change.

On a slightly separate note, if we didn't have custody backfill I could be convinced to cross-reference the custody context to figure out whether we could actually serve a column or not. But because of custody backfill, I think we should be OK with just checking the earliest_data_column_slot. It keeps things simple here, with the caveat that we're not serving columns that we may potentially have for some limited period of time.
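A minimal sketch of the check being discussed, under the assumptions described above (illustrative names and enum, not Lighthouse's exact code):

```rust
// Sketch of the by-range guard discussed in this thread; names are assumptions.

#[derive(Debug, PartialEq)]
enum RpcOutcome {
    Serve,
    OutOfRange,
}

/// `earliest_data_column_slot` is the slot advertised via Status (v2): the point
/// from which the node has fulfilled its custody requirements for its current cgc.
fn check_data_columns_by_range(
    request_start_slot: u64,
    earliest_data_column_slot: u64,
) -> RpcOutcome {
    if request_start_slot < earliest_data_column_slot {
        // The peer asked for columns older than what we advertise. During custody
        // backfill we may actually hold some of these, but the simple rule is to
        // decline rather than cross-reference the custody context per column.
        RpcOutcome::OutOfRange
    } else {
        RpcOutcome::Serve
    }
}

fn main() {
    assert_eq!(check_data_columns_by_range(50, 100), RpcOutcome::OutOfRange);
    assert_eq!(check_data_columns_by_range(150, 100), RpcOutcome::Serve);
}
```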

@michaelsproul michaelsproul added the waiting-on-author (The reviewer has suggested changes and awaits their implementation) label and removed the ready-for-review (The code is ready for review) label on Oct 21, 2025
@eserilev eserilev added the ready-for-review (The code is ready for review) label and removed the waiting-on-author (The reviewer has suggested changes and awaits their implementation) label on Oct 21, 2025
@michaelsproul michaelsproul added the ready-for-merge (This PR is ready to merge) label and removed the ready-for-review (The code is ready for review) label on Oct 21, 2025
@michaelsproul michaelsproul (Member)

Updates look good. Merging!

@mergify mergify bot added the queued label Oct 21, 2025
mergify bot added a commit that referenced this pull request Oct 21, 2025
@mergify mergify bot merged commit 46dde9a into sigp:unstable Oct 21, 2025
37 checks passed
@mergify mergify bot removed the queued label Oct 21, 2025
mergify bot pushed a commit that referenced this pull request Oct 23, 2025
Open PRs to include for the release
- #7907
- #8247
- #8251
- #8253
- #8254
- #8265
- #8269
- #8266


  


Co-Authored-By: Jimmy Chen <[email protected]>