Fix data column rpc request #8247
Conversation
Some required checks have failed. Could you please take a look @eserilev? 🙏

michaelsproul left a comment
Looks good, just one small observation.
I might wait for a 2nd review before merging, seeing as this is potentially quite sensitive.
```rust
harness.chain.column_data_availability_boundary(),
Some(fulu_fork_epoch)
```
Just noting that this is in the future from the node's PoV. I think the impact of this is minimal; it may make us attempt to respond to DataColumnsByRange requests from peers before the Fulu epoch is reached.
This doesn't seem to be a huge issue, as I think the block root iterator here will just be empty:
lighthouse/beacon_node/network/src/network_beacon_processor/rpc_methods.rs, lines 1252 to 1253 in 358f5fc:

```rust
let block_roots =
    self.get_block_roots_for_slot_range(req.start_slot, req.count, "DataColumnsByRange")?;
```
We have the same behaviour for blobs by range as well (when Deneb is scheduled but hasn't happened yet). I think this is OK: it's very cheap for us to respond to this request, so it doesn't pose a DoS risk.
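The "empty iterator" behaviour described above can be sketched as follows. This is a hedged, simplified illustration; the function name and types are placeholders, not Lighthouse's actual API:

```rust
// Illustrative sketch: a range lookup over slots the node actually has
// blocks for. If a peer requests a range entirely before any known block
// (e.g. before the Fulu epoch is reached), the result is simply empty.
fn block_roots_for_slot_range(known_block_slots: &[u64], start_slot: u64, count: u64) -> Vec<u64> {
    known_block_slots
        .iter()
        .copied()
        .filter(|&slot| slot >= start_slot && slot < start_slot + count)
        .collect()
}

fn main() {
    // The node only has blocks from slot 100 onward, but a peer requests
    // slots 0..32: the iterator is empty, so the response is cheap
    // (no columns are loaded or sent).
    let known = [100, 101, 102];
    assert!(block_roots_for_slot_range(&known, 0, 32).is_empty());
    assert_eq!(block_roots_for_slot_range(&known, 100, 2), vec![100, 101]);
}
```

An empty result means the node just closes the stream without serving anything, which is why responding costs almost nothing.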
Squashed commit of the following:

- f2df97e (Michael Sproul, Tue Oct 21 15:13:12 2025 +1100): Remove commented out code
- 358f5fc (Eitan Seri-Levi, Mon Oct 20 15:24:09 2025 -0700): Fix
- 346bbaa (Eitan Seri-Levi, Mon Oct 20 15:23:58 2025 -0700): Fix
- b432cb3 (Eitan Seri-Levi, Mon Oct 20 13:59:15 2025 -0700): Fix
- b207727 (merge of 8d06310 and 092aaae, Eitan Seri-Levi, Mon Oct 20 10:49:12 2025 -0700): Merge branch 'unstable' of https://github.com/sigp/lighthouse into fix-data-column-rpc
- 8d06310 (Eitan Seri-Levi, Mon Oct 20 10:49:10 2025 -0700): Fix RPC
```diff
-if request_start_slot < oldest_data_column_slot {
+if request_start_slot < earliest_custodied_data_column_slot {
```
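The renamed bound can be sketched as a simple validation step. This is an illustrative sketch with simplified types, not the actual Lighthouse signature:

```rust
// Illustrative sketch: reject DataColumnsByRange requests that start
// before the earliest slot for which we custody data columns at the
// current custody group count (cgc).
fn validate_request_start(
    request_start_slot: u64,
    earliest_custodied_data_column_slot: u64,
) -> Result<(), String> {
    if request_start_slot < earliest_custodied_data_column_slot {
        return Err(format!(
            "start slot {request_start_slot} is before earliest custodied slot {earliest_custodied_data_column_slot}"
        ));
    }
    Ok(())
}

fn main() {
    // A request starting before the custodied bound is rejected...
    assert!(validate_request_start(5, 10).is_err());
    // ...while one starting at or after the bound is allowed.
    assert!(validate_request_start(10, 10).is_ok());
}
```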
I think `earliest_custodied_data_column_slot` gives you the slot from which columns advertised at the current cgc are all available, right?
It may not necessarily mean the node doesn't have a particular column. Would it be more suitable to use the custody context to check for column availability here?
Ah, if we're ok with not serving columns while backfilling columns, that's ok too, since peers are unlikely to request from us given our `earliest_data_column_slot`.
I was under the assumption that a peer will be requesting columns from us based on our node's Status v2 response. During a cgc change, our Status v2 will only return the earliest slot at which we've fulfilled our custody requirements for the current cgc, so peers shouldn't request anything before the `earliest_data_column_slot` after a cgc change.
On a slightly separate note, if we didn't have custody backfill I could be convinced to cross-reference the custody context to figure out whether we could actually serve a column or not. But because of custody backfill I think we're ok with just checking the `earliest_data_column_slot`. It keeps things simple here, with the caveat that we're not serving columns we may potentially have for some limited period of time.
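The trade-off above can be sketched as follows. All names here are hypothetical illustrations of the two policies being compared, not Lighthouse code:

```rust
// Hedged sketch: during custody backfill the node may actually hold
// columns older than the advertised earliest_data_column_slot, but the
// simple check declines to serve them for a limited period.
struct NodeState {
    earliest_data_column_slot: u64,    // bound advertised via Status v2
    backfilled_column_slots: Vec<u64>, // column slots we actually hold
}

impl NodeState {
    // Simple policy: only serve slots at or after the advertised bound.
    fn will_serve(&self, slot: u64) -> bool {
        slot >= self.earliest_data_column_slot
    }

    // A custody-context lookup could also find columns below that bound.
    fn actually_has(&self, slot: u64) -> bool {
        self.backfilled_column_slots.contains(&slot)
    }
}

fn main() {
    let state = NodeState {
        earliest_data_column_slot: 100,
        backfilled_column_slots: vec![50, 101],
    };
    // Slot 50 was backfilled, but the simple policy still declines it.
    assert!(!state.will_serve(50));
    assert!(state.actually_has(50));
    // At or above the advertised bound, both policies agree.
    assert!(state.will_serve(101));
}
```

Since peers derive their requests from the advertised Status v2 bound, the case where the two policies disagree should rarely be hit in practice.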
Updates look good. Merging!
Open PRs to include for the release:

- #7907
- #8247
- #8251
- #8253
- #8254
- #8265
- #8269
- #8266

Co-Authored-By: Jimmy Chen <[email protected]>
Issue Addressed
Fixes an issue mentioned in this comment regarding data column RPC requests:
#6572 (comment)