Conversation
… than block.slot + 1
…use conversion to int to avoid overflows
```python
return (
    end_epoch > start_epoch + 1
    or (end_epoch == start_epoch + 1 and start_slot % SLOTS_PER_EPOCH == 0)
)
```
If start_slot = 0 and end_slot = 31 (where end_epoch == start_epoch == 0), do we count it as "includes an entire epoch"? If not, should we describe the function as having an exclusive end_slot?
Another way would be to change the doc to:
"Returns True if the range from start_slot to end_slot (inclusive of both) includes an entire epoch."
In this way, we are not saying that if the range includes an entire epoch, then the function returns True. Only that whenever the function returns True, the range includes an entire epoch.
The above works with both inclusive and exclusive.
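A minimal, self-contained sketch of the check under discussion (assuming SLOTS_PER_EPOCH = 32 and the usual compute_epoch_at_slot; includes_entire_epoch is a hypothetical name used here for illustration):

```python
SLOTS_PER_EPOCH = 32

def compute_epoch_at_slot(slot):
    # Standard helper: the epoch containing a slot.
    return slot // SLOTS_PER_EPOCH

def includes_entire_epoch(start_slot, end_slot):
    # Hypothetical wrapper around the expression in the diff above.
    start_epoch = compute_epoch_at_slot(start_slot)
    end_epoch = compute_epoch_at_slot(end_slot)
    return (
        end_epoch > start_epoch + 1
        or (end_epoch == start_epoch + 1 and start_slot % SLOTS_PER_EPOCH == 0)
    )

# The edge case from the question: slots 0..31 cover all of epoch 0,
# yet the check returns False because end_epoch == start_epoch == 0.
print(includes_entire_epoch(0, 31))  # False
print(includes_entire_epoch(0, 32))  # True
```

This makes the asymmetry concrete: the check is sound ("True implies a full epoch is covered") but not complete in this edge case.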
```python
if is_full_validator_set_for_block_covered(store, block_root):
    return is_one_confirmed(store, block_root)
else:
    block = store.blocks[block_root]
    return (
        is_one_confirmed(store, block_root)
        and is_lmd_confirmed(store, block.parent_root)
    )
```
Is it different from Definition 6 (the LMD-safety condition) in the paper? The paper requires all the ancestors of the block to be is_lmd_confirmed, regardless of whether the full validator set is covered or not.
Yes. It is different.
This does not always yield exactly the same result as in the paper, but it does in most cases, and it is quicker to compute.
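A toy sketch (hypothetical dict-based data model, not the spec's Store) of the difference: the paper's definition recurses through every ancestor, while the optimized rule stops recursing as soon as a block's observed committee weight covers the full validator set:

```python
# Hypothetical toy chain: child -> parent (None marks the root).
blocks = {"C": "B", "B": "A", "A": None}

# Assumed per-block confirmation facts for this illustration only.
one_confirmed = {"A": True, "B": False, "C": True}
full_set_covered = {"A": False, "B": False, "C": True}

def lmd_confirmed_paper(root):
    # Paper-style definition: every ancestor must be one-confirmed.
    if root is None:
        return True
    return one_confirmed[root] and lmd_confirmed_paper(blocks[root])

def lmd_confirmed_optimized(root):
    # Optimized rule: stop once the full validator set is covered.
    if root is None:
        return True
    if full_set_covered[root]:
        return one_confirmed[root]
    return one_confirmed[root] and lmd_confirmed_optimized(blocks[root])

# "C" covers the full validator set, so the optimized rule ignores the
# unconfirmed ancestor "B"; the paper's definition does not.
print(lmd_confirmed_paper("C"))      # False
print(lmd_confirmed_optimized("C"))  # True
```

This is exactly why the two are not always equal, yet the optimized check is cheaper: its recursion can terminate early.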
```python
current_slot = get_current_slot(store)
block = store.blocks[block_root]
parent_block = store.blocks[block.parent_root]
support = int(get_weight(store, block_root))
```
Is this the weight from the fork_choice data, which can be queried from the beacon API?
```python
block_epoch = compute_epoch_at_slot(block.slot)
```

```python
# If `block_epoch` is not either the current or previous epoch,
# then return `store.finalized_checkpoint.root`
```
What is the confirmation rule for the block between finalized checkpoint and justified checkpoint?
Any block that is a descendant of the latest finalized checkpoint is treated in the same way.
```python
support = int(get_weight(store, block_root))
justified_state = store.checkpoint_states[store.justified_checkpoint]
maximum_support = int(
    get_committee_weight_between_slots(
        justified_state, Slot(parent_block.slot + 1), Slot(current_slot - 1)
    )
)
```
Do we assume that we run the protocol at the beginning of each slot, before the block for the current slot is proposed? Otherwise, for a block proposed in the current slot (with its parent proposed in the previous slot), Slot(parent_block.slot + 1) > Slot(current_slot - 1).
The protocol is run at the beginning of each slot, regardless of whether we propose a block in that slot. Also, we never run the protocol on blocks from the current slot.
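A toy arithmetic example (assumed slot numbers, not taken from the spec) of the degenerate range the question points at:

```python
# Assumed values: a block proposed in the current slot whose parent
# sits in the previous slot.
current_slot = 100
parent_block_slot = 99

# The slot range passed to get_committee_weight_between_slots:
start_slot = parent_block_slot + 1  # 100
end_slot = current_slot - 1         # 99

# start_slot > end_slot: the range is empty, which is why the rule
# is only ever run on blocks from earlier slots.
print(start_slot > end_slot)  # True
```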
```python
min(
    ceil_div(total_active_balance * CONFIRMATION_BYZANTINE_THRESHOLD, 100),
    CONFIRMATION_SLASHING_THRESHOLD,
    ffg_support_for_checkpoint
)
```
Is ceil_div(total_active_balance * CONFIRMATION_BYZANTINE_THRESHOLD, 100) always smaller than or equal to CONFIRMATION_SLASHING_THRESHOLD?
In the paper, it is ceil_div((total_active_balance - remaining_ffg_weight) * CONFIRMATION_BYZANTINE_THRESHOLD, 100). I wonder whether it is a typo.
Is ceil_div(total_active_balance * CONFIRMATION_BYZANTINE_THRESHOLD, 100) always smaller than or equal to CONFIRMATION_SLASHING_THRESHOLD
Not necessarily.
Is ceil_div(total_active_balance * CONFIRMATION_BYZANTINE_THRESHOLD, 100) always smaller than or equal to CONFIRMATION_SLASHING_THRESHOLD?
Which paper are you referring to?
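For illustration, a small sketch with assumed constant and balance values (the real thresholds are defined elsewhere in the spec; ceil_div is the usual ceiling division, (a + b - 1) // b):

```python
def ceil_div(a, b):
    # Ceiling division, as used throughout the consensus specs.
    return (a + b - 1) // b

# Assumed values for illustration only; not the spec's actual constants.
CONFIRMATION_BYZANTINE_THRESHOLD = 33   # percent
CONFIRMATION_SLASHING_THRESHOLD = 2**40 # Gwei
total_active_balance = 10**7            # toy balance
ffg_support_for_checkpoint = 4 * 10**6  # toy FFG support

# The three-way min from the diff above: the adversary's assumed weight
# is capped by the Byzantine percentage, the slashing threshold, and the
# observed FFG support for the checkpoint.
adversarial_cap = min(
    ceil_div(total_active_balance * CONFIRMATION_BYZANTINE_THRESHOLD, 100),
    CONFIRMATION_SLASHING_THRESHOLD,
    ffg_support_for_checkpoint,
)
print(adversarial_cap)  # 3300000
```

With these toy numbers the Byzantine-percentage term is the binding one; as discussed above, it is not guaranteed to be the smallest in general.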
However, I think theoretically this attack can still occur under semi-synchronous conditions (for example, in the case of a network partition). It just requires withholding for a longer time.
Yeah, even in synchronous conditions this attack can still be conducted. We have shown such a way in the appendix of our paper eprint. Also, we have designed a solution to address all reorganizations, including this attack, in this paper.
```python
# for an explanation of the formula used below.

# First, calculate the number of committees in the end epoch
num_slots_in_end_epoch = int(compute_slots_since_epoch_start(end_slot) + 1)
```
Suggested change:

```diff
- num_slots_in_end_epoch = int(compute_slots_since_epoch_start(end_slot) + 1)
+ num_slots_in_end_epoch = int(compute_slots_since_epoch_start(end_slot))
```

I think it's an off-by-one. Consider end_slot=63, SLOTS_PER_EPOCH=32; then num_slots_in_end_epoch = 32, which doesn't seem correct.
We have built an elegant and provable solution to all known attacks on Ethereum PoS; the EIP can be found in eip.
I think it's an off-by-one. Consider end_slot=63, SLOTS_PER_EPOCH=32; then num_slots_in_end_epoch = 32, which doesn't seem correct.
This should be correct as, by the function spec, we also want to consider the weights associated with end_slot.
Right, and compute_slots_since_epoch_start(end_slot) should already include end_slot, from my observation.
compute_slots_since_epoch_start(end_slot) evaluates to end_slot - compute_start_slot_at_epoch(compute_epoch_at_slot(end_slot)) = end_slot - epoch_start_slot. What am I missing?
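A self-contained sketch (standard helper definitions from the phase 0 spec, toy SLOTS_PER_EPOCH = 32) showing both readings of the computation:

```python
SLOTS_PER_EPOCH = 32

def compute_epoch_at_slot(slot):
    return slot // SLOTS_PER_EPOCH

def compute_start_slot_at_epoch(epoch):
    return epoch * SLOTS_PER_EPOCH

def compute_slots_since_epoch_start(slot):
    return slot - compute_start_slot_at_epoch(compute_epoch_at_slot(slot))

# end_slot = 63 is the 32nd slot of epoch 1 (slots 32..63).
end_slot = 63
print(compute_slots_since_epoch_start(end_slot))      # 31 (slots strictly before end_slot)
print(compute_slots_since_epoch_start(end_slot) + 1)  # 32 (slots up to and including end_slot)
```

So compute_slots_since_epoch_start(end_slot) counts the slots of the epoch strictly before end_slot; counting end_slot itself (inclusive semantics, as the function spec above intends) requires the + 1.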
Introduction
The objective of this PR is to introduce a Confirmation Rule for the Ethereum protocol.
A confirmation rule is an algorithm run by nodes, outputting whether a certain block is confirmed. When that is the case, the block is guaranteed to never be reorged, under certain assumptions, primarily about network synchrony and about the percentage of honest stake.
Detailed Explanation
For a detailed explanation of the algorithm, see this article.
The algorithm specified in this PR corresponds to Algorithm 5 in the paper.
TODO
Here is a non-exclusive list of TODOs:

- Make sure the changes to setup.py are correct. The current changes allow linting the confirmation rule spec, but they may not be entirely correct.

Last things to do before merging:

- Revert the changes to linter.ini. These changes have been introduced just to speed up the development process by relaxing the requirement on the maximum line length.
- Move fork_choice/confirmation_rule.md to specs/bellatrix and delete the fork_choice folder.