Note: All code used in this notebook is contained in the notebooks/beaconrunner2049 folder of the Beacon Runner repo, and does not use the current PoS specs. Most of the content here remains applicable.
In this notebook, we extend the Beacon Runner to create network-level events. This is a second step towards an abstracted (but realistic) simulation environment to study economic properties of validation of Proof-of-Stake (PoS) Ethereum, also called the "consensus layer". We will make use of our p2p network abstraction to simulate a network partition, where for some reason (an adversary, a Georgian granny, World War III) the network is split in two with both sides unable to communicate with each other.
To recap what happened in our first notebook: we wrapped the current PoS specs in a cadCAD environment, using the radCAD library. This wrapper augmented the dynamics of chain updates (forming and adding a new block) with validator policies, which inform the state updates. We assumed zero network latency, i.e., all validators are aware of all events as they happen and share the same view of the chain. In practice, this assumption does not hold. We usually assume a partially synchronous model, where all validators are eventually informed of any event, with no bound on how long this may take (but a finite time in any case).
Under a partially synchronous model, validators may not share the same view of the chain, since they may not be aware that a new block was created when they have not yet received it from the underlying p2p network. In the worst case, an adversary manipulating the network can successfully game the view of other validators. If they are skilled enough, they could even pretend that their manipulation is not malicious but the result of unfortunate network delays. How can we create policies that are robust to network uncertainty and strategic interactions?
We will develop these questions throughout, getting insight from our models and simulations.
First, we need to load the beacon chain specs, as defined by the canonical PoS repo.
Dealing with the specs constants, like the number of slots per epoch or the base reward factors, is also much easier now. We define two different config files, fast.yaml and medium.yaml. fast sets SLOTS_PER_EPOCH to 4. We don't have time to waste! medium sets it to 16 and changes a few other things -- we'll tell you more about it when we get there.

So we import our own custom-built specs, prepare the fast config file and reload the specs to apply this configuration.
%%capture
import specs
import importlib
from eth2spec.config.config_util import prepare_config
prepare_config(".", "fast.yaml")
importlib.reload(specs)
We import a few libraries we will need for our simulations: network contains the network implementation we describe below, while brlib contains our validator policies. We also import radCAD's libraries and pandas to work with the simulation output.
import network as nt
import brlib
import copy
from radcad import Model, Simulation, Experiment
from radcad.engine import Engine, Backend
import pandas as pd
import plotly.express as px
import plotly.io as pio
pd.options.plotting.backend = "plotly"
pio.renderers.default = "plotly_mimetype+notebook_connected"
import plotly.graph_objects as go
We use a simple p2p network abstraction. Nodes are identified by an index and belong to various information sets. Node $a$ produces an event (a new attestation, a new block) at $t=0$. Let $I(a,t)$ denote the set of nodes who know about events produced by $a$ after duration $t$.
Progressively, more and more nodes learn about the event as it is propagated over the p2p network. The implementation is available in the network.py file. Let's take a simple example.
set_a = nt.NetworkSet(validators=list([0,1]))
set_b = nt.NetworkSet(validators=list([1,2]))
set_c = nt.NetworkSet(validators=list([2,3]))
net = nt.Network(sets=list([set_a, set_b, set_c]))
Here we have four validators, 0, 1, 2 and 3, connected in a chain: 0 is connected to 1, who is connected to 2, who is connected to 3. Our information sets are stored in the network.sets array with the following indices:
index: [validators]
-------------------
0: [0,1]
1: [1,2]
2: [2,3]
As we did in the first notebook, we now create a dummy genesis state with four validators who each deposit 32 ETH in the contract -- the minimum to start validating.
genesis_state = brlib.get_genesis_state(4)
brlib.process_genesis_block(genesis_state)
specs.process_slots(genesis_state, 1)
In the next code chunk, validator 3 disseminates an Attestation object, which is kept in the items attribute of the network.
Note: We use brlib.py -- the Beacon Runner library of validator behaviours -- to create this attestation object. Unlike the first notebook, we won’t be going over the code governing validator policies, but if you’re curious to see how it works, check it out here.
attestation = brlib.honest_attest(genesis_state, 3)
nt.disseminate_attestation(net, 3, attestation)
print("there are", len(net.attestations), "attestations in network")
print("validator sets who know about this attestation:", [d for d in net.attestations[0].info_sets])
there are 1 attestations in network
validator sets who know about this attestation: [2]
As you can see from the print statement above, at this stage only the validator set at index 2 knows about the item, i.e., $I(3,1) = \{ 3, 2 \}$.
We call update_network to diffuse items in the network by one step. Since validator 2 is in $I(3, 1)$, and validator 1 shares an information set with validator 2, we expect validator 1 to know about the attestation at $t=2$ (which means $I(3,2) = \{ 3, 2, 1 \}$).
nt.update_network(net)
print("validator sets who know about this attestation:", [d for d in net.attestations[0].info_sets])
validator sets who know about this attestation: [2, 1]
As expected, we now see that validator sets 2 and 1 know about the event. Since validator 0 shares an information set with validator 1, if we call update_network again, we should see that all sets have learned about the event.
nt.update_network(net)
print("validator sets who know about this attestation:", [d for d in net.attestations[0].info_sets])
validator sets who know about this attestation: [2, 1, 0]
Finally, to see things from the perspective of an individual validator, we can use knowledge_set(network, validator_index).
nt.knowledge_set(net, 2)
{'attestations': [(0, NetworkAttestation(Container)
    item: Attestation = Attestation(Container)
        aggregation_bits: SpecialBitlistView = Bitlist[2048](1 bits: 1)
        data: AttestationData = AttestationData(Container)
            slot: Slot = 0
            index: CommitteeIndex = 0
            beacon_block_root: Root = 0xb6ecaf3995644680caedd44613f7e95bee66e6cce84c650334acf1594cc9466e
            source: Checkpoint = Checkpoint(Container)
                epoch: Epoch = 0
                root: Root = 0x0000000000000000000000000000000000000000000000000000000000000000
            target: Checkpoint = Checkpoint(Container)
                epoch: Epoch = 0
                root: Root = 0xb6ecaf3995644680caedd44613f7e95bee66e6cce84c650334acf1594cc9466e
        signature: BLSSignature = 0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
    info_sets: List[NetworkSetIndex, 1099511627776] = List[NetworkSetIndex, 1099511627776]<<len=3>>(2, 1, 0))]}
knowledge_set returns a dictionary. In this notebook, we only consider attestations on the network, but in the next, we'll add blocks on there too. Each value in the dictionary is an array of pairs, each pair giving the index of the item in the network queue and the item itself.
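To make the shape of that return value concrete, here is a tiny usage sketch; the field name item on the network attestation is read off the printed output above rather than from the library's documentation, so treat it as an assumption.

# A minimal sketch, assuming each pair is (index_in_network_queue, network_item),
# as the printed output above suggests.
known = nt.knowledge_set(net, 2)
for queue_index, network_attestation in known["attestations"]:
    # network_attestation.item is the wrapped Attestation; print where it points
    print(queue_index, network_attestation.item.data.slot)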
In a network partition, the network is split in two (or more) components, which are unable to communicate with each other. We can represent it simply with our network class by defining two information sets that do not share any member in common. In particular, to represent the example given in Incentives in Ethereum's hybrid Casper protocol by Buterin, Reijsbergen, Leonardos and Piliouras (Figure 6), we assume the first partition contains 60% of the validators, while the remaining 40% are in the second partition.
Our simulation will investigate how finalisation resumes after inactive validators lose their stake to protocol penalties. This time, we will have 100 validators in our network. Since we have more validators, let's go ahead and set SLOTS_PER_EPOCH to 16, giving them more room to breathe (note that current specs set SLOTS_PER_EPOCH to 32). We'll also increase the penalty applied when the chain stops finalising, so that we can speed up the interesting part of the simulation.
%%capture
prepare_config(".", "medium.yaml")
importlib.reload(specs)
importlib.reload(nt)
importlib.reload(brlib)
Neither partition sees what is going on with the other partition, so we create two beacon states that will evolve in parallel. We associate a beacon state with each network information set. Both states start out in the same initial condition, with 100 registered validators, although, as we will see, only 60 will be active in consensus formation in the first partition, compared to 40 in the second.
genesis_state = brlib.get_genesis_state(100)
brlib.process_genesis_block(genesis_state)
specs.process_slots(genesis_state, 1)
genesis_state_2 = copy.deepcopy(genesis_state)
set_a = nt.NetworkSet(validators=list(range(0,60)), beacon_state=genesis_state)
set_b = nt.NetworkSet(validators=list(range(60,100)), beacon_state=genesis_state_2)
network = nt.Network(sets=list([set_a, set_b]))
Why are network partitions an issue? Attestations produced by validators in one partition cannot make it to validators in the other. At the start, all validators are assumed to have equal stake (the 32 ETH needed to become a validator), so there is no way for either partition to finalise checkpoints since neither has 2/3rds of the total stake (respectively, they have 60 and 40% of the total stake).
The partition pretty much splits the chain into two branches. We can think of validators as having two accounts: one on their side of the partition (the branch they can keep up with), and one on the other side (the branch they can't see).
The only way for finalisation to resume is for inactive validators on either side to start losing their stake -- this has the effect of progressively increasing the stake of active validators (as a percentage of the total stake). At some point, active validators will have over 2/3rds of the total stake.
Validators in partition 0 do not see any activity from validators in partition 1, so the branch run by partition 0 slowly penalises validators by emptying their accounts and giving more relative weight to the active validators in partition 0.
The penalties come in different flavours:

- Inactive validators -- here, the validators stuck on the other side of the partition -- miss their attestation rewards and are penalised instead, epoch after epoch.
- After MIN_EPOCHS_TO_INACTIVITY_PENALTY epochs without finalisation, an additional penalty kicks in to nullify the gains of honest (and timely) validators and add a penalty proportional to the number of epochs since the last finalised block. Gulp!

The second penalty might seem overly strict. But a chain that is not finalising is no good to anyone! The idea is to make it highly unprofitable to keep validating on a chain where some partition or malicious set of validators prevents finalisation.
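To give a rough sense of the second penalty's magnitude, here is a minimal sketch of the inactivity leak, loosely modelled on the phase 0 accounting -- an illustration only, with placeholder constants, not the code the simulation runs.

# Minimal sketch of the inactivity leak (illustration only, not the spec code).
# Once the chain has gone MIN_EPOCHS_TO_INACTIVITY_PENALTY epochs without
# finalising, a non-attesting validator loses roughly
#   effective_balance * finality_delay // INACTIVITY_PENALTY_QUOTIENT
# per epoch, on top of its missed attestation rewards.
MIN_EPOCHS_TO_INACTIVITY_PENALTY = 4   # placeholder; the simulation reads this from its config
INACTIVITY_PENALTY_QUOTIENT = 2**24    # placeholder; the config tunes this to make the leak faster

def inactivity_leak_per_epoch(effective_balance_gwei, finality_delay):
    # No extra penalty while the chain has finalised recently
    if finality_delay <= MIN_EPOCHS_TO_INACTIVITY_PENALTY:
        return 0
    # The penalty grows linearly with the number of epochs since the last finalised checkpoint
    return effective_balance_gwei * finality_delay // INACTIVITY_PENALTY_QUOTIENT

# Example: a 32 ETH validator, 100 epochs after the last finalised checkpoint (result in Gwei)
print(inactivity_leak_per_epoch(32 * 10**9, 100))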
When the issue is a "simple" partition, as we'll see, the mechanism leaks enough cash from the inactive validators that finalisation resumes after some time, so waiting it out works. But in the case of a malicious attack, where a set of validators may be censoring another set, finalisation can be durably prevented. Compared to having no reward from honestly validating, the cost of coordinating off-chain to fork away malicious validators doesn't seem so bad now.
We have quite a few new building blocks in our simulation, with most of the code abstracted away in our brlib.py file. We also introduce observers: metrics collected throughout the simulation, such as the active stake on each branch.
# Average effective stake of active validators in the first partition
def active_stake_branch_0(state):
    network = state["network"]
    return sum(
        [network.sets[0].beacon_state.validators[validator_index].effective_balance
         for validator_index in network.sets[0].validators]
    ) / 60

# Average effective stake of inactive validators in the first partition
def inactive_stake_branch_0(state):
    network = state["network"]
    return sum(
        [network.sets[0].beacon_state.validators[validator_index].effective_balance
         for validator_index in network.sets[1].validators]
    ) / 40

# Activity ratio of validators in the first partition
def activity_ratio_0(state):
    return state['active_stake_branch_0'] * 60 / state['inactive_stake_branch_0'] / 40

# Average effective stake of active validators in the second partition
def active_stake_branch_1(state):
    network = state["network"]
    return sum(
        [network.sets[1].beacon_state.validators[validator_index].effective_balance
         for validator_index in network.sets[1].validators]
    ) / 40

# Average effective stake of inactive validators in the second partition
def inactive_stake_branch_1(state):
    network = state["network"]
    return sum(
        [network.sets[1].beacon_state.validators[validator_index].effective_balance
         for validator_index in network.sets[0].validators]
    ) / 60

# Activity ratio of validators in the second partition
def activity_ratio_1(state):
    return state['active_stake_branch_1'] * 40 / state['inactive_stake_branch_1'] / 60
def slot(state):
    network = state["network"]
    return network.sets[0].beacon_state.slot

def epoch(state):
    network = state["network"]
    return specs.get_current_epoch(network.sets[0].beacon_state)

def finalized_epoch(state):
    network = state["network"]
    return network.sets[0].beacon_state.finalized_checkpoint.epoch

def just_bits(state):
    network = state["network"]
    return network.sets[0].beacon_state.justification_bits

def prev_justified_cp(state):
    network = state["network"]
    return network.sets[0].beacon_state.previous_justified_checkpoint.epoch

def curr_justified_cp(state):
    network = state["network"]
    return network.sets[0].beacon_state.current_justified_checkpoint.epoch

def proposer(state):
    network = state["network"]
    return specs.get_beacon_proposer_index(network.sets[0].beacon_state)

def attestations_length(state):
    network = state["network"]
    proposer = state["proposer"]
    return len(nt.knowledge_set(network, proposer)["attestations"])
def percent_attesting_previous_epoch(state):
    # Nothing to measure during the first two epochs (0 is the genesis epoch)
    if specs.get_current_epoch(state) <= 0 + 1:
        return 0.0

    previous_epoch = specs.get_previous_epoch(state)
    matching_target_attestations = specs.get_matching_target_attestations(state, previous_epoch)
    percent_attest = float(specs.get_attesting_balance(state, matching_target_attestations)) / specs.get_total_active_balance(state) * 100
    return percent_attest

def percent_attesting_current_epoch(state):
    # Same guard as above: skip the first two epochs after genesis
    if specs.get_current_epoch(state) <= 0 + 1:
        return 0.0

    current_epoch = specs.get_current_epoch(state)
    matching_target_attestations = specs.get_matching_target_attestations(state, current_epoch) # Current epoch
    percent_attest = float(specs.get_attesting_balance(state, matching_target_attestations)) / specs.get_total_active_balance(state) * 100
    return percent_attest
def attest_prev(state):
    network = state["network"]
    return round(
        percent_attesting_previous_epoch(network.sets[0].beacon_state), 2
    )

def attest_curr(state):
    network = state["network"]
    return round(
        percent_attesting_current_epoch(network.sets[0].beacon_state), 2
    )

def attestations_for_0(state):
    network = state["network"]
    return len([item for item in network.attestations if 0 in item.info_sets])
observers = {
    "active_stake_branch_0": active_stake_branch_0,
    "active_stake_branch_1": active_stake_branch_1,
    "inactive_stake_branch_0": inactive_stake_branch_0,
    "inactive_stake_branch_1": inactive_stake_branch_1,
    "activity_ratio_0": activity_ratio_0,
    "activity_ratio_1": activity_ratio_1,
    "slot": slot,
    "epoch": epoch,
    "finalized_epoch": finalized_epoch,
    "just_bits": just_bits,
    "prev_justified_cp": prev_justified_cp,
    "curr_justified_cp": curr_justified_cp,
    "proposer": proposer,
    "attestations_length": attestations_length,
    "attest_prev": attest_prev,
    "attest_curr": attest_curr,
    "attestations_for_0": attestations_for_0,
}
You should be able to see how the simulation is built from the way we organise our block_attestation_psub array in the following code snippet. If you are not sure what this means, take a look at the first notebook!
%%capture
from cadCADsupSUP import *
initial_conditions = {
    'network': network
}

block_attestation_psub = [
    # Step 1+2
    {
        'policies': {
            'action': brlib.attest_policy
        },
        'variables': {
            'network': brlib.disseminate_attestations
        }
    },
    # Step 3+4
    {
        'policies': {
            'action': brlib.propose_policy
        },
        'variables': {
            'network': brlib.disseminate_blocks
        }
    }
]
observed_ic = get_observed_initial_conditions(initial_conditions, observers)
observed_psubs = get_observed_psubs(block_attestation_psub, observers)
model = Model(
initial_state=observed_ic,
state_update_blocks=observed_psubs,
params={},
)
simulation = Simulation(model=model, timesteps=300, runs=1)
experiment = Experiment([simulation])
experiment.engine = Engine(deepcopy=False, backend=Backend.SINGLE_PROCESS)
result = experiment.run()
df = pd.DataFrame(result)
Our simulation has one state variable, declared in initial_conditions: our network object. We run the simulation for 300 steps (= 300 slots). Each step can be broken down into the following substeps:

1. Validators scheduled to attest in the current slot produce their attestations (attest_policy).
2. These attestations are disseminated over the network (disseminate_attestations).
3. The proposer of the current slot produces a block including the attestations they know about (propose_policy).
4. The block is disseminated over the network (disseminate_blocks).
Note that our partition introduces a few difficulties. In particular, since latency within each partition is zero, attestations that have been included in a block can simply be removed from the network.items list. If we did not have perfect latency, we would want to keep these attestations around, so that the validator whose turn it is to produce a block in a partition, and who may be unaware of all the blocks created in that same partition, can include attestations that, from their perspective, have not yet been included.

In the background, we have also optimised the storage of attestations with aggregates. If the chain works as expected, validators certainly won't get too creative with what they are attesting, and we should expect a whole lot of attestations to look like each other. Why bother keeping track of a thousand copies of the same object? Aggregates allow us to securely batch these attestations.
Since this topic is fairly central to the larger eth2 construction, let's dive in for a moment.
If you remember how we built attestations in the initial Beacon Runner, we had an aggregation_bits attribute in Attestation instances. In the previous notebook, we set all bits of aggregation_bits to 0, except for the one bit corresponding to the validator's index in its committee, which we set to 1.

If, however, in the ideal case, we expect all validators from the same committee to cast the exact same vote (same source, target and head), why bother keeping all these votes as individual Attestations? Instead, we set the bits of the aggregation_bits attribute to represent which validators from the committee have cast this exact attestation.
a = Attestation(AttestationData(source, target, head, committee, slot))
# Possible values
a.aggregation_bits = [0, 1] # val. 1 cast attestation a
a.aggregation_bits = [1, 0] # val. 0 cast attestation a
a.aggregation_bits = [1, 1] # vals. 0 and 1 cast attestation a
In the consensus layer specifications, each slot, some validators are randomly chosen as aggregators. Their job is to collect a bunch of attestations from diverse validators and package them into unique attestations, with aggregation_bits fields set to denote which validators have attested to a particular vote.
In the best case, if all validators of a committee have attested to the same vote, an aggregator will take in their big pile of attestations and output a single one with all bits set to 1. How can they do that? Why wouldn't they set the bits to anything they want? This is the magic of BLS signatures, and the particular case of BLS12-381, an especially friendly elliptic curve that allows us to aggregate signatures securely.
One caveat here is that while signatures are aggregatable, aggregates themselves are not! This means that we couldn't further reduce two aggregates of the same attestation with the following bits: [0, 1, 1] and [1, 1, 0]. In both aggregates, the second bit is set to 1, meaning that both aggregates have seen the attestation produced by the second validator. Aren't we double-counting then?

We are not! The beacon chain is quite permissive and the second validator would just see their vote included twice: once in the aggregate with bits [0, 1, 1], and once in the aggregate with bits [1, 1, 0]. Validators are rewarded for their timeliness, or how fast their attestations are included in the chain. In case a validator's vote is included twice or more, the reward measures how fast their first inclusion was made and only counts that vote. Good guy beacon state!
In our simulation, we follow more or less the same pattern, but we do not model the behaviour of aggregators (yet!). Validators produce attestations on their own, setting the aggregation bit to 1 for themselves only. Once the block producer has collected all the attestations that were not yet included in their version of the chain (remember that we have two partitions!), they aggregate them as much as possible. This is done with the following two pieces of code:
def build_aggregate(state, attestations):
    # All attestations are from the same slot, committee index and vote for
    # same source, target and beacon block.
    if len(attestations) == 0:
        return []
    aggregation_bits = Bitlist[specs.MAX_VALIDATORS_PER_COMMITTEE](*([0] * len(attestations[0].aggregation_bits)))
    for attestation in attestations:
        validator_index_in_committee = attestation.aggregation_bits.index(1)
        aggregation_bits[validator_index_in_committee] = True
    return specs.Attestation(
        aggregation_bits=aggregation_bits,
        data=attestations[0].data
    )
def aggregate_attestations(state, attestations):
    # Take in a set of attestations
    # Output aggregated attestations
    hashes = set([hash_tree_root(att.data) for att in attestations])
    return [build_aggregate(
        state,
        [att for att in attestations if att_hash == hash_tree_root(att.data)]
    ) for att_hash in hashes]
The first function, given a set of attestations vouching for the same source, target, head and from the same slot and committee, returns an aggregate attestation with the correct bits set to 1. The second function takes in a set of diverse attestations and groups identical attestations together.
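To make the grouping step concrete, here is a spec-free toy version of the same idea, with plain lists standing in for SSZ containers (the real functions above work on specs.Attestation objects):

# Toy illustration: group identical votes and OR their aggregation bits.
def toy_aggregate(attestations):
    # each attestation is a (data, bits) pair, e.g. ("vote A", [0, 1, 0])
    groups = {}
    for data, bits in attestations:
        acc = groups.setdefault(data, [0] * len(bits))
        groups[data] = [a | b for a, b in zip(acc, bits)]
    return list(groups.items())

# Three validators cast the same vote, one casts a different one
print(toy_aggregate([
    ("vote A", [1, 0, 0]), ("vote A", [0, 1, 0]),
    ("vote A", [0, 0, 1]), ("vote B", [0, 1, 0]),
]))
# => [('vote A', [1, 1, 1]), ('vote B', [0, 1, 0])]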
OK, so validators in both partitions are working hard on their own version of the beacon chain, but nothing is finalising. This is because part of the stake is inactive, that is, the validators stuck on the other side cannot attest to anything.
In order to finalise, we need at least 2/3rds of the stake attesting, meaning that active validators should have at least twice as much stake as inactive ones.
Another way of saying this is the following: If we define the activity ratio as the stake of active validators divided by the stake of inactive validators, then this ratio must be at least 2 in order for checkpoints to start finalising.
At the beginning of our simulation, the first partition has an activity ratio of 60/40 = 3/2, while the second partition is at 40/60 = 2/3. Neither is quite at 2 yet! But as time goes by, inactive validators start losing stake to the protocol penalties. Let's see how much.
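A quick back-of-the-envelope calculation gives the average inactive balance at which each branch crosses the ratio-2 threshold (numbers only; the simulation itself computes these quantities from the beacon state):

# Activity ratio = (total active stake) / (total inactive stake); finalisation needs it to reach 2.
# Everyone starts with 32 ETH of effective balance.
active_0, inactive_0 = 60, 40   # branch 0: 60 active validators, 40 inactive
active_1, inactive_1 = 40, 60   # branch 1: 40 active validators, 60 inactive

# Average inactive balance below which each branch reaches an activity ratio of 2
threshold_0 = active_0 * 32 / (2 * inactive_0)   # = 24.0 ETH
threshold_1 = active_1 * 32 / (2 * inactive_1)   # ~ 10.67 ETH
print(threshold_0, threshold_1)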
First, let's plot the average stake of active and inactive validators in partition 0.
df[df.substep == 1].plot('timestep', ['active_stake_branch_0', 'inactive_stake_branch_0'])
As expected, active validators maintain their effective balance at 32 ETH, while inactive validators quickly feel the wrath of eth2 (in fact, they feel it much quicker than they would with the default constants, since we tuned INACTIVITY_PENALTY_QUOTIENT quite a bit to make the penalty harsher). How do our activity ratios compare?
df[df.substep == 1].plot('timestep', ['activity_ratio_0', 'activity_ratio_1'])
Unsurprisingly, partition 0 is the quickest to reach an activity ratio of 2 and so resumes finalisation much earlier than partition 1. Let's dive into what happens as the chains start finalising again.
Here we'll focus on the first partition, where validators 0 to 59 are active. In the table below we show the moment when justification resumes, in other words when active validators have over twice the weight of inactive ones.
subs = df[(df.slot >= 142) & (df.slot < 146) & (df.substep == 1)]
subs[['epoch', 'slot', 'attest_prev', 'attest_curr',
'prev_justified_cp', 'curr_justified_cp', 'finalized_epoch', 'just_bits']]
| | epoch | slot | attest_prev | attest_curr | prev_justified_cp | curr_justified_cp | finalized_epoch | just_bits |
|---|---|---|---|---|---|---|---|---|
| 283 | 8 | 142 | 68.57 | 52.57 | 0 | 0 | 0 | [0, 0, 0, 0] |
| 285 | 8 | 143 | 68.57 | 58.29 | 0 | 0 | 0 | [0, 0, 0, 0] |
| 287 | 9 | 144 | 62.77 | 0.04 | 0 | 7 | 0 | [0, 1, 0, 0] |
| 289 | 9 | 145 | 73.85 | 0.04 | 0 | 7 | 0 | [0, 1, 0, 0] |
just_bits is a 4-bit array indicating which of the four latest epochs are justified. Notice that in the last step of epoch 8, attest_prev is over 66.666...%. The beacon chain has enough weighted votes to justify epoch 7! During the transition from epoch 8 to epoch 9, the justification bits are set to indicate that the second latest epoch (i.e., epoch 7) is justified, while the latest (epoch 8) and the third and fourth latest (epochs 6 and 5) are not.
just_bits | 0 | 1 | 0 | 0 |
----------|---|---|---|---|
epoch | 8 | 7 | 6 | 5 |
Casper FFG talks of supermajority links: heavy (with over 2/3rds of the stake) sets of votes taking as source the latest known justified checkpoint and as target a more recent checkpoint. In this case, genesis epoch 0 is the latest justified checkpoint since there was never enough weight to justify anything else, while the target was epoch 7. So we have the following links:
Supermajority links
-------------------
0 -> 7
But nothing is finalised yet, as finalized_epoch is still 0. When does finalisation happen?
subs = df[(df.slot >= 158) & (df.slot < 162) & (df.substep == 1)]
subs[['epoch', 'slot', 'attest_prev', 'attest_curr',
'prev_justified_cp', 'curr_justified_cp', 'finalized_epoch', 'just_bits']]
| | epoch | slot | attest_prev | attest_curr | prev_justified_cp | curr_justified_cp | finalized_epoch | just_bits |
|---|---|---|---|---|---|---|---|---|
| 315 | 9 | 158 | 73.85 | 56.62 | 0 | 7 | 0 | [0, 1, 0, 0] |
| 317 | 9 | 159 | 73.85 | 62.77 | 0 | 7 | 0 | [0, 1, 0, 0] |
| 319 | 10 | 160 | 67.69 | 0.04 | 7 | 9 | 7 | [1, 1, 1, 0] |
| 321 | 10 | 161 | 73.85 | 0.04 | 7 | 9 | 7 | [1, 1, 1, 0] |
Now we focus on the transition from epoch 9 to epoch 10. By the last slot of epoch 9, slot 159, attest_prev is well above 2/3rds with 73.85% of the (weighted) votes, which indicates that epoch 8 is justified. attest_curr however is only around 62.77% at that point, but this is hiding the fact that the last block of the epoch has not been processed yet. Indeed, epoch 10 starts with an attest_prev of 67.69%, meaning that epoch 9 was also justified.
just_bits | 1 | 1 | 1 | 0 |
----------|---|---|---|---|
epoch | 9 | 8 | 7 | 6 |
We now have several consecutive epochs (7, 8 and 9) which are all justified. In the language of Casper FFG, we have the following supermajority links:
Supermajority links
-------------------
0 -> 7
7 -> 9
Finalisation of epoch 7 happens if all of the following three conditions are satisfied:

- Epoch 7 is a justified checkpoint;
- There is a supermajority link from epoch 7 to a checkpoint at most two epochs later;
- Every epoch between the source and the target of that link is also justified.
We have a link from epoch 7 to epoch 9. Epoch 9 is two epochs away, and as we see from the justification bits, epoch 8 is also justified! So epoch 7 is finalised.
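For reference, here is a sketch of the check the epoch processing performs, loosely modelled on the beacon chain's justification and finalisation logic (bit ordering as in the small tables above, most recent epoch first; this is an illustration, not the exact spec code):

# Sketch: can the justified checkpoint be finalised, given the justification bits
# (index 0 = most recent epoch) at the current epoch transition?
def finalize_checkpoint(just_bits, justified_epoch, current_epoch):
    # Two-epoch rule: the three most recent epochs are justified, and the
    # justified checkpoint sits two epochs back (the source of the link)
    if all(just_bits[0:3]) and justified_epoch + 2 == current_epoch:
        return justified_epoch
    # One-epoch rule: the two most recent epochs are justified, with the
    # justified checkpoint one epoch back
    if all(just_bits[0:2]) and justified_epoch + 1 == current_epoch:
        return justified_epoch
    return None  # nothing new to finalise

# The situation above: epochs 9, 8 and 7 are justified, and epoch 7 was the
# justified checkpoint going into the transition
print(finalize_checkpoint([1, 1, 1, 0], justified_epoch=7, current_epoch=9))  # -> 7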
Let's fast-forward a bit more...
subs = df[(df.slot >= 174) & (df.slot < 178) & (df.substep == 1)]
subs[['epoch', 'slot', 'attest_prev', 'attest_curr',
'prev_justified_cp', 'curr_justified_cp', 'finalized_epoch', 'just_bits']]
| | epoch | slot | attest_prev | attest_curr | prev_justified_cp | curr_justified_cp | finalized_epoch | just_bits |
|---|---|---|---|---|---|---|---|---|
| 347 | 10 | 174 | 73.85 | 59.08 | 7 | 9 | 7 | [1, 1, 1, 0] |
| 349 | 10 | 175 | 73.85 | 62.77 | 7 | 9 | 7 | [1, 1, 1, 0] |
| 351 | 11 | 176 | 67.69 | 0.04 | 9 | 10 | 9 | [1, 1, 1, 1] |
| 353 | 11 | 177 | 73.85 | 0.04 | 9 | 10 | 9 | [1, 1, 1, 1] |
Now epoch 9 is finalised, with supermajority links:
Supermajority links
-------------------
0 -> 7
7 -> 9
9 -> 10
OK last one...
subs = df[(df.slot >= 190) & (df.slot < 194) & (df.substep == 1)]
subs[['epoch', 'slot', 'attest_prev', 'attest_curr',
'prev_justified_cp', 'curr_justified_cp', 'finalized_epoch', 'just_bits']]
| | epoch | slot | attest_prev | attest_curr | prev_justified_cp | curr_justified_cp | finalized_epoch | just_bits |
|---|---|---|---|---|---|---|---|---|
| 379 | 11 | 190 | 73.85 | 57.85 | 9 | 10 | 9 | [1, 1, 1, 1] |
| 381 | 11 | 191 | 73.85 | 61.54 | 9 | 10 | 9 | [1, 1, 1, 1] |
| 383 | 12 | 192 | 66.46 | 0.04 | 10 | 10 | 9 | [0, 1, 1, 1] |
| 385 | 12 | 193 | 73.85 | 0.04 | 10 | 10 | 9 | [0, 1, 1, 1] |
What happened here? Things seemed to be going so well! Unfortunately, the last block in epoch 11 didn't carry enough votes to push attest_curr above 66.666%, scoring a close 66.46% (seen in the attest_prev of slot 192).

Why did this happen? Validators attesting at slot 191 are the last ones to do so before the next epoch begins. Had their attestations been included by the end of epoch 11, their weight would have pushed the attesting stake to 73.85%; but attestations are only included at least one slot after they are produced, i.e., in the first slot of epoch 12 at the earliest, right after the accounting is over. This means our supermajority links are unchanged:
Supermajority links
-------------------
0 -> 7
7 -> 9
9 -> 10
and epoch 9 remains the last finalised epoch.
This notebook introduced a key element of distributed systems: the abstraction of a P2P network over which agents exchange data. We observed how partitions that are more or less evenly split temporarily prevent finalisation on either side of the chain.
Had the partition resolved itself and the network regained integrity, validators would have needed to reach consensus on the "true" state of the chain. Given the GHOST-based fork-choice rule, it is the chain with the most "work" (in the sense of staked votes) that would have won out and been accepted as canonical.

We have not touched upon this fork-choice rule yet, but we will do so in the next notebook, where we introduce latency in the production of blocks. In this piece, we assumed no latency within each partition, so that validators in the same partition are always synced with each other. We'll relax this assumption and simulate how conflicts are resolved when block producers either do not release their blocks on time or experience delays communicating their blocks to other validators.