We've discussed EIP 1559 over the course of several notebooks. A competing proposal, dubbed "escalator", was introduced by Dan Finlay, taking inspiration from a paper by Agoric.
We take the current first-price auction paradigm as our benchmark. 1559 removes the auction component, attempting to quote a price for users based on supply of gas and demand for it. Users become price-takers most of the time, with their only decision being whether to transact or not.
The escalator proposal is a somewhat orthogonal direction from 1559. It retains some aspect of the first-price auction mechanism (users competing against each other) but allows users to "bump up" their bid following a linear formula. For instance, I initially bid 10 and specify that with each passing block, my bid should be increased by 5, until it reaches some maximum that I also defined. If I am included immediately, my gas price is 10. One block later, it is 15, etc.
The pattern of resubmitting transactions at a higher bid is known to most users of Ethereum. Manual resubmission of transactions is enabled by most wallets, while services such as any.sender allow you to programmatically emulate resubmission. The escalator automates it in protocol, in the sense that users do not need to manually resubmit, but set the parameters for the fee increase once and for all before sending the transaction to the pool, where its bid increases.
So on one axis we have a protocol-determined objective fee equalising supply and demand. On the other, we have control over bidding behaviour. The first is useful most of the time, in particular when demand is stationary. Yet the second may be desirable for these short periods where demand drastically changes and user behaviour reverts to strategic first-price auction-style bidding. Could we combine the two?
In this notebook, we investigate the floating escalator, a proposal to do so. We'll introduce its dynamics and study some user behaviours under this paradigm.
See this repository's README for more instructions on how to run this notebook. The first step is to import the relevant objects from our library.
import os, sys
sys.path.insert(1, os.path.realpath(os.path.pardir))
# You may remove the two lines above if you have installed abm1559 from pypi
from typing import Sequence
from abm1559.utils import (
    constants,
)
from abm1559.txpool import TxPool
from abm1559.users import (
    UserFloatingEsc,
)
from abm1559.userpool import UserPool
from abm1559.chain import (
    Chain,
    Block1559,
)
from abm1559.simulator import (
    spawn_poisson_heterogeneous_demand,
    update_basefee,
)
from abm1559.txs import (
    Transaction,
    TxFloatingEsc,
)
import pandas as pd
pd.set_option('display.max_rows', 20)
import numpy as np
import seaborn as sns
import matplotlib as mpl
mpl.rcParams['figure.dpi'] = 150
Since the floating escalator relies on a combination of both 1559 and the escalator, we'll introduce each one in turn before looking at their combination.
EIP 1559 targets a specific block size c. When blocks are too full, a price known as the basefee increases. More people want in? Fine, we'll raise the price. Over time, the basefee fluctuates, with higher values reached when more users want to transact.
The basefee is governed by a simple equation
basefee[t+1] = basefee[t] * (1 + d * (gas_used[t] - c) / c)
where gas_used[t] is the amount of gas used by block t. Note that blocks can use at most 2 * c gas, twice the target. When they are full, the update rule above becomes basefee[t+1] = basefee[t] * (1 + d).
The adjustment speed d is currently set at 12.5%, which implies that the basefee after a full block increases by 12.5% (and the basefee after an empty block decreases by 12.5%).
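As a sketch (independent of the library's own `update_basefee` helper), the update rule can be written as:

```python
D = 0.125  # adjustment speed d

def next_basefee(basefee, gas_used, target):
    # basefee[t+1] = basefee[t] * (1 + d * (gas_used[t] - c) / c)
    return basefee * (1 + D * (gas_used - target) / target)

# A full block (2 * c gas used) raises the basefee by 12.5%...
assert next_basefee(100.0, 2_000_000, 1_000_000) == 112.5
# ...an empty block lowers it by 12.5%, and a block on target leaves it unchanged
assert next_basefee(100.0, 0, 1_000_000) == 87.5
assert next_basefee(100.0, 1_000_000, 1_000_000) == 100.0
```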
In the escalator paradigm, a user submits:

- start_bid
- start_block
- length
- max_bid
Over length blocks, the true bid bid[t] at block t follows a linear interpolation of start_bid and max_bid, with

bid[t] = start_bid + (t - start_block) / length * (max_bid - start_bid)

At t = start_block, bid[t] = start_bid. At t = start_block + length (the end of the escalator), bid[t] = max_bid.
In other words, the bid escalates slower or faster, depending on how much time it is valid for and how high the maximum bid is.
For example, if I submit 10 as my initial bid, 20 as my maximum, and 5 as the number of blocks over which my bid is valid for, then, as long as my transaction hasn't been included yet, my bid will increase by 2 every block for the next 5 blocks (in other words, until my bid reaches 20).
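The interpolation above can be sketched directly; the numbers below reproduce the 10-to-20-over-5-blocks example:

```python
def escalator_bid(t, start_bid, start_block, length, max_bid):
    # Linear interpolation between start_bid and max_bid over `length` blocks
    return start_bid + (t - start_block) / length * (max_bid - start_bid)

# Initial bid 10, maximum 20, valid for 5 blocks starting at block 0
assert escalator_bid(0, 10, 0, 5, 20) == 10.0   # at start_block, bid = start_bid
assert escalator_bid(1, 10, 0, 5, 20) == 12.0   # the bid increases by 2 per block
assert escalator_bid(5, 10, 0, 5, 20) == 20.0   # at the end, bid = max_bid
```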
In the escalator paradigm, users must set a large number of parameters (start bid, max bid, duration of the escalator). Meanwhile, 1559 gives us this nice gas price "oracle" that tells you the market conditions as you start transacting. Combining the two would be nice! What if our escalator could "start" at the current price quoted by 1559, the basefee, and climb the bid from there?
What exactly climbs in the floating escalator? Remember that, under 1559, users specify a gas premium, the maximum amount a miner can receive from including the transaction. During transitions -- for example, a spike in demand -- we've seen users become strategic and "overbid". Ideally, there would exist some fixed premium expected to compensate the cost for a miner of including one extra transaction. The escalator governs the dynamics of this premium.
We call this hybrid "floating": we see the basefee as a kind of tide, rising and lowering with demand. The escalator starts at or near the tide, depending on start_premium
. Meanwhile, the gas premium offered to the miners climbs in excess of the basefee. For instance, assume Alice starts her escalator at the basefee equal to 5 Gwei and increases the bid by 1 Gwei each block. She also specifies that she never wants to pay more than 15 Gwei.
basefees = [5, 6, 8, 10, 9, 7, 8, 10, 13, 16]
bids_alice = [min(15, basefees[i] + i) for i in range(10)]
df = pd.DataFrame({
    "block_height": range(10),
    "basefee": basefees,
    "bid_alice": bids_alice,
})
df.plot("block_height", ["basefee", "bid_alice"])
Notice the distance between the basefee (in blue) and Alice's bids (in orange) increases over time, by 1 Gwei per block, until Alice reaches her maximum bid of 15.
At block 6, the basefee is 8 Gwei, so Alice's bid is 14 Gwei (8 Gwei from the basefee, 6 Gwei from her escalator). Then at block 7, the basefee increases to 10 Gwei. While Alice's bid ought to be 17 Gwei (10 Gwei from the basefee, 7 Gwei from her escalator), it is capped at 15 Gwei (the maximum amount Alice is willing to pay). We'll assume that after 10 blocks, Alice's transaction simply drops out.
Suppose a different user, Bob, starts at the same block as Alice, with an increase of 0.5 Gwei per block, and the same 15 Gwei limit.
bids_bob = [min(15, basefees[i] + 0.5 * i) for i in range(10)]
df["bid_bob"] = bids_bob
df.plot("block_height", ["basefee", "bid_alice", "bid_bob"])
We see Bob's bids in green. Notice that they stay below Alice's bids. In a sense, Bob is more conservative than Alice is. Alice might be in a hurry to get her bid included, and doesn't mind "overpaying" (i.e. taking the risk that the increment she chose was too large). Bob, on the other hand, prefers to slowly escalate his bid. All things equal, Alice should be included before Bob is, since miners receive the difference between her bid and the basefee.
In our model, users have both a value for the transaction, $v$, and a cost for waiting, $c$. Getting included immediately nets you a payoff of $v$, minus the transaction fees expended. Getting included 2 blocks later, $v - 2c$, minus the transaction fees, etc.
Since $c$ represents the time preferences of the user (a higher $c$ means it is more costly for me to wait), we could decide the escalator slope based on $c$: the higher the $c$, the higher the slope and escalator increments. This is an appropriate strategy for users who care about getting in as fast as possible given their waiting costs. For instance, users chasing an arbitrage opportunity or optimistic rollup dispute resolution transactions have high waiting costs, and thus would ramp up quickly within a short amount time.
Which brings us to the question: How long should the escalator ramp up for? Given bid increments of amount $s$, after $t$ blocks, assuming a constant basefee $b$, my bid is $b + t \times s$. Meanwhile, if my transaction is included at $t$, my payoff is $v - t \times c - (b + t \times s)$. We never want this payoff to become negative: since this would mean we would be worse off transacting than not! To ensure this never happens, we can figure out the number of blocks $t$ after which the previous expression becomes negative and use that as the duration of the escalator.
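Solving $v - t \times c - (b + t \times s) \geq 0$ for $t$ gives $t \leq (v - b) / (c + s)$. We can sketch this with made-up numbers (the values of `v`, `b`, `c` and `s` below are hypothetical):

```python
def max_duration(v, b, c, s):
    # Largest integer t such that the payoff v - t*c - (b + t*s) stays nonnegative
    return int((v - b) / (c + s))

# Hypothetical user: value 20, basefee 5, waiting cost 1 per block, increment 2 per block
t_max = max_duration(20, 5, 1, 2)
assert t_max == 5
# Payoff at t = 5 is exactly zero: 20 - 5*1 - (5 + 5*2) = 0
assert 20 - t_max * 1 - (5 + t_max * 2) == 0
```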
So how large should our increments be? We could set them to some fraction of the cost per unit, to respect the intuition that users who are in a hurry should set higher increments. To simplify for now, we'll set them to be exactly the user's cost per unit.
In our simulations, the users' bids won't start on the basefee exactly. We'll define the start_premium
parameter as one escalator increment: if this increment is $c$, the first bid the user places is $b + c$ [2].
We've written a "dummy" UserFloatingEsc
class in the library (abm1559/users.py
) that we extend here to specify the parameters discussed above.
class UserHurryEsc(UserFloatingEsc):
    def decide_parameters(self, env):
        basefee = env["basefee"]
        slope = self.cost_per_unit
        # Longest wait t such that value - t * cost - (basefee + (t+1) * cost) >= 0
        escalator_length = int(((self.value - basefee) / self.cost_per_unit - 1) / 2)
        max_fee = basefee + (escalator_length + 1) * self.cost_per_unit
        max_block = self.wakeup_block + escalator_length
        start_premium = slope
        tx_params = {
            "max_fee": max_fee, # in wei
            "start_premium": start_premium, # in wei
            "start_block": self.wakeup_block,
            "max_block": max_block,
            "basefee": basefee,
        }
        return tx_params
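As a quick sanity check on the escalator_length formula (the numbers below are made up), the payoff stays nonnegative over the whole escalator, since the bid after waiting t blocks is basefee + (t + 1) * cost_per_unit:

```python
# Hypothetical user: value 20 Gwei, waiting cost 1 Gwei per block, basefee 5 Gwei
value, cost_per_unit, basefee = 20.0, 1.0, 5.0

escalator_length = int(((value - basefee) / cost_per_unit - 1) / 2)
assert escalator_length == 7

def payoff(t):
    # Payoff if included after waiting t blocks: value - waiting costs - bid
    bid = basefee + (t + 1) * cost_per_unit  # basefee + start premium + t increments
    return value - t * cost_per_unit - bid

# Nonnegative over the whole escalator, exactly zero on the last block
assert all(payoff(t) >= 0 for t in range(escalator_length + 1))
assert payoff(escalator_length) == 0.0
```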
Note that since the floating escalators have an expiry date (the max block after which they cannot be included), we create a new transaction pool which removes expired transactions.
class TxPoolFloatingEsc(TxPool):
    def add_txs(self, txs: Sequence[Transaction], env: dict) -> None:
        invalid_txs = [tx_hash for tx_hash, tx in self.txs.items() if not tx.is_valid(env)]
        self.remove_txs(invalid_txs)
        super().add_txs(txs)
As in our previous notebooks, we'll write out the main simulation loop.
def simulate(demand_scenario, shares_scenario, TxPool=TxPoolFloatingEsc, rng=None):
    # Instantiate a couple of things
    txpool = TxPool()
    basefee = constants["INITIAL_BASEFEE"]
    chain = Chain()
    metrics = []
    user_pool = UserPool()

    for t in range(len(demand_scenario)):
        # `env` is the "environment" of the simulation
        env = {
            "basefee": basefee,
            "current_block": t,
        }

        # We return some demand which on expectation yields demand_scenario[t] new users per round
        users = spawn_poisson_heterogeneous_demand(t, demand_scenario[t], shares_scenario[t], rng=rng)

        # Add users to the pool and check who wants to transact
        # We query each new user with the current basefee value
        # Users either return a transaction or None if they prefer to balk
        decided_txs = user_pool.decide_transactions(users, env)

        # New transactions are added to the transaction pool
        txpool.add_txs(decided_txs, env)

        # The best valid transactions are taken out of the pool for inclusion
        selected_txs = txpool.select_transactions(env, user_pool=user_pool, rng=rng)
        txpool.remove_txs([tx.tx_hash for tx in selected_txs])

        # We create a block with these transactions
        block = Block1559(txs=selected_txs, parent_hash=chain.current_head, height=t, basefee=basefee)

        # The block is added to the chain
        chain.add_block(block)

        row_metrics = {
            "block": t,
            "basefee": basefee / (10 ** 9),
            "users": len(users),
            "decided_txs": len(decided_txs),
            "included_txs": len(selected_txs),
            "blk_avg_gas_price": block.average_gas_price(),
            "blk_avg_tip": block.average_tip(),
            "pool_length": txpool.pool_length,
        }
        metrics.append(row_metrics)

        # Finally, basefee is updated and a new round starts
        basefee = update_basefee(block, basefee)

    return (pd.DataFrame(metrics), user_pool, chain)
users_per_round = 2500
blocks = 50
We'll only simulate UserHurryEsc
first, setting the average number of new users spawning between two blocks at 2,500. Our blocks can only accommodate 952 of them at most, so this will create congestion.
rng = np.random.default_rng(42)
demand_scenario = [users_per_round for i in range(blocks)]
shares_scenario = [{
    UserHurryEsc: 1,
} for i in range(blocks)]
(df_hurry, user_pool_hurry, chain_hurry) = simulate(demand_scenario, shares_scenario, rng=rng)
Let's observe some results!
df_hurry
 | block | basefee | users | decided_txs | included_txs | blk_avg_gas_price | blk_avg_tip | pool_length |
---|---|---|---|---|---|---|---|---|
0 | 0 | 1.000000 | 2542 | 2355 | 952 | 1.796545 | 0.796545 | 1403 |
1 | 1 | 1.124900 | 2473 | 2269 | 952 | 2.052800 | 0.927900 | 2684 |
2 | 2 | 1.265400 | 2515 | 2302 | 952 | 2.415915 | 1.150515 | 3907 |
3 | 3 | 1.423448 | 2437 | 2215 | 952 | 2.854479 | 1.431031 | 5015 |
4 | 4 | 1.601237 | 2430 | 2143 | 952 | 3.211069 | 1.609831 | 5968 |
... | ... | ... | ... | ... | ... | ... | ... | ... |
45 | 45 | 15.687228 | 2520 | 469 | 469 | 16.176957 | 0.489729 | 0 |
46 | 46 | 15.657618 | 2508 | 505 | 505 | 16.181792 | 0.524174 | 0 |
47 | 47 | 15.776029 | 2426 | 459 | 459 | 16.280334 | 0.504306 | 0 |
48 | 48 | 15.704839 | 2479 | 496 | 496 | 16.182209 | 0.477370 | 0 |
49 | 49 | 15.786505 | 2464 | 460 | 460 | 16.243491 | 0.456987 | 0 |
50 rows × 8 columns
We recognise dynamics that should be familiar to us now. While the same average number of users spawn each block, and blocks are full in the first few steps, by the end of the simulation a much smaller number of users decides to actually transact (decided_txs). A new phenomenon is the pool_length being exactly zero by the end. Since transactions expire, old unincluded transactions are removed, while new transactions in the pool are all included. The basefee has reached its stationary level, at which most users are priced out. This is confirmed by the following plot.
df_hurry.plot("block", ["basefee", "blk_avg_tip"])
Note the average tip in orange: in the first 20 blocks, when there is true competition from a shift in demand, many users want in given the low basefee amount, too many for all to be included. Those who wait in the pool see their bids escalate with increments equal to their cost per unit of waiting time. Miners of the blocks including highly escalated bids receive a much heftier tip, the difference between the user's bid and the current basefee. This comes to an end once basefee is stationary, after which priced out users do not even care to join the pool, escalating bids or not.
# Obtain the pool of users (all users spawned by the simulation)
user_pool_hurry_df = user_pool_hurry.export().rename(columns={ "pub_key": "sender" })
# Export the trace of the chain, all transactions included in blocks
chain_hurry_df = chain_hurry.export()
# Join the two to associate transactions with their senders
user_txs_hurry_df = chain_hurry_df.join(user_pool_hurry_df.set_index("sender"), on="sender")
# We'll only look at the first 16 blocks
first_blocks = user_txs_hurry_df[user_txs_hurry_df.block_height <= 15].copy()
first_blocks["wakeup_block"] = first_blocks["wakeup_block"].astype("category")
Below, we are plotting the distribution of users included in each successive block. On the x-axis, we represent the value of the user $v_i$, while on the y-axis, we plot the cost per unit of time waiting $c_i$. Each point on one plot corresponds to one included transaction, with the point located at the (value, cost per unit) coordinates of the user. Additionally, we give a distinct colour to each wave of new users: users spawned before block 0 are blue, those spawned between blocks 0 and 1 are orange etc.
g = sns.FacetGrid(data=first_blocks, col="block_height", col_wrap = 4)
g.map_dataframe(sns.scatterplot, x="value", y="cost_per_unit", hue="wakeup_block", palette="muted")
g.add_legend()
g.fig.set_figwidth(8)
g.fig.set_figheight(8)
The plot is quite busy, but we can observe the following:
- In the first block (block_height = 0), we clearly see that only users with higher time preferences (high cost per unit) are included. This is not entirely surprising since they offer a premium equal to their cost for waiting, so miners who rank users according to that premium will prefer users with higher premiums -- and thus higher time preferences.
- In the second block (block_height = 1), we see that most included users are new users in orange (who appeared just before block 1), while some users from the previous wave (in blue) are included too. These late users are users spawned before block 0 with relatively low cost per unit. Yet, having waited one block already, their escalating premium is higher than some of the new users spawned just before block 1, which justifies their inclusion.

The question then is: does the extra transaction expressivity afforded by the escalator improve the efficiency of the fee market? We first need to be clear what efficiency means in this context. A common measure in algorithmic game theory is the social welfare: the total payoff received by all users (transaction senders and miners) minus the costs they incur.
In our case, we have users with personal values $v_i$ and cost for waiting $c_i$. We run the fee market over time period $T$, when blocks $B_1, \dots, B_T$ are produced. Transaction senders pay transaction fees to miners, so this cost to the senders is merely extra payoff for the miners. In other words, the transaction fees do not contribute to the social welfare calculation. The basefee does, however: it is burnt, irretrievable, and constitutes a cost to the system of miners and senders [3].
$$ \text{Social welfare}((b_t)_t, (w_i)_i) = \sum_{t \in T} \sum_{i \in B_t} g_i ( v_i - w_i \times c_i - b_t) $$

where $g_i$ is the gas used by sender $i$, $w_i$ is how long the user has waited and $b_t$ is the basefee at block $t$. The social welfare (SW) is determined by the realisation of $(b_t)_t$ and $(w_i)_i$, which are not exogenous to the system (while $(g_i)_i, (v_i)_i$ and $(c_i)_i$ are).
The sum above is only carried over included users. Users who are not included pay nothing and receive no value either [4]. All things equal, SW is higher whenever users with higher costs for waiting get in quickly, or whenever users with higher values are included.
Let's investigate by computing the social welfare in the previous simulation. user_txs_hurry_df
holds user data of all users who were included. Since all transactions in our simulation use the same amount of gas, we do not include this parameter in the social welfare.
user_txs_hurry_df["total_sw"] = user_txs_hurry_df.apply(
    lambda row: row.value - (row.block_height - row.wakeup_block) * row.cost_per_unit - row.basefee,
    axis=1
)
We store in a new column total_sw
the welfare achieved by each included transaction. We'll now sum them all up to obtain the social welfare.
sw_hurry = sum(user_txs_hurry_df["total_sw"])
sw_hurry
216733.6431947058
Hmmm, cool, I guess? That number alone is not very useful -- we'd better find something to compare it with. Why not try a different user strategy?
We've looked at a behaviour, hurry, where the length of a user escalator depends on the value and the cost of waiting for that user. Now, we look at the fixed strategy, which assumes the length of the escalator is fixed and the slope depends on the user value and waiting costs.
This strategy is suitable for users who know how long they are willing to wait for but don't necessarily care for being included so quickly, for instance, buying an NFT and waiting for delivery.
Say all users set up their escalators to last $\ell = 10$ blocks. After waiting for 10 blocks, the value for inclusion to user $i$ is $\overline{v}_i = v_i - 10c_i$. We can use this value as the maximum fee they are ever willing to pay. If the basefee increases significantly in the meantime, their bid may reach the limit earlier, but at least users are guaranteed to never overpay and realise a negative payoff.
Once again, we'll want their initial bid to be the basefee plus one increment of their escalator. To determine the slope of the escalator, we simply look for the value $s_i$ such that
$$ b + s_i + \ell \times s_i = \overline{v}_i \Leftrightarrow s_i = \frac{\overline{v}_i - b}{\ell + 1} $$

This is implemented in the UserFixedDuration
class below.
class UserFixedDuration(UserFloatingEsc):
    def decide_parameters(self, env):
        escalator_length = 10
        max_fee = self.value - escalator_length * self.cost_per_unit
        slope = (max_fee - env["basefee"]) / (escalator_length + 1)
        max_block = self.wakeup_block + escalator_length
        start_premium = slope
        tx_params = {
            "max_fee": max_fee, # in wei
            "start_premium": start_premium, # in wei
            "start_block": self.wakeup_block,
            "max_block": max_block,
            "basefee": env["basefee"],
        }
        return tx_params
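With made-up numbers (the value, cost_per_unit and basefee below are hypothetical), we can check that this slope makes the bid hit exactly max_fee on the escalator's last block:

```python
value, cost_per_unit, basefee = 20.0, 0.5, 5.0
escalator_length = 10

max_fee = value - escalator_length * cost_per_unit   # value left after 10 blocks of waiting
slope = (max_fee - basefee) / (escalator_length + 1)

# Bid after waiting t blocks: basefee + start premium (one increment) + t increments
bids = [basefee + (t + 1) * slope for t in range(escalator_length + 1)]

assert max_fee == 15.0
assert abs(bids[-1] - max_fee) < 1e-9  # the last bid reaches max_fee exactly
```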
We'll run the simulation again with these new users.
rng = np.random.default_rng(42)
demand_scenario = [users_per_round for i in range(blocks)]
shares_scenario = [{
    UserFixedDuration: 1,
} for i in range(blocks)]
(df_fixed, user_pool_fixed, chain_fixed) = simulate(demand_scenario, shares_scenario, rng=rng)
Checking out the results...
df_fixed
 | block | basefee | users | decided_txs | included_txs | blk_avg_gas_price | blk_avg_tip | pool_length |
---|---|---|---|---|---|---|---|---|
0 | 0 | 1.000000 | 2542 | 2542 | 952 | 1.968813 | 0.968813 | 1590 |
1 | 1 | 1.124900 | 2575 | 2575 | 952 | 2.170934 | 1.046034 | 2422 |
2 | 2 | 1.265400 | 2525 | 2525 | 952 | 2.474802 | 1.209402 | 3158 |
3 | 3 | 1.423448 | 2534 | 2534 | 952 | 2.843921 | 1.420473 | 3894 |
4 | 4 | 1.601237 | 2435 | 2435 | 952 | 3.252617 | 1.651380 | 4463 |
... | ... | ... | ... | ... | ... | ... | ... | ... |
45 | 45 | 11.223676 | 2492 | 2492 | 540 | 11.473954 | 0.250277 | 1978 |
46 | 46 | 11.411673 | 2473 | 2473 | 468 | 11.679688 | 0.268015 | 2005 |
47 | 47 | 11.387138 | 2532 | 2532 | 495 | 11.641212 | 0.254074 | 2042 |
48 | 48 | 11.443362 | 2470 | 2470 | 457 | 11.703143 | 0.259782 | 2013 |
49 | 49 | 11.385716 | 2480 | 2480 | 459 | 11.644335 | 0.258619 | 2023 |
50 rows × 8 columns
What stands out immediately is that many more users decide to join the pool, even in the last blocks, after the basefee has mostly priced everyone out. Previously, we always observed decided_txs == included_txs by blocks 45-49, with priced out users not deciding to join the pool in the first place.

But with the fixed duration strategy implemented, we never allow a user to realise a negative payoff, so there is no reason for a user not to join, and so we have users == decided_txs. Meanwhile, the transaction pool checks for transaction validity and removes all invalid transactions. We can define validity in two ways. Either we only call a transaction valid when its max_block parameter is higher than the current block, or we add the constraint that the basefee must be smaller than the gas price posted by the transaction [5]. A pool might accept transactions from users already submerged by the basefee, but it certainly cannot include them in a block.
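A minimal sketch of the stricter validity check, using a hypothetical DemoTx with a max_block field and a posted bid (the actual TxFloatingEsc interface in the library may differ):

```python
from dataclasses import dataclass

@dataclass
class DemoTx:
    max_block: int  # last block at which the escalator is valid
    bid: float      # gas price posted at the current block

def is_valid_strict(tx, env):
    # Expiry: the escalator's max block must not have passed
    not_expired = env["current_block"] <= tx.max_block
    # Inclusion: the posted gas price must cover the current basefee
    return not_expired and tx.bid >= env["basefee"]

env = {"current_block": 12, "basefee": 10.0}
assert is_valid_strict(DemoTx(max_block=15, bid=11.0), env)
assert not is_valid_strict(DemoTx(max_block=11, bid=11.0), env)   # expired
assert not is_valid_strict(DemoTx(max_block=15, bid=9.0), env)    # submerged by basefee
```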
df_fixed.plot("block", ["basefee", "blk_avg_tip"])
In the chart above, we note that, this time, the basefee settled at a level lower than in the previous simulation. This could be a clue that the user strategy we have defined, with fixed escalator lengths, achieves lower efficiency: this is because a high basefee indicates the presence of high-value users in the system (if there were none, the basefee would settle lower).
Let's repeat the steps we've done in the first simulation to understand our results.
# Obtain the pool of users (all users spawned by the simulation)
user_pool_fixed_df = user_pool_fixed.export().rename(columns={ "pub_key": "sender" })
# Export the trace of the chain, all transactions included in blocks
chain_fixed_df = chain_fixed.export()
# Join the two to associate transactions with their senders
user_txs_fixed_df = chain_fixed_df.join(user_pool_fixed_df.set_index("sender"), on="sender")
# We'll only look at the first 16 blocks
first_blocks = user_txs_fixed_df[user_txs_fixed_df.block_height <= 15].copy()
first_blocks["wakeup_block"] = first_blocks["wakeup_block"].astype("category")
Once again, we plot the value and costs of included users across the first 16 blocks.
g = sns.FacetGrid(data=first_blocks, col="block_height", col_wrap = 4)
g.map_dataframe(sns.scatterplot, x="value", y="cost_per_unit", hue="wakeup_block", palette="muted")
g.add_legend()
g.fig.set_figwidth(8)
g.fig.set_figheight(8)
There are some interesting regularities!
What of the social welfare now?
user_txs_fixed_df["total_sw"] = user_txs_fixed_df.apply(
    lambda row: row.value - (row.block_height - row.wakeup_block) * row.cost_per_unit - row.basefee,
    axis=1
)
sw_fixed = sum(user_txs_fixed_df["total_sw"])
sw_fixed
285514.69211695745
Recall that the social welfare with hurry strategies was given by:
sw_hurry
216733.6431947058
Seems that our alternative, fixed, has a higher social welfare than the hurry strategies we explored previously. What gives?
Although both social welfare measures are in the same "unit" (payoffs minus costs, so ethers), we are still not fairly comparing the two situations. In a sense, the "fixed length" escalator users are not at equilibrium: a high value user ought to bid higher to get in quickly, and not be discouraged by the possibility of not being included after the (arbitrary) escalator duration.
This is not to say that all users implementing the hurry strategy is an equilibrium either. Yet the basefee settles at a noticeably higher level, as if it were better able to match the existing demand to the supply. Since the social welfare as we defined it is lower when the basefee is higher, it is worse in the hurry scenario than in the fixed one.
The takeaway is that social welfare comparisons are not that informative when comparing two non-equilibrium profiles. As an alternative, we could adopt the following metric: look only at the profile of users who are included and compute the value they obtained -- their initial value minus waiting costs. We call this user efficiency.
User efficiency is more helpful to compare the efficiency of the mechanism (and strategies) relative to some optimum measure. But defining the optimum is another issue. For instance, here is an unreasonable optimum: everyone is included! Since this is not a feasible scenario, we restrict our optimum to meaningful cases, where the constraints of the chain apply.
With each new wave of users, we have a new distribution of values and costs. We instantiate here a "greedy" optimum definition. At each wave, we include in a block as many valid users as we can, with users who receive the highest current value from inclusion (value minus waiting costs) first. The dynamics are the same as before, except we replace our strategic miner, who optimises for their tips, with a benevolent dictator whose sole objective is to maximise the total current value of users in a block.
We subclass our TxPool
object to implement this benevolent miner behaviour.
class GreedyTxPool(TxPoolFloatingEsc):
    def select_transactions(self, env, **kwargs):
        user_pool = kwargs["user_pool"]
        basefee = env["basefee"]

        # Miner side
        max_tx_in_block = int(constants["MAX_GAS_EIP1559"] / constants["SIMPLE_TRANSACTION_GAS"])

        # Get all users with transactions in the transaction pool
        users_in_pool = [user_pool.get_user(tx.sender) for tx in self.txs.values()]
        # Keep only users who would transact now, given their current value
        valid_users = [user for user in users_in_pool if user.current_value(env) >= basefee]
        # Users are sorted with higher current value users included first
        sorted_valid_demand = sorted(
            valid_users,
            key=lambda user: -user.current_value(env)
        )
        selected_users = [user.pub_key for user in sorted_valid_demand[0:max_tx_in_block]]
        selected_txs = [tx for tx in self.txs.values() if tx.sender in selected_users]

        return selected_txs

    # We'll keep users around and let the pool select the ones it prefers
    # among users with high enough current value
    def remove_invalid_txs(self, env):
        pass
We'll also modify our users to have them transact always. The pool will decide who gets in or not.
class AlwaysOnUser(UserHurryEsc):
    def create_transaction(self, env):
        # The tx_params don't really matter, since
        # greedy transaction pools disregard them to decide who is included
        tx_params = self.decide_parameters(env)
        tx = TxFloatingEsc(
            sender=self.pub_key,
            tx_params=tx_params,
            rng=self.rng,
        )
        return tx
Let's rerun the simulation loop, using our new GreedyTxPool and AlwaysOnUser users.
rng = np.random.default_rng(42)
demand_scenario = [users_per_round for i in range(blocks)]
shares_scenario = [{
    AlwaysOnUser: 1,
} for i in range(blocks)]
(df_greedy, user_pool_greedy, chain_greedy) = simulate(
    demand_scenario, shares_scenario,
    TxPool=GreedyTxPool,
    rng=rng,
)
# Obtain the pool of users (all users spawned by the simulation)
user_pool_greedy_df = user_pool_greedy.export().rename(columns={ "pub_key": "sender" })
# Export the trace of the chain, all transactions included in blocks
chain_greedy_df = chain_greedy.export()
# Join the two to associate transactions with their senders
user_txs_greedy_df = chain_greedy_df.join(user_pool_greedy_df.set_index("sender"), on="sender")
We now obtain the total user efficiency, the sum of all [value minus waiting costs] of users. We'll also check the average level of basefee after it stabilises.
def get_user_efficiency(df):
    return sum(
        df.apply(
            lambda row: row.value - (row.block_height - row.wakeup_block) * row.cost_per_unit,
            axis=1
        )
    )

def get_average_basefee(df):
    return np.mean(df[df.block >= 30]["basefee"])
pd.DataFrame({
    "simulation": ["greedy", "hurry", "fixed"],
    "user_efficiency": [get_user_efficiency(df) for df in [user_txs_greedy_df, user_txs_hurry_df, user_txs_fixed_df]],
    "basefee": [get_average_basefee(df) for df in [df_greedy, df_hurry, df_fixed]],
})
 | simulation | user_efficiency | basefee |
---|---|---|---|
0 | greedy | 592807.735952 | 16.187640 |
1 | hurry | 527935.126412 | 15.721727 |
2 | fixed | 523601.811824 | 11.366102 |
Both user efficiencies of the hurry and fixed simulations are below our greedy optimum [6]. They are not far from each other, but the user efficiency of hurry is higher than that of fixed. This sits better with our intuition that a higher basefee entails that the "correct" users are matched, so the user value realised with everyone playing hurry should be greater than the value under fixed.

To expand on this, hurry puts users with higher waiting costs at the front of the line, at the expense of potentially high value users who have low waiting costs. Meanwhile, fixed gives precedence to high value users with low waiting costs, who will have a higher escalator slope. The downside is that high value users with high waiting costs do not join, since they expect a negative payoff. Losing this part of the population means the basefee can afford to be lower than it is under the hurry strategy.
In the best of all worlds, the same users who are included in the idealised greedy scenario are actually included on-chain. Is this possible? It could be if we find a user strategy which induces an equilibrium, a situation where no user would prefer to play a different strategy, and if in turn this equilibrium induces the optimal allocation.
Yet the optimal allocation is not always an equilibrium (think about the Prisoner's dilemma). Is this the case here? What is the best strategy for users in the floating escalator paradigm? And under this strategy, does the floating escalator achieve higher user efficiency than plain vanilla 1559 does?
We could check the efficiency of a "bad" transaction pool behaviour to get a "lower bound" benchmark. A very bad transaction pool behaviour may be to include low value users first, the opposite of the optimum pool, or to include no one at all, in which case the total value is zero. We don't think these are very informative. A reasonably bad transaction pool behaviour would sample valid transactions at random from its pool and include them in the next block.
class RandomTxPool(TxPoolFloatingEsc):
    def select_transactions(self, env, **kwargs):
        rng = kwargs["rng"]
        user_pool = kwargs["user_pool"]
        basefee = env["basefee"]

        # Miner side
        max_tx_in_block = int(constants["MAX_GAS_EIP1559"] / constants["SIMPLE_TRANSACTION_GAS"])

        # Get all users with transactions in the transaction pool
        users_in_pool = [user_pool.get_user(tx.sender) for tx in self.txs.values()]
        # Keep only users who would transact now, given their current value
        valid_users = [user.pub_key for user in users_in_pool if user.current_value(env) >= basefee]
        # Find their transactions in the pool
        valid_txs = [tx for tx in self.txs.values() if tx.sender in valid_users]
        # Shuffle them and pick at random to fill the block
        rng.shuffle(valid_txs)
        selected_txs = valid_txs[0:max_tx_in_block]

        return selected_txs

    # We'll keep users around and let the pool select the ones it prefers
    # among users with high enough current value
    def remove_invalid_txs(self, env):
        pass
rng = np.random.default_rng(42)
demand_scenario = [users_per_round for i in range(blocks)]
shares_scenario = [{
    AlwaysOnUser: 1,
} for i in range(blocks)]
(df_random, user_pool_random, chain_random) = simulate(
    demand_scenario, shares_scenario,
    TxPool=RandomTxPool,
    rng=rng,
)
Let's look at the basefee and average tip in the block.
df_random.plot("block", ["basefee", "blk_avg_tip"])
How much value do users achieve?
# Obtain the pool of users (all users spawned by the simulation)
user_pool_random_df = user_pool_random.export().rename(columns={ "pub_key": "sender" })
# Export the trace of the chain, all transactions included in blocks
chain_random_df = chain_random.export()
# Join the two to associate transactions with their senders
user_txs_random_df = chain_random_df.join(user_pool_random_df.set_index("sender"), on="sender")
get_user_efficiency(user_txs_random_df)
509149.6237617843
This is lower than anything we've seen so far, confirming that the random inclusion policy is not a very good one. Let's compare all four environments.
pd.DataFrame({
    "simulation": ["greedy", "hurry", "fixed", "random"],
    "user_efficiency": [get_user_efficiency(df) for df in [user_txs_greedy_df, user_txs_hurry_df, user_txs_fixed_df, user_txs_random_df]],
    "basefee": [get_average_basefee(df) for df in [df_greedy, df_hurry, df_fixed, df_random]],
})
 | simulation | user_efficiency | basefee |
---|---|---|---|
0 | greedy | 592807.735952 | 16.187640 |
1 | hurry | 527935.126412 | 15.721727 |
2 | fixed | 523601.811824 | 11.366102 |
3 | random | 509149.623762 | 16.136528 |