While I roughly agree with the thesis that different replacement policies offer only marginal block reward gains _in the current state_ of the ecosystem, I would be more conservative about extending that conclusion to the medium/long-term future.

> I suspect the "economically rational" choice would be to happily trade
> off that immediate loss against even a small chance of a simpler policy
> encouraging higher adoption of bitcoin, _or_ a small chance of more
> on-chain activity due to higher adoption of bitcoin protocols like
> lightning and thus a lower chance of an empty mempool in future.

This assumes that the economic interests of the different classes of actors in the Bitcoin ecosystem are not only well understood but also aligned. We have seen in the past mining actors behave in ways that delayed the adoption of protocol upgrades which were expected to encourage higher adoption of Bitcoin. Further, while miners likely have an incentive to see an increase in on-chain activity, there is also the possibility that lightning turns out to be so throughput-efficient that it drains the mempool backlog, to the point where block space demand is no longer high enough to pay back the cost of mining hardware and operational infrastructure, or at least no longer matches the expected return on mining investments.

Of course, it could be argued that since a utxo-sharing protocol like lightning compresses the number of payments per unit of block space, it lowers the fee burden, thus making Bitcoin as a payment system far more attractive to a wider population of users, ultimately increasing block space demand and satisfying the miners.

In the current state of our knowledge, this hypothesis sounds the most plausible. Still, I would say it's better to stay cautious until we better understand the interactions between the different layers of the Bitcoin ecosystem.

> Certainly those percentages can be expected to double every four years as
> the block reward halves (assuming we don't also reduce the min relay fee
> and block min tx fee), but I think for both miners and network stability,
> it'd be better to have the mempool backlog increase over time, which
> would both mean there's no/less need to worry about the special case of
> the mempool being empty, and give a better incentive for people to pay
> higher fees for quicker confirmations.

Intuitively, if we assume that liquidity isn't free on lightning [0], there should be a subjective equilibrium where it becomes cheaper to open new channels to shorten one's own graph traversal than to keep paying high routing fees.
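
To make that trade-off concrete, here is a rough back-of-the-envelope sketch in Python; every figure in it is an illustrative assumption, not a measurement:

    # Break-even between paying routing fees and opening a direct channel.
    def routing_cost(amount_sat, n_payments, ppm_fee, base_fee_msat):
        # cumulative routing fees for n_payments of amount_sat each
        per_payment = amount_sat * ppm_fee / 1_000_000 + base_fee_msat / 1_000
        return n_payments * per_payment

    def channel_open_cost(feerate_sat_vb, open_vbytes=200, close_vbytes=150):
        # on-chain cost of opening (and eventually closing) a channel
        return feerate_sat_vb * (open_vbytes + close_vbytes)

    # Once cumulative routing fees exceed the on-chain cost, opening a new
    # channel (i.e. bidding for block space) becomes the cheaper option.
    routed = routing_cost(100_000, 200, ppm_fee=1_000, base_fee_msat=1_000)
    opened = channel_open_cost(feerate_sat_vb=20)
    print("open a channel" if routed > opened else "keep routing")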

As the core of the network becomes busier, I think we should see more LN actors doing that kind of arbitrage, sustaining a mempool backlog in the long term.

> If you really want to do that
> optimally, I think you have to have a mempool that retains conflicting
> txs and runs a dynamic programming solution to pick the best set, rather
> than today's simple greedy algorithms both for building the block and
> populating the mempool?

As of today, I think the power efficiency of mining chips and access to affordable sources of energy are more significant factors in the profitability of mining operations than the optimality of block construction/replacement policy. IMO, that supports the argument that small deltas in block reward gains aren't that relevant.
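
To put rough numbers on it (re-using the cumulative example from aj's mail quoted below, plus purely hypothetical energy figures):

    # Cumulative example from aj's mail: a 1MvB backlog at 10 sat/vB
    # replaced by 10kvB at 100 sat/vB.
    SUBSIDY = 625_000_000                      # sats per block at 6.25 BTC
    fee_loss = 1_000_000 * 10 - 10_000 * 100   # 9,000,000 sats forgone
    print(100 * fee_loss / SUBSIDY)            # ~1.44% of the subsidy

    # Purely hypothetical energy figures, only to compare orders of
    # magnitude: a 10% spread in electricity cost between two operators
    # swings margins far more than the ~1.44% replacement delta above.
    energy_cost_cheap = 0.60 * SUBSIDY         # assumed: 60% of block revenue
    energy_cost_pricey = 0.70 * SUBSIDY        # assumed: 70% of block revenue
    print(100 * (energy_cost_pricey - energy_cost_cheap) / SUBSIDY)   # 10.0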

That said, chip efficiency and energy access might become a commodity, and block construction optimality a competitive advantage. That could incentivize the development of such optimized block-construction algorithms, potentially in a covert way, as we have seen with AsicBoost.

> Is there a plausible example where the difference isn't that marginal?

The paradigm might change in the future. If we see the deployment of channel factories/payment pools, we might have users with different liquidity needs competing to spend a shared utxo, and thus ready to overbid each other. Lacking a "conflict pool" logic might then make you lose income.
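
Hand-waving a bit, such a "conflict pool" could look like the following sketch; the naming and structure are purely illustrative, not Bitcoin Core code:

    from collections import defaultdict

    # Keep every valid spend of a contended outpoint instead of evicting
    # on first-seen/RBF grounds, and let the block builder pick the
    # best-paying candidate.
    class ConflictPool:
        def __init__(self):
            self.candidates = defaultdict(list)   # outpoint -> [tx, ...]

        def add(self, tx):
            # tx: dict with "spends" (outpoint), "fee" (sats), "vsize" (vB)
            self.candidates[tx["spends"]].append(tx)

        def best_candidates(self):
            # For block building, keep only the highest-fee spend of each
            # contended outpoint (a real selector would weigh feerate too).
            return [max(txs, key=lambda t: t["fee"])
                    for txs in self.candidates.values()]

    pool = ConflictPool()
    pool.add({"spends": "pool_utxo:0", "fee": 10_000, "vsize": 300})
    pool.add({"spends": "pool_utxo:0", "fee": 25_000, "vsize": 350})  # overbid
    print(pool.best_candidates())   # the 25_000 sat spend wins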

> Always accepting (package/descendent) fee rate increases removes the
> possibility of pinning entirely, I think

I think the pinnings we're affected by today are due to the ability of a malicious counterparty to halt the on-chain resolution of the channel. The presence of a pinning commitment transaction with a low chance of confirmation (abusing BIP125 rule 3) prevents the honest counterparty from fee-bumping her own version of the commitment, and thus from redeeming an HTLC before timelock expiration. As long as one commitment confirms, independently of who issued it, the pinning is over. I think moving to replace-by-feerate allows the honest counterparty to fee-bump her commitment, thus offering a compelling block space demand, or forces the malicious counterparty into a fee race.
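
With made-up numbers, the difference between the two acceptance rules for such a pin looks roughly like this (simplified to the absolute-fee check of rule #3, ignoring the other BIP125 rules and any descendants):

    pin = {"fee": 50_000, "vsize": 50_000}     # big, 1 sat/vB pinning tx
    honest = {"fee": 20_000, "vsize": 1_000}   # small, 20 sat/vB commitment

    def feerate(tx):
        return tx["fee"] / tx["vsize"]

    def bip125_rule3_accepts(new, old):
        # the replacement must pay at least the absolute fees it evicts
        return new["fee"] >= old["fee"]

    def replace_by_feerate_accepts(new, old):
        return feerate(new) > feerate(old)

    print(bip125_rule3_accepts(honest, pin))        # False: the pin holds
    print(replace_by_feerate_accepts(honest, pin))  # True: honest tx wins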


To gather my thinking on the subject: the replace-by-feerate policy could produce lower-fee blocks in today's environment of empty mempools and partially filled blocks. That said, the delta sounds marginal enough w.r.t. the other factors of a mining business that we shouldn't be too worried (or at least only mildly so) about the potential implications for centralization. If the risk is still perceived as intolerable, it could be argued that an intermediate solution would be to deploy a "dual" RBF policy (replace-by-fee for the top of the mempool, replace-by-feerate for the remaining part).
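
Very roughly, such a dual policy could look like the sketch below; would_be_in_next_block() stands in for some block-template check, and the whole thing is an illustration of the idea, not a concrete proposal:

    def dual_policy_accepts(new, old, would_be_in_next_block):
        new_rate = new["fee"] / new["vsize"]
        old_rate = old["fee"] / old["vsize"]
        if would_be_in_next_block(old):
            # top of the mempool: protect absolute fees (replace-by-fee)
            return new["fee"] > old["fee"] and new_rate > old_rate
        # rest of the mempool: a feerate improvement is enough
        return new_rate > old_rate

    # deep-mempool conflict: higher feerate wins despite lower absolute fee
    print(dual_policy_accepts({"fee": 2_000, "vsize": 100},
                              {"fee": 50_000, "vsize": 50_000},
                              would_be_in_next_block=lambda tx: False))  # True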

Still, I believe we might have to adopt more sophisticated replacement policies in the long term to level the playing field across the mining ecosystem if block construction/mempool acceptance strategies become a competitive factor. Failing to do so might provoke a latent centralization of mining due to heterogeneity in the block reward captured. This heterogeneity would also likely degrade the safety of L2 nodes, as those actors wouldn't know how to format their fee-bumps in the absence of _a_ mempool replacement standard.

> Note that if we did have this policy, you could abuse it to cheaply drain
> people's mempools: if there was a 300MB backlog, you could publish 2980
> 100kB txs paying a fee rate just below the next block fee, meaning you'd
> kick out the previous backlog and your transactions take up all but the
> top 2MB of the mempool; if you then replace them all with perhaps 2980
> 100B txs paying a slightly higher fee rate, the default mempool will be
> left with only 2.3MB, at an ultimate cost to you of only about 30% of a
> block in fees, and you could then fill the mempool back up by spamming
> 300MB of ultra low fee rate txs.

I believe we might already have bandwidth-bleeding issues with our current replacement policy. I think it would be good to have a cost estimate of them and to ensure any newer replacement policy stays within the same bounds.
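
As a naive data point, taking the numbers of the draining sequence described below at face value, the relay cost per node would be on the order of:

    big_wave = 2980 * 100_000      # ~298 MB of large replacement txs
    small_wave = 2980 * 100        # ~0.3 MB of second-round replacements
    refill = 300 * 1_000_000       # ~300 MB of ultra-low-fee spam to refill
    print((big_wave + small_wave + refill) / 1e6, "MB relayed per node")
    # ~598 MB of tx data per attack round, on top of the ~30% of a block
    # in fees the attacker actually pays.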

> I think spam prevention at the outbound relay level isn't enough to
> prevent that: an attacker could contact every public node and relay the
> txs directly, clearing out the mempool of most public nodes directly. So
> you'd want some sort of spam prevention on inbound txs too?

That we have to think about replacement spam prevention sounds reasonable to me. I would be worried about utxo-based replacement limitations, which could be abused in the context of multi-party protocols (introducing a new pinning vector). One solution could be to have a per-party transaction "tag" and to allocate a replacement slot accordingly, maybe preventing a malicious counterparty from abusing a "global" utxo slot during periods of low fees...
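
Purely hypothetically, such per-party slots could look like this (the "tag" and slot notions are made up for illustration, not an existing policy):

    # Each participant of the multi-party protocol gets her own replacement
    # budget, so one party exhausting a "global" slot can't pin the others.
    class SharedUtxoSlots:
        def __init__(self, parties, slots_per_party=1):
            self.remaining = {p: slots_per_party for p in parties}

        def try_replace(self, party_tag):
            if self.remaining.get(party_tag, 0) == 0:
                return False       # this party has used up her slot(s)
            self.remaining[party_tag] -= 1
            return True

    slots = SharedUtxoSlots(["alice", "bob"])
    print(slots.try_replace("bob"))    # True
    print(slots.try_replace("bob"))    # False: bob can't hog replacements
    print(slots.try_replace("alice"))  # True: alice's slot is untouched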

Antoine

[0] https://twitter.com/alexbosworth/status/1476946257939628035

On Thu, Feb 17, 2022 at 09:32, Anthony Towns via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
On Thu, Feb 10, 2022 at 07:12:16PM -0500, Matt Corallo via bitcoin-dev wrote:
> This is where *all* the complexity comes from. If our goal is to "ensure a
> bump increases a miner's overall revenue" (thus not wasting relay for
> everyone else), then we precisely *do* need
> > Special consideration for "what should be in the next
> > block" and/or the caching of block templates seems like an imposing
> > dependency
> Whether a transaction increases a miner's revenue depends precisely on
> whether the transaction (package) being replaced is in the next block - if
> it is, you care about the absolute fee of the package and its replacement.

On Thu, Feb 10, 2022 at 11:44:38PM +0000, darosior via bitcoin-dev wrote:
> It's not that simple. As a miner, if I have less than 1vMB of transactions in my mempool, I don't want a 10sats/vb transaction paying 100000sats replaced by a 100sats/vb transaction paying only 10000sats.

Is it really true that miners do/should care about that?

If you did this particular example, the miner would be losing 90k sats
in fees, which would be at most 1.44 *hundredths* of a percent of the
block reward with the subsidy at 6.25BTC per block, even if there were
no other transactions in the mempool. Even cumulatively, 10sats/vb over
1MB versus 100sats/vb over 10kB is only a 1.44% loss of block revenue.

I suspect the "economically rational" choice would be to happily trade
off that immediate loss against even a small chance of a simpler policy
encouraging higher adoption of bitcoin, _or_ a small chance of more
on-chain activity due to higher adoption of bitcoin protocols like
lightning and thus a lower chance of an empty mempool in future.

If the network has an "empty mempool" (say less than 2MvB-10MvB of
backlog even if you have access to every valid 1+ sat/vB tx on any node
connected to the network), then I don't think you'll generally have txs
with fee rates greater than ~20 sat/vB (ie 20x the minimum fee rate),
which means your maximum loss is about 3% of block revenue, at least
while the block subsidy remains at 6.25BTC/block.

Certainly those percentages can be expected to double every four years as
the block reward halves (assuming we don't also reduce the min relay fee
and block min tx fee), but I think for both miners and network stability,
it'd be better to have the mempool backlog increase over time, which
would both mean there's no/less need to worry about the special case of
the mempool being empty, and give a better incentive for people to pay
higher fees for quicker confirmations.

If we accept that logic (and assuming we had some additional policy
to prevent p2p relay spam due to replacement txs), we could make
the mempool accept policy for replacements just be (something like)
"[package] feerate is greater than max(descendent fee rate)", which
seems like it'd be pretty straightforward to deal with in general?



Thinking about it a little more; I think the decision as to whether
you want to have a "100kvB at 10sat/vb" tx or a conflicting "1kvB at
100sat/vb" tx in your mempool if you're going to take into account
unrelated, lower fee rate txs that are also in the mempool makes block
building "more" of an NP-hard problem and makes the greedy solution
we've currently got much more suboptimal -- if you really want to do that
optimally, I think you have to have a mempool that retains conflicting
txs and runs a dynamic programming solution to pick the best set, rather
than today's simple greedy algorithms both for building the block and
populating the mempool?

For example, if you had two such replacements come through the network,
a miner could want to flip from initially accepting the first replacement,
to unaccepting it:

Initial mempool: two big txs at 100k each, many small transactions at
15s/vB and 1s/vB

 [100kvB at 20s/vB] [850kvB at 15s/vB] [100kvB at 12s/vB] [1000kvB at 1s/vB]
   -> 0.148 BTC for 1MvB (100*20 + 850*15 + 50*1)

Replacement for the 20s/vB tx paying a higher fee rate but lower total
fee; that's worth including:

 [10kvB at 100s/vB] [850kvB at 15s/vB] [100kvB at 12s/vB] [1000kvB at 1s/vB]
   -> 0.1499 BTC for 1MvB (10*100 + 850*15 + 100*12 + 40*1)

Later, replacement for the 12s/vB tx comes in, also paying higher fee
rate but lower total fee. Worth including, but only if you revert the
original replacement:

 [100kvB at 20s/vB] [50kvB at 20s/vB] [850kvB at 15s/vB] [1000kvB at 1s/vB]
   -> 0.16 BTC for 1MvB (150*20 + 850*15)

 [10kvB at 100s/vB] [50kvB at 20s/vB] [850kvB at 15s/vB] [1000kvB at 1s/vB]
   -> 0.1484 BTC for 1MvB (10*100 + 50*20 + 850*15 + 90*1)

Algorithms/mempool policies you might have, and their results with
this example:

 * current RBF rules: reject both replacements because they don't
   increase the absolute fee, thus get the minimum block fees of
   0.148 BTC

 * reject RBF unless it increases the fee rate, and get 0.1484 BTC in
   fees

 * reject RBF if it's lower fee rate or immediately decreases the block
   reward: so, accept the first replacement, but reject the second,
   getting 0.1499 BTC

 * only discard a conflicting tx when it pays both a lower fee rate and
   lower absolute fees, and choose amongst conflicting txs optimally
   via some complicated tx allocation algorithm when generating a block,
   and get 0.16 BTC

In this example, those techniques give 92.5%, 92.75%, 93.69% and 100% of
total possible fees you could collect; and 99.813%, 99.819%, 99.84% and
100% of the total possible block reward at 6.25BTC/block.

Is there a plausible example where the difference isn't that marginal?
Seems like the simplest solution of just checking the (package/descendent)
fee rate increases works well enough here at least.

If 90kvB of unrelated txs at 14s/vB were then added to the mempool, then
replacing both txs becomes (just barely) optimal, meaning the smartest
possible algorithm and the dumbest one of just considering the fee rate
produce the same result, while the others are worse:

 [10kvB at 100s/vB] [50kvB at 20s/vB] [850kvB at 15s/vB] [90kvB at 14s/vB]
   -> 0.1601 BTC for 1MvB
   (accepting both)

 [100kvB at 20s/vB] [50kvB at 20s/vB] [850kvB at 15s/vB] [90kvB at 14s/vB]
   -> 0.1575 BTC for 1MvB
   (accepting only the second replacement)

 [10kvB at 100s/vB] [850kvB at 15s/vB] [90kvB at 14s/vB] [100kvB at 12s/vB]
   -> 0.1551 BTC for 1MvB
   (first replacement only, optimal tx selection: 10*100, 850*15, 40*14, 100*12)

 [100kvB at 20s/vB] [850kvB at 15s/vB] [90kvB at 14s/vB] [100kvB at 12s/vB]
   -> 0.1545 BTC for 1MvB
   (accepting neither replacement)

 [10kvB at 100s/vB] [850kvB at 15s/vB] [90kvB at 14s/vB] [100kvB at 12s/vB]
   -> 0.1506 BTC for 1MvB
   (first replacement only, greedy tx selection: 10*100, 850*15, 90*14, 50*1)

Always accepting (package/descendent) fee rate increases removes the
possibility of pinning entirely, I think -- you still have the problem
where someone else might get a conflicting transaction confirmed first,
but they can't get a conflicting tx stuck in the mempool without
confirming if you're willing to pay enough to get it confirmed.



Note that if we did have this policy, you could abuse it to cheaply drain
people's mempools: if there was a 300MB backlog, you could publish 2980
100kB txs paying a fee rate just below the next block fee, meaning you'd
kick out the previous backlog and your transactions take up all but the
top 2MB of the mempool; if you then replace them all with perhaps 2980
100B txs paying a slightly higher fee rate, the default mempool will be
left with only 2.3MB, at an ultimate cost to you of only about 30% of a
block in fees, and you could then fill the mempool back up by spamming
300MB of ultra low fee rate txs.

I think spam prevention at the outbound relay level isn't enough to
prevent that: an attacker could contact every public node and relay the
txs directly, clearing out the mempool of most public nodes directly. So
you'd want some sort of spam prevention on inbound txs too?

So I think you'd need to carefully think about relay spam before making
this sort of change.  Also, if we had tx rebroadcast implemented then
having just a few nodes with large mempools might allow the network to
recover from this situation automatically.

Cheers,
aj

_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev