public inbox for bitcoindev@googlegroups.com
 help / color / mirror / Atom feed
* [bitcoin-dev] A fee-bumping model
@ 2021-11-29 14:27 darosior
  2021-11-30  1:43 ` Antoine Riard
  2021-12-09 13:50 ` Peter Todd
  0 siblings, 2 replies; 9+ messages in thread
From: darosior @ 2021-11-29 14:27 UTC (permalink / raw)
  To: Bitcoin Protocol Discussion

Hi everyone,

Fee-bumping is paramount to the security of many protocols building on Bitcoin, as they require the
confirmation of a transaction (which might be presigned) before the expiration of a timelock at any
point after the establishment of the contract.

The part of Revault using presigned transactions (the delegation from a large to a smaller multisig)
is no exception. We have been working on how to approach this for a while now and i'd like to share
what we have in order to open a discussion on this problem, so central to what seems to be The Right
Way [0] to build on Bitcoin but which has yet to be discussed in detail (at least publicly).

I'll discuss what we came up with for Revault (at least for what will be its first iteration) but my
intent with posting to the mailing list is more to frame the questions to this problem we are all
going to face rather than present the results of our study tailored to the Revault usecase.
The discussion is still pretty Revault-centric (as it's the case study) but hopefully this can help
future protocol designers and/or start a discussion around what everyone's doing for existing ones.


## 1. Reminder about Revault

The part of Revault we are interested in for this study is the delegation process, and more
specifically the application of spending policies by network monitors (watchtowers).
Coins are received on a large multisig. Participants of this large multisig create 2 [1]
transactions. The Unvault, spending a deposit UTxO, creates an output paying either to the small
multisig after a timelock or to the large multisig immediately. The Cancel, spending the Unvault
output through the non-timelocked path, creates a new deposit UTxO.
Participants regularly exchange the Cancel transaction signatures for each deposit, sharing the
signatures with the watchtowers they operate. They then optionally [2] sign the Unvault transaction
and share the signatures with the small multisig participants who can in turn use them to proceed
with a spending. Watchtowers can enforce spending policies (say, can't Unvault outside of business
hours) by having the Cancel transaction be confirmed before the expiration of the timelock.


## 2. Problem statement

For any delegated vault, ensure the confirmation of a Cancel transaction in a configured number of
blocks at any point. In so doing, minimize the overpayments and the UTxO set footprint. Overpayments
increase the burden on the watchtower operator by increasing the required frequency of refills of the
fee-bumping wallet, which is already the worst user experience. You are likely to manage a number of
UTxOs with your number of vaults, which comes at a cost for you as well as everyone running a full
node.

Note that this assumes miners are economically rational, are incentivized by *public* fees and that
you have a way to propagate your fee-bumped transaction to them. We also don't consider the block
space bounds.

In the previous paragraph and the following text, "vault" can generally be replaced with "offchain
contract".


## 3. With presigned transactions

As you all know, the first difficulty is being able to unilaterally enforce your contract
onchain. That is, any participant must be able to unilaterally bump the fees of a transaction even
if it was co-signed by other participants.

For Revault we can afford to introduce malleability in the Cancel transaction since there is no
second-stage transaction depending on its txid. Therefore it is pre-signed with ANYONECANPAY. We
can't use ANYONECANPAY|SINGLE since it would open a pinning vector [3]. Note how we can't leverage
the carve out rule, and neither can any other more-than-two-parties contract.
This has a significant implication for the rest, as we are entirely burning fee-bumping UTxOs.

This opens up a pinning vector, or at least a significant nuisance: any other party can largely
increase the absolute fee without increasing the feerate, leveraging the RBF rules to prevent you
from replacing it without paying an insane fee. And you might not see it in your own mempool and
could only suppose it's happening by receiving non-full blocks or with transactions paying a lower
feerate.
Unfortunately i know of no other primitive that can be used by multi-party (i mean, >2) presigned
transactions protocols for fee-bumping that aren't (more) vulnerable to pinning.


## 4. We are still betting on future feerate

The problem is still missing one more constraint. "Ensuring confirmation at any time" involves ensuring
confirmation at *any* feerate, which you *cannot* do. So what's the limit? In theory you should be ready
to burn as much in fees as the value of the funds you want to get out of the contract. So... For us
it'd mean keeping for each vault an equivalent amount of funds sitting there on the watchtower's hot
wallet. For Lightning, it'd mean keeping an equivalent amount of funds as the sum of all your
channels balances sitting there unallocated "just in case". This is not reasonable.

So you need to keep a maximum feerate, above which you won't be able to ensure the enforcement of
all your contracts onchain at the same time. We call that the "reserve feerate" and you can have
different strategies for choosing it, for instance:
- The 85th percentile over the last year of transactions feerates
- The maximum historical feerate
- The maximum historical feerate adjusted in dollars (makes more sense but introduces a (set of?)
  trusted oracle(s) in a security-critical component)
- Picking a random high feerate (why not? It's an arbitrary assumption anyways)

Therefore, even if we don't have to bet on the broadcast-time feerate market at signing time anymore
(since we can unilaterally bump), we still need some kind of prediction in preparation of making
funds available to bump the fees at broadcast time.
Apart from judging that 500sat/vbyte is probably more reasonable than 10sat/vbyte, this unfortunately
sounds pretty much crystal-ball-driven.

We currently use the maximum of the 95th percentiles over 90-days windows over historical block chain
feerates. [4]
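
For illustration, the estimation described above could be sketched as below. This is a hedged
simplification, not the linked statemachine.py code: the data layout (a list of (timestamp,
feerate) points) and the non-overlapping windows are assumptions.

```python
import math

WINDOW = 90 * 24 * 3600  # 90 days, in seconds

def percentile(values, p):
    """Nearest-rank p-th percentile of a non-empty list."""
    ordered = sorted(values)
    return ordered[max(0, math.ceil(len(ordered) * p / 100) - 1)]

def reserve_feerate(feerates):
    """Max of the 95th percentiles over consecutive 90-day windows.

    feerates: list of (unix_timestamp, feerate_sat_per_vbyte) points.
    """
    points = sorted(feerates)
    start, end = points[0][0], points[-1][0]
    maximum = 0
    t = start
    while t <= end:
        window = [f for (ts, f) in points if t <= ts < t + WINDOW]
        if window:
            maximum = max(maximum, percentile(window, 95))
        t += WINDOW
    return maximum
```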


## 5. How much funds does my watchtower need?

That's what we call the "reserve". Depending on your reserve feerate strategy it might vary over
time. This is easier to reason about with a per-contract reserve. For Revault it's pretty
straightforward since the Cancel transaction size is static: `reserve_feerate * cancel_size`. For
other protocols with dynamic transaction sizes (or even packages of transactions) it's less so. For
your Lightning channel you would probably take the maximum size of your commitment transaction
according to your HTLC exposure settings + the size of as many `htlc_success` transactions?

Then you either have your software or your user guesstimate how many offchain contracts the
watchtower will have to watch, multiply that by the per-contract reserve and refill this amount (plus
some slack in practice). Once again, a UX tradeoff (not even mentioning the guesstimation UX):
overestimating leads to too many unallocated funds sitting on a hot wallet, underestimating means
(at best) inability to participate in new contracts or being "at risk" (not being able to enforce
all your contracts onchain at your reserve feerate) before a new refill.
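
The funding arithmetic above can be sketched as follows. The Cancel vsize, reserve feerate and
slack factor are illustrative placeholders, not values taken from the Revault implementation.

```python
# Hypothetical constants, for illustration only.
CANCEL_VSIZE = 200       # vbytes: a one-in one-out Cancel transaction
RESERVE_FEERATE = 500    # sat/vbyte: the chosen reserve feerate

def per_vault_reserve(cancel_vsize=CANCEL_VSIZE, feerate=RESERVE_FEERATE):
    """Funds (in sats) needed to bump one Cancel up to the reserve feerate."""
    return cancel_vsize * feerate

def refill_amount(expected_vaults, slack=1.1):
    """Guesstimated refill: expected contracts times per-vault reserve, plus slack."""
    return int(expected_vaults * per_vault_reserve() * slack)
```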

For vaults you likely have large-value UTxOs and small transactions (the Cancel is one-in one-out in
Revault). For some other applications with large transactions and lower-value UTxOs on average it's
likely that only part of the offchain contracts might be enforceable at a reasonable feerate. Is it
reasonable?


## 6. UTxO pool layout

Now that you somehow managed to settle on a refill amount, how are you going to use these funds?
Also, you'll need to manage your pool across time (consolidating small coins, and probably fanning
out large ones).

You could keep a single large UTxO and peel it as you need to sponsor transactions. But this means
that you need to create a coin of a specific value according to your need at the current feerate
estimation, hope to have it confirmed in a few blocks (at least for now! [5]), and hope that the
value won't be obsolete by the time it confirms. Also, you'd have to do that for any number of
Cancel, chaining feebump coin creation transactions off the change of the previous ones or replacing
them with more outputs. Both seem to become really unmanageable (and expensive) in many edge-cases,
shortening the time you have to confirm the actual Cancel transaction and creating uncertainty about
the reserve (how much is my just-in-time fanout going to cost me in fees that i need to refill in
advance on my watchtower wallet?).
This is less of a concern for protocols using CPFP to sponsor transactions, but they rely on a
policy rule specific to 2-parties contracts.

Therefore for Revault we fan-out the coins per-vault in advance. We do so at refill time so the
refiller can give an excess to pay for the fees of the fanout transaction (which is reasonable since
it will occur just after the refilling transaction confirms). When the watchtower is asked to watch
for a new delegated vault it will allocate coins from the pool of fanned-out UTxOs to it (failing
that, it would refuse the delegation).
What is a good distribution of UTxOs amounts per vault? We want to minimize the number of coins,
still have coins small enough to not overpay (remember, we can't have change) and be able to bump a
Cancel up to the reserve feerate using these coins. The latter two constraints are directly in
contradiction as the minimal value of a coin usable at the reserve feerate (paying for its own input
fee + bumping the feerate by, say, 5sat/vb) is already pretty high. Therefore we decided to go with
two distributions per vault. The "reserve distribution" alone ensures that we can bump up to the
reserve feerate and is usable for high feerates. The "bonus distribution" is not, but contains
smaller coins useful to prevent overpayments during low and medium fee periods (which is most of the
time).
Both distributions are based on a simple geometric sequence [6]. Each value is half the previous one.
This exponentially decreases the values, limiting the number of coins, while still allowing pretty
small coins to exist. In addition, each coin's value exceeds the sum of all the smaller coins by
exactly the value of the smallest coin, bounding the maximum overpayment to the smallest coin's
value [7].
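
A minimal sketch of such a halving distribution (the dust-like floor is an arbitrary assumption,
and real allocation also has to account for each coin paying for its own input):

```python
DUST_FLOOR = 5_000  # sats: hypothetical smallest useful fee-bumping coin

def fanout_distribution(target):
    """Coin values, each half the previous, to cover up to `target` sats."""
    coins = []
    value = target // 2
    while value >= DUST_FLOOR:
        coins.append(value)
        value //= 2
    return coins
```

Note how each coin exceeds the sum of all smaller ones by exactly the smallest coin's value, which
is what bounds the overpayment.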

For the management of the UTxO pool across time we merged the consolidation with the fanout. When
fanning out a refilled UTxO, we scan the pool for coins that need to be consolidated according to a
heuristic. An instance of a heuristic is "the coin isn't allocated and would not have been able to
increase the fee at the median feerate over the past 90 days of blocks".
We had this assumption that feerates would tend to go up with time and therefore discarded having to
split some UTxOs from the pool. We however overlooked that a large increase in the exchange price of
BTC as we've seen during the past year could invalidate this assumption and that should arguably be
reconsidered.


## 7. Bumping and re-bumping

First of all, when to fee-bump? At fixed time intervals? At each block connection? It sounds like,
given a large enough timelock, you could get greedy by "trying your luck" at a lower feerate and
only re-bumping every N blocks. You would then start aggressively bumping at every block after M
blocks have passed. But that's actually a bet (in disguise?) that the next block feerate in M blocks
will be lower than the current one. In the absence of any predictive model it is more reasonable to
just start being aggressive immediately.
You probably want to base your estimates on `estimatesmartfee` and as a consequence you would re-bump
(if needed) after each block connection, when your estimates get updated and you notice your
transaction was not included in the block.

In the event that you notice a significant portion of the block is filled with transactions paying
less than your own, you might want to start panicking and bump your transaction fees by a certain
percentage with no consideration for your fee estimator. You might skew miners incentives in doing
so: if you increase the fees by a factor of N, any miner with a fraction larger than 1/N of the
network hashrate now has an incentive to censor your transaction at first to get you to panic [8].
Also note this can happen if you want to pay the absolute fees for the 'pinning' attack mentioned in
section #3, and that might actually incentivize miners to perform it themselves...
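
The 1/N intuition can be checked with a back-of-the-envelope expectation: by including the
transaction in its own next block a miner takes the current fee for certain, while censoring and
hoping to mine the panic-bumped version later yields `hashrate_share * N` times that fee in
expectation. A toy sketch (values normalized, purely illustrative):

```python
def censoring_profitable(hashrate_share, bump_factor, fee=1.0):
    """True when skipping `fee` now in the hope of mining the panic-bumped
    transaction later has the higher expected value: p * N * fee > fee."""
    take_now = fee
    censor_then_mine = hashrate_share * bump_factor * fee
    return censor_then_mine > take_now
```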

The gist is that the most effective way to bump and rebump (RBF the Cancel tx) seems to just be to
consider the `estimatesmartfee 2 CONSERVATIVE` feerate at every block your tx isn't included in, and
to RBF it if the feerate is higher.
In addition, we fallback to a block chain based estimation when estimates aren't available (eg if
the user stopped their WT for, say, an hour and we come back up): we use the 85th percentile over the
feerates in the last 6 blocks. Sure, miners can try to have an influence on that by stuffing their
blocks with large fee self-paying transactions, but they would need to:
1. Be sure to catch a significant portion of the 6 blocks (at least 2, actually)
2. Give up on 25% of the highest fee-paying transactions (assuming they got the 6 blocks, it's
   proportionally larger and uncertain as they get fewer of them)
3. Hope that our estimator will fail and we need to fall back to the chain-based estimation
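
Putting the policy together, the per-block decision could be sketched as below, where
`estimate_smart_fee` and `last_blocks_feerates` are hypothetical stand-ins for bitcoind queries,
not real RPC bindings:

```python
import math

def percentile_85(values):
    """Nearest-rank 85th percentile of a non-empty list of feerates."""
    ordered = sorted(values)
    return ordered[max(0, math.ceil(len(ordered) * 85 / 100) - 1)]

def target_feerate(estimate_smart_fee, last_blocks_feerates):
    """Smart-fee estimate if available, chain-based fallback otherwise."""
    est = estimate_smart_fee()  # e.g. an `estimatesmartfee 2 CONSERVATIVE` call
    if est is not None:
        return est
    return percentile_85(last_blocks_feerates())

def maybe_rebump(current_feerate, estimate_smart_fee, last_blocks_feerates):
    """Feerate to RBF the Cancel at on this block connection, or None."""
    target = target_feerate(estimate_smart_fee, last_blocks_feerates)
    return target if target > current_feerate else None
```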


## 8. Our study

We essentially replayed the historical data with different deployment configurations (number of
participants and timelock) and probability of an event occurring (event being say an Unvault, an
invalid Unvault, a new delegation, ..). We then observed different metrics such as the time at risk
(when we can't enforce all our contracts at the reserve feerate at the same time), or the
operational cost.
We got the historical fee estimates data from Statoshi [9], Txstats [10] and the historical chain
data from Riccardo Casatta's `blocks_iterator` [11]. Thanks!

The (research-quality..) code can be found at https://github.com/revault/research under the section
"Fee bumping". Again it's very Revault specific, but at least the data can probably be reused for
studying other protocols.


## 9. Insurances

Of course, given it's all hacks and workarounds and there is no good answer to "what is a reasonable
feerate up to which we need to make contracts enforceable onchain?", there is definitely room for an
insurance market. But this enters the realm of opinions. Although i do have some (having discussed
this topic for the past years with different people), i would like to keep this post focused on the
technical aspects of this problem.



[0] As far as i can tell, having offchain contracts be enforceable onchain by confirming a
transaction before the expiration of a timelock is a widely agreed-upon approach. And i don't think
we can opt for any other fundamentally different one, as you want to know you can claim back your
coins from a contract after a deadline before taking part in it.

[1] The Real Revault (tm) involves more transactions, but for the sake of conciseness i only
detailed a minimum instance of the problem.

[2] Only presigning part of the Unvault transactions allows delegating only part of the coins,
which can be abstracted as "delegate x% of your stash" in the user interface.

[3] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017835.html

[4] https://github.com/revault/research/blob/1df953813708287c32a15e771ba74957ec44f354/feebumping/model/statemachine.py#L323-L329

[5] https://github.com/bitcoin/bitcoin/pull/23121

[6] https://github.com/revault/research/blob/1df953813708287c32a15e771ba74957ec44f354/feebumping/model/statemachine.py#L494-L507

[7] Of course this assumes a combinatorial coin selection, but i believe it's ok given we limit the
number of coins beforehand.

[8] Although there is the argument to outbid a censorship, anyone censoring you isn't necessarily a
miner.

[9] https://www.statoshi.info/

[10] https://txstats.com/

[11] https://github.com/RCasatta/blocks_iterator


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [bitcoin-dev] A fee-bumping model
  2021-11-29 14:27 [bitcoin-dev] A fee-bumping model darosior
@ 2021-11-30  1:43 ` Antoine Riard
  2021-11-30 15:19   ` darosior
  2021-12-09 13:50 ` Peter Todd
  1 sibling, 1 reply; 9+ messages in thread
From: Antoine Riard @ 2021-11-30  1:43 UTC (permalink / raw)
  To: darosior, Bitcoin Protocol Discussion

Hi Darosior,

Nice work, a few thoughts building further on your model for Lightning.

> For any delegated vault, ensure the confirmation of a Cancel transaction in a configured number
> of blocks at any point. In so doing, minimize the overpayments and the UTxO set footprint.
> Overpayments increase the burden on the watchtower operator by increasing the required frequency
> of refills of the fee-bumping wallet, which is already the worst user experience. You are likely
> to manage a number of UTxOs with your number of vaults, which comes at a cost for you as well as
> everyone running a full node.

For any opened channel, ensure the confirmation of a Commitment transaction
and the children HTLC-Success/HTLC-Timeout transactions. Note, in the
Lightning security game you have to consider (at least) 4 types of players'
moves and incentives: your node, your channel counterparties, the miners,
and the crowd of bitcoin users. The number of the last type of players is
unknown to your node, however it should not be forgotten that you're in
competition for block space, therefore their bids for block space should be
anticipated and reacted to accordingly. With that remark in mind,
implications for your LN fee-bumping strategy are raised below.

For a LN service provider, on-chain overpayments are bearing on your
operational costs, thus downgrading your economic competitiveness. For the
average LN user, overpayments might price you out of a non-custodial LN
deployment, as you don't have the minimal security budget to be on your own.

> This opens up a pinning vector, or at least a significant nuisance: any other party can largely
> increase the absolute fee without increasing the feerate, leveraging the RBF rules to prevent you
> from replacing it without paying an insane fee. And you might not see it in your own mempool and
> could only suppose it's happening by receiving non-full blocks or with transactions paying a
> lower feerate.

Same issue with Lightning, we can be pinned today on the basis of
replace-by-fee rule 3. We can also be blinded by network mempool
partitions: a pinning counterparty can segregate the full-nodes into as
many subsets by broadcasting a different revoked Commitment transaction to
each. For Revault, I think you can also do unlimited partitions by mutating
the ANYONECANPAY-input of the Cancel.

That said, if you have a distributed towers deployment, spread across the
p2p network topology, and they can't be clustered together through
cross-layers or intra-layer heuristics, you should be able to reliably
observe such partitions. I think such distributed monitors are deployed by
a few L1 merchants accepting 0-conf to detect naive double-spends.

> Unfortunately i know of no other primitive that can be used by multi-party (i mean, >2)
> presigned transactions protocols for fee-bumping that aren't (more) vulnerable to pinning.

Have we already discussed a fee-bumping "shared cache", a CPFP variation?
Strawman idea: Alice and Bob commit collateral inputs to a separate UTXO
from the main "offchain contract" one. This UTXO is locked by a multi-sig.
For any Commitment transaction pre-signed, also counter-sign a CPFP with
top mempool feerate included, spending a Commitment anchor output and the
shared-cache UTXO. If the fees spike,  you can re-sign a high-feerate CPFP,
assuming interactivity. As the CPFP is counter-signed by everyone, the
outputs can be CSV-1 encumbered to prevent pinnings. If the share-cache is
feeded at parity, there shouldn't be an incentive to waste or maliciously
inflate the feerate. I think this solution can be easily generalized to
more than 2 counterparties by using a multi-signature scheme. Big issue, if
the feerate is short due to fee spikes and you need to re-sign a
higher-feerate CPFP, you're trusting your counterparty to interact, though
arguably not worse than the current update fee mechanism.

> For Lightning, it'd mean keeping an equivalent amount of funds as the sum of all your channels
> balances sitting there unallocated "just in case". This is not reasonable.

Agree, game-theory wise, you would like to keep a full fee-bumping reserve,
ready to burn as much in fees as the contested HTLC value, as it's the
maximum gain of your counterparty. Though a perfect equilibrium is hard to
achieve because your malicious counterparty might have an edge, pushing you
to broadcast your Commitment first by withholding HTLC resolution.

Fractional fee-bumping reserves are much more realistic to expect in the LN
network. Lower fee-bumping reserve, higher liquidity deployed, in theory
higher routing fees. By observing historical feerates, average offchain
balances at risk and routing fees expected gains, you should be able to
discover an equilibrium where higher levels of reserve aren't worth the
opportunity cost. I guess this equilibrium could be your LN fee-bumping
reserve max feerate.

Note, I think the LN approach is a bit different from what suits a custody
protocol like Revault, as you compute a direct return on the frozen
fee-bumping liquidity. With Revault, if you have numerous bitcoins
protected, it might be more interesting to adopt a "buy the mempool,
stupid" strategy than risking fund safety for a few percentage points of
interest returns.

> This is easier to reason about with a per-contract reserve.

For Lightning, this per-channel approach is safer too, as one Commitment
transaction pinned or jammed could affect the confirmation odds of your
remaining LN Commitment transactions.

> For your Lightning channel you would probably take the maximum size of your commitment
> transaction according to your HTLC exposure settings + the size of as many `htlc_success`
> transactions?

Yes, I guess it's your holder's `max_accepted_htlcs` * `HTLC-Success
weight` + counterparty's `max_accepted_htlcs` * `HTLC-Timeout weight`.
Better to adopt this worst-case as the base transaction weight to fee-bump,
as currently we can't dynamically update channel policies.

> For some other applications with large transactions and lower-value UTxOs on average it's
> likely that only part of the offchain contracts might be enforceable at a reasonable feerate.
> Is it reasonable?

This is where the "anticipate the crowd of bitcoin users move" point can be
laid out. As the crowd of bitcoin users' fee-bumping reserves are
ultimately unknown to your node, you should be ready to be a bit more
conservative than the vanilla fee-bumping strategies shipped by default.
In case of massive mempool congestion, your additional conservatism might
get your time-sensitive transactions confirmed by gaming the crowd of
bitcoin users. First problem: if all offchain bitcoin software adopts that
strategy we might inflate the worst-case feerate to the benefit of the
miners, without holistically improving block throughput. Second problem:
your class of offchain bitcoin software might have a ridiculous
fee-bumping reserve compared to other classes of offchain bitcoin software
(Revault > Lightning) and just be priced out by design in case of mempool
congestion. Third problem: as the number of offchain bitcoin applications
should go up with time, your fee-bumping reserve levels based on
historical data might always be late by one "bank-run" scenario.

For Lightning, if you're short in fee-bumping reserves you might still do
preemptive channel closures, either cooperatively or unilaterally and get
back the off-chain liquidity to protect the more economically interesting
channels. Though again, that kind of automatic behavior might be compelling
at the individual node-level, but make the mempool congestion worse
holistically.

> First of all, when to fee-bump? At fixed time intervals? At each block connection?

In case of massive mempool congestion, you might try to front-run the crowd
of bitcoin users relying on block connections for fee-bumping, and thus
start your fee-bumping as soon as you observe feerate groups fluctuations
in your local mempool(s).

Also you might run your fee-bumping ticks on a local clock instead of
block connections in case of time-dilation or deeper eclipse attacks on
your local node. Your view of the chain might be compromised but not your
ability to broadcast transactions thanks to emergency channels (in the
non-LN sense... though in fact, what about transactions wrapped in
onions?) of communication.

> You might skew miners incentives in doing so: if you increase the fees by a factor of N, any
> miner with a fraction larger than 1/N of the network hashrate now has an incentive to censor
> your transaction at first to get you to panic.

Yes I think miner-harvesting attacks should be weighed carefully in the
design of offchain contracts fee-bumping strategies, at least in the future
when the mining reward declines further. I wonder if a more refined formula
should encompass the miner's loss from mining emptier blocks and ensure this
loss stays more substantial than the fee increase. So something like computing "for
X censored blocks, the Y average loss should be superior to the Z
fee-bumping increase".
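
That condition could be rendered as a trivial inequality check. The X/Y/Z framing is from the
paragraph above; the function below is only an illustrative sketch, not a proposed formula:

```python
def censorship_unprofitable(x_blocks, y_avg_loss, z_fee_increase):
    """Over X censored blocks, the cumulative average loss (Y per block)
    should stay larger than the fee-bumping increase Z the miner extracts."""
    return x_blocks * y_avg_loss > z_fee_increase
```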

> Of course, given it's all hacks and workarounds and there is no good answer to "what is a
> reasonable feerate up to which we need to make contracts enforceable onchain?", there is
> definitely room for an insurance market.

Yes, the question of how to enforce this block insurance market stays open.
Reputation, which might be best avoided due to the latent centralization
effect, might be hard to stack and audit reliably for an emergency
mechanism running, hopefully, once in a halvening period. Maybe some
cryptographic or economically based mechanism relying on slashing or swaps
could be found...

Antoine

Le lun. 29 nov. 2021 à 09:34, darosior via bitcoin-dev <
bitcoin-dev@lists•linuxfoundation.org> a écrit :

> Hi everyone,
>
> Fee-bumping is paramount to the security of many protocols building on
> Bitcoin, as they require the
> confirmation of a transaction (which might be presigned) before the
> expiration of a timelock at any
> point after the establishment of the contract.
>
> The part of Revault using presigned transactions (the delegation from a
> large to a smaller multisig)
> is no exception. We have been working on how to approach this for a while
> now and i'd like to share
> what we have in order to open a discussion on this problem so central to
> what seem to be The Right
> Way [0] to build on Bitcoin but which has yet to be discussed in details
> (at least publicly).
>
> I'll discuss what we came up with for Revault (at least for what will be
> its first iteration) but my
> intent with posting to the mailing list is more to frame the questions to
> this problem we are all
> going to face rather than present the results of our study tailored to the
> Revault usecase.
> The discussion is still pretty Revault-centric (as it's the case study)
> but hopefully this can help
> future protocol designers and/or start a discussion around what everyone's
> doing for existing ones.
>
>
> ## 1. Reminder about Revault
>
> The part of Revault we are interested in for this study is the delegation
> process, and more
> specifically the application of spending policies by network monitors
> (watchtowers).
> Coins are received on a large multisig. Participants of this large
> multisig create 2 [1]
> transactions. The Unvault, spending a deposit UTxO, creates an output
> paying either to the small
> multisig after a timelock or to the large multisig immediately. The
> Cancel, spending the Unvault
> output through the non-timelocked path, creates a new deposit UTxO.
> Participants regularly exchange the Cancel transaction signatures for each
> deposit, sharing the
> signatures with the watchtowers they operate. They then optionally [2]
> sign the Unvault transaction
> and share the signatures with the small multisig participants who can in
> turn use them to proceed
> with a spending. Watchtowers can enforce spending policies (say, can't
> Unvault outside of business
> hours) by having the Cancel transaction be confirmed before the expiration
> of the timelock.
>
>
> ## 2. Problem statement
>
> For any delegated vault, ensure the confirmation of a Cancel transaction
> in a configured number of
> blocks at any point. In so doing, minimize the overpayments and the UTxO
> set footprint. Overpayments
> increase the burden on the watchtower operator by increasing the required
> frequency of refills of the
> fee-bumping wallet, which is already the worst user experience. You are
> likely to manage a number of
> UTxOs with your number of vaults, which comes at a cost for you as well as
> everyone running a full
> node.
>
> Note that this assumes miners are economically rationale, are incentivized
> by *public* fees and that
> you have a way to propagate your fee-bumped transaction to them. We also
> don't consider the block
> space bounds.
>
> In the previous paragraph and the following text, "vault" can generally be
> replaced with "offchain
> contract".
>
>
> ## 3. With presigned transactions
>
> As you all know, the first difficulty is to get to be able to unilaterally
> enforce your contract
> onchain. That is, any participant must be able to unilaterally bump the
> fees of a transaction even
> if it was co-signed by other participants.
>
> For Revault we can afford to introduce malleability in the Cancel
> transaction since there is no
> second-stage transaction depending on its txid. Therefore it is pre-signed
> with ANYONECANPAY. We
> can't use ANYONECANPAY|SINGLE since it would open a pinning vector [3].
> Note how we can't leverage
> the carve out rule, and neither can any other more-than-two-parties
> contract.
> This has a significant implication for the rest, as we are entirely
> burning fee-bumping UTxOs.
>
> This opens up a pinning vector, or at least a significant nuisance: any
> other party can largely
> increase the absolute fee without increasing the feerate, leveraging the
> RBF rules to prevent you
> from replacing it without paying an insane fee. And you might not see it
> in your own mempool and
> could only suppose it's happening by receiving non-full blocks or with
> transactions paying a lower
> feerate.
> Unfortunately i know of no other primitive that can be used by multi-party
> (i mean, >2) presigned
> transactions protocols for fee-bumping that aren't (more) vulnerable to
> pinning.
>
>
> ## 4. We are still betting on future feerate
>
> The problem is still missing one more constraint. "Ensuring confirmation
> at any time" involves ensuring
> confirmation at *any* feerate, which you *cannot* do. So what's the limit?
> In theory you should be ready
> to burn as much in fees as the value of the funds you want to get out of
> the contract. So... For us
> it'd mean keeping for each vault an equivalent amount of funds sitting
> there on the watchtower's hot
> wallet. For Lightning, it'd mean keeping an equivalent amount of funds as
> the sum of all your
> channels balances sitting there unallocated "just in case". This is not
> reasonable.
>
> So you need to keep a maximum feerate, above which you won't be able to
> ensure the enforcement of
> all your contracts onchain at the same time. We call that the "reserve
> feerate" and you can have
> different strategies for choosing it, for instance:
> - The 85th percentile over the last year of transaction feerates
> - The maximum historical feerate
> - The maximum historical feerate adjusted in dollars (makes more sense but
> introduces a (set of?)
>   trusted oracle(s) in a security-critical component)
> - Picking a random high feerate (why not? It's an arbitrary assumption
> anyways)
>
> Therefore, even if we no longer have to bet at signing time on the
> broadcast-time feerate market (since we can unilaterally bump), we still
> need some kind of prediction in order to prepare the funds made available to
> bump the fees at broadcast time.
> Apart from judging that 500sat/vb is probably more reasonable than 10sat/vb,
> this unfortunately sounds pretty much crystal-ball-driven.
>
> We currently use the maximum of the 95th percentiles over 90-day windows of
> historical blockchain feerates. [4]
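That strategy can be sketched as follows (stdlib-only; `feerates` is a hypothetical chronological series with one data point per block, the nearest-rank percentile and the non-overlapping windows are simplifications of the actual code in [4]):

```python
# Reserve feerate strategy: maximum of the 95th feerate percentiles computed
# over consecutive 90-day windows of historical feerates.

BLOCKS_PER_DAY = 144

def percentile(values, p):
    """Nearest-rank percentile."""
    s = sorted(values)
    idx = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[idx]

def reserve_feerate(feerates, window=90 * BLOCKS_PER_DAY):
    """Max of the per-window 95th percentiles over the whole series."""
    return max(
        percentile(feerates[i:i + window], 95)
        for i in range(0, len(feerates) - window + 1, window)
    )
```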
>
>
> ## 5. How much funds does my watchtower need?
>
> That's what we call the "reserve". Depending on your reserve feerate
> strategy it might vary over
> time. This is easier to reason about with a per-contract reserve. For
> Revault it's pretty
> straightforward since the Cancel transaction size is static:
> `reserve_feerate * cancel_size`. For
> other protocols with dynamic transaction sizes (or even packages of
> transactions) it's less so. For
> your Lightning channel you would probably take the maximum size of your
> commitment transaction
> according to your HTLC exposure settings + the size of as many
> `htlc_success` transaction?
>
> Then you either have your software or your user guesstimate how many
> offchain contracts the watchtower will have to watch, multiply that by the
> per-contract reserve and refill this amount (plus some slack in practice).
> Once again, a UX tradeoff (not even mentioning the guesstimation UX):
> overestimating leads to too many unallocated funds sitting on a hot wallet,
> underestimating means (at best) inability to participate in new contracts or
> being "at risk" (not being able to enforce all your contracts onchain at
> your reserve feerate) before a new refill.
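In Revault's case the arithmetic is simple enough to sketch, since the Cancel size is static (all constants below are hypothetical, for illustration only):

```python
# Per-vault reserve and total refill guesstimate for a Revault watchtower.

CANCEL_VSIZE = 200       # vbytes, assumed static Cancel transaction size
RESERVE_FEERATE = 500    # sat/vb, the "reserve feerate" from section 4

def per_vault_reserve(reserve_feerate=RESERVE_FEERATE, cancel_vsize=CANCEL_VSIZE):
    """Fee-bumping funds to keep per delegated vault."""
    return reserve_feerate * cancel_vsize

def refill_amount(n_vaults, slack_ratio=0.1):
    """Total refill for n_vaults delegated vaults, plus some slack."""
    return int(n_vaults * per_vault_reserve() * (1 + slack_ratio))

print(refill_amount(20))  # refill needed to watch 20 vaults, in sats
```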
>
> For vaults you likely have large-value UTxOs and small transactions (the
> Cancel is one-in one-out in
> Revault). For some other applications with large transactions and
> lower-value UTxOs on average it's
> likely that only part of the offchain contracts might be enforceable at a
> reasonable feerate. Is it
> reasonable?
>
>
> ## 6. UTxO pool layout
>
> Now that you somehow managed to settle on a refill amount, how are you
> going to use these funds?
> Also, you'll need to manage your pool across time (consolidating small
> coins, and probably fanning
> out large ones).
>
> You could keep a single large UTxO and peel it as you need to sponsor
> transactions. But this means that you need to create a coin of a specific
> value according to your need at the current feerate estimation, hope to have
> it confirmed in a few blocks (at least for now! [5]), and hope that the
> value won't be obsolete by the time it confirms. Also, you'd have to do that
> for any number of Cancels, chaining feebump coin creation transactions off
> the change of the previous ones or replacing them with more outputs. Both
> seem to become really unmanageable (and expensive) in many edge cases,
> shortening the time you have to confirm the actual Cancel transaction and
> creating uncertainty about the reserve (how much is my just-in-time fanout
> going to cost me in fees that i need to refill in advance on my watchtower
> wallet?).
> This is less of a concern for protocols using CPFP to sponsor transactions,
> but they rely on a policy rule specific to two-party contracts.
>
> Therefore for Revault we fan-out the coins per-vault in advance. We do so
> at refill time so the
> refiller can give an excess to pay for the fees of the fanout transaction
> (which is reasonable since
> it will occur just after the refilling transaction confirms). When the
> watchtower is asked to watch
> for a new delegated vault it will allocate coins from the pool of
> fanned-out UTxOs to it (failing
> that, it would refuse the delegation).
> What is a good distribution of UTxOs amounts per vault? We want to
> minimize the number of coins,
> still have coins small enough to not overpay (remember, we can't have
> change) and be able to bump a
> Cancel up to the reserve feerate using these coins. The two latter
> constraints are directly in
> contradiction as the minimal value of a coin usable at the reserve feerate
> (paying for its own input
> fee + bumping the feerate by, say, 5sat/vb) is already pretty high.
> Therefore we decided to go with
> two distributions per vault. The "reserve distribution" alone ensures that
> we can bump up to the
> reserve feerate and is usable for high feerates. The "bonus distribution"
> is not, but contains
> smaller coins useful to prevent overpayments during low and medium fee
> periods (which is most of the
> time).
> Both distributions are based on a simple geometric sequence [6]: each value
> is half the previous one. The values decrease exponentially, limiting the
> number of coins, while still allowing pretty small coins to exist. In
> addition, each coin's value differs from the sum of all the smaller coins by
> at most the value of the smallest coin, thereby bounding the maximum
> overpayment to the smallest coin's value [7].
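A sketch of such a halving distribution (illustrative and simplified from the actual code in [6]; `min_coin_sat` stands in for the minimal economically usable coin value discussed above):

```python
def fanout_distribution(reserve_sat, min_coin_sat):
    """Halving sequence summing to reserve_sat, smallest coin >= min_coin_sat."""
    coins = []
    remaining = reserve_sat
    while remaining >= 2 * min_coin_sat:
        coins.append(remaining - remaining // 2)  # take the larger half
        remaining //= 2
    coins.append(remaining)
    return coins

# E.g. a 100,000 sat reserve with a 5,000 sat floor: any target amount up to
# the reserve can be approached from above to within the smallest coin's value.
print(fanout_distribution(100_000, 5_000))
```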
>
> For the management of the UTxO pool across time we merged the
> consolidation with the fanout. When
> fanning out a refilled UTxO, we scan the pool for coins that need to be
> consolidated according to a
> heuristic. An instance of a heuristic is "the coin isn't allocated and
> would not have been able to
> increase the fee at the median feerate over the past 90 days of blocks".
> We assumed that feerates would tend to go up with time, and therefore
> discarded having to split some UTxOs from the pool. We however overlooked
> that a large increase in the exchange price of BTC, as we've seen during the
> past year, could invalidate this assumption; that should arguably be
> reconsidered.
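The quoted heuristic could look like the following predicate (the input size, Cancel size and minimal bump values are assumptions for illustration):

```python
# A coin is a consolidation candidate if it is unallocated and, at the median
# feerate of the past 90 days, could not even pay for its own input plus a
# minimal feerate bump of the Cancel.

INPUT_VSIZE = 68      # vbytes, assumed P2WPKH fee-bump input
MIN_BUMP = 5          # sat/vb, minimal useful feerate increase
CANCEL_VSIZE = 200    # vbytes, assumed static Cancel size

def should_consolidate(coin_value_sat, allocated, median_feerate_90d):
    if allocated:
        return False
    usable = coin_value_sat - INPUT_VSIZE * median_feerate_90d
    return usable < MIN_BUMP * (CANCEL_VSIZE + INPUT_VSIZE)
```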
>
>
> ## 7. Bumping and re-bumping
>
> First of all, when to fee-bump? At fixed time intervals? At each block
> connection? It sounds like, given a large enough timelock, you could try to
> greed by "trying your luck" at a lower feerate and only re-bumping every N
> blocks. You would then start aggressively bumping at every block after M
> blocks have passed. But that's actually a bet (in disguise?) that the next
> block feerate in M blocks will be lower than the current one. In the absence
> of any predictive model it is more reasonable to just start being aggressive
> immediately.
> You probably want to base your estimates on `estimatesmartfee` and, as a
> consequence, you would re-bump (if needed) after each block connection, when
> your estimates get updated and you notice your transaction was not included
> in the block.
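That policy can be sketched as follows; `rpc` is a hypothetical wrapper where only `estimatesmartfee` mirrors an actual Bitcoin Core RPC, and `is_confirmed`/`rbf_cancel` are assumed helpers:

```python
def maybe_rebump(rpc, cancel_txid, current_feerate):
    """At each block connection: if the Cancel is still unconfirmed and the
    estimate now exceeds its feerate, RBF it. Returns the new feerate if we
    re-bumped, else None."""
    if rpc.is_confirmed(cancel_txid):
        return None
    # `estimatesmartfee 2 CONSERVATIVE`, converted to sat/vb by the wrapper.
    est = rpc.estimatesmartfee(2, "CONSERVATIVE")
    if est > current_feerate:
        rpc.rbf_cancel(cancel_txid, est)
        return est
    return None
```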
>
> In the event that you notice a significant portion of the block is filled
> with transactions paying less than your own, you might want to start
> panicking and bump your transaction fees by a certain percentage with no
> consideration for your fee estimator. You might skew miners' incentives in
> doing so: if you increase the fees by a factor of N, any miner with a
> fraction larger than 1/N of the network hashrate now has an incentive to
> censor your transaction at first to get you to panic. Also note this can
> happen if you want to pay the absolute fees for the 'pinning' attack
> mentioned in section #2, and that might actually incentivize miners to
> perform it themselves...
>
> The gist is that the most effective way to bump and rebump (RBF the Cancel
> tx) seems to be to consider the `estimatesmartfee 2 CONSERVATIVE` feerate at
> every block your tx isn't included in, and to RBF it if the feerate is
> higher.
> In addition, we fall back to a blockchain-based estimation when estimates
> aren't available (eg if the user stopped their WT for, say, an hour and we
> come back up): we use the 85th percentile over the feerates in the last 6
> blocks. Sure, miners can try to have an influence on that by stuffing their
> blocks with large-fee self-paying transactions, but they would need to:
> 1. Be sure to catch a significant portion of the 6 blocks (at least 2,
>    actually)
> 2. Give up on 25% of the highest fee-paying transactions (assuming they got
>    the 6 blocks; it's proportionally larger and more uncertain as they get
>    fewer of them)
> 3. Hope that our estimator will fail and we need to fall back to the
>    chain-based estimation
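The fallback estimator is straightforward to sketch (`last_blocks` is a hypothetical list of per-block transaction feerate lists, most recent last; the nearest-rank percentile is an assumption about the exact method):

```python
def fallback_feerate(last_blocks, p=85):
    """p-th percentile of all transaction feerates in the last 6 blocks."""
    rates = sorted(r for block in last_blocks[-6:] for r in block)
    idx = max(0, min(len(rates) - 1, round(p / 100 * len(rates)) - 1))
    return rates[idx]
```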
>
>
> ## 8. Our study
>
> We essentially replayed the historical data with different deployment
> configurations (number of participants and timelock) and probabilities of an
> event occurring (an event being, say, an Unvault, an invalid Unvault, a new
> delegation, ...). We then observed different metrics, such as the time at
> risk (when we can't enforce all our contracts at the reserve feerate at the
> same time) or the operational cost.
> We got the historical fee estimates data from Statoshi [9], Txstats [10]
> and the historical chain
> data from Riccardo Casatta's `blocks_iterator` [11]. Thanks!
>
> The (research-quality..) code can be found at
> https://github.com/revault/research under the section
> "Fee bumping". Again it's very Revault specific, but at least the data can
> probably be reused for
> studying other protocols.
>
>
> ## 9. Insurances
>
> Of course, given it's all hacks and workarounds and there is no good
> answer to "what is a reasonable
> feerate up to which we need to make contracts enforceable onchain?", there
> is definitely room for an
> insurance market. But this enters the realm of opinions. Although i do
> have some (having discussed
> this topic for the past years with different people), i would like to keep
> this post focused on the
> technical aspects of this problem.
>
>
>
> [0] As far as i can tell, having offchain contracts be enforceable onchain
> by confirming a
> transaction before the expiration of a timelock is a widely agreed-upon
> approach. And i don't think
> we can opt for any other fundamentally different one, as you want to know
> you can claim back your
> coins from a contract after a deadline before taking part in it.
>
> [1] The Real Revault (tm) involves more transactions, but for the sake of
> conciseness i only
> detailed a minimum instance of the problem.
>
> [2] Only presigning part of the Unvault transactions makes it possible to
> delegate only part of the coins, which can be abstracted as "delegate x% of
> your stash" in the user interface.
>
> [3]
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017835.html
>
> [4]
> https://github.com/revault/research/blob/1df953813708287c32a15e771ba74957ec44f354/feebumping/model/statemachine.py#L323-L329
>
> [5] https://github.com/bitcoin/bitcoin/pull/23121
>
> [6]
> https://github.com/revault/research/blob/1df953813708287c32a15e771ba74957ec44f354/feebumping/model/statemachine.py#L494-L507
>
> [7] Of course this assumes a combinatorial coin selection, but i believe
> it's ok given we limit the
> number of coins beforehand.
>
> [8] Although there is the argument to outbid a censorship, anyone
> censoring you isn't necessarily a
> miner.
>
> [9] https://www.statoshi.info/
>
> [10] https://txstats.com/
>
> [11] https://github.com/RCasatta/blocks_iterator
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists•linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>



* Re: [bitcoin-dev] A fee-bumping model
  2021-11-30  1:43 ` Antoine Riard
@ 2021-11-30 15:19   ` darosior
  2021-12-07 17:24     ` Gloria Zhao
  2021-12-08 23:56     ` Antoine Riard
  0 siblings, 2 replies; 9+ messages in thread
From: darosior @ 2021-11-30 15:19 UTC (permalink / raw)
  To: Antoine Riard; +Cc: Bitcoin Protocol Discussion


Hi Antoine,

Thanks for your comment. I believe for Lightning it's simpler with regard to the management of the UTxO pool, but harder with regard to choosing
a threat model.
Responses inline.

> For any opened channel, ensure the confirmation of a Commitment transaction and the children HTLC-Success/HTLC-Timeout transactions. Note, in the Lightning security game you have to consider (at least) 4 types of players moves and incentives : your node, your channel counterparties, the miners, the crowd of bitcoin users. The number of the last type of players is unknown from your node, however it should not be forgotten you're in competition for block space, therefore their block demands bids should be anticipated and reacted to in consequence. With that remark in mind, implications for your LN fee-bumping strategy will be raised afterwards.
>
> For a LN service provider, on-chain overpayments bear on your operational costs, thus downgrading your economic competitiveness. For the average LN user, overpayment might price them out of a non-custodial LN deployment, as they don't have the minimal security budget to be on their own.

I think this problem statement can be easily generalised to any offchain contract. And your points stand for all of them.
"For any opened contract, ensure at any point the confirmation of a (set of) transaction(s) in a given number of blocks"

> Same issue with Lightning, we can be pinned today on the basis of replace-by-fee rule 3. We can be also blinded by network mempool partitions, a pinning counterparty can segregate all the full-nodes in as many subsets by broadcasting a revoked Commitment transaction different for each. For Revault, I think you can also do unlimited partitions by mutating the ANYONECANPAY-input of the Cancel.

Well you can already do unlimited partitions by adding different inputs to it. You could malleate the witness, but since we are using Miniscript i'm confident you would only be able to do so in a marginal way.

> That said, if you have a distributed towers deployment, spread across the p2p network topology, and they can't be clustered together through cross-layers or intra-layer heuristics, you should be able to reliably observe such partitions. I think such distributed monitors are deployed by few L1 merchants accepting 0-conf to detect naive double-spend.

We should aim at more than a 0-conf (in)security level...
It seems to me the only policy-level mitigation for RBF pinning around the "don't decrease the absolute fees of a less-than-a-block mempool" rule would be to drop the requirement on increasing absolute fees if the mempool is "full enough" (and the feerate increases exponentially, of course).
Another approach could be introducing new consensus rules as proposed by Jeremy last year [0]. If we go into the realm of new consensus rules, then i think that simply committing to a maximum tx size would fix pinning by RBF rule 3. It could be in the annex, or in the unused sequence bits (although those are currently used by Lightning, meh). You could also check in the output script that the input commits to this.

[0] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-September/018168.html

> Have we already discussed a fee-bumping "shared cache", a CPFP variation ? Strawman idea: Alice and Bob commit collateral inputs to a separate UTXO from the main "offchain contract" one. This UTXO is locked by a multi-sig. For any Commitment transaction pre-signed, also counter-sign a CPFP with top mempool feerate included, spending a Commitment anchor output and the shared-cache UTXO. If the fees spike, you can re-sign a high-feerate CPFP, assuming interactivity. As the CPFP is counter-signed by everyone, the outputs can be CSV-1 encumbered to prevent pinnings. If the share-cache is feeded at parity, there shouldn't be an incentive to waste or maliciously inflate the feerate. I think this solution can be easily generalized to more than 2 counterparties by using a multi-signature scheme. Big issue, if the feerate is short due to fee spikes and you need to re-sign a higher-feerate CPFP, you're trusting your counterparty to interact, though arguably not worse than the current update fee mechanism.

It really looks just like `update_fee`. Except maybe with the property that the channel liquidity does not depend on the onchain feerate.
In any case, for Lightning i think it's a bad idea to re-introduce trust on this side post anchor outputs. For Revault it's clearly out of the question to introduce trust in your counterparties (why would you bother having a fee-bumping mechanism in the first place then?). Probably the same holds for all offchain contracts.

>> For Lightning, it'd mean keeping an equivalent amount of funds as the sum of all your
> channels balances sitting there unallocated "just in case". This is not reasonable.
>
> Agree, game-theory wise, you would like to keep a full fee-bumping reserve, ready to burn as much in fees as the contested HTLC value, as it's the maximum gain of your counterparty. Though perfect equilibrium is hard to achieve because your malicious counterparty might have an edge pushing you to broadcast your Commitment first by withholding HTLC resolution.
>
> Fractional fee-bumping reserves are much more realistic to expect in the LN network. Lower fee-bumping reserve, higher liquidity deployed, in theory higher routing fees. By observing historical feerates, average offchain balances at risk and routing fees expected gains, you should be able to discover an equilibrium where higher levels of reserve aren't worth the opportunity cost. I guess this equilibrium could be your LN fee-bumping reserve max feerate.
>
> Note, I think the LN approach is a bit different from what suits a custody protocol like Revault, as you compute a direct return on the frozen fee-bumping liquidity. With Revault, if you have numerous bitcoins protected, it might be more interesting to adopt a "buy the mempool, stupid" strategy than risking fund safety for a few percentage points of interest returns.

True for routing nodes. For wallets (if receiving funds), it's not about an investment: just users' expectation of being able to transact without risking the loss of their funds (ie being able to enforce their contract onchain). Although wallets are much less at risk.

> This is where the "anticipate the crowd of bitcoin users' moves" point can be laid out. As the crowd of bitcoin users' fee-bumping reserves are ultimately unknown to your node, you should be ready to be a bit more conservative than the vanilla fee-bumping strategies shipped by default. In case of massive mempool congestion, your additional conservatism might get your time-sensitive transactions confirmed and game on the crowd of bitcoin users. First problem: if all offchain bitcoin software adopt that strategy we might inflate the worst-case feerate at the benefit of the miners, without holistically improving block throughput. Second problem: your class of offchain bitcoin software might have a ridiculous fee-bumping reserve compared
> to other classes of offchain bitcoin software (Revault > Lightning) and just be priced out by design in case of mempool congestion. Third problem: as the number of offchain bitcoin applications should go up with time, your fee-bumping reserve levels based on historical data might always be late by one "bank-run" scenario.

Black swan event 2.0? Also, your problem n°3 is inherent to any kind of fee estimation.

> For Lightning, if you're short in fee-bumping reserves you might still do preemptive channel closures, either cooperatively or unilaterally and get back the off-chain liquidity to protect the more economically interesting channels. Though again, that kind of automatic behavior might be compelling at the individual node-level, but make the mempol congestion worse holistically.

Yeah, so we are back to the "fractional reserve" model: you can only enforce X% of the offchain contracts you participate in. Actually it's even an added assumption: that you still have operating contracts with honest counterparties.

> In case of massive mempool congestion, you might try to front-run the crowd of bitcoin users relying on block connections for fee-bumping, and thus start your fee-bumping as soon as you observe feerate groups fluctuations in your local mempool(s).

I don't think any kind of mempool-based estimate generalizes well, since at any point the expected time before the next block is 10 minutes (and a lot can happen in 10min).

> Also you might process your fee-bumping ticks on a local clock instead of block connections in case of time-dilation or deeper eclipse attacks of your local node. Your view of the chain might be compromised but not your ability to broadcast transactions thanks to emergency channels (in the non-LN sense... though in fact, what about transactions wrapped in onions?) of communication.

Oh yeah, i didn't make "not getting eclipsed" (or more generally "data availability") explicit as an assumption, since it's generally one made by participants of any offchain contract. In that case you can't even have decent fee estimation, so you are screwed anyways.

> Yes, stay open the question on how you enforce this block insurance market. Reputation, which might be to avoid due to the latent centralization effect, might be hard to stack and audit reliably for an emergency mechanism running, hopefully, once in a halvening period. Maybe maybe some cryptographic or economically based mechanism on slashing or swaps could be found...

Unfortunately, given current mining centralisation, pools are in a very good position to offer pretty decent SLAs around that. With a block space insurance, you of course don't need all these convoluted fee-bumping hacks.
I'm very concerned that large stakeholders of the "offchain contracts ecosystem" would just go this (easier) way and further increase mining centralisation pressure.

I agree that a cryptography-based scheme around this type of insurance services would be the best way out.

> Antoine
>
> Le lun. 29 nov. 2021 à 09:34, darosior via bitcoin-dev <bitcoin-dev@lists•linuxfoundation.org> a écrit :
>
>> Hi everyone,
>>
>> Fee-bumping is paramount to the security of many protocols building on Bitcoin, as they require the
>> confirmation of a transaction (which might be presigned) before the expiration of a timelock at any
>> point after the establishment of the contract.
>>
>> The part of Revault using presigned transactions (the delegation from a large to a smaller multisig)
>> is no exception. We have been working on how to approach this for a while now and i'd like to share
>> what we have in order to open a discussion on this problem so central to what seem to be The Right
>> Way [0] to build on Bitcoin but which has yet to be discussed in details (at least publicly).
>>
>> I'll discuss what we came up with for Revault (at least for what will be its first iteration) but my
>> intent with posting to the mailing list is more to frame the questions to this problem we are all
>> going to face rather than present the results of our study tailored to the Revault usecase.
>> The discussion is still pretty Revault-centric (as it's the case study) but hopefully this can help
>> future protocol designers and/or start a discussion around what everyone's doing for existing ones.
>>
>> ## 1. Reminder about Revault
>>
>> The part of Revault we are interested in for this study is the delegation process, and more
>> specifically the application of spending policies by network monitors (watchtowers).
>> Coins are received on a large multisig. Participants of this large multisig create 2 [1]
>> transactions. The Unvault, spending a deposit UTxO, creates an output paying either to the small
>> multisig after a timelock or to the large multisig immediately. The Cancel, spending the Unvault
>> output through the non-timelocked path, creates a new deposit UTxO.
>> Participants regularly exchange the Cancel transaction signatures for each deposit, sharing the
>> signatures with the watchtowers they operate. They then optionally [2] sign the Unvault transaction
>> and share the signatures with the small multisig participants who can in turn use them to proceed
>> with a spending. Watchtowers can enforce spending policies (say, can't Unvault outside of business
>> hours) by having the Cancel transaction be confirmed before the expiration of the timelock.
>>
>> ## 2. Problem statement
>>
>> For any delegated vault, ensure the confirmation of a Cancel transaction in a configured number of
>> blocks at any point. In so doing, minimize the overpayments and the UTxO set footprint. Overpayments
>> increase the burden on the watchtower operator by increasing the required frequency of refills of the
>> fee-bumping wallet, which is already the worst user experience. You are likely to manage a number of
>> UTxOs with your number of vaults, which comes at a cost for you as well as everyone running a full
>> node.
>>
>> Note that this assumes miners are economically rationale, are incentivized by *public* fees and that
>> you have a way to propagate your fee-bumped transaction to them. We also don't consider the block
>> space bounds.
>>
>> In the previous paragraph and the following text, "vault" can generally be replaced with "offchain
>> contract".
>>
>> ## 3. With presigned transactions
>>
>> As you all know, the first difficulty is to get to be able to unilaterally enforce your contract
>> onchain. That is, any participant must be able to unilaterally bump the fees of a transaction even
>> if it was co-signed by other participants.
>>
>> For Revault we can afford to introduce malleability in the Cancel transaction since there is no
>> second-stage transaction depending on its txid. Therefore it is pre-signed with ANYONECANPAY. We
>> can't use ANYONECANPAY|SINGLE since it would open a pinning vector [3]. Note how we can't leverage
>> the carve out rule, and neither can any other more-than-two-parties contract.
>> This has a significant implication for the rest, as we are entirely burning fee-bumping UTxOs.
>>
>> This opens up a pinning vector, or at least a significant nuisance: any other party can largely
>> increase the absolute fee without increasing the feerate, leveraging the RBF rules to prevent you
>> from replacing it without paying an insane fee. And you might not see it in your own mempool and
>> could only suppose it's happening by receiving non-full blocks or with transactions paying a lower
>> feerate.
>> Unfortunately i know of no other primitive that can be used by multi-party (i mean, >2) presigned
>> transactions protocols for fee-bumping that aren't (more) vulnerable to pinning.
>>
>> ## 4. We are still betting on future feerate
>>
>> The problem is still missing one more constraint. "Ensuring confirmation at any time" involves ensuring
>> confirmation at *any* feerate, which you *cannot* do. So what's the limit? In theory you should be ready
>> to burn as much in fees as the value of the funds you want to get out of the contract. So... For us
>> it'd mean keeping for each vault an equivalent amount of funds sitting there on the watchtower's hot
>> wallet. For Lightning, it'd mean keeping an equivalent amount of funds as the sum of all your
>> channels balances sitting there unallocated "just in case". This is not reasonable.
>>
>> So you need to keep a maximum feerate, above which you won't be able to ensure the enforcement of
>> all your contracts onchain at the same time. We call that the "reserve feerate" and you can have
>> different strategies for choosing it, for instance:
>> - The 85th percentile over the last year of transactions feerates
>> - The maximum historical feerate
>> - The maximum historical feerate adjusted in dollars (makes more sense but introduces a (set of?)
>> trusted oracle(s) in a security-critical component)
>> - Picking a random high feerate (why not? It's an arbitrary assumption anyways)
>>
>> Therefore, even if we don't have to bet on the broadcast-time feerate market at signing time anymore
>> (since we can unilaterally bump), we still need some kind of prediction in preparation of making
>> funds available to bump the fees at broadcast time.
>> Apart from judging that 500sat/vb is probably more reasonable than 10sat/vbyte, this unfortunately
>> sounds pretty much crystal-ball-driven.
>>
>> We currently use the maximum of the 95th percentiles over 90-days windows over historical block chain
>> feerates. [4]
>>
>> ## 5. How much funds does my watchtower need?
>>
>> That's what we call the "reserve". Depending on your reserve feerate strategy it might vary over
>> time. This is easier to reason about with a per-contract reserve. For Revault it's pretty
>> straightforward since the Cancel transaction size is static: `reserve_feerate * cancel_size`. For
>> other protocols with dynamic transaction sizes (or even packages of transactions) it's less so. For
>> your Lightning channel you would probably take the maximum size of your commitment transaction
>> according to your HTLC exposure settings + the size of as many `htlc_success` transaction?
>>
>> Then you either have your software or your user guesstimate how many offchain contracts the
>> watchtower will have to watch, time that by the per-contract reserve and refill this amount (plus
>> some slack in practice). Once again, a UX tradeoff (not even mentioning the guesstimation UX):
>> overestimating leads to too many unallocated funds sitting on a hot wallet, underestimating means
>> (at best) inability to participate in new contracts or being "at risk" (not being able to enforce
>> all your contracts onchain at your reserve feerate) before a new refill.
>>
>> For vaults you likely have large-value UTxOs and small transactions (the Cancel is one-in one-out in
>> Revault). For some other applications with large transactions and lower-value UTxOs on average it's
>> likely that only part of the offchain contracts might be enforceable at a reasonable feerate. Is it
>> reasonable?
>>
>> ## 6. UTxO pool layout
>>
>> Now that you somehow managed to settle on a refill amount, how are you going to use these funds?
>> Also, you'll need to manage your pool across time (consolidating small coins, and probably fanning
>> out large ones).
>>
>> You could keep a single large UTxO and peel it as you need to sponsor transactions. But this means
>> that you need to create a coin of a specific value according to your need at the current feerate
>> estimation, hope to have it confirmed in a few blocks (at least for now! [5]), and hope that the
>> value won't be obsolete by the time it confirmed. Also, you'd have to do that for any number of
>> Cancel, chaining feebump coin creation transactions off the change of the previous ones or replacing
>> them with more outputs. Both seem to become really un-manageable (and expensive) in many edge-cases,
>> shortening the time you have to confirm the actual Cancel transaction and creating uncertainty about
>> the reserve (how much is my just-in-time fanout going to cost me in fees that i need to refill in
>> advance on my watchtower wallet?).
>> This is less of a concern for protocols using CPFP to sponsor transactions, but they rely on a
>> policy rule specific to 2-parties contracts.
>>
>> Therefore for Revault we fan-out the coins per-vault in advance. We do so at refill time so the
>> refiller can give an excess to pay for the fees of the fanout transaction (which is reasonable since
>> it will occur just after the refilling transaction confirms). When the watchtower is asked to watch
>> for a new delegated vault it will allocate coins from the pool of fanned-out UTxOs to it (failing
>> that, it would refuse the delegation).
>> What is a good distribution of UTxOs amounts per vault? We want to minimize the number of coins,
>> still have coins small enough to not overpay (remember, we can't have change) and be able to bump a
>> Cancel up to the reserve feerate using these coins. The two latter constraints are directly in
>> contradiction as the minimal value of a coin usable at the reserve feerate (paying for its own input
>> fee + bumping the feerate by, say, 5sat/vb) is already pretty high. Therefore we decided to go with
>> two distributions per vault. The "reserve distribution" alone ensures that we can bump up to the
>> reserve feerate and is usable for high feerates. The "bonus distribution" is not, but contains
>> smaller coins useful to prevent overpayments during low and medium fee periods (which is most of the
>> time).
>> Both distributions are based on a simple geometric series [6]: each value is half the previous one.
>> This decreases the values exponentially, limiting the number of coins, while still allowing
>> pretty small coins to exist. In addition, each coin's value differs from the sum of the smaller
>> coins by at most the value of the smallest coin, bounding the maximum overpayment to
>> the smallest coin's value [7].
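As an illustration, a minimal sketch of such a halving distribution and its overpayment bound (the amounts and Cancel size are assumed values for the example, not Revault's actual parameters):

```python
# Illustrative sketch of the halving distribution described above; the
# amounts and the Cancel size are assumptions, not Revault's real values.

def distribution(target: int, min_coin: int) -> list[int]:
    """Geometric series of coin values: each coin is half the previous
    one, down to `min_coin`."""
    coins = []
    value = target // 2
    while value >= min_coin:
        coins.append(value)
        value //= 2
    return coins

CANCEL_VBYTES = 200      # assumed Cancel tx size, in vbytes
RESERVE_FEERATE = 500    # sat/vb, assumed reserve feerate

# "Reserve distribution": must cover bumping up to the reserve feerate.
reserve_coins = distribution(CANCEL_VBYTES * RESERVE_FEERATE, 4_096)
# "Bonus distribution": smaller coins for low/medium fee periods.
bonus_coins = distribution(CANCEL_VBYTES * 50, 512)
```

With a power-of-two target the bound is exact: the largest coin exceeds the sum of all smaller ones by exactly the smallest coin's value.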
>>
>> For the management of the UTxO pool across time we merged the consolidation with the fanout. When
>> fanning out a refilled UTxO, we scan the pool for coins that need to be consolidated according to a
>> heuristic. An instance of a heuristic is "the coin isn't allocated and would not have been able to
>> increase the fee at the median feerate over the past 90 days of blocks".
>> We assumed that feerates would tend to go up with time and therefore discarded the need to
>> split some UTxOs from the pool. We however overlooked that a large increase in the exchange price of
>> BTC, as we've seen during the past year, could invalidate this assumption; that should arguably be
>> reconsidered.
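A hypothetical sketch of such a consolidation heuristic (all sizes and thresholds are assumptions for illustration, not the actual watchtower code):

```python
# Hypothetical consolidation check: flag an unallocated coin that could
# not have bumped the Cancel feerate at the median feerate observed over
# the past 90 days of blocks. All constants are illustrative assumptions.
import statistics

CANCEL_VBYTES = 200   # assumed (static) Cancel tx size, in vbytes
INPUT_VBYTES = 68     # assumed size of one fee-bumping input
BUMP_INCREMENT = 5    # sat/vb a single coin should be able to add

def should_consolidate(coin_value: int, allocated: bool,
                       feerates_90d: list[float]) -> bool:
    if allocated:
        return False
    median_feerate = statistics.median(feerates_90d)
    # The coin must pay for spending itself at that feerate and still
    # add BUMP_INCREMENT sat/vb to the Cancel transaction.
    needed = INPUT_VBYTES * median_feerate + BUMP_INCREMENT * CANCEL_VBYTES
    return coin_value < needed
```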
>>
>> ## 7. Bumping and re-bumping
>>
>> First of all, when to fee-bump? At fixed time intervals? At each block connection? It sounds like,
>> given a large enough timelock, you could try to greed by "trying your luck" at a lower feerate and
>> only re-bumping every N blocks. You would then start aggressively bumping at every block after M
>> blocks have passed. But that's actually a bet (in disguise?) that the next block feerate in M blocks
>> will be lower than the current one. In the absence of any predictive model it is more reasonable to
>> just start being aggressive immediately.
>> You probably want to base your estimates on `estimatesmartfee` and as a consequence you would re-bump
>> (if needed) after each block connection, when your estimates get updated and you notice your
>> transaction was not included in the block.
>>
>> In the event that you notice a significant portion of the block is filled with transactions paying
>> less than your own, you might want to start panicking and bump your transaction fees by a certain
>> percentage with no consideration for your fee estimator. You might skew miners' incentives in doing
>> so: if you increase the fees by a factor of N, any miner with a fraction larger than 1/N of the
>> network hashrate now has an incentive to censor your transaction at first to get you to panic. Also
>> note this can happen if you want to pay the absolute fees for the 'pinning' attack mentioned in
>> section #2, and that might actually incentivize miners to perform it themselves.
>>
>> The gist is that the most effective way to bump and rebump (RBF the Cancel tx) seems to just be to
>> consider the `estimatesmartfee 2 CONSERVATIVE` feerate at every block your tx isn't included in, and
>> to RBF it if the feerate is higher.
>> In addition, we fall back to a blockchain-based estimation when estimates aren't available (e.g. if
>> the user stopped their WT for, say, an hour and we come back up): we use the 85th percentile over the
>> feerates in the last 6 blocks. Sure, miners can try to have an influence on that by stuffing their
>> blocks with large fee self-paying transactions, but they would need to:
>> 1. Be sure to catch a significant portion of the 6 blocks (at least 2, actually)
>> 2. Give up on 25% of the highest fee-paying transactions (assuming they got the 6 blocks; it's
>> proportionally larger and more uncertain as they get fewer of them)
>> 3. Hope that our estimator fails and we need to fall back to the chain-based estimation
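The re-bump decision above can be sketched as pure functions (names and data shapes are assumptions for illustration, not the watchtower's actual code):

```python
# Minimal sketch of the re-bump decision: prefer the `estimatesmartfee
# 2 CONSERVATIVE` result, fall back to the 85th percentile of feerates
# seen in the last 6 blocks, and RBF the Cancel whenever the fresh
# estimate beats its current feerate. All feerates are in sat/vb.
from typing import Optional

def percentile(values: list, pct: float) -> float:
    s = sorted(values)
    return s[min(len(s) - 1, int(pct / 100 * len(s)))]

def fallback_estimate(last_6_blocks: list) -> float:
    """85th percentile over all tx feerates in the last 6 blocks."""
    return percentile([r for block in last_6_blocks for r in block], 85)

def target_feerate(smart_estimate: Optional[float],
                   last_6_blocks: list) -> float:
    # `smart_estimate` is None when bitcoind has no estimates yet
    # (e.g. the watchtower just restarted).
    if smart_estimate is not None:
        return smart_estimate
    return fallback_estimate(last_6_blocks)

def should_rebump(current_feerate: float, target: float) -> bool:
    # Evaluated at each block connection while the Cancel is unconfirmed.
    return target > current_feerate
```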
>>
>> ## 8. Our study
>>
>> We essentially replayed the historical data with different deployment configurations (number of
>> participants and timelock) and probability of an event occurring (an event being, say, an Unvault, an
>> invalid Unvault, a new delegation, ...). We then observed different metrics such as the time at risk
>> (when we can't enforce all our contracts at the reserve feerate at the same time), or the
>> operational cost.
>> We got the historical fee estimates data from Statoshi [9], Txstats [10] and the historical chain
>> data from Riccardo Casatta's `blocks_iterator` [11]. Thanks!
>>
>> The (research-quality..) code can be found at https://github.com/revault/research under the section
>> "Fee bumping". Again it's very Revault specific, but at least the data can probably be reused for
>> studying other protocols.
>>
>> ## 9. Insurances
>>
>> Of course, given it's all hacks and workarounds and there is no good answer to "what is a reasonable
>> feerate up to which we need to make contracts enforceable onchain?", there is definitely room for an
>> insurance market. But this enters the realm of opinions. Although i do have some (having discussed
>> this topic for the past years with different people), i would like to keep this post focused on the
>> technical aspects of this problem.
>>
>> [0] As far as i can tell, having offchain contracts be enforceable onchain by confirming a
>> transaction before the expiration of a timelock is a widely agreed-upon approach. And i don't think
>> we can opt for any other fundamentally different one, as you want to know you can claim back your
>> coins from a contract after a deadline before taking part in it.
>>
>> [1] The Real Revault (tm) involves more transactions, but for the sake of conciseness i only
>> detailed a minimum instance of the problem.
>>
>> [2] Only presigning part of the Unvault transactions makes it possible to delegate only part of the coins,
>> which can be abstracted as "delegate x% of your stash" in the user interface.
>>
>> [3] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017835.html
>>
>> [4] https://github.com/revault/research/blob/1df953813708287c32a15e771ba74957ec44f354/feebumping/model/statemachine.py#L323-L329
>>
>> [5] https://github.com/bitcoin/bitcoin/pull/23121
>>
>> [6] https://github.com/revault/research/blob/1df953813708287c32a15e771ba74957ec44f354/feebumping/model/statemachine.py#L494-L507
>>
>> [7] Of course this assumes a combinatorial coin selection, but i believe it's ok given we limit the
>> number of coins beforehand.
>>
>> [8] Although there is the argument to outbid a censorship, anyone censoring you isn't necessarily a
>> miner.
>>
>> [9] https://www.statoshi.info/
>>
>> [10] https://txstats.com/
>>
>> [11] https://github.com/RCasatta/blocks_iterator
>> _______________________________________________
>> bitcoin-dev mailing list
>> bitcoin-dev@lists•linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [bitcoin-dev] A fee-bumping model
  2021-11-30 15:19   ` darosior
@ 2021-12-07 17:24     ` Gloria Zhao
  2021-12-08 14:51       ` darosior
  2021-12-09  0:55       ` Antoine Riard
  2021-12-08 23:56     ` Antoine Riard
  1 sibling, 2 replies; 9+ messages in thread
From: Gloria Zhao @ 2021-12-07 17:24 UTC (permalink / raw)
  To: darosior; +Cc: Bitcoin Protocol Discussion


Hi Darosior and Ariard,

Thank you for your work looking into fee-bumping so thoroughly, and for
sharing your results. I agree about fee-bumping's importance in contract
security and feel that it's often under-prioritized. In general, what
you've described in this post, to me, is strong motivation for some of the
proposed changes to RBF we've been discussing. Mostly, I have some
questions.

> The part of Revault we are interested in for this study is the delegation
process, and more
> specifically the application of spending policies by network monitors
(watchtowers).

I'd like to better understand how fee-bumping would be used, i.e. how the
watchtower model works:
- Do all of the vault parties deposit both to the vault and a refill/fee to
the watchtower? Is there a reward the watchtower collects for a successful
Cancel, or something else? (Apologies if there's a thorough explanation
somewhere that I haven't already seen).
- Do we expect watchtowers tracking multiple vaults to be batching multiple
Cancel transaction fee-bumps?
- Do we expect vault users to be using multiple watchtowers for a better
trust model? If so, and we're expecting batched fee-bumps, won't those
conflict?

> For Revault we can afford to introduce malleability in the Cancel
transaction since there is no
> second-stage transaction depending on its txid. Therefore it is
pre-signed with ANYONECANPAY. We
> can't use ANYONECANPAY|SINGLE since it would open a pinning vector [3].
Note how we can't leverage
> the carve out rule, and neither can any other more-than-two-parties
contract.

We've already talked about this offline, but I'd like to point out here
that even transactions signed with ANYONECANPAY|ALL can be pinned by RBF
unless we add an ancestor score rule. [0], [1] (numbers are inaccurate,
Cancel Tx feerates wouldn't be that low, but just to illustrate what the
attack would look like)

[0]:
https://user-images.githubusercontent.com/25183001/135104603-9e775062-5c8d-4d55-9bc9-6e9db92cfe6d.png
[1]:
https://user-images.githubusercontent.com/25183001/145044333-2f85da4a-af71-44a1-bc21-30c388713a0d.png

> can't use ANYONECANPAY|SINGLE since it would open a pinning vector [3].
Note how we can't leverage
> the carve out rule, and neither can any other more-than-two-parties
contract.

Well stated about CPFP carve out. I suppose the generalization is that
allowing n extra ancestorcount=2 descendants to a transaction means it can
help contracts with <=n+1 parties (more accurately, outputs)? I wonder if
it's possible to devise a different approach for limiting
ancestors/descendants, e.g. by height/width/branching factor of the family
instead of count... :shrug:

> You could keep a single large UTxO and peel it as you need to sponsor
transactions. But this means
> that you need to create a coin of a specific value according to your need
at the current feerate
> estimation, hope to have it confirmed in a few blocks (at least for now!
[5]), and hope that the
> value won't be obsolete by the time it confirmed.

IIUC, a Cancel transaction can be generalized as a 1-in-1-out where the
input is presigned with counterparties, SIGHASH_ANYONECANPAY. The fan-out
UTXO pool approach is a clever solution. I also think this smells like a
case where improving lower-level RBF rules is more appropriate than
requiring applications to write workarounds and generate extra
transactions. Seeing that the BIP125#2 (no new unconfirmed inputs)
restriction really hurts in this case, if that rule were removed, would you
be able to simply keep the 1 big UTXO per vault and cut out the exact
nValue you need to fee-bump Cancel transactions? Would that feel less like
"burning" for the sake of fee-bumping?

> First of all, when to fee-bump? At fixed time intervals? At each block
connection? It sounds like,
> given a large enough timelock, you could try to greed by "trying your
luck" at a lower feerate and
> only re-bumping every N blocks. You would then start aggressively bumping
at every block after M
> blocks have passed.

I'm wondering if you also considered other questions like:
- Should a fee-bumping strategy be dependent upon the rate of incoming
transactions? To me, it seems like the two components are (1) what's in the
mempool and (2) what's going to trickle into the mempool between now and
the target block. The first component is best-effort addressed by keeping an
incentive-compatible mempool; historical data and a crystal ball look like
the only options for incorporating the 2nd component.
- Should the fee-bumping strategy depend on how close you are to your
timelock expiry? (though this seems like a potential privacy leak, and the
game theory could get weird as you mentioned).
- As long as you have a good fee estimator (i.e. given a current mempool,
can get an accurate feerate given a % probability of getting into target
block n), is there any reason to devise a fee-bumping strategy beyond
picking a time interval?

It would be interesting to see stats on the spread of feerates in blocks
during periods of fee fluctuation.

> > In the event that you notice a consequent portion of the block is
filled with transactions paying
> > less than your own, you might want to start panicking and bump your
transaction fees by a certain
> > percentage with no consideration for your fee estimator. You might skew
miners incentives in doing
> > so: if you increase the fees by a factor of N, any miner with a
fraction larger than 1/N of the
> > network hashrate now has an incentive to censor your transaction at
first to get you to panic.

> Yes I think miner-harvesting attacks should be weighed carefully in the
design of offchain contracts fee-bumping strategies, at least in the future
when the mining reward exhausts further.

Miner-harvesting (such cool naming!) is interesting, but I want to clarify
the value of N - I don't think it's the factor by which you increase the
fees on just your transaction.

To codify: your transaction pays a fee of `f1` right now and might pay a
fee of `f2` in a later block that the miner expects to mine with 1/N
probability. The economically rational miner isn't incentivized if simply
`f2 = N * f1` unless their mempool is otherwise empty.
By omitting your transaction in this block, the miner can include another
transaction/package paying `g1` fees instead, so they lose `f1-g1` in fees
right now. In the future block, they have the choice between collecting
`f2` or `g2` (from another transaction/package) in fees, so their gain is
`max(f2-g2, 0)`.
So the equation is more like: a miner with 1/N of the hashrate, employing
this censorship strategy, gains only if `max(f2-g2, 0) > N * (f1-g1)`. More
broadly, the miner only profits if `f2` is significantly higher than `g2`
and `f1` is about the same feerate as everything else in your mempool: it
seems like they're betting on how much you _overshoot_, not how much you
bump.
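A toy encoding of that inequality, following the notation in the paragraph above (the numbers in the usage below are made up, not real fee data):

```python
# Toy check of Gloria's inequality: a miner with 1/n of the hashrate
# profits from censoring only if max(f2 - g2, 0) > n * (f1 - g1).

def censorship_profitable(n: float, f1: float, g1: float,
                          f2: float, g2: float) -> bool:
    """f1/f2: your fees now / after panic-bumping; g1/g2: fees of the
    best alternative package now / later; the miner mines a given future
    block with probability 1/n."""
    return max(f2 - g2, 0) > n * (f1 - g1)
```

For instance, with 10% hashrate (n=10), overshooting to f2=200 over an alternative g2=100 pays off only if forgoing f1=100 over g1=95 now costs less than the tenth-weighted future gain.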

In general, I agree it would really suck to inadvertently create a game
where miners can drive feerates up by triggering desperation-driven
fee-bumping procedures. I guess this is a reason to avoid
increasingly-aggressive feebumping, or strategies where we predictably
overshoot.

Slightly related question: in contracts, generally, the timelock deadline
is revealed in the script, so the miner knows how "desperate" we are right?
Is that a problem? For Revault, if your Cancel transaction is a keypath
spend (I think I remember reading that somewhere?) and you don't reveal the
script, they don't see your timelock deadline yes?

Again, thanks for the digging and sharing. :)

Best,
Gloria

On Tue, Nov 30, 2021 at 3:27 PM darosior via bitcoin-dev <
bitcoin-dev@lists•linuxfoundation.org> wrote:

> Hi Antoine,
>
> Thanks for your comment. I believe for Lightning it's simpler with regard
> to the management of the UTxO pool, but harder with regard to choosing
> a threat model.
> Responses inline.
>
>
> For any opened channel, ensure the confirmation of a Commitment
> transaction and the children HTLC-Success/HTLC-Timeout transactions. Note,
> in the Lightning security game you have to consider (at least) 4 types of
> players moves and incentives : your node, your channel counterparties, the
> miners, the crowd of bitcoin users. The number of the last type of players
> is unknown from your node, however it should not be forgotten you're in
> competition for block space, therefore their block demands bids should be
> anticipated and reacted to in consequence. With that remark in mind,
> implications for your LN fee-bumping strategy will be raised afterwards.
>
> For a LN service provider, on-chain overpayments are bearing on your
> operational costs, thus downgrading your economic competitiveness. For the
> average LN user, overpayment might price out outside a LN non-custodial
> deployment, as you don't have the minimal security budget to be on your own.
>
>
> I think this problem statement can be easily generalised to any offchain
> contract. And your points stand for all of them.
> "For any opened contract, ensure at any point the confirmation of a (set
> of) transaction(s) in a given number of blocks"
>
>
> Same issue with Lightning, we can be pinned today on the basis of
> replace-by-fee rule 3. We can be also blinded by network mempool
> partitions, a pinning counterparty can segregate all the full-nodes  in as
> many subsets by broadcasting a revoked Commitment transaction different for
> each. For Revault, I think you can also do unlimited partitions by mutating
> the ANYONECANPAY-input of the Cancel.
>
>
> Well you can already do unlimited partitions by adding different inputs to
> it. You could malleate the witness, but since we are using Miniscript i'm
> confident you would only be able to do so in a marginal way.
>
>
> That said, if you have a distributed towers deployment, spread across the
> p2p network topology, and they can't be clustered together through
> cross-layers or intra-layer heuristics, you should be able to reliably
> observe such partitions. I think such distributed monitors are deployed by
> few L1 merchants accepting 0-conf to detect naive double-spend.
>
>
> We should aim for more than a 0-conf (in)security level.
> It seems to me the only policy-level mitigation for RBF pinning around the
> "don't decrease the absolute fees of a less-than-a-block mempool" would be
> to drop the requirement on increasing absolute fees if the mempool is "full
> enough" (and the feerate increases exponentially, of course).
> Another approach could be by introducing new consensus rules as proposed
> by Jeremy last year [0]. If we go in the realm of new consensus rules, then
> i think that simply committing to a maximum tx size would fix pinning by
> RBF rule 3. Could be in the annex, or in the unused sequence bits (although
> they currently are by Lightning, meh). You could also check in the output
> script that the input commits to this.
>
> [0]
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-September/018168.html
>
>
> Have we already discussed a fee-bumping "shared cache", a CPFP variation?
> Strawman idea: Alice and Bob commit collateral inputs to a separate UTXO
> from the main "offchain contract" one. This UTXO is locked by a multi-sig.
> For any Commitment transaction pre-signed, also counter-sign a CPFP with
> top mempool feerate included, spending a Commitment anchor output and the
> shared-cache UTXO. If the fees spike, you can re-sign a high-feerate CPFP,
> assuming interactivity. As the CPFP is counter-signed by everyone, the
> outputs can be CSV-1 encumbered to prevent pinnings. If the shared cache is
> funded at parity, there shouldn't be an incentive to waste or maliciously
> inflate the feerate. I think this solution can be easily generalized to
> more than 2 counterparties by using a multi-signature scheme. Big issue, if
> the feerate is short due to fee spikes and you need to re-sign a
> higher-feerate CPFP, you're trusting your counterparty to interact, though
> arguably not worse than the current update fee mechanism.
>
>
> It really looks just like `update_fee`. Except maybe with the property
> that you have the channel liquidity not depend on the onchain feerate.
> In any case, for Lightning i think it's a bad idea to re-introduce trust
> on this side post anchor outputs. For Revault it's clearly out of the
> question to introduce trust in your counterparties (why would you bother
> having a fee-bumping mechanism in the first place then?). Probably the same
> holds for all offchain contracts.
>
>
> > For Lightning, it'd mean keeping an equivalent amount of funds as the
> sum of all your
> channels balances sitting there unallocated "just in case". This is not
> reasonable.
>
> Agree, game-theory wise, you would like to keep a full fee-bumping
> reserve, ready to burn as much in fees as the contested HTLC value, as it's
> the maximum gain of your counterparty. Though perfect equilibrium is hard
> to achieve because your malicious counterparty might have an edge pushing
> you to broadcast your Commitment first by withholding HTLC resolution.
>
> Fractional fee-bumping reserves are much more realistic to expect in the
> LN network. Lower fee-bumping reserve, higher liquidity deployed, in theory
> higher routing fees. By observing historical feerates, average offchain
> balances at risk and routing fees expected gains, you should be able to
> discover an equilibrium where higher levels of reserve aren't worth the
> opportunity cost. I guess this equilibrium could be your LN fee-bumping
> reserve max feerate.
>
> Note, I think the LN approach is a bit different from what suits a custody
> protocol like Revault,  as you compute a direct return of the frozen
> fee-bumping liquidity. With Revault, if you have numerous bitcoins
> protected, it's might be more interesting to adopt a "buy the mempool,
> stupid" strategy than risking fund safety for few percentages of interest
> returns.
>
>
> True for routing nodes. For wallets (if receiving funds), it's not about
> an investment: just users' expectation of being able to transact without
> risking the loss of their funds (i.e. being able to enforce their contract
> onchain). Although wallets are much less at risk.
>
>
> This is where the "anticipate the crowd of bitcoin users move" point can
> be laid out. As the crowd of bitcoin users' fee-bumping reserves are
> ultimately unknown from your node knowledge, you should be ready to be a
> bit more conservative than the vanilla fee-bumping strategies shipped by
> default. In case of massive mempool congestion, your additional
> conservatism might get your time-sensitive transactions confirmed and game on the
> crowd of bitcoin users. First problem: if all offchain bitcoin software
> adopt that strategy we might inflate the worst-case feerate rate at the
> benefit of the miners, without holistically improving block throughput.
> Second problem: your class of offchain bitcoin software might have a
> ridiculously small fee-bumping reserve compared
> to other classes of offchain bitcoin software (Revault > Lightning) and
> just be priced out by design in case of mempool congestion. Third problem:
> as the number of offchain bitcoin applications should go up with time, your
> fee-bumping reserve levels based on historical data might always be late
> by one "bank-run" scenario.
>
>
> Black swan event 2.0? Just rule n°3 is inherent to any kind of fee
> estimation.
>
> For Lightning, if you're short in fee-bumping reserves you might still do
> preemptive channel closures, either cooperatively or unilaterally and get
> back the off-chain liquidity to protect the more economically interesting
> channels. Though again, that kind of automatic behavior might be compelling
> at the individual node-level, but make the mempool congestion worse
> holistically.
>
>
> Yeah so we are back to the "fractional reserve" model: you can only
> enforce X% of the offchain contracts you participate in. Actually it's
> even an added assumption: that you still have operating contracts, with
> honest counterparties.
>
>
> In case of massive mempool congestion, you might try to front-run the
> crowd of bitcoin users relying on block connections for fee-bumping, and
> thus start your fee-bumping as soon as you observe feerate-group
> fluctuations in your local mempool(s).
>
>
> I don't think any kind of mempool-based estimate generalizes well, since
> at any point the expected time before the next block is 10 minutes (and a
> lot can happen in 10min).
>
> Also you might run your fee-bumping ticks on a local clock instead of
> block connections in case of time-dilation or deeper eclipse attacks of
> your local node. Your view of the chain might be compromised but not your
> ability to broadcast transactions thanks to emergency channels (in the
> non-LN sense... though in fact, what about transactions wrapped in onions?) of
> communication.
>
>
> Oh, yeah, i didn't make "not getting eclipsed" (or more generally
> "data availability") explicit as an assumption since it's generally one made by
> participants of any offchain contract. In this case you can't even have
> decent fee estimation, so you are screwed anyways.
>
>
> Yes, the question of how you enforce this block insurance
> market stays open. Reputation, which might be best avoided due to the latent
> centralization effect, might be hard to stack and audit reliably for an
> emergency mechanism running, hopefully, once in a halvening period. Maybe
> some cryptographic or economically based mechanism relying on slashing or
> swaps could be found...
>
>
> Unfortunately, given current mining centralisation, pools are in a very
> good position to offer pretty decent SLAs around that. With a block space
> insurance, you of course don't need all these convoluted fee-bumping hacks.
> I'm very concerned that large stakeholders of the "offchain contracts
> ecosystem" would just go this (easier) way and further increase mining
> centralisation pressure.
>
> I agree that a cryptography-based scheme around this type of insurance
> services would be the best way out.
>
>
> Antoine
>
> Le lun. 29 nov. 2021 à 09:34, darosior via bitcoin-dev <
> bitcoin-dev@lists•linuxfoundation.org> a écrit :
>
>> Hi everyone,
>>
>> Fee-bumping is paramount to the security of many protocols building on
>> Bitcoin, as they require the
>> confirmation of a transaction (which might be presigned) before the
>> expiration of a timelock at any
>> point after the establishment of the contract.
>>
>> The part of Revault using presigned transactions (the delegation from a
>> large to a smaller multisig)
>> is no exception. We have been working on how to approach this for a while
>> now and i'd like to share
>> what we have in order to open a discussion on this problem so central to
>> what seems to be The Right
>> Way [0] to build on Bitcoin but which has yet to be discussed in detail
>> (at least publicly).
>>
>> I'll discuss what we came up with for Revault (at least for what will be
>> its first iteration) but my
>> intent with posting to the mailing list is more to frame the questions to
>> this problem we are all
>> going to face rather than present the results of our study tailored to
>> the Revault usecase.
>> The discussion is still pretty Revault-centric (as it's the case study)
>> but hopefully this can help
>> future protocol designers and/or start a discussion around what
>> everyone's doing for existing ones.
>>
>>
>> ## 1. Reminder about Revault
>>
>> The part of Revault we are interested in for this study is the delegation
>> process, and more
>> specifically the application of spending policies by network monitors
>> (watchtowers).
>> Coins are received on a large multisig. Participants of this large
>> multisig create 2 [1]
>> transactions. The Unvault, spending a deposit UTxO, creates an output
>> paying either to the small
>> multisig after a timelock or to the large multisig immediately. The
>> Cancel, spending the Unvault
>> output through the non-timelocked path, creates a new deposit UTxO.
>> Participants regularly exchange the Cancel transaction signatures for
>> each deposit, sharing the
>> signatures with the watchtowers they operate. They then optionally [2]
>> sign the Unvault transaction
>> and share the signatures with the small multisig participants who can in
>> turn use them to proceed
>> with a spending. Watchtowers can enforce spending policies (say, can't
>> Unvault outside of business
>> hours) by having the Cancel transaction be confirmed before the
>> expiration of the timelock.
>>
>>
>> ## 2. Problem statement
>>
>> For any delegated vault, ensure the confirmation of a Cancel transaction
>> in a configured number of
>> blocks at any point. In so doing, minimize the overpayments and the UTxO
>> set footprint. Overpayments
>> increase the burden on the watchtower operator by increasing the required
>> frequency of refills of the
>> fee-bumping wallet, which is already the worst user experience. You are
>> likely to manage a number of
>> UTxOs growing with your number of vaults, which comes at a cost for you as well
>> as everyone running a full
>> node.
>>
>> Note that this assumes miners are economically rational, are
>> incentivized by *public* fees and that
>> you have a way to propagate your fee-bumped transaction to them. We also
>> don't consider the block
>> space bounds.
>>
>> In the previous paragraph and the following text, "vault" can generally
>> be replaced with "offchain
>> contract".
>>
>>
>> ## 3. With presigned transactions
>>
>> As you all know, the first difficulty is being able to
>> unilaterally enforce your contract
>> onchain. That is, any participant must be able to unilaterally bump the
>> fees of a transaction even
>> if it was co-signed by other participants.
>>
>> For Revault we can afford to introduce malleability in the Cancel
>> transaction since there is no
>> second-stage transaction depending on its txid. Therefore it is
>> pre-signed with ANYONECANPAY. We
>> can't use ANYONECANPAY|SINGLE since it would open a pinning vector [3].
>> Note how we can't leverage
>> the carve out rule, and neither can any other more-than-two-parties
>> contract.
>> This has a significant implication for the rest, as we are entirely
>> burning fee-bumping UTxOs.
>>
>> This opens up a pinning vector, or at least a significant nuisance: any
>> other party can largely
>> increase the absolute fee without increasing the feerate, leveraging the
>> RBF rules to prevent you
>> from replacing it without paying an insane fee. And you might not see it
>> in your own mempool and
>> could only suppose it's happening by receiving non-full blocks or blocks
>> including transactions paying a lower
>> feerate.
>> Unfortunately i know of no other primitive that can be used by
>> multi-party (i mean, >2) presigned
>> transactions protocols for fee-bumping that aren't (more) vulnerable to
>> pinning.
>>
>>
>> ## 4. We are still betting on future feerate
>>
>> The problem is still missing one more constraint. "Ensuring confirmation
>> at any time" involves ensuring
>> confirmation at *any* feerate, which you *cannot* do. So what's the
>> limit? In theory you should be ready
>> to burn as much in fees as the value of the funds you want to get out of
>> the contract. So... For us
>> it'd mean keeping for each vault an equivalent amount of funds sitting
>> there on the watchtower's hot
>> wallet. For Lightning, it'd mean keeping an equivalent amount of funds as
>> the sum of all your
>> channels balances sitting there unallocated "just in case". This is not
>> reasonable.
>>
>> So you need to keep a maximum feerate, above which you won't be able to
>> ensure the enforcement of
>> all your contracts onchain at the same time. We call that the "reserve
>> feerate" and you can have
>> different strategies for choosing it, for instance:
>> - The 85th percentile over the last year of transactions feerates
>> - The maximum historical feerate
>> - The maximum historical feerate adjusted in dollars (makes more sense
>> but introduces a (set of?)
>>   trusted oracle(s) in a security-critical component)
>> - Picking a random high feerate (why not? It's an arbitrary assumption
>> anyways)
>>
>> Therefore, even if we don't have to bet on the broadcast-time feerate
>> market at signing time anymore
>> (since we can unilaterally bump), we still need some kind of prediction
>> in preparation of making
>> funds available to bump the fees at broadcast time.
>> Apart from judging that 500sat/vb is probably more reasonable than
>> 10sat/vb, this unfortunately
>> sounds pretty much crystal-ball-driven.
>>
>> We currently use the maximum of the 95th percentiles over 90-days windows
>> over historical block chain
>> feerates. [4]
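(As a hedged illustration of the strategy quoted above: a hypothetical Python sketch, not the actual revault/research code, computing the maximum of 95th percentiles over 90-day windows. The input format, one feerate sample per block in sat/vb, is an assumption.)

```python
# Hypothetical sketch of the reserve feerate strategy: the maximum of the
# 95th percentiles over 90-day (~12960-block) windows of historical feerates.
# Assumed input: one feerate sample per block, in sat/vb, oldest first.

def percentile(values, p):
    """Nearest-rank p-th percentile of a non-empty list."""
    ordered = sorted(values)
    return ordered[min(len(ordered) - 1, int(p / 100 * len(ordered)))]

def reserve_feerate(block_feerates, window=144 * 90):
    """Maximum of the per-window 95th percentiles."""
    best = 0
    for start in range(0, max(1, len(block_feerates) - window + 1), window):
        best = max(best, percentile(block_feerates[start:start + window], 95))
    return best
```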
>>
>>
>> ## 5. How much funds does my watchtower need?
>>
>> That's what we call the "reserve". Depending on your reserve feerate
>> strategy it might vary over
>> time. This is easier to reason about with a per-contract reserve. For
>> Revault it's pretty
>> straightforward since the Cancel transaction size is static:
>> `reserve_feerate * cancel_size`. For
>> other protocols with dynamic transaction sizes (or even packages of
>> transactions) it's less so. For
>> your Lightning channel you would probably take the maximum size of your
>> commitment transaction
>> according to your HTLC exposure settings + the size of as many
>> `htlc_success` transactions?
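(The reserve arithmetic above is a simple product; here is a hedged sketch with made-up constants, not Revault's actual parameters.)

```python
# All numbers are illustrative only.
CANCEL_VSIZE = 200      # vbytes: a one-in one-out Cancel is roughly this size
RESERVE_FEERATE = 500   # sat/vb: the reserve feerate chosen in section 4

def per_contract_reserve():
    """Sats the watchtower must hold per vault to be able to bump the
    Cancel transaction up to the reserve feerate."""
    return CANCEL_VSIZE * RESERVE_FEERATE

def refill_target(n_contracts, slack=1.1):
    """Guesstimated number of watched contracts times the per-contract
    reserve, plus some slack."""
    return int(n_contracts * per_contract_reserve() * slack)
```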
>>
>> Then you either have your software or your user guesstimate how many
>> offchain contracts the
>> watchtower will have to watch, multiply that by the per-contract reserve and
>> refill this amount (plus
>> some slack in practice). Once again, a UX tradeoff (not even mentioning
>> the guesstimation UX):
>> overestimating leads to too many unallocated funds sitting on a hot
>> wallet, underestimating means
>> (at best) inability to participate in new contracts or being "at risk"
>> (not being able to enforce
>> all your contracts onchain at your reserve feerate) before a new refill.
>>
>> For vaults you likely have large-value UTxOs and small transactions (the
>> Cancel is one-in one-out in
>> Revault). For some other applications with large transactions and
>> lower-value UTxOs on average it's
>> likely that only part of the offchain contracts might be enforceable at a
>> reasonable feerate. Is that an acceptable trade-off?
>>
>>
>> ## 6. UTxO pool layout
>>
>> Now that you somehow managed to settle on a refill amount, how are you
>> going to use these funds?
>> Also, you'll need to manage your pool across time (consolidating small
>> coins, and probably fanning
>> out large ones).
>>
>> You could keep a single large UTxO and peel it as you need to sponsor
>> transactions. But this means
>> that you need to create a coin of a specific value according to your need
>> at the current feerate
>> estimation, hope to have it confirmed in a few blocks (at least for now!
>> [5]), and hope that the
>> value won't be obsolete by the time it confirms. Also, you'd have to do
>> that for any number of
>> Cancel, chaining feebump coin creation transactions off the change of the
>> previous ones or replacing
>> them with more outputs. Both seem to become really un-manageable (and
>> expensive) in many edge-cases,
>> shortening the time you have to confirm the actual Cancel transaction and
>> creating uncertainty about
>> the reserve (how much is my just-in-time fanout going to cost me in fees
>> that i need to refill in
>> advance on my watchtower wallet?).
>> This is less of a concern for protocols using CPFP to sponsor
>> transactions, but they rely on a
>> policy rule specific to 2-parties contracts.
>>
>> Therefore for Revault we fan-out the coins per-vault in advance. We do so
>> at refill time so the
>> refiller can give an excess to pay for the fees of the fanout transaction
>> (which is reasonable since
>> it will occur just after the refilling transaction confirms). When the
>> watchtower is asked to watch
>> for a new delegated vault it will allocate coins from the pool of
>> fanned-out UTxOs to it (failing
>> that, it would refuse the delegation).
>> What is a good distribution of UTxOs amounts per vault? We want to
>> minimize the number of coins,
>> still have coins small enough to not overpay (remember, we can't have
>> change) and be able to bump a
>> Cancel up to the reserve feerate using these coins. The two latter
>> constraints are directly in
>> contradiction as the minimal value of a coin usable at the reserve
>> feerate (paying for its own input
>> fee + bumping the feerate by, say, 5sat/vb) is already pretty high.
>> Therefore we decided to go with
>> two distributions per vault. The "reserve distribution" alone ensures
>> that we can bump up to the
>> reserve feerate and is usable for high feerates. The "bonus distribution"
>> is not, but contains
>> smaller coins useful to prevent overpayments during low and medium fee
>> periods (which is most of the
>> time).
>> Both distributions are based on a simple geometric sequence [6]. Each value
>> is half the previous one.
>> This exponentially decreases the value, limiting the number of coins. But
>> this also allows for
>> pretty small coins to exist, with each coin's value exceeding the sum of
>> the smaller coins by at most the value of the smallest coin, thereby
>> bounding the maximum overpayment to
>> the smallest coin's value [7].
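(A minimal sketch of the halving distribution and its overpayment bound, assuming the combinatorial coin selection mentioned in footnote [7]; names and values are illustrative, not the watchtower's code.)

```python
from itertools import combinations

def halving_distribution(largest, n_coins):
    """Coin values forming a geometric sequence of ratio 1/2."""
    return [largest >> i for i in range(n_coins)]

def select_feebump_coins(coins, needed):
    """Combinatorial selection (cf footnote [7]): the cheapest subset whose
    sum covers `needed`. Exponential, but fine for a handful of coins."""
    best = None
    for r in range(1, len(coins) + 1):
        for subset in combinations(coins, r):
            if sum(subset) >= needed and (best is None or sum(subset) < sum(best)):
                best = subset
    return best
```

With `halving_distribution(64_000, 4)` the worst-case overpayment over any affordable target is strictly less than the 8_000-sat smallest coin.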
>>
>> For the management of the UTxO pool across time we merged the
>> consolidation with the fanout. When
>> fanning out a refilled UTxO, we scan the pool for coins that need to be
>> consolidated according to a
>> heuristic. An instance of a heuristic is "the coin isn't allocated and
>> would not have been able to
>> increase the fee at the median feerate over the past 90 days of blocks".
>> We had this assumption that feerate would tend to go up with time and
>> therefore discarded having to
>> split some UTxOs from the pool. We however overlooked that a large
>> increase in the exchange price of
>> BTC as we've seen during the past year could invalidate this assumption
>> and that should arguably be
>> reconsidered.
>>
>>
>> ## 7. Bumping and re-bumping
>>
>> First of all, when to fee-bump? At fixed time intervals? At each block
>> connection? It sounds like,
>> given a large enough timelock, you could try to be greedy by "trying your
>> luck" at a lower feerate and
>> only re-bumping every N blocks. You would then start aggressively bumping
>> at every block after M
>> blocks have passed. But that's actually a bet (in disguise?) that the
>> next block feerate in M blocks
>> will be lower than the current one. In the absence of any predictive
>> model it is more reasonable to
>> just start being aggressive immediately.
>> You probably want to base your estimates on `estimatesmartfee` and as a
>> consequence you would re-bump
>> (if needed) after each block connection, when your estimates get updated
>> and you notice your
>> transaction was not included in the block.
>>
>> In the event that you notice a significant portion of the block is filled
>> with transactions paying
>> less than your own, you might want to start panicking and bump your
>> transaction fees by a certain
>> percentage with no consideration for your fee estimator. You might skew
>> miners incentives in doing
>> so: if you increase the fees by a factor of N, any miner with a fraction
>> larger than 1/N of the
>> network hashrate now has an incentive to censor your transaction at first
>> to get you to panic. Also
>> note this can happen if you want to pay the absolute fees for the
>> 'pinning' attack mentioned in
>> section #2, and that might actually incentivize miners to perform it
>> themselves..
>>
>> The gist is that the most effective way to bump and rebump (RBF the
>> Cancel tx) seems to just be to
>> consider the `estimatesmartfee 2 CONSERVATIVE` feerate at every block
>> your tx isn't included in, and
>> to RBF it if the feerate is higher.
>> In addition, we fallback to a block chain based estimation when estimates
>> aren't available (eg if
>> the user stopped their WT for, say, an hour and we come back up): we use the
>> 85th percentile over the
>> feerates in the last 6 blocks. Sure, miners can try to have an influence
>> on that by stuffing their
>> blocks with large fee self-paying transactions, but they would need to:
>> 1. Be sure to catch a significant portion of the 6 blocks (at least 2,
>> actually)
>> 2. Give up on 25% of the highest fee-paying transactions (assuming they
>> got the 6 blocks, it's
>>    proportionally larger and more uncertain as they get fewer of them)
>> 3. Hope that our estimator will fail and we need to fall back to the
>> chain-based estimation
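(The rebumping decision described above can be sketched as follows. This is a hypothetical illustration, not our watchtower's code: `rpc` stands for any bitcoind JSON-RPC handle, and `last_blocks_feerates` is an assumed helper returning the feerates, in sat/vb, of the transactions in the last 6 blocks.)

```python
def percentile(values, p):
    """Nearest-rank p-th percentile of a non-empty list."""
    ordered = sorted(values)
    return ordered[min(len(ordered) - 1, int(p / 100 * len(ordered)))]

def next_feerate(rpc, last_blocks_feerates):
    """Feerate (sat/vb) to target for the next bump attempt."""
    est = rpc.estimatesmartfee(2, "CONSERVATIVE")
    if "feerate" in est:
        return est["feerate"] * 100_000  # BTC/kvB -> sat/vb
    # Estimates unavailable (eg the watchtower was just restarted):
    # fall back to the 85th percentile over the last 6 blocks' feerates.
    return percentile(last_blocks_feerates(), 85)

def maybe_rebump(rpc, current_feerate, last_blocks_feerates):
    """Called at each block connection while the Cancel is unconfirmed:
    RBF only if the new target feerate is higher than the current one."""
    target = next_feerate(rpc, last_blocks_feerates)
    return target if target > current_feerate else None
```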
>>
>>
>> ## 8. Our study
>>
>> We essentially replayed the historical data with different deployment
>> configurations (number of
>> participants and timelock) and probability of an event occurring (event
>> being say an Unvault, an
>> invalid Unvault, a new delegation, ..). We then observed different
>> metrics such as the time at risk
>> (when we can't enforce all our contracts at the reserve feerate at the
>> same time), or the
>> operational cost.
>> We got the historical fee estimates data from Statoshi [9], Txstats [10]
>> and the historical chain
>> data from Riccardo Casatta's `blocks_iterator` [11]. Thanks!
>>
>> The (research-quality..) code can be found at
>> https://github.com/revault/research under the section
>> "Fee bumping". Again it's very Revault specific, but at least the data
>> can probably be reused for
>> studying other protocols.
>>
>>
>> ## 9. Insurances
>>
>> Of course, given it's all hacks and workarounds and there is no good
>> answer to "what is a reasonable
>> feerate up to which we need to make contracts enforceable onchain?",
>> there is definitely room for an
>> insurance market. But this enters the realm of opinions. Although i do
>> have some (having discussed
>> this topic for the past years with different people), i would like to
>> keep this post focused on the
>> technical aspects of this problem.
>>
>>
>>
>> [0] As far as i can tell, having offchain contracts be enforceable
>> onchain by confirming a
>> transaction before the expiration of a timelock is a widely agreed-upon
>> approach. And i don't think
>> we can opt for any other fundamentally different one, as you want to know
>> you can claim back your
>> coins from a contract after a deadline before taking part in it.
>>
>> [1] The Real Revault (tm) involves more transactions, but for the sake of
>> conciseness i only
>> detailed a minimum instance of the problem.
>>
>> [2] Only presigning part of the Unvault transactions allows delegating only
>> part of the coins,
>> which can be abstracted as "delegate x% of your stash" in the user
>> interface.
>>
>> [3]
>> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017835.html
>>
>> [4]
>> https://github.com/revault/research/blob/1df953813708287c32a15e771ba74957ec44f354/feebumping/model/statemachine.py#L323-L329
>>
>> [5] https://github.com/bitcoin/bitcoin/pull/23121
>>
>> [6]
>> https://github.com/revault/research/blob/1df953813708287c32a15e771ba74957ec44f354/feebumping/model/statemachine.py#L494-L507
>>
>> [7] Of course this assumes a combinatorial coin selection, but i believe
>> it's ok given we limit the
>> number of coins beforehand.
>>
>> [8] Although there is the argument to outbid a censorship, anyone
>> censoring you isn't necessarily a
>> miner.
>>
>> [9] https://www.statoshi.info/
>>
>> [10] https://txstats.com/
>>
>> [11] https://github.com/RCasatta/blocks_iterator
>> _______________________________________________
>> bitcoin-dev mailing list
>> bitcoin-dev@lists•linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
>

[-- Attachment #2: Type: text/html, Size: 43225 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [bitcoin-dev] A fee-bumping model
  2021-12-07 17:24     ` Gloria Zhao
@ 2021-12-08 14:51       ` darosior
  2021-12-09  0:55       ` Antoine Riard
  1 sibling, 0 replies; 9+ messages in thread
From: darosior @ 2021-12-08 14:51 UTC (permalink / raw)
  To: Gloria Zhao; +Cc: Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 40338 bytes --]

Hi Gloria,

I agree with regard to the RBF changes. To me, we should (obviously?) do ancestor feerate instead of requiring confirmed inputs (#23121).
However, we have yet to come up with a reasonable policy-only fix to "rule 3 pinning".

Responses inline!

>> The part of Revault we are interested in for this study is the delegation process, and more
>> specifically the application of spending policies by network monitors (watchtowers).
>
> I'd like to better understand how fee-bumping would be used, i.e. how the watchtower model works:
> - Do all of the vault parties both deposit to the vault and a refill/fee to the watchtower, is there a reward the watchtower collects for a successful Cancel, or something else? (Apologies if there's a thorough explanation somewhere that I haven't already seen).
> - Do we expect watchtowers tracking multiple vaults to be batching multiple Cancel transaction fee-bumps?
> - Do we expect vault users to be using multiple watchtowers for a better trust model? If so, and we're expecting batched fee-bumps, won't those conflict?

Roughly, it could be described as "enforce spending policies on 10BTC worth of delegated coins by allocating 5mBTC to 3 different watchtowers".
Each participant that will be delegating coins is expected to run a number of watchtowers. They should ideally be replicated (for full disclosure, if
it wasn't obvious, providing replication is the business model of the company behind the Revault project :p).
You can't batch fee-bumps as you *could* (maybe not *would*) do with anchor outputs channels, since the latter use CPFP and we "sponsor"
(or whatever the name of "supplementing the fees of a transaction by adding inputs" is).
In the current model, watchtowers enforcing the same policy do compete in that they broadcast conflicting transactions since they attach different
fee-bumping inputs. Ideally we could sign the added feebumping inputs themselves with ACP so they are allowed to cooperate. However doing that
would allow anyone on the network to snip the added fees to perform a "RBF rule-3 pinning".
Finally, there could be concerns around game theory of letting others' watchtowers feebump for you. I'm convinced however that in our case the fee
is completely dwarfed by the value at stake. Trying to delay your own WT's fee-bump reaction to hope someone else will pay the 10k sats for enforcing
a 1BTC contract, hmmm, i wouldn't do that.

>> For Revault we can afford to introduce malleability in the Cancel transaction since there is no
>> second-stage transaction depending on its txid. Therefore it is pre-signed with ANYONECANPAY. We
>> can't use ANYONECANPAY|SINGLE since it would open a pinning vector [3]. Note how we can't leverage
>> the carve out rule, and neither can any other more-than-two-parties contract.
>
> We've already talked about this offline, but I'd like to point out here that even transactions signed with ANYONECANPAY|ALL can be pinned by RBF unless we add an ancestor score rule. [0], [1] (numbers are inaccurate, Cancel Tx feerates wouldn't be that low, but just to illustrate what the attack would look like)

Thanks for making that explicit, i should have mentioned it. For everyone reading, the PR is at https://github.com/bitcoin/bitcoin/pull/23121 .

>> can't use ANYONECANPAY|SINGLE since it would open a pinning vector [3]. Note how we can't leverage
>> the carve out rule, and neither can any other more-than-two-parties contract.
>
> Well stated about CPFP carve out. I suppose the generalization is that allowing n extra ancestorcount=2 descendants to a transaction means it can help contracts with <=n+1 parties (more accurately, outputs)? I wonder if it's possible to devise a different approach for limiting ancestors/descendants, e.g. by height/width/branching factor of the family instead of count... :shrug:

I don't think so, because you want any party involved in the contract to be able to unilaterally enforce it. With >2 anchor outputs any 2 parties can
collude against the other one(s) by pinning the transaction using the first party's output to hit the descendant chain limit and the second one to trigger
the carve-out.

Ideally i think it'd be better that all contracts move toward using sponsoring ("tx malleation") when we can (ie for all transactions that are at the end of
the chain, or post-ANYPREVOUT any transaction really) instead of CPFP for fee-bumping because:
1. It's way easier to reason about wrt mempool DOS protections (the fees don't depend on a chain of children, it's just a function of the transaction alone)
2. It's more space efficient (and thereby economical): you don't need to create a new transaction (or a set of new txs) to bump your fees.

Unfortunately, having to use ACP instead of ACP|SINGLE is a showstopper. Managing a fee-bumping UTxO pool is a massive burden.
On a side note, thinking back about ACP|SINGLE vs ACP i'm not so sure anymore the latter opens up more pinning vectors than the former..

> IIUC, a Cancel transaction can be generalized as a 1-in-1-out where the input is presigned with counterparties, SIGHASH_ANYONECANPAY. The fan-out UTXO pool approach is a clever solution. I also think this smells like a case where improving lower-level RBF rules is more appropriate than requiring applications to write workarounds and generate extra transactions. Seeing that the BIP125#2 (no new unconfirmed inputs) restriction really hurts in this case, if that rule were removed, would you be able to simply keep the 1 big UTXO per vault and cut out the exact nValue you need to fee-bump Cancel transactions? Would that feel less like "burning" for the sake of fee-bumping?

I am not sure. It's a question i raised when i was made aware of your finding of the "no unconfirmed" rule defect and your proposal to move to ancestor
score instead. Without further consideration i'd say yes, but this needs more research. I'm also biased as i really want to get rid of this coin pool for both
the complexity and the social cost..

>> First of all, when to fee-bump? At fixed time intervals? At each block connection? It sounds like,
>> given a large enough timelock, you could try to greed by "trying your luck" at a lower feerate and
>> only re-bumping every N blocks. You would then start aggressively bumping at every block after M
>> blocks have passed.
>
> I'm wondering if you also considered other questions like:
> - Should a fee-bumping strategy be dependent upon the rate of incoming transactions? To me, it seems like the two components are (1) what's in the mempool and (2) what's going to trickle into the mempool between now and the target block. The first component is best-effort keeping incentive-compatible mempool; historical data and crystal ball look like the only options for incorporating the 2nd component.
> - Should the fee-bumping strategy depend on how close you are to your timelock expiry? (though this seems like a potential privacy leak, and the game theory could get weird as you mentioned).
> - As long as you have a good fee estimator (i.e. given a current mempool, can get an accurate feerate given a % probability of getting into target block n), is there any reason to devise a fee-bumping strategy beyond picking a time interval?

I think (again, ideally) applications should take `estimatesmartfee` as a black box, and not look into the mempool by themselves. Now whether we should
take into account the mempool data for short target estimation, i don't know. The first issue that comes to mind is how to measure whether your mempool
is "up-to-date" (i mean if you have most of the current unconfirmed transactions). Weak blocks were mentioned elsewhere, and i think they can help for
this (you don't influence your estimate by the rate of new unconfirmed transactions you hear about, but by what miners are currently working on). Now, sure,
the expected time before the next block would be 10min. But for a short target estimate it still seems better to base your estimate on the most up-to-date
data you can get? (maybe not? Can a statistician chime in?)

This section was about arguing that it doesn't make sense to start low and get to next-block feerate as you approach your timelock expiration. If your
question is whether it makes sense, as you get closer to timelock maturation, to start feebumping blindly rather than on the basis of what your fee
estimator gives you: i don't believe it does. If all you have is a fee-bumping method, all confirmation problems look like a fee-paying one :p.
You are already assuming your fee estimator is working and isn't being manipulated. If it is, and you don't get confirmed after X blocks and as many
re-bumping attempts, the problem is elsewhere imo.

> It would be interesting to see stats on the spread of feerates in blocks during periods of fee fluctuation.
>
>> > In the event that you notice a consequent portion of the block is filled with transactions paying
>> > less than your own, you might want to start panicking and bump your transaction fees by a certain
>> > percentage with no consideration for your fee estimator. You might skew miners incentives in doing
>> > so: if you increase the fees by a factor of N, any miner with a fraction larger than 1/N of the
>> > network hashrate now has an incentive to censor your transaction at first to get you to panic.
>
>> Yes I think miner-harvesting attacks should be weighed carefully in the design of offchain contracts fee-bumping strategies, at least in the future when the mining reward exhausts further.
>
> Miner-harvesting (such cool naming!) is interesting, but I want to clarify the value of N - I don't think it's the factor by which you increase the fees on just your transaction.
> To codify: your transaction pays a fee of `f1` right now and might pay a fee of `f2` in a later block that the miner expects to mine with 1/N probability. The economically rational miner isn't incentivized if simply `f2 = N * f1` unless their mempool is otherwise empty.
> By omitting your transaction in this block, the miner can include another transaction/package paying `g1` fees instead, so they lose `f1-g1` in fees right now. In the future block, they have the choice between collecting `f2` or `g2` (from another transaction/package) in fees, so their gain is `max(f2-g2, 0)`.
> So the equation is more like: a miner with 1/N of the hashrate, employing this censorship strategy, gains only if `max(f2-g2, 0) > N * (f1-g1)`. More broadly, the miner only profits if `f2` is significantly higher than `g2` and `f1` is about the same feerate as everything else in your mempool: it seems like they're betting on how much you _overshoot_, not how much you bump.

Right. I was talking in the worst case where they don't have a replacement package with a feerate of `g1`. They are even more incentivized to try that if they do.
Since `f1` is already expected to be the next block feerate, by how much you bump is technically by how much you overshoot. So much for dismissing your fee
estimator!
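For concreteness, Gloria's inequality can be checked with a couple of made-up numbers (a quick sketch, all figures illustrative):

```python
def censorship_profitable(n, f1, f2, g1, g2):
    """Gloria's condition: a miner with 1/n of the hashrate profits from
    censoring now iff the expected future gain beats n times the certain
    loss right now: max(f2 - g2, 0) > n * (f1 - g1)."""
    return max(f2 - g2, 0) > n * (f1 - g1)

# If your tx pays about the same as the rest of the mempool (f1 == g1),
# any overshoot f2 > g2 makes censorship pay off at any hashrate share.
# If the bumped fee doesn't exceed the future competition (f2 <= g2),
# censorship never pays, whatever the panic factor.
```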

> Slightly related question: in contracts, generally, the timelock deadline is revealed in the script, so the miner knows how "desperate" we are right?

For P2WSH, yes.

> Is that a problem?

I don't think so. As long as we don't bump the feerate by the N factor mentioned above, they have an incentive to try to take the fees while they still can (or someone else will).

> For Revault, if your Cancel transaction is a keypath spend (I think I remember reading that somewhere?) and you don't reveal the script, they don't see your timelock deadline yes?

It *could* be once we move to Taproot (2 weeks). Yep! They would only know about it on a successful spend which would reveal the branch with the timelock. It's
a good point in that the attack above could be made impractical through privacy. Although i don't think it's realistic: due to the necessary script-path spends it would
be trivial to cluster coins managed by a Revault setup and deduce whether a given transaction is a Cancel with very high accuracy.

> Again, thanks for the digging and sharing. :)

Thanks for the interest and getting me to re-think through this!

Best,
Antoine

> Best,
> Gloria
>
> On Tue, Nov 30, 2021 at 3:27 PM darosior via bitcoin-dev <bitcoin-dev@lists•linuxfoundation.org> wrote:
>
>> Hi Antoine,
>>
>> Thanks for your comment. I believe for Lightning it's simpler with regard to the management of the UTxO pool, but harder with regard to choosing
>> a threat model.
>> Responses inline.
>>
>>> For any opened channel, ensure the confirmation of a Commitment transaction and the children HTLC-Success/HTLC-Timeout transactions. Note, in the Lightning security game you have to consider (at least) 4 types of players moves and incentives : your node, your channel counterparties, the miners, the crowd of bitcoin users. The number of the last type of players is unknown from your node, however it should not be forgotten you're in competition for block space, therefore their block demands bids should be anticipated and reacted to in consequence. With that remark in mind, implications for your LN fee-bumping strategy will be raised afterwards.
>>>
>>> For a LN service provider, on-chain overpayments are bearing on your operational costs, thus downgrading your economic competitiveness. For the average LN user, overpayment might price out outside a LN non-custodial deployment, as you don't have the minimal security budget to be on your own.
>>
>> I think this problem statement can be easily generalised to any offchain contract. And your points stand for all of them.
>> "For any opened contract, ensure at any point the confirmation of a (set of) transaction(s) in a given number of blocks"
>>
>>> Same issue with Lightning, we can be pinned today on the basis of replace-by-fee rule 3. We can be also blinded by network mempool partitions, a pinning counterparty can segregate all the full-nodes in as many subsets by broadcasting a revoked Commitment transaction different for each. For Revault, I think you can also do unlimited partitions by mutating the ANYONECANPAY-input of the Cancel.
>>
Well you can already do unlimited partitions by adding different inputs to it. You could malleate the witness, but since we are using Miniscript i'm confident you would only be able to do so in a marginal way.
>>
>>> That said, if you have a distributed towers deployment, spread across the p2p network topology, and they can't be clustered together through cross-layers or intra-layer heuristics, you should be able to reliably observe such partitions. I think such distributed monitors are deployed by few L1 merchants accepting 0-conf to detect naive double-spend.
>>
We should aim for more than a 0-conf (in)security level..
It seems to me the only policy-level mitigation for RBF pinning around the "don't decrease the absolute fees of a less-than-a-block mempool" would be to drop the requirement on increasing absolute fees if the mempool is "full enough" (and the feerate increases exponentially, of course).
Another approach could be by introducing new consensus rules as proposed by Jeremy last year [0]. If we go into the realm of new consensus rules, then i think that simply committing to a maximum tx size would fix pinning by RBF rule 3. Could be in the annex, or in the unused sequence bits (although those are currently used by Lightning, meh). You could also check in the output script that the input commits to this.
>>
>> [0] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-September/018168.html
>>
>>> Have we already discussed a fee-bumping "shared cache", a CPFP variation ? Strawman idea: Alice and Bob commit collateral inputs to a separate UTXO from the main "offchain contract" one. This UTXO is locked by a multi-sig. For any Commitment transaction pre-signed, also counter-sign a CPFP with top mempool feerate included, spending a Commitment anchor output and the shared-cache UTXO. If the fees spike, you can re-sign a high-feerate CPFP, assuming interactivity. As the CPFP is counter-signed by everyone, the outputs can be CSV-1 encumbered to prevent pinnings. If the share-cache is feeded at parity, there shouldn't be an incentive to waste or maliciously inflate the feerate. I think this solution can be easily generalized to more than 2 counterparties by using a multi-signature scheme. Big issue, if the feerate is short due to fee spikes and you need to re-sign a higher-feerate CPFP, you're trusting your counterparty to interact, though arguably not worse than the current update fee mechanism.
>>
>> It really looks just like `update_fee`. Except maybe with the property that you have the channel liquidity not depend on the onchain feerate.
>> In any case, for Lightning i think it's a bad idea to re-introduce trust on this side post anchor outputs. For Revault it's clearly out of the question to introduce trust in your counterparties (why would you bother having a fee-bumping mechanism in the first place then?). Probably the same holds for all offchain contracts.
>>
>>>> For Lightning, it'd mean keeping an equivalent amount of funds as the sum of all your
>>> channels balances sitting there unallocated "just in case". This is not reasonable.
>>>
>>> Agree, game-theory wise, you would like to keep a full fee-bumping reserve, ready to burn as much in fees as the contested HTLC value, as it's the maximum gain of your counterparty. Though perfect equilibrium is hard to achieve because your malicious counterparty might have an edge pushing you to broadcast your Commitment first by withholding HTLC resolution.
>>>
>>> Fractional fee-bumping reserves are much more realistic to expect in the LN network. Lower fee-bumping reserve, higher liquidity deployed, in theory higher routing fees. By observing historical feerates, average offchain balances at risk and routing fees expected gains, you should be able to discover an equilibrium where higher levels of reserve aren't worth the opportunity cost. I guess this equilibrium could be your LN fee-bumping reserve max feerate.
>>>
>>> Note, I think the LN approach is a bit different from what suits a custody protocol like Revault, as you compute a direct return of the frozen fee-bumping liquidity. With Revault, if you have numerous bitcoins protected, it's might be more interesting to adopt a "buy the mempool, stupid" strategy than risking fund safety for few percentages of interest returns.
>>
True for routing nodes. For wallets (if receiving funds), it's not about an investment: just users' expectations of being able to transact without risking the loss of their funds (ie being able to enforce their contract onchain). Although wallets are much less at risk.
>>
>>> This is where the "anticipate the crowd of bitcoin users move" point can be laid out. As the crowd of bitcoin users' fee-bumping reserves are ultimately unknown from your node knowledge, you should be ready to be a bit more conservative than the vanilla fee-bumping strategies shipped by default. In case of massive mempool congestion, your additional conservatism might get your time-sensitive transactions and game on the crowd of bitcoin users. First Problem: if all offchain bitcoin software adopt that strategy we might inflate the worst-case feerate rate at the benefit of the miners, without holistically improving block throughput. Second problem : your class of offchain bitcoin softwares might have ridiculous fee-bumping reserve compared
>>> to other classes of offchain bitcoin softwares (Revault > Lightning) and just be priced out bydesign in case of mempool congestion. Third problem : as the number of offchain bitcoin applications should go up with time, your fee-bumping reserve levels based from historical data might be always late by one "bank-run" scenario.
>>
>> Black swan event 2.0? Just rule n°3 is inherent to any kind of fee estimation.
>>
>>> For Lightning, if you're short on fee-bumping reserves you might still do preemptive channel closures, either cooperatively or unilaterally, and get back the off-chain liquidity to protect the more economically interesting channels. Though again, that kind of automatic behavior might be compelling at the individual node level, but make the mempool congestion worse holistically.
>>
>> Yeah so we are back to the "fractional reserve" model: you can only enforce X% of the offchain contracts you participate in.. Actually it's even an added assumption: that you still have operating contracts, with honest counterparties.
>>
>>> In case of massive mempool congestion, you might try to front-run the crowd of bitcoin users relying on block connections for fee-bumping, and thus start your fee-bumping as soon as you observe feerate groups fluctuations in your local mempool(s).
>>
>> I don't think any kind of mempool-based estimate generalizes well, since at any point the expected time before the next block is 10 minutes (and a lot can happen in 10min).
>>
>>> Also you might process your fee-bumping ticks on a local clock instead of block connections in case of time-dilation or deeper eclipse attacks on your local node. Your view of the chain might be compromised but not your ability to broadcast transactions thanks to emergency channels (in the non-LN sense... though in fact what about transactions wrapped in onions?) of communication.
>>
>> Oh, yeah, i didn't make "not getting eclipsed" (or more generally "data availability") explicit as an assumption since it's generally one made by participants of any offchain contract. In this case you can't even have decent fee estimation, so you are screwed anyways.
>>
>>> Yes, the question of how you enforce this block insurance market stays open. Reputation, which might be best avoided due to the latent centralization effect, might be hard to stack and audit reliably for an emergency mechanism running, hopefully, once in a halvening period. Maybe some cryptographic or economically based mechanism on slashing or swaps could be found...
>>
>> Unfortunately, given current mining centralisation, pools are in a very good position to offer pretty decent SLAs around that. With a block space insurance, you of course don't need all these convoluted fee-bumping hacks.
>> I'm very concerned that large stakeholders of the "offchain contracts ecosystem" would just go this (easier) way and further increase mining centralisation pressure.
>>
>> I agree that a cryptography-based scheme around this type of insurance services would be the best way out.
>>
>>> Antoine
>>>
>>> On Mon. 29 Nov. 2021 at 09:34, darosior via bitcoin-dev <bitcoin-dev@lists•linuxfoundation.org> wrote:
>>>
>>>> Hi everyone,
>>>>
>>>> Fee-bumping is paramount to the security of many protocols building on Bitcoin, as they require the
>>>> confirmation of a transaction (which might be presigned) before the expiration of a timelock at any
>>>> point after the establishment of the contract.
>>>>
>>>> The part of Revault using presigned transactions (the delegation from a large to a smaller multisig)
>>>> is no exception. We have been working on how to approach this for a while now and i'd like to share
>>>> what we have in order to open a discussion on this problem so central to what seems to be The Right
>>>> Way [0] to build on Bitcoin but which has yet to be discussed in detail (at least publicly).
>>>>
>>>> I'll discuss what we came up with for Revault (at least for what will be its first iteration) but my
>>>> intent with posting to the mailing list is more to frame the questions to this problem we are all
>>>> going to face rather than present the results of our study tailored to the Revault usecase.
>>>> The discussion is still pretty Revault-centric (as it's the case study) but hopefully this can help
>>>> future protocol designers and/or start a discussion around what everyone's doing for existing ones.
>>>>
>>>> ## 1. Reminder about Revault
>>>>
>>>> The part of Revault we are interested in for this study is the delegation process, and more
>>>> specifically the application of spending policies by network monitors (watchtowers).
>>>> Coins are received on a large multisig. Participants of this large multisig create 2 [1]
>>>> transactions. The Unvault, spending a deposit UTxO, creates an output paying either to the small
>>>> multisig after a timelock or to the large multisig immediately. The Cancel, spending the Unvault
>>>> output through the non-timelocked path, creates a new deposit UTxO.
>>>> Participants regularly exchange the Cancel transaction signatures for each deposit, sharing the
>>>> signatures with the watchtowers they operate. They then optionally [2] sign the Unvault transaction
>>>> and share the signatures with the small multisig participants who can in turn use them to proceed
>>>> with a spending. Watchtowers can enforce spending policies (say, can't Unvault outside of business
>>>> hours) by having the Cancel transaction be confirmed before the expiration of the timelock.
>>>>
>>>> ## 2. Problem statement
>>>>
>>>> For any delegated vault, ensure the confirmation of a Cancel transaction in a configured number of
>>>> blocks at any point. In so doing, minimize the overpayments and the UTxO set footprint. Overpayments
>>>> increase the burden on the watchtower operator by increasing the required frequency of refills of the
>>>> fee-bumping wallet, which is already the worst user experience. You are likely to manage a number of
>>>> UTxOs with your number of vaults, which comes at a cost for you as well as everyone running a full
>>>> node.
>>>>
>>>> Note that this assumes miners are economically rational, are incentivized by *public* fees and that
>>>> you have a way to propagate your fee-bumped transaction to them. We also don't consider the block
>>>> space bounds.
>>>>
>>>> In the previous paragraph and the following text, "vault" can generally be replaced with "offchain
>>>> contract".
>>>>
>>>> ## 3. With presigned transactions
>>>>
>>>> As you all know, the first difficulty is being able to unilaterally enforce your contract
>>>> onchain. That is, any participant must be able to unilaterally bump the fees of a transaction even
>>>> if it was co-signed by other participants.
>>>>
>>>> For Revault we can afford to introduce malleability in the Cancel transaction since there is no
>>>> second-stage transaction depending on its txid. Therefore it is pre-signed with ANYONECANPAY. We
>>>> can't use ANYONECANPAY|SINGLE since it would open a pinning vector [3]. Note how we can't leverage
>>>> the carve out rule, and neither can any other more-than-two-parties contract.
>>>> This has a significant implication for the rest, as we are entirely burning fee-bumping UTxOs.
>>>>
>>>> This opens up a pinning vector, or at least a significant nuisance: any other party can largely
>>>> increase the absolute fee without increasing the feerate, leveraging the RBF rules to prevent you
>>>> from replacing it without paying an insane fee. And you might not see it in your own mempool and
>>>> could only suppose it's happening by receiving non-full blocks or with transactions paying a lower
>>>> feerate.
>>>> Unfortunately i know of no other primitive that can be used by multi-party (i mean, >2) presigned
>>>> transactions protocols for fee-bumping that aren't (more) vulnerable to pinning.
>>>>
>>>> ## 4. We are still betting on future feerate
>>>>
>>>> The problem is still missing one more constraint. "Ensuring confirmation at any time" involves ensuring
>>>> confirmation at *any* feerate, which you *cannot* do. So what's the limit? In theory you should be ready
>>>> to burn as much in fees as the value of the funds you want to get out of the contract. So... For us
>>>> it'd mean keeping for each vault an equivalent amount of funds sitting there on the watchtower's hot
>>>> wallet. For Lightning, it'd mean keeping an equivalent amount of funds as the sum of all your
>>>> channels balances sitting there unallocated "just in case". This is not reasonable.
>>>>
>>>> So you need to keep a maximum feerate, above which you won't be able to ensure the enforcement of
>>>> all your contracts onchain at the same time. We call that the "reserve feerate" and you can have
>>>> different strategies for choosing it, for instance:
>>>> - The 85th percentile over the last year of transactions feerates
>>>> - The maximum historical feerate
>>>> - The maximum historical feerate adjusted in dollars (makes more sense but introduces a (set of?)
>>>> trusted oracle(s) in a security-critical component)
>>>> - Picking a random high feerate (why not? It's an arbitrary assumption anyways)
>>>>
>>>> Therefore, even if we don't have to bet on the broadcast-time feerate market at signing time anymore
>>>> (since we can unilaterally bump), we still need some kind of prediction in preparation of making
>>>> funds available to bump the fees at broadcast time.
>>>> Apart from judging that 500sat/vb is probably more reasonable than 10sat/vb, this unfortunately
>>>> sounds pretty much crystal-ball-driven.
>>>>
>>>> We currently use the maximum of the 95th percentiles over 90-days windows over historical block chain
>>>> feerates. [4]
>>>>
>>>> ## 5. How much funds does my watchtower need?
>>>>
>>>> That's what we call the "reserve". Depending on your reserve feerate strategy it might vary over
>>>> time. This is easier to reason about with a per-contract reserve. For Revault it's pretty
>>>> straightforward since the Cancel transaction size is static: `reserve_feerate * cancel_size`. For
>>>> other protocols with dynamic transaction sizes (or even packages of transactions) it's less so. For
>>>> your Lightning channel you would probably take the maximum size of your commitment transaction
>>>> according to your HTLC exposure settings + the size of as many `htlc_success` transactions?
>>>>
>>>> Then you either have your software or your user guesstimate how many offchain contracts the
>>>> watchtower will have to watch, multiply that by the per-contract reserve and refill this amount (plus
>>>> some slack in practice). Once again, a UX tradeoff (not even mentioning the guesstimation UX):
>>>> overestimating leads to too many unallocated funds sitting on a hot wallet, underestimating means
>>>> (at best) inability to participate in new contracts or being "at risk" (not being able to enforce
>>>> all your contracts onchain at your reserve feerate) before a new refill.
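The per-contract reserve and refill arithmetic above can be sketched as follows (the sizes, feerate and slack factor are illustrative placeholders, not actual Revault constants):

```python
CANCEL_VSIZE = 200       # vbytes: assumed (illustrative) Cancel tx size
RESERVE_FEERATE = 500    # sat/vb: the chosen reserve feerate

def per_contract_reserve(vsize=CANCEL_VSIZE, feerate=RESERVE_FEERATE):
    """Funds to keep per delegated vault: reserve_feerate * cancel_size."""
    return vsize * feerate

def refill_amount(n_contracts, slack=1.1):
    """Watchtower refill: guesstimated number of contracts times the
    per-contract reserve, plus some slack."""
    return round(n_contracts * per_contract_reserve() * slack)
```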
>>>>
>>>> For vaults you likely have large-value UTxOs and small transactions (the Cancel is one-in one-out in
>>>> Revault). For some other applications with large transactions and lower-value UTxOs on average it's
>>>> likely that only part of the offchain contracts might be enforceable at a reasonable feerate. Is it
>>>> reasonable?
>>>>
>>>> ## 6. UTxO pool layout
>>>>
>>>> Now that you somehow managed to settle on a refill amount, how are you going to use these funds?
>>>> Also, you'll need to manage your pool across time (consolidating small coins, and probably fanning
>>>> out large ones).
>>>>
>>>> You could keep a single large UTxO and peel it as you need to sponsor transactions. But this means
>>>> that you need to create a coin of a specific value according to your need at the current feerate
>>>> estimation, hope to have it confirmed in a few blocks (at least for now! [5]), and hope that the
>>>> value won't be obsolete by the time it confirms. Also, you'd have to do that for any number of
>>>> Cancels, chaining feebump coin creation transactions off the change of the previous ones or replacing
>>>> them with more outputs. Both seem to become really unmanageable (and expensive) in many edge-cases,
>>>> shortening the time you have to confirm the actual Cancel transaction and creating uncertainty about
>>>> the reserve (how much is my just-in-time fanout going to cost me in fees that i need to refill in
>>>> advance on my watchtower wallet?).
>>>> This is less of a concern for protocols using CPFP to sponsor transactions, but they rely on a
>>>> policy rule specific to 2-parties contracts.
>>>>
>>>> Therefore for Revault we fan-out the coins per-vault in advance. We do so at refill time so the
>>>> refiller can give an excess to pay for the fees of the fanout transaction (which is reasonable since
>>>> it will occur just after the refilling transaction confirms). When the watchtower is asked to watch
>>>> for a new delegated vault it will allocate coins from the pool of fanned-out UTxOs to it (failing
>>>> that, it would refuse the delegation).
>>>> What is a good distribution of UTxOs amounts per vault? We want to minimize the number of coins,
>>>> still have coins small enough to not overpay (remember, we can't have change) and be able to bump a
>>>> Cancel up to the reserve feerate using these coins. The two latter constraints are directly in
>>>> contradiction as the minimal value of a coin usable at the reserve feerate (paying for its own input
>>>> fee + bumping the feerate by, say, 5sat/vb) is already pretty high. Therefore we decided to go with
>>>> two distributions per vault. The "reserve distribution" alone ensures that we can bump up to the
>>>> reserve feerate and is usable for high feerates. The "bonus distribution" is not, but contains
>>>> smaller coins useful to prevent overpayments during low and medium fee periods (which is most of the
>>>> time).
>>>> Both distributions are based on a basic geometric suite [6]. Each value is half the previous one.
>>>> This exponentially decreases the value, limiting the number of coins. But this also allows for
>>>> pretty small coins to exist and each coin's value is equal to the sum of the smaller coins,
>>>> or smaller by at most the value of the smallest coin. Therefore bounding the maximum overpayment to
>>>> the smallest coin's value [7].
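A toy illustration of the halving suite and the overpayment bound (the values are made up, and the greedy selection stands in for the combinatorial one of [7]):

```python
def coin_distribution(largest, smallest):
    """Values halving from `largest` down to `smallest`: each coin exceeds
    the sum of all smaller ones by at most the smallest coin's value."""
    values = []
    while largest >= smallest:
        values.append(largest)
        largest //= 2
    return values

def select_coins(distribution, target):
    """Greedy largest-first selection over a halving distribution;
    overpays by at most the smallest coin's value."""
    picked, total = [], 0
    for v in distribution:  # descending order
        if total + v <= target:
            picked.append(v)
            total += v
    if total < target:
        picked.append(distribution[-1])  # bounded top-up
    return picked
```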
>>>>
>>>> For the management of the UTxO pool across time we merged the consolidation with the fanout. When
>>>> fanning out a refilled UTxO, we scan the pool for coins that need to be consolidated according to a
>>>> heuristic. An instance of a heuristic is "the coin isn't allocated and would not have been able to
>>>> increase the fee at the median feerate over the past 90 days of blocks".
>>>> We had this assumption that feerate would tend to go up with time and therefore discarded having to
>>>> split some UTxOs from the pool. We however overlooked that a large increase in the exchange price of
>>>> BTC as we've seen during the past year could invalidate this assumption and that should arguably be
>>>> reconsidered.
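The consolidation heuristic above could look something like this (the input size and naming are assumptions for illustration, not the actual implementation):

```python
INPUT_VSIZE = 68  # vbytes: assumed size of a feebump input

def should_consolidate(coin_value, allocated, median_feerate_90d):
    """An unallocated coin that would not even have been able to increase
    the fee at the median feerate over the past 90 days of blocks gets
    merged at the next fanout."""
    return not allocated and coin_value <= INPUT_VSIZE * median_feerate_90d
```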
>>>>
>>>> ## 7. Bumping and re-bumping
>>>>
>>>> First of all, when to fee-bump? At fixed time intervals? At each block connection? It sounds like,
>>>> given a large enough timelock, you could try to greed by "trying your luck" at a lower feerate and
>>>> only re-bumping every N blocks. You would then start aggressively bumping at every block after M
>>>> blocks have passed. But that's actually a bet (in disguise?) that the next block feerate in M blocks
>>>> will be lower than the current one. In the absence of any predictive model it is more reasonable to
>>>> just start being aggressive immediately.
>>>> You probably want to base your estimates on `estimatesmartfee` and as a consequence you would re-bump
>>>> (if needed) after each block connection, when your estimates get updated and you notice your
>>>> transaction was not included in the block.
>>>>
>>>> In the event that you notice a significant portion of the block is filled with transactions paying
>>>> less than your own, you might want to start panicking and bump your transaction fees by a certain
>>>> percentage with no consideration for your fee estimator. You might skew miners incentives in doing
>>>> so: if you increase the fees by a factor of N, any miner with a fraction larger than 1/N of the
>>>> network hashrate now has an incentive to censor your transaction at first to get you to panic. Also
>>>> note this can happen if you want to pay the absolute fees for the 'pinning' attack mentioned in
>>>> section #2, and that might actually incentivize miners to perform it themselves..
>>>>
>>>> The gist is that the most effective way to bump and rebump (RBF the Cancel tx) seems to just be to
>>>> consider the `estimatesmartfee 2 CONSERVATIVE` feerate at every block your tx isn't included in, and
>>>> to RBF it if the feerate is higher.
>>>> In addition, we fall back to a blockchain-based estimation when estimates aren't available (eg if
>>>> the user stopped their WT for say an hour and we come back up): we use the 85th percentile over the
>>>> feerates in the last 6 blocks. Sure, miners can try to have an influence on that by stuffing their
>>>> blocks with large fee self-paying transactions, but they would need to:
>>>> 1. Be sure to catch a significant portion of the 6 blocks (at least 2, actually)
>>>> 2. Give up on 25% of the highest fee-paying transactions (assuming they got the 6 blocks; it's
>>>> proportionally larger and more uncertain the fewer of them they get)
>>>> 3. Hope that our estimator will fail and we need to fall back to the chain-based estimation
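The re-bumping decision and the fallback estimator can be sketched as follows (the function names and the feed of per-block feerates are assumptions; `smart_estimate` stands for the result of `estimatesmartfee 2 CONSERVATIVE`):

```python
def chain_fallback_feerate(recent_blocks):
    """85th percentile over the feerates in the last 6 blocks, used when
    `estimatesmartfee` has no estimate available."""
    feerates = sorted(f for block in recent_blocks[-6:] for f in block)
    return feerates[min(len(feerates) - 1, len(feerates) * 85 // 100)]

def rbf_target(current_feerate, smart_estimate, recent_blocks):
    """Called at each block connection while the Cancel is unconfirmed:
    returns the new feerate to RBF at, or None if no bump is needed.
    `smart_estimate` may be None if the estimator has no data."""
    target = (smart_estimate if smart_estimate is not None
              else chain_fallback_feerate(recent_blocks))
    return target if target > current_feerate else None
```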
>>>>
>>>> ## 8. Our study
>>>>
>>>> We essentially replayed the historical data with different deployment configurations (number of
>>>> participants and timelock) and probability of an event occurring (event being say an Unvault, an
>>>> invalid Unvault, a new delegation, ..). We then observed different metrics such as the time at risk
>>>> (when we can't enforce all our contracts at the reserve feerate at the same time), or the
>>>> operational cost.
>>>> We got the historical fee estimates data from Statoshi [9], Txstats [10] and the historical chain
>>>> data from Riccardo Casatta's `blocks_iterator` [11]. Thanks!
>>>>
>>>> The (research-quality..) code can be found at https://github.com/revault/research under the section
>>>> "Fee bumping". Again it's very Revault specific, but at least the data can probably be reused for
>>>> studying other protocols.
>>>>
>>>> ## 9. Insurances
>>>>
>>>> Of course, given it's all hacks and workarounds and there is no good answer to "what is a reasonable
>>>> feerate up to which we need to make contracts enforceable onchain?", there is definitely room for an
>>>> insurance market. But this enters the realm of opinions. Although i do have some (having discussed
>>>> this topic for the past years with different people), i would like to keep this post focused on the
>>>> technical aspects of this problem.
>>>>
>>>> [0] As far as i can tell, having offchain contracts be enforceable onchain by confirming a
>>>> transaction before the expiration of a timelock is a widely agreed-upon approach. And i don't think
>>>> we can opt for any other fundamentally different one, as you want to know you can claim back your
>>>> coins from a contract after a deadline before taking part in it.
>>>>
>>>> [1] The Real Revault (tm) involves more transactions, but for the sake of conciseness i only
>>>> detailed a minimum instance of the problem.
>>>>
>>>> [2] Only presigning part of the Unvault transactions allows delegating only part of the coins,
>>>> which can be abstracted as "delegate x% of your stash" in the user interface.
>>>>
>>>> [3] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017835.html
>>>>
>>>> [4] https://github.com/revault/research/blob/1df953813708287c32a15e771ba74957ec44f354/feebumping/model/statemachine.py#L323-L329
>>>>
>>>> [5] https://github.com/bitcoin/bitcoin/pull/23121
>>>>
>>>> [6] https://github.com/revault/research/blob/1df953813708287c32a15e771ba74957ec44f354/feebumping/model/statemachine.py#L494-L507
>>>>
>>>> [7] Of course this assumes a combinatorial coin selection, but i believe it's ok given we limit the
>>>> number of coins beforehand.
>>>>
>>>> [8] Although there is the argument to outbid a censorship, anyone censoring you isn't necessarily a
>>>> miner.
>>>>
>>>> [9] https://www.statoshi.info/
>>>>
>>>> [10] https://txstats.com/
>>>>
>>>> [11] https://github.com/RCasatta/blocks_iterator
>>>> _______________________________________________
>>>> bitcoin-dev mailing list
>>>> bitcoin-dev@lists•linuxfoundation.org
>>>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>



* Re: [bitcoin-dev] A fee-bumping model
  2021-11-30 15:19   ` darosior
  2021-12-07 17:24     ` Gloria Zhao
@ 2021-12-08 23:56     ` Antoine Riard
  1 sibling, 0 replies; 9+ messages in thread
From: Antoine Riard @ 2021-12-08 23:56 UTC (permalink / raw)
  To: darosior; +Cc: Bitcoin Protocol Discussion


Hi Antoine,

> It seems to me the only policy-level mitigation for RBF pinning around
the "don't decrease the absolute fees of a less-than-a-block mempool" would
be to drop the requirement on increasing absolute fees if the mempool is
"full enough" (and the feerate increases exponentially, of course).

Yes, it's hard to say the "less-than-a-block-mempool" scenario is long-term
realistic. In the future, you can expect liquidity operations to be
triggered as soon as the network mempools start to be empty.  At a given
block space price, there is always room to improve your routing topology.

That said, you would like the default block construction strategy to be
"all-weather" economically aligned. To build such a more robust strategy, I
think a miner would have an interest in leveling the "full enough" bar.

I still think a policy-level mitigation is possible, where you have one
replace-by-fee rate above X MB of blocks and another under X.
Responsibility is on the L2 fee-bumper to guarantee the honest bid is in
the X MB of blocks, or the malicious pinning attacker has to overbid.

At first sight, yes, committing the maximum tx size in the annex covered by
your counterparty's signature should still allow you to add a high-feerate
input. Though nice if we can save a consensus rule to fix pinnings.

> In any case, for Lightning i think it's a bad idea to re-introduce trust
on this side post anchor outputs. For Revault it's clearly out of the
question to introduce trust in your counterparties (why would you bother
having a fee-bumping mechanism in the >first place then?). Probably the
same holds for all offchain contracts.

Yeah it was a strawman exercise on the "i know of no other primitive that
can be used by multi-party" question :) I wouldn't recommend that kind of
fee-bumping "shared cache" scheme for a trust-minimized setup.
Maybe interesting for watchtowers/LSP topologies.

> Black swan event 2.0? Just rule n°3 is inherent to any kind of fee
estimation.

It's just the good old massive mempool congestion systemic risk known since
the LN whitepaper. AFAIK, anchor output fee-bumping schemes have not really
started the work to be robust against that. What I'm aiming to point out is
that it might be even harder to build a fault-tolerant fee-bumping strategy
because of the "limited rationality" of your local node towards the
behaviors of the other bitcoin users in the face of this phenomenon. Would
be nice to have more research on that front.

> I don't think any kind of mempool-based estimate generalizes well, since
at any point the expected time before the next block is 10 minutes (and a
lot can happen in 10min).

Sure, you might be outbid because of block variance, though if you're
ready to pay multiple RBF penalties, which are linear, you might adjust
your shots as a function of "real-time" mempool congestion.

> I'm very concerned that large stakeholders of the "offchain contracts
ecosystem" would just go this (easier) way and further increase mining
centralisation pressure.

*back on the whiteboard sweating on a consensus-enforced timestop primitive*

Cheers,
Antoine

On Tue. 30 Nov. 2021 at 10:19, darosior <darosior@protonmail•com> wrote:

> Hi Antoine,
>
> Thanks for your comment. I believe for Lightning it's simpler with regard
> to the management of the UTxO pool, but harder with regard to choosing
> a threat model.
> Responses inline.
>
>
> For any opened channel, ensure the confirmation of a Commitment
> transaction and the children HTLC-Success/HTLC-Timeout transactions. Note,
> in the Lightning security game you have to consider (at least) 4 types of
> players moves and incentives : your node, your channel counterparties, the
> miners, the crowd of bitcoin users. The number of the last type of players
> is unknown from your node, however it should not be forgotten you're in
> competition for block space, therefore their block demands bids should be
> anticipated and reacted to in consequence. With that remark in mind,
> implications for your LN fee-bumping strategy will be raised afterwards.
>
> For a LN service provider, on-chain overpayments bear on your
> operational costs, thus downgrading your economic competitiveness. For the
> average LN user, overpayment might price you out of a non-custodial LN
> deployment, as you don't have the minimal security budget to be on your own.
>
>
> I think this problem statement can be easily generalised to any offchain
> contract. And your points stand for all of them.
> "For any opened contract, ensure at any point the confirmation of a (set
> of) transaction(s) in a given number of blocks"
>
>
> Same issue with Lightning, we can be pinned today on the basis of
> replace-by-fee rule 3. We can also be blinded by network mempool
> partitions: a pinning counterparty can segregate all the full-nodes in as
> many subsets by broadcasting a different revoked Commitment transaction to
> each. For Revault, I think you can also do unlimited partitions by mutating
> the ANYONECANPAY-input of the Cancel.
>
>
> Well you can already do unlimited partitions by adding different inputs to
> it. You could malleate the witness, but since we are using Miniscript i'm
> confident you would only be able to in a marginal way.
>
>
> That said, if you have a distributed towers deployment, spread across the
> p2p network topology, and they can't be clustered together through
> cross-layers or intra-layer heuristics, you should be able to reliably
> observe such partitions. I think such distributed monitors are deployed by
> a few L1 merchants accepting 0-conf to detect naive double-spends.
>
>
> We should aim at more than a 0-conf (in)security level..
> It seems to me the only policy-level mitigation for RBF pinning around the
> "don't decrease the absolute fees of a less-than-a-block mempool" would be
> to drop the requirement on increasing absolute fees if the mempool is "full
> enough" (and the feerate increases exponentially, of course).
> Another approach could be by introducing new consensus rules as proposed
> by Jeremy last year [0]. If we go in the realm of new consensus rules, then
> i think that simply committing to a maximum tx size would fix pinning by
> RBF rule 3. Could be in the annex, or in the unused sequence bits (although
> they are currently used by Lightning, meh). You could also check in the output
> script that the input commits to this.
>
> [0]
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-September/018168.html
>
>
> Have we already discussed a fee-bumping "shared cache", a CPFP variation ?
> Strawman idea: Alice and Bob commit collateral inputs to a separate UTXO
> from the main "offchain contract" one. This UTXO is locked by a multi-sig.
> For any Commitment transaction pre-signed, also counter-sign a CPFP with
> top mempool feerate included, spending a Commitment anchor output and the
> shared-cache UTXO. If the fees spike,  you can re-sign a high-feerate CPFP,
> assuming interactivity. As the CPFP is counter-signed by everyone, the
> outputs can be CSV-1 encumbered to prevent pinnings. If the share-cache is
> feeded at parity, there shouldn't be an incentive to waste or maliciously
> inflate the feerate. I think this solution can be easily generalized to
> more than 2 counterparties by using a multi-signature scheme. Big issue, if
> the feerate is short due to fee spikes and you need to re-sign a
> higher-feerate CPFP, you're trusting your counterparty to interact, though
> arguably not worse than the current update fee mechanism.
>
>
> It really looks just like `update_fee`. Except maybe with the property
> that you have the channel liquidity not depend on the onchain feerate.
> In any case, for Lightning i think it's a bad idea to re-introduce trust
> on this side post anchor outputs. For Revault it's clearly out of the
> question to introduce trust in your counterparties (why would you bother
> having a fee-bumping mechanism in the first place then?). Probably the same
> holds for all offchain contracts.
>
>
> > For Lightning, it'd mean keeping an equivalent amount of funds as the
> sum of all your
> channels balances sitting there unallocated "just in case". This is not
> reasonable.
>
> Agree, game-theory wise, you would like to keep a full fee-bumping
> reserve, ready to burn as much in fees as the contested HTLC value, as it's
> the maximum gain of your counterparty. Though perfect equilibrium is hard
> to achieve because your malicious counterparty might have an edge pushing
> you to broadcast your Commitment first by withholding HTLC resolution.
>
> Fractional fee-bumping reserves are much more realistic to expect in the
> LN network. Lower fee-bumping reserve, higher liquidity deployed, in theory
> higher routing fees. By observing historical feerates, average offchain
> balances at risk and routing fees expected gains, you should be able to
> discover an equilibrium where higher levels of reserve aren't worth the
> opportunity cost. I guess this equilibrium could be your LN fee-bumping
> reserve max feerate.
>
> Note, I think the LN approach is a bit different from what suits a custody
> protocol like Revault, as you compute a direct return on the frozen
> fee-bumping liquidity. With Revault, if you have numerous bitcoins
> protected, it might be more interesting to adopt a "buy the mempool,
> stupid" strategy than risking fund safety for a few percentage points of
> interest returns.
>
>
> True for routing nodes. For wallets (if receiving funds), it's not about
> an investment: just users' expectation of being able to transact without
> risking the loss of their funds (ie being able to enforce their contract
> onchain). Although wallets are much less at risk.
>
>
> This is where the "anticipate the crowd of bitcoin users' moves" point can
> be laid out. As the crowd of bitcoin users' fee-bumping reserves are
> ultimately unknown to your node, you should be ready to be a bit more
> conservative than the vanilla fee-bumping strategies shipped by default.
> In case of massive mempool congestion, your additional conservatism might
> get your time-sensitive transactions confirmed first, out-gaming the
> crowd of bitcoin users. First problem: if all offchain bitcoin software
> adopts that strategy we might inflate the worst-case feerate to the
> benefit of the miners, without holistically improving block throughput.
> Second problem: your class of offchain bitcoin software might have a
> ridiculously small fee-bumping reserve compared
> to other classes of offchain bitcoin software (Revault > Lightning) and
> just be priced out by design in case of mempool congestion. Third problem:
> as the number of offchain bitcoin applications should go up with time, your
> fee-bumping reserve levels based on historical data might always be late
> by one "bank-run" scenario.
>
>
> Black swan event 2.0? Just rule n°3 is inherent to any kind of fee
> estimation.
>
> For Lightning, if you're short in fee-bumping reserves you might still do
> preemptive channel closures, either cooperatively or unilaterally and get
> back the off-chain liquidity to protect the more economically interesting
> channels. Though again, that kind of automatic behavior might be compelling
> at the individual node-level, but make the mempool congestion worse
> holistically.
>
>
> Yeah so we are back to the "fractional reserve" model: you can only
> enforce X% of the offchain contracts you participate in. Actually it's
> even an added assumption: that you still have operating contracts, with
> honest counterparties.
>
>
> In case of massive mempool congestion, you might try to front-run the
> crowd of bitcoin users relying on block connections for fee-bumping, and
> thus start your fee-bumping as soon as you observe feerate group
> fluctuations in your local mempool(s).
>
>
> I don't think any kind of mempool-based estimate generalizes well, since
> at any point the expected time before the next block is 10 minutes (and a
> lot can happen in 10min).
>
> Also you might process your fee-bumping ticks on a local clock instead of
> block connections in case of time-dilation or deeper eclipse attacks on
> your local node. Your view of the chain might be compromised but not your
> ability to broadcast transactions, thanks to emergency channels of
> communication (in the non-LN sense... though in fact what about
> transactions wrapped in onions?).
>
>
> Oh, yeah, i didn't make "not getting eclipsed" (or more generally
> "data availability") explicit as an assumption since it's generally one
> made by participants of any offchain contract. In this case you can't
> even have decent fee estimation, so you are screwed anyway.
>
>
> Yes, the question stays open of how you enforce this block insurance
> market. Reputation, which might be best avoided due to its latent
> centralization effect, might be hard to stack and audit reliably for an
> emergency mechanism running, hopefully, once in a halvening period. Maybe
> some cryptographic or economically based mechanism relying on slashing or
> swaps could be found...
>
>
> Unfortunately, given current mining centralisation, pools are in a very
> good position to offer pretty decent SLAs around that. With a block space
> insurance, you of course don't need all these convoluted fee-bumping hacks.
> I'm very concerned that large stakeholders of the "offchain contracts
> ecosystem" would just go this (easier) way and further increase mining
> centralisation pressure.
>
> I agree that a cryptography-based scheme around this type of insurance
> services would be the best way out.
>
>
> Antoine
>
> Le lun. 29 nov. 2021 à 09:34, darosior via bitcoin-dev <
> bitcoin-dev@lists•linuxfoundation.org> a écrit :
>
>> Hi everyone,
>>
>> Fee-bumping is paramount to the security of many protocols building on
>> Bitcoin, as they require the
>> confirmation of a transaction (which might be presigned) before the
>> expiration of a timelock at any
>> point after the establishment of the contract.
>>
>> The part of Revault using presigned transactions (the delegation from a
>> large to a smaller multisig)
>> is no exception. We have been working on how to approach this for a while
>> now and i'd like to share
>> what we have in order to open a discussion on this problem, so central to
>> what seems to be The Right
>> Way [0] to build on Bitcoin but which has yet to be discussed in detail
>> (at least publicly).
>>
>> I'll discuss what we came up with for Revault (at least for what will be
>> its first iteration) but my
>> intent with posting to the mailing list is more to frame the questions to
>> this problem we are all
>> going to face rather than present the results of our study tailored to
>> the Revault usecase.
>> The discussion is still pretty Revault-centric (as it's the case study)
>> but hopefully this can help
>> future protocol designers and/or start a discussion around what
>> everyone's doing for existing ones.
>>
>>
>> ## 1. Reminder about Revault
>>
>> The part of Revault we are interested in for this study is the delegation
>> process, and more
>> specifically the application of spending policies by network monitors
>> (watchtowers).
>> Coins are received on a large multisig. Participants of this large
>> multisig create 2 [1]
>> transactions. The Unvault, spending a deposit UTxO, creates an output
>> paying either to the small
>> multisig after a timelock or to the large multisig immediately. The
>> Cancel, spending the Unvault
>> output through the non-timelocked path, creates a new deposit UTxO.
>> Participants regularly exchange the Cancel transaction signatures for
>> each deposit, sharing the
>> signatures with the watchtowers they operate. They then optionally [2]
>> sign the Unvault transaction
>> and share the signatures with the small multisig participants who can in
>> turn use them to proceed
>> with a spending. Watchtowers can enforce spending policies (say, can't
>> Unvault outside of business
>> hours) by having the Cancel transaction be confirmed before the
>> expiration of the timelock.
>>
>>
>> ## 2. Problem statement
>>
>> For any delegated vault, ensure the confirmation of a Cancel transaction
>> in a configured number of
>> blocks at any point. In so doing, minimize the overpayments and the UTxO
>> set footprint. Overpayments
>> increase the burden on the watchtower operator by increasing the required
>> frequency of refills of the
>> fee-bumping wallet, which is already the worst user experience. You are
>> likely to manage a number of
>> UTxOs with your number of vaults, which comes at a cost for you as well
>> as everyone running a full
>> node.
>>
>> Note that this assumes miners are economically rational, are
>> incentivized by *public* fees and that
>> you have a way to propagate your fee-bumped transaction to them. We also
>> don't consider the block
>> space bounds.
>>
>> In the previous paragraph and the following text, "vault" can generally
>> be replaced with "offchain
>> contract".
>>
>>
>> ## 3. With presigned transactions
>>
>> As you all know, the first difficulty is being able to
>> unilaterally enforce your contract
>> onchain. That is, any participant must be able to unilaterally bump the
>> fees of a transaction even
>> if it was co-signed by other participants.
>>
>> For Revault we can afford to introduce malleability in the Cancel
>> transaction since there is no
>> second-stage transaction depending on its txid. Therefore it is
>> pre-signed with ANYONECANPAY. We
>> can't use ANYONECANPAY|SINGLE since it would open a pinning vector [3].
>> Note how we can't leverage
>> the carve out rule, and neither can any other more-than-two-parties
>> contract.
>> This has a significant implication for the rest, as we are entirely
>> burning fee-bumping UTxOs.
>>
>> This opens up a pinning vector, or at least a significant nuisance: any
>> other party can largely
>> increase the absolute fee without increasing the feerate, leveraging the
>> RBF rules to prevent you
>> from replacing it without paying an insane fee. And you might not see it
>> in your own mempool and
>> could only suppose it's happening by receiving non-full blocks or with
>> transactions paying a lower
>> feerate.
>> Unfortunately i know of no other primitive that can be used by
>> multi-party (i mean, >2) presigned
>> transaction protocols for fee-bumping that isn't (more) vulnerable to
>> pinning.
>>
>>
>> ## 4. We are still betting on future feerate
>>
>> The problem is still missing one more constraint. "Ensuring confirmation
>> at any time" involves ensuring
>> confirmation at *any* feerate, which you *cannot* do. So what's the
>> limit? In theory you should be ready
>> to burn as much in fees as the value of the funds you want to get out of
>> the contract. So... For us
>> it'd mean keeping for each vault an equivalent amount of funds sitting
>> there on the watchtower's hot
>> wallet. For Lightning, it'd mean keeping an equivalent amount of funds as
>> the sum of all your
>> channels balances sitting there unallocated "just in case". This is not
>> reasonable.
>>
>> So you need to keep a maximum feerate, above which you won't be able to
>> ensure the enforcement of
>> all your contracts onchain at the same time. We call that the "reserve
>> feerate" and you can have
>> different strategies for choosing it, for instance:
>> - The 85th percentile over the last year of transactions feerates
>> - The maximum historical feerate
>> - The maximum historical feerate adjusted in dollars (makes more sense
>> but introduces a (set of?)
>>   trusted oracle(s) in a security-critical component)
>> - Picking a random high feerate (why not? It's an arbitrary assumption
>> anyways)
>>
>> Therefore, even if we don't have to bet on the broadcast-time feerate
>> market at signing time anymore
>> (since we can unilaterally bump), we still need some kind of prediction
>> in preparation of making
>> funds available to bump the fees at broadcast time.
>> Apart from judging that 500sat/vb is probably more reasonable than
>> 10sat/vb, this unfortunately
>> sounds pretty much crystal-ball-driven.
>>
>> We currently use the maximum of the 95th percentiles over 90-day windows
>> of historical blockchain
>> feerates. [4]
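The reserve feerate computation just quoted can be sketched as follows. This is a simplified, hypothetical stand-in for the study's actual per-block computation (see the repository linked in section 8); the function name and the use of per-sample windows are mine:

```python
from statistics import quantiles

def reserve_feerate(feerates, window=90, pct=95):
    """Maximum of the `pct`th percentiles over sliding `window`-sample
    windows of historical feerates (in sat/vb)."""
    best = 0.0
    for start in range(len(feerates) - window + 1):
        # quantiles() returns the 99 cut points for n=100; index pct-1
        # is the pct-th percentile of this window.
        cut_points = quantiles(feerates[start:start + window], n=100)
        best = max(best, cut_points[pct - 1])
    return best

# With a constant feerate history, the reserve is just that feerate.
```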
>>
>>
>> ## 5. How much funds does my watchtower need?
>>
>> That's what we call the "reserve". Depending on your reserve feerate
>> strategy it might vary over
>> time. This is easier to reason about with a per-contract reserve. For
>> Revault it's pretty
>> straightforward since the Cancel transaction size is static:
>> `reserve_feerate * cancel_size`. For
>> other protocols with dynamic transaction sizes (or even packages of
>> transactions) it's less so. For
>> your Lightning channel you would probably take the maximum size of your
>> commitment transaction
>> according to your HTLC exposure settings + the size of as many
>> `htlc_success` transactions?
>>
>> Then you either have your software or your user guesstimate how many
>> offchain contracts the
>> watchtower will have to watch, multiply that by the per-contract reserve and
>> refill this amount (plus
>> some slack in practice). Once again, a UX tradeoff (not even mentioning
>> the guesstimation UX):
>> overestimating leads to too many unallocated funds sitting on a hot
>> wallet, underestimating means
>> (at best) inability to participate in new contracts or being "at risk"
>> (not being able to enforce
>> all your contracts onchain at your reserve feerate) before a new refill.
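The arithmetic of the paragraph above can be made concrete with a small sketch. The function name, the slack factor and the example values are illustrative, not Revault's actual parameters:

```python
def refill_amount(n_contracts, reserve_feerate_sat_vb, cancel_size_vb,
                  slack=1.1):
    """Watchtower refill estimate: the per-contract reserve (static
    Cancel size times the reserve feerate) times the guesstimated
    number of delegated contracts, plus some slack."""
    per_contract = reserve_feerate_sat_vb * cancel_size_vb
    return int(n_contracts * per_contract * slack)

# e.g. 10 vaults, a 500 sat/vb reserve feerate and a ~200 vb Cancel:
# 10 * 100_000 sats * 1.1 slack = 1_100_000 sats to refill.
```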
>>
>> For vaults you likely have large-value UTxOs and small transactions (the
>> Cancel is one-in one-out in
>> Revault). For some other applications with large transactions and
>> lower-value UTxOs on average it's
>> likely that only part of the offchain contracts might be enforceable at a
>> reasonable feerate. Is it
>> reasonable?
>>
>>
>> ## 6. UTxO pool layout
>>
>> Now that you somehow managed to settle on a refill amount, how are you
>> going to use these funds?
>> Also, you'll need to manage your pool across time (consolidating small
>> coins, and probably fanning
>> out large ones).
>>
>> You could keep a single large UTxO and peel it as you need to sponsor
>> transactions. But this means
>> that you need to create a coin of a specific value according to your need
>> at the current feerate
>> estimation, hope to have it confirmed in a few blocks (at least for now!
>> [5]), and hope that the
>> value won't be obsolete by the time it confirms. Also, you'd have to do
>> that for any number of
>> Cancel, chaining feebump coin creation transactions off the change of the
>> previous ones or replacing
>> them with more outputs. Both seem to become really un-manageable (and
>> expensive) in many edge-cases,
>> shortening the time you have to confirm the actual Cancel transaction and
>> creating uncertainty about
>> the reserve (how much is my just-in-time fanout going to cost me in fees
>> that i need to refill in
>> advance on my watchtower wallet?).
>> This is less of a concern for protocols using CPFP to sponsor
>> transactions, but they rely on a
>> policy rule specific to 2-parties contracts.
>>
>> Therefore for Revault we fan-out the coins per-vault in advance. We do so
>> at refill time so the
>> refiller can give an excess to pay for the fees of the fanout transaction
>> (which is reasonable since
>> it will occur just after the refilling transaction confirms). When the
>> watchtower is asked to watch
>> for a new delegated vault it will allocate coins from the pool of
>> fanned-out UTxOs to it (failing
>> that, it would refuse the delegation).
>> What is a good distribution of UTxOs amounts per vault? We want to
>> minimize the number of coins,
>> still have coins small enough to not overpay (remember, we can't have
>> change) and be able to bump a
>> Cancel up to the reserve feerate using these coins. The two latter
>> constraints are directly in
>> contradiction as the minimal value of a coin usable at the reserve
>> feerate (paying for its own input
>> fee + bumping the feerate by, say, 5sat/vb) is already pretty high.
>> Therefore we decided to go with
>> two distributions per vault. The "reserve distribution" alone ensures
>> that we can bump up to the
>> reserve feerate and is usable for high feerates. The "bonus distribution"
>> is not, but contains
>> smaller coins useful to prevent overpayments during low and medium fee
>> periods (which is most of the
>> time).
>> Both distributions are based on a basic geometric series [6]. Each value
>> is half the previous one.
>> This exponentially decreases the value, limiting the number of coins. But
>> this also allows for
>> pretty small coins to exist and each coin's value is equal to the sum of
>> the smaller coins,
>> or smaller by at most the value of the smallest coin, thereby bounding
>> the maximum overpayment to
>> the smallest coin's value [7].
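A minimal sketch of such a halving distribution (my own simplification; the actual per-vault distributions are computed in the code referenced at [6]):

```python
def coin_values(total, n_coins):
    """Split `total` sats into `n_coins` fee-bumping coins where each
    coin is half the previous one. The largest coin is
    total / (2 - 2**(1 - n_coins)) so that the series sums to ~total."""
    largest = total / (2 - 2 ** (1 - n_coins))
    return [int(largest / 2 ** i) for i in range(n_coins)]

coins = coin_values(150_000, 4)
# coins == [80000, 40000, 20000, 10000]: each coin equals the sum of
# the smaller coins plus the smallest one, which bounds the maximum
# overpayment as described above.
```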
>>
>> For the management of the UTxO pool across time we merged the
>> consolidation with the fanout. When
>> fanning out a refilled UTxO, we scan the pool for coins that need to be
>> consolidated according to a
>> heuristic. An instance of a heuristic is "the coin isn't allocated and
>> would not have been able to
>> increase the fee at the median feerate over the past 90 days of blocks".
>> We had this assumption that feerate would tend to go up with time and
>> therefore discarded having to
>> split some UTxOs from the pool. We however overlooked that a large
>> increase in the exchange price of
>> BTC as we've seen during the past year could invalidate this assumption
>> and that should arguably be
>> reconsidered.
>>
>>
>> ## 7. Bumping and re-bumping
>>
>> First of all, when to fee-bump? At fixed time intervals? At each block
>> connection? It sounds like,
>> given a large enough timelock, you could try to greed by "trying your
>> luck" at a lower feerate and
>> only re-bumping every N blocks. You would then start aggressively bumping
>> at every block after M
>> blocks have passed. But that's actually a bet (in disguise?) that the
>> next block feerate in M blocks
>> will be lower than the current one. In the absence of any predictive
>> model it is more reasonable to
>> just start being aggressive immediately.
>> You probably want to base your estimates on `estimatesmartfee` and as a
>> consequence you would re-bump
>> (if needed) after each block connection, when your estimates get updated
>> and you notice your
>> transaction was not included in the block.
>>
>> In the event that you notice a significant portion of the block is filled
>> with transactions paying
>> less than your own, you might want to start panicking and bump your
>> transaction fees by a certain
>> percentage with no consideration for your fee estimator. You might skew
>> miners incentives in doing
>> so: if you increase the fees by a factor of N, any miner with a fraction
>> larger than 1/N of the
>> network hashrate now has an incentive to censor your transaction at first
>> to get you to panic. Also
>> note this can happen if you want to pay the absolute fees for the
>> 'pinning' attack mentioned in
>> section #3, and that might actually incentivize miners to perform it
>> themselves..
>>
>> The gist is that the most effective way to bump and rebump (RBF the
>> Cancel tx) seems to just be to
>> consider the `estimatesmartfee 2 CONSERVATIVE` feerate at every block
>> your tx isn't included in, and
>> to RBF it if the feerate is higher.
>> In addition, we fallback to a block chain based estimation when estimates
>> aren't available (eg if
>> the user stopped their WT for say an hour and we come back up): we use the
>> 85th percentile over the
>> feerates in the last 6 blocks. Sure, miners can try to have an influence
>> on that by stuffing their
>> blocks with large fee self-paying transactions, but they would need to:
>> 1. Be sure to catch a significant portion of the 6 blocks (at least 2,
>> actually)
>> 2. Give up on 25% of the highest fee-paying transactions (assuming they
>> got the 6 blocks, it's
>>    proportionally larger and more uncertain as they get fewer of them)
>> 3. Hope that our estimator will fail and we need to fall back to the
>> chain-based estimation
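The re-bumping rule described above can be sketched as a small decision helper. This is a hypothetical function of mine: `estimate` stands in for the result of `estimatesmartfee 2 CONSERVATIVE` (`None` when estimates aren't available), and the fallback is the 85th percentile of the feerates seen in the last blocks:

```python
from statistics import quantiles

def rebump_feerate(estimate, last_blocks_feerates, current):
    """Return the feerate (sat/vb) to RBF at after a block connection,
    or None if the current feerate is still sufficient. Falls back to
    the 85th percentile of recent block feerates when no estimate is
    available."""
    if estimate is None:
        # Flatten the per-block lists of transaction feerates.
        flat = [f for block in last_blocks_feerates for f in block]
        estimate = quantiles(flat, n=100)[84]
    return estimate if estimate > current else None
```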
>>
>>
>> ## 8. Our study
>>
>> We essentially replayed the historical data with different deployment
>> configurations (number of
>> participants and timelock) and probability of an event occurring (event
>> being say an Unvault, an
>> invalid Unvault, a new delegation, ..). We then observed different
>> metrics such as the time at risk
>> (when we can't enforce all our contracts at the reserve feerate at the
>> same time), or the
>> operational cost.
>> We got the historical fee estimates data from Statoshi [9], Txstats [10]
>> and the historical chain
>> data from Riccardo Casatta's `blocks_iterator` [11]. Thanks!
>>
>> The (research-quality..) code can be found at
>> https://github.com/revault/research under the section
>> "Fee bumping". Again it's very Revault specific, but at least the data
>> can probably be reused for
>> studying other protocols.
>>
>>
>> ## 9. Insurances
>>
>> Of course, given it's all hacks and workarounds and there is no good
>> answer to "what is a reasonable
>> feerate up to which we need to make contracts enforceable onchain?",
>> there is definitely room for an
>> insurance market. But this enters the realm of opinions. Although i do
>> have some (having discussed
>> this topic for the past years with different people), i would like to
>> keep this post focused on the
>> technical aspects of this problem.
>>
>>
>>
>> [0] As far as i can tell, having offchain contracts be enforceable
>> onchain by confirming a
>> transaction before the expiration of a timelock is a widely agreed-upon
>> approach. And i don't think
>> we can opt for any other fundamentally different one, as you want to know
>> you can claim back your
>> coins from a contract after a deadline before taking part in it.
>>
>> [1] The Real Revault (tm) involves more transactions, but for the sake of
>> conciseness i only
>> detailed a minimum instance of the problem.
>>
>> [2] Only presigning part of the Unvault transactions allows delegating
>> only part of the coins,
>> which can be abstracted as "delegate x% of your stash" in the user
>> interface.
>>
>> [3]
>> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017835.html
>>
>> [4]
>> https://github.com/revault/research/blob/1df953813708287c32a15e771ba74957ec44f354/feebumping/model/statemachine.py#L323-L329
>>
>> [5] https://github.com/bitcoin/bitcoin/pull/23121
>>
>> [6]
>> https://github.com/revault/research/blob/1df953813708287c32a15e771ba74957ec44f354/feebumping/model/statemachine.py#L494-L507
>>
>> [7] Of course this assumes a combinatorial coin selection, but i believe
>> it's ok given we limit the
>> number of coins beforehand.
>>
>> [8] Although there is the argument to outbid a censorship, anyone
>> censoring you isn't necessarily a
>> miner.
>>
>> [9] https://www.statoshi.info/
>>
>> [10] https://txstats.com/
>>
>> [11] https://github.com/RCasatta/blocks_iterator
>> _______________________________________________
>> bitcoin-dev mailing list
>> bitcoin-dev@lists•linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
>
>

[-- Attachment #2: Type: text/html, Size: 37931 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [bitcoin-dev] A fee-bumping model
  2021-12-07 17:24     ` Gloria Zhao
  2021-12-08 14:51       ` darosior
@ 2021-12-09  0:55       ` Antoine Riard
  1 sibling, 0 replies; 9+ messages in thread
From: Antoine Riard @ 2021-12-09  0:55 UTC (permalink / raw)
  To: Gloria Zhao; +Cc: Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 41683 bytes --]

Hi Gloria,

For LN, I think 3 tower reward models have been discussed : per-penalty
on-chain bounty/per-job micropayment/customer subscription. If curious, see
the wip specification :
https://github.com/sr-gi/bolt13/blob/master/13-watchtowers.md

> - Do we expect watchtowers tracking multiple vaults to be batching
multiple
> Cancel transaction fee-bumps?

For LN, I can definitely see LSPs batching the closure of their spokes,
with one CPFP spending multiple anchor outputs of commitment transactions,
and RBF'ing when needed.

> - Do we expect vault users to be using multiple watchtowers for a better
> trust model? If so, and we're expecting batched fee-bumps, won't those
> conflict?

Even worse, a malicious counterparty could force a unilateral closure by
the honest participant and observe the fee-bumping transaction propagation
by the towers to discover their full-node topologies. It might be good to
have an ordering algo among your towers to select who is fee-bumping first,
and broadcast all when you're nearing timelock expiration.

> Well stated about CPFP carve out. I suppose the generalization is that
> allowing n extra ancestorcount=2 descendants to a transaction means it can
> help contracts with <=n+1 parties (more accurately, outputs)? I wonder if
> it's possible to devise a different approach for limiting
> ancestors/descendants, e.g. by height/width/branching factor of the family
> instead of count... :shrug:

I think CPFP carve out can be deprecated once package relay and a
pinning-hardened RBF is deployed ?  Like if your counterparty is abusing
the ancestors/descendants limits, your RBF'ed package should evict the
malicious pinning starting by the root commitment transaction (I think).
And I believe it can be generalized to n-parties contracts, if your
transaction includes one "any-contract-can-spend" anchor output.

> - Should the fee-bumping strategy depend on how close you are to your
> timelock expiry? (though this seems like a potential privacy leak, and the
> game theory could get weird as you mentioned).

Yes, at first it's hard to predict how tight it is going to be and it's
nice to save on fees. At some point, you might fall out of this
fee-bumping warm-up phase to accelerate the rate and start to be more
aggressive. In that direction, see the DLC spec fee-bumping recommendation :
https://github.com/discreetlogcontracts/dlcspecs/blob/master/Non-Interactive-Protocol.md

Note, at least for LN, the transaction weight isn't proportional to the
value at stake, and there is a focal point where it's more interesting to
save fee reserves rather than keep bumping.

> - As long as you have a good fee estimator (i.e. given a current mempool,
can get an accurate feerate given a % probability of getting into target
block n), is there any reason to devise a fee-bumping strategy beyond
picking a time interval?

You might be an LSP; you observe rapid changes in the global network HTLC
traffic and would like to react accordingly. You accelerate the
fee-bumping to free/reallocate your liquidity elsewhere.

> So the equation is more like: a miner with 1/N of the hashrate, employing
this censorship strategy, gains only if `max(f2-g2, 0) > N * (f1-g1)`. More
broadly, the miner only profits if `f2` is significantly higher than `g2

This is where it becomes hard. From the "limited rationality" of a
fee-bumping node, `g2` is unknown, and you might be incentivized to
overshoot to front-run the `g2` issuer (?)
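To make the quoted inequality concrete, a small illustration with made-up numbers (the helper is mine, directly transcribing Gloria's condition):

```python
def censorship_profitable(hashrate_share, f1, g1, f2, g2):
    """Gloria's condition: a miner with `hashrate_share` (i.e. 1/N) of
    the network profits from censoring your transaction now, forgoing
    f1 - g1 in fees, only if the expected later gain max(f2 - g2, 0)
    exceeds N * (f1 - g1). All fee values are illustrative, in sats."""
    n = 1 / hashrate_share
    return max(f2 - g2, 0) > n * (f1 - g1)

# A 20%-hashrate miner (N=5) forgoing 1000 sats now profits only if
# your panicked replacement later beats the competing package by more
# than 5000 sats: the bet is on how much you overshoot, not how much
# you bump.
```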

> In general, I agree it would really suck to inadvertently create a game
where miners can drive feerates up by triggering desperation-driven
fee-bumping procedures. I guess this is a reason to avoid
increasingly-aggressive feebumping, or strategies where we predictably
overshoot.

Good topic of research! Few other vectors of analysis :
https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-February/002569.html

Cheers,
Antoine

Le mar. 7 déc. 2021 à 12:24, Gloria Zhao <gloriajzhao@gmail•com> a écrit :

> Hi Darosior and Ariard,
>
> Thank you for your work looking into fee-bumping so thoroughly, and for
> sharing your results. I agree about fee-bumping's importance in contract
> security and feel that it's often under-prioritized. In general, what
> you've described in this post, to me, is strong motivation for some of the
> proposed changes to RBF we've been discussing. Mostly, I have some
> questions.
>
> > The part of Revault we are interested in for this study is the
> delegation process, and more
> > specifically the application of spending policies by network monitors
> (watchtowers).
>
> I'd like to better understand how fee-bumping would be used, i.e. how the
> watchtower model works:
> - Do all of the vault parties both deposit to the vault and a refill/fee
> to the watchtower, is there a reward the watchtower collects for a
> successful Cancel, or something else? (Apologies if there's a thorough
> explanation somewhere that I haven't already seen).
> - Do we expect watchtowers tracking multiple vaults to be batching
> multiple Cancel transaction fee-bumps?
> - Do we expect vault users to be using multiple watchtowers for a better
> trust model? If so, and we're expecting batched fee-bumps, won't those
> conflict?
>
> > For Revault we can afford to introduce malleability in the Cancel
> transaction since there is no
> > second-stage transaction depending on its txid. Therefore it is
> pre-signed with ANYONECANPAY. We
> > can't use ANYONECANPAY|SINGLE since it would open a pinning vector [3].
> Note how we can't leverage
> > the carve out rule, and neither can any other more-than-two-parties
> contract.
>
> We've already talked about this offline, but I'd like to point out here
> that even transactions signed with ANYONECANPAY|ALL can be pinned by RBF
> unless we add an ancestor score rule. [0], [1] (numbers are inaccurate,
> Cancel Tx feerates wouldn't be that low, but just to illustrate what the
> attack would look like)
>
> [0]:
> https://user-images.githubusercontent.com/25183001/135104603-9e775062-5c8d-4d55-9bc9-6e9db92cfe6d.png
> [1]:
> https://user-images.githubusercontent.com/25183001/145044333-2f85da4a-af71-44a1-bc21-30c388713a0d.png
>
> > can't use ANYONECANPAY|SINGLE since it would open a pinning vector [3].
> Note how we can't leverage
> > the carve out rule, and neither can any other more-than-two-parties
> contract.
>
> Well stated about CPFP carve out. I suppose the generalization is that
> allowing n extra ancestorcount=2 descendants to a transaction means it can
> help contracts with <=n+1 parties (more accurately, outputs)? I wonder if
> it's possible to devise a different approach for limiting
> ancestors/descendants, e.g. by height/width/branching factor of the family
> instead of count... :shrug:
>
> > You could keep a single large UTxO and peel it as you need to sponsor
> transactions. But this means
> > that you need to create a coin of a specific value according to your
> need at the current feerate
> > estimation, hope to have it confirmed in a few blocks (at least for now!
> [5]), and hope that the
> > value won't be obsolete by the time it confirmed.
>
> IIUC, a Cancel transaction can be generalized as a 1-in-1-out where the
> input is presigned with counterparties, SIGHASH_ANYONECANPAY. The fan-out
> UTXO pool approach is a clever solution. I also think this smells like a
> case where improving lower-level RBF rules is more appropriate than
> requiring applications to write workarounds and generate extra
> transactions. Seeing that the BIP125#2 (no new unconfirmed inputs)
> restriction really hurts in this case, if that rule were removed, would you
> be able to simply keep the 1 big UTXO per vault and cut out the exact
> nValue you need to fee-bump Cancel transactions? Would that feel less like
> "burning" for the sake of fee-bumping?
>
> > First of all, when to fee-bump? At fixed time intervals? At each block
> connection? It sounds like,
> > given a large enough timelock, you could try to greed by "trying your
> luck" at a lower feerate and
> > only re-bumping every N blocks. You would then start aggressively
> bumping at every block after M
> > blocks have passed.
>
> I'm wondering if you also considered other questions like:
> - Should a fee-bumping strategy be dependent upon the rate of incoming
> transactions? To me, it seems like the two components are (1) what's in the
> mempool and (2) what's going to trickle into the mempool between now and
> the target block. The first component is best-effort keeping
> incentive-compatible mempool; historical data and crystal ball look like
> the only options for incorporating the 2nd component.
> - Should the fee-bumping strategy depend on how close you are to your
> timelock expiry? (though this seems like a potential privacy leak, and the
> game theory could get weird as you mentioned).
> - As long as you have a good fee estimator (i.e. given a current mempool,
> can get an accurate feerate given a % probability of getting into target
> block n), is there any reason to devise a fee-bumping strategy beyond
> picking a time interval?
>
> It would be interesting to see stats on the spread of feerates in blocks
> during periods of fee fluctuation.
>
> > > In the event that you notice a consequent portion of the block is
> filled with transactions paying
> > > less than your own, you might want to start panicking and bump your
> transaction fees by a certain
> > > percentage with no consideration for your fee estimator. You might
> skew miners incentives in doing
> > > so: if you increase the fees by a factor of N, any miner with a
> fraction larger than 1/N of the
> > > network hashrate now has an incentive to censor your transaction at
> first to get you to panic.
>
> > Yes I think miner-harvesting attacks should be weighed carefully in the
> design of offchain contracts fee-bumping strategies, at least in the future
> when the mining reward exhausts further.
>
> Miner-harvesting (such cool naming!) is interesting, but I want to clarify
> the value of N - I don't think it's the factor by which you increase the
> fees on just your transaction.
>
> To codify: your transaction pays a fee of `f1` right now and might pay a
> fee of `f2` in a later block that the miner expects to mine with 1/N
> probability. The economically rational miner isn't incentivized if simply
> `f2 = N * f1` unless their mempool is otherwise empty.
> By omitting your transaction in this block, the miner can include another
> transaction/package paying `g1` fees instead, so they lose `f1-g1` in fees
> right now. In the future block, they have the choice between collecting
> `f2` or `g2` (from another transaction/package) in fees, so their gain is
> `max(f2-g2, 0)`.
> So the equation is more like: a miner with 1/N of the hashrate, employing
> this censorship strategy, gains only if `max(f2-g2, 0) > N * (f1-g1)`. More
> broadly, the miner only profits if `f2` is significantly higher than `g2`
> and `f1` is about the same feerate as everything else in your mempool: it
> seems like they're betting on how much you _overshoot_, not how much you
> bump.
>
> In general, I agree it would really suck to inadvertently create a game
> where miners can drive feerates up by triggering desperation-driven
> fee-bumping procedures. I guess this is a reason to avoid
> increasingly-aggressive feebumping, or strategies where we predictably
> overshoot.
>
> Slightly related question: in contracts, generally, the timelock deadline
> is revealed in the script, so the miner knows how "desperate" we are,
> right? Is that a problem? For Revault, if your Cancel transaction is a
> keypath spend (I think I remember reading that somewhere?) and you don't
> reveal the script, they don't see your timelock deadline, yes?
>
> Again, thanks for the digging and sharing. :)
>
> Best,
> Gloria
>
> On Tue, Nov 30, 2021 at 3:27 PM darosior via bitcoin-dev <
> bitcoin-dev@lists•linuxfoundation.org> wrote:
>
>> Hi Antoine,
>>
>> Thanks for your comment. I believe for Lightning it's simpler with regard
>> to the management of the UTxO pool, but harder with regard to choosing
>> a threat model.
>> Responses inline.
>>
>>
>> For any opened channel, ensure the confirmation of a Commitment
>> transaction and the children HTLC-Success/HTLC-Timeout transactions.
>> Note, in the Lightning security game you have to consider (at least) 4
>> types of players' moves and incentives: your node, your channel
>> counterparties, the miners, and the crowd of bitcoin users. The number
>> of the last type of players is unknown to your node; however, it should
>> not be forgotten that you're in competition for block space, therefore
>> their bids for block space should be anticipated and reacted to
>> accordingly. With that remark in mind, implications for your LN
>> fee-bumping strategy will be raised afterwards.
>>
>> For a LN service provider, on-chain overpayments bear on your
>> operational costs, thus degrading your economic competitiveness. For the
>> average LN user, overpayment might price them out of a non-custodial LN
>> deployment, as they don't have the minimal security budget to be on
>> their own.
>>
>>
>> I think this problem statement can be easily generalised to any offchain
>> contract. And your points stand for all of them.
>> "For any opened contract, ensure at any point the confirmation of a (set
>> of) transaction(s) in a given number of blocks"
>>
>>
>> Same issue with Lightning: we can be pinned today on the basis of
>> replace-by-fee rule 3. We can also be blinded by network mempool
>> partitions: a pinning counterparty can segregate all the full nodes into
>> as many subsets by broadcasting a different revoked Commitment
>> transaction to each. For Revault, I think you can also do unlimited
>> partitions by mutating the ANYONECANPAY-input of the Cancel.
>>
>>
>> Well you can already do unlimited partitions by adding different inputs
>> to it. You could malleate the witness, but since we are using Miniscript
>> i'm confident you would only be able to in a marginal way.
>>
>>
>> That said, if you have a distributed towers deployment, spread across the
>> p2p network topology, and they can't be clustered together through
>> cross-layers or intra-layer heuristics, you should be able to reliably
>> observe such partitions. I think such distributed monitors are deployed
>> by a few L1 merchants accepting 0-conf to detect naive double-spends.
>>
>>
>> We should aim for more than a 0-conf (in)security level..
>> It seems to me the only policy-level mitigation for RBF pinning around
>> the "don't decrease the absolute fees of a less-than-a-block mempool"
>> rule would be to drop the requirement on increasing absolute fees if the
>> mempool is "full enough" (and the feerate increases exponentially, of
>> course).
>> Another approach could be introducing new consensus rules as proposed by
>> Jeremy last year [0]. If we go into the realm of new consensus rules,
>> then i think that simply committing to a maximum tx size would fix
>> pinning by RBF rule 3. It could be in the annex, or in the unused
>> sequence bits (although those are currently used by Lightning, meh). You
>> could also check in the output script that the input commits to this.
>>
>> [0]
>> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-September/018168.html
>>
>>
>> Have we already discussed a fee-bumping "shared cache", a CPFP
>> variation? Strawman idea: Alice and Bob commit collateral inputs to a
>> separate UTXO from the main "offchain contract" one. This UTXO is locked
>> by a multi-sig. For any pre-signed Commitment transaction, also
>> counter-sign a CPFP at the top mempool feerate, spending a Commitment
>> anchor output and the shared-cache UTXO. If the fees spike, you can
>> re-sign a higher-feerate CPFP, assuming interactivity. As the CPFP is
>> counter-signed by everyone, the outputs can be CSV-1 encumbered to
>> prevent pinning. If the shared cache is funded at parity, there
>> shouldn't be an incentive to waste or maliciously inflate the feerate. I
>> think this solution can be easily generalized to more than 2
>> counterparties by using a multi-signature scheme. Big issue: if the
>> feerate is short due to fee spikes and you need to re-sign a
>> higher-feerate CPFP, you're trusting your counterparty to interact,
>> though arguably not worse than the current update fee mechanism.
>>
>>
>> It really looks just like `update_fee`, except maybe with the property
>> that the channel liquidity does not depend on the onchain feerate.
>> In any case, for Lightning i think it's a bad idea to re-introduce trust
>> on this side post anchor outputs. For Revault it's clearly out of the
>> question to introduce trust in your counterparties (why would you bother
>> having a fee-bumping mechanism in the first place then?). Probably the same
>> holds for all offchain contracts.
>>
>>
>> > For Lightning, it'd mean keeping an equivalent amount of funds as the
>> > sum of all your channels balances sitting there unallocated "just in
>> > case". This is not reasonable.
>>
>> Agree, game-theory wise, you would like to keep a full fee-bumping
>> reserve, ready to burn as much in fees as the contested HTLC value, as
>> it's the maximum gain of your counterparty. Though perfect equilibrium
>> is hard to achieve because your malicious counterparty might have an
>> edge pushing you to broadcast your Commitment first by withholding HTLC
>> resolution.
>>
>> Fractional fee-bumping reserves are much more realistic to expect in the
>> LN network. Lower fee-bumping reserve, higher liquidity deployed, in theory
>> higher routing fees. By observing historical feerates, average offchain
>> balances at risk and routing fees expected gains, you should be able to
>> discover an equilibrium where higher levels of reserve aren't worth the
>> opportunity cost. I guess this equilibrium could be your LN fee-bumping
>> reserve max feerate.
>>
>> Note, I think the LN approach is a bit different from what suits a
>> custody protocol like Revault,  as you compute a direct return of the
>> frozen fee-bumping liquidity. With Revault, if you have numerous bitcoins
>> protected, it might be more interesting to adopt a "buy the mempool,
>> stupid" strategy than risking fund safety for a few percent of interest
>> returns.
>>
>>
>> True for routing nodes. For wallets (if receiving funds), it's not about
>> an investment: just users' expectation of being able to transact without
>> risking the loss of their funds (ie being able to enforce their contract
>> onchain). Although wallets are much less at risk.
>>
>>
>> This is where the "anticipate the crowd of bitcoin users' moves" point
>> can be laid out. As the crowd of bitcoin users' fee-bumping reserves are
>> ultimately unknown to your node, you should be ready to be a bit more
>> conservative than the vanilla fee-bumping strategies shipped by default.
>> In case of massive mempool congestion, your additional conservatism
>> might get your time-sensitive transactions confirmed ahead of the crowd
>> of bitcoin users. First problem: if all offchain bitcoin software adopts
>> that strategy we might inflate the worst-case feerate to the benefit of
>> the miners, without holistically improving block throughput. Second
>> problem: your class of offchain bitcoin software might have a
>> ridiculously small fee-bumping reserve compared to other classes of
>> offchain bitcoin software (Revault > Lightning) and just be priced out
>> by design in case of mempool congestion. Third problem: as the number of
>> offchain bitcoin applications should go up with time, your fee-bumping
>> reserve levels based on historical data might always be late by one
>> "bank-run" scenario.
>>
>>
>> Black swan event 2.0? Though, problem n°3 is inherent to any kind of fee
>> estimation.
>>
>> For Lightning, if you're short in fee-bumping reserves you might still do
>> preemptive channel closures, either cooperatively or unilaterally and get
>> back the off-chain liquidity to protect the more economically interesting
>> channels. Though again, that kind of automatic behavior might be compelling
>> at the individual node-level, but make the mempool congestion worse
>> holistically.
>>
>>
>> Yeah so we are back to the "fractional reserve" model: you can only
>> enforce X% of the offchain contracts you participate in.. Actually it's
>> even an added assumption: that you still have operating contracts, with
>> honest counterparties.
>>
>>
>> In case of massive mempool congestion, you might try to front-run the
>> crowd of bitcoin users relying on block connections for fee-bumping, and
>> thus start your fee-bumping as soon as you observe feerate groups
>> fluctuations in your local mempool(s).
>>
>>
>> I don't think any kind of mempool-based estimate generalizes well, since
>> at any point the expected time before the next block is 10 minutes (and a
>> lot can happen in 10min).
>>
>> Also you might want to process your fee-bumping ticks on a local clock
>> instead of block connections, in case of time-dilation or deeper eclipse
>> attacks on your local node. Your view of the chain might be compromised
>> but not your ability to broadcast transactions, thanks to emergency
>> channels of communication (in the non-LN sense... though in fact, what
>> about transactions wrapped in onions?).
>>
>>
>> Oh, yeah, i didn't make "not getting eclipsed" (or more generally
>> "data availability") explicit as an assumption since it's generally one
>> made by
>> participants of any offchain contract. In this case you can't even have
>> decent fee estimation, so you are screwed anyways.
>>
>>
>> Yes, the question of how you enforce this block insurance market stays
>> open. Reputation, which might be best avoided due to the latent
>> centralization effect, might be hard to track and audit reliably for an
>> emergency mechanism running, hopefully, once in a halvening period.
>> Maybe some cryptographic or economically based mechanism relying on
>> slashing or swaps could be found...
>>
>>
>> Unfortunately, given current mining centralisation, pools are in a very
>> good position to offer pretty decent SLAs around that. With a block space
>> insurance, you of course don't need all these convoluted fee-bumping hacks.
>> I'm very concerned that large stakeholders of the "offchain contracts
>> ecosystem" would just go this (easier) way and further increase mining
>> centralisation pressure.
>>
>> I agree that a cryptography-based scheme around this type of insurance
>> services would be the best way out.
>>
>>
>> Antoine
>>
>> Le lun. 29 nov. 2021 à 09:34, darosior via bitcoin-dev <
>> bitcoin-dev@lists•linuxfoundation.org> a écrit :
>>
>>> Hi everyone,
>>>
>>> Fee-bumping is paramount to the security of many protocols building on
>>> Bitcoin, as they require the
>>> confirmation of a transaction (which might be presigned) before the
>>> expiration of a timelock at any
>>> point after the establishment of the contract.
>>>
>>> The part of Revault using presigned transactions (the delegation from a
>>> large to a smaller multisig)
>>> is no exception. We have been working on how to approach this for a
>>> while now and i'd like to share
>>> what we have in order to open a discussion on this problem so central to
>>> what seems to be The Right
>>> Way [0] to build on Bitcoin but which has yet to be discussed in detail
>>> (at least publicly).
>>>
>>> I'll discuss what we came up with for Revault (at least for what will be
>>> its first iteration) but my
>>> intent in posting to the mailing list is more to frame the questions
>>> around this problem we are all
>>> going to face rather than present the results of our study tailored to
>>> the Revault usecase.
>>> The discussion is still pretty Revault-centric (as it's the case study)
>>> but hopefully this can help
>>> future protocol designers and/or start a discussion around what
>>> everyone's doing for existing ones.
>>>
>>>
>>> ## 1. Reminder about Revault
>>>
>>> The part of Revault we are interested in for this study is the
>>> delegation process, and more
>>> specifically the application of spending policies by network monitors
>>> (watchtowers).
>>> Coins are received on a large multisig. Participants of this large
>>> multisig create 2 [1]
>>> transactions. The Unvault, spending a deposit UTxO, creates an output
>>> paying either to the small
>>> multisig after a timelock or to the large multisig immediately. The
>>> Cancel, spending the Unvault
>>> output through the non-timelocked path, creates a new deposit UTxO.
>>> Participants regularly exchange the Cancel transaction signatures for
>>> each deposit, sharing the
>>> signatures with the watchtowers they operate. They then optionally [2]
>>> sign the Unvault transaction
>>> and share the signatures with the small multisig participants who can in
>>> turn use them to proceed
>>> with a spending. Watchtowers can enforce spending policies (say, can't
>>> Unvault outside of business
>>> hours) by having the Cancel transaction be confirmed before the
>>> expiration of the timelock.
>>>
>>>
>>> ## 2. Problem statement
>>>
>>> For any delegated vault, ensure the confirmation of a Cancel transaction
>>> in a configured number of
>>> blocks at any point. In so doing, minimize the overpayments and the UTxO
>>> set footprint. Overpayments
>>> increase the burden on the watchtower operator by increasing the
>>> required frequency of refills of the
>>> fee-bumping wallet, which is already the worst user experience. You are
>>> likely to manage a number of
>>> UTxOs scaling with your number of vaults, which comes at a cost for you as well
>>> as everyone running a full
>>> node.
>>>
>>> Note that this assumes miners are economically rational, are
>>> incentivized by *public* fees and that
>>> you have a way to propagate your fee-bumped transaction to them. We also
>>> don't consider the block
>>> space bounds.
>>>
>>> In the previous paragraph and the following text, "vault" can generally
>>> be replaced with "offchain
>>> contract".
>>>
>>>
>>> ## 3. With presigned transactions
>>>
>>> As you all know, the first difficulty is being able to
>>> unilaterally enforce your contract
>>> onchain. That is, any participant must be able to unilaterally bump the
>>> fees of a transaction even
>>> if it was co-signed by other participants.
>>>
>>> For Revault we can afford to introduce malleability in the Cancel
>>> transaction since there is no
>>> second-stage transaction depending on its txid. Therefore it is
>>> pre-signed with ANYONECANPAY. We
>>> can't use ANYONECANPAY|SINGLE since it would open a pinning vector [3].
>>> Note how we can't leverage
>>> the carve out rule, and neither can any other more-than-two-parties
>>> contract.
>>> This has a significant implication for the rest, as we are entirely
>>> burning fee-bumping UTxOs.
>>>
>>> This opens up a pinning vector, or at least a significant nuisance: any
>>> other party can largely
>>> increase the absolute fee without increasing the feerate, leveraging the
>>> RBF rules to prevent you
>>> from replacing it without paying an insane fee. And you might not see it
>>> in your own mempool and
>>> could only suppose it's happening by receiving non-full blocks or with
>>> transactions paying a lower
>>> feerate.
>>> Unfortunately i know of no other primitive that can be used by
>>> multi-party (i mean, >2) presigned
>>> transactions protocols for fee-bumping that aren't (more) vulnerable to
>>> pinning.
>>>
>>>
>>> ## 4. We are still betting on future feerate
>>>
>>> The problem is still missing one more constraint. "Ensuring confirmation
>>> at any time" involves ensuring
>>> confirmation at *any* feerate, which you *cannot* do. So what's the
>>> limit? In theory you should be ready
>>> to burn as much in fees as the value of the funds you want to get out of
>>> the contract. So... For us
>>> it'd mean keeping for each vault an equivalent amount of funds sitting
>>> there on the watchtower's hot
>>> wallet. For Lightning, it'd mean keeping an equivalent amount of funds
>>> as the sum of all your
>>> channels balances sitting there unallocated "just in case". This is not
>>> reasonable.
>>>
>>> So you need to keep a maximum feerate, above which you won't be able to
>>> ensure the enforcement of
>>> all your contracts onchain at the same time. We call that the "reserve
>>> feerate" and you can have
>>> different strategies for choosing it, for instance:
>>> - The 85th percentile over the last year of transactions feerates
>>> - The maximum historical feerate
>>> - The maximum historical feerate adjusted in dollars (makes more sense
>>> but introduces a (set of?)
>>>   trusted oracle(s) in a security-critical component)
>>> - Picking a random high feerate (why not? It's an arbitrary assumption
>>> anyways)
>>>
>>> Therefore, even if we don't have to bet on the broadcast-time feerate
>>> market at signing time anymore
>>> (since we can unilaterally bump), we still need some kind of prediction
>>> in preparation of making
>>> funds available to bump the fees at broadcast time.
>>> Apart from judging that 500sat/vb is probably more reasonable than
>>> 10sat/vbyte, this unfortunately
>>> sounds pretty much crystal-ball-driven.
>>>
>>> We currently use the maximum of the 95th percentiles over 90-days
>>> windows over historical block chain
>>> feerates. [4]
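As a rough illustration of this strategy, here is a minimal sketch. It is not the code from [4]; the per-block feerate input and the window handling are simplifying assumptions:

```python
import statistics

def reserve_feerate(block_feerates, window_size=144 * 90):
    """Maximum of the 95th percentiles of feerates over 90-day windows.

    block_feerates: one representative feerate per block (sat/vb), in
    chronological order. 144 blocks/day * 90 days per window.
    """
    best = 0
    for start in range(0, len(block_feerates), window_size):
        window = block_feerates[start:start + window_size]
        if len(window) < 2:
            continue  # not enough data points for a percentile
        p95 = statistics.quantiles(window, n=20)[18]  # 95th percentile
        best = max(best, p95)
    return best
```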
>>>
>>>
>>> ## 5. How much funds does my watchtower need?
>>>
>>> That's what we call the "reserve". Depending on your reserve feerate
>>> strategy it might vary over
>>> time. This is easier to reason about with a per-contract reserve. For
>>> Revault it's pretty
>>> straightforward since the Cancel transaction size is static:
>>> `reserve_feerate * cancel_size`. For
>>> other protocols with dynamic transaction sizes (or even packages of
>>> transactions) it's less so. For
>>> your Lightning channel you would probably take the maximum size of your
>>> commitment transaction
>>> according to your HTLC exposure settings + the size of as many
>>> `htlc_success` transactions?
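Since the Cancel transaction size is static, the per-vault reserve computation is a one-liner. A sketch, where the 200 vbyte size is a placeholder and not Revault's actual transaction size:

```python
def cancel_reserve(reserve_feerate_sat_vb, cancel_vsize_vb=200):
    """Per-vault fee reserve: `reserve_feerate * cancel_size`, in sats.

    The Cancel vsize is a made-up placeholder for illustration.
    """
    return reserve_feerate_sat_vb * cancel_vsize_vb
```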
>>>
>>> Then you either have your software or your user guesstimate how many
>>> offchain contracts the
>>> watchtower will have to watch, multiply that by the per-contract reserve and
>>> refill this amount (plus
>>> some slack in practice). Once again, a UX tradeoff (not even mentioning
>>> the guesstimation UX):
>>> overestimating leads to too many unallocated funds sitting on a hot
>>> wallet, underestimating means
>>> (at best) inability to participate in new contracts or being "at risk"
>>> (not being able to enforce
>>> all your contracts onchain at your reserve feerate) before a new refill.
>>>
>>> For vaults you likely have large-value UTxOs and small transactions (the
>>> Cancel is one-in one-out in
>>> Revault). For some other applications with large transactions and
>>> lower-value UTxOs on average it's
>>> likely that only part of the offchain contracts might be enforceable at
>>> a reasonable feerate. Is it
>>> reasonable?
>>>
>>>
>>> ## 6. UTxO pool layout
>>>
>>> Now that you somehow managed to settle on a refill amount, how are you
>>> going to use these funds?
>>> Also, you'll need to manage your pool across time (consolidating small
>>> coins, and probably fanning
>>> out large ones).
>>>
>>> You could keep a single large UTxO and peel it as you need to sponsor
>>> transactions. But this means
>>> that you need to create a coin of a specific value according to your
>>> need at the current feerate
>>> estimation, hope to have it confirmed in a few blocks (at least for now!
>>> [5]), and hope that the
>>> value won't be obsolete by the time it confirms. Also, you'd have to do
>>> that for any number of
>>> Cancel, chaining feebump coin creation transactions off the change of
>>> the previous ones or replacing
>>> them with more outputs. Both seem to become really unmanageable (and
>>> expensive) in many edge-cases,
>>> shortening the time you have to confirm the actual Cancel transaction
>>> and creating uncertainty about
>>> the reserve (how much is my just-in-time fanout going to cost me in fees
>>> that i need to refill in
>>> advance on my watchtower wallet?).
>>> This is less of a concern for protocols using CPFP to sponsor
>>> transactions, but they rely on a
>>> policy rule specific to 2-parties contracts.
>>>
>>> Therefore for Revault we fan-out the coins per-vault in advance. We do
>>> so at refill time so the
>>> refiller can give an excess to pay for the fees of the fanout
>>> transaction (which is reasonable since
>>> it will occur just after the refilling transaction confirms). When the
>>> watchtower is asked to watch
>>> for a new delegated vault it will allocate coins from the pool of
>>> fanned-out UTxOs to it (failing
>>> that, it would refuse the delegation).
>>> What is a good distribution of UTxOs amounts per vault? We want to
>>> minimize the number of coins,
>>> still have coins small enough to not overpay (remember, we can't have
>>> change) and be able to bump a
>>> Cancel up to the reserve feerate using these coins. The two latter
>>> constraints are directly in
>>> contradiction as the minimal value of a coin usable at the reserve
>>> feerate (paying for its own input
>>> fee + bumping the feerate by, say, 5sat/vb) is already pretty high.
>>> Therefore we decided to go with
>>> two distributions per vault. The "reserve distribution" alone ensures
>>> that we can bump up to the
>>> reserve feerate and is usable for high feerates. The "bonus
>>> distribution" is not, but contains
>>> smaller coins useful to prevent overpayments during low and medium fee
>>> periods (which is most of the
>>> time).
>>> Both distributions are based on a simple geometric sequence [6]. Each value
>>> is half the previous one.
>>> This exponentially decreases the value, limiting the number of coins.
>>> But this also allows for
>>> pretty small coins to exist and each coin's value is equal to the sum of
>>> the smaller coins,
>>> or smaller by at most the value of the smallest coin. This bounds the
>>> maximum overpayment by
>>> the smallest coin's value [7].
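A minimal sketch of such a halving distribution (the function name is made up, and rounding, dust limits and the reserve/bonus split are ignored):

```python
def coin_distribution(total, n_coins):
    """Geometric sequence where each coin is half the previous one.

    The largest value v solves v * (2 - 2**(1 - n)) = total, since the
    values are v, v/2, v/4, ... Overpayment when selecting coins is then
    bounded by the smallest coin's value.
    """
    largest = total / (2 - 2 ** (1 - n_coins))
    return [largest / 2 ** i for i in range(n_coins)]
```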
>>>
>>> For the management of the UTxO pool across time we merged the
>>> consolidation with the fanout. When
>>> fanning out a refilled UTxO, we scan the pool for coins that need to be
>>> consolidated according to a
>>> heuristic. An instance of a heuristic is "the coin isn't allocated and
>>> would not have been able to
>>> increase the fee at the median feerate over the past 90 days of blocks".
>>> We had this assumption that feerate would tend to go up with time and
>>> therefore discarded having to
>>> split some UTxOs from the pool. We however overlooked that a large
>>> increase in the exchange price of
>>> BTC as we've seen during the past year could invalidate this assumption
>>> and that should arguably be
>>> reconsidered.
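The consolidation heuristic given as an instance above can be sketched as follows (the input size and parameter names are placeholder assumptions):

```python
def should_consolidate(coin_value, allocated, median_feerate_90d,
                       input_vsize_vb=68):
    """Heuristic: consolidate a coin if it is unallocated and could not
    have increased the fee at the median feerate over the past 90 days of
    blocks (i.e. it doesn't even pay for spending its own input).
    """
    own_input_fee = median_feerate_90d * input_vsize_vb
    return (not allocated) and coin_value <= own_input_fee
```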
>>>
>>>
>>> ## 7. Bumping and re-bumping
>>>
>>> First of all, when to fee-bump? At fixed time intervals? At each block
>>> connection? It sounds like,
>>> given a large enough timelock, you could try to be greedy by "trying your
>>> luck" at a lower feerate and
>>> only re-bumping every N blocks. You would then start aggressively
>>> bumping at every block after M
>>> blocks have passed. But that's actually a bet (in disguise?) that the
>>> next block feerate in M blocks
>>> will be lower than the current one. In the absence of any predictive
>>> model it is more reasonable to
>>> just start being aggressive immediately.
>>> You probably want to base your estimates on `estimatesmartfee` and as a
>>> consequence you would re-bump
>>> (if needed) after each block connection, when your estimates get updated
>>> and you notice your
>>> transaction was not included in the block.
>>>
>>> In the event that you notice a considerable portion of the block is filled
>>> with transactions paying
>>> less than your own, you might want to start panicking and bump your
>>> transaction fees by a certain
>>> percentage with no consideration for your fee estimator. You might skew
>>> miners' incentives in doing
>>> so: if you increase the fees by a factor of N, any miner with a fraction
>>> larger than 1/N of the
>>> network hashrate now has an incentive to censor your transaction at
>>> first to get you to panic. Also
>>> note this can happen if you want to pay the absolute fees for the
>>> 'pinning' attack mentioned in
>>> section #2, and that might actually incentivize miners to perform it
>>> themselves..
>>>
>>> The gist is that the most effective way to bump and rebump (RBF the
>>> Cancel tx) seems to just be to
>>> consider the `estimatesmartfee 2 CONSERVATIVE` feerate at every block
>>> your tx isn't included in, and
>>> to RBF it if the feerate is higher.
>>> In addition, we fallback to a block chain based estimation when
>>> estimates aren't available (eg if
>>> the user stopped their WT for say an hour and we come back up): we use
>>> the 85th percentile over the
>>> feerates in the last 6 blocks. Sure, miners can try to have an influence
>>> on that by stuffing their
>>> blocks with large fee self-paying transactions, but they would need to:
>>> 1. Be sure to catch a significant portion of the 6 blocks (at least 2,
>>> actually)
>>> 2. Give up on 25% of the highest fee-paying transactions (assuming they
>>> got the 6 blocks, it's
>>>    proportionally larger and more uncertain as they get fewer of them)
>>> 3. Hope that our estimator will fail and we need to fall back to the
>>> chain-based estimation
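Putting the strategy and its fallback together, a minimal sketch (feerates in sat/vb; `None` stands for `estimatesmartfee 2 CONSERVATIVE` returning no estimate; all names are illustrative):

```python
import statistics

def next_bump_feerate(smart_estimate, last_six_blocks_feerates,
                      current_feerate):
    """Re-bumping decision at each block connection.

    smart_estimate: the `estimatesmartfee 2 CONSERVATIVE` feerate, or None
    when unavailable (e.g. the watchtower was just restarted).
    last_six_blocks_feerates: feerates of transactions in the last 6
    blocks, used for the fallback (85th percentile).
    Returns the feerate to RBF to, or None to leave the tx as-is.
    """
    if smart_estimate is not None:
        target = smart_estimate
    else:
        # Fallback: 85th percentile over the recent blocks' feerates.
        target = statistics.quantiles(last_six_blocks_feerates, n=100)[84]
    return target if target > current_feerate else None
```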
>>>
>>>
>>> ## 8. Our study
>>>
>>> We essentially replayed the historical data with different deployment
>>> configurations (number of
>>> participants and timelock) and probability of an event occurring (event
>>> being say an Unvault, an
>>> invalid Unvault, a new delegation, ..). We then observed different
>>> metrics such as the time at risk
>>> (when we can't enforce all our contracts at the reserve feerate at the
>>> same time), or the
>>> operational cost.
>>> We got the historical fee estimates data from Statoshi [9], Txstats [10]
>>> and the historical chain
>>> data from Riccardo Casatta's `blocks_iterator` [11]. Thanks!
>>>
>>> The (research-quality..) code can be found at
>>> https://github.com/revault/research under the section
>>> "Fee bumping". Again it's very Revault specific, but at least the data
>>> can probably be reused for
>>> studying other protocols.
>>>
>>>
>>> ## 9. Insurances
>>>
>>> Of course, given it's all hacks and workarounds and there is no good
>>> answer to "what is a reasonable
>>> feerate up to which we need to make contracts enforceable onchain?",
>>> there is definitely room for an
>>> insurance market. But this enters the realm of opinions. Although i do
>>> have some (having discussed
>>> this topic for the past years with different people), i would like to
>>> keep this post focused on the
>>> technical aspects of this problem.
>>>
>>>
>>>
>>> [0] As far as i can tell, having offchain contracts be enforceable
>>> onchain by confirming a
>>> transaction before the expiration of a timelock is a widely agreed-upon
>>> approach. And i don't think
>>> we can opt for any other fundamentally different one, as you want to
>>> know you can claim back your
>>> coins from a contract after a deadline before taking part in it.
>>>
>>> [1] The Real Revault (tm) involves more transactions, but for the sake
>>> of conciseness i only
>>> detailed a minimum instance of the problem.
>>>
>>> [2] Only presigning part of the Unvault transactions allows delegating
>>> only part of the coins,
>>> which can be abstracted as "delegate x% of your stash" in the user
>>> interface.
>>>
>>> [3]
>>> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017835.html
>>>
>>> [4]
>>> https://github.com/revault/research/blob/1df953813708287c32a15e771ba74957ec44f354/feebumping/model/statemachine.py#L323-L329
>>>
>>> [5] https://github.com/bitcoin/bitcoin/pull/23121
>>>
>>> [6]
>>> https://github.com/revault/research/blob/1df953813708287c32a15e771ba74957ec44f354/feebumping/model/statemachine.py#L494-L507
>>>
>>> [7] Of course this assumes a combinatorial coin selection, but i believe
>>> it's ok given we limit the
>>> number of coins beforehand.
>>>
>>> [8] Although there is the argument to outbid a censorship, anyone
>>> censoring you isn't necessarily a
>>> miner.
>>>
>>> [9] https://www.statoshi.info/
>>>
>>> [10] https://txstats.com/
>>>
>>> [11] https://github.com/RCasatta/blocks_iterator
>>> _______________________________________________
>>> bitcoin-dev mailing list
>>> bitcoin-dev@lists•linuxfoundation.org
>>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>>
>>
>>
>



* Re: [bitcoin-dev] A fee-bumping model
  2021-11-29 14:27 [bitcoin-dev] A fee-bumping model darosior
  2021-11-30  1:43 ` Antoine Riard
@ 2021-12-09 13:50 ` Peter Todd
  1 sibling, 0 replies; 9+ messages in thread
From: Peter Todd @ 2021-12-09 13:50 UTC (permalink / raw)
  To: darosior, Bitcoin Protocol Discussion


On Mon, Nov 29, 2021 at 02:27:23PM +0000, darosior via bitcoin-dev wrote:
> ## 2. Problem statement
> 
> For any delegated vault, ensure the confirmation of a Cancel transaction in a configured number of
> blocks at any point. In so doing, minimize the overpayments and the UTxO set footprint. Overpayments
> increase the burden on the watchtower operator by increasing the required frequency of refills of the
> fee-bumping wallet, which is already the worst user experience. You are likely to manage a number of
> UTxOs with your number of vaults, which comes at a cost for you as well as everyone running a full
> node.
> 
> Note that this assumes miners are economically rational, are incentivized by *public* fees and that
> you have a way to propagate your fee-bumped transaction to them. We also don't consider the block
> space bounds.
> 
> In the previous paragraph and the following text, "vault" can generally be replaced with "offchain
> contract".

For this section I think it'd help if you re-wrote it mathematically in terms
of probabilities, variance, and costs. It's impossible to ensure confirmation
with 100% probability, so obviously we can start by asking what is the cost of
failing to get a confirmation by the deadline?

Now suppose that cost is X. Note how the lowest _variance_ approach would be to
pay a fee of X immediately: that would get us the highest possible probability
of securing a confirmation prior to the deadline, without paying more than that
confirmation is worth.

Of course, this is silly! So the next step is to trade off some variance -
higher probability of failure - for lower expected cost. A trivial way to do
that could be to just bump the fee linearly between now and that deadline. That
> lowers expected cost, but does increase the probability of failure. We can
> of course account for this in the expected cost calculation.


One final nuance here is if this whole process is visible in the UI we might
want to take into account user discomfort: if I know the process could fail,
the user will probably be happier if it succeeds quickly, even if the
probability of success in the future is still very high.


FWIW the approach taken by the OpenTimestamps calendars is a trivial linear
increase. While they don't have deadlines in the same sense as your
application, there is a trade-off between cost and confirmation time. So our
strategy is to spend money at a constant rate by simply bumping the fee by the
same amount at every new block. I could improve this by using knowledge of the
mempool, but so far I haven't bothered.
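The constant-rate strategy described above is essentially a one-liner loop. A minimal sketch, where `broadcast_replacement`, `is_confirmed` and `wait_for_block` are hypothetical stand-ins for real node RPC calls (e.g. via bitcoind's JSON-RPC interface):

```python
# Minimal sketch of a constant-rate fee-bumping loop: on every new block,
# replace the pending transaction with one paying a fixed amount more.
# The callbacks are placeholders for real wallet/node plumbing.

INCREMENT_SAT_PER_VB = 1      # constant bump applied at every block

def bump_until_confirmed(tx, start_feerate, is_confirmed,
                         broadcast_replacement, wait_for_block):
    feerate = start_feerate
    broadcast_replacement(tx, feerate)
    while not is_confirmed(tx):
        wait_for_block()
        # BIP 125 requires each replacement to pay strictly more than the
        # transaction it replaces, which a constant increment satisfies.
        feerate += INCREMENT_SAT_PER_VB
        broadcast_replacement(tx, feerate)
    return feerate
```

Since the increment is constant per block, the total amount spent grows linearly with the time to confirmation, i.e. money is spent at a constant rate as described.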

-- 
https://petertodd.org 'peter'[:-1]@petertodd.org



* Re: [bitcoin-dev] A fee-bumping model
@ 2021-11-30  1:47 Prayank
  0 siblings, 0 replies; 9+ messages in thread
From: Prayank @ 2021-11-30  1:47 UTC (permalink / raw)
  To: darosior; +Cc: Bitcoin Dev


Good morning darosior,

Subject of the email looks interesting and I have few comments on the things shared:

> The part of Revault we are interested in for this study is the delegation process, and more specifically the application of spending policies by network monitors (watchtowers). Participants regularly exchange the Cancel transaction signatures for each deposit, sharing the signatures with the watchtowers they operate. Watchtowers can enforce spending policies (say, can't Unvault outside of business hours) by having the Cancel transaction be confirmed before the expiration of the timelock.

What are the privacy issues associated with such watchtowers?

> ## 4. We are still betting on future feerate
> The problem is still missing one more constraint. "Ensuring confirmation at any time" involves ensuring confirmation at *any* feerate, which you *cannot* do.

Agree

> historical feerate: We currently use the maximum of the 95th percentiles over 90-day windows over
> historical block chain feerates.

Disagree that fee rates used in past should matter.

> Apart from judging that 500sat/vb is probably more reasonable than 10sat/vbyte, this unfortunately sounds pretty much crystal-ball-driven.

Agree

> ## 7. Bumping and re-bumping
> First of all, when to fee-bump? At fixed time intervals? At each block connection? It sounds like, given a large enough timelock, you could try to greed by "trying your luck" at a lower feerate and only re-bumping every N blocks. You would then start aggressively bumping at every block after M blocks have passed.

Agree
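The lazy-then-aggressive schedule quoted above is easy to state precisely. A sketch, where N and M are left as parameters (the concrete values below are made up for the example):

```python
# Sketch of the "try your luck, then panic" schedule quoted above: re-bump
# only every N blocks at first, then bump at every single block once M
# blocks have elapsed since the broadcast.

def should_bump(blocks_elapsed, lazy_interval_n, aggressive_after_m):
    """Return True if a fee-bump should be attempted at this block."""
    if blocks_elapsed >= aggressive_after_m:
        return True    # deadline approaching: bump at every block
    return blocks_elapsed % lazy_interval_n == 0

# Example: N=6, M=15, over a 20-block timelock.
bumps = [b for b in range(20) if should_bump(b, 6, 15)]
# bumps == [0, 6, 12, 15, 16, 17, 18, 19]
```

Whether the savings from the lazy phase outweigh the extra risk of being stuck behind a feerate spike near the deadline is exactly the expected-cost question from earlier in the thread.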

> You probably want to base your estimates on `estimatesmartfee`

Disagree. The `estimatesmartfee` RPC has a few issues: https://github.com/bitcoin/bitcoin/pull/22722#issuecomment-901907447

> ## 9. Insurances
> there is definitely room for an insurance market.

Agree. I think it's possible using Discreet Log Contracts with some trust assumptions and the use of multiple oracles.

I had one idea about creating an insurance project for the LGBTQ community in India, as they don't have enough options like others. I have shared the details here: https://gist.github.com/prayank23/f30ab1ab68bffe6bcb2ceacec599cd36
As a final point, I guess you already know about this presentation by Jack Mallers in which he describes how we could create derivatives for users to hedge fees: https://youtu.be/rBCG0btUlTw

-- 
Prayank

A3B1 E430 2298 178F



end of thread, other threads:[~2021-12-09 13:50 UTC | newest]

Thread overview: 9+ messages
-- links below jump to the message on this page --
2021-11-29 14:27 [bitcoin-dev] A fee-bumping model darosior
2021-11-30  1:43 ` Antoine Riard
2021-11-30 15:19   ` darosior
2021-12-07 17:24     ` Gloria Zhao
2021-12-08 14:51       ` darosior
2021-12-09  0:55       ` Antoine Riard
2021-12-08 23:56     ` Antoine Riard
2021-12-09 13:50 ` Peter Todd
2021-11-30  1:47 Prayank
